| content (string, lengths 86 to 994k) | meta (string, lengths 288 to 619) |
|---|---|
Thinkfinity Lesson Plans
Subject: Arts,Mathematics
Title: To Fret or Not to Fret
Description: In this unit of two lessons, from Illuminations, students explore geometric sequences and exponential functions by considering the placement of frets on stringed instruments. They study
the placement of frets on a fretted instrument then use their discoveries to place frets on a fretless instrument.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Arts,Mathematics
Title: Exploring Measurement, Sequences, and Curves with Stringed Instruments
Description: In this lesson, one of a multi-part unit from Illuminations, students measure lengths on stringed musical instruments. They discuss how the placement of frets on a fretted instrument is
determined by a geometric sequence.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: To Fret or...
Description: This reproducible activity, from an Illuminations lesson, features questions dealing with measuring distances on fretted stringed instruments.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Arts,Mathematics
Title: Not to Fret
Description: This reproducible activity sheet, from an Illuminations lesson, presents a line drawing of a guitar's neck showing the location of the nut and the 12th fret. In the lesson, students
measure lengths on stringed musical instruments and discuss how the placement of frets on a fretted instrument is determined by a geometric sequence.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Arts,Mathematics
Title: To Fret or Not to Fret
Description: This reproducible worksheet, from an Illuminations lesson, presents a series of questions related to fretted instruments and geometric sequences. In the lesson, students compare
geometric sequences with exponential functions.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Arts,Mathematics
Title: Fretting
Description: In this lesson, one of a multi-part unit from Illuminations, students use their discoveries from the first lesson to place frets on a fretless instrument. They then compare geometric
sequences with exponential functions.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Arts,Mathematics
Title: Seeing Music, Hearing Waves
Description: Using this reproducible activity sheet, from an Illuminations lesson, students calculate the frequencies of two octaves of a chromatic musical scale in standard pitch. They then
experiment with different combinations of notes and related sine waves to observe why some combinations of musical notes sound harmonious and others have a dissonance.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Arts,Mathematics
Title: Seeing Music, Hearing Waves: Selected Answers and Solutions
Description: This reproducible teacher sheet, from an Illuminations lesson, provides selected solutions to an activity in which students calculate the frequencies of two octaves of a chromatic
musical scale in standard pitch. Students then experiment with different combinations of notes and related sine waves to observe why some combinations of musical notes sound harmonious and others
have a dissonance.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Arts,Mathematics
Title: Seeing Music
Description: In this Illuminations lesson, students calculate terms of a geometric sequence to determine frequencies of the chromatic scale. They then compare sine waves to see and hear the
trigonometry behind harmonious and dissonant note combinations.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
|
{"url":"http://alex.state.al.us/all.php?std_id=54481","timestamp":"2014-04-16T07:21:30Z","content_type":null,"content_length":"55496","record_id":"<urn:uuid:e162caab-4d17-4237-96cd-8ea79956ae62>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Add, Subtract, Multiply, and Divide with Units
Anything that can be counted is a unit. Because you can count units, this means that you can apply the Big Four operations (addition, subtraction, multiplication, and division) to them.
Add and subtract units
Adding and subtracting units isn’t very different from adding and subtracting numbers. Just remember that you can only add or subtract when the units are the same. For example,
3 chairs + 2 chairs = 5 chairs
4 oranges – 1 orange = 3 oranges
What happens when you try to add or subtract different units? Here’s an example:
3 chairs + 2 tables = ?
The only way you can complete this addition is to make the units the same:
3 pieces of furniture + 2 pieces of furniture = 5 pieces of furniture
Multiply and divide units
You can always multiply and divide units by a number. For example, suppose you have four chairs but find that you need twice as many for a party. Here’s how you represent this idea in math:
4 chairs × 2 = 8 chairs
Similarly, suppose you have 20 cherries and want to split them among four people. Here’s how you represent this idea:
20 cherries ÷ 4 = 5 cherries
But you have to be careful when multiplying or dividing units by units. For example:
2 apples × 3 apples = ? WRONG!
12 hats ÷ 6 hats = ? WRONG!
Neither of these equations makes any sense. In these cases, multiplying or dividing by units is meaningless.
In many cases, however, multiplying and dividing units is okay. For example, multiplying units of length (such as inches, miles, or meters) results in square units:
3 inches × 3 inches = 9 square inches
10 miles × 5 miles = 50 square miles
100 meters × 200 meters = 20,000 square meters
Similarly, here are some examples of when dividing units makes sense:
12 slices of pizza ÷ 4 people = 3 slices of pizza/person
140 miles ÷ 2 hours = 70 miles/hour
In these cases, you read the fraction slash (/) as per: slices of pizza per person or miles per hour.
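To see these rules enforced mechanically, here is a minimal, self-contained Python sketch; the Quantity class is invented purely for illustration and is not a real library.

```python
# Minimal sketch: carry units along with numbers so the Big Four rules enforce themselves.
class Quantity:
    def __init__(self, value, units):
        self.value = value
        self.units = dict(units)  # e.g. {"inch": 1} or {"mile": 1, "hour": -1}

    def __add__(self, other):
        # Addition (and subtraction) only make sense when the units match.
        if self.units != other.units:
            raise ValueError("cannot add different units")
        return Quantity(self.value + other.value, self.units)

    def __mul__(self, other):
        # Multiplying combines units: inches * inches -> square inches.
        units = dict(self.units)
        for u, p in other.units.items():
            units[u] = units.get(u, 0) + p
        return Quantity(self.value * other.value, {u: p for u, p in units.items() if p})

    def __truediv__(self, other):
        # Dividing produces "per" units: miles / hours -> miles per hour.
        inverse = Quantity(1.0 / other.value, {u: -p for u, p in other.units.items()})
        return self * inverse

    def __repr__(self):
        return f"{self.value} {self.units}"

print(Quantity(3, {"chair": 1}) + Quantity(2, {"chair": 1}))   # 5 chairs
print(Quantity(3, {"inch": 1}) * Quantity(3, {"inch": 1}))     # 9 square inches
print(Quantity(140, {"mile": 1}) / Quantity(2, {"hour": 1}))   # 70 miles per hour
```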
|
{"url":"http://www.dummies.com/how-to/content/how-to-add-subtract-multiply-and-divide-with-units.html","timestamp":"2014-04-20T05:15:01Z","content_type":null,"content_length":"50915","record_id":"<urn:uuid:1878c59c-e3d5-49a4-b36c-ac595354f25c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Power function: Series representations (subsection 06/01)
Generalized power series
Expansions at generic point a==a[0]
For the function itself
Expansions at a==0
For the function itself
Expansions at generic point z==z[0]
For the function itself
Expansions of f(z)^a at z==z[0]
Expansions on branch cuts
For the function itself
Expansions at z==1
For the function itself
General case
Special cases
Expansions of (1+z)^a at z==0
For the function itself
General case
Special cases
Expansions of (1 + Sum[k=1]^infinity c[k] z^k)^a at z==0
Expansions of (1+z)^a at z==infinity
For the function itself
General case
Special cases
|
{"url":"http://functions.wolfram.com/ElementaryFunctions/Power/06/01/ShowAll.html","timestamp":"2014-04-16T19:19:22Z","content_type":null,"content_length":"78097","record_id":"<urn:uuid:a954cb7a-337d-4682-b7ca-ca55f483b1cb>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Top Cited Articles during 1996 in hep-ex
The 50 most highly cited papers during 1996 in the hep-ex archive
Keep in mind that citation counts can never be exact; there is something like a 5% error in most of these numbers. Please do not fret about number 32 versus 33, as this is often not a statistically
significant difference. Remember the detailed warning about the accuracy of these counts.
Also note that the counts shown, and used in the rankings, are the counts as of Wed 7-Mar-2007. Further, the counts shown by the ranking are only the cites satisfying the criteria for that list.
Actual citation numbers in the database may change as corrections are made and papers are added, the links will take you to the updated numbers. The lists, however, will not update.
Observation of the top quark
By D0 Collaboration (S. Abachi et al.).
Published in:Phys.Rev.Lett.74:2632-2637,1995 [arXiv: hep-ex/9503003]
[1291 Total citations in HEP]
Observation of top quark production in anti-p p collisions
By CDF Collaboration (F. Abe et al.).
Published in:Phys.Rev.Lett.74:2626-2631,1995 [arXiv: hep-ex/9503002]
[1345 Total citations in HEP]
First measurement of the deep inelastic structure of proton diffraction
By H1 Collaboration (T. Ahmed et al.).
Published in:Phys.Lett.B348:681-696,1995 [arXiv: hep-ex/9503005]
[298 Total citations in HEP]
Evidence for top quark production in anti-p p collisions at s**(1/2) = 1.8-TeV
By CDF Collaboration (F. Abe et al.).
Published in:Phys.Rev.Lett.73:225-231,1994 [arXiv: hep-ex/9405005]
[567 Total citations in HEP]
A Measurement of the proton structure function f2 (x, Q**2)
By H1 Collaboration (T. Ahmed et al.).
Published in:Nucl.Phys.B439:471-502,1995 [arXiv: hep-ex/9503001]
[284 Total citations in HEP]
Inclusive jet cross-section in anti-p p collisions at s**(1/2) = 1.8-TeV
By CDF Collaboration (F. Abe et al.).
Published in:Phys.Rev.Lett.77:438-443,1996 [arXiv: hep-ex/9601008]
[298 Total citations in HEP]
Measurement of the diffractive structure function in deep inelastic scattering at HERA
By ZEUS Collaboration (M. Derrick et al.).
Published in:Z.Phys.C68:569-584,1995 [arXiv: hep-ex/9505010]
[230 Total citations in HEP]
A Measurement and QCD analysis of the proton structure function f2 (x, q**2) at HERA
By H1 Collaboration (S. Aid et al.).
Published in:Nucl.Phys.B470:3-40,1996 [arXiv: hep-ex/9603004]
[423 Total citations in HEP]
Measurement of the proton structure function F2 at low x and low q**2 at HERA
By ZEUS Collaboration (M. Derrick et al.).
Published in:Z.Phys.C69:607-620,1996 [arXiv: hep-ex/9510009]
[204 Total citations in HEP]
Measurement of the anti-B ---> D* lepton anti-neutrino branching fractions and |V(cb)|
By CLEO Collaboration (B. Barish et al.).
Published in:Phys.Rev.D51:1014-1033,1995 [arXiv: hep-ex/9406005]
[175 Total citations in HEP]
Measurement of the W boson mass
By CDF Collaboration (F. Abe et al.).
Published in:Phys.Rev.Lett.75:11-16,1995 [arXiv: hep-ex/9503007]
[166 Total citations in HEP]
Measurement of the cross-section for the reaction gamma p ---> J / psi p with the ZEUS detector at HERA
By ZEUS Collaboration (M. Derrick et al.).
Published in:Phys.Lett.B350:120-134,1995 [arXiv: hep-ex/9503015]
[141 Total citations in HEP]
Precise measurement of the left-right cross-section asymmetry in Z boson production by e+ e- collisions
By SLD Collaboration (K. Abe et al.).
Published in:Phys.Rev.Lett.73:25-29,1994 [arXiv: hep-ex/9404001]
[197 Total citations in HEP]
Measurements of the proton and deuteron spin structure function g2 and asymmetry A2
By E143 Collaboration (K. Abe et al.).
Published in:Phys.Rev.Lett.76:587-591,1996 [arXiv: hep-ex/9511013]
[168 Total citations in HEP]
Exclusive rho0 production in deep inelastic electron - proton scattering at HERA
By ZEUS Collaboration (M. Derrick et al.).
Published in:Phys.Lett.B356:601-616,1995 [arXiv: hep-ex/9507001]
[121 Total citations in HEP]
Results from the LSND neutrino oscillation search for anti-muon-neutrino ---> anti-electron-neutrino
By James E. Hill (Pennsylvania U.).
Published in:Phys.Rev.Lett.75:2654-2657,1995 [arXiv: hep-ex/9504009]
[97 Total citations in HEP]
The Gluon density of the proton at low x from a QCD analysis of F2
By H1 Collaboration (S. Aid et al.).
Published in:Phys.Lett.B354:494-505,1995 [arXiv: hep-ex/9506001]
[87 Total citations in HEP]
Spin asymmetry in muon - proton deep inelastic scattering on a transversely polarized target
By Spin Muon Collaboration (SMC) (D. Adams et al.).
Published in:Phys.Lett.B336:125-130,1994 [arXiv: hep-ex/9408001]
[154 Total citations in HEP]
Transverse energy and forward jet production in the low x regime at HERA
By H1 Collaboration (S. Aid et al.).
Published in:Phys.Lett.B356:118-128,1995 [arXiv: hep-ex/9506012]
[93 Total citations in HEP]
Dijet cross-sections in photoproduction at HERA
By ZEUS Collaboration (M. Derrick et al.).
Published in:Phys.Lett.B348:665-680,1995 [arXiv: hep-ex/9502008]
[119 Total citations in HEP]
Measurement of the B meson differential cross-section, d sigma / d p(T), in p anti-p collisions at s**(1/2) = 1.8-TeV
By CDF Collaboration (F. Abe et al.).
Published in:Phys.Rev.Lett.75:1451-1455,1995 [arXiv: hep-ex/9503013]
[156 Total citations in HEP]
Measurement of the total photon-proton cross-section and its decomposition at 200-GeV center-of-mass energy
By H1 Collaboration (S. Aid et al.).
Published in:Z.Phys.C69:27-38,1995 [arXiv: hep-ex/9509001]
[169 Total citations in HEP]
Measurement of the W W gamma gauge boson couplings in p anti-p collisions at s**(1/2) = 1.8-TeV
By D0 Collaboration (S. Abachi et al.).
Published in:Phys.Rev.Lett.75:1034-1039,1995 [arXiv: hep-ex/9505007]
[87 Total citations in HEP]
W and Z boson production in p anti-p collisions at s**(1/2) = 1.8-TeV
By D0 Collaboration (S. Abachi et al.).
Published in:Phys.Rev.Lett.75:1456-1461,1995 [arXiv: hep-ex/9505013]
[106 Total citations in HEP]
Inclusive parton cross-sections in photoproduction and photon structure
By H1 Collaboration (T. Ahmed et al.).
Published in:Nucl.Phys.B445:195-218,1995 [arXiv: hep-ex/9504004]
[93 Total citations in HEP]
The Charge asymmetry in W boson decays produced in p anti-p collisions at s**(1/2) = 1.8-TeV
By CDF Collaboration (F. Abe et al.).
Published in:Phys.Rev.Lett.74:850-854,1995 [arXiv: hep-ex/9501008]
[84 Total citations in HEP]
Measurement of elastic rho0 photoproduction at HERA
By ZEUS Collaboration (M. Derrick et al.).
Published in:Z.Phys.C69:39-54,1995 [arXiv: hep-ex/9507011]
[108 Total citations in HEP]
Measurement of alpha-s from jet rates in deep inelastic scattering at HERA
By ZEUS Collaboration (M. Derrick et al.).
Published in:Phys.Lett.B363:201-216,1995 [arXiv: hep-ex/9510001]
[75 Total citations in HEP]
Measurement of alpha-s (M(Z)**2) from hadronic event observables at the Z0 resonance
By SLD Collaboration (K. Abe et al.).
Published in:Phys.Rev.D51:962-984,1995 [arXiv: hep-ex/9501003]
[125 Total citations in HEP]
A Direct determination of the gluon density in the proton at low x
By H1 Collaboration (S. Aid et al.).
Published in:Nucl.Phys.B449:3-24,1995 [arXiv: hep-ex/9505014]
[56 Total citations in HEP]
Determination of the strange quark content of the nucleon from a next-to-leading order QCD analysis of neutrino charm production
By CCFR Collaboration (A.O. Bazarko et al.).
Published in:Z.Phys.C65:189-198,1995 [arXiv: hep-ex/9406007]
[263 Total citations in HEP]
Study of D*+- (2010) production in e p collisions at HERA
By ZEUS Collaboration (M. Derrick et al.).
Published in:Phys.Lett.B349:225-237,1995 [arXiv: hep-ex/9502002]
[79 Total citations in HEP]
Diffractive hard photoproduction at HERA and evidence for the gluon content of the pomeron
By ZEUS Collaboration (M. Derrick et al.).
Published in:Phys.Lett.B356:129-146,1995 [arXiv: hep-ex/9506009]
[112 Total citations in HEP]
Asymmetries between the production of D+ and D- mesons from 500-GeV/c pi- - nucleon interactions as a function of xF and p(t)**2
By E791 Collaboration (E.M. Aitala et al.).
Published in:Phys.Lett.B371:157-162,1996 [arXiv: hep-ex/9601001]
[181 Total citations in HEP]
Elastic and inelastic photoproduction of J / psi mesons at HERA
By H1 Collaboration (S. Aid et al.).
Published in:Nucl.Phys.B472:3-31,1996 [arXiv: hep-ex/9603005]
[224 Total citations in HEP]
Search for new particles decaying to dijets in p anti-p collisions at s**(1/2) = 1.8-TeV
By CDF Collaboration (F. Abe et al.).
Published in:Phys.Rev.Lett.74:3538-3543,1995 [arXiv: hep-ex/9501001]
[55 Total citations in HEP]
Measurements of the Q**2 dependence of the proton and deuteron spin structure functions g1(p) and g1(d)
By E143 Collaboration (K. Abe et al.).
Published in:Phys.Lett.B364:61-68,1995 [arXiv: hep-ex/9511015]
[124 Total citations in HEP]
Limits on W W Z and W W gamma couplings from W W and W Z production in p anti-p collisions at s**(1/2) = 1.8-TeV
By CDF Collaboration (F. Abe et al.).
Published in:Phys.Rev.Lett.75:1017-1022,1995 [arXiv: hep-ex/9503009]
[95 Total citations in HEP]
Search for W boson pair production in p anti-p collisions at s**(1/2) = 1.8-TeV
By D0 Collaboration (S. Abachi et al.).
Published in:Phys.Rev.Lett.75:1023-1027,1995 [arXiv: hep-ex/9503012]
[73 Total citations in HEP]
The E791 parallel architecture data acquisition system
By S. Amato, J.R.T. de Mello Neto, J. de Miranda (Rio de Janeiro, CBPF), C. James (Fermilab), D.J. Summers (Mississippi U.), Stephen B. Bracker.
Published in:Nucl.Instrum.Meth.A324:535-542,1993 [arXiv: hep-ex/0001003]
[140 Total citations in HEP]
Measurement of the Z Z gamma and Z gamma gamma couplings in p anti-p collisions at s**(1/2) = 1.8-TeV
By D0 Collaboration (S. Abachi et al.).
Published in:Phys.Rev.Lett.75:1028,1995 [arXiv: hep-ex/9503010]
[62 Total citations in HEP]
Jets and energy flow in photon - proton collisions at HERA
By H1 Collaboration (S. Aid et al.).
Published in:Z.Phys.C70:17-30,1996 [arXiv: hep-ex/9511012]
[77 Total citations in HEP]
Elastic photoproduction of rho0 mesons at HERA
By H1 Collaboration (S. Aid et al.).
Published in:Nucl.Phys.B463:3-32,1996 [arXiv: hep-ex/9601004]
[102 Total citations in HEP]
Cumulant to factorial moment ratio and multiplicity data
By I.M. Dremin (Lebedev Inst.), V. Arena, G. Boca, G. Gianini, S. Malvezzi, M. Merlo, S.P. Ratti, C. Riccardi, G. Salvadori, L. Viola, P. Vitulo (Pavia U. & INFN, Pavia).
Published in:Phys.Lett.B336:119-124,1994 [arXiv: hep-ex/9405007]
[52 Total citations in HEP]
Search for exclusive charmless hadronic B decays
By CLEO Collaboration (D.M. Asner et al.).
Published in:Phys.Rev.D53:1039-1050,1996 [arXiv: hep-ex/9508004]
[152 Total citations in HEP]
Kinematic evidence for top quark pair production in W + multi - jet events in p anti-p collisions at s**(1/2) = 1.8-TeV
By CDF Collaboration (F. Abe et al.).
Published in:Phys.Rev.D51:4623-4637,1995 [arXiv: hep-ex/9412009]
[65 Total citations in HEP]
Observation of anisotropic event shapes and transverse flow in Au + Au collisions at AGS energy
By E877 Collaboration (J. Barrette et al.).
Published in:Phys.Rev.Lett.73:2532-2535,1994 [arXiv: hep-ex/9405003]
[113 Total citations in HEP]
Search for the decay K+ --> pi+ neutrino anti-neutrino
By S.C. Adler, M.S. Atiya, I-H. Chiang, J.S. Frank, J.S. Haggerty, T.F. Kycia, K.K. Li, L.S. Littenberg, A. Sambamurti, A. Stevens, R.C. Strand, C. Witzig (Brookhaven), W.C. Louis (Los
Alamos), D.S. Akerib, M. Ardebili, M.R. Convery, M.M. Ito, Daniel R. Marlow, R.A. McPherson, P.D. Meyers, M.A. Selen, F.C. Shoemaker, A.J.S. Smith (Princeton U.), E.W. Blackmore, D.A. Bryman,
L. Felawka, P. Kitching, A. Konaka, V.A. Kujala, Y. Kuno, J.A. Macdonald, T. Nakano, T. Numao, P. Padley, J-M. Poutissou, R. Poutissou, J. Roy, R. Soluk, A.S. Turcot (TRIUMF).
Published in:Phys.Rev.Lett.76:1421-1424,1996 [arXiv: hep-ex/9510006]
[45 Total citations in HEP]
Measurement of the Omega(c)0 lifetime
By WA89 Collaboration (M.I. Adamovich et al.).
Published in:Phys.Lett.B358:151-161,1995 [arXiv: hep-ex/9507004]
[29 Total citations in HEP]
Jet production via strongly interacting color singlet exchange in p anti-p collisions
By D0 Collaboration (S. Abachi et al.).
Published in:Phys.Rev.Lett.76:734-739,1996 [arXiv: hep-ex/9509013]
[79 Total citations in HEP]
A Search for leptoquarks at HERA
By H1 Collaboration (S. Aid et al.).
Published in:Phys.Lett.B369:173-185,1996 [arXiv: hep-ex/9512001]
[55 Total citations in HEP]
Search for high mass top quark production in p anti-p collisions at s**(1/2) = 1.8-TeV
By D0 Collaboration (S. Abachi et al.).
Published in:Phys.Rev.Lett.74:2422-2426,1995 [arXiv: hep-ex/9411001]
[72 Total citations in HEP]
Rapidity gaps between jets in photoproduction at HERA
By ZEUS Collaboration (M. Derrick et al.).
Published in:Phys.Lett.B369:55-68,1996 [arXiv: hep-ex/9510012]
[88 Total citations in HEP]
Observation of a narrow state decaying into Xi(c)+ pi-
By CLEO Collaboration (P. Avery et al.).
Published in:Phys.Rev.Lett.75:4364-4368,1995 [arXiv: hep-ex/9508010]
[60 Total citations in HEP]
Measurement of the diffractive cross-section in deep inelastic scattering
By ZEUS Collaboration (M. Derrick et al.).
Published in:Z.Phys.C70:391-412,1996 [arXiv: hep-ex/9602010]
[72 Total citations in HEP]
Study of t anti-t production in p anti-p collisions using total transverse energy
By CDF Collaboration (F. Abe et al.).
Published in:Phys.Rev.Lett.75:3997,1995 [arXiv: hep-ex/9506006]
[36 Total citations in HEP]
|
{"url":"http://www.slac.stanford.edu/spires/topcites/1996/eprints/to_hep-ex_annual.shtml","timestamp":"2014-04-20T18:57:24Z","content_type":null,"content_length":"35117","record_id":"<urn:uuid:892af448-9a7f-41ee-9bd5-1ded46f25912>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Designing for Reliability and Robustness
No design is free from uncertainty or natural variation. For example, how will the design be used? How will it respond to environmental factors or to changes in manufacturing or operational
processes? These kinds of uncertainty compound the challenge of creating designs that are reliable and robust – designs that perform as expected over time and are insensitive to changes in
manufacturing, operational, or environmental factors.
Using an automotive suspension system as an example, this article describes tools and techniques in MATLAB®, Statistics Toolbox™, and Optimization Toolbox™ software that let you extend a traditional
design optimization approach to account for uncertainty in your design, improving quality and reducing prototype testing and overall development effort.
We begin by designing a suspension system that minimizes the forces experienced by front- and rear-seat passengers when the automobile travels over a bump in the road. We then modify the design to
account for suspension system reliability; we want to ensure that the suspension system will perform well for at least 100,000 miles. We conclude our analysis by verifying that the design is
resilient to, or unaffected by, changes in cargo and passenger mass.
Performing Traditional Design Optimization
Our Simulink suspension system model (Figure 1) has two inputs – the bump starting and ending height – and eight adjustable parameters. We can modify the four parameters that define the front and
rear suspension system stiffness and damping rate: kf, kr, cf, cr. The remaining parameters are defined by applying passenger and cargo loading to the vehicle, and are not considered to be design variables.
The model outputs are angular acceleration about the center of gravity (theta double-dot) and vertical acceleration (z double-dot). Figure 2 illustrates the model response for our initial design
to a simulated bump in the road.
Our goal is to set the parameters kf, kr, cf, and cr to minimize the discomfort that front- and rear-seat passengers experience as a result of traveling over a bump in the road. We use acceleration
as a proxy for passenger discomfort. The design optimization problem can be summarized as follows:
Objective: Minimize peak and total acceleration.
Design variables: Front/rear spring and shock absorber parameters (kf, kr, cf, cr).
Design constraints: Car is level when at rest.
Suspension system maintains a natural frequency of vibration below 2 Hz.
Damping ratio remains between 0.3 and 0.5.
This problem is nonlinear in both response (Figure 2) and design constraints. To solve it, a nonlinear optimization solver is required. The Optimization Toolbox solver fmincon is designed
specifically for this type of problem.
We begin by casting our optimization problem into the form accepted by fmincon. The table below summarizes the problem formulation that fmincon accepts and the definition of the suspension problem in
MATLAB syntax.
| fmincon Standard Form | Suspension Problem (MATLAB M-code) |
|---|---|
| Objective | myCostFcn(x,simParms) (see Figure 3) |
| Design variables x | x = [kf, cf, kr, cr] |
| Nonlinear constraints | mynonlcon(x,simParms) (see Figure 3) |
| Linear constraints | A = []; b = []; (none for this problem); Aeq = [Lf 0 -Lr 0]; beq = 0; (level car) |
| Bound constraints | lb = [10000; 100; 10000; 100]; ub = [100000; 10000; 100000; 10000]; |
The design objective is defined as an M-file function myCostFcn that accepts two inputs: the design vector x and simParms (Figure 3). x contains our design variables for the suspension system.
simParms is a structure that passes in the remaining defining parameters of the Simulink model (Mb, Lf, Lr, and Iyy). myCostFcn runs the suspension model defined by x and simParms and returns a
measure of passenger discomfort, calculated as the weighted average of the peak and total acceleration, as shown in Figure 3. Passenger discomfort is normalized so that our initial design has a
discomfort level of 1.
Nonlinear constraints are defined in the M-file function mynonlcon that returns values for c(x) and ceq(x). The linear and bound constraints are defined as shown in the table as constant coefficient
matrices (A, Aeq) or vectors (b, beq, lb, ub).
Figure 3 shows our problem defined and solved using the Optimization Tool graphical user interface (optimtool), which simplifies the tasks of defining an optimization problem, choosing an appropriate
solver, setting solver options, and running the solver.
Using a traditional optimization approach, we found that the optimal design was one where x = [kf, cf, kr, cr] = [13333, 2225, 10000, 1927].
Figure 4 shows a standard Optimization Toolbox solution progress plot. The top plot shows the current value of the design variables for the current solver iteration, which at iteration 11 is the
final solution. The bottom plot shows the objective function value (passenger discomfort relative to the initial design) across solver iterations. This plot shows that our initial design (iteration
0) had a discomfort level of 1, while the optimal design, found after 11 iterations, has a discomfort level of 0.46 – a reduction of 54% from our initial design.
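For readers working outside MATLAB, the sketch below sets up the same style of constrained problem with SciPy. It is illustrative only: suspension_discomfort, the lever arms Lf and Lr, and the supported masses are placeholders (the article's real objective runs the Simulink model), and the 2 Hz natural-frequency constraint is omitted for brevity.

```python
# Illustrative SciPy sketch of a constrained design problem like the one above.
import numpy as np
from scipy.optimize import minimize

Lf, Lr = 1.2, 1.5                    # assumed lever arms to front/rear axles (m)
m_front, m_rear = 600.0, 500.0       # assumed mass supported by each axle (kg)

def suspension_discomfort(x):
    kf, cf, kr, cr = x
    # Stand-in for the weighted peak/total acceleration returned by myCostFcn.
    return (kf + kr) / 1e5 + (cf + cr) / 1e4

def damping_ratios(x):
    kf, cf, kr, cr = x
    return np.array([cf / (2 * np.sqrt(kf * m_front)),
                     cr / (2 * np.sqrt(kr * m_rear))])

constraints = [
    {"type": "eq",   "fun": lambda x: Lf * x[0] - Lr * x[2]},    # car level at rest
    {"type": "ineq", "fun": lambda x: damping_ratios(x) - 0.3},  # damping ratio >= 0.3
    {"type": "ineq", "fun": lambda x: 0.5 - damping_ratios(x)},  # damping ratio <= 0.5
]
bounds = [(1e4, 1e5), (1e2, 1e4), (1e4, 1e5), (1e2, 1e4)]        # lb/ub as in the table
x0 = [5e4, 5e3, 5e4, 5e3]

result = minimize(suspension_discomfort, x0, method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x, result.fun)
```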
Ensuring Suspension System Reliability
Our optimal design satisfies the design constraints, but is it a reliable design? Will it perform as expected over a given time period? We want to ensure that our suspension design will perform as
intended for at least 100,000 miles.
To estimate the suspension's reliability, we use historical maintenance data for similar suspension system designs (Figure 5). The horizontal axis represents the driving time, reported as miles. The
vertical axis shows how many suspension systems had degraded performance requiring repair or servicing. The different sets of data apply to suspension systems with different damping ratios. The
damping ratio is defined as

damping ratio = c / (2 * sqrt(k * M))

where c is the damping coefficient (cf or cr), k is the spring stiffness (kf or kr), and M is the amount of mass supported by the front or rear suspension. The damping ratio is a measure of the
stiffness of the suspension system.
We fit Weibull distributions to the historical data using the Distribution Fitting Tool (dfittool). Each fit provides a probability model that we can use to predict our suspension system reliability
as a function of miles driven. Collectively, the three Weibull fits let us predict how the damping ratio affects the suspension system reliability as a function of miles driven. For example, the
optimal design found previously has a damping ratio for the front and rear suspension of 0.5. Using the plots in Figure 5, we can expect that after 100,000 miles of operation, our design will have
88% of the original designs operating without the need for repair. Conversely, 12% of the original designs will require repair before 100,000 miles.
We want to improve our design so that it has a 90% survival rate at 100,000 miles of operation. We add this reliability constraint to our traditional optimization problem by adding a nonlinear
constraint to mynonlcon.
Plimit = 0.90; % minimum required survival probability (reliability target)
A = @(dampRatio) -1.0129e+005.*dampRatio.^2 - 28805.*dampRatio + 2.1831e+005;
B = @(dampRatio) 1.6865.*dampRatio.^2 - 1.8534.*dampRatio + 4.1507;
Ps = @(miles, dampRatio) 1 - wblcdf(miles, A(dampRatio), B(dampRatio));
% Add inequality constraints to the existing constraints (c <= 0 form)
c = [c; ... % keep original constraints
Plimit - Ps(Mileage, cdf); ... % front reliability constraint
Plimit - Ps(Mileage, cdr)]; % rear reliability constraint
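As a rough cross-check of the quoted figures, the survival model above (1 minus the Weibull CDF, using the A and B fits given in the code) can be re-expressed in a few lines of Python. This is an illustrative sketch, not the article's code.

```python
# Sketch: survival S(x) = exp(-(x/A)^B), i.e. 1 - Weibull CDF, with the article's fits.
import math

def A(damp_ratio):   # Weibull scale fit (coefficients copied from the snippet above)
    return -1.0129e5 * damp_ratio**2 - 28805 * damp_ratio + 2.1831e5

def B(damp_ratio):   # Weibull shape fit (coefficients copied from the snippet above)
    return 1.6865 * damp_ratio**2 - 1.8534 * damp_ratio + 4.1507

def survival(miles, damp_ratio):
    return math.exp(-(miles / A(damp_ratio)) ** B(damp_ratio))

# Damping ratio 0.5 at 100,000 miles gives roughly 0.88, matching the ~88% survival
# (12% repair rate) quoted in the text.
print(round(survival(100_000, 0.5), 3))
```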
We solve the optimization problem as before using optimtool. The results, summarized in Figure 6, show that including the reliability constraint changed the design values for cf and cr and resulted
in a slightly higher discomfort level. The reliability-based design still performs better than the initial design.
Optimizing for Robustness
Our design is now reliable—it meets our life and design goals—but it may not be robust. Operation of the suspension-system design is affected by changes in the mass distribution of the passengers or
cargo loads. To be robust, the design must be insensitive to changes in mass distribution.
To account for a distribution of mass loadings in our design, we use Monte Carlo simulation, repeatedly running the Simulink model for a wide range of mass loadings at a given design. The Monte Carlo
simulation will result in the model output having a distribution of values for a given set of design variables. The objective of our optimization problem is therefore to minimize the mean and
standard deviation of passenger discomfort.
We replace the single simulation call in myCostFcn with the Monte Carlo simulation and optimize for the set of design values that minimize the mean and standard deviation of total passenger
discomfort. We’ll assume that the mass distributions of passengers and trunk loads follow Rayleigh distributions and randomly sample the distributions to define conditions to test our design:
nRuns = 10000;
front = 40 + raylrnd(40, nRuns, 1); % front passengers - adults (kg)
back = 1.36 + raylrnd(40, nRuns, 1); % rear passengers – includes children
trunk = raylrnd(10, nRuns, 1); % additional mass for luggage (kg)
The total mass, center of gravity, and moment of inertia are adjusted to account for the changes in mass distribution of the automobile.
mcMb = Mb + front + back + trunk; % total mass
mcCm = (front.*rf - back.*rr - trunk.*rt)./mcMb; % update center of mass
% Adjust moment of inertia
mcIyy = Iyy + front.*rf.^2 + back.*rr.^2 + trunk.*rt.^2- mcMb.*mcCm.^2;
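A bare-bones sketch of this Monte Carlo step in Python is shown below. The Rayleigh parameters mirror the MATLAB snippet above, but discomfort_model is only a stand-in for the Simulink simulation, which is what the article actually reruns for every sampled loading.

```python
# Illustrative Monte Carlo sketch over mass loadings (not the article's model).
import numpy as np

rng = np.random.default_rng(0)
n_runs = 10_000
front = 40 + rng.rayleigh(scale=40, size=n_runs)    # front passengers (kg)
back = 1.36 + rng.rayleigh(scale=40, size=n_runs)   # rear passengers (kg)
trunk = rng.rayleigh(scale=10, size=n_runs)         # luggage (kg)

def discomfort_model(front, back, trunk, base_mass=1000.0):
    # Placeholder response: discomfort rises mildly with total added mass.
    added = front + back + trunk
    return 0.45 + 0.05 * added / (base_mass + added)

d = discomfort_model(front, back, trunk)
print(d.mean(), d.std())   # robust-design objective: minimize both mean and spread
```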
The optimization problem, including the reliability constraints, is solved as before in optimtool. The results are shown in Figure 7. The robust design has an average discomfort level that is higher
than that in the reliability-based design, resulting in a design with higher damping coefficient values.
The discomfort measure reported in Figure 7 is an average value for the robust design case. Figure 8 displays a scatter-matrix plot that summarizes variability seen in discomfort as a result of
different mass loadings for the robust design case.
The diagonals show a histogram of the variables listed on the axis. The plots above and below the diagonal are useful for quickly finding trends across variables. The histograms for front, back, and
trunk represent the distribution of the inputs to our simulation. The histogram for discomfort shows that it is concentrated around the value 0.47 and is approximately normally distributed. The plots
below the diagonal do not show a strong trend in discomfort with trunk loading levels, indicating that our design is robust to changes in this parameter. There is a definite trend in discomfort with front loading: discomfort appears to vary approximately linearly with front loading, from a minimum of 0.43 to a maximum of 0.52. A trend between back loading and discomfort can also be seen, but from this plot it is difficult to determine whether it is linear, and hence whether our design is robust with respect to back loading.
Using a cumulative probability plot of the discomfort results (Figure 9), we estimate that 90% of the time, passengers will experience less than 50% of the discomfort they would have experienced with
the initial design. We can also see that our new design maintains a normalized level of discomfort below 0.52 nearly 100% of the time. We therefore conclude that our optimized design overall is
robust to expected variation in loadings and will perform better than our initial design.
Design Trade-Offs
This article showed how MATLAB, Statistics Toolbox, and Optimization Toolbox can be used to capture uncertainty within a simulation-based design problem in order to find an optimal suspension design
that is reliable and robust.
We began by showing how to reformulate a design problem as an optimization problem that resulted in a design that performed better than the initial design. We then modified the optimization problem
to include a reliability constraint. The results showed that a trade-off in performance was required to meet our reliability goals.
We completed our analysis by including the uncertainty that we would expect to see in the mass loading of the automobile. The results showed that we derived a different design if we accounted for
uncertainty in operation and quantified the expected variability in performance. The final design traded performance to maintain reliability and robustness.
|
{"url":"http://www.mathworks.es/company/newsletters/articles/designing-for-reliability-and-robustness.html?nocookie=true","timestamp":"2014-04-24T09:16:56Z","content_type":null,"content_length":"42729","record_id":"<urn:uuid:0eb59462-ae21-44ba-888c-0b97265f3a5c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SOLVED] Probability question
December 17th 2009, 04:59 AM #1
Basic calculator allowed (no graphing or TI); please help me answer this review question with steps:
A bakery has determined that the demand for its white bread has a normal distribution with a mean of 7,200 loaves and standard deviation of 300 loaves. Based on cost considerations, the company has decided to produce a sufficient number of loaves so that it will fully supply demand on 94% of all days.
a. How many loaves of bread should the company produce each day?
b. Based on the production in part a, on what percentage of days will the company be left with more than 500 loaves of unsold bread?
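A sketch of how the numbers can be checked (illustrative, using SciPy, and not from the original thread; the 94th-percentile z-value is roughly 1.555):

```python
# Sketch of the calculation: demand ~ Normal(mean 7200, sd 300).
from scipy.stats import norm

mu, sigma = 7200, 300

# (a) Produce enough to cover demand on 94% of days, i.e. the 94th percentile of demand.
q = norm.ppf(0.94, loc=mu, scale=sigma)
print(round(q))            # about 7666 loaves

# (b) More than 500 loaves unsold means demand was below q - 500.
p = norm.cdf(q - 500, loc=mu, scale=sigma)
print(round(p, 3))         # about 0.455, i.e. roughly 45% of days
```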
|
{"url":"http://mathhelpforum.com/statistics/120933-solved-probability-question.html","timestamp":"2014-04-16T08:18:06Z","content_type":null,"content_length":"29233","record_id":"<urn:uuid:ad8b8b8b-4479-4c92-9129-35ed1cae813f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Curriculum Choices: Tackling Math - Simple Homeschool
When it comes to teaching math, some homeschooling mamas and papas have to stifle an inner groan. Many of us had bad experiences with this subject when we were in school, and we’re reluctant to pass
on that attitude to our bright-eyed kids.
This is why it’s especially important that the math curriculum we choose fits with our children’s learning styles and with our family’s educational philosophy. Of course many math skills can and
should be learned through life–using recipes, balancing a checkbook, playing board games, and much more.
But if you’re on the lookout for a traditional math curriculum to use at home, here are six popular ones to consider.
1. Math-U-See
Math-U-See is a manipulative-based program covering all grade levels. The program centers around a series of plastic manipulative blocks, which are color-coded to represent each number. These blocks
can be used throughout the entire curriculum, even into the high school levels if necessary.
Math-U-See includes a DVD presentation of each lesson–parents can either watch it alone or with their students. This curriculum practices the spiral approach to math, continuing to review concepts
taught previously. Each level comes with a student workbook, a teacher’s guide, and a test booklet.
Our family uses Math-U-See in a flexible, informal way, and it has been a good fit. I love that the accompanying DVD helps me feel as though I’m not teaching completely on my own.
2. Saxon
Saxon math presents concepts incrementally, introducing one new idea then reviewing and adding to it continuously until mastery.
Younger grade levels require the use of many manipulatives, but in middle and high school it switches to a traditional textbook and test approach.
Saxon is known for being relatively easy to teach, with the goal that children will eventually be able to do most of the program independently.
3. Singapore Math
As its name suggests, Singapore Math was originally created and used by the Ministry of Education in Singapore. The goal of the program is to lay a solid foundation for mental math, enabling a child
to think mathematically instead of just memorizing.
Each level comes with two textbooks as well as one or two consumable workbooks. There is an optional home instructor’s guide for some grades, which many parents use to give them tips on how to
present the lessons.
Singapore Math is considered slightly more advanced in concept than the typical Western counterpart, so some homeschoolers use an earlier grade level for their children.
Photo by Jimmie
4. RightStart Math
RightStart Math uses an abacus as its foundation–a visual tool to convey and illustrate new concepts. Problem solving is encouraged, while worksheets and flashcards are minimized. Math games are
often used to review material, along with a variety of manipulatives. This combination makes it an appealing choice for a visual or kinesthetic learner.
RightStart discourages children from counting, reasoning that it is slow and disregards the importance of place value. Instead the program focuses on the visualization of math concepts and facts,
using the abacus as the main teaching tool.
RightStart carries curriculum for kindergarten up through middle school.
5. Teaching Textbooks
Worried that you can’t homeschool because you could never teach higher math? Well fear no more!
Teaching Textbooks, a program created specifically for homeschoolers, allows children to complete all lessons with a set of interactive CD-ROMs. Students view the lesson, do the problems, and then
watch a tutor explain the ones they missed.
Each level includes a student workbook, an answer booklet, and a series of CD-ROMs that contains all lessons and solutions to problems. The computer grades each lesson automatically.
Teaching Textbooks has programs from 3rd grade all the way up to Pre-Calculus.
6. Family Math
For those looking for a more organic way to include math concepts in your life at home, there is Family Math. This book, and its companion Family Math for Young Children, feature over 300 pages of practical math activities – far away from workbooks and into real life.
The activities use simple math manipulatives that are found at home to create a fun math foundation. This book can also be used alongside a traditional curriculum.
Math can be tackled and even enjoyed in a homeschooling setting–the key is to make the curriculum match your family, instead of trying to make your family match any one curriculum.
Please share your experiences with the math curriculums you’ve tried at home.
1. Sarah says:
Thank you for this informative post. I am going to start homeschooling next year and trying to figure out curriculum has been challenging for me. I truly appreciate and thirst for information
from people about what they are doing, as it helps me understand better if it might work for my family.
Now I have to figure out how to figure out how my children learn so I can choose what’s best for them individually.
2. Heidi says:
We’ve used RightStart for a few years now. I love the way it teaches concepts, the math games, and the manipulatives. It is teacher-intensive, though, which is frustrating at times. I do have a
son who needs that one-on-one instruction, so maybe we’d be at the same place no matter which curriculum we used.
I would highly recommend the MathTacular DVDs from Sonlight.com. They have been an incredible supplement to our math studies. I thought they were a little hokey at first, but all 3 of my boys can
watch them for hours, and they have really learned the concepts. Fabulous.
□ We have those DVDs as well, Heidi–and my crew absolutely love them, too!
.-= Jamie ~ Simple Homeschool’s last blog: Curriculum Choices: Tackling Math =-.
☆ se7en says:
I third this!!! We love Mathtacular, we watch it during school holidays… movie fun nights!!! I know, what a mother… but they really love it!!! and I just love that someone else is doing
math manipulatives and not me!!!
.-= se7en’s last blog: The Week that Was – 2.45 =-.
3. Renee says:
This is a great resource page Jamie for people figuring out what they want to do. I think I’ll tack it on the end of “math post” as a place to look for further ideas.
4. Sara says:
Thanks for all the Math ideas and links. I recently bought Math Envelope Centers by The Mailbox Series. Educationl stores usually stock them. The math games are in a ready to go format. All that
is needed is an envelope to make the games. I like it because my kiddos get a little bored with constant workbook pages. It makes Math sort of like a game. The particular book I bought is for
grades 2-3.
5. Phyllis says:
I would like to add the Videotext program to your list. We absolutely love it. It only covers middle school & high school Algebra and Geometry, but it is a nice program to switch to for those
programs that only cover the Elementary grades or for those who want a change at that point. It is very step-by-step, very visual, and it focuses on understanding the WHYS of math. If you understand what the different operations are doing, you will understand math better and be more able to solve mathematical problems.
.-= Phyllis’s last blog: This Moment; This Beautiful Life =-.
6. Kara Fleck says:
What a great resource this post is! Math is a little intimidating for me because I had such a horrible experience with it when I was in school. I try to keep my own issues about it away from my
kids as I teach them … in a way, re-learning math along side them has been very healing for me and my own math issues
Family Math sounds like something that would be a good fit for us – I’m definitely going to look into it
I really love what you say at the end – “the key is to make the curriculum match your family, instead of trying to make your family match any one curriculum.”
.-= Kara Fleck’s last blog: The Best of Both Worlds: Bringing Our Favorite Indoor Activities to the Outdoors =-.
7. Kika says:
Saxon doesn’t actually ‘require’ the use of manipulatives. We never bought any although on occasion employed items from home as a form of manipulative. We didn’t use their paper money, either,
preferring to practice with real coins and shopping. Some kids learn best using manipulatives while others find it unnecessary and Saxon fit us well in this regard.
8. Laura @ Getting There says:
Thank you for this! I was thinking of trying something new for math next year, and I think Teaching Textbooks might fit the bill. I had not heard of it before.
.-= Laura @ Getting There’s last blog: What does "simple living" mean to you? =-.
9. PanJiaLe says:
Here is another great resource that my husband introduced me to: http://www.khanacademy.org/
I love that this guy does all this for free. He definitely has the gift of teaching. We’ve been enjoying watching his videos ourselves in order to brush up on our maths/sciences.
10. se7en says:
We have used Singapore math forever!!! They do learn a lot of stuff that I never thought they could learn until they were older. I love it, I don’t get bored with it – which is quite important to
any curriculum. My kids don’t always love it, when they are in the early grades they enjoy it and lose momentum when they get a heap of mental math around grade 3/4. If I sit with them it is
bearable but on their own the burden is well too much for all of us!!! My grade 7 fellow has had a revival this year and is totally loving his Singapore math… does it before breakfast every
.-= se7en’s last blog: The Week that Was – 2.45 =-.
□ Mandi says:
I’m not totally loving Singapore, but I’ve had more than one person tell me to just hold on and it will get better. My kids like it so far, so I’m just kind of trudging through and looking at
other options in the meantime.
11. Bekki says:
We use Horizons math. I have no idea how it compares to the others. My husband has a PhD in statistics and chose this one over many of the others. I really can’t tell you why, but it is working
well and my children love it. We also have the Mathtacular DVD’s from Sonlight and they like to watch those also.
.-= Bekki’s last blog: Have I gone crazy?! =-.
12. Jill T says:
We love math-u-see but I’ve heard raving over Teaching Textbooks – mainly for the time issue – the child can do it on their own… mom doesn’t have to check the problems and try to figure it out herself and teach it. I’ve checked it out but the content reviews were not exactly thrilling compared to math-u-see’s core instruction. We are happy w/ math-u-see now but I’ve heard of families switching over once their child doesn’t want to use the manipulatives anymore. I’ve made a side note in my head to do some more research.
13. katie says:
thank you for this post! I have been homeschooling my oldest daughter for kindergarten this year and we are planning on continuing for first grade. But I have been struggling with our math
curriculum this year. It’s good, but it’s a public school curriculum so it requires a lot of things that I just don’t have. And altering it to fit us has required more time than I have been able
to give. I at least feel confident that I don’t have to be done by June, so I know that we will get through it… but still it would be nice to have a curriculum more suited to a homeschool style. I
will definitely look into these. Thanks again.
14. A Simple Twist of Faith says:
I am planning to home school my five year old in the fall, and am still trying to determine curriculum. So far, I have heard good things from other home schooling Moms about Saxon Math. Has
anyone used their Kindergarten program? I must admit I am more of an English and History type of gal.
.-= A Simple Twist of Faith’s last blog: Stress? What stress? =-.
□ Kika says:
I’ve used Saxon math all along up to algebra I at this point. I really like Saxon but don’t find the kindergarten year necessary. My kids already end up knowing everything in the “curriculum”
before they hit their actual kindergarten year – so it is the only year I wouldn’t recommend purchasing. They tend to like to start the grade one year early. In particular, they’ve enjoyed
the ‘facts practice’ sheets – but doing them untimed.
☆ Virginia says:
Kika, did you do all the meeting book activities (in saxon k or 1) every time you do math? We did about half of the K math and found that part so repetitive!
I love saxon math for my older kids (they’re doing 6/5). We didn’t finish the saxon k, though, with the younger one because it seemed too easy for her. We’re just going to jump ahead to
saxon 1.
○ Kika says:
No – we keep it real simple. In gd 1 and up I like the facts practice (used on a regular, if not daily, basis) and I get the younger kids writing their date at the top of their page
to learn how to do so but otherwise they do half the questions only – none of the ‘scripted stuff’ (meeting book). Also, I’ve learned to let my kids just do tests (as in skip lessons)
until their marks fall below a certain grade (ex. 85% or 90%). Once their mark falls below this then we can start doing actual lessons – otherwise, if they already know the material
why make them do the book work? The final 20 lessons or so are preview for the following year and the first 20-30 lessons of each level are review from the previous year. I found it
very helpful to figure this out! My younger siblings and my own kids have used Saxon in this way and have all done very well in math. I actually like math as does my husband – but the
idea for us is keep it simple and don’t waste time bogging the kids down in concepts they already understand. I agree that the Kindergarten book is too easy. My kids didn’t need it as
they already knew the concepts within and so we moved on to the grade one book early and just progressed at a comfortable rate for each child.
■ Virginia says:
Thank you. That is SO helpful. I like math, too, and hope to pass it on to our kids.
15. Virginia says:
After trying a couple of other math programs we switched to Saxon and have been so happy. I had heard that it was too repetitive and a lot of work, so avoided it at first. My daughter just wasn’t
“getting” math, though. Saxon has been really helpful because all the concepts are taught in small increments, building on previous concepts in a logical order, with constant review that helps
them remember what they’ve learned. My dd10 and ds9 just could not seem to learn their times tables, no matter what we tried. They learned them quickly when we started using the timed facts
practice sheets and they’ve found it so helpful to know them now. Now dd understands math and does well at it, so we’re happy.
16. Jessica says:
My kids really like jump math. Its not really well known, but I love it. I adore math, and this is one of the best programs I have ever seen for teaching every child to love math. And its not
17. Shana says:
We are using Math Mammoth and we really love it. It’s a mastery program which seems to work best with our kids. Math Mammoth was written by a homeschool mom who has an extensive background in
math. It feels good to support another homeschool mom with the purchase of a curriculum.
I also like that it can be purchased as a download, which is what we did. Having the printable files on my computer allows me to just print what we need and makes it easy to skip over sections
that my kids don’t need. It’s also a very affordable curriculum!
18. Jane says:
Thank you so much for all these useful links! Another free resource that I find useful for days when a math game or hands on activity is needed is:
19. erin says:
We use Horizons for my K, who is a concrete thinker. Singapore has been the best fit for our G1 who is a creative thinker. We’ll be using Teaching Textbooks for G4 and then (probably) Life of
Fred. Finding a good match for math has been a struggle!
erin’s latest post: Woefully Wednesday
20. Sara says:
My family used Saxon, just the work books, when we were home schooled.
It was wonderful, the only thing that really taught us the material…versus memorizing just for a test. College hit and I was SHOCKED at the horrible curriculum’s and text used for math courses. I
took math and statistic courses at 2 different universities and found the same horrible curriculum. I was SO thankful for Saxon at that point.
Now I’m going to teach my kiddos at home, I hope to use the Saxon as effectively, if not more so because Saxon has more to offer these days. But I do appreciate the tips on other curriculum, and
21. Fran says:
This is our first year with Math-U-See, and we love it. My oldest child does not use the manipulatives, but my younger two love to use them. With or without using the manipulatives, MUS has been
great for us. The lessons are thorough (but short!) and the DVD is engaging. We’ve tried other math curriculums in the past, but this one is a keeper! I wish I had been taught math using MUS.
Fran’s latest post: Ah- The Smell of a Fresh Spreadsheet in the Morning!
22. Rachel says:
My son’s traditional school used the Saxon math program, and it is really geared more towards kids that have trouble with math.
He was literally bored to tears until we started working with him at home (my husband and I both have technical degrees). I don’t homeschool precisely because my son’s needs are progressing at a
pace at which our math-teaching skills will be exceeded, but when we began looking at schools that stressed math and science, we noticed that none of them used Saxon math.
23. Bon Crowder says:
As the math mom, I researched many curriculums at a homeschool convention last year. I found Math On The Level to be a great one. That’s the one I endorse.
Alas, Daughter is only two, so I’ve yet to be able to dig into any personally, but that one looks the best from a mathematician’s standpoint.
Bon Crowder’s latest post: [50 Word Friday] Two Thousand Twelve
|
{"url":"http://simplehomeschool.net/curriculum-choices-tackling-math/","timestamp":"2014-04-21T04:33:48Z","content_type":null,"content_length":"102555","record_id":"<urn:uuid:337eacc2-ecbb-4d27-9e1a-5a01a4d97165>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Squared Weights in MBH98
A couple of weeks ago, I said that I would document (at least for Jean S and UC) an observation about the use of squared weights in MBH98. I realize that most readers won’t be fascinated with this
particular exposition, but indulge us a little since this sort of entry is actually a very useful way of diarizing results. It also shows the inaccuracy of verbal presentation – which is hard enough even
for careful writers.
MBH98 did not mention any weighting of proxies anywhere in the description of methodology. Scott Rutherford sent me a list of 112 weights in the original email, so I’ve been aware of the use of
weights for the proxies right from the start. Weights are shown in the proxy lists in the Corrigendum SI (see for example here for AD1400) and these match the weights provided by Rutherford in this
period. While weights are indicated in these proxy lists, the Corrigendum itself did not mention the use of weights nor is their use mentioned in any methodological description in the Corrigendum SI.
In one place, Wahl and Ammann 2007 say that the weights don’t “matter”, but this is contradicted elsewhere. For example, all parties recognize that different results occur depending on whether 2 or 5
PCs from the NOAMER network are used together with the other 20 proxies in the AD1400 network (22 or 25 total series in the regression network). Leaving aside the issue of whether one choice or
another is “right”, we note for now that both alternatives can be represented merely through the use of weights of (1,1,0,0,0) in the one case and (1,1,1,1,1) in the other case – if the proxies were
weighted uniformly. If the PC proxies were weighted according to their eigenvalue proportion – a plausible alternative, then the weight on the 4th PC in a centered calculation would decline, assuming
that the weight for the total network were held constant – again a plausible alternative.
But before evaluating these issues, one needs to examine exactly how weights in MBH are assigned. Again Wahl and Ammann are no help as they ignore the entire matter. At this point, I don’t know how
the original weights were assigned. There appears to be some effort to downweight nearby and related series. For example, in the AD1400 list, the three nearby Southeast US precipitation
reconstructions are assigned weights of 0.33, while Tornetrask and Polar Urals are assigned weights of 1. Each of 4 series from Quelccaya are assigned weights of 0.5 while a Greenland dO18 series is
assigned a weight of 0.5. The largest weight is assigned to Yakutia. We’ve discussed this interesting series elsewhere in connection with Juckes. It was updated under the alter ego “Indigirka River”
and the update has very pronounced MWP. Juckes had a very lame excuse for not using the updated version. Inclusion of the update would have a large impact on a re-stated MBH99 using the same proxies.
Aside from how the weights were assigned, the impact of the assigned weights on the proxies in MBH formalism differs substantially from an intuitive implementation of the stated methodology. In our
implementation of MBH (and Wahl and Ammann did it identically), following the MBH description, we calculated a matrix of calibration coefficients by a series of multiple regressions of the proxies Y
against a network U of temperature PCs in the calibration period (in AD1400 and AD1000, this is just the temperature PC1.) This can be represented as follows:
$G=(U^TU)^{-1} U^TY$
Then the unrescaled network of reconstructed RPCs $\tilde{U}$ was calculated by a weighted regression (using a standard formula) as follows, denoting the diagonal of weights by P:
$\tilde{U}= YPG^T (GPG^T)^{-1}$
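For concreteness, here is a minimal Python/NumPy sketch of these two steps (the dimensions, random data and variable names are my own placeholders, not actual MBH inputs):
import numpy as np

rng = np.random.default_rng(0)
n, k, p = 79, 1, 22                               # calibration years, temperature PCs, proxies
U = rng.standard_normal((n, k))                   # temperature PC(s) in the calibration period
Y = U @ rng.standard_normal((k, p)) + 0.5 * rng.standard_normal((n, p))   # stand-in proxy matrix
P = np.diag(rng.uniform(0.3, 1.0, p))             # diagonal matrix of assigned proxy weights

G = np.linalg.solve(U.T @ U, U.T @ Y)             # G = (U'U)^-1 U'Y
U_tilde = Y @ P @ G.T @ np.linalg.inv(G @ P @ G.T)   # unrescaled reconstructed RPCs
print(U_tilde.shape)                              # (79, 1): one reconstructed RPC value per year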
However, this is Mann and things are never as “simple” as this. In the discussion below, I’ve first transliterated the Fortran code provided in response to the House Energy and Commerce Committee
(located here ) into a sort of pseudo-code in which the blizzard of pointless code managing subscripts in Fortran is reduced to matrix operations, first using the native nomenclature and then showing
the simplified matrix derivations.
You can find the relevant section of code by using the word “resolution” to scroll down the code, the relevant section commencing with:
c NOW WE TRAIN THE p PROXY RECORDS AGAINST THE FIRST
c neofs ANNUAL/SEASONAL RESOLUTION INSTRUMENTAL pcs
Scrolling down a bit further, the calculation of the calibration coefficients is done through SVD operations that are actually algebraically identical to pseudoinverse operations more familiar in
regressions. Comments to these calculations mention weights several times:
c set specified weights on data
c downweight proxy weights to adjust for size
c of proxy vs. instrumental samples
c weights on PCs are proportional to their singular values
Here is my transliteration of the code for the calculation of the calibration coefficients:
S0 <- S[1:ipc]
# S is diagonal of eigenvalues from temperature SVD;
# ipc number of retained target PCs
weight0 <- S0/sum(S0)
B0 <- aprox * diag(weightprx)
# aprox is the matrix of proxies standardized on 1902-1980
# weightprx is vector of weights assigned to each proxy
AA <- anew * diag(weight0)
# this step weights the temperature PCs by normalized eigenvalues
[UU,SS,VV] <- svd(AA)
# SVD of weighted temperature PCs : Mann's regression technique
work0 <- diag(1/SS) * t(UU) * B0[cal,]
# this corresponds algebraically to part of pseudoinverse used in regression
# cal here denotes an index for the calibration period 1902-1980
x0 <- VV * work0
# this finishes the calculation of the regression coefficients
# x0 is the matrix of regression coefficients (called beta below), then used for estimation of RPCs
Summarizing this verbose code:
[UU,SS,VV] <- svd(anew[1:79,1:ipc] * diag(weight0))
beta = VV * diag(1/SS) * t(UU) * aprox[index,] * diag(weightprx)
Commentary: Mann uses SVD to carry out matrix inversions. There is an identity relating the pseudoinverse used in regression calculations to Mann’s SVD methods, that is very useful in analyzing this
code. If the SVD of a matrix is represented as $X=USV^T$ , the pseudoinverse of X can be represented by the following:
$(X^TX )^{-1} X^T = VS^{-1} U^T$
This can be seen merely by substituting in the pseudoinverse and cancelling.
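A quick numerical check of the identity (arbitrary random matrix, nothing from MBH):
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((79, 3))                  # any full-column-rank matrix
U_, s, Vt = np.linalg.svd(X, full_matrices=False) # thin SVD: X = U_ diag(s) Vt

lhs = np.linalg.solve(X.T @ X, X.T)               # (X'X)^-1 X'
rhs = Vt.T @ np.diag(1 / s) @ U_.T                # V S^-1 U'
print(np.allclose(lhs, rhs))                      # True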
Note that U, S and V as used here are merely local SVD decompositions and do not pertain to the "global" uses of U, S and V in the article, which I've reserved for the products of the original SVD decomposition of the gridcell temperature network $T$:
$[U,S,V]= svd(T,nu=16,nv=16)$
Defining L as the k=ipc truncated and normalized eigenvalue matrix and keeping in mind that $U$ is the network of retained temperature PCs, we can collect the above pseudocode as:
$[UU,SS,VV]= svd(UL,nu=ipc,nv=ipc)$
$\hat{\beta}_{MBH}= VV * diag (SS^{-1}) *UU^T * Y * P$
Applying the pseudoinverse identity to $UL$ in the above expression, we can convert this to more familiar nomenclature:
$\hat{\beta}_{MBH}= ( (UL)^T(UL))^{-1}(UL)^T YP$
$\hat{\beta}_{MBH}=L^{-1}(U^TU)^{-1} L^{-1}LU^TYP$
$\hat{\beta}_{MBH}=L^{-1}C_{uu}^{-1} C_{uy}P$
In our emulations in 2005 (and also in the Wahl and Ammann 2007 emulation which I reconciled to ours almost immediately in May 2005), the matrix of calibration coefficients was calculated without
weighting the target PCs and without weighting the proxies (in this step) as follows:
$\hat{\beta}_{WA}=(U^TU)^{-1}U^TY = C_{uu}^{-1}C_{uy}$
The two coefficient matrices are connected easily as follows:
$\hat{\beta}_{MBH}= L^{-1}\hat{\beta}_{WA}P$
The two weights have quite different effects in the calculation. The weights $L$ are ultimately cancelled out in a later re-scaling operation, but the weights $P$ carry forward and can have a
substantial impact on downstream results (e.g. the NOAMER PC controversies.)
Following what seemed to be the most plausible interpretation of the sketchy description, we weighted the proxies in the estimation step; Wahl and Ammann dispensed with this procedure altogether,
arguing that the reconstruction was “robust” to an “important” methodological “simplification” – the deletion of any attempt at weighting proxies (a point which disregards the issue of whether such
weighting for geographical distribution or over-representation of one type or site has a logical purpose.)
Scrolling down a bit further, one finds the reconstruction step in the code. Once again, here is a transliteration of the Fortran blizzard into matrix notation, first following the native nomenclature of the code:
B0 = aprox * diag(weightprx)
# this repeats previous calculation: aprox is the proxy matrix, weightprx the weights
AA <- beta
# beta is carried forward from prior step
[UU,SS,VV] <- svd(t(AA))
# again the regression is done by SVD, this time on the (transposed) matrix of calibration coefficients
work0 <- B0 * UU * diag(1/SS)
work0 <- work0 * t(VV)
# this is regression carried out using the SVD equivalent to pseudoinverse
x0 <- work0
# this is the matrix of reconstructed RPCs
Summarizing by collecting the terms:
[UU,SS,VV] = svd(t(beta))
x0 = aprox * diag(weightprx) * UU * diag(1/SS) * t(VV)
Commentary: Using our notation, the unrescaled reconstructed RPCs denoted by $\tilde{U}$ instead of x0 are obtained:
$\tilde{U}=Y*P * UU * SS^{-1} VV^T$
Once again the pseudoinverse identity can be applied, this time for $L^{-1}GP$ where $G=C_{uu}^{-1}C_{uy}$ yielding:
$\tilde{U}=Y*P * (L^{-1}GP)^T ( (L^{-1}GP) * (L^{-1}GP)^T)^{-1}$ where $G=C_{uu}^{-1}C_{uy}$
$\tilde{U}=YP^2 G^T (GP^2G^T)^{-1}L$
Expressed in terms of C matrices, this expression becomes:
$\tilde{U}=Y*P^2 (C_{uu}^{-1}C_{uy})^T (C_{uu}^{-1}C_{uy} P^2 (C_{uu}^{-1}C_{uy})^T)^{-1}L$
$\tilde{U}=Y*P^2 C_{uy}^T (C_{uy} P^2 C_{uy}^T)^{-1} C_{uu}L$
The form of the above expression is precisely identical to the form of the expression resulting from application of the conventional expression for weighted regression shown above, which was
(additionally incorporating the L weighting, which is removed in a later step):
$\tilde{U}=YP G^T (GPG^T)^{-1}L$
However, there is one obvious difference. The Mannian implementation, which, rather than using any form of conventional regression software, relies on his ad hoc "proprietary" code, ends up with the proxy weights being squared.
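To see the squaring numerically, here is a short Python/NumPy sketch of both routes with made-up data (sizes, weights and variable names are placeholders). The SVD transliteration reproduces the weighted-regression formula with P squared, not with P:
import numpy as np

rng = np.random.default_rng(2)
n, k, p = 79, 2, 22                               # calibration years, retained temperature PCs, proxies
U = rng.standard_normal((n, k))                   # temperature PCs in the calibration period
Y = U @ rng.standard_normal((k, p)) + 0.5 * rng.standard_normal((n, p))   # stand-in proxy matrix
L = np.diag([0.6, 0.4])                           # normalized eigenvalue weights on the target PCs
P = np.diag(rng.uniform(0.3, 1.0, p))             # assigned proxy weights

# Mann-style transliteration: both steps done through SVD pseudoinverses
UU, SS, Vt = np.linalg.svd(U @ L, full_matrices=False)
beta = Vt.T @ np.diag(1 / SS) @ UU.T @ Y @ P      # calibration coefficients (= L^-1 G P)
UU2, SS2, Vt2 = np.linalg.svd(beta.T, full_matrices=False)
U_mann = Y @ P @ UU2 @ np.diag(1 / SS2) @ Vt2     # unrescaled reconstructed RPCs

# conventional weighted regression, with the proxy weights used once and used twice
G = np.linalg.solve(U.T @ U, U.T @ Y)             # (U'U)^-1 U'Y
once = Y @ P @ G.T @ np.linalg.inv(G @ P @ G.T) @ L
twice = Y @ P @ P @ G.T @ np.linalg.inv(G @ P @ P @ G.T) @ L

print(np.allclose(U_mann, once))                  # False
print(np.allclose(U_mann, twice))                 # True: the SVD route squares the proxy weights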
In a follow-up post, I’ll show how the L weighting (but not the P weighting) falls out in re-scaling.
The Wahl and Ammann implementation omitted the weighting of proxies – something that they proclaimed as a “simplification of the method”. If you have highly uneven geographic distribution, it doesn’t
seem like a bad idea to allow for that through some sort of stratification. For example, the MBH AD1400 network has 4 separate series from Quelccaya glacier (out of only 22 in the regression network)
– the AD1000 network has all 4 in a network of only 14 proxies. These consist of dO18 values and accumulation values from 2 different cores. It doesn't make any sense to think that all 4 proxies are
separately recording information relevant to the reconstruction of multiple “climate fields” – so the averaging or weighting of series from the same site seems a prerequisite. Otherwise, why not have
ring width measurements from individual trees? Some sort of averaging is implicitly done already. In another case, the AD1400 network has two ring width series from two nearby French sites and two
nearby Morocco sites, which might easily have been averaged or even weighted through a PC network, as opposed to being used separately.
While some methodological interest attaches to these steps, in terms of actual impact on MBH, the only thing that “matters” is the weight on the bristlecones – one can upweight or downweight the
various “other” series, but the MBH network is functionally equivalent to bristlecones + white noise, so upweighting or downweighting the white noise series doesn't really "matter".
14 Comments
I realize that most readers won't be fascinated with this particular exposition, but indulge us a little since this sort of entry is actually a very useful way of diarizing results.
Well, to detect mathematical smoke screens, this is the only way to go. For example, Hadley says that
Calculating error bars on a time series average of HadCRUT3 is not a trivial process.
And then there's no further explanation. And the code that generates the s21 CIs is not available. I've seen this method applied elsewhere (non-climate science).
The MBH9x algorithm is another smoke screen; the underlying model behind the smoke is
$Y = XB + E$
where Y is the proxy record and X is the matrix of annual grid-box temperatures. Matrix B is not restricted in any way!
In a follow-up post, I’ll show how the L weighting (but not the P weighting falls) our in re-scaling.
I’m having trouble parsing this sentence.
Should “falls” be outside the parentheses?
Is “our in” a transposition of “in our”?
Or is “our” a typo for “out” as in “falls out in”?
Steve: Yes. Corrected.
3. ve
Oh dear, something disastrous has happened to my intended posting. Seems to have vanished into the bowels of XPPro. I’ll have to re-write it all, but not now – too late for me to think straight.
Sorry about that. The ve. is all that’s left :-((
4. I’ll take your word for the math, Steve, but I’m sure you’re correct.
However, the real problem I see with this aspect of the MBH procedure is not that they square their weights, but that they are using arbitrary weights. The square of an arbitrary weight is no
more arbitrary than the original weight itself, so it is not clear that squaring these weights makes them any worse.
Your first and second equations do multivariate "Classical Calibration Estimation", as discussed by UC in last year's thread "UC on CCE" at http://www.climateaudit.org/?p=2445, based on an
equation in the calibration period like
Y = UG + E,
where Y is an nxq matrix of proxies, U is an nxp matrix of temperature indicators or PCs, G is a pxq matrix of coefficients determined in your first equation, E is the nxq matrix of errors. (I’ve
suppressed the constant term, but that should be in there as well.) Your second equation estimates a reconstructed value (or values) of U, say U*, from new values of the proxies, say Y*, under
the assumption that the G’s are the same as in the calibration period.
However, the correct P matrix to use is not what feels good (per Mann), but rather the reciprocal of an estimator of the covariance matrix of the residuals. If p and q are small in comparison to
n, and it looks like the variances are unequal and there are non-zero covariances, the obvious estimator of this covariance matrix is Ehat’Ehat/(n-p-1), using calibration period values of the
residuals Ehat. If p and q are biggish relative to n, the method will still work, but perhaps only if you impose enough parsimonious restrictions on it. If you think the proxies should be
independent (probably untrue for MBH), you can even make P diagonal.
In no event, however, do you determine weights, as Mann did, by just looking at the proxies in isolation rather than at the variances and covariances of their residuals in the calibration
equations. Mann is evidently muddling together the variance (or standard deviation?) of the proxy itself with the variance (or standard deviation?) of its residuals in the calibration equation.
Incidentally, if the covariance matrix is diagonal, and one happens to know the inverse standard deviations of the errors rather than their inverse variances, then these weights should in fact be
squared. I’m sure Mann just didn’t realize what he was doing, however.
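To make this concrete, here is a minimal Python/NumPy sketch of the estimator I have in mind, with placeholder data and the constant term again suppressed:
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 79, 1, 22                               # calibration years, temperature PCs, proxies
U = rng.standard_normal((n, p))
Y = U @ rng.standard_normal((p, q)) + rng.standard_normal((n, q)) * rng.uniform(0.2, 1.5, q)

G = np.linalg.solve(U.T @ U, U.T @ Y)             # calibration: Y = UG + E
E = Y - U @ G                                     # calibration-period residuals
Sigma = E.T @ E / (n - p - 1)                     # estimated residual covariance matrix
P_gls = np.linalg.inv(Sigma)                      # the weight matrix should come from here, not from ad hoc choices

Y_star = rng.standard_normal((10, q))             # new proxy values
U_star = Y_star @ P_gls @ G.T @ np.linalg.inv(G @ P_gls @ G.T)   # CCE reconstruction of U
print(U_star.shape)                               # (10, 1)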
5. Steve,
You may remember sending me the MBH98 data several years ago, I think as an Excel file, for which I’ve been eternally grateful! You also sent the weights used by Mann, and the identifications of
the hitherto mysterious (scaled) temperature data cols 21 – 31.
I've worked in my somewhat simplistic fashion on this wealth of information for a large number of hours, and I hope that I've arrived at a reasonable understanding of what might be gleaned from it.
Reading your post above (which left my matrix algebra far behind, I fear), I have been trying to get to grips with the concept of "calibration" in the context of the 112 columns. I always ask
myself why it is necessary, and here you may well be able to sort me out. The view that I take is that Mann et al assembled, presumably in good faith, a large collection of data that they thought
might be indicators of climate. The original values were from many sources, and amongst them was a formidable array of dendrochronological stuff. I wrote to Prof Mann, in the form of a
handwritten airmail, which generated an immediate reply with references to sites where I could download large amounts of data – several hundred columns – which were I have presumed the data from
which he formed his principal components as reported in the 112 published data columns.
As a total amateur I have no way in which I could assess the technical validity or otherwise of the columns labelled PC1, PC2 etc, but I have presumed that each of them represents something that
a knowledgeable person would expect to be related systematically to the climate parameter that most of us believe to be of great significance on the worldwide scale, to wit temperature. It would
be comfortable to be able to assume that such a relationship was roughly “linear”, and positively correlated with temperature, our target parameter. If it were not, why would anyone wish to
include it in a rational assembly of climate indicators?
So, I begin with this assumption. Next, perusing Mann’s data one is instantly struck by the amazing variability in the scale and location of the recorded data. There is no way one could hope to
build a meaningful composite value (some sort of average) from data on such diverse and seemingly random collection of numbers. Thus I took the rather obvious step of standardising each column to
mean zero, variance one. Neglecting the now hot topic of column weights one can average across the data rows to form some sort of estimate of what the “climate” was like during any given year, on
a standardised scale having no practical units attached to it. In order to relate this to some sort of real temperature scale one would have to have reliable actual temperatures values from some
source. The stumbling block I have is that I cannot see where this information, which I think must necessarily come from outside the Mann data collection, arises. Where is it reported, and where
can it be verified?
So there’s the fundamental problem I cannot resolve for myself.
In my investigations of MBH98 I have sidestepped (ignored) the calibration problem, and used simple averages across columns (or single columns) as an index of climate. The 112 columns individually
make for unwieldy handling and reporting, but it seems to me to be a logical step to assemble the standardised columns into “homogeneous” groups. For example, columns 1 – 9 are all from
observations on tropical seas, 10, 11 and 21 – 31 are “real” temperatures – i.e. not proxies, although from various regions (weighted towards Europe) – but are presumably a potential source of a
“gold standard” for calibration purposes. I selected 9 groups- someone else might well have chosen differently – and looked carefully at the behaviour of these over time. Of course the records
are of varying length, but there is a core set starting in 1820 and ending around 1970 for which every column is complete.
If one is searching for confirmation of the alleged “hockey stick” property of Mann’s data set this time period is quite appropriate as a data base. Its centre is roughly the spot where the HS is
said to become apparent. Some people might well propose using the correlation coefficient as a means for identifying, or even quantifying, a possible relationship between the various groups, but
this is far too restrictive and prescriptive in my opinion. Remember that correlation coefficients are computed as /linear/ correlations, and totally ignore the most important property of these
data, which is that they are time series, and thus have a considerable degree of inherent commonality. What we need is an objective method of comparison that uses this vital component of our
information. Based on industrial experience in investigating the past behaviour of test rigs my obvious choice was to form the cumulative sum of the data over the period of interest using the
period mean as the basis of the cusum. This is a very simple operation which anyone who has used a spreadsheet can readily program. I don’t use spreadsheets, but some statistical software that
carries out this sort of operation in a very simple way.
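For anyone who wants to try it, the whole operation is only a few lines; here is a sketch in Python/NumPy, with random placeholder data standing in for the actual columns:
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1820, 1981)
data = rng.standard_normal((years.size, 112)).cumsum(axis=0) * 0.1   # placeholder for the 112 columns

# standardise each column to mean zero, variance one
z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)

# composite index: average across the standardised columns for each year
composite = z.mean(axis=1)

# cusum: cumulative sum of deviations from the period mean
cusum = np.cumsum(composite - composite.mean())
# straight segments suggest stable periods; elbows suggest abrupt changes of regime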
One great property of cusum analysis lies in its remarkable ability to display the major features of a time series even in the presence of substantial amounts of “noise” – in this case the
supposed random component of climate data. Another is the ease with which periods of stability can be identified, and because of this the ease of detection of rapid or abrupt changes in regime.
Having identified possible periods of stability it is easy to examine these by standard statistical methods – trend analysis for example – to confirm or refute the indications provided by the
cusum plot. The general grand scale form of the cusum indicates the overall trend in the data, which for climate data over the last 100 to 200 years is upwards as we know. This results in a
roughly U or V shaped cusum plot, but with very important details. These are the aforesaid periods of stability, characterised by roughly straight segments of the cusum, and periods of rapid,
detectable, change, indicated by a clear curved cusum segment. Changes may be steady or in the case of single site data very abrupt indeed. To verify this look at data for NW Atlantic (Greenland)
sites for the last 100 years and concentrate on what happened near September 1922. There is loads of data available, including some very recent and brilliantly assembled stuff by Vinther et al.
Looking at MBH98 it is very apparent that the real temperatures and most of the proxies produce related cusum patterns. This is rather encouraging, I think, but what is NOT shown on these cusum
plots is anything remotely indicating the cusum of a hockey stick.
If anyone reading this would like to see some plots I shall be delighted to provide some from my huge archive! Can’t do that in this posting because I use two totally different computer systems
which cannot as yet be linked readily. However, I can do it in a two stage operation, and anyway I’ve not yet found out how to post them to a forum such as this :-((
You really have to see these plots, of which I have many hundreds!, to appreciate properly what can be found using industrial engineering technology on climate data.
I have loads more to write should anyone be interested.
Please respond, or shoot me down if you wish!
6. Re #5
You said:
“Thus I took the rather obvious step of standardising each column to mean zero, variance one. Neglecting the now hot topic of column weights one can average across the data rows to form some sort
of estimate of what the “climate” was like during any given year, on a standardised scale having no practical units attached to it.’
I assume that you averaged the rows of deviations of the normalized column data. If so, it seems to me that is ok if the data in the columns are normally distributed and stationary. If the column data are not normally distributed or stationary, the row average is the row average but may be meaningless. For example, if you normalize such a row average, the cumsum plot may show well-defined trends that cannot be explained.
I have used your method when trying to understand local and regional temporal and spatial variations in USHCN surface temperature data. The U's and V's are eye-appealing but the underlying
non-linearities and non-stationarity are confounding.
7. I have a somewhat shorter question for Steve, namely after MBH have estimated the temperature PC’s U as in your second equation, how do they get from there to NH temperature T? This never seems
to be discussed, and MBH 9x don’t offer much help. Is there a second CCE exercise where the U’s are calibrated to T, or are the U’s just averaged together somehow?
Is there really any point in bothering with the temperature PC’s if one is ultimately only interested in T? Just calibrating T directly to the Y’s would be a lot simpler.
8. Mann’s smoke screen is quite efficient, I’m about to get lost ;) Anyway, Hu in #4 links to my post that begins with residuals My_recon – original_MBH99, and I get this match without these scaling
steps (P,L) (hence I write about identity S in my post, S^-1 = P in Steve’s notation). Some notes:
1) My code downloads temperature U-vectors directly. I've tried to reproduce them via SVD: the monthly gridded temperature data needs to be multiplied by cosine lat gridponts.tx and divided by tcpa-standard.txt, and then SVD'd and then downsampled to annual to get quite close to the archived U-vectors (a rough sketch of this pipeline is given after this list).
2) I don't use P or L, but if I understood correctly, L cancels out. So the remaining difference between my and Steve's implementation is P. And as Hu notes, P (S^-1 in my post) by Mann is not obtained in the conventional way; I tried conventional P in my post, http://signals.auditblogs.com/files/2007/11/ad1600_rec_cce.png
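Here is a rough Python/NumPy sketch of the pipeline in 1), with random arrays standing in for the gridded temperatures and the gridpoint/standardization files (all sizes are placeholders, not the real MBH grid):
import numpy as np

rng = np.random.default_rng(5)
n_years, n_grid = 79, 200                          # placeholder sizes
T_monthly = rng.standard_normal((n_years * 12, n_grid))   # stand-in for monthly gridded anomalies
lat = rng.uniform(-90, 90, n_grid)                 # stand-in for the gridpoint latitudes
std_ref = rng.uniform(0.5, 2.0, n_grid)            # stand-in for the standardization file

T_w = T_monthly * np.cos(np.deg2rad(lat)) / std_ref    # cosine-latitude weighting and standardization
U_m, S, Vt = np.linalg.svd(T_w, full_matrices=False)   # columns of U_m are the monthly U-vectors

U_annual = U_m[:, :16].reshape(n_years, 12, 16).mean(axis=1)   # annual means of the first 16 U-vectors
print(U_annual.shape)                               # (79, 16)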
9. In #6, HMcCard asked for a bit of clarification of the method I used in examining MBH98 data.
The reason for standardising each of the data columns was to avoid spurious weighting that would occur if the “raw” values, as published by Mann et al were used. The shape of the plots of
individual columns against the time variable is of course totally unaffected by this transform. The departure of each value from its column mean is simply a measure of how each column value differed from its mean, and thus I think it is reasonable to average across columns to provide an estimate of the consensus value of the departure from the mean for the data as a whole (or for a rational sub-sample of the data columns) for each data year. After all, it must be supposed that in any given year each data column would be reflecting the climate of the time, otherwise why
bother with time series at all.
Using a restricted data set, the period from 1820 to 1980, all columns are complete with what one hopes are valid observational numbers. The standardisation is thus unlikely, I feel, to produce a
biased mean.
Of course, the data in the columns are /NOT/ stationary! They are climate data, and thus change due to a variety of factors, and without doubt the general change over this period has been
upwards. That is what we are interested in, I think, and what I hope to do is to demonstrate the manner in which this upward trend takes (took) place. What I find is that much of the change took
place over restricted periods of time, with generally stable periods between the change events or segments. Individual data columns also show this type of behaviour, and it is really interesting
to look at the form of the cusums of the single columns. Some appear to be genuine un-tampered with data, but others show very strong signs of having been smoothed before being reported. Cusum
plots demonstrate this very easily. For single site data, or assemblies of data of very similar origin such as the 13 temperature columns, transitions between stable regimes are generally very
clear indeed, thus encouraging me to believe that the cusum method holds some promise in evaluation of assemblies of time series data.
If someone can tell me how to insert a GIF into this contribution (presuming it’s allowed) I’ll be very happy to show just what happens using these techniques.
It is very interesting to read that you have also used a technique similar to mine. I wonder exactly how you interpret the cusum patterns. I contemplate the very grand scale shape, and propose an
hypothesis – “the region between 1830 and 1854 is stationary” for example – and then test the original data to see if this hypothesis holds up. Another hypothesis might be “In late 1922 a very
abrupt change took place”. This would be signalled by something approaching an elbow in the cusum plot. It could be verified by a higher resolution analysis, using monthly rather than annual
data, suitably de-seasonalised, and by examination of the apparently more stable segments on either side of the hypothesised elbow. Such "verification" of the import of cusum plots seems to work
very well.
I certainly am not a fan of hypothesising a simple linear model to climate data and working with the residuals. This seems to be a standard technique for inducing stationarity over a chosen
period, but my knowledge of standard time series methods is slight. As for assessing the data for having a normal distribution I really cannot be sure of how this might affect ones deductions if
it were found not to be the case. In my experience attempting to disprove the hypothesis that a given collection of observed data are normally distributed is seldom successful, unless it is grossly and obviously non-normal, such as daily temperatures over a year in a non-tropical site. How would non-normality influence deductions regarding climate data? I've nothing to base any
ideas on, so would welcome some instruction.
10. Robin, you need to have a website somewhere you can upload your images to, and link from there – if you do not have server space, there are free accounts you can set up such as photobucket.com or
imageshack.com that allow you to upload images.
Having uploaded your image somewhere, copy ONLY the direct URL of the image to your clipboard (i.e. without any enclosing BB or HTML code), click the [ Img ] button above the comment box and
paste your URL in.
The image may not show in the Preview (that’s a software bug), but should be displayed when you submit the comment. A direct URL pasted in your comment may be a useful back up until you get used
to how everything works.
11. Robin,
Re #9
I’ll try to be brief in my response to your request but I may be sniped for being off-topic. If so, contact me and we can continue the dialogue via e-mail.
As we both know, your so-called CUMSUM method is a simple way to find data trends and evaluate trend patterns. Permit me to define a few terms that I will use; you might have something else in mind for the same term. First, I'm referring to an nxm data array where the data observations are indexed in n rows and m columns. The data in each column is normalized by transforming the raw data point in each cell to become the difference between the data point and the column average (or arithmetic mean) divided by the STDEV of the column data. The normalized data in each cell is thus measured in STDEV units. After normalizing each data column, a new nxm array can be formed for the normalized data set. Of course, the normalized data in each column can be integrated to form a CUMSUM column to look at temporal changes in the form of trends. Second, I am referring to persistent trends wherein the deviations in several adjacent cells are either more or less than the column average. For example, if the row index is in years, decadal trends would be persistent. Third, graphical display of a single CUMSUM data column may show easily-recognized V- and U-shaped trends and/or inverted V- and U-shaped trends. The V-shaped trends may be the most interesting and perplexing because they signify that an abrupt or stepwise change has occurred in the underlying data that changed the sign of the slope of the CUMSUM curve. Of course, a U-shaped trend is a steep negative slope followed by a near-zero slope followed by a steep positive slope. A quadratic-shaped or conic section-shaped trend is also common and easily recognized. It is caused by a linear trend in the underlying data, where the curvature of the CUMSUM trend depends on the slope of the linear trend in the underlying data. If the CUMSUM trends for each column are significantly different in shape and phasing from other columns, different factors are influencing the underlying data. Finally, I am referring to stationarity as the stability of the statistical distribution of the data in a column; I am not referring to the stability of a trend. The data in a column doesn't have to be normally distributed to be stationary. In long data columns, it is quite possible the data are normally distributed over a long string of cells but become skewed or non-normal in other strings of cells. These changes in distribution mean the data are non-stationary.
As I mentioned in #6, I was trying to understand the temporal and spatial variations in some USHCN surface temperature data. I chose several sites where monthly data was available from 1897 to
2005. Hence, my data array was 109×12 for each station. After normalizing the monthly temperature data in each column, I plotted the CUMSUM data. As you know, all CUMSUM data columns start at
zero and return to zero. The difference in the trajectory of the CUMSUM curve over the 109 cells in a monthly column sets the trend pattern. I was struck by the differences in the trend patterns
for each month. Although some monthly patterns were similar in some features, there were many significant differences. For example, at one site, there were essentially no persistent trends in November
over the 109 yr interval whereas large decadal trends were observed in the trend patterns for October and December. I observed the same perplexing similarities and differences in trend patterns
for the other sites that I examined.
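A compact version of that calculation, with random placeholder data in place of the USHCN series:
import numpy as np

rng = np.random.default_rng(7)
monthly = rng.standard_normal((109, 12)) + np.linspace(0, 1, 109)[:, None]   # 1897-2005 by month

# normalize each monthly column to mean zero, unit standard deviation
z = (monthly - monthly.mean(axis=0)) / monthly.std(axis=0, ddof=1)

# CUMSUM of each monthly column; the deviations sum to zero, so each trajectory returns to zero
cusum = np.cumsum(z, axis=0)
# comparing the 12 trajectories shows which months carry persistent decadal trends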
I'll not take the time to describe the similarities and differences. Suffice to say, they involved V-, U- and conic-shaped trends and phase delays, as well as non-normal and non-stationary intervals. All of this indicated there were many other inexplicable non-linear factors influencing the temperature data. Some of the factors were undoubtedly due to weather matters and others related to longer-term climate changes.
Yes, it is possible to add another column in the data array and include the average of the 12 monthly data columns. The CUMSUM for the average annual data exhibits its own trend pattern and it is different from any of the monthly patterns. That's no surprise since the averaging step is merely the equal weighting or blending of the monthly data. This obviously suppresses or obscures all but the large-amplitude trends. In #5, I understand that you averaged your 112-column data set (not the normalized data set) to form an equally-weighted "total" data column. If my understanding is correct, then I believe that most of the lesser trends are suppressed. Therefore, this is different from using a PC methodology to select a smaller number of unequally-weighted data columns.
So … this probably didn't provide you with any new insights. My cautionary note would be: " … be sure that you have an idea about what kind of trends you are looking for before using too-highly-aggregated data sets."
12. Hu,
I have a somewhat shorter question for Steve, namely after MBH have estimated the temperature PC’s U as in your second equation, how do they get from there to NH temperature T? This never
seems to be discussed, and MBH 9x don’t offer much help. Is there a second CCE exercise where the U’s are calibrated to T, or are the U’s just averaged together somehow?
Reconstructed Us are multiplied by the original S and V^T, and then re-scaled by tcpa-standard.txt, and this is the grid-point reconstruction. RE's and other stats are computed from cosine-weighted averages of this grid-point recon (NH, sparse, global etc.). This is, of course, the wrong way to compute these stats.
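In code form, that step looks roughly like this (a sketch only; the arrays are placeholders for the archived S, V and rescaling files):
import numpy as np

rng = np.random.default_rng(6)
n_years, k, n_grid = 581, 16, 200                  # placeholder sizes
U_hat = rng.standard_normal((n_years, k))          # reconstructed RPCs
S = np.linspace(10.0, 1.0, k)                      # stand-in for the original singular values (descending)
V = rng.standard_normal((n_grid, k))               # stand-in for the original spatial eigenvectors
scale = rng.uniform(0.5, 2.0, n_grid)              # stand-in for the tcpa-standard.txt rescaling
lat = rng.uniform(0, 90, n_grid)                   # NH gridpoint latitudes

grid_recon = (U_hat * S) @ V.T * scale             # multiply back by S and V^T, then undo standardization
w = np.cos(np.deg2rad(lat))
nh_mean = grid_recon @ w / w.sum()                 # cosine-weighted NH average
print(nh_mean.shape)                               # (581,)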
I’m still puzzled by P-weights, without these weights I can replicate MBH99 results more accurately..
13. #12. UC, It’s possible that he didn’t use P-weights in MBH99, just in 98. I’ll experiment a little with that. There are some odd changes between the two studies – the secret CI method in MBH99 is
still puzzling. Also he used 3 NOAMER PCs in AD1000 as opposed to 2 NOAMER PCs in AD1400 and AD1450 – something that would seem impossible if he used a Preisendorfer-style Rule N, as he said.
14. Steve,
There are some odd changes between the two studies
I use http://www.nature.com/nature/journal/v430/n6995/extref/FigureData/nhmean.txt and http://holocene.meteo.psu.edu/shared/research/ONLINE-PREPRINTS/Millennium/DATA/RECONS/nhem-recon.dat ; there are no differences in the NH mean, and without P I can replicate that better. Yet, there is another version of MBH98 somewhere, see http://www.ncdc.noaa.gov/paleo/ei/ei_attriold.html
The results for Figure 7 of MBH98, through an oversight, were based on the penultimate set of temperature reconstructions, and not repeated with the NH series resulting from the final set of
pattern reconstructions used elsewhere in the manuscript (and shown in Figure 6).
One Trackback
1. [...] serious problems with their use (or non-use) of the covariance matrix of the residuals. See, eg, "Squared Weights in MBH98" and "An Example of MBH [...]
Semisimple Hopf algebras with commutative character ring
Suppose that $A$ is a semisimple Hopf algebra with a commutative character ring. Does it follow that $A$ is quasitriangular, i.e. that $\mathrm{Rep}(A)$ is a braided tensor category?
I think I've seen this statement in a paper without a proof a long time ago. It might be obvious although I don't see how to construct a braiding just knowing non-functorial commutativity of the
tensor products.
hopf-algebras braided-tensor-cat
1 Answer
No, it does not follow.
In this paper (Example 6.14) we proved that if a Tambara-Yamagami fusion category admits a braiding then its dimension is a power of 2. Note that a Tambara-Yamagami category has a commutative Grothendieck ring. Hopf algebras whose representation category is of Tambara-Yamagami type are classified by Tambara (Representations of tensor categories with fusion rules of self-duality for abelian groups, Isr. J. Math. 118 (2000), 29-60). For example, there is a Hopf algebra $A = k^9 \oplus M_3(k)$ (so-called Kac-Paljutkin algebra) with commutative character ring and $Rep(A)$ admitting no braiding.
Thank you for the answer! I also thought that it might not be true in general. – Sebastian Burciu May 30 '10 at 6:56
Water Resources of the United States
The following documentation was taken from:
U.S. Geological Survey Water-Resources Investigations Report 94-4002: Nationwide summary of U.S. Geological Survey regional regression equations for estimating magnitude and frequency of floods for
ungaged sites, 1993
North Dakota is divided into three hydrologic regions (fig. 1). The regression equations developed for these regions are for estimating peak discharges having recurrence intervals T that range from 2
to 500 years. The explanatory basin variables used in the equations are contributing drainage area (CA), in square miles; and main channel slope (S), in feet per mile. The regression equations were
developed from peak-discharge records available for 192 continuous- and partial-record streamflow gaging stations and are applicable to rural, unregulated streams draining 1,000 square miles or less.
The standard errors of estimate for the regression equations range from 55 to 98 percent. The equivalent years of record range from 2.0 to 12.0 years. The report by Williams-Sether (1992) includes
basin and flood-frequency characteristics of the streams used to define the peak-flow relations. The report also includes basin and flood-frequency characteristics of streams with drainage areas over
1,000 square miles and that were not used to define the peak-flow regression relations.
Topographic maps, the hydrologic regions map (fig. 1), and the following regression equations are used to estimate the needed peak discharges QT, in cubic feet per second, having selected recurrence
intervals T.
Region A
Region B
Region C
Williams-Sether, T., 1992, Techniques for estimating peak-flow frequency relations for North Dakota: U.S. Geological Survey Water-Resources Investigations Report 92-4020, 57 p.
Figure 1. Flood-frequency region map for North Dakota. (PostScript file of Figure 1.)
[SOLVED] Endless Chord - Oscillation?
June 1st 2009, 10:01 PM #1
[SOLVED] Endless Chord - Oscillation?
An endless chord consists of two portions of lengths $2l$ and $2l'$ respectively, knotted together. The masses per unit length of the two portions are $m$ and $m'$ respectively. It is placed in stable equilibrium over a smooth peg and slightly displaced. Find the time period of oscillation.
Any ideas?
Last edited by fardeen_gen; June 1st 2009 at 10:53 PM. Reason: Added diagram
This one is actually quite easy!
Here's how to solve it, but I must point out that this can only be true if $m' > m$.
June 3rd 2009, 02:47 PM #2
Walk Like a Sabermetrician
Recently I have started reading the 2006 Hardball Times Baseball Annual. I will do a book review at some time in the future but for now it will suffice to say that you should probably get this book.
Anyway, for now I just have some comments on a technical issue that was brought up by reading Dan Fox’s article “Are You Feeling Lucky?” Mr. Fox also has an excellent blog, Dan Agonistes (linked on
side of page) in addition to his writing for the Hardball Times.
Anyway, the article examines team’s runs scored and allowed versus their BsR estimates, and runs scored and allowed versus W% by using Pythagenpat. There is a typo in the Pythpat formula--they have
it as RPG^2.85, when it should be RPG^.285. But obviously the formula was applied correctly in the article, and it’s just a production mistake. There is also an error in the Indians’ and Mariners’
runs allowed that leads to a faulty conclusion about who “should have” won the AL Central (which this Indians fan just happened to notice). The Indians were not actually “lucky”--in fact, Bill James’
analysis in his Handbook, based on RC and RC Allowed, shows that the Indians were the best team in baseball, and easily the “unluckiest” or “least efficient”. Anyway, Dan told me that he will have an
updated version of the article on their website, and will fix that minor snafu.
The main point here is not to criticize the article, because it’s a fine article, but to mention that there is a simple way to use the Pythagenpat relationship to estimate Runs Per Win. What Fox does
is take, say, a seven game margin above Pythpat expectation, and multiply this by a RPW factor to give an equivalent number of runs. This is not technically precise, since RPW is a linear concept and
Pythpat is not, but of course the linear approximation works very well and so this does not really present a problem in the analysis. Fox uses Palmer’s RPW = 10*sqrt(RPG/9). This formula is fine, but
I would just like to point out there is a similar formula that comes directly from Pythpat. David Smyth, in the past, has published a formula that gives the RPW for any team, from Pyth:
RPW = 2*(R-RA)*(R^x + RA^x)/(R^x - RA^x)
Where R and RA are per game, and x is the exponent. You can check and verify that this formula works. However, at R = RA, it is undefined because the denominator will be zero. And this is a shame,
because it is the point where R = RA that we would want to examine in order to conclude that in a context with an RPG of X, RPW is Y.
However, if we differentiate PW% with respect to RD, we will find a formula that gives the correct result at the R = RA point. This formula is:
RPW = (2*RPG*(RR^x + 1)^2*(.5 - RD/(2*RPG))^2)/(x*RR^(x-1))
That’s confusing as heck, but remember, we want to evaluate it at R = RA. So RR = 1 and RD = 0. One raised to any power is one, so we can simplify to:
RPW = (2*RPG*(1 + 1)^2*(.5 - 0/(2*RPG))^2)/x
= (2*RPG*(2)^2*(.5)^2)/x
= (2*RPG)/x
And what is x? We’ve set x equal to RPG to some power. Various people use different values--I originally published it as .29, David Smyth originally published it as .287, Davenport and Fox used .285,
Tango Tiger found that .28 would probably provide the best combination of accuracy with extreme and regular teams. I’ll continue using x here just so that it is applicable to any of these choices.
Since x = RPG^z, we have this equation for RPW:
RPW = (2*RPG)/RPG^z
And this can be rewritten as:
RPW = 2*RPG^(1 - z)
So this is somewhere around 2*RPG^.72. So at 9.18 RPG, the 2005 average value, Palmer gives 10*sqrt(9.18/9) = 10.10 and Pythpat gives 2*9.18^.72 = 9.87. In case you are curious how these work with
some real teams, with 1984-2003 teams, Palmer’s formula gives a RMSE of 3.938 and the one presented here gives 3.895. So you do not have to sacrifice accuracy with the run-of-the-mill teams.
The known point discovered by Smyth, that at RPG = 1, x must equal 1, also by definition states that RPW must equal 2 when RPG = 1. If you have a team that scores 100 runs and allows 62 runs, they
will go 100-62. Their RD is 38, and 38/2 = 19. 19 is your estimate of wins above .500, and .500 is 81 wins, so 81+19 = 100. So the RPW must be two when the RPG is one. The Pythpat-based formula of
course returns this result. The Palmer RPW gives 3.33.
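For anyone who wants to check these figures, here is a tiny Python script using the formulas above (with the .28 exponent variant, which reproduces the 9.87 figure):
def palmer_rpw(rpg):
    return 10 * (rpg / 9) ** 0.5

def pythpat_rpw(rpg, z=0.28):
    # x = RPG^z is the Pythagenpat exponent; RPW = 2*RPG/x = 2*RPG^(1 - z)
    return 2 * rpg ** (1 - z)

for rpg in (9.18, 1.0):
    print(rpg, round(palmer_rpw(rpg), 2), round(pythpat_rpw(rpg), 2))
# 9.18 -> 10.1 (Palmer) vs 9.87 (Pythpat); 1.0 -> 3.33 vs 2.0, the known point where RPW must be 2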
As a final note, one thing I cannot quite figure out from Fox’s article is whether he is using Pythpat and Palmer to find an overall value for the league, and then using that value for each team, or
whether he is using the specific value for each team. The second approach would again be more precise, but the first is an alright assumption for simplicity’s sake.
4 comments:
1. Thanks for the mention, and yes I have made the corrections we emailed about at http://www.hardballtimes.com/main/article/second-look-at-luck/.
As to your last question, if I understand you right, I'm using the 10.10 value from Palmer and then applying that value to each team. You're right that the other would be more accurate.
Thanks for reminding me of the simplified formula. I had seen it somewhere before but forgot all about it.
2. Ralph CaolaFebruary 5, 2006 at 4:17 PM
Being a subject close to my heart, I found “Runs per Win” very interesting. I have the following comments and questions:
1. I also enjoyed Dan Fox’s article. However, I thought his use of PythagenPat was overkill. I think PythagenPat always overcomplicates calculations at a team level. A Pythagorean calculation
using an exponent of 2 or 1.88 is plenty accurate enough.
2. I also thought Fox’s use of Palmer’s “square root” formula for RPW was fine, but, being biased, I would have used the formula I derived (By The Numbers, November 2003). It is RPW=2*RPG/x,
where x is the Pythagorean exponent. When x=2, RPW is simply RPG.
3. Can you provide the reference for David Smyth’s formula for RPW?
4. You mention that Smyth’s formula is undefined when R=Ra and I understand why you think “it’s a shame”. But, when R=Ra, W% is theoretically 0.500, and wins above 0.500 are theoretically zero,
also. So, RPW will be 0/0, and, therefore, also undefined (unless you can apply L’Hopital’s Rule).
5. A little bit of algebra on Smyth’s formula at x=2, yields RPW = 2*(R^2+Ra^2)/(R+Ra). With this formula, you never have the problem of division by zero.
6. You wrote “However, if we differentiate PW% with respect to RD … we find … RPW = ((2*RPG*(RR^x + 1)^2*(.5 - RD/(2*RPG))^2)/(x*RR^(x-1))” Is PW% Pythagorean winning percentage? If so, what
formula did you differentiate to get this? Maybe it doesn’t matter because when I did it, it also distilled down to RPW=2*RPW/x.
Ralph Caola
3. 1. I don't disagree that it is ok to use 2, but I also don't see any reason why not to use a better estimate if you are inclined to do so.
3. It was posted on a FanHome thread sometime in the past, but I don't think it is one that is currently on the board.
4. I didn't think of it that way, that's a good point.
6. Yes, I was using PW% to abbreviate Pythagorean W%. I differentiated Run Ratio with respect to Run Differential (dRR/dRD) and then PW% with respect to RR (dPW%/dRR). I multiplied those to get
dPW%/dRD, which is RPW. It's good to know that we got the same result.
4. Ralph CaolaFebruary 6, 2006 at 6:24 AM
In the last line of my previous comment I wrote:
RPW = 2*RPW/x
It should be:
RPW = 2*RPG/x
Physics Forums - Can the electromagnetic vector potential be written in terms of a complex field?
Spinnor Nov18-12 11:03 AM
Can the electromagnetic vector potential be written in terms of a complex field?
Is there a complex field that, when properly interpreted, yields the four components of the electromagnetic vector potential, A_0, A_1, A_2, and A_3?
Somewhat along the lines of the complex field ψ yielding information about a particle's energy, momentum, and position probability.
Thanks for any help!
andrien Nov18-12 01:56 PM
Re: Can the electromagnetic vector potential be written in terms of a complex field?
The Faraday tensor does describe the fields in a single object; it is an antisymmetric tensor having six independent components. However, one can write the Maxwell equations in a form similar to the Dirac equation, in which E and B are used in some combination like E+iB. Although the potentials can be combined into the so-called four-potential, it is just a way of simplification and covariance.
Very simple classification rules perform well on most commonly used datasets
Results 1 - 10 of 339
, 1996
"... In an earlier paper, we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently
generates classifiers whose performance is a little better than random guessing. We also introduced the relate ..."
Cited by 1625 (21 self)
Add to MetaCart
In an earlier paper, we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates
classifiers whose performance is a little better than random guessing. We also introduced the related notion of a "pseudo-loss" which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's "bagging" method when used to aggregate various classifiers (including decision trees and single
attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of
boosting using a nearest-neighbor classifier on an OCR problem.
- Annals of Statistics , 1998
"... Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms can
often be dramatically improved by sequentially applying them to reweighted versions of the input data, and t ..."
Cited by 1217 (21 self)
Add to MetaCart
Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms can often
be dramatically improved by sequentially applying them to reweighted versions of the input data, and taking a weighted majority vote of the sequence of classifiers thereby produced. We show that this
seemingly mysterious phenomenon can be understood in terms of well known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as
an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical
results to boosting. Direct multi-class generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multi-class generalizations of
boosting in most...
- MACHINE LEARNING: PROCEEDINGS OF THE ELEVENTH INTERNATIONAL , 1994
"... We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and
show that the definitions used in the machine learning literature do not adequately partition the features ..."
Cited by 594 (23 self)
Add to MetaCart
We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show
that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees
of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. The features selected
should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any
induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets.
- MACHINE LEARNING , 1999
"... Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world
datasets. We review these algorithms and describe a large empirical study comparing several variants in co ..."
Cited by 539 (2 self)
Add to MetaCart
Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world
datasets. We review these algorithms and describe a large empirical study comparing several variants in conjunction with a decision tree inducer (three variants) and a Naive-Bayes inducer. The
purpose of the study is to improve our understanding of why and when these algorithms, which use perturbation, reweighting, and combination techniques, affect classification error. We provide a bias
and variance decomposition of the error to show how different methods and variants influence these two terms. This allowed us to determine that Bagging reduced variance of unstable methods, while
boosting methods (AdaBoost and Arc-x4) reduced both the bias and variance of unstable methods but increased the variance for Naive-Bayes, which was very stable. We observed that Arc-x4 behaves
differently than AdaBoost if reweighting is used instead of resampling, indicating a fundamental difference. Voting variants, some of which are introduced in this paper, include: pruning versus no
pruning, use of probabilistic estimates, weight perturbations (Wagging), and backfitting of data. We found that Bagging improves when probabilistic estimates in conjunction with no-pruning are used,
as well as when the data was backfit. We measure tree sizes and show an interesting positive correlation between the increase in the average tree size in AdaBoost trials and its success in reducing
the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical
problems that arise in implementing boosting algorithms are explored, including numerical instabilities and underflows. We use scatterplots that graphically show how AdaBoost reweights instances,
emphasizing not only "hard" areas but also outliers and noise.
- ARTIFICIAL INTELLIGENCE , 1997
"... In this survey, we review work in machine learning on methods for handling data sets containing large amounts of irrelevant information. We focus on two key issues: the problem of selecting
relevant features, and the problem of selecting relevant examples. We describe the advances that have been mad ..."
Cited by 423 (1 self)
Add to MetaCart
In this survey, we review work in machine learning on methods for handling data sets containing large amounts of irrelevant information. We focus on two key issues: the problem of selecting relevant
features, and the problem of selecting relevant examples. We describe the advances that have been made on these topics in both empirical and theoretical work in machine learning, and we present a
general framework that we use to compare different methods. We close with some challenges for future work in this area.
- in A. Prieditis & S. Russell, eds, Machine Learning: Proceedings of the Twelfth International Conference , 1995
"... Many supervised machine learning algorithms require a discrete feature space. In this paper, we review previous work on continuous feature discretization, identify de n-ing characteristics of
the methods, and conduct an empirical evaluation of several methods. We compare binning, an unsupervised dis ..."
Cited by 408 (10 self)
Add to MetaCart
Many supervised machine learning algorithms require a discrete feature space. In this paper, we review previous work on continuous feature discretization, identify defining characteristics of the methods, and conduct an empirical evaluation of several methods. We compare binning, an unsupervised discretization method, to entropy-based and purity-based methods, which are supervised algorithms. We found that the performance of the Naive-Bayes algorithm significantly improved when features were discretized using an entropy-based method. In fact, over the 16 tested datasets, the discretized version of Naive-Bayes slightly outperformed C4.5 on average. We also show that in some cases, the performance of the C4.5 induction algorithm significantly improved if features were discretized in advance; in our experiments, the performance never significantly degraded, an interesting phenomenon considering the fact that C4.5 is capable of locally discretizing features.
"... The simple Bayesian classifier (SBC) is commonly thought to assume that attributes are independent given the class, but this is apparently contradicted by the surprisingly good performance it
exhibits in many domains that contain clear attribute dependences. No explanation for this has been proposed ..."
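As a concrete illustration of the entropy-based approach evaluated above, the sketch below scores candidate cut points on a single numeric feature by the information gain of the induced binary split. This is a hypothetical illustration written for this summary, not code from the paper; the feature values and labels are invented, and a full discretizer would apply such cuts recursively with an MDL-style stopping criterion.

import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    """Return the cut point on one numeric feature with maximal information gain."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best_gain, best_threshold = 0.0, None
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no valid threshold between equal feature values
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if gain > best_gain:
            best_gain, best_threshold = gain, (pairs[i - 1][0] + pairs[i][0]) / 2
    return best_threshold, best_gain

# Invented data: a feature whose low values belong to class 'a', high values to 'b'.
values = [1.0, 1.5, 2.0, 2.5, 6.0, 6.5, 7.0]
labels = ['a', 'a', 'a', 'a', 'b', 'b', 'b']
print(best_cut(values, labels))  # -> (4.25, ~0.985 bits)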
Cited by 295 (8 self)
The simple Bayesian classifier (SBC) is commonly thought to assume that attributes are independent given the class, but this is apparently contradicted by the surprisingly good performance it
exhibits in many domains that contain clear attribute dependences. No explanation for this has been proposed so far. In this paper we show that the SBC does not in fact assume attribute independence,
and can be optimal even when this assumption is violated by a wide margin. The key to this finding lies in the distinction between classification and probability estimation: correct classification
can be achieved even when the probability estimates used contain large errors. We show that the previously-assumed region of optimality of the SBC is a second-order infinitesimal fraction of the
actual one. This is followed by the derivation of several necessary and several sufficient conditions for the optimality of the SBC. For example, the SBC is optimal for learning arbitrary
conjunctions and disjunctions, even though they violate the independence assumption. The paper also reports empirical evidence of the SBC's competitive performance in domains containing substantial
degrees of attribute dependence.
- Journal of Artificial Intelligence Research , 1994
Cited by 251 (13 self)
This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in
the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic
or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more
accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees. 1. Introduction Current data collection technology
provides a unique challenge and opportunity for automated machine learning techniques. The advent of major scientific projects such as the Human Genome Project, the Hubble Space Telescope, and the
human brain mappi...
, 2008
Cited by 244 (8 self)
To survive in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury, decision-makers must use bounded rationality. In this precis of Simple heuristics
that make us smart, we explore fast and frugal heuristics—simple rules for making decisions with realistic mental resources. These heuristics enable smart choices to be made quickly and with a
minimum of information by exploiting the way that information is structured in particular environments. Despite limiting information search and processing, simple heuristics perform comparably to
more complex algorithms, particularly when generalizing to new data—simplicity leads to robustness.
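One canonical fast-and-frugal rule from this research program is the take-the-best heuristic for paired comparison: consult cues one at a time in order of validity and decide on the first cue that discriminates. The sketch below is an illustrative reconstruction under that description, not code from the book, and the cue names, validities, and city data are invented.

def take_the_best(obj_a, obj_b, cues):
    """Decide which of two objects scores higher on a criterion.

    cues: list of (cue_name, cue_function) ordered by descending validity;
    cue_function(obj) returns 1, 0, or None (unknown).
    Returns 'a', 'b', or 'guess' if no cue discriminates.
    """
    for name, cue in cues:
        va, vb = cue(obj_a), cue(obj_b)
        if va is None or vb is None or va == vb:
            continue  # this cue does not discriminate; move to the next one
        return 'a' if va > vb else 'b'  # stop at the first discriminating cue
    return 'guess'

# Invented example: which of two cities is larger, judged from binary cues.
cities = {
    "X": {"capital": 1, "team": 1, "airport": 0},
    "Y": {"capital": 0, "team": 1, "airport": 1},
}
cues = [("capital", lambda c: cities[c]["capital"]),
        ("team",    lambda c: cities[c]["team"]),
        ("airport", lambda c: cities[c]["airport"])]
print(take_the_best("X", "Y", cues))  # -> 'a', decided by the first cue alone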
- CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE , 1994
Cited by 208 (7 self)
In this paper, we examine previous work on the naive Bayesian classifier and review its limitations, which include a sensitivity to correlated features. We respond to this problem by embedding the
naive Bayesian induction scheme within an algorithm that carries out a greedy search through the space of features. We hypothesize that this approach will improve asymptotic accuracy in domains that
involve correlated features without reducing the rate of learning in ones that do not. We report experimental results on six natural domains, including comparisons with decision-tree induction, that
support these hypotheses. In closing, we discuss other approaches to extending naive Bayesian classifiers and outline some directions for future research.
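The greedy search described above can be sketched in a few lines. This is an assumed, simplified forward-selection variant, not the authors' implementation; accuracy_with stands in for training and evaluating a naive Bayesian classifier restricted to the chosen features (e.g., by cross-validation).

def greedy_feature_search(all_features, accuracy_with):
    """Forward selection: repeatedly add the feature that most improves accuracy.

    accuracy_with(feature_subset) -> estimated accuracy of a naive Bayesian
    classifier trained on that subset (assumed to be supplied by the caller).
    """
    selected = []
    best_acc = accuracy_with(selected)
    remaining = list(all_features)
    while remaining:
        scored = [(accuracy_with(selected + [f]), f) for f in remaining]
        acc, f = max(scored, key=lambda p: p[0])
        if acc <= best_acc:
            break  # no single additional feature helps; stop the search
        selected.append(f)
        remaining.remove(f)
        best_acc = acc
    return selected, best_acc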
|
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.103.7226","timestamp":"2014-04-20T15:00:12Z","content_type":null,"content_length":"41080","record_id":"<urn:uuid:a11ed47a-3bc2-4848-a2c8-fc0201827418>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
|
• GyroMoment[bnum, vec, omega] returns a SysLoad data object that applies a moment to body bnum to simulate the gyroscopic effects of the axially symmetric body spinning with angular speed omega,
about an axis originating from the centroid of the body, parallel to vec. The constant angular velocity omega is relative to the local reference frame of bnum.
• After creation, the SysLoad object is passed to SetLoads to alter the state of the current Mech model.
• GyroMoment accepts the Inertia option to specify the inertia properties of the spinning body.
• If Inertia is unspecified, the inertia properties are taken from body bnum.
• GyroMoment produces meaningful results only if the vector vec is parallel to a principal axis of inertia of the body, and if the other two principal moments are equal in magnitude. This implies
that the body must be axially symmetric about the vector vec.
• GyroMoment essentially models body bnum as if it were spinning at a constant velocity relative to its own local coordinate system.
• To model zero-torque motion, instead of constant velocity, use GyroFilter to remove the components of the inertia tensor that are aligned with vec from the body's inertia tensor.
• See also: Body, Load, Moment.
Further Examples
Load the Modeler3D package and define a simple model.
A load object is applied to the model that simulates the gyroscopic loads of a body spinning about the local X axis on the crank.
Solution->Kinematic is used here because the gyroscopic moments are a function of the angular velocity of the crank.
The reaction forces on the crank are orthogonal to both the spin of the gyro and the angular velocity of the crank.
See HelpModel3D.
|
{"url":"http://reference.wolfram.com/applications/mechsystems/FunctionIndex/GyroMoment.html","timestamp":"2014-04-20T08:42:48Z","content_type":null,"content_length":"34033","record_id":"<urn:uuid:8a57ef4e-83d1-4349-aff8-818106c1c472>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From Wikipedia, the free encyclopedia
Peridynamics is a formulation of continuum mechanics that is oriented toward deformations with discontinuities, especially fractures.
Purpose of peridynamics
The peridynamic theory is based on integral equations, in contrast with the classical theory of continuum mechanics, which is based on partial differential equations. Since partial derivatives do not
exist on crack surfaces and other singularities, the classical equations of continuum mechanics cannot be applied directly when such features are present in a deformation. The integral equations of
the peridynamic theory can be applied directly, because they do not require partial derivatives.
The ability to apply the same equations directly at all points in a mathematical model of a deforming structure helps the peridynamic approach avoid the need for the special techniques of fracture
mechanics. For example, in peridynamics, there is no need for a separate crack growth law based on a stress intensity factor.
Definition and basic terminology
The basic equation of peridynamics is the following equation of motion:
$\rho(x)\ddot u(x,t)=\int_R f(u(x',t)-u(x,t),x'-x,x)dV_{x'} + b(x,t)$
where x is a point in a body R, t is time, u is the displacement vector field, and ρ is the mass density in the undeformed body. x' is a dummy variable of integration.
The vector valued function f is the force density that x' exerts on x. This force density depends on the relative displacement and relative position vectors between x' and x. The dimensions of f are
force per volume squared. The function f is called the "pairwise force function" and contains all the constitutive (material-dependent) properties. It describes how the internal forces depend on the deformation.
The interaction between any x and x' is called a "bond." The physical mechanism in this interaction need not be specified. It is usually assumed that f vanishes whenever x' is outside a neighborhood
of x (in the undeformed configuration) called the horizon.
The term "peridynamic," an adjective, was proposed in the year 2000 and comes from the prefix peri, which means all around, near, or surrounding; and the root dyna, which means force or power. The
term "peridynamics," a noun, is a shortened form of the phrase peridynamic model of solid mechanics.
Pairwise force functions
Using the abbreviated notation u = u(x,t) and u' = u(x',t) Newton's third law places the following restriction on f:
$\displaystyle f(u-u', x-x', x') = -f(u'-u, x'-x, x)$
for any x,x',u,u'. This equation states that the force density vector that x exerts on x' equals minus the force density vector that x' exerts on x. Balance of angular momentum requires that f be
parallel to the vector connecting the deformed position of x to the deformed position of x':
$\displaystyle ((x'+u')-(x+u))\times f(u'-u, x'-x, x)=0.$
A pairwise force function is specified by a graph of | f | versus bond elongation e, defined by
$\displaystyle e=|(x'+u')-(x+u)|-|x'-x|.$
A schematic of a pairwise force function for the bond connecting two typical points is shown in the following figure:
Damage is incorporated in the pairwise force function by allowing bonds to break when their elongation exceeds some prescribed value. After a bond breaks, it no longer sustains any force, and the
endpoints are effectively disconnected from each other. When a bond breaks, the force it was carrying is redistributed to other bonds that have not yet broken. This increased load makes it more
likely that these other bonds will break. The process of bond breakage and load redistribution, leading to further breakage, is how cracks grow in the peridynamic model.
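To make the bond-breaking mechanism concrete, here is a minimal one-dimensional, bond-based sketch with a linear ("microelastic") pairwise force and a critical-stretch failure criterion. It is an illustrative toy written for this article, not a validated peridynamics code: the micromodulus, horizon, critical stretch, and loading are arbitrary assumptions, and real implementations need surface corrections, proper volume weighting, and careful time-step control.

import numpy as np

# One-dimensional bar discretized into nodes; each node interacts with all
# neighbours within the horizon through bonds.
n, dx = 101, 0.01                        # number of nodes and spacing
horizon = 3.015 * dx                     # interaction radius
c, rho, s_crit = 1.0e12, 7800.0, 0.01    # micromodulus, density, critical stretch (assumed)
x = np.arange(n) * dx                    # reference positions
u = np.zeros(n)                          # displacements
v = np.zeros(n)                          # velocities
vol = dx                                 # nodal "volume" per unit cross-section

bonds = [(i, j) for i in range(n) for j in range(i + 1, n) if x[j] - x[i] <= horizon]
intact = np.ones(len(bonds), dtype=bool)

dt = 5.0e-8
for _ in range(2000):
    f = np.zeros(n)                      # accumulated force density times neighbour volume
    for k, (i, j) in enumerate(bonds):
        if not intact[k]:
            continue
        xi = x[j] - x[i]                 # reference bond length
        eta = u[j] - u[i]                # relative displacement
        stretch = eta / xi               # bond elongation over reference length
        if stretch > s_crit:
            intact[k] = False            # the bond breaks irreversibly
            continue
        fk = c * stretch * vol           # linear "microelastic" pairwise force term
        f[i] += fk
        f[j] -= fk
    # Strong opposing body forces at the ends pull the bar apart.
    f[0] -= 1.0e9
    f[-1] += 1.0e9
    v += dt * f / rho                    # rho * u_tt = sum of bond forces + body force
    u += dt * v

print("broken bonds:", int((~intact).sum()))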
Peridynamic states
The theory described above assumes that each peridynamic bond responds independently of all the others. This is an oversimplification for most materials and leads to restrictions on the types of
materials that can be modeled. In particular, this assumption implies that any isotropic linear elastic solid is restricted to a Poisson ratio of 1/4.
To address this lack of generality, the idea of "peridynamic states" was introduced. This allows the force density in each bond to depend on the stretches in all the bonds connected to its endpoints,
in addition to its own stretch. For example, the force in a bond could depend on the net volume changes at the endpoints. The effect of this volume change, relative to the effect of the bond stretch,
determines the Poisson ratio. With peridynamic states, any material that can be modeled within the standard theory of continuum mechanics can be modeled as a peridynamic material, while retaining the
advantages of the peridynamic theory for fracture.
|
{"url":"http://www.thefullwiki.org/Peridynamics","timestamp":"2014-04-20T03:18:18Z","content_type":null,"content_length":"59437","record_id":"<urn:uuid:9186b13c-be65-4edf-aaae-77c1b3f65273>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Verify the following
could someone please give me some hints about how to prove this?
Suppose that $a,~b,~\&~c$ are positive integers such that $a>b$. Further, suppose that we have $a$ blue balls and $c+b$ red balls. Now we put the balls all together in a box from which we randomly
choose $b$ balls. Surely you can see that we can choose from $0\text{ to }b$ blue balls. That is all that summation says.
Last edited by Plato; May 9th 2012 at 04:02 PM.
|
{"url":"http://mathhelpforum.com/number-theory/198415-verify-following.html","timestamp":"2014-04-17T19:24:56Z","content_type":null,"content_length":"42987","record_id":"<urn:uuid:ad9793d9-52a9-427a-8acc-01b7ef2bb12e>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
proof of Zermelo's well-ordering theorem
Let $X$ be any set and let $f$ be a choice function on $\mathcal{P}(X)\setminus\{\emptyset\}$. Then define a function $i$ by transfinite recursion on the class of ordinals as follows:
$i(\beta)=f\bigl(X-\bigcup_{\gamma<\beta}\{i(\gamma)\}\bigr)\text{ unless }X-\bigcup_{\gamma<\beta}\{i(\gamma)\}=\emptyset\text{ or }i(\gamma)\text{ is undefined for some }\gamma<\beta$
(the function is undefined if either of the unless clauses holds).
Thus $i(0)$ is just $f(X)$ (the least element of $X$), and $i(1)=f(X-\{i(0)\})$ (the least element of $X$ other than $i(0)$).
Define by the axiom of replacement $\beta=i^{-1}[X]=\{\gamma\mid i(\gamma)=x\text{ for some }x\in X\}$. Since $\beta$ is a set of ordinals, it cannot contain all the ordinals (by the Burali-Forti paradox).
Since the ordinals are well ordered, there is a least ordinal $\alpha$ not in $\beta$, and therefore $i(\alpha)$ is undefined. It cannot be that the second unless clause holds (since $\alpha$ is the
least such ordinal) so it must be that $X-\bigcup_{{\gamma<\alpha}}\{i(\gamma)\}=\emptyset$, and therefore for every $x\in X$ there is some $\gamma<\alpha$ such that $i(\gamma)=x$. Since we already
know that $i$ is injective, it is a bijection between $\alpha$ and $X$, and therefore establishes a well-ordering of $X$ by $x<_{X}y\leftrightarrow i^{{-1}}(x)<i^{{-1}}(y)$.
The reverse is simple. If $C$ is a set of nonempty sets, select any well ordering of $\bigcup C$. Then a choice function is just $f(a)=$ the least member of $a$ under that well ordering.
|
{"url":"http://planetmath.org/ProofOfZermelosWellOrderingTheorem","timestamp":"2014-04-18T23:17:07Z","content_type":null,"content_length":"71130","record_id":"<urn:uuid:fc7173d4-c6c3-4016-81fc-ac913d3cd474>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
|
October 28th 2006, 12:21 AM #1
Oct 2006
need help!!!
If we have two sets, A and B, contained in the universal set
Can someone tell me why the smallest possible number of elements of
(A intersect B) occurs when A union B = universal set???
i know that
n(A intersect B) = n(A) + n(B) - n(A union B)
so to make it smaller, we need to make n(A union B) as big as possible.
But is there a way to explain this without using the formula above???
Last edited by acc100jt; October 28th 2006 at 01:06 AM.
I'm not sure I understand the question. For instance, let D={0,1,2,3,4} be
our universe of discourse (universal set), and let A={0,1}, B={2,3}, then
|A Intersect B|=0, but A Union B != D.
Similarly let A={0,1,2}, B={2,3,4}, then |A Intersect B|=1, and A Union B =D.
sorry, let me restate my questions,
If the sets A and B are not disjoint.
n(universal set)=60
then the least n(A intersect B) occurs when (A union B)=universal set.
Because n(A)+n(B)>n(U), there must be some elements shared between
A and B. The smallest number that could be shared is 6 (that is:
n(A)+n(B)-n(U), any smaller number of shared elements will leave n(A Union
B)>60, which would be a contradiction).
But if A and B share exactly 6 elements then n(A Union B)=60, and so
A Union B=U.
|
{"url":"http://mathhelpforum.com/discrete-math/6943-need-help.html","timestamp":"2014-04-19T08:40:04Z","content_type":null,"content_length":"35945","record_id":"<urn:uuid:f1334a6b-a508-4169-9d16-45c4615d6c62>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Biographical Sketch of Henry W. Gould
Henry W. Gould, Professor Emeritus of Mathematics, West Virginia University, was born in Portsmouth, Va. (26 August 1928). He graduated from Woodrow Wilson High School in Portsmouth in Jan. 1946. He
studied at the Norfolk Division of the College of William and Mary (now Old Dominion University) (1946-48), at the University of Virginia where he received his B.A. (1954) and M.A. (1956) in
Mathematics, and at the University of North Carolina at Chapel Hill (1957-58), where he was a Research Assistant to Professor Alfred T. Brauer in his work on the location of characteristic roots of
matrices using ovals of Cassini. He is also a graduate of the National Radio Institute, Washington, D.C. (1947), and studied communications theory at The Southeastern Signal School (TSESS), U. S.
Army, Fort Gordon, Ga. (1951-52). In the late 1980's Gould studied Chinese, linguistics, and took graduate work in behavioral psychology (Behaviorology a la B. F. Skinner) at West Virginia
University. From 1952 until Carlitz's death in 1999, Gould's mentor in mathematical research was Professor Leonard Carlitz at Duke University.
In 1957, at the University of Virginia, he was elected a full member of the Sigma Xi Research Society for his distinction in mathematics.
Early in his career (1963) he was elected a Fellow of the American Association for the Advancement of Science.
Gould was elected to the Beta Chapter of the national mathematics honorary Pi Mu Epsilon at the University of North Carolina (Chapel Hill) in 1957, and was one of the charter members of the Alpha
Chapter at West Virginia University in 1967.
He joined the faculty of West Virginia University as an instructor in 1958, and received the rank of Professor in 1969, becoming Professor Emeritus, Spring 2007, after 49 years of service at WVU. He
continues his research, writing and working with faculty and students at West Virginia University even past the age of 83.
He has been a consultant with the National Security Agency, Principal Investigator at WVU with several College of Arts and Sciences grants, and grants from the National Science Foundation on the
topic of Combinatorial Identities, and has served as a reviewer for the Mathematical Reviews and the Zentralblatt für Mathematik. At West Virginia University he directed a Research Program under
auspices of the Office of the Provost in 1976-77 concerned with mathematical computations for coal mine valuation, using Bondurant's variation of the Hoskold actuarial formula.
He received the J. Shelton Horsley Research Award from the Virginia Academy of Science in 1977.
In 1976 he was invited to organize a Special Session on Combinatorial Identities for the American Mathematical Society at its Summer Meeting in Toronto, Canada. He was an invited lecturer at the NSF
CBMS Regional Conference on Special Functions at V.P.I. in 1974. He was a Visiting Lecturer for the Mathematical Association of America (1967-70) and for the Society for Industrial and Applied
Mathematics (1974-76). He was an invited participant to the first Annual Symposium on the History of Mathematics held at the National Museum of Science and Technology, Smithsonian Institution,
Washington, D.C., in 1976, concerned with Cauchy's contributions to analysis. Gould has published extensive bibliographies on combinatorial topics and on Cauchy's integral theorem.
Professor Gould was elected a Foundation Fellow of the Institute of Combinatorics and its Applications (1990).
Gould was elected an Honorary Fellow of the Institute of Combinatorics and its Applications at the 19th Annual General Meeting of the Institute, 10 March 2010 at Florida Atlantic University.
Gould's continuing research and long-time service to West Virginia were recognized by his receiving the prestigious Benedum Distinguished Scholar Award for Physical Sciences and Technology in
ceremonies in March 1988.
On the occasion of his 70-th birthday in 1998, a Colloquium was held at West Virginia University in honor of Henry and his wife Jean, with invited talks by mathematicians from the U.S., Canada,
Zimbabwe, and Denmark. A special framed letter of commendation was presented to him from U. S. Senator Robert C. Byrd, acknowledging and praising Gould's long-time service to WVU. Gould jokingly
called the event his "mid-career celebration".
Volume 204(1999) of the journal Discrete Mathematics was dedicated in honor of Gould and his work and contained numerous invited papers in his honor. The volume was edited by Ira M. Gessel, Louis W.
Shapiro and Douglas Rogers. It contained an amusing biographical preface by the editors.
Professor Gould has published over 200 papers, which have appeared in about 20 countries. He is the author of the widely used, major reference book Combinatorial Identities published in 1972. His
research has been in combinatorial analysis, number theory, special functions of mathematical physics, and the history of mathematics and astronomy.
Some of his early work (1956) was used by Oakley and Wisner (1957) to enumerate hexaflexagons.
In 1962 Gould was one of the founding editors of the number theory journal Fibonacci Quarterly and for many years has been an associate editor of the Journal of Mathematical Research and Exposition
founded by L. C. Hsu and published at Dalian, People's Republic of China. Professors Hsu and Gould began a research collaboration in 1965, seven years before Nixon's famous visit to China. Gould is
also an associate editor of the on-line electronic Journal of Integer Sequences, and is a member of the editorial board of the journal Applicable Analysis and Discrete Mathematics published by the
University of Belgrade, Serbia.
From 1974 to 1979, Professor Gould was Editor-in-Chief of the Proceedings of the West Virginia Academy of Science.
Gould founded and circulated a mathematical serial Mathematica Monongaliae, of which 12 issues were published from 1961 to 1971. Several of these have been reprinted extensively, such as issue No.
12, a "Bibliography of Bell and Catalan Numbers". This annotated bibliography was cited in two separate articles by Martin Gardner in his mathematical column in the Scientific American magazine.
Issue No. 10, a "Chronological Bibliography of the Cauchy Integral Theorem" listing 200 proofs of the famous theorem was coauthored with Herbert K. Fallin. The bibliography was cited in the journal
Historia Mathematica. With the aid of WVU graduate student Timothy Glatzer, the Bell and Catalan number bibliography is being revised and re-alphabetized with the intent of making it available.
At WVU he was in charge of the development of a Departmental Mathematics Research Library 1960-2007. The library had only some 89 journals in 1958, but was expanded to reach as many as 250 titles by
1978. Since then money has been tight at WVU and subscriptions have been lost. The WVU Departmental Library was discontinued in June 2008 and its holdings moved into Wise Library and older materials
into a WVU Depository storage facility. The former library space is now used to house an undergraduate calculus tutorial center with more than a hundred computers.
Gould served as mathematics consultant to the 'Dear Abby' newspaper column. One interesting aspect of this work was writing an explanation of the three ancient Greek problems (trisecting an angle,
squaring the circle, and duplicating the cube). A pamphlet on this material was sent to hundreds of readers (mostly secondary school students) in every state and overseas, who wanted to know more
about these famous problems.
An article entitled "An Interview with H. W. Gould", by Prof. Scott H. Brown, appears in the College Mathematics Journal, Vol. 37, No. 5, November, 2006, pp. 370 - 379. This includes biographical
notes and photos.
As Professor Emeritus of Mathematics, Gould continues his research and service at West Virginia University. On the occasion of his recent retirement another special framed letter of commendation was
presented to him from U. S. Senator Robert C. Byrd, and a similar letter from U. S. Senator Jay Rockefeller, acknowledging and praising Gould's long service to WVU.
A Mathematics Colloquium was held in Gould's honor on 20 Sept. 2007, at which time George E. Andrews, Evan Pugh Professor in the Department of Mathematics at The Pennsylvania State University and
President-Elect of the American Mathematical Society, presented a paper “Gould’s Function and Problems in Partitions,” as part of the WVU Distinguished Lecture Series in Mathematics in the Eberly
College of Arts and Sciences. Andrews' paper was motivated by Gould's 1964 paper on compositions into relatively prime parts.
Also at this celebration Dot Underwood, Assistant to West Virginia Governor Joe Manchin, III, presented a special certificate to Gould, signed by the Governor, making Gould an "Honorary Mountaineer"
in recognition of his 49 years of outstanding service to WVU and for his dedication to research and education in the field of mathematics.
Among his recent activities, Gould began a research collaboration with Dr. Jocelyn Quaintance, Visiting Research Assistant Professor of Mathematics at WVU in 2006-2010. They are working on a
long-term revision of Gould's 1972 book "Combinatorial Identities", and Gould's manuscript notes covering 1945-1990. Since 2010 Dr. Quaintance has been supported as a consultant by a private
Other Biographical Details:
Among his hobbies are poetry, cryptography, philosophy, debating, hiking in the mountains, stamp collecting, genealogy, astronomy, carpentry and Dumpster Diving (he is the local "Oscar the Grouch",
known for retrieving valuable papers, books and journals from WVU dumpsters and creating the "WVU Math Library in Exile"). He has had a long-time interest in drawing cartoons and caricatures, being
influenced by his late distant cousin Chester Gould who invented and drew "Dick Tracy" for 50 years. Chester's grandfather was from West Virginia. One of Gould's principal hobbies is book collecting,
and his personal library has holdings of some 60,000 general titles, but also several thousand mathematics titles and many journals and a large mathematics reprint file. Gould now contributes notes
on Facebook.
In 1945 Gould learned glass blowing and designed and built a Geiger counter tube (with associated circuitry) that was used in efforts to locate lost radium at Norfolk General Hospital. In 1944-45
Gould designed and built several giant million-volt Tesla and Oudin coils, and designed a special, small, cigar-box Tesla coil for the late Hubert J. Davis, Director of the Norfolk County Teaching
Aids Library, that was used in schools all over the South to encourage students to study science.
In 1944-45 he built and operated Civil Defense UHF radio station WJWB-12 in Portsmouth, Va, as part of the War Emergency Radio Service (WERS). Gould has been an amateur radio operator (ham) since
1955. His call letters are K4CQA (and also for a time WB8OSE), and he has contacted stations in over 100 countries using Morse code at up to 25 words per minute. He served as a Class I Official
Observer for the American Radio Relay League, monitoring amateur radio operations, making precise frequency measurements, and sending warning notices. He received certified ARRL recognition of Morse code proficiency of 25 words per minute.
In the mid-1950's he received a special Letter of Commendation from the Secretary of the Federal Communications Commission for helping to monitor and close down an illegal ham station operating on a
secret NATO cruise in the English Channel.
Gould was instrumental in establishing amateur radio clubs at the University of North Carolina (1957) and at West Virginia University (1958). He worked for some years as a radio engineer (holder of
FCC First Class Radio Operator License since 1955 and a third class commercial licence from 1945) and announcer, especially at WUVA, the student radio station at the University of Virginia (1948-57),
where he was specially recognized for his work in improving the broadcast transmitting equipment. As an announcer he ran programs of classical, popular and country music.
Starting 4 Oct. 1957, at Chapel Hill, NC, Gould monitored the first 300 orbits of the first Russian SPUTNIK, which broadcast beeping sounds on 20.005 MHz. He followed the news of the Sputnik in the
Soviet newspaper PRAVDA. The orbits were traced out on a world globe and tracking data was sent by ham radio to Cape Canaveral. He monitored 200 orbits of the second Sputnik and 150 orbits of the
third Sputnik, and developed a mathematical formula to determine the distance of closest approach of the Sputnik above the observer. This was based on measuring the Doppler effect by checking against
a precise frequency standard. The formula and proof were published in the Proceedings of the West Virginia Academy of Science, Vol. 36(1964), pp. 169-174.
He was one of the founders of the Morgantown Astronomy Club. From 1965 to 1980 Prof. Gould, together with Wilbur Bluhm (deceased), A. Dale Randolph (deceased), and others, founded and ran the
Morgantown Astronomy Club. Gould was President and edited their newsletter for ten years. The club had 60 members and some 40 telescopes. Public viewing sessions were held.
Gould's wife Jean is a full-time student at West Virginia University, receiving a bachelor's degree in art history (specializing in fifteenth and sixteenth century Florentine sculpture and painting)
in 2005, and receiving her BFA degree in sculpture in May 2009, and she is now studying therapeutic horsemanship and working with autistic children.
For ten years (circa 1970-1980) Gould and his late first wife Josephine founded and ran the Morgantown Orchid Society and Gould edited their Newsletter. Lectures and slide presentations were given,
introducing many people to orchid culture. Bus trips were organized to take hundreds of people to orchid shows in Pittsburgh.
Gould has had a life-long interest in languages. Besides being fluent in German, he has taken courses in Greek, Hebrew, Arabic, and Chinese. He began his study of Chinese characters circa 1942, and
took courses in spoken Chinese circa 1989 at West Virginia University. He subscribed to Pravda, Izvestia and Krokodil in order to practice reading Russian. His interests have included artificial
languages, word studies and cryptography. Learning from his father to count in Arabic at an early age, and with a deep interest in the Middle East, Gould served for three years as Faculty Adviser to
the WVU Arab Students Club 1958-61.
Some general articles relating to Gould's life and work:
1. Susannah Grimm, "WVU-China Correspondence Resumes After President's Visit", WVU Magazine,
Vol. 4, No. 3, Fall 1972, pp. 20-21. (with photo)
2. Earl Lemley Core, "The Monongalia Story. A Bicentennial History", 5 Volumes, 1974-1984, Published
by the McClain Printing Co., Parsons, W. Va. Gould is credited in several volumes for information
about the history of the WVU Mathematics Department and the old WVU Observatory (1901-1919).
3. Elsa Nadler, "The Case for Vectors", Inquiry (WVU magazine), Spring 1986, pp. 25-27. (photo)
4. Ira M. Gessel, Louis W. Shapiro and Douglas Rogers (Editors). Biographical notes about Henry Gould,
Discrete Mathematics,Volume 204(1999), front pages, Entire volume was dedicated in honor of Gould.
5. Norman Julian, "Numbers are No. 1 in this math professor's life", Dominion Post, Morgantown, WV,
Sunday, 2 April, 2000.
6. by the editors, Photo and short story about Half Professors Charlie Brown and Spotsie Ann,
Bull. Inst. Combinatorics and its Appls., 36(2002), Sept., p. 102.
7. Scott H. Brown, "An Interview with H. W. Gould", College Mathematics Journal, Vol. 37, No. 5,
November, 2006, pp. 370-379. (eight photos)
8. Mary Ellen Mazey, Dean, Eberly College, "Our Accomplished Faculty", The Eberly College Magazine,
Spring 2008, p. 1, contains remarks about Gould's work at WVU.
9. Katherine Kline, story about Henry Gould retiring, The Eberly College Magazine, Spring 2008,
10. Malinda Reinke, Man + Math = Love, Retired Professor Henry W. Gould Knows Formulas,
Dominion Post newspaper, Mon. Nov 7, 2011 Section: Front Page and page 4.
Some Theses directed at WVU:
Allen Taylor Hopper, Generalized Hermite Polynomials, M.S., 1961.
Herbert Kirk Fallin, Jr., Cauchy's Integral Theorem, M.A., 1964
(Co-directed with Prof. Charles H. Vehse).
Charles F. Waiveris, Jr., Factorials: Their History, Generalizations, and Characterizations, M.A., 1973.
Michael J. Kuchinski, Catalan Structures and Correspondences, M.S., 1977. This thesis is linked to
Gould's 1971 bibliography of Catalan Numbers for the references.
Michael M. Watts, An Examination of Mine Valuation Formulas, M.B.A., 1977
(data on W. Va. coal mines collected by Jairo Velez).
Stephen A. Ford, Analyses of Formulas for Coal Tax Evaluation (Hoskold-Bondurant Formulas),
M.S., Computer Science, 1977.
John P. Ryan, An Historical Consideration of the Theory of Primitive Roots, M.S., 1980.
Louis Worthy Kolitsch, A Relationship Between the Partition Function and the Bracket Function,
Master's Project Paper, 1981 (Co-directed with Prof. Michael E. Mays).
Temba Shonhiwa, Investigations in Number Theory Functions, Ph.D. Dissertation, 1996.
Stephen L. Richardson, Jr., Enumeration of the Generalized Catalan Numbers, M.S., 2005.
Updated 18 November 2011
|
{"url":"http://www.math.wvu.edu/~gould/vita.html","timestamp":"2014-04-20T20:56:16Z","content_type":null,"content_length":"22190","record_id":"<urn:uuid:361af6d2-df21-452f-805e-eb7998aae52f>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Estimation of Taylor series
June 2nd 2013, 12:37 AM #1
Junior Member
Mar 2013
New Zealand
Estimation of Taylor series
For question 11 (which I think is more straightforward than 10), I think I have the answer, which is 2 decimal places
For question 10, I end up with the following formula
R9 = (Cos(w) (2-pi/2) ^9 )/9!
However, I have never understood what w is (some textbooks and videos use z) other than it is a number between x and c. I assume x is 2 and c is pi/2 but what number is in between. Depending on
what I use I get a massive difference in "error". What is w?
Re: Estimation of Taylor series
Hey lukasaurus.
The statement is that w is a number between x and c and is just an existence theorem: in other words, the proof says that the condition exists and not what it is.
What you should do is to use the statement to get the maximum possible error and use that to get the answer for the sine.
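For reference, the point being made can be written out with the Lagrange form of the remainder. Assuming the expansion is of $\sin x$ about $c = \pi/2$ and evaluated at $x = 2$ (which is what the question seems to describe), the remainder after the degree-$n$ term is $R_n(x) = \frac{f^{(n+1)}(w)}{(n+1)!}(x-c)^{n+1}$ for some $w$ between $c$ and $x$. Every derivative of $\sin$ is $\pm\sin$ or $\pm\cos$, so $|f^{(n+1)}(w)| \le 1$ no matter what $w$ is, which gives the usable bound $|R_n(2)| \le \frac{|2 - \pi/2|^{n+1}}{(n+1)!}$. The actual value of $w$ is never needed; the theorem only guarantees that such a number exists.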
|
{"url":"http://mathhelpforum.com/calculus/219519-estimation-taylor-series.html","timestamp":"2014-04-17T23:02:56Z","content_type":null,"content_length":"33524","record_id":"<urn:uuid:b3345ac1-f347-41ca-85ba-32915a8eae56>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Molecular-dynamics simulation movies
The MPEG-movies are results of large-scale molecular-dynamics simulations performed with the SPaSM code using massive parallel computing systems, which enable us to follow in detail the dynamics of
structural phase transformations and other modes of plasticity.
Most of the simulations were performed using an embedded-atom method (EAM) potential (M.S. Daw and M.I. Baskes, PRL 50, 1285 (1983); M.S. Daw and M.I. Baskes, PRB 29, 1285 (1983)), which consists of a density-dependent term and a two-body potential term. The EAM potential is able to reproduce fundamental properties of metals, in contrast to pure pair potentials like the Lennard-Jones (LJ) potential.
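For orientation, the generic form of an EAM total energy (not the specific parametrization used in these simulations, which is defined in the Daw-Baskes papers cited above) is $E_{tot} = \sum_i F(\rho_i) + \frac{1}{2}\sum_i \sum_{j \neq i} \phi(r_{ij})$, with $\rho_i = \sum_{j \neq i} f(r_{ij})$, where $F$ is the embedding energy as a function of the host electron density $\rho_i$ at atom $i$, $\phi$ is a pair potential, and $f(r_{ij})$ is the density contribution of a neighbor at distance $r_{ij}$. The embedding term is exactly what a pure pair potential such as Lennard-Jones lacks.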
Rayleigh-Taylor instability by particle methods:
The applications of the Rayleigh-Taylor instability---i.e. the mixing of a heavy fluid on top of a light in a gravitational field ---range from astrophysics (supernova explosions), to geophysics
(formation of salt domes), all the way to inertial confinement fusion (collapse of ICF
capsules), as well as the general turbulent mixing of fluids. Therefore, its fundamental understanding is of relevance not only to the foundations of hydrodynamics but also to a broad range of
subjects, including physics, chemistry, biology, and geology.
We show that quantitative theoretical investigations on the atomistic level of the Rayleigh-Taylor instability --- as the classical example of complex turbulent hydrodynamic flows --- compare
favorably to recent experiments. Using the latest generation of supercomputers (the LANL Q machine) we solve Newtonian equations of motion for up to 100 million particles for as many as 250,000
integration steps --- an enormous numerical venture. A quantitative comparison of these ``nanohydrodynamic'' flows with continuum descriptions (Navier-Stokes equations) and macroscopic experiments
demonstrates that large-scale atomistic simulations can provide insight into complex hydrodynamic phenomena.
The movie gives a nice impression of what happens during such instabilities; it shows a quasi-two-dimensional simulation including about 12 million atoms interacting via Lennard-Jones potentials (only
the repulsive part for the AB interaction in order to maximize surface tension)
Computer power increased dramatically since 2003 and we were able to run the Rayleigh-Taylor problem with over 7 billion particles on BlueGene/L. Also, the Atwood number (A=(rho1-rho2)/(rho1+rho2),
rho1/2=mass density of heavy/light fluid) dependence has been investigated in more detail.
2D-RT by MD, 12 million particles (0.9MB) 3D-RT by DSMC, 7 billion particles (3 MB) 2D-RT by DSMC A=0.29, 100 million particles (8 MB) 2D-RT by DSMC A=0.67, 100 million particles (8 MB) 2D-RT by DSMC
A=0.98, 100 million particles (8 MB)
Proc. Natl. Acad. Sci. 104, 7741 (2007), The Importance of Fluctuations in Fluid Mixing by K. Kadau, C. Rosenblatt, J.L. Barber, T.C. Germann, Z. Huang, P. Carles, and Berni J. Alder. (for supporting
on-line material including movies click here)
Int. J. Mod. Phys. C 17, 1755 (2006), Molecular-Dynamics Comes of Age: 320 Billion Atom Simulation on BlueGene/L by K. Kadau, T.C. Germann, and P.S. Lomdahl.
Supercomputing `05 (2005)(SC05, ACM 1-59593-061-2/05/0011).[Gordon Bell Prize finalist paper] 25 Tflop/s Multibillion-Atom Molecular Dynamics Simulations and Visualization/Analysis on BlueGene/L, by
T.C. Germann, K. Kadau, P.S. Lomdahl
Proc. Natl. Acad. Sci. 101, 5851 (2004), Nanohydrodynamics simulations: An atomistic view of the Rayleigh-Taylor instability by K. Kadau, T.C. Germann, N.G. Hadjiconstantinou, P.S. Lomdahl, G.
Dimonte, B.L. Holian, and B.J. Alder. (for supporting on-line material including movies click here)
Int. J. Mod. Phys. C 15, 193 (2004), Large-Scale Molecular-Dynamics Simulation of 19 Billion particles by K. Kadau, T.C. Germann, and P.S. Lomdahl
DPG pro-physik.de, 08.04.2004, Supercomputer laesst 100 Millionen Atome tanzen by Rainer Scharf.
Shock-induced structural phase transformation in bcc iron:
Shock waves were initiated by a 'momentum mirror' (B.L. Holian and P.S. Lomdahl, Science 280, 965 (1998)), which specularly reflects any atoms reaching the face of the perfectly flat, infinitely massive piston (left) moving at the piston velocity. The resulting shock wave in the iron single crystal moves (from left to right) along the (001) direction in the initial bcc
structure (gray). Above the threshold for the structural transformation (about 15GPa, about 10 percent uniaxial compression, or about a piston velocity of 5 percent of the longitudinal sound
velocity) into the close-packed structure (red) many grains of the close-packed material nucleate in a displacive manner (martensitic-like) within the uniaxially compressed bcc structure (blue).
Crystallographic different oriented grains are separated by grain boundaries (yellow). Depending on the shock strength the transformed region can be a mixed phase region and the resulting shock wave
structure is a split two-wave structure consisting of an elastic precursor and a slower transformation wave. The initial nucleation takes place along the (bcc011) close-packed planes transforming
into the close-packed planes of the close-packed material. The comparison of the nucleation process for three different shock strengths (increasing from left to right in the movie) is shown in the last movie, where only atoms with a lateral displacement larger than about 1/6 of the nearest-neighbor distance are shown. The samples consist of approx. 8 million atoms (i.e. 40.2nm x 40.2nm x 57.4nm) and were simulated for 8.76ps.
Shock along Fe-bcc(001), piston velocity=417m/s
Shock along Fe-bcc(001), piston velocity=471m/s
Shock along Fe-bcc(001), piston velocity=689m/s
Nucleation of close-packed material for 3 different shock-strength
Phys. Rev. Lett. 98, 135701 (2007), Shock Waves in Polycrystalline Iron by K. Kadau, T. C. Germann, P. S. Lomdahl, R.C. Albers, J.S. Wark, A. Higginbotham, and Brad Lee Holian. (for supporting
on-line material including movies click here)
Phys. Rev. B 72, 064120 (2005), Atomistic Simulations of Shock-Induced Structural Transformations in bcc Iron Single Crystals for Different Crystallographic Orientations by K. Kadau, T.C. Germann,
P.S. Lomdahl, Brad Lee Holian.
Phys. Rev. Lett. 95, 075502 (2005), Direct Observation of the alpha-epsilon Transition in Shock-Compressed Iron via Nanosecond X-ray Diffraction by D.H. Kalantar, J.F. Belak, G. W. Collins, J. D.
Colvin, H.M. Davies, J.H. Eggert, T.C. Germann, J. Hawreliak, B.L. Holian, K. Kadau, P.S. Lomdahl, H.E. Lorenzana, M.A. Meyers, K. Rosolankova, M.S. Schneider, J. Sheppard, J.S. Stoelken, J.S. Wark.
Science 296, 1681 (2002), Microscopic View of Structural Phase Transitions Induced by Shock Waves by K. Kadau, T.C. Germann, P.S. Lomdahl, and B.L. Holian.
Frankfurter Allgemeine Zeitung, 12.06.2002, Nr. 133 / Seite N2, Schock im Eisenkristall by Rainer Scharf.
Science 280, 965 (1998), Plasticity Induced by Shock Waves in Nonequilibrium Molecular-Dynamics Simulations by B.L. Holian and P.S. Lomdahl.
Temperature induced displacive structural transformations (austenite/martensite)
Cubic iron-nickel alloy nano-particles (only half of the sample is shown in order to see the interior) undergo a martensitic transformation (first movie) from the high temperature fcc austenitic
phase (red) into the low temperature bcc martensitic phase (green) (due to technical reasons the surface is green or blue depending on the orientation of the surface). The heterogeneous nucleation
starts at defects like corners and grows into the interior of the nano-particle with a fraction of the sound velocity (that is, the growth velocity depends on the crystallographic direction.) with a
transient mixed phase region having a needle-like pattern. The final structure is twinned and can undergo a transformation back to the original austenite structure (second movie). The sample contains
one million atoms (edge length=24nm) and was simulated for 67.5ps.
martensitic transformation at low temperatures
austenitic (back) transformation at high temperatures
Phase Transitions 75, 59 (2002), Atomistic investigations of the thermodynamical stability and martensitic nucleation of Fe80 Ni20 nanoparticles by K. Kadau and P. Entel.
J. Phys. IV France 11, Pr8-17 (2001), Large-scale molecular-dynamics study of the nucleation process of martensite in Fe-Ni alloys by K. Kadau, P. Entel, T.C. Germann, P.S. Lomdahl, and B.L. Holian.
Phys. Rev. B 57, 5140 (1998), Martensite-austenite transition and phonon dispersion curves of Fe1-xNix studied by molecular dynamics simulations by R. Meyer and P. Entel.
J. Magn. Magn. Mater. 177-181, 1409 (1998), Numerical simulation of martensitic transformations in magnetic transition-metal alloys by P. Entel, R. Meyer, K. Kadau, H.C. Herper, M. Acet, E.F.
Mechanical properties of nano-phase metals (tensile test)
We model tensile testing of nano-phase Al (Al modelled by an EAM potential described in: Phase Transitions 75, 265 (2002), Atomistic modeling of diffusion in Aluminum by S. Grabowski, K. Kadau, and
P. Entel.) by either sintering from spherical nano-particles or setting up by a Voronoi construction.
Different crystallographic oriented grains (red) are separated by grain boundaries or pores (yellow). Under tensile testing different modes of plasticity such as grain rotation and grain-boundary
movements are observed. In addition, and in contrast to other materials like Cu, Ni, and Pd (see Literature), crack propagation along the grain boundaries at high strains (around 10 percent, depending on grain size) is observed. As in the case of Cu, Pd, and Ni, an inverse Hall-Petch effect (i.e. softening at the smallest grain sizes) is observed. In the case of the sintered nano-phase material, the remaining pores significantly reduce the strength of the material (growth of pores). For some quantitative results check here.
tensile test nanophase Aluminum (sintered)
tensile test nanophase Aluminum (Voronoi-constructed)
NANOTECH 2002 Proceedings of the second International Conference on Computational Nanoscience and Nanotechnology, 338 (2002), Molecular-dynamics study of physical properties in sintered
nano-particles by K. Kadau, P.S. Lomdahl, P. Entel, D. Kadau, M. Kreth, T.C. Germann, B.L. Holian, F. Westerhoff, and D.E. Wolf.
Nature 391, 561-563 (1998), Softening of nanocrystalline metals at very small grain sizes by J. Schiøtz, F. D. Di Tolla and K. W. Jacobsen.
Science 296, 66 (2002), Grain Boundaries and Dislocations by H. van Swygenhoven.
Impact of a Nano-Meteor with 11 miles/sec
Motivated by a visit at the Meteor Crater or Barringer Crater near Flagstaff in Arizona we model an impact of a nano-meteor consisting of 418 iron atoms smashing into a slab of one million bcc
ordered iron atoms. The impact velocity was chosen to be 11 miles/sec, which was the impact velocity of the Meteor Crater event; the incident angle in the simulation is 45 degrees. It's amazing how fast the kinetic energy is transformed into potential energy (i.e. deformation). As early as 4 picoseconds after the impact a crater has formed which consumed about half of the kinetic energy of the meteor (see graph). Another interesting aspect is the resulting aspect ratio (the ratio between the crater depth and width), which is for the Meteor Crater about 1/5 at present (that might have been different directly after the impact). The nano-meteor crater has an aspect ratio of 1/3 four picoseconds after the impact. As you can see in the movies the crater seems to get larger in diameter, so things are going in the right direction. We also find that a perpendicular impact doesn't change the crater formation and the energy transformation very much. For these simulations we worked with an integration time step of 0.125 femtoseconds - an increase to 0.25 femtoseconds didn't change the picture but rather led to numerical instabilities at later times ...
By the way, the Barringer Crater has a diameter of about a mile and was formed some 50,000 years ago by a meteor consisting of an iron-nickel compound, with a diameter of 150 feet and a weight of no less than 300,000 tons. Between miles and nanometers there is a factor of 10 to the power of 12, but still there are similarities, at least for certain situations ...
3 dimensional view of impact
side view
airplane view
Theoretical Low-Temperature Physics (University Duisburg, Germany)
Theoretical Division (T-11) (LANL, USA)
Kai Kadau's home page
|
{"url":"http://www.thp.uni-duisburg.de/~kai/index_1.html","timestamp":"2014-04-18T09:46:20Z","content_type":null,"content_length":"18665","record_id":"<urn:uuid:77657ccf-72b6-4da9-89b9-ae725f2a2854>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Median Filters: A Tutorial
- IEEE Transactions on Signal Processing , 1999
Cited by 13 (2 self)
This paper is concerned with regression under a "sum" of partial order constraints. Examples include locally monotonic, piecewise monotonic, runlength constrained, and unimodal and oligomodal
regression. These are of interest not only in nonlinear filtering but also in density estimation and chromatographic analysis. It is shown that under a least absolute error criterion, these problems
can be transformed into appropriate finite problems, which can then be efficiently solved via dynamic programming techniques. Although the result does not carry over to least squares regression,
hybrid programming algorithms can be developed to solve least squares counterparts of certain problems in the class. Index Terms--- Dynamic programming, locally monotonic, monotone regression,
nonlinear filtering, oligomodal, piecewise monotonic, regression under order constraints, runlength constrained, unimodal. I.
, 1997
Cited by 6 (1 self)
Locally monotonic regression is the optimal counterpart of iterated median filtering. In [1], Restrepo and Bovik developed an elegant mathematical framework in which they studied locally monotonic regressions in $R^N$. The drawback is that the complexity of their algorithms is exponential in $N$. In this paper, we consider digital locally monotonic regressions, in which the output symbols are drawn from a finite alphabet, and, by making a connection to Viterbi decoding, provide a fast $O(|A|^2 \alpha N)$ algorithm that computes any such regression, where $|A|$ is the size of the digital output alphabet, $\alpha$ stands for lomo-degree, and $N$ is sample size. This is linear in $N$, and it renders the technique applicable in practice. I. Introduction: Local monotonicity is a property that appears in the study of the set of root signals of the median filter [2], [3], [4], [5], [6], [7], [8]; it constrains the roughness of a signal by limiting the rate at which the signal undergoes changes of trend (inc...
, 1995
Cited by 6 (3 self)
Simple nonlinear filters are often used to enforce "hard" syntactic constraints while remaining close to the observation data; e.g., in the binary case it is common practice to employ iterations of a
suitable median, or a one-pass recursive median, openclose, or closopen filter to impose a minimum symbol run-length constraint while remaining "faithful" to the observation. Unfortunately, these
filters are - in general - suboptimal. Motivated by this observation, we pose the following optimization: Given a finite-alphabet sequence of finite extent, $y = \{y(n)\}_{n=0}^{N-1}$, find a sequence, $\hat{x} = \{\hat{x}(n)\}_{n=0}^{N-1}$, which minimizes $d(x, y) = \sum_{n=0}^{N-1} d_n(y(n), x(n))$ subject to: $x$ is piecewise constant of plateau run-length $M$. We show how a suitable reformulation of
the problem naturally leads to a simple and efficient Viterbi-type optimal algorithmic solution. We call the resulting nonlinear input-output operator the Viterbi Optimal Runlength-Constrained
, 2006
SUMMARY: Building operators are confronted with large volumes of continuous data from multiple environmental sensors which require interpretation. The ABSTRACTOR system under development summarises
historical data for interpretation and building performance assessment. The ABSTRACTOR algorithm converts time series data into a set of linear trends which achieves data compression and facilitates
the identification of significant events on concurrent data streams. It uses a temporal expert system based on associational reasoning and applies three consecutive processes: filtering, which is
used to remove noise; interval identification to generate temporal intervals from the filtered data- intervals which are characterised by a common direction of change (i.e increasing, decreasing or
steady); and interpretation which performs summarisation and assists building performance assessments. Using the temporal intervals, interpretation involves differentiating between events which are
environmentally insignificant and events which are environmentally significant. Inherent in this process are rules to represent these events. These rules support temporal reasoning and encapsulate
knowledge to differentiate between events.
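The run-length-constrained least-absolute-error problem quoted earlier on this page has a compact dynamic-programming solution. The sketch below is an illustrative reconstruction of that idea rather than the authors' code: it denoises a binary sequence under a minimum plateau run-length M with a Viterbi-style search over (symbol, current run length) states, and runs in time linear in the sequence length, in line with the Viterbi connection noted above.

def runlength_denoise(y, M, alphabet=(0, 1)):
    """Viterbi-style dynamic program: find x minimizing sum(|y[n] - x[n]|)
    subject to every constant run (plateau) of x having length at least M.
    Assumes len(y) >= M. States are (current symbol, run length capped at M)."""
    INF = float("inf")
    N = len(y)
    cost = {(s, r): INF for s in alphabet for r in range(1, M + 1)}
    back = {}
    for s in alphabet:
        cost[(s, 1)] = abs(y[0] - s)
    for n in range(1, N):
        new_cost = {(s, r): INF for s in alphabet for r in range(1, M + 1)}
        for (s, r), c in cost.items():
            if c == INF:
                continue
            # Continue the current run.
            r2, c2 = min(r + 1, M), c + abs(y[n] - s)
            if c2 < new_cost[(s, r2)]:
                new_cost[(s, r2)] = c2
                back[(n, s, r2)] = (s, r)
            # Switch symbols, legal only once the current run has reached length M.
            if r == M:
                for s2 in alphabet:
                    if s2 == s:
                        continue
                    c2 = c + abs(y[n] - s2)
                    if c2 < new_cost[(s2, 1)]:
                        new_cost[(s2, 1)] = c2
                        back[(n, s2, 1)] = (s, r)
        cost = new_cost
    # The final run must also be complete, i.e. have (capped) run length M.
    s = min(alphabet, key=lambda sym: cost[(sym, M)])
    r, xs = M, []
    for n in range(N - 1, 0, -1):
        xs.append(s)
        s, r = back[(n, s, r)]
    xs.append(s)
    return xs[::-1]

print(runlength_denoise([0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1], M=3))
# -> [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]  (two symbols changed, all runs >= 3)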
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1963336","timestamp":"2014-04-17T15:42:49Z","content_type":null,"content_length":"21358","record_id":"<urn:uuid:0db18de1-f48c-4ec1-ad27-8cea414dddda>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Method of moments
October 24th 2009, 09:51 PM #1
Mar 2009
X is a discrete RV with P(X=1) = p , P(X=2)=1-p; three independent observations of X are made x1=1, x2=2,x3=2.
a)find the method of moments estimate of p
In order to find the method of moments estimate of p, I know I need to find the 1st moment and the 2nd moment.
What I did is.... E(p) = 1/3 p +2/3 (1-p) ...am I on the right track? when I continue to find the 2nd moment E(p square)...it turns out weird... the final answer for this question is 1/3.
d) if p has a prior distribution that is uniform on [0,1], what is its posterior density?
Actually, I have difficulty in finding the posterior density. If anyone could do it as an exmaple for me. That would be perfect. Thanks!
First of all, p is a probability.
You want the expected value of X.
For MOM you set the expected value equal to the sample mean.
$(1)(p)+(2)(1-p)={1+2+2\over 3}$
yup, I get $\hat p={1\over 3}$
X is a discrete RV with P(X=1) = p , P(X=2)=1-p; three independent observations of X are made x1=1, x2=2,x3=2.
a)find the method of moments estimate of p
In order to find the method of moments estimate of p, I know I need to find the 1st moment and the 2nd moment.
What I did is.... E(p) = 1/3 p +2/3 (1-p) ...am I on the right track? when I continue to find the 2nd moment E(p square)...it turns out weird... the final answer for this question is 1/3.
d) if p has a prior distribution that is uniform on [0,1], what is its posterior density?
Actually, I have difficulty in finding the posterior density. If anyone could do it as an example for me, that would be perfect. Thanks!
$f(p | data) \propto f(p) \cdot f(data | p)$ where $f(p)$ is the prior distribution of $p$, $f(p | data)$ is the posterior distribution and $f(data | p)$ is the likelihood function.
Therefore $f(p | data) \propto 1 \cdot p(1 - p)^2$.
Therefore $f(p | data) = k p(1 - p)^2$ where $k$ is a normalising constant whose value is easily found to be 12.
Therefore $f(p | data) = 12 p(1 - p)^2$.
Note that $E(p) = \frac{2}{5}$.
But are we supposed to find E(x)?? I think we should find the E(p).... need more explanation. THANKS!
$E(X)= (1)(p)+(2)(1-p)$
$\bar X={1+2+2\over 3}$
The idea is to set these equal.
As I said yesterday, p is a number, NOT a random variable in the first part.
E(3)=3, same with p, E(p)=p.
YOU don't want E(p).
$f(p | data) \propto f(p) \cdot f(data | p)$ where $f(p)$ is the prior distribution of $p$, $f(p | data)$ is the posterior distribution and $f(data | p)$ is the likelihood function.
Therefore $f(p | data) \propto 1 \cdot p(1 - p)^2$.
Therefore $f(p | data) = k p(1 - p)^2$ where $k$ is a normalising constant whose value is easily found to be 12.
Therefore $f(p | data) = 12 p(1 - p)^2$.
Note that $E(p) = \frac{2}{5}$.
Thanks for your reply. But I don't understand how do you get the constant value 12. and how to get the E(P) = 2/5. since p is uniform distribution [0,1], the E(P) = a+b/2 ...?? is it?
Since f(p | data) is a pdf, $\int_{0}^{1} f(p | data) \, dp = k \int_0^1 p(1 - p)^2 \, dp = 1$. Therefore ....
$E(p) = \int_0^1 p f(p | data) \, dp = ....$
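As a quick sanity check of the numbers above, a short sympy sketch (variable names illustrative) reproduces both the method-of-moments estimate and the posterior quantities:

# Verify the MOM estimate and the posterior normalising constant / mean with sympy.
from sympy import symbols, integrate, solve, Rational

p = symbols('p', positive=True)

# Method of moments: set E(X) = (1)(p) + (2)(1-p) equal to the sample mean (1+2+2)/3.
sample_mean = Rational(1 + 2 + 2, 3)
p_hat = solve(1*p + 2*(1 - p) - sample_mean, p)[0]
print(p_hat)                              # 1/3

# Posterior with a uniform prior on [0,1]: f(p|data) is proportional to p(1-p)^2.
kernel = p*(1 - p)**2
k = 1/integrate(kernel, (p, 0, 1))        # normalising constant
print(k)                                  # 12
print(integrate(p*k*kernel, (p, 0, 1)))   # posterior mean E(p) = 2/5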
|
{"url":"http://mathhelpforum.com/advanced-statistics/110228-method-moments.html","timestamp":"2014-04-17T05:34:33Z","content_type":null,"content_length":"62432","record_id":"<urn:uuid:7ec00ff7-570a-4ce2-8557-bb211aa54d36>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Specialty sights: getting more from your lines of position
Jan 1, 2003
One of the drawbacks to sun-sight celestial navigation is that each sight yields only one LOP. Of course, a running fix is often possible, but that means a wait between sun sights of a couple of
hours or more to allow for the azimuth to change significantly. While any single line of position is better than no line of position, and the sun line running fix is certainly an admirable sort of
fix, there may be times during the day when the navigator wants more information from the single LOP.
With a little advance planning, a navigator can get a sun LOP that yields cross-track information (a "course" LOP) or speed data (a "speed" line). The key to this method is performing the
calculations that tell you the optimum time to take these specialty sights.
A good illustration of getting and using this sort of information is the noon sight; when sailing an east-west course, the noon sight indicates any set to the north or south, and when sailing more
north-south, the noon sight indicates progress along the track.
The simple method to use the rest of the day is to simply shoot the sun (or any other body, for that matter) when it is more or less dead ahead or astern to get a speed line. And, when a course or
cross-track error line is desired, shoot the sun when it is more or less abeam.
But the knowledgeable navigator with a bit of time on his hands can more exactly determine speed and cross-track measurements by using one of the key pieces of information derived from any sun sight,
the azimuth (true bearing to the sun). To use this technique, the navigator must enter the sight-reduction tables and the Nautical Almanac and work backwards. Let's take a look at how this process works.
To set the stage, let's assume that on June 18, 1997, we are on a sailboat crossing the Gulf Stream, heading to Bermuda in the biennial Marion-Bermuda Race. We want to use our sun sights not only for
determining an estimated position but also to detect the presence of any current, as well as get a measure of progress down the rhumb line of 135° true. We figure that, if we can calculate the time
when a sun sight will yield an azimuth of 135° true, we will be able to draw an LOP perpendicular to the rhumb line, giving us our "down track" progress. Another simple calculation for an azimuth
perpendicular to the rhumb line will yield an LOP parallel to the rhumb line, giving us an indication of set.
We'll need a couple of pieces of information to get started. First, a DR position: let's use 38° N, 68° W. Next, the declination of the sun on the day in question, in degrees only. Looking in the
Nautical Almanac for the date, we find the declination to be N 23°. Next, let's calculate the azimuth that will set up our desired LOPs. Recall how the sight reduction tables are set up: in the
morning, when LHA will be large, the Zn will be equal to the Z (Z is the number printed in the sight reduction table); in the afternoon, when LHA will be small, the Zn will be equal to 360 - Z (for a
quick reminder of these formulae, refer to the upper left corner of every page in the sight reduction table). So, a morning sight with a large LHA, taken at the right time, will yield our desired Zn
close to our course line, and we will plot the LOP across our course, giving us a measure of progress down the rhumb line. An afternoon sight with a small LHA, again taken at the right time, will
yield a Zn perpendicular to the rhumb line, and we will plot the LOP parallel to our course, giving us an indication of set and drift. A bit of math yields the entries we want to look for in the
tables:
Large LHA (morning): Zn = Z; look for Z of 135°, which gives Zn of 135°.
Small LHA (afternoon): Zn = 360° − Z; look for Z of 135°, which gives Zn of 225°.
Now we're ready to go to work. Open up the sight-reduction tables (Pub. No. 249, volume II) to the appropriate pages for a latitude of 38°, a declination of
23°, same name (since latitude and declination are both north). First, let's find an answer to our progress down the rhumb line, which means searching for a Z close to 135°. Searching down the column
for declination of 23°N, we find an entry for 135°, corresponding to an LHA of 345° (don't forget this is a morning sun line, so the LHA must be taken from the right hand column, where the values are
large). Jot down the Hc of 70° 15'. Now we have the basic information, but we still need to figure out what time of day will yield this answer. How to do this? All we have to do is work backwards to
find the GHA and then the time which corresponds to an LHA of 345°. We do this by first reversing the calculation for LHA, adding back our west longitude to arrive at GHA: LHA 345° plus the west longitude of 68° gives 413°, and subtracting 360° leaves a GHA of 053°. Now take the GHA of 053° and match it up with the correct GMT time: open the Nautical Almanac to June 18 and search for a GHA a bit less than 53 (remember to look in the sun column!). We find the nearest value is 44° 43', which corresponds to 15 hrs. Subtract 44° 43' from 53° 00' to find the remaining minutes and seconds of GHA: 08° 17'. Then we go to the increments and corrections table in the back of the almanac and scan the GHA values until we find 08° 17'; in this case, the correct value is found in the 33 minute box, across from the 08 seconds entry, giving a time of 15:33:08. We're
done. We now know that if we go up on deck at around 15:33 GMT and shoot a sun line, the plotted LOP will give us a pretty good idea of our progress down the rhumb line. (The sight should be reduced
and plotted as a normal sun sight.) For our second problem, how to measure set, or cross-track error, we will use much the same analysis. This time, when searching through the sight-reduction tables
we are still looking for a Z of 135°. Why? Because we want to draw an LOP closely parallel to the rhumb line, which requires a Zn of 224°, or 90° from the desired LOP of 135°. Since Zn = 360° - Z,
this means that we must find a Z of about 135° again. Going back to the sight-reduction tables, we can go to the same entry. But this time use the smaller LHA value from the left hand column
(remember, afternoon sight, small LHA): 15°. Doing the same reverse calculations as before, we end up with first the GHA and then the corresponding GMT time: an LHA of 015° plus the west longitude of 68° gives a GHA of 083°; subtracting the almanac value of 74° 42.8' (17 hrs) leaves 08° 17.2', which corresponds to 33 minutes 09 seconds. So, if we take a sun sight at about 17:33:09 GMT, reduce, and plot it, the resulting LOP will give a good indication of any set or cross-track error. This can be of use to a
navigator trying to gauge the strength or direction of a mid ocean current, or to a sailor just trying to stay on the rhumb line. Note that using either of these two specialty sun lines in a running
fix makes the fix even more valuable to the navigator. Of course, a more exact calculation could be done by refining the DR and interpolating for minutes of declination, but in most cases the above
rough precalculation is sufficient. The more industrious navigator may want to use these methods for other bodies as well. So, by using just a few carefully preplanned sun sights, the competent
navigator can maximize the utility of sun sight celestial navigation. And, of course, you'll also have a lot of fun in the bargain.
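The precomputation above is mechanical enough to script. A rough sketch of the back-calculation follows (illustrative only: the LHA and the whole-hour GHA below stand in for values you would still look up in Pub. No. 249 and the Nautical Almanac):

# Illustrative back-calculation: LHA -> GHA -> approximate GMT for a sun sight.
def lha_to_gha(lha_deg, west_longitude_deg):
    """Add back the (west) longitude and wrap into 0-360 degrees."""
    return (lha_deg + west_longitude_deg) % 360

def minutes_past_hour(gha_deg, gha_at_hour_deg, sun_rate_deg_per_min=15.0/60):
    """Remaining GHA divided by the sun's rate (~15 deg per hour = 0.25 deg per minute)."""
    return (gha_deg - gha_at_hour_deg) / sun_rate_deg_per_min

gha = lha_to_gha(345, 68)                   # morning example: LHA 345, DR longitude 68 W -> GHA 53
mins = minutes_past_hour(gha, 44 + 43/60)   # almanac gives 44 deg 43' at 15:00 GMT (example value)
print(f"GHA = {gha:.1f} deg, shoot at about 15:{mins:.0f} GMT")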
|
{"url":"http://www.oceannavigator.com/January-February-2003/Specialty-sights-getting-more-from-your-lines-of-position/","timestamp":"2014-04-20T15:51:14Z","content_type":null,"content_length":"22893","record_id":"<urn:uuid:06cabfcd-dc7f-4e8a-a7ee-9488fc070d73>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Given The Following SFG. With Input R(s) And Output ... | Chegg.com
Image text transcribed for accessibility: Given the following SFG, with input R(s) and output Y(s) and the outputs of the integrators indicated as x1, x2, x3: Find the transfer function Y(s)/R(s). Now create three new directed branches f1, f2, and f3 emanating from the state variables x1, x2, x3 respectively and terminating on node e. Find the new transfer function Y(s)/R(s) with the three new branches. What values should be chosen for f1, f2, f3 so that the denominator of Y(s)/R(s) becomes s^3?
Electrical Engineering
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/given-following-sfg-input-r-s-output-y-s-output-integrator-indicated-x1-x2-x2-find-transfe-q1119466","timestamp":"2014-04-16T17:56:12Z","content_type":null,"content_length":"20877","record_id":"<urn:uuid:f70448b9-3c32-4b85-8a4d-88a6e24c2f53>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solve the system of equations: 2x-4y+3z=-8, x+3y-2z=9, 3x+2y+z=13
it has the a b c d choice A. -1, -4, -2 B. 1 4 2 C. 4 2 1 D. -8 9 13 i think it is c i just need to make sure
Answer is B: 1, 4, 2
ok thank you can you help me with another one that is like that hold on ill message you the link
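As a quick check of choice B, a short NumPy sketch solves the system directly:

# Check that (x, y, z) = (1, 4, 2) solves the system.
import numpy as np

A = np.array([[2, -4,  3],
              [1,  3, -2],
              [3,  2,  1]], dtype=float)
b = np.array([-8, 9, 13], dtype=float)

print(np.linalg.solve(A, b))   # -> [1. 4. 2.], i.e. choice B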
|
{"url":"http://openstudy.com/updates/50bf9d26e4b0689d52fdca12","timestamp":"2014-04-16T10:29:11Z","content_type":null,"content_length":"32579","record_id":"<urn:uuid:bd7254e6-b1fd-4cba-bf39-c0a9380efdca>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Trying to move a Camera [Archive] - OpenGL Discussion and Help Forums
12-26-2002, 09:11 AM
I am trying to move a camera using sines and cosines, but I don't know where to start. I am using gluLookAt(), but how would I program the rotation to use my defined variable that contains my sines and cosines?
- VC6-OGL
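One common pattern is to keep an angle variable and place the eye on a circle around the point being looked at, then hand the result to gluLookAt. The sketch below is plain Python and purely illustrative (the function and variable names are made up); the same trigonometry applies whatever GL binding you use.

# Sketch: orbit a camera around a target using an angle, then feed the result
# to a lookAt-style call. Increase 'angle' each frame to rotate the camera.
import math

def orbit_eye(target, radius, angle_rad, height=0.0):
    """Eye position on a circle of the given radius around 'target' (x, y, z)."""
    tx, ty, tz = target
    ex = tx + radius * math.cos(angle_rad)
    ez = tz + radius * math.sin(angle_rad)
    return (ex, ty + height, ez)

target = (0.0, 0.0, 0.0)
angle = math.radians(30)
eye = orbit_eye(target, radius=5.0, angle_rad=angle, height=2.0)
up = (0.0, 1.0, 0.0)
# gluLookAt(*eye, *target, *up)   # the nine arguments a lookAt call expects
print(eye)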
|
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-136203.html","timestamp":"2014-04-19T17:12:13Z","content_type":null,"content_length":"5207","record_id":"<urn:uuid:84901478-5cc0-4cc4-a96c-ab374561f790>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Laplace Transforms to solve Initial-Value problem
I, for one, am not seeing why there has to be a $\dfrac{B}{(s+2)^{2}}$ term when there isn't one in the LT. Is there one in the LT? If so, why didn't it cancel? Here's what I have from the original
DE: $y'' + 4y' + 4y = e^{-t}(\sin(t) + \cos(t)), \quad y(0) = 0, \; y'(0) = 0.$ LT gives $s^{2}Y+4sY+4Y=\dfrac{1}{(s+1)^{2}+1}+\dfrac{s+1}{(s+1)^{2}+1}=\dfrac{s+2}{(s+1)^{2}+1}.$ This implies $Y(s+2)^{2}=\dfrac{s+2}{(s+1)^{2}+1},$ or $Y(s+2)=\dfrac{1}{(s+1)^{2}+1},$ and hence $Y=\dfrac{1}{(s+2)((s+1)^{2}+1)}.$ There's no $1/(s+2)^{2}$ in there. Did I do something wrong?
Well, you know you're a mathematician if you can do vector calculus but not long division. (Wink)
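A quick symbolic check of the partial fractions (a sympy sketch) shows a first-order $(s+2)$ term but no $1/(s+2)^{2}$ term, consistent with the working above:

# Partial fractions of Y(s) = 1 / ((s+2)((s+1)^2 + 1)).
from sympy import symbols, apart

s = symbols('s')
Y = 1 / ((s + 2) * ((s + 1)**2 + 1))
print(apart(Y, s))   # only a first-order (s+2) term appears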
|
{"url":"http://mathhelpforum.com/differential-equations/165735-laplace-transforms-solve-initial-value-problem-2-print.html","timestamp":"2014-04-17T19:48:48Z","content_type":null,"content_length":"14803","record_id":"<urn:uuid:c14cb535-6451-4bcf-972c-11710e38e3c2>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
|
← Lecture 4 | Lecture 6 →
By Giuseppe Tenti and annotated by Douglas Wilhelm Harder.
We will now look at 2nd-order differential equations. A general 2nd-order differential equation is of the form
however, this cannot be solved in general. We will begin by looking at specific cases which have a much simpler form.
Linear Homogeneous 2nd-order Ordinary Differential Equations with Constant Coefficients
A linear homogeneous 2nd-order differential equation with constant coefficients is a differential equation of the form:
Equation 5.1.
This ODE is fundamental as a description of oscillation and vibration in all areas of science including electrical, mechanical, but also biological systems. To solve this, we will first look at the
statements made by Leonhard Euler in 1739 in a letter to Johann Bernoulli (paraphrased):
and then we choose
Euler then noticed that this requires that
Under this assumption, we have that
and therefore
and therefore, as the exponential function
This is called the characteristic equation of the differential equation and is a quadratic equation with roots
There are three cases, depending on the sign of the discriminant
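For reference, assuming the standard constant-coefficient form $a y'' + b y' + c y = 0$ (with $a \neq 0$) and Euler's trial solution $y = e^{\lambda t}$, the characteristic equation and its roots are

$a\lambda^{2} + b\lambda + c = 0, \qquad \lambda_{1,2} = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a},$

so the three cases below correspond to the discriminant $b^{2} - 4ac$ being positive, zero, or negative.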
Case 1:
There are two roots
Example 1
Find the general solution of the differential equation
First, the characteristic equation is
Figure 1 shows a number of solutions to this differential equation using various coefficients of
Figure 1. Numerous solutions to the differential equation of Example 1.
Looking at these, we note that there appear to be two classes of solutions: there is either zero or one local extreme points and all solutions tend to zero as
Figure 2. Three representative examples of solutions to Example 1.
Case 2:
In this case, the two roots are real and coincident:
The trick for finding the second solution was first proposed by d'Alembert in 1748. He suggested that you have two solutions
Now, the two linearly independent solutions are
But any linear combination of these two solutions must also be a solution and in particular, let us define
This must be a solution for any finite value of the parameter; taking the limit using l'Hôpital's Rule:
The simplest form of the two linearly independent solutions is therefore
Example 2
Find the general solution of the differential equation
First, the characteristic equation is
Figure 3 shows a number of solutions to this differential equation using various coefficients of
Figure 3. Numerous solutions to the differential equation of Example 2.
As with the previous case, there are either zero or one local extreme points and all solutions tend to zero as
Case 3:
In this case, the roots of the characteristic equation are complex conjugates of the form
however, these are complex-valued functions, yet any linear combination of these solutions is also a solution. This freedom allows us to find two real-valued linearly independent solutions:
• Sum the solutions:
• Take the difference:
The constants
As an alternate formulation, we may write this using two constants in the amplitude-phase form:
Example 3
Find the general solution of the differential equation
First, the characteristic equation is
The two linearly independent solutions are, therefore,
or, an alternate formulation is
Figure 4 shows a number of solutions to this differential equation using various coefficients of
Figure 4. Numerous solutions to the differential equation of Example 3.
Figure 5. The two linearly independent solutions to the differential equation of Example 3.
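To see the three cases side by side numerically, a short sympy sketch can be used; the three equations below are illustrative stand-ins with positive, zero and negative discriminant, and are not necessarily the same equations as in the Examples above.

# Solve one representative equation for each case of the discriminant b^2 - 4ac.
from sympy import Function, dsolve, Eq, symbols

t = symbols('t')
y = Function('y')

examples = {
    "two distinct real roots  (b^2 - 4ac > 0)": Eq(y(t).diff(t, 2) + 3*y(t).diff(t) + 2*y(t), 0),
    "repeated real root       (b^2 - 4ac = 0)": Eq(y(t).diff(t, 2) + 4*y(t).diff(t) + 4*y(t), 0),
    "complex conjugate roots  (b^2 - 4ac < 0)": Eq(y(t).diff(t, 2) + 2*y(t).diff(t) + 5*y(t), 0),
}

for label, ode in examples.items():
    print(label, "->", dsolve(ode, y(t)))
# The printed general solutions show the e^{rt}, t e^{rt}, and e^{at}(cos, sin) forms.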
Examples with Animations
The reader may wonder about the transition from real to complex roots of this system, and therefore, the three animations shown in Figures 6-8 will show how the solutions to a differential equation
change as the parameters change. In all cases, the differential equation will start with two different real roots, will make the transition to two identical real roots with the differential equation
and then continue with two complex roots. In each case, the reader will note that there is a smooth transition from a solution with no extreme points past
Figure 6. Solutions to the differential equation
Figure 7. Solutions to the differential equation
Figure 8. Solutions to the differential equation
Another possibility is the transition from a solution with no roots past
• The solution will decay towards the zero solution,
• The solution may pass the zero solution once and then decay towards it, or
• The solution will oscillate infinitely many times around the zero solution.
This natural and smooth transition from one type of solution to another should be expected, for example, calculating the integral
Figure 9. The integral
|
{"url":"https://ece.uwaterloo.ca/~math211/Lectures/05/","timestamp":"2014-04-16T19:22:21Z","content_type":null,"content_length":"22363","record_id":"<urn:uuid:8dcbc7b1-5605-488d-b5c9-5d3c3775d113>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
|
x : scalar or array-like of length N
Point or points at which to evaluate the derivatives
der : None or integer
How many derivatives to extract; None for all potentially nonzero derivatives (that is a number equal to the number of points). This number includes the function value as 0th derivative.
Returns
-------
d : array
If the interpolator's values are R-dimensional then the returned array will be der by N by R. If x is a scalar, the middle dimension will be dropped; if R is 1 then the last dimension will be dropped.
{"url":"http://docs.scipy.org/doc/scipy-0.8.x/reference/generated/scipy.interpolate.KroghInterpolator.derivatives.html","timestamp":"2014-04-18T10:36:20Z","content_type":null,"content_length":"6007","record_id":"<urn:uuid:db8c0d9b-7071-499f-8832-c01c846e2b27>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Finding the (x, y, z) of a point on a sphere
thx for answering - i had long given up
Yes the sphere is centred at the origin and i know the radius.
The other thread didn't appear to apply - but i don't know enough to be sure.
I have solved the problem using coordinate geometry but wondered if there was a more elegant solution using vectors...
|
{"url":"http://www.physicsforums.com/showpost.php?p=3724386&postcount=3","timestamp":"2014-04-19T12:31:05Z","content_type":null,"content_length":"7380","record_id":"<urn:uuid:c12eae5e-cb94-4a21-bd00-0f9e330b56ae>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Extra Stuff: Gambling Ramblings
I picked up a new book on my trip to Reno: Extra Stuff: Gambling Ramblings by Peter Griffin. Griffin is the author of one of my favorite books in my collection of books on gambling topics: The Theory
of Blackjack. This book includes all sorts of interesting tidbits of gambling theory.
The book had a particularly interesting and surprising discussion on the Kelly Criterion: a method of wagering that ensures the quickest maximization of bankroll when you have positive expectation in
a game. Basically, if you have a probability p > 0.5, you maximize your bankroll when you wager a fraction of your bankroll equal to 2 * p – 1.
Griffin asked an interesting question: what is the probability at any step that you actually have reached the highest bankroll that you’ve ever seen in that step. When the bets are unit sized, you
can derive rather simply (and prove via simulation) that the odds are 2 * p – 1 (interestingly, the same fraction used by the Kelly Criterion) that you have reached your peak earnings. But if you try
to graph the resulting curve when you use proportional Kelly style bets, you get a function which is not only fairly complicated, but is in fact discontinuous. This seemed very unintuitive to me, so
I wrote a simple program to duplicate the result and plotted it with gnuplot. For each probability p, I simulated one million wagers, and counted the number of times that I reached a new maximum.
Check out the graph:
The discontinuities are real, and the discussion is quite illuminating.
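Roughly the kind of simulation involved looks like the sketch below (a reconstruction of the idea, not the original program): for a win probability p, bet the Kelly fraction f = 2p − 1 of the current bankroll each round, work in logs to avoid overflow, and count how often the bankroll sets a new all-time high.

# Estimate the probability of being at an all-time bankroll high under Kelly betting.
import math, random

def new_high_fraction(p, trials=200_000, seed=1):
    rng = random.Random(seed)
    f = 2 * p - 1                              # Kelly fraction for an even-money bet
    log_up, log_down = math.log(1 + f), math.log(1 - f)
    log_bankroll, log_peak, highs = 0.0, 0.0, 0
    for _ in range(trials):
        log_bankroll += log_up if rng.random() < p else log_down
        if log_bankroll > log_peak:            # a new all-time high
            log_peak = log_bankroll
            highs += 1
    return highs / trials

for p in (0.55, 0.6, 0.7):
    print(p, new_high_fraction(p))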
Addendum: The discontinuities occur because of the following. Imagine that you are at an all time high, and then suffer a loss, then a win. When you lose, your bankroll is multiplied by 1-f, and when
you win it is multiplied by 1+f. Taken together, you get 1 – f^2, which is always less than one, so you know that after all possible sequences of length two that ends in a win (you need to minimally
end with a win to reach a peak) you can’t reach a peak.
How about length three? Well, let’s try a loss followed by two wins. You have (1-f) (1+f)^2, which you want to be one (or higher). Solving this, we get 1 + f – f^2 – f^3 = 1, which means f – f^2-f^3
= 0, or 1 – f – f^2 = 0. Solving using the quadratic formula, we find that f yields a value of one precisely at (sqrt (5) – 1)/2, a number commonly referred to as the golden mean or phi. Sure enough,
our graph displays a discontinuity there. At just below this value, a loss followed by two wins is insufficient to generate a new high, but at just over this value, it is. Since the probability of
these particular sequences varies only infinitesmally, we see a strong discontunity in the chances of reaching a new high when f varies in this neighborhood.
Other possible sequences (two losses followed by three wins, for example) also generate similar but smaller discontinuities.
Very interesting.
At least to me.
But I’m a geek.
Addendum^2: For fun, try reading Kelly’s Original Paper and figure out what it says about gambling.
Comment from Tom Duff
Time 7/22/2005 at 10:20 am
Looking at the Kelly paper, I noticed that it was published in the Bell Systems Technical Journal. I don’t remember ever hearing about him when I was at The Labs, but a little googling found some
other John L Kelly, Jr. references from which I can piece together some history. He looks to have been a really interesting person. He died around 1970 and apparently worked at Bell Labs until the
end — there’s a posthumous echo-canceller patent assigned to AT&T. (Kelly’s co-inventor was Ben Logan, who in addition to being the world’s leading expert on high-pass functions, was, as Tex Logan, a
world-class fiddler, having played in Bill Monroe’s Bluegrass Boys in the 1950s.) He wrote a bunch of signal processing and computer science papers, including one (about a programming language for
signal processing called BLODI [for BLOck DIagrams]) that I have a copy of in my office. He supervised Elwyn Berklekamp’s 1963 Masters thesis about a program for optimally playing bridge hands
double-dummy (i.e. with all 52 cards visible.) William Poundstone’s most recent book is about the history of mathematicians going after the stock market — his point of departure is Kelly and Claude
Shannon’s work in the 1950′s on gambling & signal processing. These are all guys (Berlekamp, Shannon, Logan, Poundstone) I know about for other reasons, and Kelly is connected to all of them. I
suspect there’s an interesting mathematician biography to be written here…
Comment from Andrew Grumet
Time 7/23/2005 at 5:31 am
If you haven’t already, Ben Mezrich’s book Bringing Down the House is fascinating tale of what happened when a group of mathematically sophisticated kids applied techniques like these, with some
intriguing modifications and perhaps not so surprising results, at real casinos. I was at MIT during this time, and sort of vaguely aware of it, perhaps one degree of Kevin Bacon from the people
involved, but thought they were just hyping themselves. The book changed my take on it.
Comment from Kevin
Time 7/23/2005 at 11:51 am
p > 0.5… Where do you get a game like that?.
Assuming you could find a game with a p of 0.5, like flipping a coin.
2 * .5 -1 = ZERO
Zero, a good wager.
Editor’s note: Well, while casino games aren’t particularly good at giving you such opportunities, other opportunities such as sports betting, horse racing, or (with a lot of complication) the stock
market can and often do have positive expectation. As was pointed out to me by Tom Duff, Ed Thorp wrote the original blackjack card counting book Beat the Dealer, and then went off and made
gajillions in the stock market, essentially inventing hedge funds. But even if this didn’t have any practical application, it’s still interesting stuff.
Comment from Eric
Time 8/2/2005 at 7:36 am
Many stock trades (esp in hedge funds) will look at the collective time series in a way that is essentially the same as the analysis of Brownian motion and showing the probability of the next move
being an upward or downward move. Taking that probability and then applying Kelly’s (2P-1)B where B is your bankroll then gives maximum gains.
The problem is that since so many large funds use that (combined with the theory of the Gambler’s Ruin), it loses its effectiveness and you therefore need to add in extra layers or tweaks in your
analysis. That is where the big money is made these days – tweaks on that basic concept – if you are doing time series analysis.
Two interesting things to note with that:
1) if you had just a P value of .60 and could sustain that for a few years (I haven’t run the equation out in awhile, so I don’t recall the exact time period, but I believe it is either 3 or 7
years), then you would have literally all of the money in the world.
So needless to say, it is rare in the stock market to see values as high as .60. In the short term you do, but then they are corrected out again as people act on that information and exploit it.
Normally though you see values in the .51-.56 range.
2) this type of analysis is technically a fractal and you can apply the concept to tick data, or every hour, every day, month, year, however you like (although once you look at a period longer than
every day, you don’t really have enough data for it to work).
It also applies to many other areas and is generally tied in to any system with human involvement – many argue largely based on the way they move in crowds (look into the El Farol Problem).
I am a programmer for a hedge fund admin company and am trying to start my own hedge fund, so this area is particularly of interest to me.
My code shows excellent profits if you have $50K, and then as with any large system, will collapse (in terms of gains) as you reach larger bankrolls (around $100-500M). Part of that is due to a
Heisenberg sort of idea where the more money you are throwing at it, the more you rstar
American taxes negate a lot of the profit at the lower level ($50K), as do trading fees – hence why it is an attractive method for hedge funds (offshore and therefore the tax issue is not at play).
Although to be honest, I’d rather in the end just be a pro poker player
Love the site!
Comment from Eric
Time 8/2/2005 at 7:39 am
Oops – I trailed off there on the Heisenberg – was essentially going to note that the more money you throw at the trades, the more you start directly having an effect on the price and therefore
changing the way your analysis outcome works.
With smaller dollar amounts you can move in and out with less effect and therefore retain success.
Comment from bookie buster scam
Time 4/8/2010 at 2:30 pm
So happy to digest such a insightful post that does not depend on base posturing to get the point across. Thanks for an entertaining read.
|
{"url":"http://brainwagon.org/2005/07/21/extra-stuff-gambling-ramblings/","timestamp":"2014-04-19T06:54:09Z","content_type":null,"content_length":"53094","record_id":"<urn:uuid:586f1b48-04c0-431a-a765-431b7a1795c4>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Author: James Lucietti
Have you ever wondered what shape a football is? No, it is not a sphere - it is far closer to something called a truncated icosahedron, also known as a "buckyball". It consists of 12 black pentagons and 20 white hexagons and is about the most effective way of creating something nearly spherical out of flat panels. Curious sporting-related mathematical facts like this can be found throughout Eastaway and Haigh's book "How to take a penalty, the hidden mathematics of sport".
Read more...
Most people think that mathematics consists of either just arithmetic, or a collection of very abstract and technical topics which the layperson has no chance of grasping. But this really is not true: of course many areas are too technical for the non-mathematician, but there are also many beautiful and non-trivial facts which can be expressed in ordinary language for everyone to appreciate.
Read more...
Early in our mathematical careers, we are introduced to prime numbers. These special integers, which
possess no divisors other than themselves and 1, are the building blocks for all the integers. Thus an
understanding of the properties of primes, including where to find them, is an essential part of number
theory, and any serious discussion of prime numbers will inevitably lead to what is arguably
mathematics' greatest unsolved problem: The Riemann Hypothesis.
Read more...
|
{"url":"http://plus.maths.org/content/list-by-author/James%20Lucietti","timestamp":"2014-04-20T00:57:14Z","content_type":null,"content_length":"21973","record_id":"<urn:uuid:048bf9da-05a8-4249-b1ff-bd385db0b0d8>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
|
History of Mathematics Course Proposal
Approved by University Studies Sub-Committee. A2C2 action pending.
University Studies Course Approval Proposal
Unity and Diversity Multicultural Perspectives
The Department of Mathematics and Statistics proposes the following course for inclusion in University Studies, Unity and Diversity, Multicultural Perspectives at Winona State University. This was
approved by the full department on Thursday, January 4, 2001.
Course: History of Mathematics (MATH 410), 3 s.h.
Catalog Description: General view of the historical development of the elementary branches of mathematics. This is a University Studies course satisfying requirements in Multicultural Perspectives.
Prerequisites: MATH 160 and MATH 210. Offered fall semester.
This is an existing course, previously approved by A2C2.
Department Contact Person for this course:
Jeffrey R. Anderson, Mathematics and Statistics Department Chair
Email janderson@winona.edu
General Discussion of University Studies Multicultural Perspectives in relation to MATH 410:
University Studies: Multicultural Perspectives
The purpose of the Multicultural Perspectives requirement in University Studies is to develop students' understanding of diversity (gender, ethnicity, race, etc.) within and between societies.
Courses in this area will help students employ a multicultural perspective for examining historical events; contemporary social, economic, and political issues; and artistic, literary, and
philosophical expressions. Courses that fulfill the Multicultural Perspectives requirement must address at least three of the following outcomes. These courses must include requirements and learning
activities that promote students' abilities to...
a. demonstrate knowledge of diverse patterns and similarities of thought, values, and beliefs as manifest in different cultures;
In MATH 410, students are required to repeat mathematical calculations from eras of history, utilizing the same sorts of notations and methodologies as originally used. The reason for this is partly
to give students a better sense of the true development of mathematics, through fits and starts (as opposed to a neatly written out theorem from start to finish). Often these calculations are tedious
and use ancient or arcane notations and techniques. For example, the Greek, Archimedes, determined areas via a method of exhaustion. The problem is simple for a first-year calculus student to
determine by integration, but the method of exhaustion is truly exhausting. When required to do such work, which students prefer to relegate to mathematics having nothing to do with the real world,
the reason and motivation for this work naturally is posed. Why is the method of exhaustion required when integration will do the job faster? Why did the Egyptians use such strange symbols for
calculation when the regular numbers are much better? What is the purpose for using base 2 or base 16 or any other base? Why must all the calculations done by Viete be labored over when roots of a
polynomial may be approximated by a calculator? Coming from the perspective of MATH 410 students, these questions are well-motivated, and these students generally become very interested in the fact
that the answers lie "outside of mathematics" and in the particular culture and societal influences of the time studied. By the end of the semester, students develop the ability to see the many ways
a culture provided motivation for the mathematics of the time. It is interesting to see the many different ways mathematicians through the millennia approached the same problems; it is, however,
haunting to see that, despite all the cultural differences of the Egyptians, Greeks, Chinese, Mayas, Incas, Arabians, Russians, Europeans, and Americans, the majority of mathematics is still very much the same.
b. understand the extent to which cultural differences influence the interpretation and expression of events, ideas, and experiences;
In some cultures, such as the Romans, mathematics was largely non-expressive and confined to applications of immediate benefit to society. In others, such as the Greeks, mathematics did have an
aspect of application, but was more focussed upon the development of an absolute truth. Societies allowing more open argumentation cause the development of more of the pure forms of mathematics,
while in closed societies, the old Soviet Union, for example, mathematics centered around the needs of the state and was highly monitored. In many ways, it has been war providing a catalyst for the
exchange of ideas and the proliferation of new ways of thinking. An example of this is the Crusades causing the spread of Arabic mathematics (the names algebra and algorithm are taken from an Arab author)
to Europe and ultimately giving rise to the Renaissance. In MATH 410, students investigate these differences through comparison and also by examining the effect of the influence of mathematics of one
culture on another upon its introduction. Through individual projects, students are able to further study the effects of sub-cultures, within the European continent, for example.
c. understand the extent to which cultural differences influence the interactions between individuals and/or groups;
d. examine different cultures through their various expressions; and/or
The mathematics, its notation, and calculational methods provide a useful and interesting comparison of cultures. Students in MATH 410 study the mathematical inductionism of the Egyptians and the
question of why the mathematics of this culture was so content with using examples as proof. Greek society is, of course, famous for giving rise to deductive reasoning and the belief in an absolute
truth. Middle Eastern societies, such as the Arabs, were generally interested in creating methods for solving entire classes of problems and essentially gave rise to what we know of as algebra. It is
also useful to examine the opposite point-of-view: the effect of mathematics on society. During its time of greatest activity, projective geometry developed out the Italian desire to draw and paint
in correct 3-D perspective. In turn, the existence of projective geometry had much effect on some of histories most famous artwork. In the middle 1600's, the lifestyle of most people was dismal at
best. Hunger, disease, and a short life were the most common elements in Europe of the time. Beginning in the 1700's, mathematicians, physicists, and astronomers began exploring the use of a newly
developed mathematical tool: the calculus. Since that time, calculus as applied to the motion of the planets, the flow of fluids, and the movement of electricity are but a few examples that have had
profound impact on society at large. Students in History of Mathematics undergo the investigation of such effects and come to better understand the interconnected nature of mathematics, sciences in
general, and societal and cultural influences.
e. possess the skills necessary for interaction with someone from a different culture or cultural group.
|
{"url":"http://www.winona.edu/ifo/courseproposals/Math_and_Stats/ay2000-2001.htm/Math410.htm","timestamp":"2014-04-17T09:37:01Z","content_type":null,"content_length":"9184","record_id":"<urn:uuid:308daac8-5ff6-4c03-bf78-fa3eb28aff27>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Density of states of zigzag graphene nanoribbons?
Could anyone tell me how to calculate the DOS of a zigzag GNR? It seems the E(k) relation of zGNRs cannot be expressed in analytical form, and hence neither can the DOS. Is there a way to calculate and
plot it numerically?
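One numerical route is to build a nearest-neighbour tight-binding H(k) for the ribbon's 1D unit cell, diagonalise it on a dense k-grid, and histogram the eigenvalues. The sketch below uses one common reduction in which a zigzag ribbon of N zigzag chains is written as a 2N-site transverse chain with alternating couplings 2t·cos(ka/2) and t; that reduction (and the lattice-constant convention) is an assumption here, so check it against your own reference before trusting the numbers.

# Sketch: numerical DOS of a zigzag GNR from a tight-binding H(k), by histogramming
# band energies sampled over the 1D Brillouin zone.
import numpy as np

def H_k(k, N, t=1.0):
    """2N x 2N nearest-neighbour Hamiltonian at Bloch momentum k (lattice constant = 1)."""
    dim = 2 * N
    H = np.zeros((dim, dim))
    g = 2 * t * np.cos(k / 2)               # coupling carrying the Bloch phase
    for i in range(dim - 1):
        hop = g if i % 2 == 0 else t        # alternate g, t, g, t, ... along the chain
        H[i, i + 1] = hop
        H[i + 1, i] = hop
    return H

def dos(N=10, nk=2000, nbins=400):
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    energies = np.concatenate([np.linalg.eigvalsh(H_k(k, N)) for k in ks])
    hist, edges = np.histogram(energies, bins=nbins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist

E, g_of_E = dos()
# Plotting E against g_of_E should show a sharp peak at E = 0 from the flat
# edge-state bands characteristic of zigzag ribbons.
print(E[np.argmax(g_of_E)])   # expected to sit at (or very near) zero energy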
|
{"url":"http://www.physicsforums.com/showthread.php?p=3893597","timestamp":"2014-04-18T03:12:36Z","content_type":null,"content_length":"19941","record_id":"<urn:uuid:5c6187fa-5137-4404-988f-1f002b22044d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Correspondence by electronic mail
- Information Processing Letters, 1998
"... This note answers questions on whether three identities known to hold for orthomodular lattices are true also for ortholattices. One identity is shown to fail by MACE, a program that searches
for counterexamples, an the other two are proved to hold by EQP, an equational theorem prover. The problems, ..."
Cited by 22 (2 self)
This note answers questions on whether three identities known to hold for orthomodular lattices are true also for ortholattices. One identity is shown to fail by MACE, a program that searches for counterexamples, and the other two are proved to hold by EQP, an equational theorem prover. The problems, from work in quantum logic, were given to us by Norman Megill. Keywords: Automatic theorem proving, ortholattice, quantum logic, theory of computation. 1 Introduction An ortholattice is an algebra with a binary operation ∨ (join) and a unary operation ′ (complement) satisfying the following (independent) set of identities: x ∧ y = (x′ ∨ y′)′ (definition of meet); x ∨ y = y ∨ x; (x ∨ y) ∨ z = x ∨ (y ∨ z); x ∨ (x ∧ y) = x; x′′ = x; x ∨ (y ∨ y′) = y ∨ y′. Supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Computational and Technology Research, U.S. Department of Energy, under Contract W-31-109-Eng-38. From these identities one can...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2970819","timestamp":"2014-04-17T14:14:15Z","content_type":null,"content_length":"12497","record_id":"<urn:uuid:64620bec-562d-4f08-8a79-67c5189f269a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
|
LGSOLV v.1.0
The main problem for programmers working with large matrices is how to keep them in the computer's memory. When a matrix is kept in disk storage, it involves a lot of disk exchanges, because most matrix algorithms use it "row-by-row" and "column-by-column" at the same time. Here we're proposing an original modification of the known Gauss method for pivot-element elimination that uses an internal row-transposition vector and works with the matrix strictly "col-by-col".
Our algorithm factorizes the source matrix so that the resulting factor matrix may be used several times for quick solution of linear systems with any right-hand sides.
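The general technique, with pivoting recorded in a row-transposition vector so the matrix itself is only touched column by column, can be sketched roughly as follows (an illustration of the idea in Python, not the actual FORTRAN routines):

# Sketch: Gaussian elimination (LU factorisation) with partial pivoting recorded in a
# row-permutation vector, processing the matrix strictly column by column.
import numpy as np

def lu_factor_by_columns(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    perm = np.arange(n)                          # row-transposition vector
    for j in range(n):                           # walk the columns left to right
        p = j + np.argmax(np.abs(A[perm[j:], j]))
        perm[[j, p]] = perm[[p, j]]              # swap index entries, not the stored rows
        pivot = A[perm[j], j]
        for i in range(j + 1, n):
            m = A[perm[i], j] / pivot            # multiplier, stored in place of the zero
            A[perm[i], j] = m
            A[perm[i], j + 1:] -= m * A[perm[j], j + 1:]
    return A, perm

def solve(Af, perm, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                           # forward substitution
        y[i] = b[perm[i]] - Af[perm[i], :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):                 # back substitution
        x[i] = (y[i] - Af[perm[i], i + 1:] @ x[i + 1:]) / Af[perm[i], i]
    return x

A = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]])
b = np.array([1., 2., 5.])
Af, perm = lu_factor_by_columns(A)
print(solve(Af, perm, b), np.linalg.solve(A, b))   # factor once, reuse for any right-hand side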
LGSOLV consists of several functions written in FORTRAN, described in the README.DOC file. The file DEMOTEST.EXE is given as an example of using LGSOLV.
This is shareware.
(modified from lgsolv1.inf)
|
{"url":"http://archives.math.utk.edu/software/msdos/numerical.analysis/lgsolv/.html","timestamp":"2014-04-19T04:50:45Z","content_type":null,"content_length":"2786","record_id":"<urn:uuid:6cf15592-835b-4799-9cc5-d68ab379bd5b>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kenilworth, IL Algebra Tutor
Find a Kenilworth, IL Algebra Tutor
...I also provide Excel tutoring to working professionals and Small Businesses that seek to learn Excel for normal Business use to the creation of advanced Excel workbooks that are aimed at
automation of repetitive tasks and thereby resulting in sharp cycle-time reduction. I have created an Excel w...
18 Subjects: including algebra 2, algebra 1, geometry, ASVAB
...I can also help students who are preparing for the math portion of the SAT or ACT. When teaching lessons, I put the material into a context that the student can understand. My goal is to help
all of my students obtain a solid conceptual understanding of the subject they are studying, which provides a foundation to build upon.
12 Subjects: including algebra 1, algebra 2, calculus, geometry
...Everyday millions of children feel defeated by math. Math anxiety negatively impacts their sleep, their school experience, their social integration, and their self-esteem. They may consider
themselves "dumb" or "less than" others.
10 Subjects: including algebra 1, algebra 2, geometry, SAT math
...I have previously tutored biology through a Northwestern University program, where I led a class each week for my peers this past year. I graduated from Northwestern University with a
Bachelor's degree in biology. I graduated from Northwestern University with a Bachelor's in biology with a specific concentration in physiology.
7 Subjects: including algebra 1, geometry, biology, ACT Math
...Slow motion is breathtaking in the movies, and is magnificent in math. We'll get you comfortable with the pieces, with putting them all together, and with doing it all in an impressive manner.
As students approach algebra, they've already used mystery numbers all over the place. (Remember 2nd grade and 3 + [] = 5?
14 Subjects: including algebra 1, algebra 2, geometry, ASVAB
Related Kenilworth, IL Tutors
Kenilworth, IL Accounting Tutors
Kenilworth, IL ACT Tutors
Kenilworth, IL Algebra Tutors
Kenilworth, IL Algebra 2 Tutors
Kenilworth, IL Calculus Tutors
Kenilworth, IL Geometry Tutors
Kenilworth, IL Math Tutors
Kenilworth, IL Prealgebra Tutors
Kenilworth, IL Precalculus Tutors
Kenilworth, IL SAT Tutors
Kenilworth, IL SAT Math Tutors
Kenilworth, IL Science Tutors
Kenilworth, IL Statistics Tutors
Kenilworth, IL Trigonometry Tutors
Nearby Cities With algebra Tutor
Bannockburn, IL algebra Tutors
Evanston, IL algebra Tutors
Fort Sheridan algebra Tutors
Glencoe, IL algebra Tutors
Glenview, IL algebra Tutors
Golf, IL algebra Tutors
Highwood, IL algebra Tutors
Indian Creek, IL algebra Tutors
Morton Grove algebra Tutors
Niles, IL algebra Tutors
Northfield, IL algebra Tutors
Skokie algebra Tutors
Third Lake, IL algebra Tutors
Wilmette algebra Tutors
Winnetka, IL algebra Tutors
|
{"url":"http://www.purplemath.com/Kenilworth_IL_Algebra_tutors.php","timestamp":"2014-04-17T15:39:33Z","content_type":null,"content_length":"24145","record_id":"<urn:uuid:59a13835-0c29-4213-833e-eaaf1049b4b9>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Facebook Interview Question Software Engineer / Developers
• Write a program for Palindrome
Country: United States
Interview Type: Phone Interview
boolean isPalindrome(String s) {
    if (s == null || s.length() == 0) {
        return false;
    }
    // Walk inward from both ends; loop variables renamed so they don't shadow s.
    for (int i = 0, e = s.length() - 1; i < e; ++i, --e) {
        if (s.charAt(i) != s.charAt(e)) {
            return false;
        }
    }
    return true;
}
e = s.length() - 1 <--- computing s.length() is a O(n) complexity. if you put that in loop, then you will end up making an O(n^2) solution. Not many pic this detail but pointing this out just in
No, s.length() is O(1) in java. However, it's true for strlen(s) in C
Is it that simple at Facebook ?
You would think so would you. Are these "official" Facebook questions? Presumptuous at best!
boolean palinCheck(String s) {
    if (s.length() <= 1) return true;
    else if (s.charAt(0) == s.charAt(s.length() - 1))
        return palinCheck(s.substring(1, s.length() - 1));
    else return false;
}
Palindrome for what? string / array(int etc...) / linked list?
It's easy when you go with a string. But if provided with an integer, then here comes the trick.
boolean isPalindrome(int n) {
    // Count the number of digits first.
    int length = 0;
    int temp = n;
    while (temp != 0) { temp /= 10; length++; }
    while (n != 0) {
        int lsd = n % 10;                               // least significant digit
        int msd = (int) (n / Math.pow(10, length - 1)); // most significant digit
        if (lsd == msd) {
            n %= Math.pow(10, length - 1);              // strip the leading digit
            n /= 10;                                    // strip the trailing digit
            length -= 2;
        } else {
            return false;
        }
    }
    return true;
}
For a number, just reverse the number and check whether the reverse equals the original.
rev = 0;
while ( num != 0 ) {
    rev = rev*10 + num%10;
    num /= 10;
}
// palindrome if rev equals the original value of num
It must work for numbers:
for (i=n;i>9;i=n)
a= n/10;
b= n%10;
Of course, n is the number.<integer> :)
You'll get the reverse number by this.. print the original number before it to create the palindrome......
def isPalindrome(s):
for i in range(len(s) / 2):
if s[i] != s[len(s)-i-1]:
return False
return True
s = raw_input("Enter string: ")
I'd like to add a little thing to make this case insensitive:
def isPalindrome(s):
for i in range(len(s) / 2):
front_char = s[i].lower()
back_char = s[len(s)-i-1].lower()
if front_char != back_char:
return False
return True
s = raw_input("Enter string: ")
#include <iostream>
int main()
std::cout << "1234321" << std::endl;
return 0;
|
{"url":"http://www.careercup.com/question?id=13270698","timestamp":"2014-04-18T00:40:48Z","content_type":null,"content_length":"68830","record_id":"<urn:uuid:87f8e7ab-9c25-4523-aff5-0554080ee2bc>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Integral upper limit a function
I am new to Mathematica and am having trouble performing a general multiple integral.
My issue is that the upper limit of the first of a triple integral is a function of the next integration variable.
Mathematica 9 gives several error messages.
First, it says that it can't tell if the limit function is real or not. Second, it gives partial answer as a conditional with half a page of real and imaginary constraints.
I cannot determine how to tell Mathematica that all values and functions are real.
I have checked the syntax multiple times, which is no guarantee of not missing something.
Simple numerical limits show that the integral is not transcendental and gives real values.
I have looked at a half dozen sources to determine how to trouble shoot the integral and how to insert assumptions, but nothing seems to fix the issue.
Any pointers would be welcome.
Hi. Can you give a simple example of the kind of integral you mention? A multiple integral where "the upper limit of the first of a triple integral is a function of the next integration
variable. If I try to express this as an iterated integral, do you mean something like this:
Integrate[Integrate[f[x, y], {x, 0, 1}], {y, 0, g[x]}]
Integrate[f[x, y], {y, 0, g[x]}, {x, 0, 1}]
You can use the Assumptions option to inform most functions that some constants are supposed to be real. Please see the tutorial on using Assumptions.
If possible, please try giving the smallest example possible of the issue you are seeing.
I looked at the Assumptions and the Reals usage, but cannot get it to work when I am doing certain integrals. When I re-did the integral that has been troubling me, I found that I had misstated my issue to you before, though both Assumptions and Reals are an issue for me in simplifying the integration times and repetitions.
When I perform
Integrate[r1 r2/Sqrt[r1^2+r2^2+d+r1 r2 W],{r1,0,Sqrt[a-z1^2]},{r2,0,Sqrt[a-z2^2]}],
the first integral is calculated but the second is not. It is left undone, which usually means it cannot be done, except that it can be done by hand if the limit d>r1+r2 is invoked. However, I have to keep the Sqrt limits regardless of my approach. That is what I cannot get to work.
This integral is further integrated over the z1 and z2 variables, which have simple limits, and there is a final double integral over W(t1,t2). I am trying to find a way of doing the complete integration, but it may not be possible without expanding the integrand as a power series and performing a less complex integral term by term.
I have had some success with NIntegrate when I can use numerical limits, but not always, since I need to invoke some Assumptions about the limits being real. The equation is a parametric model, and I want to be able to vary d to produce a plot of the integral value vs d.
Well, Mathematica does a very general check of the expression to find any part that it can simplify. So if your expression contains too many radicals and Log functions, it would be fairly difficult to handle, since you might not be aware of the really complicated conditions which require too much computation. A good idea is always to simplify this type of problem at first.
At first glance the upper limits of the integral can be simplified. They do not depend on each other, so I will just go ahead and do the integral separately:
Integrate[x y/Sqrt[x^2 + y^2 + d + x*y*w], x]
Then I will have a result. Use
Rule and Replace functions
to find the value of the first step integral (I just copy and paste the results). Then you can copy the above two expressions into the new integrate function:
Integrate[y (Sqrt[a^2 + d + a w y + y^2] -
1/2 w y Log[2 a + w y + 2 Sqrt[a^2 + d + a w y + y^2]]) -
y (Sqrt[d + y^2] - 1/2 w y Log[w y + 2 Sqrt[d + y^2]]), y]
The result is the output of that integration. Finally, if you really want to have the upper limits to be your version, you just need to use the rule function again:
<aforementioned results>/.{a->XXX,b->YYY}
The problem you can see here is that once Mathematica does the symbolic computation, it basically keeps everything there, and perhaps more than you are aware of. Moreover, the dependences of the radicals and the wrapped Log and fractional powers really make a direct approach cheesy.
Wow! You know way more than I do about Mathematica. I will have to study your approach.
One point is that my limits need to remain Sqrt functions, since I am performing a specific volume integral and the limits are a sphere as described in cylindrical coordinates. Consequently,
I will have to investigate whether your approach has changed the integral into something that is not relevant to the scenario I am modeling.
Other than that, your attention to this issue is much appreciated. I will post back what I find...it will take a while...days? Weeks? But it is something to pursue where there was nothing before.
You can have a try at this problem by yourself using Rule and some replace functions in Mathematica first. This is how you learn this language.
Shenghui Yang
I have played around with your approach. It appears that while it works, there are other integrals in the model that cannot be solved symbolically. These occur in the last integrals over the
"w" function. I may be able to do this numerically.
On the other hand, your rule and replacement approach will allow me to develop a power series expansion. The "d" in the expression is a function of z1 and z2. I can pull it out of the Sqrt
function, which leaves me with 1/Sqrt[1+a] where for most values, a<1 and the series converges. However, I need to actually replace the "a" in the power series by the complete expression and
allow Mathematica to do the algebra for me. I can integrate these term by term with simple integrals. It is tedious until I learn how to use other capabilities within Mathematica.
I have run across two other issues, however, with the power series expansion. First, there does not seem to be any one variable about which I can perform the expansion without using the symbol
Luther "a" to replace all functions at once.
The second thing is that the variables within the numerator allow the complete integrated series to converge term by term even when I chose values for "d" which would not allow the power
series to actually converge. The total integral is more than the sum of its parts as far as the stability of the model is concerned. I did not expect this. What appears to be happening is that
each term, in the limit, after being integrated comes to a finite value that is less than one.
|
{"url":"http://community.wolfram.com/groups/-/m/t/135754?p_p_auth=7LmnUBVV","timestamp":"2014-04-16T21:51:29Z","content_type":null,"content_length":"91325","record_id":"<urn:uuid:f78f3a2b-cc45-491a-8f50-8752b0b0bc6c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How Did I Move?
A common problem when students learn about the slope-intercept equation y = mx + b is that they mechanically substitute for m and b without understanding their meaning. This lesson is intended to
provide students with a method for understanding that m is a rate of change and b is the value when x = 0. This kinesthetic activity allows students to form a physical interpretation of slope and y
-intercept by running across a football field. Students will be able to verbalize the meaning of the equation to reinforce understanding and discover that slope (or rate of movement) is the same for
all sets of points given a set of data with a linear relationship.
Create a set of index cards for each group of three students, with a different position at time 0 on each card. Each group should have unique set of index cards. Each group member will be assigned a
different role on the football field for each of the three tasks. If a football field is not readily available, a hallway or other space can be used. Students should have a simple method for
measuring their distance (e.g., number of blocks on the wall, tiles on the floor, etc.) so that they focus on the concept of movement as a rate of change rather than spending time measuring distance.
Use the following criteria to create the three index cards for each group:
• Task 1: Specify a starting location and a time.
• Task 2: Specify a different starting location and a distance.
• Task 3: Specify the same starting and ending location, along with a time.
An example set of index cards is shown below.
Football Field Activity
Distribute page 1 of the How Did I Move? activity sheet, which includes a drawing of the field. Read the instructions on the page with students. Stress that students can only run from left to right,
and they must begin at the field position specified on their index card. Inform students that they must move in a forward direction; they cannot run forward and then back again. Also, let them know
that the remaining pages of the activity sheet will have them analyze the data they collect on the football field. Allow students to ask questions to clarify the tasks. When all student questions
have been answered, take them to the football field.
How Did I Move? Activity Sheet
At the field, provide each group with a set of 3 index cards, a stopwatch, and a pencil. Ask students to rotate through the roles of football player, recorder, and timer. Each player should run or
act out the situation that corresponds to the data provided on his or her index card. When groups complete their tasks, distribute the remaining pages of the activity sheet. Students can either work
on this at the field or, when all students have finished gathering their data, back in the classroom.
Completing the How Did I Move? Activity Sheet
Students should work together in their groups to complete pages 2 and 3 of the activity sheet. Circulate among groups and help them with any difficulties they may have.
• Question 2: When students plot their data, make sure each student plots 3 lines – one for each member of their group. Remind them to label each line with the runner's name.
• Question 3:
□ a) Students should see that the steepest line gains yardage faster. If students have difficulty determining which runner was fastest, they could use 2 pencils or 2 rulers to compare the
steepness of the lines.
□ b) This is an interesting question that depends on students’ interpretation. They could say that the student who didn’t move was slowest, or they could say it was the student with the flatter
line. Strictly speaking, both answers are correct, so listen to the discussion and encourage students to justify their answer.
□ c) The graph for the group member who did not move is a horizontal line.
• Question 4: Based on the data and graphs in the previous questions, students should be able to calculate the speeds and determine the linear equations, but be prepared to provide guidance if
necessary. Students should use the slope formula to determine the slope of each line, and then use the slope to write the linear equation. Some students may also choose to use rise/run, but
the scale of the graph may make this difficult.
• Question 6: Students should be able to determine that the greater the value of m, the faster the student ran.
• Question 7: Remind students that y = 100 when a touchdown is scored, because a football field is 100 yards long. Students may be confused by the equation for the third member of their group (the
one who didn't move), but they should be able to determine that this player would never score a touchdown.
By the end of the activity, students should be able to relate m to their rate of movement in yards per second and b to their beginning position (their position at time 0). Clarify for students that m does not
stand for movement and b does not stand for beginning; this is just a mnemonic for remembering the roles that slope and y-intercept play in the equation y = mx + b.
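For a quick way to check student work, the same calculation the students do by hand can be scripted; the Python sketch below uses made-up sample readings, not values from any actual group's card:

    # Two (time in seconds, position in yards) readings for one hypothetical runner;
    # the numbers are made up, not taken from the activity sheets.
    t1, y1 = 0, 20     # starting position from the index card
    t2, y2 = 5, 50     # position recorded 5 seconds later

    m = (y2 - y1) / (t2 - t1)   # rate of change: yards per second
    b = y1 - m * t1             # position at time 0 (the y-intercept)

    print(f"y = {m}x + {b}")                              # y = 6.0x + 20.0
    print("predicted position at t = 10:", m * 10 + b)    # 80.0 yards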
Coleman's Touchdown and The Winning Goal
In Coleman's Touchdown, students are presented with 7 questions that help to reinforce the concepts from the previous activity. They predict when Coleman will score a touchdown, and discover — either
visually or computationally — that his speed (or rate of change) is the same between any two points on the graph. This may be confusing or unexpected for some students.
Coleman's Touchdown Activity Sheet & Answer Key
If time allows, allow students to work on The Winning Goal activity sheet. The activity allows students to compare two different forms of data for two players on a field hockey team. Kaitlin’s data
are presented in a table, and Brea’s data are presented in a graph. Students analyze these data, create the slope-intercept equations, and make a recommendation to the field hockey coach regarding
which player to substitute based on a mathematical analysis of each player's speed. This helps reinforce the concepts learned earlier in the lesson.
The Winning Goal Activity Sheet & Answer Key
1. Ask groups to present their results from the football activity to the class. They could compare graphs and discuss how they are the same and how they are different. Finally, the class can
determine which student had the fastest speed overall for this activity.
2. Ask each group to present the answer to one of the Questions for Students.
1. You may use remote-controlled cars instead of individual students to compare speeds.
2. Have students create a problem similar to The Winning Goal activity and pass it to another group to find the faster player.
3. Ask students to create other rate of change scenarios, such as how much they earn at their jobs vs. the amount of time they work. They should represent their data using tables, graphs, equations,
and written explanations.
Questions for Students
1. How can you tell by looking at a graph which student is fastest?
[The steepest line corresponds to the fastest student. Students with steeper slopes traveled a greater distance during the time interval. This is an ideal time to bring up the visual representation
of rise/run, where rise is the distance and run is the time.]
2. What happens to a line if run is changed in the formula m = rise/run?
[If run is increased, the fraction becomes smaller. This corresponds to traveling the same distance in a greater time, resulting in a slower speed in this scenario. If run decreases, the
opposite happens.]
3. Why is y = mx + b called the slope-intercept form of a linear equation?
[The value of m represents slope and b represents the y-intercept. In other words, without doing any calculations, you can see the slope and y-intercept of a line just by looking at the equation.]
4. What is a real-world example that demonstrates the meaning of slope?
[In this lesson, slope represents speed in yards per second. Other examples of slope include miles per gallon, dollars per hour, or cost per minute. In general, slope refers to the rate of change in
a linear equation.]
5. What is a real-world example that demonstrates the meaning of y-intercept?
[In this lesson, the y-intercept is the runner's position at time 0, i.e., the starting position on the football field. Another common example is a cell phone plan—often, the monthly charge includes a fixed
cost plus some cost per minute. The y-intercept is the fixed cost.]
6. What did you notice about the slope between any 2 points on the line representing Coleman’s position? Why did this happen?
[The slopes are all the same in the activity. This is because the slope between any two points on any given line is the same. This relates to the constant motion result in Lesson 1.]
7. If a player were at position 0 and position 100 simultaneously at time 0, what would the slope of that player's line be?
[The slope would be undefined. On a graph, this would be represented by a vertical line. The situation is impossible because a person cannot physically be in 2 places at the same time. You may wish to ask
students to compare this scenario with the one experienced by the student who stayed in one place in the How Did I Move? activity.]
Teacher Reflection
• How did your lesson address auditory, tactile and visual learning styles?
• How did students demonstrate understanding of the materials presented?
• Did students make the connection between slope, speed, and rate of change?
• How did students communicate that they understand the meaning of the slope-intercept equation?
• What were some of the ways in which students illustrated that they were actively engaged in the learning process?
• What, if any, issues arose with classroom management? How did you correct them? If you use this lesson in the future, what could you do to prevent these problems?
In this lesson, students use remote-controlled cars to create a system of equations. The solution of the system corresponds to the cars crashing. Multiple representations are woven together
throughout the lesson, using graphs, scatter plots, equations, tables, and technological tools. Students calculate the time and place of the crash mathematically, and then test the results by
crashing the cars into each other.
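The crash point in a lesson like this is just the simultaneous solution of two slope-intercept equations. A small Python sketch with invented speeds and starting positions, not values from the actual lesson materials:

    # Car A starts at 0 ft and moves at 4 ft/s:    y = 4t
    # Car B starts at 60 ft and moves at -2 ft/s:  y = 60 - 2t
    # (speeds and starting positions are invented for this sketch)
    m1, b1 = 4, 0
    m2, b2 = -2, 60

    t_crash = (b2 - b1) / (m1 - m2)   # solve m1*t + b1 = m2*t + b2 for t
    y_crash = m1 * t_crash + b1

    print(f"cars meet at t = {t_crash} s, position = {y_crash} ft")   # t = 10.0 s, 40.0 ft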
6-8, 9-12
This investigation uses a motion detector to help students understand graphs and equations. Students experience constant and variable rates of change and are challenged to consider graphs that no
movement could create. Multiple representations are used throughout the lesson to allow students to build their fluency in how graphs, tables, equations, and physical modeling
are connected. The lesson also allows students to investigate multiple function types, including linear, exponential, quadratic, and piecewise.
Learning Objectives
Students will:
• Collect data in an activity on a football field
• Compare movement and starting positions based on the data
• Create the slope-intercept equations, relating m to their speed (rate of movement) and b to their beginning running location
• Use the equation to predict the distance at any given time
Common Core State Standards – Mathematics
Grade 8, Expression/Equation
• CCSS.Math.Content.8.EE.B.6
Use similar triangles to explain why the slope m is the same between any two distinct points on a non-vertical line in the coordinate plane; derive the equation y = mx for a line through the
origin and the equation y = mx + b for a line intercepting the vertical axis at b.
Grade 8, Functions
• CCSS.Math.Content.8.F.B.4
Construct a function to model a linear relationship between two quantities. Determine the rate of change and initial value of the function from a description of a relationship or from two (x, y)
values, including reading these from a table or from a graph. Interpret the rate of change and initial value of a linear function in terms of the situation it models, and in terms of its graph or
a table of values.
Grade 8, Stats & Probability
• CCSS.Math.Content.8.SP.A.2
Know that straight lines are widely used to model relationships between two quantitative variables. For scatter plots that suggest a linear association, informally fit a straight line, and
informally assess the model fit by judging the closeness of the data points to the line.
Grade 8, Stats & Probability
• CCSS.Math.Content.8.SP.A.3
Use the equation of a linear model to solve problems in the context of bivariate measurement data, interpreting the slope and intercept. For example, in a linear model for a biology experiment,
interpret a slope of 1.5 cm/hr as meaning that an additional hour of sunlight each day is associated with an additional 1.5 cm in mature plant height.
|
{"url":"http://illuminations.nctm.org/Lesson.aspx?id=2800","timestamp":"2014-04-18T08:48:25Z","content_type":null,"content_length":"98420","record_id":"<urn:uuid:ea3bec09-802d-46cc-a2ee-19ccae269734>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Factor completely p^4 – 8p^2 + 16
It's always good to let p^2 = x; then you'll have x^2 - 8x + 16, which is much simpler to factor. After factoring, simply replace x with p^2.
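The substitution suggested above is easy to verify with a computer algebra system; a quick SymPy sketch:

    import sympy as sp

    p, x = sp.symbols('p x')

    # Work with the substituted quadratic first, then factor the original expression
    print(sp.factor(x**2 - 8*x + 16))        # (x - 4)**2
    print(sp.factor(p**4 - 8*p**2 + 16))     # (p - 2)**2*(p + 2)**2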
|
{"url":"http://openstudy.com/updates/50b51232e4b061f4f8ff6b4d","timestamp":"2014-04-18T08:11:00Z","content_type":null,"content_length":"30203","record_id":"<urn:uuid:2807004b-f3b3-4704-81a6-0cf5bd891ead>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fractional Helly for more than one piercing
Fractional Helly Theorem says the following:
For every $0<\alpha\leq 1$ there exists $\beta = \beta(d, \alpha)$ with the following property. Let $C_1 , C_2 , \ldots, C_n$ be convex sets in $\mathbb{R}^d$, $n \geq d + 1$. If at least $\alpha {n \choose d+1}$ of the subcollections of size $d + 1$ have non-empty intersection, then there exists a point contained in at least $\beta n$ of the sets, where $\beta(\alpha)=1-(1-\alpha)^{1/(d+1)}$.
Now, my question is whether fractional Helly also holds for piercing by more than one point. More precisely, if at least an $\alpha'$ fraction ($0<\alpha'\leq 1$) of the ${n \choose k(d+1)}$ subcollections of size $k(d+1)$ can be pierced by at most $k$ points, does it follow that at least $\beta'n$ of the sets can be pierced by at most $k$ points, where $\beta' = \beta'(\alpha',k,d)$ and $\beta'$ approaches $1$ as $\alpha'$ approaches $1$?
I have asked the same question in math.stackexchange also. Sorry for repeating the question here.
discrete-geometry computational-geometry
|
{"url":"http://mathoverflow.net/questions/122647/fractional-helly-for-more-than-one-piercing","timestamp":"2014-04-19T15:18:32Z","content_type":null,"content_length":"46374","record_id":"<urn:uuid:6866f962-97bc-4f76-acef-7fbc4043ac34>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
|
North Billerica SAT Math Tutor
Find a North Billerica SAT Math Tutor
...I will travel throughout the area to meet in your home, library, or wherever is comfortable for you.Materials Physics Research Associate, Harvard, current Geophysics postdoctoral fellow, MIT,
2010-2012 Physics PhD, Brandeis University, 2010 -Includes experience teaching and lecturing Physics...
16 Subjects: including SAT math, calculus, physics, geometry
...Many problems on the SAT are unlike any that students may have experienced in their Math classes at school. I help my students with "SAT Math" by a) expanding and deepening their understanding
of the ideas behind Mathematics; b) showing them how to think on their feet and apply basic Mathematica...
14 Subjects: including SAT math, calculus, geometry, algebra 1
I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor
elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math.I am a chemistry major at Boston College.
13 Subjects: including SAT math, chemistry, calculus, geometry
...I also have an A.S. in Computer Information Systems from Holyoke Community College. I am an experienced, self-motivated, and detail-oriented administrative management and teaching professional
with advanced computer proficiency in MS Word, Excel, Power point, and Access. Throughout my employmen...
30 Subjects: including SAT math, English, writing, reading
...I went to Lesley University for my undergraduate degree in education and mathematics. I went back to Lesley for my first Master's degree in education to be a Reading Specialist. Currently, I am
finishing my second Master's degree in Teaching Mathematics at Rivier University.
14 Subjects: including SAT math, reading, geometry, algebra 1
|
{"url":"http://www.purplemath.com/North_Billerica_SAT_Math_tutors.php","timestamp":"2014-04-21T11:09:04Z","content_type":null,"content_length":"24327","record_id":"<urn:uuid:d154b59c-55e4-4a95-9fb4-90829f43c725>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Formal methods
From Wikipedia, the free encyclopedia
In computer science, specifically software engineering and hardware engineering, formal methods are a particular kind of mathematically based techniques for the specification, development and
verification of software and hardware systems.^1 The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing
appropriate mathematical analysis can contribute to the reliability and robustness of a design.^2
Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and
program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.^3
Formal methods can be used at a number of levels:
Level 0: Formal specification may be undertaken and then a program developed from this informally. This has been dubbed formal methods lite. This may be the most cost-effective option in many cases.
Level 1: Formal development and formal verification may be used to produce a program in a more formal manner. For example, proofs of properties or refinement from the specification to a program may
be undertaken. This may be most appropriate in high-integrity systems involving safety or security.
Level 2: Theorem provers may be used to undertake fully formal machine-checked proofs. This can be very expensive and is only practically worthwhile if the cost of mistakes is extremely high (e.g.,
in critical parts of microprocessor design).
Further information on this is expanded below.
As with programming language semantics, styles of formal methods may be roughly classified as follows:
• Denotational semantics, in which the meaning of a system is expressed in the mathematical theory of domains. Proponents of such methods rely on the well-understood nature of domains to give
meaning to the system; critics point out that not every system may be intuitively or naturally viewed as a function.
• Operational semantics, in which the meaning of a system is expressed as a sequence of actions of a (presumably) simpler computational model. Proponents of such methods point to the simplicity of
their models as a means to expressive clarity; critics counter that the problem of semantics has just been delayed (who defines the semantics of the simpler model?).
• Axiomatic semantics, in which the meaning of the system is expressed in terms of preconditions and postconditions which are true before and after the system performs a task, respectively.
Proponents note the connection to classical logic; critics note that such semantics never really describe what a system does (merely what is true before and afterwards).
Lightweight formal methods
Some practitioners believe that the formal methods community has overemphasized full formalization of a specification or design.^4^5 They contend that the expressiveness of the languages involved, as
well as the complexity of the systems being modelled, make full formalization a difficult and expensive task. As an alternative, various lightweight formal methods, which emphasize partial
specification and focused application, have been proposed. Examples of this lightweight approach to formal methods include the Alloy object modelling notation,^6 Denney's synthesis of some aspects of
the Z notation with use case driven development,^7 and the CSK VDM Tools.^8
Formal methods can be applied at various points through the development process.
Formal methods may be used to give a description of the system to be developed, at whatever level(s) of detail desired. This formal description can be used to guide further development activities
(see following sections); additionally, it can be used to verify that the requirements for the system being developed have been completely and accurately specified.
The need for formal specification systems has been noted for years. In the ALGOL 58 report,^9 John Backus presented a formal notation for describing programming language syntax (later named Backus
Normal Form then renamed Backus-Naur Form (BNF)^10). Backus also wrote that a formal description of the meaning of syntactically valid ALGOL programs wasn't completed in time for inclusion in the
report. "Therefore the formal treatment of the semantics of legal programs will be included in a subsequent paper." It never appeared.
Once a formal specification has been produced, the specification may be used as a guide while the concrete system is developed during the design process (i.e., realized typically in software, but
also potentially in hardware). For example:
• If the formal specification is in an operational semantics, the observed behavior of the concrete system can be compared with the behavior of the specification (which itself should be executable
or simulateable). Additionally, the operational commands of the specification may be amenable to direct translation into executable code.
• If the formal specification is in an axiomatic semantics, the preconditions and postconditions of the specification may become assertions in the executable code.
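As an informal illustration of the second bullet above, preconditions and postconditions often surface as runtime assertions. The small Python sketch below uses an invented integer square-root routine; it is not tied to any particular specification language:

    import math

    def isqrt_floor(n: int) -> int:
        """Integer square root (illustrative example, not from any real specification).

        Precondition:  n >= 0
        Postcondition: r*r <= n < (r + 1)*(r + 1)
        """
        assert n >= 0, "precondition violated: n must be non-negative"
        r = math.isqrt(n)
        assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
        return r

    print(isqrt_floor(10))   # 3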
Once a formal specification has been developed, the specification may be used as the basis for proving properties of the specification (and hopefully by inference the developed system).
Human-directed proof
Sometimes, the motivation for proving the correctness of a system is not the obvious need for re-assurance of the correctness of the system, but a desire to understand the system better.
Consequently, some proofs of correctness are produced in the style of mathematical proof: handwritten (or typeset) using natural language, using a level of informality common to such proofs. A "good"
proof is one which is readable and understandable by other human readers.
Critics of such approaches point out that the ambiguity inherent in natural language allows errors to be undetected in such proofs; often, subtle errors can be present in the low-level details
typically overlooked by such proofs. Additionally, the work involved in producing such a good proof requires a high level of mathematical sophistication and expertise.
Automated proof
In contrast, there is increasing interest in producing proofs of correctness of such systems by automated means. Automated techniques fall into two general categories:
• Automated theorem proving, in which a system attempts to produce a formal proof from scratch, given a description of the system, a set of logical axioms, and a set of inference rules.
• Model checking, in which a system verifies certain properties by means of an exhaustive search of all possible states that a system could enter during its execution.
Some automated theorem provers require guidance as to which properties are "interesting" enough to pursue, while others work without human intervention. Model checkers can quickly get bogged down in
checking millions of uninteresting states if not given a sufficiently abstract model.
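A toy illustration of the exhaustive-search idea behind model checking may help here; the transition system and the "bad" states in this Python sketch are invented purely for the example:

    from collections import deque

    # A tiny invented transition system: a counter mod 4 that can also reset to 0
    def successors(state):
        return {(state + 1) % 4, 0}

    def violates(state):
        return state == 3        # the "bad" states we hope are unreachable

    def check(initial):
        seen, frontier = {initial}, deque([initial])
        while frontier:                      # breadth-first search over all reachable states
            s = frontier.popleft()
            if violates(s):
                return f"property violated in reachable state {s}"
            for nxt in successors(s):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return "property holds in every reachable state"

    print(check(0))   # state 3 is reachable, so a violation is reported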
Proponents of such systems argue that the results have greater mathematical certainty than human-produced proofs, since all the tedious details have been algorithmically verified. The training
required to use such systems is also less than that required to produce good mathematical proofs by hand, making the techniques accessible to a wider variety of practitioners.
Critics note that some of those systems are like oracles: they make a pronouncement of truth, yet give no explanation of that truth. There is also the problem of "verifying the verifier"; if the
program which aids in the verification is itself unproven, there may be reason to doubt the soundness of the produced results. Some modern model checking tools produce a "proof log" detailing each
step in their proof, making it possible to perform, given suitable tools, independent verification.
Formal methods are applied in different areas of hardware and software, including routers, Ethernet switches, routing protocols, and security applications. There are several examples in which FMs
have been used to verify the functionality of the hardware and software used in DCs. ACL2, a theorem prover, was used in the AMD x86 processor development process. Intel uses FMs to verify its hardware
and firmware (permanent software programmed into a read-only memory). NASA has applied FMs in several projects, such as the Next Generation Air Transportation System, Unmanned
Aircraft System integration in the National Airspace System,^11 and Airborne Coordinated Conflict Resolution and Detection (ACCoRD).^12
Formal verification has been frequently used in hardware by most of the well-known hardware vendors, such as IBM, Intel, and AMD. There are many areas of hardware, where Intel have used FMs to verify
the working of the products, such as parameterized verification of cache coherent protocol,^13 Intel Core i7 processor execution engine validation ^14 (using theorem proving, BDD’s, and symbolic
evaluation), optimization for Intel IA-64 architecture using HOL light theorem prover,^15 and verification of high performance dual-port gigabit Ethernet controller with a support for PCI express
protocol and Intel advance management technology using Cadence.^16 Similarly, IBM has used formal methods in the verification of power gates,^17 registers,^18 and functional verification of the IBM
Power7 microprocessor.^19
Formal methods and notations
There are a variety of formal methods and notations available.
1. ^ R. W. Butler (2001-08-06). "What is Formal Methods?". Retrieved 2006-11-16.
2. ^ C. Michael Holloway. Why Engineers Should Consider Formal Methods. 16th Digital Avionics Systems Conference (27–30 October 1997). Retrieved 2006-11-16.
3. ^ Monin, pp.3-4
4. ^ Daniel Jackson and Jeannette Wing, "Lightweight Formal Methods", IEEE Computer, April 1996
5. ^ Vinu George and Rayford Vaughn, "Application of Lightweight Formal Methods in Requirement Engineering", Crosstalk: The Journal of Defense Software Engineering, January 2003
6. ^ Daniel Jackson, "Alloy: A Lightweight Object Modelling Notation", ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 11, Issue 2 (April 2002), pp. 256-290
7. ^ Richard Denney, Succeeding with Use Cases: Working Smart to Deliver Quality, Addison-Wesley Professional Publishing, 2005, ISBN 0-321-31643-6.
8. ^ Sten Agerholm and Peter G. Larsen, "A Lightweight Approach to Formal Methods", In Proceedings of the International Workshop on Current Trends in Applied Formal Methods, Boppard, Germany,
Springer-Verlag, October 1998
9. ^ Backus, J.W. (1959). "The Syntax and Semantics of the Proposed International Algebraic Language of Zürich ACM-GAMM Conference". Proceedings of the International Conference on Information
Processing. UNESCO.
10. ^ Knuth, Donald E. (1964), Backus Normal Form vs Backus Naur Form. Communications of the ACM, 7(12):735–736.
11. ^ Gheorghe, A. V., & Ancel, E. (2008, November). Unmanned aerial systems integration to National Airspace System. In Infrastructure Systems and Services: Building Networks for a Brighter Future
(INFRA), 2008 First International Conference on (pp. 1-5). IEEE.
12. ^ Airborne Coordinated Conflict Resolution and Detection, http://shemesh.larc.nasa.gov/people/cam/ACCoRD/
13. ^ C. T. Chou, P. K. Mannava, S. Park, “A simple method for parameterized verification of cache coherence protocols,” Formal Methods in Computer-Aided Design, pp. 382-398, 2004.
14. ^ Formal Verification in Intel® Core™ i7 Processor Execution Engine Validation, http://cps-vo.org/node/1371, accessed at Sep. 13, 2013.
15. ^ J. Grundy, “Verified optimizations for the Intel IA-64 architecture,” In Theorem Proving in Higher Order Logics, Springer Berlin Heidelberg, 2004, pp. 215-232.
16. ^ E. Seligman, I. Yarom, “Best known methods for using Cadence Conformal LEC,” at Intel.
17. ^ C. Eisner, A. Nahir, K. Yorav, “Functional verification of power gated designs by compositional reasoning,” Computer Aided Verification Springer Berlin Heidelberg, pp. 433-445.
18. ^ P. C. Attie, H. Chockler, “Automatic verification of fault-tolerant register emulations,” Electronic Notes in Theoretical Computer Science, vol. 149, no. 1, pp. 49-60.
19. ^ K. D. Schubert, W. Roesner, J. M. Ludden, J. Jackson, J. Buchert, V. Paruthi, B. Brock, “Functional verification of the IBM POWER7 microprocessor and POWER7 multiprocessor systems,” IBM Journal
of Research and Development, vol. 55, no 3.
This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.
Further reading
• Jean François Monin and Michael G. Hinchey, Understanding formal methods, Springer, 2003, ISBN 1-85233-247-6.
• Jonathan P. Bowen and Michael G. Hinchey, Formal Methods. In Allen B. Tucker, Jr. (ed.), Computer Science Handbook, 2nd edition, Section XI, Software Engineering,Chapter 106, pages 106-1 –
106-25, Chapman & Hall / CRC Press, Association for Computing Machinery, 2004.
• Michael G. Hinchey, Jonathan P. Bowen, and Emil Vassev, Formal Methods. In Philip A. Laplante (ed.), Encyclopedia of Software Engineering, Taylor & Francis, 2010, pages 308–320.
|
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Formal_methods","timestamp":"2014-04-18T08:37:32Z","content_type":null,"content_length":"125874","record_id":"<urn:uuid:57efb1a6-22b8-408f-ab90-acb02a279cfc>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 1 - 10 of 217
, 1997
"... This paper analyses the recently suggested particle approach to filtering time series. We suggest that the algorithm is not robust to outliers for two reasons: the design of the simulators and
the use of the discrete support to represent the sequentially updating prior distribution. Both problems ar ..."
Cited by 519 (15 self)
This paper analyses the recently suggested particle approach to filtering time series. We suggest that the algorithm is not robust to outliers for two reasons: the design of the simulators and the
use of the discrete support to represent the sequentially updating prior distribution. Both problems are tackled in this paper. We believe we have largely solved the first problem and have reduced
the order of magnitude of the second. In addition we introduce the idea of stratification into the particle filter which allows us to perform on-line Bayesian calculations about the parameters which
index the models and maximum likelihood estimation. The new methods are illustrated by using a stochastic volatility model and a time series model of angles. Some key words: Filtering, Markov chain
Monte Carlo, Particle filter, Simulation, SIR, State space. 1 1
, 1994
"... this paper we exploit Gibbs sampling to provide a likelihood framework for the analysis of stochastic volatility models, demonstrating how to perform either maximum likelihood or Bayesian
estimation. The paper includes an extensive Monte Carlo experiment which compares the efficiency of the maximum ..."
Cited by 354 (37 self)
this paper we exploit Gibbs sampling to provide a likelihood framework for the analysis of stochastic volatility models, demonstrating how to perform either maximum likelihood or Bayesian estimation.
The paper includes an extensive Monte Carlo experiment which compares the efficiency of the maximum likelihood estimator with that of quasi-likelihood and Bayesian estimators proposed in the
literature. We also compare the fit of the stochastic volatility model to that of ARCH models using the likelihood criterion to illustrate the flexibility of the framework presented. Some key words:
ARCH, Bayes estimation, Gibbs sampler, Heteroscedasticity, Maximum likelihood, Quasi-maximum likelihood, Simulation, Stochastic EM algorithm, Stochastic volatility, Stock returns. 1 INTRODUCTION
- IEEE Trans Image Processing , 2003
"... Abstract—We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients
at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussi ..."
Cited by 350 (18 self)
Abstract—We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at
adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the
coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of
each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. We demonstrate through simulations with images contaminated by
additive white Gaussian noise that the performance of this method substantially surpasses that of previously published methods, both visually and in terms of mean squared error.
"... Volatility permeates modern financial theories and decision making processes. As such, accurate measures and good forecasts of future volatility are critical for the implementation and
evaluation of asset and derivative pricing theories as well as trading and hedging strategies. In response to this, ..."
Cited by 271 (33 self)
Volatility permeates modern financial theories and decision making processes. As such, accurate measures and good forecasts of future volatility are critical for the implementation and evaluation of
asset and derivative pricing theories as well as trading and hedging strategies. In response to this, a voluminous literature has emerged for modeling the temporal dependencies in financial market
volatility at the daily and lower frequencies using ARCH and stochastic volatility type models. Most of these studies find highly significant in-sample parameter estimates and pronounced
intertemporal volatility persistence. Meanwhile, when judged by standard forecast evaluation criteria, based on the squared or absolute returns over daily or longer forecast horizons, standard
volatility models provide seemingly poor forecasts. The present paper demonstrates that, contrary to this contention, in empirically realistic situations the models actually produce strikingly
accurate interdaily forecasts f...
, 2002
"... this paper is built. First, although raw returns are clearly leptokurtic, returns standardized by realized volatilities are approximately Gaussian. Second, although the distributions of realized
volatilities are clearly right-skewed, the distributions of the logarithms of realized volatilities are a ..."
Cited by 265 (34 self)
this paper is built. First, although raw returns are clearly leptokurtic, returns standardized by realized volatilities are approximately Gaussian. Second, although the distributions of realized
volatilities are clearly right-skewed, the distributions of the logarithms of realized volatilities are approximately Gaussian. Third, the long-run dynamics of realized logarithmic volatilities are
well approximated by a fractionally-integrated long-memory process. Motivated by the three ABDL empirical regularities, we proceed to estimate and evaluate a multivariate model for the logarithmic
realized volatilities: a fractionally-integrated Gaussian vector autoregression (VAR) . Importantly, our approach explicitly permits measurement errors in the realized volatilities. Comparing the
resulting volatility forecasts to those obtained from currently popular daily volatility models and more complicated high-frequency models, we find that our simple Gaussian VAR forecasts generally
produce superior forecasts. Furthermore, we show that, given the theoretically motivated and empirically plausible assumption of normally distributed returns conditional on the realized volatilities,
the resulting lognormal-normal mixture forecast distribution provides conditionally well-calibrated density forecasts of returns, from which we obtain accurate estimates of conditional return
quantiles. In the remainder of this paper, we proceed as follows. We begin in section 2 by formally developing the relevant quadratic variation theory within a standard frictionless arbitrage-free
multivariate pricing environment. In section 3 we discuss the practical construction of realized volatilities from high-frequency foreign exchange returns. Next, in section 4 we summarize the salient
distributional features of r...
- International Economic Review , 1997
"... This paper is intended to address the deficiency by clearly defining what is meant by a "good" interval forecast, and describing how to test if a given interval forecast deserves the label
"good". One of the motivations of Engle's (1982) classic paper was to form dynamic interval forecasts around po ..."
Cited by 166 (10 self)
This paper is intended to address the deficiency by clearly defining what is meant by a "good" interval forecast, and describing how to test if a given interval forecast deserves the label "good".
One of the motivations of Engle's (1982) classic paper was to form dynamic interval forecasts around point predictions. The insight was that the intervals should be narrow in tranquil times and wide
in volatile times, so that the occurrences of observations outside the interval forecast would be spread out over the sample and not come in clusters. An interval forecast that 3 fails to account for
higher-order dynamics may be correct on average (have correct unconditional coverage), but in any given period it will have incorrect conditional coverage characterized by clustered outliers. These
concepts will be defined precisely below, and tests for correct conditional coverage are suggested. Chatfield (1993) emphasizes that model misspecification is a much more important source of poor
interval forecasting than is simple estimation error. Thus, our testing criterion and the tests of this criterion are model free. In this regard, the approach taken here is similar to the one taken
by Diebold and Mariano (1995). This paper can also be seen as establishing a formal framework for the ideas suggested in Granger, White and Kamstra (1989). Recently, financial market participants
have shown increasing interest in interval forecasts as measures of uncertainty. Thus, we apply our methods to the interval forecasts provided by J.P. Morgan (1995). Furthermore, the so-called
"Value-at-Risk" measures suggested for risk measurement correspond to tail forecasts, i.e., one-sided interval forecasts of portfolio returns. Lopez (1996) evaluates these types of forecasts applying
the procedures develo...
, 1997
"... Understanding volatility in emerging capital markets is important for determining the cost of capital and for evaluating direct investment and asset allocation decisions. We provide an approach
that allows the relative importance of world and local information to change through time in both the expe ..."
Cited by 157 (28 self)
Understanding volatility in emerging capital markets is important for determining the cost of capital and for evaluating direct investment and asset allocation decisions. We provide an approach that
allows the relative importance of world and local information to change through time in both the expected returns and conditional variance processes. Our time-series and cross-sectional models
analyze the reasons that volatility is different across emerging markets, particularly with respect to the timing of capital market reforms. We find that capital market liberalizations often increase
the correlation between local market returns and the world market but do not drive up local market volatility.
- Journal of Empirical Finance , 1998
"... We propose a method for estimating VaR and related risk measures describing the tail of the conditional distribution of a heteroscedastic financial return series. Our approach combines
pseudo-maximum-likelihood fitting of GARCH models to estimate the current volatility and extreme value theory (EVT) ..."
Cited by 102 (4 self)
We propose a method for estimating VaR and related risk measures describing the tail of the conditional distribution of a heteroscedastic financial return series. Our approach combines
pseudo-maximum-likelihood fitting of GARCH models to estimate the current volatility and extreme value theory (EVT) for estimating the tail of the innovation distribution of the GARCH model. We use
our method to estimate conditional quantiles (VaR) and conditional expected shortfalls (the expected size of a return exceeding VaR), this being an alternative measure of tail risk with better
theoretical properties than the quantile. Using backtesting of historical daily return series we show that our procedure gives better one-day estimates than methods which ignore the heavy tails of
the innovations or the stochastic nature of the volatility. With the help of our fitted models we adopt a Monte Carlo approach to estimating the conditional quantiles of returns over multiple-day
horizons and find that t...
"... This paper surveys the most important developments in multivariate ARCH-type modelling. It reviews the model specifications and inference methods, and identifies likely directions of future
research. ..."
Cited by 102 (7 self)
This paper surveys the most important developments in multivariate ARCH-type modelling. It reviews the model specifications and inference methods, and identifies likely directions of future research.
- Handbook of Econometrics , 2007
"... Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other
complications such as introducing infinite dimensional parameter spaces that may not be compact. The method o ..."
Cited by 92 (17 self)
Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other
complications such as introducing infinite dimensional parameter spaces that may not be compact. The method of sieves provides one way to tackle such complexities by optimizing an empirical criterion
function over a sequence of approximating parameter spaces, called sieves, which are significantly less complex than the original parameter space. With different choices of criteria and sieves, the
method of sieves is very flexible in estimating complicated econometric models. For example, it can simultaneously estimate the parametric and nonparametric components in semi-nonparametric models
with or without constraints. It can easily incorporate prior information, often derived from economic theory, such as monotonicity, convexity, additivity, multiplicity, exclusion and non-negativity.
This chapter describes estimation of semi-nonparametric econometric models via the method of sieves. We present some general results on the large sample properties of the sieve estimates, including
consistency of the sieve extremum estimates, convergence rates of the sieve M-estimates, pointwise normality of series estimates of regression functions, root-n asymptotic normality and efficiency of
sieve estimates of smooth functionals of infinite dimensional parameters. Examples are used to illustrate the general results.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1295248","timestamp":"2014-04-18T02:02:32Z","content_type":null,"content_length":"40820","record_id":"<urn:uuid:e923d494-c1b8-4f22-8047-2b2d7ece173b>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
|
prime or composite
hello everyone i have a math review and i want the RIGHT answers so i can check my work. And so I know that I'm doing each problem right thanks.
#1. Is the number 75 prime,composite, or neither? (i say composite?)
#2. Is the number 37 prime,composite, or neither? (i say prime?)
#3. Is the number 1 prime,composite, or neither? (i say neither?)
#4. Is the number 42 prime,composite, or neither? (i say composite?)
Perform the indicated operation.
#5. 15/18 x 12/5 (i say its 2?)
#6. 3 4/7 x 2 1/5 (i say its 55/7?)
#7. 1 4/11 divide 3 (i dont know?)
#8. 5/8 divide 2 1/3 (i dont know?)
#9. 3 1/3+4 7/8+2 5/6 (i dont know?)
#10. 4/7+3/7+2 1/4 ( i say 3 1/4?)
#11. 5/9-1/3 ( i say 16/27?)
#12. 5 7/8-3 1/2 ( i say 2 3/8?)
Answer the following problems.
#13. A utility stock listed at 18 7/8 on Wednesday rose 2 1/2 points on Thursday and dropped 1 5/8 on Friday. What was it worth when the market closed on Friday? (i say 19 3/4?)
#14. Marco added 3 3/4 cup of water to a bowl. He then decided that was too much, so he took a 1/4 cup measuring cup and dipped it into the bowl three times, taking out the water each time, full
to the top of the measuring cup. How much was left in the bowl? ( i say 3 cups?)
#15. Write 8.064 as a fraction. Do not write in lowest terms.
#16. Write 0.0345 as a fraction. Do not write in lowest terms.
Perform the indicated operation(s).
#21.(0.00239x10000) divide 1.5
#22. 4501.2 x 0.003
#23. write 7.8% as a decimal.
#24. write 82.3% as a decimal.
#25. write 2 3/8 as a decimal.
#26. write 15 2/3 as a decimal.
#27. write 3.45 as a percent.
#28. write .012 as a percent.
If anyone knows this stuff let me have it to check my answers because i dont want to flunk. thank you...
hello everyone i have a math review and i want the RIGHT answers so i can check my work. And so I know that I'm doing each problem right thanks.
#1. Is the number 75 prime,composite, or neither? (i say composite?) Correct
#2. Is the number 37 prime,composite, or neither? (i say prime?) Correct
#3. Is the number 1 prime,composite, or neither? (i say neither?) Correct
#4. Is the number 42 prime,composite, or neither? (i say composite?) Correct
Perform the indicated operation.
#5. 15/18 x 12/5 (i say its 2?) Correct
#6. 3 4/7 x 2 1/5 (i say its 55/7?) Correct
#7. 1 4/11 divide 3 (i dont know?) See below
#8. 5/8 divide 2 1/3 (i dont know?) See below
#9. 3 1/3+4 7/8+2 5/6 (i dont know?) See below
#10. 4/7+3/7+2 1/4 ( i say 3 1/4?) Correct
#11. 5/9-1/3 ( i say 16/27?) See below
#12. 5 7/8-3 1/2 ( i say 2 3/8?) Correct
Answer the following problems.
#13. A utility stock listed at 18 7/8 on Wednesday rose 2 1/2 points on Thursday and dropped 1 5/8 on Friday. What was it worth when the market closed on Friday? (i say 19 3/4?)
#14. Marco added 3 3/4 cup of water to a bowl. He then decided that was too much, so he took a 1/4 cup measuring cup and dipped it into the bowl three times, taking out the water each time, full
to the top of the measuring cup. How much was left in the bowl? ( i say 3 cups?)
#15. Write 8.064 as a fraction. Do not write in lowest terms.
#16. Write 0.0345 as a fraction. Do not write in lowest terms.
Perform the indicated operation(s).
#21.(0.00239x10000) divide 1.5
#22. 4501.2 x 0.003
#23. write 7.8% as a decimal.
#24. write 82.3% as a decimal.
#25. write 2 3/8 as a decimal.
#26. write 15 2/3 as a decimal.
#27. write 3.45 as a percent.
#28. write .012 as a percent.
If anyone knows this stuff let me have it to check my answers because i dont want to flunk. thank you...
#7) $3\div1\frac{4}{11}$ or is it $1\frac{4}{11}\div 3$?
1st way: $\frac{3}{1}\div1\frac{4}{11}=\frac{3}{1}\div\frac{15}{11}=\frac{3}{1}\times\frac{11}{15}=\frac{33}{15}=\frac{11}{5}=2\frac{1}{5}$
2nd way: $1\frac{4}{11}\div 3=\frac{15}{11}\times\frac{1}{3}=\frac{15}{33}=\frac{5}{11}$
#8) $\frac{5}{8}\div2\frac{1}{3}$ or is it $2\frac{1}{3}\div \frac{5}{8}$?
You should be able to do this one looking at #7 as a guide.
#9) $3\frac{1}{3}+4\frac{7}{8}+2\frac{5}{6}$
Change mixed numbers to improper fractions.
Find common denominator = 24 and convert each denominator to 24.
Add numerators $\frac{80+117+68}{24}=\frac{265}{24}$
#11) $\frac{5}{9}-\frac{1}{3}=\frac{5}{9}-\frac{3}{9}=\frac{2}{9}$
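For readers who want to double-check this kind of fraction arithmetic, Python's fractions module reproduces the results above exactly:

    from fractions import Fraction as F

    print(F(15, 11) / 3)                    # #7, second reading: 5/11
    print(F(5, 8) / F(7, 3))                # #8: 5/8 divided by 2 1/3 = 15/56
    print(F(10, 3) + F(39, 8) + F(17, 6))   # #9: 265/24
    print(F(5, 9) - F(1, 3))                # #11: 2/9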
I have a better plan. You SHOW YOUR WORK and someone will be glad to point out where you wander off if you do.
This is NOT showing your work. WHY do you think it is "composite"?
This is NOT showing your work. Here's an example of showing your work.
15/18 * 12/5
Muliply numerators and denominators
(15*12)/(18*5) = 180/90 = 18/9 = 2
Hey, wait a minute, can I simplify as I go by using the commutative property and associative property?
$\frac{15}{18}*\frac{12}{5} = \frac{5}{6}*\frac{12}{5} = \frac{5}{5}*\frac{12}{6} = 1*2 = 2$
Answer the following problems.
#13. A utility stock listed at 18 7/8 on Wednesday rose 2 1/2 points on Thursday and dropped 1 5/8 on Friday. What was it worth when the market closed on Friday? (i say 19 3/4?) Correct
#14. Marco added 3 3/4 cup of water to a bowl. He then decided that was too much, so he took a 1/4 cup measuring cup and dipped it into the bowl three times, taking out the water each time, full
to the top of the measuring cup. How much was left in the bowl? ( i say 3 cups?) Correct
#15. Write 8.064 as a fraction. Do not write in lowest terms.
3 decimal places indicate thousandths, so $8.064=8\frac{64}{1000}=\frac{8064}{1000}$
#16. Write 0.0345 as a fraction. Do not write in lowest terms.
4 decimal places mean ten-thousandths. so $0.0345=\frac{345}{10000}$
Perform the indicated operation(s).
#21.(0.00239x10000) divide 1.5
Multiply by 10000 by moving the decimal 4 places to the right. Then divide by 1.5. $23.9\div1.5=15.9\overline{3}=\frac{239}{15}$
#22. 4501.2 x 0.003
Simple multiplication. Are you allowed to use a calculator? The product should have 4 decimal places. Try it.
To do these, simply remove the % sign and move the decimal 2 places to the left.
#23. write 7.8% as a decimal.
7.8% = .078
#24. write 82.3% as a decimal.
#25. write 2 3/8 as a decimal.
#26. write 15 2/3 as a decimal.
#27. write 3.45 as a percent.
To do this one, move the decimal 2 places to the right and attach a % sign.
#28. write .012 as a percent.
.012 = 1.2%
If anyone knows this stuff let me have it to check my answers because i dont want to flunk. thank you...
Most of the answers above are in Red. Three words of advice: Practice, Practice, Practice
Hey, I have one question. On #21, I really didn't understand how you did that, because I was taught to multiply them together and then just normally divide them, but I really didn't understand your
approach. You did help me a lot on them, but on number 11 I also thought you were supposed to multiply both sides by the opposite denominator, and I see that you only changed the second fraction. That really
confused me. But otherwise you really did help me and I thank you. Keep up the great work. Thanks.
Show your work. Display your intermediate results. It will be easy to see where you wandered off.
#21 looks like first you multiply, then you divide. I'm not sure about your wording. When you say "divide", are you really saying "divide by"? I assumed you were dividing the product of
(0.00239 x 10000) by 1.5 and that's what I did. If you meant it the other way around, it would be $1.5\div(0.00239\times10000)=1.5\div23.9=\frac{15}{239}$
#11 looks like a subtraction problem to me. Did I miss something? All I did was make sure I had a common denominator of 9 for the two fractions. Then, I subtracted the numerators.
$\frac{5}{9}$ requires us to do nothing.
$\frac{1}{3}$ must be changed to $\frac{3}{9}$
You're welcome.
#22. 4501.2 x 0.003
Simple multiplication. Are you allowed to use a calculator? The product should have 4 decimal places. Try it.
4501.2 x 0.003 = 13.5036, which could just as well be written 13.50360, 13.503600, and so on; in other words, the written answer can show four decimal places
or a million.
BTW, for whole numbers greater than 1, a number that is not prime is composite. Historically 1 was sometimes treated as prime, but by the modern convention it is considered neither prime nor composite.
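A small Python check for the prime/composite questions at the top of the thread, following the usual modern convention that numbers below 2 are neither prime nor composite:

    def classify(n):
        if n < 2:
            return "neither"                          # 0, 1 and negatives are neither
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return "composite"
        return "prime"

    for n in (75, 37, 1, 42):
        print(n, classify(n))   # composite, prime, neither, composite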
|
{"url":"http://mathhelpforum.com/algebra/43550-prime-composite.html","timestamp":"2014-04-16T08:15:53Z","content_type":null,"content_length":"70034","record_id":"<urn:uuid:0e28a0aa-1654-463a-acfb-63242db784a7>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computing a T-transitive opening of a proximity
Garmendia, L. and Salvador, A. and Montero de Juan, Francisco Javier (2008) Computing a T-transitive opening of a proximity. In Computational intelligence in decision and control: proceedings of the
8th International FLINS Conference. World Scientific, Singapore, pp. 157-162. ISBN 978-981-279-946-3
Official URL: http://eproceedings.worldscinet.com/9789812799470/9789812799470_0026.html
A fast method to compute a T-indistinguishability from a reflexive and symmetric fuzzy relation is given for any left-continuous t-norm, with O(n^3) time complexity, where n is the number of
elements in the universe. It is proved that the computed fuzzy relation is a T-transitive opening when T is the minimum t-norm or a strictly growing t-norm. As far as we know, this is the first known
algorithm that computes T-transitive openings preserving the reflexive and symmetric properties.
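The paper's O(n^3) opening algorithm is not reproduced here, but the property it targets can be illustrated with a short sketch. The Python code below simply tests whether a fuzzy relation is min-transitive; the relation R is an arbitrary made-up example:

    import numpy as np

    # A reflexive, symmetric fuzzy relation on a 3-element universe (made-up values)
    R = np.array([[1.0, 0.7, 0.4],
                  [0.7, 1.0, 0.9],
                  [0.4, 0.9, 1.0]])

    def is_min_transitive(R):
        # min-transitivity: R[i, k] >= min(R[i, j], R[j, k]) for all i, j, k
        n = len(R)
        return all(R[i, k] >= min(R[i, j], R[j, k])
                   for i in range(n) for j in range(n) for k in range(n))

    print(is_min_transitive(R))   # False: R[0, 2] = 0.4 < min(R[0, 1], R[1, 2]) = 0.7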
Item Type: Book Section
Additional Information: 8th International Conference on Fuzzy Logic and Intelligent Technologies in Nuclear Science. Madrid, Spain, 21-24 September 2008
Uncontrolled Keywords: Fuzzy relations; Closures
Subjects: Sciences > Mathematics > Logic, Symbolic and mathematical
ID Code: 16912
Deposited On: 29 Oct 2012 10:59
Last Modified: 29 Oct 2012 10:59
|
{"url":"http://eprints.ucm.es/16912/","timestamp":"2014-04-19T11:57:41Z","content_type":null,"content_length":"24539","record_id":"<urn:uuid:12cc3845-63ee-414c-9eb3-e3cd1f6056f9>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Locally Resonant Band Gaps in Flexural Vibrations of a Timoshenko Beam with Periodically Attached Multioscillators
Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 146975, 10 pages
Research Article
Locally Resonant Band Gaps in Flexural Vibrations of a Timoshenko Beam with Periodically Attached Multioscillators
College of Civil Engineering and Architecture, Zhejiang University, Hangzhou 310058, China
Received 22 October 2012; Revised 27 January 2013; Accepted 27 January 2013
Academic Editor: Zhongqing Su
Copyright © 2013 Zhenyu Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
A new beam structure with periodically attached multioscillators is proposed based on the idea of locally resonant (LR) phononic crystals (PCs) to reduce flexural vibrations in the
frequency-multiplication ranges. Wave band structures of the new beam are derived by using the transfer matrix method. The multiple band gaps in the beam are then verified by the frequency response
function (FRF), which is calculated through the finite element method. In addition, simplified models are proposed, which contribute to the calculation of the edge frequencies of the band gaps and
enhance the understanding of the LR mechanism of PCs. The accuracy of the simplified models is proven by comparing them with the results derived from the analytical model under different beam
structure parameters. The results suggest that lower frequencies and ranges of frequency multiplications can be achieved in the band gaps which are obtained from the new beam structure with
multioscillators in a unit cell. Therefore, the ideas presented in this paper have the potential to be used in developing new devices with frequency-multiplication characteristics for vibration
isolation or noise control in aerospace and civil structures.
1. Introduction
Methods to control the propagation of elastic waves, such as vibration reduction and noise isolation, are often the focus of engineering studies. Much research has been conducted over many years to
suppress unwanted vibration or noise. A variety of vibration control technologies, including visco-elastic materials, springs, soft materials, hydraulic dampers, and pneumatic isolators, among
others, were gradually developed and are widely used in engineering practice [1]. As technology progresses, scientific equipment and structures are developed further to be more complex and precise.
The control of the higher-order vibration or coupled vibration in these complicated structures, as well as the higher precision and flexibility of the vibration isolation in precise instruments, is
increasingly important. Currently, the traditional vibration/noise control technologies are facing new challenges.
In the last decade, the emergence and development of phononic crystals (PCs) have inspired new ideas for wave control [2–4]. PCs are artificial composite materials that are formed by periodic
variations of properties and structures of the material. One notable aspect of these PCs is the wave filtering property of the so-called “band gaps,” which are selected frequency ranges in which
elastic waves cannot propagate through the periodic system. This property means that the vibration can be well mitigated when its frequency is located in the specified band gaps of the PCs. In
addition, the frequency-space distributions of the band gaps for a PC can be regulated by the properties, geometries, and arrangements of the elements composing the so-called “artificial crystal.”
Such a unique property promises an enormous potential for the development of vibration isolation structures [5], wave filters [6], sonic shields [7], and other applications; these developments may
provide new ways to achieve the aims that are difficult to realise with the traditional vibration/noise control technologies.
There has been a great deal of research on the mechanisms and properties of band gaps. The earlier investigations of PCs are commonly based on the Bragg scattering mechanism [3]. Such band gaps are
called Bragg-type gaps, whose centre frequencies are governed by the Bragg condition a = nλ/2 (n = 1, 2, ...), where a is the lattice constant of the periodic system and λ is the wavelength in the host material. The Bragg
condition indicates that Bragg-type gaps are not practical for filtering waves in the low frequency range because the lattice constant must be of the same order as the relevant wavelength. In
contrast, the locally resonant (LR) mechanism proposed by Liu et al. [8] makes it possible to obtain resonance-type gaps with lattice constants that are two orders of magnitude smaller than the
relevant wavelength by proposing a type of LR PC, which has attracted considerable attention in this field [9–16]. The LR mechanism is mainly based on the idea of mounting periodic arrays of local
resonators to a host medium. Thus, the frequency can be tuned to the desired values by varying the parameters and structures of the local resonators. In analogy with LR PCs, the idea of
resonance-type band gaps has recently been attempted, both theoretically and experimentally, for rods [17–19], beams [5, 20–24], pipes [25], and plates [26–28] in vibration-control engineering.
Beams are typical structural elements of many engineering constructions and equipments. The control of wave propagation in beams is of great importance in aerospace and civil structures because the
unwanted transmission of waves can lead to safety issues or environmental consequences. Based on the concept of LR PCs, some research focuses on the existence of low-frequency resonance gaps in
infinite systems and the validation of gap characteristics by calculating/measuring the frequency response functions (FRFs) of finite samples [5, 20, 21]. Yu et al. investigated the flexural
vibration band gaps in beams with locally resonant structures that have a single degree of freedom [5] and two degrees of freedom [20]. Liu et al. discussed the frequency range and attenuation
coefficient of the locally resonant gap with different local resonators [21]. However, these studies all focused on a single band gap, which is not suitable for the reduction of vibrations in the
multiple frequency ranges in engineering because the high-order modes of beams may also be involved in the vibrations. The same is true in a rotor system, in which the flexural vibration is also
increased at two and four times the fundamental frequency due to angular misalignment of the coupling [29].
Recently, the coexistence of resonance-type and Bragg-type band gaps was found in LR beams [22, 23]. Liu and Hussein observed the transition state between resonance-type and Bragg-type band gaps as
well as an interesting wave behaviour caused by the interplay of these two mechanisms in LR beams [22]. Xiao et al. achieved broader band gaps in a locally resonant beam with multiple arrays of
damped resonators at frequencies both below and around the Bragg condition [23]. These researchers’ works can derive multiple band gaps under certain circumstances, whereas the wave attenuation in a
Bragg-type band gap is too small to meet the higher isolation demand. In addition, complicated structure constructions are needed to achieve a significant amount of wave attenuation in the Bragg-type
band gap. Similar research can be found in the study by Wen et al., which attempted to add multiple oscillators to a unit cell of an Euler-Bernoulli beam to obtain multiple resonance-type band gaps [
24]. However, the first gap these researchers derived was too narrow to mitigate vibration in the low-frequency range, and the band gaps in the frequency-multiplication ranges were not provided.
Thus, their research cannot well deal with the problem of vibration reduction within multiple frequency ranges, especially in the frequency-multiplication ranges.
The main purpose of this paper is to achieve more flexible resonance-type multiband gaps by proposing a new beam with periodically attached multioscillators. The lower initial frequency and band gaps
in the frequency-multiplication ranges are expected to be obtained in the new beam, which can meet the demand of wave attenuation in multiple frequency ranges in engineering. In addition, simplified
models for the corresponding edge frequencies of the band gaps are studied, which can contribute to further understanding of the LR mechanism of PCs and the realisation of composite structures with
multiple band gaps. The paper is organised as follows. The exact dispersion relations for the propagation of flexural vibrations in infinite Timoshenko beams that are periodically connected with
multioscillators are derived in Section 2. The analytical results for the band gaps derived from the new beam are illustrated in Section 3, and the transmission FRFs obtained using the finite element
(FE) method are provided to verify the accuracy of the band gap distributions. In Section 4, simplified models are proposed to calculate the initial and terminal frequencies. In addition, the band
gaps in the new beam are compared with the beam studied in [24] under different structure parameters. Finally, conclusions are presented in Section 5.
2. Analytical Models
In this section, the transfer matrix method is used to derive the exact dispersion relations for the Timoshenko beams with periodically attached multioscillators, which allows for continuity
conditions at the two surface boundaries of each unit cell through the use of matrices [29, 30]. The analytical models of the beams are illustrated in Figure 1. The straight beams extend infinitely
along the axis and have an annular cross section. There are two oscillators assembled on the beam at uniformly spaced intervals, and each oscillator comprises a spring and a mass . The length of the
interval is called the lattice constant a of the PCs. Only flexural vibrations are assumed to occur in the beam. The transverse displacement y(x, t) of the Timoshenko beam satisfies the following equation of motion [5, 31]:
EI ∂^4y/∂x^4 + ρA ∂^2y/∂t^2 − ρI(1 + E/(κG)) ∂^4y/(∂x^2 ∂t^2) + (ρ^2 I/(κG)) ∂^4y/∂t^4 = 0,
where E and G are the Young's modulus and shear modulus of the beam's material, respectively; ρ is the density; A is the cross-sectional area; κ is the Timoshenko shear coefficient; and I is the area moment of inertia with respect to the axis perpendicular to the beam axis. By separating out the time variable, y(x, t) can be written as y(x, t) = Y(x)e^{iωt}, where ω is the circular frequency. As discussed in [5], for the
nth cell, where na ≤ x ≤ (n + 1)a and n is the largest integer that is less than x/a, the amplitude Y(x) can be written as a superposition of the four Timoshenko flexural wave solutions in that cell.
For Model A, which is shown in Figure 1(a), the dynamic equations of the two oscillators connected between the th and th cell can be derived as where and are the displacements of the two oscillators
of the th cell at the centre of gravity. The absolute values of and are the vibration amplitudes of the two oscillators of the th unit cell. From (4), the relationship between and is
The interactive force between the first oscillator and the beam, , at is
For the case of Model B studied in [24], the dynamic equation for the two unconnected oscillators at the interface shown in Figure 1(b) can be derived as [24]
The solution is
From (8), the interactive force between the oscillators and the beam, , at is
According to the continuity of the displacement, slope, bending moment, and shear force at the interface between the nth and (n + 1)th unit cell,
by extracting the arbitrary coefficients from (10), these equations can be written in matrix form relating the coefficient vectors of adjacent cells.
Due to the periodicity of the structure, the Bloch theorem states that the state vector of one cell equals that of the previous cell multiplied by e^{iqa}, where q is the wave vector in the x direction. The problem can be transformed into an eigenvalue matrix equation |T(ω) − e^{iqa} I| = 0, where T(ω) is the transfer matrix of a unit cell and I is a unit matrix.
The dispersion relation between the wave vector q and the frequency ω can therefore be obtained. For any ω, if q is a real number, ω is in the pass band. If q has an imaginary part, the corresponding wave is damped in that region, and the imaginary part of q can be used to describe the attenuation properties in the band gaps.
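As an illustration of the eigenvalue step only, the sketch below scans frequencies for a much simpler periodic system (longitudinal waves in a two-material rod, with assumed illustrative material values), not the Timoshenko beam model of the paper; the Bloch criterion |trace T| > 2 marks band gaps in the same way.

import numpy as np

def segment_matrix(omega, length, E, A, rho):
    # Transfer matrix of a uniform rod segment for the state vector (u, N),
    # where u is the axial displacement and N = E*A*du/dx is the axial force.
    c = (E / rho) ** 0.5          # longitudinal wave speed
    k = omega / c
    EAk = E * A * k
    return np.array([[np.cos(k * length),        np.sin(k * length) / EAk],
                     [-EAk * np.sin(k * length), np.cos(k * length)]])

def in_band_gap(freq_hz):
    # One unit cell = stiff segment followed by soft segment (illustrative values).
    omega = 2.0 * np.pi * freq_hz
    T1 = segment_matrix(omega, 0.05, 2.1e11, 1.0e-4, 7800.0)   # steel-like
    T2 = segment_matrix(omega, 0.05, 1.0e7,  1.0e-4, 1300.0)   # rubber-like
    T = T2 @ T1
    # Eigenvalues of T are exp(+/- i q a), so 2 cos(qa) = trace(T);
    # |trace| > 2 means q is complex and the wave is attenuated (a band gap).
    return abs(np.trace(T)) > 2.0

for f in (50.0, 500.0, 2000.0, 8000.0):
    print(f, "band gap" if in_band_gap(f) else "pass band")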
3. Numerical Simulation and Comparison
Figure 2 shows two simulation structures of the beam with oscillators based on Model A and Model B, respectively. Both beams are constructed using an aluminium tube, and the oscillators are composed
of soft rubber rings and metal rings. As shown in Figure 2(a), the multioscillators in Model A are structures composed of four connected rings in each unit cell, whereas the periodically attached
multioscillators in Model B presented in Figure 2(b) are two adjacent structures that are each composed of a rubber ring and a metal ring. The inner and outer radii of the tube are m and m,
respectively. The lattice constant is m, and the length of all the rings is m. The outer radius of the first rubber ring, which is in contact with the tube, is m. The outer radius of the first
metal ring, which is in contact with the first rubber ring, is m. The radii of the second rubber ring and the metal ring in Figure 2(a) will be determined in the following discussion.
All of the material parameters used in the calculations are listed in Table 1. As discussed in [32], the shear coefficient κ of the Timoshenko beam can be determined from a formula in which ν denotes the Poisson ratio.
The radial stiffness of the rubber ring can be calculated using the expression given in [33], in which a shape coefficient of the ring appears.
For comparison, the structure parameters for Model B are taken from [24]. The stiffness and mass of the LR structures in Model A and Model B are the same, namely, kg, kg, and N/m. The radii of the
second rings are set to m and m.
The band structures of both models are shown in Figure 3. The complete band gaps are shaded. In both models, two complete band gaps are found between 0 and 800 Hz. The band gap characteristics of
Model B based on the Timoshenko beam theory are verified with the results derived in [24]. Compared with the beam with only one oscillator in a unit cell in [5], although the initial frequencies of
the first band gaps are both 309 Hz, the total width of the first two band gaps in Model B is increased from 170.3 Hz to 297.1 Hz. However, the positions of the first two band gaps are too close
and are not suitable for vibration reduction within multiple frequency ranges, especially if the frequency ranges in which vibration must be reduced have large intervals.
Figure 3(a), obtained using Model A, shows two widely separated band gaps that are not obtained in Model B. The initial frequency of the first gap in Model A is decreased to 201.4 Hz. Note that the
centre frequency of the second band gap (511.8–560.9 Hz) is approximately two times the centre frequency of the first gap (201.4–348.4 Hz); that is, a frequency-multiplication relationship between the
two resonance-type gaps can be achieved. The "frequency-multiplication relationship" mentioned here can be explained as follows: the central frequency of the nth band gap is close to n times the fundamental frequency (here n
could be 1 or 2), where the fundamental frequency is the central frequency of the first band gap. Thus, the flexural vibrations at one and two times the fundamental frequency can be well reduced, a
phenomenon that can be employed to address a case such as the above-mentioned angular misalignment of the coupling in a rotor system. In addition, the first band gap in Model A is wider than the
second band gap, which helps to flexibly reduce vibrations of the beam at low frequencies.
The existence of the band gaps calculated from the infinite system can be verified by the transmission property derived from a corresponding finite system because PCs with a sufficient number of unit
cells can provide a large wave attenuation in the corresponding band gap range [11]. The FRF, which represents the relationship between the wave response and the corresponding frequencies, has been
used to describe vibration gaps effectively. Therefore, the finite system of Model A is created in Abaqus to calculate the FRF. The mesh model for the FE method is illustrated in Figure 4(a), which
has the same geometry in the unit cell as the model in Figure 2(a). Based on the analysis of different numbers of unit cells in the structure [11], eight unit cells used in an FE simulation can achieve
a sufficient accuracy for approximating the results of the infinite system. Therefore, the length of the beam is 0.6 m in Figure 4(a). To guarantee the free vibration of the beam, there are no
boundary constraints at the ends of the beam. The acceleration is induced at the left end of the beam in the transverse direction, and the corresponding acceleration is extracted at the opposite end. The
frequency responses are illustrated as solid lines in Figure 4(b).
Note the two sharp drops below the 0 dB line (dashed line) in Figure 4(b), which indicate the ranges of the band gaps. Compared with the response outside the band gaps, the average response
attenuation of the two band gaps is approximately 40 dB. The ranges of the first two gaps calculated using the FE method are similar to those of the gaps shown in Figure 3(a), which are obtained from
the infinite structure. It can be concluded that the previous analysis is accurate, and using LR structures with equivalent parameters, Model A has better band gaps for vibration reduction in the
frequency-multiplication ranges than does Model B. Considering the advantages of each model, an LR PC beam with the desired band gap properties can be obtained by choosing the most appropriate plan.
4. Simplified Models for the Edge Frequencies of Band Gaps
In this section, the corresponding simplified models for the initial and terminal frequencies of the band gaps for Model A are studied. The simplified models for Model B have been discussed
previously in [24].
4.1. Initial Frequency Model
The initial frequency of the first band gap in a typical LR PC is determined by the resonance frequency of the oscillator in the same direction. In this resonance mode, the oscillators vibrate in
specific directions, and the phases of the oscillator vibrations in adjacent unit cells are reversed to keep the dynamic balance [4, 24]. Thus, the simplified model for the initial frequencies of the
two oscillators can be formed as shown in Figure 5.
The equations of motion for the model are as follows: m1 d²u1/dt² = −k1 u1 + k2 (u2 − u1) and m2 d²u2/dt² = −k2 (u2 − u1), where u1 and u2 represent the displacements of the respective oscillators, k1 and m1 are the stiffness and mass of the oscillator attached to the beam, and k2 and m2 those of the outer oscillator.
The natural angular frequency ω satisfies the equation m1 m2 ω⁴ − [m1 k2 + m2 (k1 + k2)] ω² + k1 k2 = 0.
Thus, the two roots ω1 and ω2 can be obtained, and the initial frequencies of the first two band gaps are f1 = ω1/(2π) and f2 = ω2/(2π).
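A quick numerical check of this two-oscillator chain (a sketch only; the masses and stiffnesses below are illustrative values, not the parameters of the paper) solves the generalized eigenvalue problem directly:

import numpy as np

# Illustrative values only (not taken from the paper).
m1, m2 = 0.08, 0.05            # kg
k1, k2 = 5.0e5, 3.0e5          # N/m

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])

# Natural frequencies of the chain: solve K v = omega^2 M v.
omega_sq = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
print(np.sqrt(omega_sq) / (2.0 * np.pi))   # the two estimated initial frequencies, in Hz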
4.2. Terminal Frequency Model
All of the oscillation phases of the unit cells are in the same direction at the terminal frequency of the band gap. The dynamic balance is given by the antiphases between the LR structures and the
matrix [4, 24]. The matrix mentioned in [4, 24] is the beam in this paper by analogy. As illustrated in Figure 6, the simplified model for the terminal frequencies comprises the beam mass and the
oscillators in the unit cell. There is a static point between the beam and the connected oscillators that divides the model into two parts that have the same natural frequencies. At the static point,
the spring can be considered a series connection of two springs and , which are related by the following equation:
The components to the right of the static point (dashed box) can be observed as a single unit. The natural angular frequency is described by (17) to (19), where is replaced by .
Because the resonances of the matrix and the connected oscillators are at the same frequency,
Thus, the relation between and can be extracted and combined with the previous discussion in Section 4.1. Therefore,
and the terminal frequencies of the first two band gaps are .
Figures 7, 8, and 9 illustrate the dependence of the band gaps on the oscillators’ mass ratio, the oscillators’ stiffness ratio, and the beam’s mass, respectively. In addition, for Model A, the
calculations of the initial and terminal frequencies of the first two band gaps using both the analytical and simplified models are presented to verify the accuracy of the deduced formulae. The
shadow regions indicate the band gaps, and the details of the data are illustrated in the top right corner of each figure.
Figures 7(a), 8(a), and 9(a) show that the frequencies of the band gaps obtained using the simplified models and the analytical model are in good agreement. These results prove the accuracy and
validity of the methods proposed in this paper. The beams with periodically attached multioscillators have similar resonance modes to those of typical LR PCs at the boundary frequencies of the band
gaps. This result reveals the characteristics of the LR mechanism, which is helpful in the construction of new devices with LR band gaps.
By comparing the (a) subfigure with the (b) subfigure in Figures 7–9, it can be seen that the variation tendencies of Model A are similar to those of Model B. With the increase of the oscillators’
mass as well as the beam’s mass or with the reduction of the oscillators’ stiffness, the frequencies of the band gaps are decreased. However, there is usually a distance between the first two band
gaps in Model A, and the band gaps that are widely separated can be obtained without large differences in the parameters of the oscillators. Thus, Model A has better regulation and control abilities
in practical engineering. In addition, with the same material parameters, Model A clearly always has a lower initial frequency and a wider first band gap than Model B and is able to provide a larger
range of vibration reduction at low frequencies. To achieve lower frequencies, Model B should use a larger mass or a smaller stiffness, which is uneconomic in most engineering. In addition, although
the total width of the gaps is larger, a narrow first band gap is almost inevitable in Model B. Furthermore, the band gaps with a frequency-multiplication relationship cannot be well derived in Model
B; thus, this model is not appropriate for vibration damping or noise reduction in the frequency-multiplication ranges. In general, Model A has specific abilities and can be more reasonably and
feasibly applied to practical structures because of the advantages of lower frequencies, frequency-multiplication relationships, and material costs.
5. Conclusions
In this paper, a new Timoshenko beam structure with periodically attached multioscillators is proposed to obtain band gaps in the frequency-multiplication ranges based on the LR mechanism of PCs.
Explicit matrix formulations are derived for the calculation of wave band structures of the new beam by using the transfer matrix method. The gap characteristics of the beam are confirmed by
calculating the FRF of the corresponding finite structure. The numerical calculations of the band structures and the analysis of the model parameters demonstrate that the beams with periodically
attached multioscillators have more abundant gap characteristics than those with only one oscillator in a unit cell. By using common materials and an uncomplicated beam structure, multiple
resonance-type band gaps with large wave-attenuation and frequency-multiplication ranges, together with the wider and lower first band gap, are derived in the new beam; this result was not
illustrated in any of the previous studies on LR PC beams. In addition, simplified models are proposed to deduce accurate estimation formulae for the initial and terminal frequencies of the band gaps
in the new beam. The simplified models will also contribute to enhanced understanding of the LR mechanism of PCs and will facilitate the analysis of similar structures.
The research findings presented in this paper provide suggestions for future studies of small-size PCs with low frequencies and multiple resonance-type band gaps. Moreover, the results can be
employed to create new devices that reduce vibration and mitigate noise in the frequency-multiplication ranges for aerospace and civil structures.
This work is supported by the National Natural Science Foundation of China under Grant nos. 51079127, 51179171, and 51279180.
References
1. E. I. Rivin, Passive Vibration Isolation, ASME Press, New York, NY, USA, 2003.
2. M. M. Sigalas and E. N. Economou, "Elastic and acoustic wave band structure," Journal of Sound and Vibration, vol. 158, no. 2, pp. 377–382, 1992.
3. M. S. Kushwaha, P. Halevi, L. Dobrzynski, and B. Djafari-Rouhani, "Acoustic band structure of periodic elastic composites," Physical Review Letters, vol. 71, no. 13, pp. 2022–2025, 1993.
4. X. S. Wen, J. H. Wen, D. L. Yu, et al., Phononic Crystals, National Defense Industry Press, Beijing, China, 2009.
5. D. Yu, Y. Liu, G. Wang, H. Zhao, and J. Qiu, "Flexural vibration band gaps in Timoshenko beams with locally resonant structures," Journal of Applied Physics, vol. 100, no. 12, Article ID 124901, 2006.
6. J. H. Sun, C. W. Lan, C. Y. Kuo, and T. T. Wu, "A ZnO/silicon Lamb wave filter using phononic crystals," in Proceedings of the IEEE International Frequency Control Symposium, pp. 1–4, Baltimore, Md, USA, May 2012.
7. K. M. Ho, C. K. Cheng, Z. Yang, X. X. Zhang, and P. Sheng, "Broadband locally resonant sonic shields," Applied Physics Letters, vol. 83, no. 26, pp. 5566–5568, 2003.
8. Z. Liu, X. Zhang, Y. Mao et al., "Locally resonant sonic materials," Science, vol. 289, no. 5485, pp. 1734–1736, 2000.
9. C. Goffaux, J. Sánchez-Dehesa, A. L. Yeyati et al., "Evidence of Fano-like interference phenomena in locally resonant materials," Physical Review Letters, vol. 88, no. 22, Article ID 225502, 4 pages, 2002.
10. P. Sheng, X. X. Zhang, Z. Liu, and C. T. Chan, "Locally resonant sonic materials," Physica B, vol. 338, no. 1–4, pp. 201–205, 2003.
11. J. S. Jensen, "Phononic band gaps and vibrations in one- and two-dimensional mass-spring structures," Journal of Sound and Vibration, vol. 266, no. 5, pp. 1053–1078, 2003.
12. M. Hirsekorn, "Small-size sonic crystals with strong attenuation bands in the audible frequency range," Applied Physics Letters, vol. 84, no. 17, pp. 3364–3366, 2004.
13. G. Wang, D. Yu, J. Wen, Y. Liu, and X. Wen, "One-dimensional phononic crystals with locally resonant structures," Physics Letters A, vol. 327, no. 5-6, pp. 512–521, 2004.
14. Z. Liu, C. T. Chan, and P. Sheng, "Analytic model of phononic crystals with local resonances," Physical Review B, vol. 71, no. 1, Article ID 014103, 8 pages, 2005.
15. H. H. Huang and C. T. Sun, "Wave attenuation mechanism in an acoustic metamaterial with negative effective mass density," New Journal of Physics, vol. 11, Article ID 013003, 2009.
16. H. H. Huang, C. T. Sun, and G. L. Huang, "On the negative effective mass density in acoustic metamaterials," International Journal of Engineering Science, vol. 47, no. 4, pp. 610–617, 2009.
17. G. Wang, X. Wen, J. Wen, and Y. Liu, "Quasi-one-dimensional periodic structure with locally resonant band gap," Journal of Applied Mechanics, vol. 73, no. 1, pp. 167–170, 2006.
18. D. Yu, Y. Liu, G. Wang, L. Cai, and J. Qiu, "Low frequency torsional vibration gaps in the shaft with locally resonant structures," Physics Letters A, vol. 348, no. 3–6, pp. 410–415, 2006.
19. Y. Xiao, J. H. Wen, and X. S. Wen, "Longitudinal wave band gaps in metamaterial-based elastic rods containing multi-degree-of-freedom resonators," New Journal of Physics, vol. 14, no. 3, Article ID 033042, 2012.
20. D. Yu, Y. Liu, H. Zhao, G. Wang, and J. Qiu, "Flexural vibration band gaps in Euler-Bernoulli beams with locally resonant structures with two degrees of freedom," Physical Review B, vol. 73, no. 6, pp. 1–5, 2006.
21. Y. Liu, D. Yu, L. Li, H. Zhao, J. Wen, and X. Wen, "Design guidelines for flexural wave attenuation of slender beams with local resonators," Physics Letters A, vol. 362, no. 5-6, pp. 344–347, 2007.
22. L. Liu and M. I. Hussein, "Wave motion in periodic flexural beams and characterization of the transition between Bragg scattering and local resonance," Journal of Applied Mechanics, vol. 79, no. 1, Article ID 011003, 17 pages, 2012.
23. Y. Xiao, J. H. Wen, and X. S. Wen, "Broadband locally resonant beams containing multiple periodic arrays of attached resonators," Physics Letters A, vol. 376, no. 16, pp. 1384–1390, 2012.
24. Q. H. Wen, S. G. Zuo, and H. Wei, "Locally resonant elastic wave band gaps in flexural vibration of multi-oscillators beam," Acta Physica Sinica, vol. 61, no. 3, Article ID 034301, 2012.
25. D. Yu, J. Wen, H. Zhao, Y. Liu, and X. Wen, "Vibration reduction by using the idea of phononic crystals in a pipe-conveying fluid," Journal of Sound and Vibration, vol. 318, no. 1-2, pp. 193–205, 2008.
26. M. Oudich, Y. Li, B. M. Assouar, and Z. Hou, "A sonic band gap based on the locally resonant phononic plates with stubs," New Journal of Physics, vol. 12, Article ID 083049, 2010.
27. J. C. Hsu, "Local resonances-induced low-frequency band gaps in two-dimensional phononic crystal slabs with periodic stepped resonators," Journal of Physics D, vol. 44, no. 5, Article ID 055401, 2011.
28. Y. Xiao, J. H. Wen, and X. S. Wen, "Flexural wave band gaps in locally resonant thin plates with periodically attached spring-mass resonators," Journal of Physics D, vol. 45, no. 19, Article ID 195401, 2012.
29. W. T. Thomson, "Transmission of elastic waves through a stratified solid medium," Journal of Applied Physics, vol. 21, no. 2, pp. 89–93, 1950.
30. R. Esquivel-Sirvent and G. H. Cocoletzi, "Band structure for the propagation of elastic waves in superlattices," Journal of the Acoustical Society of America, vol. 95, no. 1, pp. 86–90, 1994.
31. R. A. Méndez-Sánchez, A. Morales, and J. Flores, "Experimental check on the accuracy of Timoshenko's beam theory," Journal of Sound and Vibration, vol. 279, no. 1-2, pp. 508–512, 2005.
32. T. Kaneko, "On Timoshenko's correction for shear in vibrating beams," Journal of Physics D, vol. 8, no. 16, pp. 1927–1936, 1975.
33. C. S. Zhao and S. J. Zhu, "Study on the static stiffness characteristics of rubber-metal ring," China Mechanical Engineering, vol. 15, no. 11, pp. 962–964, 2004.
|
{"url":"http://www.hindawi.com/journals/mpe/2013/146975/","timestamp":"2014-04-20T05:00:22Z","content_type":null,"content_length":"298582","record_id":"<urn:uuid:b11b4ac1-b653-4d26-a121-3b0de7bf96f8>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Existence and Uniqueness
February 11th 2011, 11:35 AM
Existence and Uniqueness
Hi everyone,
I'm having trouble with the ideas involved with the uniquesness theorem for a linear nth order IVP.
As an example:
The initial value problem
$3y'''+5y''-y'+7y=0$, $y(1)=0$, $y'(1)=0$, $y''(1)=0$.
Now, I can see that $y=0$ is a trivial solution here, but by the theorem, this has got to be the ONLY solution on any interval containing 1. I don't see this. Is this true? Is there no other
solution to this problem ?
February 11th 2011, 12:17 PM
Let's suppose that the solution of the DE around $x=1$ is of the form...
$\displaystyle y(x)= \sum_{n=0}^{\infty} y^{(n)} (1)\ \frac{(x-1)^{n}}{n!}$ (1)
... and our goal is to find the $y^{(n)}(1)$ for all n. The 'initial conditions' give us $y(1)=y^{'}(1)=y^{''}(1)=0$. The successive derivatives are...
$\displaystyle y^{(3)} (1)= - \frac{5}{3}\ y^{(2)} (1) + \frac{1}{3}\ y^{(1)} (1) - \frac{7}{3}\ y(1)=0$ (2)
$\displaystyle y^{(4)} (1) = - \frac{5}{3}\ y^{(3)} (1) + \frac{1}{3}\ y^{(2)} (1) - \frac{7}{3}\ y^{(1)} (1) =0$ (3)
... and so on. All the derivatives in (1) vanish so that y=0 is the only solution...
Kind regards
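A quick numerical illustration of the same conclusion (not part of the original thread): integrating the IVP forward from x = 1 with zero initial data keeps the solution at zero, as the series argument predicts. A sketch using scipy:

import numpy as np
from scipy.integrate import solve_ivp

# 3y''' + 5y'' - y' + 7y = 0 written as a first-order system in (y, y', y'').
def rhs(x, s):
    y, dy, d2y = s
    return [dy, d2y, (-5.0 * d2y + dy - 7.0 * y) / 3.0]

sol = solve_ivp(rhs, (1.0, 5.0), [0.0, 0.0, 0.0], dense_output=True)
print(np.max(np.abs(sol.y)))   # stays at 0 (up to floating-point noise), so y = 0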
|
{"url":"http://mathhelpforum.com/differential-equations/170923-existence-uniqueness-print.html","timestamp":"2014-04-20T08:43:25Z","content_type":null,"content_length":"6919","record_id":"<urn:uuid:76136b91-496f-4a9c-876c-9be7db534d15>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Calculate Your Mortgage Amount Based on Monthly Payments
With a little work, you can figure out how much mortgage you can afford.
If you want a mortgage loan, you'll have to prove you can pay it back. Lenders typically check your income, employment, debts and credit history--including past bankruptcies or foreclosures--before
they agree to write a mortgage; they'll also want the house appraised to be certain that it's good collateral for the loan. One of the steps in qualifying for a loan is deciding how high a PITI
payment-- principal and interest on the mortgage, plus taxes and insurance--you'll be able to pay each month.
Calculate your maximum monthly PITI payment. The general rule, according to the Investopedia website, is that PITI should be no more than 28 percent of your monthly income, though some lenders will
go higher. Your total debts, including PITI, plus car loans, student loans and credit-card payments, should be no more than 36 percent. If your gross annual income is, for example, $84,000, divide by
12 to get your monthly income of $7,000. Your maximum PITI would be $1,960; your maximum total monthly debt payments (36 percent) would be $2,520.
Subtract taxes and insurance from your monthly PITI payment. If you're thinking of buying a $150,000 house, your real estate agent or local government can help you figure out the taxes; an insurance
agent can give you a rough quote on homeowners insurance premiums. If you have a PITI of $1,960 and insurance and taxes equal around $400, that leaves you with $1,560 a month for paying interest and principal.
Multiply the payment by 12 and then by 30 to see how much 30 years of monthly payments add up to. At $1,560 a month, the total equals $561,600, which is the maximum you could spend on both principal
and interest.
Plug your figures into a mortgage formula. For example, you can adapt the Foner Books formula for calculating monthly payments to calculate the principal instead. First, set “i” to equal the interest
rate divided by 12; and “n” to equal the number of months you’ll be making payments. Add "i" to 1 and raise the result to the power of “n” to get “y.” Divide your monthly payment by the total of
(i*y)/(y-1), and you’ll get the principal on your mortgage. For example, a 15-year mortgage with a 5 percent interest rate means "i" equals 0.004, "n" equals 180 and "y" equals 2.05. You end up with
$1,560 divided by 0.0078; your total mortgage principal would be $200,000.
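As a rough sketch of the whole calculation in code (using the article's example numbers; actual loan terms and lender rules will differ), the steps translate into a few lines of Python:

annual_income = 84000.0
monthly_income = annual_income / 12          # 7,000
max_piti = 0.28 * monthly_income             # 1,960
taxes_and_insurance = 400.0
payment = max_piti - taxes_and_insurance     # 1,560 left for principal and interest

rate = 0.05 / 12                             # monthly interest rate "i"
n = 15 * 12                                  # number of monthly payments
y = (1 + rate) ** n
principal = payment / (rate * y / (y - 1))
print(round(principal))   # roughly 197,000; the article's 200,000 comes from rounding i to 0.004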
• If your lender requires mortgage insurance, don't forget to subtract that monthly payment from your PITI along with your homeowners insurance.
• Several websites will provide you with mortgage calculations if you input the financial data.
• If a 28 percent PITI payment would give you a debt-to-income ratio of more than 36 percent, your lender may insist on a PITI of less than 28 percent. Even if the lender is amenable to writing the
mortgage, the higher ratio puts you at greater risk of financial difficulty.
|
{"url":"http://homeguides.sfgate.com/calculate-mortgage-amount-based-monthly-payments-3139.html","timestamp":"2014-04-20T15:52:38Z","content_type":null,"content_length":"33790","record_id":"<urn:uuid:c43771c2-e3dd-4d78-a3ce-8a7ab912207a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Common Core Standards : CCSS.Math.Content.HSF-LE.A.3
Common Core Standards: Math
3. Observe using graphs and tables that a quantity increasing exponentially eventually exceeds a quantity increasing linearly, quadratically, or (more generally) as a polynomial function.
Students should be able to prove that eventually, as long as the functions are headed in the same direction, a quantity increasing exponentially will "beat" linear, quadratic, and polynomial
functions. Not much to it.
It's probably obvious that the function y = 3^x will eventually surpass y = 3x + 3. We can see this via a table of values or a graph. Somewhere down the line, when x gets closer and closer to
infinity, the y value of the exponential function will be larger than the y value of the linear function.
We can see that the two functions are equal at x = 2 and that 3^x pulls ahead for every x after that, whether we graph it or look at the table of values.
│x│3^x │3x + 3 │
│1│3 │6 │
│2│9 │9 │
│3│27 │12 │
│4│81 │15 │
│5│243 │18 │
What about other functions? Ones with exponents that aren't 1 or x? What about something like y = x^1000 compared to y = 1000^x? At a large enough x, will 1000^x really surpass x^1000?
The short answer is that yes, it will. Once x = 1000, the two will be equal. For anything greater, the exponential function will emerge victorious. Because even when x = 1001, we know that 1000^1001
> 1001^1000. Eventually, any exponential function with a base greater than 1 will override polynomial functions.
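A few lines of code make the same point without any graphing; this sketch simply compares the two quantities at several values of x (Python's integers handle the huge numbers exactly):

for x in [10, 100, 999, 1000, 1001, 1010]:
    poly = x ** 1000          # polynomial growth
    expo = 1000 ** x          # exponential growth
    print(x, "exponential is larger" if expo > poly else "polynomial is larger or they tie")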
1. Which of the following x values proves that f(x) = 2^x will surpass the function y = x^2 + x + 3?
Answer Explanation:
We could either plug these values into the two functions and see which one works (in which case we'd end up with 40), or we could graph the functions and find the point at which 2^x surpasses x^2
+ x + 3.
2. A local newspaper has projected that its online revenues are growing at a rate modeled by y = 1.5^x while its print media only increases at a rate of y = 36x + 35. At which point (in terms of x)
do online revenues surpass that of print revenues?
Answer Explanation:
You can graph it, solve it mathematically, or construct a table to see that a little before x = 20, our exponential function has surpassed the linear one. It took a while, but it got
there eventually!
3. HollerDollar Banking Co. has said that its investment strategy yields a pretty sizeable return rate compared to its competitor bank, Investinus. Its formula for profits models a massive y = x^54
+ 36x^50 + 1500 return rate compared to the other bank's, which is modeled by y = 1000^x. HollerDollar claims its investment strategy can never be surpassed. Are they correct?
Correct Answer:
No, Investinus catches up eventually
Answer Explanation:
As x starts out, HollerDollar is definitely in the lead. However, once we start getting into the realm of x = 20, Investinus has caught up sufficiently. By the time we get to x = 27, however,
Investinus is somewhere around y = 1 × 10^81 while HollerDollar is at 1.97 × 10^77, a difference of about 4 orders of magnitude. Maybe we should invest in Investinus.
4. By simply looking at the equations below, which one will eventually surpass the others?
Answer Explanation:
Because this is an exponential function, it will eventually surpass all the others since it increases at an ever-increasing rate. That means linear functions and polynomials are simply no match for it.
5. Which of the following is a linear equation?
Correct Answer:
Answer Explanation:
The standard notation for a linear function is (A). Even though the polynomial answer, (D), can get very large, both the linear and polynomials will be superseded by a (B) or (C) type equation,
which are both exponential.
6. A quadratic function is just a variant of a polynomial function. Is this statement true or false?
Answer Explanation:
A quadratic function is essentially written as ax^2 + bx + c = y. It's a second-degree polynomial (because of the x^2 part). A third-degree polynomial would have x^3 instead of x^2.
7. Which of the following is possible if the function y = x + 50 is surpassed by y = 1.1^x?
Answer Explanation:
The easiest way would be to plug in the values of 10, 20, 40, and 50 into the two functions. While x = 40 will give us 40 + 50 = 90 and 1.1^40 ≈ 45.3, x = 50 will give us 50 + 50 = 100 and 1.1^50
≈ 117.4. That means the right answer is (D).
8. At what point will the function y = 16^x surpass y = 16x?
Answer Explanation:
If we substitute x = 1, we get 16 for both (since 16^1 = 16 × 1). However, anything above instantly makes 16^x shoot up into the sky while 16x maintains its relatively slow constant ascent. Even
x = 2 makes a substantial difference between 16(2) = 32 and 16^2 = 256.
9. Which of the following graphs looks most like an exponential function?
Answer Explanation:
Considering what we know about exponential graphs, they need to be constantly increasing. The answer can't be (B) because it increases at a constant rate, not an increasing one. Even though (C)
looks promising, there is no value for x in y = a^x that would make y negative. That means that (A) is the right answer.
10. Given the following graph, which of the lines/curves, most likely represents the exponential function (assuming there's only one)?
Answer Explanation:
Despite both (A) and (B) looking like curves and having what appear to be ever increasing rates, remember that the exponential function can surpass any polynomial function. That means the
exponential function is most likely the one that surpasses all the other ones. So just to be safe, (A) is our best option.
|
{"url":"http://www.shmoop.com/common-core-standards/ccss-hs-f-le-3.html","timestamp":"2014-04-18T06:30:34Z","content_type":null,"content_length":"65702","record_id":"<urn:uuid:ec087788-c17f-4797-806e-cc7f710f8f28>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] chararray behavior
Alan McIntyre alan.mcintyre@gmail....
Wed Jul 9 00:49:59 CDT 2008
On Tue, Jul 8, 2008 at 3:30 PM, Anne Archibald
<peridot.faceted@gmail.com> wrote:
> In particular, the returned type is always "string of length four",
> which is very peculiar - why four? I realize that variable-length
> strings are a problem (object arrays, I guess?), as is returning
> arrays of varying dtypes (strings of length N), but this definitely
> violates the principle of least surprise...
Hmm..__mul__ calculates the required size of the result array, but the
result of the calculation is a numpy.int32. So ndarray.__new__ is
given this int32 as the itemsize argument, and it looks like the
itemsize of the argument (rather than its contained value) is used as
the itemsize of the new array:
>>> np.chararray((1,2), itemsize=5)
chararray([[';<f', '\x00\x00\x00@']],
>>> np.chararray((1,2), itemsize=np.int32(5))
chararray([['{5', '']],
>>> np.chararray((1,2), itemsize=np.int16(5))
chararray([['{5', '']],
Is this expected behavior? I can fix this particular case by forcing
the calculated size to be a Python int, but this treatment of the
itemsize argument seems like it might be an easy way to cause subtle bugs.
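For reference, the workaround mentioned above is just to coerce the computed size back to a builtin int before handing it to the constructor. A sketch (the exact fix applied inside numpy may differ, and the 4-vs-5 behavior only shows up on the affected numpy versions):

import numpy as np

size = np.int32(5)                              # the kind of value __mul__ produces
a = np.chararray((1, 2), itemsize=size)         # itemsize taken from the int32 object itself
b = np.chararray((1, 2), itemsize=int(size))    # coerced to a builtin int first
print(a.itemsize, b.itemsize)                   # 4 vs 5 on the affected versions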
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-July/035499.html","timestamp":"2014-04-18T11:05:39Z","content_type":null,"content_length":"3930","record_id":"<urn:uuid:47d6aa3a-949c-4c3a-95eb-b7f936679085>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Prime Numbers
Date: 01/31/97 at 19:28:21
From: robby
Subject: prime numbers
What are the prime numbers and why are they prime numbers?
Date: 02/03/97 at 13:22:50
From: Doctor Reno
Subject: Re: prime numbers
Hi, Robby!
You've asked my favorite question! I love prime numbers, and spend
a lot of time with my sixth grade class studying them.
Before we talk about prime numbers, though, Robby, I want to review
some multiplication vocabulary with you...
Let's look at the multiplication problem 5 x 6 = 30. The 5 and 6 are
called factors. They are the numbers we multiply together to get the
answer 30, which is called the product. 30 is also called a multiple
of 5 because it is a product of 5 and another number, 6. 30 is also a
multiple of 6 because it is the product of 6 and 5. The number 5 has
many other multiples, like 10, 15, 20, 25, 35, 40, etc. The number 6
also has many other multiples, like 12, 18, 24, 36, 42, etc. Notice,
Robby, that multiples are the "times tables" for a number. 30 is a
multiple of not only 5 and 6, but also of 2 (2x15), 3 (3x10), 10,
and 15.
A prime number is a number greater than one whose factors are only
one and itself. In other words, 6 is not prime, because its factors
are 1, 2, 3, and 6 (1x6, 2x3). But 5 is prime, because the only way
you can get a product of 5 is by multiplying 1 and 5 (1x5).
Composite numbers are all the other positive numbers greater than one.
6 is composite.
The number 1 is not prime OR composite because it has only one factor.
Mathematicians have been fascinated by prime numbers for thousands of
years. In fact, Eratosthenes (275-194 BC, Greece), devised a "sieve"
to discover prime numbers. A sieve is like a strainer that you drain
spaghetti through when it is done cooking. The water drains out,
leaving your spaghetti behind. Well, Eratosthenes's sieve drains out
composite numbers and leaves prime numbers behind. To do what
Eratosthenes did, make a chart of the first one hundred whole numbers.
The Sieve of Eratosthenes:
Next, cross out 1, because it is not prime. Then, circle 2, because it
is the smallest positive even prime. Now cross out every multiple of
2; in other words, cross out every 2nd number.Then circle 3, the next
prime. Then cross out all of the multiples of 3; in other words, every
third number. Some, like 6, may have already been crossed out since
they may be multiples of 2. Then circle the next open number, 5. Now
cross out all of the multiples of 5, or every 5th number. Continue
doing this until all the numbers through 100 are either circled or
crossed out. Now, Robby, if you have remembered your multiplication
tables, you have just circled all the prime numbers less than 100.
Computer people have written computer programs that do this same sieve
of Eratosthenes. They use the sieve to test computers and to tell them
how much faster one computer runs than another computer. You don't
have to stop the sieve at 100 - you can go up as far as you want to
find all the prime numbers you want to find. But, as you found, there
is a lot of multiplication involved.
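If you ever want to let a computer do that multiplication for you, here is one way the sieve can be written as a short Python program (a sketch, not part of the original answer); it crosses out multiples exactly as described above:

def sieve(limit=100):
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False          # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False     # cross out every nth number
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(sieve(100))   # the 25 primes up to 100: 2, 3, 5, 7, 11, ...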
Mathematicians are always trying to find large prime numbers. There
are an infinite number of prime numbers, but they always want to know
what the next largest one is that can be written down. They have found
so many prime numbers now that only computers have the time and energy
to look for the next highest prime number. In fact, the newest prime
number is 420,921 digits long! If you want to see what it looks like,
go to:
This prime number was discovered on November 13, 1996, by Joel
Armengaud, a 29-year-old Parisian programmer, on a home computer much
like yours. The number is too long to write out, so it is written in a
special way, using exponents: 2^1398269 - 1. This tells us to
multiply two by itself (2x2x2x2...) 1,398,269 times! And after we do
that, we are supposed to subtract 1 from that product.
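You can even check the digit count without writing the whole number out: the number of digits of 2^1398269 - 1 is floor(1398269 x log10(2)) + 1. A tiny sketch:

import math
digits = math.floor(1398269 * math.log10(2)) + 1
print(digits)   # 420921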
There are many different kinds of prime numbers: twin primes, Germain
primes, Fermat primes, and Mersenne primes. Mersenne primes are
getting a lot of publicity now on the internet.
You might also want to read all about prime numbers in the Dr. Math archives.
Investigating prime numbers may also make you curious about perfect
numbers, Fibonacci numbers, and many other numbers. I really hope
that you enjoy investigating prime numbers - they really are fun!
-Doctor Reno, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
|
{"url":"http://mathforum.org/library/drmath/view/57047.html","timestamp":"2014-04-21T04:55:56Z","content_type":null,"content_length":"10411","record_id":"<urn:uuid:f636ab8b-c893-463a-aad7-967b022c0cae>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
|
τ  source time
( )ret  quantity is evaluated at the retarded time, τ = t − r/c
High-speed impulsive (HSI) noise is a particularly intense and annoying noise generated by helicopter rotors in high-speed forward flight. This HSI noise is closely associated with the appearance of
shocks and transonic flow around the advancing rotor blades. The quadrupole sources in the Ffowcs Williams-Hawkings (FW-H) equation [1] account for nonlinearities in the vicinity of the rotor blade.
These nonlinearities are of two types, which are described by Lighthill [2, 3]. First, the local speed of sound is not constant but varies due to particle acceleration. Second, the finite particle
velocity near the blade influences the velocity of sound propagation. By inclusion of the quadrupole source, the correct physics is mathematically simulated in the acoustic analogy. The quadrupole
source in the FW-H equation was identified by Yu et al. [4] as a significant contributor to helicopter HSI noise. Hanson and Fink [5] also included the quadrupole source for high-speed propeller noise
prediction but found that it was not a significant noise source in that application. Even though this early work demonstrated the importance of the FW-H quadrupole, it has not been routinely included
in rotor noise predictions because of the difficulty in predicting the source strength of the Lighthill stress tensor Tij and the lack of a computationally efficient algorithm for computing the
quadrupole noise.
In the past few years, the computation of the transonic aerodynamic field around rotor blades has become feasible; hence, renewed interest in prediction of HSI noise has emerged. Yu et al. [4] were the
first to successfully utilize advances in CFD by approximating the quadrupole source strength and integrating in the direction normal to the rotor plane. The integration in the normal direction of
the approximate quadrupole source, which is valid in the far field ahead of the helicopter, effectively transforms the volume integration of the quadrupole into a surface integration. More recently,
Schultz and Splettstoesser [6], Schultz et al. [7], and Ianniello and De Bernardis [8] have also used this technique with good results. Prieur [9] and Prieur et al. [10] have developed a frequency domain method for
computing the quadrupole noise of hovering rotors that has yielded good results.
Some attempts have been made to numerically integrate the entire volume around the blade [7, 8],
but the computations generally require computer resources comparable to those required by unsteady three-dimensional computational fluid dynamics (CFD) calculations, significantly more than that
required for thickness and loading noise predictions. Farassat [11] and his colleagues [12, 13] also tried to reduce the computational effort required in computing HSI noise; they recognized that the
appearance of a shock wave coincides with the onset of HSI noise. By assuming that the shock is the dominant contributor of quadrupole noise, the acoustic sources are mathematically confined to the
shock surface. When the shock-noise theory was implemented, the conclusion that the shock noise was a dominant component of the quadrupole source was verified [13]. Nevertheless, the difficulty in
accurately extracting the shock geometry, location, and strength from CFD solutions, together with the fact that the shock noise alone did not sufficiently characterize the total quadrupole source
contribution, has postponed the complete implementation of the theory.
The goal of this work is to utilize the far-field approximation to the FW-H quadrupole given by Brentner and Holland [14] and extend the formulation to include forward-flight computations. This new
formulation yields efficient numerical prediction of HSI noise without resorting to unnecessary or ad hoc simplifications of the FW-H quadrupole source term. The mathematical manipulations used in
this approach are rigorous and depend only on the far-field assumption, without approximation of the source strength. Numerical time differentiation of integrals is avoided through the development
of an alternate formulation in which the time differentiation is done analytically. Preliminary calculations with this new formulation demonstrate the potential for efficiency and robustness.
The acoustic analogy approach was chosen because of the substantial knowledge base gained in the development and utilization of thickness and loading noise predictions, based on the FW-H equation.
Further, the fundamental far-field assumption, which is described in the next section, leads to integrals of precisely the same form as current thickness and loading noise calculations; hence, the
existing numerical algorithms can be used directly. Finally, the identification of individual noise components is a unique advantage of the acoustic analogy approach. This new formulation has been
coded and is described in the remainder of this paper. The numerical results are compared with experimental data for both hover and forward-flight conditions.
|
{"url":"http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0cstr--00-0----0-10-0---0---0direct-10---4-------0-1l--11-en-50---20-help---00-0-1-00-0--4----0-0-11-10-0utfZz-8-10&a=d&cl=CL1.267&d=HASH015ec1732defcd9a98d585d3.2","timestamp":"2014-04-19T15:09:30Z","content_type":null,"content_length":"13294","record_id":"<urn:uuid:d55bc122-b076-4dfb-8077-a1fd027e3a47>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Examples of Jensen inequalities
Next: RELATED CONCEPTS Up: THE JENSEN INEQUALITY Previous: THE JENSEN INEQUALITY
The most familiar example of a Jensen inequality occurs when the weights are all equal to 1/N and the convex function is f(x) = x^2. In this case the Jensen inequality gives the familiar result that
the mean square exceeds the square of the mean:
(1/N) sum_i p_i^2  >=  ((1/N) sum_i p_i)^2.
In the other applications we will consider, the population consists of positive members, so the function f(p) need have a positive second derivative only for positive values of p. The function f(p) = 1/p yields a Jensen inequality for the harmonic mean:
1/(sum_i w_i p_i)  <=  sum_i w_i/p_i,
that is, the harmonic mean does not exceed the arithmetic mean. A more important case is the geometric inequality. Here f(p) = -ln(p), and the Jensen inequality reads
sum_i w_i ln(p_i)  <=  ln(sum_i w_i p_i).
The more familiar form of the geometric inequality results from exponentiation and a choice of weights equal to 1/N:
(p_1 p_2 ... p_N)^(1/N)  <=  (1/N) sum_i p_i.
In other words, the product of square roots of two values is smaller than half the sum of the values: sqrt(p_1 p_2) <= (p_1 + p_2)/2.
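These orderings are easy to spot-check numerically; the short sketch below compares the arithmetic, geometric, and harmonic means of a small positive sample with equal weights 1/N:

import math

p = [1.0, 2.0, 4.0, 8.0]
N = len(p)
arithmetic = sum(p) / N
geometric = math.exp(sum(math.log(x) for x in p) / N)
harmonic = N / sum(1.0 / x for x in p)
print(arithmetic, geometric, harmonic)   # 3.75 >= 2.828... >= 2.133...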
A Jensen inequality with an adjustable parameter is suggested by
A most important inequality in information theory and thermodynamics is the one based on
Take logarithms Expand both sides in a Taylor series in powers of The leading term is identical on both sides and can be canceled. Divide both sides by We can now define a positive variable S' with
or without a positive scaling factor Seismograms often contain zeros and gaps. Notice that a single zero p[i] can upset the harmonic H or geometric G inequality, but a single zero has no horrible
effect on S or S'.
Next: RELATED CONCEPTS Up: THE JENSEN INEQUALITY Previous: THE JENSEN INEQUALITY Stanford Exploration Project
|
{"url":"http://sepwww.stanford.edu/sep/prof/pvi/jen/paper_html/node3.html","timestamp":"2014-04-20T23:27:08Z","content_type":null,"content_length":"9625","record_id":"<urn:uuid:96e7ef5d-23a6-46ad-91ad-a42959a4871b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
|
To Reproduce an Angle
Given an angle with vertex at A
Step 1. Scribe an arc centered at A and intersecting the arms of the angle at B and C.
Step 2. On the other line, scribe an arc of the same radius centered at A' intersecting the line at B'
Step 3. Use the compass to measure the distance from B to C, and with the compass centered at B' scribe an arc whose radius is the distance from B to C intersecting the arc from Step 2 at C'.
Step 4. Draw the line from A' through C'.
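The construction can also be checked with coordinates: C' is an intersection of the circle about A' with radius |AB| and the circle about B' with radius |BC|. A small sketch with illustrative points (not part of the original page):

import math

def circle_intersections(c1, r1, c2, r2):
    # Intersection points of two circles (assumes they do intersect).
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    a = (r1**2 - r2**2 + d**2) / (2 * d)          # distance from c1 to the chord
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    xm, ym = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    off = (-(y2 - y1) * h / d, (x2 - x1) * h / d)
    return (xm + off[0], ym + off[1]), (xm - off[0], ym - off[1])

# Original angle of 0.7 radians at A, with arms through B and C; new ray from A' through B'.
A, B, C = (0.0, 0.0), (1.0, 0.0), (math.cos(0.7), math.sin(0.7))
A2, B2 = (3.0, 1.0), (4.0, 1.0)
r = math.hypot(B[0] - A[0], B[1] - A[1])          # compass radius from Step 1
BC = math.hypot(C[0] - B[0], C[1] - B[1])         # span measured in Step 3
C2 = circle_intersections(A2, r, B2, BC)[0]       # one of the two choices of C'
angle_copy = math.atan2(C2[1] - A2[1], C2[0] - A2[0])
print(round(angle_copy, 6))                       # 0.7, the original angle reproduced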
|
{"url":"http://www.sonoma.edu/users/w/wilsonst/Courses/Math_150/c-s/Copy-Angle.html","timestamp":"2014-04-20T11:23:04Z","content_type":null,"content_length":"2112","record_id":"<urn:uuid:2deb0efc-2416-4948-8a3b-2293f1985b4b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mplus Discussion >> Choice of estimator
Leigh Roeger posted on Sunday, January 30, 2000 - 9:51 pm
I am working on a multigroup meanstructure analysis. There are 2 groups (boys and girls) who rated their mothers on a 25 item - 4 point (strongly agree, agree, etc) rating scale. The items are very
skewed. The scale consists of three sub-scales or factors. By simply adding items to the subscales girls (on average) rate their mothers better than boys on all three subscales.
I have been perplexed by the results produced from different estimators when testing the latent means. In particular with WLS (when factor loadings and thresholds are invariant between the groups)
one of the latent means goes negative indicating that girls (the second group) rate their mothers more negatively than boys on this factor despite the raw data saying the opposite.
Any ideas on why or how this happens would be much appreciated.
Linda K. Muthen posted on Tuesday, February 01, 2000 - 9:17 am
The only thing that comes to mind is that perhaps girls are not the second group. Do they have the higher code on the gender variable? If so, can you send your input or output and data so we can take
a look at it and give you a better answer?
Anonymous posted on Wednesday, June 01, 2005 - 2:21 pm
I don't know why there are differences between MPLUS probit regression and STATA probit regression. Is it because the default MPLUS probit is estimated by weighted least square while STATA probit is
estimated by maximum likelihood?
If I specify "ANALYSIS: ESTIMATOR=ML," then the coefficient and s.e. of the MPLUS logistic regression are the same as the STATA logit regression. Can I get the same results of probit regression in
both MPLUS and STATA?
bmuthen posted on Wednesday, June 01, 2005 - 5:59 pm
The Mplus "Sample Statistics" (requesting sampstat in the output) gives ML probit regression with a single dependent variable - this should agree with STATA. These sample statistics represent the
first stage of the Mplus weighted least squares estimator.
Marleen de Moor posted on Monday, September 05, 2005 - 4:12 am
Dear Linda and Bengt,
I have a few questions concerning categorical data and the TYPE=TWOLEVEL option.
1. Is it true that Mplus uses a logistic regression for all multilevel analyses (TYPE=TWOLEVEL) with a categorical outcome variable, because estimators available are MLR, ML and MLF, and not WLSMV?
Is it therefore correct to interpret the beta coefficient as the log odds ratio?
2. In my model I would like to correlate the errors of my two dependent variables, of which one is normal and the other categorical. Is that somehow possible with the option TYPE=TWOLEVEL, or is the
only way out using the options TYPE=COMPLEX with ESTIMATOR=WLSMV?
3. Do you have any plans to make it possible to use censored data with TYPE=TWOLEVEL in Mplus in the future?
Thank you very much in advance!
Kind regards, Marleen de Moor
BMuthen posted on Monday, September 05, 2005 - 2:46 pm
1. Yes.
2. You cannot use WITH to specify a residual covariance when one or more outcomes are categorical in TWOLEVEL analysis with maximum likelihood. You could consider putting a factor behind the two variables
as shown in Example 7.16.
3. Yes.
Sally Czaja posted on Thursday, October 12, 2006 - 1:58 pm
I am testing a path model with 1 independent variable predicting 2 intermediate variables which predict a dependent variable. Each of the endogenous variables has 2-4 control variables. One of the
intermediate variables is dichotomous, which makes the default estimator WLSMV. I’ve read in the MPlus manual and discussion board that this gives a probit regression and that I can specify the
estimator as ML to get logistic regression, which makes sense for the dichotomous DV.
But what kind of regression is done with the continuous DVs (i.e., what are these path coefficients/how are they to be interpreted?)?
(continued in 2nd post)
Sally Czaja posted on Thursday, October 12, 2006 - 2:10 pm
(continued from prior post re path model with 1 IV predicting 2 intermediate variables which predict a DV)
The path coefficients differ, sometimes substantially:
Path                                                  WLSMV           ML
from IV to the dichotomous variable                   .04 (n.s.)      .15 (p<.001)
from dichotomous variable to the final DV             .28 (p<.001)    .53 (p<.001)
from IV to the other intermediate variable            .10 (p<.01)     .07 (p<.05)
from other intermediate variable to the final DV      .20 (p<.001)    .14 (p<.001)
What accounts for these differences? They are both more and less than the approx. 1.7 scale difference between logistic and probit. I would have thought the pattern of significance would be the same,
even with different methods.
Finally, on what basis do I choose an estimator? The dichotomous variable has a 76/24 split and skewness & kurtosis statistics are n.s., which suggests it could be treated as normally distributed.
But if I don’t declare it categorical, the fit becomes awful.
I’d really appreciate your help in understanding this area.
Linda K. Muthen posted on Thursday, October 12, 2006 - 2:40 pm
The regression coefficients for the continuous dependent variables are simple linear regression coefficients.
The coefficients will differ between WLSMV and ML because one is probit and the other is logit. They are on a different scale. You should be comparing the ratios.
I would choose WLSMV with a 76/24 split.
Sally Czaja posted on Friday, October 13, 2006 - 12:46 pm
Hi Linda
Sorry, but what ratios are you referring to in your 2nd paragraph?
Could you elaborate on why I should use WLSMV? I'll have to explain this to someone else.
Linda K. Muthen posted on Friday, October 13, 2006 - 2:47 pm
The ratio of a parameter estimate to its standard error. It is the third column of the results.
It seems you want residual covariances. You can't have more than four with maximum likelihood because a model with four dimensions of integration is probably the maximum you can estimate. This is why
I recommended WLSMV.
Sally Czaja posted on Monday, October 16, 2006 - 12:37 pm
Hi Linda
Thank you for your quick responses last week. I have 2 more related questions:
If, as I understand, the coefficients for predictors of continuous DVs are simple linear regression coef. regardless of the estimator (WLSMV or ML), shouldn't they be identical? For 2 paths, I get
.20 in WLSMV vs .14 in MLR (both p<.001); and -.13 (p<.01) in WLSMV vs -.05 (p<.05) in MLR (and smaller differences on other paths).
Also, for a predictor of the dichotomous variable, MLR gives an OR of 2.26 and est./SE of 4.92, while WLSMV gives an OR of 2.55 (using exp(Estimate*1.7)) with est./SE of 2.98. Should they be this far apart?
Thanks for your help.
Linda K. Muthen posted on Tuesday, October 17, 2006 - 7:46 am
They should be the same. You would need to send me your inputs, data, outputs and license number to support@statmodel.com for me to see why they are not.
Odds ratios cannot be computed for probit regression coefficients.
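As a rough illustration of the points above, here is a small Python sketch with hypothetical numbers (these are not Mplus output, and the 1.7 rescaling between probit and logit coefficients is only a common approximation, not an exact conversion):

import math

logit_b, logit_se = 0.815, 0.166     # hypothetical logit estimate and its SE
probit_b, probit_se = 0.551, 0.185   # hypothetical probit estimate and its SE

odds_ratio = math.exp(logit_b)       # an odds ratio is defined for logit coefficients
approx_logit = 1.7 * probit_b        # rough probit-to-logit rescaling, not exact
z_logit = logit_b / logit_se         # est./SE ratios, comparable across estimators
z_probit = probit_b / probit_se

print(odds_ratio, math.exp(approx_logit), z_logit, z_probit)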
Ramzi Mabsout posted on Wednesday, October 15, 2008 - 4:39 am
From version 5, I see WLSMV can be used with TWO LEVEL. Are the loadings using CFA, categorical variables & no covariates probit coefficients?
Why I cannot conduct multi-group analysis with TWO LEVEL CATEGORICAL CFA & WLSMV? Is my only alternative to use integration in that case?
Thank you very much.
Linda K. Muthen posted on Wednesday, October 15, 2008 - 10:06 am
Your only option in this case is numerical integration.
Ramzi Mabsout posted on Wednesday, October 15, 2008 - 10:42 am
I also cannot conduct analysis with integration: I am requested to use KNOWNCLASS & MIXTURE. Why?
Linda K. Muthen posted on Wednesday, October 15, 2008 - 11:27 am
When numerical integration is required, multiple group analysis uses the KNOWNCLASS option and TYPE=MIXTURE.
Richard Rivera posted on Tuesday, June 16, 2009 - 8:13 pm
I am conducting multiple logistic regression on a binary outcome. I have missing data, so I am allowing the default to use missing data theory, and I also included INTEGRATION=MONTECARLO;.
I would like to get unbiased estimates of confidence intervals and I know that I can’t use bootstrap CI when I am using the montecarlo integration.
For logistic regression, there are two options for estimation procedures (ML & MLR). For both of these, I asked for confidence intervals in the output.
When I use ESTIMATOR = MLR I get the same point estimates as when I use
ESTIMATOR = ML. So I assume that I get log odds (or odds ratios) for either ML estimator.
However, I get different standard errors, which estimator should I use?
Richard Rivera posted on Tuesday, June 16, 2009 - 8:25 pm
What I meant to ask:
When conducting multiple logistic regression with missing data, which estimation procedure would give me the least biased estimates of the standard errors (or confidence intervals)?
Paul Silvia posted on Wednesday, June 17, 2009 - 5:59 am
When ML and MLR diverge in their SE estimates, MLR is generally more trustworthy. Broadly, though, this is often a sign to explore residuals, distributions, and possible influential cases.
Cecily Na posted on Monday, February 07, 2011 - 3:07 pm
Hi Professors,
I am new to Mplus. I used the syntax MODEL = BASIC; Estimator = ML to generate a covariance matrix in Mplus. It was not the same as the one produced in SPSS. What's the reason (suppose I treated all
variables as continuous)?
Also, when can I use ML? Can I use it for ordered categorical variables?
Linda K. Muthen posted on Monday, February 07, 2011 - 3:54 pm
It is likely that the sample sizes are not the same. If they are, you may be reading the data incorrectly and should send the problem along with your license number to support@statmodel.com.
Yes, ML can be used for ordered categorical data. See the ESTIMATOR option in the user's guide where there is a table that shows the cases when each estimator can be used.
burak aydin posted on Tuesday, May 10, 2011 - 4:00 pm
An article named "propensity score adjustment for multiple groups SEM" (Hoshino,Kurata & Shigemasu, 2006) uses weighted M estimator. Weights are propensity scores.
I wonder if WLS estimator does the same job?
Bengt O. Muthen posted on Tuesday, May 10, 2011 - 5:23 pm
The Mplus WLS estimator is not based on propensity scores. M estimators are sometimes connected with GEE. The connection between GEE and WLSM is shown in
Muthén, B., du Toit, S.H.C. & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes.
Unpublished technical report.
which is on our web site under Papers, SEM.
Bengt O. Muthen posted on Wednesday, May 11, 2011 - 3:33 pm
Perhaps this can be done using weighted ML, which we call quasi-ML in some of Asparouhov's writing on complex survey data analysis on our web site?
burak aydin posted on Wednesday, May 11, 2011 - 4:06 pm
I did some further searching and figured out that a residual-based GLS estimator is what I need. I know Mplus has the traditional GLS estimator. Is there a way to modify the GLS estimator to a residual-based GLS
estimator? (Yuan & Bentler, 1997, Mean and covariance structure analysis: theoretical and practical improvements)
Furthermore, I'd like to know whether there is an estimator which is robust to both non-normality and outliers.
Bengt O. Muthen posted on Thursday, May 12, 2011 - 9:52 am
Don't know the answer to that. The Mplus GLS does not allow weights.
Outlier detection is available in Mplus - see the UG. MLR is in principle robust to model mis-specification, but how well that works with outliers I'm not sure of.
Heike B. posted on Thursday, October 20, 2011 - 3:28 am
Dear Dres. Muthen,
I intend to build a manifest path model containing two exogenous variables and 5 endogenous variables. Three of them are mediators.
The observed variables are means from four-step Likert scales (two variables actually are single items). That's why I wanted to treat the data as ordinal.
My sample is small (230 cases), and the data are skewed and not normally distributed.
I tried to estimate the model using WLSMV; however, now I would like to add an interaction.
Besides, one endogenous variable ended up with eleven categories, so Mplus did not allow me to declare it as categorical.
Given all this -
1. which estimator would you recommend?
2. if an ML-based estimator is recommended, should I declare all my variables as continuous?
Many thanks in advance.
Linda K. Muthen posted on Thursday, October 20, 2011 - 2:00 pm
If the original Likert variables have floor or ceiling effects, I would not recommend summing them.
I think you want an interaction between two observed variables. You can create that as the product of the two variables using the DEFINE command.
Both weighted least squares and maximum likelihood estimation can be used with categorical dependent variables.
Miho Tanaka posted on Monday, February 13, 2012 - 11:01 am
I have been working on an SEM for my dissertation. The primary outcome in my model is binary (whether the participant had a hepatitis B screening or not). Predictors are three latent variables measured by
non-normally distributed continuous factor indicators. By default, Mplus uses the WLSMV estimator for both the structural and the measurement parts. I would like to know what is happening to the measurement model
if I allow the default estimator (WLSMV), that is, if WLSMV is applied to non-normally distributed continuous factor indicators. For the CFA (only the measurement part), I may choose to use MLR rather than
WLSMV. Is there any significant difference between these two estimators? I understand both estimators are robust to non-normality.
Thanks for your advice.
Linda K. Muthen posted on Wednesday, February 15, 2012 - 10:25 am
WLSMV is not robust to non-normality of continuous variables. I would use MLR.
Owis Eilayyan posted on Tuesday, March 20, 2012 - 5:08 pm
I am doing a path analysis. I have 5 intermediate continuous variables and one dependent variable.
I am not sure which type of estimation I should use.
Bengt O. Muthen posted on Tuesday, March 20, 2012 - 6:32 pm
I would use ML or MLR.
Owis Eilayyan posted on Tuesday, March 20, 2012 - 9:06 pm
Hi again,
Thanks for your response. I used MLR and I got this error message:
"*** FATAL ERROR
Is that because I have missing values?
Linda K. Muthen posted on Wednesday, March 21, 2012 - 7:08 am
Yes, you must have missing values on a mediator. Add INTEGRATION=MONTECARLO; to the ANALYSIS command.
Owis Eilayyan posted on Wednesday, March 21, 2012 - 7:11 am
OK, if I removed the missing values, could I use the MLR or ML estimator? I don't want to use WLSMV.
Bengt O. Muthen posted on Wednesday, March 21, 2012 - 7:39 am
When you add Integration=MonteCarlo you are still doing ML/MLR, it's just that you specify a certain algorithm for doing it.
Your dependent variable must have been categorical or count, in which case missing on mediators leads to numerical integration with MonteCarlo when using the ML or MLR estimator.
Owis Eilayyan posted on Wednesday, March 21, 2012 - 7:46 am
Actually my independent variables have these missing values.
Thanks a lot
Owis Eilayyan posted on Wednesday, March 21, 2012 - 10:01 am
Hello again,
I used Integration=MonteCarlo and the ML/MLR estimator but I didn't get a Chi-Square value and RMSEA in the output. Is that normal? Also, I got different results (i.e. a different direction of
relationships between variables) in ML/MLR versus WLSMV!
Linda K. Muthen posted on Wednesday, March 21, 2012 - 11:00 am
When means, variances, and covariances are not sufficient statistics for model estimation, chi-square and related fit statistics are not available.
Please send the two outputs and your license number to support@statmodel.com.
Owis Eilayyan posted on Wednesday, March 21, 2012 - 11:18 am
When I use WLSMV estimation, I get the fit statistics.
I am using my supervisor's program, and neither of us knows the license number. Where is it usually written?
Linda K. Muthen posted on Wednesday, March 21, 2012 - 1:08 pm
With WLSMV, the statistics for model estimation are thresholds and correlations.
You can login to your account on the website and see it.
Owis Eilayyan posted on Wednesday, March 21, 2012 - 1:17 pm
Sorry for bothering you,
but does that mean that with WLSMV I get a wrong result?
I got a good-fitting model with WLSMV!
Linda K. Muthen posted on Wednesday, March 21, 2012 - 3:53 pm
We don't make a habit of giving wrong results. WLSMV gives chi-square and related fit statistics.
Owis Eilayyan posted on Wednesday, March 21, 2012 - 4:18 pm
One more question,
so with WLSMV we get chi-square and related fit statistics, while with ML/MLR we don't. Is that true?
Also, if I use ML or WLSMV I get similar results, don't I? That is what I understood from your video!
Bengt O. Muthen posted on Wednesday, March 21, 2012 - 4:19 pm
To understand the different aspects of testing model fit in this situation, see
Muthén, B. (1993). Goodness of fit with categorical and other non-normal variables. In K. A. Bollen, & J. S. Long (Eds.), Testing Structural Equation Models (pp. 205-243). Newbury Park, CA: Sage
which is paper #45 at
This chapter makes the distinction between testing the underlying structure (as WLSMV does) versus testing the model against the data (which isn't always feasible as presumably in your case).
Bengt O. Muthen posted on Wednesday, March 21, 2012 - 4:24 pm
ML and WLSMV tends to give similar results when the missing data are MCAR (missing completely at random) or MAR as a function of covariates.
Mauricio Garnier-Villarreal posted on Thursday, April 19, 2012 - 8:04 am
I am running a simulation study with categorical indicators using the BAYES estimator. I have heard that Mplus uses two methods for handling categorical variables: tetrachoric correlations and
direct ML. In the specific case of using the BAYES estimator, which method does Mplus use?
thank you
Bengt O. Muthen posted on Thursday, April 19, 2012 - 10:57 am
Bayes does not use tetrachorics and does not use ML. But like ML, Bayes is a "full-information" estimator that uses all available data in an optimal way. It is equivalent to ML in its missing data
handling. Bayes is an estimator in its own right. So Mplus offers 3 major estimators: WLSMV (which builds on tetrachorics/polychorics), ML, and Bayes.
Owis Eilayyan posted on Monday, April 30, 2012 - 7:16 pm
I would like to ask a technical question about Mplus. When I use the WLSMV estimator, I get Chi-Square, RMSEA, and CFI values automatically.
My question is: can I get Chi-Square, RMSEA, and CFI values with the ML estimator?
Linda K. Muthen posted on Tuesday, May 01, 2012 - 10:31 am
With maximum likelihood and categorical variables, means, variances, and covariances are not sufficient statistics for model estimation. Because of this, chi-square and related fit statistics are not available.
Gabriel Nagy posted on Friday, March 01, 2013 - 11:29 am
Dear all,
I have some questions regarding the ODLL algorithm implemented in Mplus.
I’m running a large IRT model including many nonlinear parameter constraints (around 700). ML estimation on the basis of the EM algorithm is no longer feasible and the constraints are not supported in
the Bayes framework. I’ve tried out different algorithms and found out that ODLL (in combination with MLF) works well in reasonable time. Unfortunately, I was not able to find any documentation of
the ODLL algorithm. I only found out that ODLL optimizes the observed data likelihood directly.
Is ODLL something like JML (Joint Maximum Likelihood)?
Is ODLL an iterative algorithm (Tech 8 doesn’t report an iteration history for ODLL)?
What is ODLL exactly doing? Are there any references about this algorithm that might be cited in a manuscript?
What about the performance of ODLL relative to other algorithms, such as EM? I suspect that there might be some reasons that the much slower EM algorithm is routinely used in the IRT framework.
Thank you for your help!
Tihomir Asparouhov posted on Monday, March 04, 2013 - 11:00 am
ODLL stands for Observed data log-likelihood. The algorithm optimizes the log-likelihood using the Quasi-Newton method.
You can look at Tech5 for the iterations.
Use the Mplus manual as a reference.
My experience is that in most cases (but definitely not in all cases) the default EMA algorithm is faster. EMA actually contains ODLL within it and is occasionally deployed.
My suggestion is to spend time simplifying your model constraints. There are 3 types of constraints listed in order of complexity
1) New parameters = function of model parameters
2) Dependent parameters = function of independent parameters
3) anything else
Try to use 1 and 2 as much as you can instead of 3. Model constraints can be written in many different ways and using the most optimal way can improve the estimation dramatically.
Anna posted on Sunday, June 02, 2013 - 11:21 pm
I have a model with five observed variables, A, B, C, D, and E. E is categorical. The model proposes an indirect link, A->C->D->E, while B moderates the A->C path. There are missing values on A, B,
C, D. Sample size is around 250.
I would like to know which estimator is more appropriate for testing this kind of model: categorical outcome, aims to test moderated mediation effect, has missing values.
I have tried WLSMV, MLR, and BAYES. The results estimated through these three estimators are actually comparable, and the fit indices in WLSMV and the Bayesian PPC and PSR indicate good fit. I tend
to favor Bayesian estimation because it handles missing data well and it does not require normal distribution. But I am not sure to what extent it is favored against the other two estimators in my
situation. (I don't have specific estimation of the priors.)
Thank you very much for your help!
Linda K. Muthen posted on Monday, June 03, 2013 - 10:55 am
Bayes and maximum likelihood handle missing data in the same way. I would choose them above WLSMV if there is a lot of missing data. You can use non-informative priors in Bayes.
Anna posted on Monday, June 03, 2013 - 11:44 am
Dear Linda,
Thank you!
I would like to ask more about these estimators. Beside the difference in handling missing data, are there any other concerns in choosing among these methods?
1. Is WLSMV robust for models with interaction terms and nonnormal distribution of indirect effects (e.g., a*b term)? I read the Muthen, du Toit, and Spisic (2007) technical report and I think that
WLSMV often underestimates SE when the sample is small and skewed.
2. I also wonder if I should correlate the IVs with the interaction term (and perhaps correlate the exogenous covariates) because WLSMV does not automatically do so in the sequential modeling.
3. For MLR, since bootstrapping is not allowed with numerical integration, will this be a big deal for estimation of indirect effects with nonnormal distribution?
Linda K. Muthen posted on Wednesday, June 05, 2013 - 1:31 pm
1. You can use bootstrap with WLSMV.
2. The model is estimated conditioned on the exogenous variables. Their means, variances, and covariances should not be mentioned in the MODEL command. To obtain these values, do a TYPE=BASIC with no
MODEL command.
3. If they have a non-normal distribution, this will not be taken into account.
|
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=23&page=56","timestamp":"2014-04-16T11:09:51Z","content_type":null,"content_length":"100056","record_id":"<urn:uuid:78901ae6-b208-419d-a932-e0dbc4aaf23e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Beyond Regular Expressions: More Incremental String Matching
In my last post I showed how to incrementally match long strings against regular expressions. I now want to apply similar methods to matching languages that can't be described by regular expressions.
(Note that 'language' is just jargon for a set of strings that meet some criterion.) In particular, regular expressions can't be used to test a string for balanced parentheses. This is because we
need some kind of mechanism to count how many open parentheses are still pending and a finite state machine can't represent arbitrary integers.
So let's start with a slightly more abstract description of what was going on last time so we can see the bigger picture. We were storing strings in balanced trees with a kind of 'measurement' or
'digest' of the string stored in the nodes of the tree. Each character mapped to an element of a monoid via a function called measure,
and you can think of the measurement function as acting on entire strings if you mappend
together all of the measurements for each of the characters. So what we have is a function
f :: String -> M
taking strings to some type M (in the last post M was a type of array) with the properties
f (a ++ b) == f a `mappend` f b
f [] == mempty
By noticing that String is itself a monoid we can write this as
f (a `mappend` b) == f a `mappend` f b
f mempty == mempty
Anything satisfying these laws is called a monoid homomorphism, or just homomorphism for short.
So the technique I used worked like this: I found a homomorphism from String
to some type with the useful property that for any string s,
f s
still contains all the information required to figure out if we're dealing with a member of our language. If f
turns a string into something more efficient to work with then we can make our string matching more efficient.
Now I want to make the notion of "contains all the information required" more precise by considering an example. Consider strings that consist only of the characters '(' and ')'. Our language will be the set of strings whose parentheses balance. In other words the total number of '(' must match the total number of ')', and as we scan from left to right we must never see more ')' than '('. For example, strings like "(()())" are in our language, but strings like "())(" aren't. This language is called the Dyck language.
Suppose we're testing whether or not some string is in the Dyck language. If we see "()" as a substring then if we delete it from the string, it makes no difference to whether or not the string is in the Dyck language. In fact, if we see "(())", "((()))" and so on, they can all be deleted. On the other hand, you can't delete a lone "(" without knowing about the rest of the string. Deleting it from "((" makes no difference to that string's membership in the Dyck language (it is outside the language either way), but deleting it from "()" certainly does.
So given a language L, we can say that two strings, x and y, are interchangeable with respect to L if any time we see x as a substring of another string we can replace it with y, and vice versa,
without making any difference to whether the string is in the language. Interchangeable strings are a kind of waste of memory. If we're testing for membership of L there's no need to distinguish
between them. So we'd like our measurement homomorphism to map all interchangeable strings to the same values. But we don't want to map any more strings to the same value because then we lose
information that tells us if a string is an element of L. A homomorphism that strikes this balance perfectly is called the 'canonical homomorphism' and the image of the set of all strings under this
homomorphism is called the syntactic monoid. By 'image', I simply mean all the possible values that could arise from applying the homomorphism to all possible strings.
So let's go back to the Dyck language. Any time we see "()" we can delete it. But if we delete every occurrence of "()" from a string then all we have left is a bunch of ')'s followed by a bunch of '('s. Let's say it's p of the former, and q of the latter. Every string of parentheses can be distilled down to a pair of integers ≥0, (p,q). But does this go far enough? Could we distill any further?
Well for any choice of (p,q) it's a good exercise to see that for any other choice of (p',q') there's always a string in the Dyck language where if you have the substring of p ')'s followed by q '('s, replacing it with p' ')'s followed by q' '('s gives you something not in the language. So you can't distill any further. Which means we have our syntactic monoid and canonical homomorphism. In this case the monoid is called the bicyclic monoid and we can implement it as follows:
> {-# LANGUAGE TypeSynonymInstances,FlexibleInstances,MultiParamTypeClasses #-}
> import Data.Foldable
> import Data.Monoid
> import Data.FingerTree hiding (fromList)
> import qualified Data.List as L
> data Bicyclic = B Int Int deriving (Eq,Show)
> hom '(' = B 0 1
> hom ')' = B 1 0
> instance Monoid Bicyclic where
> mempty = B 0 0
> B a b `mappend` B c d = B (a-b+max b c) (d-c+max b c)
Where did that code for mappend come from? Consider a string of a ')'s and b '('s followed by one of c ')'s and d '('s, i.e. the concatenation of the strings that B a b and B c d stand for. We can delete "()" from the middle many times over.
Now we can more or less reproduce the code of last week and get a Dyck language tester. Once we've distilled a string down to (p,q) we only need to test whether or not p=q=0 to see whether or not our
parentheses are balanced:
> matches' s = x==B 0 0 where
> x = mconcat (map hom s)
> data Elem a = Elem { getElem :: a } deriving Show
> data Size = Size { getSize :: Int } deriving (Eq,Ord,Show)
> instance Monoid Size where
> mempty = Size 0
> Size m `mappend` Size n = Size (m+n)
> instance Measured (Size,Bicyclic) (Elem Char) where
> measure (Elem a) = (Size 1,hom a)
> type FingerString = FingerTree (Size,Bicyclic) (Elem Char)
> insert :: Int -> Char -> FingerString -> FingerString
> insert i c z = l >< (Elem c <| r) where (l,r) = split (\(Size n,_) -> n>i) z
> string = empty :: FingerString
> matchesDyck string = snd (measure string)==B 0 0
> loop string = do
> print $ map getElem (toList string)
> print $ "matches? " ++ show (matchesDyck string)
> print "(Position,Character)"
> r <- getLine
> let (i,c) = read r
> loop $ insert i c string
> main = do
> loop string
There's a completely different way to test membership of the Dyck language. Replace each '('
with 1 and each ')'
with -1. Now scan from left to right keeping track of (1) the sum of all the numbers so far and (2) the minimum value taken by this sum. If the final sum and the final minimal sum are zero, then we
have matching parentheses. But we need to do this on substrings without scanning from the beginning in one go. That's an example of a parallel prefix sum problem and it's what I talked about previously.
So here's an extended exercise: adapt the parallel prefix sum approach to implement incremental Dyck language testing with fingertrees. You should end up with a canonical homomorphism that's similar
to the one above. It'll probably be slightly different but ultimately equivalent.
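For what it's worth, here is one possible shape of that homomorphism, sketched in Python rather than the post's Haskell, so treat it as a hint rather than the intended solution: summarise a substring by its total and its minimum prefix sum.

# Combining two summaries left-to-right:
#   total      = total_l + total_r
#   min_prefix = min(min_prefix_l, total_l + min_prefix_r)
# A string is in the Dyck language iff total == 0 and min_prefix == 0.

def measure(ch):
    v = 1 if ch == '(' else -1
    return (v, min(0, v))          # (running total, minimum prefix sum)

def combine(left, right):
    tl, ml = left
    tr, mr = right
    return (tl + tr, min(ml, tl + mr))

def dyck(s):
    total, min_prefix = (0, 0)     # (0, 0) is the monoid identity
    for ch in s:
        total, min_prefix = combine((total, min_prefix), measure(ch))
    return total == 0 and min_prefix == 0

print(dyck("(()())"), dyck("())("))   # True False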
And here's an even more extended exercise: protein sequences are sequences from a 20 letter alphabet. Each letter can be assigned a hydrophobicity value from certain tables. (Pick whichever table you want.) The hydrophobicity of a string is the sum of the hydrophobicities of its letters. Given a string, we can give it a score corresponding to the largest hydrophobicity
of any contiguous substring in it. Use fingertrees and a suitable monoid to track this score as the string is incrementally edited. Note how widely separated substrings can suddenly combine together
as stuff between them is adjusted.
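Again as a hint rather than a worked solution, here is a Python sketch (the post itself would use Haskell and a fingertree) of a monoid whose fourth component tracks the best contiguous-substring score; the hydrophobicity values are just a few entries in the spirit of the Kyte-Doolittle table, and the empty substring is allowed, so the score is never negative:

def measure(v):
    # (total, best prefix, best suffix, best segment) for a one-element segment
    return (v, max(0.0, v), max(0.0, v), max(0.0, v))

def combine(a, b):
    ta, pa, sa, ba = a
    tb, pb, sb, bb = b
    return (ta + tb,                      # total of the whole segment
            max(pa, ta + pb),             # best prefix
            max(sb, tb + sa),             # best suffix
            max(ba, bb, sa + pb))         # best contiguous sub-segment anywhere

EMPTY = (0.0, 0.0, 0.0, 0.0)              # identity of the monoid
hydro = {'A': 1.8, 'R': -4.5, 'L': 3.8, 'G': -0.4, 'K': -3.9}   # small illustrative table

def score(seq):
    acc = EMPTY
    for ch in seq:
        acc = combine(acc, measure(hydro[ch]))
    return acc[3]

print(score("ARLLGKAAL"))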
If you're interested in Dyck languages with multiple types of parentheses that need to match, you need something much more fiendish.
8 comments:
While I can see that it works, I don't quite see where the max comes from in "B (a-b+max b c) (d-c+max b c)". My attempt at it was "B (a+c-min b c) (b+d-min b c)", basically take the total ')'s
and total '('s and subtract the b,c pairs.
Also, I'm a bit confused by "we only need to test whether or not p=q" (which would suggest that ")(" is balanced) but then in the code you test whether both p and q are 0 (which is what I
expected the test to be).
On your first point:
max b c+min b c = b+c
You're right about the second point. I wrote the code a few days before the commentary!
I know this is a bit off subject, but I am a graduate student at UNLV and also part of a weekly math-based podcast called Combinations and Permutations, where we start with a mathematical topic and spin
off onto as many tangents as we can. You can follow the previous link to the blog page of our podcast, search for us on iTunes, or take a trip over to our host site http://cppodcast.libsyn.com.
Give us a try; I do think that you will enjoy what you hear.
You showed one particular example beyond regular expression matching. Will it generalize to any context-free language?
Wei Hu,
To make this idea work requires that the internal state of a parser be simple enough. I guess you could roughly characterise it like this: consider the set of possible transitions the parser
could make from one state to another as a result of reading n characters. We need this set to grow slowly with n. For finite state machines it remains at finite size. For the example shown in
this article it grows roughly as log(n) (the number of bits needed to represent an integer n). But for a LALR parser, say, I think the size of this set grows fast with n, and so it couldn't be
implemented reasonably.
So it's good enough for incrementally lexing a language like C++ or Haskell. But not for parsing it.
I set out to prove you wrong... One can in fact use a monoid-based technique to parse context free languages. In fact the algorithm has been published in 75 by Valiant! It was a bit of work to
set up things correctly so it would behave well though. A full write up will appear in ICFP:
|
{"url":"http://blog.sigfpe.com/2009/01/beyond-regular-expressions-more.html?showComment=1233456780000","timestamp":"2014-04-20T21:17:00Z","content_type":null,"content_length":"77365","record_id":"<urn:uuid:df263fae-1e90-49c9-9acf-3bb13d46ade0>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simulation of microwave and laser structures
Collaborator: G. Hebermehl, F.-K. Hübner, R. Schlundt
Cooperation with: W. Heinrich, T. Tischler, H. Zscheile (Ferdinand-Braun-Institut für Höchstfrequenztechnik (FBH) Berlin)
Description: Field-oriented methods which describe the physical properties of microwave circuits and optical structures are an indispensable tool to avoid costly and time-consuming redesign cycles.
Commonly the electromagnetic characteristics of the structures are described by their scattering matrix which is extracted from the orthogonal decomposition of the electric field at a pair of
neighboring cross-sectional planes on each waveguide, [2]. The electric field is the solution of a two-dimensional eigenvalue and a three-dimensional boundary value problem for Maxwell's equations in
the frequency domain, [7]. The computational domain is truncated by electric or magnetic walls; open structures are treated using the Perfectly Matched Layer (PML) ([11]) absorbing boundary condition.
The subjects under investigation are three-dimensional structures of arbitrary geometry which are connected to the remaining circuit by transmission lines. Ports are defined at the transmission lines'
outer terminations. In order to characterize their electrical behavior the transmission lines are assumed to be infinitely long and longitudinally homogeneous. Short parts of the transmission lines
and the passive structure (discontinuity) form the structure under investigation, [7].
The equations are discretized with orthogonal grids using the Finite Integration Technique (FIT), [1, 4, 16]. Maxwellian grid equations are formulated for staggered non-equidistant rectangular grids
and for tetrahedral nets with corresponding dual Voronoi cells.
A three-dimensional boundary value problem can be formulated using the integral form of Maxwell's equations in the frequency domain in order to compute the electromagnetic field:
$$\oint_{\partial A} \vec H \cdot d\vec s \;=\; j\omega \int_{A} \vec D \cdot d\vec A, \qquad \oint_{\partial V} \vec D \cdot d\vec A \;=\; 0,$$
$$\oint_{\partial A} \vec E \cdot d\vec s \;=\; -\,j\omega \int_{A} \vec B \cdot d\vec A, \qquad \oint_{\partial V} \vec B \cdot d\vec A \;=\; 0,$$
for rectangular grids and
for tetrahedral grids. This results in a two-step procedure: an eigenvalue problem for complex matrices and the solution of large-scale systems of linear algebraic equations with indefinite symmetric
complex matrices.
(1) Eigenmode problem ([2]): The interesting modes of smallest attenuation are found solving a sequence of eigenvalue problems of modified matrices with the aid of the invert mode of the Arnoldi
iteration using shifts implemented in the package ARPACK, [9]. To reduce the execution time for high-dimensional problems, a coarse and a fine grid are used. The use of the linear sparse solver
PARDISO ([13]) and two levels of parallelization results in an additional speed-up of computation time. The eigenvalue problem for rectangular grids is described in [5-8]. The mode fields at the
ports of a transmission line, which is discretized by means of tetrahedral grids, are computed interpolating the results of the rectangular discretization. The PML influences the mode spectrum. Modes
that are related to the PML boundary can be detected using the power part criterion [15].
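As a rough, self-contained illustration of the shift-and-invert Arnoldi step (using SciPy's ARPACK wrapper on a stand-in random sparse matrix, not the actual FIT matrices described here):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = sp.random(n, n, density=1e-3, random_state=0).astype(complex)
A = A + 2.0 * sp.identity(n, dtype=complex)      # stand-in for the discretized operator

# Shift-and-invert Arnoldi (ARPACK): eigenvalues closest to the shift sigma
vals, vecs = spla.eigs(A, k=6, sigma=1.5, which='LM')
print(np.sort_complex(vals))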
(2) Boundary value problem ([1]): The electromagnetic fields are computed by the solution of large-scale systems of linear equations with indefinite complex symmetric coefficient matrices. In
general, these matrix problems have to be solved repeatedly for different right-hand sides, but with the same coefficient matrix. The number of right-hand sides depends on the number of ports and
modes. Independent set orderings, Jacobi and SSOR pre-conditioning techniques, [10], and a block quasi-minimal residual algorithm, [3], are applied to solve the systems of the linear algebraic
equations. Details are given in [7] and [14]. In comparison to the simple lossy case, the number of iterations of Krylov subspace methods increases significantly in the presence of PML. Moreover,
overlapping PML conditions at the corner regions of the computational domain lead to an increase of the magnitude of the corresponding off-diagonal elements in comparison to the diagonal ones of the
coefficient matrix. This downgrades the properties of the matrix, [7].
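A minimal sketch of that solve step (illustrative only: SciPy provides a plain QMR rather than the block QMR of [3], so the right-hand sides, one per port and mode, are handled in a loop, and the Jacobi/SSOR preconditioning described above is omitted):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
A = (2.0 + 0.1j) * sp.identity(n, dtype=complex, format='csr') \
    + 0.01 * sp.random(n, n, density=1e-3, random_state=1)
A = ((A + A.T) * 0.5).tocsr()                         # complex symmetric stand-in matrix
B = np.random.default_rng(1).standard_normal((n, 3))  # one right-hand side per port/mode

X = np.empty((n, 3), dtype=complex)
for j in range(B.shape[1]):                           # same coefficient matrix, many RHS
    X[:, j], info = spla.qmr(A, B[:, j])              # quasi-minimal residual iteration
print(np.linalg.norm(A @ X - B))                      # overall residual of the solves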
Using rectangular grids, a mesh refinement in one point results in an accumulation of small elementary cells in all coordinate directions. In addition, rectangular grids are not well suited for the
treatment of curved and non-rectangular structures. Thus, tetrahedral nets with corresponding Voronoi cells are used for the three-dimensional boundary value problem. The primary grid is formed by
tetrahedra and the dual grid by the corresponding Voronoi cells, which are polytopes, [12]. The gradient of the electric field divergence at an internal point is obtained considering the partial
volumes of the appropriate Voronoi cell.
1. K. BEILENHOFF, W. HEINRICH, H.L. HARTNAGEL, Improved finite-difference formulation in frequency domain for three-dimensional scattering problems, IEEE Trans. Microwave Theory Techniques, 40
(1992), pp. 540-546.
2. A. CHRIST, H.L. HARTNAGEL, Three-dimensional finite-difference method for the analysis of microwave-device embedding, IEEE Trans. Microwave Theory Techniques, 35 (1987), pp. 688-696.
3. R.W. FREUND, W. MALHOTRA, A Block-QMR algorithm for non-Hermitian linear systems with multiple right-hand sides, Linear Algebra Applications, 254 (1997), pp. 119-157.
4. G. HEBERMEHL, R. SCHLUNDT, H. ZSCHEILE, W. HEINRICH, Improved numerical methods for the simulation of microwave circuits, Surv. Math. Ind., 9 (1999), pp. 117-129.
5. G. HEBERMEHL, F.-K. HÜBNER, R. SCHLUNDT, TH. TISCHLER, H. ZSCHEILE, W. HEINRICH, Numerical simulation of lossy microwave transmission lines including PML, in: Scientific Computing in Electrical
Engineering, U. van Rienen, M. Günther, D. Hecht, eds., vol. 18 of Lecture Notes Comput. Sci. Eng., Springer, Berlin, 2001, pp. 267-275.
6. , Perfectly matched layers in transmission lines, in: Numerical Mathematics and Advanced Applications, ENUMATH 2001, F. Brezzi, A. Buffa, S. Corsaro, A. Murli, eds., Springer, Italy, 2003,
pp. 281-290.
7. , Simulation of microwave and semiconductor laser structures including absorbing boundary conditions, in: Challenges in Scientific Computing -- CISC 2002, E. Bänsch, ed., vol. 35 of Lecture Notes
Comput. Sci. Eng., Springer, Berlin, 2003, pp. 131-159.
8. , Eigenmode computation of microwave and laser structures including PML, in: Scientific Computing in Electrical Engineering, W.H.A. Schilders, S.H.M.J. Houben, E.J.W. ter Maten, eds., Mathematics
in Industry, Springer, Berlin, 2004, pp. 196-205.
9. R.B. LEHOUCQ, Analysis and implementation of an implicitly restarted Arnoldi iteration, Technical Report no. 13, Rice University, Department of Computational and Applied Mathematics, Houston,
USA, 1995.
10. Z.S. SACKS, D.M. KINGSLAND, R. LEE, J.-F. LEE, A perfectly matched anisotropic absorber for use as an absorbing boundary condition, IEEE Trans. Antennas Propagation, 43 (1995), pp. 1460-1463.
11. J. SCHEFTER, Discretisation of Maxwell equations on tetrahedral grids, WIAS Technical Report no. 6, 2003.
12. O. SCHENK, K. GÄRTNER, W. FICHTNER, Efficient sparse LU factorization with left-right looking strategy on shared memory multiprocessors, BIT, 40 (2000), pp. 158-176.
13. R. SCHLUNDT, G. HEBERMEHL, F.-K. HÜBNER, W. HEINRICH, H. ZSCHEILE, Iterative solution of systems of linear equations in microwave circuits using a block quasi-minimal residual algorithm, in:
Scientific Computing in Electrical Engineering, U. van Rienen, M. Günther, D. Hecht, eds., vol. 18 of Lecture Notes Comput. Sci. Eng., Springer, Berlin, 2001, pp. 325-333.
14. TH. TISCHLER, W. HEINRICH, The perfectly matched layer as lateral boundary in finite-difference transmission-line analysis, IEEE Trans. Microwave Theory Techniques, 48 (2000), pp. 2249-2253.
Fig. 1: Primary and dual grid; An eight-cell primary grid and its one interior dual cell; Voronoi cell and single tetrahedron. The electric field intensity components marked with red color are
located at the centers of the edges, and the magnetic flux density components marked with black color are normal to the cell faces.
LaTeX typesetting by I. Bremer
|
{"url":"http://www.wias-berlin.de/annual_report/2003/node51.html","timestamp":"2014-04-17T18:37:46Z","content_type":null,"content_length":"24173","record_id":"<urn:uuid:a70517f1-70fa-4f8f-ab9c-19ea34b858a1>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Integration Help
July 24th 2007, 08:31 PM #1
Mar 2007
Integration Help
Can we arrive at a closed form solution to the following integral?
limits of integration are from 0 to 1 and 'a' is a constant.
Phi(x) is a function of x and is equal to 6*x^2-4*x^3+x^4.
Thanks in anticipation.
I suspect that this is not doable in closed form in terms of elementary functions; certainly QuickMath does not like it.
July 25th 2007, 04:46 AM #2
Grand Panjandrum
Nov 2005
|
{"url":"http://mathhelpforum.com/calculus/17187-integration-help.html","timestamp":"2014-04-19T20:02:13Z","content_type":null,"content_length":"33048","record_id":"<urn:uuid:fefbe39f-5295-4952-8cd4-ed4fc0082ebd>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Enriched categories, internal categories and change of base
Results 1 - 10 of 15
, 1998
"... We introduce a notion of equipment which generalizes the earlier notion of pro-arrow equipment and includes such familiar constructs as relK, spnK, parK, and proK for a suitable category K,
along with related constructs such as the V-pro arising from a suitable monoidal category V. We further exhibi ..."
Cited by 45 (7 self)
We introduce a notion of equipment which generalizes the earlier notion of pro-arrow equipment and includes such familiar constructs as relK, spnK, parK, and proK for a suitable category K, along
with related constructs such as the V-pro arising from a suitable monoidal category V. We further exhibit the equipments as the objects of a 2-category, in such a way that arbitrary functors F: L ✲ K
induce equipment arrows relF: relL ✲ relK, spnF: spnL ✲ spnK, and so on, and similarly for arbitrary monoidal functors V ✲ W. The article I with the title above dealt with those equipments M having
each M(A, B) only an ordered set, and contained a detailed analysis of the case M = relK; in the present article we allow the M(A, B) to be general categories, and illustrate our results by a
detailed study of the case M = spnK. We show in particular that spn is a locally-fully-faithful 2-functor to the 2-category of equipments, and determine its image on arrows. After analyzing the
nature of adjunctions in the 2-category of equipments, we are able to give a simple characterization of those spnG which arise from a geometric morphism G.
, 2006
"... In this paper we explain the relationship between Frobenius objects in monoidal categories and adjunctions in 2-categories. Specifically, we show that every Frobenius object in a monoidal
category M arises from an ambijunction (simultaneous left and right adjoints) in some 2-categoryDinto which M fu ..."
Cited by 12 (1 self)
In this paper we explain the relationship between Frobenius objects in monoidal categories and adjunctions in 2-categories. Specifically, we show that every Frobenius object in a monoidal category M
arises from an ambijunction (simultaneous left and right adjoints) in some 2-categoryDinto which M fully and faithfully embeds. Since a 2D topological quantum field theory is equivalent to a
commutative Frobenius algebra, this result also shows that every 2D TQFT is obtained from an ambijunction in some 2-category. Our theorem is proved by extending the theory of adjoint monads to the
context of an arbitrary 2-category and utilizing the free completion under Eilenberg-Moore objects. We then categorify this theorem by replacing the monoidal category M with a semistrict monoidal
2-category M, and replacing the 2-categoryD into which it embeds by a semistrict 3-category. To state this more powerful result, we must first define the notion of a ‘Frobenius pseudomonoid’, which
categorifies that of a Frobenius object. We then define the notion of a ‘pseudo ambijunction’, categorifying that of an ambijunction. In each case, the idea is that all the usual axioms now hold only
up to coherent isomorphism. Finally, we show that every Frobenius pseudomonoid in a semistrict monoidal 2-category arises from a pseudo ambijunction in some semistrict 3-category.
, 1998
"... . A finitary monad A on the category of globular sets provides basic algebraic operations from which more involved `pasting' operations can be derived. To makes this rigorous, we define
A-computads and construct a monad on the category of A-computads whose algebras are A-algebras; an action of the n ..."
Cited by 10 (1 self)
. A finitary monad A on the category of globular sets provides basic algebraic operations from which more involved `pasting' operations can be derived. To makes this rigorous, we define A-computads
and construct a monad on the category of A-computads whose algebras are A-algebras; an action of the new monad encapsulates the pasting operations. When A is the monad whose algebras are
n-categories, an A-computad is an n-computad in the sense of R.Street. When A is associated to a higher operad (in the sense of the author) , we obtain pasting in weak n-categories. This is intended
as a first step towards proving the equivalence of the various definitions of weak n-category now in the literature. Introduction This work arose as a reflection on the foundation of higher
dimensional category theory. One of the main ingredients of any proposed definition of weak n-category is the shape of diagrams (pasting scheme) we accept to be composable. In a globular approach [3]
each k-cell has a source ...
- In preparation
"... Abstract. We introduce a new categorical framework for studying derived functors, and in particular for comparing composites of left and right derived functors. Our central observation is that
model categories are the objects of a double category whose vertical and horizontal arrows are left and rig ..."
Cited by 9 (3 self)
Abstract. We introduce a new categorical framework for studying derived functors, and in particular for comparing composites of left and right derived functors. Our central observation is that model
categories are the objects of a double category whose vertical and horizontal arrows are left and right Quillen functors, respectively, and that passage to derived functors is functorial at the level
of this double category. The theory of conjunctions and mates in double categories, which generalizes the theory of adjunctions in 2-categories, then gives us canonical ways to compare composites of
left and right derived functors. Contents
, 2007
"... Abstract. In some bicategories, the 1-cells are ‘morphisms ’ between the 0-cells, such as functors between categories, but in others they are ‘objects ’ over the 0-cells, such ..."
Cited by 6 (1 self)
Abstract. In some bicategories, the 1-cells are ‘morphisms ’ between the 0-cells, such as functors between categories, but in others they are ‘objects ’ over the 0-cells, such
- I), Theory Appl. Categ
"... form a cubical set with compositions x +i y in all directions, which are computed using pushouts and behave ‘categorically ’ in a weak sense, up to suitable comparisons. Actually, we work with a
‘symmetric cubical structure’, which includes the transposition symmetries, because this allows for a str ..."
Cited by 5 (2 self)
form a cubical set with compositions x +i y in all directions, which are computed using pushouts and behave ‘categorically ’ in a weak sense, up to suitable comparisons. Actually, we work with a
‘symmetric cubical structure’, which includes the transposition symmetries, because this allows for a strong simplification of the coherence conditions. These notions will be used in subsequent
papers to study topological cospans and their use in Algebraic Topology, from tangles to cobordisms of manifolds. We also introduce the more general notion of a multiple category, where- to start
with-arrows belong to different sorts, varying in a countable family, and symmetries must be dropped. The present examples seem to show that the symmetric cubical case is better suited for
topological applications.
"... Abstract. The notion of cartesian bicategory, introduced in [C&W] for locally ordered bicategories, is extended to general bicategories. It is shown that a cartesian bicategory is a symmetric
monoidal bicategory. 1. ..."
Abstract. The notion of cartesian bicategory, introduced in [C&W] for locally ordered bicategories, is extended to general bicategories. It is shown that a cartesian bicategory is a symmetric
monoidal bicategory. 1.
"... Abstract. Quantum categories were introduced in [5] as generalizations of both bi(co)algebroids and small categories. We clarify details of that work. In particular, we show explicitly how the
monadic definition of a quantum category unpacks to a set of axioms close to the definitions of a bialgebro ..."
Abstract. Quantum categories were introduced in [5] as generalizations of both bi(co)algebroids and small categories. We clarify details of that work. In particular, we show explicitly how the
monadic definition of a quantum category unpacks to a set of axioms close to the definitions of a bialgebroid in the Hopf algebraic literature. We introduce notions of functor and natural
transformation for quantum categories and consider various constructions on quantum structures.
"... composites of left and right derived functors Michael Shulman Abstract. We introduce a new categorical framework for studying derived functors, and in particular for comparing composites of left
and right derived functors. Our central observation is that model categories are the objects of a double ..."
composites of left and right derived functors Michael Shulman Abstract. We introduce a new categorical framework for studying derived functors, and in particular for comparing composites of left and
right derived functors. Our central observation is that model categories are the objects of a double category whose vertical and horizontal arrows are left and right Quillen functors, respectively,
and that passage to derived functors is functorial at the level of this double category. The theory of conjunctions and mates in double categories, which generalizes the theory of adjunctions and
mates in 2-categories, then gives us canonical ways to compare composites of left and right derived functors. We give a number of sample applications, most of which are improvements
"... composites of left and right derived functors Michael Shulman Abstract. We introduce a new categorical framework for studying derived functors, and in particular for comparing composites of left
and right derived functors. Our central observation is that model categories are the objects of a double ..."
Add to MetaCart
composites of left and right derived functors Michael Shulman Abstract. We introduce a new categorical framework for studying derived functors, and in particular for comparing composites of left and
right derived functors. Our central observation is that model categories are the objects of a double category whose vertical and horizontal arrows are left and right Quillen functors, respectively,
and that passage to derived functors is functorial at the level of this double category. The theory of conjunctions and mates in double categories, which generalizes the theory of adjunctions and
mates in 2-categories, then gives us canonical ways to compare composites of left and right derived functors. We give a number of sample applications, most of which are improvements
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3221732","timestamp":"2014-04-17T05:00:33Z","content_type":null,"content_length":"35068","record_id":"<urn:uuid:b246ba52-655b-4a6c-9741-7dcadbd2d83e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Shape of Content. Creative Writing in Mathematics and Science
This remarkable book collects some interesting creative writing of 21 authors (young poets, writers, artists, mathematicians, geologists and philosophers). At the end, the editors add short
biographical notes of the contributors. The contributions are in the form of short stories, poems, essays, dramas, fictions, nonfictions and play excerpts. Each of them has a strong mathematical or
scientific content. One of the main aims of the book is to show the beauty of mathematics and the sciences and to reveal some areas where art, science and mathematics come together. Another aim is to
present creativity of mathematicians and theoretical scientists and to illustrate their works, results and ideas. The book gives many opportunities to think about and discuss scientific works, their
difficulties and their roles in our society, to learn why some people do science, to encourage young students into science, and to criticise the current situation and system. The book can be
recommended to readers interested in science and literature.
|
{"url":"http://www.euro-math-soc.eu/node/1162","timestamp":"2014-04-21T12:26:22Z","content_type":null,"content_length":"12422","record_id":"<urn:uuid:35ed4ac0-6b8d-4048-a9a4-833138044de7>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
|
algebra - scientific group videos
Algebra movies. Algebra is a branch of mathematics concerning the study of structure, relation, and quantity. Together with geometry, analysis, combinatorics, and number theory, algebra is one of...
Tags: Algebra
Type: public
Your Status:You are not the member of this group. Created By: benchwork
|
{"url":"http://www.dnatube.com/group/algebra","timestamp":"2014-04-18T18:14:22Z","content_type":null,"content_length":"31174","record_id":"<urn:uuid:c12352f5-d8f6-404b-807f-40c3b8b7c316>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Details of Grant
EPSRC Reference: EP/J00829X/1
Title: Perron-Frobenius theory and max-algebraic combinatorics of nonnegative matrices
Principal Investigator: Butkovic, Professor P
Other Investigators:
Researcher Co-investigators:
Project Partners:
Department: School of Mathematics
Organisation: University of Birmingham
Scheme: Standard Research
Starts: 12 March 2012 Ends: 11 March 2014 Value (£): 175,775
EPSRC Research Topic Classifications: Algebra & Geometry Numerical Analysis
EPSRC Industrial Sector Classifications: No relevance to Underpinning Sectors
Related Grants:
Panel History:
│Panel Date │Panel Name                                              │Outcome  │
│05 Sep 2011│Mathematics Prioritisation Panel Meeting September 2011 │Announced│
Summary on Grant Application Form
Max-algebra is a rapidly evolving branch of mathematics with potential to solve a class of non-linear problems in mathematics, science and engineering that can be given the form of linear problems,
when arithmetical addition is replaced by the operation of maximum and arithmetical multiplication is replaced by addition. Besides the advantage of dealing with non-linear problems as if they were
linear, the techniques of max-algebra enable us in many cases to efficiently describe the set of all solutions and thus to choose a best one with respect to a specified criterion. It also provides an
algebraic encoding of a class of combinatorial or combinatorial optimisation problems.
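The proposal itself contains no code; purely as an illustration of the sentence above (with made-up numbers, not values from the grant), the following Python sketch shows how "addition" becomes maximum and "multiplication" becomes ordinary addition, so that a non-linear timing problem takes a formally linear shape.

```python
# Minimal max-plus ("tropical") arithmetic sketch.
# In max-algebra, a (+) b = max(a, b) and a (x) b = a + b, so a matrix-vector
# "product" models, e.g., earliest start times in a multiprocessor system.
NEG_INF = float("-inf")  # the max-plus "zero" element (unused here, shown for completeness)

def maxplus_matvec(A, x):
    """Max-plus product: (A (x) x)_i = max_j (A[i][j] + x[j])."""
    return [max(a_ij + x_j for a_ij, x_j in zip(row, x)) for row in A]

# Two interacting processors: A[i][j] = delay before stage output j feeds component i.
A = [[3.0, 7.0],
     [2.0, 4.0]]
x = [0.0, 1.0]            # start times of the current stage

for stage in range(3):    # start times of the next three stages
    x = maxplus_matvec(A, x)
    print("stage", stage + 1, "start times:", x)
```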
Although the foundations of max-algebra were created in the first pioneering papers produced in the heart of England 50 years ago, it is mainly after 2000 that we see a remarkable expansion of this
area in a number of research centres worldwide (e.g. Paris, Berkeley, San Diego, Delft, Madison and Moscow). Nowadays it penetrates a range of areas of mathematics from algebraic topology, functional
analysis, linear algebra and geometry, to non-linear, discrete and stochastic optimisation and mathematical biology. The number of conferences, mini-symposia, workshops and other events devoted
partly or wholly to max-algebra is increasing. A number of research monographs have been published, three of them since 2005. Applications are both theoretical (for instance in discrete-event dynamic
systems, control theory and optimisation) and practical (analysis of the Dutch railway network).
Following the recent remarkable expansion of max-algebra and latest research findings, it seems to be the right time to use the recently developed powerful combinatorial techniques of max-algebra to
strengthen the interplay between max-algebra and conventional linear algebra. This means, for instance, developing the Perron-Frobenius theory in semirings, developing the theory of max-algebraic tensors, and solving mean-payoff games and max-algebraic matrix equations. This will have an immediate impact on the understanding of a range of properties of matrices which find applications in other areas of
mathematics, in physics, computer science, engineering, biology and elsewhere in a way similar to that of conventional linear algebra. For instance it will enable researchers to solve systems of
max-algebraic equations, help to analyse complex systems of information technology by using a max-algebraic rather than traditional model, find a steady regime in systems with max-linear dynamics,
model and solve problems arising in solid state physics, or in certain types of scheduling problems.
To feed into this project and also to help to address the challenges, the PI will link this research with the work of the existing UK working group in tropical mathematics funded by the London
Mathematical Society, which he chairs. Research meetings of this group are organised three times a year in Warwick, Manchester and Birmingham and are attended by more than 30 colleagues from a number
of UK universities. The PI will form collaborative networks and strategic partnership with a number of internationally leading centres in max-algebra to further advance the field.
It is expected that as a consequence of this project the PI will obtain support to organise an international conference on tropical mathematics at Birmingham in 2014 or 2015 and in the future to
create a centre for tropical mathematics (CTM), which will have several funded research projects. CTM will organise international research workshops and conferences, provide expertise for industrial
partners and for specialised undergraduate and postgraduate courses. It will closely cooperate with the research group CICADA at the University of Manchester and with the existing similar centre at
the Ecole Polytechnique in Paris.
Key Findings
Among the main findings so far are:
1. Full characterisation of tropical weakly stable matrices in terms of Hamilton cycles. Orbits of these matrices by definition never reach an eigenvector unless they start in one. We proved that
irreducible weakly stable matrices are exactly those whose critical graph is a Hamilton cycle in the associated graph. Using the Frobenius normal form of a matrix a necessary and sufficient condition
was found for any (reducible) matrix. These criteria can be checked in polynomial time. This result enables us to efficiently characterise multiprocessor interactive processes which never reach
a stable regime unless they start from a stable state. This research was initiated before the start of the project but was finalised during the current project.
2. Proof that the sequence of eigencones (that is cones of nonnegative eigenvectors) of matrix powers is periodic both in max algebra and in nonnegative linear algebra. Using the max-algebraic
Perron-Frobenius theory we have also shown that the Minkowski sum of the eigencones of matrix powers is equal to the core of the matrix defined as the intersection of nonnegative column spans of
matrix powers, also in max algebra. Based on this, we describe the set of extremal rays of the core. The theory of the matrix core has been developed in max algebra and in nonnegative linear algebra
simultaneously, in order to unify and compare both versions of the same theory. Further substantial results in this area are expected to be finalised soon.
3. (Joint research with M. MacCaig) Conditions for existence and description of max-algebraic integer subeigenvectors and eigenvectors of a given square matrix. We proved that the former can be
solved as easily as the corresponding question without the integrality requirement (that is in polynomial time). An algorithm was presented for finding an integer point in the tropical column space
of a matrix or deciding that no such vector exists. This algorithm was used to solve the problem of integer eigenvectors for any matrix. The algorithm was shown to be pseudopolynomial for finite
matrices, which implies that this problem can be solved in pseudopolynomial time for any irreducible matrix. We have also identified classes of matrices for which it can be solved in polynomial time.
4. We have studied the max-algebraic analogues of equations involving Z-matrices and M-matrices, with an outlook to a more general algebraic setting. We have shown that these equations can be solved
using the Frobenius trace-down method in a way similar to that in nonnegative linear algebra that characterises the solvability in terms of supports and access relations. We proved a description of
the solution set as a combination of the least solution and the eigenspace of the matrix, and provide a general algebraic setting in which this result holds.
5. (Joint research with O. Mason and B. Benek Gursoy) The Analytic Hierarchy Process (AHP) is widely used for decision making involving multiple criteria. A max-algebraic approach to the single
criterion AHP, introduced previously by Elsner and van den Driessche, has been extended to the multi-criteria AHP, by considering multi-objective generalisations of the single objective optimisation
problem. The existence of min-max optimal solutions is characterized by means of the spectral radius of the associated tropical matrix semigroup. The existence of globally optimal solutions is shown
to be related to the commutativity properties of the associated matrices.
6. We have studied the ultradiscrete analogue of the Lax pair. This "pair" is a max-plus linear system comprising four equations. Our starting point is to treat this system as a combination of two
max-plus eigenproblems, with two additional constraints. Though of infinite dimensions, these two eigenproblems can be treated by means of the "standard" max-plus spectral theory. In particular, any
solution to the system can be described as a tropical combination of fundamental eigenvectors associated with each soliton.
7. Several types of semigroups of matrices (commutative, nilpotent and
quasinilpotent) were considered in the joint work of Sergeev with Litvinov and
Shpiz. The existence of a common eigenvector was proved for matrices
with complex or real nonnegative entries both in the conventional
and tropical linear algebra.
8. The joint paper of Sergeev with Litvinov, Rodionov and Sobolevskii
is a survey on universal algorithms for solving Z-equations (also known as Bellman equations) over semirings and especially tropical and idempotent semirings.
Some new algorithms for special types of matrices are also presented.
9. Tropical hemispaces, defined as tropically convex sets whose complements are tropically convex, were investigated in the joint paper of Sergeev with Katz and Nitica. The paper introduces the
concept of (P,R)-decomposition yielding a new kind of representation of tropically convex sets extending the classical idea of representing convex sets by means of extreme points and rays. The
tropical hemispaces are then characterized as tropically convex sets admitting a (P,R)-representation of a specific kind.
Potential use in non-academic contexts
1. Most of the problems studied in this project may find immediate application in any multiprocessor interactive system, which is essentially any multi-stage system (mainly in industry but also for
instance in biology) in which processes run in stages and the individual components of the system work interactively so that the work of a component cannot start before the work of some or all
components in the previous stage is finished. The use of the results in this project is likely to increase the efficiency of the system and enable its smooth and stable running.
2. Network Rail has expressed interest in the use of max-algebraic methodology for the analysis of stability of the UK railway network and its possible use in railway scheduling. This follows a
successful attempt for such an application in the Dutch railway network done in recent years. It is expected that this would eventually lead to a smoother and more reliable train service in selected
areas. Consequently, it would have an impact on the efficiency of the use of resources and thus also on environmental sustainability. It is expected that the project will attract attention to
max-algebra as a new mathematical area that provides novel approaches to timetabling in railways and other means of transport. Further funding for this research and its applications is being sought.
No information has been submitted for this grant.
Sectors submitted by the Researcher
Information & Communication Technologies; Manufacturing; Transport
Project URL: http://web.mat.bham.ac.uk/P.Butkovic/Grant_2011.html
Further Information:
Organisation Website: http://www.bham.ac.uk
|
{"url":"http://gow.epsrc.ac.uk/NGBOViewGrant.aspx?GrantRef=EP/J00829X/1","timestamp":"2014-04-20T04:05:15Z","content_type":null,"content_length":"40804","record_id":"<urn:uuid:4a9fde0a-7b46-4837-896a-ca3b86019c89>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Memorandum from R. L. Murray and A. C. Menius, Jr. to C. K. Beck and Reactor Committee, with Appendix [a machine-readable transcription]
Authors: Murray, Raymond L.; Menius, A. C., Jr. Creation of machine-readable version and digital images: Russell S. Koonts.
Conversion to TEI.2-conformant markup: NCSU Science and Technology Electronic Text Center, ca. 9 kilobytes. NCSU Libraries, Raleigh, NC.
Available from: NC State University Archives
URL: http://www.lib.ncsu.edu/archives/etext/engineering/reactor/murray/
Nuclear Reactor Digitization Project
Raymond L. Murray Reactor Project Notebook
Illustrations have been included from the print version. Scanned by Russell Koonts with Photoshop 5.0 software. Raymond L. Murray A. C. Menius, Jr. 3 pp. Manuscript copy consulted: NCSU Libraries
call number UA105.16
Prepared for the NCSU Libraries Science and Technology Electronic Text Project.
Spell-check and verification made against printed text using Notetab spell checker.
The lineation of the manuscript has been maintained and all end-of-line hyphens have been preserved.
The images exist as archived TIFF images, one or more JPEG versions for general use, and thumbnails.
Keywords in the header are a local Electronic Text Center scheme to aid in establishing analytical groupings.
ID elements are given for each page element and are composed of the text's unique cryptogram and the given page number, as in NEprop033050a for the title page of Clifford Beck's Proposal.
Memorandum from R. L. Murray and A. C. Menius, Jr. to C. K. Beck and Reactor Committee, with Appendix. Typescript, 3 pp., April 16, 1951.
April 16, 1951. NCSC-12. TO: C. K. Beck and Reactor Committee. FROM: R. L. Murray and A. C. Menius, Jr. SUBJECT: Control Rod-Power Characteristic
For purposes of instrument and control-rod design it is necessary to know the effect of changes in control- rod position in the reactor power level. Such data are normally obtained empirically; but
lacking a precise graph from Los Alamos, a rough theoretical calculation was made.
The power is shown to be approximately proportional to the excess reactivity; the latter in turn is given by an S-shaped curve which is similar to a displaced sine curve. The variation with position
of rod of the power is shown in the attached figure, adjusted to fit a total value of 20,000 microres.
The arguments leading to this graph are given in the appendix of this note.
APPENDIX Derivation of Excess Reactivity Curve
The effect of a cadmium or boron control rod is to depress the neutron flux, usually to zero at the boundary. If such a rod were inserted axially in a circular cylinder of length much greater than
the diameter, the neutron flux would be represented, in a bare reactor, by the function
where Jo is the Bessel function and the dimensions are given in the sketch, Fig. 1.
(This is to be contrasted with the undisturbed flux
The critical condition for such a system would be written
where M2 is an effective migration length for neutrons.
(Undisturbed, the k is given by the same relation without the a)
In a finite cylindrical bare pile, there is an axial sinusoidal variation in flux density,
, "modulating" the flux formulas given above; see Fig. 2.
The approximation is now made that the effective k of a finite cylinder with a rod partially inserted is the average of the undisturbed value kv and the disturbed value kd, with weighting factors
proportional to the axial flux that is affected, namely the respective areas under the flux curve
It is easy to show that the effective k by this criterion is
If r1 is the excess reactivity value of the rod and the "rod-in" position corresponds to r = 0, then the excess reactivity r at any
distance the rod is pulled up, Z, is given by
In order to translate this into a power graph it is necessary only to note that if the pile is just critical at zero power with the rod in and the solution at the temperature of the cooling water,
for any other steady state at power P the excess reactivity due to the rod must just balance the drop due to the solution temperature rise, to again reach k = 1.
Calculations on heat removal and experimental data from Los Alamos indicate an essentially linear rise in allowed power with temperature. Thus P/P1 = T/T1 where (P1,T1) is the maximum operating
point, and (P,T) is any other.
If the temperature effect on reactivity is written
where α is the temperature coefficient, then the rod-power correlation is
This is plotted for the following assumed values of the constants
P1 = 10 kW, α = 240 μre/°C, r1 = 1.7 x 10^4 μre, T1 = 70°C
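The memorandum's displayed formulas were figures in the original and are not reproduced in this transcription. Purely as a rough illustration, the Python sketch below assumes an S-shaped rod-worth curve r(Z) = r1(1 − cos(πZ/H))/2, consistent with the appendix's area-weighting argument but not quoted from the memo, together with the stated reactivity balance (rod worth balances the temperature drop, and P/P1 = T/T1). Only the constants come from the text; the curve shape and printed numbers are reconstructions.

```python
# Hedged reconstruction of the rod-position vs. power correlation sketched in
# the memo. The rod-worth shape below is an assumption; only the constants
# P1, alpha, r1, T1 are taken from the memorandum.
import math

P1 = 10.0       # kW, maximum operating power
alpha = 240.0   # micro-re per deg C, temperature coefficient of reactivity
r1 = 1.7e4      # micro-re, total excess reactivity worth of the rod
T1 = 70.0       # deg C at the maximum operating point

def rod_worth(z_frac):
    """Assumed excess reactivity when the rod is pulled up a fraction z_frac of the core height."""
    return r1 * (1.0 - math.cos(math.pi * z_frac)) / 2.0

def power(z_frac):
    """Balance r(Z) = alpha*T with P/P1 = T/T1 gives P = P1 * r(Z) / (alpha * T1)."""
    return P1 * rod_worth(z_frac) / (alpha * T1)

for z in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"rod pulled up {z:4.0%}: ~{power(z):5.2f} kW")
```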
|
{"url":"http://www.lib.ncsu.edu/specialcollections/digital/text/engineering/reactor/murray/MurNBrodpower041651.xml","timestamp":"2014-04-24T15:11:14Z","content_type":null,"content_length":"21882","record_id":"<urn:uuid:fd569c0c-0eb0-4ee2-8edb-8deec983f8de>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simple Harmonic Motion in a Spring-Mass System
Time Required Short (2-5 days)
Prerequisites None
Material Availability Readily available
Cost Very Low (under $20)
Safety No issues
Many things in nature are periodic: the seasons of the year, the phases of the moon, the vibration of a violin string, and the beating of the human heart. In each of these cases, the events occur in
repeated cycles, or periods. In this project you will investigate the periodic motion of a spring, using a mini Slinky®. Basic physics will then allow you to determine the Hooke's Law spring
constant. Your analysis will also yield the effective mass of the spring, a factor that is important in real-world engineering applications.
In this science fair project you will investigate the mathematical relationship between the period (the number of seconds per bounce) of a spring and the load (mass) carried by the spring. Based on
the data you collect, you will be able to derive the spring constant, as described in Hooke's Law, as well as the effective mass of the spring.
David Whyte, PhD, Science Buddies
Slinky® is a registered trademark of Poof-Slinky, Inc.
Microsoft, Microsoft Excel is a U.S. registered trademark of Microsoft Corporation.
Last edit date: 2013-01-10
This project requires very simple materials to explore the physics of periodic motion. All you need is a mini Slinky® and some weights, such as small fishing sinkers. The period of the Slinky is the
time it takes to go through one down-and-up cycle when it is hung vertically from one end. The spring with the weight is a simple harmonic oscillator, which is a system that follows Hooke's law.
Hooke's law states that when the simple harmonic oscillator is displaced from its equilibrium position, it experiences a restoring force, F, proportional to the displacement, x, where k is a positive constant called the spring constant:
Hooke's Law: F = -kx
As you add weights to the spring, the period (or cycle time) changes. In this project, you will determine how adding more mass to the spring changes the period, T, and then graph this data to
determine the spring constant, k, and the equivalent mass, m[e], of the spring. The equation that relates period to mass, M, is shown below (a small numerical check follows the variable definitions):
Equation 1: T = 2π √(M / k)
• M is the load on the spring in kilograms (kg).
• k is the spring constant in units of Newtons/meter (N/m).
• T is the period in seconds (sec).
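As a quick sanity check of Equation 1 (not part of the original project write-up), a few lines of Python show the expected order of magnitude; the load comes from the materials list, while the spring constant here is a made-up illustrative value.

```python
# Period of an ideal spring-mass oscillator, T = 2*pi*sqrt(M/k).
import math

M = 0.035   # kg  (about 35 g of added weights, as suggested in the materials list)
k = 2.0     # N/m (illustrative value for a soft mini Slinky; yours will differ)

T = 2 * math.pi * math.sqrt(M / k)
print(f"period T = {T:.2f} s, i.e. about {60 / T:.0f} cycles per minute")
```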
In an ideal spring-mass system, the load on the spring would just be the added weight. But real springs contribute some of their own weight to the load. That is why the Slinky bounces even when there
is no weight added. So the equation can be modified to look like this:
Equation 2: T = 2π √((m + m[e]) / k)
In this equation, the total mass pulling down on the spring is actually comprised of two masses, the added weight, m, plus a fraction of the mass of the spring, which we will call the mass equivalent
of the spring, m[e]. Rearranging Equation 2 gives the form of the equation you will use later for graphing:
Equation 3: m = k · T^2/(4π^2) − m[e]
Based on this equation, if you graph the added mass, m, against T^2/(4π^2), you can determine the spring constant, k, and the mass equivalent, m[e], of the spring.
Terms and Concepts
To do this project, you should do research that enables you to understand the following terms and concepts:
• Simple harmonic oscillator
• Hooke's law
• Simple harmonic motion
• Physics of springs
• Spring constant
• How does adding mass change the period of a spring?
• The Hyperphysics website has helpful diagrams explaining simple harmonic motion:
Nave, C.R. (2006). Simple Harmonic Motion. Retrieved March 15, 2008 from the Departments of Physics and Astronomy, Georgia State University website: http://hyperphysics.phy-astr.gsu.edu/Hbase/
• Here is a brief introduction to Hooke's Law:
Krowne, A. (2005). Hooke's Law. Retrieved March 13, 2008 from http://planetphysics.org/encyclopedia/HookesLaw.html
• For more advanced students, this high school physics tutorial on Newton's second law of motion can help you understand how to convert units of mass (hanging from the spring) to units of force
(mass × acceleration due to gravity):
Henderson, T. (2004). Newton's Second Law. Retrieved March 13, 2008 from http://www.glenbrook.k12.il.us/gbssci/Phys/Class/newtlaws/u2l3a.html
Materials and Equipment
To do this project, you will need the following materials and equipment:
• Mini Slinky
• Weights to hang from the spring. Here are some tips:
□ Fishing sinkers work well since they have holes in them for attaching to the spring. You could also use hex nuts, or AAA batteries attached to the wire with tape.
□ You will need five identical items to get a spread of data for the graph. The total weight should be around 35 g, or approximately 1 ounce.
□ Depending on the weights you choose, you might need fine wire or string to attach the weights to the spring.
• A scale for measuring actual mass of weights used, accurate to +/- 1 gram. Use an electronic kitchen scale, a scale from your school lab, or a postal scale.
• Stopwatch, or clock with a second hand
• Lab notebook
• Graph paper
Experimental Procedure
1. Do your background research so that you are knowledgeable about the terms, concepts, and questions above. Be sure to record your data in your lab notebook as you go along.
2. Measure the mass of one of your weights, using the scale. If your scale does not measure small weights, you can weigh all five of your weights and divide by five. Then measure the mass of the spring itself.
3. Perform the following steps to collect your data:
a. Hold one end of the spring in your hand and let it bounce gently down and then back up.
b. Count the number of cycles the spring makes in 60 sec with no weight hanging from it.
c. Hang one weight from the spring (using a fine wire or string, if needed).
d. Count the number of cycles the spring goes through in 60 sec with the weight attached.
e. Perform at least three trials for each weight.
f. Repeat steps c-e for a series of different weights.
4. Keep track of your results in a data table like this one. Try using the program Microsoft Excel to make the tables and perform the calculations when you work through this project.
│ Load (mass added to spring) (g) │ Number of cycles per 60 sec            │ Average │
│                                 │ Trial #1   │ Trial #2   │ Trial #3     │         │
5. Make another table like the one below to convert your raw data into numbers that can be used to determine the spring constant and spring's effective mass.
│ Column │ Quantity                                                       │ How to fill it in                                               │
│ A      │ Added mass (kg)                                                │ Convert the load from grams to kilograms                        │
│ B      │ Average # cycles in 60 sec (1/min)                             │ From the table above                                            │
│ C      │ f, the frequency, or cycles per second (1/sec)                 │ Divide "Average # cycles in 60 sec" in column B by 60           │
│ D      │ T, the period of the spring, or the time for each cycle (sec)  │ Reciprocal of the frequency in column C (divide 1 by column C)  │
│ E      │ T^2/(4π^2) (sec^2)                                             │ Multiply the value in column D by itself and divide by 4π^2     │
6. Make a graph with "Added mass," m, in kilograms, on the y-axis, and T^2/(4π^2), in sec^2, on the x-axis. Use kilograms rather than grams so that the value of k is in units of N/m, which is equivalent to kg/sec^2.
Usually you are instructed to graph the independent variable (mass in this case) on the x-axis and the measured parameter (here T^2/(4π^2)) on the y-axis, but putting m on the y-axis will let you read m[e] from the y-intercept.
This graph of m vs. T^2/(4π^2) should be a straight line.
Let's look at the equation. It has a form similar to the equation of a straight line: y = ax + b, where a is the slope and b is the y-intercept. In fact, Equation 3 is an equation for a straight line, with slope equal to k, the spring constant, and y-intercept equal to the negative value of m[e]. In other words, there is a linear relationship between m and T^2/(4π^2). This is why you can calculate k and m[e] from the graph of m vs. T^2/(4π^2).
How do you determine the slope of the line you have drawn? The slope is measured as the change in m divided by the change in T^2/(4π^2):
slope = k = Δm / Δ(T^2/(4π^2))
│ Added mass (kg) │ T^2/(4π^2) (sec^2) │ Δy (kg)                                            │ Δx (sec^2)                                         │ Δy/Δx (kg/sec^2)                                             │
│                 │                    │ Subtract one y value from another, larger y value  │ Subtract one x value from another, larger x value  │ This is the spring constant, k, in units of N/m (kg/sec^2)  │
Once you have determined the value of the spring constant, k, from the slope of the line, you're ready to determine the effective mass of the spring. To do this, extend the straight line until it
intersects the vertical y-axis. The line will intersect the y-axis at -m[e] (negative m[e]). Based on theoretical considerations, the absolute value of m[e] should be around one-third of the mass
of the spring.
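The write-up stops at a hand-drawn graph; as an optional extra (not part of the original procedure), the short Python sketch below does the same straight-line analysis numerically with numpy, using made-up sample data in place of your measurements. The slope of the fitted line is k and the negative of the intercept is m[e].

```python
# Fit m = k * (T^2 / (4*pi^2)) - m_e to (illustrative) spring-mass data.
import math
import numpy as np

added_mass_kg = np.array([0.007, 0.014, 0.021, 0.028, 0.035])   # sample loads
cycles_per_60s = np.array([68.0, 55.0, 48.0, 43.0, 39.0])       # sample averages

period_s = 60.0 / cycles_per_60s            # T, seconds per cycle
x = period_s**2 / (4 * math.pi**2)          # T^2/(4*pi^2), the x-axis values

slope, intercept = np.polyfit(x, added_mass_kg, 1)   # straight-line fit
k = slope                                             # spring constant, N/m
m_e = -intercept                                      # effective mass of the spring, kg

print(f"k   ≈ {k:.2f} N/m")
print(f"m_e ≈ {m_e * 1000:.1f} g (expect roughly one-third of the spring's mass)")
```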
• Perform the project with different types and sizes of springs.
• For an experiment using a spring-based mechanical model of the human knee, see the Science Buddies project Deep Knee Bends: Measuring Knee Stress with a Mechanical Model.
• For a project to investigate Hooke's law and to determine the spring constant by an alternative procedure, see the Science Buddies project Applying Hooke's Law: Make Your Own Spring Scale.
|
{"url":"http://www.sciencebuddies.org/science-fair-projects/project_ideas/Phys_p064.shtml","timestamp":"2014-04-18T23:21:03Z","content_type":null,"content_length":"43881","record_id":"<urn:uuid:64424d95-f26c-46b9-b30c-a907b82a3b45>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Calculate an Object’s Velocity Based on Its Displacement
In physics, velocity, which is the rate of change of position (or speed in a particular direction), is a vector. Imagine that you just hit a ground ball on the baseball diamond and you’re running
along the first-base line, or the s vector, 90 feet at a 45-degree angle to the positive x-axis. But as you run, it occurs to you to ask, Will my velocity enable me to evade the first baseman? A
good question, because the ball is on its way from the shortstop.
Whipping out your calculator, you figure that you need 3.0 seconds to reach first base from home plate; so what’s your velocity? To find your velocity, you quickly divide the s vector by the time it
takes to reach first base:
v = s / t = (90 feet at 45 degrees) / (3.0 seconds)
This expression represents a displacement vector divided by a time, and time is just a scalar. The result must be a vector, too. And it is: velocity, or v:
Your velocity is 30 feet/second at 45 degrees, and it’s a vector, v.
Dividing a vector by a scalar gives you a vector with potentially different units and the same direction.
In this case, you see that dividing a displacement vector, s, by a time gives you a velocity vector, v. It has the same magnitude as when you divided a distance by a time, but now you see a direction
associated with it as well, because the displacement, s, is a vector.
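A few lines of Python (an illustration added here, not part of the original article) make the same point numerically by dividing the displacement vector's components by the scalar time:

```python
# Divide a displacement vector by a scalar time to get a velocity vector.
import math

distance_ft = 90.0   # magnitude of the displacement s (home plate to first base)
angle_deg = 45.0     # direction of s relative to the positive x-axis
time_s = 3.0         # time taken to cover s

# Components of s, then v = s / t component-wise.
sx = distance_ft * math.cos(math.radians(angle_deg))
sy = distance_ft * math.sin(math.radians(angle_deg))
vx, vy = sx / time_s, sy / time_s

speed = math.hypot(vx, vy)                      # magnitude: 30 ft/s
direction = math.degrees(math.atan2(vy, vx))    # direction: 45 degrees
print(f"v = ({vx:.1f}, {vy:.1f}) ft/s -> {speed:.0f} ft/s at {direction:.0f} degrees")
```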
|
{"url":"http://www.dummies.com/how-to/content/how-to-calculate-an-objects-velocity-based-on-its-.html","timestamp":"2014-04-24T14:16:59Z","content_type":null,"content_length":"54171","record_id":"<urn:uuid:a60af05d-467a-44e5-93be-8dc69087134c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Somerset, MA Algebra Tutor
Find a Somerset, MA Algebra Tutor
...Since then, I have always been a math tutor. In a way, being a math tutor came naturally to me. What most of my clients say they like most about having me as a tutor is how I help them find
their own sense of style when it comes to problem solving skills by showing them a few ways of how to solve them, and then letting them choose which method works best for them.
31 Subjects: including algebra 1, algebra 2, chemistry, calculus
...My students do not feel that it's just another day in the classroom. I am flexible in scheduling and availability matters, and aim to satisfy the needs of my students and their families.
Please contact me if this sounds like a fit.
16 Subjects: including algebra 1, reading, ESL/ESOL, GED
...I have been working with elementary level students since I was 16, and feel comfortable and confident in my ability to effectively teach elementary level curriculum. I truly love teaching, and
am hoping to spend the summer working with tutees who need my help! I am available to begin tutoring s...
15 Subjects: including algebra 1, reading, English, prealgebra
...With those two things put together, wonderful doors lay open. My areas of expertise lay in Classical History, Latin, and Art History with a strong background in Math, English and Elementary
Ancient Greek. I am always eager to pass on my knowledge and love of these subjects.
18 Subjects: including algebra 2, vocabulary, European history, English
...My first two years of study were dedicated to a Physics degree and, after much thought, I decided to switch majors. That being said, I did complete all the required math courses including
Calculus 1-3 and Differential Equations and, additionally, Classical Physics 1-3, Electricity and Magnetism,...
15 Subjects: including algebra 2, calculus, algebra 1, English
Related Somerset, MA Tutors
Somerset, MA Accounting Tutors
Somerset, MA ACT Tutors
Somerset, MA Algebra Tutors
Somerset, MA Algebra 2 Tutors
Somerset, MA Calculus Tutors
Somerset, MA Geometry Tutors
Somerset, MA Math Tutors
Somerset, MA Prealgebra Tutors
Somerset, MA Precalculus Tutors
Somerset, MA SAT Tutors
Somerset, MA SAT Math Tutors
Somerset, MA Science Tutors
Somerset, MA Statistics Tutors
Somerset, MA Trigonometry Tutors
Nearby Cities With algebra Tutor
Assonet algebra Tutors
Bristol, RI algebra Tutors
Central Falls algebra Tutors
Dartmouth algebra Tutors
Dighton, MA algebra Tutors
Fall River, MA algebra Tutors
Freetown, MA algebra Tutors
Lincoln, RI algebra Tutors
Middleborough, MA algebra Tutors
Norton, MA algebra Tutors
Portsmouth, RI algebra Tutors
Rehoboth, MA algebra Tutors
Seekonk algebra Tutors
Swansea, MA algebra Tutors
Westport, MA algebra Tutors
|
{"url":"http://www.purplemath.com/somerset_ma_algebra_tutors.php","timestamp":"2014-04-18T11:46:52Z","content_type":null,"content_length":"24001","record_id":"<urn:uuid:1dd81b5a-2258-48ab-b21d-5c94c2648c3d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From OeisWiki
About this page
• This is part of the series of OEIS Wiki pages that list works citing the OEIS.
• Additions to these pages are welcomed.
• But if you add anything to these pages, please be very careful — remember that this is a scientific database. Spell authors' names, titles of papers, journal names, volume and page numbers, etc.,
carefully, and preserve the alphabetical ordering.
• If you are unclear about what to do, contact one of the Editors-in-Chief before proceeding.
• Works are arranged in alphabetical order by author's last name.
• Works with the same set of authors are arranged by date, starting with the oldest.
• This section lists works in which the first author's name begins with the letter I.
• The full list of sections is: CiteA, CiteB, CiteC, CiteD, CiteE, CiteF, CiteG, CiteH, CiteI, CiteJ, CiteK, CiteL, CiteM, CiteN, CiteO, CiteP, CiteQ, CiteR, CiteS, CiteT, CiteU, CiteV, CiteW,
CiteX, CiteY, CiteZ.
• For further information, see the main page for Works Citing OEIS.
1. Ionut E. Iacob, T. Bruce McLean and Hua Wang, The V-flex, Triangle Orientation, and Catalan Numbers in Hexaflexagons, The College Mathematics Journal, Vol. 43, No. 1 (January 2012), pp. 6-10.
2. Douglas E. Iannucci, "The Kaprekar Numbers", J. Integer Sequences, Volume 3, 2000, Article 00.1.2.
3. Douglas E. Iannucci and Bertrum Foster, "Kaprekar Triples", J. Integer Sequences, Volume 8, 2005, Article 05.4.8.
4. Douglas E. Iannucci and Donna Mills-Taylor, "On Generalizing the Connell Sequence", J. Integer Sequences, Volume 2, 1999, Article 99.1.7.
5. Douglas E. Iannucci, Deng Moujie and Graeme L. Cohen, "On Perfect Totient Numbers", J. Integer Sequences, Volume 6, 2003, Article 03.4.5.
6. Aminu A. Ibrahim, An enumeration scheme and some algebraic properties of a special (132)-avoiding class of permutation patterns, Trends Apl. Sci. Res. 2 (4) (2007) 334-350
7. A. M. Ibrahim, Extension of factorial concept to negative numbers, Notes on Number Theory and Discrete Mathematics, Vol. 19, 2013, 2, 30-42; http://www.nntdm.net/papers/nntdm-19/
8. Kentaro Ihara, Derivations and automorphisms on non-commutative power series, Journal of Pure and Applied Algebra, Volume 216, Issue 1, January 2012, Pages 192-201; doi:10.1016/j.jpaa.2011.06.004
9. M. Iida, On Triangle of numbers, Josai Mathematical Monographs, Vol. 5 (2012), 61-70; http://libir.josai.ac.jp/infolib/user_contents/pdf/JOS-13447777-05_61.pdf
10. Soichi Ikeda and Kaneaki Matsuoka, On the Lcm-Sum Function, Journal of Integer Sequences, Vol. 17 (2014), Article 14.1.7
11. S. Ikeda, K. Matsuoka, On transcendental numbers generated by certain integer sequences, Siauliai Math. Semin., 8 (16) 2013, 63-69; http://siauliaims.su.lt/pdfai/2013/Iked-Mats-2013.pdf
12. Aleksandar Ilic and Andreja Ilic, doi:10.2298/FIL1103191I On the number of restricted Dyck paths, Filomat 25:3 (2011), 191-201; PDF
13. A. Ilic, S. Klavzar and Y. Rho, Parity index of binary words and powers of prime words, http://www.fmf.uni-lj.si/~klavzar/preprints/BalancedFibo-submit.pdf, 2012
14. L. Ilie and V. Mitrana, Binary Self-Adding Sequences and Languages, TUCS Technical Reports No. 18, May 1996.
15. Images des Maths, CNRS, Lagrange et la variation des théorèmes, http://images.math.cnrs.fr/Lagrange-et-la-variation-des.html (2013)
16. K. S. Immink, Coding Schemes for Multi-Level Channels that are Intrinsically Resistant Against Unknown Gain and/or Offset Using Reference Symbols, http://www.exp-math.uni-essen.de/~immink/pdf/
jsac13.pdf, 2013.
17. Yoshinari Inaba, "Hyper-Sums of Powers of Integers and the Akiyama-Tanigawa Matrix", J. Integer Sequences, Volume 8, 2005, Article 05.2.7.
18. International Mathematical Union, Minutes of 17th Meeting of Organizing Committee, 2013; http://www.mathunion.org/fileadmin/CEIC/Minutes/17th_Minutes-OC.pdf
19. Eugen J. Ionascu, "A Parametrization of Equilateral Triangles Having Integer Coordinates", J. Integer Sequences, Volume 10, 2007, Article 07.6.7.
20. Eugen J. Ionascu, A characterization of regular tetrahedra in Z^3 (2007); arXiv:0712.3951; Journal of Number Theory, Volume 129, Issue 5, May 2009, Pages 1066-1074.
21. Eugen J. Ionascu, arXiv:math/0701111 Counting all equilateral triangles in {0,1,2,...,n}^3, (2007).
22. E. J. Ionascu, Regular tetrahedra whose vertices have integer coordinates, Acta Math. Univ. Comenianae, Vol. LXXX, 2 (2011), pp. 161-170
23. E. J. Ionascu, Ehrhart's polynomial for equilateral triangles in Z^3, Arxiv preprint arXiv:1107.0695, 2011.
24. E, J, Ionascu, Lattice Platonic Solids and their Ehrhart polynomial, Arxiv preprint arXiv:1111.1150, 2011
25. Ionascu, Eugen J.; and Markov, Andrei; doi:10.1016/j.jnt.2010.07.008 Platonic solids in Z^3, J. Number Theory 131 (2011), no. 1, 138-145.
26. Eugen J. Ionascu and R. A. Obando, Cubes in {0,1,...,N}^3, INTEGERS, 12A (2012), #A9.
27. Lawrence Ip, Catalan numbers and random matrices (1999)
28. J. Iraids, K. Balodis, J. Cernenoks, M. Opmanis, R. Opmanis and K. Podnieks, Integer Complexity: Experimental and Analytical Results. Arxiv preprint arXiv:1203.6462, 2012
29. E. Irurozki, B. Calvo, J. A. Lozano, An R package for permutations, Mallows and Generalized Mallows models, 2014; https://addi.ehu.es/bitstream/10810/11238/1/tr14-5.pdf
30. E. Irurozki, B. Calvo, J. A. Lozano, Sampling and learning the Mallows and Weighted Mallows models under the Hamming distance, 2014; https://addi.ehu.es/bitstream/10810/11240/1/tr14-3.pdf
31. E. Irurozki, B. Calvo, J. A. Lozano, Sampling and learning the Mallows model under the Ulam distance, 2014; https://addi.ehu.es/bitstream/10810/11241/1/tr14-4.pdf
32. Abraham Isgur, Vitaly Kuznetsov and Stephen M. Tanny, A combinatorial approach for solving certain nested recursions with non-slow solutions, Arxiv preprint arXiv:1202.0276, 2012
33. A. Isgur, D. Reiss, Trees and meta-Fibonacci sequences, El. J. Combinat. 16 (2009) #R129
34. Dan Ismailescu and Peter Seho Park, On Pairwise Intersections of the Fibonacci, Sierpinski, and Riesel Sequences, Journal of Integer Sequences, 16 (2013), #13.9.8.
35. Genta Ito, Least change in the Determinant or Permanent of a matrix under perturbation of a single element: continuous and discrete cases (2008); arXiv:0805.2081
36. Genta Ito, Approximate formulation of the probability that the Determinant or Permanent of a matrix undergoes the least change under perturbation of a single element (2008); arXiv:0805.2083
37. A. Iványi, Leader election in synchronous networks, Acta Univ. Sapientiae, Mathematica, 5, 2 (2013) 54-82.
38. A. Iványi, L. Lucz, T. Matuszka and S. Pirzada, Parallel enumeration of degree sequences of simple graphs, Acta Univ. Sapientiae, Informatica, 4, 2 (2012) 260-288.
39. A. Ivanyi and J. E. Schoenfield, Deciding football sequences, Acta Univ. Sapientiae, Informatica, 4, 1 (2012) 130-183, http://www.acta.sapientia.ro/acta-info/C4-1/info41-7.pdf.
40. H. Iwashita, J. Kawahara and S.-I. Minato, ZDD-Based Computation of the Number of Paths in a Graph, Division of Computer Science, Report Series A, September 18, 2012, Hokkaido University, 2012;
41. Kozue Iwata, Shiro Ishiwata and Shin-ichi Nakano, A Compact Encoding of Unordered Binary Trees, in Theory and Applications of Models of Computation, Lecture Notes in Computer Science, 2011,
Volume 6648/2011, 106-113, doi:10.1007/978-3-642-20877-5_11
|
{"url":"http://oeis.org/wiki/CiteI","timestamp":"2014-04-17T11:20:12Z","content_type":null,"content_length":"31789","record_id":"<urn:uuid:7c7aa76b-88c1-41b8-b35e-2d431d90790f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
|
February 23rd 2010, 12:06 AM
How exactly would I do this?
The normal to the ellipse x^2/a^2 + y^2/b^2 = 1 at P(x1, y1) meets the x-axis in N and the Y-axis in G. Prove PN/NG = (1-e^2)/e^2
February 23rd 2010, 12:20 AM
See this thread for some general comments about ellipses that should help with this problem. Remember that the eccentricity is given by $e^2 = 1 - \frac{b^2}{a^2}$.
February 26th 2010, 03:38 PM
I've spent a few days on this but I cant solve it. PLEASE HELP!!!
February 27th 2010, 06:56 AM
I'm not entirely surprised. This is a longer and messier calculation than I would have guessed. Here is an outline of how to do it.
Take P to be the point $(a\cos\theta,b\sin\theta)$. Then (see the link in my other comment above) the normal at P has equation $yb\cos\theta - xa\sin\theta = (b^2 - a^2)\cos\theta\sin\theta$.
Put y = 0 in that equation to see that N is the point $\Bigl(\frac{(a^2-b^2)\cos\theta}a,0\Bigr)$. Put x = 0 to see that G is the point $\Bigl(0,\frac{(b^2-a^2)\sin\theta}b\Bigr)$.
Then use the usual distance formula to see that $PN^2 = \Bigl(\frac{b^2\cos\theta}a\Bigr)^2 + b^2\sin^2\theta = \frac{b^2(b^2\cos^2\theta + a^2\sin^2\theta)}{a^2}$.
Similarly $NG^2 = (a^2-b^2)^2\Bigl(\frac{\cos^2\theta}{a^2} + \frac{\sin^2\theta}{b^2}\Bigr) = \frac{(a^2-b^2)^2(b^2\cos^2\theta + a^2\sin^2\theta)}{a^2b^2}$.
Therefore $\Bigl(\frac{PN}{NG}\Bigr)^2 = \frac{b^4}{(a^2-b^2)^2}$, and so $\frac{PN}{NG} = \frac{b^2}{a^2-b^2} = \frac{a^2(1-e^2)}{a^2e^2} = \frac{1-e^2}{e^2}$.
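For anyone who wants to double-check the algebra, here is a short SymPy script (added as an illustration, not part of the original thread) that reproduces the computation symbolically, using the points N and G found above:

```python
# Symbolic check that (PN/NG)^2 = ((1 - e^2)/e^2)^2 for the normal to an ellipse.
import sympy as sp

a, b, t = sp.symbols('a b theta', positive=True)

P = sp.Matrix([a*sp.cos(t), b*sp.sin(t)])           # point on the ellipse
N = sp.Matrix([(a**2 - b**2)*sp.cos(t)/a, 0])       # normal meets the x-axis
G = sp.Matrix([0, (b**2 - a**2)*sp.sin(t)/b])       # normal meets the y-axis

PN2 = sp.simplify((P - N).dot(P - N))               # PN^2
NG2 = sp.simplify((N - G).dot(N - G))               # NG^2

e2 = 1 - b**2/a**2                                  # eccentricity squared
print(sp.simplify(PN2/NG2 - ((1 - e2)/e2)**2))      # prints 0
```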
|
{"url":"http://mathhelpforum.com/geometry/130271-ellipse-print.html","timestamp":"2014-04-17T08:16:03Z","content_type":null,"content_length":"7701","record_id":"<urn:uuid:b2296ccb-c336-42fa-b2f9-7b0d749e687d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions - Re: Matheology § 285
Date: Jun 11, 2013 1:18 PM
Author: LudovicoVan
Subject: Re: Matheology § 285
<mueckenh@rz.fh-augsburg.de> wrote in message
> Matheology § 285
> In this article, I argue that it is impossible to complete infinitely many
> tasks in a finite time. A key premise in my argument is that the only
> way to get to 0 tasks remaining is from 1 task remaining, when tasks
> are done 1-by-1. I suggest that the only way to deny this premise is
> by begging the question, that is, by assuming that supertasks are
> possible.
Supertasks are mathematical constructs, and, unless shown that there is
something intrinsically incongruent in these constructions, they are
certainly "possible", and since after Zeno in use to model real-world
problems. Time is also irrelevant, it is impossible to complete infinitely
many tasks *effectively*: but we are using limits, i.e. where the
constructions allow limits to exist, we are not pretending that the process
is completed one step at a time, we are rather leveraging the structural
features that can be legitimately extended.
> Article first published online: 4 MAR 2012
> Pacific Philosophical Quarterly
> Volume 93, Issue 1, pages 1?7, March 2012
> <http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0114.2011.01412.x/full>
> Therefore it is not possible to enumerate all rational numbers
> (always infinitely many remain) by all natural numbers (always
> infinitely many remain) or to traverse the lines of a Cantor list (always
> infinitely many remain).
It is not possible to do so effectively...
|
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=9133176","timestamp":"2014-04-16T09:04:03Z","content_type":null,"content_length":"2848","record_id":"<urn:uuid:ba337df4-756e-4972-8aa3-543d2637efae>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Z94.2 - Anthropometry & Biomechanics: Anthropometry Section
Intro to Anthropometry | Dynamic/Functional Dimension Terms | Static Dimension Terms | Reference Plates | Glossary
Cross Reference List | Primary Bibliography | Secondary Bibliography
Editorial Note: Definitions in this section are treated differently from those in the other sections, with a diagrammatic instead of strictly alphabetical listing. Anthropometry terms can be found
alphabetically in the overall index. When a term is indexed as being in the Anthropometry section, the reader should refer to at least one – if not all – of the three listings in this chapter:
Dimensional Terminology, Plate List, and Cross Reference List. The reader should also refer to the Glossary.
Dynamic/Functional Dimension Terms
FUNCTIONAL ARM REACH CONTOURS. Statistical envelopes which describe functional arm reach at various lateral angles of certain percentages of a representative subject population for restrained and
unrestrained conditions. Specific derivative of this model is control reach contours in which reach to specific control locations by a representative population of users is determined.
FUNCTIONAL LEG REACH CONTOURS. Statistical envelopes which describe functional leg reach at various pedal heights of certain percentages of a representative subject population.
EYE CONTOURS. Statistical envelopes (elliptical in shape) which describe where eyes of certain percentages of representative user population are located in workspace environment. Model can be
developed with or without head movement.
HEAD CONTOURS. Statistical envelopes which describe location below which certain percentages of representative user population are located in workspace environment. This definition for head contour
can be generalized to other body landmarks (e.g. knees, elbows, shoulders, etc.).
SLEEP ENVELOPE. Statistical contours which describe height, breadth and length dimensions within which certain percentages of a representative user population can assume specific sleep postures
(e.g., prone or fetal).
< Previous | Next >
|
{"url":"http://www.iienet.org/PrinterFriendly.aspx?id=2622","timestamp":"2014-04-20T22:06:03Z","content_type":null,"content_length":"4593","record_id":"<urn:uuid:311a862b-f393-46ff-aa26-c2c1c5cd095c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How many tacks fit in the plane?
Call a tack the one point union of three open intervals. Can you fit an uncountable number of them on the plane? Or is only a countable number?
First of all, the one-point compactification of three open intervals is not a "tack", it's a three-leaf clover. I think that you mean a one-point union of three closed intervals; of
course it doesn't matter if the other three endpoints are there or not. This topological type can be called a "Y" or a "T" or a "simple triod". R.L. Moore published a solution to your
question in 1928. The answer is no. It was generalized in 1944 by his student Gail Young: You can only have countably many $(n-1)$-dimensional tacks in $\mathbb{R}^n$ for any $n \ge 2$.
For her theorem, the name "tack" makes rather more sense, but she calls it a "$T_n$-set".
Actually Moore's theorem applies to a more general kind of triod, in which three tips of the "Y" are connected to the center by "irreducible continua", rather than necessarily intervals.
I don't know whether I might be spoiling a good question, but here in any case is a solution to the original question (seeing as both Moore and Young did something more general that takes more discussion). Following domotorp's hint, there is a pigeonhole principle for uncountable sets. If $f:A \to B$ is a
function from an uncountable set $A$ to a countable set $B$, then there is an uncountable inverse image $A' = f^{-1}(b)$. If you want to show that $A$ does not exist, then you might as
well replace it with $A'$. Unlike the finite pigeonhole principle, which becomes more limited with each such replacement, $A'$ has the same cardinality as $A$, so you haven't lost
anything. You are even free to apply the uncountable pigeonhole principle again.
Suppose that you have uncountably many simple triods in the plane. Given a simple triod, we can choose a circle $C$ with rational radius and rational center with the branch point of the
triod on the inside and the three tips on the outside. Since there are only countably many such circles, there are uncountably many triods with the same circle $C$. We can trim the
segments of each such triod so that they stop when they first touch $C$, to make a pie with three slices (a Mercedes-Benz symbol). Then, given such a triod, we can pick a rational point
in each of three slices of the pie. Since there are only countably many such triples of points, there must be uncountably many triods with the same three points $p$, $q$, and $r$. In
particular there are two such triods, and a suitable version of the Jordan curve theorem implies that they intersect.
The argument can be simplified to just pick a rational triangle that functions as the circle, and whose corners function as the three separated points. But I think that there is something
to learn from the variations together, namely that the infinite pigeonhole principle gives you a lot of control. For instance, with hardly any creativity, you can assume that the triods
are all large.
This is a well-known puzzle/problem, the trick is to make an injective mapping from any set of disjoint tacks to triples of $\mathbb Q^2$.
|
{"url":"http://mathoverflow.net/questions/27244/how-many-tacks-fit-in-the-plane/27246","timestamp":"2014-04-21T15:54:01Z","content_type":null,"content_length":"61859","record_id":"<urn:uuid:bbe3ffdf-a9af-490f-b5c3-cb511736c9ca>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why Engineering Mathematics is not a "service" subject
We all have sensitive points !
I find that discussion of the 1976 Olympic Men's Hockey final distressing (Yes, Australia lost to New Zealand but please don't talk about it). A statistical analysis of the Bulldogs performance in
AFL is another matter that is best left alone with me (OK, we only have one Premiership).
At work, the description of Mathematics as a "service" subject in Engineering is likely to raise the blood pressure. The term service implies that mathematics is some kind of secondary topic to
Engineers, a kind of background material before they get to the meat of their degree. RRRRRRR
Of course, some Engineers don't directly use much mathematics in their daily jobs; this is particularly true in areas of management, sales/marketing and production. Even in the "hard" technical areas of Engineering such as design and research, only a few are regularly performing mathematical operations in their daily jobs. Quite correctly, a lot of Engineering involves "soft" skills associated with teamwork, communication and generic management skills. I feel no need to denigrate these skills compared to mathematics, physics and the core sciences associated with engineering, as it is clear to me that great engineering is as much a triumph of organisation and human co-operation as it is a celebration of powerful mathematics and science.
The film "Apollo 13" (and book by Jim Lovell and Jeffrey Kluger) is an excellent portrayal of the need for strong leadership and teamwork, as well as deep technical knowledge, in solving challenging
technical problems - in this case, finding a way to safely return astronauts from a damaged spaceship. "Houston, we have a problem" is the famous catch phrase from the film. This classic understatement from Gene Kranz (the legendary NASA flight director), who muttered these words in real life, emphasised the need for calm analysis in the face of imminent disaster. As the film shows in some detail, what follows is a story of determination, teamwork, careful scientific analysis of data and systematic evaluation of the options. The heroes of the film are as much the scientists, engineers and technicians on the ground as the three men in the damaged ship. We see the various players carefully checking calculations, modifying equations and running algorithms, as the drama unfolds. Human joy is unleashed as the Astronauts' voices are heard after splashdown; even the rock-like Kranz sheds a tear.
What a wonderful celebration of Engineers and Scientists !
And here is my point ..... all of this is underpinned and linked by mathematical skills and the language of mathematics. It is rigorous training in arithmetic, trigonometry, algebra and advanced
mathematics that allows the engineers to make sound choices under extreme pressure. As they rush to find the right path home, it is confidence in the core mathematics and physics behind their
calculations that allows them to make life and death decisions.
Of course, not many engineering projects are as dramatic as "Apollo 13", but the point remains the same: even when engineers are not directly carrying out mathematical operations and analysis, it is their training and confidence in mathematics and the fundamental sciences that empowers them to make wise choices. Mathematics is not only the language of technology but also one of its cornerstones. Mathematics is not "servicing" Engineering; it is a core topic, a central part of its nervous system and present in all its vital organs.
In summary, my advice to any young engineer is to pay close attention to your mathematics, develop your analytical skills and avoid supporting sporting teams that have only spasmodic success.
3 comments:
1. I agree with this. Engineering Mathematics is not a service subject; it's a core subject.
Management Jobs
2. Calling maths a service subject is akin to my teenage son whining and asking why he has to learn maths. This is of course followed by the obligatory adolescent catchcry of "I'll never use any of this." He has picked the wrong person to argue the toss with here: since I work in software development, I call on my maths skills most days :)
3. A petroleum engineering degree is your first class ticket to one of the most lucrative professions in the world. In fact, a B.S. in petroleum engineering will earn you the highest starting salary
for any. Visit View the site for more details.
|
{"url":"http://imaginingarchimedes.blogspot.co.uk/2012/08/wht-engineering-mathematics-is-not.html","timestamp":"2014-04-21T15:34:47Z","content_type":null,"content_length":"56153","record_id":"<urn:uuid:97aa839b-5304-4985-892f-988b6b7fa8f2>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What’s special with commutators in the Weyl group of C5?
I have just added to my notes on representation theory the very cute formula of Frobenius that gives, in terms of irreducible characters, the number $N(g)$ of representations of a given element $g$
as a commutator $g=[x,y]=xyx^{-1}y^{-1}$ in a finite group $G$:
$$N(g)=|G|\sum_{\chi}\frac{\chi(g)}{\chi(1)},$$
where $\chi$ runs over the irreducible (complex) characters of $G$ (this is Proposition 4.4.3 on page 118 of the last version of the notes).
I wanted to mention some applications, and had a vague memory that this was used to show that most or all elements in various simple groups are actual commutators. By searching around a bit, I found
out easily that, indeed, there was a conjecture of Ore from 1951 to the effect that the set of commutators is equal to $G$ for any non-abelian finite simple group $G$, and that (after various earlier
works) this has recently been proved by Liebeck, O’Brien, Shalev and Tiep.
I mentioned this of course, but then I also wanted to give some example of non-commutator, and decided to look for this using Magma (the fact that I am recovering from a dental operation played a
role in inciting me to find something distracting to do). Here’s what I found out.
First, a natural place to look for interesting examples is the class of perfect groups, of course not simple. This is also easy enough to implement since Magma has a database of perfect groups of
“small” order. Either by brute force enumeration of all commutators or by implementing the Frobenius formula, I got the first case of a perfect group $G$, of order $960$, which contains only $840$
distinct commutators.
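The brute-force count is easy to reproduce outside Magma. The following minimal Python sketch (an illustration only: it uses the symmetric group $S_4$ rather than the order-$960$ group, and plain tuple arithmetic rather than any computer-algebra system) enumerates all commutators $[x,y]=xyx^{-1}y^{-1}$ in a small permutation group and counts the distinct elements obtained; whenever this count is smaller than the group order, non-commutators exist.

    from itertools import permutations, product

    # Brute-force count of distinct commutators in a small permutation group.
    def compose(p, q):                 # (p*q)(i) = p(q(i))
        return tuple(p[q[i]] for i in range(len(q)))

    def inverse(p):
        inv = [0] * len(p)
        for i, pi in enumerate(p):
            inv[pi] = i
        return tuple(inv)

    G = list(permutations(range(4)))   # the symmetric group S_4, as tuples
    commutators = {compose(compose(x, y), compose(inverse(x), inverse(y)))
                   for x, y in product(G, repeat=2)}
    print(len(G), len(commutators))    # 24 elements, 12 distinct commutators (all of A_4)

For $S_4$ the commutators fill the derived subgroup $A_4$; for larger groups the same loop, or the Frobenius character sum above, is what a Magma/GAP/Sage search would automate.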
Then I wanted to know “what” this group really was. Magma gave it to me as a permutation group acting on $16$ letters, with an explicit set of $6$ generators, and with a list of $21$ relations, which
was not very enlightening. However, looking at a composition series, it emerged that $G$ fits in an exact sequence
$1\rightarrow (\mathbf{Z}/2\mathbf{Z})^4\rightarrow G\rightarrow A_5\rightarrow 1.$
This was much better, since after a while it reminded me of one of my favorite types of groups: the Weyl groups $W_{g}$ of the symplectic groups $\mathrm{Sp}_{2g}$ (equivalently, the “generic” Galois
group for the splitting field of a palindromic rational polynomial of degree $2g$), which fit in a relatively similar exact sequence
$1\rightarrow (\mathbf{Z}/2\mathbf{Z})^g\rightarrow W_g\rightarrow S_g\rightarrow 1.$
From there, one gets a strong suspicion that $G$ must be the commutator subgroup of $W_5$, and this was easy to check (again with Magma, though this is certainly well-known; the drop of the rank of
the kernel comes from looking at the determinant in the signed-permutation $5$-dimensional representation, and the drop from $S_5$ to $A_5$ is of course from the signature.)
This identification is quite nice, obviously. In particular, it’s now possible to identify concretely which elements of $G$ are not commutators. It turns out that a single conjugacy class, of order
$120$, is the full set of missing elements. As a signed permutation matrix, it is the conjugacy class of
$g=\begin{pmatrix} 0& -1 & 0 & 0 & 0\\ 1& 0 & 0 & 0 & 0\\ 0& 0 & 0 & 1 & 0\\ 0& 0 & 1 & 0 & 0\\ 0& 0 & 0 & 0 & -1\end{pmatrix},$
and the reason it is not a commutator is that Magma tells us that all commutators in $G$ have trace in $\{-3,-2,0,1,2,5\}$ (always in the signed-permutation representation). Thus the trace $-1$
doesn’t fit…
At least, this is the numerical reason. I feel I should be able to give a theoretical explanation of this, but I haven’t succeeded for the moment. Part of the puzzlement is that this behavior seems
to be special to $W_5$, the Weyl group of the root system $C_5$. Indeed, for $g\in\{2,3,4\}$, the corresponding derived subgroup is not perfect, so the question does not arise (at least in the same
way). And when $g\geq 6$, the derived subgroup $G_g$ of $W_g$ is indeed perfect, but — experimentally! — it seems that all elements of $G_g$ are then commutators.
I haven’t found references to a study of this Ore-type question for those groups, so I don’t know if these “experimental” facts are in fact known to be true. Another question seems natural: does this
special fact have any observable consequence, for instance in Galois theory? I don’t see how, but readers might have better insights…
(P.S. I presume that GAP or Sage would be equally capable of making the computations described here; I used Magma mostly because I know its language better.
P.P.S And the computer also tells us that even for the group $G$ above, all elements are the product of at most two commutators, which a commenter points out is also a simple consequence of the fact
that there are more than $480$ commutators….
P.P.P.S To expand one of my own comments: the element $g$ above is a commutator in the group $W_5$ itself. For instance $g=[x,y]$ with
$x=\begin{pmatrix} 0& 0 & 0 & 0 & -1\\ 0& 1 & 0 & 0 & 0\\ 1& 0 & 0 & 0 & 0\\ 0& 0 & 1 & 0 & 0\\ 0& 0 & 0 & 1 & 0\end{pmatrix},$
$y=\begin{pmatrix} 1& 0 & 0 & 0 & 0\\ 0& 0 & 0 & 0 & -1\\ 0& 1 & 0 & 0 & 0\\ 0& 0 & 1 & 0 & 0\\ 0& 0 & 0 & -1 & 0\end{pmatrix},$
where $y\notin G$.)
7 Responses to “What’s special with commutators in the Weyl group of C5?”
1. I hate to make such a trivial comment, but I’d like to point out that the fact that all elements are the product of two commutators follows from the fact that 120 is less than half of 960.
For any subset S of a group G larger than half the group, one has S*S = G. As an example application, every residue modulo an odd prime is the sum of two squares.
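A quick script makes the quadratic-residue example concrete; this small Python check (illustrative only) verifies for a few odd primes that the set of squares, which has (p+1)/2 > p/2 elements, already covers every residue class when summed in pairs.

    # For an odd prime p, the squares form a set S with |S| = (p+1)/2 > p/2,
    # so S + S covers every residue class modulo p.
    for p in (3, 5, 7, 11, 13, 101):
        squares = {(x * x) % p for x in range(p)}
        sums = {(a + b) % p for a in squares for b in squares}
        assert sums == set(range(p)), p
    print("every residue mod each tested prime is a sum of two squares")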
2. Of course! Thanks for pointing this out… It seems to be a simple fact I have trouble accepting, since someone else also reminded me of that a few months ago, in a different context. What is most
amusing is that I’ve used the case of quadratic residues in number theory classes… (And my notes contain the clever variant with three subsets of Nikolov and Pyber…)
3. You might be interested in the discussion of this over at Mathoverflow, in particular Torsten Ekedahl’s answer http://mathoverflow.net/questions/44269/
4. Thanks, it’s quite interesting. I wonder if there’s a link between the minimal examples of order 96 and the one of order 960…
□ I looked at the groups of order 96, and I don’t see how they could be related to the one I found.
Out of curiosity I also looked at commutators in $W_5$ itself, and it turns out that all elements of the derived subgroup are commutators of elements in $W_5$…
5. Maybe the moment has passed, but I thought it might be worth making a few comments. Obviously you are interested in finite groups above, but commutators are very important in the theory of
infinite groups too. The *commutator length* of an element (of the commutator subgroup) is the least number of commutators whose product is the given element, and the *commutator width* is the
supremum of this number over all elements. In the world of infinite groups, the “typical” phenomenon is that commutator width is infinite. In fact, one can define the *stable commutator length*
of an element g in [G,G] to be the limit of cl(g^n)/n. In a (nonabelian) free group F (in fact, in a torsion-free hyperbolic group), the stable commutator length of *every* element of [F,F] is
positive – in fact, it is at least 1/2. Stable commutator length is closely related to 2-dimensional bounded cohomology, and has many interesting connections to geometry, dynamics, etc.; see eg.
□ Thanks for the comments! I’ve already seen talks on this (including one by you…) and since I like any quantitative aspects of interesting discrete groups, I certainly hope to learn more of
this one day.
|
{"url":"http://blogs.ethz.ch/kowalski/2011/08/30/whats-special-with-commutators-in-the-weyl-group-of-c5/","timestamp":"2014-04-20T15:51:39Z","content_type":null,"content_length":"38073","record_id":"<urn:uuid:b948791c-4231-42d9-954b-8dbdaed99d66>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solution to puzzle 76: Square inscribed in a triangle
A triangle has sides 10, 17, and 21. A square is inscribed in the triangle. One side of the square lies on the longest side of the triangle. The other two vertices of the square touch the two
shorter sides of the triangle. What is the length of the side of the square?
By Heron's Formula, the area, A, of a triangle with sides a, b, c is given by A = √[s(s − a)(s − b)(s − c)], where s = ½(a + b + c) is the semi-perimeter of the triangle.
Then s = ½(10 + 17 + 21) = 24, and A = 84.
Now drop a perpendicular of length h onto the side of length 21.
We also have A = ½ × base × perpendicular height.
Hence A = 21h/2 = 84, from which h = 8.
Notice that the triangle above the square is similar to the whole triangle. (This follows because its base, the top of the square, is parallel to the base of the whole triangle.)
Let the square have side of length d.
Considering the ratio of altitude to base in each triangle, we have 8/21 = (8 − d)/d = 8/d − 1. Hence 8/d = 8/21 + 1 = 29/21, and so d = 168/29.
Therefore the length of the side of the square is 168/29.
Using the above approach, it follows that 1/d = 1/c + 1/h, where c is the length of the side on which the square lies, and h is the altitude of the triangle.
(Note that the square will lie inside the triangle, in the configuration shown above, if the side on which it sits lies between two acute angles.)
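As a numerical sanity check (a small sketch, not part of the original solution), the general relation 1/d = 1/c + 1/h can be combined with Heron's formula; the function name below is purely illustrative.

    from math import sqrt

    # Side of the inscribed square resting on the side of length c, using
    # Heron's formula for the altitude h onto that side and 1/d = 1/c + 1/h.
    def inscribed_square_side(a, b, c):
        s = (a + b + c) / 2
        area = sqrt(s * (s - a) * (s - b) * (s - c))
        h = 2 * area / c              # altitude onto the side of length c
        return 1 / (1 / c + 1 / h)

    print(inscribed_square_side(10, 17, 21))   # 168/29 = 5.793...

Running it on the triangle with sides 10, 17, 21 and the square on the side of length 21 reproduces 168/29.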
Ladder against a box
Here is a similar looking puzzle which is, however, slightly trickier.
A 40 meter ladder leaning against a building rests upon the ground and just touches a 9 × 9 × 9 meter annex, which is flush against the wall. The building is perpendicular to the ground.
Assuming the ladder is inclined at more than 45° to the horizontal, what is the height above the ground at which the ladder touches the building?
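One way to attack this numerically (a sketch only; it does not replace the linked hint and solution) is to note that a ladder running from (w, 0) on the ground to (0, h) on the wall touches the corner (9, 9) of the annex exactly when 9/w + 9/h = 1, and then to search for the steep root (h > w) of w² + h² = 40² by bisection; the bracket used below is an assumption verified in the comments.

    # Ladder of length 40 touching the corner of a 9 x 9 box: the ladder runs
    # from (w, 0) to (0, h) and passes through (9, 9), so 9/w + 9/h = 1, i.e.
    # w = 9h / (h - 9).  Find the steep solution (h > w) of w^2 + h^2 = 40^2.
    def residual(h):
        w = 9 * h / (h - 9)
        return w * w + h * h - 40.0 ** 2

    lo, hi = 30.0, 40.0        # residual is negative at 30 and positive at 40
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    print(0.5 * (lo + hi))     # height at which the ladder touches the building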
Source: Original; inspired by the Maximum Square on mathschallenge.net
|
{"url":"http://www.qbyte.org/puzzles/p076s.html","timestamp":"2014-04-18T05:34:48Z","content_type":null,"content_length":"6431","record_id":"<urn:uuid:5023c7f2-5508-4416-a2b4-84174449f379>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
|
polarization formula for homogeneous polynomials
Given a homogeneous polynomial $p$ of degree $n$ on $R^d$, there is a unique symmetric $n$-linear functional $B$ on $(R^d)^n$ such that $p(x)=B(x,\ldots,x)$. The question is: can we get $B$ by means of a polarization formula, as in the case $n=2$ for quadratic forms?
Thanks in advance.
I'm assuming that $R$ denotes the reals. In any case, you need to be able to divide by $n!$.
Given that, the answer is yes. I can't locate a reference, but here's the formula for $n=3$, say: $$6B(x,y,z)=p(x+y+z)-p(x+y)-p(y+z)-p(z+x)+p(x)+p(y)+p(z),$$ which should give you a hint as to the general case.
Edit by Denis Serre. This suggests the general formula $$n!\,B(x_1,\ldots,x_n)=\sum_I(-1)^{n-|I|}p(x_I),\qquad x_I:=\sum_{i\in I}x_i.$$
Further edit by JMF. The formula is proved in this preprint by Erik G. F. Thomas, "A polarization identity for multilinear maps".
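A short numerical sketch (illustration only, not taken from the preprint) of the general identity: the function below sums $(-1)^{n-|I|}p(x_I)$ over nonempty subsets $I$ and divides by $n!$, and the check confirms $B(x,\ldots,x)=p(x)$ for a degree-3 example.

    from itertools import combinations
    from math import factorial

    # Polarization: n! B(x_1,...,x_n) = sum over nonempty I of (-1)^(n-|I|) p(x_I),
    # with x_I the sum of the x_i for i in I.
    def polarize(p, xs):
        n, d = len(xs), len(xs[0])
        total = 0.0
        for k in range(1, n + 1):
            for I in combinations(range(n), k):
                xI = [sum(xs[i][j] for i in I) for j in range(d)]
                total += (-1) ** (n - k) * p(xI)
        return total / factorial(n)

    # p(x) = x_0^2 * x_1 is homogeneous of degree 3 on R^2; check B(x, x, x) = p(x).
    p = lambda x: x[0] ** 2 * x[1]
    x = [1.0, 2.0]
    print(polarize(p, [x, x, x]), p(x))        # both print 2.0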
@Denis: Indeed! – José Figueroa-O'Farrill Apr 16 '11 at 8:51
|
{"url":"http://mathoverflow.net/questions/61884/polarization-formula-for-homogeneous-polynomials?sort=oldest","timestamp":"2014-04-25T02:14:42Z","content_type":null,"content_length":"51829","record_id":"<urn:uuid:e2a3e013-c968-4b6b-b7d1-baee071a4999>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solving nonconvex planar location problems by finite dominating sets
• It is well-known that some of the classical location problems with polyhedral gauges can be solved in polynomial time by finding a finite dominating set, i.e. a finite set of candidates
guaranteed to contain at least one optimal location. In this paper it is first established that this result holds for a much larger class of problems than currently considered in the literature.
The model for which this result can be proven includes, for instance, location problems with attraction and repulsion, and location-allocation problems. Next, it is shown that the approximation
of general gauges by polyhedral ones in the objective function of our general model can be analyzed with regard to the subsequent error in the optimal objective value. For the approximation
problem two different approaches are described, the sandwich procedure and the greedy algorithm. Both of these approaches lead - for fixed epsilon - to polynomial approximation algorithms with
accuracy epsilon for solving the general model considered in this paper.
Author: Emilio Carrizosa, Horst W. Hamacher, Rolf Klein, Stefan Nickel
URN (permanent link): urn:nbn:de:hbz:386-kluedo-9407
Serie (Series number): Berichte des Fraunhofer-Instituts für Techno- und Wirtschaftsmathematik (ITWM Report) (18)
Document Type: Preprint
Language of publication: English
Year of Completion: 2000
Year of Publication: 2000
Publishing Institute: Fraunhofer-Institut für Techno- und Wirtschaftsmathematik
Tag: Approximation ; Continuous Location ; Finite Dominating Sets ; Greedy Algorithm; Polyhedral Gauges ; Sandwich Algorithm
Faculties / Organisational entities: Fraunhofer (ITWM)
DDC-Classification: 510 Mathematik
|
{"url":"https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/973","timestamp":"2014-04-16T05:11:20Z","content_type":null,"content_length":"21422","record_id":"<urn:uuid:5baee68d-e7ac-438f-a70b-53ef4606d6e0>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Venetian Islands, FL Math Tutor
Find a Venetian Islands, FL Math Tutor
...I earned a master's degree in Education after an undergraduate degree in Biochemistry. As for my teaching philosophy, I do my best to make the subjects fun, applicable and interesting! I
also hold a yoga teacher's certification.
16 Subjects: including SAT math, algebra 1, algebra 2, chemistry
...Being a versatile individual, I have had the opportunity to teach students who functioned below, on and above grade level. I have been very successful at accommodating diverse student needs by
facilitating all styles of learners, offering individualized and extracurricular support and integratin...
23 Subjects: including prealgebra, physics, MCAT, actuarial science
...I also coached 6th grade, JV and Varsity football, so working with adolescents is not unfamiliar to me. It is my goal to work with kids who have gotten behind and bring them up to speed. For
those students who are already good at math but want to stay on top of it, my goal is to make math exciting and always stress to them how important of a subject it is, so they stay on top of it.
4 Subjects: including algebra 1, algebra 2, prealgebra, trigonometry
...I graduated from Georgetown University with a Bachelor's in Biology of Global Health. I recently finished my Master's in Public Health from University of Miami Miller School of Medicine. My
goal in life is to help children as a pediatrician.
19 Subjects: including algebra 2, elementary math, grammar, precalculus
...Together we can tackle math concepts until they make complete sense to the student. I'm best at Pre-Algebra, Algebra 1 and 2, any math at the elementary level, and geometry. I cater the
lessons to what you need to work on as a student and build from there.
7 Subjects: including linear algebra, geometry, prealgebra, elementary math
Related Venetian Islands, FL Tutors
Venetian Islands, FL Accounting Tutors
Venetian Islands, FL ACT Tutors
Venetian Islands, FL Algebra Tutors
Venetian Islands, FL Algebra 2 Tutors
Venetian Islands, FL Calculus Tutors
Venetian Islands, FL Geometry Tutors
Venetian Islands, FL Math Tutors
Venetian Islands, FL Prealgebra Tutors
Venetian Islands, FL Precalculus Tutors
Venetian Islands, FL SAT Tutors
Venetian Islands, FL SAT Math Tutors
Venetian Islands, FL Science Tutors
Venetian Islands, FL Statistics Tutors
Venetian Islands, FL Trigonometry Tutors
Nearby Cities With Math Tutor
Carl Fisher, FL Math Tutors
Fisher Island, FL Math Tutors
Gables By The Sea, FL Math Tutors
Golden Isles, FL Math Tutors
Goulds, FL Math Tutors
Indian Creek, FL Math Tutors
Keystone Islands, FL Math Tutors
Ludlam, FL Math Tutors
Miami Beach Math Tutors
Miami Beach, WA Math Tutors
Port Everglades, FL Math Tutors
Seybold, FL Math Tutors
Sunny Isles, FL Math Tutors
Sunset Island, FL Math Tutors
West Dade, FL Math Tutors
|
{"url":"http://www.purplemath.com/Venetian_Islands_FL_Math_tutors.php","timestamp":"2014-04-18T13:40:22Z","content_type":null,"content_length":"24316","record_id":"<urn:uuid:5e2987d6-1c6e-4eda-aecc-18255f6d2b61>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Flower Mound Algebra 1 Tutor
...I teach a memorable strategy that will keep you on track and make the speech process far less painful. Furthermore, my graduate degree in theology required an additional six credit hours in
public speaking. These classes were designed to teach a method for analyzing the subject and the audience in order to ensure effective communication.
29 Subjects: including algebra 1, English, reading, writing
...I taught courses at Richland College and Collin County Community College. My specialties are Physics I and Physics II, both algebra and calculus based. I also have experience with laboratory
experiments and writing lab reports.
8 Subjects: including algebra 1, calculus, physics, geometry
...We started with Kindergarten and continued through high school. I assisted them with a variety of subjects over the years as well as with Math. This gave me experience with teaching all age
levels and in a variety of subjects.
82 Subjects: including algebra 1, English, chemistry, calculus
...I have taken more than 10 classes related to Organic Chemistry and have attended at least 1,000 hours of advanced lectures. My GPA in Organic Chemistry is a 3.4 at the Graduate School Level.
All the organic classes were Honor classes with advanced training.
93 Subjects: including algebra 1, reading, chemistry, English
...I show them how to work around them. I am incredibly creative and innovative, and don't spoil people by letting them use crutches they don't need. What I mean to say is this: If Johnny has no
arms and he has to learn how to write his name with his feet, I may help him alter his pencil, or chan...
43 Subjects: including algebra 1, reading, ESL/ESOL, English
|
{"url":"http://www.purplemath.com/flower_mound_tx_algebra_1_tutors.php","timestamp":"2014-04-18T05:49:38Z","content_type":null,"content_length":"24058","record_id":"<urn:uuid:7ee0f95c-53be-4492-a17e-78e0f9cb6756>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Type IIA D-Branes, K-Theory and Matrix Theory
Hořava, Petr (1998) Type IIA D-Branes, K-Theory and Matrix Theory. Advances in Theoretical and Mathematical Physics, 2 (6). pp. 1373-1404. ISSN 1095-0761. http://resolver.caltech.edu/
PDF - Published Version
See Usage Policy.
Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechAUTHORS:20120117-104604190
We show that all supersymmetric Type IIA D-branes can be constructed as bound states of a certain number of unstable non-supersymmetric Type IIA D9-branes. This string-theoretical construction
demonstrates that D-brane charges in Type IIA theory on spacetime manifold X are classified by the higher K-theory group K^(-1)X, as suggested recently by Witten. In particular, the system of N
D0-branes can be obtained, for any N, in terms of sixteen Type IIA D9-branes. This suggests that the dynamics of Matrix theory is contained in the physics of magnetic vortices on the worldvolume of
sixteen unstable D9-branes, described at low energies by a U(16) gauge theory.
Item Type: Article
Additional Information: © 1998 International Press. arXiv: (Submitted on 15 Dec 1998 (v1), last revised 14 May 1999 (this version, v4)). It is a pleasure to thank Oren Bergman, Eric Gimon, Djordje Minic, Michael Peskin, John Preskill, John Schwarz, Steve Shenker, Eva Silverstein, Lenny Susskind and Edward Witten for valuable discussions. I wish to thank the Stanford Institute of Theoretical Physics for hospitality during some parts of this work. This work has been supported by Sherman Fairchild Prize Fellowship and by DOE Grant DE-FG03-92-ER 40701.
Funders:
  Funding Agency                       Grant Number
  Sherman Fairchild Prize Fellowship   UNSPECIFIED
  Department of Energy (DOE)           DE-FG03-92-ER 40701
Subject: High Energy Physics - Theory (hep-th)
Other Numbering System:
  Other Numbering System Name: CALT
  Other Numbering System ID: 68-2205
Record Number: CaltechAUTHORS:20120117-104604190
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:20120117-104604190
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 28806
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 13 Apr 2012 21:05
Last Modified: 26 Dec 2012 14:42
|
{"url":"http://authors.library.caltech.edu/28806/","timestamp":"2014-04-18T10:37:29Z","content_type":null,"content_length":"27033","record_id":"<urn:uuid:cc5541df-1dc3-4881-b104-4cd9326219c1>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Implementation of the Natural Element Method (NEM)
The implementation of the Natural Element Method (NEM) by means of a Galerkin-based procedure parallels that adopted in the FEM or the Element-Free Galerkin (EFG) method [14]; the key distinction that separates the three is the construction of the shape functions.
A computational procedure to evaluate the shape functions (natural-neighbour coordinates) is outlined in [3] and is extended in [9] to compute the derivatives of the interpolating function. In [10], Lasserre's recursive formula [15] is adopted to compute the area (the volume of a convex polyhedron in three dimensions); there it is pointed out that Lasserre's formula [15] is more robust than the one due to [3], which can break down for certain positions of the point [3,16,9,10].
N. Sukumar
|
{"url":"http://dilbert.engr.ucdavis.edu/~suku/nem/nem_intro/node5.html","timestamp":"2014-04-20T03:54:01Z","content_type":null,"content_length":"5471","record_id":"<urn:uuid:0cb21311-fa29-4fa8-a5e0-05eb541f3fd6>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] FW: Schroeder-Bernstein Dual
Epstein, Adam A.L.Epstein at warwick.ac.uk
Thu Jun 21 20:03:28 EDT 2007
On Tue, 29 May 2007, Bill Taylor wrote:
>Consider this "dual" to Shroeder-Bernstein:
>** If there are surjections f: X --> Y
>** and g: Y --> X
>** then there is a bijection between X and Y.
There was a response I was expecting to see, based on something I'd read
in Jech's book "The Axiom of Choice". I wanted to double check first, and
the book just came back to the library this afternoon.
Let's write <= for the usual relation on cardinals: |X|<=|Y| whenever X
injects into Y. Schroeder-Bernstein is the assertion
|X|<=|Y| and |Y|<=|X| ==> |X|=|Y|.
Now, define the relation <=* by |X|<=*|Y| whenever Y surjects onto X, or X
is empty.
Note that without using Choice we have
(*) |X|<=|Y| ==> |X|<=*|Y|.
Indeed, we may assume X is not empty, so that there exists some a in X.
Given an injection alpha: X -> Y we obtain a surjection beta: Y -> X which
sends each alpha(x) to x, and sends the remaining points of Y to a.
Now, following Jech (exercise 8, p. 162) fix an infinite but
Dedekind-finite set D, let S be the set of all finite one-to-one
sequences in D, and let T be obtained from S by removing the empty
sequence. Clearly, since T is a subset of S we have |T|<=|S|, whence
|T|<=*|S| by (*). Moreover, since the set S is also Dedekind-finite
(exercise 5, p. 161) we must have the strict inequality |T|<|S|: that is
to say, |T| \neq |S|.
On the other hand, we also have |S|<=*|T| via the map T->S which deletes
the last entry of each nonempty sequence.
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2007-June/011679.html","timestamp":"2014-04-16T22:10:22Z","content_type":null,"content_length":"3921","record_id":"<urn:uuid:77a6e6c1-dfe3-4cd2-b1e9-a318e5d03c2e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
|
7th International Conference on Multiphase Flow
ICMF 2010, Tampa, FL USA, May 30-June 4, 2010
Numerical study on the motion characteristics of a freely falling two-dimensional circular
cylinder in a channel
Sung Wan Son1, Hea Kwon Jeong2, Hyun Sik Yoon3 and Man Yeong Ha*1
School of Mechanical Engineering, Pusan National University, Korea
2Technical Research Laboratories, Posco, Korea
3Advanced Ship Engineering Research Center, Pusan National University, Pusan, Korea
*corresponding author, E-mail: myha@pusan.ac.kr
Keywords: freely falling circular cylinder, gap ratio, density ratio, transverse motion
A two-dimensional circular cylinder freely falling in a channel has been simulated by using Direct forcing/Fictitious domain -
lattice Boltzmann method (DF/FD-LBM) in order to analyze the characteristics of motion originated by the interaction
between the fluid flow and the cylinder. The wide range of the solid/fluid density ratio has been considered to identify the
effect of the solid/fluid density ratio on the motion characteristics such as the falling time, the transverse force and the
trajectory in the streamwise and transverse directions. In addition, the effect of the gap between the cylinder and the wall on
the motion of a two-dimensional freely falling circular cylinder has been revealed by taking into account a various range of the
gap size. As the cylinder is close to the wall at the initial dropping position, vortex shedding in the wake occurs early since the
shear flow formed in the spacing between the cylinder and the wall drives flow instabilities from the initial stage of freely
falling. In order to consider the characteristics of transverse motion of the cylinder in the initial stage of freely falling,
quantitative information about the cylinder motion variables such as the transverse force, trajectory and settling time has been provided.
1. Introduction
The particle-fluid interaction has been widely applied in
the fields such as the chemical, civil and aero-space
engineering as well as biological science. Some examples
are the transport of radio-nuclides, plasma spray coating,
fluidized bed reactor, blood flow and droplet formation.
Moreover, the particle-fluid-structure interaction has
influence on the response, stability and life of the structure.
It's important to understand the behavior of particle
suspensions or sedimentations, which has been attracting
lots of researcher's attention for the past decades, both
experimentally [1-4] and numerically [2,4,5-17].
For the representative studies about the two-dimensional
motion of the cylinder, Hu et al.[5], Feng et al.[6] and
Hu[9] analyzed the sedimentation of the cylinder in a
narrow cavity by solving Navier-Stokes equations. Hu et
al.[5] showed that a circular cylinder sedimenting in a
narrow channel at small Reynolds number (Re) drifts to the
centre of the channel. At large Re, however, the cylinder drifts off the centre of the channel; the rotation of the cylinder is due to uneven shear, and the oscillation of the cylinder is due to vortex shedding. Feng et al.[6]
demonstrated the characteristics of the motion of a circular
cylinder sedimenting based on the region of Re dependent
on the cylinder diameter and terminal velocity of the
circular cylinder in a narrow channel. Hu[9] studied the
rotation of a circular cylinder settling close to a solid wall
using the lubrication theory and found that the direction of
rotation reverses at large Re while the rotation is in the
direction as if rolling up the nearer wall at small Re. On
other matters, Namkoong et al.[15] reported the
two-dimensional motion of a circular cylinder freely falling
in an infinite fluid for the range of Re<188. They deduced
correlations of the relationship between St (Strouhal
number) and Re from their numerical results.
According to our survey, many researchers numerically
studied the motion of various shape particles such as
circular cylinder, elliptic cylinder and rectangular cylinder
as well as sphere. Based on the results reported by Joseph's
group, however, many researchers including Joseph's
group performed the numerical calculation about the
motion of the cylinder to validate numerical scheme
proposed by them[7,12-14,16,17]. Moreover, most
simulations except for some results reported by Feng et
al.[6], which deal with the characteristics of the motion of
the cylinder, mentioned above have been studied for the
motion of the cylinder at small non-zero Re (below 200). For small Re, the previous results by Feng et al.[6] and Hu[9] show that the transient trajectory of the cylinder is directed toward the centre of the channel without oscillation.
For large Re (where the density of the cylinder is much larger than in the small-Re cases), the trajectory of the cylinder may differ from the small-Re cases. Also, studies of the
characteristics of the motion of the cylinder sedimenting
according to the variation of the density ratio (the ratio
between a cylinder and fluid) and the gap between the
initial position of the cylinder and the channel wall at large
Re are not enough.
In this study the motion of a circular cylinder freely
falling in a narrow channel is studied numerically. The
primary objective of the present study is to investigate the
characteristics of a single cylinder motion freely falling
(including the rotation of the cylinder) in a narrow channel
according to the variation of density ratio and gap ratio at
large Re (Re>200). To evaluate the translational motion of
the cylinder, the transient trajectory, transient transverse
force and settling time of the cylinder have been considered.
The characteristics of the rotation of the cylinder according
to the various gap ratios were also reported. Moreover, the
effect of the rotation of a cylinder on the transverse motion
of one was described.
2. Numerical Method and Validation
2.1 Numerical Method
Figure 1: Schematic of the system.
Figure 1 shows the fluid domain ($\Omega_0$) represented by
the Eulerian coordinate and the circular cylinder arranged
by the Lagrangian points with M Lagrangian points
uniformly distributed. The Lagrangian points are arranged
as shown in Figure 2.
Figure 2: (a) Definitions for a circular cylinder used to arrange the Lagrangian forcing points ($r_c$ is the actual cylinder radius). (b) Arrangement of Lagrangian points for a circular cylinder.
To solve the incompressible Navier-Stokes equations on these two different coordinate systems, the DF/FD LBE was used.
The interaction between the fluid and the cylinder is
calculated by the direct forcing and fictitious domain
method proposed by Yu & Shao[15]. The result of this
interaction is introduced to the governing equation as a
form of external force using the equilibrium velocity approach of Buick & Greated [19].
To investigate the motion of a freely falling circular
cylinder in a closed channel, as shown in figure 1, the
incompressible continuity and momentum equation are
used and given by
$$\nabla\cdot\mathbf{u}=0,\qquad(1)$$
$$\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}=-\nabla p+\nu\nabla^{2}\mathbf{u}+\mathbf{f},\qquad(2)$$
where $\mathbf{u}$, $p$, $\rho$, $\nu$ and $\mathbf{f}$ are the flow velocity, pressure, fluid density, kinematic viscosity and external force, respectively.
To find the solutions of Eq. (1) and Eq. (2) in the present study, the LBE is used. Eq. (3) is the so-called lattice BGK equation based on the Boltzmann equation [Chen & Doolen (1998)] and is given as
$$f_\alpha(\mathbf{x}+\mathbf{c}_\alpha\Delta t,\,t+\Delta t)-f_\alpha(\mathbf{x},t)=-\frac{1}{\tau}\left[f_\alpha(\mathbf{x},t)-f_\alpha^{eq}(\mathbf{x},t)\right],\qquad(3)$$
where $f_\alpha$, $\mathbf{x}$, $\mathbf{c}_\alpha$, $\tau$ and $f_\alpha^{eq}$ denote the density distribution function, position vector, lattice velocity vector, single relaxation time and equilibrium density distribution function, respectively. The subscript $\alpha$ is the direction of the particle and depends on the lattice model. In all the simulations, a two-dimensional 9-bit (D2Q9) model was used. The equilibrium density distribution function is obtained as
$$f_\alpha^{eq}=\omega_\alpha\,\rho_f\left[1+\frac{3\,\mathbf{c}_\alpha\cdot\mathbf{u}}{c^{2}}+\frac{9(\mathbf{c}_\alpha\cdot\mathbf{u})^{2}}{2c^{4}}-\frac{3\,\mathbf{u}^{2}}{2c^{2}}\right],\qquad(4)$$
where $c=\Delta x/\Delta t$, and $\Delta x$ and $\Delta t$ are the lattice spacing and the time step, respectively. The weighting coefficients are $\omega_0=4/9$, $\omega_\alpha=1/9$ for $\alpha=1,2,3,4$ and $\omega_\alpha=1/36$ for $\alpha=5,6,7,8$. Each particle velocity vector is represented by
$$\mathbf{c}_\alpha=\begin{cases}(0,0), & \alpha=0,\\ \big(\cos[(\alpha-1)\pi/2],\ \sin[(\alpha-1)\pi/2]\big)\,c, & \alpha=1,2,3,4,\\ \sqrt{2}\,\big(\cos[(\alpha-5)\pi/2+\pi/4],\ \sin[(\alpha-5)\pi/2+\pi/4]\big)\,c, & \alpha=5,6,7,8.\end{cases}\qquad(5)$$
The macroscopic variables, the fluid density ($\rho_f$) and the flow velocity ($\mathbf{u}$), in the computational domain are defined as
$$\rho_f=\sum_\alpha f_\alpha,\qquad(6)$$
$$\rho_f\,\mathbf{u}=\sum_\alpha f_\alpha\,\mathbf{c}_\alpha.\qquad(7)$$
The single relaxation time is related to the kinematic viscosity by
$$\nu=\frac{2\tau-1}{6}\,\frac{\Delta x^{2}}{\Delta t}.\qquad(8)$$
The basic idea of the DF/FD method is to extend a problem on a geometrically complex domain to a larger, simpler domain. To obtain the volume force at a lattice node, the Lagrangian force $\mathbf{F}$ defined at the Lagrangian points $\mathbf{X}_l$ and the Eulerian force $\mathbf{f}$ defined at each lattice site $\mathbf{x}_j$ are transferred into one another and represented as follows:
$$\mathbf{F}(\mathbf{X}_l)=\sum_{j}\mathbf{f}(\mathbf{x}_j)\,\delta_h(\mathbf{x}_j-\mathbf{X}_l)\,h^{2},\qquad(9)$$
$$\mathbf{f}(\mathbf{x}_j)=\sum_{l=1}^{M}\mathbf{F}(\mathbf{X}_l)\,\delta_h(\mathbf{x}_j-\mathbf{X}_l)\,\Delta V_l,\qquad(10)$$
where $\Delta V_l$ is the finite volume (surface area in 2D) of each Lagrangian point, calculated in the same manner as in Uhlmann (2005) and Yu & Shao (2007), and $\delta_h$ is the smoothed approximation of the discrete Dirac delta function reported by Roma, Peskin & Berger (1999). For the 2D case, the discrete delta function $\delta_h$ is defined as
$$\delta_h(\mathbf{x}-\mathbf{X})=\frac{1}{h^{2}}\,\phi\!\left(\frac{x-X}{h}\right)\phi\!\left(\frac{y-Y}{h}\right),\qquad(11)$$
$$\phi(r)=\begin{cases}\dfrac{1}{3}\left(1+\sqrt{-3r^{2}+1}\right), & |r|\le 0.5,\\[6pt]\dfrac{1}{6}\left(5-3|r|-\sqrt{-3(1-|r|)^{2}+1}\right), & 0.5\le|r|\le 1.5,\\[6pt]0, & \text{otherwise},\end{cases}\qquad(12)$$
where $h$ is the lattice size [Yu and Shao (2007)].
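For readers who want to see the transfer step in code, here is a minimal NumPy sketch (an illustration under stated assumptions, not the authors' implementation) of the smoothed delta function of Roma, Peskin & Berger, Eqs. (11)-(12), and the spreading of Lagrangian forces onto the Eulerian lattice; the uniform grid spacing h, the uniform Lagrangian volume dV and all variable names are assumptions of this sketch.

    import numpy as np

    # Roma-Peskin-Berger 3-point smoothed delta (Eq. (12)).
    def phi(r):
        r = abs(r)
        if r <= 0.5:
            return (1.0 + np.sqrt(-3.0 * r * r + 1.0)) / 3.0
        if r <= 1.5:
            return (5.0 - 3.0 * r - np.sqrt(-3.0 * (1.0 - r) ** 2 + 1.0)) / 6.0
        return 0.0

    # Spread Lagrangian forces F at points X onto an (nx, ny) Eulerian grid.
    def spread_force(F, X, nx, ny, h, dV):
        f = np.zeros((nx, ny, 2))
        for (Fx, Fy), (Xx, Xy) in zip(F, X):
            i0, j0 = int(Xx / h), int(Xy / h)
            for i in range(i0 - 2, i0 + 3):      # delta support spans ~3 cells
                for j in range(j0 - 2, j0 + 3):
                    if 0 <= i < nx and 0 <= j < ny:
                        w = phi((i * h - Xx) / h) * phi((j * h - Xy) / h) / h ** 2
                        f[i, j, 0] += Fx * w * dV
                        f[i, j, 1] += Fy * w * dV
        return f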
When the equilibrium velocity with momentum body
force from immersed boundary method is substituted in the
equilibrium density distribution function, the change in the
momentum at each lattice is obtained. The external force
from Eq. (10) is introduced to Eq. (3) using an equilibrium
velocity approach as follows:
$$\rho_f\,\mathbf{u}_e=\rho_f\,\mathbf{u}+\tau\,\mathbf{f},\qquad(13)$$
$$f_\alpha^{eq*}(\mathbf{x},t)=f_\alpha^{eq}(\rho_f,\mathbf{u}_e),\qquad(14)$$
where $\mathbf{u}_e$ is the equilibrium velocity at each lattice site and $f_\alpha^{eq*}$ is the modified equilibrium density distribution function from the equilibrium velocity approach, which is introduced into Eq. (3) in place of $f_\alpha^{eq}$.
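A compact sketch of the D2Q9 collision step with the equilibrium-velocity forcing (Eqs. (3)-(7) and (13)-(14)) is given below in NumPy; it assumes lattice units with c = Δx/Δt = 1, and the array layout and names are choices of this illustration rather than the authors' code.

    import numpy as np

    # Standard D2Q9 constants (Eqs. (4)-(5)): weights w_a and lattice velocities c_a.
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)

    def equilibrium(rho, u):
        """f_a^eq = w_a rho [1 + 3 c.u + 9/2 (c.u)^2 - 3/2 u.u]  (lattice units, c = 1)."""
        cu = np.einsum('ad,xyd->axy', c, u)
        uu = np.einsum('xyd,xyd->xy', u, u)
        return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*uu)

    def collide(f, force, tau):
        """BGK collision (Eq. (3)) with the equilibrium-velocity forcing of
        Eqs. (13)-(14): u_e = u + tau * force / rho.  force has shape (nx, ny, 2)."""
        rho = f.sum(axis=0)                                    # Eq. (6)
        u = np.einsum('ad,axy->xyd', c, f) / rho[..., None]    # Eq. (7)
        ue = u + tau * force / rho[..., None]                  # Eq. (13)
        feq = equilibrium(rho, ue)                             # Eq. (14)
        return f - (f - feq) / tau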
2.2 Validation
To validate the computer code and methodology developed in the present study, we considered the case of the free fall of a cylinder in the vertical channel given by Glowinski et al.[13], Wang et al.[15] and Wan & Turek[19], where the starting position of the cylinder's free fall is located on the vertical centerline. The specific data for the geometry and physical properties used in this validation study are given in Table 1. The cylinder is released at t = 0 and accelerated by gravity.
Table 1 Geometry, initial and boundary conditions and physical parameters

                                           Case 1    Case 2
  Computational domain (cm × cm)           2 × 6     2 × 6
  Diameter of cylinder D_c (cm)            0.25      0.25
  Density of cylinder ρ_p (g/cm³)          1.25      1.5
  Density of fluid ρ_f (g/cm³)             1.00      1.00
  Fluid viscosity ν (g/(cm·s))             0.1       0.01
  Initial cylinder location (x, y) (cm)    (1, 4)    (1, 4)
Figure 3 shows the comparison of the present
computational results for the time histories of translational
trajectory and translational velocity of the center of a
circular cylinder for $\rho_r=1.5$ and $\nu=0.01$ g/(cm·s) (case 2)
with previous results by Glowinski et al.[13] and Wan &
Turek[19]. As shown in this figure, the present
computational results agree very well with computational
results obtained by Glowinski et al.[13] and Wan &
Turek[19]. In this validation test, we considered two different lattice sizes, $\Delta x=1/150$ and $1/250$, and the results using $\Delta x=1/150$ are almost the same as those using $\Delta x=1/250$. We also calculated the maximum particulate Reynolds number, $Re_{p,\max}$, which is defined as $Re_{p,\max}=\max\!\left[\rho_p D_c\sqrt{U_c^{2}+V_T^{2}}/\nu\right]$, where $\rho_p$, $D_c$, $U_c$, $V_T$ and $\nu$ represent the cylinder density, cylinder diameter, transverse velocity, translational velocity and fluid viscosity, respectively. Table 2 shows the comparison of the present computational results for the maximum particulate Reynolds number ($Re_{p,\max}$) during cylinder sedimentation with previous results by Glowinski et al.[13], Wang et al.[15] and Wan & Turek[19].
Table 2 Comparison of the maximum particulate Reynolds number (Re_p,max) during free fall of the cylinder

  Case 1
    Present:               Δx = 1/50: 16.87;   Δx = 1/100: 17.24
    Glowinski et al.[13]:  Δx = 1/192: 17.27;  Δx = 1/256: 17.31
    Wang et al.[15]:       Δx = 1/72: 16.962;  Δx = 1/144: 17.216;  Δx = 1/256: 17.307
    Wan and Turek [19]:    Δx = 1/48: 17.42;   Δx = 1/96: 17.15
  Case 2
    Present:               Δx = 1/150: 489.1;  Δx = 1/250: 478.69
    Glowinski et al.[13]:  Δx = 1/256: 450.7;  Δx = 1/384: 466
    Wang et al.[15]:       Δx = 1/72: 502.37;  Δx = 1/144: 503.26;  Δx = 1/256: 503.38
    Wan and Turek [19]:    Δx = 1/48: 442.19;  Δx = 1/96: 465.52
Figure 3: Time histories of (a) the translational trajectory and (b) the translational velocity of the center of a circular cylinder for ρ_r = 1.5 and ν = 0.01 g/(cm·s). (Curves: Present, Δx = 1/150; Present, Δx = 1/250; Wan and Turek (2007); Glowinski et al. (2001).)
As shown in Table 2, our present results for $Re_{p,\max}$ agree well with the computational results given by Glowinski et al.[13], Wang et al.[15] and Wan & Turek[19], even though the lattice resolution used in the present study is coarser than that used in the previous studies.
In order to validate the present computational results for
the rotation of cylinder due to the interaction between the
fluid and the cylinder, we calculated the rotation rate ($\omega_c$)
of the suspended cylinder in simple shear flow and
compared our computational results with the experimental
results given by Zettner & Yoda[20] and computational
results by Ding & Aidun[21]. Here the height of channel
considered in the present comparison is $2D_c$ and the width of the channel is $4D_c$, where $D_c$ is the diameter of the circular cylinder, taken as 0.25 cm. The circular cylinder is located at the center of the channel and the upper and lower walls of the channel move in opposite directions in order to
make a shear flow. The viscosity and the density ratio
between the fluid and cylinder used in this comparison are
0.05 g/cm s and 1, respectively. Figure 4 shows the
comparison of the present computational results for $\omega_c/\dot\gamma$ of the circular cylinder rotation in the shear flow with previous experimental and computational results by Zettner & Yoda [20] and Ding & Aidun [21].
Figure 4: Comparison of the simulation results for $\omega_c/\dot\gamma$ of a circular cylinder rotating in shear flow, present and previous results (Zettner & Yoda (2001, Exp.); Ding & Aidun (2000, Num.); Present).
Here the Reynolds number ($Re_{\dot\gamma}$) used in this validation is defined as
$$Re_{\dot\gamma}=\frac{\dot\gamma\,D_c^{2}}{\nu},\qquad(15)$$
where $\dot\gamma=du/dy$ is the shear rate. The values of $Re_{\dot\gamma}$ considered in this validation are 1, 10, 76.8 and 100. The computational domain used in this validation test is $145\times 73$ lattice units for all $Re_{\dot\gamma}$.
When a cylinder is freely suspended between two parallel plates moving in opposite directions, our computational
results are in good agreement with previous experimental
results by Zettner & Yoda[20] and numerical results by
Ding & Aidun[21].
3. Results and Discussion
A study of the motion characteristics of a freely falling circular cylinder in a long channel was performed. For the 2D simulation of the free fall of a single circular cylinder, the specific information on the geometry and physical properties used in this study is given in Table 3. The gravitational acceleration is g = −981 cm/s². The cylinder is released at t = 0 and accelerated by gravity. To evaluate the transient characteristics of the cylinder as a function of its initial location, the gap ratio is defined as $G_r = G/D_c$. The gap ratio is varied from 0.1 to 3.5 for all density ratios considered in the present study; $G_r=0.1$ is the case in which the cylinder is closest to the wall and $G_r=3.5$ is the case in which the cylinder is located at the centre of the channel. Both the cylinder and the fluid are at rest at time t = 0, and the cylinder abruptly starts its free-fall motion due to gravity for t > 0.
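On the solid side, the cylinder's translation and rotation are advanced from the hydrodynamic force and torque plus the net of weight and buoyancy. The following schematic Python update (all symbols, the default parameter values and the explicit Euler step are assumptions of this illustration, not the paper's scheme) shows the kind of rigid-body integration the DF/FD coupling feeds.

    import numpy as np

    # Schematic rigid-body update for the freely falling cylinder (illustrative
    # only): hydrodynamic force F_h and torque T_h would come from the DF/FD-LBM
    # coupling; gravity and buoyancy act on a cylinder of density rho_p in a
    # fluid of density rho_f.
    def advance_cylinder(pos, vel, omega, F_h, T_h, dt,
                         rho_p=1.5, rho_f=1.0, D_c=0.25, g=-981.0):
        area = np.pi * D_c**2 / 4                 # 2D "volume" per unit depth
        m = rho_p * area                          # cylinder mass per unit depth
        I = m * D_c**2 / 8                        # moment of inertia of a disc
        F_g = np.array([0.0, (rho_p - rho_f) * area * g])   # weight minus buoyancy
        vel = vel + dt * (F_h + F_g) / m
        omega = omega + dt * T_h / I
        pos = pos + dt * vel
        return pos, vel, omega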
Table 3 Geometry, initial/boundary conditions and physical/numerical parameters

  Computational domain (cm × cm)
  Diameter of cylinder D_c (cm)
  Density of cylinder ρ_p (g/cm³):          1.25, 1.5, 1.75
  Density of fluid ρ_f (g/cm³)
  Fluid viscosity ν (g/(cm·s))
  Initial cylinder location (x, y) (cm):    (1, 14)
  Density ratio ρ_r = ρ_p/ρ_f:              1.25, 1.5, 1.75
  Gap ratio G_r = G/D_c:                    0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5
  Lattice size Δx (cm)
  Time step Δt (s)
3.1 Flow field
In this section, we present some typical results on the
vortical structures and the cylinder behaviors for a freely
falling cylinder in the narrow vertical channel.
Figure 5 shows the time evolution of vorticity field around
a cylinder for different G, values of 0.1, 1.0, 2.5 and 3.5
at p =1.5 while the cylinder falls freely in the narrow
vertical channel. When G, =3.5, the starting position of the
cylinder is at the horizontal centre of channel (x =0.0) and
as a result the transverse force acting on the cylinder by the
fluid in the horizontal direction is balanced. As a cylinder
falls freely, a pair of symmetric vortex is formed around the
cylinder and grows in its size by maintaining its symmetric
shape, so that we cannot observe any vortex shedding in the
wake of cylinder as shown in Figure 5(d).
When G =2.5, the starting position of cylinder moves
slightly to the right wall of channel and is located at x =1.0.
Because the distance from the right and left walls to the
cylinder center is not the same at G =2.5, the transverse
force acting on the cylinder caused by the fluid is not
balanced. As a result, a pair of symmetric vortices formed
around the cylinder at the initial stage of free fall of
cylinder does not maintain its symmetric shape as the
cylinder falls freely and the counter-clockwise vortex
formed in the left side of near wake around the cylinder
start to be inclined to right side at t =0.25. As the cylinder
falls further freely, we can observe the generation of the
periodic Karman vortex shedding in the wake of cylinder as
time goes by. We can also observe the formation of weak
shear layer on the right wall of channel and its development
as a function of time, because of the interaction between
the right wall of channel and the vortex adjacent to the right
wall in the wake region of the cylinder.
As the distance between the initial position of cylinder and
the right wall of channel decreases with decreasing G,
values from 3.5 to 0.1, the transverse force imbalance
acting on the cylinder caused by the fluid increases. As a
result, as G, decreases, the vortex shedding in the wake
occurs in the earlier time, the extent of vortex shedding
formed in the wake region increases, and the interaction
between the vortex in the wake and the right wall of
channel becomes stronger. We can also observe the
transverse motion of cylinder in addition to the free falling
motion of cylinder more clearly with decreasing G .
Especially, when G,=0.1 in which the distance between
the cylinder and the right wall of channel at the starting
point of cylinder free fall is very narrow, the interaction
between the fluid flow around the falling cylinder and the
right wall of channel becomes very strong.
3.2 Force in the transverse directions
Figures 6(a) and 6(b) show the time history of transverse
force and the time history of transverse trajectory, velocity
and force acting on the cylinder by the fluid, respectively,
for different density ratios of p,.=1.25, 1.5 and 1.75 at
G =0.1. After the starting transient time period shown in
figures 6(a) and 6(b), the regularly periodic transverse
force acts on the falling cylinder. When the cylinder density
becomes larger than the fluid density with increasing
density ratio, the amplitude and frequency of oscillating
transverse force acting on the cylinder decrease.
Figure 6(c) shows the distribution of instantaneous
vorticity contours at five different instants denoted by A, B,
C, D and E in figure 6(b), respectively, at the initial stage
just after the cylinder falls freely, when p, =1.5 and
G =0.1. In figure 6(c), the positive and negative values of
instantaneous vorticities are denoted by the solid and dotted
lines, respectively, in the contour range from -300 to 300
with 14 levels. When t =0.04 corresponding to A in figure
6(b) just after the cylinder falls freely in the channel, the
negative strong repulsive transverse force from the right
wall of channel acts on the cylinder because the gap
between the cylinder and the right wall of channel is very
narrow at the value of G =0.1 and as a result the cylinder
migrates to the left direction. At this instant, the negative
and positive vorticities start to be developed from the left
side of cylinder and on the right wall of channel,
respectively, as shown in figure 6(c). When t =0.07
corresponding to B in figure 6(b), the negative repulsive
transverse force between the cylinder and the right wall of
channel reaches a peak value. Because the gap between the
cylinder and the right wall of cylinder at t =0.07 becomes
larger than that at t =0.04 due to the continuous movement
of cylinder to the left direction in the presence of negative
repulsive transverse force, the positive vorticity starts to be
developed from the right side of cylinder whereas the
negative vorticity in the left side of cylinder develops
further and rotates slightly in the clockwise direction. The
positive vorticity on the right wall of channel is elongated.
As the cylinder keeps moving to the left direction due to its
inertia, the magnitude of negative transverse force
decreases and reaches a zero value at t =0.1 corresponding
to C in figure 6(b). At this instant of time, the negative
vorticity in the left side of cylinder rotates further to the
clockwise direction and becomes very close to the positive
vorticity rotating to the counter-clockwise direction from
the right side of cylinder.
Figure 5: Evolution of the vorticity field and the cylinder position during sedimentation for ρ_r = 1.5 at (a) G_r = 0.1, (b) G_r = 1.0, (c) G_r = 2.5 and (d) G_r = 3.5 (vorticity range: −50 to 50).
The negative vorticity starts to be developed on the right
wall of channel due to the interaction with the positive
vorticity formed in the right side of cylinder whereas the
positive vorticity on the right wall of channel keeps being
elongated in the upward direction. During the time interval
between t =0.1 and t =0.175, the transverse force acting
on the cylinder becomes positive, keeps increasing and
reaches a peak value at t =0.14 corresponding to D in
figure 6(b). The cylinder keeps moving to the left direction
during the time interval between t =0 and t =0.14. The
distance between the cylinder and the right wall of channel
at t =0.14 has a maximum value during the starting time
period of t =0-0.2 seconds. At t =0.14, the negative
vorticity rotating clockwisely in the left side of cylinder
starts to be torn by the positive vorticity rotating
counter-clockwisely from the right side of cylinder. During
the time intervalbetween t =0.14 and t =0.2, the
transverse force acting on the cylinder decreases with a
positive value and the transverse motion of cylinder
changes its direction from left to the right direction in the
presence of positive transverse force.
Figure 6: (a) Time histories of the transverse force for various density ratios (ρ_r = 1.25, 1.5, 1.75) at G_r = 0.1; (b) time history of the transverse force, trajectory and velocity for ρ_r = 1.5 at G_r = 0.1 during the initial stage of free fall; and (c) instantaneous vorticity contours (contour values range from −300 to 300 with 14 levels; positive solid, negative dashed).
At t =0.175, the negative vorticity rotating clockwisely
from the left side of cylinder is almost torn by the positive
vorticity rotating clockwisely from the right side of cylinder.
So, when G =0.1, we can observe the strong interaction
between the fluid flow around the falling cylinder and the
right wall of channel, giving the strong influence on the
motion of falling cylinder, as shown in figure 6(c).
As G, increases from 0.1 to 1.0, the gap between the
falling cylinder and the right wall of channel increases and
as a result the influence of the presence of channel right
wall on the transverse force acting on the cylinder
decreases. Figures 7(a) and 7(b) show the time history of
transverse force and the time history of transverse trajectory,
velocity and force acting on the cylinder by the fluid,
respectively, for different density ratios of p,=1.25, 1.5
and 1.75 at G =1.0. The variation of transverse force
acting on the cylinder as a function of time at G =1.0 is
generally similar to that at G =0.1. As a result the
regularly oscillating period of transverse force is followed
by the starting transient oscillating period and the
amplitude and frequency of oscillating transverse force
decreases as the density ratio increases. However, when
G, =1.0, the starting transient time period of oscillating
Figure 7: (a) Time histories of the transverse force for various density ratios at G_r = 1.0; (b) time history of the transverse force, trajectory and velocity for ρ_r = 1.5 at G_r = 1.0 during the initial stage of free fall; and (c) instantaneous vorticity contours (contour values range from −300 to 300 with 14 levels; positive solid, negative dashed).
transverse force is longer than that when G, =0.1.
Figure 7(c) shows the distribution of instantaneous
vorticity contours at five different instants denoted by A, B,
C, D and E in figure 7(b), respectively, at the initial stage
just after the cylinder falls freely, when p, =1.5 and
G =1.0. When t =0.04 corresponding to A in figure 7(b)
just after the cylinder falls freely in the channel, the
transverse force acting on the cylinder by the fluid at
G=1.0 is still almost zero unlike to the case of G,=0.1
with the negative transverse force and as a result the
distribution of vorticity formed in the wake of cylinder at
G, =1.0 is almost symmetric. When t =0.12 corresponding
to B in figure 7(b), the small negative peak transverse force
at G =1.0 starts to act on the cylinder and as a result the
symmetric shape of vorticity formed in the wake of
cylinder starts to be broken. Unlike to the case of G =0.1,
when G, =1.0, the negative vorticity is formed on the right
wall of channel and the size of positive vorticity in the right
side of cylinder is larger than the size of negative vorticity
in the left side of cylinder. The positive vorticity in the right
side of channel interacts with the negative vorticity formed
on the right wall of channel and is elongated in the
streamwise direction. At t =0.15 corresponding to C in
figure 7(b), the positive transverse force starts to act on the
cylinder while the cylinder still keeps moving to the left
direction. The negative vorticity in the left side of cylinder
rotates in the clockwise direction whereas both the positive
vorticity in the right side of cylinder and the negative
vorticity on the right wall of channel are elongated in the
streamwise direction. At t =0.19 corresponding to D in
figure 7(b), the positive transverse force acting on the
cylinder has a positive peak value and the cylinder changes
its direction to the right. The positive vorticity in the right
side of cylinder starts to be torn by the negative vorticity
rotating clockwisely from the left side of cylinder. The
negative vorticity on the right wall of channel is elongated
further due to the interaction with the positive vortex in
right side of cylinder. At t =0.22 corresponding to E in
figure 7(b), the transverse force acting on the cylinder
becomes almost zero and the distance between the cylinder
and the right wall of channel is smallest during the time
period of t =0-0.25. The negative vorticity rotating
clockwisely from the left side of cylinder starts to be torn
by the positive vorticity rotating counter-clockwisely from
the right side of cylinder. The positive vorticity separated at
t =0.19 is convected to the streamwise direction. The
negative and positive vorticies are formed in series on the
right wall of channel which matches with corresponding
positive and negative vorticies around the cylinder.
When G_r is 2.5, the influence of the presence of the channel right wall on the transverse force acting on the cylinder is much smaller than that when G_r is 0.1 and 1.0, because the starting position of the cylinder free fall is close to the channel center line and the distance between the falling cylinder and the right wall of the channel at G_r = 2.5 becomes larger than that at G_r = 0.1 and 1.0. Figures 8(a) and 8(b) show the time history of the transverse force and the time history of the transverse trajectory, velocity and force acting on the cylinder by the fluid, respectively, for different density ratios of ρ_r = 1.25, 1.5 and 1.75 at G_r = 2.5. Similar to the cases of G_r = 0.1 and 1.0, the transverse force acting on the cylinder at G_r = 2.5 oscillates regularly as a function of time after the starting transient oscillating period and its amplitude and frequency increase with increasing density ratio. However, when G_r = 2.5, the starting transient time period of the oscillating transverse force is much longer than that when G_r = 0.1 and 1.0.
Figure 8(c) shows the distribution of instantaneous vorticity contours at five different instants denoted by A, B, C, D and E in figure 8(b), respectively, at the initial stage just after the cylinder falls freely, when ρ_r = 1.5 and G_r = 2.5. At t = 0.06 and 0.13, corresponding to A and B in figure 8(b), during the starting transient period of time after the cylinder falls freely in the channel, a pair of symmetric vortices in the wake of the cylinder and the negative vorticity on the right wall of the channel are formed, because
the transverse force acting on the cylinder by the fluid at G_r = 2.5 is almost zero for a longer period of time than at G_r = 0.1 and 1.0. At t = 0.18 and 0.215, corresponding to C and D in figure 8(b), because the small positive transverse force starts to act on the cylinder, the symmetric shape of the pair of vortices starts to be broken. As a result the positive vorticity on the right side of the cylinder is slightly longer in the streamwise direction than the negative vorticity on the left side of the cylinder, due to the interaction between the positive vorticity on the right of the cylinder and the right wall of the channel. During this period of time, the pair of vortices in the wake of the cylinder and the negative vorticity on the right wall of the channel are elongated in the streamwise direction as time goes on. At t = 0.26, corresponding to E in figure 8(b), the pair of vortices is elongated in the streamwise direction and starts to oscillate in the presence of the negative transverse force.
Figure 8: (a) Time histories of transverse force for various density ratios at G_r = 2.5, (b) time history of transverse force, trajectory and velocity for ρ_r = 1.5 at G_r = 2.5 at the initial stage of freely falling, and (c) instantaneous vorticity contours (contour values range from -300 to 300 with 14 levels; positive solid, negative dashed; dash-dot: horizontal centre line of channel).
3.3 Trajectories in the transverse directions
Figure 9: (a) Transverse trajectories of a circular cylinder
at various gap ratio and density ratio on x-y plane, and (b)
time evolution of transverse trajectories of a circular
cylinder at various gap ratios and density ratios at initial
stage of freely falling.
Figures 9(a) and 9(b) show the trajectory of a cylinder in
the x-y plane and the initial evolution of the transverse
trajectory of a cylinder, respectively, as a function of time
for different gap ratios and density ratios. The interaction
between the presence of channel wall and vortex shedding
formed in the wake of cylinder determines the trajectory of
cylinder which falls freely in the channel. At the initial
stage of free fall of cylinder, the effect of channel wall
presence on the cylinder trajectory is larger than the effect
of vortex shedding in the wake. As a result, as the gap ratio
decreases, the transverse repulsive force increases and the
distance which the cylinder moves to the left direction
from the starting inlet position increases as shown in
figures 9(a) and 9(b). During this starting period of time, a
pair of vortices formed in the wake of the cylinder is almost
symmetric and as a result the trajectory of cylinder does
not depend on the density ratio. In the following transient
period of time after the initial stage of free fall of cylinder,
a symmetric shape of vortices formed in the wake of
cylinder is broken and starts to oscillate due to the vortex
shedding. As a result the trajectory of cylinder oscillates
due to this vortex shedding with increasing amplitude and
frequency as the density ratio increases, in addition to the cylinder movement to the left caused by the transverse repulsive force in the presence of the channel wall.
Figure 10: Settling time of a circular cylinder at various gap ratios and density ratios.
As the gap ratio increases, the cylinder moves the longer
distances from the starting point of cylinder free fall before
it starts the oscillatory motion. In particular, when G_r = 3.5, the cylinder falls quite a long distance vertically from the starting position (y = 56) to y ≈ 32 without any motion in the transverse direction, as shown in figures 9(a) and 9(b), unlike the cases of G_r = 0.1, 1.0 and 2.5, which show some transverse motion to the left.
However, the distance for the cylinder to fall freely without
any oscillatory motion in the transverse direction does not
depend on the density ratio, meaning that the trajectory of
cylinder starts to oscillate after the cylinder moves the
same distance from the starting position of cylinder free
fall for different density ratios at the same gap ratio. After
the transient periodic trajectory of cylinder, the
quasi-steady periodic trajectory of cylinder is followed
with different amplitudes and frequencies depending on the
density ratio until the cylinder arrives at the bottom of
channel. When G_r = 0.1, 1.0 and 2.5, the cylinder hits the bottom of the channel at a similar position because the cylinder movement to the left increases, due to the increasing repulsive force in the presence of the channel wall, as the gap ratio decreases. However, when G_r = 3.5, the extent in
which the cylinder trajectory deviates from the vertical
centerline is very small because the effect of the channel
wall presence on the cylinder trajectory is negligible.
3.4 The effects of rotation of the cylinder
Figure 10 shows the settling time (t,) of free fall cylinder
as a function of gap ratio for different density ratios. Here
the settling time represents the time required for a free fall
cylinder to arrive at the bottom of the channel. As the gap
ratio increases for the specified density ratio, the distance,
which the trajectory of cylinder travels, decreases with
decreasing transverse motion of cylinder before the
cylinder hits the bottom of channel and as a result the
settling time decreases linearly except in the case of G_r = 3.5. When G_r = 3.5, the settling time does not follow the linear
variation as a function of time and is less than the value when it follows the linear variation. As the density ratio increases for the specified gap ratio, the settling time decreases because the weight of the cylinder with a higher density ratio is larger than that with a lower density ratio.
Figure 11: Time histories of rotational velocity (ω) and angle of rotation (θ) of a circular cylinder for various gap ratios at ρ_r = 1.5 (G_r = 0.1 (dash-dot), 1.0 (dashed),
When the cylinder falls freely, the cylinder has a rotational motion in addition to the linear motion. Figure 11 shows the time histories of the angular velocity (ω) and the angular position of the cylinder relative to its initial angle (θ) when the cylinder starts to fall freely for different values of G_r = 0.1, 1.0 and 2.5 at ρ_r = 1.5. When G_r = 0.1, the shear force difference acting on the left and
right sides of cylinder is very large due to a small gap
between the cylinder and the right wall of channel, and as a
result the cylinder starts to rotate in the clockwise direction
with a large value of angular velocity as soon as it starts to
fall freely from its starting position. The rotational direction
obtained from the present computation under the narrow
gap condition between the cylinder and the right wall of
channel is qualitatively similar to that obtained from the
steady state simulation results based on the lubrication
theory by Hu (1995). At the starting time when the cylinder
starts to fall freely, the value of the clockwise angular
velocity increases abruptly and has a maximum value at
t ≈ 0.1. In the following time, the angular velocity shows
the transient periodic shape with decreasing absolute
magnitude until it reaches the steady periodic state, which
matches well the history of cylinder trajectory as shown in
figure 9. If the cylinder trajectory and the angular velocity
reach the steady periodic state, the angular velocity
oscillates regularly around a value of zero with a small
amplitude, meaning that the cylinder rotation becomes
much smaller than the linear motion of cylinder at the
steady periodic state. While the cylinder falls freely at
G_r = 0.1 by following the trajectory shown in figure 9, the angular position of the cylinder relative to the initial angle (θ) keeps rotating in the clockwise direction, with a rapid decrease in its magnitude at the initial stage of free fall of the cylinder, followed by a gradual decrease in its magnitude. When the cylinder hits the bottom of the channel at G_r = 0.1, θ ≈ -134.7°.
Figure 12: Comparison of time histories of transverse trajectory and velocity from 'thought experiments'.
When the gap ratio increases from 0.1 to 1.0 and 2.5, the gap between the cylinder and the right wall of the channel increases, and as a result the shear force difference acting on the left and right sides of the cylinder at G_r = 1.0 and 2.5 is much smaller than that at G_r = 0.1. Thus the angular velocity at G_r = 1.0 and 2.5 does not decrease abruptly at the initial stage of cylinder free fall, unlike the case of G_r = 0.1. The angular velocity at G_r = 1.0 and 2.5 oscillates regularly around a value of zero in both the transient and steady periodic states. The angular position of the cylinder relative to the initial angle at G_r = 1.0 and 2.5 also keeps rotating in the clockwise direction according to the history of the cylinder trajectory, but the absolute magnitude of θ at G_r = 1.0 and 2.5 is much smaller than at G_r = 0.1. When the cylinder hits the bottom of the channel at G_r = 1.0 and 2.5, θ ≈ -24.2° and -4.25°, respectively. Thus, when G_r = 1.0 and 2.5, the effect of cylinder rotation is relatively small compared to the case of G_r = 0.1.
In order to consider the effect of cylinder rotation on the trajectory of the cylinder and the transverse force acting on the cylinder, we calculated the transverse movement of the cylinder (x_c) and the transverse velocity (U_c) of the cylinder by considering both cases with and without the cylinder rotation. Figure 12 shows the time history of the transverse trajectory and the transverse velocity (U_c) of the cylinder for both cases with and without the cylinder rotation for ρ_r = 1.5 and G_r = 0.1.
4. Conclusions
A freely falling circular cylinder in a long channel has
been simulated using the DF/FD-LBM method for 2D,
which combines the desired features of the
Direct-forcing/Fictitious domain method and lattice
Boltzmann method.
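For readers unfamiliar with the fluid solver, the fragment below is not the authors' DF/FD-LBM code; it is only a minimal sketch of the standard single-relaxation-time (BGK) D2Q9 lattice Boltzmann update (collision followed by streaming) on a periodic box, with the direct-forcing/fictitious-domain coupling to the cylinder omitted. The grid size, relaxation time and helper names are assumptions made for the example.

  import numpy as np

  # Minimal BGK D2Q9 lattice Boltzmann sketch: collision + streaming only,
  # periodic boundaries, no immersed cylinder (the DF/FD coupling is omitted).
  nx, ny, tau = 64, 32, 0.8                        # assumed grid size and relaxation time
  c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                [1, 1], [-1, 1], [-1, -1], [1, -1]])   # D2Q9 lattice velocities
  w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)         # lattice weights

  def equilibrium(rho, u):
      # Equilibrium distributions f_eq for all nine directions.
      cu = np.einsum('qd,dxy->qxy', c, u)            # c_i . u at every node
      usq = np.einsum('dxy,dxy->xy', u, u)           # |u|^2 at every node
      return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

  rho = np.ones((nx, ny))                            # fluid initially at rest
  u = np.zeros((2, nx, ny))
  f = equilibrium(rho, u)

  for step in range(100):
      rho = f.sum(axis=0)                            # macroscopic density
      u = np.einsum('qd,qxy->dxy', c, f) / rho       # macroscopic velocity
      f += -(f - equilibrium(rho, u)) / tau          # BGK collision: relax to equilibrium
      for q in range(9):                             # streaming along each lattice velocity
          f[q] = np.roll(f[q], shift=(c[q, 0], c[q, 1]), axis=(0, 1))

In a DF/FD-LBM scheme of the kind referred to above, a direct-forcing step would additionally be applied at each iteration to impose rigid-body motion on the lattice nodes covered by the fictitious-domain cylinder.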
The gap ratio between the wall and the cylinder is varied
from 0.1 to 3.5 for three density ratios (ρ_r = 1.25, 1.5 and
1.75) considered in the present study.
For all ρ_r's, when G_r is decreased, the interaction between the fluid flow around the falling cylinder and the right wall of the channel becomes very strong. Therefore the vortex shedding in the wake region of the cylinder forms earlier and influences the transverse trajectory as well as the translational trajectory of the cylinder, because the transverse force imbalance acting on the cylinder caused by the fluid increases.
For all G_r's, when ρ_r is increased, the transverse force acting on the cylinder by the fluid is larger than at small ρ_r. However, the period of the oscillation in the transverse direction decreases and the amplitude increases.
This work was supported by the Korea Foundation for
International Cooperation of Science &
Technology(KICOS) through a grant provided by the Korea
Ministry of Education, Science & Technology(MEST) in
2009 (No. K20702000013-07E0200-01310).
1. Fortes, A. F., Joseph, D. D. & Lundgren, T. S., Nonlinear mechanics of fluidization of beds of spherical particles, J. Fluid Mech., 177, 467-483 (1987)
2. ten Cate, A., Nieuwstad, C. H., Derksen, J. J. & Van den Akker, H. E. A., Particle imaging velocimetry experiments and lattice-Boltzmann simulations on a single sphere settling under gravity, Phys. Fluids, 14, 11, 4012-4025 (2002)
3. Pan, T.-W., Joseph, D. D., Bai, R., Glowinski, R. & Sarin, V., Fluidization of 1204 spheres: simulation and experiment, J. Fluid Mech., 451, 169-191 (2002)
4. Jenny, M., Dusek, J. & Bouchet, G., Instability and transition of a sphere falling or ascending freely in a Newtonian fluid, J. Fluid Mech., 508, 201-239 (2004)
5. Hu, H. H., Joseph, D. D. & Crochet, M. J., Direct simulation of fluid particle motions, Theoret. Comput. Fluid Dynamics, 3, 285-306 (1992)
6. Feng, J., Hu, H. H. & Joseph, D. D., Direct simulation of initial value problems for the motion of solid bodies in a Newtonian fluid. Part 1. Sedimentation, J. Fluid Mech., 261, 95-134 (1994)
7. Ladd, A. J. C., Numerical simulations of particulate suspensions via a discretized Boltzmann equation. Part 1. Theoretical foundation, J. Fluid Mech., 271, 285-309 (1994)
8. Ladd, A. J. C., Numerical simulations of particulate suspensions via a discretized Boltzmann equation. Part 2. Numerical results, J. Fluid Mech., 271, 311-339 (1994)
9. Hu, H. H., Motion of a circular cylinder in a viscous liquid between parallel plates, Theoret. Comput. Fluid Dynamics, 7, 441 (1995)
10. Hu, H. H., Direct simulation of flows of solid-liquid mixtures, Int. J. Multiphase Flow, 22, 2, 335-352 (1996)
11. Qi, D., Lattice-Boltzmann simulations of particles in non-zero-Reynolds-number flows, J. Fluid Mech., 385, 41-62 (1999)
12. Feng, Z. G. & Michaelides, E. E., The immersed boundary-lattice Boltzmann method for solving fluid-particles interaction problems, J. Comput. Phys., 195, 602-628 (2004)
13. Glowinski, R., Pan, T.-W., Hesla, T. I., Joseph, D. D. & Periaux, J., A fictitious domain approach to the direct numerical simulation of incompressible viscous flow past moving rigid bodies: Application to particulate flow, J. Comput. Phys., 169, 363-426 (2001)
14. Yu, Z. & Shao, Z., A direct-forcing fictitious domain method for particulate flows, J. Comput. Phys., 227, 292-314 (2007)
15. Namkoong, K., Yoo, J. Y. & Choi, H. G., Numerical analysis of two-dimensional motion of a freely falling circular cylinder in an infinite fluid, J. Fluid Mech., 604, 33-53 (2008)
16. Wang, Z., Fan, J. & Luo, K., Combined multi-direct forcing and immersed boundary method for simulating flows with moving particles, Int. J. Multiphase Flow, 34, 283-302 (2008)
17. Uhlmann, M., An immersed boundary method with direct forcing for the simulation of particulate flows, J. Comput. Phys., 209, 448-476 (2005)
18. Buick, J. M. & Greated, C. A., Gravity in a lattice Boltzmann model, Phys. Rev. E, 61, 5307 (2000)
19. Wan, D. & Turek, S., An efficient multigrid-FEM method for the simulation of solid-liquid two phase flows, J. Comp. Appl. Math., 203, 561-580 (2007)
20. Zettner, C. M. & Yoda, M., The circular cylinder in simple shear at moderate Reynolds numbers: An experimental study, Experiments in Fluids, 30, 346-353 (2001)
21. Ding, E. & Aidun, C. K., The dynamics and scaling law for particles suspended in shear flow with inertia, J. Fluid Mech., 423, 317-344 (2000)
|
|
Engineering Electromagnetic
Posted Date: 26-Jun-2010 | Category: Syllabus | Author: stefi bhavsar
Syllabus for Engineering Electromagnetic,SEM-V,GTU
UNIT-1. Review of Vector Analysis and Vector Calculus:
Scalars & Vectors, Dot and cross products, Co-ordinate systems and conversions,
Review of line, Surface and volume integrals – Definition of curl, divergence and gradient – Meaning of Divergence theorem and Stokes' theorem
UNIT-2. Electrostatics:
1. Coulomb's law and electrical field intensity: Coulomb's law, Field due to
different charge distributions.
2. Electric flux density, Gauss's law and divergence: Concept of electric flux
density, Gauss's law and its applications, Differential volume element,
Divergence, Maxwell's first eqn. and divergence theorem for electric flux
3. Energy and potential : Energy expanded in moving a point charge in electrical
field, Line integral, Definition of potential difference and potential, Potential field of a point charge and system of charges, Potential gradient, Dipole, Energy density in
electrostatic field.
UNIT-3. Steady Magnetic Field:
Biot-Savart's law, Ampere's circuital law, Applications of this law for an infinitely long coaxial transmission line, Solenoid and toroid, Point form of Ampere's circuital law
concept of flux density, Scalar and vector magnetic potential, Stoke's theorem for magnetic field, Point and integral forms of Maxwell's equations for steady electric and
magnetic fields.
UNIT-4. Electric and Magnetic Fields in Materials:
1. Conductors, dielectrics and capacitance :
Definition of currents and current density, Continuity equation, Metallic
conductors and their properties, Semiconductors, Dielectric materials,
Characteristics, Boundary conditions, Capacitance of a parallel plate capacitor,
Coaxial cable and spherical capacitors.
2. Poisson's and Laplace equations:
Poisson's and Laplace equation, Uniqueness theorem, Examples of solution of
Laplace and Poisson's equations.
3. Magnetic forces, materials and inductance :
Force on a moving charge, Force on a differential current element, Force and
torque on a close circuit, magnetization and permeability, Magnetic boundary
conditions, Magnetic circuit, Self inductance and Mutual inductance.
UNIT-5. Time Varying Fields and Maxwell's Equations:
Faraday's law, Displacement current, Maxwell's equations in point and integral forms for time varying fields .
UNIT-6. Electromagnetic Waves:
The uniform plane waves:
Wave motion in free space, Perfect dielectric, Dielectric, Poynting vector, Power consideration, Propagation in good conductor, Phenomena of skin effect, Reflection of uniform
plane waves, Plane waves at normal incidence, and at oblique incidence, Standing wave ratio.
References Books:
1. W. H. Hayt & J. A. Buck: "Engineering Electromagnetics", Tata McGraw-Hill, 7th Edition
2. Elements of Electromagnetics by Matthew Sadiku, 4th Edition, Oxford University Press.
3. Electromagnetics by Joseph Edminister, Schaum's Outline Series, TMH.
4. Electromagnetics with Applications by J. D. Kraus and Daniel Fleisch, 5th Edition, McGraw-Hill.
|
|
The Stages of Samadhi
Samsara is a bad dream, a nightmare, a painful illusion. The purpose of our practice is to awaken from that bad dream, as Shakyamuni did, and as many others have since. The mechanism of that
awakening has been analyzed for two and a half millennia. There are many accounts, although I think there is only one awakening. The fundamental technique is samadhi.
As a first approximation, we may say that the word "samadhi" means "concentration." Right samadhi is the eighth component of the Buddha's Noble Eightfold Path. Again, the Buddhist Triple Discipline
consists of shila (morality or discipline), prajna (wisdom or insight), and samadhi. Thus samadhi is fundamental to Buddhism.
The Buddha Shakyamuni did not invent samadhi. He learned it from his teachers during the phase of his life prior to his enlightenment. The traditional account has the young Gautama, after leaving
home, visit a series of yogic teachers who taught him a variety of meditative techniques. He is said to have achieved profound samadhis, mastering what each of these teachers had to offer. In each
case, however, he remained unsatisfied. Finally, sitting all night under the Bodhi Tree, he went beyond what any of his teachers had been able to show him and had the experience that founded
Buddhism. When he came to be a teacher in his own right, he taught many of the techniques that he had learned. Thus there are elements of traditional yoga that are included in the Pali Canon. In
particular, there is a rather specific set of samadhis that are described in the Canon and that are said to have been practiced by the Buddha.
The discussion of samadhi in the Pali Canon is strikingly similar to discussions of samadhi that we find in other traditions from the Common Path --- that is to say, the great Indian religious
tradition that includes Buddhism, Hinduism, Jainism, Yoga, Tantra, and so on. It is reasonable to suppose that there was a single practice, or collection of related practices, which were the common
possession of all of the teachers of the period around 500 BCE when the various strands of the Common Path were emerging. These practices were referred to as "samadhi." We find descriptions of these
practices in a number of ancient texts, some Buddhist and some not.
The clearest and most explicit ancient discussion of samadhi that I know is found in the Yoga Sutras of Patanjali. Little is known about Patanjali. He is traditionally identified with a grammarian of
the same name who flourished in the second century BCE. Many scholars doubt this identification. Indeed, it is sometimes claimed that the Yoga Sutras show the influence of a fully developed Buddhism
which would not have existed until about the fifth century CE. Patanjali himself was certainly not a Buddhist. Still, it is easy to translate much of his approach to enlightenment into Buddhist
terms. At any rate, I will freely adapt some of his terminology and of his conceptual apparatus for the discussion of samadhi. In order to relate my discussion to the Buddhist context, I will also
refer to the Visuddhimagga, or "Path of Purification," of Buddhaghosa. This is the most authoritative and systematic exposition of Theravada Buddhism. It was written in the fifth century CE in
Ceylon. I have used the translation by Bhikkhu Nanamoli.
Preparation for Samadhi
In order to explain samadhi, let me take an example from my own professional experience. Concentration is fundamental to creativity. Most of our undergraduate students, unfortunately, have never
learned even the rudiments of concentration. For example, mathematics has to do with objects which are not in the physical world. It is the study of numbers, functions, geometrical patterns, and a
wealth of other abstract objects. If we want to think about these objects, we have to be able to focus on them. Our students are used to doing their mathematics homework while watching television and
talking on their cellphones. They are multi-tasking, I suppose. That makes it impossible for them to do any serious mathematics, because they cannot focus on the objects they are dealing with. They
can carry out a routine sequence of steps, an algorithm in the jargon, but they cannot do anything that is not prescribed by the rules. Up to a point they can compensate for this deficiency by
drawing pictures. In a calculus course, for example, we teach students how to draw graphs. Today there are graphing calculators and computer software that will produce such pictures automatically.
Such devices, however, are limited in their applicability. They are not good at showing geometrical patterns in three dimensions. They are completely defeated by patterns in more than three
dimensions, because such patterns cannot be seen by the physical eye, only by "the eye of the mind." Non-geometrical patterns can also not be drawn by such devices. Moreover, looking at a physical
picture is nowhere near as good as looking at a mental picture. A mental picture is completely ours, completely under our control. A physical picture is external to us. We may not notice crucial
details. In order to use it, we have to focus on it, concentrate on it --- in effect, turn it into a picture in our mind.
What is it like to work on a mathematical problem that cannot be solved algorithmically --- that is to say, which cannot be solved by a routine application of the rules? First we have to learn the
relevant facts and terminology and computations. We have to become comfortable with the subject domain in which the problem arises. That is the easy part, but can of course be challenging if the
mathematical domain is unfamiliar. Then we have to concentrate on the situation that the problem concerns. Often that amounts to visualizing or otherwise internalizing a pattern of some sort,
although not necessarily a geometrical pattern. When we try to do that, we find that our mind wanders. All sorts of thoughts from our ordinary life come along that are irrelevant to the problem we
are trying to solve. We have to keep bringing our mind back to the problem. Most of our students never get past that difficulty, and therefore they cannot succeed in any serious mathematics course
beyond the sophomore level. The difficulty is that mathematics is abstract, which means in particular that it is emotionally barren. Students often ask things like, "How is this relevant to my life?"
What they are asking for is some sort of hook to help them pull their attention back to the problem. I often taught engineering mathematics, where the technological application could be used as such
a hook. In pure mathematics, the lack of such a hook is often crippling to students. The appeal of the material to the professional mathematician is aesthetic. But that is getting ahead of the story.
The yoga tradition recommends sitting down in an appropriate meditative posture. That has the effect of making the musculature, the outermost sheath, stable and comfortable so that it can be ignored.
My undergraduate students often had abysmal posture, which I think made concentration more difficult. Next, the yoga tradition recommends bringing our breathing under control. This is pranayama. It
has the effect of making the second sheath, the prana body, steady and comfortable so that it can be ignored as well.
The First Stage of Samadhi
The next step in solving the mathematical problem is to get ourselves to think about the problem --- that is to say, to produce a state of mind in which the thoughts that arise are about the problem.
Patanjali refers to this as the first stage of samadhi. In his jargon, which I will usually follow, this is savitarka samadhi, or "samadhi with reasoning." Achieving savitarka samadhi is an example
of shamata. We recognize irrelevant thoughts as irrelevant, and then let them go and do not follow them. However here the goal is not to be aware of the breath and the posture, but of the object of
the concentration --- the mathematical problem.
Buddhaghosa, writing Pali rather than Patanjali's Sanscrit, refers to stages of jhana rather than stages of samadhi. The Pali word "jhana" is cognate to the Sanscrit "dhyana," which means meditation.
I will ignore this minor terminological difference. Buddhaghosa gives two different classifications of the stages of samadhi. I will focus on the second, which is closer to Patanjali. Buddhaghosa's
first stage is characterized by vitakka, corresponding to the Sanscrit "vitarka" of Patanjali. Nanamoli translates this word as "applied thought." What is characteristic of this stage of samadhi, in
my experience, is verbal or conceptual thought. The mind is occupied with discursive reasoning about the object of the samadhi. Buddhaghosa says that vitakka "is the act of keeping the mind anchored
... like the ringing of a bell," or "like a bird's spreading out its wings when about to soar into the air." On the other hand, the thought characteristic of the next stage, vichara, is "quiet, like
the bird's planing with outstretched wings after soaring into the air." (Visuddhimagga IV, 89)
In the first stage of samadhi we are resting in the conceptual mind. That sheath is active, but relaxed and focussed on the problem.
What is essential in any kind of concentration, whether solving a mathematical problem or sitting in meditation, is to let go of the inner monologue that maintains the emotionally charged dream that
is the world of our ordinary life. Then the pain of samsara begins to recede, and there is room for other visions to appear.
When we do mathematics, it is often possible to solve the problem in this first stage of samadhi. We follow the words that come to us, and construct a rational argument that gives the answer to the
problem. That rational argument is called a "proof," and all undergraduate mathematics that goes beyond algorithms amounts to the construction of proofs. Indeed, the same is true of much graduate
mathematics and even some professional mathematics. However, not much creativity happens on this level. Let me assume, for the sake of my exposition, that the mathematical problem we are working on
is sufficiently difficult that it cannot be solved just by reasoning about the situation that the problem concerns.
The Second Stage of Samadhi
In the first stage of samadhi, there is still a stream of words. The thoughts keep coming, although they are all thoughts about the object. The next stage is to stop the thoughts, so that our
consciousness is fully absorbed in the object. We are still aware of the object as being what it is. But the words have stopped. (Actually, this is an oversimplification. Words do still come,
although there are long spaces between them. But we are not attending to the words, and often they are not particularly meaningful. My experience is that the words that come are often some slogan
that is repeated to absorb the functioning of the discursive part of the brain. Such repetition of a slogan is often done deliberately as a meditative technique. In that case it is usually called
Now we are in the second stage of samadhi, whether we are following Patanjali's scheme or Buddhaghosa's. In Patanjali's jargon, this is savichara samadhi, or "samadhi with reflection." (What I am
translating "reflection," Nanamoli translates "sustained thought." It means a non-verbal contemplation of an object or pattern.) It is here that creativity occurs. We are completely absorbed in a
geometrical pattern, let us say. Suddenly, the pattern shifts. Now we see a possible solution to the problem. For example, consider a classical Euclidean problem. It usually turns out that we have to
construct a new line, or something of the sort. We are thinking about the picture, focussed on the picture, and suddenly we see the line. The mathematician, utterly absorbed in his trance, unaware of
what is going on around him (to the great annoyance of his wife), suddenly says, "Oh! I see!" and the trance breaks. He quickly makes a sketch or writes a note on his yellow pad, so that he does not
forget what he saw, and now he is willing to pay attention to his surroundings.
In order to understand samadhi, as we see it discussed in the yogic and Buddhist texts, it is crucial to understand what happens when the mathematician sees the new line in his diagram, or more
generally when he sees what he hopes is the solution to the problem he is working on. His mind is completely focussed on some sort of abstract pattern. He is completely absorbed --- entranced.
Moreover, in his mind, the words have largely stopped. Suddenly, the new object appears. He sees it or hears it. It comes from nowhere. He does not consciously create the new object. It is just
suddenly there. That is the appearance which is the product of his trance. People say vague things like, "it was the product of his unconscious mind." The fact is that no one knows where such ideas
come from. They emerge, magically, from the trance.
In the second stage of samadhi we are ignoring the conceptual mind and are resting in the perceptual mind. The conceptual mind, so to speak, is idling. The perceptual mind is working to decipher the
pattern we are trying to understand.
But a trance has to be interpreted. It is not its own interpretation, contrary to what some people think who write about divine revelation. The idea comes from nowhere, and it comes without
instructions about how it is to be interpreted or used. That is the function of the mathematician's training. He can apply his algorithmic skills, which do not involve samadhi, or his reasoning
skills, which at most involve the first stage of samadhi. One way or another, he tries to make the idea solve the problem. Professional mathematicians often say, "The idea is ...," and then say a few
words. They expect that their hearer can do the routine work that leads to the explicit solution, which may be many pages of computation or reasoning. But the idea is not routine. It emerges from the trance.
Often it turns out that the idea does not work. That is the role of what the mathematician calls "rigor." His training has given him a critical apparatus to distinguish between a correct proof or
computation, and one that only seems correct. Since the trance seems to dictate its own interpretation, it often happens that the mathematician thinks the idea works, but is wrong. Traditionally, he
goes down the hall and shows his work to a colleague, who points out the error if there is one. Mathematicians often say, "I have a solution, but no one has checked it."
The Third Stage of Samadhi
Now suppose that no idea comes, or that the idea that came turned out not to work, and no new idea comes. The mathematician is fully entranced, completely absorbed by the pattern. Now, if he is
lucky, the pattern becomes his consciousness, so that there is no longer any conceptualization whatever. Patanjali's metaphor is that the mind of the yogin is like a piece of glass sitting on a
colored cloth. The glass takes on the color. If you look through the glass, you do not see the glass, but only the color. In that case, there is bliss. The state is joyous. This is the third stage of
samadhi, sananda samadhi, or "samadhi with bliss." (Buddhaghosa separates this stage into two stages. He does not use a word cognate to the Sanscrit ananda, which I am translating as "bliss."
Instead, he speaks of a third stage characterized by piti, which Nanamoli translates "happiness," and a fourth stage characterized by sukha, which Nanamoli translates "bliss." I do not know what
distinction Buddhaghosa, following the Pali tradition, is trying to make. I do not find such a distinction in my practice. I will follow Patanjali in not making the distinction.) The professional
pure mathematician thinks of this state as the reward for his hard work. This state is what makes it all worthwhile. He is usually not able to say much about the state as such. He talks instead about
the "enormous aesthetic appeal" of the subject, or "the amazing beauty of the ideas in this area," or something of the sort. That is, he falsely attributes his bliss to the object of the trance,
rather than to his own mind. This is called "seeking the Buddha outside yourself." It is a characteristic human mistake.
In the third stage of samadhi, we are ignoring the perceptual mind and are resting in the emotions. In the mathematical case, that sheath cannot usually make a contribution to the solution of the
problem. Nevertheless, at this stage it is active and quite prominent in our experience.
I should point out that although both Patanjali and Buddhaghosa, as well as much of the rest of the tradition, describe the third stage of samadhi as blissful --- that is to say, euphoric --- there
are important meditative techniques which lead to other emotional flavors in this stage of samadhi. An example is the Tibetan technique called vipashyana.
The Fourth Stage of Samadhi
The bliss that we experience in the third stage of samadhi is a distraction from the object of the trance. In the mathematical context, it does not contribute to solving the problem. The
mathematician, if he is very accomplished, learns to let go of the bliss. What then happens is that his mind lets go of the specific object he has been studying and opens to a larger vista in which
that object is located. He understands the subject as a whole, seeing broad patterns which, perhaps, illuminate the larger context of his problem. In Patanjali's jargon, this is the fourth stage of
samadhi and is called sasmita samadhi, or "samadhi with self-consciousness." Buddhaghosa says that this fourth stage of samadhi is characterised by ekagatta, which Nanamoli translates "unification of
mind." This word is also often translated "one-pointedness." Buddhaghosa emphasizes that the fourth stage is characterized by equanimity and mindfulness. The meditator is now free from all clinging.
In the mathematical setting, it is in the fourth stage of samadhi that we most strongly encounter the phenomenon that mathematicians call "intuition". It is the highest achievement of the
professional. Colleagues say, "His intuition is wonderful. He was able to see that such and such had to be true, although it took ten years before anyone was able to prove it." His insight emerged
from the trance. It was based on nothing, but was correct.
In the fourth stage of samadhi all of the sheaths are quiet except the innermost Emptiness, which manifests itself as a bare consciousness of the object of our meditation.
Finally, if we can reach the fourth stage of samadhi, there is a way beyond. We must let go of the sense of self, and with it of the object. Then there is nothing at all: no reasoning, no
conceptualization, no bliss, no sense of self. That is what Patanjali calls nirbija samadhi, or "samadhi without a seed". (The "seed" is the object of the concentration.) To my knowledge,
mathematicians do not reach this stage. Or at least, not many of them do.
Lodrö Chödrak
Lodrö Chödrak can be reached by e-mail at
|
|
Solomon Lefschetz
Solomon Lefschetz (1884-1972) did pioneering work in algebraic geometry, algebraic topology, and differential equations, and exerted tremendous influence over American mathematics as a professor at
Princeton University and the editor of the journal Annals of Mathematics.
Lefschetz was born in Moscow into a Jewish family. His father had frequent business in Persia, and usually based his family in Paris. At age 21 Lefschetz emigrated to the United States. He worked
briefly at the Baldwin Locomotive Works, then for Westinghouse Electric Company in Pittsburg from 1907 to 1910. His industrial career was cut short by an accident in which he lost both his hands.
Lefschetz started over as a mathematician, receiving his doctorate from Clark University in 1911.
Lefschetz accepted a position at the University of Nebraska, then moved to the University of Kansas. Despite heavy teaching demands and near-complete isolation from other research mathematicians, he
produced research papers of striking originality and importance. (He later wrote that his position "enabled me to develop my ideas in perfect mathematical calm".) In 1924 he went to Princeton as a
visiting professor, and his post was made permanent the next year. Lefschetz became Henry Fine research professor in 1932 and retained that post until his retirement from Princeton in 1953.
Lefschetz began his research by studying algebraic varieties (sets defined by the vanishing of polynomials). He applied to them the ideas of algebraic topology invented by Poincaré, and developed
algebraic intersection theory. Later he recast this intersection theory as the "cup product" in the cohomology theory developed by Alexander, Cech and Whitney, and this is the form in which it
appears in his 1942 monograph Algebraic Topology.
One of Lefschetz's most widely used results, the Lefschetz fixed-point theorem, asserts that a map f from a "nice" compact space to itself has a fixed point (a point x such that f(x)=x) when a
certain numerical invariant (the "Lefschetz number") of f is nonzero. Since the Lefschetz number depends only on the function f induces on the homology groups of the space, it is effectively
computable. For some spaces (for example, those having the same homology groups as a point) the Lefschetz number of any map f from the space to itself is nonzero, so the Lefschetz fixed-point theorem
shows that these spaces have the fixed-point property (any map from the space to itself has a fixed point).
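In symbols (a standard textbook formulation, not something specific to this biography): for a continuous map $f: X \to X$ on a suitable compact space, the Lefschetz number is
$$\Lambda(f) = \sum_{k \ge 0} (-1)^k \,\mathrm{tr}\bigl(f_* : H_k(X;\mathbb{Q}) \to H_k(X;\mathbb{Q})\bigr),$$
and the fixed-point theorem states that $\Lambda(f) \neq 0$ forces $f$ to have a fixed point. When $X$ has the homology of a point, $\Lambda(f) = 1$ for every map $f$, which is how the fixed-point property mentioned above follows (for the disk this recovers Brouwer's fixed-point theorem).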
After 1942 Lefschetz shifted his attention to differential equations. He actively pursued the geometric (or "qualitative") approach to nonlinear differential equations (again following in the
footsteps of Poincaré), and established new results on the stability of equilibrium points and periodic orbits. After retiring from Princeton he became a consultant to the Research Institute for
Advanced Studies (R.I.A.S.), an industry-sponsored research center. In 1964 the part of R.I.A.S. devoted to differential equations found a home at Brown University, where it became the Lefschetz
Center for Dynamical Systems, with Lefschetz as visiting Professor of Applied Mathematics.
Lefschetz was famous for his intuitive style of reasoning and strong opinions. Students said of him that he never gave an incorrect result or a correct proof. His judgements could be harsh: for
example, he despised most point-set topology as "baby stuff". He could be wrong, most famously in the case of a paper of William Hodge. Lefschetz thought that Hodge's paper was wrong, and told all
his mathematical friends so. Hodge came to Princeton and gave a seminar on the paper (as Hodge later wrote, "owing to [Lefschetz's] characteristic interruptions the seminar actually lasted for six
sessions"); afterward, Lefschetz publicly acknowledged that Hodge was right and wrote to his friends admitting as much.
|
|
Definite Integration
January 24th 2010, 05:10 AM #1
Jun 2007
Definite Integration
find the total area between the region and the x-axis.
y = -x^2 - 2x when -3 <= x <= 2 (which i took it as [-3,2])
This was my work:
I integrated the equation:
-(1/3)x^3 - x^2 [-3,2]
I plugged in the numbers and got(plugged in 2 then -3):
-(8/3) - 4 - (9 - 9)... which would equal -20/3, but the answer said +28/3.
While the "area beneath a graph", for a graph that is always above the x-axis is the integral itself, that is NOT the "area between the graph and the x-axis" for a graph that is both above and
below the x-axis. The area between a graph, y= f(x), and the x-axis is $\int |f(x)|dx$ since area is always positive.
This graph is below the x-axis for -3 < x < -2, above the x-axis for -2 < x < 0, and below the x-axis again for x > 0. To get the total area between it and the x-axis, you need to do it in three parts.
For -3 < x < -2, the area is $\int_{-3}^{-2} |y| dx= \int_{x= -3}^{-2} x^2+ 2x dx$. For -2 < x < 0, the area is $\int_{-2}^0 |y|dx= \int_{-2}^0 -x^2- 2x dx$. For 0 < x < 2, the area is $\int_{x=0}^2 |y| dx= \int_{x=0}^2 x^2+ 2x dx$. The total area is the sum of those three integrals.
January 24th 2010, 05:44 AM #2
MHF Contributor
Apr 2005
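Not part of the original thread, but as a quick check of the 28/3 answer, here is a short SymPy sketch that mirrors the three-piece split described in the reply above (the split points x = -2 and x = 0 are the roots of y):

  from sympy import symbols, integrate

  x = symbols('x')
  y = -x**2 - 2*x

  # Split at the roots x = -2 and x = 0, flipping the sign where y < 0.
  area = (integrate(-y, (x, -3, -2))   # y < 0 on (-3, -2)
          + integrate(y, (x, -2, 0))   # y > 0 on (-2, 0)
          + integrate(-y, (x, 0, 2)))  # y < 0 on (0, 2)
  print(area)                          # 28/3

  # The signed integral (what the original attempt computed):
  print(integrate(y, (x, -3, 2)))      # -20/3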
|
|
Cardiff By The Sea Geometry Tutor
Find a Cardiff By The Sea Geometry Tutor
...I have a year of math tutoring experience for undergraduate college students. Also, I worked at Legoland for 3 years as a ride associate. About me: I grew up in Carlsbad and played high school
softball and soccer.
8 Subjects: including geometry, calculus, algebra 2, algebra 1
...My background in geometry includes years of math training at Stuyvesant High School, a math and science high school in New York City, as well as undergraduate schooling at Dartmouth College. I
am eager not just to get you through your next exam, but also to promote a genuine interest in this subject. Please contact me for more information about geometry tutoring.
22 Subjects: including geometry, chemistry, organic chemistry, algebra 1
...Being a recent graduate, I am equipped with up-to-date research-based teaching practices and am knowledgeable in the current California and National standards. I have student taught in grades
Kinder, 2nd, 4th, and 5th, and have volunteered as a Reading Tutor and Reader for grades K-3 within the ...
25 Subjects: including geometry, English, reading, writing
...Currently, I consult as an IT, database analyst. My teaching experience comes from graduate school, where I was required to teach, and through coaching my children's Math League and Science
Olympiad. I have also tutored children in the No Child Left Behind Program.
24 Subjects: including geometry, chemistry, physics, calculus
...I use all the concepts that you will see in a high school and entry level college chemistry course quite regularly at both school and work. I have experience with both the AP Chemistry course
as well the SAT II subject tests as I managed to attain a perfect score on each of them. I obtained a minor in physics in college and was able to receive a top score on both the AP Physics B & C
19 Subjects: including geometry, chemistry, calculus, physics
|
|
Maths problems!
May 16th 2005, 07:08 AM #1
May 2005
Maths problems!
Please can anyone help me, I am trying to finish a maths paper for work and I am struggling! I need to know the following: a square wave has a period of 0.4 second. What is the frequency in hertz (Hz) of the fundamental component of the associated Fourier series?
From the department of long dead questions
The fundamental should be the same as that of the square wave, that is:
$f=1/\tau=1/0.4=2.5 \text{Hz}$
I hope the reply is not too late, and I hope this is not from a take home exam, we don't approve of cheating
February 11th 2009, 05:40 AM #2
Grand Panjandrum
Nov 2005
February 11th 2009, 01:44 PM #3
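Not part of the original thread, but for context: an ideal zero-mean square wave of amplitude $A$ and period $T$ has the Fourier series
$$x(t) = \frac{4A}{\pi} \sum_{n\ \text{odd}} \frac{1}{n}\,\sin(2\pi n f_0 t), \qquad f_0 = \frac{1}{T},$$
so only odd harmonics of the fundamental appear, and for $T = 0.4$ s the fundamental component is the $n = 1$ term at $f_0 = 1/0.4 = 2.5$ Hz, in agreement with the reply above.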
|
|
A Speedboat Starts From Rest And Accelerates At ... | Chegg.com
A speedboat starts from rest and accelerates at +2.01 m/s2 for 6.60 s. At the end of this time, the boat continues for an additional 5.50 s with an acceleration of +0.518 m/s2. Following this, the
boat accelerates at -1.49 m/s2 for 8.60 s.
(a) What is the velocity of the boat at t = 20.7 s?
(b) Find the total displacement of the boat.
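No solution is given on the page; the short sketch below is only an illustrative check that applies the constant-acceleration relations v = v0 + a t and Δx = v0 t + (1/2) a t^2 to each of the three phases in turn (the rounded values in the comments are my own):

  # Piecewise constant-acceleration kinematics for the three phases.
  phases = [(2.01, 6.60), (0.518, 5.50), (-1.49, 8.60)]  # (a in m/s^2, duration in s)

  v = 0.0   # starts from rest
  x = 0.0
  for a, dt in phases:
      x += v * dt + 0.5 * a * dt**2   # displacement during this phase
      v += a * dt                     # velocity at the end of this phase

  print(v)   # (a) about 3.3 m/s at t = 6.60 + 5.50 + 8.60 = 20.7 s
  print(x)   # (b) about 208 m total displacement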
|
|
Pushout stability of embeddings, injectivity and categories of algebras
Topology Atlas Document # ppae-27
Lurdes Sousa
Proceedings of the Ninth Prague Topological Symposium (2001) pp. 295-308
In several familiar subcategories of the category T of topological spaces and continuous maps, embeddings are not pushout-stable. But, an interesting feature, capturable in many categories, namely in
categories B of topological spaces, is the following: For M the class of all embeddings, the subclass of all pushout-stable M-morphisms (that is, of those M-morphisms whose pushout along an arbitrary
morphism always belongs to M) is of the form A^Inj for some space A, where A^Inj consists of all morphisms m:X --> Y such that the map Hom(m, A): Hom(Y, A) --> Hom(X, A) is surjective. We study this
phenomenon. We show that, under mild assumptions, the reflective hull of such a space A is the smallest M-reflective subcategory of B; furthermore, the opposite category of this reflective hull is
equivalent to a reflective subcategory of the Eilenberg-Moore category Set^T, where T is the monad induced by the right adjointHom(-, A): T^op et.We also find conditions on a category under which
thepushout-stable -morphisms are of the form^Inj for some category
Mathematics Subject Classification. 18A20 18A40 18B30 18G05 54B30 54C10 54C25.
Keywords. embeddings, injectivity, pushout-stability,(epi)reflective subcategories of ${\mathbb T}$, closure operator, Eilenberg-Moore categories.
Comments. This article will be revised and submitted for publication elsewhere.
Copyright © 2002 Charles University and Topology Atlas. Published April 2002.
|
|
Pricing Handmade Items - Formula
Pricing Handmade Items – Formula
Subscribe or ‘follow’ Epheriell Designs during July 09 and go in the draw to win a $40 gift voucher to use in my shop – Epheriell. Details here.
During this slow sales period for me, I’ve been seriously thinking about how to set this hobby of mine up properly in the event that I ever get serious enough to do this as a business.
That means I need to work out things properly now, and save myself the hassle later on. On Monday I did a post about improving product photography, and thank you so much to all those who gave me
their feedback! It was really helpful.
Today, I’m working on my pricing. The way I have been pricing up to now has been quite esoteric… and I wanted to do it in a more scientific, repeatable way. This means future items will be easier to
price, and that my current items will sell at a level to make a profit, and that is also fair to my customers.
I have found many formulas out there, such as the one I mentioned in the Stitches & Craft Show post. A simple version of this is..
Cost Price (labour + price of materials) x 2 = Wholesale
Wholesale x 2 = Retail
So, what does this mean to me, and you? Well, say you have a labour cost of $20 per hour (think about how much you could live on if this was your full-time business!). And your materials cost for an
item was $5. Let's say I made a pair of earrings that took 1/2 an hour.
20 x .5 = $10 labour + $5 materials = $15.
$15 x 2 = $30 = Wholesale Price
Now, if you want to make a profit – which is the amount you have to grow and re-invest in your business, you should double this amount for Retail, which equals $60.
Sound like a lot, hey? But, in handmade business circles, this is standard practice. It is difficult for those of us who do this as a hobby to look at it like this sometimes – and when you’re
competing with people who sell at a price that doesn’t even begin to come near their true costs, you might feel like you’re being greedy.
I am going to attempt to apply this formula to my work, and see what numbers come out. I used it tonight to price my Little Square Hoops - and discovered that the price I’d already set was almost
exactly on the mark! Amazing. For me, these earrings use a small amount of sterling wire, and are pretty quick to make once I know the design. It will be interesting to apply this to some of my more
complicated designs and see what comes up!
Oh, and did I forget to mention that – because I’m an Aussie – I then have to consider exchange rates? And postage rates since I sell online…? It’s a complicated business!
So, how do you calculate your prices? And do you think the above is a fair formula, and one that will work for you?
What 'cha thinking?
|
|
Proof of square root being irrational
September 14th 2007, 10:58 AM
Proof of square root being irrational
Let N be a Natural Number such that the square root of N is not an integer. Prove that then the square root of N is irrational.
*The problem gives you a hint stating that:
Assume the square root of N is rational. Then the Set X := {x is in the Natural Numbers : x multiplied by square root N is in the Natural Numbers} is nonempty. Show that if x is in X
and x' := (x multiplied by square root of N) minus (x[square root of N]) then x' is in X and x' < x. Thus square root of 2, 3, 5,... are irrational.
September 14th 2007, 12:34 PM
Someone posted this:
If you know the rational root theorem we have:
let y = sqrt(n); then y satisfies y^2 - n = 0,
and if this has rational roots they are among the factors (positive or
negative) of n.
So if sqrt(n) is rational it is an integer, and [sqrt(n)]^2 = n, that is, n is a
perfect square.
So we have that if sqrt(n) is rational then sqrt(n) is an integer; hence
if sqrt(n) is not an integer it is irrational.
September 14th 2007, 01:07 PM
I'm pretty sure we have not covered the rational root theorem in class, and as much sense as that theorem makes, I don't think I can use that to solve the problem on a test if we have not covered
it yet.
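Not part of the original thread, but for completeness, here is one way the hinted descent argument can be written out (my own sketch, so treat it accordingly). Suppose $\sqrt{N}$ were rational, say $\sqrt{N} = p/q$ with $p, q$ natural numbers. Then $q\sqrt{N} = p$ is a natural number, so the set $X = \{x \in \mathbb{N} : x\sqrt{N} \in \mathbb{N}\}$ is nonempty and has a least element $x$. Put $x' = x\sqrt{N} - x[\sqrt{N}]$, where $[\,\cdot\,]$ denotes the integer part. Since $\sqrt{N}$ is not an integer, $0 < \sqrt{N} - [\sqrt{N}] < 1$, hence $0 < x' < x$; and $x'$ is a difference of two natural numbers ($x\sqrt{N}$ and $x[\sqrt{N}]$), so it is itself a natural number. Finally $x'\sqrt{N} = xN - (x\sqrt{N})[\sqrt{N}]$ is again a difference of integers, and it is positive, so $x' \in X$. This contradicts the minimality of $x$, and therefore $\sqrt{N}$ must be irrational.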
|
|
Short Introduction to Octave
Octave is a high-level computer language that makes computing easy if your priority is not speed. It is, to a large extent, the open-source version of Matlab and one which I find very useful
especially from a pedagogical point of view. For a programmer writing scientific code, Octave provides
• an easy way to test programs before writing them in more advanced and complicated computer languages.
• a fast way to develop small models without fighting with bugs that arise from such issues as memory allocation.
Octave comes with most distributions of Linux. To start it, all you have to do is go to your command line and type "octave". Commands in Octave are usually intuitive and straightforward.
Example :
  octave:1> x=1/sqrt(3);
  octave:2> y=atan(x)*180/pi
  y = 30.000
The above example is a simple trigonometric operation. In the first line, a numeric value is assigned to the variable x. In the second line, the arctangent of x is calculated, converted to
degrees (Octave works in radians), and assigned to y. The semicolon (;) at the end of the first line suppresses the output, while leaving it out causes the output to be displayed on
screen, as in the second line.
Octave can work with arrays as easily as it does with single numbers.
Example :
  octave:1> x=0:0.01:pi;
  octave:2> y=sin(x);
  octave:3> plot(x,y)
Here the first line creates an equally-spaced array that goes from 0 to pi in increments of 0.01. The second line evaluates the sine of every number in the array x and assigns the result to a new
array variable y. Finally, the third line draws a plot of y with respect to x. For plotting, Octave calls another open-source program, gnuplot.
In fact, you can do algebra even with matrices :
Example :
  octave:1> M=rand(3,3);
  octave:2> M=M+M';
  octave:3> M*=2;
  octave:4> M=M.^3;
  octave:5> [a,b]=eig(M);
In this example, the rand function creates a 3-by-3 matrix of uniformly distributed random numbers. In the second line, the prime (') operator takes the transpose of the matrix, which is added to
the matrix itself to symmetrize it; the result is re-assigned to the same variable. In the third line, the entire matrix is multiplied by 2, whereas in the fourth line each element of the
matrix is raised to the third power. The dot before the operator denotes an element-wise (point-by-point) operation, in which the operator is applied to every single element of the matrix rather
than to the matrix as a whole; element-wise multiplication (.*) and division (./) work the same way. In the final line, the eigenvalues and eigenvectors of the matrix are calculated.
Apart from the obvious algebra, arithmetic and trigonometry operations for numbers, vectors and matrices, Octave also offers the following useful features :
1. Efficient utility functions : diff, ones, zeros, eye, any, eig, linspace, logspace, real, imag, find and many, many more ...
2. Polynomial manipulation : polyval, polyder, polyfit
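For instance, a quick sketch of these polynomial functions in use (the data below is made up for illustration):
  octave:1> x=0:0.5:5;
  octave:2> y=2*x.^2-3*x+1+0.1*randn(size(x));
  octave:3> p=polyfit(x,y,2);       % fit a degree-2 polynomial; coefficients in descending powers
  octave:4> yfit=polyval(p,x);      % evaluate the fitted polynomial at the points in x
  octave:5> dp=polyder(p);          % coefficients of the derivative of the fitted polynomial
Here polyfit recovers (approximately) the quadratic coefficients used to generate the noisy data, polyval evaluates the fitted polynomial, and polyder differentiates it.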
3. Plotting : plot, mesh, contour, bar, hist
4. Loops and control expressions : while, for, if, if-elseif-else
NOTE that loops are much slower in Octave than equivalent vectorized operations. Avoid using loops as much as possible and instead make use of built-in functions, which are specially optimized. You
can measure the inefficiency of loops yourself with the tic/toc timing functions, as in the example below.
Example :
  octave:1> N=1000000;
  octave:2> a=randn(N,1);
  octave:3> b=zeros(N,1);
  octave:4> tic;for n=2:N
  > b(n)=a(n)-a(n-1);
  > endfor;toc
  ans = 33.922
  octave:5> tic;c=diff(a);toc
  ans = 0.38571
In the above example, carrying out the same operation (calculating the difference between successive elements of an array) takes about 34 seconds using a loop while with the special diff
function, it only takes a fraction of a second. Of course, the numbers are system-dependent and the difference is negligible for small arrays.
5. Logical operations : &, |, &&, ||, ==, ~=, true, false
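As a small illustration (the vector below is arbitrary), comparisons return arrays of 0/1 values that can be combined with these operators and used directly as indices:
  octave:1> v=[-2 -1 0 1 2];
  octave:2> pos=(v>0);              % logical array: 1 where the element is positive
  octave:3> any(pos)
  ans = 1
  octave:4> v(pos & v~=2)           % select positive elements that are not equal to 2
  ans = 1
The short-circuit forms && and || operate on scalar conditions, for example inside if statements.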
6. User-defined functions :
Example :
  octave:1> function out=iseven(in)
  > if ( rem(in,2)==0 )
  > out=true;
  > else
  > out=false;
  > endif
  > endfunction
  octave:2> a=iseven(3)
  a = 0
  octave:3> a=iseven(4)
  a = 1
If you find yourself needing the same function over and over again, you can also place it in a file named after the function with a ".m" extension (here, "iseven.m") and call it with the same
syntax as in the above example. Functions may return any number of values, which can be of mixed type.
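As a small illustration of the last point (the function name and body here are just an example):
  octave:1> function [m,s]=stats(x)
  > m=mean(x);
  > s=std(x);
  > endfunction
  octave:2> [m,s]=stats([1 2 3 4])
  m = 2.5000
  s = 1.2910
Each variable listed on the left-hand side of the function definition is returned and can be captured by the caller, as in the last line.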
Useful Octave pages
You can read more about Octave on the following pages.
|
{"url":"http://www.physics.metu.edu.tr/~hande/octave.html","timestamp":"2014-04-21T14:39:54Z","content_type":null,"content_length":"7532","record_id":"<urn:uuid:d7fe9233-c33f-4288-be62-18159522bf26>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Snake movement (smooth, not a grid) - Java-Gaming.org
I'm trying to make some sort of a snake game, but I'm having trouble with the movement. I refuse to go the "make the snake move along a grid, placing each part of the snake in the cell the previous
part occupied on the last update" route, as I think it looks stupid, so I started making a smooth-moving snake that still turns in 90-degree steps.
This is already working OK-ish after the very first try, with each part moving independently of the others. Every time the player turns the head, it saves the coordinates and the new direction where
the turn happened; each part checks these coordinates of the part in front of it and makes the same change once it reaches them, again saving the coordinates for the next part to check, and so on.
Once a part has made the turn, I fix its position relative to the part before it so it lines up perfectly, since there are some rounding errors.
But there are still some problems, and you can just feel that the snake isn't connected: it's really just several blocks moving next to each other (and when it bugs out, the snake snaps in two).
I have two different options here: either I keep going like this and try to perfect the snake movement so that it a) looks good and b) has no chance of crapping out, or I take it one step further and
make a freeform-moving snake with the head rotating in small degree increments and the body flowing freely (yet the way you'd expect it to) behind it.
The latter is what I'd like to do, but as I'm not too good at maths I'm having trouble figuring out how to calculate the positions of the body parts.
I came across some example code
which seems interesting to try, but I couldn't figure out how to fully translate it into Javanese to see how it works. Again, sucking at maths doesn't help, as I can't figure out the corresponding
Java tools to use (namely for the Vector2.Transform and Matrix.CreateRotationZ calls in that code; can I use an affine transform here somehow?). I'm doing this on libgdx, in case there are tools in
its built-in vector classes and whatnot that I should use.
Any input and help would be appreciated to get me going.
|
{"url":"http://www.java-gaming.org/index.php?topic=26736.msg235868","timestamp":"2014-04-19T22:13:17Z","content_type":null,"content_length":"112973","record_id":"<urn:uuid:21fe0a68-cc81-4602-b4e3-c7dd9f542528>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stochastic Model
A Stochastic Model
of the Long-Range Financial Status
of the OASDI Program September 2004
The real average covered wage is defined as the ratio of the average nominal OASDI covered wage to the adjusted inflation rate. Because of the expansion of covered employment, the annual growth rate
in the real average covered wage differs significantly from the annual growth rate in a real average economy-wide wage series. In the future, however, the annual growth rates in the two measures are
expected to be approximately identical since projected coverage changes are insignificant. Hence, the historical variation of the annual percent change in the real average economy-wide wage is used
to model the future variation of the annual percent change in the real average covered wage.
The real average economy-wide wage is the ratio of the average nominal wage to the adjusted CPI. The nominal wage is the ratio of wage disbursement as published by the Bureau of Economic Analysis'
(BEA) National Income and Product Accounts (NIPA) to civilian employment. Civilian employment is the sum of total wage employment, as published by the BLS from its Household Survey, and total U.S.
Armed Forces from the Census Bureau. The BLS periodically introduces improvements to its employment data but does not revise earlier data. However, the BLS has developed adjustment factors to improve
the comparability of employment data with earlier years. OCACT has used these factors to adjust the wage employment data.
The formula for calculating the annual percent change in the real average wage, given a nominal wage series, is:
\[
  W_t = \frac{NW_t / NW_{t-1}}{CPI_t / CPI_{t-1}} - 1 .
\]
$W_t$ is the annual percent change in the real average wage, expressed in decimals, in year $t$; $NW_t$ is the level of the nominal average wage in year $t$; and $CPI_t$ is the level of the CPI in year $t$.
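For concreteness, a small numerical illustration of this formula (the sketch below uses Octave, and the wage and CPI values are made up rather than taken from the historical series):
  octave:1> NW=[36952 38100];       % hypothetical nominal average wage in years t-1 and t
  octave:2> CPI=[179.9 184.0];      % hypothetical adjusted CPI in the same years
  octave:3> W=(NW(2)/NW(1))/(CPI(2)/CPI(1))-1;
  octave:4> printf("%.4f\n",W)
  0.0081
That is, roughly 0.8 percent real wage growth in this made-up example.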
The model estimates the annual percent changes in the real economy-wide wage as a function of the current unemployment rate and the unemployment rate of the previous year, expressed as log-odds
ratios, over the period from 1968 to 2002. The value for 1974 was an outlier and therefore was excluded in the development of the equation. The R-squared value was 0.53. The actual and fitted values
are shown in figure II.6.
The estimated coefficients and standard error of the regression are then used to simulate the percent change in the real average covered wage. The modified equation is:
\[
  W_t = W_t^{TR} - 0.06\,u_t + 0.04\,u_{t-1} + \varepsilon_t . \tag{9}
\]
In this equation, $W_t$ represents the percent change in the real average covered wage in year $t$; $W_t^{TR}$ represents the percent change in the real average covered wage from the TR04II in year $t$; $u_t$
represents the deviation of the (log-odds transformed) unemployment rate from the TR04II unemployment rate in year $t$; and $\varepsilon_t$ represents the random error in year $t$.
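A minimal sketch of how one simulated path could be drawn from equation (9) (again in Octave; the horizon, the trend path standing in for the TR04II values, and the regression standard error below are placeholders, not the figures actually used by OCACT):
  octave:1> T=75; sigma=0.01;            % projection horizon and regression standard error (placeholders)
  octave:2> W_tr=0.011*ones(T,1);        % stand-in for the TR04II real covered wage growth path
  octave:3> u=zeros(T,1);                % deviations of the log-odds unemployment rate from the TR04II path
  octave:4> err=sigma*randn(T,1);        % random errors
  octave:5> W=W_tr-0.06*u+0.04*[0;u(1:end-1)]+err;
In a full simulation, u would itself come from the simulated unemployment-rate equation rather than being set to zero.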
[Figure II.6: Real average wage, actual and fitted annual percent changes]
|
{"url":"http://www.ssa.gov/OACT/NOTES/as117/LR_Stochastic_IIE.html","timestamp":"2014-04-21T12:28:31Z","content_type":null,"content_length":"8629","record_id":"<urn:uuid:0df57bae-1358-4475-bcbc-bf780bbeae9b>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Spacetime and Geometry:
The internet has a wide array of resources to learn general relativity and keep up with recent research. This page only attempts to hit some of the highlights, not to be comprehensive. See Relativity
on the World Wide Web or some of the other web guides below for further links.
This page is still under construction, sorry for the mess. I will try to update or remove links as they evolve, so please let me know if a link dies.
□ Relativity on the World Wide Web, UC Riverside
A carefully reviewed and annotated list of links, originally prepared by Chris Hillman and maintained by John Baez.
□ Relativity Bookmarks, Syracuse University
Includes a long list of research groups in GR.
□ Relativity and Black Hole Links, University of Colorado
|
{"url":"http://www.preposterousuniverse.com/spacetimeandgeometry/resources.html","timestamp":"2014-04-18T10:35:42Z","content_type":null,"content_length":"10592","record_id":"<urn:uuid:58225cc6-00c1-4d86-9a33-5c76cb204322>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
|