Water Levels--Maps Available
2003 Maps
The 2003 maps are presented as Adobe Acrobat PDF files. You will need the Acrobat PDF Reader, available free from Adobe, to read the maps.
2002 Maps
The 2002 maps are presented as Adobe Acrobat PDF files. You will need the Acrobat PDF Reader, available free from Adobe, to read the maps.
2001 Maps
The 2001 maps are presented as Adobe Acrobat PDF files. You will need the Acrobat PDF Reader, available free from Adobe, to read the maps.
2000 Maps
1999 Maps
1998 Maps
Kansas Geological Survey, Water Level CD-ROM
Send comments and/or suggestions to webadmin@kgs.ku.edu
Updated Feb. 18, 2003
URL = http://www.kgs.ku.edu/Magellan/WaterLevels/CD/Maps/index.htm
Avondale Estates Algebra 1 Tutor
...Let me know if I can help! To prepare for the PSAT, students need to gain confidence by becoming familiar with the test content and structure, as well as brushing up on test-taking strategies,
such as time management and knowing when to guess or pass. I am experienced in teaching test preparation skills.
26 Subjects: including algebra 1, reading, English, ASVAB
...I am Aris and I am originally from Washington D.C. I moved to Atlanta a couple of years back to be with my wife who is teaching at Spelman. I currently teach mathematics at Georgia Gwinnett
College in Lawrenceville.
20 Subjects: including algebra 1, calculus, statistics, geometry
Hello, my name is Britton. I am a certified Math teacher with a clear renewable certificate for middle grades math. I have spent the past eight years in and around education and social work.
2 Subjects: including algebra 1, prealgebra
...My hours of availability are Monday - Sunday from 8am to 9pm. My Bachelor's Degree is in Applied Math and I took one course in Differential Equations and received an A. I also took several other courses that included Differential Equations in the solution process. When I graduated from college I ...
20 Subjects: including algebra 1, calculus, GRE, GED
I am a Georgia certified educator with 12+ years teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT
and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
12 Subjects: including algebra 1, statistics, GRE, algebra 2
Organisers: Dr A Corti (Cambridge), Dr M Gross (Warwick), Professor M Reid (Warwick)
Programme theme
Main themes:
Classification of surfaces, 3-folds and higher dimensional varieties. Calabi-Yau 3-folds, mirror symmetry. Moduli. Relations with other areas.
i. Classification: problems on existence and moduli of algebraic surfaces and 3-folds, including methods of projective and birational geometry, commutative algebra, toric geometry, etc. The minimal
model program, flips and birational contractions. The proof of classification of 3-folds and log 3-folds. Biregular and birational geometry of Fano 3-folds.
ii. CYs and mirror symmetry: CY 3-folds in classification, problems of existence, moduli and period maps. Kaehler cone, birational changes of models. Resolution of quotient singularities, McKay
correspondence and stringy geometry. Special Lagrangian geometry and mirror symmetry. Relation with "physics".
iii. Moduli: The heading covers two quite different topics: Moduli of varieties, e.g., Abelian varieties or K3s, and moduli of vector bundles, especially over curves, surfaces and special 3-folds.
iv. Relations with other areas: there are very many of these, not all predictable, with a lean towards the interests of the British math community. Algebra, number theory, physics, gauge theory, symplectic geometry and other aspects of differential geometry, hyperKähler and related "special" geometries.
Vectors and Direction
A study of motion will involve the introduction of a variety of quantities that are used to describe the physical world. Examples of such quantities include distance, displacement, speed, velocity,
acceleration, force, mass, momentum, energy, work, power, etc. All these quantities can be divided into two categories - vectors and scalars. A vector quantity is a quantity that is fully described
by both magnitude and direction. On the other hand, a scalar quantity is a quantity that is fully described by its magnitude. The emphasis of this unit is to understand some fundamentals about
vectors and to apply the fundamentals in order to understand motion and forces that occur in two dimensions.
Examples of vector quantities that have been previously discussed include displacement, velocity, acceleration, and force. Each of these quantities is unique in that a full description of the quantity demands that both a magnitude and a direction are listed. For example, suppose your teacher tells you "A bag of gold is located outside the classroom. To find it, displace yourself 20 meters." This statement may provide you with enough information to pique your interest; yet, there is not enough information included in the statement to find the bag of gold. The displacement
required to find the bag of gold has not been fully described. On the other hand, suppose your teacher tells you "A bag of gold is located outside the classroom. To find it, displace yourself from
the center of the classroom door 20 meters in a direction 30 degrees to the west of north." This statement now provides a complete description of the displacement vector - it lists both magnitude (20
meters) and direction (30 degrees to the west of north) relative to a reference or starting position (the center of the classroom door). Vector quantities are not fully described unless both
magnitude and direction are listed.
Vector quantities are often represented by scaled vector diagrams. Vector diagrams depict a vector by use of an arrow drawn to scale in a specific direction. Vector diagrams were introduced and used
in earlier units to depict the forces acting upon an object. Such diagrams are commonly called free-body diagrams. An example of a scaled vector diagram is shown in the diagram at the right. The
vector diagram depicts a displacement vector. Observe that there are several characteristics of this diagram that make it an appropriately drawn vector diagram.
• a scale is clearly listed
• a vector arrow (with arrowhead) is drawn in a specified direction. The vector arrow has a head and a tail.
• the magnitude and direction of the vector is clearly labeled. In this case, the diagram shows the magnitude is 20 m and the direction is (30 degrees West of North).
Conventions for Describing Directions of Vectors
Vectors can be directed due East, due West, due South, or due North. But some vectors are directed northeast (at a 45 degree angle); and some vectors are even directed northeast, yet more north than east. Thus, there is a clear need for some form of a convention for identifying the direction of a
vector that is not due East, due West, due South, or due North. There are a variety of conventions for describing the direction of any vector. The two conventions that will be discussed and used in
this unit are described below:
1. The direction of a vector is often expressed as an angle of rotation of the vector about its "tail" from east, west, north, or south. For example, a vector can be said to have a direction of 40 degrees North of West (meaning a vector pointing West has been rotated 40 degrees towards the northerly direction) or a direction of 65 degrees East of South (meaning a vector pointing South has been rotated 65 degrees towards the easterly direction).
2. The direction of a vector is often expressed as a counterclockwise angle of rotation of the vector about its "tail" from due East. Using this convention, a vector with a direction of 30 degrees
is a vector that has been rotated 30 degrees in a counterclockwise direction relative to due east. A vector with a direction of 160 degrees is a vector that has been rotated 160 degrees in a
counterclockwise direction relative to due east. A vector with a direction of 270 degrees is a vector that has been rotated 270 degrees in a counterclockwise direction relative to due east. This
is one of the most common conventions for the direction of a vector and will be utilized throughout this unit.
Two illustrations of the second convention (discussed above) for identifying the direction of a vector are shown below.
Observe in the first example that the vector is said to have a direction of 40 degrees. You can think of this direction as follows: suppose a vector pointing East had its tail pinned down and then
the vector was rotated an angle of 40 degrees in the counterclockwise direction. Observe in the second example that the vector is said to have a direction of 240 degrees. This means that the tail of
the vector was pinned down and the vector was rotated an angle of 240 degrees in the counterclockwise direction beginning from due east. A rotation of 240 degrees is equivalent to rotating the vector
through two quadrants (180 degrees) and then an additional 60 degrees into the third quadrant.
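To make both conventions concrete, here is a short Python sketch (the helper names and the compass lookup table are illustrative, not part of the lesson's notation). It converts a compass-style description such as "40 degrees North of West" into the counterclockwise-from-East convention, and resolves a vector into east and north components:

import math

COMPASS = {"East": 0.0, "North": 90.0, "West": 180.0, "South": 270.0}

def ccw_from_east(toward, angle_deg, base):
    # '<angle> degrees <toward> of <base>', e.g. 40 degrees North of West
    turn = ((COMPASS[toward] - COMPASS[base] + 180.0) % 360.0) - 180.0
    sign = 1.0 if turn > 0 else -1.0   # rotate from base toward the named direction
    return (COMPASS[base] + sign * angle_deg) % 360.0

def components(magnitude, ccw_deg):
    # (east, north) components of a vector in the CCW-from-East convention
    rad = math.radians(ccw_deg)
    return magnitude * math.cos(rad), magnitude * math.sin(rad)

print(ccw_from_east("West", 30, "North"))   # 30 degrees West of North -> 120.0
print(ccw_from_east("North", 40, "West"))   # 40 degrees North of West -> 140.0
print(components(20, 120))                  # the 20 m displacement example above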
Representing the Magnitude of a Vector
The magnitude of a vector in a scaled vector diagram is depicted by the length of the arrow. The arrow is drawn a precise length in accordance with a chosen scale. For example, the diagram at the
right shows a vector with a magnitude of 20 miles. Since the scale used for constructing the diagram is 1 cm = 5 miles, the vector arrow is drawn with a length of 4 cm. That is, 4 cm x (5 miles/1 cm)
= 20 miles.
Using the same scale (1 cm = 5 miles), a displacement vector that is 15 miles will be represented by a vector arrow that is 3 cm in length. Similarly, a 25-mile displacement vector is represented by
a 5-cm long vector arrow. And finally, an 18-mile displacement vector is represented by a 3.6-cm long arrow. See the examples shown below.
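Since the scale arithmetic is a single division, a two-line sketch (illustrative only) reproduces the numbers above:

def arrow_length_cm(magnitude_miles, miles_per_cm=5.0):
    # with the chosen scale 1 cm = 5 miles
    return magnitude_miles / miles_per_cm

for miles in (20, 15, 25, 18):
    print(miles, "miles ->", arrow_length_cm(miles), "cm")   # 4.0, 3.0, 5.0, 3.6 cm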
In conclusion, vectors can be represented by use of a scaled vector diagram. On such a diagram, a vector arrow is drawn to represent the vector. The arrow has an obvious tail and arrowhead. The
magnitude of a vector is represented by the length of the arrow. A scale is indicated (such as, 1 cm = 5 miles) and the arrow is drawn the proper length according to the chosen scale. The arrow
points in the precise direction. Directions are described by the use of some convention. The most common convention is that the direction of a vector is the counterclockwise angle of rotation which
that vector makes with respect to due East.
In the remainder of this lesson, in the entire unit, and in future units, scaled vector diagrams and the above convention for the direction of a vector will be frequently used to describe motion and
solve problems concerning motion. For this reason, it is critical that you have a comfortable understanding of the means of representing and describing vector quantities. Some practice problems are
available online at the following web page:
Visit the Vector Direction Practice Page
Inductance Measuring Circuit
Inductor values from 1 henry to 1 microhenry can easily be measured with a frequency counter by using the oscillator circuit shown below. The output voltage at the emitters of the push-pull
transistor pair alternately charges and discharges the inductor under test (Ltest) through resistor R. When the voltage across R rises to ½ the output voltage, the output voltage reverses polarity, thus continuously repeating the cycle. The rate of charge, t, determines the frequency of oscillation, which can be obtained from

t = (Ltotal / R) ln 3 and f = 1 / (2t)

Solving for Ltotal, and noting that t is ½ the period of oscillation, we obtain

Ltotal = R / (2 f ln 3)

where f is the frequency displayed on the frequency counter.
In order to obtain high accuracy at low values of inductance, a series inductor, Lseries, should be used to calibrate against the effects of non-ideal circuit performance. An inductor of 100
microhenries is recommended, although the exact inductance value does not matter. With Ltest replaced by a short, calculate Lseries from the equation

Lseries = R / (2 f ln 3)

where f is the frequency displayed on the frequency counter. The unknown inductance then follows as Ltest = Ltotal - Lseries.
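Putting the two relations together, the measurement reduces to two frequency readings and a subtraction. A sketch of the arithmetic in Python (the counter readings below are hypothetical; the ln 3 factor follows from the standard RL analysis of symmetric switching at ±½ the output voltage):

import math

LN3 = math.log(3)   # charge factor for thresholds at +/- half the output voltage

def total_inductance(f_hz, r_ohms=100.0):
    # Ltotal in henries from the displayed frequency: Ltotal = R / (2 f ln 3)
    return r_ohms / (2.0 * f_hz * LN3)

f_cal = 455e3                                 # hypothetical reading, Ltest shorted
l_series = total_inductance(f_cal)            # about 100 microhenries

f_test = 41e3                                 # hypothetical reading with the unknown
l_test = total_inductance(f_test) - l_series  # Ltest = Ltotal - Lseries

print(round(l_series * 1e6, 1), "uH series,", round(l_test * 1e6, 1), "uH under test")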
Measurements of small values of inductance can therefore be made to within 1 microhenry using this equation. Other advantages of using a series inductance as a calibration inductance are that the
circuit is guaranteed to oscillate for any value of test inductance (Ltest) and that the inaccuracies resulting from finite op-amp slew rate are cancelled out, even if the positive going slew rate
differs from the negative going slew rate. The accuracy of measurement is in the order of 1% since 1% resistors are used to set the hysteresis voltage, however the measurement accuracy can be
extended to 3 digits by hand selecting the precision resistors to within 0.1% and, if necessary, adding a microcontroller to interpolate between calibrated inductor values using a lookup table.
The 2N3906-2N3904 push-pull transistor pair is required because the CA3100T op-amp does not have enough current output to drive the low impedance feedback circuit consisting of Ltest and R. The 10K
resistor buffers the circuit from the input capacitance of the frequency counter. The op-amp is powered by +/- 7.5 volts (instead of +/- 15 volts) to prevent excessive power dissipation in the
circuit. The 220 ohm resistor is necessary to discharge the inductor during reversals in the polarity of the output voltage, otherwise a spike appears in the output voltage causing inaccuracies in
the measured value of inductance. The circuit can be used with R=100 ohms, 1%, to measure inductance between 0.1 henry and 1 microhenry, or with R=10K for a range of 10 henries to 100 microhenries.
In winding inductors for RF designs, this measuring circuit allows the designer to know exactly when the desired inductance has been achieved. As an example, the number of turns for a single layer
air core inductor is given by Wheeler's approximation

l = a^2 N^2 / (9a + 10b)

where:
l = inductance in microhenries,
a = coil radius in inches,
b = coil length in inches,
N = number of turns.
For a desired inductance, l, the designer should calculate the number of turns, N, then wind an inductor with a few more than N turns, and gradually back off the number of turns, by measuring the
inductance with the test circuit below until the exact inductance is achieved.
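Assuming the formula above (the variables and units listed match Wheeler's single-layer approximation), the turns count is a one-line calculation; the coil dimensions below are invented for illustration:

import math

def wheeler_turns(l_uH, a_in, b_in):
    # N = sqrt(l (9a + 10b)) / a, with l in microhenries and a, b in inches
    return math.sqrt(l_uH * (9.0 * a_in + 10.0 * b_in)) / a_in

print(round(wheeler_turns(10.0, 0.25, 1.0)))  # ~44 turns; wind a few extra and trim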
\[y''+2y'+y=xe^{-x}\]
\[r^2+2r+1=0\]
\[r_1=r_2=1\]
\[y_c=c_1e^{-x}+c_2e^{-x}\]
for y_p would I have (before I derive it and solve for A) \[y_p=Ae^{-x}\] or \[y_p=Axe^{-x}\] ?
- your y_c is wrong
- it's -1
- mah bad...
- it should be \[y_c = c_1 e^{-x} + c_2 x e^{-x}\] see your mistake?
- yeah r=-1
- i was actually referring to the x by c2
- if you have repeated roots you attach least power of x to the next arbitrary constant, remember?
- oh I'm sorry I see it now
- so do you know what y_p should be now?
- what does that have to do with y_p? isn't y_p based on what the RHS of the original equation is?
- are you using undetermined coefficients or variation of parameters?
- based on your question i presumed you were doing variation....which depends on the complementary solution....
- I don't quite know what either of those mean, but I trying to do it based on \[y_p=Ae^{bx}\]
- please don't shoot me :S
- I'm willing to learn =)
- can you describe to me the method you're familiar with? does it involve deriving the yp?
- according to my book I'm doing it based on THE METHOD OF UNDETERMINED COEFFICIENTS
- oh that
- yeah lol I guess THE METHOD OF VARIATION OF PARAMETERS is the next section....I haven't started that yet :P
- \[y_p = Ae^{bx}\] only works if you don't have repeated roots. for repeated roots, you use \[y_p = Ae^{bx} + Bxe^{bx}\]
- because like i said before, if you have repeated roots, you attach the least power of x to the succeeding constant. Since you have xe^-x on the right side, there must be an e^-x before it
- does that make sense?
- "least power of x to the succeeding constant" meaning \[y_c = c_1 e^{-x} + c_2 x e^{-x}\] referring to the x after the c_2
- yes. that. but in this case, we're doing y_p
- and then I just do the derivative twice to solve for A and B
- sounds right
- thank ya!
- What did I do wrong? I get 0=x \[y_p=Ae^{-x}+Bxe^{-x}\] \[y_p'=-Ae^{-x}-Bxe^{-x}(x-1)\] \[y_p''=Ae^{-x}+Be^{-x}(x-2)\]
- y ' =-Ae^(-x) -(x-1)Be^(-x) you have an extra x in there..
- oh I see
- ok I'll try it again
- I still get a zero for x....I'll show my work, give me a second to type it :)
- Hold up.
- I think you need a C*x^2*e-x term as part of your solution...
- Why a \[Cx^2e^{-x}\] ?
- I think I'm gonna start a new thread. This one is getting too long
- the problem is that your forcing function is the same as one of the solutions to the homogeneous equation.
- so without a term that doesn't end up being zero, you're not going to be able to ever get your DE to be 'forced'.
- There's a quantitative way to justify it, but I can't recall how it goes. The point is xe^-x is the forcing term and one of the solutions to the homogeneous. So you need another term in your yp if you're ever going to get an actual solution that satisfies the original non-homogeneous equation.
- Qualitatively speaking, you need a faster transient, because the forcing term is resonant with the system, so the whole thing is getting driven to x*e^-x harder than any of the terms you're using now can drive it.
- Hope that helps; probably not, but I tried:)
- Yeah It kinda makes sense.
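A quick symbolic check (a sketch, assuming sympy is available) ties off the loose end: because xe^{-x} is resonant with the repeated root r = -1, the particular solution must carry an x^3 term, and the extra x^2 factor alone is not enough.

import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")
ode = sp.Eq(y(x).diff(x, 2) + 2*y(x).diff(x) + y(x), x*sp.exp(-x))
print(sp.dsolve(ode))
# -> Eq(y(x), (C1 + C2*x + x**3/6)*exp(-x))

So y_p = (x^3/6)e^{-x}. Substituting C x^2 e^{-x} by itself yields 2C e^{-x}, which can never equal x e^{-x}, which is why the full textbook guess for a doubly repeated root is x^2(A + Bx)e^{-x}.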
Dihedral Group
Prove that the dihedral group of order 6 does not have a subgroup of order 4.
Now I was really considering using the Cayley table and just listing all possible subgroups, showing that one of order four does not exist, but there has to be an easier way.
So I was thinking maybe a proof by contradiction.
Assume there is a subgroup of order four in the dihedral group of order 6,
then find an element x that is not the identity (an identity element has to exist, because it is a subgroup).
"Think of an element of your fictional subgroup of order four that is not the identity. Say this is x. If x is a rotation, what other elements must also be in the group?"
Do you know Lagrange's Theorem?
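If Lagrange's Theorem is available, it settles the question in one line (a sketch of where that hint points):

\[ H \le G,\ |G| = 6 \ \Rightarrow\ |H| \mid 6, \qquad \text{but } 4 \nmid 6, \]

so no subgroup of order 4 can exist, and no Cayley table is needed.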
On the Design of Tilting-Pad Thrust Bearings
Publication: Research › Ph.D. thesis – Annual report year: 2007
title = "On the Design of Tilting-Pad Thrust Bearings",
author = "Niels Heinrichson and Ilmar Santos",
year = "2007",
isbn = "87-90416-22-8",
TY - BOOK
T1 - On the Design of Tilting-Pad Thrust Bearings
A1 - Heinrichson,Niels
AU - Heinrichson,Niels
A2 - Santos,Ilmar
ED - Santos,Ilmar
PY - 2007/6
Y1 - 2007/6
N2 - Pockets are often machined in the surfaces of tilting-pad thrust bearings to allow for hydrostatic jacking in the start-up phase. Pockets and other recesses in the surfaces of bearing pads
influence the pressure distribution and thereby the position of the pivot resulting in the most advantageous pad convergence ratio. In this thesis, a theoretical approach is applied in the attempt to
quantify the influence of recesses in the pad surfaces. The recesses may be relatively deep and enclosed as is the case with pockets designed for hydrostatic jacking. Such recesses are characterized
by low friction and a small pressure build-up. As in parallel-step bearings the recesses may also have a depth of the same order of magnitude as the oil film thickness. Such recesses are
characterized by a strong pressure build-up caused by the reduction of the flow area at the end of the recess. Numerical models based on the Reynolds equation are used. They include the effects of
variations of viscosity with temperature and the deformation of the bearing pads due to pressure and thermal gradients. The models are validated using measurements. Tilting-pad bearings of standard
design are studied and the influences of the bearing length-to-width ratio, pad deformation and injection pocket size are quantified. Suggestions for the design of energy efficient bearings are
given. The results show that correctly dimensioned, bearings with oil injection pockets have smaller friction coefficients than bearings with plain pads. Placing the pockets in the high-pressure
zones close to the trailing edges of the bearing pads causes a substantial reduction in the friction coefficient. The design of the recess sizes and positions leading to the largest improvements is
studied and design suggestions for various pad geometries are given. Parallel-step bearings theoretically have smaller friction coefficients than tilting-pad bearings. A design of a tilting-pad
bearing is suggested which combines the benefits of the two types of bearings in a tilting-pad bearing with inlet pockets. This design results in a substantial reduction of the friction loss. Both
this bearing and the bearing design with enclosed recesses in the high-pressure regions of the pads suffer from a higher sensitivity to the position of the pivot. The design of such bearing is
therefore no trivial task.
BT - On the Design of Tilting-Pad Thrust Bearings
SN - 87-90416-22-8
ER -
st: -ml- programming for conditional logit
From Matthew Wibbenmeyer <mwibbenmeyer@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject st: -ml- programming for conditional logit
Date Tue, 17 May 2011 14:04:26 -0600
I am working on building a maximum likelihood estimator for conditional logit, with the hope that it can be modified to estimate a non-linear utility function using a choice
experiment data set. So far, I have written a method d0 program (see
below) that matches results from the 'canned' conditional logit.
However, I would like to develop a method d2 program that will run
more reliably and efficiently on more complex utility functions.
Would anyone with experience coding d2 method conditional logit
estimators be willing to share their methods for coding the gradient
and Hessian?
Specifically my confusion comes from how to code the vector of
explanatory variables (not the coefficients), which enters into the
formula for the conditional logit gradient function. Greene (1997)
writes the gradient function as:
g = sum_{i=1,n} sum_{j=1,J} d_ij*(x_ij - xbar_i)
where d_ij = 1 if y_i = j and 0 otherwise, and where xbar_i =
sum_{j=1,J} P_ij*x_ij
x_ij is a vector of values for the explanatory variables in a
particular observation, and it is unclear to me how to code this
within my ml estimator.
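In matrix terms (the variable names here are illustrative, and `group` plays the part of obsid in the program below): each choice occasion i contributes x_chosen - xbar_i to the gradient and X_i' (diag(p_i) - p_i p_i') X_i to the negative Hessian. A numpy sketch of both accumulations, assuming exactly one chosen alternative per occasion:

import numpy as np

def clogit_score_negH(beta, X, y, group):
    # X: (n, k) matrix of alternative attributes; y: 1 for the chosen
    # alternative within each choice occasion, 0 otherwise; group: the
    # choice-occasion identifier.
    g = np.zeros(len(beta))
    negH = np.zeros((len(beta), len(beta)))
    xb = X @ beta
    for i in np.unique(group):
        m = group == i
        e = np.exp(xb[m] - xb[m].max())          # stabilized exponentials
        p = e / e.sum()                          # P_ij within occasion i
        Xi = X[m]
        xbar = p @ Xi                            # sum_j P_ij * x_ij
        g += Xi[y[m] == 1][0] - xbar             # d_ij selects the chosen row
        negH += Xi.T @ (np.diag(p) - np.outer(p, p)) @ Xi
    return g, negH

A method d2 evaluator would accumulate these same two sums with Stata's matrix commands and return them through `g' and `negH'.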
Thank you for your help,
Matt Wibbenmeyer
USDA Forest Service
Rocky Mountain Research Station
PO Box 7669
200 E. Broadway
Missoula, MT 59807
program myclog
        args todo b lnf g negH
        tempvar denom p b1
        mleval `b1' = `b', eq(1)
        qui {
                egen double `denom' = sum(exp(`b1')), by(obsid)
                gen double `p' = exp(`b1')/`denom'
                * log likelihood: sum the log choice probabilities over chosen rows
                mlsum `lnf' = $ML_y1*log(`p') if $ML_y1==1
        }
        if (`todo'==0 | `lnf'>=.) exit
end
ml model d0 myclog (Eq1: choice = x1 x2 x3, nocons)
ml search
ml max
Reading the Principia: The Debate on Newton’s Mathematical Methods for Natural Philosophy from 1687 to 1736
Reading the Principia: The Debate on Newton’s Mathematical Methods for Natural Philosophy from 1687 to 1736, N. Guicciardini,
Cambridge University Press, 1999, pp: 285, ISBN 0-521-64066-0 (hc); Price: $80.00 (hc).
Isaac Newton’s Principia is justly regarded as one of the most important works in the history of science. Its central role in presenting Newton’s theory of gravitation and applying that theory to the orbits
of the moon and planets is well known to almost all of today’s science students. However, the mathematical presentation of these theories in the Principia would no doubt baffle most undergraduates.
Instead of using the newly discovered calculus, Newton largely relied on a geometrical presentation of his theories, a choice which came back to haunt him when he later tried to prove that he had
developed the calculus before Leibniz.
In Reading the Principia, Niccolò Guicciardini attempts to uncover the reasons and implications of Newton’s decision to focus on geometrical rather than algebraic proofs by investigating how
contemporary readers responded to the book. He focuses on three scientists: Christian Huygens, the leading geometer of his time, Gottfried Wilhelm Leibniz, Newton’s opponent in the priority dispute
for the discovery of calculus, and Newton himself. Guicciardini’s careful study reveals many interesting and surprising facets of the story of the Principia and its reception, and of the ensuing dispute between Leibniz and Newton. Particularly interesting is the discovery that Newton’s decision to adopt a geometrical approach in his book was largely due to his desire to adhere to the mathematical styles of the ancients, for he believed that all knowledge had been known in ancient times. He went so far as to claim that in formulating his universal law of gravitation he was only
re-discovering lost knowledge that had been known to the ancient Chaldeans, the Pythagoreans and other unnamed ancients.
After setting out the purpose of his book, Guicciardini begins by providing a clear presentation of Newton’s mathematical methods, both those used in the Principia and his methods of fluxions. Part two comprises the meat and bones of the study: a close analysis of the response of the three readers to the Principia. Finally, in part three, the author studies the aftermath of the Principia in Britain and in continental Europe. Guicciardini shows that there are more subtleties in the differences in the Principia’s reception between Britain and the continent than are usually acknowledged in
the traditional story of Britain accepting Newton and his fluxions and a Europe rejecting Newton and attracted to Leibniz’s calculus. Eventually, of course, the more clearly versatile language of
Leibniz’s calculus (dx/dt) rather than Newton’s fluxions (ẋ) won the day.
This book, though not easy going for those not versed in the language of mathematics, will prove a fascinating read for anyone interested in Newton and his work, scientific practice and dispute in the
seventeenth century, or indeed in the foundations of today’s mathematical physics.
J.M. Steele,
University of Toronto
William M. Briggs
No, not the reactors, the press. Ghouls, most of them. They always wish for the worst. The only class of people (besides Communists and morticians) for whom death not only delights, but offers a
chance for personal advancement. The only folk who can speak the word “heartbreak” with a lilt to their voices. Their first thoughts upon hearing of a disaster is how they, and not their colleagues,
can get their face in front of a camera.
Narrating the nauseating particulars of mass death is not a horrible duty that must be stomached, but is instead an opportunity. Not one in a hundred while jetting to the calamity would think to
interrupt their prayers with a plea “take away this cup from me.” Instead they paraphrase Lenin, “It does not matter if three-fourths of mankind is destroyed: all that counts is that I am there to
report it.”
It is true that ignorance is what drives much (there are exceptions) of the reporting on the nuclear reactors. After all, most of these reporters learned their physics from Hollywood movies. Nuclear
reactors, when damaged, melt down, blow up: that is what they do. And then all but a minuscule minority suffer from reporteritis, an epidemic psychiatric disorder whereby journalists assume they
become as knowledgeable and important as the people they talk to.
They invite a nuclear physicist to offer a thirty-second soundbite, and that fraction of fact becomes All There Is To Know. The reporter assimilates the information and then opines sagely upon it;
not repeating it word-for-word, but by creating variations on a theme, weaving it into their baseline ignorance.
Of course, in this case we cannot just blame reporters. Our Surgeon General has contributed to the unnecessary panic by starting a run on “radiation pills” on the west coast. She said of the hoarding
that it was “definitely appropriate.” (Has the person in this office ever fulfilled a useful purpose?)
An anonymous but highly knowledgeable source gave this lament:
It has gotten to the point where I can barely watch the news. The hysteria driven media consistently endeavors to one-up itself on the terrors of radiation. You know, “significantly increased
levels of radiation have been detected…” Of course, they fail to provide a baseline dose rate before the earthquake/accident and then do not say what is the significantly increased dose, so that
sane, rational people can actually make comparisons and draw reasonable conclusions. I especially liked one commentator’s lame response. It went something like, “Well, you know…any exposure to
radiation… even small increases, are known to not be good for you…”
Just like reporting on global warming, where it is always, just always, “Worse than we thought”, the radiation levels are always, ever always, increasing and increasing. What reporters should be
doing is obvious: take as much time as necessary and, using actual experts, give as many facts as possible, even at the very real risk of talking over the heads of most of their audience. Difficult
but correct material always trumps simplistic summaries.
A major component of the story, relevant to us, is over-certainty. According to the Nuclear Energy Institute, the reactor damage occurred because of “extraordinary natural forces that were outside
the plant’s required design parameters.” In other words, the events that did occur were not foreseen, or were given such a low probability of occurrence that none thought it worth the trouble.
The people who design these plants are, to use the common phrase, rocket scientists. They are exceedingly bright, but they are still human. If these men can make a mistake in estimating risk on what
are, after all, simple structures, just how over-confident are we in our understanding of systems as complex as Nature or the interactions between people and nations?
The disclaimer I have to, but should not have to, add is that I do not seek to minimize reporting on the dangers to those who live near the plants. But our choice is not a dichotomy: because we do
not minimize does not mean we must maximize.
The Lady Tasting Tea continues tomorrow.
Freeman Dyson Speaks
From our man-on-the-spot WS comes this link to an interview with Freeman Dyson regarding his work “In Praise of Heretics.” That word no longer has religious connotations, a milieu where the meaning
is almost the opposite of what it is now. Warning: bad music alert (the intro to the program). Fun fact: Dyson was a statistician for the RAF in World War II! And he doesn’t have a PhD! (Therefore,
how can he be smart?)
If you haven’t read his Infinite in All Directions, do so. There is also The Scientist as Rebel (which I have not yet read).
NPR Unbiased?
NPR’s interim CEO, Joyce Slocum, told the Associated Press, “I think if anyone believes that NPR’s coverage is biased in one direction or another, all they need to do to correct that misperception is
turn on their radio or log onto their computer and listen or read for an hour or two. What they will find is balanced journalism that brings all relevant points of view to an issue and covers it in
depth so that people understand the subtlety and the nuance.”
Slocum is statistically right: many NPR programs have nothing to do with politics and are not in danger of bias, except towards simplicity, our national curse. On the other hand, NPR member stations
are often the only place that one can listen to free “classical” music on the radio (the word is a euphemism for “good” or “serious”). But of the shows, like news programs, that are politically
tinged, it is absurd to claim NPR does not have a leftist bias. So the question, Ms Slocum, is this: why should I continue to give you money?
Quote from an unfortunately titled Cal Thomas piece.
TSA Body X-rays
From chief crank John Dvorak come a link to a TSA press release:
The Transportation Security Administration announced Friday that it would retest every full-body X-ray scanner that emits ionizing radiation — 247 machines at 38 airports — after maintenance
records on some of the devices showed radiation levels 10 times higher than expected.
What a, uh, surprise.
Marxist, Feminist, Structuralist, Post-structuralist…
From an article by Alan Bekhor on scholasticism:
Consider, to take one example from many, the book Beginning Theory — an Introduction to Literary and Cultural Theory by Peter Barry, which has virtually become a set text for any humanities or
literature undergraduate course in a British university today. Five rules, Barry affirms, are to be borne in mind for critical thinking about literature: “politics is pervasive, language is
constitutive, truth is provisional, meaning is contingent, human nature is a myth.”
Each of those five rules are false (or have trivially true interpretations). Given these gross, even scurrilous, falsities as a base, is it therefore any wonder that our humanities departments are in
the shape they are in?
They Are Too In Bad Shape
Via A&LD, comes Uncle Joseph Epstein’s “Lower Education: Sex toys and academic freedom at Northwestern.” After lamenting that Northwestern could do no better than invite commencement speakers like
Stephen Colbert, Julia Louis-Dreyfus, or Dianne Sawyer, he writes of the Michael Bailey Dildo Scandal:
Because a subject exists in the world doesn’t mean that universities have to take it up, no matter how edgy it may seem. Let books be written about it, let research be done upon it, if the money
to support it can be found, but the nature and quality and even the sociology of sexual conduct—all material available elsewhere in more than plentitude for the truly interested—does not cry out
for classroom study. Students don’t need universities to learn about varying tastes in sex, or about the mechanics of human sexuality. They don’t need it because, first, epistemologically, human
sexuality isn’t a body of knowledge upon which there is sufficient agreement to constitute reliable conclusions, for nearly everything on the subject is still in the flux of theorizing and
speculation; and because, second, given the nature of the subject, it tends to be, as the Bailey case shows, exploitative, coarsening, demeaning, and squalid.
It finally happened, though we thought, we hoped, he would go on forever. Danny Stiles, radio personality in the New York metro area for over 60 years, died on Friday, 11 March 2011 at age 87.
Twenty-plus years, each Saturday night from 8 pm to 10 pm, Stiles broadcast the Music Museum, featuring the Great American Songbook on WNYC (820 AM), live from “the east wing of the Art Deco penthouse” (as he described his quarters overlooking the picturesque Holland Tunnel).
Stiles “on your dials” ranged over many stations in his long career; in recent months hosting a show—sponsored by John’s Pizzeria—on WPAT (930 AM). In the old style of radio, he would intersperse
songs with laudatory comments about how savory, unique, and delicious the pizza was. He integrated the commercials so well that you didn’t always know you were hearing an ad.
Regular listeners knew something was wrong when Danny missed his broadcasts on WNYC for the past month. The station ran recordings of previous shows. They did so again on 12 March, in tribute (and it might have been the last). Even though he was ill, he continued to record his WPAT show with a voice cracked and tired. The week before he died, he sounded better and even boasted he would return to WNYC. Alas, he never made it.
For many years, Stiles would host a Friday night get together at New York restaurants, starting at Meli Melo on Madison, and then at Seppi’s in the Parker Meridian Hotel. Seppi’s, a warm bistro
chefed by the friendly Claude Solliard, unexpectedly closed over a year ago, and Stiles’s Friday nights ended soon after (he was briefly at another restaurant downtown).
People would come from all over to dine and have a chance to meet Mr Stiles. Cocktails to start (martinis, of course), then dinner, then music featuring Rick Bogart’s New Orleans trio. Danny would
introduce the band with familiar, comfortable jokes. Bogart would play the clarinet and sing in an acquired-taste style. Danny would often spin records from Bogart on his Saturday night show, too.
It was at Seppi’s where I was introduced to the great-great-great (I’ve forgotten how many) grandson of John Quincy Adams who shared his relative’s name. I met many radio personalities. I even shook
hands with the Kenilworth crooner. And was I busting with pride when, on the WNYC show after one Friday night, Mr Stiles mentioned that he met the “world-class statistician” and his “blond bombshell.”
Stiles began each Music Museum with Cherokee from Charlie Barnet, followed closely by Glenn Miller, Artie Shaw, or some other big band piece. In third place was always an early recording of Frank
Sinatra with the Tommy Dorsey band. Many casual Sinatra fans would do well to look these recordings up: many are a treat.
Right around 8:30 to 8:40 pm, Danny would slip into “mob music”, singers from the late 1950s with a distinct Italian bias. He would often spin a favorite record, week on week. Some of his choices—let’s face it—were a little schmaltzy, others were quirky. But you couldn’t hold it against him. And you didn’t just get the music, but you heard the stories behind the songs, the rich history behind them.
Every show ended with Shirley Temple singing—and Danny signing—”Goodnight, My Love.” A goofy, playful exit.
Until this last year when, right before Shirley, Danny took to playing (and talking over) the very melancholy Acker Bilk instrumental “Stranger on the Shore.” It wasn’t a good omen. He is now relieved of his having to carry those “excruciatingly heavy” 78s home from the studio.
We heard the door close for the last time tonight. Goodbye, Danny, we will miss you.
Obituary from the New York Times. Nostalgia Alley interview with Danny. Danny Stiles had a website (currently still up) featuring continuous loops of his Friday night shows (they aren’t there now and
are unlikely to return).
The Lady Tasting Tea will begin Monday.
Deadline: 21 May 2011
Family Radio Worldwide has hit the road (in a bunch of mobile homes) to preach that the end shall come just over two months from now.
According to the Daily Mail, “Those who believe in Jesus will be carried into heaven, while the rest of humanity will endure 153 days of ‘death and horror’ before the world ends on October 21.” This
is the Rapture followed by the Tribulation spoken of by millennialists.
Harold Camping, the group’s leader, issued the definite forecast and for that I admire him. I do not believe him, but I approve of his concreteness. Most who preach doom and gloom haven’t the guts to
put definite numbers, shape, or dates on their prognostications. Most are content to sit and wait for events “similar” to what they have vaguely predicted, and then take loose credit for having
actually predicted those events.
Camping and his flock will have the chance to learn from their mistake. Not that Camping will take his failure to heart, but his followers might.
Of course, these sort of things often turn out badly. The evidence of a failed core belief is often too much for the devout to bear. Many in the group will not be able to face their ex-family and
friends who they have abandoned to take up this cause. These poor people will not need our ridicule, but our prayers.
Obama: Women Earn Less
Mr Obama, in his radio address of today (which your intrepid reporter caught a portion of) said, “Today, women still earn on average only about 75 cents for every dollar a man earns. That’s a huge
discrepancy.” He called this “troubling.” He also told us that March is women’s month. Oops: it’s National Women’s Month. (Men don’t get a month.)
Mr Obama’s statistics are faulty. I have done the numbers myself and can report that, within a job and age-matched with men, women not only earn as much as men, but sometimes more (on average).
Particularly, women entering the workforce now earn more than men on average in most career fields. Yes, even engineering. At the very top, where gray hair abounds, men still, on average, have an
edge. But these wrinklies are old and getting older and will be replaced by higher-earning women.
Another reason for the supposed “discrepancy” is that women do not enter the job force in the same proportion as men, but they are increasing that proportion. Thus, some fields are seeing dramatic
increases in women; and, since these women are new employees, they tend to earn less than the older employees, who are more likely to be male.
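A toy calculation (all headcounts and salaries invented for illustration) shows how this composition effect alone manufactures an aggregate "gap" even when women out-earn men within every tier:

# (headcount, average salary) by tier; invented numbers for illustration
senior = {"men": (90, 100_000), "women": (10, 102_000)}
junior = {"men": (60, 50_000),  "women": (140, 52_000)}

def overall_average(sex):
    n = senior[sex][0] + junior[sex][0]
    pay = senior[sex][0] * senior[sex][1] + junior[sex][0] * junior[sex][1]
    return pay / n

print(overall_average("men"), overall_average("women"))
# 80000.0 vs ~55333: women earn more in each tier, yet the raw
# average shows them at about 69 cents on the dollar.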
Thus, Mr Obama, while quoting a (more or less) accurate statistic gave it the wrong interpretation; and not just the wrong interpretation, but one that is exactly the opposite of the truth. But one
well in line with his call for more taxing, spending, and regulation to “correct the balance.”
Abortion Philosophy
Many philosophers have tried, in vain, to show that the only question about whether abortion should be legal depends on whether the fetus is human or not. To kill a human for convenience or personal
gain is murder. Thus, if the fetus is human, then to abort it is to commit murder. Abortion is in no way an “issue” of women’s “rights.” Women do not have the “right” to murder simply because they
are women.
You can argue that the fetus becomes human at a certain point in time. For example, it is not human at conception but becomes so at the beginning of the second trimester. (Never mind measurement; we
are speaking purely philosophically.) Mark Mercer, chairperson of the philosophy department at Halifax’s St. Mary’s University, says the fetus does not become human until after it has been out of the
woman’s womb for 18 months. Mercer says human-like creatures 18 months old or less are not human, but they become so after 18 months of life.
Mercer’s argument, while asinine, is, so far, philosophically sound. The only question is where to draw the line. He draws it rather late because being human is a “legal term which has had a changing
definition throughout history.” Here is his mistake, a common one made by equality-mongers and multiculturalists. Just because a society commends or does not penalize some action, does not imply that
that action is moral. The reason this is so should be obvious (hint: what is a society?).
Abortion “rights” supporters hate debate and tried “to disrupt a debate on abortion at Dalhousie University Tuesday night by ripping down ads, setting off stink-bombs, and covering the ceiling with
helium balloons featuring pro-abortion slogans. In the end, they even turned on” Mercer. Why? Because he refused to acknowledge that abortion was an “issue” of women’s “rights.”
This proves that ignorance isn’t, or doesn’t produce, bliss, but perpetual outrage.
See also this video where MSNBC’s Cenk Uygur argues (soundly) that the fetus’s first heartbeat cannot draw the line because the fetus “is not really a person.” But he also says, “Now, we reached out
to the fetus to see if he or she wanted to come on the show, but it did not say anything, because it does not have a mouth. But if it could talk, I’m pretty sure it would say, could you please get
out of my mother’s uterus.” How could a non-human say “Please abort me” or “Please let my mother decide whether to abort me”? Uygur’s fallacy is that he wants it both ways.
J-school Ghouls
On the radio, a female American reporter in Tokyo, unnecessarily breathless and somewhat disappointed. “I can only imagine if it were here, it would have been much worse and the, uh, the count would
have been much higher.” Body count, of course.
Colleges Now Offer High School Degrees
From the—yes, really—New York Times, a story on how CUNY schools are having to teach college students what they should have learned in high school. The “tide of remedial students has now swelled so
large that the university’s six community colleges — like other two-year schools across the country — are having to rethink what and how they teach, even as they reel from steep cuts in state and
local aid. ”
Isn’t this the same paper that is siding with the Wisconsin (and New York) union teachers, saying that these teachers need more money for the fine job they are doing? Thanks to long-time reader and contributor Ari Scwartz for bringing this to our attention.
Detroit Invaded By Hipsters
Under the We’re-not-sure-this-is-a-good-thing category, Detroit is being taken over by t-shirted, expensively shod hipsters. And why not, when you can buy a perfectly serviceable house for pocket
change. Videos here.
Unintended Consequences of Obamacare
Who could have ever guessed that the you-can’t-know-what’s-in-it-until-you-pass-it Obamacare law would have provisions which actually cause health care costs to increase? Unprecedented. The Wall
Street Journal is reporting on the comedic situation where parents are going to the doctor to ask for prescriptions for aspirin and others over-the-counter remedies. Why? The Obamacare law says that
health care expenses drawn from flexible spending accounts can only be authorized via a doctor’s script. Result: costs increase.
Stuff Academics Like
Just when you thought the postmodern academic culture wars were over, we have a new site documenting the oddities of academics. In the same vein as Stuff White People Like, Stuff Academics Like posts
strange things ivory-tower inhabitants find important. Much fun can be had in their “Guess the Fake Title” game, where they display a list of titles from genuine peer-reviewed papers, one of which is
fake. A brief excerpt:
Exemplarity – The Real: Some Problems of a Realist Exemplarity Exposed through the Channel of an Aesthetic Autopoeisis [conference paper]
Tragic Closure: A Genealogy of Creative-Destructive Desire [conference paper]
The True History of His Beard: Joaquin Phoenix and the Boundaries of Cinéma Vérité [conference paper]
Trying the Law: Critical Prosecutions of the Exception [conference paper]
Thinking the Pure Unformed [conference paper]
Alan Ball’s True Blood Antics: Queering the Southern Vampire [conference paper]
Antagonistic Corpo-Real-ities [conference paper]
This list is partial, so it is unknown which, if any, is fake. Be sure not to miss the link to the Write Your Own Academic Sentence site. My entry: “The epistemology of pop culture replays (in
parodic form) the ideology of the nation-state.”
Democrat Peter DeFazio Meddles With Opponent’s Kids?
Many sites (one link) are reporting on the Oregon House race of MoveOn.org-supported Democrat Peter DeFazio (leader of the House “progressive” caucus) versus Republican Art Robinson (ex professor of
chemistry and climate “skeptic”, a no-no in his corner of Oregon). The details are not clear, but Robinson accused DeFazio of conspiring to have three of Robinson’s children (he has six) booted from
Oregon State University’s graduate school. Robinson writes of one of his sons:
Thus, Democrat activist David Hamby and militant feminist and chairman of the nuclear engineering department Kathryn Higley are expelling four-year Ph.D. student Joshua Robinson from OSU at the
end of the current academic quarter and turning over the prompt neutron activation analysis facility Joshua built for his thesis work and all of his work in progress to Higley’s husband, Steven
Reese. Reese, an instructor in the department, has stated that he will use these things for his own professional gain. Joshua’s apparatus, which he built and added to the OSU nuclear reactor with
the guidance and ideas of his mentor, Michael Hartman, earned Joshua the award for best Masters of Nuclear Engineering thesis at OSU and has been widely complimented by scientists at prominent
U.S. nuclear facilities.
Robinson lost to DeFazio. Oregon’s Gazette Times reports that OSU said there was “no factual basis” for Robinson’s claims. The paper also differs in the details saying Robinson didn’t claim his kids
weren’t being kicked out, but that two, not three, were “given unfair deadlines to complete their Ph.D. projects.” Which is a very different thing.
Robinson also alleges the OSU has “ostracized” faculty member Jack Higginbotham (nuclear engineering) for telling Robinson of the conspiracy. OSU was forced to issue a press release which said
Robinson’s claims are “baseless and without merit.”
Anybody have more details on this?
Update Somebody linked to the low-flow toilet story with this must-see video from Rand Paul spanking the Obama administrator’s “Ms. Hogan” on what “pro-choice” means. Busybody!
Clearly, a guy with no hair on his head is bald. But so is a guy with just one—if and only if we define bald as “a man with little or no hair.” If the guy has one hair and we define bald to mean “a
man with no hair” then the man with one hair is not bald. So let us use “a man with little or no hair” as our definition and see where that gets us.
We assume that if a man with one hair is bald (by our definition), then so is a man with just two hairs. And if a man with two hairs is bald, then so is a man with three. We can expand this: if a man
has N hairs and is bald, then a man with N + 1 hairs is also bald. Thus (eventually) a man with a million (say) hairs on his head bald, too. Which is absurd. Any man which such a mane is clearly
fully flocked. Yet our derivation is error free.
This is the Sorites, an ancient puzzle, also given with respect to grains and heaps of sand (the words is derived from the Greek heaped up). More than a few writers on this paradox, after reaching
the gotcha!, now say something like the following:
“We seem to have reached the point where we say that a man with, say, 5,000 hairs is ‘bald’, but one with just one more tiny, wee hair is not. This is nuts. Nobody can see the difference between
5,000 and 5,001 hairs. Something must be wrong with our system of logic.”
The man who says this, or anything like it, makes (at least) two mistakes. I’ve already given a hint of the first error above. There is nothing wrong with logic, but there is with the definition of
bald. That word, when used in this exceedingly formal logical argument itself becomes a formal creature. It is no longer the bald as used colloquially, it is instead like the X used in algebra. It is
an abstract thing, it no longer means real baldness on real men. It means logical X-ness on fictional men.
Indeed, rewrite the Sorites to remove the pseudo-word bald and replace it with X. X now means a man with fewer than Y hairs. If the man with no hairs is X, then so is the man with one hair, and so
forth. Now, at some point we either bump up against Y, in which case the man is no longer X, or Y is the limit and the man is always X except at the limit.
If I were to have originally written the Sorites in this algebraic form—with just Xs and Ys—there never would have been a gotcha!, we never would have questioned the foundations of logic, there would
have been no paradox. That there felt like one when we do use bald instead of X can only mean that we are silently augmenting our argument with hidden premises (which define bald). We figure that
because these premises are unstated, or do not appear in print, they are not truly there.
One hidden premise is that the word bald to me, and to me right now, means a man with a certain shape of head and a certain lack of hair. I need not know how many hairs this man has, but I will make
the judgment bald or not by what I see. Of course, we may, after my judgment, count the man’s hair and thus reach a quantification. My premises fluctuate: they are different for different times and
men, or for the same men but they change depending on what these men wear, or the properties of the light, my relations to these men, or even by how much I have drunk.
My premises are almost certainly different than yours. I may say bald when you do not. That our behavior is not constant or that our judgments do not agree is meaningless. Neither is it relevant—and
here is the second mistake—that I cannot articulate my premises. All that I can do is to say bald or not. Quantification, as I said, can always be had after the fact. But all this will tell us, in
any individual case, is that the man now in front of me has not yet reached Y, or that he has exceeded it. We will not be able to deduce Y (unless the man is willing to undergo experimentation;
however, my premises might change as we add or subtract hair from our recruit).
Unacknowledged, hidden premises are the generator of many “paradoxes.” The most relevant to statistics are in (faulty) criticisms of Laplace’s Rule of Succession, which we can attack another day.
Read the first entry in this series. All of what follows will appear ridiculously obvious to those who have had no statistical training. Those who have must struggle.
In a recent study, a greater fraction of Whites than Blacks were found to have a trait thought desirable (or undesirable, or a trait thought worth tracking). Something caused this disparity to occur.
It cannot be that nothing caused it to occur. “Chance” or “randomness” are not operative agents and thus cannot cause anything to occur. It might be that we cannot know what caused it to occur, or
that we guess incorrectly about what caused it to occur. But, I repeat, something caused this difference.
If you like, substitute “Pill A” and “Pill B”, or “Study 1″ and “Study 2″, etc. for White and Black.
I observed a greater fraction of Whites than Blacks possessing some trait. Given this observation, what is the probability that a greater fraction of Whites than Blacks in my study possessed this
trait? It is 1, or 100%. If you do not believe this, you might be a frequentist.
What is the probability that the proportion of trait-possessing Whites is twice—or thrice, or whatever—as high as Blacks in my study? It is either 1 or 0, depending on whether the proportion of
trait-possessing Whites is twice (or whatever) as high as Blacks. All I have to do is look. No models are needed, no bizarre concepts of “statistical significance.” All we need do is count. We are
done: any empirical question we have about the difference (or similarities) of Whites and Blacks in our study has probability 1 or 0. It is as simple as that.
Now suppose that we will see a certain number of Whites we have not seen before; likewise Blacks (they could even be the same Whites and Blacks if we believed the thing or things that caused the
trait was non-constant). We have not yet measured this new group of Whites and Blacks so that we do not know whether a greater proportion of Whites than Blacks will be found to possess the trait.
Intuition suggests that since we have already observed a group in which a greater proportion of Whites than Blacks possessed the trait, the new group will display the same disparity.
We can quantify this intuition with a model. There are many—many—to choose from. The choice of which one to use is ours. All the results derived from it assume that the model we have chosen is true.
One model simply says, “In any group of Whites and Blacks, a greater proportion of Whites than Blacks will be found to possess the trait.” Conditional on this model—that is, assuming this model is
true—the probability there will be a greater proportion of trait-possessing Whites than Blacks in our new group is 1, or 100%. This simple model only makes a statement about Whites possessing the
trait in higher frequency than Blacks. Thus, we cannot say what is the probability the proportion of trait-possessing Whites is twice (or whatever) as high as Blacks in my study.
Some models do not let you answer all possible questions.
We could create a model which dictates the probability that we find each multiple (from some set) of fractions of Whites than Blacks (e.g. twice, thrice, 1/2, 1/3, etc.), and then use this model to
make probability statements about our new group. Since that would be difficult (and somewhat capricious), we could instead parameterize the differences in proportion.
We could use this model to answer the question, “Given this model is true, and given the observations we have made thus far, what is the probability that the parameters take a certain value?” This
question is not terribly interesting and it does not answer what we really want to know, which is about the differences between Whites and Blacks in our new group. Why ask about some unobservable
parameter? (The right answer is not, “Because everybody else does.”)
But given a fixed value of the parameters, we could answer the question, “Given this parameterized model is true, and given a fixed value of its parameters, and given the observations we have made
thus far, What is the probability a greater fraction of Whites than Blacks will posses the trait?” This is almost what we want to know, but not quite, because it fixes the values of the unobservable
Simple mathematics allows us to answer this question for each possible value of the parameters, and then weighting the answers by the probability that the parameters take those values (this is from
the parameter posterior distribution, which is conditional on the model being true and on the observations we have made thus far). The final number is the probability that the fraction of Whites is
larger than Blacks in our new group. Which is what we wanted to know. (This is called the predictive posterior distribution.)
“Statistical significance” never once enters into this or any real decision. When you hear this term, it is always a dodge. It is an answer to a question nobody asks and nobody wants to know. It
always assumes, as we do, on the truth of a model (though it remains silent about this, hoping by this silence to convince that no other models are possible). It tells us the probabilities of events
that did not happen, and asks us to make decisions based on probabilities of these never-happened events. If you want to be mischievous, ask a frequentist why this makes sense. Homework: Locate
Jeffreys’s relevant quote.
See the first in this series to discover what to do if we suspect our model is not true. | {"url":"http://wmbriggs.com/?paged=149","timestamp":"2014-04-17T07:14:34Z","content_type":null,"content_length":"96048","record_id":"<urn:uuid:97e92778-0710-4831-9dae-69fc979fca23>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00226-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Simulation of the Effects of Varying Repetition Rate and Pulse Width of Nanosecond Discharges on Premixed Lean Methane-Air Combustion
Journal of Combustion
Volume 2012 (2012), Article ID 137653, 11 pages
Research Article
A Simulation of the Effects of Varying Repetition Rate and Pulse Width of Nanosecond Discharges on Premixed Lean Methane-Air Combustion
Mechanical Engineering Department, Stanford University, Stanford, CA 94305-3032, USA
Received 11 May 2012; Accepted 17 September 2012
Academic Editor: Nickolay Aleksandrov
Copyright © 2012 Moon Soo Bak and Mark A. Cappelli. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Two-dimensional kinetic simulation has been carried out to investigate the effects of repetition rate and pulse width of nanosecond repetitively pulsed discharges on stabilizing premixed lean
methane-air combustion. The repetition rate and pulse width are varied from 10kHz to 50kHz and from 9ns to 2ns while the total power is kept constant. The lower repetition rates provide larger
amounts of radicals such as O, H, and OH. However, the effect on stabilization is found to be the same for all of the tested repetition rates. The shorter pulse width is found to favor the production
of species in higher electronic states, but the varying effects on stabilization are also found to be small. Our results indicate that the total deposited power is the critical element that
determines the extent of stabilization over this range of discharge properties studied.
1. Introduction
Nanosecond repetitively pulsed discharges have been studied as possible sources to stabilize combustion at fuel-lean and blow-off conditions [1]. The stabilization has been attributed to the
significant radical production and gas heating within the discharges [1, 2]. The relative importance of these two outcomes on the stabilization has been debated. However, the effects of these two
discharge outcomes on stabilization are difficult to separate because the two mechanisms stem from the same process of collisional quenching of electronically excited species (produced by direct
electron impacts) [2, 3]. Also, the influence of radicals such as O and H is more pronounced at high gas temperature where the branching and propagation reactions can compete with the termination
reactions. It is notable that Pilla et al. [4] reported that they were able to stabilize combustion when the discharge is at filamentary mode (e.g., streamers) rather than at glow mode. Deminsky et
al. [5] also reported that an acceleration of the ignition process is seen in premixed supersonic flows with filamentary discharge modes. This suggests that the gas heating that is often more
pronounced in the filamentary mode is necessary for significant radical production and that the elevated gas temperature results in the persistence of radicals during the time between pulses.
A few tens of microjoule of energy per pulse, tens of kilohertz repetition rates, and a few kilovolt peak voltages are typical operation conditions of nanosecond repetitive discharges used in
combustion stabilization. Under these conditions, the average electron number density that can be sustained is about 10^11cm^−3 peaking as high as 10^15cm^−3 with a lower power budget compared to
other types of discharges [6]. There have been several studies carried out of the kinetics responsible for plasma-assisted stabilization of combustion [1], but to our knowledge, there are few, if
any, experimental studies investigating the effect of repetition rate and pulse width (for this type of discharge) on stabilization. This is because the power that is deposited into the discharge
region is difficult to control and accurately characterized.
In this paper, we examine the effect of variations in the repetition rate and the pulse width on premixed methane-air combustion using a 2-D kinetic simulation. In these simulations we set the total
power to be constant and vary the repetition rate between 10kHz and 50kHz and the pulse width from 9ns to 2ns. In this way, the primary variables are the energy per pulse and therefore the amount
of produced radicals and their ability to survive between pulses. The kinetics within the discharge region and the quasi-steady contours between the cases are compared in detail.
2. Simulation Description
The simulated domain, marked as a red-dotted box, is shown in Figure 1. Axisymmetric coordinates are used in this simulation, and the size of the domain is set to 12mm in height (z-axis) and 5.6mm
in radius (r-axis), spanning the discharge region. The center of the discharge region is located 4mm above the lower computational boundary, on the axis, and has size of 1mm in height and 0.35mm
in diameter. This diameter is chosen to agree with the measured diameter in the similar experiments of Pai et al. [7]. A uniform grid spacing (0.333mm along the z-axis and 0.175mm along the r-axis)
is used in these simulations. Species considered include ground and electronic excited states of N[2] (X, A, B, a′, C), the ground electronic states of O[2], , , O, CH[4], H[2]O, CO[2], , H[2]O^+, ,
and free electrons (e), amongst others. These species are added to the reduced reaction mechanism, DRM19 [8], that is often used for simulating methane/air combustion. DRM19 has been tested against
the more detailed GRI-Mech mechanism [9] for computing ignition delay times and laminar flame speeds. Thermodynamic properties for neutral species are calculated based on the use of NASA polynomials
as tabulated in the GRI-Mech package. The thermodynamic properties for electrons and ions are taken from Burcat [10]. Reactions considered are electron-impact excitation and ionization of N[2],
electron-impact dissociation and ionization of O[2] and CH[4], electron-impact ionization of H[2]O and CO[2], ion conversion, recombination of electron and positive ions, quenching of electronically
excited nitrogen () by N[2], dissociative quenching of by O[2] and CH[4], and chemical transformation of neutral species typical in methane/air combustion reaction mechanisms. The reactions pertinent
to the plasma kinetics and associated rates are provided in our previous paper [2]. Rate coefficients for reactions between neutral species and ions are adapted from previous literature, whereas
those for reactions involving electrons are calculated as functions of reduced electric field () based on the solution of the Boltzmann equation, which is facilitated using the commercial software,
BOLSIG+ [11]. Such an approach is necessary since the reactions are coupled to the electron energy distribution function. It is noteworthy that only elastic and inelastic cross-sections associated
with the major species N[2], O[2], CH[4], H[2]O, and CO[2] are considered in establishing the electron energy distribution. The species conservation equation used in our simulations has the form with
The species equation is solved simultaneously with the energy equation, to compute the species number densities and gas temperature at each grid point. The species diffusion velocity is composed of
the diffusion-induced convection velocity , the ordinary diffusion velocity, and the thermal diffusion velocity, the latter of which is accounted for light species only, having molecular weight less
than five: Here, is determined to satisfy the condition In the above equations, ,, and are the number density, mole fraction, and stoichiometric coefficient of species , and are the forward and
reverse rate coefficients, and are the electron and gas temperatures, is the mixture average diffusion coefficient of species , is the thermal diffusion ratio of species , and is the local advection
velocity, which is scaled as ( is the initial inlet gas temperature) assuming negligible radial velocity and constant pressure, to account for the flow acceleration caused by heat release during
combustion. We assume that electrons and ions exist only in the discharge region and that the binary diffusion coefficients for electronically excited N[2] are equal to those of the ground state N
[2]. In the energy equation, is the electron mobility (a function of E/n), is the heat capacities at constant volume of species , and are the formation and sensible enthalpies of species , and is the
mixture-averaged thermal conductivity. The mixture diffusion coefficient of species , thermal diffusion ratio of species , and mixture-averaged thermal conductivity are computed at each grid points
according to (6), that is, where and are the mole and mass fractions of species , respectively, is the binary diffusion coefficient between species and , is the binary thermal diffusion ratio for
species into species , and is the pure thermal conductivity of species .
In solving the system of partial differential equations, the diffusion processes (species and thermal conduction) are discretized by a central difference scheme, and the convection and advection
processes (species and enthalpy) are discretized by an upwind scheme. The system of ordinary differential equations is then solved implicitly for each (adaptive) time step based on a backward
difference formula (BDF). In the computations, the domain is divided into smaller subdomains, which are allocated to separate processes. These processes are computed in parallel, synchronizing their
boundary values via the message passing interface, MPI. Sundials CVODE [12] with MPI support is used as a solver and Open MPI [13] is used for MPI-2 standard implementation. Each temporal solution is
computed iteratively using the Generalized Minimal Residual method (GMRES) [14]. A Dirichlet condition is used for the lower domain boundary, whereas Neumann conditions are used for the sides and top
of the computational boundary.
A Gaussian-shaped (in time) reduced electric field with given full-width half maximum is applied at the discharge region. The reduced electric field is varied to provide the total power of 0.4323W
while varying the repetition rate and the pulse width. The methane-air equivalence ratio is set to be 0.45, conditions at which the flame is not self-sustaining and the combustion that is initiated
at the discharge region quenches at downstream flow locations. This low value is deliberately chosen; otherwise self-sustained combustion makes it difficult to resolve the effect of the discharge.
The initial advection speed is 42.5cm/s. The initial inlet gas temperature and pressure are set to 296K and 1atm, respectively.
3. Simulation Results
3.1. Results for Repetition Rates Ranging between 10kHz and 50kHz
Simulations were carried out for different repetition rates ranging from 10kHz to 50kHz, under conditions of constant average power. Under these constraints, the energy per pulse is higher for the
lower repetition rates. The discharge pulse energies are 43μJ, 22μJ, 14μJ, 11μJ, and 9μJ for 10kHz, 20kHz, 30kHz, 40kHz, and 50kHz, respectively. The corresponding effective reduced
electric fields are determined to be 345Td, 335Td, 328Td, 323Td, and 318Td. The kinetics within the discharge region is shown in Figure 2. Because of the higher electric fields for the lower
repetition rates, the electron number density (Figure 2(b)) peaks at higher values and the population of excited electronic states of N[2] (Figure 2(a)) are significantly larger. As a result, the gas
temperature rises (Figure 2(c)) and the produced amount of radicals such as O, H, and OH (Figure 2(d)) are also significantly greater for the low repetition rates. This larger amount of the produced
radicals after the pulse converts CH[4] further to CO and H[2], and finally to CO[2] and H[2]O (Figures 2(e) and 2(f)). It is noteworthy that with this type of discharge the methane concentration
remains low within the discharge region after the very first discharge pulse because the first discharge ignites the methane-air mixture and the repetition timescales tested are shorter than the
times for species diffusion and advection. Although the lower repetition rates provide for higher values for the temperature and radical concentrations, their quasi-steady state levels are found to
be lower because of the longer time between pulses for thermal conduction, species diffusion, and radical recombination processes. According to these simulations, the kinetic evolution is found to be
more drastic for the lower repetition rates.
The quasi-steady state contours of the major and minor combustion species for different repetition rates between 10kHz and 50kHz are shown in Figures 3 and 4, respectively. The contours for CH[4],
CO[2], H[2]O, CO, H[2], and O correspond to Figures 3(a), 3(b), and 3(c) and Figures 4(a), 4(b), and 4(c), respectively. Interestingly, in spite of their different quasi-steady state values and
degree of temporal evolutions for the gas temperature and the radicals, the results for these cases are almost exactly the same. Our results indicate that, for this range of repetition rate and
average power, the discharge is able to maintain a sufficiently high level of excited state species population required to keep the discharge region combusted (Figures 3(a), 3(b), and 3(c)), while
the peak levels of the produced radicals (Figure 4(c)) are less important because they decay quickly to the thermally equilibrated values in the post-discharge region through radical recombination
reactions. These recombination reactions eventually release heat energy to the stream. In essence, while radicals play some role in the kinetics, we find that the average power, irrespective of the
mechanism through which heating takes place, is the critical factor on stabilizing combustion.
3.2. Results for Different Pulse Width Ranging from 9ns to 2ns
Simulations for different pulse widths corresponding to 9ns, 4ns, and 2ns are compared, while maintaining constant average discharge power. The energy per pulse is 14μJ because the repetition
rate is also kept constant, but the temporal energy density during the pulse is higher for the shorter pulse width. This is reflected in the higher reduced electric fields for shorter pulses. The
fields are determined to be 328Td, 421Td, and 535Td for 9ns, 4ns, and 2ns pulse widths, respectively. The detail kinetics within the discharge region is shown in Figure 5. Because of the higher
temporal energy density for the shorter pulses, the shorter pulses result in larger peak electron number densities (Figure 5(b)). For excited electronic states of N[2] (Figure 5(a)), more energized
electrons during the shorter pulses populate the higher energy states. N[2] C is produced most when the pulse width is the shortest. However, this different degree of population between these states
does not appear to lead to significant differences in the amounts of produced radicals such as O, H, and OH (Figure 5(d)). The results are very similar for all of the tested pulse widths, and the
kinetic evolutions for minor species (H[2] and CO, Figure 5(f)) and major species (CH[4], CO[2], and H[2]O, Figure 5(e)) are found to be the same. This finding indicates that shorter pulses populate
higher energy states with more preference but do not lead to a noticeably larger radical amount.
The quasi-steady states contours of major and minor combustion species are shown in Figures 6 and 7, respectively, for pulse widths of 9ns, 4ns, and 2ns. The contours for CH[4], CO[2], H[2]O, CO,
H[2], and O correspond to Figures 6(a), 6(b), and 6(c) and Figures 7(a), 7(b), and 7(c), respectively. As seen in the figures, the contours are almost exactly the same for the three different pulse
widths. This is expected since the discharge kinetics describing radical production (Figure 5) were also very similar. This again suggests that the average power is the defining factor on the
chemistry, and little benefit is achieved by shortening the pulse into the nanosecond range.
4. Conclusion
The effect of repetition rate and pulse width on combustion stabilization for nanosecond repetitively pulsed discharges was investigated by computational simulations. In these simulations, the total
average discharge power is kept constant. Since the lower repetition rates have larger pulse energy and a corresponding longer time between pulses, the gas temperature rises and the produced radicals
were greater but their quasi-steady values were correspondingly lower than cases of higher repetition rate. However, in spite of this different degree of kinetic evolution, the contours for major and
minor combustion species were found to be almost exactly the same and independent of repetition rate. Shortening the pulse widths while maintaining a constant average discharge power produced a
higher peak population of excited electronic state species but the overall quasi-steady amounts were similar to all pulse widths and therefore the contours of the combustion products were also
similar. From these simulations, we conclude that the average discharge power is found to be the determining factor on combustion stabilization and little if any benefit is obtained by varying the
operation parameters such as the repetition rate and pulse width of nanosecond pulsed discharges over the range of conditions studied here.
This work is supported by the National Science Foundation and the Department of Energy through the NSF/DOE Partnership in Basic Plasma Science. M. S. Bak is also supported by a Stanford Graduate
1. S. M. Starikovskaia, “Plasma assisted ignition and combustion,” Journal of Physics D, vol. 39, no. 16, pp. R265–R299, 2006. View at Publisher · View at Google Scholar · View at Scopus
2. M. S. Bak, H. Do, M. G. Mungal, and M. A. Cappelli, “Plasma-assisted stabilization of laminar premixed methane/air flames around the lean flammability limit,” Combustion and Flame, vol. 159, no.
10, pp. 3128–3137, 2012.
3. G. D. Stancu, F. Kaddouri, D. A. Lacoste, and C. O. Laux, “Atmospheric pressure plasma diagnostics by OES, CRDS and TALIF,” Journal of Physics D, vol. 43, no. 12, pp. 124002–124010, 2010. View at
Publisher · View at Google Scholar · View at Scopus
4. G. Pilla, D. Galley, D. A. Lacoste, F. Lacas, D. Veynante, and C. O. Laux, “Stabilization of a turbulent premixed flame using a nanosecond repetitively pulsed plasma,” IEEE Transactions on Plasma
Science, vol. 34, no. 6, pp. 2471–2477, 2006. View at Publisher · View at Google Scholar · View at Scopus
5. M. A. Deminsky, I. V. Kochetov, A. P. Napartovich, and S. B. Leonov, “Modeling of plasma assisted combustion in premixed supersonic gas flow,” International Journal of Hypersonics, vol. 1, no. 4,
pp. 209–224, 2010. View at Publisher · View at Google Scholar
6. C. H. Kruger, C. O. Laux, L. Yu, D. M. Packan, and L. Pierrot, “Nonequilibrium discharges in air and nitrogen plasmas at atmospheric pressure,” Pure and Applied Chemistry, vol. 74, no. 3, pp.
337–347, 2002. View at Scopus
7. D. Z. Pai, D. A. Lacoste, and C. O. Laux, “Transitions between corona, glow, and spark regimes of nanosecond repetitively pulsed discharges in air at atmospheric pressure,” Journal of Applied
Physics, vol. 107, no. 9, Article ID 093303, 15 pages, 2010. View at Publisher · View at Google Scholar
8. A. Kazakov and M. Frenklach, “Reduced reaction sets based on GRI-Mech 1.2,” http://www.me.berkeley.edu/drm/.
9. M. Frenklach, H. Wang, C.-L. Yu et al., http://www.me.berkeley.edu/gri_mech/.
10. A. Burcat, Third Millennium Ideal Gas and Condensed Phase Thermochemical Database for Combustion, TAE 867, Technion-Israel Institute of Technology, 2001.
11. G. J. M. Hagelaar and L. C. Pitchford, “Solving the Boltzmann equation to obtain electron transport coefficients and rate coefficients for fluid models,” Plasma Sources Science and Technology,
vol. 14, no. 4, pp. 722–733, 2005. View at Publisher · View at Google Scholar · View at Scopus
12. A. C. Hindmarsh, P. N. Brown, K. E. Grant et al., “SUNDIALS: suite of nonlinear and differential/algebraic equation solvers,” ACM Transactions on Mathematical Software, vol. 31, no. 3, pp.
363–396, 2005. View at Publisher · View at Google Scholar · View at Scopus
13. Y. Saad and M. H. Schultz, “GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems,” SIAM Journal on Scientific and Statistical Computing, vol. 7, no. 3, pp.
856–869, 1986. View at Publisher · View at Google Scholar | {"url":"http://www.hindawi.com/journals/jc/2012/137653/","timestamp":"2014-04-17T17:32:36Z","content_type":null,"content_length":"123637","record_id":"<urn:uuid:a6806c2b-d7c1-42be-a2f8-b5c9fa09e307>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00307-ip-10-147-4-33.ec2.internal.warc.gz"} |
Combining probability distributions from experts in risk analysis
Results 1 - 10 of 71
- In KDD , 2009
"... The explosion of user-generated content on the Web has led to new opportunities and significant challenges for companies, that are increasingly concerned about monitoring the discussion around
their products. Tracking such discussion on weblogs, provides useful insight on how to improve products or ..."
Cited by 35 (6 self)
Add to MetaCart
The explosion of user-generated content on the Web has led to new opportunities and significant challenges for companies, that are increasingly concerned about monitoring the discussion around their
products. Tracking such discussion on weblogs, provides useful insight on how to improve products or market them more effectively. An important component of such analysis is to characterize the
sentiment expressed in blogs about specific brands and products. Sentiment Analysis focuses on this task of automatically identifying whether a piece of text expresses a positive or negative opinion
about the subject matter. Most previous work in this area uses prior lexical knowledge in terms of the sentiment-polarity of words. In contrast, some recent approaches treat the task as a text
classification problem, where they learn to classify sentiment based only on labeled training data. In this paper, we present a unified framework in which one can use background lexical information
in terms of word-class associations, and refine this information for specific domains using any available training examples. Empirical results on diverse domains show that our approach performs
better than using background knowledge or training data in isolation, as well as alternative approaches to using lexical knowledge with text classification.
- NATIONAL INSTITUTE OF ECONOMIC AND SOCIAL RESEARCH DISCUSSION PAPER NO , 2005
"... This paper brings together two important but hitherto largely unrelated areas of the forecasting literature, density forecasting and forecast combination. It proposes a simple data-driven
approach to direct combination of density forecasts using optimal weights. These optimal weights are those weigh ..."
Cited by 23 (9 self)
Add to MetaCart
This paper brings together two important but hitherto largely unrelated areas of the forecasting literature, density forecasting and forecast combination. It proposes a simple data-driven approach to
direct combination of density forecasts using optimal weights. These optimal weights are those weights that minimize the ‘distance’, as measured by the Kullback-Leibler information criterion, between
the forecasted and true but unknown density. We explain how this minimization both can and should be achieved. Comparisons with the optimal combination of point forecasts are made. An application to
simple time-series density forecasts and two widely used published density forecasts for U.K. inflation, namely the Bank of England and NIESR “fan” charts, illustrates that combination can but need
not always help.
, 2004
"... We consider a panel of experts asked to assign probabilities to events, both logically simple and complex. The events evaluated by different experts are based on overlapping sets of variables
but may otherwise be distinct. The union of all the judgments will likely be probabilistic incoherent. We ad ..."
Cited by 19 (4 self)
Add to MetaCart
We consider a panel of experts asked to assign probabilities to events, both logically simple and complex. The events evaluated by different experts are based on overlapping sets of variables but may
otherwise be distinct. The union of all the judgments will likely be probabilistic incoherent. We address the problem of revising the probability estimates of the panel so as to produce a coherent
set that best represents the group’s expertise.
- In Proceedings of the Sixth ACM Conference on Electronic Commerce (EC’05 , 2005
"... In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation. We leverage a unique data source of almost 2000 people’s subjective probability
judgments on 2003 US National Football League games and compare with the “market probabilities ” given by two dif ..."
Cited by 14 (7 self)
Add to MetaCart
In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation. We leverage a unique data source of almost 2000 people’s subjective probability judgments on
2003 US National Football League games and compare with the “market probabilities ” given by two different information markets on exactly the same events. We combine assessments of multiple experts
via linear and logarithmic aggregation functions to form pooled predictions. Prices in information markets are used to derive market predictions. Our results show that, at the same time point ahead
of the game, information markets provide as accurate predictions as pooled expert assessments. In screening pooled expert predictions, we find that arithmetic average is a robust and efficient
pooling function; weighting expert assessments according to their past performance does not improve accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than
linear aggregation functions. The results provide insights into the predictive performance of information markets, and the relative merits of selecting among various opinion pooling methods.
, 2001
"... We consider the task of aggregating beliefs of several experts. We assume that these beliefs are represented as probability distributions. We argue that the evaluation of any aggregation
technique depends on the semantic context of this task. We propose a framework, in which we assume that nature ge ..."
Cited by 14 (0 self)
Add to MetaCart
We consider the task of aggregating beliefs of several experts. We assume that these beliefs are represented as probability distributions. We argue that the evaluation of any aggregation technique
depends on the semantic context of this task. We propose a framework, in which we assume that nature generates samples from a `true' distribution and different experts form their beliefs based on the
subsets of the data they have a chance to observe. Naturally, the optimal aggregate distribution would be the one learned from the combined sample sets. Such a formulation leads to a natural way to
measure the accuracy of the aggregation mechanism. We show that the well-known aggregation operator LinOP is ideally suited for that task. We propose a LinOP-based learning algorithm, inspired by the
techniques developed for Bayesian learning, which aggregates the experts' distributions represented as Bayesian networks. We show experimentally that this algorithm performs well in practice. 1
- In International Joint Conference on Autonomous Agents and Multiagent Systems , 2006
"... Future market conditions can be a pivotal factor in making business decisions. We present and evaluate methods used by our agent, Deep Maize, to forecast market prices in the Trading Agent
Competition Supply Chain Management Game. As a guiding principle we seek to exploit as many sources of availabl ..."
Cited by 14 (2 self)
Add to MetaCart
Future market conditions can be a pivotal factor in making business decisions. We present and evaluate methods used by our agent, Deep Maize, to forecast market prices in the Trading Agent
Competition Supply Chain Management Game. As a guiding principle we seek to exploit as many sources of available information as possible to inform predictions. Since information comes in several
different forms, we integrate well-known methods in a novel way to make predictions. The core of our predictor is a nearest-neighbors machine learning algorithm that identifies historical instances
with similar economic indicators. We augment this with an online learning procedure that transforms the predictions by optimizing a scoring rule. This allows us to select more relevant historical
contexts using additional information available during individual games. We also explore the advantages of two different representations for predicting price distributions. One uses absolute prices,
and the other uses statistics of price time series to exploit local stability. We evaluate these methods using both data from the 2005 tournament final round and additional simulations. We compare
several variations of our predictor to one another and a baseline predictor similar to those used by many other tournament agents. We show substantial improvements over the baseline predictor, and
demonstrate that each element of our predictor contributes to improved performance.
- Risk Anal , 1999
"... Risk assessors attempting to use probabilistic approaches to describe uncertainty often find themselves in a data-sparse situation: available data are only partially relevant to the parameter of
interest, so one needs to adjust empirical distributions, use explicit judgmental distributions, or colle ..."
Cited by 11 (1 self)
Add to MetaCart
Risk assessors attempting to use probabilistic approaches to describe uncertainty often find themselves in a data-sparse situation: available data are only partially relevant to the parameter of
interest, so one needs to adjust empirical distributions, use explicit judgmental distributions, or collect new data. In determining whether or not to collect additional data, whether by measurement
or by elicitation of experts, it is useful to consider the expected value of the additional information. The expected value of information depends on the prior distribution used to represent current
information; if the prior distribution is too narrow, in many risk-analytic cases the calculated expected value of information will be biased downward. The well-documented tendency toward
overconfidence, including the neglect of potential surprise, suggests this bias may be substantial. We examine the expected value of information, including the role of surprise, test for bias in
estimating the expected value of information, and suggest procedures to guard against overconfidence and underestimation of the expected value of information when developing prior distributions and
when combining distributions obtained from multiple experts. The methods are illustrated with applications to potential carcinogens in food, commercial energy demand, and global climate change. KEY
WORDS: Probability; uncertainty; data; risk assessment. 1.
, 2005
"... A vision-based robot localization system must be robust: able to keep track of the position of the robot at any time even if illumination conditions change and, in the extreme case of a failure,
able to efficiently recover the correct position of the robot. With this objective in mind, we enhance t ..."
Cited by 10 (2 self)
Add to MetaCart
A vision-based robot localization system must be robust: able to keep track of the position of the robot at any time even if illumination conditions change and, in the extreme case of a failure, able
to efficiently recover the correct position of the robot. With this objective in mind, we enhance the existing appearance-based robot localization framework in two directions by exploiting the use of
a stereo camera mounted on a pan-and-tilt device. First, we move from the classical passive appearance-based localization framework to an active one where the robot sometimes executes actions with
the only purpose of gaining information about its location in the environment. Along this line, we introduce an entropy-based criterion for action selection that can be efficiently evaluated in our
probabilistic localization system. The execution of the actions selected using this criterion allows the robot to quickly find out its position in case it gets lost. Secondly, we introduce the use of
depth maps obtained with the stereo cameras. The information provided by depth maps is less sensitive to changes of illumination than that provided by plain images. The main drawback of depth maps is
that they include missing values: points for which it is not possible to reliably determine depth information. The presence of missing values makes Principal Component Analysis (the standard method
used to compress images in the appearance-based framework) unfeasible. We describe a novel Expectation-Maximization algorithm to determine the principal components of a data set including missing
values and we apply it to depth maps. The experiments we present show that the combination of the active localization with the use of depth maps gives an efficient and robust appearance-based robot
localization system.
, 2002
"... Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe a set of possible values of each unknown quantity (this set is usually an interval) ,
plus any additional information that we may have about the probability of different values within this set. ..."
Cited by 9 (4 self)
Add to MetaCart
Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe a set of possible values of each unknown quantity (this set is usually an interval) , plus
any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with the situations in which we have a complete
information about the probabilities; in real life, however, we often have only partial information about them. We therefore need to describe methods of handling such partial information in risk
analysis. Several such techniques have been presented, often on a heuristic basis. The main goal of this paper is to provide a justification for a general second-order formalism for handling
different types of uncertainty. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=959881","timestamp":"2014-04-16T09:01:26Z","content_type":null,"content_length":"41001","record_id":"<urn:uuid:f71669bf-3559-4d3a-b6ee-d9fff44c6b5a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00083-ip-10-147-4-33.ec2.internal.warc.gz"} |
Python Modules
stats.py
  [NOTE: In something like 2003, I was asked if this stats.py library could be incorporated into scipy as scipy.stats. I agreed and since then, considerable improvements and additions have been made within scipy.stats. I have therefore not been maintaining the stats.py here in any regular fashion. I leave it posted for those who want stats capabilities for lists and tuples, or who do not want to or cannot install scipy. See also python-statlib.] A collection of statistical functions, ranging from descriptive statistics (mean, median, histograms, variance, skew, kurtosis, etc.) to inferential statistics (t-tests, F-tests, chi-square, etc.). Originally, the functions were defined for operation on lists and, if Numeric is installed, also defined for array arguments. THIS HAS NOW BEEN UPDATED TO BE COMPATIBLE WITH THE NEW numpy AND AS SUCH MAY FAIL ON Numeric ARRAY ARGUMENTS. ALSO AVAILABLE IS statstest.py FOR TESTING THE INSTALL. THIS VERSION UPLOADED ON 2008-01-03, INCLUDING MIT-LIKE LICENSE. [If you want the previous version of stats.py.] For a fairly complete, though not very pretty, set of tests of the stats.py, pstat.py and io.py modules, get the zip file below. Version 0.6 of stats.py uploaded on 2003-02-07. REQUIRES pstat.py (v0.3 or later) and io.py (v0.1 or later).

pstat.py
  A collection of list manipulation functions based on the |Stat ("pipe-stat") programs written by Gary Perlman. Allows things like column extraction (from a "2D" list of lists) and row-extraction based on criteria, as well as some file manipulation abilities (for which many python modules already exist). Version 0.5 uploaded on 2008-01-03, which is NOW COMPATIBLE WITH NUMPY AND WITH AN MIT-LIKE LICENSE. [If you want the previous version of pstat.py.]

io.py
  A collection of input/output routines for flat space/tab-delimited text files and "flat" binary files, including some special file handlers for MRI files. NUMPY-COMPATIBLE VERSION uploaded 2008-01-03. [If you want the previous version of io.py.] REQUIRES: pstat.py.

glplot.py
  A thin (barely there) glue between PyOpenGL and wxPython to create quick-and-dirty multiline plots (with or without errorbars) that allow zooming (a little like Matlab). The code isn't pretty, but it works. Requires PyOpenGL, wxPython and Numeric. Tested on Linux and win32 platforms. Version 0.1 uploaded 2003-12-02.

statstest.zip
  A collection of datasets and testing programs that run each function in the previous version of stats.py.
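Since the note above points to scipy.stats as the maintained successor, here is a minimal sketch (mine, not from the module docs) of the same descriptive-plus-inferential workflow there; the sample data are invented purely for illustration.

    # Minimal sketch of the stats.py-style workflow using scipy.stats, the
    # maintained successor mentioned in the note above. Sample data are
    # invented purely for illustration.
    from scipy import stats

    a = [2.1, 2.5, 2.2, 2.8, 2.6]
    b = [1.9, 2.0, 2.3, 1.8, 2.1]

    print(stats.describe(a))          # n, min/max, mean, variance, skew, kurtosis
    t, p = stats.ttest_ind(a, b)      # two-sample t-test, as in the old stats.py
    print("t = %.2f, p = %.3f" % (t, p))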
November 29, 2002
This Week's Finds in Mathematical Physics (Week 189)
John Baez
Being deeply in love with space and time, I always like to read about rulers and clocks. There's a bunch of articles about time in the September issue of Scientific American, including a neat one
about the latest progress in chronometry:
1) W. Wayt Gibbs, Ultimate clocks, Scientific American, September 2002, pp. 86-93.
The most accurate clocks in common use are atomic clocks that make use of the radiation emitted by cesium as it transitions between two energy levels near the ground state. For $63,000 you can buy a
clock like this that keeps time good to a microsecond per month, or about 5 parts in 10^13. The primary time standard of the National Institute of Standards and Technology is a cesium clock accurate
to 1 part in 10^15. In fact, the second was defined in 1967 to consist of 9,192,631,770 periods of the radiation emitted by cesium as it undergoes this transition.
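As a quick back-of-the-envelope check (mine, not from the article), a microsecond per month does indeed come out to a few parts in 10^13:

    # Back-of-the-envelope check: a microsecond per month as a fractional
    # error, and the length of one cesium "tick". (My calculation, not
    # from the article.)
    seconds_per_month = 30 * 24 * 3600           # about 2.6e6 s
    print(1e-6 / seconds_per_month)              # ~3.9e-13, a few parts in 10^13

    f_cs = 9192631770                            # Hz, the defining cesium transition
    print(1.0 / f_cs)                            # one period lasts ~1.1e-10 s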
However, more accurate atomic clocks are in the offing which use different elements. The main source of error in cesium clocks is collisions between the atoms, which are cooled to less than 2
microkelvins to reduce Doppler shifting of the radiation. But cesium has a big cross-section at these low temperatures, so Scott Diddams and his collaborators at the National Institute of Standards
and Technology have switched to rubidium, which should give a clock good to 1 part in 10^17.
To completely avoid the effect of atomic collisions, you could try to build a clock that uses the radiation emitted by just one atom. Diddams' group has already tested a clock that uses the light
emitted by a single atom of mercury:
2) Scott A. Diddams et al, An optical clock based on a single trapped Hg-199+ ion, Science, 293 (August 3 2001), 825-828.
However, the frequency of this transition is easily affected by magnetic fields, so Thomas Udem, Theodor Haensch and others at the Max Planck Institute for Quantum Optics are investigating a clock
based on a single indium ion that could reach an accuracy of about 1 part in 10^18.
When you reach accuracies like this, relativistic corrections become very important. Special relativity causes a time dilation of 1 part in 10^17 when you walk down the street at normal speed.
General relativity causes a gravitational time dilation of the same order when you lower your watch 10 centimeters! Researchers at the NIST already need to correct for gravitational time dilation
when they compare atomic clocks on different floors of their building, but as the accuracy of clocks continues to increase, they'll have to work ever harder to keep track of small effects due to the
tides, local variations in geology, and so on.
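Both figures follow from the leading-order formulas v^2/(2c^2) (special relativity) and gh/c^2 (weak-field general relativity). Here is the arithmetic, with the walking speed my own assumption:

    # The two relativistic corrections quoted above, from the leading-order
    # formulas v^2/(2 c^2) and g*h/c^2. The walking speed is an assumed value.
    c = 3.0e8                     # m/s
    g = 9.8                       # m/s^2

    v = 1.4                       # m/s, a typical walking pace (assumption)
    print(v**2 / (2 * c**2))      # ~1.1e-17: special-relativistic time dilation

    h = 0.10                      # m: lowering your watch 10 centimeters
    print(g * h / c**2)           # ~1.1e-17: gravitational time dilation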
So ultimately, it could be small irregularities in the gravitational field, rather than limitations of technology, that limit our timekeeping ability. Where will it all end? Only time will tell.
Meanwhile, work continues on LIGO - the Laser Interferometer Gravitational-Wave Observatory. As you probably know, this consists of two facilities: one in Livingston, Louisiana, and one in Hanford,
Washington. Each facility consists of laser beams bouncing back and forth along two 4-kilometer-long tubes arranged in an L shape. As a gravitational wave passes by, the tubes should alternately
stretch and squash - very slightly, but hopefully enough to be detected via changing interference patterns in the laser beam.
LIGO is coming into operation in stages. The first stage, called LIGO I, is supposed to allow detection of gravitational waves made by binary neutron stars within 20 megaparsecs of us. These binaries
emit lots of gravitational radiation, spiral into each other, and eventually merge. In the last few minutes of this process you've got two objects heavier than the sun whipping around each other
about 100 times a second, faster and faster, and they should emit a "chirp" of gravitational waves increasing in amplitude and frequency until the final merger. It's these "chirps" that LIGO is
optimized for detecting. Later, in LIGO II, they'll try to boost the sensitivity to allow detection of inspiralling binary neutron stars within 300 megaparsecs of us.
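To get a feel for the chirp, one can integrate the leading-order (Newtonian quadrupole) formula for the gravitational-wave frequency, df/dt = (96/5) pi^(8/3) (G Mc/c^3)^(5/3) f^(11/3), where Mc is the "chirp mass" of the binary and the wave frequency is twice the orbital frequency. The sketch below does this for two 1.4-solar-mass neutron stars; it's a textbook approximation of mine, not an actual LIGO template:

    # Sketch of the leading-order (Newtonian quadrupole) chirp for two
    # 1.4-solar-mass neutron stars. A textbook approximation, not an
    # actual LIGO waveform template.
    import math

    G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
    m1 = m2 = 1.4 * Msun
    Mc = (m1 * m2)**0.6 / (m1 + m2)**0.2      # chirp mass, ~1.2 Msun
    theta = G * Mc / c**3                     # characteristic time, ~6e-6 s

    def time_to_merger(f):
        """Seconds remaining when the gravitational-wave frequency is f (Hz)."""
        return (5.0/256.0) * math.pi**(-8.0/3.0) * theta**(-5.0/3.0) * f**(-8.0/3.0)

    print(time_to_merger(10.0))    # ~1000 s: the signal enters LIGO's band
    print(time_to_merger(200.0))   # ~0.3 s: the stars now orbit ~100 times a second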
To give you an idea of what these distances are like: the radius of the Milky Way is about 15 kiloparsecs. The distance to the Andromeda galaxy is about 700 kiloparsecs. The radius of the "Local Group"
consisting of three dozen nearby galaxies is about 2 megaparsecs. The distance to the "Virgo Cluster", the nearest large cluster of galaxies, is about 15 megaparsecs. The radius of the observable
universe is roughly 3000 megaparsecs. So, if everything works as planned, we'll be able to see quite far with gravitational waves.
However, binary neutron stars don't merge very often! The current best guess is that with LIGO I we will be able to see such an event somewhere between once every 3000 years and once every 3 years. I
know, that's not a very precise estimate! Luckily, the volume of space we survey grows as the cube of the distance we can see out to, so LIGO II should see between 1 and 1000 events per year.
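The arithmetic behind that boost is just the cube of the range ratio:

    # The volume-scaling arithmetic behind the quoted rates.
    boost = (300.0 / 20.0)**3           # ~3400-fold more volume surveyed
    print(boost)
    print(boost / 3000.0, boost / 3.0)  # ~1 to ~1000 events per year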
For a lot more information, including other things we might see, try:
3) Curt Cutler and Kip Thorne, An overview of gravitational-wave sources, available as gr-qc/0204090.
The really scary thing is how good LIGO needs to be to work as planned. Roughly speaking, LIGO I aims to detect gravitational waves that distort distances by about 1 part in 10^21. Since the laser
bounces back and forth between the mirrors about 50 times, the effective length of the detector is 200 kilometers. Multiply this by 10^-21 and you get 2 x 10^-16 meters.
By comparison, the radius of a proton is 8 x 10^-16 meters! So, we're talking about measuring distances to within a quarter of a proton radius! And that's just LIGO I. LIGO II aims to detect waves
that distort distances by a mere 2 parts in 10^23, so it needs to do 50 times better.
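Spelled out as a calculation:

    # The displacement LIGO must resolve, from the figures above.
    L_eff = 50 * 4000.0            # m: ~50 bounces along a 4 km arm
    dx_I = L_eff * 1e-21
    print(dx_I)                    # 2e-16 m

    r_proton = 8e-16               # m
    print(dx_I / r_proton)         # 0.25: a quarter of a proton radius
    print(L_eff * 2e-23)           # LIGO II target: 4e-18 m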
I should admit that I'm being a bit misleading. The goal is not really to measure distances, but really to measure vibrations with a given frequency. However, it will still be an amazing feat... if everything
goes as planned.
But how's it actually going?
Well, on October 20th, 2000, the Hanford installation achieved "first lock":
4) First lock at LIGO Hanford Observatory, http://www.ligo.caltech.edu/LIGO_web/firstlock/
What this means is that the laser beams locked into phase for a little while. To do this, the mirrors must maintain a positional accuracy of about one wavelength of infrared light - that is, about 10^-6 meters. Nice, but still 10 orders of magnitude from what's ultimately required.
By November 2000, the Hanford installation had been operational for long enough to notice that the daily tides stretch the 2-kilometer long tubes by about a tenth of a millimeter. Of course, this is
an enormous amount by LIGO standards! Luckily, the facility is equipped with special devices that can compensate for this motion.
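Just how enormous? A quick comparison:

    # The tidal stretch compared with a gravitational-wave signal.
    tidal_strain = 1e-4 / 2000.0       # 0.1 mm over the 2 km tubes
    print(tidal_strain)                # 5e-8
    print(tidal_strain / 1e-21)        # ~5e13 times the LIGO I target strain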
On February 28th, 2001, a magnitude 6.8 earthquake hit Olympia, Washington. This threw the Hanford LIGO facility out of alignment:
5) Washington quake rattles Hanford Observatory, http://www.ligo.caltech.edu/LIGO_web/news/0228quake.html
To go inside and fix things, they needed to open a carefully evacuated chamber, which when functioning is evacuated to 1 trillionth of normal atmospheric pressure. Bummer!
In the spring of 2001, the Livingston installation achieved first lock.
Then, in a series of "engineering runs", both facilities identified and tried to minimize all sources of noise. For example: microseismic noise, caused mainly by ocean waves hitting distant shores.
Thermal noise of various sorts, minimized by cooling things to 2 kelvin, hanging mirrors attached to fused quartz test masses on steel wires... and many other clever tricks! Shot noise, meaning the
uncertainty in the laser beam phase due to quantum mechanics. Radiation pressure noise, from the lasers pushing on the mirrors! Noise from residual gas in the evacuated tubes. And so on.
The battle against noise and other sources of error led in some strange directions. The Livingston facility had to remove a cattle guard at the entrance because of the microseismic noise produced
whenever a car rolled over it. More annoyingly, it turned out that commercial logging near this facility caused real trouble every time a tree fell. And at the Hanford facility, wind-blown
tumbleweeds piling up along the pipe would sometimes throw the beam out of alignment, thanks to their gravitational pull.
The first "science run" was scheduled for June 29th, 2002. This means that both the Hanford and Livingston facilities would run simultaneously and actually collect data for the purposes of doing
science - still rather crude data, but good enough to put new upper bounds on the strength of the gravitational waves that are out there. By this time, the Livingston detector was able to notice
changes in distance of one part in 10^20. I assume the Hanford one was similar....
Unfortunately, on June 28th, one day before the scheduled run, there was a magnitude 7.2 earthquake on the border of China and Russia! Earthquakes above magnitude 7 on the Richter scale happen about
a dozen times a year. They shake the precision mirrors of LIGO more than the system can counteract, but usually after 15 minutes the interferometer comes back under control. This time, however, the
automatic control system at the Hanford facility became confused, and the laser beam was reflected in such a way that a wire holding up a mirror became overheated and broke! Again, all this occurred
in an evacuated chamber, which had to be vented. It took 2 months to fix everything and make sure it wouldn't happen again:
6) LIGO's first science run: a special report, http://www.ligo.caltech.edu/LIGO_web/0209news/0209s1r1.html
But by August 23, they were back in business! Both LIGO detectors ran in coordination with GEO 600, a gravitational wave detector in Hannover run by a UK/German team. This is important, because a
real gravitational wave should be detected by all 3 units, while a falling tree or other coincidental noise burst should not. They are now analyzing the data and should come out with a paper soon.
Don't hold your breath: it's very unlikely that they'll see any gravitational waves until they boost the sensitivity more. The LIGO folks are in this for the long haul...
But meanwhile, going down all the way to the Planck scale, I'd like to talk about a shocking new development in loop quantum gravity:
7) Olaf Dreyer, Quasinormal modes, the area spectrum, and black hole entropy, gr-qc/0211076.
First for some historical background. In 1975, Hawking showed that black holes emit thermal radiation due to quantum effects:
8) Stephen Hawking, Particle creation by black holes, Commun. Math. Phys. 43 (1975), 199-220.
Using this one can assign a temperature to a black hole, and then use thermodynamic relations to calculate an entropy for it. This entropy is
S = A/4
where A is the area of the event horizon, and I'm using Planck units, where c = G = ħ = k = 1.
Since then Hawking's calculation has been confirmed in a myriad of ways. However, one would really like to compute the entropy of a black hole using statistical mechanics! Ever since Boltzmann, we
have known that the entropy of a system is given by
S = ln N
where N is the number of microstates. But what are the microstates of a black hole? In other words, if you have a black hole of area A, what are all the states it could be in that look the same from
a distance, but differ in tiny microscopic ways?
There is no answer to this in general relativity, because general relativity is a classical theory, and Hawking's formula S = A/4 really involves Planck's constant, since the area is being measured
in units of the Planck length squared, ħ G / c^3. So, we really need a theory of quantum gravity to identify the microstates of a black hole.
In the late 1990s, people decided to compute the entropy of black holes in the framework of loop quantum gravity. After some pioneering work by Rovelli and Smolin, a grad student named Kirill Krasnov
noticed that the event horizon of a nonrotating black hole could be described using some equations known as "Chern-Simons theory". He began working with his advisor, Abhay Ashtekar, on using this to
compute the entropy of such a black hole. Since I'd been trying to apply Chern-Simons theory to quantum gravity for quite a while, I decided to jump aboard and join in the fun. So did Alejandro
Corichi, another student of Ashtekar.
By 1997 we felt we were getting somewhere, and we came out with a short note outlining our approach:
9) Abhay Ashtekar, John Baez, Alejandro Corichi and Kirill Krasnov, Quantum geometry and black hole entropy, Phys. Rev. Lett. 80 (1998) 904-907, also available at gr-qc/9710007.
Filling in the details took about 3 more years, and was quite exhausting. We chopped the job into two parts, a classical part and a quantum part:
10) Abhay Ashtekar, Alejandro Corichi and Kirill Krasnov, Isolated horizons: the classical phase space, Adv. Theor. Math. Phys. 3 (2000), 418-471, available as gr-qc/9905089.
Abhay Ashtekar, John Baez and Kirill Krasnov, Quantum geometry of isolated horizons and black hole entropy, Adv. Theor. Math. Phys. 4 (2000), 1-94, available as gr-qc/0005126.
The details are complicated, but the final upshot is quite simple. In loop quantum gravity, there is a basis of states given by "spin networks". Roughly speaking, these are graphs with edges labelled
by spins
j = 0, 1/2, 1, ...
Any surface in space gets its area from the spin network edges that puncture it, and a spin-j edge contributes an area of
8 π γ sqrt(j(j+1))
Here γ is a dimensionless constant called the "Barbero-Immirzi parameter" - a puzzling, annoying but so far unavoidable feature of loop quantum gravity! Dreyer's work is exciting because it sheds new
light on this puzzling parameter.
If we have a black hole of area close to A, we have
A ~ SUM 8 π γ sqrt(j(j+1))
where ~ means "approximately equal to", and we sum over spin network edges puncturing the event horizon. But it turns out that the geometry of the event horizon is described not only by the spins j
labelling each edge, but also by some numbers m for each edge, which must lie in the range
m = -j, -j+1, ..., j-1, j
Since there are 2j+1 choices of m for a given j, there are
PRODUCT (2j+1)
microstates of the black hole for any choice of spins j. Here the product is taken over all punctures. To get the total number of microstates, we must then sum this quantity over all choices of the
spins j satisfying
A ~ SUM 8 π γ sqrt(j(j+1)).
This is a nice math problem. It turns out that for a large black hole, the whopping majority of all microstates come from taking all the spins to be as small as possible while still contributing some
area. So, we can just count the microstates where all the spins j equal 1/2. In a state like this, m can take just two values at each puncture.
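In case you don't believe that claim, here's a quick numerical sanity check - a little Python sketch of the "entropy per unit area" a spin-j puncture can contribute, namely ln(2j+1)/sqrt(j(j+1)); at fixed total area, the spin that maximizes this ratio dominates the count:

    from math import log, sqrt

    # entropy each spin-j puncture buys per unit of the area it
    # contributes, in units of 8 pi gamma
    for twice_j in range(1, 9):
        j = twice_j / 2
        print(j, log(2*j + 1) / sqrt(j * (j + 1)))

You'll see the ratio is largest at j = 1/2 (about 0.80) and decreases steadily from there.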
In a state where all the spins are 1/2, the number of spin network edges puncturing the horizon, say n, must satisfy
A ~ 8 π γ sqrt(3/4) n
= 4 π γ sqrt(3) n
so the number of punctures must be
n ~ A / 4 π γ sqrt(3)
Since m can take two values at each puncture, the number of microstates we get this way is
N = 2^n
and the entropy is
S = ln N
= (ln 2) n
ln 2
~ ------------------ A
4 π γ sqrt(3)
Good! Entropy is proportional to area, at least for large black holes! For very small ones we need to do a more careful count of microstates, and we get "quantum corrections" to Hawking's formula -
but that's another story. Right now, the more important thing is that nasty Barbero-Immirzi parameter. To get the above formula to match Hawking's formula S = A/4 we need
ln 2
γ = ----------
π sqrt(3)
On the one hand this is good: we've determined γ! We can also check that the same value works for electrically charged black holes and other sorts of black holes. On the other hand, it's annoying
that we can only determine it with the help of Hawking's calculation. We'd really like to derive the right value of the Barbero-Immirzi parameter from within loop quantum gravity. But this seems
hard, in part because it's such a bizarre number.
Now for an extra twist - something that we thought about but unfortunately decided not to put in our paper. If you've studied the quantum mechanics of angular momentum, a lot of these formulas
involving j's and m's should look familiar to you. That's because loop quantum gravity is usually treated as a gauge theory with gauge group SU(2), which is also the group used to study angular momentum.
But we can also formulate gravity as a gauge theory with gauge group SO(3), the usual rotation group! Classically it makes no difference. But in loop quantum gravity, it has the effect of ruling out
half-integer spins. This means that j = 1/2 is no longer the smallest nonzero spin. Instead, it's j = 1. We can easily redo the whole calculation using SO(3). Not much changes, but we get a different
value of the Barbero-Immirzi parameter. When all the spin network edges puncturing the event horizon have j = 1, we get
A ~ 8 π γ sqrt(2) n
and thus
n ~ A / 8 π γ sqrt(2)
There are now three allowed m values for each puncture, so
N = 3^n
and the entropy is
S = ln N
= (ln 3) n
ln 3
~ ------------------ A
8 π γ sqrt(2)
This matches Hawking's S = A/4 if we take
ln 3
γ = -----------
2 π sqrt(2)
Again, the same number works for electrically charged and other black holes, as long as we use the SO(3) version of loop quantum gravity. Indeed, the SO(3) theory seems just as good as the SU(2) theory
unless you want to include spin-1/2 particles. As long as you don't do that, they're different but equally good quantum theories that look the same classically. But since we did want to eventually
include spin-1/2 particles, we focused on the SU(2) theory.
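By the way, the two candidate values of the Barbero-Immirzi parameter are numerically quite close. Here's a tiny Python computation, just to have the decimals handy:

    from math import log, pi, sqrt

    gamma_su2 = log(2) / (pi * sqrt(3))       # from the spin-1/2 count
    gamma_so3 = log(3) / (2 * pi * sqrt(2))   # from the spin-1 count
    print(gamma_su2)   # about 0.1274
    print(gamma_so3)   # about 0.1236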
Now for the big news. Last Sunday, Olaf Dreyer, a postdoc at the Perimeter Institute who had been a student of Ashtekar, came out with an amazing paper that could change everything!
In this paper, he calculates the Barbero-Immirzi parameter in a completely new way, using numerical results on the vibrational modes of classical black holes. His answer seems to agree with that
obtained by the above calculation... but only if we use SO(3) instead of SU(2) as the gauge group!
It's very hard to know what this means, but the calculation itself is so cool that I want to tell you how it goes.
Dreyer's new method only uses a tiny bit of information about loop quantum gravity - and it doesn't use Hawking's work at all. It's not a rigorous calculation in a full-fledged theory of quantum
gravity; it's actually very similar to Bohr's early calculation of the spectrum of hydrogen.
According to Bohr, if classically a system can undergo periodic motion at some frequency ω, then in the quantum theory it can emit or absorb quanta of radiation with energy
Δ E = ω
But the energy of a nonrotating black hole is just its mass:
E = M
and this is related to the area of its event horizon by
A = 16 π M^2
so we have
Δ A = 32 π M Δ M
= 32 π M ω
Now for something from loop quantum gravity: if we work in the SO(3) theory, it's natural to guess that this change in area comes from the appearance or disappearance of a single spin-1 edge
puncturing the horizon, so that
Δ A = 8 π γ sqrt(2)
Putting these equations together, we get
        4 M ω
γ = -----------
      sqrt(2)
And now for the miracle! A nonrotating black hole will exhibit damped oscillations when you perturb it momentarily in any way, and there are different vibrational modes, each with its own
characteristic frequency and damping. In 1993, Hans-Peter Nollert used computer calculations to show that in the limit of large damping, the frequency of these modes approaches a specific number
depending only on the mass of the black hole:
ω = 0.04371235 / M
In 1998, Shahar Hod noticed that the number here may equal
ln(3) / 8 π = 0.043712394070757472250...
They agree to 6 significant figures!
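You can check that agreement yourself in a couple of lines of Python:

    from math import log, pi

    hod = log(3) / (8 * pi)     # Hod's conjectured value
    nollert = 0.04371235        # Nollert's numerical result
    print(hod)                  # 0.043712394...
    print(abs(hod - nollert))   # about 4e-8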
Assuming Hod is right, Dreyer concludes that
ln 3
γ = -----------
2 π sqrt(2)
This is the same result that we got before!!! But it comes from very different reasoning.
If this reasoning holds up to scrutiny, something very interesting could be going on here: some nontrivial relation between semiclassical black hole thermodynamics, loop quantum gravity, and the
vibrational modes of classical black holes!
On the other hand, maybe it's all just a numerical coincidence. So, I sure hope somebody redoes Nollert's calculation more accurately, or perhaps does it analytically, to see what's going on. Maybe
someone reading this can do it! I can't stand the suspense.
Here are some references in case you want to calculate this number yourself, and either verify or kill this amazing idea. Nollert's original calculation appears in
11) Hans-Peter Nollert, Quasinormal modes of Schwarzschild black holes: the determination of quasinormal frequencies with very large imaginary parts, Phys. Rev. D47 (1993), 5253-5258.
It was subsequently confirmed by Andersson:
12) Nils Andersson, On the asymptotic distribution of quasinormal-mode frequencies for Schwarzschild black holes, Class. Quant. Grav. 10 (1993), L61-L67.
Technically the vibrational modes of a black hole are called "quasinormal modes". You can read more about them here:
13) Hans-Peter Nollert, Quasinormal modes: the characteristic `sound' of black holes and neutron stars, Class. Quant. Grav. 16 (1999), R159-R216.
K. D. Kokkotas and B. G. Schmidt, Quasi-normal modes of stars and black holes, Living Reviews in Relativity 2 (1999) 2, online at http://www.livingreviews.org/Articles/Volume2/1999-2kokkotas/ Also
available at gr-qc/9909058.
Hod's observation appears here:
14) Shahar Hod, Bohr's correspondence principle and the area spectrum of quantum black holes, Phys. Rev. Lett. 81 (1998), 4293, also available as gr-qc/9812002.
and was developed a bit further in:
15) Shahar Hod, Gravitation, the quantum, and Bohr's correspondence principle, Gen. Rel. Grav. 31 (1999) 1639, also available as gr-qc/0002002.
He goes so far as to argue that the "quantum of area" is 4 ln 3. This matches the area due to a spin-1 puncture if the Barbero-Immirzi parameter has the value obtained by Dreyer:
ln 3
γ = -----------
2 π sqrt(2)
However, Hod believes the area eigenvalues of a black hole are evenly spaced, which disagrees with the results of loop quantum gravity. The idea of equally spaced area eigenvalues for a black hole
was originally championed by Bekenstein and Mukhanov:
16) Jacob D. Bekenstein, Lett. Nuovo Cimento 11 (1974), 467.
V. F. Mukhanov, Are black holes quantized?, JETP Lett. 44 (1986), 63-66.
Jacob D. Bekenstein and V. F. Mukhanov, Spectroscopy of the quantum black hole, Phys. Lett B360 (1995), 7-12.
and subsequently developed by many others as well. To get the thermodynamics of black holes to work out right, this forces them to assume an exponentially growing degeneracy of the eigenvalues.
However, this would lead to widely spaced spectral lines in the radiation even for large black holes, contrary to Hawking's calculations. Ashtekar has argued that this is implausible. In loop quantum
gravity, the area eigenvalues get very densely packed for a large black hole, since one is adding up lots of different numbers of the form
8 π γ sqrt(j(j+1)),
so one would not see widely spaced spectral lines in Hawking radiation from a large black hole.
Anyway, there are a lot of weird things here that I don't understand at all, like these quasinormal modes. Worse, it could all be just a coincidence. But, all of a sudden that Barbero-Immirzi
parameter is starting to smell a lot sweeter!
Afterword: Here is my reply to some questions posted by Ken Tucker on sci.physics.research:
In article <2202379a.0212050928.77c435d0@posting.google.com>, Ken S. Tucker wrote:
>Do you or anyone think we could directly verify g-waves with a
>properly constructed g-wave transmitter near the LIGO?
We can't generate strong enough gravitational waves to detect with LIGO. We only have a chance of detecting binary neutron stars because they generate a LOT of gravitational radiation right before
they spiral into each other. The reason is that we've got two stars, each more massive than the sun, each a few kilometers in diameter, perhaps a dozen kilometers apart, whipping around each other
about 100 times a second!
Try imagining that. It's pretty awesome.
Now, try making something like that yourself.
You see, even though we have the advantage of being able to get much closer to the LIGO detector than the binary neutron stars, this is still outweighed by the incredible power of the gravitational
radiation produced by binary neutron stars! These guys emit approximately 3 x 10^49 watts of power in their final moments. Even 1000 parsecs away, that means folks here on earth receive a flux of
about 3 x 10^5 watts per square centimeter of gravitational radiation.
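If you want to see where that flux figure comes from, just spread the power over a sphere of radius 1000 parsecs. Here it is in Python - the parsec-to-centimeter conversion is the only number not already quoted above:

    from math import pi

    power = 3e49              # watts, in the binary's final moments
    parsec = 3.086e18         # centimeters
    r = 1000 * parsec
    print(power / (4 * pi * r**2))   # about 2.5e5 W/cm^2 - roughly the 3 x 10^5 quoted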
There's nothing we can make here on earth that comes close to that. For comparison, let's take a steel cylinder 1 meter in radius and 20 meters long, and thus about 490 metric tons in mass. Now,
spin it end over end so fast that it almost rips apart due to the centrifugal force - that means about 4.5 cycles per second. You wouldn't want to get close to this thing! But it will radiate a
measly 2 x 10^-30 watts of gravitational radiation...
... that is, about 10^-79 times as much as the binary neutron star.
This is why the binary neutron star can be so much further away, yet still much easier to detect than any gravitational radiation we can make here.
By the way, don't confuse true gravitational radiation with a mere time-dependent gravitational potential. The latter is much easier to detect on LIGO; as I've described in another post which has not
appeared yet, even a tumbleweed flying past LIGO creates enough of a time-dependent gravitational potential for the device to detect.
>It would indeed be excellent to obtain a g-wave burst and a
>γ wave burst simultaneously. (even better is if the propagation
>rate were different then we'd have a cool yardstick).
We may see that from γ ray bursters, someday. We don't know how γ ray bursters work well enough to know how much gravitational radiation they produce.
We may also see simultaneous neutrino and gravitational-wave bursts from supernovae. This has been seriously studied. People saw neutrinos from the supernova 1987A. Figuring out how much
gravitational radiation to expect is tricky because only asymmetries in the supernova collapse/explosion create gravitational radiation. More precisely, one needs a time-dependent quadrupole moment
to get gravitational radiation.
>I'm wondering if it may be practically possible to generate g-waves
>to verify that this radiation in fact exists. In the threads I've studied
>(for example the thread "Gravitational Radiation Detection", around
>2000/01) this looks unlikely in our life times.
Yes, and I hope the figures above begin to explain why!
>I believe Hertz was able to transmit and receive EMR in his lab, to
>produce an unequivocal repeatable result. Such an experiment for
>g-waves would be a near holy grail.
>In Dr. Baez's post (2000/01/03) appears an equation for the
>g-wave Power output = 2/45 G M^2 L^4 w^6 / c^5, but I haven't
>been able to find a specific reference for the sensitivity of LIGO in
>units of power/area in the 100-300 Hz band.
That's because LIGO sensitivity is usually measured in different units. I don't know how much power per area LIGO can detect in the 100-300 hertz frequency band, but by the above figures, detecting a
binary neutron star 1000 parsecs away is equivalent to detecting roughly 3 x 10^5 watts / cm^2. This may seem like a hell of a lot of power per area, and it is, but gravity is such a weak force
compared to electromagnetism that one needs a hell of a lot more power per area to be able to detect it!
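For what it's worth, you can reproduce my steel cylinder estimate from the quadrupole formula you quote. Here's a Python sketch; note that I'm reading L as the half-length of the cylinder (10 meters), since that's the convention that reproduces the figure above:

    from math import pi

    G, c = 6.674e-11, 2.998e8     # SI units
    M = 4.9e5                     # kg: the 490 metric ton cylinder
    L = 10.0                      # m: half-length of the 20 m cylinder (assumed)
    w = 2 * pi * 4.5              # rad/s: 4.5 end-over-end cycles per second
    P = (2/45) * G * M**2 * L**4 * w**6 / c**5
    print(P)                      # about 1.5e-30 watts

That's the "measly 2 x 10^-30 watts" quoted above, to the accuracy of these round numbers.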
When Rutherford introduced me to Bohr he asked me what I was working on. I told him and he said, "How is it going?" I said, "I'm in difficulties." He said, "Are the difficulties mathematical or
physical?" I said, "I don't know." He said, "That's bad." - J. Robert Oppenheimer
© 2002 John Baez | {"url":"http://math.ucr.edu/home/baez/week189.html","timestamp":"2014-04-16T21:52:46Z","content_type":null,"content_length":"32423","record_id":"<urn:uuid:0a931dfe-0ad4-4afb-aef7-4db10ce59ded>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00630-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prove that the intersection of two subgroups of a group is again a subgroup
Prove that the intersection of two subgroups is a subgroup:
To show that a nonempty subset H of a group G is a subgroup, it suffices to show that `a,b in H => ab^(-1) in H`.
Let `H,K` be subgroups of `G`, and let `M = H cap K`.
Suppose `a,b in M`; then `a in H, b in H, a in K, b in K`.
Since H is a subgroup, `a,b in H ==> ab^(-1) in H`.
Since K is a subgroup, `a,b in K ==> ab^(-1) in K`.
So `a,b in M ==> ab^(-1) in H` and `ab^(-1) in K`, thus `ab^(-1) in M`, and M is a subgroup.
** We know that the intersection is nonempty, since H,K are subgroups implies that they both have the identity element of the group as their identity element. **
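Below is a quick computational sanity check of the proof (not a substitute for it) - a short Python script that builds two subgroups of the integers mod 12 under addition and verifies the one-step criterion on their intersection; the particular subgroups are just illustrative:

    # G = Z_12 under addition mod 12
    n = 12
    H = {(2 * k) % n for k in range(n)}   # subgroup generated by 2
    K = {(3 * k) % n for k in range(n)}   # subgroup generated by 3
    M = H & K                             # the intersection: {0, 6}
    # one-step criterion: a, b in M ==> a*b^(-1) in M
    # (the operation is +, so b^(-1) is -b mod n)
    assert all((a - b) % n in M for a in M for b in M)
    print(sorted(M))                      # [0, 6] -- again a subgroup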
Rank of a matrix
What does the rank of a matrix tells about the consistency of the given equation?
I'm not sure what consistency is when applied to matrices, but here's something I know about ranks: the rank of a matrix is equal to the number of linearly independent equations in the system.
A system of equations is "consistent" if there exists at least one solution to the system. If the rank of an n by n coefficient matrix is n, then there exists a unique solution and the system is consistent. If the rank is less than n, then the range of the matrix is a proper subspace of [itex]R^n[/itex], and there exists a solution if and only if the vector on the right hand side of the equation is in that subspace.
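A quick illustration with NumPy (the matrices here are made up for the example): the system Ax = b is consistent exactly when appending b to A does not raise the rank.

    import numpy as np

    A = np.array([[1., 2.], [2., 4.]])   # rank 1: rows are dependent
    b1 = np.array([[3.], [6.]])          # lies in the range of A
    b2 = np.array([[3.], [7.]])          # does not

    for b in (b1, b2):
        consistent = (np.linalg.matrix_rank(A) ==
                      np.linalg.matrix_rank(np.hstack([A, b])))
        print(consistent)                # True, then False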
Mplus Discussion >> Regressions Among Random Coefficients
Rich Jones posted on Thursday, February 17, 2000 - 8:38 am
I have a question about alternative parameterizations for latent growth modeling.
Two equivalent parameterizations of a LGM are to (1) freely estimate the correlation between initial status and change factors, or (2) regress the slope factor on the initial status factor (and
fixing the correlation to zero). These two parameterizations will not, however, produce equivalent parameter estimates for the mean and variance of the slope factor (although naturally the posterior
estimates of slopes and intercepts are equivalent in the two parameterizations).
However, when interest extends to how exogenous variables relate to initial status and growth factors, the two parameterizations can lead to very different inferences.
I'm afraid there is something I am missing regarding choosing models or interpreting output from the two parameterizations. Does anyone have any suggestions or references that might help?
Dieter Urban posted on Friday, February 18, 2000 - 8:13 am
you can find some information on a related problem in:
Rovine, M.J./Molenaar, P.C.M., 1998, The covariance between level and shape in the latent growth curve model with estimated basis vector coefficients. Methods of Psychological Research Online (3)
However, concerning your special problem: a bivariate regression should show the same results as a bivariate correlation. How did you get the difference between both?
Bengt O. Muthen posted on Friday, February 18, 2000 - 9:43 am
With an intercept and a slope factor and no covariates, the model that covaries them and the model that regresses the slope on the intercept gives the same fit and estimates of the growth factor
means, variances, and covariance. With covariates, the model interpretation differs for the two alternatives. Regressing the slope on the intercept, the covariate has both a direct and indirect
effect on the slope so the coefficient for the slope regressed on the covariate is different because it is a partial regression coefficient. In my view, the choice between the two models should be
substantively-driven. The regression approach may be motivated if the intercept is defined as the initial status and the slope the change thereafter. This was the case in Muthen-Curran's 1997 Psych
Methods article where initial status was pre-intervention status which affected how much the intervention caused change.
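To make the direct/indirect distinction concrete, here is a small simulation sketch in Python rather than Mplus (the coefficients are arbitrary): when the slope is regressed on the intercept, the covariate's coefficient on the slope is the partial (direct) effect, while its total effect also includes the path through the intercept.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    x = rng.normal(size=n)                       # covariate
    i = 0.5 * x + rng.normal(size=n)             # intercept factor: x -> i
    s = 0.4 * i + 0.3 * x + rng.normal(size=n)   # slope: direct x effect 0.3

    # total effect of x on s: 0.3 + 0.4 * 0.5 = 0.5
    print(np.polyfit(x, s, 1)[0])
    # partial (direct) effect, controlling for the intercept: 0.3
    X = np.column_stack([x, i])
    beta, *_ = np.linalg.lstsq(X, s, rcond=None)
    print(beta[0])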
Apsalam posted on Tuesday, January 31, 2006 - 8:54 pm
Hi Bengt and Linda,
I have a cross sectional structural model where A causes B. I then collected data on A and B at three time points, and I’m running parallel-process multiple indicator growth models.
For my parallel process model to be consistent with my ‘A causes B’ hypothesis, I’m letting the intercept factor for A predict the intercept factor for B (i.e. regressing B on A), but because I have
no theories about rates of change I am freely estimating the covariance between the two slope factors and the intercept for B, i.e. in my model, the intercept for B is an endogenous variable,
predicted by the intercept for A, while all other growth factors are exogenous variables with freely estimated covariances.
Is the growth model I describe consistent with my A causes B hypothesis?
Linda K. Muthen posted on Wednesday, February 01, 2006 - 8:51 am
The model you have says where people start on process A predicts where they start on process B. You may want to add that where they start on process A predicts how they grow on process B.
apsalam posted on Wednesday, February 01, 2006 - 11:45 am
Thank you that is very helpful.
Chuck He posted on Tuesday, October 02, 2007 - 5:16 am
I have a question about Random Effects.
My model is F1-F2-F3, while all of them are latent variables. F4, another latent variable, is included in this model. I would like to find how F4 has effect on the relationship between F1 and F2
(F1-F2). The following is my scripts. However, whenever I run this programme, it tells me the dimensions of integration and total number of integration points and then stops.
TITLE: Hierarchical regression
DATA: FILE IS 111.TXT;
Variable: NAMES ARE m1-m7 o1-o8 s1-s4 gs1-gs6;
ANALYSIS: TYPE = RANDOM;
MODEL: f1 BY m1-m7;
f2 BY s1-s4;
f3 BY o1-o8;
f4 BY gs1-gs6;
f2 on f1;
f3 on f2;
s | f2 on f1;
s with f2;
s on f4;
Does anyone have solutions on this problem?
Thanks, Chuck
Linda K. Muthen posted on Tuesday, October 02, 2007 - 8:40 am
It sounds like you want the interaction between f1 and f2. You would use the XWITH option for that. If this does not solve your problem, please send your input, data, output, and license number to
Chuck He posted on Tuesday, October 02, 2007 - 9:05 am
Hi, Linda,
Thanks for your response. However, it is not what I want.
Anyway, I will send all information together with the license number to you.
Thanks and best regards,
Chuck He posted on Thursday, October 04, 2007 - 1:21 am
Hi, Linda:
I have solved this problem. I misput one parameter in my model.
Thanks, Chuck
Karoline Brobakke posted on Monday, March 18, 2013 - 2:58 am
I have run a parallel process model (depression and stress) with regressions among the random effects (Your example 6.13 in the manual).
The slope for depression shows an overall stable mean and significant variance, while the slope for stress shows a slight decline and significant variance. The regression coefficients from the intercept of one
process to the slope factor of the other are both negative.
Because some individuals decline and others increase in both processes, I am uncertain about the correct interpretation of the regression coefficients.
Is it correct to interpret this so that individuals with lower initial status on either process either increase faster or decline more slowly on the other process (depending on whether they have
positive or negative slopes)?
Thank you in advance,
Bengt O. Muthen posted on Monday, March 18, 2013 - 9:07 am
Tammy Kochel posted on Tuesday, June 11, 2013 - 2:26 pm
If two parallel processes are included in a latent growth model, but the hypothesis predicts only the growth factors of one of those processes, does including the growth for the second process
"control" for that change over time, even though nothing is regressed upon it? Thanks. Tammy
Bengt O. Muthen posted on Tuesday, June 11, 2013 - 2:36 pm
Complexity theory
Posted November 14th, 2006:
I was wondering:
In complexity theory, why does the function used in a reduction need to be computable?
I would greatly appreciate it if someone could explain this in detail or give some sort of proof for it.
Many thanks.
Patent application title: COMPUTING DEVICE AND DESIGN METHOD FOR NONLINEAR OBJECT
A design method generates a plurality of groups of experimental conditions; each of the groups of experimental conditions includes performance variables for an electronic product with nonlinear performance. The method simulates values for the groups of experimental conditions, computes an average value, and divides the groups of experimental conditions into a first part and a second part. The values in the first part are greater than the average value and the values in the second part are less than the average value. The method computes nonlinear boundary values of a refining mechanism based on the values, and determines a threshold value of the refining mechanism. After refining the groups of experimental conditions, the method calculates the deviation of each value from the threshold value, and determines the groups of experimental conditions with the greatest deviations as optimal groups of experimental conditions.
A design method of a nonlinear object using a computing device, the design method comprising: (a) using a statistics software to generate a plurality of groups of experimental conditions as a
simulation tool for simulating the nonlinear object, each of the groups of experimental conditions comprising a plurality of performance variables of the nonlinear object; (b) simulating values to
the groups of experimental conditions according to the simulation tool; (c) computing an average value of the values, and dividing the groups of experimental conditions into a first part and a second
part according to the average value; (d) computing nonlinear boundary values of a refining mechanism based on the values in the two parts, and determining a threshold value of the refining mechanism
from the nonlinear boundary values; (e) reclassifying the groups of experimental conditions according to the nonlinear boundary values and the threshold value of the refining mechanism; (f)
calculating a deviation of each of the values in the groups of experimental conditions from the threshold value, and determining the groups of experimental conditions having greatest deviations as
optimum groups of experimental conditions; and (g) generating and projecting the nonlinear object according to the optimum groups of experimental conditions, and displaying the nonlinear object on a
display device connected to the computing device.
The method as claimed in claim 1, wherein the statistics software is a Minitab program.
The method as claimed in claim 1, wherein the simulation tool is a Taguchi Method or a Response Surface method.
The method as claimed in claim 1, wherein each of the values in the first part is greater than the average value, and each of the values in the second part is less than the average value.
The method as claimed in claim 1, wherein the nonlinear boundary values are composed by a weighting factor and a model parameter of each of the performance variables.
The method as claimed in claim 1, wherein the step (c) further comprises: (c1) marking the groups of experimental conditions in the first part with a first sign, and marking the groups of
experimental conditions in the second part with a second sign.
The method as claimed in claim 6, wherein the step (e) comprises: (e1) selecting a performance variable as a standard value; (e2) classifying the standard value in each of the groups of experimental
conditions according to a conditional criterion, marking with the first sign the groups of experimental conditions in which the standard value is greater than the conditional criterion, and marking
with the second sign the groups of experimental conditions in which the standard value is less than the conditional criterion; (e3) calculating a weighting factor and a model parameter of the
standard value in each of the groups of experimental conditions; (e4) repeating step (e1) to step (e3) to determine each performance variable as the standard value and calculating the weighting
factor and the model parameter of the standard value; (e5) multiplying the model parameter of the standard value in each of the groups of experimental conditions by the corresponding first or second
sign and obtaining a plurality of values, and adding the plurality of values together to obtain a total value, each of the groups of experimental conditions corresponds to one total value; (e6)
classifying the groups of experimental conditions according to the threshold value of the refining mechanism, marking with the first sign the groups of experimental conditions in which the total
values are greater than the threshold value, and marking with the second sign the groups of experimental conditions in which the total values are less than the threshold value; and (e7) determining
whether an error rate of each of the groups of experimental conditions is less than a predetermined value by comparing the sign of each of the groups of experimental conditions in step (e6) with the
corresponding first or second sign in step (e1).
A computing device, comprising: at least one processor; a storage system; and one or more modules that are stored in the storage system and executed by the at least one processor, the one or more
modules comprising: a condition generation module operable to use a statistics software to generate a plurality of groups of experimental conditions as a simulation tool for simulating a nonlinear
object, each of the groups of experimental conditions comprising a plurality of performance variables of the nonlinear object; a simulation module operable to simulate values to the groups of
experimental conditions according to the simulation tool; a first classifying module operable to compute an average value of the values, and divide the groups of experimental conditions into a first
part and a second part according to the average value; a second classifying module operable to compute nonlinear boundary values of a refining mechanism based on the values in the two parts,
determine a threshold value of the refining mechanism from the nonlinear boundary values, and reclassify the groups of experimental conditions according to the nonlinear boundary values and the
threshold value of the refining mechanism; and a determination module operable to calculate a deviation of each of the values in the groups of experimental conditions from the threshold value,
determine the groups of experimental conditions having greatest deviations as optimum groups of experimental conditions, generate and projecting the nonlinear object according to the optimum groups
of experimental conditions, and display the nonlinear object on a display device connected to the computing device.
The computing device as claimed in claim 8, wherein the statistics software is a Minitab program.
The computing device as claimed in claim 8, wherein the simulation tool is a Taguchi Method or a Response Surface method.
The computing device as claimed in claim 8, wherein each of the values in the first part is greater than the average value, and each of the values in the second part is less than the average value.
The computing device as claimed in claim 8, wherein the nonlinear boundary values are composed by a weighting factor and a model parameter of each of the performance variables.
The computing device as claimed in claim 8, wherein the first classifying module is further operable to mark the groups of experimental conditions in the first part with a first sign, and mark the
groups of experimental conditions in the second part with a second sign.
The computing device as claimed in claim 13, wherein the groups of experimental conditions are reclassified according to the nonlinear boundary values and the threshold value of the refining mechanism
by the following steps: (e1) selecting a performance variable as a standard value; (e2) classifying the standard value in each of the groups of experimental conditions according to a conditional
criterion, marking with the first sign the groups of experimental conditions in which the standard value is greater than the conditional criterion, and marking with the second sign the groups of
experimental conditions in which the standard value is less than the conditional criterion; (e3) calculating a weighting factor and a model parameter of the standard value in each of the groups of
experimental conditions; (e4) repeating step (e1) to step (e3) to determine each performance variable as the standard value and calculating the weighting factor and the model parameter of the
standard value; (e5) multiplying the model parameter of the standard value in each of the groups of experimental conditions by the corresponding first or second sign and obtaining a plurality of
values, and adding the plurality of values together to obtain a total value, each of the groups of experimental conditions corresponds to one total value; (e6) classifying the groups of experimental
conditions according to the threshold value of the refining mechanism, marking with the first sign the groups of experimental conditions in which the total values are greater than the threshold
value, and marking with the second sign the groups of experimental conditions in which the total values are less than the threshold value; and (e7) determining whether an error rate of each of the
groups of experimental conditions is less than a predetermined value by comparing the sign of each of the groups of experimental conditions in step (e6) with the corresponding first or second sign
marked by the first classifying module.
A non-transitory storage medium having stored thereon instructions that, when executed by a processor of a computing device, cause the computing device to: (a) use a statistics software to generate a
plurality of groups of experimental conditions as a simulation tool for simulating the nonlinear object, each of the groups of experimental conditions comprising a plurality of performance variables
of the nonlinear object; (b) simulate values to the groups of experimental conditions according to the simulation tool; (c) compute an average value of the values, and divide the groups of
experimental conditions into a first part and a second part according to the average value; (d) compute nonlinear boundary values of a refining mechanism based on the values in the two parts, and
determine a threshold value of the refining mechanism from the nonlinear boundary values; (e) reclassify the groups of experimental conditions according to the nonlinear boundary values and the
threshold value of the refining mechanism; (f) calculate a deviation of each of the values in the groups of experimental conditions from the threshold value, and determine the groups of experimental
conditions having greatest deviations as optimum groups of experimental conditions; and (g) generate and project the nonlinear object according to the optimum groups of experimental conditions, and
display the nonlinear object on a display device connected to the computing device.
The storage medium as claimed in claim 15, wherein each of the values in the first part is greater than the average value, and each of the values in the second part is less than the average value.
The storage medium as claimed in claim 15, wherein the simulation tool is a Taguchi Method or a Response Surface method.
The storage medium as claimed in claim 15, wherein the nonlinear boundary values are composed by a weighting factor and a model parameter of each of the performance variables.
The storage medium as claimed in claim 15, wherein the step (c) further comprises: (c1) marking the groups of experimental conditions in the first part with a first sign, and marking the groups of
experimental conditions in the second part with a second sign.
The storage medium as claimed in claim 19, wherein the step (e) comprises: (e1) selecting a performance variable as a standard value; (e2) classifying the standard value in each of the groups of
experimental conditions according to a conditional criterion, marking with the first sign the groups of experimental conditions in which the standard value is greater than the conditional criterion,
and marking with the second sign the groups of experimental conditions in which the standard value is less than the conditional criterion; (e3) calculating a weighting factor and a model parameter of
the standard value in each of the groups of experimental conditions; (e4) repeating step (e1) to step (e3) to determine each performance variable as the standard value and calculating the weighting
factor and the model parameter of the standard value; (e5) multiplying the model parameter of the standard value in each of the groups of experimental conditions by the corresponding first or second
sign and obtaining a plurality of values, and adding the plurality of values together to obtain a total value, each of the groups of experimental conditions corresponds to one total value; (e6)
classifying the groups of experimental conditions according to the threshold value of the refining mechanism, marking with the first sign the groups of experimental conditions in which the total
values are greater than the threshold value, and marking with the second sign the groups of experimental conditions in which the total values are less than the threshold value; and (e7) determining
whether an error rate of each of the groups of experimental conditions is less than a predetermined value by comparing the sign of each of the groups of experimental conditions in step (e6) with the
corresponding first or second sign in step (e1).
BACKGROUND [0001]
1. Technical Field
Embodiments of the present disclosure generally relate to computing devices and experimental design methods, and more particularly to a computing device and a design method for a nonlinear object.
2. Description of Related Art
A pre-routing simulation is usually performed before the design of most electronic products. Estimating the influence of operating conditions upon the integrity of electronic signals of the product by using a pre-routing or preliminary simulation is a difficult problem. The variables in the conditions of operation may include different materials and different conductor lengths, for example. Establishing a correlation between the operating conditions and the product can reduce manufacturing time. However, if the product has nonlinear performance, no fixed correlation between the conditions of operation and the product itself can be accurately estimated.
BRIEF DESCRIPTION OF THE DRAWINGS [0005]
FIG. 1 is a schematic diagram of one embodiment of a computing device including an experimental design unit.
FIG. 2 is a block diagram of function modules of the experimental design unit in FIG. 1.
FIG. 3 is a flowchart illustrating one embodiment of a design method for a nonlinear object.
FIG. 4 is a detailed description of step S07 in FIG. 3, for reclassifying groups of experimental conditions according to nonlinear boundary values and a threshold value of a refining mechanism.
FIG. 5, FIG. 6, FIG. 7, FIG. 8 and FIG. 9 give examples illustrating a correlation between a nonlinear object and performance variables of the nonlinear object.
DETAILED DESCRIPTION [0010]
In general, the word "module," as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language, such as, for example,
Java, C, or assembly. One or more software instructions in the modules may be embedded in firmware, such as in an EPROM. It will be appreciated that modules may comprise connected logic units, such
as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors. The modules described herein may be implemented as either software and/or hardware
modules and may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some non-limiting examples of non-transitory computer-readable medium include CDs,
DVDs, flash memory, and hard disk drives.
FIG. 1 is a block diagram of one embodiment of a computing device 1 including an experimental design unit 10. In the embodiment, functions of the experimental design unit 10 are implemented by the
computing device 1. The experimental design unit 10 is used for generating a series of groups of experimental conditions applicable to the design of a product, or part of a product, with nonlinear
performance (nonlinear object) using statistics software 120. For example, the series of groups of experimental conditions may be related to distances between an upper eyelid and a lower eyelid of an eye. The statistics software 120 may be a Minitab program. In the embodiment, each of the groups of experimental conditions includes at least one performance variable of the nonlinear object. For
reducing design errors, the experimental design unit 10 can take the groups of experimental conditions as training data, and obtain an optimum, or number of optimum, groups of experimental conditions
by analyzing the training data. Detail functions of the experimental design unit 10 are described, in reference to FIG. 2 and FIG. 3, below.
In one embodiment, the computing device 1 may be a computer, a server, a portable electronic device, or any other electronic device that includes a storage system 12, and at least one processor 14.
In one embodiment, the storage system 12 may be a magnetic or an optical storage system, such as a hard disk drive, an optical drive, a compact disc, a digital versatile disc, a tape drive, or other
suitable storage medium. The storage system 12 further stores the statistics software 120. The processor 14 may be a central processing unit including a math co-processor.
The computing device 1 is electronically connected to a display device 2. The display device 2 is configured for showing the experimental design process.
FIG. 2 is a block diagram of function modules of the experimental design unit 10 in FIG. 1. In one embodiment, the experimental design unit 10 includes a condition generation module 100, a simulation
module 102, a first classifying module 104, a second classifying module 106, and a determination module 108. Each of the modules 100-108 may be a software program including one or more computerized
instructions that are stored in the storage system 12 and executed by the processor 14.
The condition generation module 100 uses the statistics software 120 to generate a plurality of groups of experimental conditions as a simulation tool for simulating the design of the nonlinear
object. Each of the groups of experimental conditions includes a series of performance variables of the nonlinear object. In the embodiment, the statistics software 120 may be a Minitab program, and
the simulation tool may be a Taguchi Method or a Response Surface method, for example.
As shown in FIG. 5, the nonlinear object has five performance variables: A, B, C, D, and E. The condition generation module 100 uses the statistics software to generate six groups of experimental
conditions: a first group, a second group, a third group, a forth group, a fifth group, and a sixth group.
The simulation module 102 simulates values to the groups of experimental conditions according to the simulation tool, on the basis of how the nonlinear product is likely to perform in actual
operation under each of those conditions, or sets of conditions. In the embodiment, the values are results of the simulation of the nonlinear object. Different nonlinear objects may have different values and units. For example, if the nonlinear object is an eye, the simulation module 102 may simulate a series of distances between an upper eyelid and a lower eyelid of the eye for the groups of experimental conditions, and the distances may be in mm or in cm. As shown in FIG. 5, the value of the first group is "180," the value of the second group is "400," the value of the third group is "270," the value of the fourth group is "20," the value of the fifth group is "100," and the value of the sixth group is "66."
The first classifying module 104 computes an average value of the values, and divides the groups of experimental conditions into a first part and a second part according to the average value. In the embodiment, the values in the first part are greater than the average value, and the values in the second part are less than the average value. As shown in FIG. 5, the first classifying module 104 further marks the groups of experimental conditions in the first part with a first positive "+1" sign, and marks the groups of experimental conditions in the second part with a second negative "-1" sign.
An error rate of each group of experimental conditions (in FIG. 5) is about one-sixth (as shown in FIG. 6), and establishing an optimum group of experimental conditions is therefore difficult. The error rate is the rate at which an error would occur. Thus, the simulation tool is required to use a refining mechanism to classify the groups of experimental conditions and apply weights to each group of experimental conditions, namely to reduce the weights for correct factors and enhance the weights for error factors, to assist in highlighting one or more optimum groups of experimental conditions.
The second classifying module 106 computes nonlinear boundary values for the refining mechanism based on the values divided into the two parts, and determines a threshold value of the refining
mechanism from the nonlinear boundary values. In one embodiment, the nonlinear boundary values are the result of a weighting factor and a model parameter of each of the performance variables. The
refining mechanism follows a boosting algorithm. The second classifying module 106 further reclassifies the groups of experimental conditions according to the nonlinear boundary values and the
threshold value of the refining mechanism, as detailed below (and illustrated in FIG. 4).
The determination module 108 calculates a deviation of each of the values in the groups of experimental conditions from the threshold value, and determines the groups of experimental conditions
having a maximum deviation as the optimum groups of experimental conditions. As illustrated in FIG. 9, if the threshold value is zero, the deviation between each of the values and the threshold value
is "2.234," "0.624," "0.624," "2.234," "2.234," and "0.624". The determination module 108 determines that the first group, the forth group and the fifth group have the greatest deviations, so the
first group, the forth group and the fifth group can be determined as the optimum groups of experimental conditions. The error rates of the first group, the forth group and the fifth group are low.
FIG. 3 is a flowchart illustrating one embodiment of a method for designing a nonlinear object using the computing device 1 of FIG. 1. The method can be performed by the execution of a
computer-readable program by the at least one processor 14. Depending on the embodiment, in FIG. 3, additional steps may be added, others removed, and the ordering of the steps may be changed.
In step S01, the condition generation module 100 uses the statistics software 120 to generate a plurality of groups of experimental conditions as a simulation tool for simulating the conditions of
operation of a nonlinear object. As shown in FIG. 5, each of the groups of experimental conditions includes a series of performance variables of the nonlinear object. In the embodiment, the
statistics software 120 may be a Minitab program, and the simulation tool may be a Taguchi Method or a Response Surface method, for example.
In step S03, the simulation module 102 simulates values for the groups of experimental conditions according to the simulation tool. As shown in FIG. 5, the value of the first group is "180," the
value of the second group is "400," the value of the third group is "270," the value of the fourth group is "20," the value of the fifth group is "100," and the value of the sixth group is "66."
In step S05, the first classifying module 104 computes an average value of the values, divides the groups of experimental conditions into a first part and a second part according to the average
value, and marks the first part with a first sign and marks the second part with a second sign. In the embodiment, the values in the first part are greater than the average value, and the values in
the second part are less than the average value. The first sign may be "+1" which is different from the second sign. In one embodiment, the second sign can be "-1."
In step S07, the second classifying module 106 computes the nonlinear boundary values of a refining mechanism based on the values in the two parts, determines a threshold value of the refining
mechanism from the nonlinear boundary values, and reclassifies the groups of experimental conditions according to the nonlinear boundary values and the threshold value of the refining mechanism, as
below (and detailed in FIG. 4). In one embodiment, the nonlinear boundary values are the result of a weighting factor and a model parameter of each of the performance variables. The refining
mechanism follows a boosting algorithm.
In step S09, the determination module 108 calculates a deviation of each of the values in the groups of experimental conditions from the threshold value, and determines the groups of experimental
conditions having the greatest deviations as the optimum groups of experimental conditions of the nonlinear object. The determination module 108 further generates and projects the nonlinear object
according to the optimum groups of experimental conditions, and displays the nonlinear object on the display device 2.
As illustrated in FIG. 9, if the threshold value is zero, the deviation between each of the values and the threshold value is "2.234," "0.624," "0.624," "2.234," "2.234," and "0.624". The
determination module 108 determines that the first group, the fourth group and the fifth group can be the optimum groups of experimental conditions relating to the nonlinear object. The error rates of the first group, the fourth group and the fifth group are low.
FIG. 4 is a detailed description of step S07 in FIG. 3, for reclassifying groups of experimental conditions according to the nonlinear boundary values and the threshold value of the refining
In step S700, the second classifying module 106 determines the performance variables as features, and selects a feature as a standard value. In another embodiment, the second classifying module 106
can select more than one feature as the standard value.
In step S702, the second classifying module 106 presets a conditional criterion, classifies the standard value in each of the groups of experimental conditions according to the conditional criterion,
marks the groups of experimental conditions as the first sign "+1" in which the standard value is greater than the conditional criterion, and marks the groups of experimental conditions as the second
sign "-1" for which the standard value is less than the conditional criterion.
For example, as shown in FIG. 6, if the feature B is selected to be the standard value and the digital number "2" is preset as the conditional criterion, the second classifying module 106 marks the first group and the third group with the first sign "+1," and marks the second group, the fourth group, the fifth group and the sixth group with the second sign "-1". By comparing the sign of each group in
FIG. 6 with the corresponding sign in FIG. 5, the second classifying module 106 finds that the second group has a different sign in FIG. 6 and FIG. 5, so the second classifying module 106 determines
that the error rate of the second group is too high, which can be verified in step S704 below. If the feature C is selected to be the standard value and the digital number "2" is preset as the conditional criterion, the second classifying module 106 marks the first group, the second group and the sixth group with the first sign "+1," and marks the third group, the fourth group, and the fifth group with the second sign "-1". By comparing the sign of each group in FIG. 6 with the corresponding sign in FIG. 5, the second classifying module 106 finds that the third group and the sixth group have different signs in FIG. 6 and FIG. 5, so the second classifying module 106 determines that the error rates of the third group and the sixth group are too high, which can be illustrated in
FIG. 7.
In step S704, the second classifying module 106 uses the refining mechanism to calculate a weighting factor and a model parameter of the standard value in each of the groups of experimental
conditions. In the embodiment, the process of selecting one or more features as the standard value can serve as the process of establishing models. For example, if the refining mechanism follows the
boosting algorithm, the weighting factor can be calculated with the following formula: D_{i+1} = D_i * exp(-α*y*h) / Z, and the model parameter can be calculated with the formula: α = ln((1-ε)/ε)/2, where "ε" is the error rate, "y" is the value of the sign, "h" indicates whether the classification is right (if the classification is wrong, y*h = -1; if the classification is right, y*h = 1), and "Z" is a normalization factor. For example, substituting ε = 1/6 into the formula above and solving gives α = 0.8047. As shown in FIG. 7, the total value of the weighting factors of the feature B in the six
groups of experimental conditions is equal to one.
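A minimal sketch of this weight update in the boosting-style refining mechanism described above (the function name is mine, not the patent's); it reproduces the α = 0.8047 obtained for feature B, where only the second of six equally weighted groups is misclassified, so ε = 1/6:

```python
import math

def boost_update(weights, correct):
    """One round of the update D_{i+1} = D_i * exp(-alpha*y*h) / Z."""
    eps = sum(d for d, ok in zip(weights, correct) if not ok)  # weighted error rate
    alpha = 0.5 * math.log((1 - eps) / eps)                    # model parameter
    # y*h = +1 when the classification is right, -1 when it is wrong
    raw = [d * math.exp(-alpha if ok else alpha)
           for d, ok in zip(weights, correct)]
    z = sum(raw)                                               # normalization factor Z
    return alpha, [d / z for d in raw]

alpha, new_weights = boost_update([1/6] * 6,
                                  [True, False, True, True, True, True])
print(round(alpha, 4))             # 0.8047
print(round(sum(new_weights), 4))  # total weight stays equal to one, as in FIG. 7
```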
In step S706, the second classifying module 106 repeats step S700 to step S704 to determine each performance variable as the standard value and calculate the weighting factor and the model parameter
of the standard value. The second classifying module 106 multiplies the model parameter of the standard value in each of the groups of experimental conditions by the corresponding sign and obtains a
plurality of values, and adds the plurality of values together to obtain a total value. In the embodiment, each of the groups of experimental conditions corresponds to a total value of one.
As shown in FIG. 7, the signs of the feature B in each experimental condition group are marked as "+1," "-1," "+1," "-1," "-1," and "-1," the second classifying module 106 calculates that the model
parameter of the feature B is α=0.8047. As shown in FIG. 8, the signs of the feature C in each experimental condition group are marked as "+1," "+1," "-1," "-1," "-1," and "+1," the second
classifying module 106 calculates that the model parameter of the feature C is α=1.4287. If the process of judging the feature B is determined as a first model, and the process of judging the feature
C is determined as a second model, the value of multiplying the model parameter of the feature B by the corresponding sign and the value of multiplying the model parameter of the feature C by the
corresponding sign are shown in FIG. 9.
In step S708, the second classifying module 106 classifies the groups of experimental conditions according to the threshold value of the refining mechanism, and marks with sign "+1" the groups of
experimental conditions in which the total values are greater than the threshold value, and marks with sign "-1" the groups of experimental conditions in which the total values are less than the
threshold value. As shown in FIG. 9, if zero is the threshold value, the second classifying module 106 classifies the groups of experimental conditions into two parts: the first group, the second
group and the sixth group compose one part, which is marked with the first sign "+1," and the third group, the fourth group, and the fifth group compose another part, which is marked with the second
sign "-1".
In step S710, the determination module 108 determines whether an error rate of each experimental condition group is less than a predetermined value by comparing the sign of each experimental
condition group in FIG. 9 with the corresponding sign in FIG. 5. If the error rate of each experimental condition group is less than the predetermined value, the flow ends. If the error rate of each
experimental condition group is not less than the predetermined value, the flow goes to step S712.
For example, if the predetermined value is three, the determination module 108 determines that the error rates of the third group and the sixth group are not less than the predetermined value.
In step S712, the determination module 108 repeats step S700 to step S710 until one error rate of the groups of experimental conditions is less than the predetermined value.
Although certain inventive embodiments of the present disclosure have been specifically described, the present disclosure is not to be construed as being limited thereto. Various changes or
modifications may be made to the present disclosure without departing from the scope and spirit of the present disclosure.
Theses/Dissertations - Mathematics (Baylor University)

Quadratic Lyapunov theory for dynamic linear switched systems. Eisenbarth, Geoffrey B. (2014). http://hdl.handle.net/2104/8899
In this work, a special class of time-varying linear systems in the arbitrary time scale setting is examined by considering the qualitative properties of their solutions. Building on the work of J. DaCunha and A. Ramos, Lyapunov's Second (or Direct) Method is utilized to determine when the solutions to a given switched system are asymptotically stable. Three major classes of switched systems are analyzed which exhibit a convenient containment scheme so as to recover early results as special cases of later, more general results. The stability of switched systems under both arbitrary and particular switching is considered, in addition to design parameters of the time scale domain which also imply stability. A new approach to Lyapunov theory for time scales is then considered for switched systems which do not necessarily belong to any class of systems, contrasting and generalizing previous results. Finally, extensions of the contained theory are considered and a nontrivial generalization of a major result by D. Liberzon and A. Agrachev is investigated and conjectured.

Existence and uniqueness of solutions of boundary value problems by matching solutions. Liu, Xueyan (2013). http://hdl.handle.net/2104/8841
In this dissertation, we investigate the existence and uniqueness of boundary value problems for third and nth order differential equations by matching solutions. Essentially, we consider the interval [a, c] of a BVP as the union of the two intervals [a, b] and [b, c], analyze the solutions of the BVP on each, and then match the proper ones to be the unique solution on the whole domain. In the process of matching solutions, boundary value problems with different boundaries, especially at the matching point b, would be quite different for the requirements of conditions on the nonlinear term. We denote the missing derivatives in the boundary conditions at the matching point b by k₁ and k₂. We show how y(ᵏ²)(b) varies with respect to y(ᵏ¹)(b), where y is a solution of the BVP on [a, b] or [b, c]. Under certain conditions on the nonlinear term, we can get a monotone relation between y(ᵏ²)(b) and y(ᵏ¹)(b) on [a, b] and [b, c], respectively. If the monotone relations are different on [a, b] and [b, c], then we can finally get a unique value for y(ᵏ¹)(b) where the k₂nd derivatives of the two solutions on [a, b] and [b, c] are equal, and we can join the two solutions together to obtain the unique solution of our original BVP. If the relations are the same, then we arrive at the situation that the k₂nd order derivatives of the two solutions at b on [a, b] and [b, c] are decreasing with respect to the k₁st derivatives at b at different rates, and by analyzing the relations in more detail, we can finally get a unique value for the k₁st derivative of the solutions of BVPs on [a, b] and [b, c], which are matched to be a unique solution of the BVP on [a, c]. In our arguments, we use the Mean Value Theorem and Rolle's Theorem many times. As the simplest models, third order BVPs are considered first. Then, in the following chapters, nth order problems are studied. Lastly, we provide an example and some ideas for our future work.

A combinatorial property of Bernstein-Gelfand-Gelfand resolutions of unitary highest weight modules. Hartsock, Gail (2013). http://hdl.handle.net/2104/8834
It follows from a formula by Kostant that the difference between the highest weights of consecutive parabolic Verma modules in the Bernstein-Gelfand-Gelfand-Lepowsky resolution of the trivial representation is a single root. We show that an analogous property holds for all unitary representations of simply laced type. Specifically, the difference between consecutive highest weights is a sum of positive noncompact roots, all with multiplicity one.

On a ring associated to F[x]. Aceves, Kelly Fouts (2013). http://hdl.handle.net/2104/8807
For a field F and the polynomial ring F[x] in a single indeterminate, we define Ḟ[x] = {α ∈ End_F(F[x]) : α(ƒ) ∈ ƒF[x] for all ƒ ∈ F[x]}. Then Ḟ[x] is naturally isomorphic to F[x] if and only if F is infinite. If F is finite, then Ḟ[x] has cardinality continuum. We study the ring Ḟ[x] for finite fields F. For the case that F is finite, we discuss many properties and the structure of Ḟ[x].
• ax + by = 1: this is a linear Diophantine equation.
• x^n + y^n = z^n: For n = 2 there are infinitely many solutions (x,y,z), the Pythagorean triples. For larger values of n, Fermat's last theorem states that no positive integer solutions x, y, z
satisfying the above equation exist.
• x^2 - n y^2 = 1: (Pell's equation) which is named, mistakenly, after the English mathematician John Pell. It was studied by Brahmagupta and much later by Fermat.
Typical of the racism exhibited by the Brits and other Europeans is W.W. Rouse Ball in 'A Short Account of the History of Mathematics', Dover Publications, 1960 (originally appeared in 1908), page 146:
'The Arabs had considerable commerce with India, and a knowledge of one or both of the two great Hindoo works on algebra had been obtained in the Caliphate of Al-Mansur (754-775 AD)though it was not
until fifty or seventy years later that they attracted much attention. The algebra and arithmetic of the Arabs were largely founded on these treatises, and I therefore devote this section to the
consideration of Hindoo mathematics. The Hindoos like the Chinese have pretended that they are the most ancient people on the face of the earth, and that to them all sciences owe their creation. But
it is probable that these pretensions have no foundation; and in fact no science or useful art (except a rather fantastic architecture and sculpture) can be definitely traced back to the inhabitants
of the Indian peninsula prior to the Aryan invasion. This seems to have taken place at some time in the fifth century or in the sixth century when a tribe of Aryans entered India by the north west
part of their country. Their descendants, wherever they have kept their blood pure, may still be recognized by their superiority over the races they originally conquered; but as is the case with the
modern Europeans, they found the climate trying and gradually degenerated.' Note the blatant racism in the second paragraph and the venom that this author exhibits.
Ray-Tracing in Clojure
Following is a port of the ray-tracer from ANSI Common Lisp by Paul Graham to Clojure. Ray-tracing is a simple rendering algorithm which yields realistic images, but it is computationally expensive. The idea is that we trace rays of light from the eye back through the image plane into the scene. Each ray is tested to see if it hits any of the objects in the scene; if the ray misses all the objects in the scene, the pixel is shaded the background color, and if it hits an object, the pixel is set to the color value returned by the ray.
(defstruct v3d-struct :x :y :z)
(defn v3d [x y z]
(struct v3d-struct x y z))
(defn sq [x]
(* x x))
(defn sqrt [x]
(Math/sqrt x))
(defn magnitude [u]
(sqrt (apply + (map sq (vals u)))))
(defn normalize [u]
(let [mag (magnitude u)]
(apply v3d (map #(/ % mag) (vals u)))))
(defn subtract [u v]
(apply v3d (map #(- %1 %2) (vals u) (vals v))))
(defn distance [u v]
(magnitude (subtract u v)))
(defn minroot [a b c]
(if (zero? a)
(/ (- c) b)
(let [disc (- (sq b) (* 4 a c))]
(if (> disc 0)
(let [discroot (sqrt disc)]
(min (/ (+ (- b) discroot) (* 2 a))
(/ (- (- b) discroot) (* 2 a))))))))
Before we begin we need to define some vector utilities. All of the above functions should be self-explanatory except minroot, which solves the quadratic equation.
(defstruct sphere-struct :color :radius :center)
(defn sphere [v r c]
(struct sphere-struct c r v))
(defn sphere-normal [s pt]
(normalize (subtract (:center s) pt)))
(defn sphere-intersect [s pt ray]
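  ;; A ray from pt in direction ray hits the sphere where |pt + n*ray - center| = radius,
  ;; which gives the quadratic a*n^2 + b*n + c = 0 solved below with minroot. Note that
  ;; the second binding of c in the let shadows the sphere center: Clojure's let is
  ;; sequential, so that constant term is still computed from the center bound just above.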
(let [c (:center s)
a (+ (sq (:x ray)) (sq (:y ray)) (sq (:z ray)))
b (* 2 (+ (* (- (:x pt) (:x c)) (:x ray))
(* (- (:y pt) (:y c)) (:y ray))
(* (- (:z pt) (:z c)) (:z ray))))
c (+ (sq (- (:x pt) (:x c)))
(sq (- (:y pt) (:y c)))
(sq (- (:z pt) (:z c)))
(- (sq (:radius s))))
n (minroot a b c)]
(if n
(v3d (+ (:x pt) (* n (:x ray)))
(+ (:y pt) (* n (:y ray)))
(+ (:z pt) (* n (:z ray)))))))
Next we define functions to determine where a sphere gets hit with a ray and the surface normal of a hit.
(defn lambert [s intersection ray]
(let [normal (sphere-normal s intersection)]
(max 0 (+ (* (:x ray) (:x normal))
(* (:y ray) (:y normal))
(* (:z ray) (:z normal))))))
(defn first-hit [world pt ray]
(->> (reduce (fn[h v]
(if-let [i (sphere-intersect v pt ray)]
(conj h [i v]) h)) [] world)
       (sort-by #(distance (first %) pt))
       first))
(defn send-ray [world src ray]
(if-let [[loc obj] (first-hit world src ray)]
    (* (lambert obj loc ray) (:color obj))
    0))
(defn color-at [world eye x y]
(let [ray (normalize (subtract (v3d x y 0) eye))]
(send-ray world eye ray)))
(defn ray-trace [world eye w h]
(let [buffered-image (java.awt.image.BufferedImage.
w h java.awt.image.BufferedImage/TYPE_BYTE_GRAY)
coords (for [x (range 1 w) y (range 1 h)] [x y])
colors (pmap #(let [[x y] %]
[x y (color-at world eye x y)]) coords)]
(doseq [[x y c] colors]
(.setRGB buffered-image x y
(.getRGB (java.awt.Color.
(float c) (float c) (float c)))))
    buffered-image))
We iterate over each pixel of the image and calculate a color for it. To do this, send-ray has to find the object the ray reflected from; for that it calls first-hit, which iterates through all the objects in the world and finds the one the ray hit first (if any). To find the amount of light shining on the surface, we refer to Lambert's law, which says that the intensity of light reflected by a point on a surface is proportional to the dot product of the unit normal vector N at that point and the unit vector L from the point to the light source.
(defn view [image]
(doto (javax.swing.JFrame. "Ray Tracing")
(.add (proxy [javax.swing.JPanel] []
(paintComponent [g]
(proxy-super paintComponent g)
(.drawImage g image 0 0 this))))
(.setSize (.getWidth image) (.getHeight image))
(.setResizable false)
(.setVisible true)))
Once the image is rendered, all that's left to do is to paint it on a panel:
(let [eye (v3d 150 150 200)
world [(sphere (v3d 150 150 -600) 400 0.8)]
image (ray-trace world eye 300 300)]
(view image))
(let [eye (v3d 150 150 200)
world [(sphere (v3d 150 150 -600) 400 0.85)
(sphere (v3d 250 200 -600) 400 0.85)
(sphere (v3d 200 100 -600) 400 0.65)]
image (ray-trace world eye 300 300)]
(view image))
Which of the following gives the accurate relationship between conductance (G) and resistance (R)?
G = 1/R; G = R; G + R = 1; none of the above
(TCOs 2, 3, 4) Determine the current through a 2 kΩ resistor when the power dissipated by the resistor is 100 mW.
50 µA; 50 mA; 7.07 A; 7.07 mA
(TCO 5) What is the value of the equivalent current source for the multiple current source circuit given below?
+6 A; -1 A; +1 A; none of the above
(ECET 110; showing work is not needed.)
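A quick check of the first two answers (a minimal sketch; the third question cannot be answered here because the circuit figure is missing):

```python
import math

R = 2_000    # resistance in ohms (2 kOhm)
P = 100e-3   # dissipated power in watts (100 mW)

G = 1 / R              # conductance is the reciprocal of resistance: G = 1/R
I = math.sqrt(P / R)   # from P = I^2 * R, the current is I = sqrt(P/R)

print(G)  # 0.0005 S
print(I)  # ~0.00707 A, i.e. about 7.07 mA
```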
Introduction to SolidOpt
The SolidOpt Framework provides tools to facilitate the construction of optimization modules. SolidOpt also provides an environment through which optimization modules could be loaded and used.
Different code representations of the target application simplify the implementation of optimization methods. The framework could be used in the development of many analysis and optimization tools
such as static code analyzers, code optimizers, compilers and decompilers.
The main objective of SolidOpt is the development of a mechanism that enables the optimization of arbitrary software applications during their entire lifecycle. Alongside this, important objectives are the provision of:
• Number of high-level, mid-level and low-level code models;
• Infrastructure that simplifies the construction of model transformations (including optimizations);
• Decompilation toolchain;
• Compilation toolchain
• Retargeting toolchain
A programming language is a notation used to describe computations to people and machines. All software for every computer is written in some programming language. Consequently, software can be considered a "prescription" solving a concrete problem from the real world. Every prescription or algorithm describing part of the real world is a model (or representation). One can conclude that a software program is an execution model for solving a given problem: the sequence of steps necessary for reaching an adequate solution is modeled.
Programming languages are constantly evolving and aim to be closer to natural language. They implement different semantically equivalent constructions (sometimes even syntactic sugar), whose main goal is to provide convenience and simplicity during development. In other words, the main goal of these changes is to simplify software development.
With time the model of execution (the software program) should become more and more abstract and closer to natural language, i.e., it should be turned into a model clearer to the developer. A gap thus starts to emerge between the model clear to the developer and the model clear to the machine. That gap has to be filled by a software system translating the abstract model into a low-level model executable by a concrete virtual execution system (VES). This compilation process leads to loss of information about the high-level model and, as a consequence, to production of suboptimal code. Suboptimal compilation is usually due to four main reasons:
□ Lack of, or poorly implemented, optimizing modules in the compiler;
□ The capabilities of the VES are not fully used;
□ The knowledge about the domain of the high-level model is not fully used;
□ The thorough information about the high-level model is not fully used.
As discussed, software applications can be thought of as models, and transforming those models produces more optimally working applications. The source code remains intact, and the care of the model's optimality is taken by a specialized software system (SolidOpt). A huge advantage of the approach is that it allows conceptually different software transformations to be applied, depending on the pursued goals. Moreover, this allows the auxiliary optimization parts to be extracted out of the main algorithm, i.e., the high-level model should contain only the solution of the real-world problem, without any extra techniques for its optimization.
One of SolidOpt's goals is to provide a multimodel architecture (Fig1), generalizing the transformation of the high-level model into a low-level executable model.
Fig1: Interactions between the models
□ M[i] - a level of abstraction of the model where i ∈ [0,n]. M[0] is a target model i.e. M[0] is the executable program;
□ T[M[i]],T^-1[M[i]] - a transformation of the model where i ∈ [0,n]. Keeps the model's level of abstraction;
□ T[i],T^-1[i] - a transformation between the models where i ∈ [0,n-1]. Changes the level of abstraction.
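As a rough illustration of this scheme (this is not SolidOpt's actual API; every name below is hypothetical), a pipeline that first applies same-level transformations and then lowers the abstraction level could look like:

```python
from typing import Callable, List

class Model:
    """A code representation M[i] at abstraction level i (hypothetical)."""
    def __init__(self, level: int, payload: object):
        self.level = level
        self.payload = payload

# T[M[i]]: a transformation that keeps the abstraction level (e.g., an optimization)
SameLevelT = Callable[[Model], Model]
# T[i]: a transformation that changes the abstraction level (a compile/decompile step)
LevelT = Callable[[Model], Model]

def run_pipeline(m: Model, optimizations: List[SameLevelT], lower: LevelT) -> Model:
    for t in optimizations:   # apply T[M[i]] transformations at the current level
        m = t(m)
    return lower(m)           # then apply T[i] to move toward the executable M[0]
```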
An optimization transformation method (hereinafter an optimization method, or just an optimization) in terms of the SolidOpt framework is a module that converts the software program so that it satisfies some conditions. Most often, these conditions represent some kind of metrics by which system quality is evaluated, for example performance metrics, metrics for energy consumption, etc.
The scheme describes the relationship between the models and the ways of lowering and raising the level of abstraction. It also shows the mechanism of transformation of the model.
Using the methodology determined in this way facilitates the creation of optimization methods operating at different levels. This provides flexibility in the development of sophisticated optimizations such as class merging and pattern removal. To achieve better results, some optimizations may need to work on several levels of abstraction of the model.
Note that Fig. 1 does not enforce requirements on the models; it needs just a transformation from and to a given model. This allows users to build domain-specific models such as visual models.
Design and Analysis of Algorithms
Subject: Design and Analysis of Algorithms Subject Code: CS207
Year: II Semester: III
Question Bank
Unit – I
PART – A
1. What is an algorithm?
2. What are computational procedures?
3. What is a program?
4. Define Algorithm Validation.
5. Define program proving and program verification.
6. Define pseudocode.
7. What are the control structures in pseudocode?
8. Define Recursion. Give an example.
9. Define Time and space complexity.
10. Distinguish performance analysis and performance measurement.
11. What are the components of fixed and variable part in space complexity?
12. Define program step.
13. What are the methods to determine step count?
14. Define input data size.
15. Describe frequency table method.
16. Define best, average and worst case step count.
17. Define break-even point.
18. Define asymptotic notation.
19. Define Big Oh notation.
20. Define Theta notation.
21. Define Omega notation
22. Define little Oh and Little Omega notation.
23. What does O(1) mean?
24. How do you time a very short event?
25. What are the design techniques that are used to devise algorithms?
26. Define recursion. What are its types?
27. Write an algorithm to find if the given no is Armstrong no? Find its time complexity?
28. Differentiate algorithm and program.
29. Find the order of 20n^3 + 100n^2 + 2.
PART – B
1. Explain Towers of Hanoi problem and solve it using recursion.
2. Find the time complexity and space complexity of the following problems.
1) Factorial using Recursion.
2) Compute nth Fibonacci Number.
3) Compute x^n or exponentiate(x, n).
4) mxn matrix multiplication
5) nxn matrix multiplication
6) mxn matrix addition.
7) Sequential/linear search.
3. Describe best, worst and average case analysis with an example.
Unit – II
PART – A
1. What is the divide and conquer technique?
2. Give the control abstraction for divide and conquer technique.
3. Write the recurrence relation for DandC.
4. Time complexity of binary search is O(log n). Justify.
5. Write a straight forward max min algorithm.
6. Explain the greedy method.
7. Define feasible and optimal solution.
8. Write the control abstraction for greedy method.
9. What are the constraints of knapsack problem?
10. What is a minimum cost spanning tree?
11. Specify the algorithms used for constructing Minimum cost spanning tree.
12. State single source shortest path algorithm (Dijkstra’s algorithm).
13. Calculate the T(n) for the given recurrence form
T(n) = T(1) if n=1
T(n) = aT(n/b)+f(n) if n>1
where a=2,b=2, T(1)=2, f(n)=n;
PART – B
1. Explain the binary search algorithm with an example.
2. Explain the max-min problem using the divide and conquer technique. Compute its time complexity.
3. Explain merge sort with an example. Compute its time complexity.
4. Explain Quick sort with an example. Give its time complexity.
5. Solve the knapsack problem using greedy technique.
6. Explain Prim’s algorithm to construct Minimum cost spanning tree.
7. Explain Kruskal’s algorithm to construct Minimum cost spanning tree.
8. Explain Optimal Randomized algorithm to construct Minimum cost spanning tree.
9. Explain single source shortest path algorithm (Dijkstra’s algorithm).
UNIT – III
PART – A
1. Define Dynamic programming technique.
2. Define Principle of optimality.
3. What do you mean by Multistage graph.
4. Differentiate the 2 approaches in finding the minimum cost path of multistage graph.
5. Find the minimum cost path using forward and backward technique for the graph given below.
6. Give the conditions to the table in 0/1 knapsack.
7. The 0/1 knapsack problem cannot be solved by the greedy technique. Why?
8. Explain briefly about Traveling sales person problem.
9. What are the 3 traversal technique for binary trees.
10. What do you mean by traversal?
11. Give the non recursive algorithm for Triple order traversal.
12. Give the recursive algorithm for Triple order traversal.
13. Define Breadth first search.
14. Define Depth first search.
15. What is a breadth first spanning tree? Give an eg.
16. Give the constraints to solve the Traveling sales person problem in dynamic programming.
17. What is 0/1 knapsack problem.
18. Define connected graph. Give an eg.
19. Define Articulation point. Give the condition to identify an articulation point.
20. Identify the articulation points and draw the biconnected components for the graph given below.
PART – B
1. Explain all pairs shortest path algorithm with an eg. Give its time complexity
2. What is a multistage graph? Explain with an eg. Write the pseudo code for finding the minimum cost path using the forward approach.
3. What is a multistage graph? Explain with an eg. Write the pseudo code for finding the minimum cost path using the backward approach.
4. Write an algorithm for 0/1 knapsack problem.
5. Write and explain an algorithm for BFS and DFS. Give an eg.
6. Give an algorithm to identify articulation points and to construct biconnected components. Explain with an eg.
UNIT – IV
PART – A
1. What are explicit constraints and implicit constraints?
2. Explain 8-Queen problem in brief.
3. What are static trees and dynamic trees?
4. Give any 4 problems that could be solved by backtracking.
5. What are the constraints of 8-Queens problem
6. Define m-colorability optimization problem.
7. What is a Hamiltonian cycle?
8. What are the 2 methods of Branch and bound techniques?
9. Compare and contrast LC-BB and FIFO BB.
10. What is a reduced cost matrix?
PART – B
1. Explain N-Queens problem using Backtracking.
2. Explain Graph Coloring.
3. Explain sum of subsets.
4. Explain Hamiltonian cycles.
5. Solve Knapsack problem using backtracking.
6. Explain Traveling Salesperson problem using branch and bound techniques.
UNIT – V
PART – A
1. What is P and NP?
2. What is deterministic algorithm?
3. What is Non-Deterministic Algorithm?
4. Draw the relationship between P, NP, NP complete and NP-hard.
5. What is the property of NP-Complete problem?
6. What is the property of NP-Hard problem?
7. What are the two most famous unsolved problems in Computer science?
PART – B
1. Explain the basic concepts of P, NP, NP-Complete and NP-Hard.
2. Prove a graph problem is NP-Hard.
3. Explain a NP-Hard Scheduling problem.
4. Explain a NP-Hard code generation problem.
5. Explain the concepts of Approximation algorithm.
Step 4: Nerdy stuff - measuring sound.
The speed of sound is roughly 340 meters per second at sea level, but this is when air is the medium through which the sound waves travel. Because propane has a different density than air, the velocity of sound in it is also different, and like all gases, the density changes with heat or pressure changes. For our purposes, we can work with a velocity of 257 meters per second.
As mentioned in the last step, sound is a vibration; we measure the frequency of this vibration in hertz (Hz), which is the number of cycles of the vibration per second.
Wikipedia tells us that "The frequency (f) is equal to the speed (v) of the wave divided by the wavelength (lambda) of the wave".
So in other words - frequency = speed / wavelength or:
f = v / lambda
To find the wavelength, we use basic algebra - multiply by lambda and divide by f to get:
lambda = v / f
To test this we can take the sound wave used to demonstrate the device in the video as an example (360 Hz), and use our rough speed of sound for v.
lambda = 257 (m/s) / 360 Hz
This gives us a value for lambda of about 0.71 meters, which should be close to the distance between the peaks of the flames, though the actual measured value may differ from what is calculated given the above mentioned scenarios.
Note - for some reason the lambda symbol keeps turning into this when I save "�»". So I've replaced the symbol with the word "lambda". I apologize for any confusion.
Special thanks to user cposparks, who found an error on this page when it was originally published, I've since made best efforts to correct it.
To pennyroyal69: I think the problem is with the size of the holes in your Rubens tube, not the length! You see, larger holes mean lower pressure and the fire gets into the tube, but make the holes smaller and the length of the flames will increase. Hope it works :-)
Is it possible to do this Rubens tube using LPG? Please send me whether it can be done using LPG to my email id shahrukhextreme@gmail.com
Were your experimental measurements similar to your theoretical measurements? I ask this because the speed of sound is 340 m/s in air but the sound waves are being created in a medium that is
composed of propane. The speed of sound in propane is ~247 m/s. This would significantly change the theoretical wavelength for an arbitrary frequency.
I recall things being pretty close -- though now that you mention it, considering the peaks and troughs are fairly subtle, particularly at the longer wavelengths where the discrepancies would be more
noticeable, I very well may have been off in my measurements. There's also the possibility of an air/propane mixture in the tube, which would put the density of the medium somewhere between propane
and air.
It's been three years (to the day!) since this Instructable was published, and unfortunately, I no longer have the device, so am unable to go back and check for confirmation, one way or the other.
However, your observation's very astute, and I've started to do some research on the issue, though have found conflicting reports. As soon as I've got things figured out one way or another, I'll
update the Instructable to reflect that - though until then I'll put a disclaimer on the top of this page.
Web Resources
Lesson Plans
Talk or Text (Systems of Equations)
In this lesson, students compare different costs associated with two cell phone plans. They write equations with 2 variables and graph to find the solution of the system of equations. They then
analyze the meaning of the graph and discuss other factors involved in choosing a cell phone plan.
Math Makes a Connection: The Locker Problem
In this interactive game, students use their knowledge of multiples to solve a locker problem. Students will manually open or close a locker by clicking on the lockers. They must answer the following
question: Which of the 1000 locker doors are open when every student finishes?
Roebling ACT Tutor
Find a Roebling ACT Tutor
...I have had great success in the past in both scenarios. If you are in need of a tutor who can make your child feel comfortable with math, and help them understand the concepts they need to
succeed on today's rigorous classroom environment , please reach out -- I would love to help. Mr.
10 Subjects: including ACT Math, geometry, algebra 1, algebra 2
...I also taught Honors chemistry, physics and calculus at two NJ high schools. I am currently employed at Princeton University as a lab manager and research scientist. In my spare time, I am
frequently involved as a tutor – and I love it!
19 Subjects: including ACT Math, chemistry, physics, geometry
...What I love most about tutoring is being able to establish a trusting relationship with a family, and to become a part of a student's journey in whatever academic endeavor they are
undertaking.As a conservative Jew, I have always been interested in studying Hebrew, both for the purpose of convers...
48 Subjects: including ACT Math, reading, writing, English
Physics and math can be a daunting task to many students, and some teachers don't make it any easier with either over-simplistic explanations that don't help or no explanations at all. As a
physics teacher with degrees in Math and Physics, I am aware of the areas of struggle students can experience...
9 Subjects: including ACT Math, physics, calculus, geometry
I am a graduate of The College of New Jersey with a BA in Mathematics and Secondary Education, and am a certified K-12 teacher of mathematics. I have experience with test prep including: SSAT, SAT, ACT, and Praxis Mathematics Content Knowledge. I have tutoring experience with students as young as 5 years old ...
11 Subjects: including ACT Math, calculus, geometry, algebra 1
Question: What is the area of the sector of a circle if the radius is 16 cm and the angle of rotation is 150 degrees? I got this question wrong, so I am hoping for some help.

Reply: I think it's just the radius times how much it's rotated, but the degrees should be radians: 150 * (pi/180) = 5pi/6, so 16 * 5pi/6 is the answer, which should come out to ~41.9.

Asker: That is what I had put on my homework and he counted it wrong. I think that is for length, not area.

Reply: Yes, you're right. One way you can do it, though, is to find the area of the whole circle, then find what 150/360 of it is. A = pi*r^2, so A = pi*16^2 ≈ 803.84 (using pi ≈ 3.14). But that is for 360 degrees, and you want only 150. 150/360 = 5/12, so the real area is (5 * 803.84) / 12, which should be roughly 335 square centimetres.

Asker: Where did the five come from?

Reply: The 5 comes from the ratio 5/12, which is just a simplified 150/360, as there are 360 total degrees in a circle and you are looking for 150 of it (a little less than half the circle). So you are looking for the area of 150/360 of a circle, or 5/12. You know the area of the full circle is about 803.84, so (5/12) * 803.84 should give you the smaller area. Basically I set up a ratio: 803.84 sq cm for 360 degrees, x sq cm for 150 degrees, so 803.84/360 = x/150, which gives x = 803.84 * 150/360 = 803.84 * (5/12), x being the area, if that helps you picture it.

Asker: OK, I don't know what I did, but I did not get 803.84.

Reply: 803.84 is the area of the TOTAL circle; you only want PART of the circle. The part of the circle you are looking for is 5/12 of the total circle. Therefore, (5/12) * 803.84 is the answer, which is about 335 square centimeters.

Asker: OK, thank you. I understand.
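The computation from the thread as a small script (a sketch using math.pi rather than the rounded 3.14 above, so the result comes out slightly different):

```python
import math

def sector_area(radius, angle_deg):
    # a sector is the fraction angle/360 of the full circle's area pi*r^2
    return (angle_deg / 360) * math.pi * radius ** 2

print(sector_area(16, 150))  # ~335.1 square centimeters
```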
Vertically Shot Bullet Landing Speed
Name: David
Status: other
Age: N/A
Location: N/A
Country: N/A
Date: N/A
Question: In the real world (not the physics world of no air resistance), how do I calculate the speed of a bullet that was shot straight into the air when it returns to Earth? For example, say a rifle had the muzzle velocity of an M-16; what would the bullet's speed be when it comes straight back down? On the news one sees celebrations in the Middle East with shooting into the air. Just how dangerous is that?
Replies: The upper limit is of course as you point out (no air resistance). Including air resistance is a much more complex calculation because it depends upon many other factors --
air density as a function of height, the shape of the bullet, the rotational speed of the bullet, whether the bullet is wobbling or tumbling on the way down. Really a complicated mess to calculate, but for the moment DO ignore air resistance. A rule of thumb in ALL physics/chemistry/engineering is to do an order of magnitude calculation to see what, if any, more
complicated calculation may be necessary -- but always carry out "reality checks" to make sure you are on track. Let us consider a rifle bullet vs. a hailstone (spherical).
RIFLE BULLET: A typical muzzle velocity of a rifle (Google search) is 3000 ft/sec = 1000 m/sec (notice I am rounding here because we are just looking for where the decimal falls). A
typical bullet mass is 120 grains [weird units, but 1 grain = 0.065 gm] = 7.8 gm = 10 gm (close enough). Now a "reality check". The density of lead is 11.4 gm/cm^3 = 10 gm/cm^3 (close
enough). So the volume of the bullet is: volume = mass / density = 10 gm / 10 (gm/cm^3) = 1 cm^3. That is probably pretty conservative, but OK for an order of magnitude. Remember the
shell casing does not count -- only the projectile. In the absence of air and a perfect world, kinetic energy is conserved, so the bullet weighing 10 gm will hit the ground after a
vertical trajectory at a speed of 1000 m/sec. OUCH!!! Let us calculate the energy. From the muzzle velocity (1000 m/sec) and the bullet mass (10 gm = 10^-2 kg) and K.E. = 1/2 m(v)^2 we
get K.E. = (1/2)(10^-2)(1000)^2 = 0.5×10^(-2+6) = 5×10^3 = 5000 Joules. HAILSTONE: A spherical hailstone weighing 10 gm has a volume of 10 cm^3 since the density is 1 gm/cm^3. REALITY CHECK: The volume = 10 cm^3 = (4/3)·pi·r^3, so r^3 = 2.4 cm^3, giving a radius r = 1.3 cm or a diameter of 2.6 cm (a fairly nominal hailstone -- about an inch in diameter).
The potential energy of a hailstone weighing 10 gm = 10^-2 kg falling from 10 km is: P.E. = m·g·h = 10^-2 × 9.8 × 10,000 (about 10^(-2+1+4) = 10^3 = 1000 Joules). Its velocity, assuming complete conversion of the P.E. to K.E. = (1/2)·m·v^2, is given by: 1000 = (1/2)×10^-2×v^2, so v^2 = 200,000 m^2/s^2, or about 450 m/sec, compared to 1000 m/sec for the bullet.
Now if you want to refine the estimate further a fair assumption would be to assume that air resistance would be proportional to the cross sectional area of the object and the time of
flight. That is, the longer the object is in the air the greater will be the drag from the atmosphere until the projectile reaches its maximum terminal velocity. Even without doing the
calculation, assuming the bullet is a cylinder (you can vary the length / diameter ratio) and the hailstone is spherical, the air resistance will be much less for the bullet than for
the hailstone for two reasons -- cross sectional area, and time of flight. This also gets a bit messy because the bullet experiences drag both going up and down, but the hailstone only
experiences drag on the way down. Of course, there are other complicating factors that have been ignored. A big one is that the speed of the hailstone will depend upon whether it is
falling in an up-draft, or is being accelerated by being in a down-draft.
As to danger there is no question that getting hit by such a spent projectile could be lethal. Actually getting hit by a 1 inch hailstone would be very unpleasant and possibly also dangerous.
I found both a better explanation and a plug-in algorithm, at least for spherical objects launched vertically, on the great website.
The drag force depends upon the square of the velocity, a shape dependent drag factor "C", the cross sectional area, and the density of air. The key point is that it is NOT negligible.
Depending upon the values of the inputs the projectile may / may not attain a constant terminal velocity.
The inputs assumptions are: a spherical object, standard gravity constant for Earth, constant atmospheric density, laminar air flow, no spinning or wobbling (not an issue for a
sphere), constant temperature, and of course vertical trajectory. The algorithm then cues the user for the density of air, the shape-dependent drag coefficient, C, the radius of the
sphere, the density (or mass) of the spherical projectile, the initial velocity. The default values: are 1.29 kg/m^3 for the density of air, C = 0.5 (C can vary from about 0.1 to 2.0),
but these can be changed. The algorithm outputs are: terminal velocity, the velocity at any intermediate height on the way up, the peak height, the time to achieve peak height, the
velocity at any intermediate height on the way down. This allows you to find the velocity upon impact.
For comparison, I chose lead (density = 11.3 gm/cm^3, mass 47.3 gm) and ice (density = 0.9 gm/cm^3, mass 3.8 gm), with an initial velocity in both cases of 3000 m/sec (of course, this could be changed). I left the other parameters at their default values. The outputs were:
For lead: peak height = 1771.5 m, peak time = 10.7 sec, terminal velocity = 67.7 m/ sec, velocity upon impact = 67.6 m/sec, time to impact = 31 sec. Note: the lead projectile reached
its terminal velocity before impact.
For ice (hail): peak height = 188.2 m, peak time = 3.0 sec, terminal velocity = 19.1 m/sec, velocity upon impact = 19.1 m/sec, time to impact = 11.2 sec. Note: the hail stone also
reached its terminal velocity prior to impact. The momentum upon impact for lead and ice would be 3.2 kg·m/s and 0.07 kg·m/s, respectively. In either case you would take a pretty nasty hit from either projectile.
The algorithm is a good teaching tool because it takes the pain out of the number crunching. This allows the physics student to carry out all manner of thought experiments without
getting brain dead from the manipulations. The bottom line is initial velocity and air friction are very important factors. I think that the non-vertical trajectory must be solved
numerically, because the gravitational force and air drag force are not co-linear, but I am not sure of that.
Vince Calder
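A minimal sketch of the terminal-velocity part of that calculation (assuming spheres of 1 cm radius, which matches the quoted 47.3 gm lead and 3.8 gm ice masses, and the stated defaults of 1.29 kg/m^3 for air density and C = 0.5); it reproduces the quoted outputs to within rounding:

```python
import math

RHO_AIR = 1.29   # kg/m^3, the default air density quoted above
G = 9.8          # m/s^2
C = 0.5          # the default drag coefficient quoted above

def terminal_velocity(mass_kg, radius_m):
    # drag balances weight: (1/2)*rho*C*A*v^2 = m*g  =>  v = sqrt(2*m*g / (rho*C*A))
    area = math.pi * radius_m ** 2
    return math.sqrt(2 * mass_kg * G / (RHO_AIR * C * area))

print(terminal_velocity(0.0473, 0.01))  # lead sphere: ~67.6 m/s (quoted: 67.7)
print(terminal_velocity(0.0038, 0.01))  # ice sphere:  ~19.2 m/s (quoted: 19.1)
```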
Setting the force of air resistance (F=rCAv^2/2) equal to the weight of a piece of hail, I get a terminal velocity of 57 m/s. Note that this is an engineering estimate not rising to
the level of Newton's Laws. There is a caution that for more than 500 meters, this relationship may not hold true. But assuming it does hold true, we continue. For example for the
effective area, I take the cross sectional area of a 1/4" diameter sphere and use a C factor suitable for spheres, 1.2.
In the equation, r is the density of air (1.3 kg/m^3), is a fudge factor (I took 1/3), A is the cross sectional area (3.9E-5 m^2) and v is the speed in m/s. For the hail I got a mass
of 2.2E-4 kg (around 0.008 oz). Equating the air resistance to the weight of the hail gives 57 m/s for the terminal velocity.
If one stops the hail with constant acceleration in 1 cm, a force of 35.7 N (or 8.7 lb) is necessary; to stop in a millimeter, 87 lb. These do not seem to be fatal forces.
Incidentally, I find it very difficult to make these calculations without error! I always try to do them in at least two different ways as a check, but...
As for the rifle recoil: If you fire a 30 gm bullet at 2800 m/s (numbers taken off a web site), a 10 lb (4.5 kg) rifle recoils at 3.8 m/s. To stop the rifle in 2 in, a force of 146 lb
is required. Wow! Barely conceivable.
At another site (http://www.chuckhawks.com/recoil_table.htm) I found a 0.458 Win. Mag. which fires a 500 gr (that's grains) bullet at 2100 ft/s. This is a 9 lb rifle and the recoil is
quoted as 21.1 ft/s. Matching the momentum of the rifle to the momentum of the bullet, I get a recoil speed of 16.6 ft/s. I have no understanding of why conservation of momentum is not
the same for me as for them. Especially as their amount of non-conservation varies among the rifles. Anyway, to stop the recoil of this 4.1 kg rifle in 2 in (0.05 m), the acceleration
must be 260 m/s^2 and the force is 1066 N or 240 lb. I guess I just do not understand rifles; it seems to me that force could knock someone down. Maybe it does??? Of course if the
marksman stops the gun in 8 in, the required force is only 60 lb...
Best, Dick Plano, Professor of Physics emeritus, Rutgers University
Is an ideal generated by multilinear, irreducible, homogeneous polynomials of different degrees always radical?
I asked this question on math.se and someone even put a bounty on it, yet there was no answer. Hence, I am asking here. Assume $\Bbbk$ to be a field of characteristic zero.
Definition. A polynomial $f\in\Bbbk[x_0,\ldots,x_n]$ is called multilinear if $\deg_{x_i}(f)=1$ for each $0\le i \le n$. In other words, $f$ is linear in each variable. If $f$ is homogeneous of
degree $d$, then $f$ is a linear combination of monomials of the form $x_{i_1}\cdots x_{i_d}$ with $0\le i_1<i_2<\cdots<i_d\le n$.
Given an ideal $I=(f_1,\ldots,f_r)\subseteq\Bbbk[x_0,\ldots,x_n]$ with the property that the $f_i$ are irreducible, homogeneous, multilinear polynomials of (pairwise) different degrees, I am asking
whether $I$ is radical.
I actually don't believe it holds in general - if this is the case, I would love to see a counterexample.
If it is true however, then I am sure that the assumption on the degree can not be dropped (see this example of an ideal generated by irreducible, homogeneous, multilinear polynomials which is not
radical). I would also love to see a proof in this case, of course.
Thanks a lot in advance!
One general fact that comes to mind: If an ideal $I\subset \mathbb{k}[x_1,\dots,x_n]$ contains an element of the form $f = gx_1 + h$ where $g,h$ don't use $x_1$, and $g$ is a
nonzerodivisor mod $I$, then the primary components of $I\cap \mathbb{k}[x_2,\dots,x_n]$ and $I$ are in bijection. This is birational projection and I learned it from Mike Stillman
(see Theorem 23 in http://arxiv.org/pdf/math/0301255.pdf).
Now here is almost a counter-example to your question:
$$ I = \langle x_{1} x_{9}-x_{4}x_{8}, x_{4}x_{6}-x_{7}x_{9}, x_{2}x_{5}-x_{3}x_{9}, x_{2}x_{3}-x_{5}x_{6} \rangle \subset \mathbb{k}[x_1,\dots,x_9]$$
This ideal has 6 components, one of which is primary with minimal prime $\langle x_9, x_5, x_4, x_2 \rangle$.
If I read your hypotheses correctly, the only bit missing is the pairwise different degrees of the generators. I have an inkling that this may be a red herring. If I modify my example
by adding some extra unrelated variables, then the embedded component over $\langle x_9, x_5, x_4, x_2 \rangle$ is essentially unchanged:
$$\langle x_{1}x_{9}-x_{4}x_{8}, x_{4}x_{6}y_{1}-x_{7}x_{9}y_{2}, x_{2}x_{5}y_{3}y_{4}-x_{3}x_{9}y_{5}y_{6}, x_{2}x_{3}y_{7}y_{8}y_{9}-x_{5}x_{6}y_{10}y_{11}y_{12} \rangle$$
The Binomials package in Macaulay2 quickly confirms that this ideal is not radical.
Great! Thanks a bunch. – Jesko Hüttenhain Sep 4 '13 at 11:32
Now let me get the math.se bounty :) – Thomas Sep 4 '13 at 13:12
Eigenfunctions and Fundamental Solutions of the Fractional Two-Parameter Laplacian
International Journal of Mathematics and Mathematical Sciences
Volume 2010 (2010), Article ID 541934, 18 pages
Research Article
Eigenfunctions and Fundamental Solutions of the Fractional Two-Parameter Laplacian
Department of Pure Mathematics, Faculty of Science, University of Porto, Campo Alegre street, 687, 4169-007 Porto, Portugal
Received 5 November 2009; Accepted 22 February 2010
Academic Editor: Nak Cho
Copyright © 2010 Semyon Yakubovich. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We deal with the following fractional generalization of the Laplace equation for rectangular domains , which is associated with the Riemann-Liouville fractional derivatives , , where , . Reducing the
left-hand side of this equation to the sum of fractional integrals by and , we then use the operational technique for the conventional right-sided Laplace transformation and its extension to
generalized functions to describe a complete family of eigenfunctions and fundamental solutions of the operator in classes of functions represented by the left-sided fractional integral of a summable
function or just admitting a summable fractional derivative. A symbolic operational form of the solutions in terms of the Mittag-Leffler functions is exhibited. The case of the separation of
variables is also considered. An analog of the fractional logarithmic solution is presented. Classical particular cases of solutions are demonstrated.
1. Introduction
Let $D_{0+}^{\alpha}$ and $I_{0+}^{\alpha}$ be the Riemann-Liouville fractional derivative and integral of order $\alpha>0$ defined by [1, 2]
$$\bigl(D_{0+}^{\alpha}f\bigr)(x)=\Bigl(\frac{d}{dx}\Bigr)^{n}\frac{1}{\Gamma(n-\alpha)}\int_{0}^{x}\frac{f(t)\,dt}{(x-t)^{\alpha-n+1}},\qquad n=[\alpha]+1,\tag{1.1}$$
$$\bigl(I_{0+}^{\alpha}f\bigr)(x)=\frac{1}{\Gamma(\alpha)}\int_{0}^{x}\frac{f(t)\,dt}{(x-t)^{1-\alpha}},\qquad x>0,\tag{1.2}$$
where $[\alpha]$ means the integer part of $\alpha$. Consider a class of the linear nonhomogeneous differential equations:
where , , , , is a prescribed function, and is to be determined. Denoting by
equation (1.3) can be written in the form , where is the identity operator. When , we come out with the classical Poisson equation. Therefore we call fractional partial differential equation (1.3)
the fractional two-parameter Poisson equation (FPE). Its homogeneous analog is naturally called the fractional Laplace equation (FLE) or fractional two-parameter Laplacian.
In this paper we present a general operational approach [3] to describe eigenfunctions and fundamental solutions of the fractional two-parameter Laplacian based on the conventional right-sided
Laplace transform [4]
$$(\mathcal{L}f)(p)=\int_{0}^{\infty}f(t)\,e^{-pt}\,dt\tag{1.5}$$
of absolutely integrable functions with respect to the measure and its distributional analog
in Zemanian's space defined below. Operational solutions will be written in terms of the generalized Mittag-Leffler function $E_{\alpha,\beta}(z)$, $\alpha>0$ [1, 2, 5], which is defined in terms of the power series:
$$E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\beta)}.\tag{1.7}$$
In particular, the function $E_{\alpha,\beta}(z)$ is entire of the order $1/\alpha$ and type $1$. The exponential function and trigonometric and hyperbolic functions are expressed through (1.7) as follows:
$$E_{1,1}(z)=e^{z},\quad E_{2,1}(z^{2})=\cosh z,\quad E_{2,2}(z^{2})=\frac{\sinh z}{z},\quad E_{2,1}(-z^{2})=\cos z,\quad E_{2,2}(-z^{2})=\frac{\sin z}{z}.$$
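Since the series (1.7) converges for every finite $z$, it can be evaluated numerically by direct truncation; a minimal Python sketch (the function name, term cutoff and tolerance are illustrative choices, not from the paper):

from math import gamma, cosh, e

def mittag_leffler(z, alpha, beta, max_terms=100, tol=1e-16):
    # Truncated power series (1.7); real z of modest size, no overflow guard.
    total = 0.0
    for k in range(max_terms):
        term = z ** k / gamma(alpha * k + beta)
        total += term
        if abs(term) < tol * max(1.0, abs(total)):
            break
    return total

# Sanity checks against the classical special cases quoted above:
print(mittag_leffler(1.0, 1, 1), e)        # E_{1,1}(1)   = e
print(mittag_leffler(4.0, 2, 1), cosh(2))  # E_{2,1}(2^2) = cosh 2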
We will consider in the sequel the existence and uniqueness of general solutions of the fractional Laplacian and its particular cases. Possible applications and an investigation of the fractional
two-parameter Poisson equation (1.3) are still out of the framework of this paper and will be done in forthcoming articles of the author.
2. Eigenfunctions and Fundamental Solutions of the Fractional Laplace Equation
We begin with the following.
Definition 2.1 (see [1]). By , one denotes the class of functions , which are continuously differentiable on the segment up to the order and is absolutely continuous on .
It is known [1] that the class contains only functions represented in the form
where and are arbitrary constants. It is not difficult to find that , Moreover, if then fractional derivative (1.1) exists almost everywhere and can be represented by the formula
Definition 2.2 (see [1]). By denotes the class of functions represented by the left-sided fractional integral (1.2) of a summable function, that is, , .
A description of this class is given by the following.
Theorem 2.3 (see [1]). A function , if and only if , and , .
Definition 2.4 (see [1]). One will say that a function has a summable fractional derivative if , .
If exists in the ordinary sense, that is, is differentiable in each point up to the order , then evidently admits the derivative in the sense of Definition 2.4.
So, if , then . Otherwise if just admits a summable fractional derivative, then the composition of fractional operators (1.1) and (1.2) can be written in the form (see [1])
Nevertheless we note that for any summable function .
Consider now the eigenfunction problem for the fractional Laplace equation in the rectangular domain
where , in the following three cases:
(i) belongs to classes , by and , respectively;(ii) admits a summable fractional derivative by and belongs to by or vice versa;(iii) admits summable fractional derivative , by and , respectively.
Theorem 2.5. In case (i) trivial solution of (2.4) is the only solution.
Proof. Indeed, taking the operator from both sides of (2.4) and using the identity it becomes Hence, applying the operator to both sides of (2.5), we use the fact that due Fubini's theorem this
operator commutes with . Then we obtain Hence from conditions of the theorem we observe that fractional integrals of the equation (2.6) are Laplace-transformable functions. Therefore we may act on (
2.6) by the conventional right-sided Laplace transform (1.5), let say, by with . Taking into account its convolution and operational properties [3] after straightforward calculations we arrive at the
following second kind homogeneous integral equation of the Volterra type: where , , , and Appealing to [5, Chapter ] we find that (2.7) has the only trivial solution in the space of summable
functions and because for each . Cancelling the Laplace transform and using its uniqueness property for summable functions we get . Theorem 2.5 is proved.
In case (ii), (2.6) should be substituted by the following equalities (see (2.3)):
or where we denoted by Cauchy's fractional initial conditions. Treating, for instance, (2.9) we take the Laplace transform from its both sides and arrive at the following integral equation: where It
is known [5] that a unique solution of (2.15) in the class of summable functions is which involves as the kernel the generalized Mittag-Leffler function (1.7). Next, substituting (2.16) into (2.15),
using (1.7), (2.17), index laws for fractional operators [1], and the estimates we write solution (2.18) of the Volterra type equation (2.15) in terms of the Mittag-Leffler functions: In order to
cancel the Laplace transformation by in (2.20) we will appeal to its distributional form (1.6) in Zemanian's space (see [4]), which is dual of the countable-union space of test functions defined by
where is a sequence of real numbers , which converges monotonically to as and each is a testing-function space of smooth functions and for each nonnegative integer it satisfies According to [4,
Chapter III] we assign a topology generated by the multinorm (2.22). Consequently, is a countably multinormed space and the kernel of the Laplace transform is a member of if and only if . Taking the
space we have an advantage that the space of smooth functions with compact support is dense in and the members of the dual are distributions. Moreover, any is a right-sided Laplace-transformable
generalized function via the formula (1.6) with the right half-plane as a region of definition. Meanwhile, any analytic function on the half-plane , which satisfies the estimate where is a
polynomial, may be identified as the Laplace transform (1.6) of a right-sided Laplace-transformable generalized function which is concentrated on . Finally, the uniqueness and inversion properties
are true and the inversion formula has the form in the sense of convergence in for any .
So in order to find eigenfunctions and general fundamental solutions of the fractional Laplace equation (2.4) we will invert the Laplace transform in (2.20) by using formula (2.24). Of course, we
understand that the conventional right-sided Laplace transform (1.5) is a particular case of (1.6) being applied to a regular generalized function ,
Further, we have
where is a fractional part of the number, the convergence is in , and we assume that for any . Therefore, canceling the Laplace transformation in (2.20) and taking into account (2.16) after
straightforward calculations we get the expression for a family of eigenfunctions of (2.4): where is the generalized Wright function [2]: and the convergence of series in (2.26) is in . Letting in (
2.26) we immediately come out with a classical fundamental solution of (2.4): Taking into account definition (1.7) of the Mittag-Leffler function, solution (2.28) may be written in the operational
form Analogously, in the case of (2.10) we show that are also correspondingly eigenfunctions and a fundamental solution of (2.4).
On the other hand we may write solutions (2.29) and (2.31) in the form of the generalized Neumann series. Namely, we find
taking in mind the analyticity of series in (2.32) by in the interval and by . In the same manner, we represent (2.31) by the expression with arbitrary assuming analyticity of the corresponding
series by in the interval and by . Now taking into account zero values , , , it is not difficult to verify that (2.32) and (2.33) are classical fundamental solutions of the fractional Laplacian
subject to conditions (2.11), (2.12), (2.13), and (2.14) respectively. Thus we have proved
Theorem 2.6. In case (ii) functions (2.26) and (2.30) represent eigenfunctions of the fractional Laplacian (2.4) and expressions (2.28) and (2.31) are unique classical fundamental solutions subject
to conditions (2.11), (2.12), (2.13), and (2.14) respectively. These solutions can be written in the corresponding form of generalized Neumann series (2.32) and (2.33) under additional conditions of
Finally, in case (iii) an analog of (2.9) and (2.10) is
Consequently, in the right-hand side of Volterra's equation (2.15) we get an additional term
which will give a source for generalized eigenfunctions and fundamental solutions of the fractional Laplacian (2.4). In fact, owing to the estimates
we write solution of the Volterra type equation (2.35) in terms of the Mittag-Leffler functions and generalized Neumann series:
Cancelling the Laplace transformation we take in mind the relations
where is Dirac's delta-function and denotes the convolution product. Therefore, after straightforward calculations we get the expression for a family of eigenfunctions of (2.4):
where the convergence of series in (2.39) is in . Letting in (2.39) we derive a generalized fundamental solution of (2.4):
which may be written in the operational form
Analogously, functions
are also correspondingly eigenfunctions and generalized fundamental solutions of (2.4).
Theorem 2.7. In case (iii) functions (2.39) and (2.42) represent eigenfunctions and expressions (2.40), (2.41), and (2.43) are generalized fundamental solutions of the fractional Laplacian (2.4).
Example 2.8. As a particular case, it is not difficult to obtain from (2.28), (2.31) the classical fundamental solution of the Laplace equation . Indeed, putting , , we assume, correspondingly, , in
(2.28), (2.31), and for instance, solution becomes Analogously we treat solution (2.31).
3. Separation of Variables: Analytic Solutions
The method of separation of variables allows us to simplify eigenfunctions and fundamental solutions of the fractional Laplacian. Indeed, putting , substituting in (2.34), and taking into account
initial conditions (2.11), (2.12), (2.13), and (2.14) it becomes
where , are arbitrary constants. If , we divide (3.1) by this product and separate variables, getting two Abel's type second kind integral equations to define , , namely,
where are constants. We note that the equality for at least one point agrees with (3.1) and (3.2). So we solve the latter equations similarly to (2.18), arriving at the following family of
eigenfunctions , where
On the other hand, we may write these solutions in terms of the generalized Neumann series. Precisely, denoting by
and recalling index properties for the fractional integral (1.2) we get representations of (3.3) in the respective resolvent form for fractional integral operators , :
where as usual is the identity operator. It is easily seen that series (3.5) are analytic with respect to and . Further, since (see (1.2))
we have
Therefore the resolvent functions , are analytic in the open discs:
respectively. Thus we write a family of eigenfunctions for (2.4) in the resolvent form
Indeed, substituting (3.9) into (2.4) taking into account the values after a simple change of the summation index into the series we easily satisfy (2.4). But we will extend our family of
eigenfunctions considering
with arbitrary such that , and the corresponding resolvent function (3.5) are analytic by and . So substituting (3.10) into (2.4) and ignoring trivial cases which drive immediately to (3.10) with ,
(see (2.3)), after separation of variables we obtain fractional differential equations to define :
where is an arbitrary constant. Hence acting by inverse operators and on (3.11) with the use of (2.3) we get, correspondingly,
The latter equations are solved, for instance, in [1, 2] and we obtain the following solutions are constants):
Consequently, (2.4) has families of eigenfunctions (3.9) and (3.10) with and given by (3.13). The case naturally gives classical fundamental solutions with , for instance, in the form (3.3).
Remark 3.1. Letting in (3.3) and using (1.8) we obtain familiar trigonometric eigenfunctions of the Laplace equation .
Returning again to functions , , from Section 2 we suppose the following power-logarithmic analytic expansions in the neighbourhood of points , , namely,
Hence owing to [1] and straightforward calculations we get for each
where . Substituting the latter expressions into (2.32), (2.33) we get
where double series in (3.16), (3.17) are absolutely and uniformly convergent on the compact , owing to conditions | {"url":"http://www.hindawi.com/journals/ijmms/2010/541934/","timestamp":"2014-04-20T06:22:22Z","content_type":null,"content_length":"1047299","record_id":"<urn:uuid:f5b153a2-e07e-4483-939c-f73bcbc7407b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00365-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Vectorization of the product of several matrices ?
oc-spam66 oc-spam66@laposte....
Wed Oct 1 15:46:10 CDT 2008
Hello and thank you for your answer.
> There are at least three methods I can think of, but choosing the best one
> requires more information. How long are the lists? Do the arrays have
> variable dimensions? The simplest and most adaptable method is probably
The lists would be made of 4x4 matrices:
LM = [M_i, i=1..N], M_i 4x4 matrix
LN = [N_i, i=1..N], N_i 4x4 matrix.
N would be 1000 or more (why not 100000... if the computation is not too long)
> In [3]: P = [m*n for m,n in zip(M,N)]
Thank you for this one. I am curious about other possibilities.
Also: is there a document about how the Python interpreter works (order of operations, typical timings, ...)?
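A later idiom for lists of this size is to stack everything into one 3-D array and issue a single batched multiply; a sketch in modern NumPy (note that np.matmul and the @ operator postdate this 2008 thread, and the array names here are illustrative):

import numpy as np

N = 1000
A = np.random.rand(N, 4, 4)   # stands in for np.array(LM)
B = np.random.rand(N, 4, 4)   # stands in for np.array(LN)

P = A @ B                                # all N pairwise 4x4 products in one call
P2 = np.einsum('nij,njk->nik', A, B)     # equivalent einsum spelling

assert np.allclose(P, P2)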
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-October/037856.html","timestamp":"2014-04-18T21:08:36Z","content_type":null,"content_length":"3892","record_id":"<urn:uuid:4625bba1-e4e9-4de6-9dbb-852d53bc2cb8>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00574-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by John on Monday, September 29, 2008 at 11:58am.
Need help finding a formula for:
Suppose a bank offers you the following deal:
You pay to the bank an annuity amount of $A per year over the next 10 years and the bank will in turn pay you $40,000 per year starting at the end of year 11 and ending the payments by the end of
year 30.
Interest rate=10%/year throughout the 30 year period.
Find the annuity amount of $A you will be willing to pay over the next 10 yrs.
• Finance - economyst, Monday, September 29, 2008 at 3:22pm
First, an Excel spreadsheet is very helpful for solving these kinds of problems.
I believe you need to have a personal discount rate. How much would you pay to receive $40,000 thirty years from now? For this problem, I believe you are to assume the interest rate, r, is your personal discount rate. (While it's not stated, I will also assume you pay $A at the end of the year.)
So, the value of the amount you pay at time zero will be:
A/(1.1) + A/(1.1)^2 + A/(1.1)^3 + ... A/(1.1)^10
= A * sum(i) of 1/(1.1)^i as i goes from 1 to 10
= A * 17.53117
The value of the amount you receive at time zero is 40000/(1.1)^11 + 40000/(1.1)^12 + ... 40000/(1.1)^30
= 40000 * sum(j) of 1/(1.1)^j as j goes from 11 to 30
= 40000 * 163.4123
Set these two equal and solve for A.
• Finance - John, Monday, September 29, 2008 at 5:30pm
What formula did you use to calculate 17.53117? I found a formula that I thought would work to get that sum, but I got a different number. It is:
(1 - 1.1^(-10)) / 0.1
and also: (40000* 163)/17.53 = $371933 is the answer?
• Finance - economyst, Tuesday, September 30, 2008 at 9:43am
My bad. I didn't properly apply my own formula.
17.53 is the sum of (1.1)^i as i goes from 1 to 10. What I really want, as my original formula says, is the sum of (1/(1.1)^i) as i goes from 1 to 10. This turns out to be 6.1446. So the present
value of 10 payments of A over 10 years is A*6.1446.
Likewise, my 163.4123 is the sum of (1.1)^i as i goes from 11 to 30. What I really want is the sum of 1/(1.1)^i as i goes from 11 to 30. This new sum is 3.2823.
So, set A*6.1446 = 40000*3.2823.
A = 21367.
This makes much more sense.
Sorry for the confusion.
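The corrected figures are easy to verify; a quick Python sketch (variable names are mine):

r = 0.10
pay_factor = sum(1 / (1 + r) ** i for i in range(1, 11))    # ~6.1446
recv_factor = sum(1 / (1 + r) ** j for j in range(11, 31))  # ~3.2823
A = 40000 * recv_factor / pay_factor
print(round(pay_factor, 4), round(recv_factor, 4), round(A))  # 6.1446 3.2823 21367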
Related Questions
math - Jessica has a credit card from party bank and another from artic bank. ...
economics - Suppose you wish to invest X dollars in a bank account which pays 5...
Compound interest - Hello My teacher skipped over this and I have no clue how to...
Finance - You receive $12,000 and looking for a bank to deposit the funds. Bank ...
Finance - You receive $12,000 and looking for a bank to deposit the funds. Bank ...
Finance - Suppose your bank account will be worth $4,200.00 in one year. The ...
Finance - Suppose your bank account will be worth $4,200.00 in one year. The ...
economics - Suppose you wish to create a scholarship fund which will pay 73.43 ...
math - you borrow $1200 from a bank that bank charges 9.5% simple annual ...
math algebra - Mrs. martinez has $10,000 to invest.One bank offers her a return ... | {"url":"http://www.jiskha.com/display.cgi?id=1222703931","timestamp":"2014-04-16T04:17:54Z","content_type":null,"content_length":"10610","record_id":"<urn:uuid:2e979992-5d6a-4b88-a5d7-04cb033a79b2>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00590-ip-10-147-4-33.ec2.internal.warc.gz"} |
Defect Effect of Bi-infinite Words in the Two-element Case
Ján Maňuch
Let X be a two-element set of words over a finite alphabet. If a bi-infinite word possesses two X-factorizations which are not shift-equivalent, then the primitive roots of the words in X are conjugates. Note that this is a strict sharpening of a defect theorem for bi-infinite words stated in KMP. Moreover, we prove that there is at most one bi-infinite word possessing two different X-factorizations and give necessary and sufficient conditions on X for the existence of such a word. Finally, we prove that the family of sets X for which such a word exists is parameterizable.
Full Text:
GZIP Compressed PostScript PostScript PDF original HTML abstract page | {"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/dmtcs/article/viewArticle/146","timestamp":"2014-04-16T10:11:26Z","content_type":null,"content_length":"11612","record_id":"<urn:uuid:7f8810b7-c654-43b7-a8fe-60c0d6406aac>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
Constructing a parallel through a point (angle copy method)
Geometry construction using a compass and straightedge
This page shows how to construct a line parallel to a given line that passes through a given point with compass and straightedge or ruler. It is called the 'angle copy method' because it works by
using the fact that a transverse line drawn across two parallel lines creates pairs of equal corresponding angles. It uses this in reverse - by creating two equal corresponding angles, it can create
the parallel lines.
See also Constructing a parallel through a point (rhombus method).
Printable step-by-step instructions
The above animation is available as a printable step-by-step instruction sheet, which can be used for making handouts or when a computer is not available.
This construction works by using the fact that a transverse line drawn across two parallel lines creates pairs of equal corresponding angles. It uses this in reverse - by creating two equal
corresponding angles, it can create the parallel lines.
The image below is the final drawing above with the red items added.
Argument and reason for each step:
1. Line segments AR, BJ are congruent. Reason: both drawn with the same compass width.
2. Line segments RS, JC are congruent. Reason: both drawn with the same compass width.
3. Line segments AS, BC are congruent. Reason: both drawn with the same compass width.
4. Triangles ∆ARS and ∆BJC are congruent. Reason: three sides congruent (SSS).
5. Angles ARS, BJC are congruent. Reason: CPCTC (corresponding parts of congruent triangles are congruent).
6. The line AJ is a transversal. Reason: it is a straight line drawn with a straightedge and cuts across the lines RS and PQ.
7. Lines RS and PQ are parallel. Reason: angles ARS, BJC are corresponding angles, which are equal in measure only if the lines RS and PQ are parallel.
- Q.E.D
Try it yourself
Click here for a printable parallel line construction worksheet containing two problems to try. When you get to the page, use the browser print command to print as many as you wish. The printed
output is not copyright.
Constructions pages on this site
Right triangles
Triangle Centers
Circles, Arcs and Ellipses
Non-Euclidean constructions
(C) 2009 Copyright Math Open Reference. All rights reserved
Math Open Reference now has a Common Core alignment.
See which resources are available on this site for each element of the Common Core standards.
Check it out | {"url":"http://www.mathopenref.com/constparallel.html","timestamp":"2014-04-18T18:10:33Z","content_type":null,"content_length":"17063","record_id":"<urn:uuid:04d1083e-a61a-4c3d-8464-bec0468e001d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00594-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find the limits of integration for a known area ! HELP!
Hey. Any help would be really appreciated.
I'm trying to make a program but I don't even know how to start.
This is the problem:
I already have a script that computes this integral with b given:
integral of e^(-x^2)/sqrt(pi) dx from x = 0 to b
Now I need to make one which uses a known area under the curve so I can compute b!
Thanks in advance
1 Answer
Here is an example where instead of your function, I did a simpler one (the area under the line y = x). You should be able to just swap in your function for F.
% This function calculates the area under the curve for the function f(x) = x
F = @(x) 0.5*x.^2;
% Suppose the desired area is 4.5. The function G will be zero when the area
% under the curve is 4.5.
G = @(x)(F(x) - 4.5);
% This will find the critical value of x (which you have called "b") for which
% G is zero, which gives the area under the curve you want.
b = fzero(G,1)
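For the original integrand the inversion is even available in closed form, since the area from 0 to b of e^(-x^2)/sqrt(pi) equals erf(b)/2. A Python sketch in the same spirit (assuming that reading of the integrand; the target area 0.45 is an arbitrary example and must be below 1/2):

import numpy as np
from scipy.special import erf, erfinv
from scipy.optimize import brentq

area = 0.45                    # desired area; the total area on [0, inf) is 1/2
b_closed = erfinv(2 * area)    # closed form: solve erf(b)/2 = area

g = lambda b: erf(b) / 2 - area        # same answer via root finding,
b_root = brentq(g, 0.0, 10.0)          # mirroring the fzero idea above

assert np.isclose(b_closed, b_root)
print(b_closed)                # about 1.1631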
0 Comments | {"url":"http://www.mathworks.com/matlabcentral/answers/74128-find-the-limits-of-integration-for-a-known-area-help","timestamp":"2014-04-21T00:38:52Z","content_type":null,"content_length":"24702","record_id":"<urn:uuid:928d5eea-a332-4087-89f5-33e8eeca1dc1>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00563-ip-10-147-4-33.ec2.internal.warc.gz"} |
Direct proof that the centralizer of $GL(V)$ acting on $V^{\otimes n}$ is spanned by $S_n$
Let $V$ be a finite dimensional vector space over a field of characteristic zero. Let $A$ be the space of maps in $\mathrm{End}(V^{\otimes n})$ which commute with the natural $GL(V)$ action. Clearly,
any permutation of the tensor factors is in $A$. I am looking for an elementary proof that these permutations span $A$.
If $\dim V \geq n$, there is a very simple proof. Take $e_1$, $e_2$, ..., $e_n$ in $V$ linearly independent and let $\alpha \in A$. Then $\alpha(e_1 \otimes e_2 \otimes \cdots \otimes e_n)$ must be a
$t_1 t_2 \cdots t_n$ eigenvector for the action of the matrix $\mathrm{diag}(t_1, t_2, \ldots )$ in $GL(V)$. So $\alpha(e_1 \otimes \cdots \otimes e_n) = \sum_{\sigma \in S_n} c_{\sigma} e_{\sigma
(1)} \otimes \cdots \otimes e_{\sigma(n)}$ for some constants $c_{\sigma}$. It is then straightforward to show that $\alpha$ is given by the corresponding linear combination of permutations.
I feel like there should be an elementary, if not very well motivated, extension of the above argument for the case where $\dim V < n$, but I'm not finding it.
Motivation: I'm planning a course on the combinatorial side of $GL_N$ representation theory -- symmetric polynomials, jdt, RSK and, if I can pull it off, some more modern things like honeycombs and
crystals. Since it will be advertised as a combinatorics course, I want to prove a few key results that give the dictionary between combinatorics and representation theory, and then do all the rest
on the combinatorial side. Based on the lectures I have outlined so far, I think this will be one of the few key results.
The standard proof is to show that the centralizer of $k[S_n]$ is spanned by $GL(V)$, and then apply the double centralizer theorem. Although the double centralizer theorem (at least, over $\mathbb
{C}$) doesn't formally involve anything I won't be covering, I think it is pretty hard to present it to people who aren't extremely happy with the representation theory of semi-simple algebras. So I
am looking for an alternate route.
rt.representation-theory schur-functors teaching
Again a question I always wanted to ask but didn't dare! (I still consider Artin-Wedderburn and all the related theory of semisimple modules untransparent and strange.) However, I fear this is
related to proving Schur-Weyl in positive characteristic, and this is known to be very hard (see arxiv.org/abs/math/0610591 for a nonelementary and very indirect proof). – darij grinberg Mar 3 '12
at 1:41
Maybe you can avoid the difficulties of proving the theorem by avoiding the theorem altogether, instead replacing it by the corresponding fact for Schur functors rather than Schur modules, i. e.,
considering $V$ as a variable rather than a fixed vector space. In that case you can more-or-less WLOG assume $\dim V\geq n$ and use your argument. Of course, the result you thus prove is weaker,
but the question is whether your goal is the classical Schur-Weyl duality of $\mathrm{GL}V$ for fixed $V$, or something else where Schur-Weyl duality is just a lemma (in the latter case, chances
... – darij grinberg Mar 3 '12 at 1:44
... are high that the easier functorial version of Schur-Weyl duality is already enough). – darij grinberg Mar 3 '12 at 1:45
So, I have a couple of places I want to use this, but here is the first one: I want to prove that the standard inner product on symmetric functions is $\dim \mathrm{Hom}(V,W)$. If you follow
Stanley and define the inner product by $\langle h_{\lambda}, m_{\mu} \rangle = \delta_{\lambda \mu}$, this comes down to computing $\mathrm{Hom}(\mathrm{Sym}^{\lambda_1} \otimes \cdots \otimes \
mathrm{Sym}^{\lambda_n}, \mathrm{Sym}^{\mu_1} \otimes \cdots \otimes \mathrm{Sym}^{\mu_n})$. If you know the above claim, this turns into some very nice combinatorics, and leads into RSK in a
clean way. – David Speyer Mar 3 '12 at 13:57
If you only prove the above result for $\dim V > n$, then you only compute the above Hom for $|\lambda| = |\mu| < n$, and there is NO finite value of $n$ for which you know that the combinatorics
and the representation theory match up. That seems like a shame. Of course, I can think of other ways to prove this, but I think this one is very elegant. – David Speyer Mar 3 '12 at 13:59
2 Answers
Let $W$ be a vector space of dimension $n$ containing $V$. Let $\alpha$ be an endomorphism of $V^{\otimes n}$ commuting with the action of ${\rm GL}(V)$. Suppose that $\alpha$ can be
extended to an endomorphism $\beta$ of $W^{\otimes n}$ that commutes with the action of ${\rm GL}(W)$. Then, by the argument given by David Speyer in the question, there exist scalars
$c_\sigma \in \mathbf{C}$ such that
$$ \beta = \sum_{\sigma \in S_n} c_\sigma \sigma $$
and this also expresses $\alpha$ as a linear combination of place permutations of the tensor factors. (As I noted in my comment, this expression is, in general, far from unique.)
Any proof that such an extension exists must use the semisimplicity of $\mathbf{C}S_n$, since otherwise we get an easy proof of general Schur-Weyl duality. If we assume that ${\rm GL}(W)
$ acts as the full ring of $S_n$-invariant endomorphisms of $W^{\otimes n}$ then a fairly short proof is possible. I think it is inevitable that it uses many of the same ideas as the
double-centralizer theorem. A more direct proof would be very welcome.
Let $U$ be a simple $\mathbf{C}S_n$-module appearing in $V^{\otimes n}$. Let
$$ X = U_1 \oplus \cdots \oplus U_a \oplus U_{a+1} \oplus \cdots \oplus U_b $$
be the largest submodule of $W^{\otimes n}$ that is a direct sum of simple $\mathbf{C}S_n$-modules isomorphic to $U$. We may choose the decomposition so that $X \cap V^{\otimes n} = U_1
\oplus \cdots \oplus U_a$. Each projection map $W^{\otimes n} \rightarrow U_i$ is $S_n$-invariant, and so is induced by a suitable linear combination of elements of ${\rm GL}(W)$. Hence
each $U_i$ for $1 \le i \le a$ is $\alpha$-invariant. Similarly, for each pair $i$, $j$ there is a isomorphism $U_i \cong U_j$ induced by ${\rm GL}(W)$; these isomorphisms are unique up
to scalars (by Schur's Lemma). Using these isomorphisms we get a unique ${\rm GL}(W)$-invariant extension of $\alpha$ to $X$.
Finally let $W^{\otimes n} = C \oplus D$ where $C$ is the sum of all simple $\mathbf{C}S_n$-submodules of $W^{\otimes n}$ isomorphic to a submodule of $V^{\otimes n}$ and $D$ is a
complementary $\mathbf{C}S_n$-submodule. The previous paragraph extends $\alpha$ to a map $\beta$ defined on $C$. The projection map $W^{\otimes n} \rightarrow D$ is $S_n$-invariant and
so is induced by ${\rm GL}(W)$. Hence we can set $\beta(D) = 0$ and obtain a ${\rm GL}(W)$-invariant extension $\beta : W^{\otimes n} \rightarrow W^{\otimes n}$ of $\alpha$.
Could you be more explicit about "Using this isomorphisms we get a unique $GL(W)$-invariant extension of $\alpha$ to $X$"? I am not getting it. Thanks! – David Speyer Mar 6 '12 at 1:43
@David Speyer: Suppose that $\alpha$ is defined on $U_i$ and we want to extend it to $U_j$. Let $\phi : U_i \rightarrow U_j$ be a ${\rm GL}(W)$-isomorphism. Then we define $\alpha$ on
$U_j$ by $\alpha(x) = \phi\alpha\phi^{-1}(x)$. This definition is the only one possible if we want $\alpha$ to be ${\rm GL}(W)$-invariant, hence the uniqueness of $\alpha$. By the
comment about Schur's Lemma it is enough to consider one non-zero map $U_i \rightarrow U_j$ for each pair $i$ and $j$, and then the extension of $\alpha$ will commute with all such
maps. – Mark Wildon Mar 6 '12 at 2:01
Ahh, the key point is that, for $1 \leq i < j \leq a$, the isomorphism from $U_i$ to $U_j$ is induced by $GL(V)$, and therefore $\alpha$ acts the same way on $U_i$ and $U_j$. If the
isomorphism were only induced by $GL(W)$, we don't know this. – David Speyer Mar 6 '12 at 15:18
@David Speyer I think ${\rm GL}(V)$ and ${\rm GL}(W)$ should be swapped in your comment: clearly any ${\rm GL}(W)$-invariant map is ${\rm GL}(V)$-invariant. But then I agree, i.e. this
'for free' extension of maps is critical. (And my first reply addressed only the other, less important, points.) I was led to my argument by considering the corresponding inclusion of
Schur algebras: this is subsumed by your Key Lemma, since one could define the Schur algebra $S(N,n)$ by $S(N,n) = \mathrm{End}_{S_n}(V^{\otimes n})$ where $V$ is an $N$-dimensional
vector space. – Mark Wildon Mar 6 '12 at 23:29
I'm going to write up Mark Wildon's proof as I understand it. As in the standard proof, we start by showing the Key Lemma that the centralizer of $k[S_n]$ is linearly spanned by $GL(V)$.
Decompose $V^{\otimes n}$ into $S_n$-irreps, and let $\alpha$ be an endomorphism of $V^{\otimes n}$ commuting with $GL(V)$. For each irrep $U$ of $S_n$, let $U_1$, ..., $U_a$ be the
occurrences of $U$ in $V^{\otimes n}$.
For any $U_i$, consider the endomorphism of $V^{\otimes n}$ which acts by $1$ on $U_i$ and on $0$ on all of the other summands of $V^{\otimes n}$. This commutes with $k[S_n]$ so, by the Key
Lemma it is a linear combination of maps in $GL(V)$. Hence $\alpha$ commutes with it, which means that $\alpha$ takes $U_i$ to $U_i$ by some map $\alpha_i$.
Consider the endomorphism of $V^{\otimes n}$ which takes $U_i$ to $U_j$ by an $S_n$-equivariant endomorphism and acts by $0$ on every other summand of $V^{\otimes n}$. This commutes with $k
[S_n]$ so, by the Key Lemma it is a linear combination of maps in $GL(V)$. Hence $\alpha$ commutes with it, which means that $\alpha_i = \alpha_j$. (Abusing equals to mean "is taken to the
other along the isomorphism $U_i \to U_j$, which is unique up to scalar".) Write $\alpha(U)$ for the common value of $\alpha_1$, $\alpha_2$, ..., $\alpha_a$.
There are now two ways to finish the proof.
Standard Argument: By Maschke and Artin-Wedderburn, there is an element in $k[S_n]$ which acts on each irrep $U$ by $\alpha(U)$. This element of $k[S_n]$ induces $\alpha$.
Mark Wildon's Argument: Let $V \subset W$. We will show that we can extend $\alpha$ to an endomorphism $\beta$ of $W^{\otimes n}$ which commutes with $GL(W)$. Decompose $W^{\otimes n}$ into $S_n$ irreps, so that the previous decomposition of $V^{\otimes n}$ occurs as a subset of the summands. Let the occurrences of $U$ be $U_1 \oplus U_2 \oplus \cdots \oplus U_a \oplus \cdots \oplus U_b$. Define a linear map $\beta:W^{\otimes n}\to W^{\otimes n}$ to act on all of the $U_i$ by $\alpha(U)$ or, if $a=0$ so that $\alpha(U)$ is undefined, define $\beta$ to act on the
$U_i$ by $0$.
We claim that $\beta$ commutes with $GL(W)$. Proof: Any element of $GL(W)$ commutes with $k[S_n]$. So (by Schur's lemma), it can only map $U_i$ to a linear combination of the other $U_j$'s, and its component mapping $U_i$ to $U_j$ is a scalar multiple of the standard isomorphism. Clearly, $\beta$ commutes with any map of this form.
Now, by my argument in the original post, take $\dim W \geq n$ to see that $\beta$ is induced by an element of $k[S_n]$. Then $\alpha$ is also induced by this element of $k[S_n]$.
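As a numerical sanity check of this circle of ideas (a sketch of my own, not part of either argument above; all names and the small sizes are illustrative), one can verify for $d = \dim V = 2 < n = 3$ that the span of the six permutation operators on $V^{\otimes 3}$ and the commutant of the $\mathfrak{gl}(V)$-action have the same dimension, namely $5 = 1^2 + 2^2$:

import numpy as np
from itertools import permutations

d, n = 2, 3                  # dim V = 2 < n = 3, the interesting case
dim = d ** n

def perm_op(sigma):
    # Matrix sending e_{i_1} (x) ... (x) e_{i_n} to the basis vector
    # whose slot sigma[k] carries index i_k (the place-permutation action).
    P = np.zeros((dim, dim))
    for idx in np.ndindex(*(d,) * n):
        out = [0] * n
        for k in range(n):
            out[sigma[k]] = idx[k]
        P[np.ravel_multi_index(tuple(out), (d,) * n),
          np.ravel_multi_index(idx, (d,) * n)] = 1.0
    return P

span = np.stack([perm_op(s).ravel() for s in permutations(range(n))])
print(np.linalg.matrix_rank(span))   # 5: the six operators are dependent

def delta(E):
    # Derivation action of E in gl(d) on the n-fold tensor power.
    A = np.zeros((dim, dim))
    for k in range(n):
        factors = [np.eye(d)] * n
        factors[k] = E
        M = factors[0]
        for F in factors[1:]:
            M = np.kron(M, F)
        A += M
    return A

rows = []
for i in range(d):
    for j in range(d):
        E = np.zeros((d, d)); E[i, j] = 1.0
        A = delta(E)
        # Row-major vec: vec(AX - XA) = (A kron I - I kron A^T) vec(X).
        rows.append(np.kron(A, np.eye(dim)) - np.kron(np.eye(dim), A.T))
print(dim ** 2 - np.linalg.matrix_rank(np.vstack(rows)))   # 5 again: they agree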
Not the answer you're looking for? Browse other questions tagged rt.representation-theory schur-functors teaching or ask your own question. | {"url":"http://mathoverflow.net/questions/90094/direct-proof-that-the-centralizer-of-glv-acting-on-v-otimes-n-is-spanne?sort=oldest","timestamp":"2014-04-20T06:00:19Z","content_type":null,"content_length":"74044","record_id":"<urn:uuid:90abc749-ce51-4d6a-9357-c9548e8eec5e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00391-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fundamental Theorem on Lie Algebra Actions
All we're going to do now is try to take what we did in the last couple of posts and generalize. So instead of working on Lie groups, where we have nice left invariance and all our flows were complete, we're going to try to get things for arbitrary vector fields on a manifold where the flow is not necessarily guaranteed to be complete.
First off, I’m going to want to think of right actions instead of left now. This is because in the last post I showed that flowing is the same thing as right multiplication by $exp$. From now on, I’m
assuming we have a Lie group acting smoothly on a manifold on the right $\theta(p,g)=p\cdot g$. We want a global flow action of $\mathbb{R}$ on our manifold $M$, so let $X\in Lie(G)$. Define the
action $\mathbb{R}\times M \to M$ by $t\cdot p=p\cdot exp(tX)$ (note: the two dots are two different actions, the one on the right comes from the Lie group).
We have an infinitesimal generator for this flow, say $\widehat{X}\in\frak{X}(M)$, i.e. $\widehat{X}_p=\frac{d}{dt}\big|_{t=0}p\cdot \exp(tX)$. Thus we have a map $\widehat{\theta}:Lie(G)\to\frak{X}(M)$ by $\widehat{\theta}(X)=\widehat{X}$.
Let's break down what this map really is. For any $p\in M$, examine $\theta^p:G\to M$ by $\theta^p(g)=p\cdot g$. This is smooth, since we can identify $G\cong \{p\}\times G$ and then it is just
inclusion followed by the smooth action. Thus this is the orbit map of the action. You get everything in the orbit of $p$ by the action of $G$. Thus $\widehat{X}_p=d(\theta^p)_e(X_e)$.
Let's go one step further and show that $X$ and $\widehat{X}$ are $\theta^p$-related (note now that $X$, $p$, and the action are completely arbitrary, so this is really a very general statement).
By the group law $p\cdot gh=(p\cdot g)\cdot h$, we actually have $\theta^p\circ L_g(h)=\theta^{p\cdot g}(h)$. Now we just check:
$\widehat{X}_{p\cdot g}=d(\theta^{p\cdot g})_e(X_e)$
$= d(\theta^p)_g\circ d(L_g)_e(X_e)$
$= d(\theta^p)_g(X_g)$.
This shows that $X$ and $\widehat{X}$ are $\theta^p$-related.
Now we easily can get that $\widehat{\theta}: Lie(G)\to \frak{X}(M)$ is a Lie algebra hom. Using linearity of $\widehat{\theta}$ for a fixed $p$, and the previous statement, we get that $[\widehat{X}, \widehat{Y}]_p=d(\theta^p)_e([X, Y]_e)=\widehat{[X, Y]}_p$. Thus $[\widehat{\theta}(X), \widehat{\theta}(Y)]=\widehat{\theta}([X, Y])$.
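For a quick sanity check, consider the standard example $G=SO(2)$ acting on $M=\mathbb{R}^2$, with points written as row vectors so that $(x,y)\cdot g=(x,y)g$ is genuinely a right action. Let $X=\begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}$ span $\mathfrak{so}(2)$, so that $\exp(tX)$ is rotation by angle $t$. Then

$$\widehat{X}_{(x,y)}=\frac{d}{dt}\Big|_{t=0}(x,y)\exp(tX)=(y,-x)=y\,\partial_x - x\,\partial_y,$$

the familiar rotation vector field, and one can check directly that it is $\theta^p$-related to $X$ as above.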
Now we are ready to state the "Fundamental Theorem on Lie Algebra Actions". A quick definition: we say a $\frak{g}$-action $\widehat{\theta}$ is complete if $\widehat{\theta}(X)$ is complete for all $X$.
The FT on LAA says that given any complete $Lie(G)$-action $\widehat{\theta}:Lie(G)\to\frak{X}(M)$, there is a unique smooth right $G$-action on $M$ whose infinitesimal generator is $\widehat{\theta}$.
I won’t prove this, but it was nice to state and a good ending place for the day. | {"url":"http://hilbertthm90.wordpress.com/2009/09/04/fundamental-theorem-on-lie-algebra-actions/","timestamp":"2014-04-19T12:16:24Z","content_type":null,"content_length":"86993","record_id":"<urn:uuid:d34dc9bb-20e8-4a94-b74e-ea4215a5252c>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00364-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why not use Excel as your calculator?
Before I launch into the many advantages of using Excel, a bit of nostalgia: Remember the old desktop and handheld calculators? Maybe you do, or maybe you’ve only seen them in a museum or in your
grandparent’s home. If you really want one of these for your very own, maybe you can get one free (with a lovely company logo on it) like I did from the bank where my sister-in-law used to work.
If you’ve used Windows for a while, odds are you’ve seen its Calculator program. It’s like a handheld calculator, but without the need to cradle it in your palm or find real estate for it on your
desk. Find the Windows Calculator by clicking the Start button, looking under All Programs, and then under Accessories.
One last thing, I promise, and I’ll finally get to extolling the virtues of Excel. If you’re a student or are otherwise interested in learning more about math, Microsoft Mathematics is a downloadable
tool you can use to solve equations step-by-step while learning the fundamentals of algebra, trigonometry, physics, chemistry, and calculus. Get Microsoft Mathematics 4.0 (free!) here, at the
Microsoft Download Center.
By now, you’ve gotten used to using digital calculators, such as the one in Windows or in your phone, and you know their limitations. Say you want to calculate mortgage interest or your grade point
average. This is where it’s time to make the move to Excel and see how it’s a vast improvement over a simple calculator.
Something all calculators have in common is that when you mistype a number or press the wrong key, there’s not much in the way of “Undo” functionality, with the exception of the CE (Clear Entry) key.
Sorry to say, that CE key won’t get you very far.
Excel to the rescue
If you’ve entered simple formulas in Excel, such as adding, subtracting, multiplying, or dividing numbers, you know that the formula it creates looks a bit like what you’re doing on a calculator…but
Excel saves your work, and is much more forgiving. In an Excel formula, instead of having to start over if you make a minor mistake, you can just fix a thing or two without losing what you’ve typed.
Yet another nice thing about Excel is that after you’ve completed your formula, you can see it in the formula bar and its results in the cell, all at the same time. Here’s a formula and its result
The formula shown below is entered in Excel pretty much the same way you’d enter it into a typical calculator – using lots of correctly(!) placed parentheses and math operators. Those math operators
are very similar to what you'll find on a calculator…the plus sign (+), the minus sign (-), the slash symbol for division (/), and the asterisk (*). On old calculators and on paper, multiplication is "x," but
don’t try that in Excel – I promise you, that won’t work! Get used to using that * key (also known as “star”).
OK, I could enter this in a calculator. But let me be honest – I’m not the world’s most careful button puncher, so I could easily make one or more entry errors using a calculator and end up with the
wrong answer.
Messy! Or, I could enter these numbers in an Excel grid and use a smaller (but powerful!) formula to get the same result. This formula will find a student’s grade point average, based on 5 courses
that have a few different credits. This type of average is known as a “weighted average.”
The formula (in cell C7) looks like the following:
=((B2*C2)+(B3*C3)+(B4*C4)+(B5*C5)+(B6*C6))/SUM(B2:B6)
That’s a little less messy, since it uses the SUM function to total the credits, but it’s still a pretty long formula. One nice thing about this formula is that it uses cell references (cells B2
through B6 for the credits and cells C2 through C6 for the grades). If I need to change a value or two for either the credits or the grades, I don’t need to change the formula.
The awesome SUMPRODUCT function
Here's a much more efficient formula I can use to calculate a weighted GPA:
=SUMPRODUCT(B2:B6,C2:C6)/SUM(B2:B6)
Aha! Try doing that in a calculator! Instead of stringing together all those multiplication operations - (B2*C2)+(B3*C3)+(B4*C4)+(B5*C5)+(B6*C6) - I’m using Excel’s SUMPRODUCT function.
What does that SUMPRODUCT function do, you may wonder? Glad you asked! SUMPRODUCT multiplies the corresponding elements of arrays (an array is a group) of numbers and then sums those products of the
array elements. Here’s an example of two arrays (bear with me, this is getting a little bit “mathy!”):
[2 3 4] [3 2 5]
The first array [2 3 4] contains three numbers, each assigned a position. The first position of this array contains 2, the second position contains 3, and the third position contains 4. The second
array [3 2 5] also contains three numbers – 3, 2, and 5 – in first, second, and third positions. If I were to multiply these two arrays using the SUMPRODUCT function, the formula would look like the
following, returning 32 as the answer (the two arrays are enclosed in squiggly braces, and their elements are separated by semicolons):
=SUMPRODUCT({2;3;4},{3;2;5})
Intrigued? Learn about and interact with the SUMPRODUCT function in a browser worksheet.
So…remember algebra? Think back to multiplying two expressions by each other, such as (x+2)(y-3). You can consider this expression the same as multiplying two arrays ( [x 2] and [y -3] ) times each other. In this algebraic expression, you multiply x by y, x by -3, 2 by y, and 2 by -3. Then, you add them all together to get xy - 3x + 2y - 6. Look familiar?
I’ll expand the size of my sample arrays ( [2 3 4] and [3 2 5] ) so that they’ll have the five credit values and the corresponding grade values for each course. These two arrays each contain 5
elements instead of just 3. And instead of putting 5 numbers in each array ([4 3 4 2 3] and [3.3 3.6 3.1 3.7 3.2]), I’ll instead use two ranges of 5 numbers that contain the values of 5 Excel cells.
For the class credit array, I’ll assign the range B2:B6 (that’s 5 cells). Its corresponding grade array is C2:C6.
The first part of the formula, SUMPRODUCT(B2:B6,C2:C6), results in a total of 53.4. That’s the total of the credit value times the grade value for each course.
│Class │Credits│ Grade│ Credits * Grade│
│Econ 102 │ 4│ 3.3│ 13.2│
│Math 171 │ 3│ 3.6│ 10.8│
│History 143 │ 4│ 3.1│ 12.4│
│Computer Sci 114 │ 2│ 3.7│ 7.4│
│Business 155 │ 3│ 3.2│ 9.6│
│Total │ 16│3.34 (GPA)│ 53.4│
The second part, SUM(B2:B6), uses the SUM function to total the credit values, resulting in 16. Dividing the result of the SUMPRODUCT operation by the result of the SUM operation, I get 53.4/16 = 3.34. (In the formula, the first array is the credits in B2:B6 and the second is the grades in C2:C6.)
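For comparison, the same weighted average takes only a couple of lines in Python (a sketch; the numbers are taken from the table above):

credits = [4, 3, 4, 2, 3]
grades = [3.3, 3.6, 3.1, 3.7, 3.2]

# SUMPRODUCT(B2:B6,C2:C6) / SUM(B2:B6), spelled out:
gpa = sum(c * g for c, g in zip(credits, grades)) / sum(credits)
print(round(gpa, 2))   # 3.34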
I think the SUMPRODUCT function is pretty cool, and I hope you’ll agree. And for doing math operations that use more than just a few numbers or calculations, using Excel is a lot less error prone
than using a calculator. Especially for me!
Watch Doug Thomas and a math whiz take Microsoft Mathematics 4.0 through its paces in this short video: What Math 4.0 can do for students
To learn more about using Excel as your calculator, follow these links:
Use Excel as your calculator (Office.com article)
How to use Excel as a calculator (eHow.com article)
How to Use Excel as a Financial Calculator (eHow.com article)
How to Create an Excel Financial Calculator (wikiHow article)
Use Excel as a Calculator (University of Sydney article)
Learn about and interact with the SUMPRODUCT function in a browser worksheet here: SUMPRODUCT function
And finally, learn more about using math operators here: Calculation operators and precedence
– Gary Willoughby | {"url":"http://blogs.office.com/2011/08/23/why-not-use-excel-as-your-calculator/","timestamp":"2014-04-20T01:21:13Z","content_type":null,"content_length":"38919","record_id":"<urn:uuid:f29e128a-2eaf-4446-8e16-87bca7e727a6>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
Island Lake Algebra 2 Tutor
Find an Island Lake Algebra 2 Tutor
...Microsoft Outlook is an email client that has grown to become a personal information manager. It can work as a stand alone application or it can be linked with a network. The program allows
users to access emails from multiple sources, track schedules and tasks, enter contact information, and utilize journal and note features.
39 Subjects: including algebra 2, reading, English, calculus
...I love math and have a strong desire to help students improve in this beautiful discipline.I have 3 years experience teaching algebra I. I believe math is fun and that the biggest impediment
to students doing well in math is the self-fulfilling prophecy that "I am lousy at math and I will always...
12 Subjects: including algebra 2, calculus, algebra 1, geometry
...I am flexible to accommodate their individual learning styles. I help them understand math easily and we have fun learning! They love the accomplishment of seeing their grades improve and
knowing they understand what they're learning.
26 Subjects: including algebra 2, Spanish, geometry, chemistry
...I have my bachelors in engineering and I can help you improve your grades and even score better in an exam. I have worked with high school students as well as college students. I have helped
students excel in various exams.
14 Subjects: including algebra 2, geometry, GRE, algebra 1
...In many ways I prefer tutoring to teaching in a traditional classroom setting, because while tutoring I have the luxury of being able to respond specifically to a student’s concerns and craft
my message for just one student at a time, as opposed to 30 or more. I’ve been teaching and/or tutoring ...
25 Subjects: including algebra 2, calculus, statistics, geometry
Nearby Cities With algebra 2 Tutor
Cary, IL algebra 2 Tutors
Fox Lake, IL algebra 2 Tutors
Fox River Grove algebra 2 Tutors
Hainesville, IL algebra 2 Tutors
Holiday Hills, IL algebra 2 Tutors
Lake Barrington, IL algebra 2 Tutors
Lake Zurich algebra 2 Tutors
Lindenhurst, IL algebra 2 Tutors
Mchenry, IL algebra 2 Tutors
North Barrington, IL algebra 2 Tutors
Oakwood Hills, IL algebra 2 Tutors
Port Barrington, IL algebra 2 Tutors
Prairie Grove, IL algebra 2 Tutors
Volo, IL algebra 2 Tutors
Wauconda, IL algebra 2 Tutors | {"url":"http://www.purplemath.com/island_lake_algebra_2_tutors.php","timestamp":"2014-04-21T07:33:00Z","content_type":null,"content_length":"24136","record_id":"<urn:uuid:712b143c-9db9-4014-a0ee-e4e75e798fc6>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00401-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Penn. State Univ Dr. Ecker reconsiders AP's No Odd Perfect Proof
Replies: 0
Penn. State Univ Dr. Ecker reconsiders AP's No Odd Perfect Proof
Posted: Jan 26, 2014 3:21 PM
On Tuesday, December 31, 2013 10:12:30 PM UTC-6, Dr. Mike Ecker wrote:
> I've been away for a while so I'll jump in, except I will use n instead of k.
Well, I changed the proof since Dr. Ecker first critiqued it. So here is the brand new proof, which is unsinkable as Mr. Bau would say--
Valid No Odd Perfect Proof, and predictions about Odd Abundant & Deficient Numbers
As I wrote earlier, the finest validation of a math proof is _not_ other fellow mathematicians opining it is true or being published in a math journal, but rather, the finest validation is whether
the method of the proof itself goes on to enlighten further truths of that math topic. For instance, my Beal proof method instantly proves Fermat's Last Theorem FLT, and my Goldbach proof method
instantly proves the Generalized Goldbach.
When mathematics has so-called proofs that are only opined true or published in math journals, like Appel & Haken's 4 Color Mapping or Wiles's FLT, whose method has no more relation to doing anything more in mathematics, it means that their offering is a fake work. Their offering is invalid.
So how am I sure that my No Odd Perfect proof is a true proof? Well, I take the method involved and see if it produces further truths about the topic in question such as odd deficient numbers and
odd abundant numbers. My method says the proof is based on the fact that as 3 is the smallest factor of an odd number, we have two groupings of the divisors: one grouping is 1/3 the number and the other grouping is 2/3 the number. So those groupings forbid an odd number to ever be perfect because of that 2 in the numerator of 2/3, since it means 2 is a divisor of the odd number. That is
the mechanism of the proof.
So, now, if it is a valid proof, the mechanism or method should produce more facts about odd abundant and deficient numbers.
One fact is already clear, that the largest or maximum deficient number would miss being perfect by the amount of 2. So that means there must exist at least one odd number with a deficit of 2, and
it is found in 9 where 1x9 and 3x3, where we have 1 + 3 + 3, which misses being odd perfect by only 2, and the 9 is the only nearest miss by 2 of all the odd numbers.
And the method of proof implies that the nearest miss for an abundant number to be odd perfect would miss by 2(3x5) which we have in 945.
Now I spent some time delineating the abundant numbers 945, 1575, 2205.
Now the sum of 945 is 975, with an abundance of 30 = 2(3x5).
The sum of 1575 is 1649, with an abundance of 74 = 2(37).
The sum of 2205 is 2241, with an abundance of 36 = 2(2x9).
Provided I did my arithmetic correctly, we see a confirmation of the proof method: the odd abundant numbers all fail to be odd perfect because of that 2 in the 2/3 grouping.
Now there is one more phenomenon I want to discuss: the sequence of odd abundant numbers, in that starting with 945, the sequence is just the adding of 630 to 945 to get the next such odd abundant number. And that would agree with the proof method, for 630 is the 2/3 of 945. Now an important question arises as to whether there are any odd abundant numbers other than that sequence as listed here:
{945, 1575, 2205, 2835, 3465, 4095, 4725, 5355, 5775, 5985, 6435, 6615, 6825, 7245, 7425, 7875, 8085, 8415, 8505, 8925, 9135, 9555, 9765, 10395, 11025, 11655, 12285, 12705, 12915, 13545, 14175, ...}
from http://oeis.org/wiki/Odd_abundant_numbers
So, here is the question: are these the only odd abundant numbers, or are there any odd abundants interspersed between the terms of that sequence? In a sense the above is a validation of the proof method of 1/3 and 2/3 groupings. Because if 9 is the closest that an odd deficient number gets to being perfect and misses it by 2, and if 945 is the nearest miss to being odd perfect for the abundant odd numbers and misses by 30, then the proof method is truly a grouping of 1/3 and 2/3, and the 2 in the 2/3 forbids the construction of the odd perfect.
Now maybe that sequence list above is not inclusive of all the odd abundant numbers. Maybe it is a list of only those separated by 630. So I need to find out.
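This is easy to check empirically; a short Python sketch (the bound 15000 and the names are my choices) that generates the odd abundant numbers directly:

def proper_divisor_sum(m):
    # Sum of all divisors of m except m itself.
    total = 1 if m > 1 else 0
    d = 2
    while d * d <= m:
        if m % d == 0:
            total += d
            if d != m // d:
                total += m // d
        d += 1
    return total

odd_abundant = [m for m in range(3, 15000, 2) if proper_divisor_sum(m) > m]
print(odd_abundant[:8])   # [945, 1575, 2205, 2835, 3465, 4095, 4725, 5355]
gaps = {b - a for a, b in zip(odd_abundant, odd_abundant[1:])}
print(sorted(gaps))       # more than just {630}: the spacing varies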
Constructive proof No Odd Perfect Number
The basic term used is _cofactors_, where a number has its cofactors paired.
Example is 6 and 15:
The number 6 has cofactors of 1 with 6, and, 2 with 3 and represented as this:
(1 + 6) + (2 + 3) = 12
The number 15 has cofactors of 1 with 15, and, 3 with 5 and represented as this:
(1 + 15) + (3 + 5) = 24
For 18 we have
(1 + 18) + (2 + 9) + (3 + 6) = 39
For 20 we have
(1 + 20) + (2 + 10) + (4 + 5) = 42
For 9 we have
(1 + 9) + (3 + 3) = 16
For 28 we have
(1 + 28) + (2 + 14) + (4 + 7) = 56
Also, let me focus on the number 945 since it is odd abundant so as to give the reader some bearings of odd abundant and odd deficient numbers.
(1 + 945) + (3 + 315) + (5 + 189) + (7 + 135) + (9 + 105) + (15 + 63) + (21 + 45) + (27 + 35) and once we omit the 945 the sum of divisors is 975.
I displayed this abundant odd number to compare with the deficient odd number of 15. Few people know that some odd numbers can be abundant. Why is that important? Because if the odd numbers can
overshoot and undershoot the mark, stands to reason that perhaps some odd number falls smack on the spot of equal.
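The cofactor displays above are mechanical to generate; a small Python helper along the same lines (a sketch, with my naming):

def cofactor_pairs(k):
    # Divisor pairs (d, k // d) with d <= sqrt(k), as in the displays above.
    pairs = []
    d = 1
    while d * d <= k:
        if k % d == 0:
            pairs.append((d, k // d))
        d += 1
    return pairs

for k in (6, 15, 9, 28):
    pairs = cofactor_pairs(k)
    print(k, pairs, sum(a + b for a, b in pairs))
# 6  [(1, 6), (2, 3)]           12
# 15 [(1, 15), (3, 5)]          24
# 9  [(1, 9), (3, 3)]           16
# 28 [(1, 28), (2, 14), (4, 7)] 56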
Constructive Definition of a Perfect Number
Now let me define the Perfect Number in general as that of omitting the number itself k as a divisor, the remaining cofactor divisors add up to k.
For example, 6, omitting 6, has 1+2+3 = 6. And 28, omitting 28, has 1+2+4+7+14 = 28.
Construction proof
Take an arbitrary Odd Perfect number larger than 1 and call it k.
We ask just one simple question for the proof. We ask: what is the smallest divisor of an odd perfect number? It cannot be 2, for that would mean k is divisible by 2 and no longer odd. That means the smallest divisor is 3 or 5 or 7, etc. The proof for the case where 3 is the smallest divisor takes care of the 5, 7, 9, etc. cases, so I will no longer talk about whether 5, etc., is the smallest divisor.
Let us construct the arbitrary odd perfect number that has its smallest divisor of 3. This means we can group all the divisors into just two groups of 1/3k and 2/3k.
Now since this odd perfect is grouped into 1/3 and 2/3, it has a possibility of these and only these permutations, since k is odd:
1/3 2/3
3/9 6/9
5/15 10/15
7/21 14/21
315/945 630/945
So, now, can we construct this Odd Perfect Number given that definition?
The answer is no, because in all possible permutations of an odd number with a 1/3 to 2/3 grouping of added terms, we can never get rid of the even number 2 in that 2/3. This means that k is
divisible by an even number 2 in order for the Odd Perfect to sum to k.
Let me illustrate that with another picture.
The grouping 1/3 and grouping 2/3 becomes this:
1/3 grouping + 1/3 grouping + 1/3 grouping
So for 15 to be Odd Perfect we would have this:
5 + 5 + 5
Which means 15 is divisible by 2 because 5 is the 1/3 grouping and the 2/3 grouping is 2x5 where 2 divides into 15.
So for 45 to be Odd Perfect we would have this:
15 + 15 + 15
Which means that 45 is divisible by 15 but it has another grouping of 2/3 composed of a 15x2 where 2 divides into 45.
So we see here how the construction of the Odd Perfect Number has an imposed barrier to construction, in that we can never get rid of the 2 in the 2/3 grouping and which forces the odd perfect k to
be divisible by 2.
Now this proof must show that even perfects are possible and have no barrier to having some even numbers be perfect. That is "some even numbers" can be perfect.
Are the Even Perfects constructible by this proof method? Well, the even perfects are 1/2 and 1/2 so we have this:
1/2 1/2
2/4 2/4
3/6 3/6
14/28 14/28
And we see there is no barrier to force the groupings of 1/2 with 1/2 or 3/6 with 3/6 where the numerator cannot divide into the denominator. All of the 1/2 to 1/2 groupings allow the numerator to
divide into the denominator, so there is no barrier to construction. However, in the case of odd perfect the 2/3 grouping never allows 2 to divide into 3 and is that barrier. So for odd numbers, the
barrier to construction of odd perfect is the perpetual even number 2 divisor in the groupings of 1/3 and 2/3 and its permutations.
Proof that the Perfect Numbers form a finite set
Now in mathematics the two oldest unsolved problems deal with the Perfect Numbers, the No Odd Perfect (except 1) conjecture and the question of whether perfect numbers are finite or infinite. I
proved the No Odd Perfect Numbers here in sci.math in the last several weeks. I proved the Finitude of Perfect Numbers years ago, but let me repeat it here for the history record.
Proof of the Finitude of Perfect Numbers
When mathematics is honest about its definitions of finite versus infinite, it seeks a borderline between the two concepts, otherwise they are just one concept. From several proofs of regular
polyhedra and of the tractrix versus circle we find the borderline to be 1*10^603. That causes a measure to use for all questions of sets as to whether they are finite or infinite. The Naturals are
infinite because there are exactly 1*10^603 inside of 1*10^603 (not counting 0).
The algebraic-closure of numbers is 1*10^1206 which forms a density measure. So are the primes finite or infinite? Well, are there 1*10^603 primes between 0 and 1*10^1206? Easily for at around 10^
607 we have 10^603 primes. Are the Perfect-Squares {1, 4, 9, 16, 25, . .} finite or infinite set? The perfect-squares are a special set since they are "minimal infinite" since there are exactly 10^
603 of them from 0 to 10^1206.
How about perfect cubes? Well, they are a finite set, since there are not enough of them between 0 and 10^1206.
How about Fibonacci primes or Mersenne primes? Both of them are so rare, that there are only a handful between 0 and 10^1206.
So this is the modern means of checking whether a set is finite or infinite. We ask the density of the set from 0 to 10^1206 and if there are 10^603 of the objects in that space, then they form an
infinite set. If not, they are finite.
Looking at the density of Perfect-Numbers, the list is {6, 28, 496, 8128, . . .}, and we immediately see the density is nowhere close to the minimal density of the Perfect-Squares {1, 4, 9, 16, 25, . . .}, and so the Perfect-Numbers form a finite set.
Recently I re-opened the old newsgroup of the 1990s, where one can read my recent posts without the hassle of mockers and hatemongers.
Archimedes Plutonium | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2617021&messageID=9372730","timestamp":"2014-04-19T17:39:24Z","content_type":null,"content_length":"25600","record_id":"<urn:uuid:866a4d7d-3f5d-4e71-9b99-db315db9021e>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00555-ip-10-147-4-33.ec2.internal.warc.gz"} |
need help with: the definite integral from 0 to 1 of x^3 / (square root of x^4 + 9)
\[\int\limits_{0}^{1}{x^3 \over \sqrt{x^4+9}}dx\]Is this the correct equation?
yes, that is the one.
u-substitute for x^4 + 9
let u be x^4+9?
but how do you know which one to put as" u"?
u = x^4 + 9, du = 4x^3 dx, so dx = du / 4x^3
\[\int_{0}^{1}{x^3 \over \sqrt{u}}{du \over 4x^3}\]
\[\int_{0}^{1}{du \over 4\sqrt{u}}\]
Can you finish it? I think you are actually supposed to re-evaluate the limits but I don't remember how to do that. It is not necessary if you replace u with what you substituted it for before you apply the limits.
let me try replacing it
to re-evaluate the limits plug in 0 and 1 into u=x^4 + 9
then, what do I need to do after that?
I don't know why you did that.. you need to integrate the last equation I gave you that has u in it
\[\int_{9}^{10}{du \over 4\sqrt{u}}\]
@TweT226 thx
After you do u-substitution to re-evaluate the limits see Twe's post
when I plug in 0 I get 9, and when I plug in 1, I get 10
\[\int_{9}^{10}{1 \over 4}u^{-1/2}du\]
how did u get u^-1/2?
\[{{1 \over 4}u^{1/2} \over 1/2}|_9^{10}\]\[{1 \over 2}u^{1/2}|_9^{10}\]\[{1 \over 2}10^{1/2}-{1 \over 2}9^{1/2}\]
\[\sqrt x=x^{1/2}\]\[{1 \over x^n}=x^{-n}\]
oh,okay.i see
re-evaluating the limits is the short method. If you don't do that you can plug what you substituted u for then use the old limits after integration. You should get the same answer
\[\int_{0}^{1}{x^3 \over \sqrt{x^4+9}}dx\]
u-substitution: u = x^4 + 9, du = 4x^3 dx, so dx = du / 4x^3
Re-evaluating the limits: u = (0)^4 + 9 = 9 and u = (1)^4 + 9 = 10
\[\int_{9}^{10}{x^3 \over \sqrt{u}}{du \over 4x^3}\]
\[\int_{9}^{10}{du \over 4\sqrt{u}}\]
\[\int_{9}^{10}{1 \over 4}u^{-1/2}du\]
\[{{1 \over 4}u^{1/2} \over 1/2}\Big|_9^{10}\]
\[{1 \over 2}u^{1/2}\Big|_9^{10}\]
\[{1 \over 2}10^{1/2}-{1 \over 2}9^{1/2}\]
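As an arithmetic check on the thread's result: the exact value is (√10 − 3)/2 ≈ 0.0811. The short program below is my own addition (not part of the original discussion); it approximates the integral with the midpoint rule and matches that figure:

public class IntegralCheck {
    public static void main(String[] args) {
        int n = 100000;                        // midpoint-rule subintervals
        double h = 1.0 / n;
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            double x = (i + 0.5) * h;          // midpoint of the i-th slice
            sum += x * x * x / Math.sqrt(x * x * x * x + 9.0);
        }
        System.out.println("numeric approximation: " + sum * h);
        System.out.println("exact (sqrt(10)-3)/2 : " + (Math.sqrt(10) - 3) / 2);
    }
}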
7.3: Scatter Plots
You've been exercising every week and when you go for your next doctor's visit the doctor says that the reading for your resting heart rate has changed. You start taking your own resting heart rate
once a week on Mondays and relate it to the numbers of hours per week you've been exercising. How would you represent this data? Do you expect to see a correlation between the number of hours you
exercise per week and your resting heart rate? How would you know if there is a correlation?
Watch This
First watch this video to learn about scatter plots.
CK-12 Foundation: Chapter7ScatterPlotsA
Then watch this video to see some examples.
CK-12 Foundation: Chapter7ScatterPlotsB
Watch this video for more help.
Khan Academy Correlation and Causality
Often, when real-world data is plotted, the result is a linear pattern. The general direction of the data can be seen, but the data points do not all fall on a line. This type of graph is called a
scatter plot. A scatter plot is often used to investigate whether or not there is a relationship or connection between 2 sets of data. The data is plotted on a graph such that one quantity is plotted
on the x-axis and the other quantity is plotted on the y-axis.
The following scatter plot shows the price of peaches and the number sold:
The connection is obvious: as the price of peaches rises, the number sold falls.
The following scatter plot shows the sales of a weekly newspaper and the temperature:
There is no connection between the number of newspapers sold and the temperature.
Another term used to describe 2 sets of data that have a connection or a relationship is correlation. The correlation between 2 sets of data can be positive or negative, and it can be strong or weak.
The following scatter plots will help to enhance this concept.
If you look at the 2 sketches that represent a positive correlation, you will notice that the points are around a line that slopes upward to the right. When the correlation is negative, the line
slopes downward to the right. The 2 sketches that show a strong correlation have points that are bunched together and appear to be close to a line that is in the middle of the points. When the
correlation is weak, the points are more scattered and not as concentrated.
When correlation exists on a scatter plot, a line of best fit can be drawn on the graph. The line of best fit must be drawn so that the sums of the distances to the points on either side of the line
are approximately equal and such that there are an equal number of points above and below the line. Using a clear plastic ruler makes it easier to meet all of these conditions when drawing the line.
Another useful tool is a stick of spaghetti, since it can be easily rolled and moved on the graph until you are satisfied with its location. The edge of the spaghetti can be traced to produce the
line of best fit. A line of best fit can be used to make estimations from the graph, but you must remember that the line of best fit is simply a sketch of where the line should appear on the graph.
As a result, any values that you choose from this line are not very accurate; they are simply estimates.
In the sales of newspapers and the temperature, there was no connection between the 2 data sets. The following sketches represent some other possible outcomes when there is no correlation between
data sets:
Example A
Plot the following points on a scatter plot, with m on the x-axis and n on the y-axis.
m: 4, 9, 13, 16, 17, 6, 7, 18, 10
n: 5, 3, 11, 18, 6, 11, 18, 12, 16
Example B
Describe the correlation, if any, in the following scatter plot:
In the above scatter plot, there is a strong positive correlation.
Example C
The following table consists of the marks achieved by 9 students on chemistry and math tests:
Student A B C D E F G H I
Chemistry Marks 49 46 35 58 51 56 54 46 53
Math Marks 29 23 10 41 38 36 31 24 ?
Plot the above marks on a scatter plot, with the chemistry marks on the x-axis and the math marks on the y-axis. Draw the line of best fit and use it to estimate the math mark for Student I.
If Student I had taken the math test, his or her mark would have been between 32 and 37.
Points to Consider
• Can the equation for the line of best fit be used to calculate values?
• Is any other graphical representation of data used for estimations?
Guided Practice
The following table represents the sales of Volkswagen Beetles in Iowa between 1994 and 2003:
Year 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003
Beetles Sold 50 60 55 50 70 65 75 65 80 90
(a) Create a scatter plot and draw the line of best fit for the data. Hint: Let 0 = 1994, 1 = 1995, etc.
(b) Use the graph to predict the number of Beetles that will be sold in Iowa in the year 2007.
(c) Describe the correlation for the above graph.
b. The year 2007 would actually be the number 13 on the x-axis. Extend the line of best fit to x = 13 and read off the predicted number of Beetles sold.
c. The correlation of this graph is strong and positive.
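For readers who want a computed check on this guided practice, the following sketch is my own addition (the lesson itself only asks for an eyeballed line). It fits a least-squares line to the Beetle data and uses it for part (b); a carefully drawn line of best fit should give a similar prediction.

public class BestFit {
    // Returns {slope, intercept} of the least-squares line through (x[i], y[i])
    static double[] leastSquares(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            sxy += x[i] * y[i];
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;
        return new double[] {slope, intercept};
    }

    public static void main(String[] args) {
        // Beetle sales from the table above (0 = 1994, 1 = 1995, ...)
        double[] year = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
        double[] sold = {50, 60, 55, 50, 70, 65, 75, 65, 80, 90};
        double[] fit = leastSquares(year, sold);
        System.out.printf("y = %.2fx + %.2f%n", fit[0], fit[1]);
        System.out.printf("predicted sales in 2007 (x = 13): %.0f%n",
                          fit[0] * 13 + fit[1]);
    }
}

With this data the fitted line works out to roughly y = 3.76x + 49.1, predicting about 98 Beetles sold in 2007.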
Interactive Practice
1. What is the correlation of a scatter plot that has few points that are not bunched together?
1. strong
2. no correlation
3. weak
4. negative
2. What term is used to define the connection between 2 data sets?
1. relationship
2. scatter plot
3. correlation
4. discrete
3. Describe the correlation of each of the following graphs:
4. Plot the following points on a scatter plot, with m on the x-axis and n on the y-axis:
1. m: 5, 14, 2, 10, 16, 4, 18, 2, 8, 11
   n: 6, 13, 4, 10, 15, 7, 16, 5, 8, 12
2. m: 13, 3, 18, 9, 20, 15, 6, 10, 21, 4
   n: 7, 14, 9, 16, 7, 13, 10, 13, 3, 19
The following scatter plot shows the closing prices of 2 stocks at various points in time. A line of best fit has been drawn. Use the scatter plot to answer the following questions.
5. How would you describe the correlation between the prices of the 2 stocks?
6. If the price of stock A is $12.00, what would you expect the price of stock B to be?
7. If the price of stock B is $47.75, what would you expect the price of stock A to be?
The following scatter plot shows the hours of exercise per week and resting heart rates for various 30-year-old males. A line of best fit has been drawn. Use the scatter plot to answer the following
8. How would you describe the correlation between hours of exercise per week and resting heart rate?
9. If a 30-year-old male exercises 2 hours per week, what would you expect his resting heart rate to be?
10. If a 30-year-old male has a resting heart rate of 65 beats per minute, how many hours would you expect him to exercise per week?
Digital signal processing: principles, algorithms, and applications
User Review
I have read the part on spectrum estimation, which is very comprehensive and clear. It helped me clarify many misconceptions and questions, especially in the nonparametric estimation methods, the time window effects, and frequency leakage.
A book for rookies in mathematics
User Review - Purnabhishek M... - Flipkart
It is a good book for digital signal processing, I admit, but it lacks a solid mathematical background. It doesn't have good links with mathematical concepts; it's like merely giving procedures to solve ...
DISCRETE-TIME SIGNALS AND SYSTEMS 39
THE Z-TRANSFORM AND ITS APPLICATION TO THE ANALYSIS 151
FREQUENCY ANALYSIS OF SIGNALS AND SYSTEMS 230
13 other sections not shown
Make Quick Work Of Impedance Measurements
Knowing the impedance of a transmission line can be useful for many design tasks. The line may be a piece of coaxial cable that can serve a test system setup in a laboratory, if its impedance could
be found. There are various techniques for determining the impedance of a transmission line, both simple and complicated and varying in accuracy accordingly. For those in need of a relatively quick
method for finding the impedance of a transmission line, based on vector-network-analyzer (VNA) measurements, the technique shown here is "quick and dirty." It provides accuracy that is as good as
the calibration of the VNA, typically better for lower measurement frequencies. The frequency span for which the measurements should be performed depends upon the length of the transmission line in
question. This simple method represents an application of the mathematical modeling of a transmission line.
For the purpose of explaining this technique, consider a transmission line of length "l." It has a characteristic impedance of Z[0] and is terminated in a load impedance of Z[L] at one side of the
transmission line. From transmission-line theory, the input impedance, Z[i] , is given by Eq. 1:
Z[i] = Z[0] × [Z[L] + jZ[0]tan(βl)] / [Z[0] + jZ[L]tan(βl)] (1)

where β = 2π/λ.

The real and imaginary parts of the input impedance can be easily calculated. If both Z[L] and Z[0] are purely resistive, the real and imaginary parts of the input impedance can be found from Eq. 2:

Re(Z[i]) = Z[0]²Z[L][1 + tan²(βl)] / [Z[0]² + Z[L]²tan²(βl)]
Im(Z[i]) = Z[0]tan(βl)[Z[0]² − Z[L]²] / [Z[0]² + Z[L]²tan²(βl)] (2)

While the real part of the input impedance can be nulled only if the load impedance Z[L] = 0 (i.e., a short circuit), the imaginary part admits three solutions, as shown by Eq. 3:

Im(Z[i]) = 0 when tan(βl) = 0, when Z[L] = Z[0], or when Z[L] = −Z[0] (3)

Then the imaginary part of the input impedance goes to zero not only when the load is perfectly matched to the transmission line impedance (i.e., when Z[0] = Z[L]).

It is useful to calculate the value of the real part of the input impedance when the imaginary part is zero [Im(Z[i]) = 0], as shown by Eq. 4:

Re(Z[i]) = Z[L] for tan(βl) = 0, and Re(Z[i]) = Z[0]²/Z[L] for tan(βl) → ∞ (4)

This means that if measurements are performed by means of a VNA on a transmission line terminated in a load, the graphical depiction of impedance on a Smith Chart will show a circle passing through (Z[L], 0) and (Z[0]²/Z[L], 0), with diameter |Z[L] − Z[0]²/Z[L]|. This circle will reduce to the point (Z[L], 0) if Z[0] = Z[L].
Figures 1 and 2 show the results for measurements of two coaxial cables having impedances of 50 and 75 Ω, respectively. Both of the cables are terminated in characteristic impedances of 50 Ω. These measurements are performed with a VNA calibrated in the same impedance as the load applied to the cable. Typically, the characteristic impedance of the system and the cable is 50 Ω, although in some cases, such as in cable-television (CATV) systems, the characteristic impedance may be 75 Ω. Since the accuracy of the VNA's calibration will determine the accuracy of these impedance measurements,
measurement. The minimum frequency can be chosen close to the network analyzer minimum working frequency.
Upon performing an impedance measurement on a transmission line, a complete circle will be plotted onto the Smith Chart if βl = π. This makes it possible to calculate the narrowest frequency span possible, given the length of the transmission line to be measured, using the equality shown in Eq. 5:

(2π/λ)l = π (5)

The value of λ can be found since λ = vc/f, where c is the speed of light and v is the relative velocity in the medium. Parameter v can be set to unity (1), since the task at hand is to find the minimum frequency span. By remembering that the speed of light, c, is equal to 3 × 10^8 m/s, the frequency span can be found from Eq. 6:

f[SPAN] (MHz) = 150/l (6)

For example, to determine the impedance of a 30-cm-long coaxial cable, a minimum frequency span of SPAN[min] = 150/0.3 = 500 MHz should be used.
In the example above, the VNA can be correctly set for a minimum frequency, f[min], of 10 MHz, and a maximum frequency, f[max], of 510 MHz. Once the VNA is set and calibrated, the transmission line to be measured is connected to its test port and the circle will be displayed on the VNA's on-screen Smith Chart. At this point, two markers must be placed on the displayed Smith Chart in order to measure the impedance of the two points that are crossing the horizontal axis. One will be close to Z[L], which is Re[0](Z[i]); the other one will measure Re[∞](Z[i]). Applying the second part of Eq. 4 yields Eq. 7:

Z[0] = [Re[0](Z[i]) × Re[∞](Z[i])]^0.5 (7)

In the example of Fig. 2 (a 75-Ω cable terminated in 50 Ω), the two markers show the impedances Re[0](Z[i]) = 51.15 Ω and Re[∞](Z[i]) = 108.2 Ω. From Eq. 7,

Z[0] = (51.15 × 108.2)^0.5 = 74.4 Ω
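The arithmetic in Eqs. 6 and 7 is easy to automate. The following sketch is my own illustration (the variable names are mine, not from the article); it reproduces the minimum-span calculation for a 30-cm cable and the Z[0] estimate from the Fig. 2 marker readings:

public class LineImpedance {
    public static void main(String[] args) {
        double lengthMeters = 0.30;                // cable under test
        double spanMHz = 150.0 / lengthMeters;     // Eq. 6: minimum frequency span
        System.out.println("minimum span = " + spanMHz + " MHz");

        // Marker readings where the trace crosses the real axis (Fig. 2)
        double re0 = 51.15;                        // Re[0](Zi), near the load value
        double reInf = 108.2;                      // Re[inf](Zi), the far crossing
        double z0 = Math.sqrt(re0 * reInf);        // Eq. 7: geometric mean
        System.out.printf("Z0 = %.1f ohms%n", z0); // prints about 74.4
    }
}

The same geometric mean reappears in Eq. 14 below for the lossy-line case.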
Although the method is best suited to measure unbalanced lines, such as coaxial cable, it can also be applied to balanced lines. These have the same mathematical modeling but the accuracy of the
measurement is generally diminished due to higher parasitic effects caused mainly by capacitive coupling between the wires and the ground, so it is even more important to set the frequency as low as
possible. Furthermore, the use of a balanced-unbalanced transformer (balun) between the VNA and the transmission line is recommended. Of course, the balun must work within the chosen frequency span
and must be taken into account during the calibration phase.
As an example, a 1-m-long straight bifilar line, with known nominal impedance of 150 Ω, was measured with and without a balun, with the results shown in Figs. 3 and 4. Applying Eq. 7, the measured impedance with and without the balun is Z[0] = (54.1 × 351)^0.5 = 137.8 Ω (measured with the balun) and Z[0] = (54.1 × 300.8)^0.5 = 127.6 Ω (measured without the balun).
If the resistance of the conductor is not negligible, the impedance of the transmission line cannot be considered purely resistive. In this case, it is necessary to start from the general formula that is used to model the impedance of a transmission line:

Z[0] = [(R + jωL) / (G + jωC)]^0.5 (8)

where:
R = the distributed resistance of the conductor in Ω/unit length;
L = the distributed inductance of the line in H/unit length;
G = the distributed conductance of the line in Siemens/unit length; and
C = the distributed capacitance of the line in F/unit length.

Equation 1 presents a formula for the input impedance of a lossless line. For a lossy line, this must be changed to the form of Eq. 9:

Z[i] = Z[0] × [Z[L] + Z[0]tanh(γl)] / [Z[0] + Z[L]tanh(γl)] (9)

where γ = α + jβ.

Here, the coefficient γ takes into account not only the phase shift β = 2π/λ, but also the amplitude attenuation α. In terms of RLGC, it is given by Eq. 10:

γ = [(R + jωL)(G + jωC)]^0.5 (10)
Equation 9 is more complex than Eq. 1, so it is not as easy to find its roots. However, it is possible to estimate the line impedance as the frequency of analysis approaches zero (DC) or a high frequency. If ω is close to zero, the impedance of the line will be given by Eq. 11:

Z[0](DC) = (R/G)^0.5 (11)

The input impedance of the terminated line can be calculated knowing that usually G will be negligible, so that also γl ≈ 0. Substituting it into Eq. 9 yields Eq. 12:

Z[i](DC) = Z[0] × [Z[L] + Z[0]tanh(0)] / [Z[0] + Z[L]tanh(0)] = Z[L] (12)

If, instead, ω is high, the impedance Z[0] will be as described by Eq. 13:

Z[0](∞) ≈ (L/C)^0.5 (13)
The AC losses will dominate, so the behavior will be similar to that seen for the lossless lines. This means that on the Smith Chart, a circle (or a single point) will appear. To calculate Z(∞), it is possible to proceed in a similar way as used to calculate the impedance Z[0] of a lossless line. The only difference is that the circle, in general, will not have its diameter on the real axis; the two impedances on the diameter will have to be taken on a segment parallel to the real axis and not just on the real axis. Again, if these two impedances are Re[a](Z[i]) and Re[b](Z[i]), Eq. 14 can be used:

Z[0](∞) = [Re[a](Z[i]) × Re[b](Z[i])]^0.5 (14)

Then, to measure the impedance of a lossy transmission line, the procedure is nearly identical to that used for lossless lines. The VNA start frequency must be set as low as possible, while the stop frequency must be set higher than the frequency found from Eq. 6. Depending on the ratio between R and L, C, the Smith Chart will display an arc of a circle terminating on a single point toward high frequency, or a series of circles.
If the impedance measured at the low frequency is Z[low], then R = (Z[low] − Z[L])/l. Then, to calculate the line's impedance, simply use Eq. 14. As an example for the simulations that follow, a 2-m-long lossy cable from an oscilloscope probe was measured with a VNA, with the results shown in Fig. 5. These measurement results lead to the following parameters for the scope probe cable: R = (414.3 − 50)/2 = 182.15 Ω/m and Z[0](∞) = 96.87 Ω.
The following two examples have been simulated using the free simulation tool Quite Universal Circuit Simulator (QUCS) from SourceForge.net. In these examples, lossy cables with line impedance of 100 Ω and distributed resistive losses of 180 Ω/m (in Fig. 6) and 90 Ω/m (in Fig. 7) were simulated. Converting the markers from S to Z results in Z[0](DC) = 99.4 − 2.3j and Z(∞) = 407.2 − 26.6j, so that R = (407.2 − 50)/2 = 178.6 Ω/m.

If the distributed resistance, R, in the second example is one-half the value (90 Ω/m compared to 180 Ω/m) of the first example, converting the markers from S to Z results in the details shown in Fig. 7. Again, converting the markers from S to Z yields Z[0](DC) = 230.1 + 0.006j, Z[a](∞) = 111.5 + 1.5j, and Z[b](∞) = 89.6 − 2.7j. Then, Z[0](∞) ≈ (111.5 × 89.6)^0.5 = 99.95 Ω and R = (230.1 − 50)/2 = 90 Ω/m. This simple method of finding the impedance of a transmission line provides quick results for any length of coaxial cable or other transmission line, and can be used with an available microwave VNA of suitable measurement bandwidth.
ALBERTO BAGNASCO, RF Senior Design Engineer, Selex Communications S.p.A., Via Pieragostini, 80-16151, Genova, Italy | {"url":"http://mwrf.com/print/test-amp-measurement-analyzers/make-quick-work-impedance-measurements","timestamp":"2014-04-21T10:41:37Z","content_type":null,"content_length":"26132","record_id":"<urn:uuid:6f01a63d-88a6-4f63-8b61-d3a031ca89c7>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
Moss Beach Algebra Tutors
...I don't just challenge my students; I actively encourage, or in many cases teach students to approach challenges eagerly and energetically. I empower my students to be in charge of their own
learning, and as a result they tend not only to improve in whatever subject we are studying, but also to ...
21 Subjects: including algebra 1, reading, English, ESL/ESOL
...I have taught middle school English for two years and public speaking is part of our core curriculum. Students learn the elements of persuasive speaking and how to understand their audience. I
teach my students how to integrate multimedia and visual displays in their presentations in order to strengthen their claims and engage their audience.
14 Subjects: including algebra 1, reading, English, grammar
...My original interest and intent in majoring in these two subjects was to teach freshman and sophomore level college courses, and I was enrolled in education to teach public school only as a
back-up plan and because I thought I might as well start my career with extra teaching skills. I am no lon...
48 Subjects: including algebra 2, calculus, statistics, physics
...I enjoy helping students in general chemistry, biochemistry, AP chemistry, SAT II chemistry and any other school chemistry test preparation. I hold a B.S. in chemistry and M.S. in biochemistry.
Using molecular cloning, I have also conducted research in biochemistry and neuroscience laboratories at UC Riverside and San Diego State University.
18 Subjects: including algebra 1, algebra 2, physics, chemistry
Hi parents and potential students, My name is Valerie, and I am a UC Berkeley graduate with a degree in Nutritional Science. Most of my coursework involved studying the human body, and therefore I
have an extensive knowledge of human biology. I also love math and have taken math courses up to and including calculus.
7 Subjects: including algebra 1, biology, anatomy, elementary math | {"url":"http://www.algebrahelp.com/Moss_Beach_algebra_tutors.jsp","timestamp":"2014-04-20T08:21:15Z","content_type":null,"content_length":"25053","record_id":"<urn:uuid:13588f73-b7af-4885-a09c-10d61bd1114e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lesson: Loops and Recursion
1. Review
In this section we have learned the two problem solving methods that are the foundation of everything we will do in computer science. Loops require you to break down a problem into a single step that
is repeated many times. Recursion asks you to assume that you have a way to solve the problem for a simpler case, and figure out how to reduce a more complex case to the simpler case.
Before you start writing any code, in any computer program, you need to know exactly how your code will work. It is no good just copying random code and trying to tie it together somehow; you have to
think before you do anything. This is especially true when solving a problem using loops or recursion: I need to be able to explain clearly and rigorously what my strategy is before I start writing
any code, or I will be lost.
What makes this hard is that usually you are trying to go from a description of what the desired solution looks like, to a procedure for actually obtaining it. It is of course necessary to understand
very clearly what the problem is asking for, but even if you know that much, you won't get anywhere until you figure out a strategy to search for the answer.
My uncle works at a company that does a lot of programming, and he says that when he interviews a candidate for a job, he can tell almost instantly whether they are a good programmer or not by how they approach a sample problem. Some people will immediately start writing down code that feels plausible, and then will see what it does, and if it doesn't work, fuss with it until it actually does what it is supposed to. Others will sit for a while and analyze the problem carefully, then write the code correctly the first time. I'm sure you can figure out for yourself who ends up getting the job.
When you are using loops to solve a problem, there are several issues you need to get locked down before you start writing your code:
• First, think out what sort of "count" you will be keeping. Are you counting up, or down? Counting by ones, or doing something more interesting? The for loop is a great friend to you here, because
it puts all this information together in one place.
• Are there any values other than the count that you need to keep track of between loops? These variables should be declared before the loop starts.
• What exactly happens in each time through the loop? Be careful that it will set up properly for the next time through the loop.
So, for example, suppose that I am trying to solve this problem:
A Pythagorean triple is a set of integers (a, b, c) for which a < b < c and a² + b² = c². Print all Pythagorean triples for which c is less than or equal to 20, and count how many there are.
I think to myself:
1. I am searching for values for these three variables. I know that c has to be an integer, less than or equal to 20. This means that b can be at most 19, and a can be at most 18. A good place to
start is to loop through all possible values of a:
for(int a = 1; a <= 18; a++) {
2. As I do this, I need to count how many triples I find. So, I should declare a variable - int count - outside the loop to keep track of this.
3. Next, I think about what I will do with each value of a. I say to myself, "For each value of a, I want to find every value of b for which a² + b² is a square number." Notice how I have turned my
description into a procedure: I am now doing a search through possible values of b in order to find one that meets some condition.
The phrase "every value of b" in my little description above suggests that there is another loop going on inside the one I already have. Again, the first step is thinking out how the counting
will work. I know that b has to be greater than a; that makes (a + 1) a good starting point. But what is the maximum value of b? Looking back at the problem, I know that √(a² + b²) has to be less
than or equal to 20. So:
for(int b = a + 1; Math.sqrt(a * a + b * b) <= 20; b++) {
4. Now I have to think to myself what will happen inside this inner loop. First I take a second to take stock of what I know about my variables each time through:
□ I have two integers, a and b
□ They define a right triangle whose hypotenuse is less than or equal to 20
□ b is greater than a
□ I want to know if their hypotenuse is an integer.
I know how to find c as a double. It looks like this:
double c = Math.sqrt(a * a + b * b);
The trick is to figure out whether this is an integer. Digging into my bag of tricks, I remember that I can round off a double to an int by adding .5 and then truncating to an integer. So, perhaps I should round the square root to an integer, and then check whether the equation still holds:
int c = (int)(Math.sqrt(a * a + b * b) + .5);
if(c * c == a * a + b * b) {
System.out.println("(" + a + ", " + b + ", " + c + ")");
The whole code for this example looks like this:
public int triples() {
    int count = 0;
    for(int a = 1; a <= 18; a++) {
        for(int b = a + 1; Math.sqrt(a * a + b * b) <= 20; b++) {
            int c = (int)(Math.sqrt(a * a + b * b) + .5);
            if(c * c == a * a + b * b) {
                System.out.println("(" + a + ", " + b + ", " + c + ")");
                count++;   // remember to count the triple we just found
            }
        }
    }
    return count;
}
In general, looping problems will clearly have either a process that is repeated over and over again, or a range of numbers that you need to search through. Later on in the course, we will also see situations where you are looping through a String, as many of you did with Lindenmayer systems, or where you are looping through a collection of objects.
1.2. Analyzing Recursion Problems
In solving a problem recursively, you take quite a different approach than you do in solving one iteratively. With a loop, you have to identify first what the smallest, repeating piece of the problem
is, and then try to duplicate that over and over. In a recursive problem, you instead assume that your method solves the problem, and you write your method by trying to think of how to break down a
more complicated case - usually meaning bigger parameter values - into a simpler case.
So, the process of figuring out a recursive solution to a problem looks like this:
1. First, you have to describe exactly what it is that your method accomplishes - it moves a stack of n disks to a different tower, or it multiplies two numbers. Part of your task here is to write
the signature of the method - think carefully about what information needs to go into it, and what it will give back.
2. Next, think out how you can reduce one particular case into one or more simpler cases, and then combine the answers. So, for example, you say, "Moving n disks means moving n-1 to the extra tower,
then moving the nth disk, then moving the n-1 back on top of it." You can go ahead and write in this code, which is usually very simple.
3. At this point, you have a method that will run forever. You need to identify a base case, some point that the program will always reach, when it gets to sufficient depth, and that can be answered
with a simple answer rather than a more complex one. The base case will be identified with an if statement.
Remember, the main thing that makes recursive reasoning special is that you start by describing exactly how your method works, and then, relying on the fact that it will eventually actually do that, you go about filling in the actual code.
As an example, suppose that I have a method double f(double x) that represents some mathematical function. Here is an example of an f that is a simple cubic funcion:
double f(double x) {
return .5 * x * x * x + 1 * x * x - 2 * x - 3;
A plot of this function, drawn, appropriately enough, by a Turtle, is shown to the right. As you can see, I have labeled the x and y intercepts of the function.
Suppose that you wanted to write a method to locate one of the roots of this function - that is, an x value for which f(x) is 0. You would have to specify what range of x's to look in, since as you
can see there are three roots. So, my method might look something like this:
// Return the x value in the given range for which f(x) = 0
public double root(double minX, double maxX)
So, for example, root(0, 4) ought to return 1.865, and root(-2, 0) ought to return -1.2111.
If I want to implement this method recursively, I think to myself: "How can I simplify the task of looking for a root in the given range?" Well, a simple way to do it would be to narrow the range. I
will do this by figuring out where the function changes sign - in the first half, or the second half. Wherever that happens, I can call root() again with a reduced range just containing half of the
original range:
// Return the x value in the given range for which f(x) = 0
public double root(double minX, double maxX) {
    double midX = (minX + maxX) / 2;
    if(f(minX) * f(midX) < 0) {
        // If multiplying these two gives a negative number, they are opposite in sign.
        // So, the function crosses the axis in the first half
        return root(minX, midX);
    } else {
        // Otherwise, the function crosses the axis in the second half
        return root(midX, maxX);
    }
}
As usual, I find myself with a method that really ought to work, except that it will never terminate: it just keeps looking at smaller and smaller ranges. I need to introduce a "base case" that will
end the whole process when we get to a difference that is too small to matter. A simple way to define this would be to stop when the width of my range is too small to be noticeable; if it is, say,
less than .00001, then the error must be less than that.
// Return the x value in the given range for which f(x) = 0
public double root(double minX, double maxX) {
    double midX = (minX + maxX) / 2;
    if(maxX - minX < .00001) {
        return midX;
    } else if(f(minX) * f(midX) < 0) {
        // If multiplying these two gives a negative number, they are opposite in sign.
        // So, the function crosses the axis in the first half
        return root(minX, midX);
    } else {
        // Otherwise, the function crosses the axis in the second half
        return root(midX, maxX);
    }
}
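To sanity-check the finished method, call it on the two ranges mentioned earlier (this snippet assumes the f() and root() definitions above):

// Quick check, assuming f() and root() as defined above
System.out.println(root(0, 4));    // prints approximately 1.865
System.out.println(root(-2, 0));   // prints approximately -1.2111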
2. Exercise: sandwich()
Let us define a sandwich of size n to be a String containing the number n enclosed between two sandwiches of size n - 1. A sandwich of size 0 is an empty string. So, for example, sandwich(2) = "121",
and thus sandwich(3) = "1213121".
Write both a recursive and an interative method to calculate this. Be sure that you carefully work out how each method works before you start writing code.
What would you modify if you wanted each sandwich to be enclosed in parentheses, as in "(((1)2(1))3((1)2(1)))"?
What would you modify to take the numbers out of the above pattern, so that sandwich(3) just returns "((()())(()()))"?
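If you get stuck, here is one possible shape for the recursive version; this is a sketch of my own, so try the exercise before reading it. The base case returns the empty string, and each larger sandwich wraps its number around two copies of the next smaller one:

public String sandwich(int n) {
    if (n == 0) {
        return "";                       // base case: a sandwich of size 0 is empty
    }
    String inner = sandwich(n - 1);      // assume the simpler case already works
    return inner + n + inner;            // n enclosed between two smaller sandwiches
}

For the variants, wrapping the returned string in parentheses, as in "(" + inner + n + inner + ")", produces the first pattern, and dropping the number as well produces the second.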
Solution to sandwich()
3. Homework
Loops and Recursion (pdf) (Due: 10/24/06)
Initial Velocity - Projectile Motion
The initial velocity to fire is the same as the velocity an object would be traveling at after falling from its peak altitude.
Formulas and calculator can be found here:
If the time is 5 seconds, the altitude would be 123 meters and the initial velocity would be 49 meters/second.
If the altitude is 183 meters, the time is 6.1 seconds at an initial velocity of 59 meters/second. | {"url":"http://www.physicsforums.com/showthread.php?s=16db9a4951cb69f98818fb5b80c8bf5d&p=4614397","timestamp":"2014-04-25T08:37:34Z","content_type":null,"content_length":"32690","record_id":"<urn:uuid:86105a2e-141d-40a5-9ad4-7862f3520813>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00271-ip-10-147-4-33.ec2.internal.warc.gz"} |
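Those figures follow from h = (1/2)gt² and v = gt. The sketch below is my own illustration (not from the original post); with g = 9.81 m/s² it lands within a meter or so of the quoted values, which appear to be rounded:

public class FreeFall {
    public static void main(String[] args) {
        double g = 9.81;                       // gravitational acceleration, m/s^2
        for (double t : new double[] {5.0, 6.1}) {
            double h = 0.5 * g * t * t;        // altitude fallen in time t
            double v = g * t;                  // speed reached after time t
            System.out.printf("t = %.1f s: h = %.1f m, v = %.1f m/s%n", t, h, v);
        }
    }
}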
East Norriton, PA ACT Tutor
Find an East Norriton, PA ACT Tutor
...I tutor fifth grade math, kindergarten and first grade reading, and I work as an assistant in a first grade classroom. I also tutor English to students ages 6-8. I also used to teach Latin to
first graders, which involved writing my own lesson plans.
34 Subjects: including ACT Math, reading, ESL/ESOL, English
...Solve polynomials. 7. Solve problems involving fractions. 8. Solve problems by using factoring. 9.
27 Subjects: including ACT Math, calculus, geometry, statistics
...Generally, the topics most students have issues with and that I can help to demystify are how to identify constants and variables in basic equations, properly using exponentiation, ratios,
logarithms, and simplifying radicals. Science related math concepts in pre-algebra that I can clarify for s...
9 Subjects: including ACT Math, chemistry, algebra 2, geometry
...I took a Linear Algebra course in college and passed it with a B. I have experience with the uses of linear algebra and matrices, including row reduction and multiplication of matrices.
...I began tutoring in high school, where I helped my peers in subjects such as Chemistry and Spanish after school hours. During my time at college, I tutored children in local elementary
schools, as well as local residents in ESL. My favorite subjects to tutor are languages and test prep.
23 Subjects: including ACT Math, reading, Spanish, writing
Math Tools Discussion: Dynamic Geometry Exploration: Properties of the Midsegment of a Trapezoid tool, Midsegment of a rectangle?
Discussion: Dynamic Geometry Exploration: Properties of the Midsegment of a Trapezoid tool
Topic: Midsegment of a rectangle?
Related Item: http://mathforum.org/mathtools/tool/15621/
Subject: RE: More on What is a Trapezoid
Author: Annie
Date: Dec 13 2004
On Dec 10 2004, lanius wrote:
> I suppose it is legal to respond to your own post. :-) I'd like
> to hear more about the original discussion that I was trying to
> generate here.
>
> Consider these two definitions:
>
> Def 1: A quadrilateral with at least one pair of parallel sides.
>
> Def 2: A quadrilateral with EXACTLY one pair of parallel sides.
>
> I think most US high school textbooks define a trapezoid with definition 2.
> Does anyone know of a US high school textbook that doesn't define
> trapezoids in that way (or equivalent to that)? Does anyone know the
> history of when or why that definition came to be used in schools,
> at least in the US? Is definition 1 used in other countries in
> current K-12 textbooks?
As far as I know, the UCSMP books use the more inclusive definition. I don't know of other high school textbooks that use it (I was holding out for Jacobs, but he didn't come through for me). The topic of the definition has been discussed a bit on the Forum's geometry.pre-college newsgroup, and is summarized in this Dr. Math read:
Basic Calculator
In this tutorial, Basic Calculator in C#, we will look at creating a basic calculator. The calculator will have the following functionality:
• Addition
• Subtraction
• Division
• Multiplication
• Square Root
• Exponents (Power Of)
• Clear Entry
• Clear All
There will be a 2nd tutorial that will cover some more advanced features such as
• Adding a number to memory
• Removing a number from memory
• Calculating with a number in memory
• Entering numbers by typing
The first thing you need to do is create a new project in Visual Studio (or Visual C# Express Edition if that's what you use). Once you have created your new project you need to create your user interface; your user interface should look like this:
Your user interface will consist of
• Buttons 0 through 9
• Buttons for
□ Addition
□ Subtraction
□ Division
□ Multiplication
□ Exponents (x^)
□ Inverse (1/x)
□ Square Root (sqrt)
• Decimal
• Equals
• Backspace
• CE (Clear Entry)
• C (Clear All)
• ReadOnly TextBox for input (Make sure TabStop is also set to False)
How you set up your user interface is up to you, but remember people are used to a calculator looking a certain way, so you may wish to follow my example. In this tutorial I will show you how to code two of the number buttons (since all 10 are the same except the zero button), how to code the calculation buttons, the clear buttons and the backspace button. For two of the buttons we will need to use built-in math functions in the .NET Framework: System.Math.Sqrt and System.Math.Pow. For the record, there are many more members of the System.Math class, documented in the online MSDN.
Now, in our calculator we need some
variables to hold different items and states in our calculator, such as which calculation are we performing, does the input area already have a decimal, whether we can enter values into the input
area and to hold values while we perform calculations. Add the following code to the top of your code, this is the global variables we need in our calculator:
//variables to hold operands
private double valHolder1;
private double valHolder2;
//Varible to hold temporary values
private double tmpValue;
//True if "." is use else false
private bool hasDecimal = false;
private bool inputStatus = true;
//Variable to hold the operator
private string calcFunc = string.Empty;
These variables will be used throughout our program; that's why they're globals. Now, before any calculations can be done, the user needs to be able to enter numbers into the input box, so let's take a look at how to do that. Since all the number keys in the calculator are the same (except the 0 (zero) key, which we'll cover in a minute), I will code one of the buttons, then you can do the rest.
Lets take a look at the number one key:
private void cmd1_Click(object sender, System.EventArgs e)
{
    //Check the inputStatus
    if (inputStatus)
    {
        //Its True
        //Append values to the value
        //in the input box
        txtInput.Text += cmd1.Text;
    }
    else
    {
        //Value is False
        //Set the value to the value of the button
        txtInput.Text = cmd1.Text;
        //Toggle inputStatus to True
        inputStatus = true;
    }
}
When a user clicks a number button (in this case the number one button) we check the status of the inputStatus flag. If it's true then we know we can just append the next value to the end of what's currently in the input box; otherwise we just enter the number into the input box. All the remaining numbers, as stated before, follow this procedure. The zero button is slightly different, as we don't want the user to be able to enter zero as the first number (this is covered more in the decimal button functionality). So let's take a look at how we code the zero button:
private void cmd0_Click(object sender, System.EventArgs e)
{
    //Check the input status
    if (inputStatus)
    {
        //If true
        //Now check to make sure our
        //input box has a value
        if (txtInput.Text.Length >= 1)
        {
            //Add our zero
            txtInput.Text += cmd0.Text;
        }
    }
}
First we check the status of the inputStatus flag; if it's true we know we can enter a number in the box. Here we do a second check: we make sure the length of the text in the input box is at least 1 (it has a value), and if so we enter the zero into the input box.
For adding a decimal to our input box we need to first make sure the box doesn't already contain one; for this we use the hasDecimal global (boolean) variable. Then we need to make sure our input box has a value (we don't want the user to be able to enter a decimal as the first value). Then we make sure the value in the input area isn't 0 (zero); this we will handle later.
If all those are true then we enter the decimal, then toggle the hasDecimal flag to True so the user can't enter a 2nd one. Now, if the input area doesn't have a value, we enter 0., as we assume the user wants to work with a decimal value such as 0.5. Let's take a look at the procedure for doing this:
private void cmdDecimal_Click(object sender, System.EventArgs e)
{
    //Check for input status (we want true)
    if (inputStatus)
    {
        //Check if it already has a decimal (if it does then do nothing)
        if (!hasDecimal)
        {
            //Check to make sure the length is at least 1
            //Don't want user to add decimal as first character
            if (txtInput.Text.Length != 0)
            {
                //Make sure 0 isn't the first number
                if (txtInput.Text != "0")
                {
                    //It met all our requirements so add the decimal
                    txtInput.Text += cmdDecimal.Text;
                    //Toggle the flag to true (only 1 decimal per calculation)
                    hasDecimal = true;
                }
            }
            else
            {
                //Since the length isn't at least 1,
                //make the text 0.
                txtInput.Text = "0.";
            }
        }
    }
}
As you can see, we check all the items mentioned above; if they're true we add the decimal, otherwise we add 0. to the input area.
The first calculation we will look at is addition. The first thing we do here is to make sure the input box has a value (its Length isn't 0). If it does, then we check the calcFunc flag. The calcFunc variable will be used to tell our CalculateTotals procedure which calculation to perform. Here, if the value is empty (String.Empty) we assign the value of our input box to a variable, valHolder1, which will hold the first part of all calculations, then clear out the input box so the user can enter a 2nd number.
If the calcFunc variable isn't empty then we call our CalculateTotals procedure to display a total to the user. We then assign the value "Add" to our calcFunc variable for the next turn through, then we toggle the hasDecimal flag to False. Now let's take a look at how we accomplished this:
private void cmdAdd_Click(object sender, System.EventArgs e)
{
    //Make sure our input box has a value
    if (txtInput.Text.Length != 0)
    {
        //Check the value of our function flag
        if (calcFunc == string.Empty)
        {
            //Flag is empty
            //Assign the value in our input
            //box to our holder
            valHolder1 = System.Double.Parse(txtInput.Text);
            //Empty the input box
            txtInput.Text = string.Empty;
        }
        else
        {
            //Flag isnt empty
            //Call our calculate totals method
            CalculateTotals();
        }
        //Assign a value to our calc function flag
        calcFunc = "Add";
        //Toggle the decimal flag
        hasDecimal = false;
    }
}
Believe it or not, all the other basic calculation buttons are the same as the Add button, with the exception of what we set calcFunc to. In the other buttons we set this variable to the calculation we want to perform (Subtract, Divide, Multiply, and so on), so there really isn't a reason to show how that is done since we did the Add button and the others are the same.
Even though they are the same, I'll show the functionality of one more calculation button. This time we will look at the code for the subtraction button. The first thing we do here is to make sure the input box has a value. If it does, then we check the calcFunc flag, which tells our CalculateTotals procedure which calculation to perform. Here, if the value is empty (String.Empty) we assign the value of our input box to valHolder1, which will hold the first part of all calculations, then clear out the input box so the user can enter a 2nd number.
If our calcFunc isn't empty then we call our CalculateTotals method to perform the calculations. We then assign the value "Subtract" to our calcFunc variable so the calculations method will know which calculation to perform. The code for the subtraction button looks like this:
private void cmdSubtract_Click(object sender, System.EventArgs e)
{
    //Make sure the input box has a value
    if (txtInput.Text.Length != 0)
    {
        //Check the value of our calculate function flag
        if (calcFunc == string.Empty)
        {
            //Flag is empty
            //Assign the value of our input
            //box to our holder
            valHolder1 = System.Double.Parse(txtInput.Text);
            //Empty the input box
            txtInput.Text = string.Empty;
        }
        else
        {
            //Flag isnt empty
            //Call our calculate totals method
            CalculateTotals();
        }
        //assign a value to our
        //calculate function flag
        calcFunc = "Subtract";
        //Toggle the decimal flag
        hasDecimal = false;
    }
}
That's how the normal calculation buttons are coded. Now let's say you want to give the user the option to calculate exponents, 4^2 for example. To code this button you need a couple of checks before doing anything. First we need to check and make sure the input area has a value; if it does, then we check the value of the calcFunc flag.
If this is empty, we convert the value of the input area to a Double and assign it to the valHolder1 variable to hold on to (this will be used for the calculations in the CalculateTotals procedure) and empty the value from the input area. If it's not empty, we directly call the CalculateTotals function, as this means the user has already entered 2 numbers.
We then assign the value "PowerOf" to our calcFunc variable, which will tell CalculateTotals what calculation to perform, and toggle the hasDecimal flag to False. Let's take a look at how we accomplished all of this:
private void cmdPowerOf_Click(object sender, System.EventArgs e)
{
    //Make sure the input box has a value
    if (txtInput.Text.Length != 0)
    {
        //Check if the calcFunc flag is empty
        if (calcFunc == string.Empty)
        {
            //Assign the value of the input box to our variable
            valHolder1 = System.Double.Parse(txtInput.Text);
            //Empty the input box
            //So the user can enter the power of value
            txtInput.Text = string.Empty;
        }
        else
        {
            //Call the calculate totals method
            CalculateTotals();
        }
        //Assign our flag the value of "PowerOf"
        calcFunc = "PowerOf";
        //Reset the decimal flag
        hasDecimal = false;
    }
}
Doing a square root is somewhat different, as it doesn't take 2 values, just the number you want the square root of, so some of the checking required in the other calculations isn't needed here. For a square root we first check to ensure the input area has a value. If it does have a value, we assign the value of the input area, converted to a Double, to our tmpValue variable. Once we have the value, we call the System.Math.Sqrt method to perform the calculation on tmpValue. Once this is complete we assign the resulting value to our input area, then toggle the hasDecimal flag to False. Let's take a look at how this is done:

private void cmdSqrRoot_Click(object sender, System.EventArgs e)
{
    //Make sure the input box has a value
    if (txtInput.Text.Length != 0)
    {
        //Assign our variable the value in the input box
        tmpValue = System.Double.Parse(txtInput.Text);
        //Perform the square root
        tmpValue = System.Math.Sqrt(tmpValue);
        //Display the results in the input box
        txtInput.Text = tmpValue.ToString();
        //Clear the decimal flag
        hasDecimal = false;
    }
}

In the last 2 buttons we have looked at how you use the two System.Math members I mentioned earlier; pretty simple, isn't it?

The Equals button is quite simple. Here, we first check to make sure our input area has a value and that the entered value isn't a zero (dividing by 0 is a bad thing). If both of these are true we call the CalculateTotals procedure to perform our calculations based on the value of the calcFunc flag. We then clear the value of calcFunc and toggle the hasDecimal flag to False; its handler simply performs those steps.
We have 3 more buttons to look at before we look at the CalculateTotals procedure. First we'll look at the backspace button. For the backspace, first we need to make sure the input area has a value. If it does, then we retrieve the last character and see if it's a decimal; if it is, we toggle the hasDecimal flag to False. Next we create an Integer variable (loc) to hold the length of the contents in the input area. From there we use the Remove method, along with loc, to remove the last character of the string each time the user clicks the backspace button.
private void cmdBackspace_Click(object sender, System.EventArgs e)
{
    //Declare locals needed
    string str;
    int loc;
    //Make sure the text length is > 0
    if (txtInput.Text.Length > 0)
    {
        //Get the last character
        str = txtInput.Text.Substring(txtInput.Text.Length - 1);
        //Check if it's a decimal
        if (str == ".")
        {
            //If it is toggle the hasDecimal flag
            hasDecimal = false;
        }
        //Get the length of the string
        loc = txtInput.Text.Length;
        //Remove the last character
        txtInput.Text = txtInput.Text.Remove(loc - 1, 1);
    }
}
The last 2 buttons I'm going to demonstrate are the CE (Clear Entry) and C (Clear All) buttons. These are very simple. First the Clear Entry button: what we do here is set the value in the input area to empty (String.Empty), and the hasDecimal flag to False.
private void cmdClearEntry_Click(object sender, System.EventArgs e)
{
    //Empty the input box
    txtInput.Text = string.Empty;
    //Toggle the decimal flag
    hasDecimal = false;
}
The Clear All button requires a bit more code as we do more with this button. Here we set our 2 holder variables, valHolder1 and valHolder2, to 0 (zero); we then set the calcFunc flag to String.Empty and the hasDecimal flag to False, like this:
private void cmdClearAll_Click(object sender, System.EventArgs e)
{
    //Empty the text in the input box
    txtInput.Text = string.Empty;
    //Clear out both temp values
    valHolder1 = 0;
    valHolder2 = 0;
    //Set the calc switch to empty
    calcFunc = string.Empty;
    //Toggle the hasDecimal flag
    hasDecimal = false;
}
Those are the buttons you need for a basic calculator. The final thing we're going to look at is the procedure that actually does the calculations, CalculateTotals. Here the first thing we do is set our variable valHolder2 to the current value of the input area.
We then do a switch on the value of calcFunc so we know which calculation to perform. We perform our calculation (add, subtract, divide, multiply, exponent, etc.) and set the result to the input area so the user can see it. Finally we set the inputStatus flag to False. This is what this procedure looks like:
private void CalculateTotals()
{
    valHolder2 = System.Double.Parse(txtInput.Text);
    //determine which calculation we're going to execute
    //by checking the value of calcFunc
    switch (calcFunc)
    {
        case "Add":
            valHolder1 = valHolder1 + valHolder2;
            break;
        case "Subtract":
            valHolder1 = valHolder1 - valHolder2;
            break;
        case "Divide":
            valHolder1 = valHolder1 / valHolder2;
            break;
        case "Multiply":
            valHolder1 = valHolder1 * valHolder2;
            break;
        //exponents (power of)
        case "PowerOf":
            valHolder1 = System.Math.Pow(valHolder1, valHolder2);
            break;
    }
    //set our input area to the value of the calculation
    txtInput.Text = valHolder1.ToString();
    inputStatus = false;
}
For the Exponents (Power Of) we use the System.Math.Pow Method for calculating the value.
There are two more buttons in this calculator that we didn't cover, basically due to the length of the tutorial. Those buttons are included in the sample project I'm attaching to this tutorial.
That's it, that's how you create a basic calculator in C#. I hope you find this tutorial helpful. I am including the project file with this tutorial, but remember this solution is under a license, so you may not remove the header from the files or turn this project in as your homework assignment.
I know I am forced to go with the honor system in this, but if you do just turn this in as your assignment, not only will you be cheating, but you will learn nothing, and subsequently won't know enough to become a programmer once you get out of school.
I will be doing a 2nd part to this tutorial where I look at adding more advanced functionality to this calculator, such as adding a number to memory, removing a number from memory, calculations with a number in memory and more.
Thank you for reading!
PC_Calculator_CSharp.zip (120.99K) Number of downloads: 17896 | {"url":"http://www.dreamincode.net/forums/topic/32968-basic-calculator-in-c%23/","timestamp":"2014-04-18T04:19:57Z","content_type":null,"content_length":"175695","record_id":"<urn:uuid:26fb0240-e27a-4865-8d42-4a61ec911001>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00661-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum - Problems Library - Algebra, Linear Equations: Graphing
This page:
linear equations
slope between points
finding intercepts
slope-intercept form
Graphing Linear Equations
An understanding of the coordinate plane and the ability to graph a linear equation are helpful tools in these problems.
Related Resources
Interactive resources from our Math Tools project:
Algebra: Linear Equations
The closest match in our Ask Dr. Math archives:
High School: Linear Equations
NCTM Standards:
Algebra Standard for Grades 9-12
Access to these problems requires a Membership. | {"url":"http://mathforum.org/library/problems/sets/alg_lineareqs_graphing.html","timestamp":"2014-04-17T01:53:39Z","content_type":null,"content_length":"14188","record_id":"<urn:uuid:7fb47da7-7fb4-4f84-8c79-c44ec2a21d66>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00013-ip-10-147-4-33.ec2.internal.warc.gz"} |
Greenville, GA Math Tutor
Find a Greenville, GA Math Tutor
...I have taught for four years in a public school setting and am now looking to extend my skills into private tutoring. I was a Science major at the University of Georgia, then decided I wanted
to teach and received my teaching certificate. Recently, I finished my Master's degree in Education.
11 Subjects: including algebra 1, algebra 2, biology, chemistry
...I majored in biology with a focus on pre-medicine. I took an abundance of basic science courses including: biology, chemistry, organic chemistry, physics, genetics, cell biology, toxicology,
zoology, botany, and vertebrate natural history. I spent many hours tutoring my peers and working in study groups.
22 Subjects: including algebra 2, chemistry, prealgebra, algebra 1
I am a Georgia certified educator with 12+ years in teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT
and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
13 Subjects: including trigonometry, discrete math, linear algebra, probability
...I have been a math tutor in the past for subjects such as Algebra I, II, and geometry. I have also assisted students in studying for the SAT and ACT. My scores were 1980 (superscored SAT), 1880 (highest single-sitting SAT), and 28 (ACT). I am a member of Pi Sigma Alpha, the Political Science Honor Society.
7 Subjects: including SAT math, PSAT, political science, algebra 1
...During my 20 years of tutoring, I have always emphasized both arithmetic and logical reasoning skills. My unique method of teaching makes it easier for my students to understand and solve
almost any type of math problem. Moreover, I've been extremely successful in teaching my students to always utilize a step-wise approach to problem-solving.
57 Subjects: including geometry, algebra 1, algebra 2, SAT math
Maths Shop Window
You work in a maths shop in the lobby of the Hilbert Hotel.
The manager wants to make a window display to highlight the different types of real valued functions of the real numbers that she has on offer, with the 'best' examples of functions from each
category along with a representation which most sums up the 'essence' of each category. She decides that she wants to showcase 9 particularly important types of function categories:
1 Periodic
2 Tending to a vertical asymptote
3 Discontinuous somewhere
4 Decreasing
5 Bounded
6 Infinitely differentiable at all points
7 Singular somewhere
8 Taking finitely many values
9 Unique tangent exists at all points
Think of a few examples of functions from each category and the different ways that you might represent the different categories. What would be the clearest examples and representations that you
could think of to showcase these function categories?
It might be that you are in competition with another assistant to produce the best display; if so you will need to convince the manager that your selection of 9 functions and representations is the
best; it may be that you will need to work collaboratively simply to dream up any examples in some of the categories! It might be that you wish to suggest a better set of function categories.
Imagine now that you are faced with fussy customers who are likely to request simple examples of functions satisfying pairs of these properties. Which requests can you satisfy? Which requests will it
be impossible to satisfy? | {"url":"http://nrich.maths.org/6904/index?nomenu=1","timestamp":"2014-04-19T12:30:45Z","content_type":null,"content_length":"4771","record_id":"<urn:uuid:62bf5b4a-2d64-41e7-bc8e-49e892268911>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00206-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
October 10th 2012, 08:10 AM #1
Find the extrema of $(x^2-8)^{2/3}$
The answers are $\pm 2\sqrt{2}$ and $0$ for the $x$ values, and the local minima are $(\pm 2\sqrt{2},0)$, which makes complete sense if you look at the graph. However, algebraically I also solve for a local maximum at $(0,4)$, which makes no sense to me as it's undefined on the graph. Why does my book say it counts as a local maximum if it's "undefined"?
October 10th 2012, 09:34 AM #2
Re: Extremas
We are given:
$$f(x)=\left(x^2-8\right)^{2/3}$$
and we find:
$$f'(x)=\frac{4x}{3\left(x^2-8\right)^{1/3}}$$
So, we see we have the 3 critical values:
$$x=0,\quad x=\pm2\sqrt{2}$$
While the derivative is undefined for $x=\pm2\sqrt{2}$, the function is defined there, indicating we have cusps at these points.
We then find on the intervals:
$(-\infty,-2\sqrt{2})$ derivative is negative, function is decreasing.
$(-2\sqrt{2},0)$ derivative is positive, function is increasing.
$(0,2\sqrt{2})$ derivative is negative, function is decreasing.
$(2\sqrt{2},\infty)$ derivative is positive, function is increasing.
So, by the first derivative test for extrema, we find minima at $(\pm2\sqrt{2},0)$
and a maximum at $(0,4)$.
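A quick sanity check on that maximum (a worked evaluation, added here for clarity): $f(0)=\left(0^2-8\right)^{2/3}=\left((-8)^{1/3}\right)^2=(-2)^2=4$, so $(0,4)$ really is a point on the curve. Graphing utilities that evaluate $u^{2/3}$ as $e^{(2/3)\ln u}$ cannot handle $u<0$ and simply omit the branch where $x^2<8$, which is likely why the point appears "undefined" on the graph.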
Examples of finite dimensional non simple non abelian Lie algebras
Hello, I have recently started reading about Lie algebras. However all the examples I have encountered so far are simple and semisimple Lie algebras. Thus I would love to see an example of a real or
complex finite dimensional Lie algebra $A$ with the following property:
$A$ is non abelian and it contains non trivial ideals.
Usually, the difficulty is in finding the simple ones :) – Mariano Suárez-Alvarez♦ Oct 14 '11 at 17:13
Community-wiki? Examples come in all shapes and sizes. – Jim Humphreys Oct 14 '11 at 20:43
@Srifo B: Here is an exercise for you: given simple Lie algebras $s_1,...,s_n$, construct a Lie algebra with abelian radical $r$ and semi-simple part $s_1\times...\times s_n$, such that $[r,s_i]\
neq 0$ for $i=1,...,n$. (Hint: think of the adjoint representation). – Alain Valette Oct 15 '11 at 8:00
5 Answers
A nice example to play around with is the Lie algebra of upper triangular matrices. It is solvable, so has plenty of ideals and things like that.
And if you take the strictly upper triangular, then it is nilpotent. – Yiftach Barnea Oct 14 '11 at 19:01
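(To make both statements concrete, a worked check added for illustration: in the Lie algebra $\mathfrak{b}_2$ of $2\times 2$ upper triangular matrices one computes
$$\left[\begin{pmatrix}a&b\\0&c\end{pmatrix},\begin{pmatrix}a'&b'\\0&c'\end{pmatrix}\right]=\begin{pmatrix}0&(a-c)b'-(a'-c')b\\0&0\end{pmatrix},$$
so $\mathfrak{b}_2$ is non-abelian, and the strictly upper triangular matrices $\mathfrak{n}_2$ form a non-trivial ideal containing $[\mathfrak{b}_2,\mathfrak{b}_2]$--exactly the kind of example the question asks for.)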
I think it is also a good idea to take a look at the following papers by Willem de Graaf et al. about nilpotent and solvable Lie algebras of small dimension over arbitrary fields:
http://arxiv.org/abs/math/0511668
The 3-dimensional Heisenberg Lie algebra can be described by the presentation:
$$\mathcal{H}=\big\langle x, y, z\,\big\vert\,[x,y] = z, [x,z]=[y,z]=0\big\rangle$$
The derived subalgebra $[\mathcal{H},\mathcal{H}]$ is a central ideal spanned by $z$, and the whole Lie algebra is a nilpotent Lie algebra (thus not simple or semi-simple).
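(A standard matrix realization, added for illustration: take $x=E_{12}$, $y=E_{23}$, $z=E_{13}$ inside the strictly upper triangular $3\times 3$ matrices; then $[x,y]=E_{12}E_{23}-E_{23}E_{12}=E_{13}=z$ and $[x,z]=[y,z]=0$, recovering the presentation above.)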
Take any non-abelian Lie algebra $L$ and consider $L\oplus {\mathbb C}^n$. This Lie algebra is non-abelian, and non-semisimple because it has a non-trivial radical.
To get a good idea about the relative paucity of simple Lie algebras (as Mariano says in his comment above), you could take a look at a list of low-dimensional Lie algebras. For instance, here: J. Patera, R.T. Sharp, P. Winternitz, and H. Zassenhaus, Invariants of real low dimension Lie algebras, J. Mathematical Phys. 17 (1976), no. 6, 986–994.
Vector Bundles on Degenerations of Elliptic Curves and Yang-Baxter Equations

Memoirs of the American Mathematical Society
2012; 131 pp; softcover
Volume: 220
ISBN-10: 0-8218-7292-3
List Price: US$71
Individual Members: US$56.80
Order Code: MEMO/220/

In this paper the authors introduce the notion of a geometric associative \(r\)-matrix attached to a genus one fibration with a section and irreducible fibres. It allows them to study degenerations of solutions of the classical Yang-Baxter equation using the approach of Polishchuk. They also calculate certain solutions of the classical, quantum and associative Yang-Baxter equations obtained from moduli spaces of (semi-)stable vector bundles on Weierstraß cubic curves.

Contents:
• Introduction
• Yang-Baxter equations
• Massey products and AYBE--a single curve
• Massey products and AYBE--families of curves
• Explicit calculations--smooth curves
• Explicit calculations--singular curves
• Summary
• Bibliography
PART IVd2b. THE THREE-FOLD NUMBER
A. TO GOVERN ALL THINGS
A.1. THE THREE-FOLD NUMBER
In view of what follows next it seems necessary to remind the reader that the phi-based exponential planetary frameworks developed and discussed in Parts I through IV stemmed entirely from the
rejection of Bode's "law" and the resulting need to develop a more workable approach to the structure of the Solar System. Simply stated, a mathematical problem concerning mean planetary periods
resulted in the final determination of a constant of linearity (k) from the quadratic equation k ^2- k - 1 = 0. Thus the required constant for the periods (planetary and synodic) turned out to be the
Golden Ratio Phi = 1.6180339887949. Moreover, as a consequence of the applied methodology the final exponential planetary framework was in turn necessarily predicated on the larger (though intimately
related) constant Phi^2 = 2.6180339887949. This determination was in fact the direct result of dealing with Time before Distance and the latter in turn before Velocity, although as we have seen, all three
parameters were later combined in the final planetary frameworks in any event. On completion of the latter, however, it became apparent from various philosophical writings and cues that the initial
emphasis on Time, i.e., periods of revolution, was--to a certain extent at least--already present in ancient works (see below). Consequently, and also as treated in the previous sections, what
follows next is a natural pursuit of the various related threads that have been passed down to us; thus basically a search for similarity, and hopefully further enlightenment.
This said, however, the degree of understanding attained in earlier times is difficult to assess for many reasons; but if Spira Solaris and its integral parameters are in any way underlying features
in certain philosophical writings, then it should be possible to focus on the subject matter with specific numerical values and mathematical concepts already in place. Thus although Plato's emphasis
in both the Epinomis and the Timaeus suggests that there is little likelihood of detailed understanding without lengthy and specific instructions:^1
So much, then, for our program as a whole. But to crown it all, we must go on to the generation of things divine, the fairest and most heavenly spectacle God has vouchsafed to the eye of man.
And: believe me, no man will ever behold that spectacle without the studies we have described, and so be able to boast that he has won it by an easy route. Moreover, in all our sessions for study
we are to relate the single fact to its species; there are questions to be asked and erroneous theses to be refuted. We may truly say that this is ever the prime test, and the best a man can
have; as for tests that profess to be such but are not, there is no labor so fruitlessly thrown away as that spent on them. We must also grasp the accuracy of the periodic times and the precision
with which they complete the various celestial motions, and this is where a believer in our doctrine that soul is both older and more divine than body will appreciate the beauty and justice of
the saying that ' all things are full of gods ' and that we have never been left unheeded by the forgetfulness or carelessness of the higher powers. There is one observation to be made about all
such matters. If a man grasps the several questions aright, the benefit accruing to him who thus learns his lesson in the proper way is great indeed; if he cannot, 'twill ever be the better
course to call on God. Now the proper way is this--so much explanation is unavoidable. To the man who pursues his studies in the proper way, all geometric constructions, all systems of numbers,
all duly constituted melodic progressions, the single ordered scheme of all celestial revolutions, should disclose themselves, and disclose themselves they will, if, as I say, a man pursues his
studies aright with his mind's eye fixed on their single end. As such a man reflects, he will receive the revelation of a single bond of natural interconnection between all these problems. If
such matters are handled in any other spirit, a man, as I am saying, will need to invoke his luck. We may rest assured that without these qualifications the happy will not make their appearance
in any society; this is the method, this the pabulum, these the studies demanded; hard or easy, this is the road we must tread. (Epinomis, 989d-992a, Trans. A.E. Taylor, The Collected Dialogues
of Plato, Princeton University Press, Princeton, 1982:1530-31; emphases supplied)
... For these reasons and from such constituents, four in number, the body of the universe was brought into being, coming into concord by means of proportion, and from these it acquired Amity, so
that coming into unity with itself it became indissoluble by any other save him who bound it together. (Timaeus, 31b-32c, Plato's Cosmology: The Timaeus of Plato, Trans. Francis MacDonald
Cornford, Bobbs-Merrill, Indianapolis, 1975:44, emphases supplied) ^2
it is conceivable that the parameters and structure of Spira Solaris might provide not only the necessary element of "luck", but also some degree of understanding concerning the "binding." Certainly
there are enough parameters and associated concepts available (both past and present) as already touched upon in the last two sections. However, it would still be optimistic to expect that either the
application or the precise details would be immediately apparent. In fact--after the manner of Orpheus, Pythagoras and Plato--it might well be that certain related matters were indeed: "promulgated
mystically and symbolically (by the first); by the second, enigmatically and through images; and scientifically by the third." Or, as Thomas Taylor put it: "conformably to the custom of the most
ancient philosophers, (information) was delivered synoptically, and in such a way as to be inaccessible to the vulgar."
For example, although not especially "mystical" or "symbolic," consider the following "poetic" dissemination in medieval scholar Nicole Oresme's reference to Aristotle and "the three-fold number"
according to Ovid:^3
Said Aristotle, prince of philosophers and never-failing friend of truth:
All things are three; The three-fold number is present in all things whatsoever...
Nor did we ourselves discover this number, but rather natures teaches it to us.
Here, historical preconceptions notwithstanding, it can undoubtedly be suggested that of all numbers the Golden Ratio is uniquely qualified to receive such an appellation, though it is not the only
related choice in this regard. Equally applicable might be the reciprocal of the underlying constant of Spira Solaris, i.e., Phi ^-2 = 0.381966011, which as will be seen in later sections, may be
understood to represent many things, including the "fifth element"(Aether); "Venus philosophical" to some alchemists; in the same alchemical understanding Sir Isaac Newton's aptly named
"Quintessence" and also a key parameter associated with phyllotaxis to return us to Ovid and the link with Nature. Then again, there is the more precise wording in the Chaldean Oracles, where Ovid's
"All Things are Three" is expanded to include intellection ( "for the Mind of the Father said, that all things can be cut into three, Governing all things by mind" ). This occurs in a larger passage
that is also readily understandable in the present context with its thinly disguised references to the Golden Ratio and not least of all, "Fountain of Fountains, and of all Fountains, The Matrix
containing all things":^ ^4
The Monad is enlarged, which generates Two.
For the Dyad sits by him, and glitters with Intellectual Sections.
And to govern all things, and to order all things not ordered.
For in the whole World shineth the Triad, over which the Monad Rules.
This Order is the beginning of all Section.
for the Mind of the Father said, that all things can be cut into three,
Governing all things by mind.
The Center from which all (lines) which way soever are equal.
for the paternal Mind sowed Symbols through the World.
Fountain of Fountains, and of all Fountains.
The Matrix containing all things . . .
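By way of a purely arithmetical gloss on the values invoked above (my own note, for orientation): Phi satisfies Phi^2 = Phi + 1, from which 1/Phi = Phi - 1 = 0.6180339887949 and 1/Phi^2 = 2 - Phi = 0.381966011, the reciprocal constant cited a few paragraphs earlier. More generally, each power of Phi is the sum of the two powers immediately below it (Phi^3 = Phi^2 + Phi, and so on), which is what allows Monad, Dyad and Triad to be read against a single generating constant.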
The content of the above passage from the Chaldean Oracles may surprise some readers, but nevertheless the historical side of the matter is not that difficult, although it is clearly out of kilter.
The Fibonacci series (and thereafter the Golden Ratio) have long been associated with natural growth from Fibonacci onwards for the moderns, through Kepler, and later via the efforts of a veritable
host of investigators, as R.C. Archibald's lengthy Bibliography ^5 in Jay Hambidge's Dynamic Symmetry (1920:146-156)^ 6 clearly attests. Though "to err is human," noticeably absent from the latter
list are the contributions of Samuel Coleman (Nature's Harmonic Unity, 1911)^7 and those of Louis Agassiz^8 (Essay on Classification, 1857)--but more on these omissions later. In passing, though, it
is relevant to point out that the claim that the Fibonacci series was only discovered in the early part of the second millennium is surely invalid--a doubly ignorant assertion (in Thomas Taylor's
understanding of the term) that in any case was largely demolished by D'arcy Wentworth Thompson years ago as follows:^ 9
The Greeks were familiar with the series 2, 3 : 5, 7 : 12, 17, etc.; which converges to √2 as the other (i.e., the Fibonacci series) does to the Golden Mean; and so closely related are the two series, that it seems impossible that the Greeks could have known the one and remained ignorant of the other. (Sir D'Arcy Wentworth Thompson, On Growth and Form, Dover, New York 1992:923;
unabridged reprint of the 1942 edition)
The latter also pointed out, however, that:^10
We must not suppose the Fibonacci numbers to have any exclusive relation to the Golden Mean; for arithmetic teaches us that, beginning with any two numbers whatsoever, we are led by successive
summations toward one out of innumerable series of numbers whose ratios one to another converge to the Golden Mean. (Sir D'Arcy Wentworth Thompson, On Growth and Form, Dover, New York 1992:933;
unabridged reprint of the 1942 edition)
This is true enough, but it also adds weight to his previous observation.
For example, consider the following Pythagorean reference (or mnemonic device, if one prefers) concerning the number 36 as explained by W. Wyn Westcott:^11
Plutarch, "De Iside et Osiride," calls the Tetractys the power of the number 36, and on this was the greatest oath of the Pythagoreans sworn: and it was denominated the World, in consequence of
its being composed of the first four even and the first four odd numbers; for 1 and 3 and 5 and 7 are 16; add 2 and 4 and 6 and 8, and obtain 36. (W.Wyn Westcott, Numbers: their Occult Power and
Mystic Virtues, Sun Publishing Santa Fe, 1983:114).
Mere numerology? Elementary mathematics? Perhaps both, but also perhaps neither and an expansion in dimensional thinking that may or may not have historical precedents. Either way, by adding the
first four even and first four odd numbers to obtain 36 the stated intermediate values are also readily aligned for further use. Here I will leave it to the reader to observe that so aligned the
numbers added vertically result in the Fibonacci and the Lucas Series with Phi also the limiting ratio in the other three columns. All of which reinforces Sir D'Arcy Wentworth Thompson's point about
the many routes and detours that lead to the Golden Ratio while also rendering historical claims concerning Fibonacci's pre-eminent "discovery" of the series even less tenable than they already are.
And here, one might note, we have not even factored in the main issue under consideration, which is the numerous pointers, guides and indicators leading to the ever-present Golden Ratio in Nature.
A.2. THE THREE-FOLD NUMBER AND THE "IDEAL" DIVERGENCE ANGLE
Whereas the constant Phi^2 provided the fundamental basis for the exponential planetary framework, its importance with respect to natural growth and phyllotaxis has long been known, especially with
respect to the phi-based "divergence angle", along with its possible relationship to "the domain of pure physics" (Cook, 1914:414).^12 A more up-to-date summary and description of the latter aspect
was in fact recently provided by mathematician Ian Stewart in 1995 as follows:^13
The most dramatic insight yet comes from some very recent work of the French mathematical physicists Stephane Douady and Yves Couder. They devised a theory of the dynamics of plant growth and
used computer models and laboratory experiments to show that it accounts for the Fibonacci pattern.
The basic idea is an old one. If you look at the tip of the shoot of a growing plant, you can detect the bits and pieces from which all the main features of the plant--leaves, petals, sepals, florets, or whatever--develop. At the center of the tip is a circular region of tissue with no special features, called the apex. Around the apex, one by one, tiny lumps form, called primordia. Each primordium migrates away from the apex--or, more accurately, the apex grows away from the lump--and eventually the lump develops into a leaf, petal, or the like. Moreover, the general
arrangement of those features is laid down right at the start, as the primordia form. So basically all you have to do is explain why you see spiral shapes and Fibonacci numbers in the primordia.
The first step is to realize that the spirals most apparent to the eye are not fundamental. The most important spiral is formed by considering the primordia in their order of appearance.
Primordia that appear earlier migrate farther, so you can deduce the order of appearance from the distance away from the apex. What you find is that successive primordia are spaced rather
sparsely along a tightly wound spiral, called the generative spiral. The human eye picks out the Fibonacci spirals because they are formed from primordia that appear near each other in space; but
it is the sequence in time that really matters.
The essential quantitative feature is the angle between successive primordia. Imagine drawing lines from the centers of successive primordia to the center of the apex and measuring the angle
between them. Successive angles are pretty much equal; their common value is called the divergence angle. In other words, the primordia are equally spaced--in an angular sense--along the generative
spiral. Moreover, the divergence angle is usually very close to 137.5°, a fact first emphasized in 1837 by the crystallographer Auguste Bravais and his brother Louis. To see why that number is
significant, take two consecutive numbers in the Fibonacci series: for example, 34 and 55. Now form the corresponding fraction 34/55 and multiply by 360°, to get 222.5°. Since this is more than
180°, we should measure it in the opposite direction round the circle--or, equivalently, subtract it from 360°. The result is 137.5°, the value observed by the Bravais brothers.
The ratio of consecutive Fibonacci numbers gets closer and closer to the number 0.618034. For instance, 34/55 ≈ 0.6182, which is already quite close. The limiting value is exactly (√5 − 1)/2, the so-called golden number, often denoted by the Greek letter phi (φ). Nature has left a clue for mathematical detectives: the angle between successive primordia is the "golden angle" of 360(1 − φ)°
= 137.5°. In 1907, G. Van Iterson followed up this clue and worked out what happens when you plot successive points on a tightly wound spiral separated by angles of 137.5°. Because of the way
neighboring points align, the human eye picks out two families of interpenetrating spirals--one winding clockwise and the other counterclockwise. And because of the relation between Fibonacci
numbers and the golden number, the numbers of spirals in the two families are consecutive Fibonacci numbers. Which Fibonacci numbers depends on the tightness of the spiral. How does that explain
the numbers of petals? Basically, you get one petal at the outer edge of each spiral in just one of the families.
At any rate, it all boils down to explaining why successive primordia are separated by the golden angle: then everything else follows.
Douady and Couder found a dynamic explanation for the golden angle. They built their ideas upon an important insight of H. Vogel, dating from 1979. His theory is again a descriptive one--it
concentrates on the geometry of the arrangement rather than on the dynamics that caused it. He performed numerical experiments which strongly suggested that if successive primordia are placed
along the generative spiral using the golden angle, they will pack together most efficiently. For instance, suppose that, instead of the golden angle, you try a divergence angle of 90°, which
divides 360° exactly. [Figures omitted.]
Then successive primordia are arranged along four radial lines forming a cross. In fact, if you use a divergence angle that is a rational multiple of 360°, you always get a system of radial
lines. So there are gaps between the lines and the primordia don't pack efficiently. Conclusion: to fill the space efficiently, you need a divergence angle that is an irrational multiple of 360°--multiply 360° by a number that is not an exact fraction. But which irrational number? Numbers are either irrational or not, but, like equality in George Orwell's Animal Farm, some are more irrational
than others. Number theorists have long known that the most irrational number is the golden number. It is "badly approximable" by rational numbers, and if you quantify how badly, it's the worst
of them all. Which, turning the argument on its head, means that a golden divergence angle should pack the primordia most closely. Vogel's computer experiments confirm this expectation but do not
prove it with full logical rigor.
The most remarkable thing Douady and Couder did was to obtain the golden angle as a consequence of simple dynamics rather than to postulate it directly on grounds of efficient packing. They
assumed that successive elements of some kind representing primordia form at equally spaced intervals of time somewhere on the rim of a small circle, representing the apex; and that these
elements then migrate radially at some specified initial velocity. In addition, they assume that the elements repel each other like equal electric charges, or magnets with the same polarity. This
ensures that the radial motion keeps going and that each new element appears as far as possible from its immediate predecessors. It's a good bet that such a system will satisfy Vogel's criterion
of efficient packing, so you would expect the golden angle to show up of its own accord. And it does.
Douady and Couder performed an experiment not with plants, but using a circular dish full of silicone oil placed in a vertical magnetic field. They let tiny drops of magnetic fluid fall at
regular intervals of time into the center of the dish. The drops were polarized by the magnetic field and repelled each other. They were given a boost in the radial direction by making the
magnetic field stronger at the edge of the dish than it was in the middle. The patterns that appeared depended on how big the intervals between drops were; but a very prevalent pattern was one in
which successive drops lay on a spiral with divergence angle very close to the golden angle, giving a sunflower-seed pattern of interlaced spirals, Douady and Couder also carried out computer
calculations, with similar results. By both methods, they found that the divergence angle depends on the interval between drops according to a complicated branching pattern of wiggly curves. Each
section of a curve between successive wiggles corresponds to a particular pair of numbers of spirals. The main branch is very close to a divergence angle of 137.5°, and along it you find all
possible pairs of consecutive Fibonacci numbers, one after the other in numerical sequence. The gaps between branches represent "bifurcations," where the dynamics undergoes significant changes.
Of course, nobody is suggesting that botany is quite as perfectly mathematical as this model. In particular, in many plants the rate of appearance of primordia can speed up or slow down. In fact,
changes in morphology, whether a given primordium becomes a leaf or a petal, say, often accompany such variations. So maybe what the genes do is affect the timing of the appearance of the
primordia. But plants don't need their genes to tell them how to space their primordia: that's done by the dynamics. It's a partnership of physics and genetics, and you need both to understand
what's happening.
Three examples, from very different parts of science. Each, in its own way, an eye-opener. Each a case study in the origins of nature's numbers--the deep mathematical regularities that can be
detected in natural forms. And there is a common thread, an even deeper message, buried within them. Not that nature is complicated. No, nature is, in its own subtle way, simple. However, those
simplicities do not present themselves to us directly. Instead, nature leaves clues for the mathematical detectives to puzzle over. It's a fascinating game, even to a spectator. And it's an
absolutely irresistible one if you are a mathematical Sherlock Holmes. (Nature's Numbers: The Unreality of Mathematical Imagination, Ian Stewart, Basic Books, New York 1995:135-143; emphases
supplied. For further information see also The Fibonacci Numbers and Golden section in Nature - 1 and II )
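A quick numerical check of the angle that keeps recurring in this material (the arithmetic is mine, added for verification): with Phi = 1.6180339887949, the golden angle is 360°(1 − 1/Phi) = 360°/Phi^2 = 360° × 0.3819660113 = 137.5077640°, which is 137° 30' 27.95" in degrees, minutes and seconds--precisely the "Fibonacci or ideal angle" of A.H. Church quoted below, and the 2Pi/Phi^2 of Cook's Appendix.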
Here once again we encounter the intermediate Fibonacci pairings 34 and 55 discussed and applied in Part III with respect to planetary resonances, but underlying all of this is an undoubtedly complex
relationship with natural growth, which returns us to Archytas with perhaps a slightly improved appreciation of the essence of the matter, specifically the imparting of an "organic motion to a
geometric figure", i.e., as stated in the previous section, it was the latter who was:^14
"The first who methodically applied the principles of mathematics to mechanics: who imparted an organic motion to a geometric figure, by the section of the semi-cylinder seeking two means that
would be proportional, in order to double the cube."
But could this kind of understanding have existed that far back in time? Perhaps not, but then again, Ian Stewart's last observation--that "nature leaves clues for the mathematical detectives to
puzzle over"--is itself a most ancient one, as we already know from the two-part quotation from Ovid: "The three-fold number is present in all things whatsoever; nor did we ourselves discover this
number, but rather nature teaches it to us" and the Chaldean Oracles ("for the paternal Mind sowed Symbols through the World").
A.3. THE SPIRAL OF PHEIDIAS
In these modern times it seems that we take Phi and the Phi-Series largely for granted. And we also perhaps fail to fully appreciate just how much progress was made many decades (if not centuries)
earlier as illustrated and recorded in works such as: On The Relation Of Phyllotaxis To Mechanical Law, by Arthur Harry Church (1904),^15 by Samuel Coleman and Arthur C. Coan (Nature's Harmonic
Unity, 1911 and Proportional Form, 1920),^16 Sir Theodore Andrea Cook (The Curves of Life, 1914),^17 Sir D'Arcy Wentworth Thompson (On Growth and Form, 1917, 1942),^18 and those listed in R.C.
Archibald's Bibliography cited above. Interestingly enough, Cook also includes additional details concerning the name Phi and the Phi-Series by a Mr. William Schooling in the Introduction and
Appendices to The Curves of Life (1914). Here the dialogue includes "Phi" itself, Cook's treatment of "Man (as) the Measure of All Things", Natural Growth, Ideal Angles once again, and finally Mr.
Mark Barr and William Schooling's Spiral of Pheidias:^19
Mr. Mark Barr suggested to Mr. Schooling that his ratio should be called the Phi proportion for reasons given below.
The symbol Phi ( F )given to this proportion was chosen partly because it has a familiar sound to those who wrestle constantly with pi (the ratio of the circumference of a circle to its
diameter), and partly because it is the first letter of the name of Pheidias, in whose sculpture this proportion is seen to prevail when the distances between salient points are measured. So much
is this the case that the Phi proportion may be fitly called the "Ratio of Pheidias." Take a well-proportioned man 68 inches in height, or Phi^4 if we take ten inches as our unit of measurement. From the ground to his navel is 42 inches, or Phi^3; from his navel to the crown of his head is 26 inches, or Phi^2; from the crown of his head to the line of his breasts is 16 inches, or Phi; and from his breasts to his navel is 10 inches, or the unit of measurement, or 1, which is Phi^0.
There are many valuable properties of Phi. Mr. Church, for instance, in pointing out the relation of spirally-constructed systems of plant growth to the Fibonacci ratio, speaks of the "Fibonacci
or ideal angle" of 137° 30' 27.95". From what has been said above about the Phi proportion, it may be seen that this ideal angle can be prettily and neatly expressed in circular measurement as
2Pi/Phi^2 (or twice Pi divided by the square of Phi).
I shall leave Mr. Schooling himself to explain many other most interesting facts concerning Phi in the Appendix. For the present it will be enough to say that it appears likely to give more
accurate results for other forms of natural growth than the Fibonacci series so admirably used for botany by Mr. A.H. Church, and closer calculations in matters of art than the theory published
in 1876 by Theodore Fechner in his "Vorschule der Aesthetik," I would further suggest that research into its uses in both directions would probably be well repaid; for in Mathematics it can be
expressed with binomial coefficients, and it can also be used as a base which greatly facilitates the computation of logarithms. In Geometry and Trigonometry its properties are further explained
in Appendix II (pp.441-447)
Fig. 2 The Pheidias Spiral
One more thing should be added here. If the radii vectores of a logarithmic spiral increase in Phi proportion, the result is not only a spiral of singularly pleasing character, but there is the further feature that on any radius the sum of the distances between two successive curves of the spiral equals the distance along the same radius to the succeeding curve (see Fig. 389). Such a Phi spiral bears a close resemblance to the one shown in my second chapter, produced by unwinding a tape from a shell (see Fig. 44, p. 32).
In art, on the other hand, it ought to prove as useful for proportional areas as for simple linear measurements; for since, as has been shown, the series is essentially spiral in character, and since it gives the Phi proportion along any radius, it should also provide a formula for the proportions of successive areas or spaces between radii. I suggest that such increases of space as are observable in the various "compartments" of the shell shown in section in Fig. 390 will be in Phi proportion, and bear a direct relation to the external spiral..... Mr. Schooling
suspects (he does not claim yet to have proved) that the Phi proportion mentioned in the last chapter is an expression of economy of form, manifested in the packing of the human foetus, in the
shape of shells, and in other ways. That such an economy of form should result in beauty is analogous with the fact that gracefulness is the result of ease or economy of force or effort....
Mr. Schooling also tells me that he is constructing an instrument for drawing logarithmic spirals automatically, using the Phi spiral as a standard, and stating the conditions of any other
logarithmic spiral in terms of deviation from this standard. Incidentally this instrument will show that it is possible to proceed from a straight line to a circle through an infinite number of
logarithmic spirals. T.A.C 1914 .
(Appendix I. Nature and Mathematics, Sir Theodore Andrea Cook, The Curves of Life, 1914:440; for clarity the word "Phi" has been substituted for the corresponding symbol in Cook's exposition of
"Man as the Measure of all Things").
The "historical" establishment of the name "Phi" is of interest here, although it is not necessarily exclusive, and there are also a number of other interesting points raised.
Nevertheless--Anaxagoras notwithstanding--it seems fair to suggest that not all would necessarily agree with Sir Theodore Andrea Cook's application of Phi to the human form, especially since his
example of a "well-proportioned man 68 inches in height" makes little allowance for the considerable variations in girth and height that routinely occur among members of the human race. Putting it
another way, it is not at all certain--Ovid's "Natural" Three-fold number notwithstanding--that the constant Phi would be immediately apparent if the matter was actually put to the test among the
general population. Far better suited to this purpose would be the many spiral forms in nature already addressed by Cook himself, especially, it would seem, those prominently exhibited among certain
shells--"Golden"-- or otherwise. Thus Cook is on firmer ground with his Phi-related assignments regarding the latter and also the other aspects of his extensive analyses. As is Samuel Coleman in
Nature's Harmonic Unity, a work in progress unknown to Cook (and vice-versa) that was published three years earlier than Cook's Curves of Life (1914). By way of an introduction to both, the part of
the Appendix to Theodore Andreas Cook's The Curves of Life published in 1914 that records his surprise, reception and criticism of Samuel Coleman's Nature's Harmonic Unity is given below:^19
JUST as I was reading the proofs of "The Curves of Life," a book by Samuel Coleman, M.A., edited by C. Arthur Coan, LL.B., and published by Messrs. G. P. Putnam's Sons, was sent to me from New
York, entitled "Nature's Harmonic Unity: A Treatise on its Relation to Proportional Form." Its preface is dated December 1st, 1911, and it therefore provides a very interesting example of the way
in which two minds may be attracted by kindred subjects at the same time, without any knowledge of each other's studies, and arrive at different conclusions from a similar set of data. Readers of
the Field in 1912 will remember the chapters in that paper in which I tried to expound certain principles which are discoverable in natural growth, and applied them to such artistic creations as
the Parthenon and the Open Staircase of Blois, illustrating my theory from a large number of examples chosen from botany, anatomy, conchology, and other branches of natural science. A great many
such" illustrations" of his thesis are provided by Mr. Coleman in the volume before us, from the nautilus or the sunflower, to the Parthenon or the facade of Rheims Cathedral. Yet the treatment
and main result are different. Although Mr. Coleman's pages and Mr. Coan's mathematics are of absorbing interest, I venture to uphold the theory set forth in 1912 in the Field and developed in
"The Curves of Life" as a better working hypothesis, a better "explanation" of the phenomena.
The title of Mr. Coleman's book suggests and epitomises the contrast between his attitude and mine. I could almost call my book Nature's Geometrical Diversity in contrast with Mr. Coleman's
Nature's Harmonic Unity. There is some confusion also in his use of the word "unity." At one time he seems to suggest that the phenomena of Nature and art exhibit the common characteristic--the
one feature--of obedience to the laws of Nature, which is true. Elsewhere, and in the main, he insists that these varied phenomena exhibit unity by following one--and only one law--so far as
proportion of form is concerned. The expression of that law he finds mainly in "extreme and mean proportion." He regards deviations from this law as negligible, and it is even suggested that,
since the measurements of the Parthenon do not conform precisely with this law, the measurements are wrong! I hold, on the contrary, that the deviations from law are of more moment and of greater
interest; that they are better calculated to extend our knowledge than the detection of rigid conformity with the law.
Again, his book is concerned with proportional form, while I think that far greater advantages attach to considering form in connection with growth. He may be said to be dealing with morphology
apart from physiology, with form separate from function, whereas, in my judgment, considerations of function and growth are essential to the right understanding of form and its proportion. He
proposes to explain the complicated phenomena of forms in life and of beauty in art by saying that they all agree with one very simple mathematical expression. My position, on the contrary, is
that the phenomena of life and beauty are always accompanied by deviations from any simple mathematical expression we can at present formulate.
Mathematics, to my mind, are of the highest value as an instrument; but, as we have seen in previous pages, it is of the essence of a living thing, as of a beautiful work of art, that it cannot
be exactly defined by any simple mathematical formula like that chosen by Mr. Coleman. I have pointed out in the last chapter that the agreement of a number of phenomena with a given formula is
not an important factor in knowledge; it merely sums up a certain sphere of investigation in a convenient way. The really important thing is the exception. But I must not be taken as expressing
disdain for laws in general or for Mr. Coleman's mathematics in particular. Indeed, without the mathematical expression as a guide we should be unable to take right note of the aberration, and to
this extent Mr. Coleman and Mr. Coan have done very valuable work; but Mr. Coleman makes, as I think, the fundamental error (which runs through his whole argument) of predicating certain
mathematical forms in his own mind, and then saying that they exist in the natural object (still more in the artificial or architectural object) which he is examining. Judging it on these lines,
the most valuable part of Mr. Coleman's book is to be found in Mr. Coan's appendix, which faithfully and accurately sets out the actual differences between living organisms (or architectural
creations), and strictly mathematical results. It is these differences which predicate life in the one case and beauty in the other. . . .
. . . .Mr. Coleman shows ordered proportions which may be attributed to certain laws. But he sees too much in laws when he means by this word the narrowest sort of geometrical relations. The law
of extreme and mean proportion stands out as a proved principle, and it does govern a pleasant relation for the sweep of the eye across some architectural and natural spacings. It is the old and
well-known "Golden Section." It is also Euclid, Book VI., prop. 30. But this principle is not more than a letter in the alphabet of architecture, to say nothing of the other arts. Mr. Coleman
(guided, I think, too little by his editor) believes not only in a very wide application of the Golden Section, but he wishes to show that most of art is governed either by this or by other laws
which are as easily formulated. In order to prove his contention, he draws a maze of lines over his architecture and his natural objects (as may be seen from those here reproduced), but the lines
result merely in presenting one or another aspect of the extreme and mean proportion. And while no one can deny that this relation is important, the author tries to show too much more than
Zeising and Fechner showed. Still, it would be unfair to overlook the author's extension of the older observations, and one is surprised to see the increased number of agreements with ordered
geometry. But when we analyse the geometry we find that the author's demonstrations could be expressed by a very simple formulation of the wider meanings of the extreme and mean proportion. Such
a simplified formulation at once exposes the improbability of a royal road to the arts, though it widens the significance of the Golden Section.
(Cook, 1914: 431-441; the cited examples of Samuel Coleman's graphical treatments are omitted here for brevity)
It could be said that in places Cook is perhaps somewhat harsh in his criticism, yet in presenting his own viewpoint he nevertheless also provides publicity and permanent linkage to Coleman's work,
and moreover, he also seems to have taken pains to include the more easily absorbed diagrams produced by the latter. These two works with their obvious similarities and differences were soon after
followed by Sir D'Arcy Wentworth Thompson's massive On Growth and Form in 1917--a treatise that included an extended technical analysis of spiral configurations in general and in particular as
applied to shells. Here again, if there was ever a subject where Ovid's "Three-Fold Number" was evident with respect to structure and form, this must surely have been it, as a number of investigators
appear to have realized well before Cook and Coleman, who nevertheless noted in Chapter IX ("On Conchology") in Nature's Harmonic Unity (1911):^20
Mr. T. A. Cook, in his excellent book entitled Spirals in Nature and Art, declares that: 'If any particular class of objects should be chosen by a student for purposes of study in relation to so
mathematical and creative an art as architecture, the class of shells would be most suitable inasmuch as they suggest with particular emphasis those structural and mathematical problems the
builder has to face.'
It is not the intention here to indulge in comparative analyses per se, or comment extensively on either work, but rather emphasize once again that the present treatment approaches such topics from
a somewhat different direction. So much so, if fact, that it is possible to approach the subject of spiral formations pre-equipped with an array of accurate, pre-determined spirals. This is in marked
contrast (apart from William Schooling's mechanical device) to most earlier methods employed to classify spirals that occur in nature--all of which necessarily required some form of detailed
measurement and subsequent analysis. The difficulty with the latter is obvious enough, for natural spirals not only exhibit growth, but also along the way incorporate initial, intermediate and final
stages in their formation. And while such spirals may well have distinctly characteristic spirals and therefore "characteristic numbers" (as applied to shells by Canon Mosely in 1838), there still
remain deviations from both the base spiral and the theoretically perfect. Where then, does one draw the line and reject variations in measured data? And how does one handle the tighter spirals with
their minimal separations? Then again, what if observed variations are of significance in their own right in certain situations--as indeed they might well be?
This last question was in fact taken up by Sir Theodore Andrea Cook, who also wrote in the Appendix to The Curves of Life:^ 21
If there be, as I think there is, a tendency for a nautilus to acquire the form of a logarithmic spiral, just as there is a tendency for a book to fall under the action of gravity, and yet there
is no known example of a nautilus shell being an exact logarithmic spiral, it is reasonable to assume that there are other forces at work, akin to friction or muscular action, which cause this
deviation. The deviation stimulates us to further investigation, and to the probable or possible discovery of some other law of nature, from which in turn deviations will be discovered, leading
to yet further extension of knowledge.
If "exact" is taken to be synonymous with perfect, then Cook is undoubtedly correct, even with respect to the nautilus, whose spiral can nevertheless be closely approximated by an equiangular spiral
with a growth factor of approximately 3 (see the next section for a more accurate value), but with due caution nevertheless since the fitting of two-dimensional spirals to natural objects is a
complex matter requiring precise definition.
Thus to this end, consider "The Pheidias Spiral" reproduced above in Figure 2--an accurate, generated reproduction of William Schooling's original Figure 389 as published in The Curves of Life by Sir
Theodore Andrea Cook.^21 It is generated here in the sense that it is not a physical copy, but a mathematical rendering of an equiangular spiral configured to exactly match Schooling's version. Here
one might observe, in keeping with the latter's approach and terminology, that all equiangular spirals based on the constant Phi raised to any power, whether integer, fractional part, or indeed any
kind of number whatsoever may most reasonably be termed "Pheidian". Thus the Pheidias Spiral, though still fundamentally exponential, is simply a special case, i.e., Phi raised to the first power (Phi ^1), which is, of course, still the "Golden Ratio" and also relation 5a below--the latter being the result of the original investigation to determine the initial constant of linearity for
the Solar System, and later, as it so happened, also the "length" in the Rectangle/Area problem described in earlier sections. Relation 5b--still Pheidian in the above sense--being in turn the
fundamental period constant for Spira Solaris itself:
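Relation 5a: k = Phi ^1 = 1.61803398874...
Relation 5b: k = Phi ^2 = 2.61803398874...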
As for the construction of Pheidian Spirals, for this pair at least we already know the basic parameters, especially Relation 5b with its constant growth factor of Phi ^2 = 2.61803398874 per revolution and associated equiangle of 81;17,29,10 (81.2914357) degrees. In turn relation 5a has a corresponding growth factor of Phi itself with an equiangle of 85;37,13,31 (85.6204239) degrees.
In other words, with the base constant provided by Phi it is the exponent that provides the variation in growth factor and the corresponding equiangle. But before continuing further it may prove
useful to simplify and standardize William Schooling's Pheidias Spiral by first removing the diagonal reference lines and secondly terminating the vertical and horizontal lines at the outermost
90-degree intersection points. Either way Schooling's presentation of the Pheidias spiral provides a number of advantages in addition to precise centering and alignment, i.e., the cross lines also
serve to emphasize the amount of constant growth and further illustrate the variation in form between different spirals. For example, retaining Schooling's basic orientation the simplified Pheidias
Spiral ( Phi ^1 with interior segments added) plus Spira Solaris (Phi ^2) in the same configuration are as shown below:
Fig. 3 The Pheidias Spiral and Spira Solaris
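The growth-factor/equiangle pairs just quoted are easy to verify; the minimal sketch below (an illustration added here, not part of the source) derives the equiangle from the growth factor k per revolution via cot(alpha) = ln(k)/(2 pi):

import math

PHI = (1 + math.sqrt(5)) / 2  # 1.61803398874...

def equiangle_degrees(k):
    """Equiangle of the spiral r = k**(theta / (2*pi)).

    The radius grows by the factor k per revolution, and the curve
    crosses every radius at the constant angle alpha, where
    cot(alpha) = ln(k) / (2*pi).
    """
    return math.degrees(math.atan2(2 * math.pi, math.log(k)))

print(equiangle_degrees(PHI))     # ~85.6204 -- the Pheidias Spiral (Phi ^1)
print(equiangle_degrees(PHI**2))  # ~81.2914 -- Spira Solaris (Phi ^2)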
A study of the relative sizes and degrees of "opening" for a wide range of "Nautiloid Spirals" was undertaken by Sir D'Arcy Wentworth Thompson starting with a small growth factor of 1.1 : 1
(corresponding angle 89;08 degrees) and 21 increasing values, ending with a growth factor of 1,000,000,000 : 1 (corresponding angle less than 19 degrees). Commenting on these results Sir D'Arcy
Wentworth Thompson explained further:^22
We see that with smaller angles the apparent form of the spiral is greatly altered, and the very fact of its being a spiral soon ceases to be apparent (Figs. 379, 380). Suppose one whorl to be an
inch in breadth, then, if the angle of the spiral were 80°, the next whorl would (as we have just seen) be about three inches broad; if it were 70°, the next whorl would be nearly ten inches, and
if it were 60°, the next whorl would be nearly four feet broad. If the angle were 28°, the next whorl would be a mile and a half in breadth; and if it were 17°, the next would be some 15,000
miles broad. (Sir D'Arcy Wentworth Thompson, On Growth and Form, New York 1992:792; unabridged reprint of the 1942 edition)
This is helpful and informative enough, but initially it may still be difficult to grasp the inter-relationship between the growth factor and the equiangular form of the associated spirals. The
reason for this is perhaps more subtle than one might think, at least initially. One might note, for example, that the innermost part of Schooling's original Pheidias Spiral does not commence at
zero, but starts some distance from it. In fact when generating equiangular spirals this consideration can pose technical problems, especially with spirals having relatively small exponents and
near-circular configurations (e.g., Phi ^1/6 with a growth factor of 1.083505882 : 1). Then again, if we are to take our cues from Nature we might also emulate the construction priorities of the
spider, starting instead from the outside and working inward, but more on this intriguing aspect in the next section.
Closely allied to the initialization problem, however, is the more general question of how in this context one actually measures exponential growth in the first place. The increase in growth is, of
course, a fixed ratio, and for the Pheidias spiral the ratio, as we already know, is 1.61803398874 : 1. But is it immediately apparent from Fig.2 that this is so for the Pheidias spiral
throughout--above, below, and in all directions whatsoever? Probably not. What is likely required is more definition, in fact the type of precision that Sir Theodore Andrea Cook had already provided
a little earlier in The Curves of Life. Why then present his material out of order here? Because it is not quite that simple, that's why. And, as the reader will soon find out, in gaining a better
understanding of the equiangular spiral, it also becomes possible to appreciate more fully precisely what it was that Sir Theodore Andrea Cook imparted along the way.
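One concrete handle on the growth-factor/equiangle relationship is to invert the formula used above: given the spiral angle alpha, the breadth ratio of successive whorls is k = exp(2 pi cot(alpha)). The short sketch below (illustrative only; Thompson's printed figures are rounded) reproduces the whorl-breadth estimates from his quotation:

import math

def growth_per_revolution(alpha_degrees):
    """Breadth ratio of successive whorls, k = exp(2*pi * cot(alpha))."""
    return math.exp(2 * math.pi / math.tan(math.radians(alpha_degrees)))

for alpha in (80, 70, 60, 28, 17):
    print(alpha, growth_per_revolution(alpha))
# 80 -> ~3.0   (a one-inch whorl is followed by one "about three inches" broad)
# 70 -> ~9.8   ("nearly ten inches")
# 60 -> ~37.6  (inches)
# 28 -> ~1.4e5 (inches, on the order of a couple of miles)
# 17 -> ~8e8   (inches, i.e. thousands of miles)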
A.4: THE EQUIANGULAR SPIRAL AND THE ELEMENTS
What follows next remains something of a puzzle at present. According to the source (Sir Theodore Andrea Cook) a certain degree of progress appears to have been made relating the equiangular spiral
to the chemical elements, though little awareness of this particular application apparently remains today, at least in the general literature. Nevertheless, under the title FINAL RESULTS in The
Curves of Life Cook recounts (in 1914) that:^23
In 1888 Dr. Johnstone Stoney submitted to the Royal Society a memoir on the "logarithmic law of atomic weights," which, however, was not published in full. Lord Rayleigh (Proceedings of the Royal Society, Series A, Vol. LXXXV., p. 471, 1911) consulted the original manuscript, and gives some extracts from and remarks upon it. After many fruitless efforts to extract information from the
curves obtained by plotting the atomic weights, it happily occurred to Dr. Stoney to employ the volumes proportioned to the atomic weights. When this was done the resulting figure (cf. Fig. 385)
at once suggested a well-known logarithmic spiral, and a close scrutiny justified this suspicion. In other words, the relations of all the known elements to each other could almost exactly be
expressed by the logarithmic spiral. If this held true of what was known already, it became apparent that it would also hold true of what was to be discovered later on; and that if new elements
were discovered after 1888, they would find their right places in the gaps indicated in Dr. Johnstone Stoney's spiral diagram. This remarkable process had already occurred in Mendeléef's
Periodic System since the year of its publication in 1869: and the fact that it has also occurred in the spiral system (which includes the Mendeléef System and gives it additional confirmation)
is one of the most convincing proofs that the spiral system is not merely a correct hypothesis, but a fundamental law. The total of the elements known in 1912 was about eighty-three. Six elements
were missing in 1888 in Dr. Stoney's diagram, between hydrogen and lithium; Sir William Ramsay discovered helium in 1895, which fills one of the gaps, though the position is not mathematically
exact. But on the sixteenth radius an even more remarkable corroboration was effected, in what had hitherto been a gap between the most intensely electro-negative elements (such as fluorine,
chlorine, bromine, and iodine) and the most electro-positive elements (such as lithium, sodium, potassium, etc.). This gap was filled with absolute appropriateness, by the series of inert gases:
argon, discovered by Lord Rayleigh and Sir William Ramsay in 1894, and helium, neon, krypton, and xenon, discovered by Sir William Ramsay between 1895 and 1898, five new elements which occupy places foretold to be necessary to the Mendeléef series as well.
(Theodore Andrea Cook, The Curves of Life, Dover, New York, 1978:413; republication of the London (1914) edition.)
Although the latter part is dated in any case, the puzzle concerns not only the fact that in spite of its potential importance Dr. Johnstone Stoney's paper was "not published in full," but also
Cook's resulting figure (Fig. 385), which, though undefined, is apparently "a well-known logarithmic spiral"--a conclusion reinforced, moreover, by "close scrutiny" that apparently "justified this
suspicion." Thus the impression gained here is that this was not simply the well-known logarithmic spiral under consideration at all, but something altogether more specific that for some reason
nevertheless remained undefined. On the following page, however, Cook returns to Dr. Johnstone Stoney's "resulting figure" (i.e., the above mentioned Fig. 385) to demonstrate the concept of growth
"along radii" as follows:^24
In mathematics we have the most supple and beautifully precise instrument by which the human mind can fulfil its need of cataloguing, labelling, defining the multifarious facts of life around us.
In this task the visible expression of various results or totals in the form of curves is an invaluable convention; and in the problems of growth or increase the logarithmic spiral occupies perhaps the most important position of all. For it can be used not merely in the sense of the curve of growth and energy, which swings from origin to outer space; it can define growth along its radii as well.
In Fig. 385, for instance, we have the definite curve which has grown from the centre we will call C to B, and ends (as far as I have drawn it) at A. But we have also the radii which I will call
CP, CL, CM, CN, each of which is cut at three points by the curve progressing from C to A, and you will notice that the three points of intersection on the line CP are differently situated (with
regard to C) from the points of intersection on CL, which differs again, in this respect, from CM, and CM differs from CN. Now, since the spiral curve CBA extends infinitely in each direction
from any point within it, which is easier to imagine at A than it is at C, and since there can be any number of radii, so this mathematical concept embodies the great truth of infinite
gradations, which is explained in the very beautiful and valuable theory of infinite series. The rhythmical beat of the spiral curve upon its radii is in direct relation to this theory, as has
been pointed out to me by Mr. Mark Burr. (Theodore Andrea Cook, The Curves of Life, Dover, New York, 1914:414)
The spiral in question is reproduced above as originally printed; once again the version given here is computer generated, not copied. As is the version below, which is an enlarged, augmented
reproduction of the self-same spiral such that Cook's four radii extend to meet the spiral as opposed to passing through it to the points M, L and N in the original.
Fig. 385B Extended
These changes arise from a need to clarify Cook's remarks concerning Fig. 385 and further requirements to qualify and quantify equiangular spirals in general. This said, one can understand (at one
level at least) why Cook's extended radii are not to the same scale as that required for the full equiangular expansion, i.e., they would have to be extended by a factor of slightly less than 3.7 : 1
in each of the given directions. In other words, as shown in the augmented version (Fig.385B), starting at the 90-degree point, the ratio between the distance M1_M2 and the distance M2_M3 is
approximately 3.7:1, and although progressively larger in scale, the same constant ratio also applies to the distances L1_L2 and L2_L3 at 180 degrees, the distances N1_N2 and N2_N3 at 270 degrees,
and the distances P1_P2 and P2_P3 at the 360-degree point. In fact the same equality holds for any angle with the same constant amount of growth always taking place for each successive revolution of
360 degrees. Here the reader may recall the role played by the equiangular rectangle in the construction of Spira Solaris with its associated growth factor of 2.61803398874 : 1 in Part III; and also
perhaps, a more ancient line already mentioned concerning "The Center from which all (lines) which way soever are equal"...
We now turn to the specific "well-known logarithmic spiral" that was confirmed by further inspection according to the text. Although it was stated in the last paragraph that the radii increase per
revolution by slightly less than 3.7 to 1, a more exact value would appear to be applicable, namely the fixed and precise ratio: 3.699025327... to 1. In other words, it seems that this spiral in
given association with the chemical elements is either based exactly on the constants Phi and e, or values that are quite close, for Fig.385 and Fig.385B in fact depict the equiangular spiral Phi ^ e
with the corresponding parameters:
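k = Phi ^ e = 3.699025327... per revolution, with an equiangle of approximately 78;14,24 (78.240) degrees (the equiangle computed from the stated growth factor as for the spirals above).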
Thus we appear to have a specific logarithmic spiral relating to the chemical elements ca 1888 as recorded in a 1914 publication. But how could something this fundamental have been ignored and/or
allowed to simply fade away? Were there problems with the analysis, perhaps? Was it subsequently proved erroneous? Or (although unlikely), was the spiral in question simply coincidental? Who knows,
and this is indeed a puzzle--one that increases yet again in light of the subsequent (and quite recent) application of the spiral form in the discovery of the molecule "buckminsterfullerene."
Moreover, Hugh Aldersey-Williams, in describing the investigations leading up to the latter discovery mentions the perhaps surprising (and perhaps not so surprising) role played by Sir D'Arcy
Wentworth Thompson along the way:^25
Thompson's claim to fame rests largely on one extended and luminous essay, On growth and form; it is this work that is cited by the Rice group. The contemporary palaeontologist and author Stephen
Jay Gould has called Thompson "perhaps the greatest polymath of our century" and his essay "one of the great lights of science and of English prose, the greatest work of prose in
twentieth-century science". Thompson's aim was to show that the shape of living things has a mathematical basis (and hence has no need for reliance on supernatural or teleological explanation).
His argument is completely general. It applies to plants and animals, to airborne, waterborne, and land creatures of all sizes. He notes that the Eiffel Tower and John Smeaton's design for the
Eddystone lighthouse both take the form of the trunk of an oak tree. It would be easy to conclude that nature inspired man to create these shapes. But it would be more perceptive to note, as
Thompson did, that both man and nature take the most economical course of action prescribed by physical laws.
Polymath though he was, D'Arcy Thompson had little to say about chemistry. Nevertheless, he was to serve Kroto and Smalley's purpose as rather more than just an erudite ambassador for Euler's
theorem. His genius can be seen as an inspiration behind a beautiful diagram in the paper on the reactivity of the fullerenes which shows a buckminsterfullerene molecule encased inside a larger
spheroidal carbon frame, which in turn is beginning to be enclosed by a third shell. The whole spiral scheme bears a remarkable resemblance to the spiral pattern of growth adopted by some plants
and animals that are illustrated in Thompson's book. (The resemblance is in fact a little misleading. As Thompson points out, nature favours the "equiangular or logarithmic" spiral in which a radius drawn from the centre of the spiral to its leading edge increases in geometric progression - that is, by a constant factor - as successive orbits are scribed out. This is the mathematical
relationship followed by the Nautilus sea-shell and many other gastropods. In the alternative, the "equable" or Archimedean spiral, this radius increases in roughly arithmetic progression - that
is, by a constant increment - generating a spiral like that of a Swiss roll or a coiled rope. It is this latter model that lies at the heart of the proposed mechanism for the growth of soot
particles, the spacing of successive layers being not the thickness of a piece of sponge cake or the diameter of a rope but the familiar van der Waals distance between layers of graphite.) (Hugh
Aldersey-Williams, THE MOST BEAUTIFUL MOLECULE: THE DISCOVERY OF THE BUCKYBALL, John Wiley & Sons, New York 1995:113-114)
Not the logarithmic spiral per se in this instance it would seem, but the Archimedean. Nevertheless, the transition between the two is readily accomplished utilising logarithmic data--in essence a "double-logarithmic" spiral; e.g., the inset in Part III's Fig.6c and also below for the spiral under discussion:
Fig. 3a The Equiangular Spiral Phi ^ e; Fig. 13 The "Archimedean" (Log) Spiral Phi ^ e
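The "double-logarithmic" transition can be sketched directly: taking the logarithm of the equiangular radius yields a radius that increases by the constant increment ln(k) per revolution, which is precisely the Archimedean ("equable") form. A minimal illustration (not from the original):

import math

PHI = (1 + math.sqrt(5)) / 2
k = PHI ** math.e  # growth factor per revolution, ~3.699025

thetas = [2 * math.pi * i / 360 for i in range(4 * 360)]
r_equiangular = [k ** (t / (2 * math.pi)) for t in thetas]  # geometric growth
r_archimedean = [math.log(r) for r in r_equiangular]        # = t * ln(k) / (2*pi)
# The log radius increases by the constant ln(k) per revolution --
# an "equable" (Archimedean) spiral, as described in the quotation above.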
A slightly tighter Pheidian spiral is obtained utilising the exponent "8/3" (growth factor: 3.608281187 per revolution) although it diverges slightly from the original; but perhaps more to the point,
both spirals and indeed all those considered so far are based on Phi and either integer or fractional exponents. Now Phi and the exponent e enter the discussion--the latter an important constant that also just happens to be included by T.A. Cook in his ensuing discussion of infinite series following the introduction of the spiral in question.
A.5. THE FUNDAMENTAL PHEIDIAN CONSTANTS: 2.61803398874 AND INVERSE: 0.381966011
In Part III (The Exponential Order) the planetary period constant for the Phi-series exponential framework was determined to be Phi ^2, whereas the inverse (Phi ^-2 = 0.381966011) was seen above to be closely related to the "ideal" convergence angle. However, towards a fuller understanding it may be helpful to illustrate the relationship between Phi ^2 and the latter as follows:
Fig. 4a. Relations 14a-14e and the Ideal Growth Angle
Relation [14a] is discussed in the present context by Sir Theodore Andrea Cook (1914:440); relation [14b] retains the phi-based exponential form but rather than division per se, negative exponents
and multiplication are utilised. Relation [14c] retains the same configuration but now uses the exact number, which is, of course, less than unity (Phi ^-2 = 0.381966011).
The purpose behind this rather obvious treatment is to emphasize the latter value and relation [14c], although it could be said that the entire Phi-series planetary framework is essentially
"three-fold" since it is based on fractional exponents of Phi itself expressed in thirds. This, of course, refers to the theoretical planetary model and mean values. Nevertheless, with respect to the
Solar System itself, even though the elliptical orbits of Jupiter and Saturn produce regular variations in orbital velocity, the difference function between the two not only includes this precise
value, it also periodically sweeps across it. As for the mean phi-series planetary velocities on either side, i.e., those pertaining to Jupiter and Saturn -- Phi ^-5/3 = 0.448422366 and Phi ^-7/3 =
0.325358512 respectively -- the former is near the maximum while the latter is closer to the mean as shown below utilising real-time data from 1900 to 2000:
Fig. 4b. Varying Velocity: The Jupiter-Saturn Synodic Cycle SD1
The three associated periods (the sidereal period of Saturn, the Jupiter-Saturn Synodic cycle SD1, and the sidereal period of Jupiter) also, of course, provide the most obvious and best known Fibonacci resonances in the Solar System, i.e., the 2 : 3 : 5 60-year cycle (see Part III for details and other expansions).
Additionally, as treated further in the following section, angular momentum (L) may be obtained from the product of the planetary mass and the inverse orbital velocity. The latter, for the Phi-Series
Jupiter-Saturn Synodic cycle is once again the primary period constant, i.e., Phi ^2 = 2.61803398, albeit in a different application; see Part IVb2c: The Pheidian Planorbidae for the final details
and conclusions.
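As an aside (an illustration added here, not in the original), the link between Phi ^-2 and the "ideal" convergence angle mentioned at the start of this section is immediate: Phi ^-2 of a full circle is the familiar divergence angle of phyllotaxis:

import math

PHI = (1 + math.sqrt(5)) / 2
print(360 / PHI**2)      # 137.5077... degrees
print(360 * PHI**-2)     # the same: Phi^-2 = 0.381966... of a full circle
print(360 - 360 / PHI)   # and again, as the golden section of 360 degrees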
Lastly, it is invariably stated that one of the equiangular spiral's more unusual properties is that "the invert to an equiangular spiral is identical with the original curve," also described by
D'Arcy Wentworth Thompson (1942:767) as:
one of the most beautiful and most singular properties of the curve. It was this which led James Bernoulli, in imitation of Archimedes, to have the logarithmic spiral inscribed upon his tomb; and
on John Goodsir's grave near Edinburgh the same symbol is reinscribed.
Nevertheless, the two equiangular spirals under consideration--Spira Solaris ( Phi ^2 = 2.61803398 shown below in red and its inverse Phi ^-2 = 0.381966011 in blue)--would appear to possess a
difference in phase, i.e., identical plots incorporating 360 data points per revolution over six revolutions (2160 data points for k = Phi ^2 and k = Phi ^-2 respectively) actually produce the
following result:
Fig. 4c. The Equiangular Spirals k = Phi ^2, Phi ^-2 and Gyres
In other words, one or the other requires a rotation about the vertical axis of 180 degrees for the match to be completely identical. Whether one wishes to consider this the result of the applied methodology is something else altogether. Nor for that matter need the end-to-end configuration of the two spirals shown in the upper inset have specific historical ramifications, though the latter representation is undoubtedly well-known, wide-spread and also most ancient. To which may be added a further order of complexity in so much as the joined pair, when scaled to fit the central inset "Gyres" (source: William Butler Yeats and Cones by Sandra Schneiderman, http://www.sandraschneiderman.com/yeats/ ), remain centrally aligned in one plane while also meeting both the center and the edges in the other. For more on the historical complexities that attend this matter, see W. B. Yeats and "A Vision" by Neil Mann.
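The phase observation above is easy to reproduce numerically; in the sketch below (illustrative only), the k = Phi ^-2 spiral is the k = Phi ^2 spiral traced with the angle reversed, i.e., a mirror image, which is why one of the pair must be flipped through 180 degrees before the two coincide:

import math

PHI2 = ((1 + math.sqrt(5)) / 2) ** 2

def spiral(k, revolutions=6, points_per_rev=360):
    """Points of r = k**(theta/(2*pi)) over the given number of turns."""
    points = []
    for i in range(revolutions * points_per_rev):
        t = 2 * math.pi * i / points_per_rev
        r = k ** (t / (2 * math.pi))
        points.append((r * math.cos(t), r * math.sin(t)))
    return points

outward = spiral(PHI2)     # k = Phi ^2
inward = spiral(1 / PHI2)  # k = Phi ^-2: the radius at +theta equals the
                           # outward radius at -theta, i.e. a reflection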
As for the lower combination, that we have already seen in Part III (The Exponential Order, Figure 12: Double-formed* Spira Solaris and the plan-view of the Milky Way). Lastly, although it is getting a little ahead of things, the equiangular spiral k = Phi ^4 may be provisionally associated with Whirlpool Galaxy M51 in a similar manner, i.e.,
Fig.4d The Double-formed Spiral k = Phi ^4 and the Whirlpool Galaxy M51
Edge-on Whirlpool Galaxy M51 image by the Hubble Heritage Team (NASA/STScI/AURA), using data collected by Principal Astronomer N. Scoville (Caltech) and collaborators.
From Part III, simplistically the transition from two to three dimensions, i.e., the rotation of the two-dimensional equiangular spiral k = Phi ^4 through 180 degrees in both vertical and horizontal planes; the fourth dimension is TIME--"Eternal, Young and Old, and of a Spiral Form." For more on the spiral k = Phi ^4 see the next section (The Pheidian Planorbidae).
A.6. BENJAMIN PEIRCE, LOUIS AGASSIZ, FIBONACCI, AND THE SOLAR SYSTEM
It may well be that the extension of the "Three-fold Number" beyond terrestrial boundaries is simply the logical continuation of Ovid's initial observation that "The three-fold number is present in
all things whatsoever" --an observation that in view of the nature of spiral galaxies need not remain with planetary systems per se. Nevertheless, the linking of natural growth to the structure of
planetary systems was undoubtedly a bold and momentous step even though it also reflects the second part of the quotation from Ovid: "Nor did we ourselves discover this number, but rather nature
teaches it to us." At least this seems applicable in the case of Benjamin Peirce,^26 who integrated both to successfully apply the Fibonacci series to the structure of the Solar System. The latter's
work was originally published in the Proceedings of the AAAS in 1850 and given additional permanence with a further airing in Louis Agassiz's Essay on Classification in 1857^27. All to little or no
avail, it would seem, for in spite of the details and the implications the work still remains in relative obscurity to the present day. In some respects this may be understandable, though the subsequent lack of attention or acceptance can hardly be blamed on the quality of the work or the means of presentation. All too easily dismissed as "speculative biology" (Lurie 1962:128),^28 it would seem, it is likely that it was also one of the first victims of "Bode's Law," which first surfaced less than a decade later (1866-1871) despite its fatal mathematical flaws and ad hoc origins.
Indeed, if longevity and popularity alone provide the guidelines, then "Bode's Law" would win hands-down in any comparison between the two planetary frameworks. If, however, the standard by which
such matters are judged depends not on popularity or elementary mathematics, but on human progress and increased understanding, then one can only wonder what else might have been accomplished since
Agassiz's time and sadly lament the loss.
The complete description of Benjamin Peirce's application of the Fibonacci series to the structure of the Solar System as published by Louis Agassiz is provided below; perhaps significantly, the words "Fibonacci" and/or the "Golden Section" (and the like) are noticeably absent--such words perhaps already unacceptable to the powers that be and also a perceived threat to the status quo. Nevertheless, there can be no mistaking the sequence applied or the major premise, called here perhaps fittingly enough (for the moderns, at least) "the law of phyllotaxis". One may also note that Peirce had already considered the practical differences between his theoretical treatment and the Solar System itself and subsequently considered not only the position of Earth, but also discrepancies encountered for the positions of Mars, Uranus and Neptune. Initially Peirce also applied a double form of the Fibonacci series but subsequently reduced the set to arrive at a situation similar to that
involving the synodic difference cycle between adjacent planets.
ESSAY ON CLASSIFICATION
Louis Agassiz 1857
SECTION XXXI
It must occur to every reflecting mind, that the mutual relation and respective parallelism of so many structural, embryonic, geological, and geographical characteristics of the animal kingdom
are the most conclusive proof that they were ordained by a reflective mind, while they present at the same time the side of nature most accessible to our intelligence, when seeking to penetrate
the relations between finite beings and the cause of their existence.
The phenomena of the inorganic world are all simple, when compared to those of the organic world. There is not one of the great physical agents, electricity, magnetism, heat, light, or chemical
affinity, which exhibits in its sphere as complicated phenomena as the simplest organized beings; and we need not look for the highest among the latter to find them presenting the same physical
phenomena as are manifested in the material world, besides those which are exclusively peculiar to them. When then organized beings include everything the material world contains and a great deal
more that is peculiarly their own, how could they be produced by physical causes, and how can the physicists, acquainted with the laws of the material world and who acknowledge that these laws
must have been established at the beginning, overlook that à fortiori the more complicated laws which regulate the organic world, of the existence of which there is no trace for a long period
upon the surface of the earth, must have been established later and successively at the time of the creation of the successive types of animals and plants?
Thus far we have been considering chiefly the contrasts existing between the organic and inorganic worlds. At this stage of our investigation it may not be out of place to take a glance at some
of the coincidences which may be traced between them, especially as they afford direct evidence that the physical world has been ordained in conformity with laws which obtain also among living
beings, and disclose in both spheres equally plainly the workings of a reflective mind. It is well known that the arrangement of the leaves in plants^148 may be expressed by very simple series of
fractions, all of which are gradual approximations to, or the natural means between 1/2 and 1/3, which two fractions are themselves the maximum and the minimum divergence between two single
successive leaves. The normal series of fractions which expresses the various combinations most frequently observed among the leaves of plants is as follows: 1/2, 1/3, 2/5, 3/8, 5/13, 8/21, 13/
34, 21/55, etc. Now upon comparing this arrangement of the leaves in plants with the revolutions of the members of our solar system, Peirce has discovered the most perfect identity between the
fundamental laws which regulate both, as may be at once seen by the following diagram, in which the first column gives the names of the planets, the second column indicates the actual time of
revolution of the successive planets, expressed in days; the third column, the successive times of revolution of the planets, which are derived from the hypothesis that each time of revolution
should have a ratio to those upon each side of it, which shall be one of the ratios of the law of phyllotaxis; and the fourth column, finally, gives the normal series of fractions expressing the
law of the phyllotaxis.^149
Table I (Agassiz-Peirce 1857)
In this series the Earth forms a break; but this apparent irregularity admits of an easy explanation. The fractions: 1/2, 1/3, 2/5, 3/8, 5/13, 8/21, 13/34, etc., as expressing the position of
successive leaves upon an axis, by the short way of ascent along the spiral, are identical as far as their meaning is concerned with the fractions expressing these same positions by the long way,
namely, 1/2, 2/3, 3/5, 5/8, 8/13, 13/21, 21/34, etc.
Let us therefore repeat our diagram in another form, the third column giving the theoretical time of revolution.
Table II (Agassiz-Peirce 1857)
It appears from this table that two intervals usually elapse between two successive planets, so that the actual fractions follow the normal order, 1/2, 1/3, 2/5, 3/8, 5/13, etc., or the fractions by the short way in phyllotaxis, from which, however, the Earth is excluded, while it forms a member of the series by the long way. The explanation of this, suggested by Peirce, is that although the
tendency to set off a planet is not sufficient at the end of a single interval, it becomes so strong near the end of the second interval that the planet is found exterior to the limit of this
second interval. Thus, Uranus is rather too far from the Sun relatively to Neptune, Saturn relatively to Uranus, and Jupiter relatively to Saturn; and the planets thus formed engross too large a
proportionate share of material, and this is especially the case with Jupiter. Hence, when we come to the Asteroids, the disposition is so strong at the end of a single interval, that the outer
Asteroid is but just within this interval, and the whole material of the Asteroids is dispersed in separate masses over a wide space, instead of being concentrated into a single planet. A
consequence of this dispersion of the forming agents is that a small proportionate material is absorbed into the Asteroids. Hence, Mars is ready for formation so far exterior to its true place,
that when the next interval elapses the residual force becomes strong enough to form the Earth, after which the normal law is resumed without any further disturbance. Under this law there can be
no planet exterior to Neptune, but there may be one interior to Mercury.
Let us now look back upon some of the leading features alluded to before, omitting the simpler relations of organized beings to the world around, or those of individuals to individuals, to
consider only the different parallel series we have been comparing when showing that in their respective great types the phenomena of animal life correspond to one another, whether we compare
their rank as determined by structural complication with the phases of their growth, or with their succession in past geological ages; whether we compare this succession with their embryonic
growth, or all these different relations with each other and with the geographical distribution of animals upon earth. The same series everywhere! These facts are true of all the great divisions
of the animal kingdom, so far as we have pursued the investigation; and though, for want of materials, the train of evidence is incomplete in some instances, yet we have proof enough for the
establishment of this law of a universal correspondence in all the leading features which binds all organized beings of all times into one great system, intellectually and intelligibly linked
together, even where some links of the chain are missing. It requires considerable familiarity with the subject even to keep in mind the evidence, for, though yet imperfectly understood, it is
the most brilliant result of the combined intellectual efforts of hundreds of investigators during half a century. The connection, however, between the facts, it is easily seen, is only
intellectual; and implies therefore the agency of Intellect as its first cause.^150^
And if the power of thinking connectedly is the privilege of cultivated minds only; if the power of combining different thoughts and of drawing from them new thoughts is a still rarer privilege
of a few superior minds; if the ability to trace simultaneously several trains of thought is such an extraordinary gift, that the few cases in which evidence of this kind has been presented have
become a [p.131] matter of historical record (Caesar dictating several letters at the time), though they exhibit only the capacity of passing rapidly, in quick succession, from one topic to
another, while keeping the connecting thread of several parallel thoughts: if all this is only possible for the highest intellectual powers, shall we by any false argumentation allow ourselves to
deny the intervention of a Supreme Intellect in calling into existence combinations in nature, by the side of which all human conceptions are child's play?
If I have succeeded, even very imperfectly, in showing that the various relations observed between animals and the physical world, as well as between themselves, exhibit thought, it follows that
the whole has an Intelligent Author; and it may not be out of place to attempt to point out, as far as possible, the difference there may be between Divine thinking and human thought. Taking
nature as exhibiting thought for my guide, it appears to me that while human thought is consecutive, Divine thought is simultaneous, embracing at the same time and forever, in the past, the
present, and the future, the most diversified relations among hundreds of thousands of organized beings, each of which may present complications again, which, to study and understand even
imperfectly, as for instance, Man himself, Mankind has already spent thousands of years. And yet, all this has been done by one Mind, must be the work of one Mind only, of Him before whom Man can
only bow in grateful acknowledgment of the prerogatives he is allowed to enjoy in this world, not to speak of the promises of a future life.
I have intentionally dismissed many points in my argument with mere questions, in order not to extend unduly a discussion which is after all only accessory to the plan of my work. I have felt
justified in doing so because, from the point of view under which my subject is treated, those questions find a natural solution which must present itself to every reader. We know what the
intellect of Man may originate, we know its creative power, its power of combination, of foresight, of analysis, of concentration; we are, therefore, prepared to recognize a similar action
emanating from a Supreme Intelligence to a boundless extent. We need therefore not even attempt to show that such an Intellect may have originated all the Universe contains; it is enough to
demonstrate that the constitution of the physical world and, more particularly, the organization of living beings in their connection with the physical world, prove in general the existence of a
Supreme Being as the Author of all things. The task of science is rather to investigate what has been done, to inquire if possible how it has been done, than to ask what is possible for the
Deity, as we can know that only by what actually exists. To attack such a position, those who would deny the intervention in nature of a creative mind must show that the cause to which they refer
the origin of finite beings is by its nature a possible cause, which cannot be denied of a being endowed with the attributes we recognize in God. Our task is therefore completed as soon as we
have proved His existence. It would nevertheless be highly desirable that every naturalist who has arrived at similar conclusions should go over the subject anew from his point of view and with
particular reference to the special field of his investigations; for so only can the whole evidence be brought out. I foresee already that some of the most striking illustrations may be drawn
from the morphology of the vegetable kingdom, especially from the characteristic succession and systematical combination of different kinds of leaves in the formation of the foliage and the
flowers of so many plants, all of which end their development by the production of an endless variety of fruits. The inorganic world, considered in the same light, would not fail to exhibit also
unexpected evidence of thought, in the character of the laws regulating the chemical combinations, the action of physical forces, the universal attraction, etc., etc. Even the history of human
culture ought to be investigated from this point of view. But I must leave it to abler hands to discuss such topics.
SECTION XXXI
Last Section (31st)
31st. The combination in time and space of all these thoughtful conceptions exhibits not only thought, it shows also premeditation, power, wisdom, greatness, prescience, omniscience, providence.
In one word, all these facts in their natural connection proclaim aloud the One God, whom man may know, adore, and love; and Natural History must in good time become the analysis of the thoughts
of the Creator of the Universe, as manifested in the animal and vegetable kingdoms, as well as in the inorganic world.
It may appear strange that I should have included the preceding disquisition under the title of an "Essay on Classification." Yet it has been done deliberately. In the beginning of this chapter I
have already stated that Classification seems to me to rest upon too narrow a foundation when it is chiefly based upon structure. Animals are linked together as closely by their mode of
development, by their relative standing in their respective classes, by the order in which they have made their appearance upon earth, by their geographical distribution, and generally by their
connection with the world in which they live, as by their anatomy. All these relations should therefore be fully expressed in a natural classification; and though structure furnishes the most
direct indication of some of these relations, always appreciable under every circumstance, other considerations should not be neglected which may complete our insight into the general plan of
creation. (Louis Agassiz, ESSAY ON CLASSIFICATION, Ed. E. Lurie, Belknap Press, Cambridge, 1962:127-128)
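The two series quoted in the excerpt are ratios of successive Fibonacci numbers, and both converge on the Pheidian constants of Section A.5: the short way on Phi ^-2 = 0.381966... and the long way on Phi ^-1 = 0.618033... A brief sketch (an illustration added here, not part of Agassiz's text):

def fibonacci(n):
    seq, a, b = [], 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

F = fibonacci(12)  # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144
short_way = [F[i] / F[i + 2] for i in range(8)]    # 1/2, 1/3, 2/5, 3/8, ...
long_way = [F[i] / F[i + 1] for i in range(1, 9)]  # 1/2, 2/3, 3/5, 5/8, ...

print(short_way[-1])  # 21/55 = 0.38181..., approaching Phi**-2 = 0.381966...
print(long_way[-1])   # 34/55 = 0.61818..., approaching Phi**-1 = 0.618033...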
As far as Peirce's still largely unheralded contribution and its attempted furtherance by Louis Agassiz are concerned, one might note that although the latter gives due prominence to the subject in his Essay on Classification, this highly significant issue is still rarely mentioned in abstracts or notes on the latter's work itself. Yet the understanding inherent in the basic premise was hardly
likely to have been entirely isolated, as Agassiz himself stresses in the following passage, which also carries with it familiar yet ancient echoes of unity and applied intellect:^29
These facts are true of all the great divisions of the animal kingdom, so far as we have pursued the investigation; and though, for want of materials, the train of evidence is incomplete in some
instances, yet we have proof enough for the establishment of this law of a universal correspondence in all the leading features which binds all organized beings of all times into one great
system, intellectually and intelligibly linked together, even where some links of the chain are missing. It requires considerable familiarity with the subject even to keep in mind the evidence,
for, though yet imperfectly understood, it is the most brilliant result of the combined intellectual efforts of hundreds of investigators during half a century. The connection, however, between
the facts, it is easily seen, is only intellectual; and implies therefore the agency of Intellect as its first cause.
One major difference between this approach and others lies in the direction used by Peirce; i.e., the latter commenced from the outermost regions and applied Fibonacci-related divisions while moving
inwards towards the center. Here the location of Neptune was perhaps a key (or a hindrance) in that the secondary position (i.e., the synodic location of the exponential framework) happens to be
similar to that of Neptune itself. On the other hand, however, the 1 : 1 occurrence was perhaps--rightly or wrongly--also an alerting factor for the Fibonacci series itself. Nor should this
necessarily matter, for the premise itself was already absorbed and applied.
In retrospect it is hard to say how far this line of inquiry might have been taken, or what might ultimately have resulted, but it must surely have been a far more useful endeavour than the circular,
simplistic and ad hoc diversions introduced and perpetuated by "Bode's Law." And how could something so momentous and far-reaching have been so easily driven into obscurity? According to the modern
editor of Agassiz' Essay on Classification, (E. Lurie) it was partly the work of Asa Gray and Chauncey Wright, as explained in the following footnote (the latter's No.149):^30
Agassiz tried to interest Americans in this concept, an idea typical of German speculative biology and one that he had been much impressed with since his student days at the University of Munich.
See Asa Gray, "On the Composition of the Plant by Phytons, and Some Applications of Phyllotaxis," Proceedings, AAAS, II (1850), 438-444, and Benjamin Peirce, "Mathematical Investigations of the
Fractions Which Occur in Phyllotaxis," in ibid., 444-447. Gray was never entirely convinced of the validity of this ideal conception. He subsequently encouraged Chauncey Wright to examine the
problem of leaf arrangement, with the result that such facts were shown to be understandable in terms of the principle of natural selection.
but it is still incredible that it should have been driven down so swiftly, except, perhaps, that it was undoubtedly heliocentric as well as a major departure from the views perpetuated by organized religion.
Thus it may have come too late, a century after Linnaeus' classifications, a little less with respect to Cook's voyages, and half a century or more of continued activity that was simply too much for
those who wished to maintain the status quo. But what else took place during this period of hopeful enlightenment only to fade from view? For that we turn next to the perhaps unexpected subject of
spiral formations in shells.
Before this, however, it seems fitting to leave the present section and the Three-fold Number with the starting paragraph from the Prologue to the Theory of the Planets (Theorica planetarium) by
Campanus of Novara:^31
The foremost master of philosophy divides the province of that [subject] into three primary genera; the first of these he names theological, the second mathematical, and the third natural. And
the middle term becomes in a way a partaker in the nature of the two extreme terms, because mathematical principles are found in the realms of nature and theology alike, and because it ranks
below the first and above the third in nobility of subject matter, although both of them yield place to it with respect to certainty of the method of teaching; this is the reason, moreover, why
it is called, by a transfer of epithet, “the teaching genus,” on the grounds that it possesses a method of teaching which the student cannot contradict. For it begins with things which are
grasped by the intellect, namely, things self-evident to all men, and from these it deduces, by an infallible process, the first demonstrables, then the middle ones, then the last, proceeding
from first to last through the middle ones in their due order. (Campanus of Novara, ca. 1250 CE)
and the abstract of a recent (2003) paper entitled "The golden mean as clock cycle of brain waves" by Harald Weiss^33 and Volkmar Weiss:^32
The principle of information coding by the brain seems to be based on the golden mean. Since decades psychologists have claimed memory span to be the missing link between psychometric
intelligence and cognition. By applying Bose-Einstein-statistics to learning experiments, Pascual-Leone obtained a fit between predicted and tested span. Multiplying span by mental speed (bits
processed per unit time) and using the entropy formula for bosons, we obtain the same result. If we understand span as the quantum number n of a harmonic oscillator, we obtain this result from
the EEG. The metric of brain waves can always be understood as a superposition of n harmonics times 2F, where half of the fundamental is the golden mean F (= 1.618) as the point of resonance. Such wave packets scaled in powers of the golden mean have to be understood as numbers with directions, where bifurcations occur at the edge of chaos, i.e. 2F = 3 + f^3. Similarities with El Naschie's theory for high energy particle physics are also discussed.
END OF PART IVB2b
1. Epinomis, 989d-992a, Trans. A.E. Taylor, The Collected Dialogues of Plato, Princeton University Press, Princeton 1982.
2. Timaeus, 31b-32c, Plato's Cosmology: The Timaeus of Plato, Trans. Francis MacDonald Cornford, Bobbs-Merrill, Indianapolis 1975.
3. Ovid, as quoted by Nicole Oresme in Du Ciel et du monde, Book II, Chapter 25, fols. 144a-144b, p.537.
4. The Chaldean Oracles as Set Down By Julianus,{Latin: Francesco Patrizzi; English: Thomas Stanley} Heptangle Books, Gillette, New Jersey, 1939:3.
5. Archibald, R.C. "Notes on the Logarithmic Spiral, Golden Section and the Fibonacci Series," Note V in Hambidge, Dynamic Symmetry, Yale University Press, New Haven 1920:146-157.
6. Hambidge, Jay. Dynamic Symmetry, Yale University Press, New Haven 1920.
7. Coleman, Samuel. Nature's Harmonic Unity: A Treatise on its Relation to Proportional Form, Benjamin Blom, New York 1971.
8. Agassiz, Louis. ESSAY ON CLASSIFICATION, Ed. E. Lurie, Belknap Press, Cambridge, 1962.
9. Thompson, D'Arcy Wentworth. On Growth and Form, Dover, New York 1992; unabridged reprint of the 1942 edition.
10. ibid., 1942:933.
11. Westcott, W. Wyn. Numbers: their Occult Power and Mystic Virtues, Sun Publishing Santa Fe, 1983.
12. Cook, Theodore Andrea, The Curves of Life, 1914:414.
13. Stewart, Ian. Nature's Numbers: The Unreal Reality of Mathematical Imagination, Basic Books, New York 1995.
14. Guthrie, Kenneth Sylvan. The Pythagorean Sourcebook and Library, Phanes Press, Grand Rapids 1988.
15. Church, Arthur Harry. On The Relation Of Phyllotaxis To Mechanical Law, Williams and Norgate, London 1904; see also: http://www.sacredscience.com (cat #154).
16. Coleman, Samuel, Ed. Arthur C. Coan. Nature's Harmonic Unity, Benjamin Blom, New York 1971, and Proportional Form, 1920.
17. Cook, T.A. The Curves of Life, 1914.
18. Thompson, D'Arcy Wentworth. On Growth and Form, Dover, New York 1992; first published in 1917; unabridged reprint of the 1942 edition.
19. Schooling, William, in T.A. Cook, The Curves of Life, New York 1978:440; republication of the London (1914) edition.
20. Coleman, Samuel, Ed. Arthur C. Coan. Nature's Harmonic Unity, Benjamin Blom, New York 1971:116.
21. The Curves of Life, 1914.
22. Cook, T.A. The Curves of Life, 1914:421.
23. Thompson, D'Arcy Wentworth. On Growth and Form, Dover, New York 1992:792; unabridged reprint of the 1942 edition.
24. Cook, T. A. The Curves of Life, Dover, New York 1978:413; republication of the London (1914) edition.
25. Cook, T. A. The Curves of Life, 1978:414.
26. Aldersey-Williams, Hugh. THE MOST BEAUTIFUL MOLECULE: THE DISCOVERY OF THE BUCKYBALL, John Wiley & Sons, New York 1995.
27. Peirce, Benjamin. "Mathematical Investigations of the Fractions Which Occur in Phyllotaxis," Proceedings, AAAS, II 1850:444-447.
28. Agassiz, Louis. ESSAY ON CLASSIFICATION, Ed. E. Lurie, Belknap Press, Cambridge 1962:127-128.
29. Agassiz, op. cit., p. 128.
30. Lurie, E. Ed., Agassiz, ESSAY ON CLASSIFICATION, Belknap Press, Cambridge, 1962.
31. Benjamin, Francis S., Jr. and G.J. Toomer, Campanus of Novara and Medieval Planetary Theory: Theorica planetarium, University of Wisconsin Press, Madison 1971:137.
32. Weiss, Volkmar. "Memory Span as the Quantum of Action of Thought," Cahiers de Psychologie Cognitive 14 (1995) 387-408.
33. Weiss, Harald and Volkmar Weiss. "The golden mean as clock cycle of brain waves," Chaos, Solitons and Fractals 18 (2003) No. 4, 643-652.
Copyright © 2002. John N. Harris, M.A.(CMNS). Last updated on February 8, 2004.
Kent, WA ACT Tutor
Find a Kent, WA ACT Tutor
...I use various theatre games and warm-ups, and I strongly emphasize scene-by-scene analysis. I consider voice lessons to be an extension of acting lessons, and I primarily focus on musical
theatre. My approach to teaching voice revolves around getting the student to convey the emotional content and the dramatic meaning of the song.
22 Subjects: including ACT Math, reading, English, writing
...I am also currently holding two majors, which are Economics and Mathematics, and graduated from a four-year university with a Bachelor of Arts degree in Economics and a two-year university with a Certificate Degree in Mathematics. I am currently a Math teacher at your company and also I am from Turkey who ...
28 Subjects: including ACT Math, calculus, physics, logic
...I have taught a variety of college courses in church history, theology, religious studies, and world religions. I am familiar with the theologies of the major denominations, and I treat them
all moderately and respectfully. I also thoroughly enjoy sharing my knowledge of this field.
38 Subjects: including ACT Math, English, writing, geometry
...In doing so I have had to perfect my ability to dissect and deconstruct literary works, scientific journals and write concise, straightforward papers. I feel confident teaching people to extract important information from tests/sources and then show them how to format that information in a straig...
25 Subjects: including ACT Math, Spanish, chemistry, writing
...Only then can math be seen as transcending the purely abstract becoming a valuable tool useful in every discipline. While I had a rocky start in life upon leaving high school, and I have held
a variety of jobs in assorted professions for the past 30 years, it is at this point in my life that I w...
11 Subjects: including ACT Math, calculus, geometry, algebra 1
Discrete Mathematics & Theoretical Computer Science
Volume 7 n° 1 (2005), pp. 25-36
author: L. Sunil Chandran and Vadim V. Lozin and C.R. Subramanian
title: Graphs of low chordality
keywords: induced cycles, chordality
abstract: The chordality of a graph with at least one cycle is the length of the longest induced cycle in it. The odd (even) chordality is defined to be the length of the longest induced odd
(even) cycle in it. Chordal graphs have chordality at most 3. We show that co-circular-arc graphs and co-circle graphs have even chordality at most 4. We also identify a few other classes
of graphs having bounded (by a constant) chordality values.
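To make the central definition concrete, the small sketch below (an illustration, not from the paper) tests whether a given vertex subset induces a chordless cycle; the chordality is then the maximum length over all such subsets:

def is_induced_cycle(adj, nodes):
    """True if `nodes` induces a chordless cycle in the graph.

    `adj` maps each vertex to the set of its neighbours. In the induced
    subgraph every vertex must have exactly two neighbours, and one walk
    around the cycle must visit every vertex before closing.
    """
    nodes = set(nodes)
    if len(nodes) < 3 or any(len(adj[v] & nodes) != 2 for v in nodes):
        return False
    start = next(iter(nodes))
    prev, cur, steps = None, start, 0
    while True:
        nxt = next(u for u in adj[cur] & nodes if u != prev)
        prev, cur, steps = cur, nxt, steps + 1
        if cur == start:
            return steps == len(nodes)

# 5-cycle 0-1-2-3-4-0 with a chord between 0 and 2:
adj = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {0, 3}}
print(is_induced_cycle(adj, [0, 1, 2, 3, 4]))  # False -- the chord 0-2
print(is_induced_cycle(adj, [0, 2, 3, 4]))     # True -- an induced 4-cycle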
reference: L. Sunil Chandran and Vadim V. Lozin and C.R. Subramanian (2005), Graphs of low chordality, Discrete Mathematics and Theoretical Computer Science 7, pp. 25-36
bibtex: For a corresponding BibTeX entry, please consider our BibTeX-file.
ps.gz-source: dm070103.ps.gz (46 K)
ps-source: dm070103.ps (131 K)
pdf-source: dm070103.pdf (87 K)
Trenton, NJ Algebra 2 Tutor
Find a Trenton, NJ Algebra 2 Tutor
Energetic and experienced math tutor looking to help you or your child reach your/their math goals. Over 20 years teaching and tutoring in both public and private schools. Currently employed as a
professional math tutor and summer school Algebra I teacher at the nearby and highly regarded Lawrence...
6 Subjects: including algebra 2, geometry, algebra 1, prealgebra
...Parents and students often contact me when they realize their academic process needs just the right balance of tutoring and guidance counseling, and they are always pleased with my services. I offer a detailed assessment process to find the right plan, just for you. The information gathered is used to tailor regionally, culturally, and school-district-specific pedagogy for your student.
43 Subjects: including algebra 2, English, calculus, reading
I have been helping my fellow students informally since the start of high school. After graduating from MIT I feel like I can continue to assist others with a desire to learn and excel. I
understand that some, if not many, students benefit greatly from a more one-on-one learning approach and I loo...
17 Subjects: including algebra 2, chemistry, reading, calculus
...I enjoy tutoring and believe that due to my education and past experience, I have a lot to offer! I have always had a love and passion for math and am therefore very patient, thorough, and
helpful when it comes to tutoring students in this subject! I love Chemistry and therefore love tutoring studen...
30 Subjects: including algebra 2, reading, English, biology
...In addition, I enjoy coding HTML and Javascript by hand, including the use of JQuery. I have been IT Director for a clothing company for many years. My daily job responsibilities include
analyzing networking issues related to Cisco and Brocade hardware, cabling problems, and performance issues.
13 Subjects: including algebra 2, reading, writing, algebra 1
Related Trenton, NJ Tutors
Trenton, NJ Accounting Tutors
Trenton, NJ ACT Tutors
Trenton, NJ Algebra Tutors
Trenton, NJ Algebra 2 Tutors
Trenton, NJ Calculus Tutors
Trenton, NJ Geometry Tutors
Trenton, NJ Math Tutors
Trenton, NJ Prealgebra Tutors
Trenton, NJ Precalculus Tutors
Trenton, NJ SAT Tutors
Trenton, NJ SAT Math Tutors
Trenton, NJ Science Tutors
Trenton, NJ Statistics Tutors
Trenton, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/Trenton_NJ_algebra_2_tutors.php","timestamp":"2014-04-18T14:06:51Z","content_type":null,"content_length":"24233","record_id":"<urn:uuid:2bae0a78-e89d-47d0-a9be-2b7fb1c4ea6e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00579-ip-10-147-4-33.ec2.internal.warc.gz"} |
Optimizing Cooperative Cognitive Radio Networks with Opportunistic Access
Journal of Computer Networks and Communications
Volume 2012 (2012), Article ID 294581, 9 pages
Research Article
Optimizing Cooperative Cognitive Radio Networks with Opportunistic Access
^1Electrical Engineering Program, KAUST, Al Khawarizmi Applied Mathematics Building 1, Mail Box 2675, Makkah Province, Thuwal 23955-6900, Saudi Arabia
^2School of Engineering, University of Warwick, Coventry, CV4 7AL, UK
^3Electrical Engineering Department, KAUST, Thuwal 23955-6900, Saudi Arabia
^4Department of Electrical and Computer Engineering, Texas A&M University, Texas A&M Engineering Building, Education City, Doha, Qatar
Received 9 January 2012; Revised 19 March 2012; Accepted 20 March 2012
Academic Editor: Enrico Del Re
Copyright © 2012 Ammar Zafar et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Optimal resource allocation for cooperative cognitive radio networks with opportunistic access to the licensed spectrum is studied. Resource allocation is based on minimizing the symbol error rate at
the receiver. Both the cases of all-participate relaying and selective relaying are considered. The objective function is derived and the constraints are detailed for both scenarios. It is then shown
that the objective functions and the constraints are nonlinear and nonconvex functions of the parameters of interest, that is, source and relay powers, symbol time, and sensing time. Therefore, it is
difficult to obtain closed-form solutions for the optimal resource allocation. The optimization problem is then solved using numerical techniques. Numerical results show that the all-participate
system provides better performance than its selection counterpart, at the cost of greater resources.
1. Introduction
The ever increasing wireless communication networks have put great stress on the already limited spectrum. Due to the fixed spectrum allocation policy, only the licensed users, otherwise known as
primary users, are able to access the licensed spectrum. Additionally, the Federal Communications Commission (FCC) task force highlighted in their report the fact that at any given time only 2% of
the spectrum is being used [1]. Therefore, ensuring better spectrum usage is of paramount importance.
Cognitive radios have been proposed to resolve this issue [2]. In cognitive radio, the unlicensed users, otherwise known as secondary users, first sense the licensed bands for spectrum holes (parts
of the licensed spectrum which are not being employed by the primary users at some time in certain geographical location) [3]. Then, if a spectrum hole is found, the secondary users transmit data to
the intended destination. However, the secondary user has to be careful so as not to cause interference to the primary user. The two stages of spectrum sensing and data transmission are related and
for optimal performance must be optimized jointly. This is due to the probability of detection, $P_d$, and the probability of false alarm, $P_f$, associated with spectrum sensing. If the secondary user, with
probability $P_f$, misses a spectrum hole, then it will keep silent and miss an opportunity to transmit, reducing throughput. However, if a transmission from the primary is missed, with probability $1-P_d$, then
the secondary user transmits and causes interference to the primary user. Moreover, due to interference, the signal-to-noise ratio (SNR) of the secondary user also decreases, decreasing the
throughput and increasing the symbol error rate (SER). Resource allocation that optimizes this sensing-throughput tradeoff has been discussed in [4]. Other optimal resource allocation algorithms for cognitive
radio networks have been discussed in [5]. More specifically, in [5], the authors considered a multiband system and considered the two cases of sensing-based spectrum sharing and opportunistic
spectrum access. However, both the above-mentioned works maximize the throughput.
In this paper, optimal resource allocation is discussed to minimize the SER. In order to achieve minimum SER, cooperation is introduced into the system as it decreases the SER due to diversity [6].
Hence, the transmitting secondary user now, upon sensing a spectrum hole, transmits to the relays as well as the destination. Power allocation for relay-assisted cognitive radio networks has been
discussed in [7–15]. These works proposed strategies to maximize the throughput for a cognitive relay network that is allowed to share the frequency band with the primary user. Thus, they did not
consider spectrum sensing for opportunistic access. Here, we consider an opportunistic system with amplify-and-forward (AF) relays. Full channel state information is assumed at the central controller
which performs the resource allocation. Firstly, an all-participate (AP) system is discussed and it is shown that the optimization problem is nonconvex and hence cannot be solved using analytical
means. It is then noted that the AP system is limited due to the system's resources being orthogonally distributed. To rectify this, a selection scheme is proposed and the optimal resource allocation,
in this case, is discussed.
The rest of the paper is organized as follows. Section 2 gives the system model. The AP system is considered in Section 3. Section 4 details selective relaying. Numerical results are discussed in
Section 5. Finally, Section 6 concludes the paper.
2. System Model
Consider a cognitive radio network in which the secondary source utilizes $N$ relays to send data to the secondary destination, as shown in Figure 1. The secondary network only has opportunistic access to
the licensed spectrum. Therefore, it needs to perform spectrum sensing. The source performs the spectrum sensing and then transmits information to the relays and destination if it finds a “spectral
hole” in the first time slot. The relays then forward the received signal to the destination after amplification. In this paper, a narrowband channel is assumed. The source and the relays can use
frequency-orthogonal channels to avoid causing interference to each other in wideband channels. For ease of analysis, we consider time-orthogonal channels here. Hence, a total of $N+1$ time slots are used.
2.1. Received Signal Model
Based on the spectrum sensing result, there are two possible received signal models.
2.1.1. Without Interference from the Primary User
In this scenario, with probability $1-P_f$, where $P_f$ is the probability of false alarm, the source correctly detects the presence of a “spectral hole” and transmits. The signals received at the destination and at the $i$th relay are
$$y_{s,d} = \sqrt{E_s}\, h_{s,d}\, x + n_d, \qquad y_{s,i} = \sqrt{E_s}\, h_{s,i}\, x + n_i, \qquad (1)$$
where $x$ is the zero-mean and unit-energy transmitted symbol, $E_s$ is the source energy, $h_{s,d}$ is the channel response between the source and the destination, $h_{s,i}$ is the channel response between the $i$th relay and the source, and $n_d$ and $n_i$ are the complex Gaussian noise samples. The relay, after normalization and amplification, forwards the received signal to the destination. The signal after normalization is
$$\tilde y_i = \frac{y_{s,i}}{\sqrt{E_s |h_{s,i}|^2 + 1}}. \qquad (2)$$
Therefore, the received signal at the destination from the $i$th relay is
$$y_{i,d} = \sqrt{E_i}\, h_{i,d}\, \tilde y_i + n_{i,d}, \qquad (3)$$
where $h_{i,d}$ is the known channel response between the receiver and the $i$th relay, $E_i$ is the $i$th relay's energy, and $n_{i,d}$ is the complex Gaussian noise. Substituting $\tilde y_i$ in (3) gives
$$y_{i,d} = a_i x + \tilde n_i, \qquad (4)$$
where $a_i = \sqrt{E_s E_i/(E_s|h_{s,i}|^2+1)}\; h_{s,i} h_{i,d}$ and $\tilde n_i$ collects the amplified relay noise and the destination noise.
Writing the received signals in matrix form, one has
$$\mathbf y = \mathbf a\, x + \mathbf n,$$
where $\mathbf a$ stacks the effective channel gains and $\mathbf n$ is an $(N+1)$-dimensional vector whose components are zero-mean and unit-variance complex Gaussian random variables; that is, $\mathbf n \sim \mathcal{CN}(\mathbf 0, \mathbf I)$.
2.1.2. With Interference from the Primary User
In this case, with probability $1-P_d$, where $P_d$ is the probability of detection, the source misses the transmission from the primary user and transmits, which causes interference. The signals at the destination from both the source and the relays now also include an interfering signal due to primary-user activity:
$$y_{s,d} = \sqrt{E_s}\, h_{s,d}\, x + i_d + n_d, \qquad y_{s,i} = \sqrt{E_s}\, h_{s,i}\, x + i_i + n_i,$$
where $i_d$ and $i_i$ are the interference signals.
Taking into account the fact that the source and relays have no knowledge of the interfering signal, and adopting the same approach as previously, one can write
$$y_{i,d} = a_i x + \tilde i_i + \tilde n_i,$$
where $\tilde i_i$ and $\tilde n_i$ are the effective interference and noise terms at the destination.
Again, in matrix form, one has
$$\mathbf y = \mathbf a\, x + \mathbf i + \mathbf n,$$
where $\mathbf i$ stacks the interference terms and $\mathbf n \sim \mathcal{CN}(\mathbf 0, \mathbf I)$.
2.2. Spectrum Sensing
Spectrum sensing is performed, by means of an energy detector, for the first $\tau$ seconds out of a total time-slot duration of $T$ seconds, at the source node only. The remaining $T-\tau$ is used for transmission after detecting a “spectral hole”. The probabilities of detection and false alarm, according to [16], are given by
$$P_d = Q\!\left(\left(\frac{\epsilon}{\sigma_u^2} - \gamma_p - 1\right)\sqrt{\frac{\tau f_s}{2\gamma_p + 1}}\right), \qquad P_f = Q\!\left(\left(\frac{\epsilon}{\sigma_u^2} - 1\right)\sqrt{\tau f_s}\right),$$
respectively, where $\epsilon$ is the threshold of the energy detector, $N_s = \tau f_s$ is the number of samples, $f_s$ is the sampling frequency, $\gamma_p$ is the SNR of the primary-user signal at the output of the detector, and $Q(\cdot)$ is the Gaussian $Q$-function.
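As a concrete illustration of these sensing formulas (in the form reconstructed above), the short Python sketch below evaluates the detection and false-alarm probabilities; the variable names (threshold eps, noise variance sigma2, primary SNR gamma_p) are our own labels, not notation fixed by the paper, and the example numbers are arbitrary.

# Minimal sketch of the energy-detector sensing probabilities as
# reconstructed above. Variable names and values are illustrative.
from math import erfc, sqrt

def Q(x):
    # Gaussian Q-function: tail probability of a standard normal variable.
    return 0.5 * erfc(x / sqrt(2.0))

def sensing_probabilities(eps, sigma2, gamma_p, tau, fs):
    """Detection/false-alarm probabilities of an energy detector.

    eps     -- detector threshold
    sigma2  -- noise variance at the detector input
    gamma_p -- SNR of the primary-user signal at the detector
    tau, fs -- sensing time [s] and sampling frequency [Hz]
    """
    n = tau * fs  # number of samples collected during sensing
    p_d = Q((eps / sigma2 - gamma_p - 1.0) * sqrt(n / (2.0 * gamma_p + 1.0)))
    p_f = Q((eps / sigma2 - 1.0) * sqrt(n))
    return p_d, p_f

# Example: 1 ms of sensing at 6 MHz with a -5 dB primary signal.
p_d, p_f = sensing_probabilities(eps=1.2, sigma2=1.0, gamma_p=10**(-0.5),
                                 tau=1e-3, fs=6e6)
print(f"P_d = {p_d:.3f}, P_f = {p_f:.3f}")

Increasing tau drives P_d up and P_f down for a fixed threshold, which is exactly the sensing-throughput tension the paper's optimization trades off against the symbol time.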
3. All Participate System
In this section, an all-participate (AP) system is discussed. In such a system, all the relays forward the signal to the destination. Firstly, the optimization problem is formulated. Then the
constraints on the objective function are derived. The SER at the destination is given by
$$\mathrm{SER} = P(\mathcal H_0)\,(1-P_f)\, Q\!\left(\sqrt{c\,\gamma}\right) + P(\mathcal H_1)\,(1-P_d)\, Q\!\left(\sqrt{c\,\bar\gamma}\right), \qquad (14)$$
where $\gamma$ is the SNR after combining, $\bar\gamma$ is the signal-to-interference-plus-noise ratio (SINR) after combining, $c$ is a constant which depends on the modulation scheme used, $P(\mathcal H_0)$ is the probability of no primary-user transmission, and $P(\mathcal H_1)$ is the probability of a primary-user transmission. The SNR can be found, assuming maximal ratio combining (MRC), as
$$\gamma = \gamma_{s,d} + \sum_{i=1}^{N} \frac{\gamma_{s,i}\,\gamma_{i,d}}{\gamma_{s,i} + \gamma_{i,d} + 1}, \qquad (15)$$
where the source and relay energies have been replaced by $E_s = P_s T_s$ and $E_i = P_i T_s$, in which $P_s$ and the $P_i$'s are the source and relay powers, respectively, and $T_s$ is the symbol time. Similarly, $\bar\gamma$ can be expressed as
$$\bar\gamma = \bar\gamma_{s,d} + \sum_{i=1}^{N} \frac{\bar\gamma_{s,i}\,\bar\gamma_{i,d}}{\bar\gamma_{s,i} + \bar\gamma_{i,d} + 1}, \qquad (18)$$
where the $\bar\gamma$'s denote the per-link SINRs, i.e., the per-link SNRs with the noise power augmented by the interference power.
After substituting (15) and (18) in (14), the SER is obtained as an explicit function $\mathrm{SER}(P_s, \{P_i\}, T_s, \tau)$ of the source and relay powers, the symbol time, and the sensing time; this is the expression, referred to as (22), that is optimized in what follows.
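To make the combining step concrete, the following Python sketch evaluates the combined SNR of the form in (15) as reconstructed here and plugs it into an SER expression of the form in (14); the two-hop amplify-and-forward term is the standard one, and all numbers, placeholder SINR, and function names are illustrative rather than the paper's.

# Sketch of evaluating the combined MRC SNR and an SER of the reconstructed
# form above for an AF relay network. All values are illustrative.
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def mrc_snr(gamma_sd, gamma_sr, gamma_rd):
    """Direct-link SNR plus the standard two-hop AF contribution per relay."""
    total = gamma_sd
    for gsr, grd in zip(gamma_sr, gamma_rd):
        total += gsr * grd / (gsr + grd + 1.0)
    return total

def ser(gamma, gamma_bar, p0, p1, p_f, p_d, c=2.0):
    """SER = P0 (1-Pf) Q(sqrt(c*gamma)) + P1 (1-Pd) Q(sqrt(c*gamma_bar));
    c = 2 corresponds to BPSK."""
    return (p0 * (1.0 - p_f) * Q(sqrt(c * gamma))
            + p1 * (1.0 - p_d) * Q(sqrt(c * gamma_bar)))

g = mrc_snr(gamma_sd=4.0, gamma_sr=[6.0, 5.0, 7.0], gamma_rd=[5.0, 8.0, 6.0])
g_bar = g / 3.0  # interference-degraded SINR (placeholder value)
print(ser(g, g_bar, p0=0.7, p1=0.3, p_f=0.05, p_d=0.95))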
Now we form the different constraints on the problem. First, we consider both individual power constraints at the source and the relays and a global power constraint on the whole system. Therefore, the constraints are given by
$$P_s \le P_{s,\max}, \qquad P_i \le P_{r,\max}\ \ \forall i, \qquad P_s + \sum_{i=1}^{N} P_i \le P_{\mathrm{tot}}, \qquad P_f \le \alpha, \qquad (23)$$
where $P_{s,\max}$ is the power available at the source, $P_{r,\max}$ is the power available at each relay, $P_{\mathrm{tot}}$ is the power available to the whole system, and $\alpha$ specifies the constraint on the probability of false alarm. The constraints on $P_f$, $\tau$, and $T_s$ are introduced to maintain an acceptable throughput. Next, the two cases of a global power constraint only and individual power constraints only are considered. For the case of a global power constraint only, the constraints will be
$$P_s + \sum_{i=1}^{N} P_i \le P_{\mathrm{tot}}, \qquad P_f \le \alpha. \qquad (24)$$
In the other scenario, the constraints are given by
$$P_s \le P_{s,\max}, \qquad P_i \le P_{r,\max}\ \ \forall i, \qquad P_f \le \alpha. \qquad (25)$$
The individual power constraints are set to limit the interference suffered by the primary user in the case of missed detection. As there is no individual power constraint in the global-power-constraint-only case, where the primary user is protected only by spectrum sensing, the interference caused to the primary user is greater in that case.
The problem with optimizing (22) is that it is a nonlinear and nonconvex function due to the Gaussian Q-function being nonlinear and, in general, nonconvex. Thus, the Lagrangian multiplier method [17
] cannot be applied here to obtain closed form expressions of the optimal resource allocation. One has to resort to numerical techniques to obtain the optimal solution.
A special case of importance is the absence of the direct link between source and destination, because the relays take on a more prominent role when there is no direct link. In this case, the SER can be obtained by setting the direct-link term $\gamma_{s,d}$ to zero in (22).
4. Selective Relaying
The drawback of the all-participate (AP) scheme discussed in the previous section is that, to avoid causing interference, the source and the relays transmit on orthogonal channels, consuming a considerable amount of resources. In our discussion of a time-orthogonal system, $N+1$ time slots are utilized for the transmission of one data frame. Additionally, as no sensing is performed at the relays, the primary may become active over any one of the time slots and cause interference.
To overcome these problems, a selection scheme is proposed in this section in which only one relay is selected to take part in forwarding the signal from the source. Now only 2 time slots are used in
transmitting one frame of data, thus decreasing the likelihood of the primary becoming active again during relay transmission. In the selection case, the SER is
$$\mathrm{SER}_k = P(\mathcal H_0)\,(1-P_f)\, Q\!\left(\sqrt{c\,\gamma_k}\right) + P(\mathcal H_1)\,(1-P_d)\, Q\!\left(\sqrt{c\,\bar\gamma_k}\right), \qquad (26)$$
where $\gamma_k$, $\bar\gamma_k$, and $P_k$ correspond to the $k$th relay that is selected. The SER in (26) is first optimized for all the relays, and the relay which gives the minimum optimal SER is selected. Again, all three cases given in (23), (24), and (25) of both global and individual constraints, global constraint only, and individual constraints only are considered. The selection criterion of minimizing the SER adds complexity. However, such a criterion provides results which can serve as a benchmark, as minimizing the SER is the optimal selection criterion.
It is again evident that, even in the selection case, the SER is still a nonlinear and nonconvex function. Therefore, one has to resort to numerical techniques to find the optimal solution. The
special case of no direct link is again of particular interest and considered separately.
5. Numerical Results
In this section, numerical results are provided for the optimization problems discussed. First, the proposed AP system with optimal resource allocation is discussed and it is shown that the proposed
AP schemes give better performance than the uniform power allocation (UPA) scheme. In UPA, the power is uniformly distributed among the source and the relays and the sensing time and the symbol time
are set so that the sensing-time/symbol-time inequality constraint is satisfied. The selection scheme is discussed next and its performance is compared with selection with UPA. To make it easy for the reader to follow the discussion, a
glossary is included in Table 1.
An interior-point algorithm was used to perform the optimization. The MATLAB function fmincon, which performs constrained optimization, is used to run the interior-point algorithm. To ensure that the algorithm converged to the optimal solution, it was run from a large number of initial values. All the noise variances are set equal, and the constraint $\alpha$ on the probability of false alarm is fixed. The total time duration $T$ is taken to be 100 ms. Binary phase-shift keying (BPSK) is the modulation scheme employed. Because the number of samples and the sensing time are linearly related ($N_s = \tau f_s$), the results are plotted against the number of samples.
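For readers without MATLAB, an analogous constrained setup can be sketched with SciPy; note that the SLSQP solver used below is a sequential quadratic programming method rather than an interior-point one, and the toy objective merely stands in for the reconstructed SER expression (22) — all bounds and budgets are placeholders, not the paper's settings.

# Sketch of the constrained SER minimisation (analogous to MATLAB's fmincon).
# Objective, bounds and budgets are placeholders, not the paper's settings.
import numpy as np
from scipy.optimize import minimize

N = 3          # number of relays
P_tot = 10.0   # global power budget (placeholder)
T = 100e-3     # total slot duration [s]

def ser_of(x):
    # x = [P_s, P_1..P_N, tau, T_s]; toy surrogate for the SER in (22).
    ps, pr, tau, ts = x[0], x[1:1 + N], x[-2], x[-1]
    gamma = ps * ts + np.sum(pr * ts)           # crude stand-in for combined SNR
    return np.exp(-gamma) + 0.1 * np.exp(-tau)  # decreasing in power and sensing

constraints = [
    {"type": "ineq", "fun": lambda x: P_tot - x[0] - np.sum(x[1:1 + N])},
    {"type": "ineq", "fun": lambda x: T - x[-2] - x[-1]},  # tau + T_s <= T
]
bounds = [(1e-6, 5.0)] * (1 + N) + [(1e-6, T), (1e-6, T)]
x0 = np.array([1.0] * (1 + N) + [1e-3, 1e-3])

res = minimize(ser_of, x0, method="SLSQP", bounds=bounds,
               constraints=constraints)
print(res.x, res.fun)

Because the true objective is nonconvex, a single run only finds a local minimum; restarting from many initial points, as the paper does, is what justifies calling the best result "optimal".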
The relationship between the number of samples ($N_s$) and the SER is shown in Figure 2. As one can clearly observe from Figure 2, there is an optimal value of the number of samples, and hence of the sensing time, which minimizes the SER. This is because of the tradeoff between the symbol time, $T_s$, and the sensing time, $\tau$. Increasing the sensing time means a higher probability of detection, which leads to a lower SER. However, an increase in sensing time comes at the cost of a decrease in symbol time, which leads to lower $\gamma$ and $\bar\gamma$. Therefore, the SER increases. Similarly, decreasing the sensing time implies a lower probability of detection and, in turn, a higher SER, but it also means high values of $\gamma$ and $\bar\gamma$ due to the increase in symbol time, which in turn leads to a lower SER. Hence, there exists an optimal value. This optimal value is affected by the primary user's SNR. The higher the primary user's SNR, the lower the optimal value of the sensing time will be, as it takes a shorter time to reach the same value of $P_d$ as it would for a lower primary-user SNR.
Figure 3 shows the relationship between the symbol time, $T_s$, and the SER. The relationship follows the opposite pattern to that of the sensing time. This is due to the constraint relating the sensing time and the symbol time. Therefore, when the optimal value of the sensing time is low, the optimal value of the symbol time is high.
Figure 4 shows the SER performance of the different AP schemes plotted against the SNR, defined here as $1/\sigma^2$, where $\sigma^2$ is the common noise variance. As expected, for the case with a direct link, the three optimal resource allocation (ORA) scenarios, global constraint only (GL), individual constraints only (Ind), and both global and individual constraints, outperform the uniform power allocation (UPA) and the direct link only for all values of SNR, and the gap in performance becomes wider with increasing SNR.
In Figure 4, in the case where there is no direct link (NDL) between source and destination, the performance is a little different. In this case, all three ORA schemes, AP-ORA-NDL, AP-ORA-GL-NDL, and AP-ORA-Ind-NDL, outperform the uniform power allocation scheme (UPA-NDL). However, for SNRs below 0 dB, the direct-link-only case provides better SER performance than the three ORA cases with no direct link. The three ORA schemes with no direct link start to catch up to the direct-link-only scenario after 0 dB and completely outperform it after 5 dB. This phenomenon, coupled with the fact that AP-ORA-NDL, AP-ORA-GL-NDL, and AP-ORA-Ind-NDL are handily outperformed by AP-ORA, AP-ORA-GL, and AP-ORA-Ind, respectively, demonstrates the significance of the presence of a link between the source and destination.
Comparing the different constraints, AP-ORA gives the worst performance in both scenarios of direct and no direct link. This is due to the fact that AP-ORA is constrained both globally and individually. Thus, even if one relay has more favourable conditions, the power allocated to it cannot exceed $P_{r,\max}$, which is not the case in the global-constraint-only case, where more power can be allocated to the source and to the relay with the most favourable conditions. The comparison between global constraint only and individual constraints only requires further elaboration.
First, the global-constraint-only and individual-constraints-only scenarios are compared in the no-direct-link case. Here, AP-ORA-Ind-NDL provides lower SER than AP-ORA-GL-NDL for all values of SNR. AP-ORA-GL-NDL has the advantage of allocating more power to relays with better channel conditions. However, AP-ORA-Ind-NDL makes up for this advantage by having more total power in the system, as it is not constrained by a total power constraint.
Now consider the direct-link case. Here, AP-ORA-GL outperforms AP-ORA-Ind at low values of SNR due to the presence of the direct link. As discussed before, the direct link is quite important; hence, as AP-ORA-GL is not limited by individual constraints, more power can be allocated to the direct link. This is not the case for AP-ORA-Ind. Therefore, AP-ORA-GL gives lower SER. However, with an increase in SNR, the noise decreases and the greater total power in the case of AP-ORA-Ind comes more into play. Thus, AP-ORA-Ind starts to outperform AP-ORA-GL at higher values of SNR. However, we must keep in mind that in the global-power-constraint-only case, the interference to the primary is greater than in the other two cases. Hence, the advantage in performance at low SNR comes at the cost of greater interference to the primary.
Figure 5 shows the SER performance of the AP system as a function of the number of relays, $N$. A similar pattern to Figure 4 is observed. The ORA schemes outperform the UPA schemes in both cases of direct and no direct link. Among the proposed ORA schemes, AP-ORA-Ind-NDL provides lower SER than AP-ORA-NDL and AP-ORA-GL-NDL in the no-direct-link scenario, while in the direct-link scenario AP-ORA is outperformed by both AP-ORA-Ind and AP-ORA-GL. In addition, AP-ORA-GL has better performance than AP-ORA-Ind for a small number of relays. However, as the number of relays increases, AP-ORA-Ind surpasses AP-ORA-GL in terms of performance due to greater total power. Moreover, AP-ORA-NDL, AP-ORA-GL-NDL, and AP-ORA-Ind-NDL even start to outperform UPA for a large number of relays, which shows the gain in performance due to ORA.
Figure 6 shows the performance of the various selection schemes as a function of SNR. The comparison in performance follows a similar pattern to the AP case, with the proposed selection ORA schemes outperforming their UPA counterparts and the direct-link-only scenario. However, there is one major difference. In the presence of a direct link, Sel-ORA-Ind gives poorer performance than Sel-ORA-GL even for high values of SNR; only at around 15 dB does Sel-ORA-Ind start to catch up to Sel-ORA-GL. This is due to the fact that, as pointed out for the AP system, in the case of global constraints only more power can be allocated to the source. However, unlike AP, in Sel there is only one additional relay, which implies less total power for Sel-ORA-Ind and therefore requires a high value of SNR to make the difference in total power count.
SER performance for selective relaying as a function of the number of relays is shown in Figure 7. Again, the main difference from the AP case is that Sel-ORA-GL outperforms Sel-ORA-Ind even for a large number of relays. This is due to the fact that even though the number of relays increases, the total power for Sel-ORA-Ind remains constant, as only one relay in addition to the source takes part in data transmission. An interesting point to note here is that there seems to be a minimum threshold for the SER of the selective system.
Figures 8 and 9 show the performance comparison between the AP and Sel systems as a function of SNR and $N$, respectively. The comparison is presented separately for clarity; had it been included in the previous figures, they would have become cluttered. From Figure 8, one can see that the AP scheme outperforms the selection scheme in all scenarios; however, the gap in performance is not too big. This is due to the fact that the total number of relays is 3. If $N$ is increased, the performance gap will also increase. Still, one has to keep in mind the extra cost and spectral inefficiency associated with the AP scheme. This becomes clearer when Figure 9 is examined.
As one can see, the difference in performance between the respective AP and Sel schemes increases with the number of relays. As discussed earlier, the Sel schemes appear to be bounded by a minimum SER threshold. Due to this, the Sel-with-direct-link scenarios even fall below the AP-with-no-direct-link scenarios for a large number of relays.
6. Conclusions
In this paper, ORA for a cognitive relay network has been discussed. It has been shown for an AP system that ORA improves SER performance and that the discussed schemes outperform the UPA schemes.
The importance of the direct link between the source and the destination has also been demonstrated. Among the different constraints on the system, the case of both individual and global constraints gives the worst performance, while global constraints only is best at low SNR. However, this comes at the cost of greater interference to the primary user. The individual-constraints-only case takes over as the best scheme as the SNR increases.
It was then noted that the AP scheme consumes considerable resources and is spectrally inefficient. Therefore, a simple relay selection scheme has been proposed. Optimal resource allocation was then
discussed for the selection scheme. The performance comparison of the AP and Sel shows that while AP provides better SER performance, it comes at the cost of considerable resources.
This work was supported by King Abdullah University of Science and Technology (KAUST).
1. “Federal Communications Commission (FCC), ET Docket No. 03-322, Notice of Proposed Rule Making and Order,” 2003.
2. J. Mitola and G. Maguire Jr., “Cognitive radio: making software radios more personal,” IEEE Personal Communications, vol. 6, no. 4, pp. 13–18, 1999. View at Publisher · View at Google Scholar ·
View at Scopus
3. S. Haykin, “Cognitive radio: brain-empowered wireless communications,” IEEE Journal on Selected Areas in Communications, vol. 23, no. 2, pp. 201–220, 2005. View at Publisher · View at Google
Scholar · View at Scopus
4. Y. Liang, Y. Zeng, E. C. Y. Peh, and A. T. Hoang, “Sensing-throughput tradeoff for cognitive radio networks,” IEEE Transactions on Wireless Communications, vol. 7, no. 4, pp. 1326–1337, 2008.
View at Publisher · View at Google Scholar · View at Scopus
5. S. Stotas and A. Nallanathan, “Optimal sensing time and power allocation in multiband cognitive radio networks,” IEEE Transactions on Communications, vol. 59, no. 1, pp. 226–235, 2011. View at
Publisher · View at Google Scholar · View at Scopus
6. J. Laneman, D. Tse, and G. Wornell, “Cooperative diversity in wireless networks: efficient protocols and outage behavior,” IEEE Transactions on Information Theory, vol. 50, no. 12, pp. 3062–3080,
2004. View at Publisher · View at Google Scholar · View at Scopus
7. W. Yue, B. Zheng, and Q. Meng, “Optimal power allocation for cognitive relay networks,” in Proceedings of the International Conference on Wireless Communications and Signal Processing (WCSP '09),
pp. 1–5, Nanjing, China, November 2009. View at Publisher · View at Google Scholar · View at Scopus
8. L. Li, X. Zhou, H. Xu, G. Y. Li, D. Wang, and A. Soong, “Simplified relay selection and power allocation in cooperative cognitive radio systems,” IEEE Transactions on Wireless Communications,
vol. 10, no. 1, pp. 33–36, 2011. View at Publisher · View at Google Scholar · View at Scopus
9. J. Mietzner, L. Lampe, and R. Schober, “Distributed transmit power allocation for multihop cognitive-radio systems,” IEEE Transactions on Wireless Communications, vol. 8, no. 10, pp. 5187–5201,
2009. View at Publisher · View at Google Scholar · View at Scopus
10. Z. Liu, Y. Xu, D. Zhang, and S. Guan, “An efficient power allocation algorithm for relay assisted cognitive radio network,” in Proceedings of the International Conference on Wireless
Communications and Signal Processing (WCSP '10), pp. 1–5, Suzhou, China, October 2010. View at Publisher · View at Google Scholar · View at Scopus
11. X. Liu, B. Zheng, J. Cui, and W. Ji, “A new scheme for power allocation in cognitive radio networks based on cooperative relay,” in Proceedings of the 12th IEEE International Conference on
Communication Technology (ICCT '10), pp. 861–864, Tsukuba Science City, Novmber 2010.
12. X. Qiao, Z. Tan, S. Xu, and J. Li, “Combined power allocation in cognitive radio-based relay-assisted networks,” in Proceedings of the IEEE International Conference on Communications Workshops
(ICC '10), pp. 1–5, Cape Town, South Africa, May 2010. View at Publisher · View at Google Scholar · View at Scopus
13. X. Liu, B. Zheng, and W. Ji, “Cooperative relay with power control in cognitive radio networks,” in Proceedings of the 6th International Conference on Wireless Communications, Networking and
Mobile Computing (WiCOM '10), pp. 1–5, Chengdu, China, September 2010. View at Publisher · View at Google Scholar · View at Scopus
14. L. Jayasinghe and N. Rajatheva, “Optimal power allocation for relay assisted cognitive radio networks,” in Proceedings of the IEEE 72nd Vehicular Technology Conference Fall (VTC2010-Fall '10),
pp. 1–5, Ottawa, Canada, September 2010. View at Publisher · View at Google Scholar · View at Scopus
15. Z. Shu and W. Chen, “Optimal power allocation in cognitive relay networks under different power constraints,” in Proceedings of the IEEE International Conference on Wireless Communications,
Networking and Information Security (WCNIS '10), pp. 647–652, Beijing, China, June 2010.
16. H. Urkowitz, “Energy detection of unknown deterministic signals,” Proceedings of the IEEE, vol. 55, no. 4, pp. 523–531, 1967.
17. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004. | {"url":"http://www.hindawi.com/journals/jcnc/2012/294581/","timestamp":"2014-04-16T14:29:05Z","content_type":null,"content_length":"292367","record_id":"<urn:uuid:5bbe10a2-4c5d-4687-a7cc-74bc689ccae1>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00060-ip-10-147-4-33.ec2.internal.warc.gz"} |
Free Math Place Value Worksheets
Free Math
Place Value Worksheets
Hub Page
"Math Salamanders Free Math Sheets"
Welcome to the Math Salamanders Free Math Place Value Worksheets area.
Here you will find a wide range of math worksheets place value and free place value activities which will help your child gain a better understanding of how our number system and place value works.
This page contains links to other Math webpages where you will find a range of activities and resources.
Each webpage has a short description of what the page is about and the math learning it covers.
Definition of Place Value
So what exactly is place value?
Place value refers to the value of the digits in any given number. In the number 482 for example, the value of the digit '8' is 80 and the value of the digit '4' is 400.
At a more advanced level, in the number 36.57, the value of the digit '5' is 0.5 and the value of the digit '7' is 0.07.
In our number system, each time you move a place to the left, the value of the digit gets ten times bigger. Each time you move a place to the right, the value of the digit gets ten times smaller.
Place Value Learning
Children start their learning journey in Math when they start to count. When they are confindent counting small groups of objects and getting beyond 10, they then begin to develop their understanding
of place value up to 100 and beyond.
When they have understood how place value with whole numbers works, they can start learning about place value with decimals.
Our selection of free math place value worksheets has been split into different areas below so that you can more easily find the right sheet for your child.
Counting to 25
Here you will find our selection of counting worksheets up to 25
Using these sheets will help your child to:
• Count objects up to 25
• Sequence numbers up to 25.
All the free math worksheets in this section support Elementary Math Benchmarks for Kindergarten.
Sequencing Numbers to 25
Here you will find our selection of sequencing worksheets up to 25
Using these sheets will help your child to:
• count on and back up to 25;
• sequence numbers to 25.
All the free math worksheets in this section support Elementary Math Benchmarks for Kindergarten.
Place Value to 100
Here you will find our selection of Place Value to 100 worksheets.
Using these Math Worksheets Place Value will help your child to:
• learn their place value to 100;
• understand the value of each digit in a 2 digit number;
• Round numbers up to 100 to the nearest 10
• learn to read and write numbers to 100.
All the free Place Value Worksheets in this section are informed by the Elementary Math Benchmarks for Grade 1.
Place Value to 1000
Here you will find our selection of Place Value to 1000 worksheets.
Using these sheets will help your child to:
• learn their place value to 1000;
• understand the value of each digit in a 3 digit number;
• learn to read and write numbers to 1000.
All the free Math Place Value Worksheets in this section are informed by the Elementary Math Benchmarks for Grade 2.
Place Value to 10,000
Here you will find our selection of Place Value to 10,000 worksheets.
Using these sheets will help your child learn to:
• learn their place value to 10,000;
• understand the value of each digit in a 4 digit number;
• learn to read and write numbers to 10,000.
All the free Math Place Value Worksheets in this section follow the Elementary Math Benchmarks for Grade 3.
Place Value to 10 million
Welcome to our BIG Number Place Value area.
Here you will find sheets to help your child learn their place value to 10 million.
Using these sheets will help your child to:
• Know how to read and write numbers to 10 million;
• Understand place value to 10 million.
• Solve place value problems.
All the 4th grade math worksheets in this section support elementary math benchmarks.
Place Value Negative Numbers
Using these sheets will help your child to:
• learn to order negative numbers;
• learn to position numbers from -10 to 10 on a number line.
All the free Math Worksheets 3rd Grade in this section support the Elementary Math Benchmarks for Third Grade.
Place Value Decimals
Here you will find our selection of Place Value involving Decimals with up to 2 decimal places (2dp).
Using these sheets will help your child learn to:
• learn their place value with decimals up to 2dp;
• understand the value of each digit in a decimal number;
• learn to read and write numbers with up to 2dp.
All the free Place Value Worksheets in this section are informed by the Elementary Math Benchmarks for Grades 4 and 5.
Whether you are looking for a free Homeschool Math Worksheet collection, banks of useful Math resources for teaching kids, or simply wanting to improve your child's Math learning at home, there is
something here at Math-Salamanders.com for you!
The Math Salamanders hope you enjoy using these free printable Math worksheets and all our other Math games and resources.
Copyright © University of Cambridge. All rights reserved.
'Jumping' printed from http://nrich.maths.org/
Why do this problem?
This problem uses the context of sports training to offer opportunities for learners to explore division and/or multiplication. Pupils will be required to consider the relationships between multiplication,
division and fractions, which will help reveal their level of understanding.
Possible approach
Depending on pupils' previous experiences and skills, it might be helpful to pose a few questions involving finding 'half as much again' before going on to the problems as posed. You could encourage
pupils to record their own long jump results during a PE lesson, then list some of these on the board when you return to class. Pick out one length and ask the group how far that child would have
jumped if s/he had jumped half as far again. Invite pairs to work on finding a solution and then the ensuing discussion will allow you to assess how well they have understood the idea. You can pose
a few similar questions to give them more practice, should they need it.
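For instance, a recorded jump of 1.2 m that is extended by "half as far again" becomes 1.2 m + 0.6 m = 1.8 m; working backwards, if a second jump of 1.8 m is half as far again as the first, then the first jump must have been 1.8 m ÷ 1.5 = 1.2 m.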
You can then introduce them to the questions as stated in the problem which require children to 'work backwards'. Again, encourage them to talk to a partner or work in a small group and give them
free choice of equipment/tools that they feel would help their calculations.
Allow plenty of time for them to come together to discuss their methods. You may like to have picked out some pairs/groups and warned them in advance that you'd like them to explain what they've
done to everyone else. Try to sit back during this discussion so that the class, rather than you, comments on the explanations. This may well prove a good assessment opportunity from your perspective.
You may like to conclude by asking the children which method they would use if they were now given a similar problem. There are likely to be a range of responses, so encourage each pupil to give
reasons for their choice.
Key questions
Tell me about the two jumps.
How did you get to your answer?
Possible extension
Ask children to create questions (to which they know the answers) that are similar, but also extend the simple phrase to one involving more difficult fractions. e.g. "only reached two thirds of what
they did the first time" or "a third as much again".
Possible support
Some pupils may find it helpful to use some material to count with, for example a paper number line that can be cut up can be useful. | {"url":"http://nrich.maths.org/7407/note?nomenu=1","timestamp":"2014-04-21T12:24:13Z","content_type":null,"content_length":"6704","record_id":"<urn:uuid:5531aa01-7106-4d2a-9913-3688200b2d65>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00561-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Fixed-Point Theory of Strictly Causal Functions
Eleftherios Matsikoudis and Edward A. Lee
EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2013-122
June 9, 2013
We ask whether strictly causal components form well defined systems when arranged in feedback configurations. The standard interpretation for such configurations induces a fixed-point constraint on
the function modelling the component involved. We define strictly causal functions formally, and show that the corresponding fixed-point problem does not always have a well defined solution. We
examine the relationship between these functions and the functions that are strictly contracting with respect to a generalized distance function on signals, and argue that these strictly contracting
functions are actually the functions that one ought to be interested in. We prove a constructive fixed-point theorem for these functions, introduce a corresponding induction principle, and study the
related convergence process.
BibTeX citation:
Author = {Matsikoudis, Eleftherios and Lee, Edward A.},
Title = {The Fixed-Point Theory of Strictly Causal Functions},
Institution = {EECS Department, University of California, Berkeley},
Year = {2013},
Month = {Jun},
URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-122.html},
Number = {UCB/EECS-2013-122},
Abstract = {We ask whether strictly causal components form well defined systems when arranged in feedback configurations. The standard interpretation for such configurations induces a fixed-point constraint on the function modelling the component involved. We define strictly causal functions formally, and show that the corresponding fixed-point problem does not always have a well defined solution. We examine the relationship between these functions and the functions that are strictly contracting with respect to a generalized distance function on signals, and argue that these strictly contracting functions are actually the functions that one ought to be interested in. We prove a constructive fixed-point theorem for these functions, introduce a corresponding induction principle, and study the related convergence process.}
EndNote citation:
%0 Report
%A Matsikoudis, Eleftherios
%A Lee, Edward A.
%T The Fixed-Point Theory of Strictly Causal Functions
%I EECS Department, University of California, Berkeley
%D 2013
%8 June 9
%@ UCB/EECS-2013-122
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-122.html
%F Matsikoudis:EECS-2013-122 | {"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-122.html","timestamp":"2014-04-20T20:56:54Z","content_type":null,"content_length":"6302","record_id":"<urn:uuid:bdf45387-c67a-49c0-a64e-89e524c980f3>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00330-ip-10-147-4-33.ec2.internal.warc.gz"} |
My Synth
After many months of work, I finally have a prototype of my synthesiser up and running (written in C++). It's called Cascade and uses a variant of cellular automata to perform additive synthesis
of dynamic, evolving tones.
Some of you may be familiar with the concept of cellular automata - a system of cells whose values change according to neighbourhood (the adjacent cells). The synth uses a 1D 256-state automaton,
where at each time step the current 'generation' (a 1D array of 8-bit values) of cells is converted into a sound by interpreting each cell's value as the amplitude of a particular harmonic (there
are 64). By experimenting with the rules, you can create some very interesting evolving sounds.
As opposed to a rule table, the automaton is governed by a weighted-average (top-bottom and middle-sides) system that basically controls the direction in which 'energy' is spread throughout the
spectrum. For a 1D automaton, each prospective cell has 3 neighbours (top, middle and bottom). For each cell, the neighbourhood is processed thus:
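(The exact rule listing didn't survive the forum formatting; the snippet below is a guess at a weighted-average update of the kind described — the weights, the wrap-around at the edges, and the use of Python rather than C++ are all illustrative, not Cascade's actual code.)

# Plausible reconstruction of the update step described above; weights and
# edge wrap-around are guesses, not the real Cascade rule.
import math

NUM_CELLS = 64  # one cell per harmonic

def step(cells, w_self=0.5, w_neigh=0.25):
    """One generation: each new cell is a weighted average of itself and its
    top/bottom neighbours, kept in the 8-bit (256-state) range."""
    n = len(cells)
    out = []
    for i in range(n):
        top = cells[(i - 1) % n]
        bottom = cells[(i + 1) % n]
        v = w_self * cells[i] + w_neigh * (top + bottom)
        out.append(int(v) % 256)
    return out

def sample(cells, f0, t):
    """Additive synthesis: cell k is the amplitude of harmonic k + 1."""
    s = sum(a / 255.0 * math.sin(2.0 * math.pi * f0 * (k + 1) * t)
            for k, a in enumerate(cells))
    return s / len(cells)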
Here are some examples (mp3 format), along with the corresponding spectrum:
Here's the EXE:
Admittedly, due to the nature of the synth operation is not very intuitive but with experimentation you can get some nice sounds. It's got basic MIDI compatibility, too, and is 10-voice
polyphonic. To define the starting state, click and drag the boxes on the left column to alter their values (or right click to invert them).
MIDI keyboard required.
Oh gorsh I'ma gonna dig up that 'ol midi controller right now
Page faults immediately for me. (AMD-K6 ~450MHz, 192MB RAM, Win98) MIDI keyboard _required_ you say.... :-/ That's two stories down, and neither it nor the PC are move friendly...
The example mp3 are impressive, however - wish I could make some.
I don't know what could be causing that - but this is the place to find out, eh? ;) Maybe I should post some code for you all to dissect!
It doesn't need a hugely fast computer though (it can run upto about 10% CPU when busy on my 2.6GHz) - as you can tell, I've managed to fix the efficiency problem I posted a few days ago.
EDIT: Some of the images don't seem to be working so here's all of it in a zip:
Here's the user guide: | {"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/90311-my-synth-printable-thread.html","timestamp":"2014-04-20T14:30:38Z","content_type":null,"content_length":"12009","record_id":"<urn:uuid:189c42c7-2286-4d38-b597-5e812d59e76b>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
Intersubband terahertz transitions in Landau level system of cascade GaAs/AlGaAs quantum well structures in strong tilted magnetic field
The tunable terahertz intersubband Landau level transitions in resonant tunneling cascade quantum well structures are considered. The way of lifting the selection rule forbidding the inter-Landau
level terahertz transitions of interest by applying a magnetic field tilted with respect to the structure layers is proposed. The importance of asymmetric structure design to achieve considerable
values of transition dipole matrix elements is demonstrated.
Quantum well structures; Landau levels; terahertz transitions.
Recently, the possibility to achieve a population inversion in the system of Landau levels (LL) in cascade quantum well structures in strong magnetic field under a condition of sequential resonant
tunneling, i.e., in strong transverse electric field, was shown [1]. If the spacing between the first and any upper (νth) subbands is lower than the optical phonon energy (i.e., when the optical
phonon scattering is suppressed), the population of zeroth LL in νth subband can exceed that of the first LL in the first subband. So, the stimulated emission of terahertz radiation can be achieved
on the transitions between these LLs, and the emission frequency may continuously be tuned in a wide range of terahertz frequencies by the variation of the magnetic field strength according to the relation
$$\hbar\omega = \Delta - \hbar\omega_c, \qquad (1)$$
where $\Delta$ is the subband spacing and $\omega_c$ is the cyclotron frequency. A scheme of transitions between Landau levels of subbands 1 and 2 in a quantum well structure considered in [1] is shown in Figure 1. The main problem arising is that in a magnetic field directed perpendicularly to the structure layers, the optical transition of interest, shown in Figure 1, is forbidden, i.e., the corresponding
dipole matrix element is exactly equal to zero.
Figure 1. A scheme of transitions between Landau levels in a quantum well. The thick arrow indicates the (2, 0)→(1,1) radiative transition, and the wavy arrows mark the transitions due to the
electron–electron scattering.
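As a rough numerical illustration of the tuning relation (1) above (not part of the original paper), the following Python sketch evaluates the emission frequency for GaAs parameters (effective mass 0.067 of the free-electron mass); the 15 meV subband spacing is an arbitrary example value.

# Numerical check of hbar*omega = Delta - hbar*omega_c for GaAs parameters;
# the 15 meV subband spacing is an arbitrary example value.
import math

e = 1.602e-19         # elementary charge [C]
m_e = 9.109e-31       # free-electron mass [kg]
hbar = 1.055e-34      # reduced Planck constant [J s]
m_star = 0.067 * m_e  # GaAs effective mass

delta_meV = 15.0      # subband spacing Delta (example value)
for B in (2.0, 4.0, 6.0, 8.0):       # transverse magnetic field [T]
    omega_c = e * B / m_star         # cyclotron frequency [rad/s]
    E_phot = delta_meV * 1e-3 * e - hbar * omega_c
    f_THz = E_phot / (2 * math.pi * hbar) / 1e12
    print(f"B = {B:.0f} T  ->  f = {f_THz:.2f} THz")

Sweeping B from 2 T to 8 T walks the photon frequency down through the low-terahertz range, which is the continuous tunability the paper exploits.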
In [1], we proposed a possible way to overcome the difficulty and to provide nonzero matrix element values for transitions of interest by tilting the magnetic field with respect to the structure
layers. In the present paper, we investigated the effect of magnetic field tilt on the optical matrix element of the intersubband Landau level transitions. The importance of an asymmetric structure
design to achieve substantial values of transition dipole matrix elements was revealed, and an asymmetric two-well periodic structure was proposed as a possible solution maximizing the optical matrix
element of the terahertz transitions of interest.
Theoretical background
Let us consider an electron in the quantum well structure in the tilted magnetic field $\mathbf B = (B_\parallel, 0, B_\perp)$, where $z$ is the growth axis. In the Landau gauge $\mathbf A = (0,\, B_\perp x - B_\parallel z,\, 0)$, the electron envelope wave function is given by [2]
$$\Psi(x,y,z) = e^{i k_y y}\,\psi(x,z), \qquad (2)$$
where the component $\psi(x,z)$ is determined by a two-dimensional Schroedinger equation
$$\hat H\,\psi(x,z) = E\,\psi(x,z) \qquad (3)$$
with Hamiltonian
$$\hat H = \hat H_0 + \hat H_1. \qquad (4)$$
Here,
$$\hat H_0 = -\frac{\hbar^2}{2m^*}\frac{\partial^2}{\partial z^2} + U(z) - \frac{\hbar^2}{2m^*}\frac{\partial^2}{\partial x^2} + \frac{\hbar^2}{2m^* l_\perp^4}\,(x - x_0)^2 \qquad (5)$$
is the Hamiltonian for the case of a magnetic field normal to the structure layers, with $x_0 = -k_y l_\perp^2$ the oscillator center, and
$$\hat H_1 = -\frac{\hbar^2}{m^* l_\perp^2 l_\parallel^2}\, z\,(x - x_0) + \frac{\hbar^2}{2m^* l_\parallel^4}\, z^2. \qquad (6)$$
Here, $U(z)$ is the quantum well potential, $m^*$ is the effective mass, $l_\perp$ and $l_\parallel$ are the magnetic lengths for the transverse ($B_\perp$) and longitudinal ($B_\parallel$) magnetic field components, and $L$ is the thickness of the structure.
In the case of the magnetic field being normal to the structure layers, the variables in the Schroedinger equation are separated, and the energy levels and electron wave functions are given by the expressions [3]
$$E_{\nu n} = E_\nu + \hbar\omega_c\!\left(n + \tfrac12\right), \qquad (7)$$
$$\psi_{\nu n}(x,z) = \varphi_\nu(z)\,\chi_n(x - x_0), \qquad (8)$$
where $\chi_n$ is the wave function of a harmonic oscillator with mass $m^*$ and frequency $\omega_c$, and $E_\nu$ and $\varphi_\nu(z)$ are the energy and wave function of the $\nu$th subband. Here, the small effect of the effective lowering of the barrier height with the increase of the Landau level number $n$ [3] is neglected.
It can be easily seen that in this case, the dipole matrix element
$$D = \langle \psi_{2,0} \vert\, e\hat{\mathbf r}\, \vert \psi_{1,1} \rangle \qquad (9)$$
is exactly equal to zero for any polarization due to the orthogonality of the subband ($\varphi_\nu$) and oscillator ($\chi_n$) wave functions; that is, the considered (2,0)→(1,1) transition is optically forbidden.
However, the matrix element of the specified transition can be made nonzero by applying an additional component of the magnetic field parallel to the layers, that is, by tilting the magnetic field with respect to the structure layers. Now, due to the additional term $\hat H_1$ of Equation 6 arising in Equation 4, which couples the $x$ and $z$ coordinates, the variables in the Schroedinger equation are no longer separated, resulting in the mixing of in-plane and out-of-plane electron motions [4] and lifting of the above selection rule. The effect is similar to the violation of the Δn=0 selection rule for the resonant tunneling transitions between the Landau levels in the tilted magnetic field [4-10].
Here, we will consider the situation when the matrix element of the Hamiltonian term (Equation 6) over the first and second subband states (Equation 8) is much lower than the subband spacing; this is the case in the magnetic-field range considered here. The structure of the single-electron spectrum in the tilted magnetic field in this case does not change significantly [4-10]. The main effect of $\hat H_1$ is the shift of the harmonic oscillator center in Equation 8 by the value $(B_\parallel/B_\perp)\langle z\rangle_\nu$ [7], where $\langle z\rangle_\nu$ is the average value of the electron coordinate along the $z$ axis in the $\nu$th subband state:
$$\langle z\rangle_\nu = \int \varphi_\nu^*(z)\, z\, \varphi_\nu(z)\, dz. \qquad (10)$$
Substituting the shifted wave functions (Equation 11) into Equation 9, an expression (Equation 12) is obtained for the squared modulus of the dipole matrix element, which depends on the difference of the subband averages $\langle z\rangle_\nu$.
From this expression, one can see that the dipole matrix element becomes nonzero only if the values $\langle z\rangle_1$ and $\langle z\rangle_2$ are substantially different.
In a symmetric well potential $U(z)$, the subband wave functions are symmetric or antisymmetric with respect to the symmetry center of the potential, and the averages $\langle z\rangle_\nu$ are the same for all subbands. So, in a symmetric potential, the transition matrix element continues to be close to zero even in the tilted magnetic field. Thus, to provide a nonzero dipole matrix element for the transitions of interest along with the application of the tilted magnetic field, it is necessary to introduce an asymmetric potential along the direction of the structure growth.
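One way to see the $\langle z\rangle_\nu$ asymmetry numerically (this illustration is ours, not the paper's) is a quick finite-difference diagonalization of a 1D double-well potential; the geometry below is only loosely inspired by the structure of Figure 3, and the well widths and barrier height are illustrative.

# Finite-difference sketch: subband averages <z>_nu for an asymmetric double
# well. Geometry and barrier height are illustrative, not the paper's design.
import numpy as np

hbar2_2m = 3.81 / 0.067   # hbar^2/(2 m*) in eV*A^2 for the GaAs effective mass
nz, dz = 1200, 0.5        # grid: 600 A span, 0.5 A step
z = np.arange(nz) * dz

U = np.full(nz, 0.23)                    # ~230 meV AlGaAs barriers
U[(z > 100) & (z < 250)] = 0.0           # 150 A wide well
U[(z > 270) & (z < 380)] = 0.0           # 110 A narrow well, 20 A barrier

# Tridiagonal kinetic term plus potential, then full diagonalization.
main = 2 * hbar2_2m / dz**2 + U
off = -hbar2_2m / dz**2 * np.ones(nz - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E, V = np.linalg.eigh(H)

for nu in range(2):
    psi = V[:, nu] / np.sqrt(dz)         # normalized wave function
    z_avg = np.sum(np.abs(psi)**2 * z) * dz
    print(f"subband {nu + 1}: E = {E[nu]*1e3:.1f} meV, <z> = {z_avg:.1f} A")

With this geometry, the ground subband localizes in the wider well and the second subband in the narrower one, so the two $\langle z\rangle_\nu$ values differ by roughly the inter-well distance — exactly the asymmetry the dipole matrix element needs.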
Results and discussion
The simplest solution is to apply an electric field along the structure growth axis especially as the electric field is necessary to provide the resonant tunneling pumping of the LLs of the upper
subband. In Figure 2, the calculated dependence of the dipole matrix element of the (2,0)→(1,1) transition in the GaAs/AlGaAs quantum well on the applied electric field is shown. It can be seen
that the application of the electric field results in the nonzero dipole matrix element. Nevertheless, since the possible values of the electric field strength are determined by resonant tunneling
conditions and cannot be selected independently, this way of providing a nonzero matrix element is not very effective.
Figure 2. Calculated dependencies of the dipole matrix element on the applied electric field. Squared modulus of the dipole matrix element | D[(2,0)→(1,1)]|^2 versus the voltage drop eFa per quantum
well for different values of the parallel component of the magnetic field B [∥]=1 to 5T. The calculation was performed for the 250-Å GaAs/Al[0.3]Ga[0.7]As quantum well. The transverse component
of the magnetic field is B [⊥]=5T.
More effective is the use of an asymmetric design of the structures themselves. One of the possible solutions is to introduce an asymmetric double well as an active element of the periodic cascade
quantum well, consisting of two strongly coupled wells with different widths (Figure 3a), as an active element of the periodic cascade quantum well structure. In the asymmetric double quantum well,
the first subband wave function is located mainly in the wider well, while the second subband wave function is shifted to the narrower well (Figure 3b). As a result, a significant difference between
average coordinates and is achieved (Figure 3c).
Figure 3. An asymmetric double well as an active element in periodic cascade quantum well structure. ( a) Proposed design of the active element of the periodic cascade quantum well structure and the
calculated wave functions of the first and second subbands. ( b) The dependence of the 1 to 2 intersubband spacing on the narrow well width a[R]. (c) The values of the averages 〈z〉[ν] for
corresponding subbands as a function of narrow well width a[R].
The dipole matrix element for transitions between the zeroth LL of the second subband and the first LL of the first subband is presented in Figure 4 as a function of the narrow quantum well width a[R]. Here, the width of the wider well is fixed, while the width a[R] of the narrow well is varied. A pronounced maximum can be seen at a[R] = 110 Å, and the maximum achievable value of |D[(2,0)→(1,1)]|^2 is considerably higher than that in the previously considered case of the single symmetric quantum well in a transverse electric field (see Figure 2 for comparison).
Figure 4. The calculated dependence of the squared modulus of the dipole matrix element |D[(2,0)→(1,1)]|^2. Calculated for the double-well structure shown in Figure 3a as a function of narrow well
width a[R].
Of course, the structure considered is an example proposed here to illustrate the general way of how the selection rule forbidding the transitions of interest can be overcome. More detailed
simulations, including the direct calculations of the tunneling characteristics and optical gain, are necessary to optimize the structure design.
Finally, the terahertz transitions between Landau levels of different subbands in resonant tunneling quantum well structures in a tilted magnetic field were considered. An effective way was proposed
to lift the selection rule forbidding the intersubband inter-Landau level transitions by placing the structures into the tilted magnetic field. An importance of asymmetrical structure potential was
revealed, and the possibility to achieve considerable values of inter-Landau level transition matrix element was demonstrated for an asymmetric double-well structure.
Authors' contributions
MPT participated in the general task formulation and carried out the theoretical background and intersubband matrix element calculations. YAM conceived the general task formulation, participated
in the study design and coordination, and drafted the manuscript. PFK carried out the simulation program development and energy band structure calculations and participated in the sequence alignment.
All authors read and approved the final manuscript.
The work was supported by the Russian Basic Research Foundation (grants nos. 09-02-00671 and 08-02-92505-NCNIL), the RF President grant for young scientists (no. МК-916.2009.2), the MISIS grant no.
3400022, and the Ministry of Science and Education of the Russian Federation program “Scientific and Pedagogical Personnel of Innovative Russia in 2009–2013”.
Probability: usual or unusual
This isn't really a complex question, more a problem of definition. The questions in my book ask me to compute probabilities for various circumstances, and after each one they ask whether the occurrence of the event is usual or unusual. The book has a graphic showing that at a probability of 1 the event will definitely occur, at 0.5 it is a 50/50 chance, near zero it is unlikely to occur, and at 0 the event is impossible. Elsewhere in the paragraph it mentions a probability of 1/1000 as being "very unlikely", but it doesn't define an "unusual event" with any sort of numerical parameter. What should I conclude counts as an unusual event? Less than 0.05?
Math Help
Tutorials & videos for parents and students: Khan Academy
Free math help and on-line math videos: MathVids.com
Math is Fun!: Illustrated Mathematics Dictionary
Learn Alberta: Mathematics Glossary
The Math Forum @ Drexel: Ask Dr. Math
BBC Home Ages: 4-11
BBC Home Ages: 11-16
The Other Superoperator Isomorphism
November 20th, 2009
A few months ago, I spent two posts describing the Choi-Jamiolkowski isomorphism between linear operators from M[n] to M[m] (often referred to as “superoperators“) and linear operators living in the
space M[n] ⊗ M[m]. However, there is another isomorphism between superoperators and regular operators — one that I’m not sure of any name for but which has just as many interesting properties.
Recall from Section 1 of this post that any superoperator Φ can be written as

Φ(X) = Σ_i A_i X B_i

for some operators {A_i} and {B_i}. The isomorphism that I am going to focus on in this post is the one given by associating Φ with the operator

M_Φ := Σ_i A_i ⊗ B_i^T.

The main reason that M_Φ can be so useful is that it retains the operator structure of Φ. In particular, if you define vec(X) to be the vectorization of the operator X (formed here by stacking the rows of X), then

M_Φ vec(X) = vec(Φ(X)).

In other words, if you treat X as a vector, then M_Φ is the operator describing the action of Φ on X. From this it becomes simple to compute some basic quantities describing Φ. For example, the induced Frobenius norm,

‖Φ‖ := max{ ‖Φ(X)‖_F : ‖X‖_F = 1 },

is equal to the standard operator norm of M_Φ. If n = m then we can define the eigenvalues {λ} and the eigenmatrices {V} of Φ in the obvious way via

Φ(V) = λV.
Then the eigenvalues of Φ are exactly the eigenvalues of M_Φ, and the corresponding eigenvectors of M_Φ are the vectorizations of the eigenmatrices of Φ. It is similarly easy to check whether Φ is invertible (by checking whether or not det(M_Φ) = 0), find the inverse if it exists, or find the nullspace (and a pseudoinverse) if it doesn't.
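As a quick numerical sanity check, here is a small NumPy sketch (the names and the row-stacking vec convention are my choices, not from the post) that builds M_Φ from random A_i and B_i and verifies the vec identity above:

import numpy as np

n = 3
rng = np.random.default_rng(0)
A = [rng.standard_normal((n, n)) for _ in range(2)]
B = [rng.standard_normal((n, n)) for _ in range(2)]

# M_Phi = sum_i kron(A_i, B_i^T); numpy's flatten() stacks rows,
# which is the vec convention this representation matches.
M = sum(np.kron(Ai, Bi.T) for Ai, Bi in zip(A, B))

X = rng.standard_normal((n, n))
Phi_X = sum(Ai @ X @ Bi for Ai, Bi in zip(A, B))

# Check that M_Phi vec(X) = vec(Phi(X))
assert np.allclose(M @ X.flatten(), Phi_X.flatten())

The same M can then be handed to np.linalg.eigvals, np.linalg.norm or np.linalg.det to compute the quantities discussed above.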
Finally, here’s a question for the interested reader to think about: why is the transpose required on the B[i] operators for this isomorphism to make sense? That is, why can we not define an
isomorphism between Φ and the operator
Reference posterior distributions for Bayesian inference
Results 1 - 10 of 104
- Journal of Artificial Intelligence Research , 1994
"... This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs
representing a Markov chain, and undirected networks representing a Markov field. These graphical models ..."
Cited by 249 (12 self)
This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs
representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates.
Graphical operations for simplifying and manipulating a problem are provided including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two
standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Using these operations and schemas, some popular algorithms
can be synthesized from their graphical specification. This includes versions of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from
data. The paper conclu...
- Bayesian Analysis , 2006
"... Various noninformative prior distributions have been suggested for scale parameters in hierarchical models. We construct a new folded-noncentral-t family of conditionally conjugate priors for
hierarchical standard deviation parameters, and then consider noninformative and weakly informative priors i ..."
Cited by 140 (13 self)
Various noninformative prior distributions have been suggested for scale parameters in hierarchical models. We construct a new folded-noncentral-t family of conditionally conjugate priors for
hierarchical standard deviation parameters, and then consider noninformative and weakly informative priors in this family. We use an example to illustrate serious problems with the inverse-gamma
family of “noninformative ” prior distributions. We suggest instead to use a uniform prior on the hierarchical standard deviation, using the half-t family when the number of groups is small and in
other settings where a weakly informative prior is desired.
- IEEE Transactions on Information Theory , 1998
"... Abstract — This paper consists of an overview on universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the
self-information loss function, which is directly related to the theory of universal data compression. Both th ..."
Cited by 136 (11 self)
Abstract — This paper consists of an overview on universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the
self-information loss function, which is directly related to the theory of universal data compression. Both the probabilistic setting and the deterministic setting of the universal prediction problem
are described with emphasis on the analogy and the differences between results in the two settings. Index Terms — Bayes envelope, entropy, finite-state machine, linear prediction, loss function,
probability assignment, redundancy-capacity, stochastic complexity, universal coding, universal prediction. I.
, 1993
"... Ways of obtaining approximate Bayes factors for generalized linear models are described, based on the Laplace method for integrals. I propose a new approximation which uses only the output of
standard computer programs such as GLIM; this appears to be quite accurate. A reference set of proper priors ..."
Cited by 96 (28 self)
Ways of obtaining approximate Bayes factors for generalized linear models are described, based on the Laplace method for integrals. I propose a new approximation which uses only the output of
standard computer programs such as GLIM; this appears to be quite accurate. A reference set of proper priors is suggested, both to represent the situation where there is not much prior information,
and to assess the sensitivity of the results to the prior distribution. The methods can be used when the dispersion parameter is unknown, when there is overdispersion, to compare link functions, and
to compare error distributions and variance functions. The methods can be used to implement the Bayesian approach to accounting for model uncertainty. I describe an application to inference about
relative risks in the presence of control factors where model uncertainty is large and important. Software to implement the
- ANNALS OF STATISTICS , 2004
"... ..."
, 1990
"... . The Bayesian approach to probability theory is presented as an alternative to the currently used long-run relative frequency approach, which does not offer clear, compelling criteria for the
design of statistical methods. Bayesian probability theory offers unique and demonstrably optimal solutions ..."
Cited by 51 (2 self)
. The Bayesian approach to probability theory is presented as an alternative to the currently used long-run relative frequency approach, which does not offer clear, compelling criteria for the design
of statistical methods. Bayesian probability theory offers unique and demonstrably optimal solutions to well-posed statistical problems, and is historically the original approach to statistics. The
reasons for earlier rejection of Bayesian methods are discussed, and it is noted that the work of Cox, Jaynes, and others answers earlier objections, giving Bayesian inference a firm logical and
mathematical foundation as the correct mathematical language for quantifying uncertainty. The Bayesian approaches to parameter estimation and model comparison are outlined and illustrated by
application to a simple problem based on the gaussian distribution. As further illustrations of the Bayesian paradigm, Bayesian solutions to two interesting astrophysical problems are outlined: the
measurement of wea...
- IEEE TRANS. INFORM. THEORY , 1995
"... The capacity of the channel induced by a given class of sources is well known to be an attainable lower bound on the redundancy of universal codes with respect to this class, both in the minimax
sense and in the Bayesian (maximin) sense. We show that this capacity is essentially a lower bound also ..."
Cited by 47 (9 self)
The capacity of the channel induced by a given class of sources is well known to be an attainable lower bound on the redundancy of universal codes with respect to this class, both in the minimax
sense and in the Bayesian (maximin) sense. We show that this capacity is essentially a lower bound also in a stronger sense, that is, for “most ” sources in the class. This result extends Rissanen’s
lower bound for parametric families. We demonstrate the applicability of this result in several examples, e.g., parametric families with growing dimensionality, piecewise-fixed sources, arbitrarily
varying sources, and noisy samples of learnable functions. Finally, we discuss implications of our results to statistical inference.
- Journal of the American Statistical Association
"... Subjectivism has become the dominant philosophical foundation for Bayesian inference. Yet, in practice, most Bayesian analyses are performed with so-called \noninformative" priors, that is,
priors constructed by some formal rule. We review the plethora of techniques for constructing such priors, and ..."
Cited by 39 (0 self)
Subjectivism has become the dominant philosophical foundation for Bayesian inference. Yet, in practice, most Bayesian analyses are performed with so-called "noninformative" priors, that is, priors constructed by some formal rule. We review the plethora of techniques for constructing such priors, and discuss some of the practical and philosophical issues that arise when they are used. We give special emphasis to Jeffreys's rules and discuss the evolution of his point of view about the interpretation of priors, away from unique representation of ignorance toward the notion that they should be chosen by convention. We conclude that the problems raised by the research on priors chosen by formal rules are serious and may not be dismissed lightly: when sample sizes are small (relative to the number of parameters being estimated) it is dangerous to put faith in any "default" solution; but when asymptotics take over, Jeffreys's rules and
- Bayesian Analysis , 2006
"... Abstract. Bayesian statistical practice makes extensive use of versions of objective Bayesian analysis. We discuss why this is so, and address some of the criticisms that have been raised
concerning objective Bayesian analysis. The dangers of treating the issue too casually are also considered. In p ..."
Cited by 35 (3 self)
Abstract. Bayesian statistical practice makes extensive use of versions of objective Bayesian analysis. We discuss why this is so, and address some of the criticisms that have been raised concerning
objective Bayesian analysis. The dangers of treating the issue too casually are also considered. In particular, we suggest that the statistical community should accept formal objective Bayesian
techniques with confidence, but should be more cautious about casual objective Bayesian techniques.
- IEEE TRANS. INFORMATION THEORY , 2004
"... Recent years have seen a resurgence of interest in redundancy of lossless coding. The redundancy (regret) of universal xed{to{variable length coding for a class of sources determines by how much
the actual code length exceeds the optimal (ideal over the class) code length. In a minimax scenario ..."
Cited by 33 (13 self)
Recent years have seen a resurgence of interest in redundancy of lossless coding. The redundancy (regret) of universal fixed-to-variable length coding for a class of sources determines by how much the actual code length exceeds the optimal (ideal over the class) code length. In a minimax scenario one finds the best code for the worst source either in the worst case (also called maximal minimax) or on average. We first study the worst case minimax redundancy over a class of stationary ergodic sources and replace Shtarkov's bound by an exact formula. Among others, we prove that a generalized Shannon code minimizes the worst case redundancy, derive asymptotically its redundancy, and establish some general properties. This allows us to obtain precise redundancy rates for memoryless, Markov and renewal sources. For example, we derive the exact constant of the redundancy rate for memoryless and Markov sources by showing that the integer nature of coding contributes log(log m/(m − 1))/log m + o(1) where m is the size of the alphabet. Then we deal with the average minimax redundancy and regret. Our approach
Given an abelian group $G$ and subgroup $H$, $G^k/\Delta \cong G/H \times G^{k-1}$, where $\Delta = \{ (a, ..., a) \mid a \in H \}$. Why is this the case?
It looks like the natural way to define this isomorphism is $(g_1, ..., g_k )\mapsto ([g_1], g_2, ..., g_k ) $ where $[g_1]$ is the congruence class of $g_1$ in $G/H$. I can see why this is onto, but
I don't think $\Delta$ would be part of the kernel. So, it looks like I'm considering the wrong map, but what else is there to consider?
Thanks! I ran across this in a research paper, and it's just not coming to me why this is true.
This is not really a research level question, but set $\Delta(G)$ to be the corresponding diagonal subgroup of $G^{k}$ using $G$ in place of $H$. Embed $G^{k-1}$ inside $G^{k}$ by adding the
identity in the $k$-th component. Note that $G^{k} = \Delta(G) \times G^{k-1}$ with these identifications. – Geoff Robinson Oct 22 '12 at 8:09
A slight modification of Geoff's answer may be easier to see. There is an isomorphism from $G^k$ to itself taking $\Delta$ to $H\times\{0\}^{k-1}$, namely the isomorphism sending $(g_1,g_2,g_3,\
dots,g_k)$ to $(g_1,g_2-g_1,g_3-g_1,\dots,g_k-g_1)$. – Andreas Blass Oct 22 '12 at 11:32
Thanks. I really like Andreas' answer, but I unfortunately can't designate it as "best answer" for some reason (perhaps a problem with my account?). – Reeve Oct 22 '12 at 18:59
Example 8.36: Quadratic equation with real roots
April 29, 2011
By Nick Horton
We often simulate data in SAS or R to confirm analytical results. For example, consider the following problem from the excellent text by Rice: Let U1, U2, and U3 be independent random variables uniform on [0, 1]. What is the probability that the roots...
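The problem, as usually stated, asks for the probability that the quadratic U1 x^2 + U2 x + U3 = 0 has real roots. A minimal Monte Carlo sketch of the simulation idea (in Python rather than SAS or R, and assuming that standard form of the quadratic):

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
u1, u2, u3 = rng.uniform(size=(3, n))

# Real roots exactly when the discriminant u2^2 - 4*u1*u3 is nonnegative
p_real = np.mean(u2**2 - 4 * u1 * u3 >= 0)
print(f"Estimated P(real roots) ~ {p_real:.4f}")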
SPRING 2007 COLLEGE GEOMETRY MATH 4233
Tuesday & Thursday
301 Boyd, 3:30-4:45
Instructor: Dr. Amin Boumenir
Office: 321 Boyd
Phone: (678)839-4131
Email: boumenir@westga.edu
Office hours: Tuesday, Wednesday, and Thursday, 11:00 to 14:00, or by appointment
Textbook: Foundations of Geometry, by Venema, Prentice-Hall 2002.
Prerequisite: MATH 3003. This course relies heavily on logic and proofs.
Learning Outcomes:
It is expected that a student completing MATH 4233 will understand
1. Logic, reasoning, and argument patterns. (L2)
2. Axiomatic systems and their properties. (L2)
3. The finite geometries of Young and Fano. (L14, L10)
4. The axiomatic foundation and development of plane geometry. (L14)
5. The properties and applications of non-Euclidean geometry. (L10, L14)
Topics: We shall cover chapters 1: Euclid's Elements; 2: Axiomatic systems for geometry; 5: The Axioms of the plane; 6: Neutral Geometry; 7: Euclidean geometry; 9: Area; 10: Circles; 11: Constructions; 12: Transformations; 13: Models.
Tests: There will be 3 class tests, each counting 100 points. The dates are Thursdays February 8th, March 15th, and April 19th.
Grading Policy:
Quizzes: 100 The best five quizzes are counted.
Tests: 300 Three tests will be given throughout the semester.
Final: 200 The final exam will be comprehensive.
Total: 600
Grades: F < 360 ≤ D < 420 ≤ C < 480 ≤ B < 540 ≤ A ≤ 600
Important Dates:
March 1 : Last day to withdraw with a grade of W
March 19-23 : Spring Break
April 26 : Last day of class
May 3 : Final Exam 2:00-4:00 (Thursday) | {"url":"http://www.westga.edu/~math/syllabi/syllabi/spring07/MATH4233.html","timestamp":"2014-04-18T18:48:02Z","content_type":null,"content_length":"7287","record_id":"<urn:uuid:8d4e5350-d582-4dcf-af8d-c8ff8ae9804f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posted on April 30, 2011 by Adriana Salerno
Every year since 1999, MIT’s mathematics department hosts the Simons Lecture Series, “to celebrate the most exciting mathematical work by the very best mathematicians of our time.” This year, the
lecturers were Cornell’s Steven Strogatz and Princeton’s Manjul Barghava. I was incredibly lucky to be able to catch two of Steve Strogatz’s lectures last week. In particular, I enjoyed the third
lecture, entitled “Blogging about math for the New York Times”. I thought I would blog about Strogatz’s blogging experiences, making this a sort of meta-blog post or composition of two blogs (this
last interpretation was Mike Breen’s idea.)
Some of you may recall that early last year the New York Times online had a series of articles entitled “The Elements of Math”, written by Steve Strogatz. This 15-column series went through all of
the big ideas in mathematics, starting with numbers and counting and ending with a discussion of infinity. From the very first post, From Fish to Infinity, the column was immensely popular, getting
over 500 comments. Although after a show of hands in the audience, it was clear that the column might not have been quite as popular among mathematicians. Strogatz reacted by playfully adding: “Well
look at that… you bastards.”
Strogatz was asked by the New York Times’ op-ed editor, David Shipley, to write a series of columns about mathematics. The main problem was deciding what this really meant. Strogatz envisioned a
column which dealt with topical mathematics, or mathematics inspired by current affairs and the news of the moment. Shipley wanted Strogatz to go through all of mathematics from elementary school to
the most advanced ideas. At the heart of the matter was a very important question: who would be the target audience for this blog? Strogatz then listed a few possibilities:
-Tautologically, we could say the audience is people who read the New York Times (but that doesn’t get you anywhere.)
- The cynical point of view would be to just focus on mathematicians and scientists who already know the basics.
- Yet another point of view would be to lure fans of Martin Gardner’s columns in Scientific American by including puzzlers.
- The column could be for the parents of children who are learning math in school and who are usually baffled by their kids’ homework.
- One audience member said it might be interesting to write about topics that might help people read (and write) the news. To focus on number sense.
- Yet another audience member mentioned that one might want to write for many different audiences, and cater each post to a different group.
- Shipley actually asked Strogatz if he could just write for smart people who don’t know any math.
Finally, Strogatz decided that he would write these posts with one representative person in mind, his friend the actor Alan Alda. This was, he said, a perfect example of someone who loved science (he
hosted Scientific American Frontiers for PBS for many years), but knew very little math. In fact, he was an avid reader of science magazines but seemed lost when it came to understanding some of the
more abstract ideas in mathematics. He decided he would focus on helping people navigate through the turn-offs (the symbols and formulas), on explaining what mathematicians do, and on helping people
at least understand why we love math.
Another issue Strogatz dealt with was trying to decide what tone to use. There is the John Allen Paulos approach (or what Strogatz called the “I’m trying to help you, you moron” approach), the Paul
Lockhart approach (more critical of the way we’re teaching math in K-12), and many others. Strogatz opted for a more positive tone, and said he was inspired by Leonard Bernstein’s Young People’s
Concerts in that he wanted to make the readers feel like he was sharing something wonderful with them. He also said he talked as little about himself as possible, and tried to emphasize the ideas.
His final challenge was then what to write those 15 columns about. On the one hand, people really appreciate finally learning something that they didn’t understand the first time around. But should
new ideas be covered* as well? Looking back on his column and readers’ reactions, Strogatz shared with us this conjecture: “You can get away with being abstract or unfamiliar, but not both.” Abstract
ideas like the Pythagorean theorem were well received because they were very familiar. Unfamiliar ideas like conditional probabilities were well received because they could be presented in very
concrete examples. But the one post that was panned by a commenter as being the one where the series jumped the shark was the one on geodesics on a torus, which was both abstract and unfamiliar. I
actually quite enjoyed the post, but I guess I’m not the target audience so it doesn’t matter.
This talk was really fun especially as someone who, as a teacher and sometimes writer of math, thinks often about communicating mathematics. Hopefully, if any of you are thinking of these issues too,
Strogatz’s advice will come in handy. Another article I thought I should mention is this one by John Baez, published in the Notices.
* Many people these days take issue with the term “cover” when it comes to teaching. The usual comment is “we shouldn’t be covering, but uncovering the material”. Strogatz suggests we use a different
interpretation of the word cover, the one used by musicians. So when he says “cover” he means to give a “fresh interpretation of something old”. I thought this was a lovely way to put it.
One Response to Blog(Blog)
1. Bonnie Shulman says:
Really interesting (and useful) links! And I love the way you captured Strogatz’s dry sense of humor and lively intelligence.
Simply don't understand minimax...
October 9th, 2011, 04:55 PM
Simply don't understand minimax...
I have been working on this for hours, and have done quite a bit of research. It seems I just cannot connect the dots in my head to understand this.
I am writing a tic tac toe type program, and need to implement an AI using a minimax algorithm. I know what the idea behind it is, and what it's designed to do, but just can't seem to figure out how
to get it into my program. Heck, I went so far as to hire a tutor to help. After 3 hours with him today, and a lot of wasted cash, I am no closer. He had no idea what minimax was.
My game is similar to tic tac toe, except the board can be larger depending on what the user chooses. When someone scores, their pieces are removed from the board but the game continues.
I can't imagine recursing through all of those options, so I am just wanting the algorithm to find the next best scoring option. My board is a 2D array, where most of the examples I see deal with
1D arrays. I could use any help that you can give me.
The game works fine now for 2 human players.
Methods are written for checking board, clearing board, checking win etc. I guess I just don't understand how to recursively create new boards and return values on the moves.
This is what I have so far.
I guess I need to know how to use my methods to increase the value of positionValue. I know, my question is a mess, sorry :(
Code Java:
/** minimax method to compute best possible computer move */
public int[][] compMove(int[][] gameBoard, int boardSize, int playerNumber, int row, int col){
    int bestMoveIndex = 0;
    int bestValue = +1000;
    int[][] bestMoves = new int[boardSize][boardSize];
    for(int i = 0; i < boardSize; i++)
        for(int k = 0; k < boardSize; k++){
            if(gameBoard[i][k] == 0 && playerNumber == 1)
                gameBoard[i][k] = 1;
            int value = maxSearch(gameBoard);
            if(value < bestValue){
                bestValue = value;
                bestMoveIndex = 0;
                bestMoves[bestMoveIndex][k] = i;
            }
            else if(value == bestValue){
                bestMoves[bestMoveIndex++][k] = i;
            }
            gameBoard[i][k] = 0;
        }
    CompPlacePiece(bestMoves, i, k, playerNumber);
    return gameBoard;
}

public int maxSearch(int[][] gameBoard, int playerNumber){
    int positionValue = 0;
    return positionValue;
}
October 10th, 2011, 07:05 AM
Re: Simply don't understand minimax...
This tells me that you don't have a problem with minimax specifically, but with state-based search in general. Minimax comes after that.
If I were you, I'd start by creating a method that takes a game board and whose turn it is (X or O) and returns a List of all the potential game boards that could result from that turn.
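A rough sketch of that successor method, written here in Python for brevity (the board representation and names are illustrative, not from this thread):

Code python:
def successors(board, player):
    """Return every board reachable by 'player' making one move.
    board: list of rows, with '' marking an empty square; player: 'X' or 'O'."""
    results = []
    for r, row in enumerate(board):
        for c, cell in enumerate(row):
            if cell == '':
                nxt = [list(rw) for rw in board]  # copy the board
                nxt[r][c] = player
                results.append(nxt)
    return results

Minimax then just walks these successor boards recursively, alternating players.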
October 13th, 2011, 12:36 PM
Re: Simply don't understand minimax...
First of all, you should not create a new board every time you make a move. That is a total waste of space and time.
Instead, have methods to make and unmake a move. You also need an evaluation function for positions at the leaves of the tree. Terminal states are scored as a win, loss, or draw. In the case of tic-tac-toe that would be when a player forms a horizontal, vertical, or diagonal line.
What you need to implement:
Code java:
int[][] board = new int[3][3];
void make(Move m, int c);    // mark the square m (m.x, m.y) with c; c is either X or O
void unmake(Move m);         // similar to make, but it sets the square back to blank
void generateMoves(List<Move> moves); // generate a list of moves; this is simply the list of blank squares for tic-tac-toe
int evaluate();              // evaluate a position as a win/loss/draw (100, -100 and 0 resp.)
Then you do nega-max as follows.
Code java:
int negaMax( int depth ) {
    // do static evaluation at terminal states
    if ( depth == 0 )
        return evaluate();
    // start from the worst possible score, i.e. a loss = -100
    int score, max = -100;
    // generate the list of legal moves
    Vector<Move> moves = new Vector<Move>();
    generateMoves(moves);
    for (Move m : moves) {
        // make the move AND switch sides, i.e. player = the other player
        make(m, player);
        // recursive call with a reduced depth of 1; negate the result
        // because the returned score is from the opponent's point of view
        score = -negaMax( depth - 1 );
        // keep the best score
        if( score > max )
            max = score;
        // return to the previous state; again, remember to switch sides
        unmake(m);
    }
    // return the max score
    return max;
}
I hope the comments help you understand the algorithm. If not say so and I will do my best.
October 13th, 2011, 12:45 PM
Re: Simply don't understand minimax...
I disagree with this advice. You might be technically computationally correct, but chances are the OP is going off of code provided to him by an instructor, so changing it will probably introduce
more trouble than it's worth.
But like I said, I think the OP is having some problems that are more fundamental than the actual minimax algorithm. | {"url":"http://www.javaprogrammingforums.com/%20algorithms-recursion/11453-simply-dont-understand-minimax-printingthethread.html","timestamp":"2014-04-19T17:16:31Z","content_type":null,"content_length":"21977","record_id":"<urn:uuid:a2d6e195-8dc2-4e2f-ad9d-d4bcaeda2da9>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00012-ip-10-147-4-33.ec2.internal.warc.gz"} |
Programming languages, ranked by popularity
December 17, 2010
By David Smith
In a presentation to the Chicago R User Group last night, Drew Conway used his new Infochimps package in R to assess the relative popularity of programming languages. Drew used the word.stats
function in the Infochimps package to count the frequency of common computer languages mentioned in Twitter messages, and displayed the results in this bar chart:
It's not perfect: languages like C and C++ are excluded because they're impossible to search for, "ada" is excluded because it's ambiguous (and otherwise that niche language would be ranked most
popular), and R is measured by the frequency of its community twitter hashtag #rstats and not the letter R. But it's interesting nonetheless. There's lots more info about Infochimps in general, and
how this chart in particular was created, in the slides downloadable from Drew's blog.
Another way to look at programming language popularity is the frequency of mentions on two popular programmer's resource sites. In a post at the Dataists blog, Drew Conway (again) and John Myles
White used R and the XML package to extract the number of questions on stackoverflow.com and number of projects on github.com for about 50 programming languages, and plotted the results in this
As you can see, R ranks higher than the median for github projects and quite a lot higher for stackoverflow questions.
So R is doing quite well amongst programming languages in general. As a specialized statistics language, a more relevant comparison may come from looking at tags at the statistical
question-and-answer site stats.stackexchange.com, where R currently has 260 questions compared to 6 for SAS and 22 for SPSS.
Summary: A short proof of an interesting Helly-type
Nina Amenta
The Geometry Center
1300 South Second Street
Minneapolis, MN 55454
We give a short proof of the theorem that any family of subsets
of R^d, with the property that the intersection of any nonempty finite
subfamily can be represented as the disjoint union of at most k closed
convex sets, has Helly number at most k(d + 1).
1 Introduction
We say that a family of sets F has Helly number h when h is the smallest
integer (if one exists) such that any finite subfamily H ` F has nonempty
intersection if and only if every subfamily B ` H with jBj Ÿ h also has
nonempty intersection. Theorems of the form ``F has Helly number h'' are
called Hellytype theorems -- they follow the model of Helly's theorem, which
states that the family of convex sets in R d has Helly number d+ 1. There are
many Hellytype theorems; for excellent surveys see [DGK63] and the recent | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/953/3772023.html","timestamp":"2014-04-18T21:53:43Z","content_type":null,"content_length":"8185","record_id":"<urn:uuid:342bf651-a593-4684-9cb8-053415e3e791>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00326-ip-10-147-4-33.ec2.internal.warc.gz"} |
Heat equation, Fourier cosine transform
[itex]I=\int_0^{\infty} \int_0^{k} \cos(kp)\,dk\, e^{-xp}\,dp=\int_0^k \int_0^{\infty} \cos(kp)e^{-px}\,dp\,dk[/itex]. I'm stuck on that integral with respect to p (integration by parts failed and I don't see any useful substitution, so I used Wolfram Alpha for it).
As you noted, you have to integrate by parts twice. Or just look it up in a Laplace transform table, since that's what the integral is.
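For reference, the table entry in question (which also drops out after integrating by parts twice and solving for the integral) is
[tex]\int_0^{\infty} e^{-px}\cos(kp)\,dp = \frac{x}{x^2+k^2}, \qquad x>0.[/tex]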
Thank you very much for all of that, vela. So the answer in post #22 is correct, right?
Yes, it matches what I found. You can verify that at x=0, it reproduces the boundary condition and as x→∞, the solution goes to 0, as you'd expect. It doesn't do anything crazy, so it looks like a
valid solution. | {"url":"http://www.physicsforums.com/showthread.php?p=3778462","timestamp":"2014-04-20T14:09:58Z","content_type":null,"content_length":"54680","record_id":"<urn:uuid:8d03cbfe-3666-4abe-8743-bed671825317>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00267-ip-10-147-4-33.ec2.internal.warc.gz"} |
It takes three to tango: Nuclear analysis needs the three-body force
An accurate picture of the carbon-14 nucleus must consider the interactions among protons and neutrons both in pairs (known as the two-body force, left) and in threes (known as the three-body force, right).
(PhysOrg.com) -- The nucleus of an atom, like most everything else, is more complicated than we first thought. Just how much more complicated is the subject of a Petascale Early Science project led
by Oak Ridge National Laboratory's David Dean.
According to findings outlined by Dean and his colleagues in the May 20, 2011, edition of the journal Physical Review Letters, researchers who want to understand how and why a nucleus hangs together
as it does and disintegrates when and how it does have a very tough job ahead of them.
Specifically, they must take into account the complex nuclear interactions known as the three-body force.
Nuclear theory to this point has assumed that the two-body force is sufficient to explain the workings of a nucleus. In other words, the half-life or decay path of an unstable nucleus was to be
understood through the combined interactions of pairs of protons and neutrons within.
Dean's team, however, determined that the two-body force is not enough; researchers must also tackle the far more difficult challenge of calculating combinations of three particles at a time (three
protons, three neutrons, or two of one and one of the other). This approach yields results that are both different from and more accurate than those of the two-body force.
Nuclei are held together by the strong force, one of four basic forces that govern the universe. (The other three are gravity, which holds planets, solar systems, and galaxies together and pins us to
the ground, the electromagnetic force, which holds matter together and keeps us from, for instance, falling through the ground, and the weak force, which drives nuclear decay.)
The strong force acts primarily to combine elementary particles known as quarks into protons and neutrons through the exchange of force carriers known as gluons. Each proton or neutron has three
quarks. The strong force also holds neighboring protons and neutrons together into a nucleus.
It does so imperfectly, however. Many nuclei are unstable and will eventually decay, emitting one or more particles and becoming a smaller nucleus. While we cannot say specifically when an individual
nucleus will decay, we can determine the likelihood it will do so within a certain time. Thus an isotope's half-life is the time it takes half the nuclei in a sample to decay. Known half-lives range
from an absurdly small fraction of a second for beryllium-8 to more than 2 trillion trillion years for tellurium-128.
One job of nuclear theory, then, is to determine why nuclei have different half-lives and predict what those half-lives are.
"For a long time, nuclear theory assumed that two-body forces were the most important and that higher-body forces were negligible," noted team member and ORNL computational physicist Hai Ah Nam. "You
have to start with an assumption: How to capture the physics best with the least complexity?"
Two factors complicate the choice of approaches. First, two-body interactions do accurately describe some nuclei. Second, accurate calculations including three-body forces are very difficult and
demand state-of-the-art supercomputers such as ORNL's Jaguar, the most powerful system in the United States. With the ability to churn through as many as 2.33 thousand trillion calculations each
second, or 2.33 petaflops, Jaguar gave the team the computing muscle it needed to analyze the carbon-14 nucleus using the three-body force.
Carbon-14, with six protons and eight neutrons, is the isotope behind carbon dating, allowing researchers to determine the age of plant- or animal-based relics going back as far as 60,000 years. It
was an ideal choice for this project because studies using only two-body forces dramatically underestimate the isotope's half-life, which is around 5,700 years.
"With Jaguar we are able to do ab initio calculations, using three-body forces, of the half-life for carbon-14," Nam said. "It's an observable that is sensitive to the three-body force. This is the
first time that we've demonstrated at this large scale how the three-body force contributes."
The three-body force does not replace the two-body force in these calculations, she noted; rather, the two approaches are combined to present a more refined picture of the structure of the nucleus.
In the carbon-14 calculation, the three-body force serves to correct a serious underestimation of the isotope's half-life produced by the two-body force alone.
Dean and his colleagues used an application known as Many Fermion Dynamics, nuclear, or MFDn, which was created by team member James Vary of Iowa State University. With it, they tackled the carbon-14
nucleus using an approach known as the nuclear shell model and performing ab initio calculations or calculations based on the fundamental forces between protons and neutrons.
Analogous to the atomic shell model that explains how many electrons can be found at any given orbit, the nuclear shell model describes the number of protons and neutrons that can be found at a given
energy level. Generally speaking, the nucleons gather at the lowest available energy level until the addition of any more would violate the Pauli exclusion principle, which states that no two
particles can be in the same quantum state. At that point, some nucleons bump up to the next higher energy level, and so on. The force between nucleons complicates this picture and creates an
enormous computational problem to solve.
The carbon-14 calculation, for instance, involved a billion-by-billion matrix containing a quintillion values. Fortunately, most of those values are zero, leaving about 30 trillion nonzero values to then be multiplied by a billion vector values. As Nam noted, just keeping the problem straight is a phenomenally complex task, even before the calculation is performed; those 30 trillion matrix elements (at 8 bytes apiece) take up 240 terabytes of memory.
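For a sense of the core operation, the dominant kernel here is a sparse matrix-vector product. A toy, single-machine stand-in (nothing like the actual distributed MFDn code; the sizes below are picked just to run anywhere) might look like:

import numpy as np
from scipy.sparse import random as sparse_random

# Tiny stand-in for the sparse Hamiltonian matrix; MFDn applies the
# same operation to ~30 trillion nonzeros spread over many nodes.
n = 100_000
H = sparse_random(n, n, density=1e-4, format="csr", random_state=0)
v = np.ones(n)
w = H @ v  # one sparse matrix-vector multiply, the kernel of iterative eigensolvers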
"Jaguar is the only system in the world with the capability to store that much information for a single calculation," Nam said. "This is a huge, memory-intensive calculation."
The job is even more daunting with larger nuclei, and researchers will have a long wait for supercomputers powerful enough to compute the nature of the largest nuclei using the three-body force. Even
so, if the three-body force gives more accurate results than the two-body force, should researchers be looking at four, five, or more nucleons at a time?
"Higher-body forces are still under investigation, but it will require more computational resources than we currently have available," Nam said.
5 / 5 (1) Jul 13, 2011
Ow my aching head! Trying to compute those dratted 3-body forces! No wonder three's a crowd!
not rated yet Jul 13, 2011
Humbling news. This means that mankind will never see the day when we can precisely compute the state of a glass of milk.
I guess I needed that for reading Kurzweil and feeling that we'll conquer the Universe in no time. :-)
1 / 5 (2) Jul 13, 2011
2- or 3-body interaction is only an approximation: an effective description of field dynamics with e.g. singularities like charge (of the electric field, resulting in the Coulomb force).
Shell models represent the extremely complicated structure of the nucleus through a simple probability cloud - they only show the averaged situation (a thermodynamical model).
To really understand what's happening there, we need to search for its spatial structure: the field configuration behind it, with mechanisms leading to what we effectively observe (like here: http://www.scienc...__590130 )
1 / 5 (1) Jul 13, 2011
@qwrede: Not NO time. LOGARITHMIC time. Just hold your horses and try to keep breathing.
4 / 5 (2) Jul 13, 2011
Humbling news. This means that mankind will never see the day when we can precisely compute the state of a glass of milk.
The n-body-problem is a difficult mathematical problem, but progress is being made. Thus we cannot exclude the possibility of finding methods to get exact solutions some day in the future.
3 / 5 (2) Jul 13, 2011
Uh, I can't find any hint of how much their mega-calculation improved on the 'simple' version...
1 / 5 (1) Jul 13, 2011
"You have to start with an assumption: How to capture the physics best with the least complexity?" - ORNL computational physicist Hai Ah Nam
"The n-body-problem is a difficult mathematical problem" - frajo
Well Nature has some pretty cool mathematics. I bet we are using the wrong expression or at least we haven't recognized an expression that Nature uses.
4 / 5 (4) Jul 13, 2011
@hush1: The n-body problem is INTRINSICALLY difficult, because it involves recursive nonlinearities that generate chaotic attractors and other exotic animals. When your best model involves
sensitivities to initial conditions and measurement errors down to the zillionth decimal place, you're allowed to make the distinction.
1 / 5 (1) Jul 13, 2011
The only known way that a top quark can decay is through the weak interaction producing a W-boson and a down-type quark (down, strange, or bottom). Because of its enormous mass, the top quark is
extremely short lived with a predicted lifetime of only 5×10^-25 s.
To compute this decay takes longer than the process itself.
Obviously the computation model used to express this process and the process event itself differ.
Of course the n-body problem is intrinsically difficult.
Nature obviously 'computes' this differently if the n-body problem is the appropriate description of the process.
not rated yet Jul 13, 2011
typo correction:
"...5x10-25 s."
not rated yet Jul 13, 2011
I think this is why Quantum Computers are a necessity.
These little babies should be able to simulate Quantum physics by actually doing it, and allowing us to read out the results!!!
Hurry up and get these Quantum Comps going wil ya!!!!!!
Science will be revolutionised by them.
not rated yet Jul 13, 2011
Uh, I can't find any hint of how much their mega-calculation improved on the 'simple' version...
They stated that a 2 body approach yielded a value widely shy of the value established by experiment. A 3 body approach gave results almost spot on. And you didn't see that the more complex
calculation improved the simple one?
5 / 5 (1) Jul 14, 2011
The associated arxiv paper seems to be: http://arxiv.org/pdf/1101.5124 . As for the 2-body value of the half-life, eq. 1 in the paper gives the half-life in terms of known and calculable parameters.
The value they have improved is M_GT; they say the expected (2-body) value would be 0.275, while the observed value (which can be obtained in their model using certain values of the adjustable parameters) is 2×10^-3. Assuming the latter gives exactly 5730 years and all the other constants are the same, the 2-body value would be ~111 days (~0.303 years, ~18900 times
1 / 5 (1) Jul 16, 2011
The n-body-problem is a difficult mathematical problem, but progress is being made. Thus we cannot exclude the possibility of finding methods to get exact solutions some day in the future.
The question is why we should attempt it, if particle simulations are done routinely for nearly unlimited numbers of particles already. If someone wants to find some ultimate equation for it, he shouldn't be paid for it from public taxes, because of the apparent ineffectiveness of such an approach from a practical perspective.
not rated yet Jul 17, 2011
Not sure I get it. Carbon-14 is still made of 14 red and blue balls right?
not rated yet Jul 17, 2011
Yes, that is to say, this article is not about a new experimental finding and C-14 still is made up of 6 protons and 8 neutrons. This article is saying that we can now account for the half-life of
C-14 theoretically (so presumably we now understand it, and many other nuclei, better).
not rated yet Jul 18, 2011
The n-body problem is different from 3-body forces. In the n-body problem, there are n particles interacting via 2-body forces, that is, the potential energy has a bunch of terms V(x[i],x[j]) for i,j
in 1:n. In the case of this C14 calculation, there are 14 particles interacting via both 2-body forces and 3-body forces, i.e. potentials V(x[i],x[j],x[k]). | {"url":"http://phys.org/news/2011-07-tango-nuclear-analysis-three-body.html","timestamp":"2014-04-17T22:00:02Z","content_type":null,"content_length":"92788","record_id":"<urn:uuid:81d68d85-96b3-45ed-8823-e8189d9417f4>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00519-ip-10-147-4-33.ec2.internal.warc.gz"} |
Electrical Definitions and Symbols (NEC 2002)
Definitions and Symbols
AMPS = 746 (watts per horsepower) times the horsepower, and then divided by the volts.
C = Roman numeral for one hundred.
CM = Circular mil, which is the cross-sectional area of a conductor.
D = Distance, which is the length of the run from the power source; in some calculations, the length of the return path is added in as well.
E = Volts.
EMF = electromotive force or [voltage].
ETC. = “and so on , and so on”.
Ground = A loose terminology that could be referring to either grounding or grounded.
Grounded = neutral conductors which are normally current carrying conductors.
Grounding = Bonding ground conductors or earth ground conductors.
GFCI = "Ground Fault Circuit Interrupter". A device that, in generic terms, monitors the hot conductor, the grounded conductor, and the grounding conductor. The device monitors the circuit it protects for leakage between these lines, with a maximum allowed leakage of 5 milliamps, or .005 of one amp.
In generic terms, the GFCI was developed, and utilized in our wiring methods, to protect against accidental loss of life due to shock hazards.
The GFCI has been very successful, by all available statistics, at saving many lives since its conception and use began. When us old folks were kids, if a radio dropped in our bathtub while we were in it, we most likely were dead! Now, with the GFCI protection device, we might have a half a chance at survival. A GFCI is so sensitive that a leakage of .005 of one amp between the grounded conductor and the hot conductor, or between the hot conductor and the grounding conductor, or even between the grounded conductor {white} and the grounding conductor {green}, is designed to trip the device and de-energize the circuit that the GFCI protects.
Horsepower = a measurement of mechanical power that a motor produces. In electrical, one horse power equals 746 watts. In mechanical terms one horse power is produced when 33,000 pounds are lifted
one foot in one minute. Horse power represents the work being done by the output of an electric or internal combustion style motor.
Hot = A current carrying conductor with voltage present.
I = Load in amperes, which is the voltage divided by the resistance (Ohm's law: I = E/R).
Input = What you pay for
Input = In another sense, the input can be the primary of a transformer.
K = Short for kilo, Greek for one thousand. Remember this term for electrical terminology. Examples: kilowatt, KVA. "K" is also a conductor-resistance constant, which is the resistance of a conductor multiplied by its circular-mil area and then divided by one thousand; it is normally found in voltage drop calculations.
Kilowatt = A unit of one thousand watts.
M = Roman numeral for one thousand. Not normally found in electrical terminology.
Megawatt = A unit of one million watts.
Neutral = The return path of a circuit carrying only the unbalanced load of two ungrounded conductors. A neutral may not be broken by a switch or other type of device. A neutral will be referred to
as a grounded conductor.
Ohms = A measurement of resistance. The resistance through which one volt will force one ampere.
Output = Work performed
In another sense, the output can be the secondary of a transformer.
Parallel Circuit = Where the current divides and therefore has more than one path to flow; the total current is the sum of the individual currents. The voltage across each of the loads is the same. The total resistance is less than any individual resistor. In a parallel circuit, you can remove one resistance [light bulb etc.] without affecting the work performance of any of the other resistances [light bulb etc.] on that parallel circuit.
PF = Power factor which is the ratio between the power in watts, and the apparent power in volt-amps. Power factor is normally expressed in percentages.
R = Resistance, which is the opposition that a device or material offers to the flow of current; this opposition results in the production of heat in the material carrying the current. Resistance is measured in ohms. All resistances have two dimensions: cross-sectional area and length.
Series Circuit = Where the current flow is the same through every component. The total resistance equals the sum of the individual resistances. In a series circuit you cannot remove one resistance [light bulb etc.] from that series circuit. [The circuit will de-energize unless you bridge the gap created by the removal of that resistance, {light bulb etc.}.]
V = Volts which is a nominal value assigned to a circuit or system for the purpose of conventionally designating it’s voltage class. The pressure required to force one ampere through a resistance of
one ohm.
VA = Volt-amps, which is the apparent power: the volts multiplied by the amps.
VD = Waste of electricity due to the heating of a conductor.
Voltage drop, which is the amount of voltage lost due to the resistance, length, and load of the wire used.
VD = The load applied to a conductor multiplied by the resistance created by that conductor.
W = Watts, which is a unit of electrical power: the rate at which energy is used or work is performed. A unit measure of power.
Ungrounded = Hot Conductor.
Volt Amperes = Watts divided by the power factor.
If carefully said, and in general (Layman’s) terms, volt amperes and watts are generally the same, unless dealing with electronics. VA = W or watts divided by power factor.
Watts = W = VA x PF or volt amperes multiplied by the power factor
If carefully said, and in general (Layman’s) terms,
volt-amperes and watts are generally the same, unless dealing with electronics.
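A quick worked example using the formulas above (the numbers here are illustrative only): a 1 horsepower motor on a 120 volt circuit draws
AMPS = (746 x 1) / 120 = about 6.2 amps.
If the wiring feeding that motor presents a total resistance of 0.5 ohm, the voltage drop at that load is
VD = I x R = 6.2 x 0.5 = about 3.1 volts,
or roughly 2.6 percent of the 120 volt supply.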
This document is based on the 2002 national electrical code and is designed to give you an option, as a self-help, that should pass minimum code requirements. While extreme care has been implemented
in the preparation of this self-help document, the author and/or providers of this document assumes no responsibility for errors or omissions, nor is any liability assumed from the use of the
information, contained in this document, by the author and / or provider. | {"url":"http://www.selfhelpandmore.com/home-wiring-usa/definitions-calculations/electrical-definitions-symbols-2002.php","timestamp":"2014-04-19T06:51:54Z","content_type":null,"content_length":"39327","record_id":"<urn:uuid:17be6bd8-cb0d-43be-8eb5-c6d3b815734e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00204-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wynnewood, PA Algebra 1 Tutor
Find a Wynnewood, PA Algebra 1 Tutor
...I can help you become one too! OTHER: I pay a lot of attention to details, especially grammar and spelling. I can assist with any proofreading needs or help your child learn to read.
20 Subjects: including algebra 1, reading, statistics, biology
Hi! My name is Kristin and I have taught middle school math for the past 6 years. I've enjoyed tutoring students in elementary and middle school for the past 10 years.
21 Subjects: including algebra 1, reading, statistics, SAT math
...I am computer savvy. I am an honors student in algebra in high school. I am an honors student in math in my high school. I underwent training in Microsoft Access at the Prism Career Institute with
a GPA of 4.0 in 2010.
8 Subjects: including algebra 1, reading, elementary math, Microsoft Excel
...I have tutored all levels of Math from pre-algebra all the way up to Multivariable Calculus. I also have experience in tutoring chemistry, Organic Chemistry, physics and many other classes! I
have been a full time teacher for the past 4 years after receiving my Master's Degree in secondary Math education from Temple U.
18 Subjects: including algebra 1, chemistry, physics, calculus
...I hold Master's Degrees in Special Education and School and Mental Health Counseling. I am well qualified at modifying presentation of curricula, which is particularly useful for students who
are struggling in their regular education classes.I am a special education case-manager and teacher at a...
29 Subjects: including algebra 1, English, reading, GED
A portable library of parallel search algorithms and data structures
A note from the maintainer
ZRAM was written by Ambros Marzetta as part of his Ph.D. at ETH Zuerich.
Since then I have done some maintenance in order to use it in my own projects. Most of the code remains Ambros's.
ZRAM is a library of parallel search algorithms and the corresponding data structures. The implementation is application-independent and machine-independent. The interface is user-friendly and lets
non-specialists use the power of parallel computers. ZRAM has led to the solution of previously unsolved QAP and vertex enumeration instances.
The library can be used in combinatorial optimization, enumeration, heuristic search and other areas. It is written in ANSI C, and its source code is available here.
Implemented parallel search algorithms and some applications using them:
• Branch and bound
□ Quadratic Assignment Problem
□ Minimum vertex cover
□ Traveling Salesman Problem (TSP)
• Backtracking
□ N-queens problem (the standard backtrack example) (7 queens and 8 queens PostScript)
□ Enumeration of all partitions of a set
Parallel computers used:
• Workstation networks using MPI
• Intel Paragon
• NEC Cenju-3
• Sun Ultra
(Only MPI is currently tested).
• Ambros Marzetta, ZRAM: A Library of Parallel Search Algorithms and Its Use in Enumeration and Combinatorial Optimization. PhD thesis Nr. 12699, ETH Zürich, 1998 (119 pages; abstract, Kurzfassung
gzipped PostScript, PostScript, Acrobat PDF).
• Adrian Brüngger, Ambros Marzetta, Jens Clausen and Michael Perregaard, Solving Large-Scale QAP Problems in Parallel with the Search Library ZRAM, Journal of Parallel and Distributed Computing 50
(1998), 157-169. (14 pages; PostScript, gzipped PostScript, Acrobat PDF)
• Adrian Brüngger, Ambros Marzetta, Komei Fukuda and Jürg Nievergelt, The Parallel Search Bench ZRAM and its Applications, Annals of Operations Research, 90:45-63, 1999.
• Adrian Brüngger, Ambros Marzetta, Jens Clausen and Michael Perregaard, Joining Forces in Solving Large-Scale Quadratic Assignment Problems in Parallel, Proceedings of the 11th International
Parallel Processing Symposium IPPS 97. (10 pages)
• Adrian Brüngger and Ambros Marzetta, The Parallel Search Bench ZRAM and its Applications, CrosSCutS December 1996. (4 pages; gzipped PostScript) | {"url":"http://www.cs.unb.ca/~bremner/software/zram/","timestamp":"2014-04-21T02:00:09Z","content_type":null,"content_length":"6120","record_id":"<urn:uuid:8a1143e6-36b4-4a37-81ae-6458d2ce92d8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Tutors
Huntington Station, NY 11746
Experienced and patient Math Tutor
...am a former Wall Street quant with degrees in Math, Computer Science and Finance. I have been tutoring since my days on the Math Team at Stuyvesant HS. I was a high school teacher for a brief time
and, more recently, I taught Probability and Statistics at Stony...
Offering 10+ subjects including algebra 1 and algebra 2 | {"url":"http://www.wyzant.com/Dix_Hills_NY_Algebra_tutors.aspx","timestamp":"2014-04-19T07:58:53Z","content_type":null,"content_length":"59650","record_id":"<urn:uuid:1bb5729b-1845-429f-8c91-d85d7ff049f1>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating the normal from 3 points
I have three points
and I want to calculate the normal out of them. What I did is:
normal = cross(P0-P1, P0-P2);
and then I wanted to plot the normal so what I did is,
c = normal + P0 %end position of normal vector
quiver3(P0(1), P0(2), P0(3), c(1), c(2), c(3));
but it didn't work (It looks like there is an angle between the line and the plane. So it is not the normal).
Any suggestions please?
matlab vector
2 what do you mean by "did not work"? I suggest you add an example: the values of P0, P1 and P1 and a screen shot of your quiever3 output. – Shai Jul 30 '13 at 14:42
Might be helpful: stackoverflow.com/questions/2035659/… – amustafa Jul 30 '13 at 14:43
Full code might help us help you, as well as the content of your workspace. What "didn't work" means btw ? Did you get an error ? – CTZStef Jul 30 '13 at 14:45
closed as unclear what you're asking by Shai, natan, C-Pound Guru, Ralgha, woodchips Jul 30 '13 at 19:11
1 Answer
"It has an angle so it is not the normal" . There are two problems.
First problem - you misinterpret how the quiver3 command works. The first three elements are the start of the quiver (the back of the arrow), but the next three are not the
endpoint (your normal + P0) - they are the direction. So I think you need to change your code to
normal = cross(P0-P1, P0-P2);
normal = normal / norm( normal ); % just to make it unit length
quiver3(P0(1), P0(2), P0(3), normal(1), normal(2), normal(3));
axis equal
The second problem is the axis scaling: with unequal axes, a vector that really is perpendicular to the plane can be drawn at an apparent angle, which is why the axis equal line above matters.
You can confirm a vector is normal to your plane by confirming that the dot product is zero:
disp(dot(P0 - P1, normal));
disp(dot(P0 - P2, normal));
You would expect the result to be a "number very close to zero" - rounding error will usually prevent things from being exactly zero (consider anything smaller than about 1e-16 times
the length of the vectors to be "zero").
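For instance, with some made-up points (the values here are mine, purely for illustration):
P0 = [0 0 0]; P1 = [1 0 0]; P2 = [0 1 0];
n = cross(P0-P1, P0-P2); % gives [0 0 1] for these points
disp(dot(P0-P1, n)); disp(dot(P0-P2, n)); % both print 0 exactly in this case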
1 the dot product is not zero, it is -1.084202172485504e-019 – Jack_111 Jul 30 '13 at 15:08
1 Good! That's "zero to within rounding error". It is -0.000000000000000000108 . Close enough for almost anything. How do things look when you set the axes to equal? – Floris Jul
30 '13 at 15:13
Not the answer you're looking for? Browse other questions tagged matlab vector or ask your own question. | {"url":"http://stackoverflow.com/questions/17950002/calculating-the-normal-from-3-points","timestamp":"2014-04-23T16:56:46Z","content_type":null,"content_length":"64183","record_id":"<urn:uuid:ba94acf7-6e2d-4f09-81a8-9663d848c0e6>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00110-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physical Chemistry Lecture Notes
for T. W. Shattuck, Physical Chemistry
Thermodynamics, Electrochemistry, and Equilibrium
Most of the lecture notes have the same format: formula lines for the proofs but without the reasons for each step. Room is provided in the right-hand column for you to fill in with the reasons each
step was taken. In this way you can concentrate on the flow of the proofs and the meaning of each formula and not worry about copying down each formula correctly for your notes.
Reactivity, Ch. 1 and 2
Barometric Formula
First-Order Homogeneous Differential Equations, General Pattern P1
Concentration Measures, Molarity, Molality, Mole Fraction
Absorption Spectroscopy, Beer-Lambert Law
Electrolytic Conductivity
Electric Flux
Absorbance of Mixtures
Linear Curve Fitting
Error Analysis
Excel linest function Excel 2007 Excel linest function (earlier versions)
1st and 2nd Order Reactions
Progress to Equilibrium
Exponential Temperature Dependence, e^-E/RT
Temperature Jump Kinetics
Parallel Reactions-Competitive Reactions
Consecutive Reactions
Integrating Rate Laws Using the Finite Difference Approximation
Kinetics Mechanism Simulation Introduction
First Order Rate Laws and Stella
SN1 Mechanism
Pre-Equilibrium Mechanism - Michaelis-Menten Mechanism
Chain Mechanisms
Briggs-Rauscher Oscillating Iodine Clock Mechanism
Unimolecular Reactions - Lindemann-Hinshelwood Mechanism
Detailed Balance: Molecularity
Detailed Balance: Activation Energy for a Reaction Sequence
Dynamic NMR
Photochemical Steady State and Stern Volmer Quenching
Langmuir Adsorption
Formula Sheet 1
First Law
Kinetics of Thermal Transfer
Differential Scanning Calorimetry
Van der Waals Liquefaction
Taylor Series and the Virial Equation
Basic Derivatives, Isothermal Compressibility and Coefficient of Thermal Expansion
Integrating the Basic Derivatives
Non PV Work
Meaning of the Reaction Enthalpy
Temperature Dependence of the Enthalpy of Reaction
Equipartition Theorem Predictions for Internal Energy and Cv
Normal Mode Analysis
Joule-Thomson Expansion
Partial Derivative Conversion
First Law and Ideal Gases
Entropy and Free Energy
Entropy (Chapter 10)
Entropy, Temperature, and Heat Transfer
Thermodynamic Definition of Entropy (Chapter 11)
Carnot Cycle
Ideal Gas Carnot Cycle
Thermodynamic Definition of Entropy
Statistical Thermodynamic Definition of Entropy (Chapter 12)
Entropy and Probability
Boltzmann Distribution and the Most Probable State
The Thermodynamic Definition of Temperature: b=1/kT
The Thermodynamic Definition of Entropy
Entropy and Applications
The Second Law: Entropy and the Clausius Inequality
Ideal Gas Entropy Changes
Entropy and Phase Transitions
Absolute Entropy
Temperature Dependence of the Reaction Entropy
Entropy and Spontaneous Processes
Entropy and the Surroundings for the Ideal Gas
Spontaneity and the Foundations of Thermodynamics: Free Energy
Thermodynamic Potential Functions
Foundations of Thermodynamics
Thermodynamic Equations of State
Chemical Potential
Thermodynamics of Mixing of Ideal Gases
Non-PV work and Gibbs Free Energy
Formula Sheet 2
The Fine Arts and Science
Phase Equilibria
Clausius-Clapeyron Equation
Ehrenfest Criteria, Second Order Phase Transitions
Concentration Measures, Molarity, Molality, Mole Fraction
Chemical Potentials in Solution
Boiling Point Elevation
Osmotic Pressure
Variable Pressure and Temperature Distillation
Gibbs Phase Rule
Hammett Free Energy Relationships
Isonarcotic Activity of Esters, Alcohols, Ketones, and Ethers
Visual Approach to Activity Coefficients and Henry's Law Standard States
Henry's Law constants and Free Energies of solvation
Activity of a Non-Volatile Solute: Osmotic Coefficient
Activities of Ions in Solution
Debye-Huckel Theory
Gibbs Free Energy of Solvation and the Poisson Equation
Debye-Huckel Theory: Dilute Point Charge in a Continuum Dielectric
Poisson-Boltzmann Equation
Chemical Equilibrium
Gibbs Free Energy and Chemical Equilibrium
Temperature Dependence of the Equilibrium Constant, Kp
Temperature Dependence of the Equilibrium Constant, Kp, and Entropy
Standard States and Different Concentration Measures
Molarity vs. Molality Standard States
Equilibria with Pure Liquids and Solids and the Solvent in Dilute Solution
Biochemist's Standard State
Electrochemistry and Electrolyte Solutions
Photosynthesis Z-scheme
Ionic Activities from Electrochemical Cells
Metal Insoluble Salt Electrodes
Standard Reduction Potentials
For more information or corrections contact Tom Shattuck at twshattu@Colby.edu.
Colby Chemistry Home Page
Last modified: 4/28/2013 | {"url":"http://www.colby.edu/chemistry/PChem/Lecture1SDS.html","timestamp":"2014-04-18T18:24:08Z","content_type":null,"content_length":"13385","record_id":"<urn:uuid:f0fc9c39-882f-4056-8ee9-0f38365787b8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00543-ip-10-147-4-33.ec2.internal.warc.gz"} |
Defying gravity: The uphill roller
Issue 40
September 2006
"Mechanics is the paradise of the mathematical sciences because by means of it one comes to the fruits of mathematics."
Leonardo da Vinci
The inspiration for this article comes from a slightly unusual source: The Proceedings of the Old Bailey from April 18th 1694. On this particular day 29 death sentences were passed at the court
house, as well as numerous orders for brandings; there would have been 30 death sentences had not one lady's pregnancy saved her — she had successfully "pleaded her belly". The business part of the
document ends with a list of the 29 unfortunates and continues to another list; this time of advertisements. Surprising though it may seem, advertisements for anything from quack remedies to
religious texts formed an important part of the court proceedings between the 1680s and the 1750s. On April 18th 1694 the second item on the list was an advert for a book:
Pleasure with Profit: Consisting of Recreations of divers kinds, viz. Numerical, Geometrical, Mathematical, Astronomical, Arithmetical, Cryptographical, Magnetical, Authentical, Chymical, and
Historical. Published to Recreate Ingenious Spirit, and to induce them to make further scrutiny how these (and the like) Sublime Sciences. And to divert them from following such Vices, to which Youth
(in this Age) are so much inclin'd. By William Laybourn, Philomathes.
If the author of these lines is to be believed, then those who were tried in the docks that day may well have been spared their gruesome fate, had they only been given access to this work. We will
touch on only a small part of it ourselves: to be precise, pages 12 and 13.
A detail of page 12 of Pleasure with Profit.
William Leybourn (1626-1719) (alias Oliver Wallingby) was in his time a distinguished land and quantity surveyor (although he began his working life as a printer). Such was his prestige, he was
frequently employed to survey the estates of gentlemen, and he also helped to survey the remnants of London after the great fire of 1666. A prolific and eclectic author, his work The Compleat
Surveyor, which was first published in 1653 and ran to five editions, is regarded as a classic of its kind and (in collaboration with one Vincent Wing) the 1649 volume Urania Practica, was the first
book in English devoted to astronomy.
In 1694 he had published the recreational volume Pleasure with Profit. It contained a delightful mechanical puzzle, attributed to one "J.P.", which has become known as the Uphill Roller.
Pages 12 and 13 of the book detail the construction of a double cone and two inclined rails along which the cone can roll — uphill! His final paragraph explains the paradox: even though the cone does
ascend the slope, its centre of gravity will actually move downwards, if the experiment is set up in the right way. Although one's senses might be confounded, the law of gravity is not.
This animation shows a virtual uphill roller in action. In the animation below you can see the downward movement of the centre of gravity of the double cone.
Before we examine his explanation, we will look at the matter through modern eyes, using elementary trigonometry. The experiment consists of two inclined rails and a double cone. The cone will roll
"uphill" resting on the top surfaces of the two rails.
Figure 1 shows a vertical section of the set-up. The inclined line represents the top surfaces of the rails. The bottom end of the slope, shown at the left-hand side of figure 1, is at a distance a
from the ground. The top end of the slope, on the right-hand side of figure 1, is at a distance b from the ground. The figure also shows the cross-section of the double cone after it has rolled some
way up the slope. The point in the middle of this cross section is the double cone's centre of gravity, which we call G. The angle of inclination of the rails is α, and we give G the coordinates (x,y). As the
double cone rolls up the rails, its centre of gravity G moves along a path given by some equation y = f(x). Our aim is to find this equation.
Figure 2 shows the view from the front, in other words from the point where the two rails meet. We see the cone as if it had been sliced vertically and lengthwise. Again we see the centre of gravity
G, as well as the points P and Q where the top surfaces of the two rails meet the double cone. The points G[1], P[1] and Q[1] are the projections of these three points to the ground. The angle γ marked in the figure is half of the angle at the apex of the cone.
Figure 3 shows the projection of the whole set-up to the floor. The diamond shape is the shadow the double cone casts on the floor and the two lines emanating from the points P[1] and Q[1] are the
shadows cast by the two rails. They meet at the point O and at an angle 2β. As before, the points G[1], P[1] and Q[1] are the projections of G, P and Q.
With the coordinate system from figure 1 in place we can find the equation of the path of the centre of gravity of the double cone as it rolls up the slope. For convenience we write XY for the
straight line segment running from a point X to a point Y.
First consider figure 3. The distance between O and G[1] is precisely the x-coordinate of the point G. Using basic trigonometry we have that tan β = (P[1]Q[1]/2)/x.
Since P[1]Q[1] = PQ we get PQ = 2x tan β.
Now let's take the front view from figure 2. Write S for the point that lies on the line from G to G[1] at the same height as P and write R for the apex of the cone which lies on the line from G to G
[1]. See figure 4.
Now the height y of G is SG[1] + SG. For the second part we have SG = r-RS, where 2r is the thickness of the double cone at its centre (see figure 4). So RS = (PQ/2) tan γ = x tan β tan γ.
This gives SG = r − x tan β tan γ.
Figure 5 below shows the same view as figure 1. Noting that the point S lies on the inclined line of figure 1, at horizontal distance x from the origin of the coordinate system, we get SG[1] = a + x tan α.
Therefore we have y = SG[1] + SG = a + x tan α + r − x tan β tan γ.
The path of the centre of gravity of the cone is, then, the straight line y = (a + r) + x (tan α − tan β tan γ),
which has the gradient tan α − tan β tan γ. In particular, the centre of gravity moves downwards as the cone rolls up the slope exactly when this gradient is negative, that is, when tan β tan γ > tan α: this is the inequality that a working model must satisfy.
Of course, to properly appreciate the paradox a physical model is needed. For the author's model (made by his long-term friend Brian Caswell), the three angles, measured in degrees, make it plain that the inequality holds.
An excerpt from Leybourn's recipe for an uphill roller
The things necessary for this Experiment are, First, A Roller of Wood [...] of two Cones (or Sugar Loaves). Let the thickness in the middle be about 5 or 6 Inches, the length [of the cone] about 3
times the thickness. [...] You must provide two straight smooth Rulers about a Yard in length, and strong enough to bear the weight of the Roller. Lastly, you must have three pieces of Wood to
support the ends of the rulers; the first about two or three inches thick [to support the rulers at the bottom of the slope]; the other two [to support the rulers at the top of the slope] must be
thicker than the first by somewhat less than half the diameter of the Roller.
Leybourn's recipe
Now that we have a clear criterion to use, we can look more closely at Leybourn's instructions for building an uphill roller (which we've reproduced on the right) and check that they really do give
rise to angles that satisfy the required relationships. The instructions specify that each of the cones that make up the double cone should be between 5 and 6 inches in thickness, so taking the upper
limit of 6 we get r=3. The length of each of the two cones is to be three times its thickness, so it is 9. The rails are to be one yard in length, and since one yard is 36 inches, the length of the
slope is 36.
The blocks supporting the end of the slope should be thicker than those supporting the beginning of the slope by "somewhat less" than half the diameter of the cone, in other words by less than r (as
marked in the left-hand diagram in figure 6 below). We can choose the thicknesses of the blocks so that their difference is less than, but very close to r. Then the difference in height between the
bottom of the slope and the top of the slope, b - a, is roughly equal to r.
From this and the right-hand diagram in figure 6, we now deduce that sin α = r/36 = 3/36 = 1/12. This means that tan α = 3/√(36² − 3²) = 1/√143.
Now let's turn to the angle β. The right hand diagram in figure 6 shows that the sloped rail and the dotted horizontal line form a right angled triangle with an angle α, so the length of the rail's shadow is equal to 36 cos α = 36 × (√143/12) = 3√143.
At the end of the cone's journey, the point P[1], shown in figure 7, will be at the left-hand apex of the double cone, and its distance to G will be the length of one cone, which is 9. This gives G[1]P[1] = 9.
Again we have a right angled triangle whose third side is √((3√143)² − 9²) = √(1287 − 81) = √1206.
From figure 8 we get that tan β = 9/√1206.
In summary, Leybourn's instructions reduce to: tan α = 1/√143, tan β = 9/√1206 and tan γ = 3/9 = 1/3. Our inequality requires that tan β tan γ > tan α, that is, 3/√1206 > 1/√143, or equivalently 9 × 143 > 1206,
or that 143 > 134 — which indeed it is! (Using the same analysis, his lower limit of 5 inches requires that 5159>4934).
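The arithmetic is easy to check numerically; here is a quick sketch in MATLAB/Octave (the script and variable names are ours, not Leybourn's):
r = 3; l = 9; rail = 36; % half-thickness, length of one cone, rail length (inches)
tan_alpha = r / sqrt(rail^2 - r^2); % = 1/sqrt(143), since the rise is roughly r
shadow = sqrt(rail^2 - r^2); % the rail's shadow on the floor, = 3*sqrt(143)
tan_beta = l / sqrt(shadow^2 - l^2); % = 9/sqrt(1206)
tan_gamma = r / l; % = 1/3
gradient = tan_alpha - tan_beta*tan_gamma % about -0.0028: negative, so G descends
Swapping in r = 2.5 and l = 7.5 reproduces the 5159 > 4934 case for the 5 inch roller in the same way.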
With the mystery of the uphill roller explained we will leave the reader with the following sentiment expressed in the book, with which we whole-heartedly agree:
But leaving those of the Body, I shall proceed to such Recreations as adorn the Mind; of which those of the Mathematicks are inferior to none.
About the author
Julian Havil
is a mathematics teacher at Winchester College, and has been so for the past 31 years. Having given up a multitude of sports over the years he now contents himself physically with an occasional
lengthy walk and, having been recently married, emotionally with the company of his wonderful wife Anne; the rest of the time is spent reading about and writing about mathematics — and enjoying good
food, wine, company and holidays. Havil's previous book Gamma: Exploring Euler's Constant was reviewed in the November 2003 issue.
Submitted by Anonymous on October 12, 2010.
There is a precise article in the European Journal of Physics by S. Gandhi and C. Efthimiou about the uphill roller. The reference is: "The Ascending double cone: a closer look at a familiar
demonstration", volume 26 in 2005, pages 681-694. The authors have looked at several details (some of which I do not understand since I am still in high school). | {"url":"http://plus.maths.org/content/defying-gravity-uphill-roller","timestamp":"2014-04-16T05:27:39Z","content_type":null,"content_length":"52230","record_id":"<urn:uuid:b46c13e1-7482-4838-96ba-aee96663df2c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00054-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is consumer surplus, and how to calculate it.
Consumer surplus is when a consumer derives more benefit (in terms of monetary value) from a good or service than the price they pay to consume it. Imagine you are going to an Electronics store to
buy a new flat panel TV. Before you go to the store, you decide to yourself that you are not going to pay more than $750 for a TV. This $750 is your maximum willingness to pay for the TV. After
entering the store, you find a TV you really like for only $500! Since you were willing to pay $750 for the TV, and you only ended up paying $500 for it, you have saved $250. This $250 is called
consumer surplus by economists, because it is the “extra” or “surplus” value you received from the good beyond the price you paid for it.
But figuring out an individual’s surplus isn’t good enough. We want to figure out the total amount of surplus for all consumers in the economy and derive the total consumer surplus. An easy way to
visualize is shown to the right. In this mini economy we have 5 consumers, and we line them up left to right by their willingness to pay (consumer 1 is willing to pay more than consumer 2, etc.). You
can see that each consumer pays the same price for the good, so their surplus is calculated as the difference between their willingness to pay, and the actual amount they have to pay.
For the first consumer, he is willing to pay $20, but only has to pay $5, so he gets a surplus of $15. The next consumer is willing to pay $16, but only has to pay $5, so he gets a surplus of $11.
Using the same logic, the third, fourth, and fifth consumers have surplus values equal to $5, $3, and $0 (because their maximum willingness to pay is equal to the price, so consumer surplus is zero).
To get total surplus we add these values up: $15 + $11 + $5 + $3 + $0 = $34. The total consumer surplus in this economy is $34. But in reality most graphs won’t look like this: you will usually be given a linear
demand curve, so let’s do another example. In the graph below you will see a typical demand curve with a price line intersecting it. This price will occur at P* and will intersect the demand curve at
Q* (which gives us equilibrium price and quantity). Because this point is at equilibrium, Q* is the quantity of goods that will be purchased from the market, and consumers will pay a price of P*.
We will use this information to calculate consumer surplus for this graph. Remember that consumer surplus is equal to the difference between a consumer’s maximum willingness to pay and the price that
they do have to pay. Since the demand curve is above the price at points to the left of Q* each of these purchases results in surplus. When P* intersects the demand curve, there is no surplus, and to
the right of Q* consumers are not willing to pay the price. So in the graphical example, we would have to calculate the area of a triangle, which is equal to ½(base × height). In our example, the base
is equal to Q*, and the height is equal to Pmwp (maximum willingness to pay price) minus P* (actual price paid). This gives us ½ × (Pmwp − P*) × Q*.
Whenever you are asked to calculate consumer surplus, remember to plug in the numbers given to you in this formula. You should be able to figure out what the Pmwp is (where demand intersects the Y
axis) and P* and Q*. With this information, you just have to calculate the area of the triangle, and you know what consumer surplus will be.
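If you want to check such numbers with software, here is a small sketch in MATLAB/Octave (the five willingness-to-pay values match the example above; the linear-demand values are made up):
wtp = [20 16 10 8 5]; p = 5; % the five consumers, market price $5
discrete_CS = sum(max(wtp - p, 0)) % = 34, matching the example
Pmwp = 20; Pstar = 5; Qstar = 100; % hypothetical linear-demand numbers
triangle_CS = 0.5 * (Pmwp - Pstar) * Qstar % = 750, the area of the triangle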
3 comments:
Akshay on February 7, 2012 at 11:24 PM said...
Hello, Thank you for this explanation on consumer surplus. The first chart of the individual consumers is extremely helpful in understanding this concept. I did have one question though. When we
look at the Demand Curve, why don't we just add up the individual prices that consumers are willing to pay above the equilibrium price? Why do we have to calculate the area of the triangle?
@Akshay, essentially that is what we are doing by calculating the area of the triangle. Imagine if there were 300,000,000 consumers: calculating the surplus for each one would be tedious, but
calculating the area of the triangle approaches this value and is much easier. This is similar to a Riemann sum vs. an integral for those with calculus.
Hi I have been given this equation :
I have found both P and Q. However from this information they want me to find the economic surplus? How can I do that? | {"url":"http://www.freeeconhelp.com/2011/09/what-is-consumer-surplus-and-how-to.html","timestamp":"2014-04-20T03:09:23Z","content_type":null,"content_length":"80962","record_id":"<urn:uuid:a88ac6e2-b628-4c5f-ac33-3ac1b8c96109>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
Constructions unique up to non-unique isomorphism
1) Fields have algebraic closures unique up to a non-unique isomorphism.
2) Nice spaces (without base point) have universal covering spaces unique up to a non-unique isomorphism.
3) Modules have injective hulls unique up to a non-unique isomorphism.
Such situations can lead to interesting groups - the absolute Galois group, the fundamental group, and the "Galois" groups of modules introduced by Sylvia Wiegand in Can. J. Math., Vol. XXIV, No. 4,
1972, pp. 573-579.
I'd appreciate any insight into the abstract features of situations which give rise to this type of phenomenon. And I'd appreciate as many examples from as many parts of mathematics as possible.
7 It is possible to functorially define the universal covering space of a non-pointed space X: take the diagram $\Pi_1(X) \to Top$ sending a point $x\in X$ to the (pretty well unique) covering space
one gets from the pointed space $(X,x)$, and a homotopy class of paths to the map between covering spaces this induces. The colimit of this diagram is the universal covering space of $X$, and this
time this is functorial wrt maps $X\to Y$. I learned this from Todd Trimble, I think. – David Roberts Jan 30 '11 at 9:59
6 @ David: "the colimit of this diagram is the universal covering space of X", are you sure this is correct? What you described is essentially the groupoid of universal covering spaces and covering
space maps inside Top and you are taking the colimit inside Top. This is the same as taking a single covering space and taking the quotient by all covering automorphisms, so you just get X back. –
Chris Schommer-Pries Jan 30 '11 at 14:46
2 Doesn't this always happen when the construction has non-trivial automorphisms? – Nick S Jan 30 '11 at 18:36
@ David - this coequalizer still just gives B back. Try an explicit example like B = circle. The point is that there are many paths and so the different universal covers are identified with
each other in more than one way. This forces you to take a quotient of the universal covers which is too small (namely B itself) and no longer a universal cover. You can see that this has to be the
case because the paths from b to b (up to homotopy) are just $\pi_1$ and so this colimit factors through the quotient by the action of $\pi_1$. Agreed? Where in the n-lab is this written? – Chris
Schommer-Pries Jan 31 '11 at 18:57
5 There cannot be any such construction. The group of homeomorphisms from the circle to itself has no compatible action on the (or should I say "a") universal covering space. – Tom Goodwillie May 17
'11 at 4:10
12 Answers
The first two examples can be described more or less uniformly. Associated to a field $F$ is the category $C_F$ of algebraic field extensions of $F$ (whose objects are morphisms $F \to E$
and whose morphisms are commutative triangles). This category has a weak terminal object given by any algebraic closure $F \to \bar{F}$. The full subcategory on the algebraic closures is
what one might call the absolute Galois groupoid of $F$ (which is a perfectly canonical construction), and choosing an object in this groupoid (which is not) gives the absolute Galois group.
Similarly, associated to a nice space $X$ is the category $C_X$ of connected covers of $X$ (whose objects are covering maps $Y \to X$ and whose morphisms are commutative triangles). This
category has a weak initial object given by any universal cover $\bar{X} \to X$. The full subcategory on the universal covers is (equivalent to?) the fundamental groupoid of $X$ (again, a
perfectly canonical construction), and choosing an object in this groupoid (which is not) gives the fundamental group.
So you will get this kind of behavior in any situation where you have a weak universal object instead of a universal one. (This partially covers the third example, since injectivity is also
a weak universal property.) A general way to engineer a situation similar to the above two might be to look at something like the category of (epi?)morphisms into an object or (mono?)
morphisms out of it in your favorite category and see what happens.
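To pin the term down (this gloss and the displayed formula are mine, not the answerer's): a weak terminal object only demands existence, not uniqueness, of the morphisms into it,
$$W \in \mathcal{C} \text{ is weakly terminal} \iff \text{for every } X \in \mathcal{C} \text{ there exists at least one morphism } X \to W,$$
and dually for weak initial objects; dropping the uniqueness requirement is exactly what leaves room for a non-trivial automorphism group.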
In any case, if you are only interested in these constructions because they produce interesting groups, then I think nowadays the modern thing to do is to produce interesting groups using
Tannaka-Krein duality.
1 Nice observation! In fact, the Galois group of modules alluded to in example 3) is also of this type: it's the automorphism group of the weak initial object given by the injective hull
inside the category of embeddings (monomorphisms) $M \to I$ into an injective, so it's analogous to example 1) and dual to 2). – Theo Buehler Jan 30 '11 at 14:18
3 Not to mention it's also the weak terminal object in the category of essential extensions of M (with embeddings for the maps). – Harry Altman Jan 30 '11 at 14:27
3 "You will get this kind of behavior in any situation where you have a weak universal object instead of a universal one" --- this is so only when the weak universal objects are all
isomorphic, no? – Steven Landsburg Jan 30 '11 at 15:00
@Steven: I guess that would give a groupoid which is not connected, but it's still possible to restrict attention to each of its connected components. – Qiaochu Yuan Jan 30 '11 at 15:17
I don't understand your last comment. It seems to me that this is due to the fact that after all I'm not so sure what the good definition of "weak initial/terminal" object is, anyway. –
Theo Buehler Jan 30 '11 at 15:58
Any two injective resolutions (of an object in an abelian category) are homotopy equivalent, but this homotopy equivalence is not unique. This is of course because the lifting
property in the definition of "injective" does not require any uniqueness.
The connected sum of oriented manifolds is unique up to homeomorphism, but this homeomorphism is not unique.
A bit silly, but: In a short exact sequence $0 \to A \to B \to C \to 0$ in a semisimple abelian category, $B$ is unique up to isomorphism (namely, $B \cong A \oplus C$), but the
isomorphism is not unique.
1 But the equivalence of injective resolutions is unique in the correct higher sense, i.e. the space of choices is contractible. – Thomas Nikolaus May 17 '11 at 7:31
@Thomas, is that indeed true? I've never thought of it but it looks like an interesting question. What do you mean by 'space of choices' in this case? – Fernando Muro Feb 10 '12 at
For a field $k$ and a natural number $n$, the vector space of dimension $n$ over $k$ is unique up to a non-unique isomorphism, though this somehow feels "less unique" to me than your other
examples. I thought at first that this might be due to its not fitting into the class of examples described by Qiaochu, but I suppose you can force it into that class by considering the
category of $n$-dimensional vector spaces over $k$. But that in turn feels considerably more ad hoc (at least to me) than considering the category of algebraic field extensions.
12 Except when the field is $\mathbf{F}_2$ and the dimension is $1$:) – Chandan Singh Dalawat Jan 30 '11 at 15:52
3 Or any field and the dimension is 0 ;-)) – Johannes Hahn Jan 31 '11 at 0:15
I attended a talk about some problem in the classification of—well, of something to do with finite fields, I don't remember—at which a colleague was moved to ask “What happens here if $
\lambda \ne \mu$?” “Oh, yes,” said the speaker, as if discounting a trivial special case, “the results don't work if there are more than 2 non-$0$ scalars in the field.” – L Spice May
17 '11 at 5:03
My favourite: the mapping cone of a morphism in a triangulated category is unique up to non-unique isomorphism. This fact has given rise to a lot of research in this topic, and it
still does.
In recent work in set theory the concept of "canonical structure" has emerged, in connection with combinatorial work on pcf theory. The idea is that there are many constructions that depend
on the axiom of choice but, once realized, are actually independent of the specific choices made. Usually, this involves two steps: You construct an object, which is not quite canonical
(say, a collection of subsets of a cardinal $\kappa$), but then you recognize that there is a natural ideal (say, the non-stationary ideal on $\kappa$) and the corresponding equivalence
classes are canonical. Of course, by switching to a new model of set theory, the "canonical structure" may change, so sometimes one thinks of it as a sort of invariant of the models.
The first papers that explicitly mentioned the name "canonical structure" are by Cummings, Foreman, and Magidor, "Canonical structures in the universe of set theory", Parts I and II, Annals
of Pure and Applied Logic 129 (2004), 211-243, and 142 (2006), 55-75.
The following quote is from the beginning of the introduction to Part I:
It is a distinguishing feature of modern set theory that many of the most interesting questions are not decided by ZFC, the theory in which we profess to work; to put it another way, ZFC
admits a large variety of models. A natural response to this is to identify invariants which may take different values in different models, and which codify a large amount of information
about a model.
Of particular interest are invariants which are canonical, in the sense that the Axiom of Choice is needed to show that they exist, but once shown to exist they are independent of the
choices made. For example the uncountable regular cardinals are canonical in this sense.
Shelah discovered a large class of canonical invariants, the study of which he labeled PCF theory. These invariants include two which are central in this paper; Shelah [24, 26] (under
some mild cardinal arithmetic assumptions on the singular cardinal $\mu$) defined two stationary subsets of $\mu^+$, the sets of good and approachable points. The definitions of these
sets appear to depend on certain arbitrary choices, but (modulo the club filter) are in fact independent of these choices. Other canonical structures we study in this paper include the
stationary sets of tight and internally approachable structures, and the collection of good points on a scale.
The two references cited in the quote are S. Shelah, "On successors of singular cardinals", in M. Boffa, D. van Dalen, and K. McAloon, editors, Logic Colloquium ’78, pages 357–380,
Amsterdam, 1979. North-Holland; and S. Shelah, "Cardinal Arithmetic". Oxford University Press, Oxford, 1994.
Besides the ongoing work by Cummings-Foreman-Magidor and Shelah, these ideas have been extended by others; Krueger and Ishiu come to mind.
Sounds like a case for 'the': ncatlab.org/nlab/show/the – David Roberts Jan 30 '11 at 20:47
1 Ok, @David, I'm curious. What do you mean? – Andres Caicedo Jan 31 '11 at 0:47
@Andres: What David is referring to is the fact that universal constructions in category theory have this property, and that this is not a new feature of set theory. For instance,
suppose we want to make the product with an object $S$ a functor (i.e. the functor $(-)\times S$). However, while the product is unique up to unique isomorphism, we have to choose a
representative of each isomorphism class as well as the connecting morphisms between them. This requires choice or global choice, but after we choose a specific representative of the
functor using choice, it is unique up to unique iso. – Harry Gindi Jan 31 '11 at 9:16
So when I said above "the functor $(-)\times S$", I was abusing language, since any given construction of "the" functor is only "a functor $(-)\times S$". This might seem like we're being
overly cautious, but when we move from isomorphism of objects to equivalence of objects (in a bicategory), this makes some difference. This can be resolved by using Mac Lane's coherence
theorem for bicategories, but, when we move up to tricategories, such a coherence theorem is proven not to exist. – Harry Gindi Jan 31 '11 at 9:21
I see. Thanks, @Harry! – Andres Caicedo Jan 31 '11 at 14:11
The homology of a differential graded algebra has an $A_\infty$-algebra structure which is unique up to non-unique isomorphism.
See Keller's nice expository paper, for instance. In particular, he states this result in Section 3.3 (as a theorem due to Kadeishvili, among others). It is stated there as a result
about the homology of an $A_\infty$-algebra, but any differential graded algebra may be viewed as an $A_\infty$-algebra.
Hi John, Thanks, I wish you would name people or give a reference at least. – David Feldman Jan 30 '11 at 23:22
@David: I added a reference. – John Palmieri Jan 30 '11 at 23:36
Here are some examples that are less of an algebraic nature (but all seem to be subsumed by Qiaochu's observation in that they are "weakly initial" or "weakly terminal" objects in appropriate categories).
Consider the categories of metric spaces or complete metric spaces and $1$-Lipschitz maps. Isbell has shown that in these categories there are injective hulls, unique up to non-unique
isomorphism. A metric space $I$ is injective if for every isometric embedding $A \to B$ and every $1$-Lipschitz map $A \to I$ there exists a $1$-Lipschitz extension $B \to I$. The
automorphism groups of the injective hull of a space seem exceedingly hard to determine (even for finite spaces) but there's one case I find interesting. If $M$ happens to be a (real) Banach
space and $I(M)$ is its injective hull then $I(M)$ is a Banach space, uniquely determined up to unique linear isometry, and it is of the form $C(K)$ where $K$ is an extremally disconnected
Hausdorff space. H. Elton Lacey and co-authors have given a complete (finite!) list of possible injective hulls of separable Banach spaces.
Closely related are projective covers in the category of compact Hausdorff spaces and continuous maps. There, the projectives are precisely the extremally disconnected spaces (Gleason).
1 I should add that there is an abstract version of injective hulls/projective covers considered in Adamek-Rosicky, Locally presentable and accessible categories, that seems to subsume all
the examples given so far (except maybe the example involving bases). – Theo Buehler Jan 30 '11 at 14:48
A compact connected semisimple Lie Group $G$ has an essentially unique maximal torus $T$, a maximal abelian subgroup of maximum dimension (the rank of $G$ is the dimension of this torus).
Although $G$ has lots of such tori (in fact any element of $G$ is contained in at least one), any two are conjugate to one another by some element of $G$.
In a similar vein, one can break a given maximal torus $T$ up into congruent pieces (the images of Weyl chambers under the exponential map applied to the Lie algebra of $T$), any two of
which are equivalent to one another by an element of the Weyl Group of $G$. The value of any class function on $G$ is then completely determined on all of $G$ by its values on a single
one of these pieces.
The countable dense linear order without endpoints is unique up to a non-unique isomorphism. The countable atomless Boolean algebra is unique up to a non-unique isomorphism. The random graph is unique up to a
non-unique isomorphism. (Algebraically closed fields of a given characteristic and transcendence degree, and vector spaces of given dimension over a given field, are other examples already
mentioned above.)
In general, if $T$ is a $\kappa$-categorical first-order theory (in a countable language), then the model $M$ of $T$ of cardinality $\kappa$ is unique up to a non-unique isomorphism.
Even more generally: if $T$ is any complete theory and $\kappa$ an infinite cardinal, then the saturated model $M$ of $T$ of cardinality $\kappa$—if it exists at all—is unique up to a
non-unique isomorphism. ($M$ is unique by a standard back-and-forth argument. Non-uniqueness of the isomorphism amounts to saying that $\operatorname{Aut}(M)$ is nontrivial. By homogeneity,
it suffices to exhibit two elements of $M$ with the same type. If there exists a nonprincipal parameter-free $1$-type, we can easily find two elements that realize it. If all $1$-types are
principal, there are only finitely many, hence two elements of $M$ have to realize the same type by the pigeonhole principle.)
Let me mention Sullivan's minimal models.
Every commutative differential graded $\mathbb{Q}$-algebra (cdga) $A^*$ concentrated in non-negative degrees and such that $H^0(A^*)=\mathbb{Q}$ admits a minimal Sullivan model $i:M^*\to A^
*$ where $M^*$ is a free commutative graded algebra obtained from $\mathbb{Q}$ by adding generators of non-negative degrees so that the differential of each generator is a $\mathbb{Q}
$-linear combination of products of length $\geq 2$ of the previous generators, and $i$ is a map of cdga's that induces a cohomology isomorphism (i.e., a quasi-isomorphism).
The minimal model is unique up to a non-unique isomorphism. More generally, if $f:A^*\to B^*$ is a map of cdga's and $j:N^*\to B^*$ is a minimal model of $B^*$, then there is a cdga map
$g:M^*\to N^*$, defined up to cdga homotopy, such that $fi=gj$ up to cdga homotopy; moreover, if $f$ is a quasi-isomorphism, then $g$ is an isomorphism.
This reduces the classification of non-negative cdga's up to quasi-isomorphism (and as a consequence, the classification of simply connected topological spaces up to rational homotopy) to
the classification of algebras of a certain kind up to isomorphism.
Of course, this example is similar to some mentioned before (in a sense it is the commutative analog of the answer of John Palmieri).
Vector spaces have a basis that is unique up to a non-unique isomorphism.
Hilbert spaces have an orthonormal basis that is unique up to a non-unique unitary.
(At least, if you accept Zorn's lemma, i.e. the axiom of choice)
Wait, how do vector spaces have a unique basis? If $V = \langle x, y\rangle$, then $x + y, y$ is also a basis of $V$, but it is not the same basis. – Simon Rose Jan 30 '11 at 16:04
@Simon: but there is a linear isomorphism that maps the former basis to the latter. So indeed a basis is not unique, but unique up to an isomorphism. Finally, this isomorphism itself is
not unique, as it can for example be composed with any isomorphism that leaves the latter basis invariant. – Chris Heunen Jan 30 '11 at 16:14
3 @Chris: We might be splitting hairs, but I think that this misses the spirit of the question. There is some notion of canonicality which defines an algebraic closure, for example, but no
such notion that defines a basis of a vector space. – Simon Rose Jan 30 '11 at 16:23
@Simon: I agree this might only answer the letter of the question and perhaps not its spirit. Nevertheless, OP asked for as many examples from as many parts of mathematics as possible,
and this is one. Moreover, these isomorphisms certainly form interesting groups, namely $SL(n)$ and $U(n)$. Anyway, if one is after insight into the abstract features of such situations,
isn't it important to also consider examples that fall outside one's initial intuition? – Chris Heunen Jan 30 '11 at 17:29
I do think an example lurks here, but you haven't nailed it. You don't want a basis, you want only that for which you thought you wanted a basis, namely rigidification. So for one given
field and a given dimension fix a model vector space $M$ of that dimension. Define a rigidification of any given vector space (over the same field, with the same dimension) as an
isomorphism $i:V\rightarrow M$. An isomorphism of rigidified vector spaces is a map $j:M\rightarrow M$ that makes the triangle commute. Now you get $GL(n)$ (and other groups if you force
more structure on $M$.) – David Feldman Jan 31 '11 at 5:00
One example that springs to mind is when you are secretly working with the objects of a higher category, and so the choice is not unique up to a unique isomorphism, but the choice of
isomorphism is also subject to higher coherence data. In a 2-category this would mean the isomorphisms are unique up to a unique invertible 2-arrow and so on. In $\omega$-categories, you may
have such coherence all the way to infinity, and so end up with no uniqueness after all. One place where this emerges is when your $\omega$-category has all duals - it is then an $\omega$-groupoid, because all the duals make everything weakly invertible!
Not the answer you're looking for? Browse other questions tagged big-list ct.category-theory ra.rings-and-algebras fields ca.analysis-and-odes or ask your own question. | {"url":"http://mathoverflow.net/questions/53767/constructions-unique-up-to-non-unique-isomorphism/53861","timestamp":"2014-04-20T09:05:29Z","content_type":null,"content_length":"132529","record_id":"<urn:uuid:21b17e59-5719-48e3-8c91-b731176ad534>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00527-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matlab. Simple???
June 24th 2010, 12:32 PM #1
I just downloaded Matlab a few days back, and I've been slowly working through some of the online tutorials. But I have to be honest, the format in which you have to type in a bunch of different
stuff to get certain things done seems very inefficient and annoyingly non-user-friendly. Maybe it's because I just got it a few days ago, but it really seems complicated, cumbersome, and
not very useful. For those familiar with Matlab, did anybody experience this frustration at first also? Does it just take a little time to get used to, and then all of a sudden (as you get more
experienced) does it become much more "powerful" and useful? And does anybody have any tips on a good tutorial that doesn't take hours to get to the useful parts but ensures you have a decent
understanding of using Matlab?
I just feel like having the "command-line" interface makes things difficult, and that Matlab would be much more efficient if there were different buttons and menus and scroll bars and whatnot
that allowed you to access the processes within the software. Plus it would look nicer. But I'm assuming, as one gets deeper into the understanding of the software, the command-line approach
becomes very useful??
Thanks in advance for any advice or tips.
Above is a good introduction to matlab written by my professor. To see how useful matlab is, you need to find a class that benefits from it greatly. For me, I'm using matlab in my signals and
systems class, and I am stunned by how many built-in functions and features the language has. For example, I can plot the impulse response of an LTI system in 3 lines of code, I can make a graph of some
random function in 3 lines of code, and I can hook functions together in the most streamlined way I've ever seen in programming. In C++, you'd need to mess around with
tons of syntax that says "this function I've written belongs to this piece of code. Therefore, that piece of code can use it." In matlab, I put a function.m file in the same folder as my
program.m, and I can immediately use the function.m in program.m. It's fantastic! The tons of built in functions cover many areas of math and engineering. It can do differentiation, integration,
statistics, signals and systems, and probably tons of other topics I don't even know about. Another powerful feature about the language is that most variables are automatically arrays. The
language seems designed to handle enormous arrays and enormous amounts of data.
Commandline is great to test features out before you put it in your script file (the parallel of a program). It's also good to debug/expand an already made script file, because after the script
file runs, all of your data is still in matlab's memory. With that, you can, using commandline, input commands to manipulate, test, or view that data.
The best source to see matlab's use would be taking a class that can benefit from the power behind matlab, or maybe reading a book about matlab for engineers.
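For example, a complete labelled plot of a (made-up) damped sinusoid really is this short:
t = 0:0.01:10; % time axis
y = exp(-0.3*t) .* sin(2*t); % a made-up signal
plot(t, y), xlabel('t'), ylabel('y') % one figure, three lines total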
I'm partial to LabVIEW myself.
I know the feeling. First time I used Matlab I thought why would anyone want to use a command line or write a bunch of code in a text file - seems like something a cave man would do now that we
have pretty user interfaces for everything.
For a number of years I steered well clear, with the philosophy that excel was a much easier way to solve any problem that could arise, as it was much easier to see what was happening on a page, and
why would anybody want to use a programming language like matlab that required you to write your own code.
It wasn't until I had a project that involved structural analysis that I really began to see the true power of a package like matlab (even then I only used a fraction of that power). The problem
was what I refer to as a dynamic problem, in the sense that the number/types of inputs can change and hence the way you approach the problem may also change. This was something that excel
basically wasn't designed to do; although VBA was an option, it would result in some nasty code that would be practically impossible to expand/recycle as my project needs changed. At the end of the
day it is my opinion that excel is for manipulating data rather than serious number crunching (don't get me wrong, spreadsheets have their applications, which I refer to as static problems where you
know what the size of your problem is going to be and how you want to handle it).
It's not until you find yourself in a position like this that you will begin to understand the strengths of matlab, and while it is a very steep learning curve at the start, it is very achievable
to become sound in basic matlab principles within 3 months, which will allow you to solve many complex problems in a fraction of the time it would take in something like a spreadsheet.
Sorry to have a rant on the Matlab vs Excel debate but it is the best example I can think of to answer your question.
Regards Elbarto
If you came to Matlab from a programming background it would not seem so strange. What is strange is that it is so commonly used to teach the equivalent of programming 101.
That is very true. I often wonder if I would have had a better introduction to programming if I had started with a more general language like Python to get my head around the concepts of
statements, loops and functions, etc. (although these are fundamentally no different in matlab). | {"url":"http://mathhelpforum.com/math-software/149287-matlab-simple.html","timestamp":"2014-04-20T10:05:10Z","content_type":null,"content_length":"51057","record_id":"<urn:uuid:6bfb5d02-0232-4ce8-ad33-1ee16abcc989>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
Turbulente Konvektion (Turbulent Convection)
Computational Fluid Dynamics
Universität Göttingen
Institut für Geophysik
Convection is a ubiquitous phenomenon which occurs in the atmosphere, the oceans, the interior of the Earth, and in numerous engineering applications. Convection has also served as a paradigm for
pattern-forming systems because the flow can organize itself into rolls or polygonal structures. It is not known how the large scales of convection are organized once the flow has become turbulent.
This project intends to study the large-scale patterns which persist in the turbulent regime. Convection in a plane layer will be simulated within the Boussinesq approximation. Quantities extracted
from the computations will include the Nusselt number, the Reynolds number, and the spectral distributions of heat transport and kinetic energy. The equations of motion are solved with a spectral
method. The velocity field is decomposed into poloidal and toroidal scalars in order to automatically obtain a divergence-free velocity field. The spectral decomposition uses Fourier modes and Chebyshev polynomials. | {"url":"http://java.hlrs.de/hpc-projects/public/abstracts/abstract.jsp?acronym=turb_con","timestamp":"2014-04-18T18:42:28Z","content_type":null,"content_length":"2019","record_id":"<urn:uuid:883693d7-530b-4384-8bd4-65b17e29a3bb>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
SPSS Statistics TSMODEL algorithms has errors in Initialization of Exponential Smoothing section
Technote (troubleshooting)
I'm using the SPSS Statistics TSMODEL procedure (Analyze>Forecasting>Create Model in the menus) to fit exponential smoothing models to time series data and produce forecasts. I'm attempting to
understand or manually recreate the production of the fitted and forecasted values. I'm looking at the section Initialization of Exponential Smoothing where it discusses initialization of the
backcasting process used to produce initial state values for producing forecasts and it seems that the formulas don't allow me to accurately reproduce the results from the procedure. Are there errors
in these formulas?
Resolving the problem
Yes, there are some errors in this section of the TSMODEL Algorithms. Specifically, in each of the models with trend components, the slope coefficients produced by the linear regressions of Y on time
mentioned in the formulas must be multiplied by -1 in backcasting. Also, the formula for the vector of seasonal states for the Winters multiplicative model is incorrect. It should state that the
value of S for each period is the sum of the intercept and slope coefficients for that period's regression divided by the sum of the means of the intercept and slope coefficients over all periods. | {"url":"http://www-01.ibm.com/support/docview.wss?uid=swg21647885","timestamp":"2014-04-20T03:11:40Z","content_type":null,"content_length":"20750","record_id":"<urn:uuid:6a8858d9-049b-4733-b604-18530c94146f>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elmhurst, IL Prealgebra Tutor
Find an Elmhurst, IL Prealgebra Tutor
...Louis. I am very good at algebra and can generally be helpful to those students who are highly motivated to improve their skills in this area. I have taken many math courses in my academic
career, and I have helped many students get through algebra related topics.
13 Subjects: including prealgebra, geometry, statistics, finance
...As a classroom teacher I expanded my students' vocabularies, teaching them how to determine a word's meaning from its prefix, root word and suffix. I am a certified Elementary teacher who has
been trained in methods of teaching Math, and I also have experience teaching math as a classroom teach...
15 Subjects: including prealgebra, reading, GED, English
...I am a recent graduate of Loyola University Chicago with a major in Biology. Throughout my schooling and tutoring
friends, siblings, and other students, I have learned various methods that make studying effective.
29 Subjects: including prealgebra, English, chemistry, calculus
...If you need help with a specific course that I didn't list, please message me and I'll let you know if I can help. I also tutor High School Physics (Mechanics). Standardized Tests: I have a great deal of experience tutoring for the math sections in standardized tests such as the ACT, P...
18 Subjects: including prealgebra, calculus, physics, geometry
...I found then that I have a natural ability to break concepts down for students who struggle with challenging mathematics content. Over the past few years I have worked with students ranging from elementary to college age, as well as non-traditional students. I am a recent alum of t...
20 Subjects: including prealgebra, physics, geometry, algebra 1
Here's the question you clicked on:
A culture started with 1,000 bacteria. After 6 hours, it grew to 2,000 bacteria. Predict how many bacteria will be present after 15 hours. Round your answer to the nearest whole number. P = Ae^(kt)
Best Response:
A = 1000, t = 6, P = 2000. So 2000 = 1000e^(6k); find k, then substitute t = 15 for the answer. Do you need further help?
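Making the arithmetic explicit: from 2000 = 1000e^(6k), k = ln(2)/6 ≈ 0.1155, so P(15) = 1000·e^(15k) = 1000·2^(15/6) ≈ 5657. A quick check in Python (variable names are illustrative):

```python
import math

A = 1000.0                  # initial population
P6 = 2000.0                 # population after 6 hours
k = math.log(P6 / A) / 6.0  # 2000 = 1000*e^(6k)  =>  k = ln(2)/6 ≈ 0.1155

P15 = A * math.exp(k * 15)  # P = A*e^(kt) evaluated at t = 15
print(round(P15))           # 5657
```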
Complex Instruction and Software Library Mapping for Embedded Software Using Symbolic Algebra (2003)
by Armita Peymandoust, Tajana Simunic, Giovanni De Micheli
@misc{peymandoust03complex,
  author = {Armita Peymandoust and Tajana Simunic and Giovanni De Micheli},
  title = {Complex Instruction and Software Library Mapping for Embedded Software Using Symbolic Algebra},
  year = {2003}
}
With growing demand for embedded multimedia applications, time to market of embedded software has become a crucial issue. As a result, embedded software designers often use libraries that have been
preoptimized for a given processor to achieve higher code quality. Unfortunately, current software design methodology often leaves high-level arithmetic optimizations and the use of complex library
elements up to the designer's ingenuity. In this paper, we present a tool flow and a methodology, SymSoft, that automates the use of complex processor instructions and preoptimized software library
routines using symbolic algebraic techniques. We use SymSoft to optimize a set of examples for the SmartBadgeIV (Maguire et al., 1998) portable embedded system running the Linux embedded operating
system. The results of these optimizations show that by using SymSoft we can map the critical basic blocks of the benchmark examples to the StrongARM SA-1110 instruction set much more efficiently
than the commercial StrongARM compiler. SymSoft is also used to map critical code sections to commercially available software libraries with complex mathematical elements such as exp or the inverse
discrete cosine transform routine. Our measurements on SmartBadgeIV show that even higher performance improvements and energy savings are achieved by using these library elements. For example, the
final optimized MP3 audio decoder runs four times faster than real-time playback while consuming four times less energy. Since the decoder executes faster than real-time playback, additional energy
savings are now possible by using processor frequency and voltage scaling.
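To give a flavor of the kind of symbolic-algebra step the abstract describes (deciding whether the polynomial computed by a basic block can be rewritten in a form that maps onto a cheaper instruction or library primitive), here is a toy sketch using SymPy. It illustrates the general idea only; the Horner-rule target and the equivalence check are illustrative stand-ins, not SymSoft's actual decomposition algorithm:

```python
import sympy as sp

x = sp.symbols('x')

# Polynomial that some critical basic block computes with plain adds and
# multiplies (here, a truncated series approximation of exp(x)).
block_poly = 1 + x + x**2 / 2 + x**3 / 6

# Candidate rewrite target: the same polynomial in Horner form, which maps
# naturally onto multiply-accumulate style instructions.
horner_form = sp.horner(block_poly)
print(horner_form)  # x*(x*(x/6 + 1/2) + 1) + 1

# Symbolic equivalence check: the difference must simplify to zero before
# the rewrite counts as a valid mapping.
assert sp.simplify(sp.expand(horner_form) - block_poly) == 0
```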
957 Advanced Compiler Design and Implementation - Muchnick - 1997
313 Power analysis of embedded software: A first step towards software power minimization - Tiwari, Malik, et al. - 1994
249 Maximizing multiprocessor performance with the SUIF compiler - Hall, Anderson, et al. - 1996
180 Custom Memory Management Methodology: Exploration of Memory Organisation for Embedded Multimedia System Design - Catthoor, Wuytack, et al. - 1998
139 Comprehensive Gröbner bases - Weispfenning - 1992
139 DSPstone: A DSP-oriented Benchmarking Methodology - Zivojnovic, Velarde, et al. - 1994
138 Instruction level power analysis and optimization of software - Tiwari, Malik, et al. - 1996
93 Code Generation for Embedded Processors - Marwedel, Goossens
82 A Framework for Estimating and Minimizing Energy Dissipation - Li, Henkel - 1998
66 Influence of compiler optimizations on system power - Irwin, Ye
57 Energy-efficient design of battery-powered embedded systems, Very Large Scale Integration (VLSI) Systems - Simunic, Benini, et al. - 2001
56 Retargetable Code Generation for Digital Signal Processors - Leupers - 1997
39 Techniques for low energy software - Mehta, Owens, et al. - 1997
38 Hardware/Software Instruction set Configurability for System-on-Chip Processors - Killian, Rowen, et al.
31 SmartBadges: a wearable computer and communication system - Maguire, Smith, et al. - 1998
31 The multiple wordlength paradigm - Constantinides, Cheung, et al. - 2001
21 Source code optimization and profiling of energy consumption in embedded systems - Simunic, Benini, et al. - 2000
19 Embedded software in real-time signal processing systems: application and architecture trends - Paulin, Liem, et al. - 1997
14 Application of Symbolic Computer Algebra in High-Level Data-Flow Synthesis - Peymandoust, DeMicheli - 2003
12 Polynomial Methods for Component Matching and Verification - Smith, DeMicheli - 1998
9 FRIDGE: an interactive fixed-point code generation environment for Hw/Sw-codesign - Willems, Keding, et al. - 1997
9 Symbolic algebra and timing driven data-flow synthesis - Peymandoust, DeMicheli - 2001
9 Polynomial circuit models for component matching in high-level synthesis - Smith, DeMicheli
7 MATH Toolkit for Real-Time Programming - Crenshaw - 2000
5 Instruction scheduling for power reduction in processor-based system design - Tomiyama, H., et al. - 1998
4 Information technology: Generic Coding of Moving Pictures and Associated Audio - 1996
3 Using Symbolic Algebra - Peymandoust, DeMicheli - 2001
1 Integrated Performance Primitives - intel.com - 2000
Henrik Melbeus
Publications (12) · 45.38 Total impact
ABSTRACT: Henrik Melbéus and Tommy Ohlsson describe three different theories of extra dimensions – universal, large and warped – and how these unseen dimensions could be observed, if they exist
at all.
Physics World 09/2012; 25(9):27-30. · 0.45 Impact Factor
ABSTRACT: We study the first Kaluza-Klein excitation of the Higgs boson in universal extra dimensions as a dark matter candidate. The first-level Higgs boson could be the lightest Kaluza-Klein
particle, which is stable due to the conservation of Kaluza-Klein parity, in non-minimal models where boundary localized terms modify the mass spectrum. We calculate the relic abundance and find
that it agrees with the observed dark matter density if the mass of the first-level Higgs boson is slightly above 2 TeV, not considering coannihilations and assuming no relative mass splitting
among the first-level Kaluza-Klein modes. In the case of coannihilations and a non-zero mass splitting, the mass of the first-level Higgs boson can range from 1 TeV to 4 TeV. We study also the
prospects for detection of this dark matter candidate in direct as well as indirect detection experiments. Although the first-level Higgs boson is a typical weakly interacting massive particle,
an observation in any of the conventional experiments is very challenging.
Physics Letters B 07/2012; 715(1-3):164-169. · 4.57 Impact Factor
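For orientation on numbers like these, the textbook freeze-out estimate ties the relic density to the thermally averaged annihilation cross section. The snippet below is that back-of-envelope relation only, not the full calculation with mass splittings and coannihilations performed in the paper:

```python
# Standard WIMP freeze-out estimate: Omega*h^2 ≈ 3e-27 cm^3 s^-1 / <sigma v>.
def omega_h2(sigma_v_cm3_per_s):
    return 3e-27 / sigma_v_cm3_per_s

# The canonical <sigma v> ~ 3e-26 cm^3/s lands near the observed
# Omega*h^2 ~ 0.12, which is the sense in which a weakly interacting
# TeV-scale particle "naturally" matches the dark matter density.
print(omega_h2(3e-26))  # ≈ 0.1
```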
Physics Letters B 07/2012; 713(3):350-350. · 4.57 Impact Factor
Physical review D: Particles and fields 05/2012; 85(10).
ABSTRACT: We study how the recent ATLAS and CMS Higgs mass bounds affect the renormalization group running of the physical parameters in universal extra dimensions. Using the running of the Higgs
self-coupling constant, we derive bounds on the cutoff scale of the extra-dimensional theory itself. We show that the running of physical parameters, such as the fermion masses and the CKM mixing
matrix, is significantly restricted by these bounds. In particular, we find that the running of the gauge couplings cannot be sufficient to allow gauge unification at the cutoff scale.
Physics Letters B 05/2012; 712(4-5):419-424. · 4.57 Impact Factor
ABSTRACT: We investigate monoenergetic gamma-ray signatures from annihilations of dark matter comprised of Z1, the first Kaluza-Klein (KK) excitation of the Z boson in a nonminimal universal
extra dimensions (UED) model. The self interactions of the non-Abelian Z1 gauge boson give rise to a large number of contributing Feynman diagrams that do not exist for annihilations of the
Abelian gauge boson B1, which is the standard Kaluza-Klein dark matter (KKDM) candidate. We find that the annihilation rate is indeed considerably larger for the Z1 than for the B1. Even though
relic density calculations indicate that the mass of the Z1 should be larger than the mass of the B1, the predicted monoenergetic gamma fluxes are of the same order of magnitude. We compare our
results to existing experimental limits, as well as to future sensitivities, for imaging air Cherenkov telescopes, and we find that the limits are reached already with a moderately large boost
factor. The realistic prospects for detection depend on the experimental energy resolution.
Physical Review D 02/2012; 85(4):043524. · 4.69 Impact Factor
ABSTRACT: We calculate the continuum photon spectrum from the pair annihilation of a Z^1 LKP in non-minimal universal extra dimensions. We find that, due to the preferred annihilation into W^+ W^- pairs, the continuum flux of collinear photons is relatively small compared to the standard case of the B1 as the LKP. This conclusion applies in particular to the spectral endpoint, where also
the additional fermionic contributions are not large enough to increase the flux significantly. When searching for the line signal originating from Z^1 Z^1 annihilations, this is actually a
perfect situation, since the continuum signal can be regarded as background to the smoking gun signature of a peak in the photon flux at an energy that is nearly equal to the mass of the dark
matter particle. This signal, in combination with (probably) a non-observation of the continuum signal at lower photon energies, constitutes a perfect handle to probe the hypothesis of the Z1 LKP
being the dominant component of the dark matter observed in the Universe.
Physics Letters B 11/2011; 706(4-5):329-332. · 4.57 Impact Factor
ABSTRACT: We study the renormalization group (RG) running of the neutrino masses and the leptonic mixing parameters in two different extra-dimensional models, namely, the Universal Extra
Dimensions (UED) model and a model, where the Standard Model (SM) bosons probe an extra dimension and the SM fermions are confined to a four-dimensional brane. In particular, we derive the beta
function for the neutrino mass operator in the UED model. We also rederive the beta function for the charged-lepton Yukawa coupling, and confirm some of the existing results in the literature.
The generic features of the RG running of the neutrino parameters within the two models are analyzed and, in particular, we observe a power-law behavior for the running. We note that the running
of the leptonic mixing angle \theta_{12} can be sizable, while the running of \theta_{23} and \theta_{13} is always negligible. In addition, we show that the tri-bimaximal and the bimaximal
mixing patterns at a high-energy scale are compatible with low-energy experimental data, while a tri-small mixing pattern is not. Finally, we perform a numerical scan over the low-energy
parameter space to infer the high-energy distribution of the parameters. Using this scan, we also demonstrate how the high-energy \theta_{12} is correlated with the smallest neutrino mass and the
Majorana phases.
Journal of High Energy Physics 04/2011; 2011(4):052. · 5.62 Impact Factor
ABSTRACT: We study the generation of small neutrino masses in an extra-dimensional model, where right-handed neutrinos are allowed to propagate in the extra dimension, while the Standard Model
particles are confined to a brane. Motivated by the fact that extra-dimensional models are non-renormalizable, we truncate the Kaluza-Klein towers at a maximal extra-dimensional momentum. The
structure of the bulk Majorana mass term, motivated by the Sherk-Schwarz mechanism, implies that the right-handed Kaluza-Klein neutrinos pair to form Dirac neutrinos, except for a number of
unpaired Majorana neutrinos at the top of each tower. These heavy Majorana neutrinos are the only sources of lepton number breaking in the model, and similarly to the type-I seesaw mechanism,
they naturally generate small masses for the left-handed neutrinos. The lower Kaluza-Klein modes mix with the light neutrinos, and the mixing effects are not suppressed with respect to the
light-neutrino masses. Compared to conventional fermionic seesaw models, such mixing can be more significant. We study the signals of this model at the Large Hadron Collider, and find that the
current low-energy bounds on the non-unitarity of the leptonic mixing matrix are strong enough to exclude an observation.
Physical Review D 08/2010; 82(4):045023. · 4.69 Impact Factor
ABSTRACT: We investigate indirect neutrino signals from annihilations of Kaluza-Klein dark matter in the Sun. Especially, we examine a five- as well as a six-dimensional model, and allow for the
possibility that boundary localized terms could affect the spectrum to give different lightest Kaluza-Klein particles, which could constitute the dark matter. The dark matter candidates that are
interesting for the purpose of indirect detection of neutrinos are the first Kaluza-Klein mode of the U(1) gauge boson and the neutral component of the SU(2) gauge bosons. Using the DarkSUSY and
WimpSim packages, we calculate muon fluxes at an Earth-based neutrino telescope, such as IceCube. For the five-dimensional model, the results that we obtained agree reasonably well with the
results that have previously been presented in the literature, whereas for the six-dimensional model, we find that, at tree-level, the results are the same as for the five-dimensional model.
Finally, if the first Kaluza-Klein mode of the U(1) gauge boson constitutes the dark matter, IceCube can constrain the parameter space. However, in the case that the neutral component of the SU
(2) gauge bosons is the LKP, the signal is too weak to be observed.
Journal of Cosmology and Astroparticle Physics 01/2010; 2010(1):018. · 6.04 Impact Factor
ABSTRACT: We investigate a model of large extra dimensions where the internal space has the geometry of a hyperbolic disc. Compared with the ADD model, this model provides a more satisfactory
solution to the hierarchy problem between the electroweak scale and the Planck scale, and it also avoids constraints from astrophysics. In general, a novel feature of this model is that the
physical results depend on the position of the brane in the internal space, and in particular, the signal almost disappears completely if the brane is positioned at the center of the disc. Since
there is no known analytic form of the Kaluza-Klein spectrum for our choice of geometry, we obtain a spectrum based on a combination of approximations and numerical computations. We study the
possible signatures of our model for hadron colliders, especially the LHC, where the most important processes are the production of a graviton together with a hadronic jet or a photon. We find
that the signals are similar to those of the ADD model, regarding both qualitative behavior and strength. For the case of hadronic jet production, it is possible to obtain relatively strong
signals, while for the case of photon production, this is much more difficult.
Journal of High Energy Physics 08/2008; 2008(8):077. · 5.62 Impact Factor
• 2010–2012: KTH Royal Institute of Technology, Department of Theoretical Physics, Stockholm, Sweden
Manchester Geometry Seminar
Geometry and Mathematical Physics Seminar
2013/2014: Semester 2
Time: Thursdays 4.15pm (for the Manchester Geometry Seminar)
Thursdays 3.15pm (for the Geometry and Mathematical Physics Seminar)
Location: The Frank Adams Room (Room 1.212: FA 1 and FA 2) in the Alan Turing building.
The two seminars are run in combination/alternation. Visit the main page to find more information, in particular, programmes of talks for previous years. For the Manchester Geometry Seminar, we meet at about 3.50pm for tea and biscuits; each lecture begins at 4.15pm in the seminar room FA 1. For the Geometry and Mathematical Physics Seminar, we start at 3.15pm in the seminar room FA 2; after a break
around 4pm, we move to the seminar room FA 1.
• February 13: Manchester Geometry Seminar. 4.15pm, FA1.
Prof. Valentin Ovsienko, (University of Reims): Differential operators and Riemannian curl on contact manifolds
• February 20: Manchester Geometry Seminar. 4.15pm, FA1.
Prof. Eugene Ferapontov, (Loughborough University): Linearly degenerate PDEs and quadratic line complexes
• February 27: Geometry and Mathematical Physics Seminar. 3pm, FA 2. 4.15pm, FA 1.
Dr Adam Biggs: Pencils of operators and their invariants
• March 6: No seminar.
• March 13: Manchester Geometry Seminar. 4.15pm, FA 1.
Prof. Victor Buchstaber, (Steklov Mathematical Institute and University of Manchester): A dynamical system on the torus associated with the Josephson junction model
• March 20: Manchester Geometry Seminar. 4.15pm, FA 1.
Dr Oleg Chalykh, (University of Leeds): Power structure on the Grothendieck ring of varieties and motivic formula for the Calogero--Moser spaces on curves
• March 27: Manchester Geometry Seminar. 4.15pm, FA 1.
Dr Sean Holman, (University of Manchester): Microlocal analysis of the geodesic X-ray transform
• April 3: To be announced.
Easter break: Monday 7 April to Friday 25 April. [Gregorian or Western Easter coincides this year with Julian or Orthodox Easter: Sunday 20 April = 7 April Old Style.]
• May 1, 8: To be announced.
Ted Voronov. 4 (17) March 2014 | {"url":"http://www.maths.manchester.ac.uk/~tv/seminar.html","timestamp":"2014-04-21T12:09:52Z","content_type":null,"content_length":"4797","record_id":"<urn:uuid:5fbc6c3d-345b-42eb-bf8e-c84b0ae2edf2>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00123-ip-10-147-4-33.ec2.internal.warc.gz"} |