Calculating pressure needed in a misting system.
Take a look at the Bernoulli equation. The Bernoulli principle states that the sum of static and dynamic pressure is constant for an incompressible, inviscid fluid. Static pressure is the pressure that the fluid would exhibit at rest; dynamic pressure, on the other hand, relates to the speed of the fluid.
So what you want is to increase the speed of the fluid by decreasing the cross-sectional area of the pipe (valve). Because the sum of static and dynamic pressure is constant, if you increase the speed, the static pressure decreases. If the water pressure decreases below the vapor pressure point, the water turns into vapor, which produces the misting effect that you want.
The concepts you should look at are:
http://en.wikipedia.org/wiki/Bernoulli%27s_principle http://en.wikipedia.org/wiki/Vapor_p...ater_in_nature
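To turn the ideas above into numbers, here is a minimal sketch in Python. The supply pressure and vapor pressure below are illustrative assumptions, not values from the thread:

```python
import math

# Bernoulli between the pump reservoir (fluid nearly at rest) and the
# orifice throat:
#   p_supply = p_throat + 0.5 * rho * v**2
# Misting/cavitation onset when the throat pressure falls to the vapor
# pressure, so the exit speed at that point is:
#   v = sqrt(2 * (p_supply - p_vapor) / rho)

RHO = 1000.0        # density of water, kg/m^3
P_SUPPLY = 7.0e6    # assumed pump gauge pressure, Pa (~70 bar misting pump)
P_VAPOR = 2339.0    # vapor pressure of water at 20 C, Pa

v_exit = math.sqrt(2.0 * (P_SUPPLY - P_VAPOR) / RHO)
print(f"exit speed at cavitation onset: {v_exit:.1f} m/s")
```

This ignores viscosity and the boundary layer, exactly the caveat raised below for very small orifices.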
For more advanced concepts:
There is also the point that the assumption of an inviscid fluid might not hold for really small orifices, since the boundary layer might have to be accounted for, making the calculations a bit harder, though I doubt it would cause a significant problem.
Also, if the orifice opens at a high angle, say more than 10 degrees, there might be flow separation that can cause the water not to exit evenly.
The concepts for this are:
Reynolds number
Stall/flow separation
Boundary layer
Hope that helps a bit.
|
{"url":"http://www.physicsforums.com/showthread.php?p=3840035","timestamp":"2014-04-16T13:53:08Z","content_type":null,"content_length":"24646","record_id":"<urn:uuid:5592b963-348d-4fa6-b85c-0d339af4aa55>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sources of matter
6.2 Sources of matter
6.2.1 Cosmological constant
A cosmological constant is implemented in the 3 + 1 framework simply by introducing the quantity
The matter source terms can then be written as with
6.2.2 Scalar field
The dynamics of scalar fields is governed by the Lagrangian density
where for the scalar field and for the stress-energy tensor, where
For a massive, minimally coupled scalar field [46],
and where Equation (25) can be expanded as in [107] to yield an expression which, when coupled to Equation (31), determines the evolution of the scalar field.
6.2.3 Collisionless dust
The stress-energy tensor for a fluid composed of collisionless particles (or dark matter) can be written simply as the sum of the stress-energy tensors for each particle [161],
where There are two conservation laws: the conservation of particles where is the Lagrangian derivative, and
6.2.4 Ideal gas
The stress-energy tensor for a perfect fluid is
where, with the generalization of the special relativistic boost factor, the matter source terms follow. The hydrodynamics equations are derived from the normalization of the 4-velocity [162], where
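The display equations in this subsection did not survive extraction. For reference, the standard perfect-fluid stress-energy tensor that the passage refers to has the form (in units G = c = 1; the review's exact notation may differ):

```latex
T^{\mu\nu} = \rho h\, u^{\mu} u^{\nu} + p\, g^{\mu\nu},
\qquad h = 1 + \epsilon + \frac{p}{\rho},
\qquad g_{\mu\nu}\, u^{\mu} u^{\nu} = -1 ,
```

where ρ is the rest-mass density, ε the specific internal energy, p the pressure, and u^μ the fluid 4-velocity; the last relation is the normalization condition from which the hydrodynamics equations are derived.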
When solving Equations (45, 46, 47), an artificial viscosity (AV) method is needed to handle the formation and propagation of shock fronts [162, 84, 85]. These methods are computationally cheap, easy
to implement, and easily adaptable to multi-physics applications. However, it has been demonstrated that problems involving very high Lorentz factors are somewhat sensitive to different
implementations of the viscosity terms, and can result in substantial numerical errors if solved using time explicit methods [126].
On the other hand, a number of different formulations [75] of these equations have been developed to take advantage of the hyperbolic and conservative nature of the equations in using high resolution
and non-oscillatory shock capturing schemes (although strict conservation is only possible in flat spacetimes – curved spacetimes exhibit source terms due to geometry). These techniques potentially
provide more accurate and stable treatments in highly relativistic regimes. A particular formulation used together with high resolution Godunov techniques and approximate Riemann solvers is the
following [139, 26]:
where and
Although Godunov-type schemes are accepted as more accurate alternatives to AV methods, especially in the limit of high Lorentz factors, they are not immune to problems and should generally be used
with caution. They may produce unexpected results in certain cases that can be overcome only with problem-specific fixes or by adding additional dissipation. A few known examples include the
admittance of expansion shocks, negative internal energies in kinematically dominated flows, the ‘carbuncle’ effect in high Mach number bow shocks, kinked Mach stems, and odd/even decoupling in
mesh-aligned shocks [135]. Godunov methods, whether they solve the Riemann problem exactly or approximately, are also computationally much more expensive than their simpler AV counterparts, and it is
more difficult to incorporate additional physics.
A third class of computational fluid dynamics methods reviewed here is also based on a conservative hyperbolic formulation of the hydrodynamics equations. However, in this case the equations are derived directly from the conservation of stress-energy, $\nabla_{\mu} T^{\mu\nu} = 0$,
to give equations with curvature source terms and different expressions for the energy and momenta. An alternative approach using high resolution, non-oscillatory, central difference (NOCD) methods [99, 100] has been applied by Anninos and Fragile [12] to solve the relativistic hydrodynamics equations in the above form. These schemes combine the speed, efficiency, and flexibility of AV methods with the advantages of the potentially more accurate conservative formulation approach of Godunov methods, but without the cost and complication of Riemann solvers and flux splitting.
NOCD and artificial viscosity methods have been discussed at length in [12] and compared also with other published Godunov methods on their abilities to model shock tube, wall shock and black hole
accretion problems. They find that for shock tube problems at moderate to high boost factors, with velocities up to
6.2.5 Imperfect fluid
The perfect fluid equations discussed in Section 6.2.4 can be generalized to include viscous stress in the stress-energy tensor,
where and
The corresponding energy and momentum conservation equations for the internal energy formulation of Section 6.2.4 become
For the NOCD formulation discussed in Section 6.2.4 it is sufficient to replace the source terms in the energy and momentum equations (53, 54) by
|
{"url":"http://relativity.livingreviews.org/Articles/lrr-2001-2/articlesu16.html","timestamp":"2014-04-18T20:46:28Z","content_type":null,"content_length":"35897","record_id":"<urn:uuid:cbb1168f-b563-4405-8dd5-ce27e6fd78f5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Total # Posts: 155
In letter A. into is italicized In letter B. ill is italicized In letter C. at is italicized In letter D. disdainfully is italicized.
Which of the following sentences contains an italicized word that's used as a predicate adjective? A. Jerry looks into the microscope. B. Jerry looks ill today. C. Jerry looks at the map. D. Jerry
looks disdainfully at the pile of laundry. I think it's B.
college physics
(a) use d=vi*t+0.5*a*t^2 manipulate the equation to solve for t 210 m = (23 m/s)t+0.5(0.150 m/s^2)*t^2 t = 9.41 s (b) v = d/t = 210 m/ 9.41 s = 22.3 m/s I'm still trying to figure out the rest...
-2(-3x+ 4) >greater or equal to 3(5-2x)
how do you do this problem of inequality -2(-3x4 > 3(5-2x)
How can you write a unit rate if at least one term is a fraction? How is this different from writing a unit rate where both terms are whole numbers?
Identify one environmental or human factor that causes ill health,accidents,crises and disasters within your community or any other community within South Africa.
Explain how water of hydration is related to measurement. Then explain how it is related to the law of definite composition
Geometry: Student Correction (Stuck)
BD = 4 DA = 5 DE = 6 Solving for x 1. Joelle's proportion is correct. 2. Joelle's proportion is not correct because BD, which is 4, corresponds proportionally to BA, which is 9, not 5. 3. Joelle's proportion is not correct because DE, which is 6, does not corres...
Geometry: Student Correction (Stuck)
Joelle set up the following proportion: 4/6 = 5/x. Solve for x. Determine if proportion is correct. If not, explain what is wrong with it.
Triangle ABC has vertices of A(0,0), B( 4,0)and C( 2,4). The coordinates of each vertex in triangle AEF are multiplied by 3 Is triangle ABC~AEF? Explain.
I had this test these answers are correct you'll get 100% hope it helps 1).1,256ft2 2).4,187ft3 3).615cm2 4).1,436cm2 5).8
What is the percent by mass of glucose in a solution containing .15 mol of glucose dissolved in 410 g acetic acid? The molar mass of glucose is 180g/mol.
Math (9th grade)
Write an equation in point slope form of a line parallel to 12x-2y=6 and passes through the point (-4,-3)
A reaction is peformed in which 50 ml of .400M silver nitrate and 75 ml of .100M hydrochloric acid are mixed together. The initial temperature of the solutionn was 25 C and the final temperature was
27.2 C. What is the enthalpy, in kj/mol, for the formation of the precipitate ...
what is the sum coefficient and balanced equation of : Co(OH)3+Na2CO3 my answer is 3Co(OH)3+ 3Na2CO3---> 3CoCO3+ 3Na2OH
How could you use a precipitation reaction to separate the following pair of cations: K^+ and Hg2 ^2+? (write the formula and net formula for each reactant)? The answer is Hg2 ^2+ (aq) + 2Cl- (aq) =
HgCl2(s) but how do I work it out?
How do I balance the following equation by the half-reaction method (it takes place in basic solution)? Fe(OH)2(s) + O2(g) = Fe(OH)3(s) the answer is 4Fe(OH)2(s) + 2H20(l) + O2(g) = 4Fe(OH)3(s) but
how do I work it out?
Math (12th)
Thank you. I tried that and got 3 :)
Math (12th)
What is the common difference in the arithmetic sequence 30, 27, 24, 21, 18, ...
Is Pride and Prejudice identified more with Romanticism or Realism? Even though it was written during the Romantic period, it seems more like Realism to me. Is this correct? Thanks!
Math please help urgent !
Find the slop of the tangent to the curve y=(10-sqrt x)(5+sqrt x) at x=25
Cross country running
how do you keep motivated to keep going even though your body wants to quit?
Cross country running
Thankyou very much!
Cross country running
Can I do anything for side cramps during a race?
Cross country running
How does one deal with another runners joy after getting sufficiently skunked? In other words, Can I get some tips on how to be a good sport when I get creamed during a race?
what is the geometry around each carbon in ethanoic acid
if youre looking for x its 6
The Philippine-American war was A) a minor event for americans B) more costly than the spanish american war C) fought in a traditional manner D) never completely resolved E) over even before the
spanish american war
What is an example in "Antony and Cleopatra" of Casear displaying a disposition to not want to go to war? Thank you!
so i found a formula: m1v1ix +m2v2ix = m1v1fx + m2v2fx which i rearranged to find v1fx and eliminated m2v2ix because m2v2ix = 0 so the eqn looked like: v1fx = (m1v1ix - m2v2fx)/m1 subbing in i got:
[(0.029)(200) - (1.1)(6.25)]/(0.029) which gave the answer -59.8276. is this co...
i'm just unsure of what formula i should apply!
A 29 g rubber bullet is travelling to the right at 200 m/s, when it hits a 1.1 kg block of wood sitting on a horizontal frictionless surface. If this collision is inelastic (though not perfectly
inelastic), and the velocity of the block of wood after the collision is 6.85 m/s ...
Assume you are given two objects whose centre of mass is located exactly at the origin. The first object has a mass of 33 kg and is located at position +30 m with respect to the origin. If the mass
of the second object is 7.0 kg, what is its position (in m) with respect to the...
Ian is firmly attached to a snowboard which is on a flat surface of ice, which you can assume to be frictionless. He and the snowboard are initially at rest. He then throws a 138 g ball which travels
at a speed of 19.2 m/s, and finds himself moving backwards at a speed of 0.05...
Is the answer 3645.77? Thanks for your help, btw :)
What is the magnitude of the momentum of a car weighing 1,555 N when it is moving at a speed of 23 m/s? Express your answer in kg m/s.
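Momentum needs the mass, not the weight: m = W/g. A quick check (g = 9.81 m/s² assumed here, which reproduces the 3645.77 figure asked about in the follow-up):

```python
g = 9.81          # m/s^2 (assumed value of gravitational acceleration)
weight = 1555.0   # N
v = 23.0          # m/s

p = (weight / g) * v   # momentum = m * v, with m = W / g
print(f"p = {p:.2f} kg*m/s")
```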
Assuming maximal power output, how long, in seconds, would it take a 1,535 kg car with a 128 hp engine to accelerate from 50 km/h to 102 km/h? (1 hp = 746 W)
social studies
between 1400000 and 500000 years ago early hominids learned how to?
Signs of fish stress include A. increased respiration B. faster scale growth C. reduced feeding rates D. both A and C Thanks for the help!
Lee Holmes deposited $16,600 in a new savings account at 9% interest compounded semiannually. At the beginning of year 4, Lee deposits an additional $41,600 at 9% interest compounded semiannually. At the end of year 6, what is the balance in Lee's account? I have tried a...
Thank you both of you, but both answers are incorrect. I can input the answer to see if it is right, and both are wrong. We are all missing something. If anyone else wants to try, please do!!!
Lee Holmes deposited $16,600 in a new savings account at 9% interest compounded semiannually. At the beginning of year 4, Lee deposits an additional $41,600 at 9% interest compounded semiannually. At the end of year 6, what is the balance in Lee's account? I have tried a...
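One reading of the savings-account problem above, sketched in Python. It assumes "9% compounded semiannually" means 4.5% per half-year period and that the second deposit earns interest from the start of year 4 through the end of year 6 (6 periods); other interpretations give different answers, which may be why the posted attempts disagreed:

```python
rate = 0.09 / 2   # periodic rate: 4.5% per half-year

balance = 16600 * (1 + rate) ** 6   # first deposit grows 3 years (6 periods)
balance += 41600                    # second deposit at the beginning of year 4
balance *= (1 + rate) ** 6          # everything grows 3 more years (6 periods)
print(f"balance at end of year 6: ${balance:,.2f}")
```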
thanks so much!
I couldn't find it on any of those sites.
Hi I was wondering if someone could please help me with this question: What was the place of residence of the Ho-Chunk war chief Red Bird? I have searched many places and have had no luck finding the
answer. Thanks much! =)
can someone tell me what this means please Esta campana muda en el campanario, esta mitad partida por la mitad, estos besos de Judas, este calvario, este look de presidiario, esta cura de humildad.
Este cambio de acera de tus caderas, estas ganas de nada, menos de ti, este arr...
015 (part 1 of 2) 10.0 points A car is parked on a cliff overlooking the ocean on an incline that makes an angle of 21 below the horizontal. The negligent driver leaves the car in neutral, and the
emergency brakes are defective. The car rolls from rest down the incline wi...
From the top of a lighthouse 75 feet high, the cosine of the angle of depression of a boat out at sea is 4. To the nearest foot, how far is the boat from the base of the lighthouse?
I did, but I don't understand what he typed. Thanks
Use the distance formula and the Pythagorean theorem to find the distance, to the nearest tenth, from R(6,-5) to U(-2,6)
write an equation in point-slope form for the perpendicular bisector of the segment with endpoints A(-3,-3) and B(5,6)
what does 'sqrt' stand for?
Use the distance formula and the Pythagorean theorem to find the distance, to the nearest tenth, from R(6,-5) to U(-2,6)
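For the R(6, -5) to U(-2, 6) question, the distance formula gives sqrt((-2 - 6)^2 + (6 - (-5))^2) = sqrt(185). A quick check:

```python
import math

rx, ry = 6, -5
ux, uy = -2, 6
d = math.sqrt((ux - rx) ** 2 + (uy - ry) ** 2)   # sqrt(64 + 121) = sqrt(185)
print(round(d, 1))   # 13.6
```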
Sorry, if 6, 10, and 12, and 9, 15 and x are the lengths of the corresponding sides of two similar triangles, what is the value of x?
if 6, 10, and 12, and 9, and x are the lengths of the corresponding sides of two similar triangles, what is the value of x?
you rock Reiny! And BTW..I call them vertices also...it was a typo. Thanks for your help!
this one is killing me for some reason: Find any of the values of k so that (-3, 4), (-8,5), and (-5, k) are the vertices of a right triangle.
makes sense now. thanks so much!
the lengths of 2 sides of a triangle are 2 inches and 10 inches. find the range of possible lengths for the third side, s.
Life saver!! = )
The cosine law is confusing to me. This lesson gives no examples. The triangle is obtuse?
Tell if the measures 9,11, and 7 can be side lengths of a triangle. if so, classify the triangle as acute, right, or obtuse?
thanks so much!! You are too cool!
so the answer is yes?
Can a triangle have sides with lengths 7, 13, and 9?
Thanks so much! I get it now
Ok, I think I understand. so the coordinates of N are (4, -8)
M is the midpoint of segment AN, A has coordinates (-6,2) and M has coordinates (-1, -3). What are the coordinates of N?
One equilateral triangle has sides 5 ft long. Another equilateral triangle has sides 7 ft long. Find the ratio of the areas of the triangles
Given that triangles KON and LOM are similar, find the scale factor
That is the question. I don't get it either.
Using the information about John, Jason, and Julie, can you uniquely determine how they stand with respect to each other? On what basis? Statement 1: John and Jason are standing 12 feet apart.
Statement 2: Julie is standing 31º NW of Jason. Statement 3: John is standing 4...
find m_DCB, given angles A and F are congruent, angles B and E are congruent, and m_CDE=26
Triangles ABF and EDG are congruent. Triangles ABF and GCF are equilateral. AG = 24 and CG = 1/5 AB. What is the total distance from A to B to C to D to E?? Please I dont understand this
St monica
Bill and Amy want to ride their bikes from their neighbourhood to school which is 14.4 kilmeters away. It takes Amy 40 mins to arrive at school. Bill arrives 20 mins after Amy. How much faster (in
meters/second) is Amy's average speed for the entire trip?
novel phrase the domesticated generation fell from him ?
literature the novel
allegory of the call of the wild
What is the average area of a theme park? Thanks!
5th grd math- metric conversion
30m = 6000, i have converted to cm, in, and ft, no where close to getting 6000? help
Our text does not even refer to multispectral.
Multispectral refers to: A. UV, IR, and visible light B. the many colors in visible light C. UV, IR, and x-rays Thanks!
In an arithmetic sequence, the first term is 100 and the sixth term is 85. Find the common difference. Then, find the 50th term of the sequence. Do I use the formula an=a1+(n-1)d? If so, what would my "n" be in the equation?
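For the arithmetic-sequence question above: yes, use a_n = a_1 + (n - 1)d, first with n = 6 to find d, then with n = 50. A quick check:

```python
a1, a6 = 100, 85
d = (a6 - a1) / (6 - 1)      # a6 = a1 + 5d, so d = (85 - 100) / 5 = -3
a50 = a1 + (50 - 1) * d      # a50 = 100 + 49 * (-3) = -47
print(d, a50)
```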
social studies
Who is the richest person in Paraguay? Thanks!
The siren of a police car produces a sound of frequency 420Hz.A man sitting next to the road notices that the pitch of the sound changes as the car moves towards and then away from him.1- WRITE DOWN
THE NAME OF THE PHENOMENON. 2-ASSUME THAT THE SPEED OF THE SOUND IN AIR IS 340...
True or false: the work done by a non-zero net force on an object, moving on a horizontal plane, is equal to the change in the potential energy of the object? And don't forget to give the correct explanation for my question.
Physical science
What do you call the phenomenon that causes dispersion of white light when it passes through a triangular prism
importance of uniformity of weight test in pharmacy
Can someone please help me with this question? Thanks! What organic molecules make up the structure that surrounds the DNA molecule?
Describe all the points on the Earth's surface that are exactly 4000 miles from the North Pole. If you need to, use 3960 miles for the radius of the Earth.
An isosceles triangle has two sides of length w that make a 2α-degree angle. Write down two different formulas for the area of this triangle, in terms of w and α (Greek alpha). By equating the formulas, discover a relation involving sin 2α, sin...
4th grade
I guess it would be 7 feet on each side. THANKS JOE!!!!!!!!!!!:D
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=lydia","timestamp":"2014-04-20T12:13:32Z","content_type":null,"content_length":"25647","record_id":"<urn:uuid:33d2da37-42f4-45c0-81f4-2903a26633c2>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
|
<i>Landmarks</i>—The Curved Space around a Spinning Black Hole
Published September 1, 1963
Landmarks articles feature important papers from the archives of the Physical Review journals.
General relativity describes how mass and energy induce curvature in spacetime. The math is intricate, so exact solutions to Einstein’s equations are rare. One important solution appeared in Physical
Review Letters in 1963. Analysis over the following years showed it to be the unique description of curved spacetime around a spinning black hole. Since all black holes spin, this solution has been
essential to astrophysicists studying the behavior of black holes and of matter in their vicinity.
Solutions to Einstein’s general relativity equations describe the curvature of space with a mathematical function called the metric tensor, or “metric.” Given the coordinates of two points in space,
the metric tells you how to compute the distance between them, since the usual Pythagorean theorem doesn’t apply in curved space. On the surface of a sphere, for example, the Pythagorean answer is
always too small.
Einstein published his theory of gravitation in 1916. That same year, the German mathematical physicist Karl Schwarzschild found a spherically symmetric solution for empty space, with no time
variation but with curvature everywhere. Mathematicians eventually showed that Schwarzschild’s metric had a central singularity—corresponding to a finite mass packed into a point—and a surrounding
spherical surface called the “event horizon” at what became known as the Schwarzschild radius. This metric is now known as the description of a static black hole (see 2004 Focus Landmark).
In 1918 theorists used approximate methods to show that a rotating mass also distorts spacetime via an effect called frame dragging [1]. An example is that bodies traveling around the Earth on
identical orbits, but in opposite directions, will measure slightly different times for one circuit. A complete solution to the Einstein equations for a rotating body would have the symmetry of a
cylinder, but even this modest departure from spherical symmetry made solving the equations fearsomely difficult.
Roy Kerr, a New Zealand relativist at the University of Texas in Austin and Wright-Patterson Air Force Base in Ohio, came at the problem from a different angle. Exploring a certain class of metrics,
he found an exact solution with two free parameters. One corresponded to the mass parameter in the Schwarzschild metric, but the significance of the other was not obvious. By examining the form of
the new solution at large distances from the origin and by comparing it to known, approximate solutions for a rotating object, Kerr showed that the second parameter represented angular
momentum—essentially, the amount of spin [2].
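Although the article never displays it, the Kerr solution can be written compactly. In the Boyer-Lindquist coordinates introduced in Ref. [3], with geometrized units G = c = 1, mass M, and spin parameter a = J/M:

```latex
ds^{2} = -\left(1 - \frac{2Mr}{\Sigma}\right) dt^{2}
         - \frac{4 M a r \sin^{2}\theta}{\Sigma}\, dt\, d\varphi
         + \frac{\Sigma}{\Delta}\, dr^{2}
         + \Sigma\, d\theta^{2}
         + \left(r^{2} + a^{2} + \frac{2 M a^{2} r \sin^{2}\theta}{\Sigma}\right) \sin^{2}\theta\, d\varphi^{2},
```

with Σ = r² + a² cos²θ and Δ = r² − 2Mr + a². Setting a = 0 recovers the Schwarzschild metric, matching Kerr's identification of the first parameter with mass.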
The Kerr metric was clearly a description of spacetime curvature around a spinning mass. But interpreting the physical significance of some mathematical oddities close to the origin proved difficult.
Whereas the Schwarzschild metric has a central point singularity, the Kerr metric has a singularity that looks like “a ring in the equatorial plane … spinning at the speed of light,” explains Werner
Israel of the University of Victoria, Canada. In addition, the Kerr metric possesses two surfaces that looked like event horizons, with one nested inside the other but touching at the poles. Robert
Wald of the University of Chicago says that the meaning of these surfaces was “not clear from the Kerr metric written in Kerr’s coordinates.”
A few years later, analyses showed that only the inner surface is a true event horizon, and that the smaller ring singularity is not accessible to the world outside [3]. The outer surface defines a
region within which frame-dragging is so powerful that even light can only orbit in one direction. That opened the way for interpretation of the Kerr metric as a description of a spinning black hole,
but it wasn’t the end of the story. The Kerr metric has just two parameters, mass and angular momentum, but astrophysical objects, Israel says, have “different oblateness, bumps, and other
deformities” that the Kerr metric cannot account for.
Studies by a number of physicists in the 1970s resolved this point by proving what was called the “no hair” theorem. Any rotating object, as it collapses into a black hole, emits gravitational
radiation that erases all of its irregularities, so that mass and angular momentum are its only surviving characteristics (along with electric charge, if there is any). Israel cites the Nobel
laureate S. Chandrasekhar, who said that in his entire scientific life, “the most shattering experience has been the realization that [the Kerr metric] provides the absolutely exact representation of
untold numbers of massive black holes that populate the Universe.” [4]
David Lindley is a freelance writer in Alexandria, Virginia, and author of Uncertainty: Einstein, Heisenberg, Bohr and the Struggle for the Soul of Science (Doubleday, 2007).
1. J. Lense and H. Thirring, “Über den Einfluss der Eigenrotation der Zentralkörper auf die Bewegung der Planeten und Monde nach der Einsteinschen Relativitätstheorie,” Phys. Zeitschr. 19, 156 (1918).
2. In “Discovering the Kerr and Kerr-Schild metrics,” a 2008 memoir available at arXiv:0706.1109, Kerr gives a detailed account of the mathematical arguments that led to his discovery.
3. R. H. Boyer and R. W. Lindquist, “Maximal Analytic Extension of the Kerr Metric,” J. Math. Phys. 8, 265 (1967); B. Carter, “Global Structure of the Kerr Family of Gravitational Fields,” Phys. Rev. 174, 1559 (1968).
4. S. Chandrasekhar, Truth and Beauty (University of Chicago Press, Chicago, 1987) p. 54.
|
{"url":"http://physics.aps.org/articles/v7/18","timestamp":"2014-04-20T06:25:20Z","content_type":null,"content_length":"20204","record_id":"<urn:uuid:debfe562-2194-47dd-b2d5-ba903f565685>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Purplemath Forums
Triangle ABC is a right triangle, with B being the right angle and line CB being the ground. An obtuse triangle is inside triangle ABC. Let's call this obtuse triangle ADC. The diagram should look
something like this: http://i646.photobucket.com/albums/uu183/blackhole252/Triangle.jpg Suppose angle A...
Trig Word Prob: ship goes 62 units at 12 deg, then 111 units
A ship travels 62 units on a bearing of 12deg, and then travels on a bearing of 102deg for 111 units. Find the distance from the starting point to the end point. Round to the nearest unit. The answer
apparently seems to be 127 units, but I have no idea how to get that. What I'm asking for: A nice d...
Re: Trig Word Prob: ship goes 62 units at 12 deg, then 111 units
Is this what you mean? If this is the correct diagram, I still can't solve it.
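One way to get the 127 in the thread above: the bearings 12° and 102° differ by exactly 90°, so the two legs of the trip are perpendicular and the Pythagorean theorem applies directly (no law of cosines needed). A quick check:

```python
import math

leg1, leg2 = 62.0, 111.0
# Bearings of 12 deg and then 102 deg differ by 90 deg, so the course
# makes a right angle and the straight-line distance is the hypotenuse.
d = math.hypot(leg1, leg2)   # sqrt(62**2 + 111**2) = sqrt(16165)
print(round(d))   # 127
```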
|
{"url":"http://www.purplemath.com/learning/search.php?author_id=8610&sr=posts","timestamp":"2014-04-19T12:40:18Z","content_type":null,"content_length":"16651","record_id":"<urn:uuid:428b561e-06ba-425c-bd1b-64f9f30e2229>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: Concatenating matrix columns causes loss of precision
Replies: 4 Last Post: Aug 24, 2013 12:03 AM
Rohit Concatenating matrix columns causes loss of precision
Posted: Aug 23, 2013 1:41 AM
Posts: 2
Registered: 8/23/13

Hi all,
I have a curious problem I'm hoping someone can help me with. I have a column matrix, say temp1, with floating point numbers in the following format (displayed this way using format long g):
and another nx3 matrix (temp2) with values like so:
25.59989 -17.82167 31.19241
25.17558 -17.59459 30.71448
25.18788 -17.39987 30.61347
I concatenate the 2 matrices column wise, temp = [temp1 temp2];
The resulting matrix is:
1.334305e+09 24.40084 -17.98591 30.31327
1.334305e+09 24.23554 -17.68831 30.00396
1.334305e+09 25.31328 -17.61529 30.83927
I want the resulting matrix to have the original precision of temp1. How do I do this? I have already tried format long g. Writing to a file with dlmwrite and precision set to %.5f
results in the fractional part zeroed out.
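A note on the question above: concatenation does not actually discard precision. Both matrices are stored in double precision, and MATLAB merely picks one common display format for the whole concatenated matrix. A NumPy analogue (Python used here for illustration; the temp1 values are made up, since the originals did not survive extraction) shows the full value survives concatenation:

```python
import numpy as np

temp1 = np.array([[1334305527.77],
                  [1334305528.77],
                  [1334305529.77]])          # illustrative timestamps
temp2 = np.array([[24.40084, -17.98591, 30.31327],
                  [24.23554, -17.68831, 30.00396],
                  [25.31328, -17.61529, 30.83927]])

temp = np.hstack([temp1, temp2])   # column-wise concatenation, like [temp1 temp2]

# The stored bits are identical; only default printing truncates the display.
assert np.array_equal(temp[:, 0], temp1[:, 0])
print(f"{temp[0, 0]:.2f}")   # 1334305527.77
```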
Date Subject Author
8/23/13 Concatenating matrix columns causes loss of precision Rohit
8/23/13 Re: Concatenating matrix columns causes loss of precision James Tursa
8/23/13 Re: Concatenating matrix columns causes loss of precision Rohit
8/23/13 Re: Concatenating matrix columns causes loss of precision James Tursa
8/24/13 Re: Concatenating matrix columns causes loss of precision Roger Stafford
|
{"url":"http://mathforum.org/kb/message.jspa?messageID=9230583","timestamp":"2014-04-19T17:42:31Z","content_type":null,"content_length":"21573","record_id":"<urn:uuid:473c7867-c2ef-479f-85ee-64a21bd7bd58>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
|
I have been a math instructor at De Anza College for almost 20 years. I really enjoy the students and the college atmosphere.

Contact Info: Office: S76C
E-mail: deansusan@deanza.edu
Phone: 408-864-8865
Office Hours: SPRING 2007: MW 11:30 AM - 12:20 PM in S76C
T 1:30 - 3:20 PM in S44; TH 3:30 - 5 PM in S44
Classes for SPRING 2007:

Math 114 - 12, College Math Preparation Level 3: Intermediate Algebra
Lecture meets on campus MW in room S46 12:30 - 1:20 PM and meets TTHF in room S44 (lab) 12:30 - 1:20 PM
Math 10 - 21, Elementary Statistics and Probability
Class meets 2 hours 10 minutes on MONDAY and on WEDNESDAY 1:30 - 3:40 PM.
Math 10 - 27, Elementary Statistics and Probability HYBRID class
Class meets 2 hours 10 minutes TUESDAY 3:45 - 5:55 PM.
See the De Anza Web Site for registration information: http://www.deanza.edu

Math and Art

The mathematician Maxime Bocher (1867-1918) wrote that he liked to look at mathematics more as an art than a
science: "The activity of the mathematician, constantly creating as he is, guided although not controlled by the external world of senses, bears a resemblance, not fanciful, I believe, but real, to
the activities of the artist, of a painter, let us say. Rigorous deductive reasoning on the part of the mathematician may be likened here to the technical skill in drawing on the part of the
painter. Just as one cannot become a painter without a certain amount of skill, so no one can become a mathematician without the power to reason accurately up to a certain point.... Other qualities
of a far more subtle sort, chief among which in both cases is imagination, go to the making of a good artist or a good mathematician." Clearly, without math, there would be no architecture and
without math, the Renaissance artist Filippo Brunelleschi could not have invented linear perspective. However, artists use math in far more subtle ways as well. The lines they weave, the shapes
they use, the space they manipulate, and the proportions they create are all inextricably entwined with math.
|
{"url":"http://faculty.deanza.edu/deansusan/","timestamp":"2014-04-16T10:42:00Z","content_type":null,"content_length":"16061","record_id":"<urn:uuid:94986be9-4f60-4af3-9b9f-8f4666869575>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This book offers a self-contained account of the 3-manifold invariants arising from the original Jones polynomial. These are the Witten-Reshetikhin-Turaev and the Turaev-Viro invariants. Starting
from the Kauffman bracket model for the Jones polynomial and the diagrammatic Temperley-Lieb algebra, higher-order polynomial invariants of links are constructed and combined to form the 3-manifold
invariants. The methods in this book are based on a recoupling theory for the Temperley-Lieb algebra. This recoupling theory is a q-deformation of the SU(2) spin networks of Roger Penrose.
The recoupling theory is developed in a purely combinatorial and elementary manner. Calculations are based on a reformulation of the Kirillov-Reshetikhin shadow world, leading to expressions for all
the invariants in terms of state summations on 2-cell complexes. Extensive tables of the invariants are included. Manifolds in these tables are recognized by surgery presentations and by means of
3-gems (graph encoded 3-manifolds) in an approach pioneered by Sostenes Lins. The appendices include information about gems, examples of distinct manifolds with the same invariants, and applications
to the Turaev-Viro invariant and to the Crane-Yetter invariant of 4-manifolds.
"This extremely useful volume provides a self-contained treatment of the construction of 3-manifold invariants directly from the combinatorics of the Jones polynomial in Kauffman's bracket
formulation."--Mathematical Reviews
Table of Contents
Another Princeton book authored or coauthored by Louis H. Kauffman:
Subject Area:
• Mathematics
Hardcover: Not for sale in Japan
|
{"url":"http://press.princeton.edu/titles/5528.html","timestamp":"2014-04-16T04:30:54Z","content_type":null,"content_length":"13986","record_id":"<urn:uuid:93b83aa4-1604-4d91-941a-b2ae47200dc6>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
|
cat map re-transformation
up vote 0 down vote favorite
Is there any way of moving from one cat map transformation to the other without resetting parameters? For example, suppose you have two matrices $A$ and $B$, each permuted with different cat map parameters, namely $p_1$, $q_1$ and $p_2$, $q_2$. The goal is, given the permuted matrix of $A$ under $p_1$, $q_1$ (which we call $A_p$) and the parameters of $B$ (in this case $p_2$ and $q_2$), to get a matrix called $A_{bp}$ which is equal to permuting the original $A$ with $p_2$ and $q_2$. Obviously, we are not allowed to re-permute $A_p$ to get $A$ and then apply $p_2$ and $q_2$ to $A$.
chaos ds.dynamical-systems arithmetic-dynamics
Can you clarify what your definitions are? By "cat map" I assume you mean "toral automorphism", that is a map $\mathbb{R}^d/\mathbb{Z}^d$ given by a matrix $A\in SL(d,\mathbb{Z})$, and probably you
want $A$ to have no eigenvalues on the unit circle. But I don't know what you mean by "cat map parameters" or "permuted matrix", so right now I don't understand your question. – Vaughn Climenhaga
Oct 10 '12 at 21:12
@Vaughn: Thanks for your comments and sorry for the confusion. Yes, I meant "toral automorphism" with the application of something like the Arnold cat map, where we have a map $\begin{bmatrix} x_n\\y_n \end{bmatrix} =\begin{bmatrix} 1&p\\ q&1+pq \end{bmatrix} \begin{bmatrix} x_{n-1}\\y_{n-1} \end{bmatrix} \bmod n$. Since $\det\begin{bmatrix} 1 & p\\ q & pq+1 \end{bmatrix}=1$, the map is area preserving. So we permute $A$ and $B$ using the map with two different parameter sets $p_1$, $q_1$ and $p_2$, $q_2$, which gives us $A_p$ and $B_p$. – Shanti Oct 11 '12 at 3:02
Then, we want to use $B_p$ and $p_2$, $q_2$ ,$p_1$, and $q_1$ to permute $B_p$ further such that the result is like permuting $B$ with $p_1$ and $q_1$ using the above mapping. – Shanti Oct 11 '12
at 3:05
Still don't understand the question: what do you mean by "moving from one transformation to another"? – Anthony Quas Oct 11 '12 at 10:57
@Quas: I mean instead of permuting $B_p$ back to $B$ with its parameters and then permuting the resulted $B$ with $A$'s parameter to get the same permuted matrix as $A_p$, directly go from $B_p$ to
$A_p$ without doing back and forth. – Shanti Oct 11 '12 at 14:45
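To make the setup concrete, here is a minimal Python sketch of the quoted map acting as a permutation of a square matrix. The function names (`cat_map`, `permute`) and the mod-N convention are our own assumptions; this only illustrates the permutation itself, not a way to compose parameter sets, which is what the question asks for.

```python
def cat_map(point, p, q, N):
    """One step of [[1, p], [q, p*q + 1]] acting on (x, y) mod N."""
    x, y = point
    return ((x + p * y) % N, (q * x + (p * q + 1) * y) % N)

def permute(matrix, p, q):
    """Permute the entries of a square matrix by the cat map."""
    N = len(matrix)
    out = [[None] * N for _ in range(N)]
    for x in range(N):
        for y in range(N):
            nx, ny = cat_map((x, y), p, q, N)
            out[nx][ny] = matrix[x][y]
    return out
```

Because the matrix has determinant 1, the map is a bijection mod N for any integer p and q, so `permute` always produces a full rearrangement of the entries.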
|
{"url":"http://mathoverflow.net/questions/109321/cat-map-re-transformation","timestamp":"2014-04-17T15:32:43Z","content_type":null,"content_length":"51455","record_id":"<urn:uuid:a65f2e8d-aa74-4383-ac17-43c87147b3bc>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The complexity of resolution procedures for theorem proving in the propositional calculus
- Information and Computation , 1988
"... An algorithm for satisfiability testing in the propositional calculus with a worst case running time that grows at a rate less than 2^((.25+ε)L) is described, where L can be either the length of the input expression or the number of occurrences of literals (i.e., leaves) in it. This represents a new ..."
Cited by 15 (1 self)
Add to MetaCart
An algorithm for satisfiability testing in the propositional calculus with a worst case running time that grows at a rate less than 2^((.25+ε)L) is described, where L can be either the length of the input expression or the number of occurrences of literals (i.e., leaves) in it. This represents a new upper bound on the complexity of non-clausal satisfiability testing. The performance is achieved by using lemmas concerning assignments and pruning that preserve satisfiability, together with choosing a "good" variable upon which to recur. For expressions in conjunctive normal form, it is shown that an upper bound is 2^(.128L).
, 1989
"... This thesis explores the relative complexity of proofs produced by the automatic theorem proving procedures of analytic tableaux, linear resolution, the connection method, tree resolution and
the Davis-Putnam procedure. It is shown that tree resolution simulates the improved tableau procedure and th ..."
Cited by 9 (0 self)
Add to MetaCart
This thesis explores the relative complexity of proofs produced by the automatic theorem proving procedures of analytic tableaux, linear resolution, the connection method, tree resolution and the
Davis-Putnam procedure. It is shown that tree resolution simulates the improved tableau procedure and that SL-resolution and the connection method are equivalent to restrictions of the improved
tableau method. The theorem by Tseitin that the Davis-Putnam Procedure cannot be simulated by tree resolution is given an explicit and simplified proof. The hard examples for tree resolution are
contradictions constructed from simple Tseitin graphs.
, 2008
"... doi:10.1088/0957-4484/19/39/395103 The use of gold nanoparticle aggregation for DNA computing and logic-based biomolecular detection ..."
"... Abstract: An algorithm for satisfiability testing in the propositional calculus with a worst case running time that grows at a rate less than 2^((.25+ε)L) is described, where L can be either the length of the input expression or the number of occurrences of literals (i.e., leaves) in it. This represents ..."
Add to MetaCart
Abstract: An algorithm for satisfiability testing in the propositional calculus with a worst case running time that grows at a rate less than 2^((.25+ε)L) is described, where L can be either the length of the input expression or the number of occurrences of literals (i.e., leaves) in it. This represents a new upper bound on the complexity of non-clausal satisfiability testing. The performance is achieved by using lemmas concerning assignments and pruning that preserve satisfiability, together with choosing a "good" variable upon which to recur. For expressions in conjunctive normal form, it is shown that an upper bound is 2^(.128L).
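The branching scheme these abstracts describe (assign a variable, simplify, and recur on both truth values) is the familiar DPLL pattern. Below is a deliberately naive Python sketch of that pattern; the variable choice here is arbitrary, whereas the cited algorithm's bound depends on choosing a "good" variable and on extra pruning lemmas not reproduced here.

```python
def dpll(clauses):
    """Satisfiability test for CNF clauses given as lists of nonzero
    ints: a positive int is a literal, a negative int its negation."""
    if not clauses:
        return True                    # every clause satisfied
    if any(len(c) == 0 for c in clauses):
        return False                   # an empty clause cannot be satisfied
    lit = clauses[0][0]                # naive branching choice
    for choice in (lit, -lit):
        # drop satisfied clauses, remove the falsified literal elsewhere
        reduced = [[l for l in c if l != -choice]
                   for c in clauses if choice not in c]
        if dpll(reduced):
            return True
    return False
```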
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2132319","timestamp":"2014-04-19T19:21:52Z","content_type":null,"content_length":"19125","record_id":"<urn:uuid:146b4b1a-4ea6-4e5f-a397-c0fb97ed3cd6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From: Tom Duff
Newsgroups: comp.graphics.algorithms,sci.math
Subject: Re: Algorithm for convex hull in higher dimensions
Date: Thu, 02 Oct 1997 14:14:55 -0700

Oxford Materials wrote:
> I have not heard of any algorithm for finding the convex hull
> of a set of points which works in higher dimensions, so I have
> created one which is based on the following 2 properties:
> [ad hoc, non-algorithm deleted]

Try quickhull -- it's easy, fast, accurate, versatile and already written:
http://www.geom.umn.edu/software/qhull/
--
Tom Duff, KF6LWB

==============================================================================
From: Jeff Erickson
Newsgroups: comp.graphics.algorithms,sci.math
Subject: Re: Algorithm for convex hull in higher dimensions
Date: Thu, 02 Oct 1997 19:14:24 -0400

Oxford Materials wrote:
> I have not heard of any algorithm for finding the convex hull
> of a set of points which works in higher dimensions

I have. :-) There are several, of varying complexities and running times. For a nice survey, see the chapter "Convex Hull Computations" by Raimund Seidel in the CRC Handbook of Discrete and Computational Geometry. If you want code, see any of the following web pages. The first two are specific programs; the last two are lists pointing to several more programs.

http://www.geom.umn.edu/software/qhull/
http://netlib.bell-labs.com/netlib/voronoi/hull.html
http://www.geom.umn.edu/software/cglist/ch.html
http://www.cs.duke.edu/~jeffe/compgeom/code.html#topes

You've rediscovered (a variant of) the "quickhull" algorithm -- starting with some "base" polytope, insert the point farthest from some facet until all the points are inside. Except possibly for the initial step, this is the algorithm implemented in "qhull" (at the first URL above).

> The problem is that the polytope may not always remain convex
> (that is, adjacent faces may be angled towards each other) during
> this algorithm. Thus the "furthest in the normal direction" point
> may not be associated with the current face.

Also, even if the furthest point from a facet is associated with that facet, the point might be able to "see" other facets. Just joining the facet with the new point won't be enough.

One way to fix the algorithm is to fill in the concavities as soon as they appear. After you connect a point to a facet, creating 3 new triangular facets, check the edges of the facet you threw away. If an edge is concave, then you need to "flip" the pair of triangles that contains it:

    ---           ---
    |\ |          | /|
    | \ |  ====>  | / |
    |  \|         |/ |
    ---           ---

(If you prefer to think of the convex hull as a solid object instead of a surface, think of gluing a tetrahedron into the groove between the two facets.) This deletes two facets and creates two new ones. Any points that were associated with the two old facets either become interior points or are now associated with one of the new facets.

Every time you create a new facet, recursively check any of its edges that do not touch the point you're trying to add. After each flip, there will be two new edges to check, which may lead to more flips. The edges you check will expand out from the new point in a wave. Eventually this process will delete precisely the old facets that the new point could see. If you're familiar with the incremental flipping algorithm for Delaunay triangulations, bells should be ringing in your head right about now. It's not just a similar algorithm; it's the SAME algorithm.

All this stuff works in higher dimensions, too, except that flips are a little more complicated. If you add the points in random order, instead of always adding the "furthest" point, the running time of this algorithm (in 3d) is O(n log n), which is the best you can do in the worst case. However, adding the "furthest" points first is probably going to be even faster in practice, since you'll quickly throw away most of the internal points.

All this assumes (implicitly) that the facets are going to be triangles at every stage of construction. If your point set contains four or more points on the same plane, you could have non-triangular facets, which could cause you problems. Even worse, if some four points are almost on the same plane, your algorithm may think they're coplanar due to lack of numerical precision, and this almost certainly WILL cause problems. The programs I pointed to earlier, qhull and hull, devote a LOT of attention to this aspect of the computation.
--
Jeff Erickson                  Center for Geometric Computing
jeffe@cs.duke.edu              Department of Computer Science
http://www.cs.duke.edu/~jeffe  Duke University
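For readers who want to see the quickhull idea in its simplest (2-D) form, here is a toy Python sketch. It is our own illustration of the "farthest point from a facet" recursion, not the robust qhull code recommended above, and it ignores the degeneracy issues Jeff warns about.

```python
def cross(o, a, b):
    """Twice the signed area of triangle o-a-b; positive if b lies
    to the left of the directed line o -> a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def quickhull(points):
    """Convex hull of 2-D points, vertices in clockwise order."""
    def side(a, b, pts):
        # keep only points strictly left of edge a -> b
        pts = [p for p in pts if cross(a, b, p) > 0]
        if not pts:
            return [a]
        far = max(pts, key=lambda p: cross(a, b, p))  # farthest from the edge
        return side(a, far, pts) + side(far, b, pts)

    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    lo, hi = pts[0], pts[-1]
    return side(lo, hi, pts) + side(hi, lo, pts)
```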
|
{"url":"http://www.math.niu.edu/~rusin/known-math/97/conv.hull","timestamp":"2014-04-20T05:59:46Z","content_type":null,"content_length":"5065","record_id":"<urn:uuid:c44be543-970d-4da1-99c1-937e82ffc0b7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Submitted by Anonymous on September 29, 2010.
Say that you are told that one of the boxes contains $a and the other contains twice that, $2a. Consider the example where you open a box and find $100 in it. Now you aren't sure whether this is equal to 2a or a (i.e., if you picked the $a box then the other contains $200, or if you picked the $2a box then the other contains only $50).

The expected value if you decide to switch is 5/4 of the $100 you found, i.e. $125.
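The arithmetic behind that figure, spelled out (this just reproduces the 50/50 weighting used above, which is precisely the assumption the two-envelope paradox turns on):

```python
observed = 100                      # the amount found in the opened box
other_if_we_hold_a = 2 * observed   # we opened the $a box, other holds $2a
other_if_we_hold_2a = observed / 2  # we opened the $2a box, other holds $a

# naive 50/50 weighting, as in the comment above
expected_if_switch = 0.5 * other_if_we_hold_a + 0.5 * other_if_we_hold_2a
print(expected_if_switch)           # 125.0, i.e. (5/4) * 100
```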
|
{"url":"http://plus.maths.org/content/comment/reply/5230/1577","timestamp":"2014-04-19T15:28:35Z","content_type":null,"content_length":"20295","record_id":"<urn:uuid:6f4c61fa-b330-4b13-82a2-99a837e58a72>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
|
finding variance of truncus pdf
August 26th 2006, 06:25 PM
finding variance of truncus pdf
I know that E(X^2) is the definite integral of x^2 * f(x).
But what if f(x) = (4/3)(x^-2)? The answer in the textbook says it is [4x/3].
I'm so confused. Thanks in advance :)
August 26th 2006, 10:39 PM
Originally Posted by freswood
I know that E(X^2) is the definite integral of x^2 * f(x).
But what if f(x) = (4/3)(x^-2)? The answer in the textbook says it is [4x/3].
I'm so confused. Thanks in advance :)
There is something wrong with the question. Your f(x) is not a density without
a statement about its domain, which in this case would probably be
(4/3, infty). Then as the integration "integrates out" the dependency on x,
what you report as the textbooks answer cannot be the answer to the
question you have posted.
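Since both replies hinge on the missing domain, here is the computation spelled out (our reconstruction, assuming the support is $(\tfrac{4}{3},\infty)$, which is what makes $f$ integrate to 1):

$\displaystyle \int_{4/3}^{\infty}\frac{4}{3}x^{-2}\,dx=\left[-\frac{4}{3x}\right]_{4/3}^{\infty}=1$

$\displaystyle E(X^2)=\int_{4/3}^{\infty}x^{2}\cdot\frac{4}{3}x^{-2}\,dx=\int_{4/3}^{\infty}\frac{4}{3}\,dx$

The integrand collapses to the constant $\frac{4}{3}$, whose antiderivative is the $\frac{4x}{3}$ quoted by the textbook; but evaluated over $(\tfrac{4}{3},\infty)$ the integral diverges, so $E(X^2)$ (and hence the variance) does not exist.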
|
{"url":"http://mathhelpforum.com/advanced-statistics/5147-finding-variance-truncus-pdf-print.html","timestamp":"2014-04-21T10:35:56Z","content_type":null,"content_length":"4255","record_id":"<urn:uuid:054a5207-9156-471a-9ff6-b5033843b4a8>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
|
While solving Sudoku, including the toughest puzzles, is it possible to devise a set of algorithms so as to eliminate the need for backtracking altogether? That is, suppose there is a function which includes, say, 15 algorithms to find a cell where only one number is allowed. Is it possible that this function always fills at least one cell, so that if initially the Sudoku has 55 cells to fill, the function will be invoked at most 55 times? That is, the function doesn't need to be a recursive one (backtracking calls for it to be recursive).
This post has been edited by cupidvogel: 05 December 2011 - 12:40 AM
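A "fill at least one forced cell per call" routine of the kind described above can be sketched as a single naked-single pass in Python. This is just one elimination rule out of the hypothetical fifteen; the open question in this post is exactly whether some finite set of such rules always makes progress, which this sketch does not settle.

```python
def naked_single_pass(grid):
    """grid: 9x9 list of lists with 0 for empty cells.  Fill every cell
    that has exactly one legal candidate; return how many were filled."""
    filled = 0
    for r in range(9):
        for c in range(9):
            if grid[r][c]:
                continue
            # values already used in this cell's row, column, and 3x3 box
            used = set(grid[r]) | {grid[i][c] for i in range(9)}
            br, bc = 3 * (r // 3), 3 * (c // 3)
            used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
            candidates = set(range(1, 10)) - used
            if len(candidates) == 1:
                grid[r][c] = candidates.pop()
                filled += 1
    return filled
```

A non-recursive solver would call this (and its sibling rules) in a loop until the grid is full or no rule fires; backtracking is needed only in the latter case.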
|
{"url":"http://www.dreamincode.net/forums/topic/258487-is-backtacking-a-must-in-sudoku-solving-algorithm/page__p__1503086","timestamp":"2014-04-17T08:31:57Z","content_type":null,"content_length":"85218","record_id":"<urn:uuid:5eb9ef9d-3f84-4e66-896d-736b0cef108d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nicholson, GA Math Tutor
Find a Nicholson, GA Math Tutor
...In these subjects, I have a nearly flawless pedagogy and an exceptional understanding of the fundamental concepts of the material. I also have a very strong understanding of differential and
integral calculus, and have tutored many students in these areas - usually classes called calculus I and ...
13 Subjects: including calculus, physics, SAT math, differential equations
...Here we get more involved with non-linear functions as well as imaginary and complex numbers. As an engineer and as a college level tutor, I have worked with algebra 2 type functions and
problems for many years. I have been working with spreadsheets for more than 20 years.
22 Subjects: including algebra 1, SAT math, trigonometry, precalculus
...Throughout my studies and tutoring in mathematics and physics I have paid attention to particular struggles that students have with certain topics, and from that, I have developed my teaching
skills to better suit the education of the students. My goal is to help students increase their grades a...
15 Subjects: including algebra 1, algebra 2, calculus, geometry
...I would love the opportunity to work with you. I earned 2 Bachelors degrees, one in Computer Science and Mathematics. I also have experience tutoring middle and high school students in mathematics. With my education and work experience, I can help your student gain a better understanding of how ...
30 Subjects: including linear algebra, discrete math, ACT Math, SAT math
I am an experienced math instructor with over eight years of experience at the high school and university levels. I taught mathematics at Athens Academy and the University of Georgia. I have
provided tutoring at UGA for my own students as well as the students of other instructors and professors.
14 Subjects: including algebra 2, geometry, prealgebra, trigonometry
Related Nicholson, GA Tutors
Nicholson, GA Accounting Tutors
Nicholson, GA ACT Tutors
Nicholson, GA Algebra Tutors
Nicholson, GA Algebra 2 Tutors
Nicholson, GA Calculus Tutors
Nicholson, GA Geometry Tutors
Nicholson, GA Math Tutors
Nicholson, GA Prealgebra Tutors
Nicholson, GA Precalculus Tutors
Nicholson, GA SAT Tutors
Nicholson, GA SAT Math Tutors
Nicholson, GA Science Tutors
Nicholson, GA Statistics Tutors
Nicholson, GA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/nicholson_ga_math_tutors.php","timestamp":"2014-04-17T16:08:16Z","content_type":null,"content_length":"23824","record_id":"<urn:uuid:d54fe02f-d9ad-453c-844b-d3ff08655ffa>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
|
East Camden, NJ Statistics Tutor
Find an East Camden, NJ Statistics Tutor
I have been a part time college instructor for over 10 years at a local university. While I have mostly taught all levels of calculus and statistics, I can also teach college algebra and
pre-calculus as well as contemporary math. My background is in engineering and business, so I use an applied math approach to teaching.
13 Subjects: including statistics, calculus, geometry, algebra 1
I am a current student of chemical engineering at Rowan University. Chemistry and math are my favorite things, but I am able to tutor for other sciences (at the high school level) other than
chemistry as well. I have a background in mathematics that currently reaches up to calculus III.
7 Subjects: including statistics, chemistry, physics, calculus
...My professional goals are consistent with the vocations I've led. I have completed my Bachelors of Arts; I majored in Spanish. I have many references: from a Nobel Peace Prize nominee, with
whom I worked in Guatemala, to my Professors at Temple; I assure you that I am an excellent tutor.
26 Subjects: including statistics, Spanish, English, writing
...This means I had to endure quite a bit of challenging math and or science courses. From Calculus I up to differential equations each math course created a new set of challenges for me to
overcome in hopes of obtaining my degree. From the science side of things I completed core subjects like Bio...
20 Subjects: including statistics, physics, calculus, geometry
...I performed well on the SAT registering a 2130 and scoring a perfect on the essay portion. I missed one question on the reading and two on the math section. I desire to tutor people who have
the desire to do well on the SAT and do not feel like dealing with the snobbery that may occur with a professional tutor.
16 Subjects: including statistics, reading, chemistry, physics
Related East Camden, NJ Tutors
East Camden, NJ Accounting Tutors
East Camden, NJ ACT Tutors
East Camden, NJ Algebra Tutors
East Camden, NJ Algebra 2 Tutors
East Camden, NJ Calculus Tutors
East Camden, NJ Geometry Tutors
East Camden, NJ Math Tutors
East Camden, NJ Prealgebra Tutors
East Camden, NJ Precalculus Tutors
East Camden, NJ SAT Tutors
East Camden, NJ SAT Math Tutors
East Camden, NJ Science Tutors
East Camden, NJ Statistics Tutors
East Camden, NJ Trigonometry Tutors
Nearby Cities With statistics Tutor
Ashland, NJ statistics Tutors
Briarcliff, PA statistics Tutors
Camden, NJ statistics Tutors
Center City, PA statistics Tutors
East Haddonfield, NJ statistics Tutors
Eastwick, PA statistics Tutors
Edgewater Park, NJ statistics Tutors
Ellisburg, NJ statistics Tutors
Erlton, NJ statistics Tutors
Middle City East, PA statistics Tutors
Middle City West, PA statistics Tutors
South Camden, NJ statistics Tutors
West Collingswood Heights, NJ statistics Tutors
West Collingswood, NJ statistics Tutors
Westmont, NJ statistics Tutors
|
{"url":"http://www.purplemath.com/east_camden_nj_statistics_tutors.php","timestamp":"2014-04-19T05:06:05Z","content_type":null,"content_length":"24592","record_id":"<urn:uuid:4fdc4298-4b06-4778-a962-035a289a862f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Solve sin x + sin 2x + sin 3x = 1
Last edited by mr fantastic; July 24th 2011 at 07:02 PM. Reason: Re-titled.
$\displaystyle \sin 2x = 2\sin x \cos x$ $\displaystyle \sin 3x = 3\sin x - 4\sin^3 x$
can you help me complete? it not so easy like that
Post #2 puts you on the right track. I'll add that line 2 of that reply follows from using the compound angle formula on sin(x + 2x). You need to show some effort if more help is needed. What have
you tried and where are you still stuck? Are you expected to solve it by hand or using CAS technology?
Hint! I will post the whole solution soon, but take a moment and think about why multiplying both sides of the equation by $2\sin{\frac{x}{2}}$ will help you...
Not the whole solution... With help of my formula we get: $\sin 2x\sin{\frac{3x}{2}}=\sin{\frac{x}{2}}$ $2\sin{x}\cos{x}\sin{\frac{3x}{2}}=\sin{\frac{x}{2}}$ $4\sin{\frac{x}{2}}\cos{\frac{x}{2}}\cos{x}\sin{\frac{3x}{2}}=\sin{\frac{x}{2}}$ $\sin{\frac{x}{2}}\left(4\cos{\frac{x}{2}}\cos{x}\sin{\frac{3x}{2}}-1\right)=0$ $\sin{\frac{x}{2}}\left(2\cos{x}(\sin{2x}+\sin{x})-1\right)=0$ $\sin{\frac{x}{2}}\left(2\sin{2x}\cos{x}+\sin{2x}-1\right)=0$
Last edited by Also sprach Zarathustra; July 24th 2011 at 08:02 PM.
Are you sure there's no typo? Maybe the equation is: $\sin(x)+\sin(2x)+\sin(3x)=0$ ?
This equation does not seem to have a clean solution. If you graph the function, you can see that there are two solutions between 0º and 90º (and no others in the range 0º to 360º). The smaller
solution is approximately 9.836º, but that does not look like a recognisable quantity in either degrees or radians. I haven't tried to locate the other solution accurately, but it is somewhere near
71.3º. Again, this is not a nice-looking angle. The solutions for $\sin x$ are not rational. I doubt whether the equation can be solved exactly.
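The numerical roots mentioned above are easy to confirm with a few lines of Python; this is only a numeric check by bisection on f(x) = sin x + sin 2x + sin 3x - 1, not an exact solution.

```python
import math

def f(x):
    return math.sin(x) + math.sin(2 * x) + math.sin(3 * x) - 1

def bisect(lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi]; assumes f changes sign on the bracket."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(math.degrees(bisect(0.05, 0.5)))  # smaller root, about 9.836 degrees
print(math.degrees(bisect(1.2, 1.4)))   # larger root, about 71.2 degrees
```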
|
{"url":"http://mathhelpforum.com/trigonometry/185049-solve-sinx-sin2x-sin3x-1-a.html","timestamp":"2014-04-20T07:15:57Z","content_type":null,"content_length":"79593","record_id":"<urn:uuid:2d89492b-1c33-40a5-b10e-34e400731e12>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
|
User Input Issues
Okay so, I've been having some trouble getting a project to build. The issue is that I'm unsure of how to properly get user input involving bool values. I'll post the code I have so far so you can
have an idea of what I'm saying - and then explain a bit after.
The .h file, Logic.h:
#pragma once

class Logic
{
public:
	Logic();
	Logic(bool x);
	bool Get(); //accessor function
	void Set(bool x); //mutator function
	bool Not();
	bool And(Logic x);
	bool Or(Logic x);
	bool Nand(Logic x);
	bool Nor(Logic x);
	bool ExOr(Logic x);
	bool ExNor(Logic x);
	bool Print();
private:
	bool b;
};
The implementation file, Logic.cpp:
#include "StdAfx.h"
#include "Logic.h"

Logic::Logic()
{
	b = false;
}

Logic::Logic(bool x)
{
	b = x;
}

bool Logic::Get()
{
	return b;
}

void Logic::Set(bool x)
{
	b = x;
}

bool Logic::Not()
{
	return !b;
}

bool Logic::And(Logic x)
{
	return b && x.b;
}

bool Logic::Or(Logic x)
{
	return b || x.b;
}

bool Logic::Nand(Logic x)
{
	return !(b && x.b);
}

bool Logic::Nor(Logic x)
{
	return !(b || x.b);
}

bool Logic::ExOr(Logic x)
{
	return b && x.Not() || !b && x.b;
}

bool Logic::ExNor(Logic x)
{
	return !(b && x.Not() || !b && x.b);
}

bool Logic::Print()
{
	return b;
}
...and the source file:
#include "stdafx.h"
#include <iostream>
#include "Logic.h"
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
	Logic a, b, sum;

	cout << "Enter a value for the first bit... " << endl;
	cin >> //this is where variable " a " would be input

	cout << "Enter a value for the second bit... " << endl;
	cin >> //this is where variable " b " would be input.

	sum = a.And(b);
	cout << "Sum = " << sum.Print() << endl;

	return 0;
}
As you can probably guess, I'm trying to create a program that performs boolean algebra - pretty simple in itself, but I can't for the life of me figure out how to get user input. The user must input either 1 or 0. I've been toying with the Get() and Set() functions, but I can't quite seem to get it right. Build fails with errors each time. I must be completely missing something obvious, it's kind of depressing...
This post has been edited by Chaosnub: 02 November 2009 - 09:11 PM
|
{"url":"http://www.dreamincode.net/forums/topic/136219-user-input-issues-w-bool-values/","timestamp":"2014-04-17T22:06:08Z","content_type":null,"content_length":"131329","record_id":"<urn:uuid:d8af69f0-48e5-4a54-bbe2-247e79daf5cd>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Woodridge, IL Math Tutor
Find a Woodridge, IL Math Tutor
...I have a Master's degree in Biology and a Bachelor's in Genetics and Development. I am passionate about what I teach and will help your son or daughter master the material they need to learn.
I believe in understanding the concepts of biology and genetics, not just memorizing terms.
10 Subjects: including algebra 1, elementary (k-6th), Microsoft Excel, geometry
...This is my current method of teaching/tutoring. It is highly interactive with continuous student feedback and whole body involvement. I believe in learning by doing and-or playing.
19 Subjects: including algebra 1, geometry, precalculus, ACT Math
...I have a Bachelor's degree (2010) in mathematics from the University of Illinois at Urbana-Champaign. I am certified to teach secondary (6-12) mathematics. I student taught in an Algebra 2/
Trigonometry class, teaching 3 sections of it.
12 Subjects: including prealgebra, algebra 1, algebra 2, calculus
...I enjoy helping students understand the subject and realize that Math can be fun and not stressful.Algebra 1 is the basis of all other Math courses in the future and is used in many
professions. Topics include: simplifying expressions, algebraic notation, number systems, understanding and solvin...
11 Subjects: including algebra 1, algebra 2, calculus, geometry
...Math was an important tool that I used to solve complex problems. Therefore, I am very well familiar with math levels ranging from basic arithmetic to differential equations. I have
volunteered for 2 years at an elementary school where I provide one-on-one tutoring for the kids that were falling behind.
11 Subjects: including precalculus, biology, algebra 1, algebra 2
Related Woodridge, IL Tutors
Woodridge, IL Accounting Tutors
Woodridge, IL ACT Tutors
Woodridge, IL Algebra Tutors
Woodridge, IL Algebra 2 Tutors
Woodridge, IL Calculus Tutors
Woodridge, IL Geometry Tutors
Woodridge, IL Math Tutors
Woodridge, IL Prealgebra Tutors
Woodridge, IL Precalculus Tutors
Woodridge, IL SAT Tutors
Woodridge, IL SAT Math Tutors
Woodridge, IL Science Tutors
Woodridge, IL Statistics Tutors
Woodridge, IL Trigonometry Tutors
|
{"url":"http://www.purplemath.com/woodridge_il_math_tutors.php","timestamp":"2014-04-21T11:07:54Z","content_type":null,"content_length":"23782","record_id":"<urn:uuid:fbe66522-ffab-44f3-8ddb-038ba7a48a0f>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
|
HashMap: Obtaining all values in a collision?
03-28-2008, 10:14 PM
HashMap: Obtaining all values in a collision?
Hi, could someone please show me how to obtain the values in a HashMap with the same key / are in a collision.
Thanks a lot
03-29-2008, 04:27 AM
Actually, a HashMap is composed of Map.Entry<K,V> entries, so as you can see, one key maps to one value. If you use map.put(k1, v1), you can obtain v1 using the method map.get(k1); but if you then continue by using map.put(k1, v2), this v1 must be covered by v2, so that you can only obtain v2 if you use the method map.get(k1) once more. So, coming back to your problem: you couldn't get all values in a HashMap, but only get the latest value through map.get(k).
03-29-2008, 10:25 PM
Thanks for the reply, but I'm not sure I folllow.
"v1 must be covered by v2, so that you can only obtain v2 if you use the method map.get(k1) once more"
What is meant by covered?
"you couldn't get all values in a HashMap, but only get the latest value through map.get(k)."
I have realised that the get method only returns the object that was inserted last in the collision. Are you saying it's not possible? I highly doubt that, it would make the HashMap a useless
I have tried to iterate through my whole HashMap, but it gives only the one object value in the collision and carries on to the next key. I must be missing something, I'm hoping somebody can
point me in the right direction here.
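Since HashMap itself keeps only one value per key, the usual way to get "all values for a key" is to map each key to a collection and append to it. Below is a minimal sketch (the class and method names are ours, not a standard API; third-party libraries such as Guava also ship ready-made Multimap types):

```java
import java.util.*;

public class MultiMapSketch {
    static Map<String, List<Integer>> map = new HashMap<>();

    // Append a value to the key's list, creating the list on first use.
    static void add(String key, Integer value) {
        map.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
    }

    public static void main(String[] args) {
        add("k1", 1);
        add("k1", 2);                       // nothing is overwritten
        System.out.println(map.get("k1"));  // [1, 2]
    }
}
```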
|
{"url":"http://www.java-forums.org/new-java/6891-hashmap-obtaining-all-values-collision-print.html","timestamp":"2014-04-16T18:16:38Z","content_type":null,"content_length":"5359","record_id":"<urn:uuid:cb6623d0-d6e4-446c-ac89-79a44ac86769>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Binomial Theorem Lesson Plans & Worksheets | Lesson Planet
Binomial Theorem Teacher Resources
Find Binomial Theorem educational ideas and activities
Showing 1 - 20 of 44 resources
In this video, the Binomial Theorem is defined and used to expand (a + b)^4.
A week's worth of teaching on the Binomial Theorem. Lesson examples and a plethora of worksheets included. Learners find coefficients of specific terms within binomial expansions using notation of
factorials and then apply these skills in using the Binomial Theorem to find solutions to practical applications.
Sal shows two ways to quickly calculate the coefficients of a binomial expansion. With the first method, he shows the relationship between PascalÕs triangle and the coefficients, and in the second
method, he shows an even faster way for one to write the coefficients without calculating previous rows of coefficients.
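The "even faster way" of producing a row of coefficients without computing the previous rows can be sketched in a few lines. This code is illustrative and not part of any listed resource; it uses the left-to-right recurrence C(n, k+1) = C(n, k)·(n − k)/(k + 1):

```python
def binomial_row(n):
    # Coefficients of (a + b)**n, computed left to right without
    # building the previous rows of Pascal's triangle:
    #   C(n, k+1) = C(n, k) * (n - k) / (k + 1)
    row = [1]
    for k in range(n):
        row.append(row[-1] * (n - k) // (k + 1))
    return row

print(binomial_row(4))  # [1, 4, 6, 4, 1]
```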
In this video on the Binomial Theorem, Sal tries to give an intuition behind why combinations are part of its definition. By looking at the expansion of (a + b)^3 and carefully looking at where each
value originates from, one can see how we are really asking a question about combinatorics.
Using the binomial theorem and definition of a limit, Sal shows a proof that the derivative of x^n equals nx^(n-1).
A comprehensive lesson that explores and researches Pascal's triangle and relates its properties to the Binomial Theorem through a variety of lessons. Have the class practice expanding polynomials
using the theorem. A few other formulas and functions related to this theorem will be explored.
In this algebra worksheet, students expand binomials and identify different terms. They expand the function using the binomial theorem. There are forty-four questions with an answer key.
In this binomial theorem worksheet, students expand given binomials. They write an equation to identify the nth term of a sequence. This one-page worksheet contains 15 problems.
In this Algebra II worksheet, 11th graders apply the binomial theorem to expand a binomial and determine a specific term of the expansion. The one page worksheet contains four problems. Answers are included.
In this algebra worksheet, students factor and expand equations using binomial expansion. They identify coefficients of different terms. There are forty-four questions with an answer key.
In this advanced algebra semester review worksheet, students answer 63 questions spiraling a review of topics including combinations, systems of equations, absolute value, parent graphs, conics,
complex numbers, and binomial theorem.
Practice applying the binomial theorem on this worksheet. It contains five multiple choice and five free response problems. The solutions are provided.
In this binomial expansions practice worksheet, students utilize the theorem to determine the nth term in the expansion for 10 problems, then expand it completely for an additional 6 problems.
In this binomial theorem worksheet, students identify the coefficients of a binomial expansion. They determine the designated term of an expansion. This two-page worksheet contains 13 problems.
In this algebra worksheet, students identify the coefficient and different terms of a binomial. They use binomial expansion to find their answer. There are 7 questions with an answer key.
In this Binomial Theorem instructional activity, students solve 10 different problems that include applying the Binomial Theorem in each.
An app that provides basic algebra practice by allowing players to answer questions on a number of beginning algebra or pre-algebra topics.
Continuing his discussion of the Poisson Distribution (or Process) from the previous video, Sal takes students through the derivation of the traffic problem he had begun. The math gets gritty in this
video as Sal takes out the graphic calculator to solve the problem.
In this system of equations worksheet, 11th graders solve and complete 23 various types of problems. First, they graph each system of inequalities shown. Then, students write a polynomial function of
least degree with integral coefficients that has the given zeros. They also determine the equation for each conic, name the conic and state the center.
Cheltenham, MD Math Tutor
Find a Cheltenham, MD Math Tutor
I am currently an 8th grade math teacher for Anne Arundel County Public Schools. I have previously taught a wide variety of math subjects from 7th grade through entry level college classes. My
previous clients have gone on to significantly increase their score on their standardized tests as well as raise their class grades by an average of 1.5 letter grades.
12 Subjects: including linear algebra, algebra 1, algebra 2, geometry
...Since then, I have built on my algebra knowledge with a wide array of advanced mathematics. Therefore, I am very comfortable with the basics of algebra. I took three semesters of calculus at
The University of Maryland, and did well in all of them.
27 Subjects: including algebra 1, algebra 2, calculus, MATLAB
...I now apply those concepts on a daily basis as an engineer at a local manufacturing company here in Columbia, MD. I have 5 years of tutoring experience in Physics and Physics-related subjects.
Most recently, I have worked with students at UMBC in college level Physics courses for the past year and a half.
5 Subjects: including calculus, physics, precalculus, SAT math
Hello All! I am currently a graduate student at Capella University seeking my Masters degree in Educational Psychology 2015. I am also an admissions counselor for The Chicago School of
Professional Psychology in Washington, D.C.
20 Subjects: including SPSS, algebra 1, elementary math, statistics
...I have taught high school students in biology, human anatomy, general chemistry, and physics as a college student. I would love an opportunity to work with you, your family member or a friend.
I have always enjoyed reading ever since I had a great reading teacher in Kindergarten.
27 Subjects: including algebra 2, Spanish, algebra 1, prealgebra
Year 8 question - finding the pattern of numbers in a grid
April 10th 2009, 12:34 AM #1
I really need help with this maths question. It is basically asking to find the unknown number. This is a year 8 question but for some reason it is extremely hard for me to figure out, I even
tried asking some of my friends but they do not know. Here is an attached picture of the question. The answer is in the back of the book which is 32 but i don't know why. Please help!
Okay, I've spotted it.
Let $x$ be the left-most column value of the row $n$, and let $y$ be the middle column value of row $n$.
$x = (y \times n) + (2 \times n)$
Let's focus on some of the rows:
$6 = ((4 \times 1_{(the \ row \ number)}) + 2)$
$20 = ((8 \times 2_{(the \ row \ number)}) + 4)$
$12 = ((2 \times 3_{(the \ row \ number)}) + 6)$
$? = ((6 \times 4) + 8) = 32$
One word - GENIUS. And by the way, do you reckon this is year 8 level?
In Year 9 you should learn that there are infinitely many solutions. I'll allow that this is substantially more complicated than your basic pattern recognition busy work, but it is still just a
finite list of values.
Surprise your Head Master by finding another solution.
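The rule found above can be checked mechanically. The sketch below assumes the grid values quoted in the thread ((middle 4, row 1) → 6, (8, 2) → 20, (2, 3) → 12) and the hypothesised rule left = middle × row + 2 × row:

```python
def left_value(middle, row):
    # Hypothesised rule from the thread: left = middle*row + 2*row
    return middle * row + 2 * row

# (middle value, row number, expected left-column value) from the grid
rows = [(4, 1, 6), (8, 2, 20), (2, 3, 12)]
for middle, n, expected in rows:
    assert left_value(middle, n) == expected

print(left_value(6, 4))  # 32, the answer in the back of the book
```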
Math Forum - Ask Dr. Math Archives: College Probability
Browse College Probability
Stars indicate particularly interesting answers or good places to begin browsing.
See also College Statistics.
My book says that for any continuous random variable X, P (X=x)=0. Using calculus, I can figure out the probability is 0, but by intuition, I can't see why.
Find the probability of getting at least 20 duplicate addresses when drawing a sample of 30,000 at random from UK households (estimate 21,000,000) where there is replacement every time a
selection is made.
A rope one unit long is cut in two places. What is the probability that the three resulting pieces can be arranged to form a triangle?
If two components have average lifespans of 2500 hours, and the probability of failure for each is f(t) = (1/u)e^(-t/u), what is the probability (1) that both components will fail within 2500
hours, and (2) that both will fail within a total of 2500 hours of operating time?
Suppose a football team plays 8 games, and the chance of winning any particular game is 50%. What is the probability of completing the season without more losses than wins?
A native speaker of German writes: The decimal representation of a real [number] can be viewed as pseudo-random because it's not ending and not repeating. If you take any real, what is the
probability of finding 987987987 in the decimals?
Statistics, God, and the probability of life arising by "pure chance"...
What is the probability of being dealt a bridge hand void in a specified suit from an ordinary deck of 52 playing cards?
If I flip a coin 4 times and they all turn out to be heads, what is the probability that the coin is fair?
If a chord is selected at random on a fixed circle, what is the probability that its length exceeds the radius of the circle?
How many people would you have to share a room with before the probability that one of them shares your birthday is at least 50%?
Let X be a geometric random variable with parameter p1 and let Y be a geometric random variable with parameter p2. Find the probability that X is less than or equal to Y.
Find the probability of getting three heads in a row with a weighted coin, with a 1/3 probability of getting a head on each toss. What if I want to be 99% sure of getting three heads?
I am able to determine the chi-square value, but I do not know how to determine the p-value, or if I can even do that.
You said that the probability is 1 that you can find any number with that string in it. I don't understand how.
What is the probability of at least two eights being next to each other in a random shuffling of a deck of cards? What about at least two cards (2 eights or 2 queens etc.) being next to each other?
A rope 20m long is randomly cut into two segments, each of which is used to form the perimeter of a square . . .
Let X1, X2, ....., Xn be n independent and continuous random variables with the same distribution, let X denote the maximum and Y the minimum of these random variables, and let U = X / Y; find
the distribution of X,Y, and U.
A game starts with a score of zero, and adds or subtracts one based on whether coin flips come up heads or tails. Is there a way to calculate how many coin tosses it will take to reach a score of
n or -n?
What is the probability that Dorothy doesn't fall off the cliff if she starts one step away from it and moves backwards and forwards one step at a time depending on the toss of a coin?
How do I perform a regression analysis on four-parameter experimental data to determine the values of A, B, C and D in the equation y = [(A- D)/(1+{x/C}^B)]+D?
Math projects: A. Riemann - a German mathmatician; B. The Mayan number system and calendar; C. Probability.
If I roll a die 10 times, what is the probability that I will have gotten all numbers from 1 to 6 at least one time each?
Given data regarding the occurrence of heart attack among runners and non-runners, determine whether there is a connection between runner status and heart attacks.
What proportion of the time does the runner run barefoot?
In a class of 35 students, what is the probability that every student in the class will have the same zodiac sign as at least one other classmate?
What are the chances that a 9-digit Social Security number is comprised entirely of 2 digits, such as 211-12-1221?
Finding the transtion matrix and steady-state vector of a stochastic process.
If an index of stock prices has probability 0.65 of increasing in any year, how can you turn this into a binomial distribution question?
A pair of dice is rolled until a sum of either 5 or 7 appears. Find the probability that a 5 will occur first.
Three points are taken at random on the circumference of a circle. What is the chance that the sum of any two arcs so determined is greater than the third?
If you flip a coin ten times, what is the probability of getting at least four heads?
If all possible orders of 20 people are considered, what is the average value of the number of places in the row...?
If you break a straight stick into three pieces, what is the probability that you can join the pieces end-to-end to form a triangle?
Suppose you randomly place 2 points on the circumference of a circle. What is the probability that a 3rd point placed randomly on the circle's circumference will form a triangle that will contain
the center of the circle?
A classic problem in which two envelopes contain money, one double the other. After choosing one at random, should you trade it for the other? It appears there should be no advantage, but the
expected values suggest otherwise. Or do they?
In a box there are nine fair coins and one two-headed coin. One coin is chosen at random and tossed twice. Given that heads show both times, what is the probability that the coin is the
two-headed one? What if it comes up heads for three tosses in a row?
If X, Y, and Z are 3 random variables such that X and Y are 90% correlated, and Y and Z are 80% correlated, what is the minimum correlation that X and Z can have?
Can you prove that if you take the natural log of a uniformly distributed random variable, it becomes a exponentially distributed random variable?
I need to choose one of seven cards at random in a card game. Is there a way I can use a 6-sided die to be sure that my pick is truly random?
[FOM] Intuitions of Mass and Volume
joeshipman@aol.com joeshipman at aol.com
Fri Feb 17 19:39:06 EST 2006
Friedman criticizes RVM as an axiom because he claims that the
intuition supporting it is shown to be incoherent by results on the
necessary non-invariance of countably additive measures.
This criticism is not to the point. The intuition I am talking about is
a primordial physical intuition of MASS, not of VOLUME. Even if one
believed matter to be infinitely divisible, one would not expect matter
to be absolutely uniformly distributed, so there is no intuitive
requirement that two subsets of a mass of "stuff" that are related by a
rigid motion must have the same "mass".
One can then interpret the absence of an invariant countably additive
measure as indicating that absolute homogeneity and isotropy are
impossible, that space has a "grain" or a "lumpiness" which affects the
matter in it. If you believed in infinitely divisible matter because
you didn't know any atomic physics, but had been lucky enough to
discover general relativity which depends only on classical continuous
physics, this would be a reasonable interpretation, which would
preserve the intuition of infinitely divisible matter that retained the
property of "mass".
The intuition that every subset of SPACE has an invariant "volume" is
indeed shown to be incoherent by results that depend on the Axiom of
Choice, but this is a different, and logically stronger, intuition than
the intuition that every subset of a material object has a "mass".
-- JS
A Triangle in the Hyperbolic Plane
The interior angles of a triangle in the hyperbolic plane have a sum of less than 180 degrees. But in order to prove this, we must first construct the hyperbolic plane and discover some of its
Stereographic Projection
To construct the hyperbolic plane, we will take a sphere resting on a horizontal plane and project it from its "north pole" (point N) onto the plane. Each point of the sphere, excluding N, will then be mapped onto the plane. This kind of mapping from point P' to point P is called "Stereographic Projection".
Stereographic Projection Properties:
The plane n is tangent to N and is parallel to the horizontal (or image) plane. The plane p' is tangent to P'.
From the perfect symmetry of the sphere, planes n and p' create equal angles with the straight line NP'. The line of intersection of n and p' is perpendicular to the line NP. Since n is parallel to
the image plane, the image plane forms the same angle that p' does with the projecting ray PP'. It also intersects p' in a straight line perpendicular to PP'.
From this, we can draw the conclusion that this mapping is angle-preserving! Here's how:
If r' is tangent to the sphere at P' and r is the image of r', then r and r' form equal angles with PP'. This is because r is created by the intersection of the image plane with the plane containing
r' and NP'.
BUT, if two straight lines r and r' are respective intersections of a plane e with planes p and p' - e containing straight line PP' and planes p and p' form equal angles with PP' while intersecting
in a straight line perpendicular to PP' - then r and r' also form equal angles with PP'.
With the perfect symmetry of the sphere, we can now extend this property. Take s' as another tangent to the sphere at P'. With s as its image, the angle formed by r and s is equal to the angle formed
by r' and s'.
Therefore, "Stereographic Projection reproduces the angles on the sphere without distortion", and so the mapping is angle-preserving.
We can also draw the conclusion that this mapping is also circle-preserving!
Take k' to be an arbitrary circle on the sphere and NOT passing through N. Then each point on k' has a plane tangent to the sphere and all of these planes create a circular cone with vertex S. Since
k' does not pass through N, NS is not tangent to the sphere at N, hence it is not parallel to the image plane. Take M to be the point where NS meets the image plane.
Let P' be an arbitrary point on k' and P its image. Now P'S is tangent to the sphere at P' and PM is the image of P'S. The angle PP'S is equal to the angle P'PM. Next, we draw a line passing through
S and parallel to PM. We put the point P" where this new line intersects line NP. So P" is either the same point as P' or the triangle P'P"S has equal angles at P' and P", making it an isosceles
triangle with SP' equal to SP". We now have similar triangles with equal ratios: PM/P'S = PM/P"S = MN/SN, and by solving this we get: PM = P'S*MN/SN. Since S has the same distance from all points of
k', we discover that P'S is constant and from the formula previously constructed, it follows that PM is also a constant. If PM is a constant, then k is a circle with M as its centre.
We have now shown that Stereographic Projection of a sphere onto an image plane maps circles, not passing through N, onto circles in the plane. By reversing the previous argument we can show that
every circle on the image plane is the image of a circle on the sphere. If a circle is able to move on the sphere, it can approach a circle that is passing through N, making NS approach a tangent to
the sphere at N and causing M to approach infinity. It then follows that the circles passing through N have images that are straight lines on the image plane. Hence, the set all circles on the sphere
corresponds to the set of all circles and straight lines in the plane. So, "Stereographic Projection is circle-preserving".
Now we will look at any mapping, a', of the sphere onto itself. a' maps all of the circles on the sphere into circles (still on the sphere). ie. a' could be a rotation of the sphere about some
diameter. With Stereographic Projection, a' creates a mapping a of the image plane into itself. a will map the set all circles and straight lines into itself. A map such as a of the plane into itself
is a "circle-preserving transformation".
Constructing the Hyperbolic Plane:
We are now in a position where we can finally construct the hyperbolic plane.
First, let the Hyperbolic Plane be represented by the interior of a circle m lying in a horizontal plane. At the centre of m, place a sphere having the same radius as m, touching the plane at that
centre. Now project, by Vertical Parallel Projection, the circumference and interior of m onto the lower hemisphere bounded by circle l, congruent to m. This hemisphere has now become a new model for
the Hyperbolic Plane. Every chord g of m is projected into a semicircle v of the sphere meeting l at right angles. These semicircles are regarded as the images of the hyperbolic straight lines.
Now, using Stereographic Projection, map the hemisphere back into the plane. The image of the hemisphere covers the interior of a circle k and is the new model for the Hyperbolic Plane.
With the angle- and circle- preserving nature of Stereographic Projection, the semicircles v have now become arcs n , perpendicular to k . The diameters of k will be included as the limiting cases in
this class of circular arcs.
This final model of the Hyperbolic Plane is due to Poincaré.
There is a one-to-one correspondence between the set of all circular arcs perpendicular to k , and the set of all the chords of m . Therefore, any two points A and B in the interior of k can be
joined by one and only one circular arc perpendicular to k . Let R and S be the two points where this arc meets k .
In the original model, the hyperbolic distance between two points A' and B' (with R' and S' the endpoints of the chord through them) can be calculated from the formula:
s = c/2 * |log A'R'*B'S' / B'R'*A'S'|
If A', B', R', and S' are the points of the original model that give rise to A, B, R, and S in the Poincaré model, then this relation holds:
AR*BS / BR*AS = [A'R'*B'S' / B'R'*A'S']^0.5
And so the formula for the hyperbolic distance in Poincaré's model is:
s = c * |log AR*BS / BR*AS|
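As a quick numeric sanity check of the Poincaré distance formula (taking c = 1 and the unit circle for k, an assumption made here for convenience), choose A and B on a diameter, so that the geodesic through them is the diameter itself and R, S are simply −1 and +1:

```python
import math

def disk_distance_cross_ratio(a, b, r, s, c=1.0):
    # s = c * |log((AR*BS)/(BR*AS))|, with A, B interior points and
    # R, S the endpoints of their geodesic on the boundary circle k
    # (all given as real or complex numbers).
    AR, BS = abs(a - r), abs(b - s)
    BR, AS = abs(b - r), abs(a - s)
    return c * abs(math.log((AR * BS) / (BR * AS)))

# Points on a diameter of the unit disk: the geodesic is the diameter
# itself, so its endpoints on k are -1 and +1.
A, B = 0.0, 0.5
d = disk_distance_cross_ratio(A, B, -1.0, 1.0)
# Independent check: distance from the centre to radius t is log((1+t)/(1-t)).
print(d, math.log((1 + 0.5) / (1 - 0.5)))  # both ≈ 1.0986
```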
Now, the Euclidean angles in Poincaré's model are equal to the hyperbolic angles multiplied by a fixed proportionality factor, x . But, since the angle 360 degrees of a full rotation is reproduced in
the Hyperbolic Plane without any change, x must be 1. Thus, "Poincaré's model preserves angles".
The Proof:
First, we take an arbitrary triangle ABC in Poincaré's model of the Hyperbolic Plane:
Keeping in mind that the axioms of congruence are valid in the Hyperbolic Plane, we can draw a triangle A'B'M congruent to ABC, where the centre M of k corresponds to the point C. Since every circle
perpendicular to k that passes through M will degenerate into a diameter of k and M is exterior to all other circles perpendicular to k, then in the figure above, the hyperbolic straight lines A'M
and B'M can represented by Euclidean straight lines while the hyperbolic straight line A'B' is represented by a circular arc. Now, the Euclidean angles at A' and B' are smaller in the triangle A'B'M
formed by two straight lines and a circular arc than the angles in the rectilinear triangle A'B'M, formed by three straight lines. We can then conclude that the sum of the interior angles of triangle
A'B'M must be less than 180 degrees.
Because Poincaré's model preserves angles, the same is true for the sum of Hyperbolic angles in the Hyperbolic triangle A'B'M and also its congruent partner, triangle ABC.
Jennifer Montgomery
Cohn-Vossen S, Hilbert D. Geometry and the imagination. Nemenyi P, = translator; New York: Chelsea Publishing Company; 1952. 357 p. Translation of: Anshauliche geometrie.
[Numpy-discussion] A proposal for dot (or inner)
Harald Hanche-Olsen hanche at math.ntnu.no
Fri Jan 28 20:37:07 CST 2000
I am having some problems relating to the current function dot
(identical to matrixmultiply, though I haven't seen the equivalence in
any documentation). Here is what the docs say:
Will return the dot product of a and b. This is equivalent to matrix
multiply for 2d arrays (without the transpose). Somebody who does
more linear algebra really needs to do this function right some day!
Or the builtin doc string:
>>> print Numeric.dot.__doc__
dot(a,b) returns matrix-multiplication between a and b. The product-sum
is over the last dimension of a and the second-to-last dimension of b.
First, this is misleading. It seems to me to indicate that b must
have rank at least 2, which experiments indicate is not necessary.
Instead, the rule appears to be to use the only axis of b if b has
rank 1, and otherwise to use the second-to-last one.
Frankly, I think this convention is ill motivated, hard to remember,
and even harder to justify. As a mathematician, I can see only one
reasonable default choice: One should sum over the last index of a,
and the first index of b. Using the Einstein summation convention
[*], that would mean that
dot(a,b)[j,k,...,m,n,...] = a[j,k,...,i] * b[i,m,n,...]
[*] that is, summing over repeated indices -- i in this example
This would of course yield the current behaviour in the important
cases where the rank of b is 1 or 2.
But we could do better than this: Why not leave the choice up to the
user? We could allow an optional third parameter which should be a
pair of indices, indicating the axes to be summed over. The default
value of this parameter would be (-1, 0). Returning to my example
above, the user could then easily compute, for example,
dot(a,b,(1,2))[j,k,...,m,n,...] = a[j,i,k,...] * b[m,n,i,...]
while the current behaviour of dot would correspond to the new
behaviour of dot(a,b,(-1,-2)) whenever b has rank at least 2.
Actually, there is probably a lot of code out there that uses the
current behaviour of dot. So I would propose leaving dot untouched,
and introducing inner instead, with the behaviour I outlined above.
We could even allow any number of pairs of axes to be summed over, for
inner(a,b,(1,2),(2,0))[k,l,...,m,n,...] = a[k,i,j,l,...] * b[j,m,i,n,...]
With this notation, one can for example write the Hilbert-Schmidt
inner product of two real 2x2 matrices (the sum of a[i,j]b[j,i] over
all i and j) as inner(a,b,(0,1),(1,0)).
If my proposal is accepted, the documentation should probably declare
dot (and its alias matrixmultiply?) as deprecated and due to disappear
in the future, with a pointer to its replacement inner. In the
meantime, dot could in fact be replaced by a simple wrapper to inner:
def dot(a,b):
    if len(b.shape) > 1:
        return inner(a,b,(-1,-2))
    return inner(a,b)
(with the proper embellishments to allow it to be used with python
sequences, of course).
- Harald
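For readers who want to experiment, the proposed inner with explicit axis pairs corresponds closely to what numpy.tensordot later provided. The sketch below is a reconstruction under that assumption, not the author's implementation; the name inner_axes is invented here:

```python
import numpy as np

def inner_axes(a, b, *pairs):
    # Reconstruction of the proposed inner(a, b, (i, j), ...): for each
    # pair, sum over axis i of a and axis j of b. The default pair is
    # (-1, 0), i.e. the last axis of a against the first axis of b.
    if not pairs:
        pairs = ((-1, 0),)
    a_axes = [i for i, _ in pairs]
    b_axes = [j for _, j in pairs]
    # tensordot keeps the remaining axes of a followed by those of b,
    # matching the index order used in the proposal.
    return np.tensordot(a, b, axes=(a_axes, b_axes))

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)
print(np.allclose(inner_axes(a, b), a @ b))  # True: default pair is matmul
```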
Eliminating theta
September 10th 2012, 03:53 PM #1
I have the two equations:
1) x + y = qsin(theta)
2) z = rcos(theta)
The question is eliminate theta and solve for q in terms of x, y, z, and r.
I'm not sure what eliminating theta means, but I have:
arcsin((x+y)/q) = theta
and I substituted that into my top equation. I have no clue on what I should do now.
Also,I don't understand how I can solve for q in terms of z and r when q isn't in the equation with z and r.
Re: Eliminating theta
How about using the fact that $\sin^2(\theta)+ \cos^2(\theta)= 1$?
Re: Eliminating theta
Perhaps an easier approach would be to write your equations as:
(1) $\frac{x+y}{q}=\sin\theta$
(2) $\frac{z}{r}=\cos\theta$
Now square both equations and add, and the right side will be an identity which will eliminate $\theta$.
Re: Eliminating theta
Should I have sin^2(arcsin(x+y)/q) + cos^2(arccos(z/r)) =1?
Re: Eliminating theta
Nevermind my last post
Re: Eliminating theta
I have q^2 = ((x+y)^2)/(1-(z/r)^2)
Sorry, I don't know how to make it look neater than that.
Re: Eliminating theta
Well, you could write the right side as $\frac{(x+ y)^2}{1- \frac{z^2}{r^2}}= \frac{r^2(x+ y)^2}{r^2- z^2}$, multiplying both numerator and denominator by $r^2$. And, of course, since you are asked to find q, you still need to take the square root of both sides.
Re: Eliminating theta
Before reading your reply I got:
Your method gets rid of the fraction in the denominator so:
$q=\sqrt{\frac{r^2(x+ y)^2}{r^2- z^2}}$
Re: Eliminating theta
Re: Eliminating theta
Thank you all.
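A quick numeric check of the result: pick a θ, q, and r, generate x + y = q sin θ and z = r cos θ, and confirm the derived formula recovers q (up to sign, since the square root returns the non-negative value):

```python
import math

def q_from_xyzr(x, y, z, r):
    # q = sqrt(r**2 * (x + y)**2 / (r**2 - z**2)), as derived in the thread.
    return math.sqrt(r**2 * (x + y) ** 2 / (r**2 - z**2))

# Pick a theta and generate consistent x + y and z, then recover q.
theta, q_true, r = 0.7, 3.0, 2.0
xy = q_true * math.sin(theta)      # plays the role of x + y
z = r * math.cos(theta)
print(q_from_xyzr(xy, 0.0, z, r))  # ≈ 3.0
```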
Summarizing Data with CUBE Queries - 4GuysFromRolla.com
Summarizing Data with CUBE Queries
By Scott Mitchell
One of the motivations behind using databases to hold information is so that decisions can be made on the data. For example, the benefit of a retailer storing product sales information in a database
is that the retailer can examine reports on the data, showing interesting facts, such as what products are the top sellers, where a particular item is selling best, the gross revenue generated from
all sales of a particular item or store, and so on.
Typically, interesting data has different dimensions. A dimension can be thought of as an attribute with a set of possible values. In the database world, this maps to a table column. For example, an
eCommerce store might have a table tracking all sales, with a record for each item sold. One dimension is the item that was sold; another, the store from which is was sold. Oftentimes, it is useful
to view summary data for a particular dimension, such as all items sold at a particular store, or all of the sales for a particular store. In this article we'll examine how to use native SQL commands
to provide such summary queries.
First Things First - Examining GROUP BY
Before we can discuss what ROLLUP and CUBE are, we must first take a look at the GROUP BY clause, which is a powerful clause of the SQL SELECT statement that gets far less press than it deserves.
Simply, the GROUP BY clause allows for aggregate functions to be applied to particular partitions of a database table. These partitions are specified via the column list in the GROUP BY clause.
Now that I have you thoroughly confused, let me attempt to demystify this confusion by presenting an example. Consider that we have a table to track stats for various basketball players in the NBA.
This table, let's call it PlayerStats, might have the following structure:
│ PlayerStats │
│ Name │ varchar(50), Primary Key │
│ Team │ varchar(50) │
│ Position │ varchar(10) │
│ Points │ int │
│ Rebounds │ int │
│ Blocks │ int │
For our example, let's assume this table has the following data:
│ PlayerStats │
│ Name │ Team │ Position │ Points │ Rebounds │ Blocks │
│ Kobe Bryant │ Lakers │ Shooting Guard │ 2,461 │ 564 │ 67 │
│ Shaq │ Lakers │ Center │ 1,841 │ 742 │ 159 │
│ Robert Horry │ Lakers │ Power Forward │ 522 │ 514 │ 61 │
│ Chris Webber │ Kings │ Power Forward │ 1,542 │ 704 │ 88 │
│ Mike Bibby │ Kings │ Point Guard │ 875 │ 147 │ 8 │
│ Yao Ming │ Rockets │ Center │ 1,104 │ 675 │ 147 │
Earlier I mentioned that the GROUP BY clause is useful when using aggregate functions. Before we can examine the GROUP BY function, let's take a moment to discuss SQL aggregate functions. An
aggregate function is a function whose result is based on the results in a number of rows. Common SQL aggregate functions include COUNT, MIN, MAX, AVG, SUM, and so on. For example, we can determine
the total number of points scored by players in the PlayerStats table by executing the following SQL query:
SELECT SUM(Points)
FROM PlayerStats
This query would return a single row with a single column, the value in that column being 8,345 - the sum of all of the points of all of the players. However, what if we wanted to view the points by
position, or by team? Note that for this particular problem, we don't want to apply to the SUM aggregate function to all of the records in the PlayerStats table; rather, we want this function to be
applied to various partitions of the table. So, if we want the total points by team, we want the function applied to the partition of the players who play for the Lakers; we also want the function
applied to the partition of players who play for the Kings; furthermore, we want the function applied to the partition of players who play for the Rockets.
To construct such a query we use the GROUP BY clause. The GROUP BY clause specifies what table column(s) should be used to partition the records in the table. For example, to partition the table by
the Team column, we'd use the following SQL query:
SELECT Team, SUM(Points)
FROM PlayerStats
GROUP BY Team
The results of this query would be three rows, one for each unique team. Specifically, the results would be:
│ Team │ SUM(Points) │
│ Lakers │ 4,824 │
│ Kings │ 2,417 │
│ Rockets │ 1,104 │
For more information on the GROUP BY clause, refer to the previous article, Using the GROUP BY Clause.
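As a quick, self-contained illustration (not part of the original article), the same query can be reproduced with Python's built-in sqlite3 module and the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE PlayerStats (
    Name TEXT PRIMARY KEY, Team TEXT, Position TEXT,
    Points INTEGER, Rebounds INTEGER, Blocks INTEGER)""")
conn.executemany("INSERT INTO PlayerStats VALUES (?,?,?,?,?,?)", [
    ("Kobe Bryant", "Lakers", "Shooting Guard", 2461, 564, 67),
    ("Shaq", "Lakers", "Center", 1841, 742, 159),
    ("Robert Horry", "Lakers", "Power Forward", 522, 514, 61),
    ("Chris Webber", "Kings", "Power Forward", 1542, 704, 88),
    ("Mike Bibby", "Kings", "Point Guard", 875, 147, 8),
    ("Yao Ming", "Rockets", "Center", 1104, 675, 147),
])

# GROUP BY partitions the table: one row (and one SUM) per distinct Team
totals = dict(conn.execute(
    "SELECT Team, SUM(Points) FROM PlayerStats GROUP BY Team"))
print(totals)
```

Each team's total matches the result table above (Lakers 4,824; Kings 2,417; Rockets 1,104).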
Grouping on Multiple Columns
The GROUP BY clause can be used on multiple columns in a table. For example, imagine that we wanted to partition our table into subsets by team and position. That is, instead of wanting to see the
sum of all points for each team, we want to see the sum of points for the Laker point guards, shooting guards, small forwards, power forwards, and centers. Similarly, we want to see the sum of points
for the Kings point guards, shooting guards, and so on. And similarly for all teams in the table. To accomplish this, we could use the following SQL statement:
SELECT Team, [Position], SUM(Points)
FROM PlayerStats
GROUP BY Team, [Position]
The reason Position has to be in brackets is because Position is a reserved word in SQL Server 2000...
The results returned by this query for our given data set would not be terribly interesting (since there are no records in the table where there are more than one player on the same team that play
the same position). The results of this query on our current data set would be:
│ Team │ Position │ SUM(Points) │
│ Lakers │ Center │ 1,841 │
│ Rockets │ Center │ 1,104 │
│ Kings │ Point Guard │ 875 │
│ Kings │ Power Forward │ 1,542 │
│ Lakers │ Power Forward │ 522 │
│ Lakers │ Shooting Guard │ 2,461 │
(If an additional record were added to the PlayerStats table, say, Bobby Jackson, Kings, Point Guard, 150 points, the value for the Kings, Point Guard result would be 1,025 (the sum of all points by
Kings point guards).)
Summing Up the Points for Team and Position
Grouping on multiple columns is nice, but commonly for reports not only are we interested in the points per team per position, but also in the total number of points per team and the total number of
points per position. That is, we would want the following results:
│ │ Point Guard │ Shooting Guard │ Power Forward │ Center │ SUM │
│ Lakers │ │ 2,461 │ 522 │ 1,841 │ 4,824 │
│ Kings │ 875 │ │ 1,542 │ │ 2,417 │
│ Rockets │ │ │ │ 1,104 │ 1,104 │
│ SUM │ 875 │ 2,461 │ 2,064 │ 2,945 │ 8,345 │
Note that the bottom-most row and right-most column are "SUM" entries, summing up their respective columns and rows. The bottom right hand corner is a "sum of sums". To put it another way, each column of the bottom-most row shows the sum of points for that position. The right-most column shows the sum of points for each team. The bottom right hand corner shows the sum of points across all teams and positions. And, of course, in the table one can quickly find the sum of points for the players of a particular position on a particular team.
This data structure is, in the database world, referred to as a cube. A cube is a structure that provides aggregate functions to multiple dimensions. The output above is an example of a
two-dimensional cube. However, cubes of any dimension can be created. (In fact, you can have one-dimension cubes.) Specifically, a data cube presents summary data for each of the dimensions used to
partition a table.
Imagine for a moment that we were asked by our boss to create a Web report that displayed the NBA player statistics as shown above. How would we accomplish this? One way would be to use a SQL
statement with a GROUP BY clause on both the Team and Position columns. Then, in our ASP or ASP.NET Web page, we'd create two arrays to hold the sum of points for each "dimension" (points by team and
points by player). Finally, to compute the bottom right hand cell, we'd need to just sum up the values in one of the two arrays.
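The two-array bookkeeping the article describes might look like this (an illustrative Python sketch rather than classic ASP; the rows mirror the two-column GROUP BY result table above):

```python
from collections import defaultdict

# Rows as returned by GROUP BY Team, Position: (team, position, sum_of_points)
grouped = [
    ("Lakers", "Center", 1841), ("Rockets", "Center", 1104),
    ("Kings", "Point Guard", 875), ("Kings", "Power Forward", 1542),
    ("Lakers", "Power Forward", 522), ("Lakers", "Shooting Guard", 2461),
]

team_totals = defaultdict(int)       # the right-most "SUM" column
position_totals = defaultdict(int)   # the bottom-most "SUM" row

for team, position, points in grouped:
    team_totals[team] += points
    position_totals[position] += points

# The bottom-right "sum of sums" cell: total either array
grand_total = sum(team_totals.values())
print(dict(team_totals), grand_total)
```

The grand total (8,345) agrees whichever array you sum, which is exactly the cube's bottom-right cell.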
While this approach is plausible, there's an even easier way to accomplish the same thing using one SQL statement. We'll look at this in Part 2 of this article.
Read Part 2
"Drivers will be able to use Apple Maps as in-car navigation"
Not sure how that compares to most in-car navigation, but Apple Maps is still a pile of garbage. The maps are much harder to see than Google's, the POI database is terrible, and the routing is unusable, at least in small-town Canada.
It will be interesting to see what they come up with, but I sure hope it has an App Store to switch out the maps.
Comment: Re:Gravity wells and other distance issues (Score 1) 330
by anethema (#46321173) Attached to: Japanese Firm Proposes Microwave-Linked Solar Plant On the Moon
Actually you are understating the difficulty of transmitting that power.
There are several big problems. One is that real-world rectennas are not that efficient. The best lab-condition ones at lower power reach 90%, but high-power ones are quite a bit less, sometimes only 75 percent or so.
The other massive loss is transmission loss. The basic formula is 32.45 + 20log(d) + 20log(f). Using the 1.32 TW estimate in this post: http://science.slashdot.org/co... and a rough 24 GHz given in this page http://www.propagation.gatech.... we would have around 231 dB of path loss. Considering you are starting with 121 dBW of power, you are left with -110 dBW of power on earth, not accounting for antenna gain.
Since a watt is 0 dBW, we need 55 dB of dish gain on either side to get 1 watt, and a gain of 115 dB to get the input power generation. At 24 GHz this is a 3 KILOMETER dish on either end. The largest parabolic on earth now is 0.3 km, so ten times that. Plus it would have to always point at the moon (how? No idea).
Plus you would lose 50% or more since obviously the whole equator isn't facing the sun. Then the 50-70% from your rectenna losses.
I imagine a bunch of other stuff from other losses I haven't taken into account (conversion to AC, line loss, whatever else) and I can't imagine this being feasible.
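The dB arithmetic in that comment can be checked directly (my sketch, not from the original; the 32.45 constant assumes d in km and f in MHz, and the dish-size estimate assumes an aperture efficiency of 0.6):

```python
import math

d_km = 384_400        # mean Earth-Moon distance, km
f_mhz = 24_000        # 24 GHz carrier, in MHz
p_watts = 1.32e12     # 1.32 TW transmitted

# Free-space path loss: 32.45 + 20*log10(d_km) + 20*log10(f_mhz)
path_loss_db = 32.45 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)
p_dbw = 10 * math.log10(p_watts)
received_dbw = p_dbw - path_loss_db   # before any antenna gain

# Parabolic dish diameter for ~115 dBi at 24 GHz, efficiency 0.6 assumed
lam_m = 3e8 / (f_mhz * 1e6)
d_m = math.sqrt(10 ** (115 / 10) / 0.6) * lam_m / math.pi

print(round(path_loss_db, 1), round(received_dbw, 1), round(d_m))
```

The numbers land close to the comment's figures: roughly 232 dB of path loss, about -110 dBW arriving, and a dish on the order of 3 km for 115 dBi.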
Comment: Re:There are several good indie titles (Score 1) 669
by anethema (#46285629) Attached to: Ask Slashdot: What Games Are You Playing?
I actually found its not really like Cod at all.
There are similarities, like there would be with any shooter. But there is parkour-like running around (like Assassin's Creed, but first person), jetpacks (due to these two, you have to think very vertically, or get killed very often), cloaks, "burn cards" which grant cool abilities, plus the obvious very large titans running around.
Comment: Re:Beware the CSI effect. (Score 1) 93
Yeah I really enjoyed the behind the scenes stuff of the BBC Planet earth showing the sharks catching the seals.
They aren't sure where/when/if it is going to happen, so catching the shots was tough.
They had a cool high speed camera that was always recording, and when they hit the button to get their slow-mo footage, the video camera recorded 2 seconds BEFORE and 2 after they pressed it,
otherwise they would never have been able to get the whole event.
Pretty interesting.
Comment: Re:geostationary GPS satellites (Score 1) 247
by anethema (#45878619) Attached to: Is Earth Weighed Down By Dark Matter?
They are absolutely not geostationary. The whole reason your GPS needs time to 'lock' when you haven't used it in a while is it is downloading the orbital path(Ephemeris) data from the satellites
themselves. Once it knows where they should be at which times exactly, it knows where it is relatively.
So basically, none of them are geostationary, unless you count ground based DGPS stations, that obviously don't move haha.
Comment: Re:Awesome (Score 1) 295
by anethema (#45878553) Attached to: CES: Laser Headlights Edge Closer To Real-World Highways
You don't actually have to replace the housing to get this though. The proper retrofits put a projector lens inside your current housing.
These for example are designed to be modified into stock housings: http://www.theretrofitsource.com/product_info.php?products_id=141
Also, if you've seen an HID xenon bulb, the ball inside that makes the light is bigger in almost every dimension compared to the filament in a halogen bulb. This is why the light tends to be a bit unfocused in a halogen housing without a projector, just the stock reflector. But even still, most of the problem comes from the fact that they are just twice as bright. You see much of the same glare with a halogen installed; it is just too dim to be annoying.
Comment: Re:Seeing the gravity is only 1/6th up there (Score 1) 90
by anethema (#45758457) Attached to: Smooth, 6.5 Hour Spacewalk To Fix ISS Ammonia Pump
Yeah it is surprising how many people think there is no gravity in orbit.
Gravity is only reduced by roughly 10 percent at that distance from Earth. The reason it seems like there is no gravity is that you are always falling towards the Earth. You just happen to keep missing!
Do a retrograde burn and you will stop missing quickly though.
Comment: Re:They are scared (Score 1) 670
by anethema (#45686517) Attached to: Diet Drugs Work: Why Won't Doctors Prescribe Them?
Of course you will be more tired. But if you keep up your routine with your caloric intake way down, then you will lose weight. You have to. If you were not gaining weight on your old diet, with the
same routine as now, the food has to go somewhere.
Now doing the same things with less calories, you can't maintain weight, your body will burn it.
Comment: Re:Why even use cloud services ? (Score 1) 191
by anethema (#45686507) Attached to: Why Cloud Infrastructure Pricing Is Absurd
You can also go the other way and spec your equipment for the massive seasonal peaks then rent out out as a cloud service for others later :D (IE: Amazon)
Comment: Re:They are scared (Score 1) 670
by anethema (#45626737) Attached to: Diet Drugs Work: Why Won't Doctors Prescribe Them?
You still maintain weight if you take in fewer calories than you're burning.
If she is near immobile most of the day, she should need very few calories.
Look for places you can cut calories out of the diet and look for exercises that can be done mostly with upper body.
Comment: Re:Impossible! (Score 1) 127
I'd rather have willpower.
Those who do well in school and life are not the most intelligent, but the ones with the most willpower to see things through.
Something I've never had the most of unfortunately :(
Comment: Re:What about HDMI (Score 1) 408
by anethema (#45596783) Attached to: Death to the Trapezoid... Next USB Connector Will Be Reversible
What? Sure you aren't thinking about something else?
Its like 1/4 the width and a bit thicker.
Still a stupid trapezoid but can't win em all!
Comment: Re:Even worse... (Score 1) 408
by anethema (#45596735) Attached to: Death to the Trapezoid... Next USB Connector Will Be Reversible
ALL the power that goes into your home is transmitted wirelessly over the short distance between the coils in a transformer outside.
It uses inductance to move power from one coil to another, and the Qi works the same way.
It is actually fairly efficient and can change the voltage at the same time. Win-win!
Comment: Re:Samsung more profitable than Apple? Debunked. (Score 1) 236
by anethema (#45057559) Attached to: No Love From Ars For Samsung's New Smart Watch
Ignore the Ad Hominem and look at their reasoning. It is clear and well laid out.
Polmod factorization
Bill Daly on Fri, 23 Jan 2009 07:10:54 +0100
[Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index]
• To: pari-users@list.cr.yp.to
• Subject: Polmod factorization
• From: Bill Daly <bill@wmdaly.com>
• Date: Sun, 18 Jan 2009 14:24:58 -0500
• Delivery-date: Fri, 23 Jan 2009 07:10:54 +0100
• Mailing-list: contact pari-users-help@list.cr.yp.to; run by ezmlm
• Sender: Bill Daly <bill@wmdaly.com>
• User-agent: Thunderbird 2.0.0.19 (Windows/20081209)
If f(x) is an irreducible polynomial in x, then Mod(x,f(x)) is a generic root of f(x), and the algebra mod f(x) is isomorphic (I think) to the algebra of the field generated by appending any root of
f(x) to Q. Is there a way of factoring f(x) mod f(x)? What I have in mind is that for some polynomials where Mod(x,f(x)) is a root, then there may be other rational functions of x which are also
roots of f(x), e.g. if f(x) is polcyclo(n), then Mod(x^a,f(x)) is a root whenever a is coprime to n. I don't however see any easy way of finding such roots with polmods in PARI. What, if anything, am
I overlooking?
Regards, Bill
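The cyclotomic observation is easy to confirm numerically; a SymPy sketch (my illustration, assuming f = Φ₅) checks that x^a remains a root of f modulo f whenever gcd(a, n) = 1:

```python
from math import gcd
from sympy import symbols, rem, cyclotomic_poly

x = symbols('x')
n = 5
f = cyclotomic_poly(n, x)   # x**4 + x**3 + x**2 + x + 1

# For every a coprime to n, f(x**a) reduces to 0 modulo f(x),
# i.e. Mod(x**a, f) is again a root of f
remainders = [rem(f.subs(x, x**a), f, x)
              for a in range(1, n) if gcd(a, n) == 1]
print(remainders)
```

Every remainder is zero, matching the claim that Mod(x^a, f(x)) is a root for a coprime to n.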
String theory as a theory of everything or a fool's errand?
A recent book by Lee Smolin, "The Trouble With Physics", gives his views about how String Theory fails even by the nature of its science. An excellent review of his argument can be found here.
I found this compelling despite having spent much time trying to understand Strings and their 11 dimensions. If this theory doesn't work scientifically, then what other areas of research could potentially lead to a Theory of Everything? I hear about Supergravity but don't know the details. Just wondering what others think, and hoping to get some help understanding.
Dates into percentages
Excel - Dates into percentages
Asked By CelticCharme on 01-Feb-09 04:01 PM
I was wondering if you could help me out. I train students, and when a student completes an element of their course work I input a completed date into a cell. What I want to do is, as soon as I input the completed date, have Excel work out the percentage as they complete the course, e.g. 6%, 15%, 75%, etc., until they are finished at 100%.
The number of elements the students do changes from student to student, from 28 to 34. What formula can I use? Do I have to change the formula from student to student to get 100%?
If you need any more information please ask.
Rick Rothstein replied on 01-Feb-09 04:50 PM
I think you will need to describe your layout to us (what data is in what columns and/or rows).
Rick (MVP - Excel)
ShaneDevenshir replied on 01-Feb-09 05:41 PM
Specifically how do you determine what percentage each element is worth of
the whole? If there are two elements do they automatically get 50% each or
can one be worth 25% the other 75%?
If this helps, please click the Yes button
Shane Devenshire
CelticCharme replied on 02-Feb-09 03:13 AM
Hi and thank you both for your replies.
From cell H8 to BW7 (68 elements) I input dates as they complete each
element (21/01/09)
Students do not do all 68 elements this ranges from 28 to 34 elements this
will be the 100%.
I want to put in the progression percentage in cell BX8 so I show how much
is completed out of 100%
The cells I use range from H7 to BW7; as you can see, I don't use them all, and it depends which elements the student uses.
Student 1 = I input dates (over a six month span) into cells H7 to K7, L7 to
O7, V7 to Y7, Z7 to AC7, AH7 to AK7, AP7 to AS7 and BN7 to BQ7. That’s 28
cells which will equal 100%.
I want to see what percentage has been completed in cell BX7.
I hope this makes it easier for you
Celtic charmer
Rick Rothstein replied on 02-Feb-09 09:44 AM
If the number of elements a student completes can vary between 28 and 34, do you know **in advance** which number of elements they will ultimately complete? If not, then there is no way to give a meaningful percentage progress report. What would you divide by to get that percent... 28, 29, 30, 31, 32, 33 or 34? The problem, for example: when a student has completed 27 elements, they are 96.4% complete if they will be doing 28 elements, but only 79.4% complete if they will be doing 34 elements. On the other hand, if you do know what the ultimate number of elements they will complete is, then what column is that value located in?
Rick (MVP - Excel)
CelticCharme replied on 02-Feb-09 12:48 PM
The student tells me at the start what elements they will do. The example I gave is the most common set of elements they pick, and it's the cell numbers I also use.
Is there a way you can see what I am working on, if that's any help?
Thank you for your time
Celtic Charmer.
Rick Rothstein replied on 02-Feb-09 01:03 PM
And where is that information encoded on the worksheet? I need either the count of those elements or an indicator of some kind for the elements each student plans to do so I can count them. In order to get a percent complete, you have to divide by that count, and you haven't told us yet how to read that count from your worksheet.
Rick (MVP - Excel)
CelticCharme replied on 02-Feb-09 02:15 PM
Is there anyway you can see it? or can I send?
Celtic Charmer
Rick Rothstein replied on 02-Feb-09 02:47 PM
I don't think I need to see it... all you have to do is tell me how you know how many elements the student told you he/she would be doing and where that is encoded on your worksheet. Showing me the worksheet won't help if this information is not on it; and if it is on the worksheet, just tell me where.
Rick (MVP - Excel)
CelticCharme replied on 02-Feb-09 04:06 PM
Hi Rick and thank you very much for your time.
I input dates into cells H7 I7 J7 K7, L7 M7 N7 O7, V7 W7 X7 Y7, Z7 AA7 AB7
AC7, AH7 AI7 AJ7 AK7, AP7 AQ7 AR7 AS7 and BN7 BO7 BP7 BQ7. I do this as each
element is completed by the student. That’s 28 cells which will equal 100%.
Is this the encoding you require?
Celtic charmer, again thank you for your time.
Rick Rothstein replied on 02-Feb-09 04:18 PM
No, that does not tell me what I need to know. Let's say the student has
completed 7 elements, so you have put the completion dates in these cells...
H7 I7 J7 K7, L7 M7 N7
My question to you is, at the point in time when you enter a date into N7,
how do you know the student is on his/her way to completing a total of 28
elements? How do you know, AT THAT POINT IN TIME, they are not going to end
up doing 34 elements instead of 28? You said the student tell you how many
elements they will be doing. Let's assume the student told you they will be
doing 28 elements... where on your worksheet do you have that 28 entered for
that student? The reason I keep asking you this question is because whatever
number of elements the student told you they will be doing... I have to
divide by that number.
Rick (MVP - Excel)
CelticCharme replied on 02-Feb-09 05:06 PM
I am sorry if I am not explaining this well.
I have 68 elements in total from cell h7 to bw7.
I know from day one what elements the student will be doing because they
tell me.
At the moment I just put the dates into the elements as they complete each
Some elements have to be done with others, but students mainly pick the list of elements I listed, 28 in total. At other times students may pick different elements that have to be done with others, which will bring their total up to 34, but let's not worry about that for now.
I don’t know if this helps ??? again I am sorry
Rick Rothstein replied on 02-Feb-09 05:16 PM
And my question is... after they tell you how many elements they will be
doing at day one, do you put that information into your worksheet anywhere?
If so, tell me where. If not, you will have to; otherwise, you will never be
able to divide by the correct number in order to get a meaningful percentage
complete. You can't just use 28 as the divisor for all situations... if they actually are going to do 34 elements, then when they complete 29 of them, using 28 as the divisor would generate a completion percentage of more than 100%.
Rick (MVP - Excel)
CelticCharme replied on 03-Feb-09 12:36 PM
Hi Rick
I have all possible elements on my excel sheet from cell columns h6 to bw6.
Then the students names go into the cell rows numbers 7, 8, etc... So the
first date input for student 1 will be made in cell h7 and the last possible
input will be in cell bw7
I have to have all 68 possible elements because some students may want to do different ones.
Maybe I'm trying the impossible?
Thanks for you time,
Celtic Charmer
Rick Rothstein replied on 03-Feb-09 12:50 PM
Do you put those dates into the cells when the student tells you what
elements he/she is going to do (sort of like target dates) and then do you
put the actual completion dates in a different row as the student completes
the element?
Rick (MVP - Excel)
CelticCharme replied on 03-Feb-09 01:54 PM
Hi Rick,
I put the dates in as the student completes each element in the same row as
the student is on.
Rick Rothstein replied on 03-Feb-09 02:18 PM
Then I need to ask again... how do you know in advance (that means, BEFORE
they complete ALL of the elements they intend to do) BY LOOKING AT YOUR
WORKSHEET how many elements the student told you they were going to do? I
repeat... in order to calculate the percentage that you want, you MUST know
the number of elements the student will ultimately do so you can divide by
it. If that number is not on your worksheet, or cannot be counted in some
way BEFORE the student completes ALL of his/her elements, then you cannot
get the percentage complete figure you asked for.
Perhaps if I state my question in a different way, you will see what I am
looking for. Pretend its the beginning of the school year. You ask your
students how many elements they plan to do. The next day (second day of the
school year), I come up to you while you are looking at the worksheet and
ask... how many elements did "Student A" tell you they were going to do?
What would you tell me? Assuming you give me a number... where did you get
that number from? That number is what I need to know in order to calculate
the percentage complete.
Rick (MVP - Excel)
CelticCharme replied on 03-Feb-09 02:38 PM
Hi Rick.
Well, as I have them in my worksheet already, all 68 elements of them. I know the elements the student is doing via his/her learning plan in their folder, which we do at the start of the course. I then fill them in as the student completes each element.
I put an X into the cells (elements) they are NOT doing, and that leaves blanks where I will input the dates as they are completed.
Rick Rothstein replied on 03-Feb-09 02:50 PM
Having the elements the student plans to do in a location other than the
worksheet (namely, the student's folder) does not help (I can't write a
formula to look into that folder<g>). Can you add a column to your worksheet
and put the total number of elements the student plans to do in it? If so, I
should be able to give you the formula for the percentage complete that you
want; if you can't add this column, then you will not be able to do what you want.
Rick (MVP - Excel)
CelticCharme replied on 03-Feb-09 03:14 PM
Hi Rick
Yes, I can add more columns at the end, which will be BX8 onwards, as many as you think I need. Student A is row 8 and not 7 as I previously stated, sorry.
Rick Rothstein replied on 03-Feb-09 04:11 PM
Okay, label Column BX as appropriate (maybe "Total Elements To Be Completed
by Student" or words to that effect) and place the total elements the
student will be completing in that column (that is, type in 28, 34, or
whatever number of total elements the student is scheduled to do). We will
make Column BY your "Percentage Complete" column (you can label it that if
you wish). Put this formula in BY8 and copy it down to the end of your
student list...
and use Format/Cells to format those cells in Column BY as Percentage.
Rick (MVP - Excel)
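The formula itself appears to have been lost from the archived post. Given the layout described (dates in H8:BW8, "X" in unused cells, the planned total in BX8), a plausible reconstruction is =COUNT(H8:BW8)/BX8, since Excel's COUNT counts only numeric entries such as dates and ignores the text "X". That exact formula is my assumption, not preserved text; the same logic as a Python sketch:

```python
def percent_complete(row_cells, total_elements):
    # Count completed elements: date entries count; "X" markers and blanks don't
    completed = sum(1 for cell in row_cells if cell not in (None, "", "X"))
    return completed / total_elements

# Hypothetical Student A row: 7 elements dated, 40 marked "X", 21 still blank
cells = ["21/01/09"] * 7 + ["X"] * 40 + [None] * 21
print(percent_complete(cells, 28))  # 0.25
```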
CelticCharme replied on 05-Feb-09 05:08 AM
Hi Rick
Thank you very, very much for your help. Good job, it works very well. Again, a big thank you.
Celtic Charmer ;-)
ound Hackers
View Thread
suid:
I was thinking about putting this thread in Cryptography, as the result of its solution would very much affect that field, but I felt it was more of an off-topic issue in general. So what do you guys think? Does P = NP?
That question could conceivably be as difficult to answer as the debate between Intelligent Design and Evolution. A couple semesters ago I took a Computational Complexity course where I learned more about these classes of P and NP; however, my professor did not go into really any detail regarding the in/equality of these two algorithm classes.
Basically, if P does equal NP then any problem that can be verified in polynomial time can also be solved in polynomial time (the reverse direction goes without saying, of course). As I stated previously, with cryptography, the result P = NP would mean that no form of encryption would be safe, since the entire purpose of the encryption algorithms is that the processing time is just too large to compute the keys to break the encryption (RSA and the factorization of large prime numbers).
(Quantum computing is another fear of the cryptography field, but that is another discussion.)
Now, it is very hard to believe that P = NP given the substantial claim that it is. If this was proven true it would change the way we view the world completely. As one person put it
"Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss..." meaning that there is no difference between knowing the
solution and finding it.
If P was proven NOT to equal NP, while not quite as significant a result, there would still be great benefits in this discovery. It would prove that there just isn't an efficient way,
with the currently accepted axioms describing our world, to solve certain problems even if the answer can be verified.
I am kind of torn between the two views. While P = NP would be a giant discovery, among the top (if not the very most) revolutionary discoveries of all time, and P != NP would be another huge discovery in its own respect, I have trouble accepting that either could be true. Therefore, if I had to choose one, I would probably go with the majority and say P != NP; however, I think it is probably impossible to prove in at least the next century.
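To make the verify-versus-solve asymmetry concrete (an illustration, not from the original post): checking a proposed subset-sum certificate takes linear time, while the obvious search examines up to 2^n subsets:

```python
from itertools import combinations

nums, target = [3, 34, 4, 12, 5, 2], 9

def verify(certificate):
    # Polynomial (linear) time: just add up and check membership
    return sum(certificate) == target and all(c in nums for c in certificate)

def solve():
    # Brute force: up to 2**len(nums) subsets in the worst case
    for k in range(len(nums) + 1):
        for subset in combinations(nums, k):
            if sum(subset) == target:
                return subset
    return None

answer = solve()
print(answer, verify(answer))
```

P = NP would mean that, for problems like this, something as fast as `verify` always exists for the search itself.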
starofale:
suid wrote:
with cryptography, the solution P = NP would mean that no form of encryption would be safe
Not all forms of encryption surely. One time pads would still be safe.
As for the whole P=NP thing, I don't really know all that much on this subject so I can't provide any kind of argument or reasons, but I think (and hope) that P!=NP because otherwise that would mean most of the stuff I do would be unnecessary.
Try a new search engine
COM:
starofale wrote:
Not all forms of encryption surely. One time pads would still be safe.
You would be absolutely correct with that; a one-time pad cannot be broken unless you can predict the pseudo-random values generated.
Also, I find that the analogy which the OP quoted is off; if I've understood it correctly, then it shouldn't really be that anyone who can appreciate a symphony would be Mozart, it'd be more like "anyone who can perform a symphony would be Mozart", which is a drastic difference. It's sort of like all those faggots with guitars who brag about how easy <insert popular metal/hard rock song> is to play, even though they aren't the ones creative enough to write it themselves.
Also, the major problem with a topic like this is that most people, me included, have fuck all idea what mathematical terms are in English.
Edit: oh, and also, as shown with the one-time pad example, difficulty of factorization is not the basis of all encryption. Furthermore, that the security of RSA depends solely on that is merely an assumption that has yet to be proven; a more fitting example would be Rabin.
Edited by on 08-04-11 23:50
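The one-time pad claim can be sketched in a few lines: the key is as long as the message and used once, so for any candidate plaintext of the right length there is a key consistent with the observed ciphertext. That is an information-theoretic property, so it holds regardless of how P versus NP turns out. A minimal illustration, not production crypto:

```python
import os

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The key must be random, as long as the message, and never reused.
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

msg = b"attack at dawn"
key = os.urandom(len(msg))
ct = otp_encrypt(msg, key)
assert otp_decrypt(ct, key) == msg

# Perfect secrecy in miniature: for ANY candidate plaintext of the same
# length there exists a key decrypting ct to it, so the ciphertext alone
# reveals nothing about which message was actually sent.
other = b"retreat at six"
fake_key = bytes(c ^ p for c, p in zip(ct, other))
assert otp_decrypt(ct, fake_key) == other
```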
spyware:
COM wrote:
Also, the major problem with a topic like this is that most people, me included, have fuck all idea what mathematical terms are in English.
I hate this too. If there's one way Esperanto could help the world, it would be assisting in bridging the linguistic divide in matters of science. The fucking anglocentric society is killing me.

"The chowner of property." - Zeph
“Widespread intellectual and moral docility may be convenient for leaders in the short term, but it is suicidal for nations in the long term.” - Carl Sagan
“Since the grid is inescapable, what were the earlier lasers about? Does the corridor have a sense of humor?” - Ebert
COM:
spyware wrote:
If there's one way Esperanto could help the world, it would be assisting in bridging the linguistic divide in matters of science.
Agreed; surprising to see someone else who's even heard of it. Shame it didn't take off, but you work with what you get, I suppose.
Arabian:
What sort of classes would one take that involve discussion of these problems?

G'bye y'all! I was an asshole, so korg banned me.
spyware:
Arabian wrote:
What sort of classes would one take that involve discussion of these problems?
Ask around at your educational facility whether they have a charismatic, brilliant Krumholtz-esque professor around.
COM:
Arabian wrote:
What sort of classes would one take that involve discussion of these problems?
At least four math classes on university level of the type found in master of engineering courses; such as linear algebra, statistics, discrete mathematics and then cryptography. That's pretty much over a year of just cramming math, it's fucking rough.
Pwnzall:
COM wrote:
spyware wrote:
If there's one way Esperanto could help the world, it would be assisting in bridging the linguistic divide in matters of science.
Agreed, surprising to see someone else who's even heard of it. Shame it didn't take off, but you work with what you get I suppose.
I was also intrigued that there are other people who have at least heard of Esperanto. It's got a serious community over at lernu.net though. Ironically enough, Esperanto itself basing most of its vocabulary/grammar on Romance and Germanic languages puts the rest at a disadvantage.
Arabian:
COM wrote:
Arabian wrote:
What sort of classes would one take that involve discussion of these problems?
At least four math classes on university level of the type found in master of engineering courses; such as linear algebra, statistics, discrete mathematics and then cryptography. That's pretty much over a year of just cramming math, it's fucking rough.
Bummer. The only one I hit on my master's track is Discrete Math.
stealth-:
Pwnzall wrote:
COM wrote:
spyware wrote:
If there's one way Esperanto could help the world, it would be assisting in bridging the linguistic divide in matters of science.
Agreed, surprising to see someone else who's even heard of it. Shame it didn't take off, but you work with what you get I suppose.
I was also intrigued that there are other people who have at least heard of Esperanto. It's got a serious community over at lernu.net though. Ironically enough though, Esperanto itself basing most of its vocabulary/grammar on Romance and Germanic languages puts the rest at a disadvantage.
I hear that a lot, but you would be surprised how much languages like Japanese are apparently incorporated into Esperanto. Especially with word building, it's supposed to be very Japanese. Does anyone here actually have any speaking familiarity with the language? I'm currently in the process of learning it, but I still have a lot more to cover.

The irony of man's condition is that the deepest need is to be free of the anxiety of death and annihilation; but it is life itself which awakens it, and so we must shrink from being fully alive.
http://www.stealt. . .
GTADarkDude:
So is this thread about Esperanto or about P=NP?
You don't need a lot of math to understand the P versus NP problem. An undergraduate course on complexity theory should be sufficient. If you're unable to take such a course and you're still interested, there's always Wikipedia: http://en.wikipedia.org/wiki/P_versus_NP_problem and http://en.wikipedia.org/wiki/Time_complexity
Anyway, I think P!=NP. It just doesn't feel credible that a polynomial solution can be found for an NP-complete problem like the Hamiltonian path problem. On the other hand, I find it also hard to believe that no such solution exists for the circuit satisfiability problem, which is also NP-complete, especially since 'easy' solutions can already be found for some cases. It certainly is an interesting discussion that is unlikely to be put to an end very soon. I actually doubt that a formal proof even exists.
On a side note, I thought Esperanto was pretty well-known?
nqe:
Just to confirm what GTADark said, grasping the basics behind P ?= NP does not require any fancy math or advanced computer science courses.
The basics behind the question are the following. You have a class of problems called NP (nondeterministic polynomial time) such that for a problem to be in NP it needs to be verifiable in polynomial time given a certificate. This simply means that given a solution (certificate) to a problem you can verify whether it's correct or not using a polynomial-time algorithm. Most "natural"/interesting problems fall within NP.
Now, within NP there are two important subclasses, P and NP-complete. For a problem to be in P, it needs to be solvable using a polynomial-time algorithm. NP-complete problems can be thought of as the hardest possible problems that fall within the NP class.
The ability (or maybe reasonableness) to ask the question whether NP = P comes from an important theorem called the Cook-Levin theorem. Said theorem leads to the conclusion that any NP-complete problem can be reduced (transformed) into any other NP-complete problem in polynomial time. Connecting the dots, this means that if we figure out a polynomial-time solution to a single NP-complete problem, we can reduce every other NP-complete problem to said problem in polynomial time (a polynomial-time reduction on top of a polynomial-time algorithm is still polynomial time), which would make NP-complete equal to P! But NP-complete problems are the hardest problems in NP, and if they are equal to P then NP = P.
A solution to this problem either way would be immensely exciting. If NP=P, we will be able to solve most problems a lot quicker and a lot more efficiently. Yes, that would include some of our encryption schemes, but there are (I believe) encryption schemes that rely on algorithms outside of NP (and if they currently are not developed enough for practical use, they'll obviously get a lot more attention).
If, however, NP != P, then it would seem proving that statement might require a brand-new proof technique. Something as basic as mathematical induction has helped us prove many things. I see no reason why a much more advanced proving method might not exist which has yet to be discovered. Of course this is just speculation on my part.
Hope the above makes sense.
Also, yay first post!
Edited by nqe on 09-04-11 19:04
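The certificate idea in the post above can be sketched for the Hamiltonian path problem mentioned earlier in the thread. Checking a proposed path is polynomial (one pass over the vertices), while the only obvious way to find one tries up to n! orderings; nothing here is a new algorithm, just the standard definitions in code:

```python
from itertools import permutations

def verify_ham_path(n, edges, path):
    """Polynomial-time certificate check: the path must visit every vertex
    exactly once, with consecutive vertices joined by an edge."""
    e = set(edges) | {(b, a) for a, b in edges}
    return (sorted(path) == list(range(n)) and
            all((path[i], path[i + 1]) in e for i in range(n - 1)))

def solve_ham_path(n, edges):
    """Brute-force solver: tries up to n! vertex orderings, i.e. the
    running time grows exponentially with n."""
    for path in permutations(range(n)):
        if verify_ham_path(n, edges, list(path)):
            return list(path)
    return None

edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
cert = solve_ham_path(4, edges)
assert cert == [0, 1, 2, 3]                         # a valid certificate
assert not verify_ham_path(4, edges, [0, 1, 3, 2])  # (1,3) is not an edge
```

P = NP would mean some clever solver avoids the factorial blow-up for every problem whose verifier looks like the one above.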
COM:
nqe wrote:
If NP=P, we will be able to solve most problems a lot quicker and a lot more efficiently
I do not see why proving that something exists means that we know how to get to it/do it. Unless the proof itself is a method of solution that proves the statement by working for exactly everything, then it isn't necessarily the case; it'd merely mean that we at least have a chance.
Mtutnid:
COM wrote:
nqe wrote:
If NP=P, we will be able to solve most problems a lot quicker and a lot more efficiently
I do not see why proving that something exists means that we know how to get to it/do it. Unless the proof itself is a method of solution that proves the statement by working for exactly everything, then it isn't necessarily the case; it'd merely mean that we at least have a chance.
OK, it is not quite that: it is that a problem which can be verified in polynomial time could also be solved (reversed) in polynomial time, while knowing the whole process of how it was made.
I don't see how this would help cryptography. We can assume that P=NP and try to solve some problems in polynomial time (without proving that P=NP), and it seems that we aren't able to do that for most problems; so even if we did prove that P=NP, we aren't clever enough to create a solution that solves those problems in polynomial time. I hope this isn't too confusing.

Carve me up, slice me apart
Suck my guts and lick my heart
Chop me up, I like to be hurt
Drink my marrow and blood for dessert
My one desire, my only wish is to be-
Edited by Mtutnid on 10-04-11 00:03
GTADarkDude:
Actually, a proof for P=NP would indeed give us a recipe to solve NP problems in polynomial time. To prove that two sets are equal, you have to prove that any problem in P is in NP (which is true by definition), and that any problem in NP is also in P. To prove the latter, an NP-complete problem should be reduced to a problem in P. If people manage to do that, any NP problem can be reduced to a problem in P, proving that P=NP. These reductions are also done in polynomial time, which gives us a completely polynomial algorithm.
For example, this would indeed be great news for cryptanalysts. In most cryptographic systems, calculating the answer takes exponential time, whereas verifying an answer can be done in polynomial time. If P=NP, this NP problem can be polynomially reduced to an NP-complete problem, which can be polynomially reduced to a P problem, which has a polynomial solution. Thus we suddenly have a fully polynomial solution, which pretty much breaks the crypto system. (In theory. Of course n^100 is still quite undoable.)
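The "polynomially reduced" step in the post above can be illustrated with the classic textbook pair of NP-complete problems: a graph on n vertices has an independent set of size k exactly when it has a vertex cover of size n - k, so each question transforms into the other in linear time. (This reduces NP-complete to NP-complete; finding a reduction all the way down to a known-polynomial problem is precisely what nobody has managed.)

```python
def is_vertex_cover(edges, cover):
    # Every edge must have at least one endpoint in the cover.
    return all(a in cover or b in cover for a, b in edges)

def is_independent_set(edges, s):
    # No edge may have both endpoints inside the set.
    return all(not (a in s and b in s) for a, b in edges)

def reduce_indep_set_to_vertex_cover(n, k):
    # "Does G have an independent set of size k?" becomes
    # "Does G have a vertex cover of size n - k?" on the same graph.
    return n - k

# Path graph 0-1-2-3: {0, 2} is independent, so its complement {1, 3}
# covers every edge.
n, edges = 4, [(0, 1), (1, 2), (2, 3)]
s = {0, 2}
assert is_independent_set(edges, s)
cover = set(range(n)) - s
assert is_vertex_cover(edges, cover)
assert len(cover) == reduce_indep_set_to_vertex_cover(n, len(s))
```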
Mtutnid:
OK, but I think that P doesn't equal NP. Whatever, I could be wrong, who cares. Anyway, if P=NP then I don't think we would be able to prove it; I can only think of ways to disprove it (but not being able to disprove it (only thinking in that direction)). The reduction of an NP problem to a P problem is very hard. We haven't been able to do it with a single problem, thus trying to prove it is a little optimistic...
It looks like all NP problems include randomness. So if an NP problem contains random numbers, unknown values or unknown algorithms for making a problem, it means that that problem uses no known logical approach, and therefore it cannot be reversed. If you want to solve the problem you will have to deal with one of these cases:
a) Solve it by brute force or probability, because it contains random values. (Time increases exponentially)
b) It has unknown values which have to be brute-forced. (Time increases exponentially)
c) It has unknown algorithms. Then you need many input/output values so that you can recreate the algorithm. (Since algorithms are created by humans, the ones used are random, making the time exponential in the complexity of the algorithm)
So now every NP problem that has:
a) Random values in the problem
b) Unknown values, like the seed used for generating a number
c) An unknown algorithm for the creation process of the problem
can't be reduced to a P problem.
??? There is a 99.999999999999999999999999999999999999999999999999999999999999999% chance that I am wrong so please help me understand where my mistake is. ???
Edited by Mtutnid on 10-04-11 04:15
nqe:
Mtutnid wrote:
Anyways if P=NP then I don't think we would be able to prove it, I can only think of ways to disprove it (but not being able to disprove it (only thinking in that direction)). The reduction of a NP problem to a P problem is very hard. We haven't been able to do it with a single problem, thus trying to prove it is a little optimistic...
The opposite is in fact correct. As I mentioned in my previous post, the Cook-Levin theorem basically means that if any single NP-complete problem (and there are hundreds of well-known such problems) were somehow solved in polynomial time, then every other NP problem would also be solvable in polynomial time, hence NP-complete = P = NP. This is why this problem is said to be approachable by amateur computer scientists - it only requires the creation of a single algorithm to prove a phenomenal statement.
Now, to disprove it, it's not at all clear what needs to be done. One way to think of it is that you need to show that the infinite space of algorithms (that is, any possible algorithm) is unable to solve an NP-complete problem in polynomial time. Anyone who's done proofs knows that disproving a statement by counterexample is normally much easier than constructing a proof like that.
As for cryptography, I'm not sure what it means to "help it". NP=P would cause problems with our current encryptions, but does that help or hurt cryptography?
Edited by nqe on 10-04-11 05:33
Mtutnid:
All knowledge helps and destroys. Depends on who it helps and for whom it destroys.
GTADarkDude:
nqe wrote:
As for cryptography, I'm not sure what it means to "help it". NP=P would cause problems with our current encryptions, but does that help or hurt cryptography?
Partially. All of the popular encryption systems that are currently in use, which depend on concepts such as prime factorization, elliptic curves or the discrete log problem, would be broken in theory and should be replaced. However, only because polynomial algorithms are generally considered 'feasible'. This is debatable. The degree of the polynomial describing the time complexity of the solution algorithm would become very important.
On the other hand, an encryption system in EXPTIME is quite undesirable. This would mean it could take months to verify an answer, which just doesn't make any sense for stuff like SSL or smart card authentication.
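The closing caveat ("n^100 is still quite undoable") is easy to quantify: a high-degree polynomial can be worse than an exponential at practical input sizes, even though the exponential always wins in the limit. A quick sketch with made-up exponents:

```python
# Step counts for a few asymptotic classes at concrete input sizes.
def steps(n):
    return {"n^3": n ** 3, "n^10": n ** 10, "2^n": 2 ** n}

# At n = 50 the exponential already dwarfs the cubic...
assert steps(50)["2^n"] > steps(50)["n^3"]
# ...yet the degree-10 polynomial is still LARGER than 2^n there,
assert steps(50)["n^10"] > steps(50)["2^n"]
# while at larger n the exponential inevitably overtakes it.
assert steps(200)["2^n"] > steps(200)["n^10"]
```

So even a constructive P = NP proof would only threaten deployed cryptosystems if the polynomial's degree and constants were small.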
|
{"url":"https://www.hellboundhackers.org/forum/pnp-76-16193_0.html","timestamp":"2014-04-20T16:06:29Z","content_type":null,"content_length":"62869","record_id":"<urn:uuid:1203a770-6fcd-41a5-b799-1f0d7a3bbb65>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Harmony Township, NJ Calculus Tutor
Find a Harmony Township, NJ Calculus Tutor
...When I was younger, I even tutored one of my older sisters AND my father in algebra, as neither reached that level in high school. One of the things I emphasize is learning how to do basic math
without the use of calculators. Calculators are great for some problems, like say 23.98342 x .00498, or for graphing multiple equations, but they were not designed for everything.
11 Subjects: including calculus, Spanish, geometry, algebra 1
...I am specialized in solutions to the Sturm-Liouville equation, which is the basic differential equation for much of engineering. My work involved plenty of eigenvalue/mode analysis and
solution of DEs by Heaviside's operators. My education is a BSME from Cornell and an MSME concentrating in sound and vibrations at North Carolina State.
8 Subjects: including calculus, geometry, algebra 1, precalculus
...I PROVE my expertise by showing you my perfect 800 score on the SAT Subject Test, mathematics level 2. I'm not too shabby at reading and writing, either. Unlike the one-size-fits-all test-prep
courses, and the overly-structured national tutoring companies, I always customize my methods and presentation for the student at hand.
23 Subjects: including calculus, English, geometry, statistics
...I have taken two courses in statistics in college, as well as taught a statistics course at a high school level. While it has been a while since I taught this subject, I do still feel I know
it, and I am willing to put in the time to refresh my memory ahead of any tutoring sessions. SAT math is not only about knowing math, but being able to apply math skills to the SATs.
12 Subjects: including calculus, statistics, algebra 2, geometry
...My passion is to work with students of all ages (mostly high school and college level) in science and math, with a specific focus on preparing students in the following areas: biology,
physiology, anatomy, organic chemistry, chemistry, chemistry AP, physics and physics AP, math subjects through c...
39 Subjects: including calculus, chemistry, geometry, physics
|
{"url":"http://www.purplemath.com/Harmony_Township_NJ_calculus_tutors.php","timestamp":"2014-04-19T12:06:41Z","content_type":null,"content_length":"24800","record_id":"<urn:uuid:01de7f59-d421-4528-9db3-e82e3de53ee8>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
|
West New York Algebra 2 Tutor
Find a West New York Algebra 2 Tutor
...A little about me, I grew up in New York with great family and friends. I am responsible, hardworking, caring, and a great listener. I am confident that you will improve in your learning with
me as your tutor, but only if you are willing to put your effort in too! *I am also certified in CPR f...
29 Subjects: including algebra 2, English, chemistry, geometry
...What makes tutoring an ideal environment in which to learn is that I can help you see what ways of thinking you have that can be a great help in grasping new subjects. I have made a lifetime of
learning. Having earned three master's degrees and working on a doctoral degree, all in different fie...
50 Subjects: including algebra 2, chemistry, calculus, physics
...I received a 5 on the AP English Language and Composition exam, and I have yet to receive my AP English Literature and Composition score. Government and politics is my strongest subject. I just
completed AP Government and Politics, and I received a perfect score (not just a 5 but a legitimate p...
43 Subjects: including algebra 2, English, calculus, reading
I specialize in tutoring for Physics and Math. My education is rooted deeply in Physics, as I most recently received a Master's in Physics from the University of Connecticut. I taught introductory
physics courses at UConn and enjoyed seeing my students grow both in academics and critical thinking, validated through both testing and laboratory reports.
9 Subjects: including algebra 2, calculus, physics, algebra 1
...Prior to that, I finished a postdoctoral fellowship at Columbia University. I have taught one high school student at my Biotech company in the last year and two high school students on the job
at Columbia University. I have also experience working with the New York Academy of Sciences in conjun...
22 Subjects: including algebra 2, reading, writing, biology
|
{"url":"http://www.purplemath.com/West_New_York_Algebra_2_tutors.php","timestamp":"2014-04-20T15:51:32Z","content_type":null,"content_length":"24281","record_id":"<urn:uuid:994319d5-5fe8-4b2c-9a85-3e5a3dc50298>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A version of the bundle idea for minimizing a nonsmooth function: Conceptual idea, convergence analysis, numerical results
Results 1 - 10 of 99
, 2005
"... We propose a new interior point based method to minimize a linear function of a matrix variable subject to linear equality and inequality constraints over the set of positive semidefinite
matrices. We show that the approach is very efficient for graph bisection problems, such as max-cut. Other appli ..."
Cited by 207 (17 self)
We propose a new interior point based method to minimize a linear function of a matrix variable subject to linear equality and inequality constraints over the set of positive semidefinite matrices.
We show that the approach is very efficient for graph bisection problems, such as max-cut. Other applications include max-min eigenvalue problems and relaxations for the stable set problem.
- SIAM Journal on Optimization , 1997
"... . A central drawback of primal-dual interior point methods for semidefinite programs is their lack of ability to exploit problem structure in cost and coefficient matrices. This restricts
applicability to problems of small dimension. Typically semidefinite relaxations arising in combinatorial applic ..."
Cited by 141 (6 self)
. A central drawback of primal-dual interior point methods for semidefinite programs is their lack of ability to exploit problem structure in cost and coefficient matrices. This restricts
applicability to problems of small dimension. Typically semidefinite relaxations arising in combinatorial applications have sparse and well structured cost and coefficient matrices of huge order. We
present a method that allows to compute acceptable approximations to the optimal solution of large problems within reasonable time. Semidefinite programming problems with constant trace on the primal
feasible set are equivalent to eigenvalue optimization problems. These are convex nonsmooth programming problems and can be solved by bundle methods. We propose replacing the traditional polyhedral
cutting plane model constructed from subgradient information by a semidefinite model that is tailored for eigenvalue problems. Convergence follows from the traditional approach but a proof is
included for completene...
, 2003
"... Optimization problems involving the eigenvalues of symmetric and nonsymmetric matrices present a fascinating mathematical challenge. Such problems arise often in theory and practice,
particularly in engineering design, and are amenable to a rich blend of classical mathematical techniques and contemp ..."
Cited by 92 (13 self)
Optimization problems involving the eigenvalues of symmetric and nonsymmetric matrices present a fascinating mathematical challenge. Such problems arise often in theory and practice, particularly in
engineering design, and are amenable to a rich blend of classical mathematical techniques and contemporary optimization theory. This essay presents a personal choice of some central mathematical
ideas, outlined for the broad optimization community. I discuss the convex analysis of spectral functions and invariant matrix norms, touching briefly on semidefinite representability, and then
outlining two broader algebraic viewpoints based on hyperbolic polynomials and Lie algebra. Analogous nonconvex notions lead into eigenvalue perturbation theory. The last third of the article
concerns stability, for polynomials, matrices, and associated dynamical systems, ending with a section on robustness. The powerful and elegant language of nonsmooth analysis appears throughout, as a
unifying narrative thread.
- IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION , 2002
"... It is a common engineering practice to use approximate models instead of the original computationally expensive model in optimization. When an approximate model is used for evolutionary
optimization, the convergence properties of the evolutionary algorithm are unclear due to the approximation error. ..."
Cited by 72 (12 self)
It is a common engineering practice to use approximate models instead of the original computationally expensive model in optimization. When an approximate model is used for evolutionary optimization,
the convergence properties of the evolutionary algorithm are unclear due to the approximation error. In this paper, extensive empirical studies on convergence of an evolution strategy are carried out
on two bench-mark problems. It is found that incorrect convergence will occur if the approximate model has false optima. To address this problem, individual and generation based evolution control is
introduced and the resulting effects on the convergence properties are presented. A framework for managing approximate models in generation-based evolution control is proposed. This framework is well
suited for parallel evolutionary optimization that is able to guarantee the correct convergence of the evolutionary algorithm and to reduce the computation costs as much as possible. Control o...
- SIAM Journal on Optimization
"... Let f be a continuous function on R n, and suppose f is continuously differentiable on an open dense subset. Such functions arise in many applications, and very often minimizers are points at
which f is not differentiable. Of particular interest is the case where f is not convex, and perhaps not eve ..."
Cited by 54 (19 self)
Let f be a continuous function on R n, and suppose f is continuously differentiable on an open dense subset. Such functions arise in many applications, and very often minimizers are points at which f
is not differentiable. Of particular interest is the case where f is not convex, and perhaps not even locally Lipschitz, but whose gradient is easily computed where it is defined. We present a
practical, robust algorithm to locally minimize such functions, based on gradient sampling. No subgradient information is required by the algorithm. When f is locally Lipschitz and has bounded level
sets, and the sampling radius ǫ is fixed, we show that, with probability one, the algorithm generates a sequence with a cluster point that is Clarke ǫ-stationary. Furthermore, we show that if f has a
unique Clarke stationary point ¯x, then the set of all cluster points generated by the algorithm converges to ¯x as ǫ is reduced to zero.
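A toy version of the gradient sampling idea in the abstract above, reduced to one dimension for f(x) = |x|, looks as follows. This is an illustrative sketch, not the authors' algorithm: in 1-D, the minimal-norm element of the convex hull of sampled gradients is 0 exactly when the samples disagree in sign, which serves as the stationarity test at radius eps.

```python
import random

def subgrad_abs(x):
    # A subgradient of f(x) = |x|: +1 for x > 0, -1 otherwise.
    return 1.0 if x > 0 else -1.0

def gradient_sampling_min(x, eps=0.1, step=0.05, iters=200, seed=0):
    """Toy 1-D gradient sampling for f(x) = |x|: sample gradients in an
    eps-ball around x; if they disagree in sign, 0 lies in their convex
    hull and x is declared (approximately) stationary."""
    rng = random.Random(seed)
    for _ in range(iters):
        grads = [subgrad_abs(x + rng.uniform(-eps, eps)) for _ in range(10)]
        if min(grads) < 0 < max(grads):
            return x                     # eps-stationary point found
        x -= step * min(grads, key=abs)  # otherwise take a descent step
    return x

x_star = gradient_sampling_min(1.0)
assert abs(x_star) <= 0.25  # ends up near the true minimizer x = 0
```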
, 1994
"... When computing the infimal convolution of a convex function f with the squared norm, one obtains the so-called Moreau-Yosida regularization of f . Among other things, this function has a
Lipschitzian gradient. We investigate some more of its properties, relevant for optimization. Our main result co ..."
Cited by 49 (2 self)
When computing the infimal convolution of a convex function f with the squared norm, one obtains the so-called Moreau-Yosida regularization of f . Among other things, this function has a Lipschitzian
gradient. We investigate some more of its properties, relevant for optimization. Our main result concerns second-order differentiability and is as follows. Under assumptions that are quite reasonable
in optimization, the Moreau-Yosida is twice differentiable if and only if f is twice differentiable as well. In the course of our development, we give some results of general interest in convex
analysis. In particular, we establish a primal-dual relationship between the remainder terms in the first-order development of a convex function and its conjugate.
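For intuition about the Moreau-Yosida regularization in the abstract above: for f(x) = |x| the infimal convolution with the squared norm has a well-known closed form, the Huber function, which is smooth at 0 even though |x| is not. A small numerical check of that standard identity (the grid search below is just a crude stand-in for the exact infimum):

```python
def moreau_envelope_abs(x, lam=1.0, grid=2001, span=5.0):
    # Numerically: inf over y of |y| + (x - y)^2 / (2 * lam).
    ys = [-span + 2 * span * i / (grid - 1) for i in range(grid)]
    return min(abs(y) + (x - y) ** 2 / (2 * lam) for y in ys)

def huber(x, lam=1.0):
    # Closed form of the Moreau envelope of |x|: the Huber function.
    return x * x / (2 * lam) if abs(x) <= lam else abs(x) - lam / 2

for x in (-3.0, -0.5, 0.0, 0.25, 2.0):
    assert abs(moreau_envelope_abs(x) - huber(x)) < 1e-3
```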
, 1999
"... To efficiently derive bounds for large-scale instances of the capacitated fixed-charge network design problem, Lagrangian relaxations appear promising. This paper presents the results of
comprehensive experiments aimed at calibrating and comparing bundle and subgradient methods applied to the optimi ..."
Cited by 44 (25 self)
To efficiently derive bounds for large-scale instances of the capacitated fixed-charge network design problem, Lagrangian relaxations appear promising. This paper presents the results of
comprehensive experiments aimed at calibrating and comparing bundle and subgradient methods applied to the optimization of Lagrangian duals arising from two Lagrangian relaxations. This study
substantiates the fact that bundle methods appear superior to subgradient approaches because they converge faster and are more robust relative to different relaxations, problem characteristics, and
selection of the initial parameter values. It also demonstrates that effective lower bounds may be computed efficiently for large-scale instances of the capacitated fixed-charge network design
problem. Indeed, in a fraction of the time required by a standard simplex approach to solve the linear programming relaxation, the methods we present attain very high quality solutions.
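For readers unfamiliar with the subgradient baseline the bundle methods above are compared against, here is the whole method on a toy nonsmooth function (a stand-in for a Lagrangian dual): follow any subgradient with diminishing steps 1/k. It is simple and robust but slow, which is exactly the gap bundle methods target.

```python
def f(x):
    # A simple nonsmooth convex function; its minimum value is 3,
    # attained everywhere on the interval [-1, 2].
    return abs(x - 2) + abs(x + 1)

def subgrad(x):
    sign = lambda t: 1.0 if t > 0 else (-1.0 if t < 0 else 0.0)
    return sign(x - 2) + sign(x + 1)

# Plain subgradient method with diminishing step sizes 1/k; we track the
# best value seen because the iterates themselves need not be monotone.
x, best = 10.0, float("inf")
for k in range(1, 2001):
    best = min(best, f(x))
    x -= (1.0 / k) * subgrad(x)

assert best - 3.0 < 1e-6
```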
- Parallel Numerical Algorithms , 1997
"... Identifying the parallelism in a problem by partitioning its data and tasks among the processors of a parallel computer is a fundamental issue in parallel computing. This problem can be modeled
as a graph partitioning problem in which the vertices of a graph are divided into a specified number of su ..."
Cited by 41 (0 self)
Identifying the parallelism in a problem by partitioning its data and tasks among the processors of a parallel computer is a fundamental issue in parallel computing. This problem can be modeled as a
graph partitioning problem in which the vertices of a graph are divided into a specified number of subsets such that few edges join two vertices in different subsets. Several new graph partitioning
algorithms have been developed in the past few years, and we survey some of this activity. We describe the terminology associated with graph partitioning, the complexity of computing good separators,
and graphs that have good separators. We then discuss early algorithms for graph partitioning, followed by three new algorithms based on geometric, algebraic, and multilevel ideas. The algebraic
algorithm relies on an eigenvector of a Laplacian matrix associated with the graph to compute the partition. The algebraic algorithm is justified by formulating graph partitioning as a quadratic
assignment p...
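The "algebraic algorithm" in the survey above (partitioning by a Laplacian eigenvector, the Fiedler vector) can be demonstrated on a tiny graph. The sketch below uses deflated power iteration on c*I - L rather than a real eigensolver, which is enough for a 6-vertex example but is not how one would do this at scale:

```python
import random

def laplacian(n, edges):
    L = [[0.0] * n for _ in range(n)]
    for a, b in edges:
        L[a][a] += 1.0; L[b][b] += 1.0
        L[a][b] -= 1.0; L[b][a] -= 1.0
    return L

def fiedler_vector(n, edges, iters=500, seed=1):
    """Approximate the eigenvector of the second-smallest Laplacian
    eigenvalue: power iteration on c*I - L, repeatedly deflating the
    all-ones vector (the eigenvector for eigenvalue 0)."""
    L = laplacian(n, edges)
    c = 2.0 * max(L[i][i] for i in range(n))   # Gershgorin bound on L
    rng = random.Random(seed)
    v = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]              # project out the ones vector
        w = [c * v[i] - sum(L[i][j] * v[j] for j in range(n))
             for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Two triangles joined by one bridge edge: the sign of the Fiedler vector
# should bisect the graph right across the bridge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
v = fiedler_vector(6, edges)
side = {i for i in range(6) if v[i] > 0}
assert side in ({0, 1, 2}, {3, 4, 5})
```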
, 1996
"... To minimize a convex function, we combine Moreau-Yosida regularizations, quasiNewton matrices and bundling mechanisms. First we develop conceptual forms using "reversal " quasi-Newton formulae
and we state their global and local convergence. Then, to produce implementable versions, we inco ..."
Cited by 40 (8 self)
To minimize a convex function, we combine Moreau-Yosida regularizations, quasi-Newton matrices and bundling mechanisms. First we develop conceptual forms using "reversal" quasi-Newton
formulae and we state their global and local convergence. Then, to produce implementable versions, we incorporate a bundle strategy together with a "curve-search". No convergence results are given
for the implementable versions; however some numerical illustrations show their good behaviour even for large-scale problems.
, 1993
We study the problem of finding the minimum bisection of a graph into two parts of prescribed sizes. We formulate two lower bounds on the problem by relaxing node- and edge-incidence vectors of cuts.
We prove that both relaxations provide the same bound. The main fact we prove is that the duality between the relaxed edge- and node-vectors preserves very natural cardinality constraints on cuts. We
present an analogous result also for the max-cut problem, and show a relation between the edge relaxation and some other optimality criteria studied before. Finally, we briefly mention possible
applications for a practical computational approach.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=507993","timestamp":"2014-04-18T06:18:54Z","content_type":null,"content_length":"38792","record_id":"<urn:uuid:06b82a3f-443c-48e6-a137-a8c4bcef458b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can someone please help explain in detail how the solution is obtained for 3B-5 in problem set 6 (on the integration worksheet)? Here's the problem: Evaluate the limit by relating it to a Riemann
sum. the limit as n tends to infinity of (sin(b/n)+sin(2b/n)+...+sin((n-1)b/n)+sin(nb/n))/n The solution is the definite integral from 0 to 1 of sin(bx)dx
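One way to see the Riemann-sum connection (a sketch; this rewrite is mine, not from the thread): partition [0,1] into n equal pieces of width 1/n and take the right endpoints x_k = k/n. Then

```latex
\frac{1}{n}\sum_{k=1}^{n}\sin\!\Big(\frac{kb}{n}\Big)
  \;=\; \sum_{k=1}^{n} \sin(b\,x_k)\,\Delta x ,
  \qquad \Delta x = \frac{1}{n},\; x_k = \frac{k}{n},
```

which is exactly a Riemann sum for sin(bx) on [0,1], so the limit is the definite integral from 0 to 1 of sin(bx) dx, which evaluates to (1 - cos b)/b.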
{"url":"http://openstudy.com/updates/50c63000e4b0b766106d7af3","timestamp":"2014-04-17T16:55:39Z","content_type":null,"content_length":"25499","record_id":"<urn:uuid:bc14e9bf-87e7-4f65-b377-717e978ec9af>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Great Neck Plaza, NY Algebra Tutor
Find a Great Neck Plaza, NY Algebra Tutor
...My students include those from Hunter College High School, Stuyvesant, Bronx Science, Brooklyn Tech, etc., all referred by parents. I helped many students get into their dream schools or honors
classes. I have two master's degrees (physics and math) and a very deep understanding of physics and math concepts.
12 Subjects: including algebra 1, algebra 2, calculus, physics
...I will do everything possible to make sure that your kids get the best education. I will work with your kids each step of the way to guarantee maximum results. I guarantee you will see
improvement in your child's school work.
6 Subjects: including algebra 1, elementary (k-6th), elementary math, prealgebra
...I have a Master's in Math Education 7-12 and a Bachelor's in Mathematics. I have plenty of experience as a teacher, tutor, and test prep instructor. The majority of my experience is in K-12
public education but am also capable of tutoring college level courses up to Calculus level.
12 Subjects: including algebra 1, algebra 2, calculus, probability
...I have always tutored independently and this is my first time working with a tutoring agency. In the past I have tutored algebra, geometry, precalculus and calculus. I have also worked with an
SAT Prep organization that provides disadvantaged students with a full SAT prep course, and I have tutored a few private clients in SAT Prep.
23 Subjects: including algebra 2, algebra 1, reading, Java
...I have both Bachelor and Master in chemical engineering from City College of New York (CCNY) and MBA with project management concentration from DeVry University. The chemical engineering
degree provided strong graduate foundation in math, chemistry and physics, while my MBA provided me with grad...
21 Subjects: including algebra 2, algebra 1, chemistry, calculus
|
{"url":"http://www.purplemath.com/Great_Neck_Plaza_NY_Algebra_tutors.php","timestamp":"2014-04-18T13:35:18Z","content_type":null,"content_length":"24562","record_id":"<urn:uuid:91a81f79-634b-4431-9a81-bc89f3c5a972>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculating Angles Between Faces of a Solid
Date: 09/15/2003 at 13:15:27
From: Dave Kiefer
Subject: Geometry - Calculating angles between faces of a solid
I am making a Great Rhombicosidodecahedron,
and I need to calculate the angle between the square-hexagon, hexagon-
decagon, and square-decagon faces, i.e., the dihedral angles.
Would the angles between faces be the same? Or would I have to
calculate the angles between the different types of faces? For
example, if I were to make the faces of a flat material, with
a "clip" adapter to hold them together to assemble the solid, would I
need one type of clip, or three?
I can't begin to address this mathematically - I've tried with models
and tools to measure the angles, but the results aren't accurate
Date: 09/15/2003 at 16:27:34
From: Doctor Douglas
Subject: Re: Geometry - Calculating angles between faces of a solid
Hi Dave.
The dihedral angles are not all the same, because the polygons are not
all identical. You can perhaps convince yourself of this by imagining
the vertex at which two squares and a hexagon meet. Suppose that you
place the vertex at the origin:
y O is the origin, and we are looking
| downward from the top (along -z).
D-----E The zipping angle (EOF) becomes zero
| | .F. when E and F meet.
A-----O-------G--x Square ABCO lies in the xy-plane.
| | | Square ADEO is tilted upward by
| | | "folding" on line AO.
B-----C . . H Hexagon COFGHI is tilted upward by
I "folding" on line OC.
It seems reasonable that you will have to fold the hexagon upward
by a greater amount than you will have to fold square ADEO. After
all, when you are done, you will have formed a hexagonal prism, and
clearly the dihedral angle between the two squares is 120 degrees,
while the dihedral angle between the hexagon and a neighboring square
is 90 degrees. Hence, dihedral angles between faces with fewer
sides ought to be shallower (dihedral angle closer to 180 degrees).
So, on to calculating the dihedral angles for your problem (square/
hexagon/decagon). Because you have a square, it is convenient to
place it in the xy-plane as above, and find the vector V=(vx,vy,vz)
that represents where the zipping angle is completely closed. This
vector will point somewhere in the first octant (all of its
components are positive). We also can refer to the point V as a
point on the decagon/hexagon edge, exactly 1 unit from the origin.
Then angle AOV must measure 120 degrees (square/hexagon edge folds
on AO), and angle COV must measure 144 degrees (square/decagon edge
folds on CO). Using the properties of dot products:
(-1,0,0).(vx,vy,vz) = -vx = cos(120 deg)
(0,-1,0).(vx,yz,vz) = -vy = cos(144 deg)
So that vx = -cos(120 deg) = 1/2 and vy = -cos(144 deg) = 0.809017.
We can obtain vz from the fact that V is on the unit sphere:
vz = sqrt(1 - vx^2 - vy^2) = sqrt(1 - 0.25 - 0.6545085)
= sqrt(0.095491)
= 0.309017
Note that
vy^2 + vz^2 = 1 - vx^2
= 1 - 0.5^2
= 0.75.
On to computing the dihedral angle. Let's first try to find the angle
between the hexagon and square. For this we need to take the dot
product between the normal vectors to each face. The normal to the
square is obviously (0,0,-1) and the normal to the hexagon can be
found from the cross product between the vectors OA and OV:
OA x OV = (-1,0,0) x (vx,vy,vz)
= (0,vz,-vy)
So that the angle between the planes is given by
Q(square,hexagon) = arccos{(0,0,-1).(0,vz,-vy)/[sqrt(vy^2+vz^2)]}
= arccos[vy/sqrt(0.75)]
= arccos[0.809017/0.866025]
= 20.905 degrees
The dihedral angle is the supplementary angle to this, or
D(square,hexagon) = 180 deg - 20.905 deg
= 159.095 degrees.
To obtain the dihedral angles involving the decagon, you'll need the
normal to that face, which can be found from the cross product of
OC with OV:
OC x OV = (0,-1,0) x (vx,vy,vz)
= (-vz,0,vx)
Then the angle between the square and the decagon is
Q(square,decagon) = arccos{(0,0,-1).(-vz,0,vx)/[sqrt(vz^2+vx^2)]}
= arccos[-vx/sqrt(0.095491 + 0.25)]
= arccos[-0.5/sqrt(0.345491)]
= 148.28 degrees
This is already the dihedral angle, because it is bigger than 90 degrees.
Finally, we can compute the remaining dihedral angle
(hexagon-decagon) by using the arccosine formula and the dot
product. We already have the two required normals OA x OV and
OC x OV, and we must remember to be careful to divide by the
lengths L in the dot product:
Q(hexagon,decagon) = arccos{(OAxOV).(OCxOV) / [L(OAxOV)L(OCxOV)]}
= arccos{(0,vz,-vy).(-vz,0,vx) / [sqrt(vz^2+vy^2)sqrt(vx^2+vz^2)]}
= arccos{(vx vy) / [sqrt(0.75)sqrt(0.345491)]}
= arccos{0.5 x 0.809017 / [sqrt(0.75)sqrt(0.345491)]}
= arccos(0.794655)
= 37.377 degrees
so that the dihedral angle between the hexagon and decagon is
180 degrees - 37.377 degrees = 142.62 degrees. So the angles we're
looking for are
hexagon-decagon: 142.62 degrees
square-decagon: 148.28 degrees
hexagon-square: 159.095 degrees
It makes sense that the square/hexagon dihedral angle
is the largest, because the square and hexagon have the
fewest sides, and that the hexagon/decagon dihedral angle is
the smallest, by the reasoning above.
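The three answers can be cross-checked numerically. This sketch (mine, not part of the original answer) redoes the same vector recipe in Python; because the two cross-product normals are not consistently oriented, the interior dihedral of this convex solid is taken as the obtuse choice between theta and 180 - theta.

```python
import math

# V = the point where the folded edges meet, as derived in the text:
vx = -math.cos(math.radians(120))          # 1/2
vy = -math.cos(math.radians(144))          # 0.809017...
vz = math.sqrt(1 - vx**2 - vy**2)          # V lies on the unit sphere

def dihedral(n1, n2):
    """Interior dihedral angle between two faces given their normals.
    The cross-product normals are not consistently oriented, so take
    the obtuse of the two supplementary candidates (valid here since
    every dihedral of this convex solid exceeds 90 degrees)."""
    dot = sum(a * b for a, b in zip(n1, n2))
    mag = lambda v: math.sqrt(sum(a * a for a in v))
    theta = math.degrees(math.acos(dot / (mag(n1) * mag(n2))))
    return theta if theta > 90 else 180 - theta

n_square  = (0.0, 0.0, -1.0)   # square lies in the xy-plane
n_hexagon = (0.0, vz, -vy)     # OA x OV
n_decagon = (-vz, 0.0, vx)     # OC x OV

print(dihedral(n_hexagon, n_decagon))   # ~ 142.62
print(dihedral(n_square, n_decagon))    # ~ 148.28
print(dihedral(n_square, n_hexagon))    # ~ 159.095
```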
- Doctor Douglas, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/64251.html","timestamp":"2014-04-19T08:36:59Z","content_type":null,"content_length":"10972","record_id":"<urn:uuid:9e015478-3c76-4e2b-b08d-6517bd60783c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Does this sum equal zeta(3)?
Does: $$\sum_{1 \leq i<j} \frac{1}{i j^2} = \sum_{1 \leq k} \frac{1}{k^3}?$$
Motivation: Call the above sum $S$, and let $$T := \sum_{ GCD(i,j)=1} \frac{1}{\max(i,j) i j}.$$ The sum $T$ came up in a computation on Jim Propp's question here. Numerical computation suggested
that $T$ is extremely close to $3$.
It is not hard to show that $$T = \zeta(3)^{-1} \sum \frac{1}{\max(i,j) i j} = \zeta(3)^{-1} \left( \sum_{k} \frac{1}{k^3} + 2 \sum_{i<j} \frac{1}{i j^2} \right) = 1 + 2 \zeta(3)^{-1} S,$$ by
breaking into cases according to whether $i<j$, $i=j$ or $i>j$. So $T=3$ iff $S=\zeta(3)$.
As I describe in the above linked thread, numerical computations suggest that the sums agree to $20$ digits of accuracy. What is going on?
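A quick numerical sanity check is easy to run (a sketch; the partial sums of $S$ converge only like $\log N/N$, so only a few digits of agreement should be expected at this size):

```python
# Check Euler's identity  zeta(1,2) = sum_{1 <= i < j} 1/(i j^2) = zeta(3)
# by partial sums.  The inner sum over i < j is the harmonic number
# H_{j-1}, accumulated incrementally, so the whole thing is O(N).
N = 5000
H = 0.0   # running harmonic number H_{j-1}
S = 0.0   # partial sum of zeta(1,2)
for j in range(1, N + 1):
    S += H / (j * j)
    H += 1.0 / j

zeta3 = sum(1.0 / (k * k * k) for k in range(1, 100001))
print(S, zeta3)   # both near 1.202; they differ by roughly log(N)/N
```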
nt.number-theory zeta-functions
5 Of course, it is identity 67 on mathworld.wolfram.com/RiemannZetaFunction.html – Dror Speiser Feb 11 '11 at 17:02
10 @Dror: why of course? – Mariano Suárez-Alvarez♦ Feb 11 '11 at 17:57
1 This question appears to be related to a recent one of mine: mathoverflow.net/questions/50253. I'm wondering if one of the solutions given there can be used to give another perspective on this
result? – Mike Spivey Feb 11 '11 at 19:26
@Mariano: aside from the fact that I remembered seeing the identity on that page a few years back, finding it again was routine: google "Riemann Zeta Function", second hit, skimming to about mid
page, and there, but not over - first reference Stark, jstor, and he references Klamkin, as did David below. As can be seen from the timestamps, this was done in under 5 minutes (given that I
remembered seeing it). – Dror Speiser Feb 16 '11 at 10:14
1 Answer
Hi David,
This is the first example of a multiple zeta identity. Your sum S is just $\zeta(1,2)$, where the multiple zeta value is defined by: $$\zeta(s_1, s_2, \ldots, s_k) = \sum_{0 < n_1 < n_2 < \cdots < n_k} \left( \prod_{i=1}^k n_i^{-s_i} \right).$$
Your identity $\zeta(1,2) = \zeta(3)$ was discovered by Euler according to Wikipedia.
5 Thanks Marty! For those with JSTOR access, a very clean proof is given at jstor.org/stable/2308345 . For those with access to a good library, this is The American Mathematical
Monthly, Vol. 59, No. 7, Aug. - Sep., 1952, p. 471 , problem proposed by M. Klamkin, solution by R. Steinberg. – David Speyer Feb 11 '11 at 17:48
|
{"url":"http://mathoverflow.net/questions/55141/does-this-sum-equal-zeta3?answertab=oldest","timestamp":"2014-04-18T21:58:12Z","content_type":null,"content_length":"57994","record_id":"<urn:uuid:1f99168e-a3cb-40fa-9268-c232da19d063>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
|
using math.h library (sin/cos/acos...) - Arduino Forum
I'm trying to do some trigonometry in Arduino.
I need to calculate the distance between 2 points on a sphere. I have the math formula and I'm trying to translate it into Arduino.
I've added an include for the <math.h> library at top.
My main question is: How can I do cos/sin calculations in Arduino if it doesn't support floating points?
Can I write that part in C and insert it inside of Arduino? Would I make a C library that calls the math.h one and sends out results as longs or ints back to Arduino on request?...
Thanks lots.
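The poster's formula isn't shown, but the usual one for this job is the haversine great-circle distance. Here is a hedged sketch in Python for clarity; note that Arduino's C++ does in fact support `float`, and the math.h sin/cos/asin/sqrt functions accept it, so the same expression ports directly.

```python
import math

def haversine(lat1, lon1, lat2, lon2, radius=6371.0):
    """Great-circle distance between two points on a sphere.
    Inputs in degrees; radius defaults to Earth's mean radius in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * radius * math.asin(math.sqrt(a))

print(haversine(0, 0, 0, 90))   # a quarter of the equator, ~10007.5 km
```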
|
{"url":"http://forum.arduino.cc/index.php?topic=41124.msg299529","timestamp":"2014-04-21T12:53:07Z","content_type":null,"content_length":"76484","record_id":"<urn:uuid:5f0013c9-02e9-4c6d-b12c-cc4441426108>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tail Recursive Algorithms In Scala
Posted by kungfuice & filed under Computers, java, programming.
After the NFJS conference this weekend I have been spending more and more time working in Scala and trying to learn the ins-and-outs of the language. One of the neat things I came across
was tail recursive algorithm optimization within the language itself.
When writing in a functional language like Scala you rely heavily on recursive algorithms to do your work, instead of iterative algorithms more present in OO languages like Java. This leads to an
interesting problem with running functional languages within the JVM.
The Java Virtual Machine does not have support for tail-recursive algorithm optimizations. What does that mean? Basically the JVM cannot spot when to optimize calls stacks for recursive algorithms.
Well who cares right? If you’re writing in Java this issue doesn’t usually come up since most of us simply write our loops and are happy with the performance the JVM offers us.
However, when you are writing a lot of recursive algorithms you quickly start to see how tail-recursive algorithms help makes your code more efficient.
For those of you who don’t remember what a tail-recursive algorithm is, here is a short little explanation:
“A Tail-recursive function is a function that ends in a recursive call to itself (effectively not building up any deferred operations). Tail recursive functions have the ability to re-use the same
stack frame over and over and therefore are not bound by the number of available stack frames. Many people have written the simple factorial function recursively, but many have noticed that if you
pass it a value of N that is large enough you will see a dreaded Out of Memory exception.
The reason for this is due to the factorial functions deferred operation statement that comes at the end of the method. Let’s look at the simple factorial algorithm
//INPUT: n is an Integer such that n >= 1
int fact(int n) {
    if (n == 1)
        return 1;
    return n * fact(n-1);
}
You can see that at the end of the algorithm we have a deferred operation of multiplying n with the next value computed from factorial; this goes on and on until we finally hit a factorial
function that returns one, at which point the stack is popped and everything returns nicely. However we are now bound by the available stack frames in terms of computing n (also by the max size of an
int, but that's beside the point).
In a tail-recursive algorithm you do not see this deferred execution, and therefore the same stack frame can be reused over and over again. Here is the same algorithm rewritten using a tail-recursive
//INPUT: n is an Integer such that n >= 1
int fact(int n) {
    return fact_support(n, 1);
}

int fact_support(int n, int acc) {
    if (n == 0) return acc;
    else return fact_support(n-1, acc*n);
}
As you can see there are no deferred operations in this example we have effectively gotten rid of having to use n number of stack frames. Each call to fact_support brings with it the current
accumulated sum, and thus does not need to wait to “pop” the stack and rely on the stack pop to accumulate the sum of the factorials.
As you can see tail-recursive optimizations are very important to languages where recursion is relied on heavily. Without this type of optimization many of the recursive algorithms are now bound by
the number of available stack frames.
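For comparison, here is a hedged sketch in Python (which, unlike Scala, never eliminates tail calls) of the transformation a tail-call-optimizing compiler performs: the accumulator version and its hand-rewritten loop compute the same thing, but only the loop runs in constant stack space.

```python
import sys

def fact_support(n, acc=1):
    """Tail-recursive accumulator form.  Python still burns one stack
    frame per call (no TCO), so a large n would hit the recursion limit."""
    if n == 0:
        return acc
    return fact_support(n - 1, acc * n)

def fact_loop(n):
    """The loop a tail-call-optimizing compiler effectively emits:
    the same accumulator, reusing a single frame."""
    acc = 1
    while n > 0:
        acc, n = acc * n, n - 1
    return acc

print(fact_support(10), fact_loop(10))   # 3628800 3628800

# The loop form has no depth limit:
big = 2 * sys.getrecursionlimit()
fact_loop(big)          # fine (a big integer)
# fact_support(big)     # would raise RecursionError in Python
```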
So that can’t be so hard right? Why not just write all your algorithms to use tail-recursion? Well for one it’s not always possible to do this, but also it puts a lot of work in the hands of the
developer. We have this beautiful thing called the JVM that should be able to spot cases where optimization can occur.
Unfortunately the JVM doesn’t know how to spot this optimization, therefore the Scala compiler must look at ways of optimizing your code. Even more unfortunate the Scala documentation explicitly
states that you should not rely on the Scala compiler making this optimization.
It may optimize your code, but it may not as well, so really the only way to be sure is to write your algorithms guaranteeing that tail-recursion is being utilized from the get-go.
It would be really nice to see the JVM improved to allow for this type of optimization to be done in the Hotspot, and there has been talk about improving the JVM to support functional languages, but
I guess time will only tell.
Dan Hodge
An excellent discussion of tail recursion and much easier to follow than the section on tail recursion in Scala By Example.
I’m glad this helped you out. Keep an eye out for some more Scala posts I’ll be making in the next couple of weeks.
Paul Copeland
Even though there is no “deferred execution” there are the same number of nested function calls and presumably the same depth of the stack in the second case unless the system does some kind of
rewriting. The usual examples show that tail recursion can always be rewritten by the programmer with a loop. Many useful recursive algorithms cannot be reduced to tail recursion and there is no
simple way to remove the recursion.
Bill Allen
The whole point of the tail recursion optimization is that the compiler removes the function calls effectively rewriting the algorithm to be an efficient loop.
I am just trying to make sure that I get it right. You meant by the optimization that it can reuse the stack frame over and over, right?. But you mentioned that it does not always detect that
optimization even if the program was written as tail recursive?
@Amal This is true. Since the Java Virtual Machine has no way of optimizing tail recursive functions, it is solely up to the Scala compiler to detect tail-recursive functions and rewrite them
into iterative functions. Unfortunately the optimization breaks down if you do anything fancy in your tail recursion (basically, any function that ends in an indirect function call). Really,
tail-call optimization in Scala is limited to any function that calls itself (and only itself) directly as its last operation, without going through a function value.
|
{"url":"http://www.kungfuice.com/index.php/2008/09/18/tail-recursive-algorithms-in-scala/","timestamp":"2014-04-19T02:11:21Z","content_type":null,"content_length":"36202","record_id":"<urn:uuid:2b87a415-b76a-473f-988d-b080ce0bd98a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Alissa S. Crans
Many people who have never had occasion to learn what mathematics is confuse it with arithmetic and consider it a dry and arid science. In actual fact it is the science which demands the utmost
imagination. One of the foremost mathematicians of our century says very justly that it is impossible to be a mathematician without also being a poet in spirit... It seems to me that the poet must
see what others do not see, must see more deeply than other people. And the mathematician must do the same.
-Sofya Kovalevskaya, 1890
|
{"url":"http://myweb.lmu.edu/acrans/","timestamp":"2014-04-17T07:48:31Z","content_type":null,"content_length":"5194","record_id":"<urn:uuid:d63a4502-ce5d-4c51-8b87-ac17d64ce83e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elmwood, MA Algebra 2 Tutor
Find an Elmwood, MA Algebra 2 Tutor
...I am a native English speaker, have been studying Spanish for twelve years, and Chinese for two. In high school, I tutored my peers in classes that I had already taken, and assisted in
teaching a third-grade Spanish class. Last year through my university I participated in a program known as Jumpstart.
9 Subjects: including algebra 2, chemistry, physics, biology
...I also enjoy helping others to master different types of questions. I have several years part-time experience holding office hours and working in a tutorial office. I have worked with students
who are taking the GED specifically.
29 Subjects: including algebra 2, reading, English, geometry
...I attended U.C. Santa Barbara, ranked 33rd in the World's Top 200 Universities (2013), and graduated with a degree in Communication. I've been tutoring for the past 9 years and have worked
with students at many different levels.
27 Subjects: including algebra 2, reading, writing, English
...I have currently been teaching this subject for the past 8 years and am well versed in the changes to the subject requirements due to the Common Core. I have currently been teaching this
subject for many years and am well versed in the changes to the subject requirements due to the Common Core. Over the past 8 years of teaching I have assisted students in becoming more organized.
5 Subjects: including algebra 2, algebra 1, precalculus, study skills
...I am currently a research associate in materials physics at Harvard, have completed a postdoc in geophysics at MIT, and received my doctorate in physics / quantitative biology at Brandeis
University. I will travel throughout the area to meet in your home, library, or wherever is comfortable for ...
16 Subjects: including algebra 2, calculus, physics, geometry
|
{"url":"http://www.purplemath.com/Elmwood_MA_Algebra_2_tutors.php","timestamp":"2014-04-20T19:35:51Z","content_type":null,"content_length":"24117","record_id":"<urn:uuid:48c949bf-a9dc-49c3-9f09-6964e979623f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
|
a decision procedure for runtime types
Major Section: ACL2 Documentation
This doc topic is the main source of information about the tau system and discusses the general idea behind the procedure and how to exploit it.
A ``Type-Checker'' for an Untyped Language
Because ACL2 is an untyped language it is impossible to type check it. All functions are total. An n-ary function may be applied to any combination of n ACL2 objects. The syntax of ACL2 stipulates
that (fn a1...an) is a well-formed term if fn is a function symbol of n arguments and the ai are well-formed terms. No mention is made of the ``types'' of terms. That is what is meant by saying ACL2
is an untyped language.
Nevertheless, the system provides a variety of monadic Boolean function symbols, like natp, integerp, alistp, etc., that recognize different ``types'' of objects at runtime. Users typically define
many more such recognizers for domain-specific ``types.'' Because of the prevalence of such ``types,'' ACL2 must frequently reason about the inclusion of one ``type'' in another. It must also reason
about the consequences of functions being defined so as to produce objects of certain ``types'' when given arguments of certain other ``types.''
Because the word ``type'' in computer science tends to imply syntactic or semantic restrictions on functions, we avoid using that word henceforth. Instead, we just reason about monadic Boolean
predicates. You may wish to think of ``tau'' as synonymous with ``type'' but without any suggestion of syntactic or semantic restrictions.
Some Related Topics
Design Philosophy
The following basic principles were kept in mind when developing tau checker and may help you exploit it.
(1) The tau system is supposed to be a lightweight, fast, and helpful decision procedure for an elementary subset of the logic focused on monadic predicates and function signatures.
(2) Most subgoals produced by the theorem prover are not in any decidable subset of the logic! Thus, decision procedures fail to prove the vast majority of the formulas they see and will be net
time-sinks if tried too often no matter how fast they are.
Tau reasoning is used by the prover as part of preprocess-clause, one of the first proof techniques the system tries. The tau system filters out ``obvious'' subgoals. The tau system is only tried
when subgoals first enter the waterfall and when they are stable under simplification.
(3) The tau system is ``benign'' in the sense that the only way it contributes to a proof is to eliminate (prove!) subgoals. It does not rewrite, simplify, or change formulas. Tau reasoning is not
used by the rewriter. The tau system either eliminates a subgoal by proving it or leaves it unchanged.
(4) It is impossible to infer automatically the relations between arbitrary recursively defined predicates and functions. Thus, the tau system's knowledge of tau relationships and function signatures
is gleaned from theorems stated by the user and proved by the system.
(5) Users wishing to build effective ``type-checkers'' for their models must learn how rules affect the tau system's behavior. There are two main forms of tau rules: those that reveal inclusion/
exclusion relations between named tau predicates, e.g., that 16-bit naturals are also 32-bit naturals,
(implies (n16p x) (n32p x)),
and signatures for all relevant functions, e.g., writing a 32-bit natural to a legal slot in a register file produces a register file:
(implies (and (natp n)
(< n 16)
(n32p val)
(register-filep regs))
(register-filep (update-nth n val regs))).
For a complete description of acceptable forms see :tau-system.
(6) The tau system is ``greedy'' in its efforts to augment its database. Its database is potentially augmented when rules of any :rule-class (see :rule-classes) are proved. For example, if you make a
:rewrite or :type-prescription rule which expresses relationship between tau, ACL2 will build it into the tau database. The rule-class :tau-system can be used to add a rule to the tau database
without adding any other kind of rule.
(7) Greediness is forced into the design by benignity: the tau system may ``know'' some fact that the rewriter does not, and because tau reasoning is not used in rewriting, that missing fact must be
conveyed to the rewriter through some other class of rule, e.g., a :rewrite or :type-prescription or :forward-chaining rule. By making the tau system greedy, we allow the user to program the rewriter
and the tau system simultaneously while keeping them separate. However, this means you must keep in mind the effects of a rule on both the rewriter and the tau system and use :tau-system rules
explicitly when you want to ``talk'' just to the tau system.
(8) Tau rules are built into the database with as much preprocessing as possible (e.g., the system transitively closes inclusion/exclusion relationships at rule-storage time) so the checker can be
(9) For speed, tau does not track dependencies and is not sensitive to the enabled/disabled status (see enable and disable) of rules. Once a fact has been built into the tau database, the only way to
prevent that fact from being used is by disabling the entire tau system, by disabling (:executable-counterpart tau-system). If any tau reasoning is used in a proof, the rune (:executable-counterpart
tau-system) is reported in the summary. For a complete list of all the runes in the tau database, evaluate (global-val 'tau-runes (w state)). Any of these associated theorems could have been used.
These design criteria are not always achieved! For example, the tau system's ``greediness'' can be turned off (see set-tau-auto-mode), the tau database can be regenerated from scratch to ignore
disabled rules (see regenerate-tau-database), and disabling the executable-counterpart of a tau predicate symbol will prevent the tau system from trying to run the predicate on constants. The tau
system's benignity can be frustrating since it might ``know'' something the rewriter does not. More problematically, the tau system is not always ``fast'' and not always ``benign!'' The typical way
tau reasoning can slow a proof down is by evaluating expensive tau predicates on constants. The typical way tau reasoning can hurt a previously successful proof is by proving some subgoals (!) and
thus causing the remaining subgoals to have different clause-identifiers, thus making explicit hints no longer applicable. We deal with such problems in dealing-with-tau-problems.
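The controls mentioned in this section can be collected as event forms. This is a sketch using only the names the text itself cites; the nil argument to set-tau-auto-mode is an assumption, inferred from its description as turning greediness off:

```lisp
; Disable (and re-enable) the entire tau decision procedure:
(in-theory (disable (:executable-counterpart tau-system)))
(in-theory (enable  (:executable-counterpart tau-system)))

; Turn off the tau system's "greedy" harvesting of rules from
; every proved theorem (argument assumed):
(set-tau-auto-mode nil)

; Rebuild the tau database from scratch, this time respecting the
; current enabled/disabled status of rules:
(regenerate-tau-database)

; List every rune the tau database currently rests on:
(global-val 'tau-runes (w state))
```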
Technical Details
The tau system consists of both a database and an algorithm for using the database. The database contains theorems that match certain schemas allowing them to be stored in the tau database. Roughly
speaking the schemas encode ``inclusion'' and ``exclusion'' relations, e.g., that natp implies integerp and that integerp implies not consp, and they encode ``signatures'' of functions, e.g.,
theorems that relate the output of a function to the input, provided only tau predicates are involved.
By ``tau predicates'' we mean the application of a monadic Boolean-valued function symbol, the equality of something to a quoted constant, an arithmetic ordering relation between something and a
rational constant, or the logical negation of such a term. Here are some examples of tau predicates:
(natp i)
(not (consp x))
(equal y 'MONDAY)
(not (eql 23 k))
(< 8 max)
(<= max 24)
Synonyms for equal include =, eq, and eql. Note that negated equalities are also allowed. The arithmetic ordering relations that may be used are <, <=, >=, and >. One of the arguments to every
arithmetic ordering relation must be an integer or rational constant for the term to be treated as a tau predicate.
A ``tau'' is a data object representing a set of signed (positive or negative) tau predicates whose meaning is the conjunction of the literals in the set.
When we say that a term ``has'' a given tau we mean the term satisfies all of the recognizers in that tau.
The tau algorithm is a decision procedure for the logical theory described (only) by the rules in the database. The algorithm takes a term and a list of assumptions mapping subterms (typically
variable symbols) to tau, and returns the tau of the given term.
When the system is called upon to decide whether a term satisfies a given monadic predicate, it computes the tau of the term and asks whether the predicate is in that set. More generally, to
determine if a term satisfies a tau, s, we compute a tau, r, for the term and ask whether s is a subset of r. To determine whether a constant, c, satisfies tau s we apply each of the literals in s to
c. Evaluation might, of course, be time-consuming for complex user-defined predicates.
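The database-plus-subset-check idea is easy to see in miniature. The sketch below is plain Python, not ACL2 — the rule set is a made-up fragment and the representation details differ from the real implementation — but it shows the shape of the computation: implication rules between signed predicates are transitively closed at storage time, and deciding whether a term satisfies a tau reduces to a set-inclusion test.

```python
# A signed predicate is (name, True) for P or (name, False) for (not P).
# Illustrative inclusion/exclusion rules, e.g. natp -> integerp.
RULES = {
    ("natp", True): {("integerp", True)},
    ("integerp", True): {("rationalp", True), ("consp", False)},
    ("rationalp", True): {("acl2-numberp", True)},
}

def close(rules):
    """Transitively close the implication rules (done at rule-storage time)."""
    closed = {k: set(v) for k, v in rules.items()}
    changed = True
    while changed:
        changed = False
        for implied in closed.values():
            extra = set()
            for lit in implied:
                extra |= closed.get(lit, set())
            if not extra <= implied:
                implied |= extra
                changed = True
    return closed

def tau_of(literals, closed):
    """The tau of a term assumed to satisfy `literals`: literals plus closure."""
    tau = set(literals)
    for lit in literals:
        tau |= closed.get(lit, set())
    return tau

closed = close(RULES)
tau = tau_of({("natp", True)}, closed)          # tau of a term known to be natp
goal = {("integerp", True), ("consp", False)}   # "is it an integer and not a cons?"
print(goal <= tau)                               # subset check decides the query
```

With these rules the natp tau also picks up rationalp and acl2-numberp, so the subset check succeeds without any rewriting.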
The tau database contains rules derived from definitions and theorems stated by the user. See :tau-system for a description of the acceptable forms of tau rules.
To shut off the greedy augmentation of the tau database, see set-tau-auto-mode. This may be of use to users who wish to tightly control the rules in the tau database. To add a rule to the tau
database without adding any other kind of rule, use the rule class :tau-system.
There are some slight complexities in the design related to how we handle events with both :tau-system corollaries and corollaries of other :rule-classes, see set-tau-auto-mode.
To prevent tau reasoning from being used, disable the :executable-counterpart of tau-system, i.e., execute
(in-theory (disable (:executable-counterpart tau-system)))
or, equivalently,
(in-theory (disable (tau-system)))
To prevent tau from being used in the proof of a particular subgoal, locally disable the :executable-counterpart of tau-system with a local :in-theory hint (see hints).
The event command tau-status is a macro that can be used to toggle both whether tau reasoning is globally enabled and whether the tau database is augmented greedily. For example, the event
(tau-status :system nil :auto-mode nil)
prevents the tau system from being used in proofs and prevents the augmentation of the tau database by rules other than those explicitly labeled :tau-system.
To see what the tau system ``knows'' about a given function symbol see tau-data. To see the entire tau database, see tau-database. To regenerate the tau database using only the runes listed in the
current enabled theory, see regenerate-tau-database.
|
{"url":"http://www.cs.utexas.edu/users/moore/acl2/v6-1/INTRODUCTION-TO-THE-TAU-SYSTEM.html","timestamp":"2014-04-21T08:34:15Z","content_type":null,"content_length":"14745","record_id":"<urn:uuid:80e4af21-9f24-46c2-9f85-b7a33b9f1267>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the Casimir effect?
Northeastern University experimental particle physicists Stephen Reucroft and John Swain put their heads together to write the following answer.
To understand the Casimir Effect, one first has to understand something about a vacuum in space as it is viewed in quantum field theory. Far from being empty, modern physics assumes that a vacuum is
full of fluctuating electromagnetic waves that can never be completely eliminated, like an ocean with waves that are always present and can never be stopped. These waves come in all possible
wavelengths, and their presence implies that empty space contains a certain amount of energy--an energy that we can't tap, but that is always there.
Now, if mirrors are placed facing each other in a vacuum, some of the waves will fit between them, bouncing back and forth, while others will not. As the two mirrors move closer to each other, the
longer waves will no longer fit--the result being that the total amount of energy in the vacuum between the plates will be a bit less than the amount elsewhere in the vacuum. Thus, the mirrors will
attract each other, just as two objects held together by a stretched spring will move together as the energy stored in the spring decreases.
Image: Scientific American
This effect, that two mirrors in a vacuum will be attracted to each other, is the Casimir Effect. It was first predicted in 1948 by Dutch physicist Hendrick Casimir. Steve K. Lamoreaux, now at Los
Alamos National Laboratory, initially measured the tiny force in 1996.
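For ideal, perfectly conducting parallel plates, Casimir's calculation gives an attractive pressure of P = π²ħc/(240 d⁴). The snippet below is a back-of-envelope check for that idealized parallel-plate geometry (not a model of any particular experiment), and shows how tiny the force is at micron separations.

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
c = 2.997_924_58e8         # speed of light, m/s

def casimir_pressure(d):
    """Attractive Casimir pressure (Pa) between ideal parallel plates
    separated by d metres: pi^2 * hbar * c / (240 * d^4)."""
    return math.pi ** 2 * hbar * c / (240 * d ** 4)

p = casimir_pressure(1e-6)   # plates 1 micrometre apart
print(f"{p:.2e} Pa")         # about 1.3e-3 Pa: tiny, but measurable
```

The 1/d⁴ scaling is why the effect only becomes noticeable at sub-micron separations: halving the gap multiplies the pressure sixteenfold.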
It is generally true that the amount of energy in a piece of vacuum can be altered by material around it, and the term "Casimir Effect" is also used in this broader context. If the mirrors move
rapidly, some of the vacuum waves can become real waves. Julian Schwinger and many others have suggested that this "dynamical Casimir effect" may be responsible for the mysterious phenomenon known as sonoluminescence.
One of the most interesting aspects of vacuum energy (with or without mirrors) is that, calculated in quantum field theory, it is infinite! To some, this finding implies that the vacuum of space
could be an enormous source of energy--called "zero point energy."
But the finding also raises a physical problem: there's nothing to stop arbitrarily small waves from fitting between two mirrors, and there is an infinite number of these wavelengths. The
mathematical solution is to temporarily do the calculation for a finite number of waves for two different separations of the mirrors, find the associated difference in vacuum energies and then argue
that the difference remains finite as one allows the number of wavelengths to go to infinity.
Although this trick works, and gives answers in agreement with experiment, the problem of an infinite vacuum energy is a serious one. Einstein's theory of gravitation implies that this energy must
produce an infinite gravitational curvature of spacetime--something we most definitely do not observe. The resolution of this problem is still an open research question.
|
{"url":"http://www.scientificamerican.com/article/what-is-the-casimir-effec/","timestamp":"2014-04-16T17:25:28Z","content_type":null,"content_length":"58232","record_id":"<urn:uuid:7efb0b1a-8656-4e13-ad93-f79caab7c5ab>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Huntington Park Precalculus Tutor
Find a Huntington Park Precalculus Tutor
...I spend approximately 5-7 hours a day coding in MATLAB. I have taken a class on numerical methods at Caltech that was done half in mathematica, half in Matlab. I am currently working on a
physics research project studying the structure of a new type of material, a quasicrystal, and the code I am writing for the project is also in mathematica.
26 Subjects: including precalculus, calculus, physics, algebra 2
...I tutored the students in one-on-one sessions, group sessions, and conducted review sessions before exams. In addition, I was a teaching assistant for undergraduate and graduate students in the
Biomedical Engineering and Kinesiology departments. It is my goal to not only teach my students the material, but to give them the tools needed to succeed in all their classes.
30 Subjects: including precalculus, chemistry, calculus, physics
...Students I worked with have scored higher on their finals and other placement tests. I am very flexible and available weekdays and weekends. I will be a great help for students who require
science classes in their majors or for those who are looking to score high on their entry exams.
11 Subjects: including precalculus, chemistry, geometry, algebra 1
My tutoring experience prior to WyzAnt is primarily in organic chemistry; I was an organic chemistry tutor for the Department of Chemistry at University of California, Irvine for a year and then
became the coordinator for their organic chemistry tutoring program the following year, where I worked wi...
9 Subjects: including precalculus, chemistry, physics, geometry
...Before that, I tutored undergraduate-level physics at Case Western Reserve University. As for my "tutoring style"? I try to keep things simple.
11 Subjects: including precalculus, calculus, physics, algebra 2
|
{"url":"http://www.purplemath.com/Huntington_Park_Precalculus_tutors.php","timestamp":"2014-04-21T02:19:30Z","content_type":null,"content_length":"24322","record_id":"<urn:uuid:71271c24-bbe3-4005-b7ce-12d050b59376>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Practical Foundations of Mathematics
Paul Taylor
Practical Foundations collects the methods of construction of the objects of twentieth century mathematics. Although it is mainly concerned with a framework essentially equivalent to intuitionistic
ZF, the book looks forward to more subtle bases in categorical type theory and the machine representation of mathematics. Each idea is illustrated by wide-ranging examples, and followed critically
along its natural path, transcending disciplinary boundaries between universal algebra, type theory, category theory, set theory, sheaf theory, topology and programming.
Submitted by xardon on Sept. 5, 2012, 8:16 a.m.
|
{"url":"http://hackershelf.com/book/302/practical-foundations-of-mathematics/","timestamp":"2014-04-20T15:51:32Z","content_type":null,"content_length":"7089","record_id":"<urn:uuid:a72013a0-e19f-44cc-b7d0-0f8dbe474814>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fraction Strips Pdf
DRAFT UNIT PLAN - Grade 4: Number & Operations – Fractions – Extend understanding of fraction equivalence and ordering. DRAFT Maryland Common Core State Curriculum for Grade 4 April 19, 2012 Page 18
of 21
Using your fraction strips, work through the following and have several examples for each: If you add 2 fractions and the sum is greater than ½, what can you say about the fractions?
Use your fraction strips and/or Fraction Bar Comparison Sheet to help you match fractions that are equal. Draw a line between the matching fractions. ...
The students would also know the terminology for our class manipulatives such as circle fraction pieces, rectangle fraction pieces, fraction strips, ... from http://www.frenchimmersion.spps.org/sites
/f3d4c5a5-033c-4745-bdda- 4559e33d0e7d/uploads/Unit9_family_letters.pdf. Teaching Ideas.
such as fraction strips, fraction towers, Cuisenaire rods) for fraction placement on a number line . ... http://www.pbs.org/teachers/mathline/lessonplans/pdf/esmp/half.pdf (Interactive lesson ideas
and activities on half and not half)
Discuss how the fraction strips can help with finding equivalent fractions, then model multiplying by a form of 1. Teacher will give 2 more fractions to find 3 equivalent fractions by drawing and
using the rule. Evaluation:
Identify the equivalent decimal that represents the shaded area of this fraction bar. a. 0.5. b. 0.2. c. 0.1. d. 1.5. Objective Test Sub-Score: _____/10. ... Using the blue strips have students
complete #8 and #9 on the worksheet individually and discuss their answers, discoveries, ...
We also cut up unit fraction strips to work with in other lessons. ... http://lrt.ednet.ns.ca/PD/BLM/pdf_files/fraction_strips/fs_to_twelfths_labelled.pdf. http://lrt.ednet.ns.ca/PD/BLM/
table_of_contents.htm. Author: Jerine Pegg Created Date:
Fraction Strips. Quiz (Math Check-up) Quick Write. Response Card. Recipe Activity. ... http://www.tttpress.com/pdf/fraction-number-line.pdf (Activity may be modified to meet the learning objective.)
B Level: Choose two of the 5 points activities.
Folding fraction strips and modeling that language and supporting students as they practice is critical. ... This PDF provides a very solid overview of 3.NF.1 with examples and understandings for the
Concrete to Representational to Abstract Instructional Strategy.
Use the fraction strips below to shade in the following values: a. b. c. d. a. Select . The . multiples. ... Blue fraction: If you select a denominator of 4 and a numerator of 5. a. The selected
fractions are and . b.
Fraction strips (made in the “Creating Fraction Strips” activity or made from the fraction strip templates on the following pages), ... http://www.aimsedu.org/Activities/minimetrics/mini-metrics.pdf
– A Mini-Metric Olympics activity from the AIMS organization.
Also, if students are using fraction strips, circles, or other fraction manipulatives, they will be able to compare fractions without making this ... http://www.mathworksheets4kids.com/fractions/
compare/picture-1.pdf http://www.worksheetfun.com/wp-content/uploads/2013/02 ...
http://www.printable-math-worksheets.com/support-files/manipulative-fraction-strips.pdf (blackline master of fraction ... Students can use fraction strips to solve fraction addition and subtraction
word problems with unlike dominators and explain to a partner why 1/2 + 1/3 does not equal 2/5 ...
How can benchmark fractions and number sense be used to estimate the value of a given fraction? ... Students model adding mixed numbers using drawings or fraction strips. ... http://www.thirteen.org/
edonline/adulted/lessons/stuff/lp46_fracword.pdf. http://nlvm.usu.edu/en/nav/frames_asid_106_g_2 ...
The emphasis of this unit is to develop proficiency with fraction equivalency and comparison for 4th grade students. This unit addresses components of Critical Area (2) in the Massachusetts
Mathematics Curriculum Frameworks.
Fraction Strips used to show the relationship between fractions, or Fraction Tiles used for students that might have fine motor problems. Connecting Cubes to model fractions. Quick Practice . Math
Journals (Writing Prompts) Challenge . 6-1 Paint the Fence.
Fraction strips or hundredths strips and hundredths grids. Calculators. Note: ... http://www.aimsedu.org/Activities/minimetrics/mini-metrics.pdf – A Mini-Metric Olympics activity from the AIMS
organization. www.funbrain.com/poly/ ...
Make a fraction kit by folding fraction strips out of equal-sized paper strips. Create a strip to represent the whole. Fold other strips to represent halves, fourths, eighths, thirds and sixths. ...
They are found at the end of the pdf.
Various models such as fraction strips, percent bars, and number lines are used to develop conceptual understanding. ... http://www.mansfieldisd.org/curriculum/mathematics/pdf/6th/
texteams%20Proportionality.pdf. Here are several examples of the activities available:
Students will be asked to reason quantitatively, using benchmark fractions and estimation as a means of developing fraction number sense. Estimation can be useful in problem solving and checking to
see if answers make sense.
Today we’re going to discuss fraction strips as representations of fractions. We will use rulers to talk about fractions of one foot, ... Mixed numbers word problems http://math.about.com/library/
fractionsa.pdf Mixed numbers Name: ...
Fraction Track. Fraction Strips in Black and White. Sample Item 1: Sample Item 2: Sample Item 3: https://www-k6.thinkcentral.com. ... http://fcat.fldoe.org/pdf/specifications/MathGrades3-5.pdf.
Literature Connection Chart. Title Author Concept or Skills Grade Level Beep, ...
finding equivalent fractions, using manipulative models such as fraction strips, number lines, fraction circles, rods, pattern blocks, cubes, Base-10 blocks, tangrams, ... http://pbskids.org/zoom/
printables/activities/pdfs/square.pdf. Prove equivalent measurement using rulers, tape measures, ...
Materials List: paper, pencils, fraction circles or strips. Provide each group of students with a set of fraction circles or strips. Have them place a piece in the center of ... http://www2.edc.org/
mathpartners/pdfs/3-5%20Geometry%20and%20Measurement.pdf. Go to lesson 4, page 20 to view the ...
Fraction strips; fraction circles; colored cubes or paper squares; unifix cubes; pattern blocks; geoboards; colored tiles; cuisenaire rods; chips ... The 2002 Mathematics Curriculum Framework can be
found in PDF and Microsoft Word file formats on the Virginia Department of Education’s ...
use fraction circles, fraction strips or pattern blocks to model halves, quarters and eighths; thirds, sixths and twelfths; fifths and tenths; and record equivalence statements. use concrete models
of simple fractions to aid addition of fractions where one denominator is a multiple of the other.
... fraction strips, or length models.) is introduced in Block 11 as a tool to help students recognize and distinguish between types of division. ... tape diagrams go to: http://www.engageny.org/
Students can line up fraction bars or fractions strips to find as many relationships as they can between fractions. For example, ... for Mathematics. 2012. http://www.smarterbalanced.org/wordpress/
Students reason about how to multiply fractions using fraction strips and number line diagrams. ... for Mathematics. http://www.smarterbalanced.org/wordpress/wp-content/uploads/2011/12/
Math-Content-Specifications.pdf (accessed April 3, 2013).
Keep the remaining fraction strips for additional tasks. Comments. This lesson is an introduction to fractional pieces, ... http://www.mattnelson.com/dee/fractions/Fractions_Lesson_Plan_2.pdf - More
activities about The Doorbell Rang. (Note ...
Understand a fraction 1/b as the ... name, and use equivalent fractions with denominators 2, 3, and 8, using strips as area models. *3. NF.2. ... “Fractions Clothesline” http://www.nsa.gov/academia/
_files/collected_learning/elementary/fractions/are_you_my_equal.pdf _____ Create a ...
... using concrete materials (e.g., use fraction strips to show that ¾ is equal to 9/12); describe multiplicative relationships between quantities by using ... http://www.edugains.ca/resources/
LearningMaterials/ContinuumConnection/Fractions.pdf Establishes context and content focus for ...
Omitted but can be found http://mit.edu/6.969/www/readings/ma-selection.pdf. 1.6 Actions with fractions: Chapter 2: Pg 33. Assessments: ... Number Strips. Number walls. Fraction burgers. STRESS:
Fractions can (and are) bigger than one! Chapter 4: Pg 118.
Fraction Track. Fraction Strips in Black and White. Sample Item 1: Sample Item 2: Sample Item 3: https://www-k6.thinkcentral.com. ... http://fcat.fldoe.org/pdf/specifications/MathGrades3-5.pdf.
Literature Connection Chart. Title Author Concept or Skills Grade Level Beep, ...
http://www.aimsedu.org/Activities/samples/WhatIsTheOne.pdf. Determining Fraction Relations. ... Measurement models: similar to area models but lengths instead of areas are compared (e.g., fraction
strips, rods, cubes, number ... Fraction computation can be approached in the same way as whole ...
... or you may access the page directly by typing http://www2.edc.org/mathpartners/pdfs/3-5%20Geometry%20and%20Measurement.pdf. Activity 5: Getting Ready ... Provide each group of students with a set
of fraction circles or strips. Have them place a piece in the center of the table. On ...
... as a value on a number line, as part of a 2-D shape like a circle, as part of a whole set, and as fraction strips. These will all help in HS math concepts. Fraction. Numerator. ... http://
hanlonmath.com/pdfFiles/244StrategiesforFactsBH.pdf MULTIPLICATION STRATEGIES: skip counting ...
MATH_4_A_DECIMAL MODEL FRACTION WORD MATCH_RES.pdf. C. Cards are cut apart for students to match the one card from each page to make a set containing the word form, ... and 2 one meter strips of
yellow bulletin board paper, laminated arrow, magnet, or something to mark the number line.
Use your fraction strips to compare the following fractions. Line up each fraction strip to see which fraction has the greatest length. ... http://www.gatesfoundation.org/college-ready-education/
Explain why a fraction a/b is equivalent to a ... –SEE LINK FOR INSTRUCTIONS -http://faculty.tamucc.edu/sives/1350/CoverUp-UncoverV1andV2.pdf. Vocabulary: (Combination on review and new terms ...
-Equal length strips of paper -Chart paper -Fraction Kits from previous task (optional) TASK ...
(Make fraction sets using fraction strips, pies, geoboards fractions, and pattern block fractions. ... (Explore “How far will we go?” at http://cesme.utm.edu/resources/math/MAG/3-5MAGActivities.pdf/
http://maccss.ncdpi.wikispaces.net/file/view/3rdGradeUnit.pdf/295313308/3rdGradeUnit.pdf. Area Level A & B pg 3 & 4. ... Using Fraction Strips to Explore the Number Line (page 33) K-5 Math Teaching
Resource Center **Scroll Down to standard codes 3.G.2, ...
They may find items in the room or cut items to fit the criteria (strips of paper, string, ... http://www.rda.aps.edu/mathtaskbank/pdfs/instruct/3-5/i35tiles.pdf. 4. M. D.3. ... •Can you show me the
fraction with fraction strips?
Students apply their understanding of fractions and fraction models to represent the addition and subtraction of fractions with unlike denominators as equivalent calculations with like denominators.
Fraction Strips (Tape Diagrams) Double Line Diagrams. Table representation. ... What fraction of the students at the dance are girls? 5. ... http://www.doe.k12.de.us/assessment/files/Math_Grade_7.pdf
. Common Core aligned assessment questions, ...
Parent letters are also available on the web site in PDF form. On her own computer, she has compiled them into one big PDF just to make it easier to find. ... Lisa shared "Kyneshewa's strategy" –
created by one of her students – which used fraction strips.
These fraction pairs result in a whole that is partitioned into the same number of parts and the size of the parts is equal. Students must consider the number of parts ... You provide your students
with inch rulers, centimeter rulers, and strips of paper to measure.
Place value strips are another tool to use. Use skip counting by 10, 100, ... http://www.mathworksheetsland.com/4/26measfrac/guided.pdf. ... Remember to compose and decompose fractions just as you
whole numbers, and using the fraction strips. Remember, in comparing fractions, ...
Fraction Strips for Renaming Mixed Numbers* SF Grade 5 TE (p. 472A) Students use fraction strips to add and subtract mixed numbers with regrouping. Fraction strips/tiles. Colored pencils/markers.
Scissors DIFFERENTIATION. CROSS-CURRICULAR CONNECTIONS
|
{"url":"http://ebookily.org/doc/fraction-strips-pdf","timestamp":"2014-04-23T12:18:47Z","content_type":null,"content_length":"40987","record_id":"<urn:uuid:11efc557-2c3d-4cf2-b531-73de4f538253>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lake Forest Park, WA Algebra Tutor
Find a Lake Forest Park, WA Algebra Tutor
...As a pre-medical student, I have completed and excelled in three years of collegiate level chemistry coursework. I have tutored Chemistry for over five years both at the University of
Washington and as an independent contractor through a private tutoring company. I have tutored College level Ge...
27 Subjects: including algebra 1, algebra 2, chemistry, reading
I am a former middle school and high school teacher. I began in the middle schools in the US teaching math, English, science, and PE. Then, I worked overseas for two years preparing high school
juniors and seniors to take Cambridge O-Level Exams with math and English.
39 Subjects: including algebra 1, algebra 2, reading, English
...I was employed for many years in the field of both local and wide area networking at companies like Intel Corporation and Nortel as well as owning and operating my own small business in sales,
configuration and integration of computer networks. I have studied both differential equations and part...
43 Subjects: including algebra 1, algebra 2, chemistry, geometry
...I can cover the introductory to intermediate accounting classes. I make sure that you understand the concepts involved by asking what you understand and filling in the gaps. I also make up
examples and questions based on the scenario.
12 Subjects: including algebra 1, algebra 2, reading, accounting
Hi, my name's Alisa, and I'm currently a freshman at the University of Washington. I worked as an instructor at a martial arts studio and ever since then I've loved teaching people. I tutored at
my school, and when I came to UW- I wanted to find where else I could tutor as well.
21 Subjects: including algebra 1, algebra 2, chemistry, physics
|
{"url":"http://www.purplemath.com/Lake_Forest_Park_WA_Algebra_tutors.php","timestamp":"2014-04-20T21:05:31Z","content_type":null,"content_length":"24234","record_id":"<urn:uuid:62ba1f4e-39dc-4fb5-9948-9ae6f451f297>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Glenarden, MD Algebra Tutor
Find a Glenarden, MD Algebra Tutor
...I am an engineering professor with a very strong math background. I have read and studied several poker books and play professionally at low limits. I completely understand how to calculate
pot odds when determining whether or not it is proper to make a bet when you are still waiting for a card on the turn or river to complete a flush or straight.
15 Subjects: including algebra 1, algebra 2, chemistry, calculus
...I look to create a positive environment where I can help motivate students to engage in both analytical and critical thinking in mathematics. I believe that working with students 1-on-1 and in
small group settings can allow students to get their questions asked and not feel intimidated or worry about the pace of the class. I truly believe that my students' success is my success.
24 Subjects: including algebra 1, algebra 2, reading, calculus
...My musical studies included extensive study in ear training. During my time at University of the Pacific I was a tutor for ear training. I worked with students to build their skills with
melodic, harmonic, and rhythmic dictation.
11 Subjects: including algebra 1, algebra 2, public speaking, writing
...I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and algebra problems. I enjoy teaching and working through
problems with students since that is the best way to learn.Have studied and scored high marks in econometric...
14 Subjects: including algebra 1, algebra 2, calculus, statistics
...The sources of the issues are numerous, but the solution can be much simpler. Hire a tutor. I don't mean to hire that high schooler down the street who scored well on the SATs and got an A in
19 Subjects: including algebra 1, algebra 2, English, reading
|
{"url":"http://www.purplemath.com/Glenarden_MD_Algebra_tutors.php","timestamp":"2014-04-19T02:27:25Z","content_type":null,"content_length":"24246","record_id":"<urn:uuid:ff89e556-c9a6-49ad-a9af-1b6cf2b31576>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Data Analysis and Optimization - Hall of Fame
When Alex Holsenbeck was given the challenge to figure out how many options would be exercised by Embarq insiders in the future he began to panic. But then he realized that he probably had learned
everything he needed in DAO. After dusting off the cobwebs, Alex used regression as the back bone for a Crystal Ball model that helped Embarq forecast the financial impact of future insider options
The first step was to perform a regression based on historic exercise rates vs. specific stock price reference points. Validating the regression output with academic research on lag effects,
Holsenbeck found that 10-day lagged reference points were the most robust. Exercise rates from the historical regression data were then used as key inputs to the simulation model. The combination of
exercise rates and stock price paths that were simulated resulted in a range of future option exercising.
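The article doesn't show the model itself, but the two-stage approach it describes — regress historical exercise rates on 10-day-lagged stock prices, then drive the fitted rates with simulated price paths — can be sketched as below. Everything here is illustrative: the historical data, drift, and volatility are invented placeholders, and Crystal Ball is replaced by a plain Monte Carlo loop.

```python
import math, random

random.seed(0)

# Stage 1: OLS fit of annual exercise rate vs. 10-day-lagged price
# (hypothetical historical observations).
hist = [(30, 0.02), (35, 0.04), (40, 0.05), (45, 0.08), (50, 0.11)]
n = len(hist)
mx = sum(p for p, _ in hist) / n
my = sum(r for _, r in hist) / n
slope = sum((p - mx) * (r - my) for p, r in hist) / sum((p - mx) ** 2 for p, _ in hist)
intercept = my - slope * mx

def exercise_rate(lagged_price):
    """Fitted annual exercise rate, floored at zero."""
    return max(0.0, intercept + slope * lagged_price)

# Stage 2: geometric-Brownian price paths; each day a slice of the
# remaining options is exercised at the fitted (lag-referenced) rate.
def simulate_exercised(s0=40.0, options=1_000_000, mu=0.06, sigma=0.25,
                       days=250, n_paths=1_000):
    totals = []
    for _ in range(n_paths):
        prices, remaining, exercised = [s0], options, 0.0
        for t in range(days):
            z = random.gauss(0, 1)
            prices.append(prices[-1] * math.exp((mu - sigma ** 2 / 2) / 250
                                                + sigma * math.sqrt(1 / 250) * z))
            lagged = prices[max(0, t - 10)]          # 10-day lag reference point
            ex = remaining * exercise_rate(lagged) / 250
            exercised += ex
            remaining -= ex
        totals.append(exercised)
    return sum(totals) / n_paths

expected = simulate_exercised()
print(f"expected options exercised over one year: {expected:,.0f}")
```

The simulated distribution of exercising (not just its mean) is what makes this useful for tax and cash-flow planning, since each path gives a different exercise total.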
Alex survived the thorough review from the Treasury group, and soon the model was used throughout the finance department. The tax group and cash flow statement guru were particularly heavy users, and
Alex knew that all the regression practice in class had paid off.
|
{"url":"http://faculty.darden.virginia.edu/DAO/inductee/holzenbeck%2007.htm","timestamp":"2014-04-20T08:14:16Z","content_type":null,"content_length":"4901","record_id":"<urn:uuid:b9bd1e8e-8e4c-4f71-8ce3-acf53178f0cc>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
|
NAG Library
NAG Library Routine Document
1 Purpose
F01EJF computes the principal matrix logarithm, $\mathrm{log}\left(A\right)$, of a real $n$ by $n$ matrix $A$, with no eigenvalues on the closed negative real line.
2 Specification
SUBROUTINE F01EJF ( N, A, LDA, IMNORM, IFAIL)
INTEGER N, LDA, IFAIL
REAL (KIND=nag_wp) A(LDA,*), IMNORM
3 Description
Any nonsingular matrix $A$ has infinitely many logarithms. For a matrix with no eigenvalues on the closed negative real line, the principal logarithm is the unique logarithm whose spectrum lies in
the strip $\left\{z:-\pi <\mathrm{Im}\left(z\right)<\pi \right\}$.
$\mathrm{log}\left(A\right)$ is computed using the Schur–Parlett algorithm for the matrix logarithm described in Higham (2008) and Davies and Higham (2003).
4 References
Davies P I and Higham N J (2003) A Schur–Parlett algorithm for computing matrix functions. SIAM J. Matrix Anal. Appl. 25(2) 464–485
Higham N J (2008) Functions of Matrices: Theory and Computation SIAM, Philadelphia, PA, USA
5 Parameters
1: N – INTEGERInput
2: A(LDA,$*$) – REAL (KIND=nag_wp) arrayInput/Output
3: LDA – INTEGERInput
4: IMNORM – REAL (KIND=nag_wp)Output
5: IFAIL – INTEGERInput/Output
6 Error Indicators and Warnings
If on entry ${\mathbf{IFAIL}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Errors or warnings detected by the routine:
$A$ is singular so the logarithm cannot be computed.
$A$ was found to have eigenvalues on the negative real line. The principal logarithm is not defined in this case; a complex logarithm routine can be used to find a complex non-principal logarithm.
The arithmetic precision is higher than that used for the Padé approximant computed matrix logarithm.
An unexpected internal error occurred. Please contact NAG.
On entry, ${\mathbf{N}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{N}}\ge 0$.
On entry, ${\mathbf{LDA}}=〈\mathit{\text{value}}〉$ and ${\mathbf{N}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{LDA}}\ge {\mathbf{N}}$.
Allocation of memory failed.
The real allocatable memory required is approximately $3{{\mathbf{N}}}^{2}$.
7 Accuracy
For a normal matrix $A$ (for which ${A}^{\mathrm{T}}A=A{A}^{\mathrm{T}}$), the Schur decomposition is diagonal and the algorithm reduces to evaluating the logarithm of the eigenvalues of $A$ and then constructing $\mathrm{log}\left(A\right)$ using the Schur vectors. This should give a very accurate result. In general, however, no error bounds are available for the algorithm. See Section 9.4 of Higham (2008) for details and further discussion.
For discussion of the condition of the matrix logarithm see Section 11.2 of Higham (2008). In particular, the condition number of the matrix logarithm at $A$, ${\kappa }_{\mathrm{log}}\left(A\right)$, which is a measure of the sensitivity of the computed logarithm to perturbations in the matrix $A$, satisfies a lower bound in terms of $\kappa \left(A\right)$, the condition number of $A$. Further, the sensitivity of the computation of $\mathrm{log}\left(A\right)$ is worst when $A$ has an eigenvalue of very small modulus, or has a complex conjugate pair of eigenvalues lying close to the negative real axis.
8 Further Comments
If $A$ has real eigenvalues then up to $4{n}^{2}$ of real allocatable memory may be required. Otherwise up to $4{n}^{2}$ of complex allocatable memory may be required.
The cost of the algorithm is $O\left({n}^{3}\right)$ floating-point operations. The exact cost depends on the eigenvalue distribution of $A$; see Algorithm 11.11 of Higham (2008).
If estimates of the condition number of the matrix logarithm are required then a condition-number estimation routine should be used instead.
A complex analogue of this routine can be used to find the principal logarithm of a complex matrix. It can also be used to return a complex, non-principal logarithm if a real matrix has no principal logarithm due to the presence of negative eigenvalues.
9 Example
This example finds the principal matrix logarithm of the matrix
$A = \begin{pmatrix} 3 & -3 & 1 & 1 \\ 2 & 1 & -2 & 1 \\ 1 & 1 & 3 & -1 \\ 2 & 0 & 2 & 0 \end{pmatrix} .$
9.1 Program Text
9.2 Program Data
9.3 Program Results
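Outside the NAG Library, the example can be checked with SciPy (a different implementation of the same mathematical function, not the F01EJF routine itself). The sketch below computes the principal logarithm of the Section 9 matrix and verifies the defining property exp(log(A)) = A.

```python
import numpy as np
from scipy.linalg import expm, logm

# The 4x4 example matrix from Section 9 (chosen so that it has no
# eigenvalues on the closed negative real line, hence a principal log).
A = np.array([[3.0, -3.0,  1.0,  1.0],
              [2.0,  1.0, -2.0,  1.0],
              [1.0,  1.0,  3.0, -1.0],
              [2.0,  0.0,  2.0,  0.0]])

L = logm(A)                      # principal matrix logarithm
assert np.allclose(expm(L), A)   # exp(log(A)) recovers A
```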
Newbold, Philadelphia, PA
Havertown, PA 19083
PhD in Physics -Tutoring in Physics, Math, Engineering, SAT/ACT
...Beyond academics, I spend my time backpacking, kayaking, weightlifting, jogging, bicycling, metalworking, woodworking, and building a wilderness home of my own design. In between formal tutoring
sessions, I offer my students FREE email support to keep them moving...
Offering 10+ subjects including algebra 1 and algebra 2
Christos Papakyriakopoulos
Griechische Wissenschaftler
(a short version of a biography written by Stavros G. Papastavridis )
CHRISTOS PAPAKYRIAKOPOULOS (Χρήστος Παπακυριακόπουλος) was born in Athens (Chalandri) in 1914. His father, who came from Tripolis in the Peloponnese, was an affluent merchant. He graduated from Varvakeion (at that time, and for many decades later, the most prestigious high school in Greece). He enrolled in the National Metsovion Institute of Technology (Ethniko Metsovion Polytexneio) in 1933. There he met the Professor of Mathematics Nikolaos Kritikos, who influenced him to switch and enroll in the Mathematics Department of the University of Athens.
His interest was drawn to Algebraic Topology (a subject with minimal activity in Greece at that time), and he read on his own Aleksandrov and Hopf's Topologie, which had appeared in 1935. (Pavel
Sergeevich Aleksandrov, Born: 7 May 1896 in Bogorodsk (also called Noginsk), Russia, Died: 16 Nov 1982 in Moscow, USSR. Heinz Hopf, Born: 19 Nov 1894 in Breslau, Germany (now Wroclaw, Poland) Died: 3
June 1971 in Zollikon, Switzerland. From 1926 Aleksandrov and Hopf were close friends working together. They spent some time in 1926 in the south of France with Neugebauer. Then Aleksandrov and Hopf
spent the academic year 1927-28 at Princeton in the United States. This was an important year in the development of topology with Aleksandrov and Hopf in Princeton and able to collaborate with
Lefschetz, Veblen and Alexander. During their year in Princeton, Aleksandrov and Hopf planned a joint multi-volume work on Topology the first volume of which did not appear until 1935. This was the
only one of the three intended volumes to appear since World War II prevented further collaboration on the remaining two volumes).
His first paper was
Papakyriakopulos, Ch.: Ueber eine Indicatrix der ebenen geschlossenen Jordankurven.(Greek) Bull. Soc. Math. Greece 18(1938) 84-92.
He received his doctorate from the Mathematics Department of the University of Athens in 1943, after a recommendation from Constantin Carathéodory. In his thesis he gave a new proof of the topological invariance of the homology groups of simplicial complexes. The thesis was published as
Papakyriakopoulos, Ch.: Ein neuer Beweis fuer die Invarianz der Homologiegruppe eines Komplexes. (Greek) Bull. Soc. Math. Greece 22(1946) 1-154.
(He thus provided another proof of the topological invariance of simplicial homology, which was first established by James Waddell Alexander (Born: 19 Sept 1888 in Sea Bright, New Jersey, USA, Died: 23 Sept 1971 in Princeton, New Jersey, USA). In collaboration with Veblen, Alexander showed that the topology of manifolds could be extended to polyhedra. Before 1920 he had shown that the homology of a simplicial complex is a topological invariant. Alexander's work around this time went a long way toward putting the intuitive ideas of Poincaré on a more rigorous foundation.)
During those years he worked as an unpaid Teaching Assistant to Professor N. Kritikos at the National Metsovion Institute of Technology (Ethniko Metsovion Polytexneio).
In the national referendum of 1935 he voted openly against the return of the king. In the harsh years of the Nazi occupation he joined the National Front of Liberation. After the civil war clashes of December 1944 in Athens, he followed the forces of the National Front of Liberation into the countryside, where he found himself in Karditsa teaching elementary arithmetic to primary school students. At this time his godfather, then Minister of the Interior, was looking for Papakyriakopoulos in order to appoint him Mayor of Chalandri, and was flabbergasted to find out that Papakyriakopoulos was in the mountains with the guerillas! At about the same time his brother died in action fighting with the forces loyal to the Greek government in exile, the so-called Rimini Brigade, as part of the Allied push in northern Italy. It sounds like an ancient Greek tragedy, almost like Eteocles and Polynices in ANTIGONE or SEVEN AGAINST THEBES. The terrible political division that brought Greece an extremely destructive civil war in 1946-49, with deep repercussions in the following decades, cut across the Papakyriakopoulos family. The Varkiza agreement of February 1945 among the fighting political factions gave a temporary breath of peace to Greece, and Papakyriakopoulos returned to Athens and to the National Metsovion Institute of Technology. There the political climate was unfavorable to National Front of Liberation sympathizers; eventually Professor Kritikos was fired (to be rehired in the 1950s, when things had calmed down a little), and Papakyriakopoulos was forced to leave in 1946.
Working on his own, with no connection to any mathematical center outside Greece, he concentrated on low dimensional topology, and among other things Dehn's lemma caught his fancy. Max Wilhelm Dehn (Born: 13 Nov 1878 in Hamburg, Germany, Died: 27 June 1952 in Black Mountain, North Carolina, USA) was a student of Hilbert and a solver of Hilbert's 3rd problem (from the famous list of 23 problems). (Hilbert's 3rd problem is the following: the equality of the volumes of two tetrahedra of equal bases and equal altitudes. In two letters to Gerling, Gauss expresses his regret that
certain theorems of solid geometry depend upon the method of exhaustion, i. e., in modern phraseology, upon the axiom of continuity (or upon the axiom of Archimedes). Gauss mentions in particular the
theorem of Euclid, that triangular pyramids of equal altitudes are to each other as their bases. Now the analogous problem in the plane has been solved. Gerling also succeeded in proving the equality
of volume of symmetrical polyhedra by dividing them into congruent parts. Nevertheless, it seems to me probable that a general proof of this kind for the theorem of Euclid just mentioned is
impossible, and it should be our task to give a rigorous proof of its impossibility. This would be obtained, as soon as we succeeded in specifying two tetrahedra of equal bases and equal altitudes
which can in no way be split up into congruent tetrahedra, and which cannot be combined with congruent tetrahedra to form two polyhedra which themselves could be split up into congruent tetrahedra).
In 1910, in M. Dehn, "Über die Topologie des dreidimensionalen Raumes", Math. Ann. 69 (1910), pp. 137-168, Dehn gave a proof of a lemma concerning loops in three dimensional manifolds. In 1929 H. Kneser realized there was a gap in Dehn's proof. Papakyriakopoulos tried to close the gap, and he sent his purported proof to a very distinguished Princeton knot theorist, Ralph Fox. Fox found a gap in
Papakyriakopoulos's proof, but he was very favorably impressed by the young Papakyriakopoulos, who was working in total scientific isolation, and urged him to come to Princeton. Papakyriakopoulos always recognized the importance of this encouragement and the subsequent support that he received from R. Fox. Papakyriakopoulos went to Princeton in 1948, never to return to Greece again, except for a very short visit in 1952 to attend the funeral of his father.
The Greek Security Police pursued him in the USA, trying to convince the US immigration authorities to expel him from the country. Princeton University supported him, and Papakyriakopoulos was always very grateful for that. (We may note in passing that the list of people who found political asylum at Princeton University includes Albert Einstein and Thomas Mann in the 1930s and Chai Ling (a student leader of the Tiananmen Square uprising in 1989) in the 1990s.)
Below we provide some necessary terminology.
Let D^n = { (x[1], x[2], …, x[n]) ∈ R^n : x[1]^2 + x[2]^2 + … + x[n]^2 ≤ 1 } be the closed n-disk. Its boundary S^(n-1) = { (x[1], x[2], …, x[n]) ∈ R^n : x[1]^2 + x[2]^2 + … + x[n]^2 = 1 } is the (n-1)-sphere, and O^n = { (x[1], x[2], …, x[n]) ∈ R^n : x[1]^2 + x[2]^2 + … + x[n]^2 < 1 } is the open n-disk (open ball).
A topological space M is a C^∞ n dimensional manifold if it is endowed with a collection (U[i], f[i])[i∈I], where I is a non empty index set, such that:
a) the U[i]'s constitute an open covering of M
b) the f[i]'s are homeomorphisms f[i] : O^n → U[i]
c) if i, j ∈ I and U[i] ∩ U[j] is non empty, then the homeomorphism
f[j]^-1 ∘ f[i] : f[i]^-1(U[i] ∩ U[j]) → f[j]^-1(U[i] ∩ U[j]) is C^∞ differentiable (in the usual sense of Calculus of several real variables).
A C^∞ n dimensional manifold will be called here simply a Manifold.
Manifolds do appear in a variety of ways, e.g. the state space of mechanical systems, the set of solutions of systems of equations (minus "few" points) etc.
As Marston Morse put it,
"Any problem which is non-linear in character, which involves more than one coordinate system or more than one variable, or where structure is initially defined in the large, is likely to require
considerations of topology and group theory for its solution. In the solution of such problems classical analysis will frequently appear as an instrument in the small, integrated over the whole
problem with the aid of topology or group theory", (The Calculus of Variations in the Large, AMS Coll. Publ. Vol. 18, New York 1934).
If M, N are manifolds then a continuous function f : M → N is called Differentiable if f, "translated locally in terms of coordinates, becomes C^∞ differentiable (in the usual sense of Calculus of several real variables)", i.e. more precisely: let (U[i], f[i])[i∈I] and (V[j], g[j])[j∈J] be the associated open coverings of M and N respectively. Then for any x in M, there is an open neighborhood U of x, an open neighborhood V of f(x), and m ∈ I, n ∈ J, so that U ⊆ U[m], V ⊆ V[n], f(U) ⊆ V, and the function
g[n]^-1 ∘ f ∘ f[m] : f[m]^-1(U) → g[n]^-1(V) is C^∞ differentiable (in the usual sense of Calculus of several real variables).
If M, N are differentiable manifolds then a continuous function f : M → N is called a DIFFEOMORPHISM if it is differentiable and if there exists a differentiable function
g : N → M so that gf = I[M] and fg = I[N].
The central problem
A) Find necessary and sufficient conditions for two given compact manifolds to be HOMEOMORPHIC or DIFFEOMORPHIC
B) Describe all types of compact manifolds up to HOMEOMORPHISM or DIFFEOMORPHISM
The only compact connected 1 dimensional manifold, up to homeomorphism or diffeomorphism, is S^1.
HOMOTOPY, HOMOLOGY etc
Let I=[0, 1]
Let X, Y be topological spaces and f, g: X → Y be continuous maps.
We call f and g homotopic if there is a continuous map F: X × I → Y such that for all x ∈ X we have
F(x, 0) = f(x) and F(x, 1) = g(x).
Two topological spaces X and Y are called homotopy equivalent if there are maps f: X → Y and g: Y → X such that gf is homotopic to I[X] and fg is homotopic to I[Y].
If X is a topological space then sequences of groups are defined: the
HOMOLOGY groups
H[0](X), H[1](X), H[2](X), H[3](X), … H[n](X),
the COHOMOLOGY groups
H^0(X), H^1(X), H^2(X), …H^n(X),
and the HOMOTOPY groups
π[0](Χ), π[1](Χ), π[2](Χ), …π[n](Χ),
The group π[1](X) is called the Fundamental Group and was introduced by POINCARE (1904).
If X is a connected topological space, we select arbitrarily a point x ∈ X. The elements of the fundamental group can then be represented by maps f: S^1 → X with f(1) = x.
The map f represents the zero element in π[1](X) if it can be extended to a continuous map F: D^2 → X.
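A few standard examples (an illustration added here, not from the original text) show how the fundamental group distinguishes spaces, which is why π[1] is the invariant at stake in the Poincaré conjecture discussed below:

```latex
\pi_1(S^1) \cong \mathbb{Z}, \qquad
\pi_1(S^n) = 0 \quad (n \ge 2), \qquad
\pi_1(S^1 \times S^1) \cong \mathbb{Z} \times \mathbb{Z}.
```

In particular, a loop winding k times around the circle represents the integer k, and every loop on a sphere of dimension at least two bounds a disk.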
THE LOOP THEOREM
Papakyriakopoulos, C.D.: On solid tori. Proc. London Math. Soc., III. Ser. 7(1957) 281-299.
Let M be a three dimensional manifold with non-void boundary ∂M. Suppose that there is some closed loop f: S^1 → ∂M (with possible self intersections) which is homotopically zero in M but NOT homotopically zero in ∂M. Then there exists a simple closed loop F: S^1 → ∂M sharing this property.
The proof was given under some orientability assumptions, which were later removed in
STALLINGS JOHN. On the loop theorem, Ann. of Math. 72(1960), pp. 12-19.
Almost immediately after the loop theorem, Papakyriakopoulos proved its natural companion, known as Dehn's Lemma.
DEHN's LEMMA
Papakyriakopoulos, C.D.: On Dehn's lemma and the asphericity of knots. Ann. of Math., II. Ser. 66(1957) 1-26 .
Let M be a three dimensional manifold with non-void boundary ∂M. Suppose that there is some simple closed loop f: S^1 → ∂M (without self intersections) which is homotopically zero in M. Then there is an embedding F: D^2 → M which extends f.
The "adventures" of the statement inspired the poetically inclined. The following limerick (a limerick must have five lines with an aabba rhyme scheme; the beat must be anapestic (weak, weak, strong) with three feet in lines 1, 2, and 5 and two feet in lines 3 and 4) was attributed by Michael Spivak, at J. Milnor's 60th birthday conference, to Milnor himself.
The perfidious lemma of Dehn
was every topologist's bane
'til Christos
Papakyriakopoulos
proved it without any strain.
The SPHERE THEOREM
was proved in the same paper as Dehn's Lemma. Papakyriakopoulos's proof contained some extra assumptions, which were later removed by a slight modification of his proof in
WHITEHEAD J.H.C. On the 2-sphere in 3-manifolds, Bull. AMS 64(1958), pp. 161-166.
Let M be a (closed) orientable 3 dimensional manifold such that π[2](M) is not the trivial group. Then there is an embedding S^2 → M which is homotopically non-zero.
He presented these results at the International Congress of Mathematicians in 1958 in Edinburgh, in the invited address
Papakyriakopoulos, C.D.: Some problems on 3-dimensional manifolds. Bull. Am. Math. Soc. 64(1958) 317-335.
For the three theorems above, the AMERICAN MATHEMATICAL SOCIETY awarded him the VEBLEN Prize in Geometry in 1964.
(This prize was established in 1961 in memory of Professor Oswald Veblen(Born: 24 June 1880 in Decorah, Iowa, USA, Died: 10 Aug 1960 in Brooklyn, Maine, USA) through a fund contributed by former
students and colleagues. The fund was later doubled by the widow of Professor Veblen, bringing the fund to $2,000. The first two awards of the prize were made in 1964 and the next in 1966;
thereafter, an award will ordinarily be made every five years for research in geometry or topology under conditions similar to those for the Bocher Prize.
The first award was given in 1964 to C. D. Papakyriakopoulos for his papers On solid tori, Proceedings of the London Mathematical Society, Series III, volume 7 (1957), pp. 281-299, and On Dehn's lemma and the asphericity of knots, Annals of Mathematics, Series 2, volume 66 (1957), pp. 1-26.
From the early 1960s until his death, Papakyriakopoulos devoted his efforts to the POINCARE CONJECTURE:
Consider a compact 3 dimensional manifold V. Is it possible for the fundamental group of V to be trivial, even though V is not homeomorphic to S^3 ?
Answering the above question negatively came to be known as the POINCARE CONJECTURE, which has inspired topologists ever since, leading to many false proofs by established mathematicians (e.g. J. H. C. Whitehead in 1934). The Danish mathematician Piet Hein warns: "A problem worthy of attack proves its worth by hitting back." But the conjecture, by focusing interest on the topology of manifolds, contributed to many big advances in our understanding of manifolds, with deep repercussions for other fields of mathematics, and quite often led to surprising consequences. If one looks at the list of Fields medalists, there are not many about whom one can say that theories of manifolds have not played a very important role in their research (and those are mostly the recipients of the first awards, in 1936 and 1950).
In the years of Poincaré, and probably until the 1940s, the Poincaré Conjecture looked like the simplest open question concerning the classification of manifolds. Common sense would have dictated that there is no hope of classifying higher dimensional manifolds unless we first classify the 3 dimensional ones, and no hope of classifying 3 dimensional manifolds unless we settle the Poincaré Conjecture either way.
Finally, after 100 years, the Russian mathematician Grigori Perelman of the Steklov Institute of Mathematics in St. Petersburg provided a proof; at the time of writing it is not yet official, but experts consider it most likely correct.
So Papakyriakopoulos concentrated his efforts almost exclusively on the Poincaré Conjecture. He was fully aware that it was perhaps a "winner takes all" situation, and he calmly accepted this risk, with sincere humility. His life in Princeton can be described in one word: Spartan-Laconic-Doric. He was very disciplined and very organized: 8.00 breakfast at the Student Center cafeteria, 8.30 at his office, 11.30 lunch, 12.30 back in his office, 15.00 tea in the common room of the department and reading of the New York Times, 16.00 seminar or back to his office. His life was an example of the search for truth and the price associated with it.
The life of Papakyriakopoulos has influenced literature: Uncle Petros and Goldbach's Conjecture, by the Greek author Apostolos Doxiadis, is a book that has been translated into almost 20 languages.
After the downfall of the military junta in July 1974, he renewed his passport and was contemplating a trip back to Greece. But fate decided otherwise, and on June 29, 1976, after being hospitalized for stomach cancer, Christos Papakyriakopoulos took the road which has no end and no return.
1. Papakyriakopulos, Ch.: Ueber eine Indicatrix der ebenen geschlossenen Jordankurven.(Greek) Bull. Soc. Math. Greece 18(1938) 84-92.
2. Papakyriakopoulos, Chr.: Ueber die geschlossenen Jordanschen Kurven im $\mathbb{R}^n$. (Greek) Bull. Soc. Math. Greece 19(1939) 97-126.
3. Papakyriakopoulos, Ch.: Ein neuer Beweis für die Invarianz der Homologiegruppe eines Komplexes. (Greek) Bull. Soc. Math. Greece 22(1946) 1-154.
4. Papakyriakopoulos, C.D.: On the ends of knot groups. Ann. of Math., II. Ser. 62(1955) 293-299.
5. Papakyriakopoulos, C.D.: On solid tori. Proc. London Math. Soc., III. Ser. 7(1957) 281-299.
6. Papakyriakopoulos, C.D.: On Dehn's lemma and the asphericity of knots. Ann. of Math., II. Ser. 66(1957) 1-26 .
7. Papakyriakopoulos C.D.: On Dehn's lemma and the asphericity of knots. Proc. Nat. Acad. Sc., Vol. 43(1957) 169-172.
8. Papakyriakopoulos, C.D.: On the ends of the fundamental groups of 3-manifolds with boundary. Commentarii Math. Helvet. 32(1957) 85-92.
9. Papakyriakopoulos, C.D.: Some problems on 3-dimensional manifolds. Bull. Am. Math. Soc. 64(1958) 317-335.
10. Papakyriakopoulos, C.D.: The theory of three-dimensional manifolds since 1950. Proc. Int. Congr. Math. 1958, 433-440 (1960).
11. Papakyriakopoulos, C.D.: A reduction of the Poincare conjecture to other conjectures Bull. Am. Math. Soc. 68(1962) 360-366.
12. Papakyriakopoulos, C.D.: A reduction of the Poincare conjecture to other conjectures. II Bull. Am. Math. Soc. 69(1963) 399-401.
13. Papakyriakopoulos, C.: A reduction of the Poincare conjecture to group theoretic conjectures. Ann. Math., II. Ser. 77(1963) 250-305.
14. Papakyriakopoulos, C.: Attaching 2-dimensional cells to a complex. Ann. Math., II. Ser. 78(1963) 205-222.
15. Papakyriakopoulos, C.D.: Planar regular coverings of orientable closed surfaces. Knots, Groups, 3-Manif.; Pap. dedic. Mem. R. H. Fox, 1975, 261-292.
ATIYAH MICHAEL. The Geometry and Physics of knots. Camb. Univ. Press 1990.
ANDERSON T. MICHAEL. Scalar Curvature and Geometrization Conjectures for 3-manifolds. Comparison Geometry, MSRI Publications, Volume 30, 1997, pp. 49-82.
DEHN MAX. Über die Topologie des dreidimensionalen Raumes, Math. Ann. 69 (1910), pp. 137-168.
DONALDSON S.K. An application of gauge theories to 4 dimensional topology. J. Diff. Geom. 18 (1983), pp. 279-315.
EHRENFEST P. In what way does it become manifest in fundamental laws of Physics that space has three dimensions ? Proc. Amsterdam Acad. 20 (1917).
FREED D.S. and UHLENBECK K.K. Instantons and 4 dimensional manifolds. 2nd edition, Springer Verlag , New York 1991.
FREEDMAN MICHAEL. The topology of 4 dimensional manifolds. J. Diff. Geom. 17 (1982) pp. 357-453.
GABAI DAVID. Valentin's Poenaru program for the Poincare conjecture. Geometry, topology and Physics, pp. 139-166, Conf. Proc. in honour of Raoul Bott, ed. S.T.Yau, Lecture Notes Geom. Topology VI,
Internat. Press Cambridge MA 1995.
GOMPF R. Three exotic R^4's and other anomalies(!). J. Diff. Geom. 18(1983) pp. 317-328.
GOMPF R. An infinite set of exotic R^4's . J. Diff. Geom. 18(1985) pp. 283-300.
POENARU VALENTINE. The three big theorems of Papakyriakopoulos, Bulletin of the Greek Math. Society 18 (1977), pp. 1-7.
POENARU VALENTINE. A program for the Poincare conjecture and some of its ramifications. TOPICS IN LOW-DIMENSIONAL TOPOLOGY, In Honor of Steve Armentrout, Proceedings of the Conference on
Low-Dimensional Topology University Park, Pennsylvania, USA May 1996, edited by A Banyaga, H Movahedi-Lankarani & R Wells , World Scientific.
SMALE S. Generalized Poincare Conjecture in dimensions greater than 4, Ann. of Math. 64 (1960), pp. 399-405.
SMALE STEPHEN. The story of the higher dimensional Poincare conjecture (what actually happened on the beaches of Rio), Math. Intelligencer 12 (1990), pp. 44-51.
STALLINGS JOHN. On the loop theorem, Ann. of Math. 72(1960), pp. 12-19.
STALLINGS JOHN. Polyhedral homotopy-spheres, BAMS, 66(1960), pp. 485-488.
TAUBES C.H. Gauge Theory on asymptotically periodic 4 manifolds. J. Diff. Geom. 25 (1987) pp. 363-430.
THURSTON W.P. Three dimensional manifolds, Kleinian groups and hyperbolic geometry. The Mathematical Heritage of Henri Poincare, Proc. Symp. Pure Math. 39 (1983), Part 1 (also in Bull. Amer. Math. Soc. 6 (1982), 357-381).
THURSTON WILLIAM P. Three dimensional Geometry and Topology. Ed. Silvio Levy, Vol. 1, Princeton Math series 35, Princeton Univ. Press 1997.
THURSTON W.P. and Weeks Jeffrey R. The Mathematics of 3 dimensional Manifolds. Scient. Amer. July 1984, 251(1) pp. 94-106.
WHITEHEAD J.H.C. On the 2-sphere in 3-manifolds, Bull. AMS 64(1958), pp. 161-166.
ZEEMAN E.C. The Poincare Conjecture for n ≥ 5, TOPOLOGY OF 3-MANIFOLDS (1961), pp. 198-204, Prentice Hall.
Sequential Monte Carlo EM for multivariate probit models
Moffa, Giusi and Kuipers, Jack (2014) Sequential Monte Carlo EM for multivariate probit models. Comp. Stats. & Data An. 72, pp. 252-272.
Other URL: http://arxiv.org/abs/1107.2205, http://dx.doi.org/10.1016/j.csda.2013.10.019
Multivariate probit models (MPM) have the appealing feature of capturing some of the dependence structure between the components of multidimensional binary responses. The key to the dependence modelling is the covariance matrix of an underlying latent multivariate Gaussian. Most approaches to MLE in multivariate probit regression rely on MCEM algorithms to avoid the computationally intensive evaluation of multivariate normal orthant probabilities.
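The latent-Gaussian construction behind an MPM can be sketched directly: each binary response is the indicator that a latent multivariate normal coordinate is positive, so the dependence between responses comes entirely from the latent covariance matrix. This is a generic illustration of the model class, not code from the paper; the particular means and correlation matrix are made-up values.

```python
import numpy as np

rng = np.random.default_rng(42)

n, d = 5000, 3                      # observations, binary outcomes per observation
mean = np.array([0.5, -0.2, 0.1])   # hypothetical linear-predictor values

# Correlation matrix of the latent Gaussian: the source of dependence in an MPM.
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

z = rng.multivariate_normal(mean=mean, cov=R, size=n)  # latent draws
y = (z > 0).astype(int)                                # observed binary responses

# Correlated latents induce correlated binaries:
emp_corr = np.corrcoef(y, rowvar=False)
```

With a latent correlation of 0.6 between the first two coordinates, the induced correlation between the binary outcomes is positive but smaller, which is exactly the attenuation the latent covariance matrix has to absorb when fitting an MPM.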
Pololu - Stepper Motors
This small NEMA 8-size hybrid bipolar stepping motor has a 1.8° step angle (200 steps/revolution). Each phase draws 600 mA at 3.9 V, allowing for a holding torque of 180 g-cm (2.5 oz-in). With a
weight of 60 g, this is one of the smallest stepper motors we carry.
This NEMA 11-size hybrid bipolar stepping motor has a 1.8° step angle (200 steps/revolution). Each phase draws 670 mA at 3.5 V, allowing for a holding torque of 600 g-cm (8.3 oz-in).
This NEMA 11-size hybrid bipolar stepping motor has a 1.8° step angle (200 steps/revolution). Each phase draws 670 mA at 4.5 V, allowing for a holding torque of 950 g-cm (13 oz-in).
This NEMA 14-size hybrid bipolar stepping motor has a 1.8° step angle (200 steps/revolution). Each phase draws 280 mA at 7.4 V, allowing for a holding torque of 650 g-cm (9 oz-in).
This NEMA 14-size hybrid bipolar stepping motor has a 1.8° step angle (200 steps/revolution). Each phase draws 500 mA at 10 V, allowing for a holding torque of 1 kg-cm (14 oz-in).
This NEMA 14-size hybrid bipolar stepping motor has a 1.8° step angle (200 steps/revolution). Each phase draws 1 A at 2.7 V, allowing for a holding torque of 1.4 kg-cm (20 oz-in).
This NEMA 17-size hybrid bipolar stepping motor has a 1.8° step angle (200 steps/revolution). Each phase draws 1.7 A at 2.8 V, allowing for a holding torque of 3.7 kg-cm (51 oz-in).
This NEMA 17-size hybrid bipolar stepping motor has an integrated 28 cm (11″) threaded rod as its output shaft, turning it into a linear actuator capable of precision open-loop positioning. The
included traveling nut has four mounting holes and moves 40 µm (1.6 mil) per full step; finer resolution can be achieved with microstepping. The stepper motor has a 1.8° step angle (200 steps/
revolution) and each phase draws 1.7 A at 2.8 V, allowing for a holding torque of 3.7 kg-cm (51 oz-in).
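For the linear actuator above, the quoted 40 µm of travel per full step fixes the open-loop positioning arithmetic; microstepping subdivides it linearly. The sketch below is an illustration, not Pololu code, and the 1/16 microstep mode is an assumed driver setting.

```python
FULL_STEP_TRAVEL_UM = 40.0   # µm of linear travel per full step (from the product description)
STEPS_PER_REV = 200          # 1.8° step angle
MICROSTEP = 16               # assumed driver microstepping mode (1/16)

travel_per_microstep = FULL_STEP_TRAVEL_UM / MICROSTEP            # 2.5 µm per microstep
travel_per_rev_mm = FULL_STEP_TRAVEL_UM * STEPS_PER_REV / 1000.0  # 8 mm per revolution

def microsteps_for_travel(distance_mm: float) -> int:
    """Microsteps needed (open loop) to move the nut a given distance."""
    return round(distance_mm * 1000.0 / travel_per_microstep)
```

With 200 full steps per revolution this also implies an 8 mm lead (40 µm × 200), matching the finer-resolution-via-microstepping note in the description.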
This NEMA 17-size hybrid stepping motor can be used as a unipolar or bipolar stepper motor and has a 1.8° step angle (200 steps/revolution). Each phase draws 1.2 A at 4 V, allowing for a holding
torque of 3.2 kg-cm (44 oz-in).
This NEMA 23-size hybrid stepping motor can be used as a unipolar or bipolar stepper motor and has a 1.8° step angle (200 steps/revolution). Each phase draws 1 A at 5.7 V, allowing for a holding
torque of 4 kg-cm (55 oz-in).
This NEMA 23-size hybrid stepping motor can be used as a unipolar or bipolar stepper motor and has a 1.8° step angle (200 steps/revolution). Each phase draws 1 A at 7.4 V, allowing for a holding
torque of 9 kg-cm (125 oz-in).
This NEMA 23-size hybrid stepping motor can be used as a unipolar or bipolar stepper motor and has a 1.8° step angle (200 steps/revolution). Each phase draws 2 A at 3.6 V, allowing for a holding
torque of 9 kg-cm (125 oz-in).
This NEMA 23-size hybrid bipolar stepping motor has a 1.8° step angle (200 steps/revolution). Each phase draws 2.8 A at 2.5 V, allowing for a holding torque of 13 kg-cm (180 oz-in).
This NEMA 23-size hybrid stepping motor can be used as a unipolar or bipolar stepper motor and has a 1.8° step angle (200 steps/revolution). Each phase draws 1 A at 8.6 V, allowing for a holding
torque of 14 kg-cm (190 oz-in).
This NEMA 23-size hybrid stepping motor can be used as a unipolar or bipolar stepper motor and has a 1.8° step angle (200 steps/revolution). Each phase draws 2 A at 4.5 V, allowing for a holding
torque of 14 kg-cm (190 oz-in).
This NEMA 23-size hybrid bipolar stepping motor has a 1.8° step angle (200 steps/revolution). Each phase draws 2.8 A at 3.2 V, allowing for a holding torque of 19 kg-cm (270 oz-in).
This tiny bipolar stepping motor from Sanyo has a 1.8° step angle (200 steps/revolution). Each phase draws 300 mA at 6.3 V, allowing for a holding torque of 66 g-cm (0.92 oz-in). With a weight of
just 27 g, this is the smallest stepper motor we carry.
This tiny, double shaft bipolar stepping motor from Sanyo has a 1.8° step angle (200 steps/revolution). Each phase draws 300 mA at 6.3 V, allowing for a holding torque of 66 g-cm (0.92 oz-in). With a
weight of 28 g, this is the smallest stepper motor we carry.
This pancake bipolar stepping motor from Sanyo has a 1.8° step angle (200 steps/revolution). It offers a holding torque of 850 g-cm (12 oz-in), and each phase draws 1 A at 3.5 V. This stepper motor’s
flat profile (18.6 mm including the shaft) allows it to be used in places where more traditional stepper motors would be too bulky.
This pancake bipolar stepping motor from Sanyo has a 1.8° step angle (200 steps/revolution). It offers a holding torque of 1.9 kg-cm (26 oz-in), and each phase draws 1 A at 5.4 V. This stepper
motor’s flat profile (25.6 mm including the shaft) allows it to be used in places where more traditional stepper motors would be too bulky.
This pancake bipolar stepping motor from Sanyo has a 1.8° step angle (200 steps/revolution). It offers a holding torque of 1 kg-cm (14 oz-in), and each phase draws 1 A at 4.5 V. This stepper motor’s
flat profile (16 mm including the shaft) allows it to be used in places where more traditional stepper motors would be too bulky.
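Two conversions recur in the listings above: step angle to steps per revolution (360° divided by the step angle) and holding torque from kg-cm to oz-in. A minimal sketch of both, using only the catalog figures (the helper names are illustrative, not part of any Pololu API):

```python
# Helper conversions for the catalog figures above (names are
# illustrative, not part of any Pololu API).
# 1 kg-cm of torque ~= 13.887 oz-in  (1 kgf = 35.274 oz, 1 cm = 0.3937 in).
KGCM_TO_OZIN = 35.27396 * 0.393701   # ~13.887

def steps_per_revolution(step_angle_deg: float) -> int:
    """Full steps per revolution for a given step angle."""
    return round(360.0 / step_angle_deg)

def kgcm_to_ozin(torque_kgcm: float) -> float:
    return torque_kgcm * KGCM_TO_OZIN

print(steps_per_revolution(1.8))        # 200, as in the 1.8-degree motors
print(round(kgcm_to_ozin(4), 1))        # ~55.5, matching "4 kg-cm (55 oz-in)"
```

Running it against the 1.8° motors reproduces the 200 steps/revolution figure, and 4 kg-cm converts to roughly 55.5 oz-in, consistent with the "4 kg-cm (55 oz-in)" listing.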
It's a Boson! The Higgs as the Latest Offspring of Math & Physics - The Crux | DiscoverMagazine.com
Amir D. Aczel (amirdaczel.com) writes about mathematics and physics and has published 18 books, numerous newspaper and magazine articles, as well as professional research papers.
A Higgs candidate event from the ATLAS detector of the LHC.
Courtesy of CERN
What made me fall in love with theoretical physics many years ago (in 1972, when I first met Werner Heisenberg) was its stunningly powerful relationship—far beyond any reasonable expectation—with
pure mathematics. Many great minds have pondered this mysteriously deep connection between something as abstract as mathematics, based on theorems and proofs that seem to have little to do with
anything “real,” and the physical universe around us. In addition to Heisenberg, who brilliantly applied abstract matrix theory to quantum physics, Roger Penrose has explored the deep relation
between the two fields—and also, to a degree, between them and the human mind—in his book The Road to Reality.
And in 1960, the renowned quantum physicist and Nobel Laureate Eugene Wigner of Princeton wrote a fascinating article that tried to address the mysterious nature of this surprising
relationship. Wigner marveled at the sheer mystery of why mathematics works so well in situations where there seems to be no obvious reason why it does. And yet, it works.
Wigner had contributed much to physics using very advanced mathematics: he was one of the pioneers of using mathematical groups to model physical phenomena. Group theory is the mathematical branch
dealing with concepts of symmetry. As Wigner helped us see, symmetries can reveal the deepest secrets of physical reality.
Such symmetries, for example, allowed Steven Weinberg to actually predict both the existence of the Z boson and the actual masses of the Z and the two W bosons, which act inside nuclei of matter to
produce radioactive decay. In doing so, Weinberg exploited what he called “the Higgs mechanism,” which he hypothesized to break a primeval symmetry of the early universe and thus impart masses to
these three particles—and presumably also to all other matter in the cosmos. When the discovery of the Higgs boson was announced at CERN on the 4th of July, this immensely important scientific
triumph of our time also lent further support for Weinberg’s theory.
The fact that the “pure” mathematics of group theory can help produce such accurate physical predictions as Weinberg’s seems like nothing less than a miracle. But in fact, the connection between
mathematical groups and physics was established eight decades ago by the brilliant German-Jewish mathematician Emmy Noether, who barely escaped Nazism only to die from an abdominal tumor in the
United States shortly after obtaining a professorship at Bryn Mawr, which had allowed her to leave Europe. Noether devised and proved two key theorems in mathematics, called Noether’s Theorems. These
powerful mathematical results established the relationship between the symmetries of group theory and the all-important conservation laws in physics—such as the conservation of energy, momentum, and
electric charge.
Continuous groups, the work of the Norwegian mathematician Sophus Lie (pronounced “lee”), have played a key role in physics, and these come into play through Noether’s Theorems. The technical
explanations of how symmetry works in theoretical physics are beyond the scope of this article, but the point is that finding a pure-math kind of symmetry allows a physicist to do a lot: such as
discover an entire new theory! As a very quick example: symmetry through time is what gives physics the key concept of the conservation of energy—the paramount property that energy can only change
form (for example, from mass to sheer energy, as per Einstein’s famous formula), but never be created or destroyed.*
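The time-symmetry point can be made concrete with a toy numerical experiment, purely illustrative and not from the article: a unit-mass harmonic oscillator has a Hamiltonian with no explicit time dependence, and that time-translation symmetry is exactly what Noether's theorem ties to conservation of energy. Integrating the motion shows the total energy staying essentially constant:

```python
# Toy illustration (not from the article): the Hamiltonian
# H = p^2/2 + x^2/2 of a unit-mass harmonic oscillator has no explicit
# time dependence; that time-translation symmetry is what Noether's
# theorem connects to conservation of energy.
def energy(x, p):
    return 0.5 * p * p + 0.5 * x * x

x, p, dt = 1.0, 0.0, 1e-3
e0 = energy(x, p)
for _ in range(100_000):      # evolve for 100 time units (leapfrog steps)
    p -= 0.5 * dt * x         # half kick (force = -x)
    x += dt * p               # drift
    p -= 0.5 * dt * x         # half kick
print(f"relative energy drift: {abs(energy(x, p) - e0) / e0:.2e}")
```

The leapfrog integrator is chosen here because it respects the oscillator's geometry, so the printed relative drift stays tiny even over many periods.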
In the 1960s, symmetry went wild! Murray Gell-Mann, sitting at an international conference at CERN in Geneva one day in 1962, was able to look at symmetries described by formulas written on the
board, run down the aisle and write excitedly on the board his prediction of the existence of a new particle, called the Omega Minus! (later confirmed in particle accelerator work).
A digram illustrating the symmetry of Gell-Mann’s “Eightfold Way”
(which preceded his Omega-minus discovery and resulting new symmetry).***
Courtesy of CERN
The electroweak symmetry of the photon, the Z, and the two Ws was broken when these four photon-like particles turned into 3 massive particles (hence mass was “imparted” to them through the Higgs
mechanism) and one remaining massless photon (see my previous post).** The deep theoretical idea of a broken symmetry producing mass thus led to the theoretical birth of the Higgs boson: the
then-hypothesized, and now tentatively confirmed, existence of the field associated with the Higgs particle, which through an interaction with the electroweak field gave us mass.
Peter Higgs and his colleagues (Brout, Englert, Hagen, Kibble, and Guralnik, all of whom had the same idea in 1964) were able to exploit the idea of a symmetry to predict the existence of a
particle-wave-field that gave mass to other particles and to itself when the universe was very young. The Higgs itself was thus born from the pure mathematical idea of symmetry, captured through the
theory of continuous groups. To be sure, this idea was already “in the air” before these papers were published, and other theoretical physicists had understood it well and published papers about it
from 1954 to 1961. But a deep trap quickly materialized.
A physicist now at MIT, Jeffrey Goldstone, had (with the help of Steven Weinberg and Abdus Salam) proved a theorem that showed that, under certain circumstances, “bad” particles—massless like the
photon—somehow appear when the primeval symmetry of the universe (in which the Higgs supposedly did its mass-giving magic) breaks down. This was devastating news: the physicists didn’t want these
massless bosons there, since they ruined the theory about how mass can be imparted to particles. Something had to be done about it!
So then came our "gang of 6" (Higgs, Brout, Englert, Hagen, Kibble, Guralnik) and, technically, what they all did was to show that the offending Goldstone theorem did not apply to the particular
symmetry relevant to the early universe.**** And so, with the hurdle of the nasty Goldstone-Weinberg-Salam theorem finally removed, the road was wide open for the greatest symmetry of all
time to break down dramatically through the interaction of the electroweak field with the Higgs field, resulting in mass being given to the Ws and the Z, leaving only our lonely photon as massless.
This purely theoretical advance in 1964, culminating in Weinberg’s Nobel paper of 1967, thus made the Higgs mechanism emerge triumphant, and mass was shown to be conferred to particles—allowing us to
come into being and contemplate the birth of the universe we live in now. It all happened because of the mathematical idea of symmetry and the uncannily powerful relationship between pure mathematics
and theoretical physics!
* Two more quick examples of symmetries in physics: Einstein’s general theory of relativity enjoys an important symmetry called “general covariance“—and it gives the theory its power and validity.
Maxwell’s theory of electromagnetism has a particular Lie-group structure that allows the theory to remain valid even when “rotated” in an abstract mathematical space through the action of a
mathematical group. The group involved here is the group of all possible rotations of a circle (rotations by any given angle). This Lie group is called U(1).
** This symmetry is modeled by the continuous group SU(2)xU(1), a product of the “circle rotations” group U(1) and the group of special unitary 2 by 2 matrices, SU(2). This composite group is
believed to have governed the electroweak symmetry that existed shortly after the Big Bang and which broke down through its field interacting with the Higgs field. It was explained by Steven
Weinberg, Sheldon Glashow, and Abdus Salam. The full standard model is represented today by the composite group SU(3)xSU(2)xU(1), which adds the quarks (with their 3 color charges) to the picture.
*** It shows the proton, neutron, and the xi, sigma, and lambda baryons (composite particles made of three quarks). This is one representation of the Lie group SU(3), the group of special unitary 3 by 3 matrices.
**** It’s called a Yang-Mills gauge symmetry: a particular kind of continuous, Lie group symmetry.
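A concrete, purely illustrative check of the group structures named in these footnotes: an SU(2) element is a 2×2 complex matrix U satisfying U†U = I with det U = 1, and a U(1) "circle rotation" is just a phase e^{iθ} that leaves magnitudes unchanged.

```python
import numpy as np

# Purely illustrative: any SU(2) element can be written
#   U = cos(t) I + i sin(t) (n . sigma)
# for a unit vector n and angle t. Verify the defining properties.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

t = 0.7
n = np.array([1.0, 2.0, 2.0]) / 3.0                   # unit vector
U = np.cos(t) * np.eye(2) + 1j * np.sin(t) * (n[0] * sx + n[1] * sy + n[2] * sz)

assert np.allclose(U.conj().T @ U, np.eye(2))          # unitary
assert np.isclose(np.linalg.det(U), 1.0)               # "special": det = 1

# A U(1) element is just a phase; it leaves magnitudes unchanged:
z = 3 + 4j
assert np.isclose(abs(np.exp(0.5j) * z), abs(z))
print("SU(2) and U(1) group checks pass")
```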
How can a particle (The Higgs) give itself mass?
@A.Amiri (and earlier commenters on my previous post): The Higgs, as a particle, gets its mass by interacting with its own field: in the same way that an electron interacts with its own
electromagnetic field–the field that the electron’s own charge generates. Self-interactions lead to “self-energy” and are a common phenomenon in particle physics. Here, the interaction is with
the Higgs (scalar) field.
This blog claims that Weinberg’s 1967 paper predicts the masses of the W and Z bosons. I don’t think that is true. The Weinberg angle enters into their masses and no human knew its value. The
blogger should look this up and make any necessary corrections.
Amir, isn’t this Scalar Boson discovery at 125 GeV too light to be the Standard Model Higgs? I have read that it would require a heavier super-partner to fulfill that role. The problem with this
though is that no evidence for Super-symmetry has so far been found, including at the LHC.
• http://www.drdhaugoda.blogspot.com
If the Higgs particle can self-interact with the Higgs field, then why is it not also possible for the other particles to interact with the Higgs field themselves and gain mass, without a Higgs particle? No one has answered this, so the Higgs particle and its function are still only hypothetical. No one can prove it yet.
I’m not a particle physicist, but I would nitpick this article for its seeming conflating of giving all particles mass and why there is matter in the universe.
Higgs gives some fundamental particles mass. But it's a mess: some don't get it (photons, gluons), some get it from the field (many standard model particles), some get it from the particles (Zs/Ws), some don't get it proportionally to energy as I understand it (the visible Higgs), and some get it somewhere else (neutrinos).
More importantly here, the Higgs gives us only ~1% of mass as I hear it. The mass of nucleons comes from strong force interactions, I assume. But the Higgs does predict the lower mass of the proton (I hear), which is why it, instead of the neutron, is stable and why we have atoms.
Yes, I know, details. But details is the story at large sometimes.
Wigner marveled at the sheer mystery of why mathematics works so well in situations where there seems to be no obvious reason why it does. And yet, it works.
And then came anthropic theory and showed how the physics of nature and by analogy the mathematics of men can be cherry-picked – because it works for us.
I wonder how Wigner took that.
Mass of nucleons comes _mostly_ from strong force interactions, natch.
@”Peter”: I don’t know who you are, but before you make such a patently false accusation, shouldn’t you check the source? Steven Weinberg, “A Model of Leptons,” Phys. Rev. Lett. 19, 1967, p.
1265: “Note also that (14) gives g and g’ larger than e, so (16) tells us that MW>40 BeV, while (12) gives MZ>MW and MZ> 80 BeV.” “M” stands for MASS. Please don’t make false statements about
something if you know nothing about it. This is a serious science forum. Thank you.
@Julian Mann: Yes, they have absolutely no evidence of supersymmetry, so the superpartner may not exist. From what I understand from CERN, the mass range of 125-6 GeV, a “light Higgs,” is
consistent with the standard model. It’s because everyone was rooting for supersymmetry (perhaps) that they searched the upper range to the 400s of GeV first–but found nothing there! In fact, as
you may know, a “light Higgs” was well within the range that Fermilab explored before shutting down the Tevatron, but they never reached enough “sigmas” (the required 5, for statistical
significance) as they didn’t have enough luminosity (intensity of the beam: number of particle interactions per second) and the Higgs is a rare event that requires a lot of data. So CERN beat
Fermilab not because of its higher energy (7 TeV at present, with half-power, versus Fermilab’s 1.98 TeV), but rather because of the much higher luminosity!
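The luminosity point can be made quantitative: the expected event count is the production cross-section times the integrated luminosity, N = σ·∫L dt. The numbers below are rough, illustrative inputs, not official CERN or Fermilab figures:

```python
# Rough, illustrative inputs (not official figures): at LHC energies the
# Higgs production cross-section was of order 20 pb, and by July 2012
# each experiment had recorded an integrated luminosity of order 10 fb^-1.
sigma_pb = 20.0        # assumed production cross-section, picobarns
int_lumi_fbinv = 10.0  # assumed integrated luminosity, fb^-1

# N = sigma * integrated luminosity; 1 pb = 1000 fb.
n_events = sigma_pb * 1000 * int_lumi_fbinv
print(f"~{n_events:.0f} Higgs bosons produced")        # ~200000

# Clean channels such as H -> ZZ -> 4 leptons carry a branching fraction
# of order 1e-4, which is why luminosity (data volume) mattered so much:
print(f"~{n_events * 1.2e-4:.0f} golden-channel events before efficiencies")
```

With these assumed inputs, hundreds of thousands of Higgs bosons yield only a few dozen golden-channel candidates before detector efficiencies, which is exactly why the higher-luminosity machine won the race.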
• http://www.mazepath.com/uncleal/qz4.htm
“…symmetries can reveal the deepest secrets of physical reality” One must choose pertinent symmetries. Modeling massless boson photons with mirror-symmetric theory is fine, ditto strong
interactions. Modeling fermionic matter (leptons, quarks) and weak interactions with mirror symmetric theory suffers furies of parity violations met with hierarchies of manually-inserted symmetry
breakings. Observed vacuum has universal f(x) = f(-x) parity-even isotropies plus selective f(x) = -f(-x) parity-odd anisotropies (e.g., chirality) acting only toward matter. Believe what you
Quantum gravitation, dark matter, and SUSY have zero empirical validation. It might not be parity-even achiral GR at all. It might be its parity-odd chiral superset ECKS gravitation. Opposite
shoes fit with trace different energies into trace chiral vacuum. They locally vacuum free fall along trace non-identical minimum action trajectories, violating the Equivalence Principle. Eötvös
experiments are 5×10^(-14) difference/average sensitive. Crystallography’s “opposite shoes” are chemically and macroscopically identical, single crystal test masses in enantiomorphic space
groups: P3(1)21 versus P3(2)21 alpha-quartz or P3(1) versus P3(2) gamma-glycine. Somebody should look.
Otto Stern was a Nobel Laureate because the Dirac equation selectively fails. Specifically testing “impossible” fundamental trace vacuum asymmetry towards matter is important, for it may be true.
Isotropy plus Noether’s theorems demand conservation of angular momentum. Trace vacuum anisotropy only toward matter would source 1.2×10^(-10) m/sec^2 Milgrom acceleration in MOND. Dark matter
and its Wesley Crusher physics would abruptly end, followed by a surströmming banquet. Got herring?
A physicist now at MIT, Jeffrey Goldstone, had (with the help of Steven Weinberg and Abdus Salam) proved a theorem that showed that, under certain circumstances, “bad” particles—massless like the
photon—somehow appear when the primeval symmetry of the universe … breaks down.
Where did this particles “somehow” appear? In reality or in the theorem? What is the “primeval symmetry of the universe”? Why was this “bad”?
You should either write for experts, then you can assume certain knowledge – or you should explain what you write about. And please, when you simplify something for laymen, don’t dumb it down.
@Tony Mach: The Goldstone (plus Salam & Weinberg, since they helped prove it) Theorem was purely theoretical. It works, in the sense that these massless bosons would indeed materialize when a
symmetry is broken–but not, it turns out, when the symmetry is local (meaning can be defined differently everywhere in space: and this is a key reason) and continuous (meaning of a Lie-group
type); physicists call such a symmetry a Yang-Mills gauge symmetry (the word gauge is due to Hermann Weyl; C. N. Yang and Robert Mills developed the idea in a major paper in 1954). For symmetries
of this particular kind, the theorem does not apply. It turns out that many symmetries in particle physics are indeed Yang-Mills gauge symmetries: including the “primeval symmetry of the
universe” I refer to, meaning the symmetry of the scalar field before the breakdown of the electroweak force. The result that refuted the Goldstone theorem’s applicability because the symmetry
involved was indeed a Yang-Mills gauge symmetry was exactly what Higgs did, and independently of him Brout and Englert, and independently Kibble, Hagen, and Guralnik. In fact, at the end of their
paper, the latter three authors write: “Thus the absence of massless bosons is a consequence of the inapplicability of Goldstone’s theorem rather than a contradiction of it.” (Guralnik, Hagen,
Kibble, 1964, p. 587.) In a paper published in 2008 as part of a collection called “Perspectives on LHC Physics,” (World Scientific; p. 140), Steven Weinberg conceded that while his theorem with
Goldstone and Salam is mathematically true, “in fact these theories [on how nature actually behaves]…just don’t respect our theorem.”
@rkdhaugoda: Well, the Higgs particle is a manifestation of the Higgs field. In an earlier post I give the best analogy I know (from A. Zee’s book on quantum field theory): The field is like a
mattress, you jump on it and create waves, and a wave is also a particle because of quantum mechanics (particle-wave duality). The idea is that all particles in modern physics are handled in
formulas as if they are fields. A field is mathematically amenable to manipulations by theoretical physicists, and hence its immense usefulness. Physicists often speak about “massive
fields”–meaning fields associated with massive particles; and when particles interact, it is viewed as interactions of fields.
@Torbjörn Larsson, OM: You are right–we don’t know the details of how particles other than the Ws and the Z get their mass (and that’s why we concentrate on seemingly-unimportant bosons that are
only responsible for some kind of radioactive decay, rather than on the all-important quarks or the electron, for example). Neutrinos are the biggest mystery here and may well NOT get their mass
at all from the Higgs mechanism but rather through a Majorana term in their equations–a totally different process. I just didn’t want to get into that complicated discussion, which is a side
issue. But neutrinos are so mysterious for so many different reasons that we may well expect that their (extremely infinitesimal) mass may come from a different source. All this should not
diminish the great importance of the Higgs: It IS a mechanism for generating mass out of a broken symmetry, and its experimental verification at CERN is of monumental importance for science.
About the Anthropic Principle–see an earlier post I wrote called “Is the Universe a Giant Schroedinger’s cat?” But I have no idea whether Eugene Wigner was at all interested in anthropic
arguments, and in any case, marveling about WHY mathematics works so magically well in physics and other sciences has nothing to do (in my opinion) with anything anthropic. My guess is that this
is true regardless of whether you are Platonic or Kantian in your philosophy of mathematics. But I would welcome readers' opinions on this.
• http://www.jdweir.com
I’m not a physicist.
Thank you for answering the question of how the Higgs gets its own mass.
When physicists say they have detected the Higgs, can you explain the mechanism? If the particle is so small that light cannot bounce off it and back to the eye of the observer, how does the
scientist draw an inference that the particle exists?
@Yann: Great question! Thank you for asking it. Well, I was very fortunate to have been invited to CERN a number of times and actually visited and inspected the very insides of the huge
detectors, called ATLAS and CMS, which now discovered the Higgs (I was allowed into the machine just a couple of months before the tunnel (300 feet underground) was closed for the start of the
proton collisions. So, here is how it works. These very large detectors (ATLAS is 7 stories high; CMS a little smaller, but heavier–weighing as much as the Eiffel Tower) are made of many
thousands of tiny components. These components are of various types–each type is designed to detect a different kind of particle. The detectors are also extremely powerful magnets, so that they
will BEND the paths of charged particles. For example, an electron will bend inside the magnetic detector in one direction, while a positron (an anti-electron) will bend exactly in the opposite
way: same curve but in the other direction, because a positron is just an electron with a positive (rather than negative) electric charge. So the magnet separates the characteristic flights of
electrons from positrons and also of muons (they are like electrons but about 207 times heavier) from anti-muons. NOW: The Higgs lives for a very, very tiny fraction of a second before it disintegrates
into other particles: it will not even make it to the edge of the detector, where all the electronic tiny detectors that make up the large “detector” apparatus are. So we detect a Higgs, meaning
we determine that we have found it, when we find particles into which we believe it has decayed. Look at the picture below, from the ATLAS group at CERN:
Here you see four muons: two muons and two anti-muons that ATLAS physicists believe were formed from the decay of an actual Higgs boson. The decay route is as follows:
H → ZZ → μ+ μ− μ+ μ−. This means that the very short-lived Higgs has quickly decayed into two Z bosons. These Zs live very short lifetimes, too, and each of them decayed into a muon-antimuon pair. The
red paths in the picture are the muons and antimuons. Muons, luckily, have relatively long lifetimes (naturally, they are created in the upper atmosphere when cosmic rays hit nuclei of atoms, and
they make it all the way down to Earth before decaying): In the lab, Muons go through everything–the entire breadth of the detector (in fact, they can make it to 100 m underground). So, thanks to
the four muons, we know that a Higgs was there. What I just described was the “golden channel” to detecting the Higgs. In actuality, many of the Higgs events at CERN have been decays into gamma
rays and decays into two electrons and two anti-electrons. Note that since the Higgs is neutral (has no electric charge) and so is the Z, when they decay into charged particles such as electrons,
the decay must be balanced by an equal number of anti-electrons (because of the important physics law of conservation of charge: start with zero, for the Higgs, and end with zero as +1 + 1 − 1 − 1 = 0). The process required an immense amount of data because the Higgs appears only rarely, and demanded a very sophisticated statistical analysis where Higgs events appear (after the amassing of a
lot of data) as a “bump” in some curve. That bump finally became statistically significant (i.e., beyond 5 standard deviations, called 5-sigma, bounds on the curve) just before July 4th this
year–hence the (rather late) discovery announcement.
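Two of the points in this reply are easy to check numerically: charge conservation in the golden channel, and what a "5-sigma" bump means as a probability. A small illustrative sketch:

```python
from math import erfc, sqrt

# 1) Charge conservation in the golden channel H -> ZZ -> mu+ mu- mu+ mu-:
charge = {"H": 0, "Z": 0, "mu+": +1, "mu-": -1}
assert charge["H"] == 2 * charge["Z"]                    # H -> Z Z
assert charge["Z"] == charge["mu+"] + charge["mu-"]      # Z -> mu+ mu-

# 2) The one-sided probability that pure background fluctuates up by
# five standard deviations -- the "5-sigma" discovery threshold:
def one_sided_p(n_sigma: float) -> float:
    return 0.5 * erfc(n_sigma / sqrt(2.0))

print(f"p-value at 5 sigma: {one_sided_p(5):.2e}")  # about 2.87e-07
```

The 5-sigma threshold corresponds to roughly a 3-in-10-million chance that a background fluctuation alone produced the bump, which is the standard bar for claiming a particle discovery.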
>>>”marveling about WHY mathematics works so magically well in physics and other sciences…:
I recall an anecdote about Richard Feynman, how on his first day as an undergraduate he went to the campus bookstore and bought some large sheaves of paper and the biggest waste can the bookstore
had. Seeing him walking back to the dorm with his purchases, one of Feynman’s professors predicted that he would turn out to be a fine scientist, since he already knew what was important – plenty
of paper for doing his calculations and a big waste can where the great majority of them would end up. Feynman’s Nobel address was, to my recollection, a description of throwing a lot more good
math into the waste can before coming up with the stuff that won the Nobel prize. I would guess that physicists spend a lot of time selecting from all the math available the little bit that fits
the facts. Wondering about the effectiveness of laboriously selected or constructed math in the physical sciences seems like marveling over the fact that, starting with some flax plants and
employing careful measurements and time-tested procedures, spinners, weavers and tailors can make a suit that comfortably fits a demanding customer.
• http://www.jdweir.com
Thank you for that very clear explanation of how a particle with the life expectancy of a firefly’s glow could be detected/inferred over the length of the tunnel.
I hope this next question doesn’t turn your blog into physics 101, however, there was a second aspect of my question. I would appreciate your explanation: how are the muons or any other very tiny
sub atomic particle detected/inferred?
My understanding, as a layman, of the implications of Heisenberg’s uncertainty principle, is that at some point the relative size/masses of the photon of light and the subatomic particle can be
compared to a billiard ball colliding with a ping-pong ball. The photon of light will push the particle aside and will not rebound to the observer.
Another way of phrasing my question might be how “they” took that starkly beautiful picture of the process you linked.
Hi Yann, Good question. Particle detection has a long history, starting with the first bubble chambers and cloud chambers of the mid-20th century. In a cloud chamber, condensation of the “fog” or
mist inside the container occurs as a charged particle goes through it, attracting or repelling charged particles in the mist, which then make the molecules condensate into drops that can be
visually seen from the outside. The condensation occurs because the electric balance in the uniform mist in the container is disturbed. The droplets form a track that shows how the invading
particle went through the mist! The bubble chamber, which works on a similar principle, was invented by Don Glaser in the 1950s and–I am told but haven’t confirmed this, so might be physics
lore–he got the idea by watching bubbles form in a glass of beer! Beer didn’t work for detecting particles but he ended up finding a liquid that did. Here, bubbles form in the liquid, indicating
where the original particle traveled.
By the time we get to CERN of the 21st century, things have advanced a lot. Here is a partial description from the ATLAS group as to how a PART of their giant detector works:
“A high-energy electron emerging from the inner detector will interact with the metal particle absorbers and will result in the creation of many electrons, positrons, and photons. All three kinds
of particles are then measured as they go through the liquid argon, because they ionize the argon atoms. Electrons produced by these ionizations are collected by the copper grid inside it,
causing a current. The totality of the measurements of the current and its location…allows scientists to determine the energy of the original electron.”
The point is: Even though these are quantum particles, as you well point out, analysis at CERN and other accelerators is surprisingly “classical.” The reason is that–as you see from the above
description–many particles are disturbed by a single electron or muon or gamma ray, so you have more of a normal curve of energies of measured particles and you get statistical information. In
any case, everything is done electronically: particles disturbing other particles in their path, creating currents that are measured. The only visualization occurs on the physicist’s computer
screen. Hope this helps!
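The "classical" track analysis described here rests on one simple relation: a particle of unit charge bending with radius r in a magnetic field B carries momentum p ≈ 0.3·B[T]·r[m] in GeV/c. A sketch, with field and radius values assumed for illustration rather than taken from the ATLAS description:

```python
# Illustrative: momentum reconstruction from track curvature.
# For charge e, p [GeV/c] ~= 0.3 * B [tesla] * r [meters].
def momentum_gev(b_tesla: float, radius_m: float) -> float:
    return 0.3 * b_tesla * radius_m

# Assumed example values (not from the text): a muon curving with a
# 17 m radius in a 3.8 T solenoid field.
print(f"{momentum_gev(3.8, 17.0):.1f} GeV/c")   # ~19.4 GeV/c
```

This is why stiffer (higher-momentum) tracks look nearly straight in event displays: the bending radius grows linearly with momentum at fixed field.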
• http://www.jdweir.com
Yes, very clear. Thanks again!
Networks of queues – a survey of weak convergence results
- Queueing Systems, 2001
"... Abstract. The paper provides the up- and down-crossing method to study the asymptotic behavior of queue-length and waiting time in closed Jackson-type queueing networks. These queueing networks
consist of central node (hub) and k single-server satellite stations. The case of infinite server hub with ..."
Cited by 13 (13 self)
Abstract. The paper provides the up- and down-crossing method to study the asymptotic behavior of queue-length and waiting time in closed Jackson-type queueing networks. These queueing networks
consist of central node (hub) and k single-server satellite stations. The case of infinite server hub with exponentially distributed service times is considered in the first section to demonstrate
the up- and down-crossing approach to such kind of problems and help to understand the readers the main idea of the method. The main results of the paper are related to the case of single-server hub
with generally distributed service times depending on queue-length. Assuming that the first k − 1 satellite nodes operate in light usage regime, we consider three cases concerning the kth satellite
node. They are the light usage regime and limiting cases for the moderate usage regime and heavy usage regime. The results related to light usage regime show that, as the number of customers in
network increases to infinity, the network is decomposed to independent singleserver queueing systems. In the limiting cases of moderate usage regime, the diffusion approximations of queue-length and
waiting time processes are obtained. In the case of heavy usage regime it is shown that the joint limiting non-stationary queue-lengths distribution at the first k − 1 satellite nodes is represented
in the product form and coincides with the product of stationary GI/M/1 queue-length distributions with parameters depending on time.
, 2002
"... This paper proposes an algorithm, referred to as BNA/FM (Brownian network analyzer with finite element method), for computing the stationary distribution of a semimartingale reflecting Brownian
motion (SRBM) in a hypercube. The SRBM serves as an approximate model of queueing networks with finite buf ..."
Cited by 8 (0 self)
Add to MetaCart
This paper proposes an algorithm, referred to as BNA/FM (Brownian network analyzer with finite element method), for computing the stationary distribution of a semimartingale reflecting Brownian
motion (SRBM) in a hypercube. The SRBM serves as an approximate model of queueing networks with finite buffers. Our BNA/FM algorithm is based on finite element method and an extension of a generic
algorithm developed by Dai and Harrison (1991). It uses piecewise polynomials to form an approximate subspace of an infinite dimensional functional space. The BNA/FM algorithm is shown to produce
good estimates for stationary probabilities, in addition to stationary moments. This is in contrast to BNA/SM (Brownian network analyzer with spectral method) of Dai and Harrison (1991), where global
polynomials are used to form the approximate subspace and it sometimes fails to produce meaningful estimates of these stationary probabilities. Extensive computational experience from our
implementation is reported that may be useful for future numerical research on SRBMs. A three-station tandem network with finite buffers is presented to illustrate the effectiveness of the Brownian
approximation model and our BNA/FM algorithm.
- Mathematics of Operations Research , 1997
"... The diffusion approximation is proved for a class of multiclass queueing networks under FIFO service disciplines. In addition to the usual assumptions for a heavy traffic limit theorem, a key
condition that characterizes this class is that a J × J matrix G, known as the workload contents matrix ..."
Cited by 5 (2 self)
Add to MetaCart
The diffusion approximation is proved for a class of multiclass queueing networks under FIFO service disciplines. In addition to the usual assumptions for a heavy traffic limit theorem, a key
condition that characterizes this class is that a J × J matrix G, known as the workload contents matrix, has a spectral radius less than unity, where J represents the number of service stations.
The (j, ℓ)th component of matrix G can be interpreted as the amount of future work for station j that is embodied per unit of immediate work at station ℓ at time t. This class includes
Rybko-Stolyar network with FIFO service discipline as a special case. The result extends existing diffusion limiting theorems to non-feedforward multiclass queueing networks. In establishing the
diffusion limit theorem, a new approach is taken. The traditional approach is based on an oblique reflection mapping, but such a mapping is not well-defined for the network under consideration. Our
approach takes two steps: f...
- Annals of Applied Probability
"... This paper derives the strong approximation for a multiclass queueing network, where jobs after service completion can only move to a downstream service station. Job classes are partitioned into
groups. Within a group, jobs are served in the order of arrival, i.e., a first-in-first-out (FIFO) discipline ..."
Cited by 5 (2 self)
Add to MetaCart
This paper derives the strong approximation for a multiclass queueing network, where jobs after service completion can only move to a downstream service station. Job classes are partitioned into
groups. Within a group, jobs are served in the order of arrival, i.e., a first-in-first-out (FIFO) discipline is in force, and among groups, jobs are served under a pre-assigned preemptive priority
discipline. We obtain the strong approximation for the network, through an inductive application of an input-output analysis for a single station queue. Specifically, we show that if the input data
(i.e., the arrival and the service processes) satisfy an approximation (such as the functional law-of-iterated logarithm approximation or the strong approximation), then the output data (i.e., the
departure processes) and the performance measures (such as the queue length, the workload and the sojourn time processes) satisfy a similar approximation. Based on the strong approximation, some
procedures are propo...
- Faculty of Commerce and Business Administration, UBC , 2001
"... In this paper, we extend the work of Chen and Zhang (2000b) and establish a new sufficient condition for the existence of the (conventional) diffusion approximation for multiclass queueing
networks under priority service disciplines. This sufficient condition relates to the weak stability of the flu ..."
Cited by 5 (1 self)
Add to MetaCart
In this paper, we extend the work of Chen and Zhang (2000b) and establish a new sufficient condition for the existence of the (conventional) diffusion approximation for multiclass queueing networks
under priority service disciplines. This sufficient condition relates to the weak stability of the fluid networks and the stability of the high priority classes of the fluid networks that correspond
to the queueing networks under consideration. Using this sufficient condition, we prove the existence of the diffusion approximation for the last-buffer-first-served reentrant lines. We also study a
three-station network example, and observe that the diffusion approximation may not exist, even if the “proposed” limiting semimartingale reflected Brownian motion (SRBM) exists.
- IBM Research Division , 1995
"... This paper introduces the framework of synchronous constrained fluid systems (SCFS) to model ..."
, 2000
"... The goal of this chapter is to demonstrate the usefulness of analytical and numerical methods of stochastic control theory in the design, analysis and control of telecommunication networks. The
emphasis will be concentrated on the heavy traffic approach for queueing type systems in which there is li ..."
Add to MetaCart
The goal of this chapter is to demonstrate the usefulness of analytical and numerical methods of stochastic control theory in the design, analysis and control of telecommunication networks. The
emphasis will be concentrated on the heavy traffic approach for queueing-type systems in which there is little idle time and the queue length processes can be approximated by reflected diffusion
processes under suitable scaling. Three principal problems are considered: the multiplexer system, controlled admission in multiserver systems such as ISDN, and the polling or scheduling problem.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1419294","timestamp":"2014-04-20T22:15:10Z","content_type":null,"content_length":"30142","record_id":"<urn:uuid:85c86b4e-2c6d-4790-99e2-cd039fe5b8b1>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics problem - Spring Constant
Two spheres are mounted on two identical horizontal springs and rest on a frictionless table (one spring is attached to the left wall with a sphere on its free end, and the other sphere sits on the other spring, which is attached to the right wall -- the two spheres face each other). When the spheres are uncharged, the spacing between them is 0.05 m and the springs are unstrained. When each sphere has a charge of +1.60 uC, the spacing doubles (the springs compress). Assuming that the spheres have a negligible diameter, determine the spring constant of the springs.
Okay so I know F = -kx and F = k[(q1)(q2)/r^2], but I am not quite sure how to connect the dots... and what would q1 and q2 be??? I'm confused...
Please HELP!
|
{"url":"http://www.physicsforums.com/showthread.php?t=238713","timestamp":"2014-04-20T08:40:49Z","content_type":null,"content_length":"26482","record_id":"<urn:uuid:4d0c9ab1-d9c4-44dd-99db-c74601dfd3be>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Welcome to the math olympics
It was a beautiful day in the Winter of 1994. I had slept in because it was Saturday, but my older brother, Nikos, was not so lucky. He had just come back from a regional math competition named after
the Greek philosopher Θαλής (Thales of Miletus) and he did not look happy. I asked him how he did, but he did not answer; he just reached into his backpack and took out the paper that contained the
problems from the competition. Looking back, what I did next defines much of how I approach life since that day: I took the paper and started solving the problems one after the other until I was
done. It took me three days – the competition lasted three hours.
The next year, I was old enough to compete in the first round of the Greek Math Olympiad and I convinced a bunch of my friends from school (remember the cool nerds from Geniuses wanted?) to wake up
early on a Saturday and go test our mettle against the rest of the Greek population still in school (though university students were too old to compete). You know how people talk about the 1% these
days… To make it into the next round you had to be in the 1% of something that none of us knew how to define back then. After all, we were all equally scared of the old guys with the formal suits and
the strict faces holding the exam papers. How was I, or any one of my friends from school, different from all the other kids taking the exam throughout the country that day? Did we have a special
upbringing? Did our parents answer math questions for fun? At the moment I received the paper with the 4 problems on it, I immediately hoped I had a special upbringing. That my dad was not a
high-school dropout and my mom had studied math in college, instead of political science. Damn. These problems looked hard. But I knew what I was supposed to do. So I started solving them, one after
the other. Unfortunately, the time ran out after 3 hours and I had only solved 2 and 1/2 problems by then. But after I got home, I said hi to my mom, ate a sandwich and went into my room. I stayed
there until I had finished the other 1 and 1/2 problems. Emerging victorious from my room that night, I made another ham-and-cheese sandwich and added extra ketchup to reward myself.
A month later, I learned that I had made it into the 1%. And four months later, into the 0.01% that went on to the final round named after the famous Greek nudist, Αρχιμήδης (Archimedes of Syracuse).
My dad, mom, brothers, classmates and, especially, teachers, were incredulous. But that was nothing compared to how I felt. They thought I was a lazy bum, but I knew it for certain. How did I get to
reach the finals and make it into the top 20 of all Greek high-schoolers? I was not even in high-school yet! Who knows… All I know is that I would stay up past 3 a.m. on Tuesday nights to solve
problems from Crux Mathematicorum with Mathematical Mayhem, when everyone else was fast asleep. And I wouldn’t stay up that late even if my favorite cartoon (Thundercats) was on TV at that time
(which is a bit ironic, since my parents thought I was watching TV all night and so considered it appropriate to wake me up using buckets of water).
I didn’t solve these math problems because I was planning to be a mathematician. I disliked math class in school as much as (probably more than) my classmates. But this was different. These problems
did not appear at the end of a chapter full of dry equations and ready-made formulas. The problems on Crux Mathematicorum were dancing in the middle of pages surrounded by other problems vying for
attention. Any progress I made was solely due to stubbornness. But then, I started unlocking my superpowers one-by-one. And I started seeing patterns, not only mathematical patterns, but patterns in
my own thoughts and my own temper when confronting impossible problems. Which came in really handy later in life when I was asked to solve a much harder problem (An intellectual tornado).
So now that you are all primed, here is a problem to keep you occupied for a few hours, or days…
Problem 1: If $f(0,y) = y+1, \, f(x+1,0) = f(x,1)$ and $f(x+1,y+1) = f(x, f(x+1,y))$, for all non-negative integers $x,y$, find $f(4,2009)$.
34 thoughts on “Welcome to the math olympics”
1. Using the notation from the wikipedia article about tetration, I get ${}^{n+3}2-3$. :)
□ oh, I thought I could use latex…. the answer in words is: a “power tower” of exponents of the number 2, with tower height (n+3), then subtract 3 from that. For your special case, n=2009.
☆ Use $… [latex code]$ and substitute ... with the word latex, to write tex. Then write the details of how you got your answer :)
☆ Dr. Flammia’s answer: ${}^{n+3}2-3$
2. Don’t see how it converges.
f(4,2009) = f(3,f(4,2008))
f(3,0) = f(2,1) = f(2,f(2,0)) = f(2,f(1,1)) = f(2,f(1,f(1,0))) = f(2,f(1,f(1,f(0,1))))
Lowest level is a recursive loop
f(1,2) = f(1,f(1,1)) = f(1,f(1,f(1,0))) =
□ f(0,y) = y wouldn’t be in the recursive loop but it also doesn’t evaluate to anything
f(1,f(0,1)) = f(1,1) = f(1,f(1,0)) = f(1,f(0,1))
f(0,y+1) =y isn’t in the recursive loop but it just evaluates to 0
f(1,f(0,1)) = f(1,0) = f(0,1) = 0
☆ f(3,0) = f(2,1) = f(2,f(2,0)) = f(2,f(1,1)) = f(2,f(1,f(1,0))) = f(2,f(1,f(0,1)))
last equals on that line had an extra nest
3. Alright.
I was coming back from a gym and read your nostalgic story on my smartphone while walking.
I enjoyed the reading a lot, mostly because I had pretty much the same experiences with physics competitions in Korea. And, of course, your challenge has been accepted!
My first reaction was like this: “Alright, let’s grab a pen and a piece of paper. I should be able to find the answer in like 5 or 10 mins, and then I would take a shower.” However, after 20 mins
or so, no hope. I got serious about the problem, sat in front of a desk, rewrote the problem statement again and started from the beginning.
Honestly, from that point it took (very slightly) more than 20 mins to find the answer, which agrees with Steve’s. Even though I am dry and smell nasty, I feel very good because now I know
the answer! And more importantly because this problem reminded me of the time when I was spending nights solving physics problems. It seems like I have not changed so much since that time.
PS. For those of you who are still struggling, let me give you a hint. Don’t try to solve the problem at once; the structure of the two-dimensional function is too complex to be called a
nice-looking two-dimensional function. Begin with small numbers and find the “pattern” just as Spiros (and probably all of us) has been doing since he was a little kid (and probably even now
for our research!).
By the way, the answer is a very, very, very big number.
Just out of curiosity, is there anyone who can write the last 4 digits of the answer?
□ Soonwon, I was going to ask for the last digit of the answer. Steve?
☆ I can give you the answer with 25% chance to be right.
□ I think the last digit must be 3, but I have no idea of the next one…
☆ Apparently, the next digit of the answer is 3, and the whole huge number is …………..33. But I am not sure I have not made a mistake somewhere.
Any confirmation or refutation?
○ Hi Konstantin. Just landed back in the good ol’ US of A. The last digit is 3 (easy to figure that out) and I am sure that you got 33 right, though I can’t confirm it in my head.
○ I define $p_n = 2^{p_{n-1}}$, $p_1 = 2$. To find the last two digits we compute $p_{2009} \pmod{4}$ and $p_{2009} \pmod{25}$. It’s easy to see that $p_{2009} \equiv 0 \pmod{4}$. Now,
$p_{2009} \equiv 2^{p_{2008}} \pmod{25}$.
However, $2^{\phi(25)} \equiv 1 \pmod{25}$, where $\phi$ is the Euler totient function and $\phi(25) = 25 \left(1 - \frac 1 5\right) = 20$. Therefore $2^{p_{2008}} \equiv 2^{p_{2008} \pmod{20}} \pmod{25}$. So now we need to compute $p_{2008} \pmod{20}$.
We proceed as before, and we compute $p_{2008} \pmod{4}$ and $p_{2008} \pmod{5}$. The first is easy, $p_{2008} \equiv 0 \pmod{4}$. For the second we have $p_{2008} \equiv 2^{p_{2007}} \pmod{5} \equiv 2^{p_{2007} \pmod{\phi(5)}} \pmod{5}$, since $\phi(5) = 5 \left(1-\frac 1 5\right) = 4$. Since $p_{2007} \equiv 0 \pmod{4}$, we get that $p_{2008} \equiv 2^0 \pmod{5} \equiv 1 \pmod{5}$.
So, we have that $p_{2008} \equiv 0 \pmod{4}$ and $p_{2008} \equiv 1 \pmod{5}$. This implies that $p_{2008} \equiv 16 \pmod{20}$. Going back, we obtain that $p_{2009} \equiv 2^{16} \pmod{25} \equiv 11 \pmod{25}$. So, we have $p_{2009} \equiv 11 \pmod{25}$ and $p_{2009} \equiv 0 \pmod{4}$. From here we find the only solution is $p_{2009} \equiv 36 \pmod{100}$.
Since we need to subtract $3$ to obtain the answer of this problem, we find that the last two digits are $33$.
■ Excellent solution Lord Sidious! The power of the dark side is indeed great. To post in latex, you only need to append “latex” after the opening $. For example, $latex …latex
code… and then close the dollar sign.
Lord Sidious, thank you for your calculation that yields the “…33” answer in a professional “powerful arithmetic” language. My amateur solution does not involve Euler’s totient
function, but it is basically about the same ideas, I think.
The first simple observation is that consecutive powers of 2, i.e. 2, 4, 8, 16, 32, 64, 128, 256, … in decimal notation end with 2, 4, 8, 6, 2, 4, 8, 6, …, respectively, which is
a cycle with four members. Therefore, the last digit of any power of 2 depends on the remainder of the index mod 4. In our problem, the number in question is 2 to some great
positive integer power N, where index N is itself a great power of 2, and it is certainly divided by 4, hence the last digit of $2^{N}$ is 6.
Now to the second decimal digit. It does not take long to figure out that N is not only divided by 4, as we have already seeen, but also ends with 6 (because N is a power of 2
whose own index is divided by 4). Therefore, $N = 10A + 6 = 4B$, hence $5A + 3 = 2B$, and A has to be odd: $A = 2C + 1$. Then $N = 20C + 16$.
Now to $2^{N} = 2^{16} \times 2^{20C}$. Obviously, $2^{16} = 65536$ ends with 36. Also, $2^{20} = 1048576$ ends with 76. The key observation is that the product of two numbers
each ending with 76 must also end with 76 — it could easily be seen by calculating the product using the standard school multiplication method (in Russia it is called “stolbikom”,
which means that you write multiplicands one under another). Therefore, $2^{20C} = ...76 \times ...76 \times ... \times ...76$ ends with 76.
So, our “power tower” $2^{N}$ has been shown equal to the product of two numbers, one ending with 36 and the other ending with 76. Such a product must end with 36 (again,
“stolbikom” multiplication method helps to see why). Subtracting 3, we get a number that ends with 33.
★ I like this so much.
★ And this method seems scalable to certain extent: now it is easy to show that the next figure is 7. In fact,
${}^{m}2 = {2}^{2^{2^{\text{...m times}...^{2}}}} = .........736$ if $m \geq 5$
It might turn out quite possible to calculate a few more decimal digits in the same manner, but enough is enough. :)
5. :) The following problem was given to some 5th graders in Russia this week. The answer is unique.
Two friends meet on the street. One asks the other
- you have kids?
- yup, I have three kids.
- how old are they?
- if you multiply their ages, you get 36.
- this is not enough information – tell me more!
- if you add their ages, you get the same number as number of windows you can see in this house.
- still not enough information! tell me more!
- the oldest child is a redhead.
- now I get it!
□ 3,3 and 4? A wild guess.
□ 2, 2, 9
☆ It would be a cool house :)
○ …and with two more siblings this cool house could even turn into a full house — 2, 2, 9, 9, 9. :)
■ And it would still be a prime house!
★ Konstantin is right – sorry, Spiros :)
★ A house with 13 windows must be a very long or very tall house (given that 13 is prime so more floors wouldn’t help.) But it is all about the sum of the ages having two
solutions (2,2,9 and 6,6,1) that is relevant. Great problem :)
1. The first floor always has a different number of windows, to save space for the door(s). So it may be a house with three floors and three windows on the first one.
2. Even if both children are age 6, one of them may be older, so the information is still not enough.
6. Nice story Spiros.
I used to go every year to math competitions in my country, starting from the 4th grade until I finished the elementary school (those were for the elementary school pupils). I was always among
the first 3 at school, and every year I managed to get through the first 2 rounds and compete in regional (after that was the national competition). This is not some big success at all, but what
I regret most is that I was never preparing for the competition and never doing some problems at home. I actually never studied math until high school, all I heard in class was enough. Although I
never had problems with math (and I had it a lot in high school and at the faculty later), I really regret for not being more into it when I was young. I don’t know, I just did not find it
interesting back then as I find it today.
7. Pingback: HoT NeWs » Прайм крайн
8. Pingback: Unsolvable | Quantum Frontiers
Your thoughts here.
|
{"url":"http://quantumfrontiers.com/2012/09/06/welcome-to-the-math-olympics/","timestamp":"2014-04-18T15:39:09Z","content_type":null,"content_length":"128449","record_id":"<urn:uuid:88a1270e-dde0-404f-9f41-d7aa73cf9e91>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Relationship between power and amplitude
Amrit N
hey topher925, do not float in maths; (iamanengineer) said power proportional to amplitude squared and i do not understand power = A^2
If you don't fundamentally understand the question, then don't try to answer it.
, however using common sense i can tell that the amplitude that iamanengineer mentioned is the highest peak of current signal
No you can't, you just made the assumption.
, and surely he did not mean the amplitude of the particles because there are no particles vibrating in electricity as in sound propagation,
Yes there is, look up AC electrical power. I'm guessing the electrons powering your computer are vibrating back and forth 50 times a second.
i request topher925 be more physical than mathematical
You want me to answer a question asking how something is mathematically derived without using mathematics?
Amrit, the bottom line is that your definition is only true for some simple cases, but not all. Do you understand why?
Pop Quiz:
(You can solve this with simple HS math and the definition I gave, but try yours)
Let's assume that the AC electricity waveform that is powering your computer can be described with the equation:
U = 5 cos(wt) + 10
What is the power of this electrical waveform?
|
{"url":"http://www.physicsforums.com/showthread.php?t=312175","timestamp":"2014-04-20T18:25:12Z","content_type":null,"content_length":"77627","record_id":"<urn:uuid:ebe76fac-9879-434b-9494-0c2b60000991>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus Tutors
Severna Park, MD 21146
Professor available to tutor in Math, Science and Engineering
...Recently, I have had great success helping students significantly improve their Math SAT scores. I am willing to tutor any math class, K-12, any
class, math SAT prep and some college classes such as Statics, Dynamics, and Thermodynamics. All 3 of my children...
Offering 10+ subjects including calculus
|
{"url":"http://www.wyzant.com/Adelphi_MD_calculus_tutors.aspx","timestamp":"2014-04-20T13:34:40Z","content_type":null,"content_length":"61158","record_id":"<urn:uuid:5d7b3075-64cb-4153-ac06-4e134c4cd4f5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Adelphi, MD SAT Math Tutor
Find an Adelphi, MD SAT Math Tutor
...Private residence within 15 mile radius of Bowie (someone 18 or over must be home)... in home tutoring is generally available for the first or last scheduled sessions of the day. The location
of slots before and after open slots dictate location of sessions for open slots. WHAT YOU SHOULD EXPEC...
33 Subjects: including SAT math, English, reading, geometry
...I've worked with students to set goals, timelines, or even simple routines to aid them in achieving their study and homework goals. I also incorporate these skills into my own life - that's
what has enabled me to complete research for my master's degree in engineering, and continues to aide me i...
17 Subjects: including SAT math, reading, elementary math, algebra 1
...I was born and raised in China. I lived in Qingdao, China for 17 years. I speak perfect Mandarin, and I read Simplified and traditional Chinese, and I am fluent in writing simplified Chinese.
13 Subjects: including SAT math, calculus, geometry, Chinese
...My name is Sonja. I'm happy to say that 95% of my WyzAnt students report a 1 to 3 letter grade increase (those students who already have an "A" or "4" maintain their grade). This data is
up-to-date as of 11/24/2013. Over 80% of my students who improve achieve better grades in a month.
10 Subjects: including SAT math, geometry, algebra 1, GED
...I have seen excellent results in all of the students I have tutored who have made an effort to learn.I took ordinary differential equations, I and II, and partial differential equation. I
received an A for all classes. I used differential equations, in undergraduate and graduate physics classes, including electrodynamics, quantum mechanics and plasma physics.
46 Subjects: including SAT math, reading, English, writing
|
{"url":"http://www.purplemath.com/adelphi_md_sat_math_tutors.php","timestamp":"2014-04-18T19:12:28Z","content_type":null,"content_length":"24023","record_id":"<urn:uuid:4aff8546-a91a-484a-9230-121d747b04f0>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
|
i have problem in finishing my code
02-12-2012 #1
Registered User
Join Date
Feb 2012
i have problem in finishing my code
This program finds the largest multiple of 5 in a randomly generated vector of 20 elements and returns it as well as its index. But what would biggest() return if none of the integers in the vector<> are a multiple of five, and how can I correct it?
#include <iostream>
#include <algorithm>
#include <vector>
#include <fstream>
using namespace std;

int biggest(vector<int>);
const int k = 20;

int main()
{
    ofstream fout("pasuxi.out");
    vector<int> v(k);
    for (int i = 0; i < k; i++)
        v[i] = rand() % 71 + 30;
    fout << "biggest is " << v[biggest(v)]
         << ". index= " << biggest(v);
}

int biggest(vector<int> x)
{
    int big = -1;
    int index;
    for (int i = 0; i < x.size(); i++)
        if (x[i] % 5 == 0 && x[i] > big) {
            big = x[i];
            index = i;
        }
    return index;
}
Suggestion: learn references and/or exceptions.
If you can't return a value that indicates "error" (which you very well could, since any number < 0 would be an invalid index, and hence could be reserved for errors), you can pass in an extra
argument that would hold return information, or you can throw an exception.
For information on how to enable C++11 on your compiler, look here.
Listen well! I'm a genius, you know! ^_^
|
{"url":"http://cboard.cprogramming.com/cplusplus-programming/145966-i-have-problem-finishing-my-code.html","timestamp":"2014-04-17T20:14:56Z","content_type":null,"content_length":"45211","record_id":"<urn:uuid:cd61bd2a-3223-45bb-89b2-f757e488de34>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
|
History of Circle Area Formula
Date: 03/19/2007 at 18:26:08
From: Richard
Subject: history of equation using Pi
Do we know who figured out that pi r squared is the area of a circle?
I can find out about the history of Pi and the circumference of a
circle, but not its area. I looked through your FAQs and on Google
but to no avail. Perhaps it is just not known?
Date: 03/20/2007 at 06:08:17
From: Doctor Peterson
Subject: Re: history of equation using Pi
Hi, Richard.
It's hard to answer that question, because the area of a circle was
known long before pi was actually used. Proposition 2 of book XII of
Euclid's Elements, which was undoubtedly known before Euclid
himself, is equivalent to the formula A = pi r^2:
Euclid's Elements Book XII, Proposition 2
Circles are to one another as the squares on their diameters.
That is, the area of a circle is proportional to (2r)^2, which in
turn is proportional to r^2. All that is lacking here is a name for
the constant of proportionality, which has been called pi since 1706.
There are two parts to your question: who discovered that the area is
SOMETHING times the square of the radius (for which the answer is
whoever gave Euclid his proof, commonly considered to be Eudoxus); and
who discovered that the constant of proportionality is pi. The answer
to the latter question is Archimedes.
The form in which Archimedes stated it was that the area of a circle
is equal to that of a right triangle whose base is the circumference
of the circle, and whose height is the radius of the circle. That is,
A = 1/2 (2 pi r) r = pi r^2
in modern terms. So except for the lack of algebraic notation and a
name for pi, he got the entire formula. You may be aware that he also
worked out the value of pi.
His proof can be found in Hawking's _God Created the Integers_ (a
collection of important primary documents in math history); and in
sites like the following:
Archimedes and the Area of a Circle
It is related to what I said in the following simplified explanation:
Why Pi?
- Doctor Peterson, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/70604.html","timestamp":"2014-04-19T22:51:30Z","content_type":null,"content_length":"7669","record_id":"<urn:uuid:20ae588c-20e2-4a69-8150-b6f2e453a60d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Introduction to Mathematical Proofs
It has been a very rewarding experience to move my book, Mathematical Reasoning: Writing and Proof, from the world of commercial publishing to the world of OER (Open Educational Resources). I have
had more contact with users of the textbook (both students and professors) since August than I did for over ten years when the book was commercially published. It is really nice to know when someone
adopts the book for use in their class, and it is especially nice to get messages from students who are grateful that they can obtain a book free of charge or obtain the printed copy for less than
Here is a link to an interesting article by Nicole Allen, OER Program Director for SPARC (Scholarly Publication and Academic Resources Coalition).
In a post on this blog dated August 13, 2013, I expressed my opinions about the importance of writing in the introduction to proofs course. At Grand Valley State University, our introduction to
proofs course, MTH 210 Communicating in Mathematics, is in the university's Supplemental Writing Skills (SWS) Program. Following is a description of this program that I include in my course syllabus.
As many of you know, I am the author of a book for the Introduction to Proofs course. My book is Mathematical Reasoning: Writing and Proof. I have made this book available to download for free using
a Creative Commons License. You can download the book at the website for the book. A soft cover version of this book can also be purchased for less than $20 at Amazon.com.
Request #1
Advertising is one of the difficulties with an open-source book. There are websites that give lists of free textbooks, but there is usually no information from users of the book. So my request to those of you who have used (or are using) the book is to send me a short quote about the book or a longer review of it. I would like to include these quotes and reviews on the website for the book. You can post the quote or review as a response to this post, or you can send it to me at mathreasoning@gmail.com. If you do so, please include your name and affiliation so that I can include that with the quote or review. If you prefer to have it be anonymous, just tell me so and I will post it that way.
Before I describe a few small updates to the materials that are available for Mathematical Reasoning: Writing and Proof, I would like to remind people to check out the List of Approved Open-Source
Textbooks that are available through the American Institute of Mathematics.
In preparing for class this semester, I revised the study guides that are available for Mathematical Reasoning: Writing and Proof. These guides are available on the web site for the book. In
addition, because "flipping" a proofs course can be a difficult and time-consuming thing to do, I have written short fact-based quizzes for most sections of the textbook. I usually give these quizzes
at the start of class. (Students are supposed to study the section of the textbook along with the screencasts that are available for the text.) Instructors who would like to obtain a copy of these
quizzes and their solutions should contact me at mathreasoning@gmail.com.
I certainly have not been active with this blog the past couple of months. I guess other things just got in the way and I let it slide. The only reason for this post is to inform those who are attending the Joint Mathematics Meetings in Baltimore that I will be giving a presentation at the contributed papers session on Open Source Textbooks in Mathematics. My session will be on Friday, January 17, at 10:00 am. I do not know the location yet.
For those interested in open source textbooks, this should be an interesting session. The complete list of presentations is:
A friend of mine recently posted a link on Facebook to the following blog post:
This has the provocative title "The Death of Math." Side note: The use of the word "math" tends to bug me. In formal writing and public writing, I always try to use the term "mathematics."
This is a fairly long post, and what I want to focus on now is one of the two recommendations Mr. Rubenstein makes to "fix mathematics." This one is: greatly reduce the number of required topics and expand the topics that remain so they can be covered more deeply with thought-provoking lessons and activities. (The second recommendation is to make mathematics beyond the 8th grade into
HeadFirst Java, Ch3 Pool Puzzle
So I'm a little stumped...the following code produces this output:
triangle 0, area = 4.0
triangle 1, area = 10.0
triangle 2, area = 18.0
triangle 3, area = 28.0
y = 4, t5 area = 343
What has me stumped is the last line of output...I understand that both t5 and ta[2] are pointing to the same Triangle object, and that by setting ta[2].area = 343, t5.area will also =
343. But where does "y = 4" come from - if y = x, and x is set as 27, where is the value changed to 4? Any help would be appreciated, thanks!
[ May 26, 2008: Message edited by: Campbell Ritchie ]
It is straightforward. After the while loop you are setting y = x where the value of x is 4 at that time. Later you changed the value of x to 27 but not y, so it printed the value of y as 4.
First, thanks for the fast reply!
Second, duh - I hadn't thought about the value of x as the loop was exited...for some reason, I thought that since x was given a new value, y took the same value. I see now that since the line "y = x" was written before "x = 27", y retains the previous value of x. Thanks!
Please use code tags round quoted code; it makes it easier to read. I shall edit your post to add tags.
Sure, will use code tags in the future. Thanks for the heads up.
Originally posted by Yong Lee:
Sure, will use code tags in the future. Thanks for the heads up.
Thank you. You see how much better it looks with the code tags added. There are buttons below the "message" box for formatting your posts.
This is an old thread, but I also had a question. I am having trouble understanding the last part.
On lines 17 and 18, y was 4, then the value of x changed to 27, but that had no influence on printing out y as 4 (line 21).
On lines 19 and 20, t5 = ta[2]. I am thinking t5 = 18 from the previous area ta[2]. Then why, all of a sudden, after line 20 did t5.area change to 343? Shouldn't it have had no influence on the t5 value, and printed out 18 as the area (line 22) instead of 343?
Sorry to bump the old thread, but could someone please answer my question?
Your first question is answered in this thread by Yong Lee.
As to your second question, ta[2] references a Triangle object. When you make that assignment, is a new object created, or does t5 reference the same object as ta[2]?
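The two behaviors the thread keeps circling (primitives are copied by value, while object variables hold references) can be sketched in a few lines. Python is shown for brevity; Java's ints and object references behave the same way in this example:

```python
class Triangle:
    def __init__(self, area):
        self.area = area

# Assigning a primitive-style value copies the value itself.
x = 4
y = x            # y gets its own copy of 4
x = 27           # rebinding x later does not touch y
assert y == 4

# Assigning an object variable copies the reference, not the object.
ta = [Triangle(4.0), Triangle(10.0), Triangle(18.0), Triangle(28.0)]
t5 = ta[2]               # t5 and ta[2] now refer to the same Triangle
ta[2].area = 343
assert t5.area == 343    # the change is visible through both names
assert t5 is ta[2]       # no new object was created by the assignment
```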
Yong Lee wrote: So I'm a little stumped...the following code produces this output:
triangle 0, area = 4.0
triangle 1, area = 10.0
triangle 2, area = 18.0
triangle 3, area = 28.0
y = 4, t5 area = 343
What has me stumped is the last line of output...I understand that both t5 and ta[2] are pointing to the same Triangle object, and that by setting ta[2].area = 343, t5.area will also
= 343. But where does "y = 4" come from - if y = x, and x is set as 27, where is the value changed to 4? Any help would be appreciated, thanks!
[ May 26, 2008: Message edited by: Campbell Ritchie ]
I realize this is an old post, but I ran the exact same code as above (chapter 3, Head First Java), and I receive the following:
Re: Frequency of integer division in source code?
tfj@apusapus.demon.co.uk (Trevor Jenkins)
Thu, 10 Feb 1994 22:36:29 GMT
From comp.compilers
Newsgroups: comp.compilers
From: tfj@apusapus.demon.co.uk (Trevor Jenkins)
Keywords: architecture, optimize
Organization: Don't put it down; put it away!
References: 94-02-058
Date: Thu, 10 Feb 1994 22:36:29 GMT
meissner@osf.org writes:
>Another place where division (actually modulus) is used quite
>frequently is in calculating hash tables, which often involves modulus
>by a prime number constant. This of course lends itself nicely to
>being replaced by multiplication.
Better yet is not to be misled by Maurer's paper and to ignore his advice by using a table whose size is a power of two. Then the integer divide becomes an AND operation, which is faster than division or multiplication on almost all architectures.
The ``theory'' of using powers of two is explained in various papers by Hopgood et al. from the late 60s/early 70s.
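Concretely, for a table of 2^k buckets the reduction h mod 2^k is identical to h AND (2^k - 1) for non-negative h, so the divide disappears entirely; a minimal sketch:

```python
TABLE_SIZE = 1 << 4       # 16 buckets: a power of two
MASK = TABLE_SIZE - 1     # 0b1111

def bucket_mod(h):
    return h % TABLE_SIZE    # reduction via integer division/modulus

def bucket_and(h):
    return h & MASK          # reduction via a single AND

# The two reductions agree for every non-negative hash value.
for h in range(10000):
    assert bucket_mod(h) == bucket_and(h)
```

Compilers can often perform this strength reduction automatically when the table size is a compile-time constant power of two.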
Regards, Trevor.
Trevor Jenkins
134 Frankland Rd, Croxley Green, RICKMANSWORTH, WD3 3AU, England
email: tfj@apusapus.demon.co.uk phone: +44 (0)923 776436 radio: G6AJG
Calculus and Discrete Mathematics Learning Software, Tutorials, Etc.
June 2nd 2011, 07:03 PM #1
Calculus and Discrete Mathematics Learning Software, Tutorials, Etc.
Hello All:
Could anyone recommend software or tutorial videos for learning calculus or discrete mathematics at a beginner level?
Thank you
Word Problems Examples Page 1
Lisa has five times as many books as she has CDs. She must be either over 45 or a librarian. Lisa also has 35 books. Okay, scratch that. She just hates music. How many CDs does she have?
(# books Lisa has) = 5(#CDs Lisa has)
Let C be the number of CDs Lisa has. Then
5C = 35
Solving this equation gives us C = 7, so Lisa has 7 CDs. Sadly, all seven of them are ABBA CDs. You need to expand your horizons and get with the times, Lisa.
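Spelled out as a (very small) program:

```python
books = 35
ratio = 5               # "five times as many books as CDs": books = 5 * cds
cds = books // ratio    # solve 5C = 35 by dividing both sides by 5
assert ratio * cds == books
assert cds == 7
```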
Although ABBA is pretty cool, we're not going to lie. Disco forever!
Sunday Function
Just a quick one today, as I get caught back up from Thanksgiving. We all know and love the very basic quadratic function. Any second-order polynomial will give you a nice little parabola, which of
course is ubiquitous in physics. We all know what that looks like. But what if we’re willing to square complex numbers instead of just real numbers? Traditionally we denote complex numbers with z
instead of x, so our Sunday Function is:

f(z) = z^2
Ok, so what happens when we square a complex number? Well, we can write any complex number as (a + bi), where “a” and “b” are real numbers. “a” is the real part and “b” is the imaginary part. Keeping
in mind that "i" squared is -1, we can go ahead and square our generic expression for any complex number:

(a + bi)^2 = a^2 + 2abi + (bi)^2 = (a^2 - b^2) + 2abi
The first term (a^2 – b^2) is the real part of the number z^2 and 2ab is the imaginary part of z^2. As such we’re done if we just want to calculate numerical values. But we would like a bit better of
a theoretical understanding as well. First, we see that the real part is zero if and only if a and b are equal in magnitude. The imaginary part is zero if and only if at least one of a and b is zero. So positive real numbers are sent to positive real numbers, imaginary numbers are sent to negative real numbers, and negative real numbers are sent to positive real numbers. Complex numbers will do
something in between. In fact, if we plot arg(z^2) [Note: If you think of a complex number as a point on the complex plane, arg(z) represents the angle between the real axis and that point.],
we’ll get this:
If you think of the complex plane as a rubber sheet, this suggests that the function f(z) = z^2 both stretches the sheet radially and bends it in a counterclockwise direction. To verify this, we’ll
need to use the polar representation of complex numbers. That’ll be a job for next week.
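The expansion is easy to confirm numerically with a built-in complex type; a quick sketch in Python:

```python
def square_parts(a, b):
    # Real and imaginary parts of (a + bi)**2, from the expansion above.
    return a * a - b * b, 2 * a * b

for a, b in [(1.0, 2.0), (-3.0, 0.5), (2.0, 2.0)]:
    z = complex(a, b)
    re, im = square_parts(a, b)
    assert (z * z).real == re and (z * z).imag == im

# Equal real and imaginary parts give a purely imaginary square:
w = complex(2.0, 2.0)
assert (w * w).real == 0.0
```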
1. #1 Uncle Al November 30, 2009
Nice! 3-D is notable for its chirality. You graphed a right-handed propeller. Fundamental physical theory demands the universe and its mirror image work equally well. The universe disagrees, and increasingly so for weaker interactions.
2. #2 Lyle November 30, 2009
In complex space you see lots of interesting behavior; the sheet cut, as it is called, that shows up in the plot is one example. I wish that gnuplot and more advanced programs had been around when I studied math in the early 70s. More pictures would have helped then.
3. #3 Paul Murray November 30, 2009
Kinda cool … but what’s always puzzled me is the infinite number of densely-packed roots you get when you take a root that’s irrational. At a guess – the complex number plane gets twisted into an
infinitely long spiral by exponentation.
4. #4 Joshua Zelinsky November 30, 2009
Paul, as I understand it, that is really related at some level to the failure of irrational powers to give you a well-behaved function. f(z) = z^n for positive integers n is trivially holomorphic, while f(z) = z^-n is meromorphic with a well-behaved pole. Even the badness of f(z) = z^r where r is rational is reasonably well-behaved. But what you get depends closely on which r you pick. So if you want to think of z^a for an irrational a, you can't think of it as a limit of z^r with r being a sequence of rationals approaching a. So the only way to think of z^a is by using the exponential function. And in fact this gives us some insight into what is really going on. Suppose we want to solve w^a = z. So we really have exp(a log w) = z. Now, assume we have such a w. Then exp(2in Pi + a log w) = z for any n. That is, w' will also be a solution if w' = w * exp((2in Pi)/a), which makes the density issue more apparent because exp((2in Pi)/a) can get arbitrarily close to 1.
This is a rough sketch. I may have screwed up some of the details but the basic idea can I think be made rigorous.
5. #5 IBY November 30, 2009
Isn’t something like that used to create Julian set or something?
6. #6 S M December 26, 2009
Hi,I’m Iranian. thx alot for gragh of z^2. that help me alot.
7. #7 Κάρτες r4i December 29, 2009
This function is really useful for me. I have problems with this kind of example and it is really complicated. I am weak in mathematics. This one is really useful for me.
Nonparametric hypothesis testing for a spatial signal
, 2010
"... We consider the problem of detecting whether or not in a given sensor network, there is a cluster of sensors which exhibit an “unusual behavior.” Formally, suppose we are given a set of nodes
and attach a random variable to each node. We observe a realization of this process and want to decide bet ..."
Cited by 17 (3 self)
We consider the problem of detecting whether or not in a given sensor network, there is a cluster of sensors which exhibit an “unusual behavior.” Formally, suppose we are given a set of nodes and
attach a random variable to each node. We observe a realization of this process and want to decide between the following two hypotheses: under the null, the variables are i.i.d. standard normal;
under the alternative, there is a cluster of variables that are i.i.d. normal with positive mean and unit variance, while the rest are i.i.d. standard normal. We also address surveillance settings
where each sensor in the network collects information over time. The resulting model is similar, now with a time series attached to each node. We again observe the process over time and want to decide
between the null, where all the variables are i.i.d. standard normal; and the alternative, where there is an emerging cluster of i.i.d. normal variables with positive mean and unit variance. The
growth models used to represent the emerging cluster are quite general, and in particular include cellular automata used in modelling epidemics. In both settings, we consider classes of clusters that
are quite general, for which we obtain a lower bound on their respective minimax detection rate, and show that some form of scan statistic, by far the most popular method in practice, achieves that
same rate within a logarithmic factor. Our results are not limited to the normal location model, but generalize to any one-parameter exponential family when the anomalous clusters are large enough.
, 2003
"... This paper extends False Discovery Rates to random fields, where there are uncountably many hypothesis tests. This provides a method for finding local regions in the field where there is a
significant signal while controlling either the proportion of area or the number of clusters in which false rej ..."
Cited by 4 (0 self)
This paper extends False Discovery Rates to random fields, where there are uncountably many hypothesis tests. This provides a method for finding local regions in the field where there is a
significant signal while controlling either the proportion of area or the number of clusters in which false rejections occur. We develop confidence envelopes for the proportion of false discoveries
as a function of the rejection threshold. This yields algorithms for constructing a confidence superset for the locations of the true nulls. From this we derive rejection thresholds that control the
mean and quantiles of the proportion of false discoveries. We apply this method to scan statistics and functional neuroimaging.
"... Wavelet denoising methods have been proven useful for many one- and two-dimensional problems. Most existing methods can in principle be carried over to three-dimensional problems, such as the
denoising of volumetric positron emission tomography (PET) images, but they may not be sufficiently flexible ..."
Cited by 2 (0 self)
Wavelet denoising methods have been proven useful for many one- and two-dimensional problems. Most existing methods can in principle be carried over to three-dimensional problems, such as the
denoising of volumetric positron emission tomography (PET) images, but they may not be sufficiently flexible in allowing some regions of an image to be denoised more aggressively than others. In this
paper, we propose a semi-local paradigm for wavelet denoising. The semi-local paradigm involves the division of an image into suitable blocks, which are then individually denoised. To denoise the
blocks, we use our modification of the generalized cross validation (GCV) technique of Jansen and Bultheel [1] to choose thresholding parameters; we also present risk estimators to guide some of the
other choices involved in the implementation. Experiments with phantom PET images show that the semi-local paradigm provides superior denoising compared to standard application of the GCV technique.
An asymptotic analysis demonstrates that, under some regularity conditions, semi-local denoising is asymptotically consistent on the logarithmic scale. The paper concludes with a discussion on the
nature of semi-local denoising and some topics for future research. Index Terms imaging, logarithmic consistency, positron emission tomography, thresholding EDICS: 2-WAVP. We acknowledge the
following individuals, who provided software, useful information, or helpful discussion:
, 2010
"... SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding
significant features and conducting goodness-of-fit tests for spatially dependent images. The spatial SiZer ..."
SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding
significant features and conducting goodness-of-fit tests for spatially dependent images. The spatial SiZer utilizes a family of kernel estimates of the image and provides not only exploratory data
analysis but also statistical inference with spatial correlation taken into account. It is also capable of comparing the observed image with a specific null model being tested by adjusting the
statistical inference using an assumed covariance structure. Pixel locations having statistically significant differences between the image and a given null model are highlighted by arrows. The
spatial SiZer is compared with the existing independent SiZer via the analysis of simulated data with and without signal on both planar and spherical domains. We apply the spatial SiZer method to the
decadal temperature change over some regions of the Earth.
, 2009
"... Under a general loss function, we develop a hypothesis test to determine whether a significant difference in the spatial forecasts produced by two competing models exists on average across the
entire spatial domain of interest. The null hypothesis is that of no difference, and a spatial loss differe ..."
Under a general loss function, we develop a hypothesis test to determine whether a significant difference in the spatial forecasts produced by two competing models exists on average across the entire
spatial domain of interest. The null hypothesis is that of no difference, and a spatial loss differential is created based on the observed data, the two sets of forecasts, and the loss function
chosen by the researcher. The test assumes only isotropy and short-range spatial dependence of the loss differential but does allow it to be non-Gaussian, non-zero mean, and spatially correlated.
Constant and non-constant spatial trends in the loss differential are treated in two separate cases. Monte Carlo simulations illustrate the size and power properties of this test, and an example
based on daily average wind speeds in Oklahoma is used for illustration. The test is also compared to a wavelet-based method presented by Shen et al. (2002) that is designed to test for a spatial
signal at every location in the domain.
, 2005
"... – is a general procedure for estimating a lower bound for the number of components and for estimating their parameters in an additive regression model. The method consists of a series of steps:
a preliminary step for separating the signal from the background followed by identification of local maxim ..."
– is a general procedure for estimating a lower bound for the number of components and for estimating their parameters in an additive regression model. The method consists of a series of steps: a
preliminary step for separating the signal from the background followed by identification of local maxima up to a noise level-dependent threshold, estimation of the component parameters using an
iterative algorithm, and detection of mixtures of components within one local maximum using hypothesis testing. The leading example is a nuclear magnetic resonance (NMR) experiment for protein
structure determination. After applying a Fourier transform to the NMR signals, NMR frequency data are multiple-peak data, where each peak corresponds to one component in the additive regression
model. In this example, the primary objective is accurate estimation of the location parameters. Key words and phrases: mixture regression model, tensor-product wavelet decomposition, noise
level-dependent threshold, backfitting, mixture detection, nuclear magnetic resonance, protein structure determination.
, 2004
"... A comparative evaluation of wavelet-based methods for hypothesis testing of brain activation maps ..."
A comparative evaluation of wavelet-based methods for hypothesis testing of brain activation maps
Electromagnetic wave
From Citizendium, the Citizens' Compendium
Revision as of 14:10, 14 January 2009
In physics, an electromagnetic wave is a change, periodic in space and time, of an electric field E(r,t) and a magnetic field B(r,t). A stream of electromagnetic waves is referred to as
electromagnetic radiation. Because an electric as well as a magnetic field is involved, the term electromagnetic (EM) is used, a contamination of electric and magnetic. Examples of EM waves in
increasing wavelength are: gamma rays, X-rays, ultraviolet light, visible light, infrared, microwaves, and radio waves. All these waves propagate in vacuum with the same speed c, the speed of light.
The speed of light in air (at standard temperature and pressure) is very close to the speed of light in vacuum (the refractive index of air, n, is 1.0002926, meaning that the speed of electromagnetic
waves in air is c/n ≈ c).
Classically (non-quantum mechanically), EM radiation is produced by accelerating charges, for instance, by the oscillating charge in a radio antenna. Quantum mechanically, EM radiation is emitted
whenever a system in an energetically high state (of energy E[2]) makes a transition to a state of lower energy (E[1]); during this transition a photon (light quantum) of energy proportional to the
difference E[2] - E[1] > 0 is emitted. This is what happens in a fluorescent tube: mercury atoms are brought into an energetically high state by collisions with electrons, and upon subsequent falling
down to their lowest energy state they emit photons.
Electromagnetic waves were predicted on theoretical grounds by James Clerk Maxwell in 1861 and first emitted and received in the laboratory by Heinrich Hertz a quarter century later. The first to see
the applicability for communication purposes was the inventor of radiotelegraphy Guglielmo Marconi (around 1900). Later applications are radio, television, radar, cellular phones, gps, and all kinds
of wireless applications, from remote control of television sets to internet access in public places.
In figure 1 we see a snapshot (i.e., a picture at a certain point in time) of the magnetic and electric fields in adjacent points of space. In each point, the vector E is perpendicular to the vector
B. The wave propagates to the right, along an axis which we conveniently refer to as z-axis. Both E and B are perpendicular to the propagation direction, which is expressed by stating that an
electromagnetic wave is a transverse wave, in contrast to sound waves, which are longitudinal waves (i.e., air molecules vibrate parallel to the propagation direction of the sound).
Assume that the snapshot in figure 1 is taken at time t, then at a certain point z we see an arrow of certain length representing E(z,t) and also a vector B(z,t). At a point in time Δt later, the
same values of E and B (same arrows) are seen at z + c Δt. The arrows seem to have propagated to the right with a speed c.
The time t is fixed and the position z varies in figure 1. Conversely, we can keep the position fixed and imagine what happens if time changes. Focus on a fixed point z, then in progressing time the
two vectors E(z,t) and B(z,t) in the point z, grow to a maximum value, then shrink to zero, become negative, go to a minimum value, and grow again, passing through zero on their way to the same
maximum value. This cycle is repeated indefinitely. When we now plot E and B in the fixed point z as a function of time t, we see the same type (sine-type) function as in figure 1. The number of
times per second that the vectors go through a full cycle is the frequency of the electromagnetic wave.
Periodicity in space means that the EM wave is repeated after a certain distance. This distance, the wavelength is traditionally designated by λ, see figure 1. If we go at a fixed time a distance λ
to the right or to the left we encounter the very same fields E and B.
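Both periodicities can be checked on the standard monochromatic form E(z, t) = E0 sin[k(z - ct)], with wavenumber k = 2*pi/lambda: shifting z by one wavelength, or t by one period T = lambda/c, reproduces the same field value. A small sketch (amplitude and units chosen arbitrarily):

```python
import math

E0 = 1.0                 # amplitude (arbitrary units)
lam = 2.0                # wavelength lambda (arbitrary units)
c = 3.0e8                # speed of light, m/s
k = 2 * math.pi / lam    # wavenumber
T = lam / c              # one temporal period

def E(z, t):
    # Linearly polarized monochromatic wave travelling along +z.
    return E0 * math.sin(k * (z - c * t))

z, t = 0.7, 1.0e-9
# Periodic in space with period lambda, and in time with period T:
assert math.isclose(E(z, t), E(z + lam, t), abs_tol=1e-9)
assert math.isclose(E(z, t), E(z, t + T), abs_tol=1e-9)
```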
Basically, the only property distinguishing different kinds of EM waves is their wavelength, see figure 2. Note the enormous span in wavelengths, from one trillionth of a millimeter for gamma rays (radioactive rays) up to VLF (very low frequency) radio waves of about 100 kilometers.
Frequency of electromagnetic waves
Often EM waves are characterized by their frequency ν, instead of their wavelength λ. If the EM field goes through ν full cycles in a second, where ν is a positive integral number, the field has a
frequency of ν Hz (hertz). The speed of propagation of the EM waves being c, in 1/ν seconds the wave propagates a distance c × (1/ν) meter (according to the formula: distance traveled is speed times
traveling time). The distance covered in 1/ν seconds is by definition the wavelength λ:
$\lambda = \frac{c}{\nu} \quad \Longrightarrow \quad \nu = \frac{c}{\lambda}$
If we express c in m/s then λ is obtained in m. To convert quickly from wavelength to frequency we can approximate c by 3·10^8 m/s.
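A quick numerical check of the conversion, using the approximation c = 3·10^8 m/s:

```python
import math

C = 3.0e8  # speed of light in m/s (the quick approximation above)

def frequency(wavelength_m):
    # nu = c / lambda, in hertz
    return C / wavelength_m

def wavelength(frequency_hz):
    # lambda = c / nu, in metres
    return C / frequency_hz

# Visible light at 6.00e-7 m corresponds to 5.00e14 Hz.
assert math.isclose(frequency(6.00e-7), 5.00e14)
# A 3 GHz microwave has a wavelength of 10 cm.
assert math.isclose(wavelength(3.0e9), 0.1)
```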
As was pointed out above, the wavelengths of the various parts of the EM spectrum differ many orders of magnitude. Furthermore, the sources of the radiations, the interactions with matter, and the
detectors employed differ widely, too. So, it is not surprising that in the past different parts of the spectrum were discovered at different times and that electromagnetic radiation of different
wavelengths is called by different names. In the table and in figure 2 some illustrative values are given for several kinds of EM waves, together with their names.
Some typical values of: wavelength (λ), frequency (ν = c/λ), photon energy (hν), cycle time (T = 1/ν), and inverse wavelength [1/(100⋅λ)].
EM wave λ (m) ν (1/s) hν (J) T (s) 1/λ (cm^−1)
γ-rays 1.00⋅10^−14 3.00⋅10^22 1.98⋅10^−11 3.33⋅10^−23 1.00⋅10^12
X-rays 5.00⋅10^−10 6.00⋅10^17 3.96⋅10^−16 1.67⋅10^−18 2.00⋅10^7
Ultraviolet 2.00⋅10^−7 1.50⋅10^15 9.90⋅10^−19 6.67⋅10^−16 5.00⋅10^4
Visible 6.00⋅10^−7 5.00⋅10^14 3.30⋅10^−19 2.00⋅10^−15 1.67⋅10^4
Infrared 5.00⋅10^−6 6.00⋅10^13 3.96⋅10^−20 1.67⋅10^−14 2.00⋅10^3
Microwave 1.00⋅10^−2 3.00⋅10^10 1.98⋅10^−23 3.33⋅10^−11 1.00
Radio 1.00⋅10^2 3.00⋅10^6 1.98⋅10^−27 3.33⋅10^−7 1.00⋅10^−4
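All columns of the table follow from the wavelength alone via ν = c/λ, E = hν, T = 1/ν and 1/(100λ). A quick sketch recomputing the visible-light row, with the rounded constants c = 3·10^8 m/s and h = 6.6·10^−34 J·s that the table values appear to assume:

```python
# Recompute the "Visible" row of the table from its wavelength alone,
# using rounded constants consistent with the tabulated values.
c = 3.0e8    # m/s
h = 6.6e-34  # J*s

lam = 6.00e-7               # wavelength, m
nu = c / lam                # frequency: 5.00e14 Hz
E = h * nu                  # photon energy: 3.30e-19 J
T = 1.0 / nu                # cycle time: 2.00e-15 s
inv_cm = 1.0 / (100 * lam)  # inverse wavelength: 1.67e4 cm^-1
```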
Monochromatic linearly polarized waves
The wave depicted in figure 1 is monochromatic, i.e., it is characterized by a single wavelength (monochromatic means "of one color". In the visible region, different wavelengths correspond to light
of different colors). It is known that EM waves can be linearly superimposed, which is due to the fact that they are solutions of a linear partial differential equation, the wave equation (see next
section). A linear superposition of waves is a solution of the same wave equation as the waves themselves. Such a superposition is also an electromagnetic wave (a propagating periodic EM field). If
waves of different wavelengths are superimposed, then a non-monochromatic wave is obtained (the term multi-chromatic wave would be apt, but is never used). By means of Fourier analysis a
non-monochromatic wave can be decomposed into its monochromatic components.
The electric field vectors in figure 1 are all in one plane, this is the plane of polarization, and a wave with one fixed polarization plane, is called linearly polarized.
The radiation of many lasers is monochromatic and linearly polarized (at least to a very good approximation).
Relation to Maxwell's equations
In this section it will be shown that the electromagnetic wave depicted in figure 1 is a solution of the Maxwell equations in the vacuum.
We assume that at some distance away from the source of EM waves (a radio transmitter, a laser, gamma radiating nuclei, etc.), there is no charge density ρ and no current density J. For that region
of space, the microscopic (vacuum) Maxwell equations become (in SI units):
$\boldsymbol{\nabla} \cdot \mathbf{B} = 0, \qquad \boldsymbol{\nabla} \cdot \mathbf{E} = 0,$
$\boldsymbol{\nabla} \times \mathbf{B}= \frac{1}{c^2} \frac{\partial \mathbf{E}}{\partial t}, \qquad \boldsymbol{\nabla} \times \mathbf{E}= -\frac{\partial \mathbf{B}}{\partial t}.$
Apply to the last Maxwell equation the following relation, known from vector analysis and valid for any (differentiable) vector field,
$\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{E}) = \boldsymbol{\nabla}(\boldsymbol{\nabla} \cdot \mathbf{E}) - \boldsymbol{\nabla}^2 \mathbf{E}$
and use that ∇ · E = 0, then E satisfies the wave equation,
$\boldsymbol{\nabla}^2 \mathbf{E} = \frac{\partial(\boldsymbol{\nabla}\times \mathbf{B})}{\partial t} = \frac{1}{c^2} \frac{\partial^2 \mathbf{E}}{\partial t^2}.$
Note that the displacement current (time derivative of E) is essential in this equation, if it were absent (zero), the field E would be a static, time-independent, electric field, and there would be
no waves.
In the very same way we derive a wave equation for B,
$\boldsymbol{\nabla}^2 \mathbf{B} = \frac{1}{c^2} \frac{\partial^2 \mathbf{B}}{\partial t^2}.$
Observe that E and B are related by the third and fourth Maxwell equation, which express the fact that a displacement current causes a magnetic field, and a changing magnetic field causes an electric
field (Faraday's law of induction), respectively. So a time-dependent electric field that is not associated with a time-dependent magnetic field cannot exist, and conversely. Indeed, in special
relativity E and cB can be transformed into one another by a Lorentz transformation of the electromagnetic field tensor, which shows their close relationship.
The wave equation is without doubt the most widely studied differential equation in mathematical physics. In figure 1 the electric field depicts a particular solution, with special initial and
boundary conditions. The snapshot that is depicted has the analytic form
$\mathbf{E}(\mathbf{r}, t) = \mathbf{e}_x E_0 \sin\big[k(z-ct)\big]\quad\hbox{with}\quad k \equiv \frac{2\pi}{\lambda}.$
The snapshot is taken at $\, t = 2\pi n /(ck)$ for some arbitrary integer n. We assumed here that the direction of E defines the direction of the x-axis with unit vector e[x] along this axis. The
quantity E[0] is the amplitude of the wave. Insertion of this expression into the left-hand side of the wave equation for E gives
$\boldsymbol{\nabla}^2 \mathbf{E} = \mathbf{e}_x E_0 \frac{\partial^2 \sin\big[k(z-ct)\big]}{\partial z^2} = - k^2 \mathbf{e}_x E_0 \sin\big[k(z-ct)\big].$
Insertion of this expression into the right-hand side of the wave equation for E gives
$\frac{\mathbf{e}_x E_0}{c^2} \frac{\partial^2 \sin\big[k(z-ct)\big]}{\partial t^2} = - \frac{c^2\,k^2}{c^2} \mathbf{e}_x E_0 \sin\big[k(z-ct)\big] = - k^2 \mathbf{e}_x E_0 \sin\big[k(z-ct)\big],$
so that it follows that the special solution, depicted in figure 1, is indeed a solution of the wave equation for E.
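The same verification can be done numerically. The sketch below approximates both sides of the wave equation with central finite differences; the wavelength and the sample point are illustrative choices, not values from the text:

```python
from math import sin, pi

# Check numerically, with central finite differences, that
# E(z,t) = E0 sin[k(z - ct)] satisfies d2E/dz2 = (1/c^2) d2E/dt2.
c = 3.0e8                 # m/s
k = 2 * pi / 0.5          # wave number for an illustrative lambda = 0.5 m
E0 = 1.0

def E(z, t):
    return E0 * sin(k * (z - c * t))

z0, t0 = 0.123, 4.0e-9    # arbitrary space-time point
dz, dt = 1e-4, 1e-13      # small steps for the finite differences

lhs = (E(z0 + dz, t0) - 2 * E(z0, t0) + E(z0 - dz, t0)) / dz**2
rhs = (E(z0, t0 + dt) - 2 * E(z0, t0) + E(z0, t0 - dt)) / (c**2 * dt**2)
# lhs and rhs agree to several digits, and both equal -k^2 E(z0, t0)
```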
We could now proceed in the very same way and solve the wave equation for B, but then we could easily overlook the relation between the two fields. So we rather substitute the solution for E into the
fourth Maxwell equation and use the definition of curl as a determinant,
$\begin{vmatrix} \mathbf{e}_x &\quad \mathbf{e}_y & \mathbf{e}_z \\ \frac{\partial}{\partial x}&\quad \frac{\partial}{\partial y}&\frac{\partial}{\partial z}\\ E_0 \sin\big[k(z-ct)\big] &\quad 0
& \quad 0 \\ \end{vmatrix} = k \mathbf{e}_y E_0 \cos\big[k(z-ct)\big] = - \frac{\partial \mathbf{B}}{\partial t}.$
It is easy to see that
$\mathbf{B}(\mathbf{r},t) = \mathbf{e}_y B_0 \sin\big[k(z-ct)\big] \quad \hbox{with}\quad B_0 \equiv \frac{E_0}{c}$
is a solution of this equation. It follows that E and B are perpendicular (along the x-axis and y-axis, respectively) and are in phase. That is, E and B are simultaneously zero and attain
simultaneously their maximum and minimum. The fact that (in vacuum) the amplitude of B is a factor c smaller than that of E is due to the use of SI units, in which the amplitudes have different
dimensions. In Gaussian units this is not the case and E[0] = B[0].
Energy and energy flow in electromagnetic field
Relation between power densities
In this section the following balance of power (energy per unit time) densities will be discussed:
$\boldsymbol{\nabla}\cdot \mathbf{S} + \frac{\partial \mathcal{E}_\textrm{Field}}{\partial t} = - \mathcal{E}_\textrm{Joule} \qquad\qquad\qquad(1)$
\begin{align}
\mathbf{S} &= \mathbf{E} \times \mathbf{H} \equiv \mathbf{E} \times \frac{1}{\mu_0}\mathbf{B} \qquad\mathrm{(the\,\,Poynting\,\,vector)} \\
\mathcal{E}_\textrm{Field} &= \frac{1}{2} \big( \mathbf{E}\cdot\mathbf{D}+ \mathbf{B}\cdot\mathbf{H}\big) \equiv \frac{1}{2} \Big( \epsilon_0 \mathbf{E}\cdot\mathbf{E}+ \frac{1}{\mu_0}\mathbf{B}\cdot\mathbf{B}\Big) \\
\mathcal{E}_\textrm{Joule} &= \mathbf{E}\cdot \mathbf{J}
\end{align}
The terms in equation (1) have dimension W/m^3 (watt per volume). The quantity $\scriptstyle -\mathcal{E}_\textrm{Joule}\, >\, 0$ represents the rate at which energy is produced per unit volume by
ordinary Joulean (resistance) heating. The quantity $\scriptstyle \mathcal{E}_\textrm{Field}$ is the energy density of the EM field and the time derivative is the rate of increase (power density).
The vector S, Poynting's vector, is the power flux, the amount of energy crossing unit area perpendicular to the vector, per unit time.
Multiplying the terms in equation (1) by a volume ΔV and a time span Δt, both small enough that the terms in the equation may be assumed to be constant over the volume and the time span, the equation
represents the conservation of energy: the energy generated in ΔVΔt (the right hand side) is equal to the increase in energy in ΔV plus the net flow of energy leaving ΔV. Hence, when this equation is
multiplied by ΔVΔt it is an equation of continuity for energy.
The intensity I of an electromagnetic wave is by definition the modulus of the Poynting vector, the amount of energy carried by the wave across a unit surface in unit time [dimension: volt×ampere/
meter^2 = joule/(second×m^2) = W/m^2],
$I \equiv |\mathbf{E} \times \mathbf{H}| = \Big[ (\mathbf{E}\cdot\mathbf{E})\,(\mathbf{H}\cdot\mathbf{H})- (\mathbf{E}\cdot\mathbf{H})\,(\mathbf{E}\cdot\mathbf{H})\Big]^{1/2}.$
Use E ⋅ H = 0 and the fact that in vacuum (SI units) the field magnitudes satisfy
$|\mathbf{H}(\mathbf{r},t)| = c\,\epsilon_0\, |\mathbf{E}(\mathbf{r},t)|,$
to obtain
$I = c\,\epsilon_0\, |\mathbf{E}(\mathbf{r},t)|^2.$
Often I is time-averaged over a complete cycle. Since the integral of cos^2 and sin^2 over a cycle is 1/2, we get for the time-averaged intensity:
$\bar{I} = \frac{1}{2} c\,\epsilon_0\, E_0^2,$
where E[0] is the amplitude of the electric field.
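The time-averaged formula can be inverted to estimate the field amplitude from a measured intensity. The ~1 kW/m² solar figure below is an assumption for the illustration, not a value from the text:

```python
from math import sqrt

# Invert I_bar = (1/2) c eps0 E0^2 to obtain the field amplitude from
# an intensity; roughly the solar intensity at the ground is assumed.
c = 2.998e8        # m/s
eps0 = 8.854e-12   # F/m, electric constant
I_bar = 1.0e3      # W/m^2 (illustrative)

E0 = sqrt(2 * I_bar / (c * eps0))   # about 870 V/m
```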
Clearly, the cycle averaged intensity of a wave going through vacuum is constant, independent of the direction of propagation, z. This is because in vacuum the wave does not lose energy to the
medium. In a medium this may be different, $\,\bar{I}\,$ may decrease because of energy loss. The Lambert-Beer law states in that case
$\frac{d\bar{I}(z)}{dz} = - k \bar{I}(z) \quad\Longrightarrow\quad \bar{I}(z) = \bar{I}_0 e^{-kz}.$
Hence the electromagnetic wave is damped by the factor exp[− kz/2]. This damping gives rise to an imaginary component of the index of refraction.
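A small numeric sketch of the Lambert-Beer damping; the absorption coefficient and path length below are hypothetical illustrative values:

```python
from math import exp

# Lambert-Beer damping of the cycle-averaged intensity.
I0 = 1.0   # incident averaged intensity (arbitrary normalization)
k = 0.5    # absorption coefficient, 1/m (hypothetical)
z = 2.0    # path length, m (hypothetical)

I_z = I0 * exp(-k * z)      # intensity after distance z
E_ratio = exp(-k * z / 2)   # field amplitude is damped by exp(-kz/2)
```

Note that the squared amplitude damping reproduces the intensity damping, as the text states.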
Derivation of relation between power densities
Equation (1), the balance of power, will be proved. Recall from elementary electricity theory the laws of Joule and Ohm. They state that the amount of energy W per unit time, produced by a conduction
current I, is equal to
$W = I^2\, R = I\,V, \quad\hbox{with}\quad R,\, W > 0 ,$
where R is the resistance and V a voltage difference.
Assuming that the current flows along z, we introduce the current density J[z], and using
$V = - E_z \Delta z \quad\hbox{and}\quad I = J_z \Delta x \Delta y$
we obtain
$W = - E_z J_z\, \Delta x\Delta y\Delta z .$
We could continue discussing the system with the small volume $\Delta x\Delta y\Delta z$. However, because all terms in the equations would be multiplied by the same volume, it is more convenient to
consider densities and to divide out the volume. Nevertheless, we still refer to the system. Thus, we define
$\mathcal{E}_\textrm{Joule} \equiv -\frac{W}{ \Delta x\Delta y\Delta z} = \mathbf{E}\cdot \mathbf{J} < 0.$
The negative quantity $\scriptstyle \mathcal{E}_\textrm{Joule}$ is the loss of energy of the system per unit time and per unit volume (according to Joule's and Ohm's laws). One may look upon the
quantity $\scriptstyle -\mathcal{E}_\textrm{Joule}$ as the work (per unit time and unit volume) done by the Lorentz force $\scriptstyle q (\mathbf{v}\times \mathbf{B} + \mathbf{E})$ on the moving
particles constituting the current density J. Since $\scriptstyle \mathbf{v}\cdot (\mathbf{v}\times \mathbf{B}) = 0$, this work depends only on the electric field E.
Apply one of Maxwell's equations:
$\mathbf{E}\cdot \mathbf{J} = \mathbf{E}\cdot\left( - \epsilon_0 \frac{\partial \mathbf{E}}{\partial t} + \frac{1}{\mu_0} \boldsymbol{\nabla}\times \mathbf{B} \right)$
Use a rule known from vector analysis and apply another one of Maxwell's equations,
\begin{align}
\mathbf{E}\cdot(\boldsymbol{\nabla}\times \mathbf{B}) &= \mathbf{B}\cdot(\boldsymbol{\nabla}\times \mathbf{E}) - \boldsymbol{\nabla}\cdot(\mathbf{E} \times \mathbf{B}) \\
&= -\mathbf{B}\cdot\left(\frac{\partial \mathbf{B}}{\partial t}\right) - \boldsymbol{\nabla}\cdot(\mathbf{E} \times \mathbf{B})
\end{align}
Define the energy density of the electromagnetic field,
$\mathcal{E}_\textrm{Field} \equiv \frac{1}{2}\left( \epsilon_0 E^2 + \frac{1}{\mu_0} B^2 \right) = \frac{1}{2}\left( \mathbf{E}\cdot\mathbf{D} + \mathbf{B}\cdot\mathbf{H} \right)$
with
$\mathbf{D}\equiv \epsilon_0\mathbf{E}\quad\hbox{and}\quad\mathbf{B} \equiv \mu_0\mathbf{H},$
where ε[0] is the electric constant and μ[0] the magnetic constant of the vacuum. Define also
$\mathbf{S} \equiv \frac{1}{\mu_0} \mathbf{E}\times\mathbf{B} = \mathbf{E}\times\mathbf{H}$
where S is the Poynting vector, named after John Henry Poynting. This vector is perpendicular to the plane of E and B and, by the right-hand rule, it points in the direction of propagation of the EM
wave. The divergence of the Poynting vector is the energy flow associated with the electromagnetic wave, i.e., with the pair E(r,t) and B(r,t). By definition ∇·S gives the flow leaving the system and
− ∇·S gives the flow entering the system. The total energy balance becomes
$-\mathcal{E}_\textrm{Joule} = \frac{\partial \mathcal{E}_\textrm{Field}}{\partial t} + \boldsymbol{\nabla}\cdot \mathbf{S}.$
Here we have found an example of the conservation of energy, known as Poynting's theorem: the energy produced per unit time according to Joule's law, $\scriptstyle -\mathcal{E}_\textrm{Joule}$, is equal to the rate of increase of the electromagnetic energy of the system, $\scriptstyle \mathcal{E}_\textrm{Field}$, plus the flow of EM radiation ∇·S leaving the system.
If there is no current, J = 0, then
$\frac{\partial \mathcal{E}_\textrm{Field}}{\partial t} = - \boldsymbol{\nabla}\cdot \mathbf{S},$
which is the continuity equation. The increase of field energy per unit time is the flow of radiation energy into the system.
As an example we give an order-of-magnitude calculation of an electromagnetic energy density. Consider to that end a radio station transmitting at a power P. We compute the energy density at a distance R from the station. First we must assume what shape the waves emitted by the antenna have: are they spherical or cylindrical? We choose the latter and call the cylinder height z.
Further it is assumed that power density is homogeneous and that all power crosses the cylindrical walls, that is, power crossing the top and bottom of the cylinder is assumed to be zero. Also no
absorption by the atmosphere or the Earth will occur. When a steady state is reached (some time after the beginning of the transmission), the time derivative of $\scriptstyle \mathcal{E}_\mathrm
{Field}$ vanishes. The energy density at a distance R becomes constant in time,
$\mathcal{E}_\mathrm{Field}(R) = \frac{P} {2\pi R z c} \quad \mathrm{J/m^3},$
where c is the speed of propagation of the radio signal (the speed of light, ≈ 3·10^8 m/s).^[1] To give a numerical example: P = 100 kW, R = 5 km (about 3 miles), z = 50 m, then $\scriptstyle \mathcal{E}_\mathrm{Field}$ = 2.1·10^−10 J/m^3.
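The numerical example can be reproduced directly from the formula, with the same assumptions as above:

```python
from math import pi

# Energy density at distance R from a cylindrically radiating station:
# E_field = P / (2 pi R z c), with the values of the example.
P = 100e3   # transmitted power, W
R = 5e3     # distance, m
z = 50.0    # cylinder height, m
c = 3.0e8   # propagation speed, m/s

energy_density = P / (2 * pi * R * z * c)   # about 2.1e-10 J/m^3
```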
Fourier expansions of the fields
The quantization of the EM fields leads to photons, light quanta of well-defined energy and momentum. In field quantization a key role is played by the Fourier expansion of the different vector
fields. Hence, we will now discuss the Fourier expansions of the fields E, B, and A. It will be seen that the expansion of the vector potential A yields immediately the expansions of the fields E and
Fourier expansion of a vector field
For an arbitrary real vector field F its Fourier expansion is the following:
$\mathbf{F}(\mathbf{r}, t) = \sum_\mathbf{k} \left( \mathbf{f}_k(t) e^{i\mathbf{k}\cdot\mathbf{r}} + \bar{\mathbf{f}}_k(t) e^{-i\mathbf{k}\cdot\mathbf{r}} \right)$
where the bar indicates complex conjugation. Such an expansion, labeled by a discrete (countable) set of vectors k, is always possible when F satisfies periodic boundary conditions, i.e., F(r + p,t)
= F(r,t) for some finite vector p. To impose such boundary conditions, it is common to consider EM waves as if they are in a virtual box of finite volume V. Waves on opposite walls of the box are
enforced to have the same value (usually zero). Note that the waves are not restricted to the box: the box is replicated an infinite number of times in x, y, and z direction.
Vector potential and its expansion
The magnetic field B is a transverse field and hence can be written as
$\mathbf{B}(\mathbf{r}, t) = \boldsymbol{\nabla}\times \mathbf{A}(\mathbf{r}, t),$
in which the vector potential A is introduced. Also the electric field E is transverse, because earlier we assumed absence of charge distributions. The electric field E also follows from A,
$\mathbf{E}(\mathbf{r}, t) = - \frac{\partial \mathbf{A}(\mathbf{r}, t)}{\partial t}.$
The fact that E can be written this way is due to the choice of Coulomb gauge for A:
$\boldsymbol{\nabla}\cdot\mathbf{A}(\mathbf{r}, t) = 0.$
By definition, a choice of gauge does not affect any measurable properties (the best known example of a choice of gauge is the fixing of the zero of an electric potential, for instance at infinity).
The Coulomb gauge makes A transverse as well, and clearly A is in the same plane as E. (The time differentiation does not affect direction.) So, the vector fields A, B, and E are all in a plane
perpendicular to the propagation direction and can be written in terms of e[x] and e[y] (in the definition of figure 1). It is more convenient to choose complex unit vectors:
$\mathbf{e}^{(1)} \equiv \frac{-1}{\sqrt{2}}(\mathbf{e}_x + i \mathbf{e}_y)\quad\hbox{and}\quad\mathbf{e}^{(-1)} \equiv \frac{1}{\sqrt{2}}(\mathbf{e}_x - i \mathbf{e}_y)$
which are orthonormal,
$\mathbf{e}^{(\mu)}\cdot\bar{\mathbf{e}}^{(\mu')} = \delta_{\mu,\mu'}\quad\hbox{with}\quad\mu,\mu'= 1,\, -1.$
The Fourier expansion of the vector potential reads
$\mathbf{A}(\mathbf{r}, t) = \sum_\mathbf{k}\sum_{\mu=-1,1} \left( \mathbf{e}^{(\mu)}(\mathbf{k}) a^{(\mu)}_\mathbf{k}(t) \, e^{i\mathbf{k}\cdot\mathbf{r}} + \bar{\mathbf{e}}^{(\mu)}(\mathbf{k})
\bar{a}^{(\mu)}_\mathbf{k}(t) \, e^{-i\mathbf{k}\cdot\mathbf{r}} \right).$
The vector potential obeys the wave equation. The substitution of the Fourier series of A into the wave equation yields for the individual terms,
$-k^2 a^{(\mu)}_\mathbf{k}(t) = \frac{1}{c^2} \frac{\partial^2 a^{(\mu)}_\mathbf{k}(t)}{\partial t^2} \quad \Longrightarrow \quad a^{(\mu)}_\mathbf{k}(t) \propto e^{-i\omega t}\quad\hbox{with}\
quad\omega = kc.$
It is now an easy matter to construct the corresponding Fourier expansions for E and B from the expansion of the vector potential A. For instance, the expansion for E follows from differentiation
with respect to time,
$\mathbf{E}(\mathbf{r}, t) = i\sum_\mathbf{k}\sum_{\mu=-1,1} \omega \left( \mathbf{e}^{(\mu)}(\mathbf{k}) a^{(\mu)}_\mathbf{k}(t) \, e^{i\mathbf{k}\cdot\mathbf{r}} - \bar{\mathbf{e}}^{(\mu)}(\
mathbf{k}) \bar{a}^{(\mu)}_\mathbf{k}(t) \, e^{-i\mathbf{k}\cdot\mathbf{r}} \right)$
Fourier-expanded energy
The electromagnetic energy density $\scriptstyle \mathcal{E}_\mathrm{Field}$, defined earlier in this article, can be expressed in terms of the Fourier coefficients. We define the total energy
(classical Hamiltonian) of a finite volume V by
$H = \iiint_V \mathcal{E}_\mathrm{Field}(\mathbf{r},t) \mathrm{d}^3\mathbf{r}.$
The classical Hamiltonian in terms of Fourier coefficients takes the form
$H = V\epsilon_0 \sum_\mathbf{k}\sum_{\mu=1,-1} \omega^2 \big(\bar{a}^{(\mu)}_\mathbf{k}(t)a^{(\mu)}_\mathbf{k}(t)+ a^{(\mu)}_\mathbf{k}(t)\bar{a}^{(\mu)}_\mathbf{k}(t)\big)$
$\omega \equiv |\mathbf{k}|c = 2\pi\nu = 2\pi \frac{c}{\lambda}$
and ε[0] is the electric constant. The two terms in the summand of H are identical (the factors commute) and may be summed. However, after quantization (interpretation of the expansion coefficients as operators) the factors no longer commute, and according to quantum mechanical rules one must depart from the symmetrized classical Hamiltonian.
Fourier-expanded momentum
The electromagnetic momentum, P[EM], of EM radiation enclosed by a volume V is proportional to an integral of the Poynting vector (see above). In SI units:
$\mathbf{P}_\textrm{EM} \equiv \frac{1}{c^2} \iiint_V \mathbf{S}\, \textrm{d}^3\mathbf{r} = \epsilon_0 \iiint_V \mathbf{E}(\mathbf{r},t)\times \mathbf{B}(\mathbf{r},t)\, \textrm{d}^3\mathbf{r}.$
Quantization of the electromagnetic field
Einstein postulated in 1905 that an electromagnetic field consists of energy parcels (light quanta, later called photons) of energy hν, where h is Planck's constant. In 1927 Paul A. M. Dirac was able
to fit the photon concept into the framework of the new quantum mechanics. He applied a technique which is now generally called second quantization,^[2] although this term is somewhat of a misnomer
for EM fields, because they are, after all, solutions of the classical Maxwell equations, and it is the first time that they are quantized.
Second quantization
Second quantization starts with an expansion of a field in a basis consisting of a complete set of functions. The coefficients multiplying the basis functions are then interpreted as operators and
(anti)commutation relations between these new operators are imposed, commutation relations for bosons and anticommutation relations for fermions (nothing happens to the basis functions themselves).
By doing this, the expanded field is converted into a fermion or boson operator field. The expansion coefficients have become creation and annihilation operators: a creation operator creates a
particle in the corresponding basis function and an annihilation operator annihilates a particle in this function.
In the case of EM fields the required expansion of the field is the Fourier expansion.
Quantization of EM field
The best known example of quantization is the replacement of the time-dependent linear momentum of a particle by the rule
$\mathbf{p}(t) \rightarrow -i\hbar\boldsymbol{\nabla}$.
Note that Planck's constant is introduced here and that time disappears (in the so-called Schrödinger picture).
For the EM field we do something similar and apply the quantization rules:
\begin{align}
a^{(\mu)}_\mathbf{k}(t)\, &\rightarrow\, \sqrt{\frac{\hbar}{2 \omega V\epsilon_0}}\, a^{(\mu)}(\mathbf{k}) \\
\bar{a}^{(\mu)}_\mathbf{k}(t)\, &\rightarrow\, \sqrt{\frac{\hbar}{2 \omega V\epsilon_0}}\, {a^\dagger}^{(\mu)}(\mathbf{k})
\end{align}
subject to the boson commutation relations
\begin{align}
\big[ a^{(\mu)}(\mathbf{k}),\, a^{(\mu')}(\mathbf{k}') \big] &= 0 \\
\big[{a^\dagger}^{(\mu)}(\mathbf{k}),\, {a^\dagger}^{(\mu')}(\mathbf{k}')\big] &= 0 \\
\big[a^{(\mu)}(\mathbf{k}),\,{a^\dagger}^{(\mu')}(\mathbf{k}')\big] &= \delta_{\mathbf{k},\mathbf{k}'}\, \delta_{\mu,\mu'}.
\end{align}
The square brackets indicate a commutator, defined by
$\big[ A, B\big] \equiv AB - BA$
for any two quantum mechanical operators A and B.
The quantized fields (operator fields) are the following
\begin{align}
\mathbf{A}(\mathbf{r}) &= \sum_{\mathbf{k},\mu} \sqrt{\frac{\hbar}{2 \omega V\epsilon_0}} \left(\mathbf{e}^{(\mu)} a^{(\mu)}(\mathbf{k}) e^{i\mathbf{k}\cdot\mathbf{r}} + \bar{\mathbf{e}}^{(\mu)} {a^\dagger}^{(\mu)}(\mathbf{k}) e^{-i\mathbf{k}\cdot\mathbf{r}} \right) \\
\mathbf{E}(\mathbf{r}) &= i\sum_{\mathbf{k},\mu} \sqrt{\frac{\hbar\omega}{2 V\epsilon_0}} \left(\mathbf{e}^{(\mu)} a^{(\mu)}(\mathbf{k}) e^{i\mathbf{k}\cdot\mathbf{r}} - \bar{\mathbf{e}}^{(\mu)} {a^\dagger}^{(\mu)}(\mathbf{k}) e^{-i\mathbf{k}\cdot\mathbf{r}} \right) \\
\mathbf{B}(\mathbf{r}) &= i\sum_{\mathbf{k},\mu} \sqrt{\frac{\hbar}{2 \omega V\epsilon_0}} \left((\mathbf{k}\times\mathbf{e}^{(\mu)}) a^{(\mu)}(\mathbf{k}) e^{i\mathbf{k}\cdot\mathbf{r}} - (\mathbf{k}\times\bar{\mathbf{e}}^{(\mu)}) {a^\dagger}^{(\mu)}(\mathbf{k}) e^{-i\mathbf{k}\cdot\mathbf{r}} \right),
\end{align}
where ω = c |k| = ck.
Hamiltonian of the field
Substitution of the operators into the classical Hamiltonian gives the Hamilton operator of the EM field
\begin{align} H &= \frac{1}{2}\sum_{\mathbf{k},\mu=-1,1} \hbar \omega \Big({a^\dagger}^{(\mu)}(\mathbf{k})\,a^{(\mu)}(\mathbf{k}) + a^{(\mu)}(\mathbf{k})\,{a^\dagger}^{(\mu)}(\mathbf{k})\Big) \\
&= \sum_{\mathbf{k},\mu} \hbar \omega \Big({a^\dagger}^{(\mu)}(\mathbf{k})a^{(\mu)}(\mathbf{k}) + \frac{1}{2}\Big) \end{align}
By the use of the commutation relations the second line follows from the first. Note that $\hbar\omega = h\nu =\hbar c |\mathbf{k}|$, which is the well-known Einstein expression for photon energy.
Remember that ω depends on k, even though it is not explicit in the notation. The notation ω(k) could have been introduced, but is not common.
Digression: harmonic oscillator
The second quantized treatment of the one-dimensional quantum harmonic oscillator is a well-known topic in quantum mechanical courses. We digress and say a few words about it. The harmonic oscillator
Hamiltonian has the form
$H = \hbar \omega \big( a^\dagger a + \frac{1}{2} \big)$
where ω ≡ 2πν is the fundamental frequency of the oscillator. The ground state of the oscillator is designated by | 0 > and is referred to as the vacuum state. It can be shown that $\scriptstyle a^\dagger$ is an excitation operator: it excites from an n-fold excited state to an (n+1)-fold excited state:
$a^\dagger |n \rangle = |n+1 \rangle \sqrt{n+1} \quad\hbox{in particular}\quad a^\dagger |0 \rangle = |1 \rangle \quad\hbox{and}\quad (a^\dagger)^n |0\rangle \propto |n\rangle.$
Since harmonic oscillator energies are equidistant, the n-fold excited state | n > can be looked upon as a single state containing n particles (sometimes called vibrons) all of energy hν. These
particles are bosons. For obvious reasons the excitation operator $a^\dagger$ is called a creation operator.
From the commutation relation it follows that the Hermitian adjoint $\scriptstyle a$ de-excites:
$a |n \rangle = |n-1 \rangle \sqrt{n} \quad\hbox{in particular}\quad a |0 \rangle \propto 0 \rightarrow a |0 \rangle = 0,$
because a function times the number 0 is the zero function. For obvious reasons the de-excitation operator $\,a$ is called an annihilation operator.
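These ladder-operator relations can be made concrete with explicit matrices in a truncated Fock basis. This is a standard construction, sketched here only to illustrate the relations above; the truncation size N is arbitrary:

```python
from math import sqrt, isclose

# Ladder operators of the harmonic oscillator as matrices in the
# truncated Fock basis |0>, ..., |N-1>.
# a_dag[m][n] = <m| a_dag |n> = sqrt(n+1) delta_{m,n+1}
N = 6
a_dag = [[sqrt(n + 1) if m == n + 1 else 0.0 for n in range(N)]
         for m in range(N)]                              # creation
a = [[a_dag[n][m] for n in range(N)] for m in range(N)]  # its adjoint

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

num = matmul(a_dag, a)   # number operator a_dag a: diagonal 0, 1, 2, ...
```

Because the basis is truncated, the commutator [a, a†] equals the identity except in the last basis state, a well-known artifact of the finite matrix representation.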
Suppose now we have a number of non-interacting (independent) one-dimensional harmonic oscillators, each with its own fundamental frequency ω[i]. Because the oscillators are independent, the
Hamiltonian is a simple sum:
$H = \sum_i \hbar\omega_i \Big(a^\dagger(i) a(i) +\frac{1}{2} \Big).$
Making the substitution
$i \rightarrow (\mathbf{k}, \mu)$
we see that the Hamiltonian of the EM field can be looked upon as a Hamiltonian of independent oscillators of frequency ω = |k|c, oscillating along direction e^(μ) with μ = 1, −1.
Photon energy
The quantized EM field has a vacuum (no photons) state | 0 >. The application of, say,
$\big( {a^\dagger}^{(\mu)}(\mathbf{k}) \big)^m \, \big( {a^\dagger}^{(\mu')}(\mathbf{k}') \big)^n \, |0\rangle \propto \Big| m^{(\mu)}(\mathbf{k}), \, n^{(\mu')}(\mathbf{k}') \, \Big\rangle,$
gives a quantum state of m photons in mode (μ, k) and n photons in mode (μ', k'). We use the proportionality symbol because the state on the right-hand side is not normalized to unity.
We can shift the zero of energy and rewrite the Hamiltonian as
$H= \sum_{\mathbf{k},\mu} \hbar \omega N^{(\mu)}(\mathbf{k}) \quad\hbox{with}\quad N^{(\mu)}(\mathbf{k}) \equiv {a^\dagger}^{(\mu)}(\mathbf{k})a^{(\mu)}(\mathbf{k})$
The operator $N^{(\mu)}(\mathbf{k})$ is the number operator. When acting on a quantum mechanical photon state, it returns the number of photons in mode (μ, k). Such a photon state is an eigenstate of
the number operator. This is why the formalism described here, is often referred to as the occupation number representation. The effect of H on a single-photon state is
$H \left({a^\dagger}^{(\mu)}(\mathbf{k}) \,|0\rangle\right) = \hbar\omega \left( {a^\dagger}^{(\mu)}(\mathbf{k}) \,|0\rangle\right).$
Apparently, the single-photon state is an eigenstate of H and $\hbar \omega = h\nu$ is the corresponding energy.
Example photon density
In an earlier example we introduced a radio station and calculated the electromagnetic energy density that the station creates in its environment; at 5 km from the station this was 2.1·10^−10 J/m^3.
Let us now see if we need quantum mechanics to describe the broadcasting of this station.
The classical approximation to EM radiation is good when the number of photons is much larger than unity in the volume
$\left(\frac{\lambda}{2\pi}\right)^3 ,$
where λ is the length of the radio waves. In that case quantum fluctuations are negligible and cannot be heard.
Suppose the radio station broadcasts at ν = 100 MHz, then it is sending out photons with an energy content of νh = 1·10^8× 6.6·10^−34 = 6.6·10^−26 J, where h is Planck's constant. The wavelength of
the station is λ = c/ν = 3 m, so that λ/(2π) = 48 cm and the volume is 0.111 m^3. The energy content of this volume element is 2.1·10^−10 × 0.111 = 2.3 ·10^−11 J, which amounts to
3.5·10^14 photons per $\left (\frac{\lambda}{2\pi}\right)^3$.
Obviously, 3.5·10^14 is much larger than one, and hence quantum effects do not play a role; the waves emitted by this station are well into the classical limit, even when it plays non-classical music, for instance by Led Zeppelin.
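The estimate can be redone in code; with the rounded constants used above, the count comes out near 3.5·10^14 photons per volume element, vastly more than one:

```python
from math import pi

# Photon count per (lambda/2pi)^3 for the 100 MHz station at 5 km,
# using the rounded constants of the example.
h = 6.6e-34        # J*s
c = 3.0e8          # m/s
nu = 1.0e8         # broadcast frequency, Hz
u_field = 2.1e-10  # J/m^3, energy density from the earlier example

photon_energy = h * nu               # 6.6e-26 J
volume = (c / nu / (2 * pi)) ** 3    # (lambda/2pi)^3, about 0.109 m^3
n_photons = u_field * volume / photon_energy   # about 3.5e14 >> 1
```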
Photon momentum
Introducing the operator expansions for E and B into the classical form
$\mathbf{P}_\textrm{EM} = \epsilon_0 \iiint_V \mathbf{E}(\mathbf{r},t)\times \mathbf{B}(\mathbf{r},t)\, \textrm{d}^3\mathbf{r},$
gives
$\mathbf{P}_\textrm{EM} = \sum_{\mathbf{k},\mu} \hbar \mathbf{k} \Big({a^\dagger}^{(\mu)}(\mathbf{k})a^{(\mu)}(\mathbf{k}) + \frac{1}{2}\Big) = \sum_{\mathbf{k},\mu} \hbar \mathbf{k}\, N^{(\mu)}(\mathbf{k}).$
The 1/2 that appears can be dropped because when we sum over the allowed k, k cancels with −k. The effect of P[EM] on a single-photon state is
$\mathbf{P}_\textrm{EM} \left({a^\dagger}^{(\mu)}(\mathbf{k}) \,|0\rangle \right) = \hbar\mathbf{k} \left( {a^\dagger}^{(\mu)}(\mathbf{k}) \,|0\rangle\right).$
Apparently, the single-photon state is an eigenstate of the momentum operator, and $\hbar \mathbf{k}$ is the eigenvalue (the momentum of a single photon).
Photon mass
The photon having non-zero linear momentum, one could imagine that it has a non-vanishing rest mass m[0], which is its mass at zero speed. However, we will now show that this is not the case: m[0] = 0.
Since the photon propagates with the speed of light, special relativity is called for. The relativistic expressions for energy and momentum squared are,
$E^2 = \frac{m_0^2 c^4}{1-v^2/c^2}, \quad p^2 = \frac{m_0^2 v^2}{1-v^2/c^2}.$
From p^2/E^2,
$\frac{v^2}{c^2} = \frac{c^2p^2}{E^2} \quad\Longrightarrow\quad E^2= \frac{m_0^2c^4}{1 - c^2p^2/E^2} \quad\Longrightarrow\quad m_0^2 c^4 = E^2 - c^2p^2.$
For the photon,
$E^2 = \hbar^2 \omega^2\quad\mathrm{and}\quad p^2 = \hbar^2 k^2 = \frac{\hbar^2 \omega^2}{c^2},$
and it follows that
$m_0^2 c^4 = E^2 - c^2p^2 = \hbar^2 \omega^2 - c^2 \frac{\hbar^2 \omega^2}{c^2} = 0,$
so that m[0] = 0.
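The cancellation E² − c²p² = 0 holds for any frequency; a short numeric check, where the value of ω is an arbitrary choice:

```python
# Numeric check that E^2 - c^2 p^2 vanishes for a photon.
hbar = 1.0545718e-34   # J*s
c = 2.998e8            # m/s
omega = 3.7e15         # rad/s, an arbitrary angular frequency

E = hbar * omega       # photon energy
p = hbar * omega / c   # photon momentum
m0_sq_c4 = E**2 - (c * p)**2   # zero up to floating-point rounding
```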
Photon spin
The photon can be assigned a triplet spin with spin quantum number S = 1. This is similar to, say, the nuclear spin of the ^14N isotope, but with the important difference that the state with M[S] = 0 is absent; only the states with M[S] = ±1 are non-zero.
We define spin operators:
$S_z \equiv -i\hbar\Big( \mathbf{e}_{x} \mathbf{e}_{y} - \mathbf{e}_{y} \mathbf{e}_{x}\Big) \quad\hbox{and cyclically}\quad x\rightarrow y \rightarrow z \rightarrow x.$
The products between the unit vectors on the right-hand side are dyadic products. The unit vectors are perpendicular to the propagation direction k (the direction of the z axis, which is the spin
quantization axis).
The spin operators satisfy the usual angular momentum commutation relations
$[S_x, \, S_y] = i \hbar S_z \quad\hbox{and cyclically}\quad x\rightarrow y \rightarrow z \rightarrow x.$
$[S_x, \, S_y] = -\hbar^2 \Big( \mathbf{e}_{y} \mathbf{e}_{z} - \mathbf{e}_{z} \mathbf{e}_{y}\Big) \cdot \Big( \mathbf{e}_{z} \mathbf{e}_{x} - \mathbf{e}_{x} \mathbf{e}_{z}\Big) + \hbar^2 \Big( \mathbf{e}_{z} \mathbf{e}_{x} - \mathbf{e}_{x} \mathbf{e}_{z}\Big) \cdot \Big( \mathbf{e}_{y} \mathbf{e}_{z} - \mathbf{e}_{z} \mathbf{e}_{y}\Big) = i\hbar \Big[ -i\hbar \big(\mathbf{e}_{x} \mathbf{e}_{y} - \mathbf{e}_{y} \mathbf{e}_{x}\big)\Big] = i\hbar S_z.$
Define states
$| \mathbf{k}, \mu \rangle \equiv {a^\dagger}^{(\mu)}(\mathbf{k}) \,|0\rangle \leftrightarrow \mathbf{e}^{(\mu)} e^{i\mathbf{k}\cdot \mathbf{r}}.$
By inspection it follows that
$-i\hbar\Big( \mathbf{e}_{x} \mathbf{e}_{y} - \mathbf{e}_{y} \mathbf{e}_{x}\Big)\cdot \mathbf{e}^{(\mu)} = \mu \mathbf{e}^{(\mu)}, \quad \mu=1,-1,$
and correspondingly we see that μ labels the photon spin,
$S_z | \mathbf{k}, \mu \rangle = \mu | \mathbf{k}, \mu \rangle,\quad \mu=1,-1.$
Because the vector potential A is a transverse field, the photon has no forward (μ = 0) spin component.
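The commutation relation and the spin labels μ = ±1 can be verified by a small computation (ħ = 1). The circular polarization vectors used below as S_z eigenvectors, e^(±1) = (e_x ± i e_y)/√2, are the standard choice and an assumption not spelled out above:

```python
import math

def outer(u, v):  # dyadic product of two 3-vectors
    return [[a * b for b in v] for a in u]

def sub(A, B):
    return [[x - y for x, y in zip(r, s)] for r, s in zip(A, B)]

def scale(c, A):
    return [[c * x for x in r] for r in A]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

ex, ey, ez = [1, 0, 0], [0, 1, 0], [0, 0, 1]
Sx = scale(-1j, sub(outer(ey, ez), outer(ez, ey)))
Sy = scale(-1j, sub(outer(ez, ex), outer(ex, ez)))
Sz = scale(-1j, sub(outer(ex, ey), outer(ey, ex)))

comm = sub(matmul(Sx, Sy), matmul(Sy, Sx))  # [S_x, S_y]
print(comm == scale(1j, Sz))                # True: [S_x, S_y] = i S_z

r = 1 / math.sqrt(2)
e_plus = [r, 1j * r, 0]                     # mu = +1 circular polarization
print(matvec(Sz, e_plus) == e_plus)         # True: S_z e = +1 e
```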
1. ↑ P is the integral of $\scriptstyle -\mathcal{E}_\textrm{Joule}$ = −E⋅J over the volume of the cylinder; $\scriptstyle (2\pi R z c)\,\mathcal{E}_\mathrm{Field}$ is the integral of the Poynting
vector S over the surface of the cylinder.
2. ↑ The name derives from the second quantization of quantum mechanical wave functions. Such a wave function is a scalar field: the "Schrödinger field" and can be quantized in the very same way as
EM fields. Since a wave function is derived from a "first" quantized Hamiltonian, the quantization of the Schrödinger field is the second time quantization is performed, hence the name.
External link
ISO 21348 Definitions of Solar Irradiance Spectral Categories
Posts about pracma on Xi'an's Og
Another number theory Le Monde mathematical puzzle:
Find 2≤n≤50 such that the sequence {1,…,n} can be permuted into a sequence such that the sum of two consecutive terms is a prime number.
Now this is a problem with an R code solution:
library(pracma)  # for isprime()
N=50; uplim=10^4; foundsol=TRUE
while (foundsol){
  t=0; noseq=TRUE
  while ((t<uplim)&&(noseq)){
    t=t+1; randseq=sample(1:N)
    noseq=any(!isprime(randseq[-1]+randseq[-N]))}
  if (!noseq){ foundsol=FALSE; lastsol=randseq}else{ N=N-1}}
which returns the solution as
> N
[1] 12
> lastsol
[1] 6 7 12 11 8 5 2 1 4 3 10 9
and so it seems there is no solution beyond N=12…
However, reading the solution in the next edition of Le Monde, the authors claim there are solutions up to 50. I wonder why the crude search above fails so suddenly, between 12 and 13! So instead I
tried a recursive program that exploits the fact that subchains are also verifying the same property:
if (length(ens)==2){
if (nut$find){
if (isprime(but+tut[1]))
And I ran the R code for N=13,14,…
> stop=TRUE
> while (stop){
+ a=findord(1:N)
+ stop=!(a$find)}
until I reached N=20 for which the R code would not return a solution. Maybe the next step would be to store solutions in N before moving to N+1. This is just getting me too far from a mere Saturday
afternoon break.
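For the record, a self-contained backtracking version of the same search (in Python rather than R) confirms that chains exist past N=12; fixing 1 as the first element is a simplification, since a full search would try every starting value:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_chain(n):
    """Permutation of 1..n with every consecutive sum prime, or None."""
    def extend(chain, remaining):
        if not remaining:
            return chain
        for x in sorted(remaining):
            if is_prime(chain[-1] + x):
                found = extend(chain + [x], remaining - {x})
                if found:
                    return found
        return None
    return extend([1], set(range(2, n + 1)))

for n in (12, 13, 20):
    print(n, prime_chain(n))
```

For these sizes the depth-first search returns almost instantly, which suggests the random-permutation search above fails past N=12 simply because valid orderings become too rare to hit by chance, not because they stop existing.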
Introduction to Electrical Engineering
Robert Page Ward
Prentice-Hall, 1952 - Electrical engineering - 412 pages
Introductory 1
The beginnings 2
The electrical nature of matter 5
Miquon, PA SAT Math Tutor
Find a Miquon, PA SAT Math Tutor
...Most notably, I have worked with students who have special needs in learning phonics, particularly students who have Autism, dyslexia, and language processing difficulties. I have been playing
chess since I was in first grade. I was part of my school's elementary chess league for three years, went to chess camp for two years, and was the second best chess player in my sixth grade
32 Subjects: including SAT math, English, reading, writing
...I have learned a great deal from my students in this process! My tutoring focuses on a solid understanding of the material and a consistent and methodical approach to problem-solving, with
special attention paid to a good foundation in mathematical methods. I am a native German-speaker, and have been working for several years as a German-to-English translator.
21 Subjects: including SAT math, reading, physics, writing
...Mason --> PS: I have two Ivy League degrees - Columbia BA and Wharton MBA - and have been an instructor for the SAT, GMAT and LSAT for the Princeton Review. Personally, I missed 5 questions on
the SAT, 3 questions on the GMAT, received a perfect score on the LSAT and scored in the 99th perce...
23 Subjects: including SAT math, reading, English, writing
...The math section of the SAT tests students' math skills learned through grade 12. I have a college degree in mathematics. I have successfully passed the GREs (to get into graduate school) as well as the Praxis II content knowledge test for mathematics.
16 Subjects: including SAT math, English, physics, calculus
...Students find me friendly and supportive, and I have innovative ways, based on each student's learning style, to help them understand each topic in math. I can provide references from
satisfied clients. My bachelor's degree in is math, and I have a master's of education (M.Ed) from Temple University.
22 Subjects: including SAT math, calculus, writing, geometry
3. One angle in triangle PTR is thirty-one degrees. The difference between the measures of the other two angles is sixty-three degrees. What is the measure of each angle in triangle PTR?
4. The measures of two angles in a triangle are in the ratio of 1:2. The third angle is 54
Areas of Triangles
The most common formula for finding the area of a triangle is K = ½ bh, where K is the area of the triangle, b is the base of the triangle, and h is the height. (The letter K is used for the area of
the triangle to avoid confusion when using the letter A to name an angle of a triangle.) Three additional categories of area formulas are useful.
Two sides and the included angle (SAS): Given Δ ABC (Figure 1), the height is given by h = c sin A. Therefore,

K = ½ bc sin A, K = ½ ac sin B, K = ½ ab sin C
Figure 1
Reference triangles for area formulas.
Two angles and a side (AAS) or (ASA): Using the Law of Sines and substituting in the preceding three formulas leads to the following formulas:

K = a² sin B sin C / (2 sin A), K = b² sin A sin C / (2 sin B), K = c² sin A sin B / (2 sin C)
Three sides (SSS): A famous Greek philosopher and mathematician, Heron (or Hero), developed a formula that calculates the area of a triangle given only the lengths of the three sides. This is known as Heron's formula. If a, b, and c are the lengths of the three sides of a triangle, and s = ½(a + b + c) is the semiperimeter, then

K = √[s(s − a)(s − b)(s − c)]

One of many proofs of Heron's formula starts out with the Law of Cosines: c² = a² + b² − 2ab cos C.
Example 1: (SAS) As shown in Figure 2 , two sides of a triangle have measures of 25 and 12. The measure of the included angle is 51° Find the area of the triangle.
Figure 2
Drawing for Example 1.
Use the SAS formula: K = ½(25)(12) sin 51° ≈ 116.6
Example 2: (AAS and ASA) Find the area of the triangle shown in Figure 3 .
Figure 3
Drawing for Example 2.
First find the measure of the third angle of the triangle since all three angles are used in the area formula.
Example 3: (AAS orASA) Find the area of an equilateral triangle with a perimeter of 78.
If the perimeter of an equilateral triangle is 78, then the measure of each side is 26. The nontrigonometric solution of this problem yields an answer of K = (26² √3)/4 = 169√3 ≈ 292.7. The trigonometric solution, K = ½(26)(26) sin 60° = 169√3, yields the same answer.
Example 4: (SSS) Find the area of a triangle if its sides measure 31, 44, and 60.
Use Heron's formula: s = ½(31 + 44 + 60) = 67.5, so K = √(67.5 · 36.5 · 23.5 · 7.5) ≈ 659.0
Heron's formula does not use trigonometric functions directly, but trigonometric functions were used in the development and proof of the formula.
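The formulas above are easy to check numerically; this short sketch (the helper names are our own, not notation from the text) reproduces Examples 1 and 4:

```python
import math

def area_sas(b, c, A_deg):
    """Two sides and the included angle: K = (1/2) b c sin A."""
    return 0.5 * b * c * math.sin(math.radians(A_deg))

def area_sss(a, b, c):
    """Heron's formula: K = sqrt(s(s-a)(s-b)(s-c)), s the semiperimeter."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(round(area_sas(25, 12, 51), 1))  # Example 1: 116.6
print(round(area_sss(31, 44, 60), 1))  # Example 4: 659.0
```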
ACPS (July 20, 2001) Questions for Drug Transfer into Breast Milk:
1. Is it important to estimate and/or to determine the amount of drug (and/or its significant metabolites) in breast milk?
a) For what type of drugs, is information on the extent of drug transfer into breast milk needed?
b) When such information is needed, when is it appropriate to estimate or collect the data from non-clinical (such as animal studies, in vitro studies) and/or clinical studies?
c) What parameters can be used to assess the safety risk presented to breastfed infants by drugs predicted to or demonstrated to transfer into breast milk?
2. For drugs that are primarily transferred into milk by diffusion, equations (such as log phase distribution model) incorporating drug characteristics (such as pKa, Log P, protein binding) and
distribution of drug in milk lipids, are shown to be predictive of drug milk to plasma (M/P) ratios. The M/P values are ultimately used to predict the amount of drug in breast milk.
a) Would the Panel find utilization of a model such as log phase distribution model acceptable as a first estimate for predicting the extent of drug transfer into milk for all drugs?
b) What percent of drugs, an approximation, are transferred into breast milk by processes other than diffusion?
c) What can be considered as reliable screen(s) to identify the potential of a drug to be actively transported into milk?
3. The approach followed in calculating M/P values, for example using area-under-the-curve (AUC) ratios versus single-point ratios, can influence the accuracy of this estimate. The accuracy of the M/P value is important because this value is further utilized to calculate the amount of drug in breast milk.
a) What are the advantages and limitations of using M/P values (calculated based on AUC ratios) to estimate the extent of drug transferred into breast milk?
b) Are there other acceptable approaches/methods for determining the extent of drug transferred into milk? Would milk drug concentration data alone (instead of both milk and plasma drug concentration
data) be satisfactory to determine the extent of drug transferred into milk?
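As a purely illustrative sketch of the AUC-based calculation referred to in question 3a, the trapezoid-rule computation below uses invented concentration-time data; none of the numbers come from the document:

```python
def auc_trapezoid(times, conc):
    """Area under the concentration-time curve by the trapezoid rule."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))

times = [0, 1, 2, 4, 8, 12]               # hours after dose
plasma = [0.0, 2.0, 1.6, 1.0, 0.4, 0.1]   # hypothetical conc., mg/L
milk = [0.0, 1.0, 1.4, 1.1, 0.5, 0.2]     # hypothetical conc., mg/L

mp_ratio = auc_trapezoid(times, milk) / auc_trapezoid(times, plasma)
print(round(mp_ratio, 2))  # 0.96
```

A single-point ratio taken at, say, t = 1 would give 1.0/2.0 = 0.5 here, illustrating why the two approaches can disagree.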
The inductive approach to verifying cryptographic protocols
Lawrence C. Paulson
February 1998, 46 pages
Informal arguments that cryptographic protocols are secure can be made rigorous using inductive definitions. The approach is based on ordinary predicate calculus and copes with infinite-state
systems. Proofs are generated using Isabelle/HOL. The human effort required to analyze a protocol can be as little as a week or two, yielding a proof script that takes a few minutes to run.
Protocols are inductively defined as sets of traces. A trace is a list of communication events, perhaps comprising many interleaved protocol runs. Protocol descriptions incorporate attacks and
accidental losses. The model spy knows some private keys and can forge messages using components decrypted from previous traffic. Three protocols are analyzed below: Otway-Rees (which uses shared-key
encryption), Needham-Schroeder (which uses public-key encryption), and a recursive protocol (which is of variable length).
One can prove that event ev always precedes event ev′ or that property P holds provided X remains secret. Properties can be proved from the viewpoint of the various principals: say, if A receives a
final message from B then the session key it conveys is good.
Full text
PDF (0.3 MB)
PS (0.2 MB)
BibTeX record
@TechReport{UCAM-CL-TR-443,
  author      = {Paulson, Lawrence C.},
  title       = {{The inductive approach to verifying cryptographic
                  protocols}},
  year        = 1998,
  month       = feb,
  url         = {http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-443.pdf},
  institution = {University of Cambridge, Computer Laboratory},
  number      = {UCAM-CL-TR-443}
}
The Rose Mathematics Seminar - History
This page gives a history of the talks in the Rose Mathematics Seminar, which was started in 1994-95 under the name of the Applied Math Seminar. Some years later it was expanded to include all of mathematics, with a suitable name change. The speakers, titles, and abstracts are listed below, with the later years first. The current year's schedule of talks is given on the current seminar page.
Speakers, Titles, and Abstracts
2012-13 (latest first)
• Topic: Building a Better Runge-Kutta Method
• Speaker: Dr. David Goulet
• Date: 15 May 2013
• Abstract: Runge-Kutta methods are a common and effective means of numerically approximating the solutions to ordinary differential equations. The idea underlying these methods was spawned at the
end of the 19th century, but high quality methods have emerged only in the past few decades. To build a high quality RK method, researchers have drawn on ideas from Calculus, Differential
Equations, Graph Theory, Combinatorics, Complex Analysis, Geometry, Continuous Optimization, Control Theory, and other areas of mathematics and computer science. This talk will review basic RK
methods, often taught in Calculus and DE classes, before delving into the study of more sophisticated methods. Many areas of mathematics will be applied to create methods which have been
optimized for different classes of ODE problems. In this context, practical implementations, such as those used by Matlab and Mathematica, will be discussed. Throughout the talk, RK method design
principles will be highlighted, culminating in the creation of pedagogical examples of simple but accurate methods with good stability properties, and built-in error controls. The talk will
conclude with contemporary research directions in the design of RK methods, including parallelized methods, methods optimized for advection equation solvers, and generalizations known as
Rosenbrock and W methods.
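As a reminder of the kind of basic method the abstract mentions as a starting point, here is the classical fourth-order Runge-Kutta step (an illustration of ours, not material from the talk):

```python
def rk4_step(f, t, y, h):
    """Advance y' = f(t, y) one step of size h with classical RK4."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Solve y' = y, y(0) = 1 on [0, 1]; the exact answer is e = 2.71828...
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
print(y)  # agrees with e to about six digits
```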
• Topic: Graphs and DNA
• Speaker: Larry Langley, University of the Pacific
• Date: 08 May 2013
• Abstract: During his early work in microbiology in the 1950s, Seymour Benzer examined the topology of DNA through mutant bacteriophages. The data gained through his experiments was consistent
with a hypothesis that genetic material was linearly structured. The results and analysis of his experiments provide the foundation for much of the research in graph theory regarding interval
graphs. This talk will look at Benzer's work, the corresponding theory of interval graphs as well as present some more recent work on generalizations of interval graphs.
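To make the interval-graph model concrete, here is a minimal sketch (our own illustration, not the speaker's) that builds the intersection graph of a set of closed intervals:

```python
def interval_graph(intervals):
    """Return adjacency sets: i ~ j iff intervals i and j overlap."""
    n = len(intervals)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            (a1, b1), (a2, b2) = intervals[i], intervals[j]
            if a1 <= b2 and a2 <= b1:  # closed intervals intersect
                adj[i].add(j)
                adj[j].add(i)
    return adj

g = interval_graph([(0, 2), (1, 4), (3, 5), (6, 7)])
print(g)  # {0: {1}, 1: {0, 2}, 2: {1}, 3: set()}
```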
• Topic: A New Information-Splitting Image Analysis Technique
• Speaker: Mark Inlow
• Date: 01 May 2013
• Abstract: : Important insights into various brain diseases, Alzheimer’s for example, can be obtained by correlating changes in the brain with genetic information. Detecting such correlations is
complicated by the size and complexity of the data. An MRI image of a subject’s brain may consist of over 200,000 picture elements and his/her genetic data may consist of 500,000 or more pieces
of information (single nucleotide polymorphisms). For such data current brain image analysis methods based on Gaussian random field theory are inadequate for various reasons. We present research
on new methods which are based on a simple geometric property of the t-statistic. Thus although preliminary results indicate these methods are superior to random field methods, the theory behind
them is straightforward, requiring little beyond introductory probability and statistics.
• Topic: Robust Analysis of Metabolic Pathways: Engineering, Biology, and Math
• Speaker: Allen Holder
• Date: 27 Mar 2013
• Abstract: We show how topics in engineering design can aid problems in the biological sciences, and in reverse, how the engineering fields can gain from the biological application. We
particularly focus on robust optimization, which has been used in several engineering fields to support optimal designs in which parameters are uncertain. We review a couple of classic examples
to highlight the central modeling themes. We then adapt the robust paradigm to a popular problem in computational biology called flux balance analysis (FBA). Previous FBA models have been linear
or quadratic and have assumed a static relationship between a cell's environment and its growth rate. This assumption is doubtful, and we extend the static model to a robust counterpart that
accounts for the inherent uncertainty in individual variation. The robust model advances traditional FBA's validity with regard to its scientific goals since it removes the menacing shortcoming of
ignoring dynamics. The biological setting leads naturally to questions about if, and if so how, solutions to robust models converge to their static counterparts as uncertain parameters become
certain. One of these results argues that static solutions are robust solutions if the variation is appropriately restricted. With regard to engineering designs, this means that optimal designs
created under static conditions are indeed robust under some restricted set of parametric variation. Many of the engineering models are solved efficiently with modern second-order cone solvers.
However, these solvers have been unsuccessful at solving the robust FBA models. The exact reason for this failure is unknown, and we are working to enhance the numerical stability of the
optimizers. We will point to some of our suspicions about why the solvers have been unfaithful in the biological setting. If we are successful in rectifying the numerics, then the engineering
applications will gain more trustworthy solvers. Thankfully, we can re-model the robust FBA problem to make use of a different solver, which has proven itself worthy of the computational task.
• Topic: Pairs of Pants and the Congruence Laws of Geometry
• Speaker: Allen Broughton
• Date: 30 Jan 2013
• Abstract: Many of us know that a torus can be constructed by gluing together the opposite ends of a parallelogram. Different parallelograms yield geometrically different surfaces. For surfaces of
higher genus, with more holes, the surface can be constructed by gluing together hexagons with six right angles (yes that can happen in hyperbolic geometry). Then all possible surfaces arise from
the gluing of some set of hexagons. The hexagons are from a "pairs of pants" decomposition of the surface which is the big idea of this talk. Understanding the possible constructions depends on
the following proposed Congruence Law in Hyperbolic Geometry: If two right-angled hexagons have three corresponding sides of equal length then they are congruent. The talk will explain all
concepts from the ground up. The proposed congruence law will be related to the familiar side-side-side and side-angle side congruence theorems from high school geometry.
• Topic: Network-based Quantitative Analysis of Crossword Puzzle Difficulty
• Speaker: John McSweeney
• Date: 16 Jan 2013
• Abstract: What distinguishes a crossword puzzle from a simple list of trivia questions is the interlocking nature of the answers in the grid -- one solution can promote further ones in a
cascading fashion. To model this mathematically, we build a network object from a puzzle: answers in the puzzle are nodes in the network, and nodes are linked via an edge if the corresponding
answers cross. Each node also has a state, "solved" or "unsolved", that depends dynamically on the states of its neighbors. Motivated by analogous issues which arise in epidemiological analyses
of structured populations, we consider the following general questions: what features of the distribution of the difficulties of the clues, and of the structure of the crossword network,
determine whether a puzzle can be fully (or nearly fully) solved? Are impediments to full solution typically due to puzzle structure or clue difficulty? I will present rigorous results for
certain puzzles with a high degree of symmetry, as well as simulation-based analyses of "real-world" puzzles from the Sunday New York Times.
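A toy version of the cascading dynamics described above can be sketched as follows; the solve-once-one-neighbor-is-solved rule and the tiny grid are invented for illustration and are much simpler than the speaker's model:

```python
def cascade(neighbors, initially_solved):
    """Repeatedly solve any answer that crosses an already-solved answer."""
    solved = set(initially_solved)
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbors.items():
            if node not in solved and solved & nbrs:
                solved.add(node)
                changed = True
    return solved

# Five answers; edges record which answers cross in the grid.
grid = {
    "1A": {"1D", "2D"}, "2A": {"1D", "2D"},
    "1D": {"1A", "2A"}, "2D": {"1A", "2A"},
    "3A": set(),  # crosses nothing, so it never benefits from the cascade
}
print(sorted(cascade(grid, {"1A"})))  # ['1A', '1D', '2A', '2D']
```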
• Topic: Challenging Problems in Computational Biochemistry
• Speaker: Yosi Shibberu
• Date: 19 Dec 2012
• Abstract: Biochemistry is a rich source of important computational problems that should be of interest to mathematicians, computer scientists and engineers. The dramatic drop in the cost of
sequencing DNA as well as progress in several structural genomics initiatives have created many new and exciting opportunities. I will begin with a review of elementary concepts in biology and
biochemistry and then describe recent progress reported in the literature on solving the grand challenge problem of biochemistry - the protein folding problem. Protein molecules are the
workhorses of life. An efficient solution to the protein folding problem will dramatically improve our ability to identify the function of individual proteins and go a long way towards enabling
us to design proteins with new functions. I conclude with an update of ongoing research conducted in collaboration with Mark Brandt in Chemistry and Biochemistry on characterizing structural
changes believed to occur in the estrogen receptor, a protein that plays an important role in breast cancer.
• Topic: Discrete Optimization Problems at NASA Langley Research Center
• Speaker: Rex Kincaid, Visiting Scholar from College of William and Mary
• Date: 12 Sep 2012
• Abstract: An overview of several discrete optimization problems of interest to NASA Langley Research Center will be presented. Applications include placement of actuators for noise control in
turboprops, locating truss elements for vibration control in space structures, optimizing network metrics for air transportation, and scheduling runway configuration changes at airports.
2011-12 (latest first)
• Topic: Synthetic Biology: Collaboration Required
• Speaker: Ric Anthony, Applied Biology and Biomedical Engineering
• Date: 09 May 2012
• Abstract: Synthetic biology is, essentially, the engineering of life. Ultimately, the discipline aims to utilize engineering principles and practices to rationally and systematically design and
build novel biological systems. This emerging discipline has the potential to provide solutions to diverse problems in the areas of medicine, energy, materials, security and sustainability, among
others. The purpose of this presentation is to explore some current applications of synthetic biology and to elucidate opportunities for interdisciplinary collaboration in this exciting new
• Topic: Projective Planes and Graph Theory
• Speaker: Gabriela Araujo-Pardo, National University of Mexico
• Date: 02 May 2012
• Abstract: We will discuss relationships between two areas of discrete mathematics: finite geometry and graph theory. Specifically, we investigate properties of the projective planes and their
impact on the solution of problems in extremal graph theory and graph colorings.
• Topic: Who Painted this Painting - A Project in Image Registration?
• Speaker: Allen Broughton
• Date: 25 Apr 2012
• Abstract: An art collector possesses a "painting of unknown origin" which he suspects was painted by a "very famous painter". The only possible evidence is a portion of the painting in an old
photograph taken in the painter's studio. Is there a way to match the two, such as matching fingerprints? A proposed technique is to match the portion to a modern image of the painting. Since the two images are taken by two different cameras, you just can't compare the two photos directly. This problem is called image registration, which is an optimization problem in multi-dimensional camera orientation space. This problem will be discussed in two parts. First we set up the problem in mathematical terms. In a later talk we discuss the computer implementation.
• Topic: What's a data algebra and how do you build one?"
• Speaker: Gary Sherman, Algebraix Data
• Date: 18 Apr 2012
• Abstract: Ask n people the question "What's data?" and the cardinality of the set of responses is better approximated by n than by one. Any self respecting mathematician is puzzled by this ---
denizens of data-world, not so much. Indeed, ever since E. F. Codd's 1970 paper, A Relational Model of Data for Large Shared Data Banks (Comm. ACM, Vol. 13, No. 6, pp. 377-387) gave rise to the
Relational Data Model (RDM), the data-world's solution to this congenital ambiguity has been to exacerbate it by conflating the data, whatever it is, with some prejudicial visual artifice (tables
in the case of the RDM); i.e., by confusing the message with the paper it's written on --- so to speak. What is worse, each new artifice comes equipped with a brief, supposedly-mathematical
incantation to justify the trip down a new rabbit hole. This talk discusses Algebraic Data Corporation's approach to knowing data in the context of Zermelo-Fraenkel set theory, the foundation for
all modern mathematics and, therefore, the only legitimate incantation to use when invoking the good name of mathematics. Indeed, our incantation births a rigorous notion of data algebra in plain
sight of the RDM and its mongrel spawn, Structured Query Language (SQL).
• Topic: Domination Densities of Stacks and Generalized Stacks ?
• Speaker: Jake Wildstrom, University of Louisville Mathematics
• Date: 11 Apr 2012
• Abstract: The Cartesian product of a graph G and a large path Pn can be thought of as a "stack" of copies of G. The domination numbers of such stacks are asymptotically linear in n, and the
coefficient of linearity can be thought of as the "density" of the domination. This parameter will be explored in this talk, with bounds developed in terms of the traditional and total domination
numbers, with emphasis on calculating the domination densities of specific graphs and bounding the densities of graph Mycielskians. A generalized variant of the stack domination density will also
be presented, where the underlying stack topology is nonlinear.
• Topic: A Mathematical Model for the Baking Process – A Phenomenological Approach
• Speaker: Andrew Harris, Rose Student, Rose Mathematics Major
• Date: 28 Mar 2012
• Abstract: In this talk, I will present a mathematical model for the baking process of a cake and/or bread. The model is based on basic physical principles including diffusion, elasticity, and
thermodynamics. I will explain the modeling process from these first principles to partial differential equations. The final model then consists of a coupled system of seven nonlinear partial
differential equations that specify the temperature, moisture content, vapor content, pressure, and deformation in the dough. This is solved numerically to produce a reasonable representation of
the baking process.
For more information see Announcement/Abstract/Paper in PDF form
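As a toy illustration of the kind of building block such a model rests on (far simpler than the seven coupled equations described, and with made-up parameters), here is explicit finite differencing of 1-D heat diffusion:

```python
def heat_step(u, alpha, dx, dt):
    """One explicit Euler step of u_t = alpha * u_xx with fixed endpoints."""
    r = alpha * dt / dx**2  # must stay below 1/2 for stability
    return [u[0]] + [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

# Dough interior at 20 C between oven-temperature boundaries at 180 C.
u = [180.0] + [20.0] * 9 + [180.0]
for _ in range(200):  # 200 one-minute steps
    u = heat_step(u, alpha=1e-7, dx=0.01, dt=60.0)
print(round(u[5], 1))  # centre temperature, well between 20 and 180
```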
• Topic: Word Graphs
• Speaker: Tanya Jajcay
• Date: 21 Mar 2012
• Abstract: In the talk, we will introduce graphs (automata) that are closely related to inverse semigroups (as Cayley graphs are related to groups). Inverse semigroups can be viewed as natural
generalization of groups: As every group can be represented via permutations (one-to-one transformations), inverse semigroups can be represented via partial one-to-one transformations. We will
consider actions of special groups on the graph related to an inverse semigroup, which will allow us to answer some structural questions about the original semigroup. In specific cases, automata
are in general infinite, but they contain a very strong finiteness - finite core - in the sense that all important information about the automaton is encoded in a "finite" subgraph. This will
allow us to answer some algorithmic questions (the Word Problem, for instance) as well. We will also address the question of languages recognized by these automata.
For more information see Announcement/Abstract/Paper in PDF form
• Topic: Singular Perturbations and Reductive Asymptotics via the Renormalization Group - Part II
• Speaker: Vin Isaia
• Date: 14 Mar 2012
• Abstract: Forced oscillators (like those describing a musical instrument: simple versions are the van der Pol oscillator and Duffing's equation) and fluid boundary interactions (e.g.
Stokes-Oseen) give rise to ODEs (and PDEs) which exhibit behavior that may require care in establishing. Such problems share the distinction of having a regularization parameter epsilon which
tends to be small in practice, and interesting behavior can arise in the limit as epsilon approaches zero. This talk will show how RG ideas sidestep the inherent difficulties of approximating
solutions to these problems with a systematic approach rather than ad hoc methods. Two examples with boundary layers and a WKB problem will all be handled by the same RG approach, and it will
capture a transcendentally small term ("beyond all orders"), correctly generate an asymptotic expansion without a priori information about the form of the solution and derive a WKB approximation.
The viewpoint will then shift in an attempt to completely bypass the solution, so that the correct asymptotic expansion can be generated directly from the ODE itself, which tends to be a much
simpler process. The result is a proof of asymptotic validity of the RG expansion for sufficiently general ODEs on infinite domains, which will make use of the notion of almost periodic functions.
• Topic: Optimization Problems in Computational Biology
• Speaker: Allen Holder
• Date: 07 Mar 2012
• Abstract: Computational Biology is a burgeoning field of study that applies mathematics and computer science to answer questions in biology. We will discuss recent work on two problems. 1) One of
the primary studies in computational biology lies with identifying protein structure and function, and one way to assess a protein's function is to associate it with similar proteins whose
function is already known. We develop a model for pairwise comparisons using fundamental topics in linear algebra. We show that we can solve our optimization model in polynomial time with a
single application of dynamic programming. Recent computational results show that our mathematical model is stable with regards to data perturbations due to experimental errors and/or protein
dynamics. 2) Another common study in computational science is the study of whole-cell network interactions. We will discuss the metabolic network and the use of flux balance analysis (FBA) to
establish a cell's metabolic state. The traditional linear models have been successful at identifying such things as lethal gene knockouts. The linear model assumes that every cell's metabolism
is similar to the average metabolism, which is a questionable assumption. We will re-model the problem stochastically and show that the new model can be solved efficiently as a second-order cone program.
• Topic: Wheel of Misfortune
• Speaker: Nicole Burton, Grange Insurance Company
• Date: 06 Feb 2012
• Abstract: The "Wheel of Misfortune" is a metaphor for the risks that individuals experience through the course of their daily lives. Just by driving a car or owning a home, people are exposed to
"pure risk" and may take action such as purchasing insurance to eliminate this risk. We will simulate the act of buying and selling insurance to avoid spinning this metaphorical "wheel of
misfortune" and learn why it is so important for insurance companies to price their insurance products accurately. I will also describe typical projects for a property and casualty actuary and
why students might be interested in a career as an actuary
• Topic: Singular Perturbations and Reductive Asymptotics via the Renormalization Group - Part I
• Speaker: Vin Isaia
• Date: 01 Feb 2012
• Abstract: Forced oscillators (like those describing a musical instrument: simple versions are the Van Der Pol oscillator and Duffing's equation) and fluid boundary interactions (e.g.
Stokes-Oseen) give rise to ODEs (and PDEs) which exhibit behavior that may require care in establishing. Such problems share the distinction of having a regularization parameter epsilon which
tends to be small in practice, and interesting behavior can arise in the limit as epsilon approaches zero. This talk will show how RG ideas sidestep the inherent difficulties of approximating
solutions to these problems with a systematic approach rather than ad hoc methods. Two examples with boundary layers and a WKB problem will all be handled by the same RG approach, and it will
capture a transcendentally small term ("beyond all orders"), correctly generate an asymptotic expansion without a priori information about the form of the solution and derive a WKB approximation.
The viewpoint will then shift in an attempt to completely bypass the solution, so that the correct asymptotic expansion can be generated directly from the ODE itself, which tends to be a much
simpler process. The result is a proof of asymptotic validity of the RG expansion for sufficiently general ODEs on infinite domains, which will make use of the notion of almost periodic functions.
• Topic:
• Speaker: Mark Ward, Purdue University, Statistics
• Date: 25 Jan 2012
• Abstract: We survey some of the ways that symbolic methods can be used for counting the number of large objects of various types, or for finding the average, variance, distribution, etc., of
large randomly generated objects. Examples include integer compositions and partitions, set partitions, permutations, sequences, trees, words, etc. We discuss the basics of probabilistic,
combinatorial, and analytic techniques for the analysis of algorithms and data structures. All of the discussion will be given from a very elementary level. Anybody who understands Taylor series
will be able to comprehend the discussion. The talk can be viewed as an invitation to learn about the methods of Analytic Combinatorics, as in Philippe Flajolet and Robert Sedgewick's 2009 book.
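As a tiny, self-contained taste of this kind of counting (a toy example, not taken from the talk): the compositions of n can be counted by dynamic programming and checked against the closed form 2^(n-1) that falls out of the generating function x/(1 - 2x).

```python
# Count integer compositions of n (ordered sequences of positive parts
# summing to n) by dynamic programming over the size of the last part.

def compositions(n):
    c = [0] * (n + 1)
    c[0] = 1                                           # the empty composition
    for m in range(1, n + 1):
        c[m] = sum(c[m - k] for k in range(1, m + 1))  # last part has size k
    return c[n]

print([compositions(n) for n in range(1, 9)])  # doubling: 2^(n-1)
```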
• Topic: Introduction to the Mathematics of Optical Tomography
• Speaker: Joe Eichholz
• Date: 18 Jan 2012
• Abstract: Optical tomography (OT) is an emerging class of biomedical imaging techniques in which volumetric information about the target object is computed from measurements of light emitted and
scattered from the object. This class of modalities offers several benefits over other techniques; they are regarded as safe for the patient, have the potential for very high resolution, have
potential for functional imaging, and are (relatively) portable and inexpensive. This talk provides a high level overview of OT and a description of the mathematics and computational science that
arise when recovering volumetric information from measurements of emitted light. Topics include inverse problems, high performance computing, partial differential equations, and optimization.
Details of some recent work with undergraduates may be presented as time permits.
• Topic: Holditch's Theorem, Ambiguous Bicycle Tracks and Floating Bodies
• Speaker: David Finn
• Date: 11 Jan 2012
• Abstract: A curious result of Rev. Hamnet Holditch, the President of Caius College, which was published in 1858 in the Quarterly Journal of Pure and Applied Mathematics (a one page paper), states
that if a chord of a closed curve is divided into two parts of length a and b respectively, the difference of the area of the closed curve and of the locus of the dividing point (as the chord is
moved along the given closed curve) will be πab.
For more information see Announcement/Abstract/Paper in PDF form
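Holditch's conclusion can be checked numerically in the simplest case, a circle (an illustrative sketch; the radius and chord lengths below are arbitrary choices):

```python
import math

# Numerical check of Holditch's theorem on a circle: slide a chord of fixed
# length a + b around a circle of radius R and trace the point that divides
# the chord into pieces of length a and b.  The theorem predicts
#     (area of circle) - (area of locus) = pi * a * b.

def holditch_locus_area(R, a, b, n=20000):
    c = a + b
    phi = 2.0 * math.asin(c / (2.0 * R))       # central angle subtended by the chord
    pts = []
    for k in range(n):
        t = 2.0 * math.pi * k / n
        x1, y1 = R * math.cos(t), R * math.sin(t)
        x2, y2 = R * math.cos(t + phi), R * math.sin(t + phi)
        s = a / c                              # fraction along the chord
        pts.append((x1 + s * (x2 - x1), y1 + s * (y2 - y1)))
    area = 0.0                                 # shoelace formula for the locus area
    for i in range(n):
        x0, y0 = pts[i]
        x1_, y1_ = pts[(i + 1) % n]
        area += x0 * y1_ - x1_ * y0
    return abs(area) / 2.0

R, a, b = 3.0, 1.0, 2.0
diff = math.pi * R * R - holditch_locus_area(R, a, b)
print(diff, math.pi * a * b)   # both close to 2*pi
```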
• Topic: From Chaos to Ellipse: Using Eigenvalues and Eigenvectors to Explain Phenomenon
• Speaker: Charles Van Loan, Cornell University - SIAM Distinguished Lecturer
• Date: 14 Dec 2011
• Abstract: Let P(0) be a given random polygon in the plane. Let P(k+1) be obtained from P(k) by connecting P(k)'s side midpoints in order and then normalizing the vertex vectors x and y so that
they each have unit 2-norm length. Why is it that the vertices of P(k) converge to an ellipse having a 45-degree tilt? A simple eigenvalue/singular value analysis explains it all and carries
with it a story about computational science and engineering and its connection to mathematics. Wednesday, November 9, 2011, 10th period, 4:20 p.m. – 5:10 p.m., Rm. E-104. Bio: Charles Van Loan, Professor of Computer Science and John C. Ford Professor of Engineering at Cornell University and a leading authority on computational matrix algebra, is a Society for Industrial and Applied Mathematics Visiting Lecturer this year. He is co-author of the well-known reference "Matrix Computations" (with Gene Golub). Dr. Van Loan will be visiting Rose-Hulman on Wednesday November 9th. He'll be on campus much of the day to speak with those who have interests in computing science and engineering and to consult on a soon-to-be-proposed major in Computational Science at Rose-Hulman. He will be delivering a lecture during 10th hour in E-104. The subject of his talk became a paper in SIAM Review but is accessible to those with a basic matrix algebra background. In addition, he states that "the topic is also a metaphor for computational science and engineering plus it speaks to the teaching/research connection." Please contact Jeff Leader, Professor of Mathematics, at leader@rose-hulman.edu if you have any questions.
• Topic: Mathematical Modeling of Ocular Blood Flow and Its Relation to Glaucoma
• Speaker: Giovanna Guidoboni, IUPU Mathematics
• Date: 07 Dec 2011
• Abstract: Glaucoma is a disease in which the optic nerve is damaged, leading to progressive, irreversible loss of vision. Glaucoma is the second leading cause of blindness worldwide, and yet the
mechanisms underlying its occurrence remain elusive. Elevated intraocular pressure (IOP) remains the current focus of therapy, but unfortunately many glaucoma patients continue to experience
disease progression despite lowered IOP, even to target levels. Clinical observations show that alterations in ocular blood flow play a very important role in the progression of glaucoma.
Significant correlations have been found between impaired vascular function and optic nerve damage, but the mechanisms giving rise to these correlations are still unknown. This talk will present
some recent results of a mathematical study aimed at investigating the bio-mechanical connections between vascular function and optic nerve damage, in order to gain a better understanding of the
risk factors that may be responsible for glaucoma onset and progression. In particular, two mathematical models will be discussed. The first entails partial differential equations for solid and
fluid mechanics and aims at modeling the interaction between lamina cribrosa and central retinal artery, the main vessel nourishing the retina. The second consists of a system of nonlinear
ordinary differential equations which represents the whole retinal circulation.
• Topic: From Chaos to Ellipse: Using Eigenvalues and Eigenvectors to Explain Phenomenon
• Speaker: Charles Van Loan, Cornell University - SIAM Distinguished Lecturer
• Date: 9 Nov 2011
• Abstract: Let P(0) be a given random polygon in the plane. Let P(k+1) be obtained from P(k) by connecting P(k)'s side midpoints in order and then normalizing the vertex vectors x and y so that
they each have unit 2-norm length. Why is it that the vertices of P(k) converge to an ellipse having a 45-degree tilt? A simple eigenvalue/singular value analysis explains it all and carries
with it a story about computational science and engineering and its connection to mathematics. Bio: Charles Van Loan, Professor of Computer Science and John C. Ford Professor of Engineering at
Cornell University and a leading authority on computational matrix algebra, is a Society for Industrial and Applied Mathematics Visiting Lecturer this year. He is co-author of the well-known
reference "Matrix Computations" (with Gene Golub). Dr. Van Loan will be visiting Rose-Hulman on Wednesday November 9th. He'll be on campus much of the day to speak with those who have interests
in computing science and engineering and to consult on a soon-to-be-proposed major in Computational Science at Rose-Hulman. He will be delivering a lecture during 10th hour in E-104. The subject
of his talk became a paper in SIAM Review but is accessible to those with a basic matrix algebra background. In addition, he states that "the topic is also a metaphor for computational science
and engineering plus it speaks to the teaching/research connection." Please contact Jeff Leader, Professor of Mathematics, at leader@rose-hulman.edu if you have any questions.
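The iteration in the abstract is easy to experiment with. The sketch below follows the setup of Van Loan's SIAM Review paper, re-centering each coordinate vector before normalizing (the centering step is carried over from that paper, not stated in the abstract), and then checks numerically that the limiting vertices lie on an axis-aligned ellipse in the 45-degree-rotated frame:

```python
import math, random

# Midpoint-polygon iteration: replace a random polygon by the polygon of its
# side midpoints, re-centering and normalizing the vertex coordinate vectors
# x and y to unit 2-norm after each step.

random.seed(1)
n = 16

def step(v):
    m = [(v[i] + v[(i + 1) % n]) / 2.0 for i in range(n)]  # side midpoints
    mean = sum(m) / n
    m = [t - mean for t in m]                              # re-center
    nrm = math.sqrt(sum(t * t for t in m))
    return [t / nrm for t in m]                            # unit 2-norm

x = [random.random() for _ in range(n)]
y = [random.random() for _ in range(n)]
for _ in range(600):
    x, y = step(x), step(y)

# In coordinates rotated by 45 degrees the limit should satisfy
# p*u^2 + q*v^2 = 1 for some constants p, q (an axis-aligned ellipse);
# fit p, q by least squares and look at the worst residual.
u = [(a + b) / math.sqrt(2) for a, b in zip(x, y)]
v = [(a - b) / math.sqrt(2) for a, b in zip(x, y)]
A = [ui * ui for ui in u]
B = [vi * vi for vi in v]
saa = sum(t * t for t in A); sbb = sum(t * t for t in B)
sab = sum(a * b for a, b in zip(A, B))
sa, sb = sum(A), sum(B)
det = saa * sbb - sab * sab
p = (sa * sbb - sb * sab) / det
q = (sb * saa - sa * sab) / det
residual = max(abs(p * a + q * b - 1.0) for a, b in zip(A, B))
print(residual)   # tiny: the vertices lie on a 45-degree-tilted ellipse
```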
• Topic: Programming in a world of GPUs and pervasive parallelism
• Speaker: Eric Holk, Ph.D. student - Indiana University
• Date: 09 Nov 2011
• Abstract: Today's mid-range gaming PC easily outperforms the world's fastest supercomputer from 15 years ago, and yet high performance parallel programming is as much a black art as ever. Over
the past decade, CPU clock speeds have not increased at the rate they once did. Increases in computing power now come from increasing parallelism. This trend is seen in the proliferation of
multicore CPUs and the use of graphics processors for a broader variety of tasks. The majority of programmers remain ill-equipped to effectively use multiple processor cores. As parallelism
becomes pervasive, parallel thinking must become the norm rather than the exception. Programming languages and algorithms must evolve to incorporate parallelism as a fundamental feature. I'll be
discussing the trends in programming language research that support ubiquitous parallelism, as well as some of the changes that will be needed to design effective parallel algorithms. Bio: Eric
Holk is a Ph.D. student studying programming languages for parallel computing at Indiana University. He graduated from Rose-Hulman in 2006 with a B.S. in computer science and mathematics.
• Topic: Perturbation Methods: When Do Small Parameters Have Large Consequences? - Part II
• Speaker: David Goulet
• Date: 02 Nov 2011
• Abstract: This is the second part of my talk on applied asymptotic methods, including discussions of the following concepts. Boundary Layers How shock waves propagate and why peeing in your
wetsuit is more than just fun. Multi-Scaling When adding more time dimensions to your problem actually makes it easier to solve. Homogenization How fine structure determines bulk behavior, e.g.,
what the Riemann-Lebesgue lemma says about waves traveling through chaotic environments. Overview: Mathematical models sometimes include one parameter whose size is dramatically different than
the others. Small parameters often arise in models governing systems with dramatically different spatial or time scales. In these cases, the ratio of scales can be very small. Typical situations
include chemical reactions with rate limiting steps, composite materials with a fine microstructure, and the rapid flow of low-viscosity fluids. In models with a small parameter, it is tempting
to set the parameter to zero, with the hope of obtaining a rough approximation to reality. This talk will include examples where setting the small parameter to zero gives disastrous results.
Several methods of correcting this mistake will be presented.
• Topic: Perturbation Methods: When Do Small Parameters Have Large Consequences? - Part I
• Speaker: David Goulet
• Date: 26 Oct 2011
• Abstract: This two part talk will be a smorgasbord of applied asymptotic methods, including discussions of the following concepts. Boundary Layers How shock waves propagate and why peeing in your
wetsuit is more than just fun. Multi-Scaling When adding more time dimensions to your problem actually makes it easier to solve. Homogenization How fine structure determines bulk behavior, e.g.,
what the Riemann-Lebesgue lemma says about waves traveling through chaotic environments. Overview: Mathematical models sometimes include one parameter whose size is dramatically different than
the others. Small parameters often arise in models governing systems with dramatically different spatial or time scales. In these cases, the ratio of scales can be very small. Typical situations
include chemical reactions with rate limiting steps, composite materials with a fine microstructure, and the rapid flow of low-viscosity fluids. In models with a small parameter, it is tempting
to set the parameter to zero, with the hope of obtaining a rough approximation to reality. This talk will include examples where setting the small parameter to zero gives disastrous results.
Several methods of correcting this mistake will be presented.
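The talk's warning about naively setting the small parameter to zero shows up already in a quadratic (a toy example, not from the talk): for eps*x^2 + x - 1 = 0, the eps = 0 equation keeps only the root near 1 and silently discards a second root of size about -1/eps.

```python
import math

# Roots of eps*x^2 + x - 1 = 0.  Setting eps = 0 gives the single root x = 1,
# but the full quadratic has a second, "singular" root that blows up like
# -1/eps as eps -> 0 -- exactly the behavior regular expansions miss.

def roots(eps):
    disc = math.sqrt(1.0 + 4.0 * eps)
    return ((-1.0 + disc) / (2.0 * eps),   # regular root, tends to 1
            (-1.0 - disc) / (2.0 * eps))   # singular root, about -1/eps - 1

for eps in (0.1, 0.01, 0.001):
    print(eps, roots(eps))
```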
• Topic: Balanced Matching: A Generalized Approach for Causal Inference
• Speaker: Jason Sauppe, University of Illinois
• Date: 19 Oct 2011
• Abstract: Causal inference is applied in a wide range of scientific disciplines in order to determine the effect of a treatment or procedure. In observational studies one does not have the luxury
of controlling who receives treatment and who does not. To address this problem, the technique of matching similar treated and control individuals is widely used to estimate a treatment effect.
A matching paradigm may be too restrictive in many cases because exact matches often do not exist in the available data. One mechanism for overcoming this issue is to relax the requirement of
exact matching on some or all of the covariates (attributes that may affect the outcome) to one of balance on the covariate distributions of individuals in the treatment and control groups. Such
a relaxation is considered here, and several complexity results are presented for the resulting problem.
• Topic: Automatic Image Registration Techniques
• Speaker: Tony Kellems, METRON
• Date: 05 Oct 2011
• Abstract: Image registration is the process of aligning an acquired image with a reference image for the purposes of making some comparisons in a common frame of reference, a problem which arises
in many fields including medical imaging, environment mapping, and photography. The acquired image may be geometrically deformed and may have contrast differences that arise from being imaged
under different sensing modalities; it is the parameters of these transformations which we require in order to accurately register an image. While it is possible to perform registration by hand
in some cases, this is very expensive when the number of images to register is large and it also sacrifices quantitative accuracy for qualitative matching. This talk will examine a few automated
techniques for image registration that handle different classes of transformations, each requiring the solution of a multivariable optimization problem, and will show the strengths and weaknesses
of each technique with some illustrative examples.
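The simplest instance of the problem, recovering a pure cyclic translation of a 1-D signal by maximizing cross-correlation, can be sketched in a few lines (an illustrative toy; real registration problems use much richer transformation and contrast models):

```python
# Recover an unknown cyclic shift by maximizing the cross-correlation
# between a reference signal and a shifted copy.

def register_shift(reference, moved):
    n = len(reference)
    best_shift, best_score = 0, float("-inf")
    for s in range(n):                      # try every cyclic shift
        score = sum(reference[i] * moved[(i + s) % n] for i in range(n))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

ref = [0.0] * 32
ref[5], ref[6], ref[7] = 1.0, 2.0, 1.0      # a small bump
true_shift = 9
mov = [ref[(i - true_shift) % 32] for i in range(32)]
print(register_shift(ref, mov))             # recovers the shift, 9
```

For larger images the same correlation is computed with FFTs rather than a brute-force loop, and richer models (rotation, scaling, deformation) turn the search into the multivariable optimization the abstract describes.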
• Topic: Exponential Equations and p-adic Numbers
• Speaker: Josh Holden
• Date: 28 Sep 2011
• Abstract: The discrete logarithm is a problem that surfaces frequently in the field of cryptography as a result of using the transformation x goes to g^x reduced modulo n. Analysis of the security of many cryptographic algorithms depends on the assumption that it is statistically impossible to distinguish the use of this map from the use of a randomly chosen map with similar characteristics. For instance, we can ask when g^x is equal to x modulo n (a "fixed point"), or when repeating the map twice gets you back to x (a "two-cycle"), or when g^x and g^y are the same modulo n for different x and y (a "collision"). When n is a prime p, much is known about the answers to these questions, although much is left to be learned. When n is a prime power p^e, however, these questions have not been studied as much. It turns out that they can be answered using a type of number called a "p-adic" number, which is neither real nor complex. I will introduce these numbers, which are interesting in their own right, and then show how they can be used to find solutions to equations involving exponentials modulo p^e. If students are interested, I may teach a course in p-adic numbers in the Winter. So come see if you are interested!
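The fixed-point question in the abstract is easy to explore by brute force for small moduli (illustrative only; the p-adic machinery is what makes the prime-power case tractable):

```python
# Fixed points of the map x -> g^x (mod p): solutions of g^x = x (mod p)
# with 1 <= x <= p - 1, found by exhaustive search.

def fixed_points(g, p):
    return [x for x in range(1, p) if pow(g, x, p) == x]

p = 23
for g in range(2, p):
    fp = fixed_points(g, p)
    if fp:
        print(g, fp)
```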
• Topic: Nyquist Beat Down: Applications of Compressed Sensing
• Speaker: Kurt Bryan
• Date: 14 Sep 2011
• Abstract: Last week, I showed how incorporating randomness into sampling techniques and seeking sparse solutions allows one to solve drastically underdetermined systems of linear equations (but if you missed the first talk, I'll start with a warp-speed review, so you'll be able to follow). This week I'll flesh out a little more of the theory and show how these ideas allow one to do an end run around the cherished Nyquist sampling rule, which in its simplest form states that if you want to resolve frequencies up to "f" hertz in a signal, you need to sample the signal twice as fast, at "2f" hertz. Then I'll show how researchers at Rice University have used these ideas to construct a one-pixel camera!
• Topic: Making Do With Less: The Mathematics of Compressed Sensing
• Speaker: Kurt Bryan
• Date: 07 Sep 2011
• Abstract: Suppose a bag contains 100 marbles, each with mass 10 grams, except for one defective off-mass marble. Given an accurate electronic balance that can accommodate anywhere from
one to 100 marbles at a time, how would you find the defective marble with the fewest number of weighings? (You've probably thought about this kind of problem and know the answer.) But what if
there are two bad marbles, each of unknown mass? Or three or more? An efficient scheme isn't so easy to figure out now, is it? Is there a strategy that's both efficient and generalizable? The
answer is "yes," at least if the number of defective marbles is sufficiently small. Surprisingly, the procedure involves a strong dose of randomness. It's a nice example of a new and very active
topic called compressed sensing (CS) that spans mathematics, signal processing, statistics, and computer science. In this first talk I'll explain the central ideas, which require nothing more
than simple matrix algebra and elementary probability. Next week I'll show some applications, including how one can use this to beat the Nyquist sampling rule in signal processing, and build a
high-resolution one-pixel camera.
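For the single-defective case in the opening puzzle, a deterministic weighing scheme already works: one weighing of everything reveals the mass offset, and one weighing per bit position reveals the binary index of the bad marble (a sketch of the classical scheme; compressed sensing generalizes this to several defectives by replacing the bit design with random measurements and sparse recovery).

```python
import math

# One defective marble among n, nominal mass 10 g, accurate balance.
# Weighing 1: everything, giving the mass offset delta.
# Then one weighing per bit position: the offset shows up exactly in the
# subsets whose indices have that bit set, spelling out the defective index.

def find_defective(masses, nominal=10.0):
    n = len(masses)
    bits = max(1, math.ceil(math.log2(n)))
    delta = sum(masses) - nominal * n            # weighing 1: total offset
    assert delta != 0                            # there must be a defect
    index = 0
    for b in range(bits):                        # one weighing per bit
        subset = [i for i in range(n) if (i >> b) & 1]
        w = sum(masses[i] for i in subset)
        if abs(w - nominal * len(subset)) > 1e-9:
            index |= 1 << b                      # offset present: bit b is set
    return index, delta

masses = [10.0] * 100
masses[73] = 10.7                                # the defective marble
print(find_defective(masses))                    # index 73, offset about 0.7
```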
2010-11 (latest first)
• Topic: Non-trivial Composite Sequences through Digit Appendage
• Speaker: John Rickert
• Date: 18 May 2011
• Abstract: Begin with a number whose base-ten representation is s_0 = a_n a_{n-1} ... a_0 and append the digit d n times to obtain the terms s_n = a_n a_{n-1} ... a_0 d d ... d. For what seed values s_0 is the sequence {s_n} always composite? L. Jones showed that when d = 1 the seed s_0 = 37 produces a composite sequence, and provided upper bounds for the smallest seeds that produce composite sequences for other values of d. Grantham, Jarnicki, Rickert, and Wagon have conjectured minimal seeds for d = 3, 7, 9 and have proven the value for d = 7, up to certification of some probable primes. Grantham, Jarnicki, Rickert, and Wagon also have conjectures for seed values for the sequences produced when bases other than ten are used, and have found seed values in base ten to which any digit can be appended an arbitrary number of times and produce only composite values.
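The first few terms of the d = 1, s_0 = 37 sequence are easy to check by brute force (illustrative only; the actual results rest on covering-congruence arguments, not trial division):

```python
# Start from seed 37 and repeatedly append the digit 1; per L. Jones,
# every term of the resulting sequence is composite.

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:                 # trial division, fine at this size
        if n % i == 0:
            return False
        i += 1
    return True

def appended_terms(seed, digit, count):
    terms, s = [], str(seed)
    for _ in range(count):
        s += str(digit)
        terms.append(int(s))
    return terms

terms = appended_terms(37, 1, 8)
print(terms)
print([is_prime(t) for t in terms])   # all composite, as Jones proved
```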
• Topic: How many slopes are produced
• Speaker: Leanne Holder
• Date: 11 May 2011
• Abstract: We modify the typical Euclidean geometry to remove the concept of parallel lines, which supports the development of a projective plane. We continue by discussing the relationships
between Euclidean and projective planes, emphasizing the existence and construction of blocking sets in the projective plane. A blocking set is a collection of points in the plane that do not
contain a line, but have the property that every line meets that set. We use blocking sets to obtain bounds on the number of distinct slopes produced by q points distributed on two lines in a
projective plane. Moreover, we offer a construction that classifies the sizes of all possible slope sets obtained in this configuration.
• Topic: Really, how hard is it to schedule final exams?
• Speaker: David Rader
• Date: 04 May 2011
• Abstract: In this talk we explore the scheduling of final exams here at Rose-Hulman. We will discuss how the exams are currently scheduled and some ideas for reducing student conflicts. Finally,
we provide some preliminary results which indicate better schedules can be found in a relatively short amount of extra time and effort. This talk explores some of the work done during the
speaker's recent sabbatical.
• Topic: The Joy of Computational Complexity
• Speaker: Jeff Kinne, Indiana State University, Computer Science
• Date: 27 Apr 2011
• Abstract: The overriding theme of computational complexity theory is the question - what problems can be solved efficiently? Some specific questions of importance are - Is cryptography and
provably secure communication possible (current best answer - maybe/probably, see P versus NP...)? Can optimization problems always be solved optimally, or are some optimization problems "hard"
(current answer - maybe/probably, see P versus NP...)? Can use of randomness significantly speed up computation (sometimes yes, and sometimes no)? In this talk, I will look at how computational
complexity phrases these types of questions and what is currently known (for many questions, not nearly as much as we would like). I will focus mostly on high level ideas and the big picture, but
will also plan to give a few "nice ideas/proofs". I will also be happy to diverge from "the plan" at times based on the interests of the audience.
• Topic: The Heisenberg Uncertainty Principle is Necessary for Life on Earth
• Speaker: Rick Ditteon, Physics and Optical Engineering
• Date: 13 Apr 2011
• Abstract: This talk examines the physics of stars, specifically how the Sun generates the energy which makes life, as we know it, possible on Earth. I will show that the Heisenberg Uncertainty
Principle plays an essential role in energy production in the Sun. Even though numerical calculations will be presented which explicitly illustrate the important concepts, the talk is at a level
which should be understood by the general public.
• Topic: Terre Haute Warming
• Speaker: Tim Ekl and Dr. Allen Holder
• Date: 23 Mar 2011
• Abstract: The question of whether or not human behavior is altering the earth's global weather pattern is among the most intense, popular, and scientifically charged of our day. Whereas global
atmospheric models are complex and debatable, we postulate a simple, linear regression model that calculates the historic trend locally, i.e. right here in Terre Haute, IN. Numerical evidence
suggests that the model is accurate enough to identify the solar cycle. Our Terre Haute model is an adaptation of R. Vanderbei's at Princeton. However, unlike Vanderbei's result for the McGuire
Air Force Base, which is near Princeton, we find a predicted temperature change of approximately 0.6 degrees Fahrenheit per century, as compared to his predicted 3.63 degrees Fahrenheit per
century at the McGuire Air Force Base.
• Topic: Yeast and Mathematics
• Speaker: Kyla Lutz, Rose Student, Biomedical Engeineering
• Date: 16 Mar 2011
• Abstract: Mathematics underlies many biological problems, including the metabolism of Saccharomyces cerevisiae, or baker's yeast. The metabolic network of this organism is modeled using flux
balance analysis (FBA), which incorporates linear algebra, computer science, and the chemical reactions within a cell to determine what a single cell is doing while it is in steady-state. More
basic mathematics is also used for this metabolic model. Specifically, Boolean algebra is used to represent the reactions so that the experimental data can be used accurately in the model. The
metabolites given to the cell initially are collectively called the environment, or medium, which can be controlled by the user in the model to mimic experimental conditions so that accurate
predictions can be made. Using this tool, some questions that can be asked are "What are the minimum number of metabolites that the cell needs and what are they?" and "What are the compositions
of all of the possible 'minimal media' in which the cell can survive?" These questions were addressed in Dr. Jason Papin's Biomedical Engineering laboratory over the course of an REU at the
University of Virginia. Another problem that was addressed is the connectedness among all of the reactions in a cell and each metabolite with which the cell interacts. These connections can be
found using a modification of the Floyd-Warshall algorithm from computer science. Mathematics played a major part in this research project and others very similar to it.
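The reachability computation mentioned at the end can be sketched with the standard Floyd-Warshall triple loop (the metabolite network below is a made-up toy, not real yeast data):

```python
# Treat metabolites as nodes and reactions as directed edges, then use
# Floyd-Warshall to decide which metabolites can reach which.

def floyd_warshall_reach(n, edges):
    reach = [[i == j for j in range(n)] for i in range(n)]
    for u, v in edges:
        reach[u][v] = True
    for k in range(n):                     # standard triple loop
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

# toy network: glucose(0) -> G6P(1) -> pyruvate(2) -> ethanol(3); node 4 isolated
edges = [(0, 1), (1, 2), (2, 3)]
reach = floyd_warshall_reach(5, edges)
print(reach[0][3], reach[3][0], reach[0][4])   # True False False
```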
• Topic: Algebra in Geometric Combinatorics
• Speaker: Chris McDaniel
• Date: 19 Jan 2011
• Abstract: Geometric combinatorics studies shapes or figures made up out of a finite number of pieces. Convex polytopes play a prominent role in this field of mathematics and, although their study
dates back to antiquity, polytopes continue to serve as a rich source of problems for us even today. One basic problem is that of counting faces of convex polytopes. It turns out that numerical
constraints on the number of faces a polytope can have can be gleaned (in the "simplest" cases) from algebraic properties of a certain ring associated with the polytope. In this talk, I will
introduce convex polytopes and simple polytopes, showing several examples along the way. Then I will describe how to associate a ring to a simple polytope, and I will describe some important
properties of this ring. Finally I will show you how these properties "classify" the number of faces that a simple polytope can have. More concretely, I will use this result to answer the
following question: Does there exist a four dimensional convex polytope with 7 vertices, 14 edges, 13 two-dimensional faces, and 6 three-dimensional faces? Come and find out. The answer will
shock and amaze you!
• Topic: Using inverse functions to solve equations
• Speaker: Cabral Balreira, Trinity University Mathematics
• Date: 03 Nov 2010
• Abstract: We will discuss the problem of solving an equation as a functional problem in Mathematics. We shall observe that solving an equation entails finding the inverse of a map, a task that is
generally difficult. Based on several applications from Economics, Geometry, and Differential Equations, we will show that a natural setting to answer such problems lies in Topology.
• Topic: Combinatorial Structures with Prescribed Automorphism Groups
• Speaker: Tanya Jajcay
• Date: 27 Oct 2010
• Abstract: The concept of an automorphism group of a combinatorial structure is a fundamental concept in the cross-section of Combinatorics and Group Theory. Finding the automorphism group of a
specific structure is a notoriously hard problem whose general complexity has not been resolved but it is believed to be exponential. In the talk, I will address the opposite problem of
constructing a combinatorial structure for a given automorphism group. I will survey the known results for the classes of oriented and non-oriented graphs, outline the solution to this problem
for the class of general combinatorial structures, and present a strategy for solving this problem for the class of hypergraphs. I will start from the basic concepts and present the theory
through a series of examples so that the talk will be accessible to all mathematically minded students.
For more information see Announcement/Abstract/Paper in PDF form
2009-10 (latest first)
• Topic: All Things Infinite
• Speaker: Bill Butske
• Date: 12 May 2010
• Abstract: Why can’t you cancel $ \frac{\infty}{\infty}=1$? (Being a trained mathematician, I am licensed by the state of Indiana to safely do so.) How much is $\infty$? In this talk I’m going to
answer these questions (“because” and “lots” respectively, there, talk is done) and use the math you know against you, in the process rendering your finals-frazzled minds inert with shock and
awe. This will make grading the math finals much easier and settle a bet I have going with the other math profs about whether or not I can use $\infty$ to hypnotize a room of students.
• Topic: Just what does the number of slopes of a collection of points have to do with a flock?
• Speaker: Leanne Holder
• Date: 05 May 2010
• Abstract: We begin this talk with an introduction to Projective Geometry and some uses for Projective Geometry. Afterwards, we will use the axioms for a projective plane to collectively construct
several examples of finite projective planes. Then, we discuss the history and evolution of the geometric structure known as a flock. Finally, we examine the problem of determining the number of
slopes produced by a collection of points in a projective plane. (Considering taking MA423, Topics in Geometry, next fall? Then consider coming to this talk and get a small taste of what the
future could have in store for you.)
• Topic: Why be an Actuary?
• Speaker: Richard Lenar (Chief Actuary) and Brad Jones (Rose grad, Associate Actuary), McCready and Keene, Inc. (based in Indianapolis)
• Date: 27 Apr 2010 (special time 10th hour)
• Abstract: These two actuaries from McCready and Keene, Inc. (based in Indianapolis) are coming to Rose-Hulman to discuss their careers as actuarial scientists.
• Topic: Cryptography, Finite Groups, and the Discrete Log Problem
• Speaker: Jonathan Webster, 2001 Rose alum, Ph.D graduate University of Calgary
• Date: 21 Apr 2010 (special time 10th hour)
• Abstract: Modern day cryptosystems typically rely on the computational difficulty of one of two problems: integer factorization (RSA) or the discrete log problem (ECC). It is usually easy to
convince people of the computational difficulty of integer factorization; therefore, we will focus on the discrete log problem. In order to understand this problem, we will examine the ln(x)
function and its discrete analogue, define what a group is, and give many examples.
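The hardness claim above is easy to appreciate with a toy example. The sketch below (my illustration, not material from the talk) recovers a discrete log by brute force in the multiplicative group mod a small prime; the point is that this O(p) search is hopeless at cryptographic sizes.

```python
# Brute-force discrete log in (Z/pZ)*: find x with g^x = h (mod p).
# Illustrative only -- real systems use groups far too large for
# this O(p) search, which is exactly why the problem is believed hard.

def discrete_log(g, h, p):
    """Return the smallest x >= 0 with g^x = h (mod p), or None."""
    value = 1
    for x in range(p - 1):
        if value == h % p:
            return x
        value = (value * g) % p
    return None

p = 101                       # a small prime; 2 is a primitive root mod 101
g = 2
h = pow(g, 57, p)             # the publicly visible value
print(discrete_log(g, h, p))  # recovers the secret exponent 57
```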
• Topic: Compositions and the Jacobsthal Numbers
• Speaker: Ralph Grimaldi
• Date: 14 Apr 2010
• Abstract: What is a composition? For a positive integer n, there are 2^(n-1) ways to write n as an ordered sum of positive integers. These 2^(n-1) summations constitute the compositions of n.
Since order is relevant here, the compositions of a positive integer are different from the partitions of the integer. When the summands for the compositions are restricted, certain notable
sequences, such as the Fibonacci numbers, arise. When one requires that the last summand in the composition be odd, the Jacobsthal numbers come into play. These compositions constitute the major
part of this presentation, where characteristics of these compositions will be examined and enumerated.
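The counts mentioned above can be checked directly for small n. This sketch (my illustration; the enumeration method is not from the talk) lists all compositions of n, compares the totals with 2^(n-1), and compares the odd-last-summand counts with the Jacobsthal recurrence J(1) = J(2) = 1, J(n) = J(n-1) + 2J(n-2).

```python
from itertools import product

def compositions(n):
    """Yield all compositions of n (ordered sums) as tuples, by
    choosing which of the n - 1 gaps between units get a cut."""
    for cuts in product((0, 1), repeat=n - 1):
        parts, size = [], 1
        for c in cuts:
            if c:
                parts.append(size)
                size = 1
            else:
                size += 1
        parts.append(size)
        yield tuple(parts)

# Jacobsthal numbers: J(1) = J(2) = 1, J(n) = J(n-1) + 2*J(n-2).
jac = [0, 1, 1]
for n in range(3, 9):
    jac.append(jac[-1] + 2 * jac[-2])

for n in range(1, 9):
    comps = list(compositions(n))
    assert len(comps) == 2 ** (n - 1)    # all compositions of n
    odd_last = sum(1 for c in comps if c[-1] % 2 == 1)
    assert odd_last == jac[n]            # last summand odd
```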
• Topic: A Computational Study of the Dynamics of Human Tear Film
• Speaker: Kara L. Maki, Institute for Mathematics and its Applications (IMA) - University of Minnesota
• Date: 24 Mar 2010
• Abstract: Each time someone blinks, a thin multilayered film of fluid must reestablish itself, within a second or so, on the front of the eye. This thin film is essential for both the health and
optical quality of the human eye. An important first step towards effectively managing eye syndromes, like dry eye, is understanding the fluid dynamics of the tear film. In this talk, I will
describe, through demonstrations and experiments, the physical phenomena that play an important role in the tear film mechanics. These will be used to create a mathematical model that is realistic
enough to reflect the essential aspects of the tear film dynamics, but simple enough so that it can be analyzed and solved through the construction of a numerical algorithm. Simulations of the tear
film equations using an overset grid based computation method can then be compared to experimental observations to show both the model’s strengths and shortcomings.
• Topic: Modeling Complex Fluids - A Primer on Continuum Mechanics
• Speaker: David Finn
• Date: 10 Feb 2010
• Abstract: To mathematically describe a complex fluid, a fluid that exhibits properties of both a solid and a liquid, we need mathematics capable of modeling deformation, continuous changes of the
shape of an object, and the interaction of the fluid with its boundary. We also need to be able to model the forces acting on the material through the surface of the material, and how the
material's deformation influences the forces and possibly exerts forces on the material, and even how material properties can be incorporated into the description of the forces. This is done
through the subject of continuum mechanics, which is part mathematics and part physics, and heavily used in certain areas of engineering. This talk will be an introduction to continuum mechanics
and its application in describing complex fluids. Only knowledge of vectors and partial derivatives are required to understand the mathematical methods of continuum mechanics in this talk, plus
enough physics to understand F=ma.
• Topic: Oobleck, Silly Putty, Shampoo, Syrups and Other Complex Fluids
• Speaker: David Finn
• Date: 03 Feb 2010
• Abstract: In this first of a series of two talks on complex fluids from my sabbatical at the Institute of Mathematics and its Application, I will give examples and motivation of complex fluids or
non-Newtonian fluids, and how they differ from usual fluids or Newtonian fluids. This talk will consist mainly of demonstrations and experiments illustrating some of the phenomenological differences between
non-Newtonian fluids and Newtonian fluids. The mathematical descriptions of the phenomenon given in this talk will be given in the next talk.
• Topic: THE WORLD’S HARDEST EASY GEOMETRY PROBLEM
• Speaker: Herb Bailey
• Date: 18 Jan 2010
• Abstract: If you Google the title you will find the problem and many solutions -‘some short, some long, some right, some wrong’. It is easy to solve with trig, but hard if you use only high
school geometry. There is also a second hardest problem of the same type that has been published. The common theme is that all angles must be an integer multiple of 10 degrees. Using trig, I have
shown that there are only four more problems of this type. I can solve three of them using geometry and have tried for many hours, without success, to find a geometric solution of the fourth. As
a prize, if you can help me solve the fourth, we can coauthor a paper describing our results and propose a problem that is harder than 'The World’s Hardest'.
• Topic: Deciding Complex Feasibility by Reducing to Finite Fields
• Speaker: Arnold Yim, Rose Student
• Date: 16 Dec 2009
• Abstract: Last summer, I participated in an REU program at Texas A&M University where I worked with Dr. Rojas and a couple of other students on deciding complex feasibility. This problem was to
determine whether a system of polynomials has a complex root or not. Our approach was to reduce the polynomial to different finite fields and determine whether the system had roots in those
finite fields. If enough of those fields had a root, then we can conclude with some certainty that the system had complex roots. We coded up an algorithm outlined by Koiran, then looked at how
different families of polynomials behaved in different finite fields in attempt to improve the algorithm. In particular, we looked at polynomials whose Galois groups are dihedral groups and
symmetric groups. After running some tests, we were able to find certain patterns in the density of prime numbers for which the polynomials had a solution, which we later used to generalize
specific formulas for the prime density based on the Galois group of the polynomial. Although a little background in algebra would be helpful, this talk should be accessible to all.
• Topic: Using Mathematical Models and Operations Research to Tackle the Risky Business of Aviation Security
• Speaker: Sheldon H. Jacobson, Professor and Director, Simulation and Optimization Laboratory, Department of Computer Science University of Illinois
• Date: 09 Dec 2009
• Abstract: Aviation security has become a topic of intense national interest, as the risk of terrorism and of other hazardous threats to the nation's air system increase. Recent events have
hastened changes to improve the security of the air traffic industry. This includes multi-million dollar investments in new security technologies and equipment. Passenger screening is a critical
component of such aviation security systems. This paper introduces the sequential stochastic security design problem (SSSDP), which models passenger and carry-on baggage-screening operations in
an aviation security system. SSSDP is formulated as a two-stage model, where in the first stage security devices are purchased subject to budget and space constraints, and in the second stage a
policy determines how passengers that arrive at a security station are screened. Passengers are assumed to check in sequentially, with passenger risk levels determined by a prescreening system.
The objective of SSSDP is to maximize the total security of all passenger-screening decisions over a fixed time period, given passenger risk levels and security device parameters. SSSDP is
transformed into a deterministic integer program, and an optimal policy for screening passengers is obtained. Examples are provided to illustrate these results, using data extracted from the
Official Airline Guide.
• Topic: Actuary: The Best Job in the World
• Speaker: Phil Banet, Allstate - Rose math alumnus
• Date: 04 Nov 2009
• Abstract: Ever wanted to know more about being an actuary? Not sure what it is? It’s only one of the highest rated jobs in the country. Please join us Wednesday, November 4, at the Mathematics
Colloquium to hear more from a practicing actuary with Allstate Insurance Company who just happens to also be a graduate of Rose-Hulman. Phil Banet has been an actuary since he graduated from
Rose in 1991. He’ll be discussing his experiences as well as how Rose helped prepare him for this fascinating career.
• Topic: Classical Markov Logic and Network Analysis
• Speaker: Ralph Wojtowicz and Geoff Ulman, Metron Inc
• Date: 07 Oct 2009
• Abstract: Markov logic is a set of techniques for estimating the probabilities of truth values of formulae written in first-order languages. In network analysis applications, the formulae
describe properties of and relationships or links among entities. The truth values tell if an entity has a property or whether or not a link exists. The networks may involve many different sorts
of entities and types of links. Estimates are based on the values specified in training and test data. We refer to the special case involving two truth values as classical Markov logic. Data in
this case must assign either ‘false’ or ‘true’ to all (closed) formulae. In practical applications, however, we may have limited confidence in some information sources or data values. To model
such uncertainties, we generalize Markov logic in order to allow non-classical sets of truth values. The concepts and methods of category theory give precise guidelines for selecting sets of
truth values based on the form of a network model. We plan to give an overview of Markov logic; discuss applications to alias detection, cargo shipping, insurgency analysis, and social network
analysis; and describe open problems.
• Topic: Moment Convergence Rates and Method of Moments Central Limit Theorems via Induction
• Speaker: Mark Inlow
• Date: 30 Sep 2009
• Abstract: In 1965 von Bahr proved that the difference between each finite moment of the sample mean and the corresponding normal moment is O(n^{-1/2}) by appealing to results by Esseen, Loeve,
and Cramer. We present two proofs of his result using only elementary properties of expectation plus mathematical induction. Since the normal distribution is determined by its moments, if all
moments of the sample mean exist then, by a converse of the second limit theorem, it is asymptotically normal. Thus our results also provide simple versions of Markov's 1898 method of moments
central limit theorem.
• Topic: Roll-ups and Differential Geometry
• Speaker: S. Allen Broughton
• Date: 16 Sep 2009
• Abstract: We all know that cylinders and (frustums of) right cones can be formed by rolling up a flat strip of paper, metal, plastic, or other flexible material. In fact there are pictures of
such "roll-ups" in Calculus books. However, what happens when we do not have such a standard cone shape? What region do we cut out of the paper or metal to achieve a desired cone shape? The
problem started as a phone call from a local manufacturing design company who had to solve this problem. They wanted to build a specific shape but did not know what the flattened out shape would
be. Since their plan was to build the part from a flattened sheet of metal, the answer to the roll-up problem was crucial. In this talk we discuss the geometry problem and show a solution using
the techniques of differential geometry. The techniques are not advanced, in fact everything can be done with multi-variable Calculus and the simple separation of variables in Differential
Equations. The time for the talk does not allow for a complete discussion of the "ghastly derivations" but we will discuss the formulas that allow us to solve the practical problem. The formulas
can be evaluated using numerical integration (Calculus II) and we show the flattened out shape from the given problem. For those interested in full details see the technical report at http://
2008-09 (latest first)
• Topic: Modeling Pattern Formation in the Auditory and Visual System
• Speaker: Kim Montgomery
• Date: 22 Apr 2009
• Abstract: In the course of biological development interesting hexagonal patterns are formed in both the non-mammalian auditory system and the visual system of the fly's eye. I'll discuss how
mathematical models for intercellular signaling and cell motility can be useful in explaining the formation of these patterns.
• Topic: The Mathematics of Cloaking
• Speaker: Kurt Bryan
• Date: 18 Mar 2009
• Abstract: Cloaking and invisibility are old staples of popular fiction, especially science fiction. The pseudo-explanation usually given is that "the selective bending of light rays" (to quote
Mr. Spock) around the object to be cloaked can render the object invisible. But with the laws of physics in the real world, is this actually possible, even in theory? Scientists and
mathematicians have recently found that the answer to this question is a qualified "yes." In this talk I'll give a quantitative, but accessible account of an essential mathematical idea behind
cloaking, in the context of an electromagnetic imaging technique called "impedance imaging."
• Topic: Braids, Cables, and Cells: An Interesting Intersection of Mathematics, Computer Science, and Art
• Speaker: Joshua Holden
• Date: 11 Mar 2009
• Abstract: The mathematical study of braids combines aspects of topology and group theory to study mathematical representations of one-dimensional strands in three-dimensional space. These strands
are also sometimes viewed as representing the movement through a time dimension of points in two-dimensional space. On the other hand, the study of cellular automata usually involves a one- or
two-dimensional grid of cells which evolve through a time dimension according to specified rules. This time dimension is often represented as an extra spatial dimension. Therefore, it seems
reasonable to ask whether rules for cellular automata can be written in order to produce depictions of braids. The ideas of representing both strands in space and cellular automata have also been
explored in many artistic media, including drawing, sculpture, knitting, crochet, and macrame, and we will touch on some of these.
• Topic: Linear Volterra Inverse Problems - Formulation and Regularization
• Speaker: Cara Brooks
• Date: 18 Feb 2009
• Abstract: When solving practical problems, one often tries to gain intuition by first making many assumptions to obtain a simplified model. As the problem becomes better understood, the
assumptions can be relaxed and a more complex model can be considered. In this spirit, we will start by examining the problem of differentiating data, then move on to the problem of computerized
tomography, demonstrating how a few simplifying assumptions and a lot of calc II lead to solving a linear Volterra integral equation of the first kind. Depending on the function spaces involved,
this means solving an ill-posed (inverse) problem. We will then examine regularization techniques for handling some linear Volterra problems and discuss some of the work involved in obtaining
"good" approximations to the exact solution when using measurement data corrupted with noise.
• Topic: Nonlinear Design Problems, Model Discrimination, and Impossible Solutions
• Speaker: Mike DeVasher
• Date: 11 Feb 2009
• Abstract: This talk will cover three distinct topics. First, an introduction to the fundamental conundrum of optimal design for nonlinear experiments will be discussed. A novel approach in
applying Bayesian ideas to the need for prior information will be compared to the historical standard of local optimality. Next, a related nonlinear design problem, that of model discrimination
for exponential regression models will be introduced. A brief review of model discrimination techniques for linear models will be offered as well as a discussion of model discrimination
techniques particular to nonlinear models. Finally, time permitting, a solution to a variant of Freudenthal's "Impossible Problem" attributable to Lee Sallows will be discussed. Solutions to the
so-called "Superimpossible Problem" will have to wait until a later date.
• Topic: Inverse Problems on Resistor Networks
• Speaker: Kurt Bryan
• Date: 04 Feb 2009
• Abstract: Suppose we have a rectangular grid (of finite extent) of resistors in the 2D plane. The "interior" resistors are not accessible to us and have unknown resistance. However, we have
access to the resistors on the boundary of the grid, to which we can apply voltages and measure the resulting currents. Can we use this kind of information to determine the resistance of the
inaccessible interior resistors? What if we have access to only part of the boundary? This kind of problem is a natural discrete analogue to certain problems in nondestructive testing and
"impedance imaging", but easier to analyze---all we need is linear algebra. I'll show some results obtained by students in the mathematics REU last summer at Rose-Hulman.
• Topic: Solutions to the Pure Parsimony Problem
• Speaker: Joshua Burbrink, Nicole Fehribach, Tony Ferrell, Fred Freers, Casimir Ksiazek, Jason Sauppe, Jeremy Schendel and Al Holder
• Date: 28 Jan 2009
• Abstract: The students in MATH 444 addressed the problem of finding the least amount of genetic diversity needed to describe a population, which is known as the Pure Parsimony Problem. This talk
will start with a succinct introduction to the problem and then will proceed to a discussion of the proposed solution methods. In particular, local search methods and their efficiency will be
discussed. The talk will end with a mathematical result that identifies a sub-class of these APX-Hard problems that can be solved in polynomial time.
• Topic: The role of beauty in the search for world-record cages
• Speaker: Robert Jajcay, Indiana State University
• Date: 21 Jan 2009
• Abstract: A (k,g)-cage is a very neat and efficient mathematical creature; a k-regular graph of girth g that has the smallest number of vertices possible. As finding the (absolutely) smallest
cage is extremely hard, researchers often settle for finding a graph that is smaller than anyone else's in the world -- the world record cage. This gives rise to a curious area of mathematics, an
area where tables of current record holders are constantly being updated and closely watched, and every new entry gives rise to frantic attempts at beating it. It is also an area where everybody
has an equal chance (well, not really, being smart helps quite a bit), and even newcomers may get their chance for their 15 minutes of fame (or how long it takes until someone else beats their
record and erases their name from the tables). In our talk we intend to introduce some order into the competition by looking into the relation between beauty and efficiency. We make a very
non-mathematical claim that beautiful (i.e., highly symmetric) structures have the best chance for being the world records, and we support this claim with a little bit of evidence and a lot of
speculation. We will take great care not to start fights with proof-demanding mathematicians, but cannot make any promises.
• Topic: Differentiating the QZ Algorithm with Application to Gradient Based Output Feedback Optimization
• Speaker: Brad Burchett
• Date: 17 Dec 2008
• Abstract: (Special time: 10th hour.) PART A: The QZ algorithm gives a robust way of computing solutions to the generalized eigenvalue problem. The generalized eigenvalue problem is used in linear
control theory to find solutions to Ricatti equations, as well as to determine system transmission zeros. In state space linear system analysis, the system poles and transmission zeros are
particularly important for determining system time and frequency response. Here we embed calculation of the eigenvalue derivatives in the QZ algorithm such that the derivatives of system poles
and transmission zeros are computed simultaneously with the poles and zeros themselves. The resulting method is further exercised in finding generalized eigenvalues and their sensitivities
required for finding the derivatives of system residues. This technique should open the door to solutions of problems of interest by unconstrained gradient based methods. Typical numerical results
are presented. PART B: A new method for gradient based determination of H2 optimal output feedback gains is presented. Constraints representing the dynamics of a linear time invariant system are
substituted into the quadratic cost function. Sylvester's expansion is used to write the matrix exponential in a form which can then be integrated closed-form. The cost function and its
derivatives can then be written as algebraic expressions in terms of the system eigenvalues. PART C (time permitting): A numerical model of the Ares I upper stage main propulsion system is
formulated based on first principles. Equations are written as non-linear ordinary differential equations. The GASP Fortran code is used to compute thermophysical properties of the working fluids.
Complicated algebraic constraints are numerically solved. The model is implemented in Simulink and provides a rudimentary simulation of the time history of important pressures and temperatures
during re-pressurization, boost and upper stage firing. The model is validated against an existing reliable code, and typical results are shown.
• Topic: Voronoi Tessellations, Delaunay Tessellations and Flat Surfaces
• Speaker: S. Allen Broughton
• Date: 10 Dec 2008
• Abstract: Voronoi tessellations are all about us. In crystallography, they can be used to define a unit cell. In coding theory they can be used to measure effectiveness of detection and correction of
errors in transmission. The sizes of the cells can give us information about uniform placement of points on a sphere such as satellites in the sky. Delaunay tessellations are dual to Voronoi
tessellations and have their own uses. In the first part of this talk we will give some examples of the tessellations and discuss algorithms for determining them. In the second part of the talk
we will look at how these tessellations can be used to understand the geometry of flat surfaces, such as a cube or icosahedron. This talk is the second of two sabbatical report talks from
Professor Broughton's sabbatical at Indiana University last spring. The first talk "Billiards and Flat Surfaces" was a motivational introduction to flat surfaces intended for a general audience.
This second talk discusses additional geometrical concepts and problems about flat surfaces suitable for undergraduate research.
• Topic: (Almost) The Poincare Conjecture or "What's the difference between a ball and a doughnut?"
• Speaker: Bill Butske
• Date: 29 Oct 2008
• Abstract: In this talk I'll discuss how to (mathematically) answer one of the burning questions of our time and indeed of this election cycle. Namely, what is the difference between a doughnut
and a ball (there's a giant hole in one of them) and how it relates to the recently proven Poincare Conjecture (which has to do with holes in things). This talk is intended for a general audience
(this means you M. Fouts) though you'll learn lots of fancy words that you can use to impress the public at large.
• Topic: Algebraic Tori and Their Applications
• Speaker: Arnold Yim, Rose Student
• Date: 22 Oct 2008
• Abstract: In this talk, we will discuss the structure of algebraic tori. In particular, we will go over what it takes for an algebraic variety to be rational. We will then look at an example of a
small torus. Finally, I will describe the applications of algebraic tori in public key cryptosystems. Most of the mathematics involved in this talk should be accessible to anyone, however,
familiarity with finite fields and basic algebraic ideas will help.
• Topic: Billiards and Flat Surfaces
• Speaker: Allen Broughton
• Date: 01 Oct 2008
• Abstract: What do flat surfaces like a cube or icosahedron have to do with billiards? The billiard question is simply: If you hit a billiard ball on a polygonally shaped billiard table and it
continues indefinitely, will it eventually get near to every point on the table? The answer is fairly easy for rectangular shaped tables but more complicated for other shapes. In this talk we
will discuss how flat surfaces arise from the discussion of billiards and look at some of the properties of flat surfaces, including a suitable interpretation of Euler's formula. This talk is the
first of two sabbatical report talks from Professor Broughton's sabbatical at Indiana University last spring. The first talk is a motivational introduction to flat surfaces and is intended for a
general audience of Rose faculty and students. The second talk, to be given later in the year, will discuss additional concepts and problems about flat surfaces suitable for undergraduate
research topics.
• Topic: Mathematical Programming, Systems Biology, and Undergraduate Research
• Speaker: Allen Holder
• Date: 24 Sep 2008
• Abstract: Several recent advances in biology, medicine and health care are due to computational efforts that rely on new mathematical models. The underlying mathematics largely lies within
discrete mathematics, statistics & probability, and optimization, which are combined with savvy computational tools and an understanding of cellular biology to advance our biological insights.
One of the most significant areas of growth is in the field of systems biology, where we are using information from high-throughput computing to construct models that describe larger entities. We
will introduce the overriding goal of systems biology and will highlight the role of mathematical programming. In particular, places for undergraduate research will be discussed.
• Topic: My Summer as an Actuarial Intern
• Speaker: Casimir G. Ksiazek III, Rose Student
• Date: 17 Sep 2008
• Abstract: In this talk, I will describe the work I did as an actuarial intern this summer at Allstate Insurance in Northbrook, IL. The talk will be informal, with discussion and questions
encouraged. I STRONGLY recommend that anyone even remotely interested in actuarial science attend. Most of the mathematics involved should be accessible to anyone, though a familiarity with
regression will help.
2007-08 (latest first)
• Topic: Generalizations of Niven Numbers
• Speaker: Robert Lemke Oliver, Rose Student
• Date: 14 May 2008
• Abstract: A Niven number is an integer that is divisible by the sum of its base q digits. For example, 2008 is Niven both in base 3 and in base 5. Several people have derived
asymptotic formulae for the function N(x) that counts the number of Niven numbers less than x. We proceed in a more general case, studying functions that act only on the base q digits of an
integer. An asymptotic formula for the counting function of these generalized Niven numbers is known, but the question of divisibility by multiple functions is still open. We present partial work
toward acquiring an asymptotic formula in this case, as well as conjectures based on numerical evidence.
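The base-3 and base-5 claims about 2008 are easy to verify; this short sketch (my illustration, not part of the talk) checks the Niven condition directly.

```python
def digit_sum(n, base):
    """Sum of the digits of n written in the given base."""
    s = 0
    while n:
        s += n % base
        n //= base
    return s

def is_niven(n, base):
    """True if n is divisible by its digit sum in the given base."""
    return n % digit_sum(n, base) == 0

# 2008 is 2202101 in base 3 and 31013 in base 5; both digit sums
# are 8, and 2008 = 8 * 251.
print(is_niven(2008, 3), is_niven(2008, 5))  # True True
```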
• Topic: A Generalization of the Fibonacci and Jacobsthal Sequences
• Speaker: Ian Rogers, Rose Student
• Date: 07 May 2008
• Abstract: Among the sequences of discrete mathematics, the Fibonacci sequence is probably the most well-known. Turning up in myriad areas from geometry to graph theory, seashells to the stock
market, the Fibonacci numbers display an amazing number of interesting properties. The Jacobsthal numbers, another well-known sequence, are defined by a different, yet closely related, recurrence
relation to that of the Fibonacci numbers. While slightly less popular, the Jacobsthal numbers too display many desirable properties. In this talk, we will describe a new class of generalizations
of the Fibonacci and Jacobsthal numbers. We then look at a few examples in which the Fibonacci and Jacobsthal numbers are known to occur, and expand them to produce the new sequences. Finally, we
show that many of the desirable properties of the Fibonacci numbers still hold in the general case, and provide suggestions for further research into this new family of sequences.
• Topic: Total Variation Image Restoration
• Speaker: Ely Spears, MIT Lincoln Labs - Rose Alum
• Date: 26 Mar 2008
• Abstract: One of the most widely studied areas of applied mathematics is image processing. Image restoration, also called image inpainting, is one of the most prominent uses for these
mathematical techniques. In this talk, a particular procedure for restoring damaged or corrupted images, called total variation, is discussed. Most of the material will be accessible to students
familiar with linear algebra. A brief description of numerical methods, in particular the Fast Level Set Transform, is included.
• Topic: Modeling a Slice of French Bread
• Speaker: David Finn
• Date: 23 Jan 2008
• Abstract: Why does a slice of French or Italian bread have a somewhat elliptical shape? In this talk, I will provide a heuristic model to describe the shape based on treating dough as a liquid.
Then from data from slices of bread, I will show that this model provides a good description of a slice of bread.
• Topic: Introduction to the Life Table
• Speaker: Casimir G. Ksiazek III, Rose Student, Mathematics
• Date: 19 Dec 2007
• Abstract: Buying life insurance is a quite a common occurrence. But how do people determine how much life insurance premiums should cost? Historically, actuaries have used life tables to assist
in pricing insurance and annuities. In this talk, the concept of a life table will be introduced. In addition, examples will be given to show how from seemingly simple data, quantities such as
life expectancy and insurance premiums can be calculated. A knowledge of probability is recommended, but not required. Anyone interested in actuarial science is strongly encouraged to attend.
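As a flavor of the calculations the talk describes, the sketch below computes a curtate life expectancy and a net single premium from a tiny life table. The table values and the 5% interest rate are invented for illustration only and are not from any published table.

```python
# A toy life table: l[x] = number alive at age x out of an initial
# cohort. These values are made up for illustration.
l = {95: 100, 96: 70, 97: 45, 98: 25, 99: 10, 100: 0}
v = 1 / 1.05   # one-year discount factor at 5% annual interest

def curtate_life_expectancy(x):
    """e_x = sum over k >= 1 of l[x+k] / l[x] (expected whole years left)."""
    return sum(l[y] for y in l if y > x) / l[x]

def term_insurance_premium(x, benefit=1000):
    """Net single premium for insurance paying `benefit` at the end
    of the year of death, for a life now aged x."""
    total, k = 0.0, 0
    while l[x + k] > 0:
        deaths = l[x + k] - l[x + k + 1]
        total += benefit * v ** (k + 1) * deaths / l[x]
        k += 1
    return total

print(curtate_life_expectancy(95))          # 1.5 expected whole years
print(round(term_insurance_premium(95), 2))
```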
• Topic: Introduction to Infinity or Why Johnny Can't Add
• Speaker: Bill Butske
• Date: 07 Nov 2007
• Abstract: First I want to emphasize that this talk is for anyone who has wondered what mathematics has to say about the concept of infinity. In particular non-math majors are encouraged to attend
and the talk is aimed primarily at them. I'm going to talk about infinity in two ways, first in counting, where we will see that there are two different kinds of infinity (at least) and second in
geometry where we know that parallel lines DO intersect, namely at infinity. Of course this is a math talk and the underlying intent is to warp your mind.
• Topic: FETCHING WATER WITH MINIMUM RESIDUES: Generalization of a problem from Die Hard 3
• Speaker: Herb Bailey, Emeritus Rose Math professor
• Date: 31 Oct 2007
• Abstract: Bruce Willis can disarm a bomb if he is able to get exactly 4 gallons of water from a well using only a 3 gallon jug and a 5 gallon jug. This problem dates back to the 13th century. A
generalization of this problem is to determine all possible integer gallons that can be obtained using an M gallon jug and an N gallon jug, with M < N. We solve the generalized problem using some
congruence results. It turns out that there are only two distinct pouring sequences to get a given number of gallons. The shorter of the two can be determined by solving a linear congruence
equation. Short is good since Bruce has but 5 minutes prior to detonation. Not to worry, no previous knowledge of number theory will be needed to enjoy this talk.
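The generalized jug problem can also be explored by brute force; the sketch below (my own illustration, not from the talk) finds a shortest pouring sequence by breadth-first search over jug states:

```python
from collections import deque

def min_pours(m, n, target):
    """Shortest sequence of fill/empty/pour moves that puts `target`
    gallons in either jug, using jugs of capacity m and n.
    Returns the list of (a, b) states from (0, 0), or None."""
    start = (0, 0)
    prev = {start: None}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:
            # Reconstruct the pouring sequence back to the start.
            path, s = [], (a, b)
            while s is not None:
                path.append(s)
                s = prev[s]
            return path[::-1]
        moves = [
            (m, b), (a, n),          # fill one jug from the well
            (0, b), (a, 0),          # empty one jug
            # pour a -> b, then b -> a, until source empty or target full
            (a - min(a, n - b), b + min(a, n - b)),
            (a + min(b, m - a), b - min(b, m - a)),
        ]
        for s in moves:
            if s not in prev:
                prev[s] = (a, b)
                queue.append(s)
    return None
```

For Bruce's instance, `min_pours(3, 5, 4)` returns a six-move (seven-state) sequence, which agrees with the "short is good" pouring sequence mentioned above.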
• Topic: Blow-up Solutions to Differential Equations
• Speaker: Kurt Bryan
• Date: 24 Oct 2007
• Abstract: Nonlinear differential equations of the form u' = f(u) where u=u(t) are common in applied mathematics. Usually t is time, u(t) is the amount of some "stuff" in a system, and f(u) models
the rate stuff is produced or destroyed, as a function of the amount present. If the function f is positive and increasing (the stuff catalyzes its own production) then solutions may grow to
infinity in a finite time, a phenomenon called "blow-up". In this talk I'll start with the simple ODE above, then describe some recent progress in analyzing blow-up phenomena for similar partial
differential equations in which diffusion is present.
• Topic: Models for Emergent Behavior
• Speaker: Ely Spears, Rose Student, Mathematics
• Date: 17 Oct 2007
• Abstract: Emergent behavior is a division of biology that seeks to understand and better explain phenomena that appear in group situations but not on an individual basis. Fish schooling,
bacterial growth properties, and bird flocking are just a few prominent examples of this sort of behavior. The latter of these examples motivated summer research at the City University of Hong
Kong, in China. This introductory presentation will give the details behind some popular mathematical models for bird flocking behavior. Additionally, numerical simulations of these models will
be discussed at length and the various model parameters will be explored. The talk is such that students of any background are encouraged to attend.
• Topic: Generalized Niven Numbers
• Speaker: Robert Lemke-Oliver, Rose Student, Mathematics
• Date: 27 Sep 2007
• Abstract: A base-q Niven number is one which is divisible by the sum of its digits. For example, 18 is a base 10 Niven number, since 9 divides 18. We will be interested in simultaneous Niven
numbers, numbers that are Niven in more than one base. Returning to the example, 18 is also base 9 Niven, since 18 is 20 in base 9. Thus, 18 is a simultaneous base 9 and base 10 Niven number. We
are interested in counting the number of simultaneous Niven numbers up to a point, x. One approach to this is to look at completely q-additive functions. These functions essentially act on the
digits of a number, so that f(124)=f(1)+f(2)+f(4). Note that the sum of digits function is completely q-additive. If we can understand these generalized Niven numbers, we can hopefully gain some
information about the standard Niven numbers. In this talk, we will prove an asymptotic formula for the number of generalized Niven numbers, and we will present the work that has been done to
relate this to Niven numbers.
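The digit-sum function described above is easy to state in code; here is a small sketch (my own illustration, not the asymptotic machinery of the talk):

```python
def digit_sum(n, q):
    """Sum of the base-q digits of n (a completely q-additive function)."""
    s = 0
    while n:
        s += n % q
        n //= q
    return s

def is_niven(n, q):
    """True if n is divisible by the sum of its base-q digits."""
    return n % digit_sum(n, q) == 0
```

For the example in the abstract: 18 is Niven in base 10 (1 + 8 = 9 divides 18) and also in base 9 (18 is 20 in base 9, and 2 divides 18), so `is_niven(18, 10)` and `is_niven(18, 9)` are both true.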
2006-07 (latest first)
• Topic: Modeling Hysteresis (PART II): A load dependent hysteresis model for a simple shape memory wire actuator.
• Speaker: Steve Galinaitis
• Date: 31 Jan 2007
• Abstract: To accurately position an object with an actuator that exhibits load dependent hysteresis requires a hysteresis model that is capable of adjusting to a change in load. In this talk we
investigate the specific problem of modeling the hysteresis of a simple shape memory alloy wire that is operated under changing tensile loads. A Preisach operator that incorporates load dependent
parameters in the Preisach density function is proposed as the hysteresis model. In support of this selection, a relationship between the Preisach density function and the wire’s thermal
coefficient of expansion is established. It is then shown that the load dependent Austenite-Martensite transition temperatures of the wire can be used to estimate the parameters of the density
function. Based on these findings a load dependent Preisach operator is defined. To test this approach, a bivariate density function that incorporates two load dependent parameters is substituted
for the Preisach density function. Two load dependent linear estimators are developed from experimental data and used to estimate the parameters of the density function. These estimators and the
load dependent Preisach operator are then used to estimate the length of a SMA wire that is operated under several tensile loads. The estimates are compared to experimental data and a discussion
of the effectiveness of this approach is given.
• Topic: Modeling Hysteresis: A load dependent hysteresis model for a simple shape memory wire actuator.
• Speaker: Steve Galinaitis
• Date: 24 Jan 2007
• Abstract: To accurately position an object with an actuator that exhibits load dependent hysteresis requires a hysteresis model that is capable of adjusting to a change in load. In this talk we
investigate the specific problem of modeling the hysteresis of a simple shape memory alloy wire that is operated under changing tensile loads. A Preisach operator that incorporates load dependent
parameters in the Preisach density function is proposed as the hysteresis model. In support of this selection, a relationship between the Preisach density function and the wire’s thermal
coefficient of expansion is established. It is then shown that the load dependent Austenite-Martensite transition temperatures of the wire can be used to estimate the parameters of the density
function. Based on these findings a load dependent Preisach operator is defined. To test this approach, a bivariate density function that incorporates two load dependent parameters is substituted
for the Preisach density function. Two load dependent linear estimators are developed from experimental data and used to estimate the parameters of the density function. These estimators and the
load dependent Preisach operator are then used to estimate the length of a SMA wire that is operated under several tensile loads. The estimates are compared to experimental data and a discussion
of the effectiveness of this approach is given.
• Topic: Alignment of Protein Structures
• Speaker: Yosi Shibberu
• Date: 17 Jan 2007
• Abstract: Proteins play a key role in nearly all of the biochemical processes of living organisms. A protein is a long molecular chain constructed from twenty types of molecules called amino
acids. Proteins produced by living organisms fold up into unique, tightly packed, structures called folds. The particular sequence of amino acids in a protein's chain determines its unique fold.
The geometry of a protein’s fold largely determines the protein's specific biological function.
Identifying the biological function of individual proteins is an important and challenging problem. A better understanding of the evolution of protein folds will help us decipher the function of
individual proteins and will lead to major advances in biology and new treatments for many human diseases.
The evolution of proteins is studied by making comparisons. Proteins are typically compared by comparing their sequence of amino acids, by comparing the geometry of their folds, and more
recently, by comparing their expression profiles.
Fold-based comparisons of proteins are believed to be much more informative and robust than sequence-based comparisons. However, the problem of aligning protein folds is not as well understood as
the problem of aligning protein sequences. In this talk, we describe a new mathematical framework for describing the geometry of protein folds. This mathematical framework may lead to a better
understanding of the fold alignment problem.
• Topics: Cookius Maximus by Robert Lemke Oliver and On the Minimum Vector Rank of a Graph by Ian Rogers
• Speakers: Robert Lemke Oliver and Ian Rogers , Rose Students
• Date: 20 Dec 2006
• Abstracts:
Shape of a Cookie: How can the shape of a sugar cookie be modeled mathematically? It turns out that it’s a solution of a non-linear partial differential equation. In this talk, we examine a
simplified version of this “cookie equation” to find the highest point on the cookie. Our eyes seem to be very good at locating it, but whatever process we’re using turns out to be hard to
explain mathematically. We will look in particular at convex regions, which are known to have only one maximum.
Graphs: Given a graph or multigraph G on n vertices, we associate a set of nonzero complex vectors to the vertices of G in the following manner: If vertices i and j are not joined then the
corresponding vectors are orthogonal, and if i and j are connected by a single edge, the associated vectors are not orthogonal. The rank of a vector representation is the maximum number of
linearly independent vectors in the representation. The minimum vector rank of G, mvr(G), is the minimum rank among all vector representations of G. We present methods for determining mvr(G) if G
is among certain classes of graphs, including perfect graphs, complete graphs, and cycles. Further, we present upper and lower bounds on mvr(G) for all multigraphs that contain only multiedges,
and provide two conjectures on the exact value of mvr(G) for a graph.
• Topic: Optimizing 4th-Order and 5th-Order Explicit Runge-Kutta Formulas
• Speaker: Stephen Dupal, Rose Student
• Date: 13 Dec 2006
• Abstract: Differential equations have been solved numerically with explicit Runge-Kutta methods for over a century. Runge-Kutta methods are used in the sciences as well as mathematical software
such as Matlab’s ode45 solver. Utilizing techniques in polynomial theory based on Gröbner bases, it becomes more manageable to find Runge-Kutta formulas that minimize higher-order truncation
error. In this talk, we will discuss the connection between the Runge-Kutta method and Gröbner bases, and we will present some of the results of exploring the optimization of fourth- and
fifth-order Runge-Kutta formulas. This presentation is based on work done by Iowa State University’s summer 2006 Numerical Analysis REU group consisting of Stephen Dupal (Rose-Hulman) and Michael
Yoshizawa (Pomona College).
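The classical fourth-order formula that this work optimizes can be sketched in a few lines; this is the textbook RK4 scheme, not the REU group's optimized coefficients:

```python
def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method
    for the initial value problem y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h/6) * (k1 + 2*k2 + 2*k3 + k4)
```

Stepping y' = y from y(0) = 1 to t = 1 with h = 0.01 reproduces e to roughly eight digits, reflecting the method's fourth-order global accuracy.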
• Topics: Characterizing Holes in Wires and Plates Inverting the Heat Equation: Tom Werne and
Characterizing Refinable Rational Functions: Ely Spears
• Speakers: Thomas Werne and Ely Spears, Rose Students
• Date: 06 Dec 2006
• Abstracts:
Heat Equation: The heat equation is a classical partial differential equation that can predict the temperature distribution on some domain subject to certain boundary conditions. Motivated by the
field of nondestructive testing, the equation turns out to be a useful tool for characterizing defects in metallic plates. In this talk we will discuss solution methods and results that show how
to characterize certain defects in two dimensional regions using only boundary data. The presentation is based on work done during the summer of 2006 by the Inverse Problems REU group of Thomas
Werne (Rose-Hulman) and Jay Preciado (The College of New Jersey) at Rose-Hulman under Dr. Bryan (Rose-Hulman Mathematics Department).
Refinable Functions: A k-refinable function is a function f(x) that can be re-written in terms of dilates and translates of itself, f(kx - n). In recent decades, refinable functions have become increasingly popular due to
their desirable properties in many applications, such as wavelet analysis. While the refinability properties of many popular classes of functions, such as compactly supported splines, have been
known for a while, rational functions had seemed to escape notice in terms of refinability. This talk is based on research investigating the refinability of rational functions that took place at
Texas A&M University during the summer. Preliminary simplifications to the general problem are presented in a chronological collection of lemmas. A complete characterization of refinable rational
functions follows with an interesting connection to an open problem in number theory.
• Topic: Knots, Braids, and an Application followed by Probability, Electrical Circuits, and Rectangles
• Speaker: Jennifer Franko and Michael Bateman, Indiana University
• Date: 29 Nov 2006
• Abstract: This week we have two mathematics seminars on Wednesday that may be of special interest to students. The seminars are 9th and 10th period in G221 on Wednesday by two Graduate Students
from the Mathematics Department of Indiana University. The first (during all of 9th period) is by Jennifer Franko entitled “Knots, Braids, and an Application” which concerns the application of
topology to quantum computing, and the second (during the first half of 10th period) is by Michael Bateman entitled “Probability, Electrical Circuits, and Rectangles”. Following Michael Bateman’s
talk, both graduate students will answer questions about graduate school, the application process, what life is like as a graduate student, etc., so if you are considering Graduate School in your
future it might be worthwhile to attend.
Knots, Braids, and an Application: One method proposed to build quantum computers is based on braid representations. In this talk, we will define the braid group and discuss the connection
between braids and knots. Any invertible matrix which satisfies the Yang Baxter Equation can be used to obtain representations of the braid group, and we will study these types of representations
and as well as link invariants they might yield. Finally, we will mention how these representations might be used in a topological model of quantum computation.
• Topic: Actuarial Mathematics
• Speaker: Nate Dorr, Rose Student
• Date: 08 Nov 2006
• Abstract: Actuarial Mathematics refers to the mathematics of the insurance industry. Actuaries use probability and statistics in calculating premiums, determining reserves, and modeling insurance
products. In this talk, actuarial components of a whole life insurance product will be covered. Life insurance, life annuities will be discussed and will lead to how premiums are calculated. In
addition, information about the actuarial profession will be presented with time for questions at the end. Probability should be sufficient background for this talk.
• Topic: Geometry from Chemistry II - The Geometry of Nanotubes
• Speaker: Allen Broughton
• Date: 01 Nov 2006
• Abstract: Carbon nanotubes are an interesting but as of yet incompletely understood part of nanotechnology, an area of science that has really grown up in just the last 15 years. From the
mathematical perspective nanotubes have an interesting molecular structure based on the hexagonal honeycomb structure of graphite. In this talk I will describe the geometry and symmetries of
nanotubes. There is an infinite family of such nanotubes, so describing the structure takes some care. Multivariable calculus should provide plenty of background to make this talk accessible.
There will be a brief recap from lecture I of this series to motivate the atom labelling problem - a graph theory problem - for nanotubes.
For more information see Announcement/Abstract/Paper in PDF form
• Topic: Geometry from Chemistry I - Understanding Molecular Dynamics of Bucky Balls
• Speaker: Allen Broughton
• Date: 25 Oct 2006
• Abstract: Buckminsterfullerene is a complex molecule consisting of sixty carbon atoms in an arrangement like a soccer ball, and so the molecules are often called bucky balls. Trying to
understand the molecular dynamics of bucky balls leads to some interesting problems in geometry, algebra and differential equations. In the talk, the theory will be described in some detail for
very simple objects such as triangular molecules like water. We then will examine the geometrical issues that come about in modeling the much more complex bucky balls. We are only going to talk
about classical dynamics, as quantum mechanics adds a level of complexity well beyond an hour's talk. This work is a collaboration with Dan Jelski of the chemistry department and Guo-Ping Zhang of
the ISU physics department. We do not have complete results at this stage; in fact I'd like to describe some problems that could be tackled by undergraduates. I don't plan to use much beyond
multivariable calculus, though an understanding of differential equations helps.
For more information see Announcement/Abstract/Paper in PDF form
• Topic: How to Paint Your Way out of a Maze
• Speaker: Joshua Holden
• Date: 18 Oct 2006
• Abstract: Many people don't realize that what we now call "algorithm design" actually dates back to the ancient Greeks! Of course, if you think about it, there's always the "Euclidean Algorithm".
A more dubious example might be Theseus's use of a ball of string to solve the "Labyrinth Problem". (Google "Theseus, Labyrinth, string".) Solutions to this problem got a lot less dubious after
graph theory was invented, since a graph turns out to be a good way of representing a maze mathematically. We will examine the classical solutions to this problem, and then throw in a twist --- a
Twisted Painting Machine that puts restrictions on which paths we can take to explore the maze. Applications to sewing may also appear, depending on the presence of audience interest and string.
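Theseus's ball of string is essentially the call stack of a depth-first search: the string marks the path back to the entrance, and rewinding it is backtracking. A minimal sketch (graph representation and names are my own, not from the talk):

```python
def solve_maze(graph, start, goal):
    """Depth-first search with backtracking. The explicit stack plays
    the role of Theseus's string: it always records the thread of
    passages back to the entrance."""
    stack = [start]          # the "string" we unwind as we explore
    visited = {start}
    while stack:
        here = stack[-1]
        if here == goal:
            return stack[:]  # path from start to goal
        # advance along the first unexplored passage
        for nxt in graph.get(here, []):
            if nxt not in visited:
                visited.add(nxt)
                stack.append(nxt)
                break
        else:
            stack.pop()      # dead end: rewind the string one room
    return None              # goal unreachable
```

The Twisted Painting Machine of the talk adds restrictions on which passages may be taken; in this sketch that would amount to filtering the neighbors considered at each step.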
• Topic: More Talks on the Numerical Range
• Speaker: Thomas Werne, Ted Lyman and Robert Lauer, Rose students EE/MA, ME/MA , EE/MA
• Date: 27 Sep 2006
• Abstract:
First Talk
Title: Finding the Centroid of W(A)
Student: Thomas Werne
Abstract: The numerical range of a matrix A is a subset of the complex plane. One method of generating this subset is to choose random vectors on the unit ball in complex hyperspace. The method
of generating these random vectors induces a probability density function on the numerical range. In this talk, we examine these density functions and a possible connection with the centroid of the
numerical range and the spectrum of the matrix.
Second Talk
Title: Pre-Images of Points in the Numerical Range
Students: Ted Lyman (speaking) and Robert Lauer
Abstract: If A is an n x n matrix, the numerical range of A is the set of complex numbers W(A) = {(Ax,x) : x is a unit vector in C^n}, where (Ax,x) denotes the dot product between Ax and x.
Although W(A) appears simple, it has many intriguing properties. We give a brief overview of some of these properties and take a look specifically at the connectedness of the pre-image of points
in W(A).
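The random-unit-vector sampling idea behind both talks can be sketched as follows; the function name, sample count, and Gaussian sampling scheme are my own illustrative choices, not the students' code:

```python
import numpy as np

def numerical_range_samples(A, k=2000, seed=0):
    """Estimate points of W(A) = {(Ax, x) : ||x|| = 1} by drawing
    k random unit vectors x in C^n."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # Complex Gaussian vectors, normalized onto the unit sphere.
    x = rng.standard_normal((k, n)) + 1j * rng.standard_normal((k, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    # (Ax, x) = conj(x)^T A x for each sampled row of x.
    return np.einsum('ki,ij,kj->k', x.conj(), A, x)
```

For a Hermitian A the sampled values land on a real interval between the smallest and largest eigenvalues, a special case of the properties surveyed in the previous talk.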
• Topic: Home on the (Numerical) Range
• Speaker: Dr. Roger Lautzenheiser
• Date: 20 Sep 2006
• Abstract: Like the eigenvalues, the numerical range of a matrix is a subset of the complex plane. However, unlike the eigenvalues, the numerical range will not be a finite set except when the
matrix is a multiple of the identity. Indeed, the numerical range of A is the singleton set {a} if and only if A = a I. In addition to containing the eigenvalues, the numerical range has many
interesting properties. In this talk we survey the history of the numerical range, the relationships between the geometric properties of the numerical range and the algebraic properties of the
matrix, and perhaps most importantly, how the numerical range is used as a research project in Linear Algebra 2.
2005-06 (latest first)
• Topic: Fuzzy Topological Spaces Part II (of II) - "Correct" Fuzzification of Topological Spaces: Functors and the General Tychonoff Theorem
• Speaker: Stephan Carlson*
• Date: 17 May 2006
• Abstract: In this second part of his presentation, the speaker will discuss Lowen's modified definition of a fuzzy topology on a set and its ramifications for the investigation of fuzzy
topological spaces. Emphasis will be placed on the use of category theory as a test for a correct generalization of set-based topology and the success in proving a general theorem on products of
compact fuzzy topological spaces. *Research on the results presented was completed during the presenter's 2004-2005 sabbatical leave.
For more information see Announcement/Abstract/Paper in PDF form
• Topic: Fuzzy Sets and Fuzzy Topologies: Early Ideas and Obstacles
• Speaker: Stephan Carlson
• Date: 10 May 2006
• Abstract: Fuzzy set theory and fuzzy logic were introduced in the 1960s by electrical engineers as tools for understanding and developing efficient control methods. Since fuzzy sets in a fixed
set generalize subsets of the set, mathematicians - especially topologists - took on the challenge of generalizing existing set-based theories to the fuzzy set context. In this first part of his
presentation, the speaker will survey the initial development of the field of fuzzy topology, which yielded some elegant results but also left some challenging gaps. The presentation will be
intended for a general audience, in the sense that no previous background in either fuzzy set theory or topology will be necessary in order to comprehend basic ideas.
For more information see Announcement/Abstract/Paper in PDF form
• Topic: Bicycle Tracks on the Plane and the Sphere
• Speaker: David Finn
• Date: 29 Mar 2006
• Abstract: The title problem of the MAA book "Which way did the bicycle go? … and other intriguing mathematical mysteries" by Konhauser, Velleman and Wagon considers the following situation:
Imagine a 20-foot wide mud patch through which a bicycle has just passed, with its front and rear tires leaving tracks as illustrated below. In which direction was the bicyclist traveling? This
problem is motivated by the Sherlock Holmes mystery, The Priory School, in which the great detective encounters a pair of tire tracks in the mud and immediately deduces the direction the bicycle
was going. This evidence then leads to the finding of a duke's son and the arrest of a murderer. In this talk, we will describe solutions to two variations of this problem on both the plane and
the sphere in which a criminal could potentially fool the great detective as it is possible for an incredible bicyclist to create tracks for which it is impossible to determine which direction
the bicycle went by only the geometry of the tracks. Moreover, an incredible bicyclist can also defeat the great detective by riding in such a way as to leave only one track, possibly leading the
detective to believe he is pursuing a unicyclist instead.
For more information see Announcement/Abstract/Paper in PDF form
• Topic: Breaking the MD5
• Speaker: Brandon Borkholder, Rose Student, Computer Science
• Date: 22 Mar 2006
• Abstract: The MD5 hash function and its family are security algorithms that have been used world-wide for nearly a decade. Just a few years after creation there were hints of weakness and now
there are algorithms to crack it efficiently. How do these algorithms work? Is the MD5 completely broken? How can a potential hacker exploit this weakness to undermine the trust of those who use it?
• Topic: Investigating the Shape of a Cookie
• Speaker: Hari A. Ravindran, Rose Student, Mathematics
• Date: 15 Feb 2006
• Abstract: This is a continuation of the previous two talks on the Shape of a Cookie. The goal of the investigation is the establishment of an asymptotic expansion for the shape of a cookie with
an elliptical base domain. The talk summarizes Hari's work towards this goal over the past summer and during this academic year. This research was funded in part by a Joseph B. and Reba A. Weaver
Undergraduate Research Award.
• Topic: Existence for a Heuristic Model for the Shape of a Cookie (Part II Cookie Series)
• Speaker: David Finn
• Date: 08 Feb 2006
• Abstract: Have you ever wondered why cookies are generically round? Well, I have. And, the reason involves some interesting mathematics: Calculus of Variations, Nonlinear Partial Differential
Equations, and Differential Geometry (Sorry, cookie picture is too large to e-mail!) In this second of two talks on the shape of a cookie, I will first give an overview of the first talk
developing a heuristic model for determining the shape of a cookie. Then, we will prove that this model can be solved mathematically and outline the method used to generate the numerical
solutions presented. Finally, I will present some interesting questions that will be examined during the REU this summer. Homemade Cookies will be provided during the talk.
• Topic: Modeling the Shape of a Cookie
• Speaker: David Finn
• Date: 01 Feb 2006
• Abstract: Have you ever wondered why cookies are generically round? Well, I have. And, the reason involves some interesting mathematics: Calculus of Variations, Nonlinear Partial Differential
Equations, and Differential Geometry. In this first of two talks on the shape of a cookie, I will overview a heuristic model for determining the shape of a cookie, and show under some physically
reasonable assumptions that a cookie is generally round. Some interesting questions suggest themselves when "generically round" is stated in mathematically precise language and the physically
reasonable assumptions are allowed not to hold. Some aspects of this mathematical model for the shape of a cookie will be examined during the REU this summer.
Homemade Cookies will be provided during the talk.
• Topic: A Combinatoric Proof of the Chan-Robbins-Yuen Theorem
• Speaker: Daniel Litt, High School Student from Ohio
• Date: 25 Jan 2006
• Abstract:
For more information see Announcement/Abstract/Paper in PDF form
• Topic: Solving the Rubik's cube: An Introduction to Group/Graph Theory
• Speaker: William Butske
• Date(s): 02 Nov 2005, 09 Nov 2005
• Abstract: The Rubik's cube is one of the most concrete examples of a finite non-abelian group that one is likely to come across. If these terms don't mean anything to you, don't worry, they will
by the end of the talk(s). We will see how group theory and graph theory can be used to solve fundamental problems about the Rubik's cube. For example, how many different positions are possible
is the same as asking what is the order of the Rubik's group. How many moves are necessary to solve the worst possible scrambling (God's Algorithm) is a question about the diameter of the
associated Cayley graph.
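The order of the Rubik's group can be found by a standard counting argument (a textbook sketch, not the method of the talk itself): count the arrangements of the pieces, then divide out the configurations unreachable by legal moves.

```python
from math import factorial

# 8 corners can be permuted and each twisted 3 ways; 12 edges can be
# permuted and each flipped 2 ways. Dividing by 2 * 3 * 2 = 12 removes
# the unreachable states: permutation parity must be even overall,
# total corner twist must be a multiple of 3, and the number of
# flipped edges must be even.
order = (factorial(8) * 3**8 * factorial(12) * 2**12) // 12
# order == 43,252,003,274,489,856,000, about 4.3 * 10^19 positions
```

This is the "order of the Rubik's group" referred to in the abstract; the diameter question (God's Algorithm) is far harder than the counting.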
• Topic: Do Dogs Really Know Calculus?
• Speaker: Eric Reyes, Rose Student, Math/Econ Major
• Date: 26 Oct 2005
• Abstract: Least squares is a regression technique frequently used by engineers and scientists to gain insight into data generating processes. In 2003, Timothy Pennings of Hope College asked the
question: "Do Dogs Know Calculus?" In an effort to see if his dog Elvis minimized the retrieval time when playing fetch, Professor Pennings collected data during a game of fetch on the beach. We
take a second look at his data and statistical analyses. We show how a simple-looking problem can require intricate analysis. We use advanced methods, including weighted least squares, to detect
and compensate for violations in the standard least squares assumptions. And, we seek to answer the question: Do Dogs Really Know Calculus?
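Pennings' beach setup can be checked numerically. The sketch below uses made-up speeds and distances, not his measured data, and compares the calculus optimum for the water-entry point with a brute-force grid search over entry points:

```python
import math

def optimal_entry(r, s, y):
    """Calculus answer for how far before the ball (measured along the
    shoreline) the dog should enter the water: x* = y / sqrt((r/s)^2 - 1),
    where r is running speed, s is swimming speed (s < r), and y is the
    ball's offshore distance. All values here are illustrative."""
    return y / math.sqrt((r / s) ** 2 - 1)

def numeric_entry(r, s, y, z, steps=200000):
    """Brute-force check: minimize the total retrieval time
    T(x) = (z - x)/r + sqrt(x^2 + y^2)/s over a grid of entry points,
    where z is the dog's initial distance along the beach."""
    best_x, best_t = 0.0, float('inf')
    for i in range(steps + 1):
        x = z * i / steps
        t = (z - x) / r + math.hypot(x, y) / s
        if t < best_t:
            best_x, best_t = x, t
    return best_x
```

With r = 6, s = 1, y = 8, the calculus optimum is 8/sqrt(35), and the grid search lands on the same point to within the grid spacing. Whether Elvis ran to that point is the statistical question of the talk.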
• Topic: Nonparametric estimation of volatility models with serially dependent Innovations
• Speaker: Michael Levine, Purdue University
• Date: 20 Oct 2005
• Abstract: We are interested in modeling the time series process y_t = sigma(x_t) eps_t, where eps_t = phi_0 eps_{t-1} + v_t. This model is of interest as it provides a plausible linkage between risk and expected
return of financial assets. Further, the model can serve as a vehicle for testing the martingale difference sequence hypothesis, which is typically uncritically adapted in financial time series
models. When x_t has a fixed design, we provide a novel nonparametric estimator of the variance function based on the difference approach and establish its limiting properties. When x_t is strictly
stationary on a strongly mixing base (thereby allowing for ARCH effects) the nonparametric variance function estimator of Fan and Yao (1998) can be applied and seems very promising. We propose a
semiparametric estimator of phi_0 that is sqrt(T)-consistent, adaptive, and asymptotically normally distributed under very general conditions on x_t.
• Topic: Finite Groups of Matrices with Integer Entries
• Speaker: James Kuzmanovich (joint work with Andrey Pavlichenk), Wake Forest University
• Date: 28 Sep 2005
• Abstract: Finite groups of nonsingular matrices with integer entries are some of the first groups seen in an undergraduate algebra course, since they only require knowledge of matrix
multiplication and inversion. They nevertheless have many interesting properties and associated problems (some unsolved), and they have been the object of study by many famous mathematicians. Not
much of this theory (or history) appears in undergraduate texts (and was not known by at least one algebraist - me), even though it is a good source of problems and projects. Indeed, this talk is
a report on what Andrey and I learned as he wrote a term paper for my undergraduate algebra course and we followed it up with independent study. Most of this talk should be accessible to students
who have had a linear algebra course. It will introduce ideas and concepts from many areas of mathematics, but prior knowledge will not be assumed.
2004-05 (latest first)
• Topic: An Introduction to Constructible Numbers
• Speaker: Kurtis Katinas, Rose Student, Mathematics
• Date: 18 May 2005
• Abstract: Around 2500 years ago, the ancient Greeks proposed a set of three geometry problems about constructing certain lengths with an unmarked straightedge and compass. These are trisecting an
arbitrary angle, doubling the cube, and "squaring" the circle. It turns out that all three of these feats are impossible. Trisecting the angle was the first to be disproved, but the ancient
Greeks were not the ones who did it. It wasn't until the 1800's, when all three were disproved. What is most surprising about these solutions was that they did not use any heavy geometry.
Instead, they relied on field theory and number theory. This talk is aimed primarily at undergraduate students as a walkthrough of two of these proofs and some details on the proof of the third.
No knowledge of field theory is required or assumed. Basic number theory and little geometry will be used, but not required either.
• Topic: New Goodness-of-Fit Tests
• Speaker: Dr. Mark Inlow, inlow@rose-hulman.edu
• Date: 11 May 2005
• Abstract: Goodness-of-fit tests are formal procedures for assessing the fit between a given model and the distribution of some quantity of interest. Using the moment-matching property of the
exponential family of distributions, we derive new generalizations of the smooth goodness-of-fit test. (The exponential family of distributions encompasses many distributions including the
normal, t, chi, exponential, gamma, beta, and Poisson families.) We compare the performance of our new tests with standard goodness-of-fit tests for the normal distribution.
• Topic: Computational Modeling with Partial Differential Equations
• Speaker: Chad Westfall, Wabash College
• Date: 13 Apr 2005
• Abstract: Partial differential equations (PDEs) are used in many areas of science to model the behavior of quantities that depend on several independent variables. In this talk we will look at
the process of modeling physical phenomena with partial differential equations. Working through a simple example we will highlight the issues and challenges in the discretization and solver
stages of the process.
• Topic: Batch Calculation of the Residues and Their Sensitivities, Or: How to compute almost any derivative using sum(prod([combnk(factors), dfactors.]))
• Speaker: Brad Burchett
• Date: 30 Mar 2005
• Abstract: In determining the time-domain response of linear time-invariant systems, the inverse Laplace transform technique using partial fraction expansions has both practical and historical
significance. The values which constitute the numerators of the partial fraction expansion are commonly known as the residues. A recent application of interest is brute force computation of the
quadratic cost function for optimal output feedback which can be facilitated by the Sylvester expansion. Sylvester's expansion requires computation of the system residues. Computation of the
residues is typically accomplished by deconvolving the system transfer function and evaluating ratios of polynomials at a system pole. In this work, the first order form of the partial fraction
expansion is investigated. A general matrix equation is derived for computation of the residues. This equation is generalized for cases involving repeated as well as distinct system poles. The
sensitivities of the residues to changes in system parameters can then be computed by differentiating this matrix equation. Typical numerical results are presented.
• Topic: The $20,000,000,000 Eigenvector - Part II
• Speaker: Kurt Bryan, bryan@rose-hulman.edu
• Date: 26 Jan 2005
• Abstract: In the last talk I showed a key idea that lies behind how Google ranks the importance of each page in a web of interconnected pages. The problem boils down to computing an eigenvector
of a certain n by n matrix, where n is the number of pages in the web. But Google currently indexes over 8 billion pages---how does one do linear algebra on matrices of that size? Gaussian
elimination? If you believe that, I have got a bridge for sale. In part II we will look at how one can reasonably compute an eigenvector for these very large matrices, and I will address a few
questions that were raised in the first talk.
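The idea from these two talks can be sketched in a few lines of Python. The 4-page web below is a hypothetical example of my own, not from the talk; the routine is the standard power iteration on the damped link matrix.

```python
def pagerank(links, n, damping=0.85, iters=100):
    """Power iteration for page importance.

    links[j] lists the pages that page j links to; every page is
    assumed to have at least one outgoing link.  Returns a rank
    vector that sums to 1.
    """
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n
        for j, outs in links.items():
            share = damping * rank[j] / len(outs)   # page j splits its rank
            for i in outs:
                new[i] += share                     # among the pages it cites
        rank = new
    return rank

# hypothetical 4-page web: 0 -> {1, 2}, 1 -> {2}, 2 -> {0}, 3 -> {2}
web = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
ranks = pagerank(web, 4)
```

Page 2, cited by three of the four pages, comes out on top; the damping term keeps the iteration from getting trapped and guarantees convergence, which is why this works even at web scale.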
• Topic: The $20,000,000,000 Eigenvector - Part I
• Speaker: Kurt Bryan, bryan@rose-hulman.edu
• Date: 19 Jan 2005
• Abstract: When Google went online in the last decade, one thing that set it apart from other search engines was that its search result listings always seemed to deliver the good stuff up front.
With other search engines you often had to wade through screen after screen of links to unimportant web pages that just happened to match the search text. Part of the magic behind Google is its
ability to quantitatively rate the importance of each page on the web and so rank the pages, then present to the user the more important pages first. In these two talks I will explain one popular
approach to rating web page importance. It turns out to be a delightful application of standard linear algebra.
• Topic: Why Number Theorists Care About Elliptic Curves - Part II
• Speaker: Ken McMurdy, mcmurdy@rose-hulman.edu
• Date: 08 Dec 2004
• Abstract: Let E be an elliptic curve whose Weierstrass equation has rational coefficients. In the first installment of this talk, we defined an abelian group structure on E. We then showed how to
compute the p-torsion subgroup, denoted E[p], which must always be isomorphic to two copies of the integers mod p. In Part II, we will show how a certain Galois group acts on E[p], resulting in a
Galois representation into the group of invertible two-by-two matrices over the field F[p]. This will all be done in great detail for the specific curve whose 3-torsion was worked out explicitly
in Part I. Time permitting, I will then discuss an analogous construction of l-adic Galois representations, and connections with modular forms such as the Shimura-Taniyama-Weil Conjecture.
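The group law from Part I can be made concrete with the chord-and-tangent formulas over a finite field. The curve y^2 = x^3 + 2x + 3 over F_97 below is an arbitrary nonsingular example of mine, not the specific curve from the talk.

```python
def ec_add(P, Q, a, p):
    """Chord-and-tangent addition on y^2 = x^3 + a*x + b over F_p.

    Points are (x, y) tuples; None plays the role of the identity
    (the point at infinity).
    """
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                 # P + (-P) = identity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p)  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p)         # chord slope
    x3 = (m * m - x1 - x2) % p
    y3 = (m * (x1 - x3) - y1) % p
    return (x3, y3)

# enumerate all points of an arbitrary nonsingular curve over F_97
a, b, p = 2, 3, 97
points = [None] + [(x, y) for x in range(p) for y in range(p)
                   if (y * y - x ** 3 - a * x - b) % p == 0]
```

The resulting set is closed under ec_add, and its size obeys Hasse's bound |#E - (p + 1)| <= 2*sqrt(p). (The modular inverse pow(x, -1, p) needs Python 3.8+.)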
• Topic: Algebraic Cycles on Abelian Varieties
• Speaker: Reza Akhtar, Miami University of Ohio
• Date: 27 Oct 2004
• Abstract: The theory of algebraic cycles was initially developed with an view towards studying intersections on algebraic varieties. Since then, it has found many applications to K-theory, number
theory, and most recently to the theory of motives. This talk will provide an introduction to algebraic cycles and abelian varieties, and will describe the interaction between the product
structure on cycles and the group law on an abelian variety. Some recent results of the speaker in this area will also be discussed.
• Topic: Equivalence of Real Elliptic Curves - Part II - Birational Equivalence
• Speaker: Allen Broughton, brought@rose-hulman.edu
• Date: 13 Oct 2004
• Abstract: This second talk on real elliptic curves will complete the picture of birational equivalence of real elliptic curves by looking at the complex elliptic curve defined by the original
curve. The complex curve is called a complexification of the real curve and the real curve is called a real form of the complex curve. The complex curve is a torus and it is interesting to visualize
the real forms as curves on the torus. We will spend most of the talk exploring the very interesting relationship among the real forms, mirror reflections on the torus, and the automorphisms of
the complex curve. Non-isomorphic real curves can have isomorphic complexifications. The main result we will show is that each complex elliptic curve defined by real equations has
exactly two real forms which are birationally inequivalent. The most interesting part is that there is exactly one complex elliptic curve that has a real form with one component and another real
form with two components. We will not use any calculations more complex than high school algebra, nor any geometric concepts beyond what we cover in our multi-variable calculus course. The
calculations are made quite easy by using the Weierstrass form discussed in the first talk. The first part of the talk will be a recap of the first talk in the context of complex elliptic curves.
There will be lots of pictures.
For more information see Announcement/Abstract/Paper in PDF form
• Topic: Equivalence of Real Elliptic Curves - Part I - Linear Equivalence
• Speaker: Allen Broughton, brought@rose-hulman.edu
• Date: 06 Oct 2004
• Abstract: This is the first of several talks on elliptic curves given by Allen Broughton and Ken McMurdy. In the two talks by Allen Broughton a complete answer will be given to a question posed
by Ken McMurdy during his job talk last spring.
What is the moduli space of real elliptic curves like?
Since then a complete answer has been worked out and it is surprisingly simple.
In the first talk a basic introduction to real elliptic curves will be given -- starting from definitions, smoothness, projective completion, the geometry of the group law, the geometry of
tangents and inflection points and ending up with the notions of embedded linear equivalence, normal Weierstrass form, and linear classification. The main result is that there are two families of
curves each depending on a single real parameter. Each curve in one family has one component and each curve in the other family has two components*. The talk does not use calculations more
complex than high school algebra and the geometric concepts that we cover in our multi-variable calculus course (except a smidgen of topology at one point). There will be lots of pictures. *Well
that statement is almost true. The explanation of almost true will be given in the second talk, which will cover the complexifications of real elliptic curves, real forms of complex elliptic
curves, the moduli space of complex elliptic curves, and the automorphism groups of curves.
For more information see Announcement/Abstract/Paper in PDF form
• Topic: Fast Reconstruction of Cracks using Impedance Imaging
• Speaker: Dr. Kurt Bryan, bryan@rose-hulman.edu
• Date: 22 Sep 2004
• Abstract: This talk is based on the work done in our mathematics REU in the summers of 2002-2004, concerning some mathematical problems that arise in the non-destructive testing of materials. I
will present an absurdly simple and fast algorithm to reconstruct linear cracks inside an object, by using electrical currents applied to the outer boundary of the object and then measuring the
induced voltages on the outer boundary (or if you prefer to think in terms of heat, one applies a known heat source to the outer boundary and measures the resulting steady-state boundary
temperatures). An insightful result by the 2003 group (extended by the 2004 students, using results from the 2002 group) turns this apparently hard problem into a DE I exercise!
2003-04 (latest first)
• Topic: The Celestial Sphere: Geometry and Astrolabes
• Speaker: Tanya Leise leise@rose-hulman.edu
• Date: April 14, 2004
• Abstract: In the first and second centuries BC, Greek thinkers took the Babylonian beginnings of astronomy, which included the zodiac, and incorporated their brilliant geometrical ideas to create
a mathematical model of the heavens that was both useful and accurate. Ptolemy's Almagest (ca. 100-150 AD) marks the peak of the development of the Greek mathematical astronomy. This early
astronomy viewed the heavens as a great rotating celestial sphere with a stationary Earth at its center. The stars were fixed to the celestial sphere, while the sun moved along the zodiac, making
one full circle each year. We will survey some of the geometry used in developing coordinate systems on the celestial sphere and in projecting the sphere onto a plane to result in a working
two-dimensional model of the heavens -- the astrolabe. In order to visualize this sphere-to-plane stereographic projection, we will work some basic computations with astrolabes that I will
provide to the audience, and compare the 2D astrolabe to a 3D celestial globe.
• Topic: The Joy of Zero Divisors (and possibly the horror, if time permits)
• Speaker: Mike Axtell, Wabash College, axtellm@wabash.edu
• Date: March 31, 2004
• Abstract: The talk will focus on a beautiful and surprising result linking Abstract Algebra to Graph Theory. You need not know anything about Graph Theory (the speaker doesn't either). You need
not know anything about Abstract Algebra - relevant ideas are basic and will be introduced. Warning: The speaker may use this opportunity to trash talk Rose prior to the ICMC (Indiana Collegiate
Mathematics Competition) on Friday.
• Topic: Numerical ODE Solving for a Chaotic System
• Speaker: Brad Burchett, Rose-Hulman - Mech Eng, Bradley.T.Burchett@rose-hulman.edu
• Date: March 17, 2004
• Abstract: A simple non-linear dynamical system with chaotic properties is used to illustrate the advantages and limitations of Runge-Kutta (RK) based ODE solving. Herein we describe the course
"Computer Applications in Engineering 2" (ME 323): how it fits in the ME curriculum, and course objectives. We quickly review the techniques of fixed and adaptive step fourth order RK (RK4). The
definition of stability for non-linear autonomous systems is reviewed. We then present the physical system and its ODE representation. Results are shown for adaptive and fixed-step RK4 where the
system stability boundary estimate visibly changes due to numerical inaccuracies.
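The fixed-step RK4 scheme reviewed in the talk is only a few lines. The test problem y' = -y below is my own stand-in, chosen because its exact solution e^{-t} makes the fourth-order accuracy easy to check.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 in n fixed steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# y' = -y with y(0) = 1, so y(1) = e^{-1}
approx = rk4(lambda t, y: -y, 0.0, 1.0, 1.0, 100)
```

With 100 steps the global error is on the order of h^4. For a chaotic system like the one in the talk, however, small per-step errors grow exponentially, which is exactly why the computed stability boundary shifts between fixed and adaptive stepping.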
• Topic: Probability Models in Genetics
• Speaker: Amanda Lynn Stephens, Rose student, stephanal@rose-hulman.edu
• Date: February 18, 2004
• Abstract: A discussion of probability models in genetics. Genetics models such as the Wright-Fisher model and the Moran model will be analyzed with probability modeling. The talk is based on an
undergraduate research project by the speaker.
• Topic: Theme and Variations from Geometric Function Theory (3 talks)
• Speaker: Jerry Muir, muir@rose-hulman.edu
• Dates: January 28, 2004, February 4, 2004, February 11, 2004
• Abstracts of the talks:
I. Convex Mappings of the Unit Disk: The theory of univalent (one-to-one and analytic) functions of the unit disk in the complex plane has been an area of active research for almost a century.
Bieberbach's conjecture, proposed in 1916 and proved by de Branges in 1985, that a univalent function defined on the unit disk of the form f(z) = z + a_2 z^2 + a_3 z^3 + ...
must satisfy |a_n| <= n, motivated a great deal of this research. In particular, many elegant results were proved for families of univalent functions that are defined by some geometric condition on the
image of the function. Usually, there is no nice extension of results from one complex variable into higher dimensions, and this topic is no exception. Because of this, the geometric classes of
functions are of special importance in that setting. In this, the first of three talks, we will consider univalent mappings of the unit disk whose image is a convex set in the plane. A sequence
of appealing results will be given that draw upon some of the classical principles from Complex Analysis.
II: Some Examples and Obstacles in Higher Dimensional Geometric Function Theory: Having been introduced to some of the basic and elegant results of one variable geometric function theory, we turn
our attention to the higher dimensional setting. Although natural to consider, this setting yields problems of much greater difficulty. Many of the simplest one variable results either have no
reasonable extension or the extensions require difficult unintuitive arguments. We will introduce the basic ideas of function theory in higher dimensions, including all of the necessary
definitions, and examine some situations where difficulties arise. This will include some counterexamples to natural generalizations of the one variable theory. We will conclude by considering
different norms on the space C^2 of two-dimensional complex vectors. The impact that changing norms has on the function theory is substantial. Recently developed constructions of convex mappings
of the unit ball of C^2 with certain non-Euclidean norms will be given.
III: Analysis of Convex Mappings of the Ball in C^n Onto Sets Containing a Line: In the last talk, we saw some instances in which elementary properties of convex mappings of the unit disk do not
easily extend to a higher dimensional setting. Few examples of higher dimensional mappings are known, and those that are known fail to extend the familiar properties that some one-dimensional
mappings posses. In this talk, we will focus on mappings F of the Euclidean ball B in C^n such that F(B) is a convex subset of C^n containing a line. These provide an interesting generalization
of mappings of the unit disk onto strips and half-planes and may eventually be useful in the determination of the extreme points of the family of convex mappings.
• Topic: The Banach Fixed Point Theorem and Solvability of Integral Equations
• Speaker: Dan Abretske, Rose student, Daniel.A.Abretske@rose-hulman.edu
• Date: January 21, 2004
• Abstract: As part of my independent study last quarter I studied various solvability conditions that can be placed on both linear and non linear operators. As an extension of that course I will
be discussing the Banach Fixed Point Theorem and the Geometric Series Theorem. I will then show how they can be applied to integral equations of the second kind.
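The application can be sketched concretely. The kernel and forcing term below are a toy example of mine (not from the independent study); since the integral operator is a contraction, the Banach Fixed Point Theorem guarantees that successive approximation converges to the unique solution.

```python
def solve_fredholm(f, K, lam, n=100, iters=40):
    """Successive approximation for the second-kind equation

        u(x) = f(x) + lam * integral_0^1 K(x, t) u(t) dt

    on [0, 1], discretized with the trapezoidal rule.  The iteration
    map is a contraction when |lam| * sup_x integral |K(x, t)| dt < 1.
    """
    h = 1.0 / n
    xs = [i * h for i in range(n + 1)]
    w = [h] * (n + 1)
    w[0] = w[-1] = h / 2                      # trapezoid weights
    u = [f(x) for x in xs]                    # initial guess u_0 = f
    for _ in range(iters):
        u = [f(x) + lam * sum(wj * K(x, t) * ut
                              for wj, t, ut in zip(w, xs, u))
             for x in xs]
    return xs, u

# toy example: u(x) = 1 + 0.5 * integral_0^1 x*t*u(t) dt,
# whose exact solution is u(x) = 1 + 0.3 x
xs, u = solve_fredholm(lambda x: 1.0, lambda x, t: x * t, 0.5)
```

Here the contraction constant is 0.5 * sup_x (x/2) = 0.25, so the error shrinks by a factor of four per iteration, exactly the geometric-series behavior the talk connects to the fixed point theorem.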
• Topic: Black Box Linear Algebra
• Speaker: William Turner, Wabash College, turnerw@wabash.edu
• Date: November 12, 2003
• Abstract: In symbolic computation and its subfield of computer algebra, we desire algebraic methods to compute an exact solution to a problem, as opposed to the numerical approximations supplied
in numerical analysis. In this talk, we introduce the black box model for symbolic linear algebra. We investigate Wiedemann's approach to solve a system of linear equations and compute the
determinant and rank of a black box matrix.
• Topic: Inverse Electrocardiography
• Speaker: Lorraine Olson, Mech Eng, Lorraine.Olson@rose-hulman.edu
(Joint work with Robert Throne, Rose-Hulman Institute of Technology and John R. Windle, University of Nebraska Medical Center )
• Date: November 5, 2003
• Abstract: The heart is an electromechanical device. In its resting state, the heart is electrically polarized. For each heartbeat, a wave of "depolarization" travels through the heart muscle,
causing the tissues to contract. If the electrical pathways in the heart malfunction, this leads to arrhythmias and poor blood flow. Hence, knowledge of the electrical patterns on the heart is
extremely useful in diagnosing and correcting certain types of heart-conduction related defects. In recent years there have been a growing number of attempts at reconstructing surface potentials
on the heart from minimally invasive remotely measured signals. Two basic approaches have been taken. In the oldest approach, body surface potentials are measured and used to estimate the
potential patterns on the epicardium (the outer surface of the heart). More recently, a probe which can be inflated within a heart chamber has been developed and is used to estimate potential
patterns on the interior surface of the heart. Both of these estimation problems are "inverse problems", and they are very sensitive to small errors in the measurements. We therefore need to use
some form of "regularization", or smoothing, to ensure that the answers we obtain are reasonable. The key question is how much smoothing to use, so that we obtain accurate answers. This talk will
focus on the mathematical details behind the inverse electrocardiography problem for the inflated probe case: the governing equations, finite element methodology, regularization techniques, and
methods for selecting the regularization parameters. We will also show preliminary results for the probe data.
• Topic: Small Cycles of the Discrete Logarithm (2 talks)
• Speaker: Joshua B. Holden, holden@rose-hulman.edu
• Dates: October 22, 2003 and October 29, 2003
• Abstract of the talks: Brizolis asked the question: does every prime p have a pair (g,h) such that h is a fixed point for the discrete logarithm with base g? In other words, is g^h congruent to h
modulo p? We will extend this question to ask about not only fixed points but also two-cycles, and examine methods for estimating the number of such pairs given certain conditions on g and h.
This problem has applications to cryptography, since one well-known cryptographically secure random number generator uses the idea of iterating the discrete logarithm and we hope that it does not
fall into cycles too often!
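For a small prime, Brizolis' question can be checked by brute force; p = 11 below is an arbitrary illustration of mine.

```python
def fixed_point_pairs(p):
    """All pairs (g, h) with 1 <= g, h <= p - 1 and g^h congruent to h
    (mod p), i.e. h is a fixed point of the discrete logarithm base g."""
    return [(g, h) for g in range(1, p) for h in range(1, p)
            if pow(g, h, p) == h]

pairs = fixed_point_pairs(11)
```

The pair (1, 1) works for every prime, so existence is only interesting under extra conditions on g and h (such as requiring g to be a primitive root), which is the setting the abstract alludes to; heuristically one expects about one fixed point per base g.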
• Topic: Fast Reconstruction of Internal Cracks with Thermal Imaging
• Speaker: Nic Trainor, Rose student, Nic.A.Trainor@rose-hulman.edu
• Dates: October 1, 2003 and October 8, 2003
• Abstract: The ability to characterize the interior of an object without damaging the object is an invaluable tool in industry. One useful technique of recent interest is "impedance imaging", or
equivalently, "steady-state thermal imaging". The idea, in thermal terms, is to use temperature measurements on the boundary of an object---specifically, imposed thermal energy fluxes and
measured boundary temperatures---to determine interior structure, for example, to find internal cracks or voids. In these two talks we'll discuss some new mathematical results on thermal imaging
for cracks, obtained in Rose-Hulman's summer REU mathematics program. In the first talk we'll examine a new and very rapid approach to finding a single crack in the interior of an object, under
the assumption that the crack blocks the flow of heat. In the second talk we'll discuss how to extend the procedure to the problem of finding several interior cracks, and look at the issue of
what types of input fluxes provide optimal resolution and stability.
• Topic: Imaging the Inner Wall Profile of a Blast Furnace
• Speaker: Kurt Bryan, bryan@rose-hulman.edu
• Date: September 24, 2003
• Abstract: A blast furnace is essentially a large vessel filled with molten material. It turns out that the inner wall of the furnace, which is in contact with the molten interior, can change
shape over time, becoming either thinner due to the corrosive nature of the furnace interior, or the wall can become thicker due to the build up of deposits. Walls that become too thin are
obviously dangerous, and it's also undesirable to have the walls become too thick.
It's obviously difficult to directly measure the profile of the inner wall when the furnace is in operation, so one would like a means of determining the profile indirectly, from the outside. One
approach is to use thermal methods, by measuring the temperature and heat flux at positions on the outer wall and from this information infer the inner wall profile.
In this talk we'll consider a simple one-dimensional model of the situation, in which the furnace wall (or a cross section of it) is modeled as a thermally conductive bar, whose length changes
slowly over time. We'll look at how one can use temperature and heat flow measurements at one end of the bar to determine the length of the bar at any time. This is work done during our summer
REU program.
• Topic: Approximate solutions to the Boussinesq equation
• Speaker: Aleksey Telyakovskiy
• Date: October 2, 2002
• Abstract: The Boussinesq equation is a nonlinear diffusion equation that models the behavior of groundwater in unconfined aquifers. Solutions of the Boussinesq equation are considered in many
areas of hydrology. In the case of zero initial conditions, solutions of the Boussinesq equation exhibit wetting fronts that propagate with finite speed. For certain types of initial-boundary value
problems the Boussinesq equation can be reduced to boundary-value problems for an ordinary differential equation for a scaling function. In this talk we construct approximate closed-form
solutions to the one-dimensional Boussinesq equation.
• Topic: An Inverse Problem Arising In Non-destructive Testing for Cracks
• Speaker: Kurt Bryan, bryan@rose-hulman.edu
• Date: October 9, 2002
• Abstract: Consider some material object which may or may not have an internal "crack". You want to find out if there is indeed such a crack, and if so, determine the location of the crack. The
catch is that you must do it non-destructively---there's no point to cutting the thing in half only to find out it was good. Recently, two methods for imaging the interior of an object to find
defects have been much investigated. The techniques use either heat or electrical energy to "see" inside objects, non-destructively. In this seminar I'll talk about mathematical research done
with undergraduates in our REU program last summer, in which we extended some known theoretical and computational techniques for finding cracks in objects using thermal and electrical methods.
• Topic: The Distribution of the Kolmogorov-Smirnov Statistic for Exponential Populations with Estimated Parameters
• Speaker: Diane Evans, evans@rose-hulman.edu
• Date: October 23, 2002
• Abstract: I will present the derivation of the distribution of the Kolmogorov-Smirnov, Cramer-von Mises, and Anderson-Darling test statistics in the case of exponential sampling when the
parameters are unknown and estimated from sample data for n = 1 and n = 2 via maximum likelihood.
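The statistic itself is easy to compute once the rate is estimated. The sketch below uses the MLE lambda_hat = 1/(sample mean) and the standard sup-difference formula; the five data values are made up for illustration.

```python
import math

def ks_exponential(sample):
    """Kolmogorov-Smirnov statistic for an exponential fit with the
    rate estimated by maximum likelihood (lambda_hat = 1 / mean)."""
    n = len(sample)
    lam = n / sum(sample)
    d = 0.0
    for i, x in enumerate(sorted(sample), start=1):
        F = 1.0 - math.exp(-lam * x)                # fitted CDF at x
        d = max(d, i / n - F, F - (i - 1) / n)      # both one-sided gaps
    return d

D = ks_exponential([0.3, 1.1, 0.7, 2.5, 0.2])
```

For n = 1 the statistic is deterministic: the MLE forces F(x_1) = 1 - 1/e whatever the data, so D = 1 - 1/e, a glimpse of why the n = 1 distribution can be found in closed form. Because the parameter is estimated from the same sample, the usual KS tables do not apply, which is what motivates deriving these distributions separately.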
• Topic: Factoring Integers via Lenstra's Elliptic Curve Method
• Speaker: Noor Martin, noor.martin@rose-hulman.edu
• Date: October 30, 2002
• Abstract: This talk examines a method for factoring integers based on the use of Elliptic Curves modulo some composite number n. Published by H. W. Lenstra in 1987, this method is a modification
of Pollard's p-1 method for factoring integers. Background information on both Elliptic Curves and Pollard's p-1 method will be covered as well.
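Pollard's p-1 method, the starting point for Lenstra's algorithm, fits in a dozen lines. The composite 299 = 13 * 23 is a toy example I chose because 13 - 1 = 12 is very smooth.

```python
from math import gcd

def pollard_p_minus_1(n, bound=100):
    """Pollard's p-1 factoring method.

    Builds a = 2^(bound!) mod n incrementally; if some prime p dividing n
    has p - 1 composed only of small prime powers (<= bound), then p - 1
    divides the accumulated exponent and gcd(a - 1, n) exposes p.
    """
    a = 2
    for k in range(2, bound + 1):
        a = pow(a, k, n)
        d = gcd(a - 1, n)
        if 1 < d < n:
            return d
    return None          # no factor found at this smoothness bound

factor = pollard_p_minus_1(299)   # 299 = 13 * 23
```

Lenstra's modification replaces the multiplicative group mod p, whose order p - 1 is fixed, by the group of an elliptic curve mod p, whose order varies with the curve, so one can retry with fresh curves until a smooth group order turns up.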
• Topic: Elliptic Curve Cryptography
• Speaker: Matthew Ford, matthew.ford@rose-hulman.edu
• Date: November 6, 2002
• Abstract: Elliptic Curve Cryptography (ECC) provides an alternative method of public key cryptography. While RSA is based on the factorization of a composite number, ECC is based on the Elliptic
Curve Discrete Log Problem. The difference in these problems makes ECC not vulnerable to some of the attacks against RSA. The current best known attack against ECC is an exponential-time algorithm.
• Topic: The Combinatorics of Symmetric Functions
• Speaker: Thomas Langley, thomas.langely@rose-hulman.edu
• Date: November 13, 2002
• Abstract: There is a remarkable connection between representations of the symmetric group and symmetric multivariable polynomials (polynomials that are unchanged when the variables are permuted).
This correspondence, in which characters of irreducible representations are mapped to Schur functions, allows the combinatorics of symmetric functions to be used to solve representation theoretic
problems. This talk will provide an introduction to this complex and beautiful combinatorial world, introducing symmetric functions, tableaux, Schur functions, plethysm, and the
Robinson-Schensted correspondence.
• Topic: Vanishing Cycles and Kaleidoscopic Quadrilateral Tilings
• Speaker: Allen Broughton, brought@rose-hulman.edu
• Date: December 11, 2002
• Abstract: For the last 5 years the focus of the Rose-Hulman REU Tilings group has been hyperbolic, kaleidoscopic tilings of Riemann surfaces by triangles. A lot has been discovered about these
objects including a complete classification up to genus 13. Last summer we pushed beyond triangles to consider quadrilateral tilings. On the plus side the group theory did get a bit simpler; on
the minus side we lost rigidity. A surface constructed from triangles is rigid in the sense that there are no transformations that preserve both angles and area. This is not true in the
quadrilateral case. The euclidean analog is that all triangles with congruent corresponding angles and the same area are congruent. However, there is a one-parameter family of mutually
non-congruent rectangles with the same area. On hyperbolic surfaces the same holds true, but there is an interesting twist. As we vary the quadrilaterals through an infinite family of
equiangular, equal area quadrilaterals some curves on the surface take on arbitrarily small lengths, and shrink to a point as we go to infinity. These are the so-called "vanishing cycles" studied
in algebraic geometry. We will show how to identify the vanishing cycles in simple geometric terms. Much of the talk will be explaining the basic concepts in terms of small visual examples.
Students Isabel Averill, Michael Burr, John Gregoire and Kathryn Zuhr all contributed to this project.
• Topic: Equations, Scramblings, and Random Walks in Finite Groups
• Speaker: Gary Sherman, gary.sherman@rose-hulman.edu
• Date: December 18, 2002
• Abstract: We prove (casually) that the probability of solving an equation in a (finite) group is just about the reciprocal of the cardinality of the group's derived subgroup. Our approach is to:
□ view your favorite group equation, xy = yx, in terms of a permutation action,
□ introduce a new class of permutations, so-called scramblings, which are combinatorially related to derangements, and
□ spawn a natural random walk on the derived subgroup. Natural research questions suitable for undergraduates ensue.
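A brute-force check on the smallest nonabelian group makes the claim concrete (the choice of S_3 is mine, not from the talk). Its derived subgroup is A_3, of order 3, so the reciprocal is 1/3, while the exact commuting probability for S_3 is 1/2 -- "just about", in the casual spirit of the abstract.

```python
from itertools import permutations
from fractions import Fraction

def compose(p, q):
    """Composition of permutations written as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))                 # the symmetric group S_3

# exact probability that a random pair satisfies xy = yx
commuting = sum(compose(x, y) == compose(y, x) for x in G for y in G)
prob = Fraction(commuting, len(G) ** 2)

# the set of commutators x y x^-1 y^-1 (for S_3 it is all of A_3)
commutators = {compose(compose(x, y), compose(inverse(x), inverse(y)))
               for x in G for y in G}
```

Since the number of solutions of xy = yx equals |G| times the number of conjugacy classes, the same brute force also recovers the classical commuting-probability formula.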
• Topic: de Casteljau's Algorithm in Hyperbolic Space
• Speaker: Alla Genkina, Rose CS major
• Date: January 22, 2003
• Abstract: Geometric Modeling can be defined as the application of mathematics to describe the shape and properties of physical or virtual objects. This application of mathematics extends to
various industrial and graphical fields. Since most of the fields are computerized, the algorithms developed to describe objects in mathematical terms can be programmed and analyzed by computers.
This presentation will describe de Casteljau's algorithm which is used to generate Bezier curves. The curves that are created can then be utilized to model various objects. The presentation will
demonstrate the use of the algorithm in both Euclidean and Hyperbolic Space, but the main concentration will be on its application and use in Hyperbolic Space.
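In the Euclidean case the algorithm is nothing but repeated linear interpolation; the control points below are an arbitrary example of mine. (The hyperbolic version replaces each straight-line interpolation with interpolation along a geodesic.)

```python
def de_casteljau(control, t):
    """Evaluate the Bezier curve with the given control points at
    parameter t in [0, 1] by repeated linear interpolation."""
    pts = list(control)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(P, Q))
               for P, Q in zip(pts, pts[1:])]
    return pts[0]

# a quadratic Bezier curve from three control points
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
mid = de_casteljau(ctrl, 0.5)   # -> (1.0, 1.0)
```

The curve interpolates the first and last control points (t = 0 and t = 1) and is pulled toward the interior ones, which is what makes the construction so convenient for modeling.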
• Topic: Prescribing the curvature of level curves
• Speaker: Dave Finn, finn@rose-hulman.edu
• Date: January 29, 2003
• Abstract: Given a function u(x,y), it is a straightforward calculation in vector calculus to determine the curvature K[u] of the level curves of u. This curvature can be computed using the
Hessian of u. In this talk, we consider the problem of prescribing the curvature of the level curves of a function:
Given a function k(x,y), is it possible
to find a function u(x,y) with K[u] = k?
As a problem in nonlinear partial differential equations, this problem poses some interesting questions, starting with the nature of the equation, the correct boundary values to consider, the
effect of the domain on solvability, the effect of the boundary values on solvability, and finally the existence of a solution.
• Topic: Why (and how!) we should all use group projects in all introductory statistics courses
• Speaker: Douglas Andrews, Wittenberg University
• Date: March 12, 2003
• Abstract: The overwhelming consensus emerging from the statistics education community over the past twenty years is for greater emphasis on exploratory data analysis, design, interpretation, and
concepts, at the expense of probability, theory, recipes, and techniques. Moreover, education reform efforts in many fields highlight the benefits of active and collaborative learning pedagogies,
as well as more authentic forms of assessment. Group data analysis projects -- in which students analyze data from simple observational studies and experiments of their own design -- can be an
ideal way to implement these stat ed recommendations and realize these broader educational benefits in introductory statistics courses for all audiences. In this talk, I'll lay out some of the
rationale for using such projects and give plenty of concrete advice for how to structure the experience.
• Topic: Automatic Differentiation of Algorithms (2 talks)
• Speaker: Jeffery Leader, leader@rose-hulman.edu
• Dates: March 26, 2003 and April 2, 2003
• Abstract: Many algorithms used in scientific computation require derivatives. Typically the function is provided--often in the form of a computer program--and the user must find or approximate
its derivative. Automatic differentiation is a technique for automatically generating a program that produces those derivatives, by reading in the code of the program defining the function,
considering its computational graph, and then finding its exact derivative. In Part I of this talk I will define the problem and outline its solution using this technique; in Part II of this talk
I will discuss the two principal modes of automatic differentiation, forward and reverse, and how they use the computational graph to produce code for the required derivative(s).
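Forward mode can be demonstrated without building the computational graph explicitly, by overloading arithmetic on dual numbers a + b*eps with eps^2 = 0. The example function is my own, and only + and * are overloaded here for brevity.

```python
class Dual:
    """Dual number a + b*eps (eps^2 = 0); the eps part carries d/dx."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)
    def __add__(self, other):
        o = self._coerce(other)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, other):
        o = self._coerce(other)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)   # product rule
    __rmul__ = __mul__

def derivative(f, x):
    """Exact derivative of f at x (no finite-difference truncation error)."""
    return f(Dual(x, 1.0)).der

# f(x) = x^3 + 2x, so f'(2) = 3*4 + 2 = 14
d = derivative(lambda x: x * x * x + 2 * x, 2.0)
```

Reverse mode, the subject of the second half of the talk, instead records the computation on a forward sweep and propagates adjoints backwards; it wins when one output depends on many inputs (gradients), while forward mode wins in the opposite case.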
• Topic: Homogenization: It's Not Just for Dairy Products
• Speaker: Kurt Bryan bryan@rose-hulman.edu
• Date: April 9, 2003
• Abstract: A material is homogeneous if its physical properties don't vary with position, at least at the physical scale of interest. But many (one could argue all) materials are not homogeneous
at the microscopic level, but possess a structure with small-scale periodic or random variation. Indeed, composite materials are intentionally designed with such small-scale variations, in order
to have certain desirable physical properties. In many cases one would like to predict the bulk or macroscopic physical properties of a composite material from the microscopic structure.
Homogenization is a set of mathematical techniques for modeling a material with microscopic inhomogeneous structure as a macroscopically homogeneous material. In this talk I'll show one
mathematical framework in which this is done, and illustrate with simple examples .
• Topic: Kaleidoscopic tilings on surfaces, this time with the groups (1st of a 2-talk series)
• Speaker: Allen Broughton, brought@rose-hulman.edu
• Dates: April 30, 2003 and May 7, 2003
• Abstract: In the past I have given several lectures on kaleidoscopic tilings by triangles and quadrilaterals on surfaces, and asserted in these talks that the tiling group completely determined
the combinatorial and topological structure of a tiling. However, I have never really talked about the influence of the group theory! In this series of two talks I will give two examples of
determining combinatorial and topological structure by group computations. Each talk will focus on a problem I intend to give to REU students this summer. Thus, there will be no general theorems, just problem statements with suggested methods of attack; the talks will focus on developing the background needed to reach the problem statements. The first talk will include the necessary review of tilings and hyperbolic geometry. You don't need to know much about group theory or hyperbolic geometry.
First talk: Constructing a fundamental domain for kaleidoscopically tiled surfaces. We are all familiar with the process of creating a torus by identifying opposite sides of a euclidean
rectangle. For higher genus surfaces of genus s > 1, a surface may be constructed by identifying sides of a hyperbolic 4s-gon. For a kaleidoscopically tiled surface can this be done so that the
polygon is a "nice" collection of tiles? The group theory computation will be focus on relating the infinite tiling group on the hyperbolic plane to the finite tiling group on the surface.
Second talk: When are kaleidoscopic tilings separating? Every edge of a kaleidoscopic tiling generates a reflection of the surface to itself fixing the edge. In the case of a sphere the fixed
point set (or mirror) of the reflection is a great circle which separates the sphere into two pieces. This is a very misleading example, since for higher genus the mirror very rarely separates the surface. The question is: is there a fast way to determine this splitting property from the properties of the tiling group? The talk will present a method of attack using the group algebra of the tiling group. Again, no previous knowledge of group theory is assumed.
• Topic: Guessing Secrets
• Speaker: Jon Mastin - Rose CS major
• Date: May 21 , 2003
• Abstract: This talk will present a variation on the game "20 questions" which has arisen in the last few years in relation to internet security. In the two player game, one player holds two or
more secrets (IP addresses) while the second player asks yes or no questions. If the first player must answer truthfully using one of his secrets, how much can the second player discover? We will
answer this question and discuss strategy from the point of view of both players.
• Topic: Mathematical Phylogeny
• Speaker: Jeff Leader, leader@rose-hulman.edu
• Date: September 19 and 26, 2001
• Abstract (September 19): I will discuss how search engines use the singular value decomposition (SVD) to improve, and to score, the relevance of the results they return. This material will be
needed for the second talk.
• Abstract (September 26): I will define the problem of mathematical phylogeny and the reconstruction of phylogenetic trees, then discuss current research being performed by Gary Stuart of ISU and
myself that uses the ideas of the first talk to create such trees.
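The common ingredient of both talks, the dominant singular value and vector of a matrix, can be sketched in a few lines of pure Python via power iteration on A^T A. This is a toy stand-in for a real SVD routine, meant only to show the building block behind SVD-based relevance scoring and the phylogeny construction:

```python
# Pure-Python sketch: dominant singular value by power iteration on A^T A.
import math

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def top_singular_value(A, iters=200):
    At = transpose(A)
    v = [1.0] * len(A[0])                  # non-degenerate start vector
    for _ in range(iters):
        w = matvec(At, matvec(A, v))       # one power-iteration step
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    Av = matvec(A, v)
    return math.sqrt(sum(x * x for x in Av))

# Tiny "term-document" matrix: rows = terms, columns = documents.
A = [[3.0, 0.0],
     [0.0, 2.0]]
print(top_singular_value(A))               # -> 3.0
```

A real application would keep the top few singular triples and score query relevance by cosine similarity in the reduced space.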
• Topic: Species Phylogenies from Whole Genomes using SVD
• Speaker: Gary Stuart (ISU), Gary.Stuart@isugw.indstate.edu
• Date: October 3, 2001
• Abstract : Following Jeff Leader's fine series of seminars introducing SVD (Singular Value Decomposition) as a tool for generating biomolecular phylogenies, I will describe some of our very recent
attempts to solve some very large problems using the same method. In particular, I will describe the generation of a tree summarizing the evolutionary relationships of 19 bacterial species.
Unlike most trees, which result from the analysis of only one or a few genes or proteins, this tree is based on an exhaustive comparison of over 35,000 proteins predicted from whole genome
sequence. Along the way, I plan to explore the "meaning" of the SVD relative to our application, and to present some of the (questionable?) assumptions upon which our method is based.
• Topic: The Mathematics of Financial Derivatives and Option Pricing
• Speaker: Kurt Bryan, bryan@rose-hulman.edu
• Date: October 17, 24, 31, 2001
• Abstract : Most people are familiar with traditional investments like stocks, bonds, and commodities. However, in the past few decades a huge market has arisen in the trading of "options" and
other "financial derivatives", contracts in which payment is based on the value of some benchmark, e.g., the price of a given stock on a certain date. In short, the value of the contract is
derived from the price of some underlying asset (hence the term "derivative").
As an example, suppose a contract is written in which I give you the option (but not the obligation) to buy from me one share of Microsoft stock for a guaranteed price of $50 on January 1, 2002
(today, October 15, it's selling for $57). This is an example of a European Call Option, in which you have the right to buy some asset at a guaranteed price sometime in the future. How much
should you pay to enter into such an agreement? Surprisingly, there is a very quantitative strategy for determining the price of this option contract.
In these talks (3 or 4) we'll examine the problem of option pricing. We'll start by looking at some common options, then at basic probabilistic models for asset prices. Finally, we'll derive the
celebrated Black-Scholes partial differential equation which shows how one can rationally determine option prices. This is work for which Robert Merton and Myron Scholes won the 1997 Nobel Prize
in economics.
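For reference, the closed-form solution of the Black-Scholes equation for a European call can be sketched directly (this is the standard textbook formula; the parameter names below are illustrative):

```python
# Black-Scholes price of a European call option -- the closed-form
# solution of the PDE derived in the talks. Parameters: spot S, strike K,
# risk-free rate r, volatility sigma, time to expiry T (years).
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# At-the-money call, one year out: S = K = 100, r = 5%, sigma = 20%
print(round(black_scholes_call(100, 100, 0.05, 0.2, 1.0), 4))  # ~10.4506
```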
• Topic: Ideal Error-Correcting Codes: Reed-Solomon Decoding without Fourier Analysis
• Speaker: Matt Lepinski (Rose alumnus, MIT graduate student), lepinski@theory.lcs.mit.edu
• Date: December 19, 2001
• Abstract : An error-correcting code is a set of strings (called codewords) such that any two strings in the set differ in a large number of positions. Error-correcting codes are very useful in
data transmission (where a noisy channel may corrupt some positions in the string). This is because many positions must be corrupted by the channel in order for the receiver to mistake one
codeword for another.
This talk deals with the decoding problem for error-correcting codes. That is, given a string, how do we find the codeword that differs from the given string in the fewest number of positions.
The codes considered in this talk will be the commonly used Reed-Solomon codes and the number theoretic Redundant Residue Number System codes. The talk will present a new algebraic framework for
thinking about error-correcting codes and show how this framework allows us to use the same ideas to decode both Reed-Solomon codes and RRNS codes. These ideas are of particular interest because
they can also be applied to decoding Algebraic Geometry codes which are asymptotically the best known codes. (Although a discussion of Algebraic Geometry codes is beyond the scope of this talk).
This talk assumes no prior knowledge of error-correcting codes. However, familiarity with polynomial algebra and finite fields is helpful in understanding some of the ideas in this talk. Most of
the material in this talk comes from the work of Madhu Sudan, Venkatesan Guruswami and Amit Sahai.
Probably the most common error-correcting codes in practice are the Reed-Solomon codes, which are based on polynomials over finite fields. These codes are used by everyone from NASA to CD players.
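A toy sketch of the evaluation/interpolation view of Reed-Solomon codes, over a prime field for simplicity (real implementations use GF(2^8) and full error correction; the sketch below only handles erasures):

```python
# Reed-Solomon-style encoding over GF(p): a length-k message becomes the
# coefficients of a polynomial, and the codeword is its values at n
# distinct field points. Any k of the n values determine the polynomial
# (recovered here by Lagrange interpolation), so n - k erasures are fine.
P = 257  # a prime, so integers mod P form a field

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):      # Horner's rule mod P
        acc = (acc * x + c) % P
    return acc

def encode(message, n):
    return [poly_eval(message, x) for x in range(n)]

def interpolate(points, k):
    """Recover the k coefficients from k (x, y) pairs via Lagrange."""
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis = [1]                      # Lagrange basis poly, low-to-high
        denom = 1
        for j, (xj, _) in enumerate(points):
            if i == j:
                continue
            denom = denom * (xi - xj) % P
            new = [0] * (len(basis) + 1)  # multiply basis by (x - xj)
            for t, b in enumerate(basis):
                new[t] = (new[t] - xj * b) % P
                new[t + 1] = (new[t + 1] + b) % P
            basis = new
        scale = yi * pow(denom, -1, P) % P
        for t in range(k):
            coeffs[t] = (coeffs[t] + scale * basis[t]) % P
    return coeffs

message = [42, 7, 100]                   # k = 3 symbols
codeword = encode(message, 7)            # n = 7, tolerates 4 erasures
# Recover from an arbitrary subset of 3 of the 7 positions:
subset = [(1, codeword[1]), (4, codeword[4]), (6, codeword[6])]
print(interpolate(subset, 3))            # -> [42, 7, 100]
```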
• Topic: Optimizing College Enrollments Under Uncertainty
• Speaker: Concetta DePaolo, Indiana State University, sdcetta@befac.indstate.edu
• Date: January 16,2002
• Abstract : Each year an institution must decide which students to admit in order to accomplish its goals (e.g. quality, enrollment, etc.) while satisfying various capacity constraints. This
presentation details a mathematical optimization model for this problem, which assumes that students are of different types that exhibit different (random) behavior. The presentation will
describe the properties of the optimal solution, as well as an implementation and a heuristic algorithm that are both Excel-based. How the model is being used by Indiana State University to
compare alternative admissions strategies and forecast the long-term effects of those strategies will also be touched upon.
• Topic: Automorphisms of Riemann Surfaces, Galois Groups, and Hecke Algebras
• Speaker: Allen Broughton, Rose-Hulman, brought@rose-hulman.edu
• Date: March 20 and 27, 2002
• Abstract : There is a classical and very well-understood connection between automorphism groups of compact Riemann surfaces and Galois groups of branched coverings of surfaces. In the first of
this series of two talks we will introduce and explore this idea. In the second talk we will consider non-Galois coverings, and see how this situation can be partially captured by Hecke Algebras.
These talks will highlight past and continuing work by students in the "tilings group" of the Rose-Hulman REU.
• Topic: The Theodorus Equations
• Speaker: Jeff Leader, Rose-Hulman, jeff.leader@rose-hulman.edu
• Date: May 1, 2002
• Abstract : The square-root spiral, or spiral of Theodorus, will be introduced, then generalized to a map on R^n with many strange attractors.
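The square-root spiral is a one-line complex iteration; a sketch, with |z_n| = sqrt(n) as a built-in check:

```python
# The spiral of Theodorus as a complex iteration: each step attaches a
# unit segment at a right angle to the current radius, i.e.
# z_{n+1} = z_n + i * z_n / |z_n|, so |z_{n+1}|^2 = |z_n|^2 + 1
# and |z_n| = sqrt(n) exactly.
import math

z = 1 + 0j                       # z_1, with |z_1| = sqrt(1)
points = [z]
for n in range(1, 20):
    z = z + 1j * z / abs(z)      # rotate-and-grow step
    points.append(z)

print(abs(points[9]))            # |z_10| = sqrt(10) ≈ 3.1623
```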
• Topic: Calculation of Bernoulli Numbers and Values of Zeta Functions
• Speaker: Josh Holden, Rose-Hulman, josh.holden@rose-hulman.edu
• Date: May 15, 2002
• Abstract : This talk will discuss some of the methods known to calculate Bernoulli numbers, with an emphasis on asymptotic analysis of their running times. Definitions (and some motivation) will
be provided. We will also discuss some more modern extensions of the Bernoulli number concept, and explore how and whether the methods for calculating Bernoulli numbers extend.
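As a baseline for the running-time comparisons, the naive O(n^2) recurrence method can be sketched with exact rational arithmetic:

```python
# Exact Bernoulli numbers from the defining recurrence
#   sum_{j=0}^{m} C(m+1, j) B_j = 0   (m >= 1),  B_0 = 1,
# computed with rational arithmetic. This O(n^2) method is the naive
# baseline against which faster algorithms are measured.
from fractions import Fraction
from math import comb

def bernoulli(n):
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B

print(bernoulli(6))  # B_0..B_6: 1, -1/2, 1/6, 0, -1/30, 0, 1/42
```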
• Topic: Statistics, Earwax, and the Bering Strait
• Speaker: Dr. Doug Wolf, Department of Statistics, Ohio State University
• Topic: Teaching statistics the EESEE way
• Speaker:Dr. Elizabeth Stasny, Department of Statistics, Ohio State University
• Date: September 12, 2000
• Abstract: This was a visit to recruit students into graduate statistics programs
• Topic: Singular Solutions to a Partial Differential Equation Arising in Corrosion Modeling
• Speaker: Kurt Bryan, bryan@rose-hulman.edu
• Date: September 20 and 27, October 4, 2000
• Abstract for Sept. 20 and 27: I'll talk about some joint work with Michael Vogelius at Rutgers University, specifically a partial differential equation (PDE) that arises in the modeling of
electrochemical systems. Although the PDE is linear, the boundary conditions contain an exponential type of nonlinearity. Under certain conditions the problem has a unique solution, but in other
cases the boundary value problem has an infinite family of solutions with logarithmic singularities on the boundary of the domain. I'll show some numerical simulations, what we've been able to
deduce about the nature of the solutions, and talk about what remains to be proved.
Abstract for Oct. 4: I'll discuss joint work with Lester Caudill at the University of Richmond, specifically a partial differential equation that arises in the modeling of heat flow through an
object with an interior "crack" or flaw. The flaw is modeled as a discontinuity or jump in the temperature over the flaw, with a nonlinear relationship between the heat flux over the flaw and the
temperature jump. I'll look at conditions under which the PDE has a unique solution, and discuss the inverse problem that motivates this: how to determine the location and nature of the interior
flaw from boundary measurements.
• Topic: Cwatsets
• Speaker: Gary Sherman Gary.Sherman@rose-hulman.edu, and Dennis Lin, Rose student
• Date: October 18, 25, November 1, November 8, 2000
• Abstract: A cwatset is a subset of binary n-space that is closed (c) with (w) a (a) twist (t). For example, C = {000,110,101} is a cwatset because:
C + 000 = C,
C + 110 = {110,000,011} is C with the first two components of each element transposed,
C + 101 = {101,011,000} is C with the first and last components of each element transposed.
That is, for each element c of C there exists a permutation, pi, of three symbols such that the coset C + c is just C with pi applied to the components of each element of C.
The theory of cwatsets has roots in statistics (a cwatset determines a confidence interval for the mean or median of a symmetric random variable) and blossoms in graph theory (each isomorphism
class of simple graphs has a unique cwatset associated with it) and algebra (constructions, morphisms, representations). In this sequence of four talks we trace the development of the theory from
the first cwatset sighting at Rose-Hulman in 1987 to the latest results on isomorphism classes of cwatsets while highlighting the contributions undergraduates have made to the theory.
Talk 1: The statistical motivation for cwatsets, examples of cwatsets, and constructions of cwatsets.
Talk 2: The group theoretic ideas which bare the soul of the theory of cwatsets.
Talk 3: The connection between representation and isomorphism of cwatsets.
Talk 4: The determination of all cwatsets of order at most 23.
Talks 1,2 and 3 will be given by Gary Sherman and talk 4 will be given by Dennis Lin (a Rose student).
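The defining property can be checked mechanically for small examples; a brute-force sketch that searches over all coordinate permutations:

```python
# Mechanical check of the cwatset property: for each c in C, the coset
# C + c (bitwise XOR) must equal C with some fixed permutation applied
# to the coordinates of every word. Brute force over all permutations.
from itertools import permutations

def is_cwatset(words):
    n = len(words[0])
    universe = [tuple(w) for w in words]
    for c in universe:
        coset = {tuple(a ^ b for a, b in zip(w, c)) for w in universe}
        permuted_copies = ({tuple(w[p] for p in perm) for w in universe}
                           for perm in permutations(range(n)))
        if not any(copy == coset for copy in permuted_copies):
            return False
    return True

C = [(0, 0, 0), (1, 1, 0), (1, 0, 1)]          # the example above
print(is_cwatset(C))                           # True
print(is_cwatset([(0, 0), (1, 1), (0, 1)]))    # False: no permutation works
```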
• Topic: Pi in the Mandelbrot set
• Speaker: Aaron Klebanoff
• Date: Dec. 6, 2000
• Abstract: The Mandelbrot set is arguably one of the most beautiful sets in mathematics. In 1991, Dave Boll discovered a surprising occurrence of the number pi while exploring a seemingly
unrelated property of the Mandelbrot set. Boll's finding is easy to describe and understand, and yet it is not widely known -- possibly because the result has never before been shown rigorously.
In this presentation, I will provide the necessary background material to understand what the Mandelbrot set is and what Boll's discovery was. I will then outline a proof of the result.
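Boll's experiment is easy to reproduce: iterate just off the "neck" of the Mandelbrot set at c = -3/4 and count iterations until escape. The product of the count and the offset approaches pi as the offset shrinks:

```python
# Boll's observation: iterate z <- z^2 + c with c = -3/4 + eps*i and
# count iterations until |z| > 2. The count times eps tends to pi
# as eps -> 0.
def escape_count(eps, max_iter=10**7):
    c = complex(-0.75, eps)
    z = 0j
    n = 0
    while abs(z) <= 2 and n < max_iter:
        z = z * z + c
        n += 1
    return n

eps = 1e-4
n = escape_count(eps)
print(n * eps)   # ≈ 3.1416
```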
• Topic: Why Chaos Toys are Chaotic.
• Speaker: Aaron Klebanoff
• Date: Dec. 13 and 20, 2000
• Abstract for Dec. 13: The Horseshoe Map. The horseshoe map is a simple map of the unit square into itself that is the prototypical example of a chaotic map. I will define the map, explore its
dynamics, and subsequently define what is meant by a chaotic dynamical system. Although this talk stands alone, it is preliminary material for next week's talk.
Abstract for Dec. 20: A Simple Chaotic Toy. I will describe a simple (chaotic) toy that my colleague and I developed, built, and analyzed. I'll outline a rigorous argument for showing that the
toy (along with many executive-type "chaotic" desk toys) is chaotic by showing that it is well modeled by a system that is conjugate to the horseshoe map. I'll also show a picture of the real toy
as well as some computer generated animations.
• Topic: A New Formula for Computing Frobenius Numbers in Three Variables.
• Speaker: Janet Trimm, Rose student
• Date: Jan 24, 2001
• Abstract : It is well known that if a and b are relatively prime positive integers, then the Frobenius number of a and b is equal to ab-a-b. Many authors have developed "explicit" formulas and
algorithms for computing Frobenius numbers of relatively prime integers a1,a2, ... an when n>2. But these formulas and algorithms are clumsy and complicated even for n=3. In this paper, we will
prove that there is, surprisingly, a nice formula that computes the Frobenius number of three positive integers a, b, and c where a and b are relatively prime.
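Both the two-variable formula and small three-variable cases can be checked by brute force (the `limit` search cap below is an ad hoc choice for these small examples, not part of any algorithm from the talk):

```python
# Brute-force Frobenius number: the largest integer not representable
# as a non-negative integer combination of the given coprime integers,
# checked against the classical two-variable formula ab - a - b.
def frobenius(nums, limit=10000):
    reachable = [False] * (limit + 1)
    reachable[0] = True
    for v in range(1, limit + 1):
        reachable[v] = any(v >= a and reachable[v - a] for a in nums)
    return max(v for v in range(limit + 1) if not reachable[v])

print(frobenius([3, 7]))        # 11 = 3*7 - 3 - 7
print(frobenius([6, 9, 20]))    # 43: the three-variable case is harder
```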
• Topic: On the Probability that a Monic Integral Polynomial Is Irreducible
• Speaker: Timothy Kilbourn, Rose student
• Date: Jan 31, 2001
• Abstract : It is proved that if m is any positive integer, then the limiting value, as the prime-power q goes to infinity, of the probability that an m-th degree polynomial in F_q[X] is
irreducible is 1/m. As a corollary, one obtains an identity which is indexed by the partitions of m and whose terms are unit fractions. Analogous probabilistic studies are carried out for various
classes of integral polynomials, where the underlying notion of "probability" is defined in the spirit of "natural density," namely, as the limiting value, as n goes to infinity, of the usual
combinatorial probability of irreducibility in Q[X] (equivalently, Z[X]) for integral polynomials whose coefficients are bounded in absolute value by n. With this notion of "probability", it is
shown that if m is between 2 and 6, then with probability 1, the random integral polynomial X^m+aX+b is irreducible; and if m is between 1 and 5, the same conclusion holds for the random monic
integral m-th degree polynomial. Numerical evidence is presented in support of related conjectures.
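The m = 3 case of the first result can be verified by brute force, since a cubic over a field is irreducible exactly when it has no root:

```python
# Brute-force check over F_5: the number of monic irreducible cubics
# should be (q^3 - q)/3 of the q^3 monic cubics, i.e. probability -> 1/3.
q = 5

def has_root(a, b, c):
    # roots of x^3 + a x^2 + b x + c over F_q
    return any((x**3 + a * x**2 + b * x + c) % q == 0 for x in range(q))

count = sum(1 for a in range(q) for b in range(q) for c in range(q)
            if not has_root(a, b, c))
print(count, (q**3 - q) // 3)   # 40 40
```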
• Topic: Mathematical Modeling with Categories
• Speaker: Ralph Wojtowicz
• Date: Feb 7, 2001
• Abstract : Every concept arises from the equation of unequal things. Just as it is certain that one leaf is never totally the same as another, so it is certain that the concept "leaf" is formed
by arbitrarily discarding these individual differences and by forgetting the distinguishing aspects. ...What then is truth? A movable host of metaphors, metonymies, and anthropomorphisms: in
short, a sum of human relations which have been poetically and rhetorically intensified, transferred, and embellished, and which, after long usage, seem to a people to be fixed, canonical, and
binding. Truths are illusions which we have forgotten are illusions... it is originally "language" which works on the construction of concepts, a labor taken over in later ages by "science".
--Friedrich Nietzsche
"On Truth and Lies in a Nonmoral Sense" (1873)
A theory is a mathematical model for an aspect of nature. One good theory extracts and exaggerates some facets of truth. Another good theory may idealize other facets. A theory cannot duplicate
nature, for if it did so in all respects, it would be isomorphic to nature itself and hence useless, a mere repetition of all the complexity which nature presents to us, that very complexity we
frame theories to penetrate and set aside. With this sober and critical understanding of what a theory is, we need not see any philosophical conflict between two theories, one of which represents
a gas as a plenum, the other as a numerous assembly of punctual masses. Models of either kind represent aspects of real gases; if they represent those properly, they should
entail many of the same conclusions, though of course not all.
---Clifford A. Truesdell and Robert G. Muncaster
"Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas" (1980)
...in mathematical practice we must, more than in any other science, hold a given object quite precisely in order to construct, calculate, and deduce; yet we must also constantly transform it
into other objects.
---F. William Lawvere
"Some thoughts on the Future of Category Theory" (1990)
Categories are abstract mathematical structures which may be viewed as the places where mathematical models live. A category consists of two sorts of things: objects and morphisms. Every morphism
has source and target objects: Source ----> Target. Each object has an identity morphism and there is an associative composition operation on adjacent pairs of morphisms. An example is the
category having sets as objects and functions as morphisms.
The language of category theory is rich enough to describe diverse structures which arise in mathematical modeling and to express precise comparisons between models of different types. After
discussing basic definitions and examples and giving a brief history of the theory, I will describe categories of sets and of stochastic matrices and a category having transition probabilities as
morphisms. I will give examples of deterministic and stochastic, discrete-time dynamical systems and show how the former may be viewed as special cases of the latter. Certain constructions that
can be made with sets (points, cartesian products, disjoint unions) have useful interpretations in other categories. I will also present an implementation of the category of stochastic matrices
using Maple.
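The stochastic-matrix category can be sketched concretely: morphisms are row-stochastic matrices, composition is matrix multiplication, and a deterministic map embeds as a 0/1 matrix. (Toy Python code, not the Maple implementation mentioned above.)

```python
# Category of stochastic matrices: objects are finite sets (here just
# their sizes), morphisms are row-stochastic matrices, composition is
# matrix multiplication. A deterministic function is the special case
# of a 0/1 stochastic matrix.
def compose(A, B):
    """A followed by B: (A;B)[i][k] = sum_j A[i][j] * B[j][k]."""
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def from_function(f, n, m):
    """Embed f: {0..n-1} -> {0..m-1} as an n-by-m 0/1 matrix."""
    return [[1.0 if f(i) == k else 0.0 for k in range(m)] for i in range(n)]

def is_stochastic(M):
    return all(abs(sum(row) - 1.0) < 1e-12 and min(row) >= 0 for row in M)

A = [[0.5, 0.5], [0.2, 0.8]]              # one step of a 2-state Markov chain
B = [[1.0, 0.0], [0.3, 0.7]]
print(is_stochastic(compose(A, B)))       # True: composition stays stochastic

F = from_function(lambda i: (i + 1) % 3, 3, 3)   # deterministic shift on 3 points
print(compose(F, F)[0][2])                # 1.0: (shift;shift) = shift-by-2
```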
• Topic: Improving Solar Car Strategy
• Speakers: Brad Berron, Todd Goldfinger, Mike Ritter, Tom Schneider, Bill Stephen, Jerod Weinman, Rose students
• Date: Feb 14, 2001
• Abstract : The MA331 Mathematical Modeling class has been working for the past four weeks on modeling a few important aspects of the Rose-Hulman Solar Phantom VI solar car project, with the goal
of improving race strategy for the upcoming 2001 American Solar Challenge. We have focused on two basic issues: calculating available power from sunlight and computing torque-power curves for
various speeds and hill grades. The intensity of solar radiation changes over the course of a day, and depends on the current latitude, time of year, cloud conditions, and angle of the solar cell
array on the car. The resulting power available versus time of day curve can then be used to help determine race strategy for that day (e.g., the maximum speed allowed by the available power for
current road and weather conditions). To complement these calculations, we combined the efficiency curves provided by the engine manufacturer with the vast amounts of data compiled by past solar
car runs to find torque-power curves for different constant speeds and hill grades. These models can use the GPS data supplied by the race coordinators, giving information like latitude and
altitude along the racecourse, to help determine optimal race strategies.
• Topic: Elliptic Curve Cryptography
• Speakers: John Rickert, rickert@rose-hulman.edu
• Date: March 21, 28 and April 4, 2001
• Abstract : Public key cryptography has been applied to many systems in which encoding and decoding a secret message needs to be simple, while cracking to code must be difficult. Computer
security, Internet sales and smart cards are three of the places in which public key cryptosystems are being used. In 1978, Rivest, Shamir, and Adleman proposed the RSA cryptosystem, which is
currently used in many secure applications. As attacks on RSA have grown more sophisticated, other cryptosystems have been proposed. The Elliptic Curve cryptosystem is currently in use, and is
growing in popularity, especially in applications, such as smart cards, in which memory is limited.
March 21 - Number Theory and Public Key Cryptography
An introduction to public key cryptography and how some basic ideas from number theory are used to generate relatively secure cryptographic systems. This talk will discuss elementary modular
arithmetic and RSA cryptography.
March 28 - Introduction to Elliptic Curves
A look at some simple examples of elliptic curves and the emergence of algebraic structure through some simple geometry and basic polynomial algebra. We will also look at how the correspondence
between the algebra and the geometry is used to work with elliptic curves over finite fields.
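The group law on a toy curve can be sketched directly. The curve y^2 = x^3 + 2x + 2 over F_17 below is a standard small textbook example with 19 points; real systems use fields with hundreds of bits:

```python
# Point addition on the elliptic curve y^2 = x^3 + 2x + 2 over F_17.
# None represents the point at infinity (the group identity). This
# group law is what elliptic curve cryptosystems exponentiate in.
P_MOD, A = 17, 2   # curve y^2 = x^3 + A*x + 2 mod P_MOD

def ec_add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                      # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD)          # chord slope
    s %= P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def ec_mult(k, P):
    R = None                             # double-and-add scalar multiply
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (5, 1)                               # a generator; the group order is 19
print(ec_add(G, G))                      # (6, 3)
print(ec_mult(19, G))                    # None (back to the identity)
```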
• Topic: Mathematical and Computer Models of Specifications for Complex Systems
• Speakers: Bill Schindel, ICTT, Bill.Schindel@ictt.com
• Date: April 18, 2001
• Abstract : ICTT, in conjunction with System Sciences, LLC, carries out systems engineering projects for industrial clients with high complexity systems composed of many technologies--mechanical,
electronic, hydraulic, computers and communication, and business processes. (Refer to www.ictt.com.) The
company has, over many years, evolved a methodology called Systematica Methodology(tm). This methodology is specialized for modeling not single systems but (economically more important) families
(product lines) of systems, in which patterns can be detected and their common content propagated and managed. Among the patterns we are interested in are patterns of intelligent behavior. The
resulting approach establishes class hierarchies of system models that show the degree to which families of systems share common content (behaviors in particular) and the extent to which these
are variant to satisfy local (e.g., market) specialization needs. A broad group of rules, called Gestalt Rules, is used to express commercial engineering guidelines for individuals trying to
keep their divisional designs consistent with the general patterns of corporate architectures.
This suggests that metrics could be developed to express a variety of useful commercially and scientifically/mathematically interesting quantities--similarity, re-use, variance, etc., along with
a number of other interesting and useful tools. These appear to be potentially good student projects. The general subject of this work is important (economically and competitively) in almost all
large product line oriented product and services enterprises.
• Topic: Mathematical Modeling of Shape Memory Alloys
• Speakers: Tanya Leise, Tanya.Leise@rose-hulman.edu
• Date: April 25, 2001
• Abstract : The first shape memory alloy was discovered in 1962 at the Naval Ordnance Lab, when metallurgist
William Buehler passed a NiTi sample around at a meeting and showed how the metal was very flexible and held up well to repeated bending. You can imagine their surprise when, on a whim, a
pipe-smoking scientist heated the bent sample with his lighter and it immediately sprang out straight again. I'll provide samples of nitinol wire and springs so we can experience this phenomenon
firsthand, and then we'll look at some of the mathematical models developed in the past few decades for shape memory alloys. (In particular, I'll include a model that makes a great calculus project, one that takes students beyond naive implementation of the Second Derivative Rule.)
• Topic: Partial Least Squares Analysis
• Speakers: Yosi Shibberu, shibberu@rose-hulman.edu
• Date: May 2, 2001
• Abstract : Partial least squares analysis is a relatively new empirical modeling technique. It has been found to be useful in rational drug design where the objective is to relate the chemical
structure and properties of a drug molecule to its biological activity. In such problems, the number of variables significantly exceeds the number of equations. We will begin with a review of
ordinary least squares and principal component analysis. Partial least squares analysis will then be introduced and compared to the previous two techniques. A simple spring system will be used to
illustrate the main ideas.
• Topic: Hidden Markov Models
• Speaker:Yosi Shibberu shibberu@rose-hulman.edu
• Date: September 22, September 29, and October 6, 1999
• Abstract: Hidden Markov Models were introduced and developed in the late 1960s and early 1970s. They are in widespread use in speech recognition computer algorithms and more recently are being
used as part of computational algorithms in bioinformatics. We will begin with a description of Hidden Markov Models and then proceed to an application of these models to DNA sequence alignment.
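The workhorse computation for these models, evaluating the probability of an observation sequence, is the forward algorithm. A sketch on a toy two-state model (all probabilities below are made-up illustrative numbers), checked against exhaustive enumeration of hidden paths:

```python
# Forward algorithm for a hidden Markov model: computes the probability
# of an observation sequence by summing over all hidden state paths in
# O(T * N^2) time instead of enumerating the N^T paths.
from itertools import product

start = [0.6, 0.4]                # P(initial hidden state)
trans = [[0.7, 0.3], [0.4, 0.6]]  # P(next state | state)
emit  = [[0.9, 0.1], [0.2, 0.8]]  # P(observation | state)

def forward(obs):
    alpha = [start[s] * emit[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * trans[s][t] for s in range(2)) * emit[t][o]
                 for t in range(2)]
    return sum(alpha)

def brute_force(obs):
    total = 0.0
    for path in product(range(2), repeat=len(obs)):
        p = start[path[0]] * emit[path[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= trans[path[t - 1]][path[t]] * emit[path[t]][obs[t]]
        total += p
    return total

obs = [0, 1, 1, 0]
print(abs(forward(obs) - brute_force(obs)) < 1e-12)   # True
```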
• Topic: Algebraic Numbers and Triangle Iterations
• Speaker:Matt Lepinski, Rose-Hulman student
• Date: October 20, 1999
• Abstract: The Hermite problem is to find ways of representing a number which make algebraic properties of the number apparent. One well studied solution to the Hermite problem is the continued
fraction which is a method of representing a number as a sequence of integers in such a way that the sequence is repeating if and only if the number is a quadratic irrational. We present a
generalization of the continued fraction based on an iterative mapping of a triangle in the plane. This allows us to represent a point in the plane with a sequence of integers that is periodic
only if both coordinates of the point are algebraic numbers of degree at most three. We show how these sequences can be used to construct integer vectors arbitrarily close to a plane in three
dimensions. We then present a link between this linear algebra and the geometry of the iteration that defines the sequences. Finally, we make use of this connection to address the question of
when an infinite sequence represents a unique point.
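The periodicity for quadratic irrationals is easy to observe with the classical exact integer recurrence for sqrt(D), D not a perfect square:

```python
# Continued fraction expansion of sqrt(D) via the classical exact
# integer recurrence; the partial quotients are eventually periodic
# precisely because sqrt(D) is a quadratic irrational.
from math import isqrt

def cf_sqrt(D, terms):
    a0 = isqrt(D)
    result, m, d, a = [a0], 0, 1, a0
    while len(result) < terms:
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        result.append(a)
    return result

print(cf_sqrt(2, 6))    # [1, 2, 2, 2, 2, 2] -- period 1
print(cf_sqrt(7, 9))    # [2, 1, 1, 1, 4, 1, 1, 1, 4] -- period 4
```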
• Topic: Motion of a Hanging Chain after the Free End Is Given an Initial Velocity
• Speaker: Herb Bailey, Herb Bailey@rose-hulman.edu
• Date: October 27, 1999
• Abstract: One end of a chain is attached to the ceiling and the free end is given a sharp horizontal blow. The resulting pulse travels to the top of the chain, and a few seconds later the
reflected pulse causes the free end to give a kick. The free end kicks again and again at regular intervals. The time between kicks is constant and has been accurately predicted by the solution
of an ordinary differential equation. Close observation of the nature of successive kicks shows that they are not always in the same direction, but they do follow a pattern that repeats every
four kicks. We have modeled this experiment by solving the wave equation with variable tension and summing the resulting series solution. The lateral deflection as a function of time and
distance along the chain was calculated. The predicted deflection of the free end is in good agreement with experimental results obtained from a movie of the chain motion.
• Topic: An Introduction to Dynamically Accelerating Cracks
• Speaker: Tanya Leise Tanya.Leise@rose-hulman.edu
• Date: November 3, November 10, 1999
• Abstract: Fracture behavior is highly dependent on the microstructure of the material. Two competing philosophies have arisen in the study of dynamic fracture mechanics. Lattice models and
molecular dynamics treat materials as a lattice of atoms and focus on the microscale to capture the microstructure of the material. Continuum models treat the material as a bulk with large-scale
properties, with special small-scale properties only near the crack-tip, where the microstructure has the strongest effect on the material response. A continuum model for a dynamically
accelerating crack will be presented with a solution method based on the idea of a Dirichlet-to-Neumann map. The fracture problem is essentially a hyperbolic PDE with boundary conditions that
depend on the unknown crack tip path. To solve for the crack tip motion, an appropriate fracture criterion must be chosen to model the material's microstructure and fracture behavior. The
solution method will be demonstrated for the simplest case of a single crack in the context of antiplane shear in a linearly elastic material.
• Topic: Dynamical Analysis of a Chaotic Electrical Circuit
• Speaker: Evan Graves, Rose student
• Date: December 15, 1999
• Abstract: Rollins and Hunt published a paper in 1982 in which they described the presence of chaotic behavior in a simple electrical circuit consisting of a resistor, inductor, and a diode in
series. They attributed the nonlinear dynamics to the way in which the diode acted as either a capacitor or a voltage source, depending on the flow of current and the reverse recovery time of the
diode itself. They also developed a set of equations that they claimed would model the circuit behavior. My objective was to experimentally acquire data and use the given model as a guideline to
analytically determine the underlying dynamics of the chaotic system. As voltage increases in the circuit, driven at a frequency around the system's resonance, we are able to see period doubling
in both the voltage and the current along with chaotic behavior at high voltages. Chaotic data of the observed current was analyzed with the use of software developed by Randle Inc. at Applied
Nonlinear Sciences, LLC. A three dimensional representation of the reconstructed chaotic attractor was created, along with an approximation for its true dimension and the corresponding Lyapunov
exponents of the system.
• Topic: Which way did that bicycle go? and other geometric questions about bicycle tracks
• Speaker: David Finn finn@rose-hulman.edu
• Date: January 12, 2000
• Abstract: You are walking across a snow covered road, and come across a set of bicycle tracks. Can you tell which way the bicycle was going? Which track was produced by the front tire and back
tire of the bike? In this talk, we will discuss a mathematical description of bicycle tracks to answer these questions and pose others.
• Topic: Interval Methods for Optimization
• Speaker: Jeff Leader, Jeffrey.Leader@rose-hulman.edu
• Date: January 26, February 2 and February 9, 2000
• Abstract: Part I (January 26) We introduce interval arithmetic, an extension of standard floating
point arithmetic which uses directed rounding to generate intervals guaranteed to contain the result of a given computation. We indicate some types of problems for which this approach, sometimes
called reliable computing, is appropriate, and address the issue of whether or not the width of the intervals can be made sufficiently small. (Interval packages are available for C/C++, Fortran,
MATLAB, and other languages.)
Part II (February 2) We discuss the interval Newton's method as a prelude to introducing Hansen's method, an interval technique for global optimization. We discuss the advantages and
disadvantages of this technique and compare it to other methods for global optimization problems.
Part III (February 9) We discuss the centered form and its variations. These are the particular expressions used to obtain quadratic convergence in interval methods. We also discuss some related
applications of interval methods in matrix algebra.
• Topic: Which way did he say that bicycle went?
• Speaker: David Finn finn@rose-hulman.edu
• Date: April 12, 2000
• Abstract: In a previous talk, I claimed that the only bicycle tracks for which you can not tell which way the bicycle went are either straight lines or concentric circles. This fact about bicycle
tracks is true only in a restricted case. We need to assume a hypothesis that was not stated during the previous talk. In this talk, a general procedure for constructing bicycle tracks for which
you can not tell which way the bicycle went will be given.
• Topic: Higher Genus Soccer Balls and Kaleidoscopic Tilings in the Hyperbolic Plane.
• Speaker: Allen Broughton brought@rose-hulman.edu
• Date: April 19 and 26, 2000
• Abstract: Two talks on kaleidoscopic tilings, for a general mathematical audience of students and faculty. The purpose of the talks is to present an area of intriguing mathematical research, rich
with problems suitable for undergraduate research.
A soccer ball has an attractive pattern of pentagons and hexagons on its surface, with a great deal of symmetry. Baseballs and basketballs also have certain patterns of symmetry which are
different from the soccer ball pattern. Though the sportsman might never ask, a mathematician would be intrigued by the possibility of a higher genus soccer ball (a soccer ball with patterned
handles). It turns out that they exist in great abundance though we need to give up on having only hexagons and pentagons.
The key to creating and understanding "soccer balls" are kaleidoscopic tilings of the 2-dimensional geometries: the sphere, the euclidean plane and the hyperbolic plane. The sphere tilings, of
course, yield the patterns of sports balls. The tilings of the euclidean and hyperbolic planes form beautiful patterns and have their own artistic interest, as in some of the art of Escher. The
higher genus soccer balls, though impractical, are a convenient mental hook for generating questions about patterned surfaces, e.g., constructing simple examples.
In the first talk the relation between (higher genus) soccer balls and tilings will be explored, including an introduction to hyperbolic geometry.
In the second talk I will present some work completed jointly by undergraduates and myself on divisible kaleidoscopic tilings, i.e., simultaneous tilings of the plane by two different
kaleidoscopic polygons. It has a nice interplay between combinatorics (Catalan numbers) and geometry.
• Topic: Linear Fractional Transformations in Complex Euclidean Space
• Speaker: Jerry Muir Jerry Muir@rose-hulman.edu
• Date: May 3 and 10, 2000
• Abstract: Linear fractional transformations (LFTs) of a single complex variable are key tools for complex analysts interested in geometric properties of analytic functions in the plane. For
instance, an LFT can be used to conformally map a domain of the complex plane that is either the interior of a circle, the exterior of a circle, or a half-plane onto another prechosen domain of
the same type. Moreover, LFTs are the only functions that will do this. Using LFTs to switch domains is useful when the geometric nature of one domain is more suitable to a particular problem. If
similar processes existed in higher complex dimensions, they would likely have similar uses. We will define a higher-dimensional analog to one-variable LFTs and explore what, if any, one-variable
properties extend to the higher dimensional case, and we will consider several examples.
• Topic: The method of lines for solving differential equations.
• Speaker:Dave Voss, Western Illinois University
• Dates: Last half of winter quarter.
• Abstract: The Method of Lines (MOL) provides a flexible and general approach for solving systems of time dependent partial differential equations. Using MOL, the space variables are discretized
on a selected mesh yielding an approximating system of ordinary differential equations. The numerical solution of this system can present certain difficulties depending on the method used. In
this talk, the one-dimensional heat equation will be used to introduce MOL with semidiscretization provided by central difference approximations in space. Some of the numerical difficulties will
be exposed by following the time evolution using the Euler methods and the Crank-Nicolson method. Approaches for overcoming these difficulties will be suggested.
• Topic: Elementary Inversion of the Laplace Transform
• Speaker: Kurt Bryan bryan@rose-hulman.edu
• Dates: Last half of winter quarter.
• Abstract: Last summer while working on a research problem, I discovered a very beautiful and simple formula for inverting the Laplace Transform. Even the proof that the formula works is very
simple and involves only elementary analysis. After a long search (and a bit of luck---bad luck) I learned that the formula had been discovered by Emil Post in 1930. I'll show the formula, sketch
the proof that it inverts the Laplace Transform, and give a few computational examples.
• Topic: Root Locus, Feedback, and Block Diagrams
• Speaker: Robert Lopez
• Date:4/21/99
• Abstract: Our second course in differential equations has traditionally contained the topics of linear systems of ODEs, eigenvalues, and stability. Of late, there is a growing tendency to solve
these linear systems by Laplace transform. When our students enter engineering courses on feedback controls, courses which should build on our presentations of systems of ODEs, they see what is
at first glance, a totally different subject. The ODEs are hidden behind block diagrams, representations of the transfer functions, and "asymptotic stability" never appears in the engineering
controls texts. This talk will make the connection between the linear systems we teach, feedback control as seen in engineering, and the root locus, the locus, in the complex plane, of the roots
of a polynomial which depends on a parameter.
• Topic: Maple-Inspired Vignettes in Applied Math
• Speaker: Robert Lopez
• Date: 4/28/99
• Abstract: The following topics from classical applied math will be presented through the medium of Maple: the longitudinal vibrations in an elastic rod, Bezier curves, and the problem of " two
beads on a string through a hole in the table".
• Topic: Algorithmic RSA Number Factorization
• Speaker: Stephen Young, Rose student
• Date: 5/18/99
• Abstract: Presentation of an elementary algorithm to factor arbitrarily large RSA encryption numbers. Also discussion of the practicality of this algorithm and potential for efficient
• Topic: THE GAME OF BILLIARDS ON THE PLANE AND ITS CONNECTION WITH ARITHMETIC, GEOMETRY (TOPOLOGY), AND PHYSICS
• Speaker: Dr. Gregory Galperin, Department of Mathematics Eastern Illinois University
• Date: November 3, November 10, 1999
• Abstract: A billiard system is the simplest dynamical system one can imagine: it's just a region and one point that moves inside that region with unit speed and reflects off the boundary
according to the law "angle of incidence equals angle of reflection." It turns out, however, that the billiard system is very rich and can explain many interesting mathematical facts. The speaker
will discuss connections between billiards and mathematical notions and results such as hyperbolic geometry, compact surfaces, the "first digit problem" for powers of 2, decimal digits of pi ,
and geodesic flow on the surface of a polyhedron.
• Topic: Wavelet - based methods in Image Processing
• Speaker:Allen Broughton brought@rose-hulman.edu
• Dates: last half of winter quarter and first half of spring quarter.
• Abstract: Seven talks on mathematical methods used in image processing, the discussion will give the background on matrix models of images, Fourier and filtering methods and then finally wavelet
and filter bank methods.
• Special Notes: Lecture Notes
• Topic: RSA Cryptography
• Speaker: Kurt Bryan bryan@rose-hulman.edu
• Dates: last half of spring quarter
• Abstract: 4 to 6 talks explaining the very elementary number theory behind RSA, how RSA works, and allied topics like primality testing and factoring.
• Resources: outline
• Topic: Using Mathematics in Industry
• Speaker: Roy Primus, Rose alumnus
• Date: Friday, December 13, 1996
• Abstract: Roy Primus, a Rose-Hulman graduate (BA in Mathematics in 1975 and a Master's Degree in 1977) and currently the Research and Engineering Director of Combustion Research at Cummings
Engine Company, will give a presentation during 4th period on Friday, December 13, in E-104. The topic of his talk will be "Using Mathematics in Industry"
• Topic: Traveling Salesman Problem and Other Problems in Combinatorial Optimization
• Speakers: Kurt Bryan bryan@rose-hulman.edu, Lynn Kiaer, David Mutchler David.Mutchler@rose-hulman.edu
• Dates: winter and spring quarters
• Abstract: The traveling salesman problem, scheduling problems, computational solution methods, simulated annealing, genetic algorithms
• Topic: Wavelets for data analysis
• Speakers: Dave Bond
• Dates: winter quarter
• Topic: The USA Mathematical Talent Search: Rose-Hulman's Role in Its Development
• Speaker: George Berzsenyi
• Date: May 2, 1996
• Abstract: The USA Mathematical Talent Search (USAMTS) is a year-round program for talented high school students in creative mathematical problem solving. The present talk will provide an overview
of the past seven years of this program, focusing on Rose-Hulman's role in its development, on the last two years of activities, and on the future of the USAMTS. The speaker will also discuss
some of the mathematical problems used in the program and some of his other mathematical activities during his recent sabbatical leave.
|
{"url":"http://www.rose-hulman.edu/math/seminar/SeminarHistory.php","timestamp":"2014-04-16T10:29:11Z","content_type":null,"content_length":"253242","record_id":"<urn:uuid:ce1cd859-c0ac-42c4-9f15-18921a51e7ba>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
|
You can see a map showing only coastlines, rivers, etc instead.
Click on a place on the map to see the mathematician(s) born there.
Index of places on this map
Mathematicians born in Bosnia
Mathematicians born in Croatia
Mathematicians born in Italy
Mathematicians born in Malta
Map of Europe
|
{"url":"http://www-history.mcs.st-and.ac.uk/BirthplaceMaps/Italy.html","timestamp":"2014-04-19T06:51:35Z","content_type":null,"content_length":"14919","record_id":"<urn:uuid:09333b0d-a36c-4d37-b82e-2add4889f651>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The String Coffee Table
Tuesday at the Streetfest II
Posted by Guest
Yesterday was a bit of a blur for me - lack of sleep in the previous week catching up, but I did get Kapranov’s talk (excellent - Marni’s doing a post on that) and also Voronov’s (that’s not to say I
didn’t go to other talks :) ).
Voronov talked on operads (Informally calling his talk “the grocery of operads”), and didn’t really get to the Swiss cheese as promised, but was a good talk anyway, starting from the little intervals
operad. Actually he started from the definition of an operad, which is a souped up version of an $n$-ary operation - actually a space of operations
${V}^{\otimes k}\to V$
satisfying some axioms (mostly distributive stuff, I think - no details were given), and $V$ can be a vector space, but not necessarily.
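No details of the axioms were given in the talk, so as a concrete toy model (my own illustration, not from the talk), here is the endomorphism operad of a set in Python: a $k$-ary operation is just a function of $k$ arguments, and operadic composition plugs the outputs of operations $g_1,\dots,g_k$ into the $k$ inputs of $f$:

```python
def compose(f_op, *g_ops):
    """Operadic composition gamma(f; g_1, ..., g_k).
    An 'operation' is a pair (arity, function)."""
    k_f, f = f_op
    assert k_f == len(g_ops), "need one g_i per input of f"
    def h(*args):
        vals, i = [], 0
        for k_g, g in g_ops:
            vals.append(g(*args[i:i + k_g]))  # feed g_i its block of inputs
            i += k_g
        return f(*vals)
    return (sum(k_g for k_g, _ in g_ops), h)

add = (2, lambda a, b: a + b)   # binary operation
mul = (2, lambda a, b: a * b)   # binary operation
unit = (1, lambda a: a)         # the identity, the operad's unit

k, h = compose(add, mul, unit)  # 3-ary operation: h(a, b, c) = a*b + c
```

Associativity of this composition (composing into $f$ first or into the $g_i$ first gives the same function) is one of the "mostly distributive" axioms alluded to above.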
One thing I thought was missing is that we weren’t actually shown how the operad examples were actually operads.
Little intervals
The little intervals operad is the compactified (real Fulton-MacPherson) configuration space of $k$ distinct points on the real line, modding out translations and scaling factors.
Little intervals $=\overline{C(\mathbb{R},k)/\mathbb{R}\times\mathbb{R}_+^*}$
For two points they just don’t get any closer to each other, but when we have three points, two of the adjacent points can converge, but not coincide - those two points are said to bubble off - it
was likened to having to look at those two points under a magnifying glass (and so we get two scenarios depending on which points). The relation to category theory is that we can think of the two
point situation as composition of arrows (a 0-cell) and the three point as the associator between the two ways of composing three arrow (a 1-cell).
We get the pentagon from considering four points, which gives us five ways of fiddling four arrows into the associator - the different cases corresponding to two or three adjacent points bubbling
Little disks
The little disks operad is defined similar to the previous one - the compactified configuration space of $k$ disks in $\mathbb{R}^2$ with translations and dilations/dilatations modded out (there was a bit of contention at this point as to the proper word, mostly from Max Kelly). For $1\le k<6$ everything is well behaved and it all goes to pieces after that. Like the little intervals is related to the associator, little disks has something to do with what Voronov called the "Breenator", something with 4 hexagonal faces and 2 rectangular ones. Visualising the configuration spaces is tricky at best, but impossible with words, so I'll give you a link later
The cacti operad was mentioned - its most handy pictures look like cacti - which involves ribbon braided monoidal $n$-categories - and then (since time was running out) we got a picture of what
a sample of swiss cheese looked like (here’s the paper: math.QA/9807037). All we were told was that it had something to do with pairs of braided monoidal $n$-categories. For what I think are papers
involving swiss cheese in string theory, see math.AT/0412249 and math.QA/0410291.
Alexei Bondal gave a somewhat entertaining talk on derived categories of toric varieties, but the humorous parts seemed ill motivation for the heavy algebraic geometry. A toric variety $X$ is one with an action of the torus $T=(S^1)^n$ or $(\mathbb{C}^*)^n$, i.e. $X\times T\to X$ and the inclusion $T\to X$. A nice example of a variety is the 2-sphere (or $\mathbb{CP}^1$ if you're cool and want to call it a curve), given by $x^2+y^2+z^2-1=0$. The non-zero complex numbers act on the pointed sphere (thinking of it as the complex projective line, as above) by multiplication, and are included into it,
$\mathbb{C}^*\to \mathbb{CP}^1$
making the sphere a toric variety. After that it got all hard, and talked a lot about mirror symmetry, which swaps the derived category of bounded coherent sheaves on our manifold $X$ and the Fukaya
category of $X$. This may still be a conjecture - the string theorists know a bit about this. I know because it is closely tied up with T-duality (the subject of my honours research project)
I didn’t go to the talk by Jurgen Fuchs (pdf version - properly typeset) on conformal field theory, and the last lecture of the day (Grandis’, on directed algebraic topology) was cancelled as the
speaker is not at the conference.
That’s enough for this post.
Posted at July 12, 2005 11:30 PM UTC
Operads and Strings
I have had a look at the Kajiura & Stasheff paper math.QA/0410291 that you mentioned.
So the little disk operad is related to closed string amplitudes and to nonplanar corollas (nonplanar trees with a single vertex)…
…while the little interval operad is related to open string amplitudes and to planar corollas.
We can think of each ‘little disk’ as a little disk cut out from a sphere, being the vertex for a closed string. Similarly, we can think of each ‘little interval’ as a little interval cut out from
the boundary of the disk, being the vertex for an open string.
The nice thing is that one knows that the little disk operad gives rise to ${L}_{\infty }$-algebras, while the little interval operad gives rise to ${A}_{\infty }$-algebras. This nicely explains why
${A}_{\infty }$-algebras know about open string field theory, while ${L}_{\infty }$-algebras know about closed string field theory.
The fact that in an ${L}_{\infty }$-algebra everything in sight is (graded-)commutative comes from the fact that you can move little disks cut out from a surface around each other, i.e. that there is
no order on closed string vertices.
The fact that in an ${A}_{\infty }$-algebra no (graded-)commutativity is present comes from the fact that we cannot move a little interval cut out from a line around another such interval, i.e. that
there is an order on open string vertices.
There should be a close relation between the little disk operad and what Voronov on p. 11 of math.QA/0111009 calls the Riemann surface operad.
Posted by: Urs on July 13, 2005 6:03 PM | Permalink | Reply to this
|
{"url":"http://golem.ph.utexas.edu/string/archives/000595.html","timestamp":"2014-04-21T07:04:34Z","content_type":null,"content_length":"21027","record_id":"<urn:uuid:db0d66fa-e654-4ec6-b31b-181b154fadf0>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mother Nature Network
8 odd facts about pi - In celebration of National Pi Day, we tip our hat to 3.14.
What's the universe made of? Math - Nature is full of patterns waiting to be unlocked.
Mathematical formula predicts if your marriage will last - Can math quantify all the complexities of your relationship and predict whether it will last? One researcher thinks he has found the equation.
Math ability starts in infancy, study suggests - Experience, education and motivation still matter a great deal later in life as the child learns additional math skills.
5 ways science classes will change - The Next Generation Science Standards aim to improve U.S. students' performance in STEM (science-technology-engineering-math) subjects.
The beautiful visualization of the numbers of nature - Watch 'Nature by Numbers' artfully show the Fibonacci sequence manifesting all over nature.
How do plants survive without sun? Math - New research finds that plants calculate how much food they need to get through the night.
Dallas banker Andy Beal is offering big bucks to anyone who finally solves the Beal Conjecture.
The new number is 17,425,170 digits long, which crushes the last one discovered in 2008 that was a paltry 12,978,189 digits long.
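Those digit counts are easy to verify: a Mersenne prime $2^p-1$ has $\lfloor p\log_{10}2\rfloor+1$ decimal digits. The exponents are not given in the blurb; the ones used below (the 2013 and 2008 GIMPS discoveries) are added here for illustration:

```python
import math

def mersenne_digits(p):
    # 2**p - 1 has as many digits as 2**p, since 2**p is never a power of 10
    return math.floor(p * math.log10(2)) + 1

print(mersenne_digits(57885161))  # 17425170 digits (2^57885161 - 1, found 2013)
print(mersenne_digits(43112609))  # 12978189 digits (2^43112609 - 1, found 2008)
```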
|
{"url":"http://mothernaturenetwork.tumblr.com/tagged/math","timestamp":"2014-04-16T07:23:26Z","content_type":null,"content_length":"115759","record_id":"<urn:uuid:f0c9fcf9-7c2d-4584-bb16-5ec21776295a>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Complementary series (of representations)
From Encyclopedia of Mathematics
The family of irreducible continuous unitary representations of a locally compact group [2]. A connected Lie group Levi–Mal'tsev decomposition). A complementary series was first discovered for the
complex classical groups [1]. At the time of writing (1987) complementary series have been fully described only for certain locally compact groups. Certain problems in number theory (see, for
example, [5]) are equivalent to problems in the theory of representations connected with the complementary series of adèle groups of linear algebraic groups.
[1] I.M. Gel'fand, M.A. Naimark, "Unitäre Darstellungen der klassischen Gruppen" , Akademie Verlag (1957) (Translated from Russian)
[2] F.P. Greenleaf, "Invariant means on topological groups and their applications" , v. Nostrand (1969) MR0251549 Zbl 0174.19001
[3] M.A. Naimark, "Linear representations of the Lorentz group" , Macmillan (1964) (Translated from Russian) MR0170977 Zbl 0100.12001 Zbl 0084.33904 Zbl 0077.03602 Zbl 0057.02104 Zbl 0056.33802
[4] B. Kostant, "On the existence and irreducibility of certain series of representations" Bull. Amer. Math. Soc. , 75 (1969) pp. 627–642 MR0245725 Zbl 0229.22026
[5] H. Petersson, "Zur analytische Theorie der Grenzkreisgruppen I" Math. Ann. , 115 (1937–1938) pp. 23–67
In the theory of semi-simple Lie groups the notion of a complementary series representation often is introduced in a different fashion, viz. as a generalized principal series representation (cf.
Continuous series of representations) that is (infinitesimally) unitary.
[a1] A.W. Knapp, "Representation theory of semisimple groups" , Princeton Univ. Press (1986) MR0855239 Zbl 0604.22001
How to Cite This Entry:
Complementary series (of representations). Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Complementary_series_(of_representations)&oldid=21973
This article was adapted from an original article by A.I. Shtern (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article
|
{"url":"http://www.encyclopediaofmath.org/index.php/Complementary_series_(of_representations)","timestamp":"2014-04-21T09:35:57Z","content_type":null,"content_length":"20743","record_id":"<urn:uuid:a6fe8842-73c9-45e6-85d0-521a9bbfb723>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: An application of the extended generalized gamma function
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: An application of the extended generalized gamma function
From Francisco Augusto <francisco.augusto.7@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject st: An application of the extended generalized gamma function
Date Thu, 30 Aug 2012 18:01:50 +0100
Dear Statalist,
I have sent this question before, but since it lacked a considerable
amount of expressions and references, I am sending the question again
hoping for a possible answer.
This is the problem: I have a variable x, which is in logarithmic
scale, that I want to adjust to the extended generalized gamma
distribution following the approach used by
R.L. Prentice 1974, "A log gamma model and its maximum likelihood
Estimation", Biometrika, 61(3), pp 539-544
and developed by
J.F. Lawless 1980, "Inference in the generalized gamma and log gamma
distributions" Technometrics, 22(3) pp 409-419.
in such a way I may replicate the work developed by
Cabral & Mata 2003, "On the Evolution of the firm size
distribution:Facts and Theory", The American Economic Review, 93(4),
The first approach I considered was the streg command, which is not
suitable for this problem, since there is a little change in the
original expression of the extended generalized gamma function:
starting from the original expression and considering the three
parameters (mu, sigma, k). The parametrization that I present
considers the following transformation q = k^(-1/2), leading to the
following expression (citing from Cabral and Mata 2003):
if x follows an extended generalized gamma distribution, then
w = (ln x - mu) / sigma has p.d.f.

|q| * (q^(-2))^(q^(-2)) * exp(q^(-2) * (q*w - exp(q*w))) / gamma(q^(-2))   if q is different from 0

(2*pi)^(-1/2) * exp(-(1/2)*w^2)   if q is equal to 0

where gamma(t) is the gamma function.
My objective is to estimate the three parameters (mu, sigma, q) and
then to regress on x assuming that x follows an extended generalized
gamma function (exactly the same as Cabral and Mata 2003 did)
(Sorry for the messy expression...). I am using Stata 11.0.
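As a cross-check on the expression quoted above, the density can be evaluated numerically; the following Python sketch (my own sanity check, not part of the original message; names are illustrative) works in log space to avoid overflow for small q and verifies that the density integrates to one:

```python
import math

def log_pdf(w, q):
    """Log density of w under the extended generalized gamma as quoted above:
    |q| * k^k * exp(k*(q*w - exp(q*w))) / Gamma(k), with k = q^(-2);
    standard normal in the limiting case q == 0."""
    if q == 0.0:
        return -0.5 * math.log(2 * math.pi) - 0.5 * w * w
    k = q ** -2
    return (math.log(abs(q)) + k * math.log(k)
            + k * (q * w - math.exp(q * w)) - math.lgamma(k))

# Crude Riemann-sum check that the density integrates to ~1 for q = 0.5.
ws = [-30 + 0.01 * i for i in range(4001)]
total = 0.01 * sum(math.exp(log_pdf(w, 0.5)) for w in ws)   # ~1.0
```

The q = 0 branch can also be checked against the small-q limit: log_pdf(1.0, 0.01) agrees with log_pdf(1.0, 0.0) to a couple of decimal places.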
I am open to any suggestion and please correct me in case of mistakes.
If any other information is needed, I will be please to add it to the
Thanks for your consideration,
Francisco Augusto
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2012-08/msg01364.html","timestamp":"2014-04-20T10:56:59Z","content_type":null,"content_length":"9153","record_id":"<urn:uuid:45a6f1c7-af8c-41de-891a-513757f891d5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Classical-equations-of-motion calculations of high-energy heavy-ion collisions
Results are obtained with the classical-equations-of-motion approach which provides a complete microscopic, classical, description including finite-range interaction effects. Nonrelativistic classical-equations-of-motion calculations are made for equal mass projectile and target nuclei with $A_P=A_T=20$ (Ne + Ne) at laboratory energies per projectile nucleon of $E_L=117$, 400, and 800 MeV and at 400 MeV for $A_P=A_T=40$ (Ca + Ca). A static two-body potential $V_{st}$ is used which is fitted to $\sigma^{(2)}$, the $\sin^2\theta$ weighted differential cross section. For $A_P=A_T=20$ we also use a scattering equivalent momentum dependent potential $V_{tr}$. $V_{st}$ and $V_{tr}$ give identical two-body scattering but are not equivalent for many-body scattering and are used to test for finite-range interaction effects in heavy-ion collisions. The evolution of central collisions is discussed. For these multiple scattering is large leading to high momentum components. Dissipation quite generally is larger at lower energies and is appreciable during the expansion phase of central collisions giving approximately thermalized distributions at the lower $E_L$. A peak at approximately the same momentum at all angles develops in the momentum distribution near the beginning of the expansion and changes roughly in step with the potential energy; for $A_P=A_T=20$ at 800 MeV the peak persists to the final distributions. There are very appreciable differences in the densities, potential energies, and distributions between $V_{st}$ and $V_{tr}$ during the strong interaction stage. However, the final distributions are not significantly different even for $A_P=A_T=20$ at 800 MeV. For $A_P=A_T=40$ at 400 MeV a transverse peaking develops in the momentum distribution suggestive of collective effects. Noncentral collisions show typical nonequilibrium features and for larger impact parameters the final distributions show a strong single scattering component. This is true also of the impact parameter averaged distributions which are in fair agreement with experiment. A partial test of thermal models is made. Limitations and extensions of the classical-equations-of-motion approach are discussed. In particular we propose a new kinetic equation which includes finite-range interaction effects. Relativistic classical-equations-of-motion calculations to $O(v^2/c^2)$ are briefly discussed.
NUCLEAR REACTIONS Heavy ion (Ne + Ne, Ca + Ca); laboratory energy per nucleon $E_L/n = 117$, 400, 800 MeV; classical microscopic many-body calculations.
• Received 5 October 1979
• Published in the issue dated September 1980
© 1980 The American Physical Society
|
{"url":"http://journals.aps.org/prc/abstract/10.1103/PhysRevC.22.1025","timestamp":"2014-04-19T17:34:28Z","content_type":null,"content_length":"30299","record_id":"<urn:uuid:097df961-c1e5-489f-8c90-41ab90bbf1eb>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Total # Posts: 6
I have difficulty with this question; any help would be appreciated. 1) What pH corresponds to each of the following: (a) (H+) = 0.35 M, (b) (OH^-) = 6.0*10^-6 M, (d) (H+) = 2.5*10^-8 M?
why eating carrots is important for good vision?
why eating carrots is important for good vision?
I'm cubing the binomial (3x+2)^3. The answer is 54x^2 according to my book, however I got different answers each time. Once I got 74x^2. How do I get 54x^2? I know the order of operations but when I followed it, the book answer was not the one I got.
How come when you find the square of the binomial (2x-5y)^2, the answer is 4x^2-20xy+25y^2 (according to my book)? I would think that the middle term, 2(2x-5y), would be 4x-10y, but my book says it is -20xy. But I thought 2x and 5y were separate terms so you can't combi...
I multiplied x^2-2x+2 times x^2+2x+2. The book suggested the order this be done in as: x^2(x^2-2x+2) then 2x(x^2-2x+2) then 2(x^2-2x+2), with the result being x^4+4. However, I wondered if this could be done in reverse. So I multiplied x^2(x^2+2x+2), then -2x(x^2+2x+2) then 2(...
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Sandrine","timestamp":"2014-04-17T16:24:41Z","content_type":null,"content_length":"7254","record_id":"<urn:uuid:9623d9f9-b79e-4e72-a5b5-920a44412988>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebra Tutors
Pompton Plains, NJ 07444
MIT Grad for Math and Science Tutoring
I have 5 years experience teaching high school mathematics - algebra 1 and geometry - in a classroom setting and held certification to teach mathematics in the State of Virginia from 2007 to 2012. I also have 1 year experience teaching high school chemistry. I have...
Offering 8 subjects including algebra 1 and algebra 2
|
{"url":"http://www.wyzant.com/Sparta_NJ_Algebra_tutors.aspx","timestamp":"2014-04-16T10:35:15Z","content_type":null,"content_length":"59314","record_id":"<urn:uuid:7b232c72-e7c0-4296-9d1f-aa37877034c3>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
|
New trigonometry is a sign of the times
September 16th, 2005 in Other Sciences /
Mathematics students have cause to celebrate. A University of New South Wales academic, Dr Norman Wildberger, has rewritten the arcane rules of trigonometry and eliminated sines, cosines and tangents
from the trigonometric toolkit.
What's more, his simple new framework means calculations can be done without trigonometric tables or calculators, yet often with greater accuracy.
Established by the ancient Greeks and Romans, trigonometry is used in surveying, navigation, engineering, construction and the sciences to calculate the relationships between the sides and vertices
of triangles.
"Generations of students have struggled with classical trigonometry because the framework is wrong," says Wildberger, whose book is titled Divine Proportions: Rational Trigonometry to Universal
Geometry (Wild Egg books).
Dr Wildberger has replaced traditional ideas of angles and distance with new concepts called "spread" and "quadrance".
"These new concepts mean that trigonometric problems can be done with algebra," says Wildberger, an associate professor of mathematics at UNSW.
"Rational trigonometry replaces sines, cosines, tangents and a host of other trigonometric functions with elementary arithmetic."
"For the past two thousand years we have relied on the false assumptions that distance is the best way to measure the separation of two points, and that angle is the best way to measure the
separation of two lines.
"So teachers have resigned themselves to teaching students about circles and pi and complicated trigonometric functions that relate circular arc lengths to x and y projections – all in order to
analyse triangles. No wonder students are left scratching their heads," he says.
"But with no alternative to the classical framework, each year millions of students memorise the formulas, pass or fail the tests, and then promptly forget the unpleasant experience.
"And we mathematicians wonder why so many people view our beautiful subject with distaste bordering on hostility.
"Now there is a better way. Once you learn the five main rules of rational trigonometry and how to simply apply them, you realise that classical trigonometry represents a misunderstanding of
Wild Egg books: http://wildegg.com/
Divine Proportions: web.maths.unsw.edu.au/~norman/book.htm
Source: University of New South Wales
"New trigonometry is a sign of the times." September 16th, 2005. http://phys.org/news6555.html
|
{"url":"http://phys.org/print6555.html","timestamp":"2014-04-20T09:00:05Z","content_type":null,"content_length":"6197","record_id":"<urn:uuid:ee56fb82-bb01-4279-b0c7-face0e2e6d81>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Heegner Points and Binary Quadratic Forms
I've been trying to read Gross' paper on Heegner points on $X_0(N)$ and I am stuck on a few details. The definition he is working with is that a Heegner point is a pair $y=(E,E')$, where $E$ and
$E'$ are elliptic curves admitting an isogeny that has cyclic kernel of order $N$ and where $E$ and $E'$ both have complex multiplication by the order $\mathcal{O}$ of discriminant $D$ in a quadratic
imaginary field $K$. Gross goes on to explain that we may assume the lattice for $E$ is a fractional ideal $\mathfrak{a}$ and the lattice for $E'$ is $\mathfrak{b}$ such that the ideal $\mathfrak{n}=
\mathfrak{a}\mathfrak{b}^{-1}$ is proper ideal of $\mathcal{O}$ such that the quotient $\mathcal{O}/\mathfrak{n}$ is cyclic of order $N$. It is the next line that I don't understand:
"Such an ideal will exist if and only if there is a primitive binary quadratic form of discriminant $D$ which properly represents $N$...". The line goes on, but this is one of the things I'm stuck
on. I've tried googling some notes/papers on binary quadratic forms, but I can't find anything that helps me understand what a binary quadratic form representing $N$ has to say about an order
admitting a cyclic quotient. An explanation or a good reference would be much appreciated.
The second and, I think, more important part of my confusion is a bit later on in the same section: Gross goes on to explain that if we have such an $\mathfrak{n}$, we can construct a heegner point
as follows. Let $\mathfrak{a}$ be an invertible $\mathcal{O}$-submodule of $K$ and let $[\mathfrak{a}]$ denotes its class in $Pic(\mathcal{O})$. Let $\mathfrak{n}$ be a proper $\mathcal{O}$-ideal
with cyclic quotient of order $N$, put $E=\mathbf{C}/\mathfrak{a}$, $E'=\mathbf{C}/\mathfrak{a}\mathfrak{n}^{-1}$. They are related by an obvious isogeny and thus determine a Heegner point, denoted $y=(\mathcal{O},\mathfrak{n},[\mathfrak{a}])$.
Next, given $y=(\mathcal{O},\mathfrak{n},[\mathfrak{a}])$, we can find the image of it in the upper-half plane by picking an oriented basis $\langle\omega_1,\omega_2\rangle$ of $\mathfrak{a}$ such
that $\mathfrak{a}\mathfrak{n}^{-1}=\langle\omega_1,\omega_2/N\rangle$. Then $y$ corresponds to the orbit of $\omega_1/\omega_2$ under $\Gamma_0(N)$. Lastly, since $\tau\in K$ it follows that it
satisfies $A\tau^2+B\tau+C=0$ for some integers $A,B,C$ such that $gcd(A,B,C)=1$.
Finally, what I don't understand is that Gross claims that $D=B^2-4AC$, $A=NA'$ for some $A'$, and $gcd(A',B,NC)=1$. I don't see what the $\tau$ we cooked up has to do with the discriminant of our order.
I have read a paper that defined a Heegner point to be a quadratic imaginary point in the half-plane such that $\Delta(\tau)=\Delta(N\tau)$. I have seen how this would help with part of the claim
above, but I don't see why in this situation, $\Delta(\tau)=\Delta(N\tau)$. In fact, it seems that everything I'm confused about here is the fact that it seems to be the case that $$D=\Delta(\tau)=\Delta(N\tau),$$ where $\Delta$ denotes discriminant.
Any insight into these two questions would very appreciated.
elliptic-curves algebraic-number-theory binary-quadratic-forms nt.number-theory
1 Answer
I think the following facts, which you can find in Cox's book Primes of the Form $x^2+ny^2$, will alleviate your confusion. First off, if ${\mathfrak a}=[\alpha,\beta]$ is a proper ideal
of ${\mathcal O}$ then one can show that $$ f(x,y) := \frac{N(\alpha x-\beta y)}{N{\mathfrak a}} $$ is a primitive binary quadratic form of discriminant $D = {\rm disc}(\mathcal O)$.
Moreover, the map that associates such an ${\mathfrak a}$ to such an $f(x,y)$ induces an isomorphism from the class group ${\rm Pic } ({\mathcal O})$ onto the form class group $C(D)$. The
inverse of this map is given by $$ f(x,y) := ax^2+bxy+cy^2 \mapsto [a,(-b+\sqrt{D})/2] = [a,a\tau], $$ where $\tau$ is the unique point in the upper-half plane such that $f(\tau,1)=0$.
It's not hard to show that we'll have ${\mathcal O} = [1,a\tau]$ for all such $\tau$ (see Addendum below). In particular, we see that ${\mathcal O}/[a,a\tau] \cong {\mathbb Z}/a{\mathbb Z}
$ is cyclic.
The last piece of the puzzle is this: a positive integer $N$ is represented by a form $f(x,y)$ in $C(D)$ if and only if $N$ is the norm of some ideal in the corresponding ideal class in ${\rm Pic}({\mathcal O})$ (loc. cit., Theorem 7.7(iii)). On the other hand, $N$ is properly represented by such an $f(x,y)$ if and only if $f(x,y)$ is properly equivalent to $Nx^2+bxy+cy^2$ for some $b,c \in {\mathbb Z}$. Now the results mentioned in the preceding paragraph will take you home.
Addendum: Given a proper ideal $\mathfrak a$ of $\mathcal O$, we can recover $\mathcal O$ as the set ${\mathfrak a}^\vee = \{x \in K \mid x\mathfrak a \subset \mathfrak a \}$. This last
set is easy to compute in the following special case. Let $K=\mathbb Q(\tau)$ be quadratic and suppose that $ax^2+bx+c$ is the minimal polynomial of $\tau$, where $a,b,c$ are coprime
integers. Then $[1,\tau]^\vee = [1,a\tau]$ (loc. cit., Lemma 7.5).
By applying this to Gross's $\mathfrak a = [\omega_1, \omega_2] = \omega_2 [\tau, 1]$, which is a proper ideal in some order $\mathcal O$, we find that $\mathcal O = [A\tau, 1]$.
Consequently,$$D = {\rm disc}({\mathcal O}) = \det \begin{pmatrix}1 & A\tau \\ 1 & A\bar{\tau} \end{pmatrix}^2 = B^2 - 4AC.$$ The assertion about $A$ can be gotten in a similar manner.
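Neither Gross nor the answer spells out how to search for such a form, but the criterion above reduces to a finite check: $D$ must be a square modulo $4N$, together with a gcd condition for primitivity. A brute-force Python sketch (the function name and return convention are my own, for illustration only):

```python
from math import gcd

def properly_represents(D, N):
    """Search for a primitive binary quadratic form of discriminant D
    that properly represents N.

    It suffices to look for integers b, c with b^2 - 4*N*c = D: then
    N*x^2 + b*x*y + c*y^2 has discriminant D and takes the value N at
    (x, y) = (1, 0).  Equivalently, D must be a square modulo 4N.
    Returns such a form as a triple (N, b, c), or None if none exists.
    """
    for b in range(2 * N):              # b^2 mod 4N depends only on b mod 2N
        if (b * b - D) % (4 * N) == 0:
            c = (b * b - D) // (4 * N)
            if gcd(gcd(N, b), c) == 1:  # keep only primitive forms
                return (N, b, c)
    return None

# Example: D = -7, N = 2 yields the form 2x^2 + xy + y^2 (discriminant
# 1 - 8 = -7), while D = -3, N = 2 yields nothing, since -3 is not a
# square modulo 8 (2 is inert in Q(sqrt(-3))).
```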
Thank you. This certainly clears up the first bit of confusion. I'm afraid I still don't see how to convince myself of the later facts when we try to find the point in the half plane
corresponding to a heegner point. – user27464 Oct 22 '12 at 19:59
I added some clarification. I hope it helps. – Faisal Oct 22 '12 at 23:37
|
{"url":"https://mathoverflow.net/questions/110351/heegner-points-and-binary-quadratic-forms","timestamp":"2014-04-18T18:41:43Z","content_type":null,"content_length":"58677","record_id":"<urn:uuid:23000a0d-001d-426a-a05c-497507394cc9>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
|
(This article was first published on Ecology in silico, and kindly contributed to R-bloggers) Violin plots are useful for comparing distributions. When data are grouped by a factor with two levels
(e.g. males and females), you can split the violins in half to see the difference between groups. Consider a 2 x 2 factorial experiment: treatments A and B...
A comprehensive guide to time series plotting in R
As R has evolved over the past 20 years its capabilities have improved in every area. The visual display of time series is no exception: as the folks from Timely Portfolio note that: Through both
quiet iteration and significant revolutions, the volunteers of R have made analyzing and charting time series pleasant. R began with the basics, a simple...
Natural Language Processing Tutorial
Introduction This will serve as an introduction to natural language processing. I adapted it from slides for a recent talk at Boston Python. We will go from tokenization to feature extraction to
creating a model using a machine learning algorithm. The goal is to provide a reasonable baseline on top of which more complex natural language processing can be done, and...
My Talk at Boston Python
I just gave a talk at Boston Python about natural language processing in general, and edX ease and discern in specific. You can find the presentation source here, and the web version of it here.
There is a video of it here. Nelle Varoquaux and Michael ...
Getting started with R
I wanted to avoid advanced topics in this post and focus on some “blocking and tackling” with R in an effort to get novices started. This is some of the basic code I found useful when I began using R
just over 6 weeks ago. Reading in data from a .csv file is a breeze with this command. > data =...
The Dream 8 Challenges
The 8th iteration of the DREAM Challenges are underway.DREAM is something like the Kaggle of computational biology with an open science bent. Participating teams apply machine learning and
statistical modeling methods to biological problems, competing to achieve the best predictive accuracy.This year's three challenges focus on reverse engineering cancer, toxicology and the kinetics
Three Ways to Run Bayesian Models in R
There are different ways of specifying and running Bayesian models from within R. Here I will compare three different methods, two that relies on an external program and one that only relies on R. I
won’t go into much detail about the differences in syntax, the idea is more to give a gist about how the different modeling languages...
Exploratory Data Analysis: 2 Ways of Plotting Empirical Cumulative Distribution Functions in R
Introduction Continuing my recent series on exploratory data analysis (EDA), and following up on the last post on the conceptual foundations of empirical cumulative distribution functions (CDFs),
this post shows how to plot them in R. (Previous posts in this series on EDA include descriptive statistics, box plots, kernel density estimation, and violin plots.) I
Predicting spatial locations using point processes
I’ve uploaded a draft tutorial on some aspects of prediction using point processes. I wrote it using R-Markdown, so there’s bits of R code for readers to play with. It’s hosted on Rpubs, which turns
out to be a great deal more convenient than WordPress for that sort of thing.
|
{"url":"http://www.r-bloggers.com/2013/06/page/5/","timestamp":"2014-04-20T23:42:46Z","content_type":null,"content_length":"38100","record_id":"<urn:uuid:d9d295ba-8009-48e1-a68a-62554b83325f>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On Cayley Graphs on the Symmetric Group Generated by Tranpositions
• Postscript version.
• Dvi version.
• PDF version.
• Abstract:
Given a connected graph, $X$, we denote by $\lambda_2=\lambda_2(X)$ its smallest non-zero Laplacian eigenvalue. In this paper we show that among all sets of $n-1$ transpositions which generate
the symmetric group, $S_n$, the set whose associated Cayley graph has the highest $\lambda_2$ is the set $\{(1,n),(2,n),\ldots,(n-1,n)\}$ (or the same with $n$ and $i$ exchanged for any $i
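For small $n$ the quantity in question can be checked directly: build the Cayley graph of $S_n$ generated by the star transpositions $(1,n),(2,n),\ldots,(n-1,n)$ and compute its $\lambda_2$ numerically. The following Python sketch (using NumPy; the brute-force construction is mine, not the paper's method) does exactly that:

```python
from itertools import permutations
import numpy as np

def star_cayley_lambda2(n):
    """Smallest non-zero Laplacian eigenvalue of the Cayley graph of S_n
    generated by the star transpositions (1,n), (2,n), ..., (n-1,n).

    Brute force: the graph has n! vertices, so keep n small.
    """
    verts = list(permutations(range(n)))
    idx = {p: i for i, p in enumerate(verts)}
    gens = [(i, n - 1) for i in range(n - 1)]   # 0-based positions (i, n)
    m = len(verts)
    L = np.zeros((m, m))
    for p in verts:
        for a, b in gens:
            q = list(p)
            q[a], q[b] = q[b], q[a]             # apply one transposition
            L[idx[p], idx[tuple(q)]] = -1.0
        L[idx[p], idx[p]] = len(gens)           # the graph is (n-1)-regular
    return float(np.sort(np.linalg.eigvalsh(L))[1])

# For n = 3 the two star transpositions make the Cayley graph a 6-cycle,
# whose second-smallest Laplacian eigenvalue is exactly 1.
```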
|
{"url":"http://www.math.ubc.ca/~jf/pubs/web_stuff/newcayley.html","timestamp":"2014-04-20T13:35:22Z","content_type":null,"content_length":"1772","record_id":"<urn:uuid:5f9edcce-4fce-440d-ad6e-ec61172ce98d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fonctions récursives générales par itération en théorie des types
- Types for Proofs and Programs, volume 3085 of LNCS , 2003
"... Abstract. In this paper we present the Coq formalisation of the QArith library which is an implementation of rational numbers as binary sequences for both lazy and strict computation. We use the
representation also known as the Stern-Brocot representation for rational numbers. This formalisation use ..."
Cited by 9 (2 self)
Add to MetaCart
Abstract. In this paper we present the Coq formalisation of the QArith library which is an implementation of rational numbers as binary sequences for both lazy and strict computation. We use the
representation also known as the Stern-Brocot representation for rational numbers. This formalisation uses advanced machinery of the Coq theorem prover and applies recent developments in formalising
general recursive functions. This formalisation highlights the rôle of type theory both as a tool to verify hand-written programs and as a tool to generate verified programs. 1
- Theorem Proving in Higher Order Logics (TPHOLS'03), volume 2758 of LNCS , 2003
"... Abstract. We show that certain input-output relations, termed inductive invariants are of central importance for termination proofs of algorithms defined by nested recursion. Inductive
invariants can be used to enhance recursive function definition packages in higher-order logic mechanizations. We d ..."
Cited by 4 (2 self)
Add to MetaCart
Abstract. We show that certain input-output relations, termed inductive invariants are of central importance for termination proofs of algorithms defined by nested recursion. Inductive invariants can
be used to enhance recursive function definition packages in higher-order logic mechanizations. We demonstrate the usefulness of inductive invariants on a large example of the BDD algorithm Apply.
Finally, we introduce a related concept of inductive fixpoints with the property that for every functional in higher-order logic there exists a largest partial function that is such a fixpoint. 1
, 2011
"... The use of interactive theorem provers to establish the correctness of critical parts of a software development or for formalising mathematics is becoming more common and feasible in practice.
However, most mature theorem provers lack a direct treatment of partial and general recursive functions; ov ..."
Add to MetaCart
The use of interactive theorem provers to establish the correctness of critical parts of a software development or for formalising mathematics is becoming more common and feasible in practice.
However, most mature theorem provers lack a direct treatment of partial and general recursive functions; overcoming this weakness has been the objective of intensive research during the last decades.
In this article, we review many techniques that have been proposed in the literature to simplify the formalisation of partial and general recursive functions in interactive theorem provers. Moreover,
we classify the techniques according to their theoretical basis and their practical use. This uniform presentation of the different techniques facilitates the comparison and highlights their
commonalities and differences, as well as their relative advantages and limitations. We focus on theorem provers based on constructive type theory (in particular, Agda and Coq) and higher-order logic
(in particular Isabelle/HOL). Other systems and logics are covered to a certain extent, but not exhaustively. In addition to the description of the techniques, we also demonstrate tools which
facilitate working with the problematic functions in particular theorem provers. 1.
, 2011
"... The use of interactive theorem provers to establish the correctness of critical parts of a software development or for formalising mathematics is becoming more common and feasible in practice.
However, most mature theorem provers lack a direct treatment of partial and general recursive functions; ov ..."
Add to MetaCart
The use of interactive theorem provers to establish the correctness of critical parts of a software development or for formalising mathematics is becoming more common and feasible in practice.
However, most mature theorem provers lack a direct treatment of partial and general recursive functions; overcoming this weakness has been the objective of intensive research during the last decades.
In this article, we review several techniques that have been proposed in the literature to simplify the formalisation of partial and general recursive functions in interactive theorem provers.
Moreover, we classify the techniques according to their theoretical basis and their practical use. This uniform presentation of the different techniques facilitates the comparison and highlights
their commonalities and differences, as well as their relative advantages and limitations. We focus on theorem provers based on constructive type theory (in particular, Agda and Coq) and higher-order
logic (in particular Isabelle/HOL). Other systems and logics are covered to a certain extent, but not exhaustively. In addition to the description of the techniques, we also demonstrate tools which
facilitate working with the problematic functions in particular theorem provers. 1.
"... To use this theorem, one should be able to express that the domain of interest has the required completeness property and that the function being considered is continuous. If the goal is to
define a partial recursive function then this requires using axioms of classical logic, and for this reason th ..."
Add to MetaCart
To use this theorem, one should be able to express that the domain of interest has the required completeness property and that the function being considered is continuous. If the goal is to define a
partial recursive function then this requires using axioms of classical logic, and for this reason the step is seldom made in the user community of type-theory based theorem proving. However, adding
classical logic axioms to the constructive logic of type theory can often be done safely to retain the consistency of the whole system. In this paper, we work in classical logic to reason about
potentially non-terminating recursive functions. No inconsistency is introduced in the process, because potentially non-terminating functions of type A → B are actually modelled as functions of type
A → B⊥: the fact that a function may not terminate is recorded in its type, non-terminating computations are given the value ⊥
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=925457","timestamp":"2014-04-17T23:44:36Z","content_type":null,"content_length":"24528","record_id":"<urn:uuid:52ce8871-ec8c-4d09-93ba-1b83d110d684>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computational geometry in a curved world
, 1990
"... We show how (now familiar) hierarchical representations of (convex) polyhedra can be used to answer various separation queries efficiently (in a number of cases, optimally). Our emphasis is i)
the uniform treatment of polyhedra separation problems, ii) the use of hierarchical representations of prim ..."
Cited by 105 (5 self)
Add to MetaCart
We show how (now familiar) hierarchical representations of (convex) polyhedra can be used to answer various separation queries efficiently (in a number of cases, optimally). Our emphasis is i) the
uniform treatment of polyhedra separation problems, ii) the use of hierarchical representations of primitive objects to provide implicit representations of composite or transformed objects, and iii)
applications to natural problems in graphics and robotics. Among the specific results is an O(log |P| · log |Q|) algorithm for determining the separation of polyhedra P and Q (which have been
individually preprocessed in at most linear time).
, 1988
"... Given a simple n-vertex polygon, the triangulation problem is to partition the interior of the polygon into n-2 triangles by adding n-3 nonintersecting diagonals. We propose an O(n log logn)
-time algorithm for this problem, improving on the previously best bound of O (n log n) and showing that tria ..."
Cited by 37 (4 self)
Add to MetaCart
Given a simple n-vertex polygon, the triangulation problem is to partition the interior of the polygon into n-2 triangles by adding n-3 nonintersecting diagonals. We propose an O(n log log n)-time algorithm for this problem, improving on the previously best bound of O(n log n) and showing that triangulation is not as hard as sorting. Improved algorithms for several other computational
geometry problems, including testing whether a polygon is simple, follow from our result.
- IN ESA 2002, LNCS 2461 , 2002
"... We give an exact geometry kernel for conic arcs, algorithms for exact computation with low-degree algebraic numbers, and an algorithm for computing the arrangement of conic arcs that immediately
leads to a realization of regularized boolean operations on conic polygons. A conic polygon, or polygon ..."
Cited by 29 (15 self)
Add to MetaCart
We give an exact geometry kernel for conic arcs, algorithms for exact computation with low-degree algebraic numbers, and an algorithm for computing the arrangement of conic arcs that immediately
leads to a realization of regularized boolean operations on conic polygons. A conic polygon, or polygon for short, is anything that can be obtained from linear or conic halfspaces ( = the set of
points where a linear or quadratic function is non-negative) by regularized boolean operations. The algorithm and its implementation are complete (they can handle all cases), exact (they give the
mathematically correct result), and efficient (they can handle inputs with several hundred primitives).
- In Proceedings of 6th Annual ACM Symposium on Computational Geometry , 1990
"... We have developed techniques which contribute to efficient algorithms for certain geometric optimiza-tion problems involving simple polygons: computing minimum separators, maximum inscribed
triangles, ..."
Cited by 11 (1 self)
Add to MetaCart
We have developed techniques which contribute to efficient algorithms for certain geometric optimiza-tion problems involving simple polygons: computing minimum separators, maximum inscribed
- IN ESA 2003, LNCS 2832 , 2000
"... We present an approach that extends the Bentley-Ottmann sweep-line algorithm [3] to the exact computation of the topology of arrangements induced by non-singular algebraic curves of arbitrary
degrees. Algebraic ..."
Cited by 7 (3 self)
Add to MetaCart
We present an approach that extends the Bentley-Ottmann sweep-line algorithm [3] to the exact computation of the topology of arrangements induced by non-singular algebraic curves of arbitrary
degrees. Algebraic
- SIAM J. Comput
"... The goal of this paper is to show that the concept of the shortest path inside a polygonal region contributes to the design of efficient algorithms for certain geometric optimization problems
involving simple polygons: computing optimum separators, maximum area or perimeter inscribed triangles, a mi ..."
Cited by 6 (0 self)
Add to MetaCart
The goal of this paper is to show that the concept of the shortest path inside a polygonal region contributes to the design of efficient algorithms for certain geometric optimization problems
involving simple polygons: computing optimum separators, maximum area or perimeter inscribed triangles, a minimum area circumscribed concave quadrilateral, or a maximum area contained triangle. The
structure for our algorithms is as follows: a) decompose the initial problem into a low-degree polynomial number of optimization problems; b) solve each individual subproblem in constant time using
standard methods of calculus, basic methods of numerical analysis, or linear programming. These same optimization techniques can be applied to splinegons (curved polygons). To do this, we first
develop a decomposition technique for curved polygons which we substitute for triangulation in creating equally efficient curved versions of the algorithms for the shortest-path tree, ray-shooting
and two-point shortes...
- In Theory and Practice of Geometric Modeling , 1989
"... A family of algorithms is presented for solving problems related to the one of whether a given object fits inside a rectangular box, based on the use of Minkowski Sums and convex hulls. We
present both two and three dimensional algorithms, which are respectively linear and quadratic in their runn ..."
Cited by 3 (0 self)
Add to MetaCart
A family of algorithms is presented for solving problems related to the one of whether a given object fits inside a rectangular box, based on the use of Minkowski Sums and convex hulls. We present
both two and three dimensional algorithms, which are respectively linear and quadratic in their running time in terms of the complexity of the objects. In two dimensions, both straight sided and
curved sided objects are considered; in three dimensions, planar faced objects are considered, and extensions to objects with curved faces are discussed. 1
- Proc. 1990 IEEE Int'l Conf. on Robotics and Automation , 1990
"... We present a discrete approximation method for planar algebraic curves. This discrete approximation is hierarchical, curvature dependent, and provides eÆcient algorithms for various primitive
geometric operations on algebraic curves. We consider applications on the curve intersections, the distance ..."
Cited by 2 (1 self)
Add to MetaCart
We present a discrete approximation method for planar algebraic curves. This discrete approximation is hierarchical, curvature dependent, and provides efficient algorithms for various primitive geometric operations on algebraic curves. We consider applications on curve intersections, distance computations, and common tangent and convolution computations. We implemented these approximation algorithms on a Symbolics 3650 Lisp Machine using Common Lisp. 1 Introduction. The geometric modeling issue of representing, manipulating and reasoning about geometric objects is the ultimate common goal of computer vision, graphics and robotics [5]. To achieve this general goal, geometric modeling needs to extend its geometric coverage from its traditional techniques on parametric curves and surfaces to a broader body of theories and techniques on a variety of geometric objects [9]. Computational geometry offers new tools and perspectives on these problems, however,
most of its results are ...
, 1990
"... We show how (now familiar) hierarchical representations of (convex) polyhedra can be used to answer various separation queries efficiently (in a number of cases, optimally). Our emphasis is i)
the uniform treatment of polyhedra separation problems, ii) the use of hierarchical representations of prim ..."
Cited by 1 (0 self)
Add to MetaCart
We show how (now familiar) hierarchical representations of (convex) polyhedra can be used to answer various separation queries efficiently (in a number of cases, optimally). Our emphasis is i) the
uniform treatment of polyhedra separation problems, ii) the use of hierarchical representations of primitive objects to provide implicit representations of composite or transformed objects, and iii)
applications to natural problems in graphics and robotics. Among the specific results is an O(log |P| · log |Q|) algorithm for determining the separation of polyhedra P and Q (which have been individually preprocessed in at most linear time). 1 Introduction and background. Given pairs of geometric objects A and B, the problems of testing for non-empty intersection (A ∩ B ≠ ∅), together with the construction of A ∩ B (when A ∩ B ≠ ∅) or a description of their separation (when A ∩ B = ∅), comprise some of the most fundamental issues in computational geometry [24,20,14]. The
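The separation quantity these abstracts optimize is concrete: for two disjoint convex polygons it is the minimum distance between them. The hierarchical method answers such queries in O(log |P| · log |Q|) time; as a reference point, here is a naive O(|P| · |Q|) Python sketch of the quantity itself (my own brute-force check, not the paper's algorithm):

```python
def _pt_seg_dist2(p, a, b):
    """Squared distance from point p to the segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0:                        # degenerate segment
        return (px - ax) ** 2 + (py - ay) ** 2
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy  # closest point on the segment
    return (px - cx) ** 2 + (py - cy) ** 2

def separation(P, Q):
    """Minimum distance between two disjoint convex polygons (vertex lists
    in boundary order), by brute force over all vertex/edge pairs.

    For disjoint convex polygons the minimum is always attained at a
    vertex of one polygon and a point on an edge of the other, so
    checking both directions suffices.  O(|P| * |Q|) time.
    """
    best = float("inf")
    edgesP = list(zip(P, P[1:] + P[:1]))
    edgesQ = list(zip(Q, Q[1:] + Q[:1]))
    for a, b in edgesP:
        for p in Q:
            best = min(best, _pt_seg_dist2(p, a, b))
    for a, b in edgesQ:
        for p in P:
            best = min(best, _pt_seg_dist2(p, a, b))
    return best ** 0.5
```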
, 2001
"... Boolean set representations of curved two-dimensional polygons are expressions constructed from planar halfspaces and (possibly regularized) set operations. Such representations arise often in
geometric modeling, computer vision, robotics, and computational mechanics. The convex deficiency tree (C ..."
Add to MetaCart
Boolean set representations of curved two-dimensional polygons are expressions constructed from planar halfspaces and (possibly regularized) set operations. Such representations arise often in
geometric modeling, computer vision, robotics, and computational mechanics. The convex deficiency tree (CDT) algorithm described in this paper constructs such expressions automatically for polygons
bounded by linear and curved edges that are subsets of convex curves. The running time of the algorithm is not worse than O(n 2 log n) and the size of the constructed expressions is linear in the
number of polygon edges. The algorithm has been fully implemented for polygons bounded by linear and circular edges.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1399238","timestamp":"2014-04-16T06:36:02Z","content_type":null,"content_length":"36322","record_id":"<urn:uuid:73befd71-cdf8-47a2-a9f5-fd324b047e6a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[R] How do I compare 47 GLM models with 1 to 5 interactions and unique combinations?
Frank Harrell f.harrell at vanderbilt.edu
Thu Jan 26 18:10:08 CET 2012
To pretend that AIC solves this problem is to ignore that AIC is just a
restatement of P-values.
Rubén Roa wrote
> I think we have gone through this before.
> First, the destruction of all aspects of statistical inference is not at
> stake, Frank Harrell's post notwithstanding.
> Second, checking all pairs is a way to see for _all pairs_ which model
> outcompetes which in terms of predictive ability by -2AIC or more. Just
> sorting them by the AIC does not give you that if no model is better than
> the next best by less than 2AIC.
> Third, I was not implying that AIC differences play the role of
> significance tests. I agree with you that model selection is better not
> understood as a proxy or as a relative of significance tests procedures.
> Incidentally, when comparing many models the AIC is often inconclusive. If
> one is bent on selecting just _the model_ then I check numerical
> optimization diagnostics such as size of gradients, KKT criteria, and
> other issues such as standard errors of parameter estimates and the
> correlation matrix of parameter estimates. For some reasons I don't find
> model averaging appealing. I guess deep in my heart I expect more from my
> model than just the best predictive ability.
> Rubén
> -----Original Message-----
> From: r-help-bounces@ [mailto:r-help-bounces@] On Behalf Of Ben Bolker
> Sent: Wednesday, 25 January 2012 15:41
> To: r-help at .ethz
> Subject: Re: [R] How do I compare 47 GLM models with 1 to 5 interactions and
> unique combinations?
> Rubén Roa <rroa <at> azti.es> writes:
>> A more 'manual' way to do it is this.
>> If you have all your models named properly and well organized, say
>> Model1, Model2, ..., Model 47, with a slot with the AIC (glm produces
>> a list with one component named 'aic') then something like
>> this:
>> x <- matrix(0, 1081, 3)
>> x[, 1:2] <- t(combn(47, 2))
>> aic <- sapply(1:47, function(i) eval(as.name(paste("Model", i, sep = "")))$aic)
>> x[, 3] <- abs(aic[x[, 1]] - aic[x[, 2]])
>> will calculate all the 1081 AIC differences between paired comparisons
>> of the 47 models. It may not be pretty but works for me.
>> An AIC difference equal or less than 2 is a tie, anything higher is
>> evidence for the model with the less AIC (Sakamoto et al., Akaike
>> Information Criterion Statistics, KTK Scientific Publishers, Tokyo).
> I wouldn't go quite as far as Frank Harrell did in his answer, but
> (1) it seems silly to do all pairwise comparisons of models; one of the
> big advantages of information-theoretic approaches is that you can just
> list the models in order of AIC ...
> (2) the dredge() function from the MuMIn package may be useful for
> automating this sort of thing. There is an also an AICtab function in the
> emdbook package.
> (3) If you're really just interested in picking the single model with the
> best expected predictive capability (and all of the other assumptions of
> the AIC approach are met -- very large data set, good fit to the data,
> etc.), then just picking the model with the best AIC will work. It's when
> you start to make inferences on the individual parameters within that
> model, without taking account of the model selection process, that you run
> into trouble. If you are really concerned about good predictions then it
> may be better to do multi-model averaging *or* use some form of shrinkage
> estimator.
> (4) The "delta-AIC<2 means pretty much the same" rule of thumb is fine,
> but it drives me nuts when people use it as a quasi-significance-testing
> rule, as in "the simpler model is adequate if dAIC<2". If you're going to
> work in the model selection arena, then don't follow hypothesis-testing
> procedures! A smaller AIC means lower expected K-L distance, period.
> For the record, Brian Ripley has often expressed the (minority) opinion
> that AIC is *not* appropriate for comparing non-nested models (e.g.
> <http://tolstoy.newcastle.edu.au/R/help/06/02/21794.html>).
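[Editorial sketch, not part of the thread: Ben's point (1) — just list the models in AIC order — takes only a few lines of base R. The three models and the built-in mtcars data below are stand-ins for the poster's 47 GLMs.]

```r
## Illustrative only: rank candidate GLMs by AIC and delta-AIC.
## These models are fitted to the built-in mtcars data, not the
## poster's actual data.
models <- list(
  m1 = glm(am ~ wt,      data = mtcars, family = binomial),
  m2 = glm(am ~ hp,      data = mtcars, family = binomial),
  m3 = glm(am ~ wt + hp, data = mtcars, family = binomial)
)
aic <- sort(sapply(models, AIC))            # best (smallest) first
data.frame(model = names(aic), AIC = aic, dAIC = aic - min(aic))
```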
>> -----Original Message-----
>> From: r-help-bounces <at> r-project.org on behalf of Milan
>> Bouchet-Valat
>> Sent: Wed 1/25/2012 10:32 AM
>> To: Jhope
>> Cc: r-help <at> r-project.org
>> Subject: Re: [R] How do I compare 47 GLM models with 1 to 5
>> interactions and unique combinations?
>> On Tuesday, 24 January 2012 at 20:41 -0800, Jhope wrote:
>> > Hi R-listers,
>> >
>> > I have developed 47 GLM models with different combinations of
>> > interactions from 1 variable to 5 variables. I have manually made
>> > each model separately and put them into individual tables (organized
>> > by the number of variables) showing the AIC score. I want to compare
>> all of these models.
>> >
>> > 1) What is the best way to compare various models with unique
>> > combinations and different number of variables?
>> See ?step or ?stepAIC (from package MASS) if you want an automated way
>> of doing this.
>> > 2) I am trying to develop the most simplest model ideally. Even
>> > though adding another variable would lower the AIC,
> No, not necessarily.
>> how do I interpret it is worth
>> > it to include another variable in the model? How do I know when to
>> stop?
> When the AIC stops decreasing.
>> This is a general statistical question, not specific to R. As a
>> general rule, if adding a variable lowers the AIC by a significant
>> margin, then it's worth including it.
> If the variable lowers the AIC *at all* then your best estimate is that
> you should include it.
>> You should only stop when a variable increases the AIC. But this is
>> assuming you consider it a good indicator and you know what you're
>> doing. There's plenty of literature on this subject.
>> > Definitions of Variables:
>> > HTL - distance to high tide line (continuous) Veg - distance to
>> > vegetation Aeventexhumed - Event of exhumation Sector - number
>> > measurements along the beach Rayos - major sections of beach
>> > (grouped sectors) TotalEggs - nest egg density
>> >
>> > Example of how all models were created:
>> > Model2.glm <- glm(cbind(Shells, TotalEggs-Shells) ~ Aeventexhumed,
>> >                   data = data.to.analyze, family = binomial)
>> > Model7.glm <- glm(cbind(Shells, TotalEggs-Shells) ~ HTL:Veg,
>> >                   data = data.to.analyze, family = binomial)
>> > Model21.glm <- glm(cbind(Shells, TotalEggs-Shells) ~ HTL:Veg:TotalEggs,
>> >                    data = data.to.analyze, family = binomial)
>> > Model37.glm <- glm(cbind(Shells, TotalEggs-Shells) ~ HTL:Veg:TotalEggs:Aeventexhumed,
>> >                    data = data.to.analyze, family = binomial)
>> To extract the AICs of all these models, it's easier to put them in a
>> list and get their AICs like this:
>> m <- list()
>> m$model2 <- glm(cbind(Shells, TotalEggs-Shells) ~ Aeventexhumed,
>> data=data.to.analyze, family=binomial)
>> m$model3 <- glm(cbind(Shells, TotalEggs-Shells) ~ HTL:Veg, family =
>> binomial, data.to.analyze)
>> sapply(m, extractAIC)
>> Cheers
> ______________________________________________
> R-help@ mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
Frank Harrell
Department of Biostatistics, Vanderbilt University
View this message in context: http://r.789695.n4.nabble.com/How-do-I-compare-47-GLM-models-with-1-to-5-interactions-and-unique-combinations-tp4326407p4331016.html
Sent from the R help mailing list archive at Nabble.com.
Probability Problem
March 22nd 2013, 01:24 PM #1
Hey all,
Haven't ever been one for probability.
There are 150 options for students to choose from. Student 1 picks an option and then it is removed from the pool, Student 2 then picks and follows suit. Each option is given equal weight. What
is the probability of any student obtaining their choice?
I got stuck generalizing this:
P of first student getting his or her choice: P(1) = 1
P of second student getting his or her choice: P(2) = 1 - probability of first student taking their choice = 1 - 1/150 = 149/150
At this point I wonder if I'm already heading down the wrong path... shouldn't it be more like P(2) = 148/149?
P(3) = 1 - probability of first and second student taking their choice = 1 - (1/150) - (1/149)
I'm afraid to go any further without first understanding what I'm doing... Initially I thought it would turn out to be something like the probability of drawing two aces from a deck, 4/52 * 3/51,
but I'm slamming my head against the wall now. Any help would be appreciated.
Re: Probability Problem
If I'm understanding your problem, then there's no decrease in your denominator.
$\frac{150}{150}, \frac{149}{150}, \frac{148}{150}, \frac{147}{150}, \frac{146}{150}... \frac{0}{150}.$
Re: Probability Problem
Maybe. I'm not sure I described it correctly. If there are 150 options, and each option can only be picked once (kind of like removing a marble from a pile) won't this decrease the denominator?
Some of my initial thoughts were of the form
product over (150-k)/(151-k) from k = 1 to student number of interest
I'm still thinking along the lines of drawing two aces from a deck. If student 52 wants a certain option, then the 51 students before him have to not pick his option. The 51st student would have
100 options to decide from... am I making any sense?
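[Editorial aside, not part of the thread: the product the poster sketches above telescopes, so for student $n$ it has a simple closed form — each term's numerator cancels the next term's denominator:

$\prod_{k=1}^{n}\frac{150-k}{151-k} \;=\; \frac{149}{150}\cdot\frac{148}{149}\cdots\frac{150-n}{151-n} \;=\; \frac{150-n}{150}$]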
Re: Probability Problem
I find this question so poorly worded.
In particular, what does "any student obtaining their choice" mean exactly?
Is it asking about a particular student or any student whatsoever?
Say the names of the students are in the hat. Each student picks a slip from the hat at random and keeps it.
Is it asking for example, What is the probability that John Adams draws his own name?
OR is it asking What is the probability that at least one student draws his/her own name?
I think it is meant to be the latter. But which is it?
Re: Probability Problem
I apologize for my obtuse wording. Perhaps I can restate my question. A class consists of 150 students. Each student is supposed to select a number 1 to 150. Beginning with the first student.
After that particular student picks a number, the number is removed from the pool (analogous to your name and hat scenario). Assuming that each student makes a decision in their head about which
number they would like, what is the probability that a given student will be able to select their number when it is their turn to select? The selection process itself is not dependent on chance.
If John wants number 32 and 32 is available in the pot he will pick that number but its possible that another student before him also wanted that number, so it has already been removed
A concrete question might look like this: What is the probability of the 52nd student drawing the number he wants.
Re: Probability Problem
I apologize for my obtuse wording. Perhaps I can restate my question. A class consists of 150 students. Each student is supposed to select a number 1 to 150. Beginning with the first student.
After that particular student picks a number, the number is removed from the pool (analogous to your name and hat scenario). Assuming that each student makes a decision in their head about which
number they would like, what is the probability that a given student will be able to select their number when it is their turn to select? The selection process itself is not dependent on chance.
If John wants number 32 and 32 is available in the pot he will pick that number but its possible that another student before him also wanted that number, so it has already been removed
Well, the answer is $\frac{1}{150}$ that John gets his 32.
Here is why. Think of a roster of this class. Go down that list and randomly assign one of those 150 numbers to each name, using all numbers exactly once (that models student choice). There are
$150!$ ways to do that. There are $149!$ of those in which 32 will be next to John's entry in the roster. Divide; what do you get?
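[Editorial aside, not part of Plato's post: the counting argument is easy to check by simulation in R — the position 52 and the number 32 below are arbitrary choices.]

```r
## Monte Carlo check of the 1/150 answer: in a uniformly random assignment
## of the numbers 1..150 to the 150 students, the chance that the student
## in (say) position 52 receives (say) number 32 is 1/150, about 0.00667.
set.seed(1)
p <- mean(replicate(50000, sample(150)[52] == 32))
p  # should come out close to 1/150
```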
Re: Probability Problem
Thank you Plato.