Difference between Divide by n counter and clock dividers
I want to know the difference between a divide-by-3 (or 4, or 5, or whatever) counter and a clock divider. As of now, I know that a counter begins from an initial value and counts till a specified value. A divide-by-5 counter counts up to 5 and resets to 0 after 5, so what is the concept of the clock pulse appearing every five clock cycles and the frequency being divided by 5? Does the counter divide the clock's frequency, or count up to a particular number in a cycle?
Re: Difference between Divide by n counter and clock divider
The counter just counts. For instance, in your example of counting up to five, you'll need a 3-bit counter; when it reaches the value 3'b101, some combinational logic will go high (its output will be a divide-by-five clock signal with 20% duty cycle) and cause the counter to reset to zero. The master clock's frequency never changes. If you're dividing by a power of two, you can just use the MSB of the counter and get a new, slower, 50% duty cycle clock; e.g., the MSB of a 3-bit counter is a square wave at 1/2^3 of the clock frequency.
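To make the distinction concrete, here is a small behavioral model of that scheme (plain Python rather than HDL, since no source code appears in the thread); the decode value is chosen so that exactly five states (0 through 4) are visited per output pulse:

```python
# A behavioral sketch of the divide-by-5 scheme described above: a 3-bit
# count runs 0..4, a combinational decode pulses high for one cycle in
# five, and the counter then resets to 0.

def divide_by_5(num_cycles):
    """Yield (count, pulse) once per master-clock cycle."""
    count = 0
    for _ in range(num_cycles):
        pulse = 1 if count == 4 else 0          # decode of the count value
        yield count, pulse
        count = 0 if count == 4 else count + 1  # synchronous reset/increment

counts, pulses = zip(*divide_by_5(15))
print(list(counts))  # [0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4]
print(list(pulses))  # [0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
# The pulse fires once every 5 cycles (frequency f_clk/5, 20% duty cycle)
# while the master clock itself is untouched.
```

This is the answer to the original question in miniature: the counter counts, and the decode of the count is what looks like a divided clock.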
3. 10th March 2010, 00:24 #3
Re: Difference between Divide by n counter and clock divider
Thanks so much! That helped a lot. Could you also tell me: when we are looking at the circuit, say three D flip-flops connected such that each inverted output is connected back to the D input, this is a divide-by-8 counter (1/2^3 like you said). Now, I am guessing that the last D flip-flop's output, q3, will give us the clock/(2^3) signal continuously, so where do the numbers 0 to 7 appear? On the lines q1, q2 and q3?
4. 10th March 2010, 06:14 #4
Re: Difference between Divide by n counter and clock divider
Can you give me how you connected the 3 D flip-flops for the divide-by-8 counter?
Is it synchronous or asynchronous?
5. 10th March 2010, 16:14 #5
Re: Difference between Divide by n counter and clock divider
Asynchronous, q of one going into D of the next...
6. 10th March 2010, 18:46 #6
Re: Difference between Divide by n counter and clock divider
shwetha100: what you're describing is called a "shift register," not a counter. You can also divide the clock with the circuit you described, but it will be a 50% duty cycle divide-by-6 clock.
The best way to figure out how it works is to draw the circuit. Label all the nodes (d1, q1, d2, q2, keeping in mind that q1 = d2 and so on); if you have n registers then qn will be your output. Then draw all the waveforms, including the clock. You must pick a beginning state for all of your registers (this will also determine the functionality), say setting them all to zero. Then all the q outputs will be zero, and all the d inputs will be zero, with the exception of d1, which is 1 because of the feedback inverter. The number of register stages is the number of clock cycles that it takes for that 1 to propagate through the shift register, and similarly the number of cycles for the 0 to propagate. So in the end this gives you a 1/(2*n) divider, not a 1/(2^n) divider.
Let me know if that made sense!
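A quick software model (Python, as a stand-in for drawing the waveforms by hand) confirms the 1/(2*n) behavior of the shift register with inverted feedback:

```python
# Model of an n-stage shift register whose last output is inverted and
# fed back to the first D input (a Johnson/"Mobius" counter), with all
# stages starting at 0 as suggested above.

def johnson(n_stages, num_cycles):
    q = [0] * n_stages
    out = []
    for _ in range(num_cycles):
        out.append(q[-1])            # qn, the divided-clock output
        q = [1 - q[-1]] + q[:-1]     # d1 = NOT(qn); everything shifts
    return out

print(johnson(3, 12))  # [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]
# Three stages give a period of 2*3 = 6 clock cycles at 50% duty cycle:
# a divide-by-6, not a divide-by-2^3.
```

With a single stage the same circuit is just a toggle flip-flop, i.e. a divide-by-2.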
Added after 1 minute:
shwetha100: that's wrong; there's nothing asynchronous in this circuit. Everything is happening in sync with the clock.
7. 10th March 2010, 20:16 #7
Re: Difference between Divide by n counter and clock divider
It is a divide-by-8 circuit and not divide-by-6: each D flip-flop output divides its input clock by 2. That is, the first flop divides the clock by 2, and this is fed to the second flop, which divides it by 2 again (the original clock by 4), and the third flop divides that by 2 (the original clock by 8). And it's asynchronous because the Q output of one flop is being fed into the next flop's clock input, meaning all the flops are not getting their clock from the same master clock, which makes it an asynchronous circuit.
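For contrast with the shift register above, here is a rough model of the ripple arrangement being described (each flop toggling, with its output clocking the next stage); the negative-edge-triggered convention below is an assumption, chosen so the counter counts up:

```python
# A rough software model of the asynchronous ripple counter: every stage
# has Qbar tied to its own D (so it toggles), and each stage is clocked
# by the falling edge of the previous stage's output.

def ripple_counter(n_stages, num_edges):
    q = [0] * n_stages              # q[0] is the first flop (LSB)
    values = []
    for _ in range(num_edges):      # one active edge of the master clock
        for i in range(n_stages):
            q[i] ^= 1               # this stage toggles; only its 1 -> 0
            if q[i] == 1:           # transition clocks the next stage,
                break               # so stop rippling on a 0 -> 1 toggle
        values.append(sum(bit << i for i, bit in enumerate(q)))
    return values

print(ripple_counter(3, 16))
# [1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7, 0]
# The numbers 0..7 appear on (q2, q1, q0) taken together, while q2 alone
# is the master clock divided by 2^3 = 8 at 50% duty cycle.
```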
8. 11th March 2010, 08:25 #8
Re: Difference between Divide by n counter and clock divider
shwetha100: I get it now. I think you're talking about a ripple counter, as in http://web.mit.edu/bunnie/www/xi/seminar/slide3.pdf. I was initially talking about a regular counter (a simple FSM type where the current state is the counter value), and in my last post I was talking about a Möbius counter.
There's more than one way to make counters and clock dividers. If it's the ripple counter, then it can be viewed as either a counter or a clock divider. I'm not sure if your original question is answered or not?
Brisbane Algebra 1 Tutor
Find a Brisbane Algebra 1 Tutor
...Before earning my credential, I spent 3 years working with special needs students as a paraeducator in an inclusion program. These students were mainstreamed in general education classes at an
elementary school in Davis, CA. This included modifying and adapting coursework in all subject areas with elementary aged students with a variety of special needs and circumstances.
16 Subjects: including algebra 1, English, algebra 2, special needs
...Unfortunately for many people with test anxiety, test scores are very important in college and other school admissions and can therefore have a huge impact on your life. If you approach
test-taking in a way that makes it fun, it takes a lot of the anxiety out of the process, and your scores will improve. Working with a kind and patient tutor can be instrumental in this process.
48 Subjects: including algebra 1, English, Spanish, reading
...I tutor high school, college, and university students on a daily basis in AP Calculus classes and in single and multivariable calculus courses at colleges and universities. I also teach
calculus to individuals outside of any class. I supervised small groups with aspiring math teachers learning calculus and I taught for several years my own university courses that built heavily on
41 Subjects: including algebra 1, calculus, statistics, geometry
With a BA in Economics from the University of Chicago, and an MFA in Creative Writing from the University of Georgia, I can tutor a wide variety of subjects. I have worked with kids of all ages
through 826 Valencia, and I currently teach undergraduate writing at the University of San Francisco. I was also a Research Fellow at Stanford Law School, where I did empirical economics research.
39 Subjects: including algebra 1, English, chemistry, physics
...Most concepts are covered in the Economics and Accounting classes as well. I've tutored students in Business courses in college as well. I have been using Microsoft Outlook for the past 5 years
as my main form of Work communication.
21 Subjects: including algebra 1, reading, English, writing
Gaithersburg SAT Math Tutor
...I am a business major and I am working towards my undergraduate degree. I like teaching a lot as it is an extremely fulfilling job. I have always enjoyed helping my friends out with any
questions they have regarding school.
19 Subjects: including SAT math, statistics, accounting, algebra 1
...I am also a long-distance runner and have run the Marine Corp Marathon. I am looking to teach tennis, law, government, math-related subjects or marathon training.I am a practicing attorney and
regularly argue before the court. I lecture frequently at bar courses.
11 Subjects: including SAT math, geometry, algebra 1, algebra 2
...Throughout the past few years as a middle school math teacher, I have helped many students strengthen their basic skills to allow them to be successful in other areas of math. I have attended
many math trainings and conferences which have provided me with a wealth of teaching strategies and tech...
4 Subjects: including SAT math, algebra 1, elementary math, prealgebra
...I will go through the assignments you may have, explain the objective of the assignment, then provide the guideline to accomplish the assignments.I have been working in Java application for
more than eight years. Those applications are web based application with Oracle database as the back end d...
8 Subjects: including SAT math, Chinese, TOEFL, Java
...I minored in Art Studio. I concentrated in black & white media, and I understand perspective as well as art theory as it pertains to Socrates. I currently have 8+ years of experience playing
31 Subjects: including SAT math, English, reading, writing
Graph Drawing System Operations
The Initial Button
When the system is first started (e.g., through a page), there is a single button, Show, which shows the main graph drawing window. When this button is pressed it changes into Hide, which naturally hides the graph drawing window.
The User Operations
When in the graph window the following operations are supported:
• Left-Click (and Drag) Move vertices around, with the edges pulling them back like rubber bands.
• Shift Left-Click This fixes the location of a vertex, so it does not jitter.
• Double-Click This changes the graph to display only the localized region around the clicked vertex, i.e. only those vertices connected to the clicked one. To go back, use the Back button.
• Shift Double-Click This changes the graph so that several vertices are grouped into one vertex. Either all vertices within a certain distance are grouped, or, if there are some vertices which are
fixed in place, the set of fixed vertices is grouped.
• Ctrl Left-Click (and Drag) In the relaxed embedding this adjusts the lengths of the edges.
• Blue Button Pressing the blue button in the lower right corner of the graph window shows/hides the control window, described below.
The Embeddings
The following embeddings are supported:
• Stable stabilizes everything so that nothing moves.
• Relax does a weak-repulsion layout of the graph with fixed length edges.
• Random puts the vertices on random static points.
• Circular puts the vertices on static points on the circumference of a 2D circle (random in Z).
• Barycentric positions the vertices by solving a linear system, given a set of previously placed vertices. The set of previously placed vertices can be either a cycle from the graph or the set of
vertices fixed in place.
• ForceDirect is similar to that above, but is a more conventional implementation without some heuristics which are used above.
• Linear does a hierarchical layout of the vertices.
The Controls
The various controls in the control window are the following:
• Embeddings, the upper left selector, allows selection of one of the above embeddings.
• 3D / 2D allows or disallows Z values other than 0 for vertices.
• Back goes back to a previous localization or grouping (see above).
• Rotate Shows or hides the rotation control window, in which there are three sliders, X, Y, and Z, which control rotation around the three axes.
• Area Constant Controls the constant multiplier of the area of the graph drawing space. Usually this lets you control the size of the graph, shrinking or expanding the embedding.
• Localization Depth Selects how deep a double-click localization or grouping is; e.g., if it is 2, all vertices within depth 2 of the selected vertex are used in the localization or grouping.
• Minimum Temperature Selects the minimum possible temperature of an annealing process. This allows you to "turn up the heat" on a force-directed embedding.
• Attraction Constant and Exponent In the force-directed embedding the attraction between adjacent vertices follows the formula f(d) = c/k * d^e where c and e are the constant and exponent
controlled by these sliders.
• Repulsion Constant and Exponent In the force-directed embedding the repulsion between any two vertices follows the formula f(d) = -c*k^2 / d^e where c and e are the constant and exponent
controlled by these sliders.
• Current Temperature This field shows the current temperature of a force-directed placement, or the timestep of the hierarchical layout.
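The attraction and repulsion laws above are enough to sketch one relaxation step of the force-directed embedding. The following Python model is not the system's own code (no source is shown here); the constant values, the value of k, and the fixed time step are illustrative assumptions:

```python
# One relaxation step using the manual's force laws:
#   attraction f(d) = c/k * d^e     between adjacent vertices
#   repulsion  f(d) = -c*k^2 / d^e  between any two vertices
import math

def force_directed_step(pos, edges, k=1.0, c_att=1.0, e_att=2.0,
                        c_rep=1.0, e_rep=1.0, step=0.01):
    """Advance 2-D positions {vertex: (x, y)} by one time step."""
    force = {v: [0.0, 0.0] for v in pos}
    verts = list(pos)
    for i, u in enumerate(verts):             # repulsion: every pair
        for v in verts[i + 1:]:
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            d = math.hypot(dx, dy) or 1e-9
            f = -c_rep * k * k / d ** e_rep   # negative: pushes apart
            for w, s in ((u, 1), (v, -1)):
                force[w][0] += s * f * dx / d
                force[w][1] += s * f * dy / d
    for u, v in edges:                        # attraction: adjacent only
        dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
        d = math.hypot(dx, dy) or 1e-9
        f = c_att / k * d ** e_att            # positive: pulls together
        force[u][0] += f * dx / d
        force[u][1] += f * dy / d
        force[v][0] -= f * dx / d
        force[v][1] -= f * dy / d
    return {v: (pos[v][0] + step * force[v][0],
                pos[v][1] + step * force[v][1]) for v in pos}
```

Iterating this step with a shrinking step size is the annealing that the Minimum Temperature slider controls; vertices fixed with Shift Left-Click would simply be skipped in the position update.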
Homework Help
Posted by lawrence on Wednesday, October 12, 2011 at 11:59am.
( 1.) An object is thrown upward from the edge of a tall building with a velocity of 10 m/s. Where will the object be 3 s after it is thrown? Take g = 10 m/s^2
15m above the top of the building
30m below the top of the building
15m below the top of the building
30m above the building
( 2.) A stone thrown from ground level returns to the same level 4 s later. With what speed was the stone thrown? Take g = 10 m/s^2
( 3.) What is common to the variation in the range and the height of a projectile?
horizontal velocity
time of flight
vertical velocity
horizontal acceleration
( 4.) A cart is moving horizontally along a straight line with a constant speed of 30 m/s. A projectile is fired from the moving cart in such a way that it will return to the cart after the cart has moved 80 m. At what speed (relative to the cart) and at what angle (to the horizontal) must the projectile be fired?
35.8m/s at 24degrees
38.6m/s at 54degrees
27m/s at 35degrees
24m/s at 44degrees
( 5.) The trajectory of a projectile is
an ellipse
a circle
a parabola
a straight line
( 6.) How fast must a ball be rolled along the surface of a 70-cm high table so that when it rolls off the edge it will strike the floor at the same distance (70cm) from the point directly below the
edge of the table?
( 7.) A ball is kicked and flies from point P to Q following a parabolic path in which the highest point reached is T. The acceleration of the ball is
zero at T
greatest at P
greatest at T and Q
the same at P as at Q and T.
( 8.) A mass accelerates uniformly when the resultant force acting on it
is zero
is constant but not zero
increases uniformly with respect to time
is proportional to the displacement of the mass from a fixed point
( 9.) The term that best describes the need to hold the butt of a rifle firmly against the shoulder when firing, to minimise the impact on the shoulder, is
forward displacement
forward acceleration
recoil velocity
( 10.) Two trolleys X and Y with momenta 20 Ns and 12 Ns respectively travel along a straight line in opposite directions before collision. After collision the directions of motion of both trolleys
are reversed and the magnitude of the momentum of X is 2 Ns. What is the magnitude of the corresponding momentum of Y?
6 Ns
8 Ns
10 Ns
30 Ns
( 11.) A force of 2i + 7j N acts on a body of mass 5kg for 10 seconds. The body was initially moving with constant velocity of i – 2j m/s. Find the velocity of the body in m/s, in vector form.
5i – 12j
12i – 5j
10i – 7j
7i – 10j.
( 12.) The exhaust gas of a rocket is expelled at the rate of 1300kg/s, at the velocity of 50,000m/s. Find the thrust on the rocket in newtons
6.7 x 10^7
3.5 x 10^7
7.6 x 10^7
5.7 x 10^7
( 13.) Sand drops at the rate of 2000 kg/min from the bottom of a hopper onto a belt conveyor moving horizontally at 250 m/min. Determine the force needed to drive the conveyor, neglecting friction.
( 14.) A 30,000-kg truck travelling at 10.0m/s collides with a 1700-kg car travelling at 25m/s in the opposite direction. If they stick together after the collision, how fast and in what direction
will they be moving?
( 15.) A gun of mass M is used to fire a bullet of mass m. The exit velocity of the bullet is v. Find the recoil velocity of the gun
( 16.) A 40-g ball travelling to the right at 30cm/s collides head on with an 80-g ball that is at rest. If the collision is perfectly elastic, find the velocity of each ball after collision
the first ball is going to the right at 10m/s while the other is going to the left at 20m/s
the first ball is going to the left at 10m/s while the other is going to the right at 20m/s
the first ball is going to the left at 20m/s while the other is going to the right at 10m/s
the first ball is going to the right at 10m/s while the other is going to the left at 10m/s
( 17.) A 10-g pellet of unknown speed is shot into a 2000-kg block of wood suspended from the ceiling by a cord. The pellet hits the block and becomes lodged in it. After the collision, the block and
the pellet swing to a height 30cm above the original position. What was the speed of the pellet? (This device is called the ballistic pendulum)
( 18.) How large an average force is required to stop a 1400-kg car in 5.0s if the car’s initial speed is 25m/s
( 19.) Which of these is not a statement of Newton’s law of universal gravitation?
gravitational force between two particles is attractive as well as repulsive
gravitational force acts along the line joining the two particles
gravitational force is directly proportional to the product of the masses of the particles
gravitational force is inversely proportional to the square of the distance of the particles apart
( 20.) What is the gravitational field strength at a height h above the surface of the Earth?
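Several of the kinematics items above reduce to one-line applications of the constant-acceleration formulas. As a quick sanity check (Python; the question numbers refer to the list above, with g = 10 m/s^2 where a question specifies it and g = 9.8 m/s^2 otherwise):

```python
# Worked checks for a few of the kinematics questions above.
import math

g = 10.0

# Q1: s = v0*t - (1/2)*g*t^2, with v0 = 10 m/s and t = 3 s
s1 = 10 * 3 - 0.5 * g * 3**2
print(s1)                      # -15.0 -> 15 m below the top of the building

# Q2: a 4 s round trip means 2 s going up, so launch speed v0 = g * t_up
v2 = g * 2
print(v2)                      # 20.0 m/s

# Q6: fall time t = sqrt(2h/g); roll speed v = d/t with d = h = 0.70 m
t6 = math.sqrt(2 * 0.70 / 9.8)
v6 = 0.70 / t6
print(round(v6, 2))            # 1.85 m/s

# Q18: average stopping force F = m * (change in speed) / t
F18 = 1400 * 25 / 5.0
print(F18)                     # 7000.0 N
```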
"Total" EH - incorporating different damage types into EH
Moderators: Fridmarr, Worldie, Aergis, theckhd
Xenix wrote:There are, however, some times where the question is "Is dA armor + dR resistance overall better than dS stam for a given fight" (e.g. the Onyxia resist ring), or similar questions
for resist vs. armor, resist + health vs. armor, etc. In that case you need a coupled formula between all three variables, which is one use for the formula I posted.
In that case, you just convert dA and dR to stamina equivalents using those two formulas and compare to dS.
I'm not saying there's no value in having the fundamental formula somewhere, and it's certainly more useful for matlab situations. But in reality, I don't see many people other than you or I firing
up matlab to do calculations with it.
Posts: 7655
Joined: Thu Jul 31, 2008 3:06 pm
Location: Harrisburg, PA
theckhd wrote:In that case, you just convert dA and dR to stamina equivalents using those two formulas and compare to dS.
I'm not saying there's no value in having the fundamental formula somewhere, and it's certainly more useful for matlab situations. But in reality, I don't see many people other than you or I
firing up matlab to do calculations with it.
Heh - can't argue with you much there. I'd say the main difference is in the intent of the formulas, though. Your separate formulas basically hold the fight variables constant and let armor/health/
resist vary, with the intent of giving you a fight-specific formula to weight an upgrade against. Mine is more designed to to hold the upgrades constant and let the fight variables (Pmit,Pnmit and
Mmit) vary, so I give you an upgrade-specific equation to weight a fight against.
As an example, in a program based on your formulas, you'd pick a fight you're interested in and it could then sort all available gear based on which provides the largest NEH upgrade (which is of course also doable with the dNEH line in my own formulas). My approach, on the other hand, would let you pick a piece of gear and then show a plot of when it is an upgrade, with markers on that plot representing each fight you're interested in. It gives you a better overall view that doesn't require Matlab once you've found the formula, especially if you're analyzing just one piece, but people not interested in the theory would be more interested in the gear list, which your approach works better for.
There's tons of data you can get out of the 7-D scalar field that the NEH formula covers though, everything from what we've calculated to things like calculating an iso-NEH surface in
armor-health-resist space that represents the minimum NEH for a specific fight so you could plug in your armor/health/resistance to a simple inequality (like the ones for the method I posted earlier)
and see if you meet it.
As much as I like the formulas I've posted, solving them for those variables instead and creating a separate surface equation based on NEH >= f(a,h,r) for each encounter/class would probably be the
most useful thing to do with it, since even more than "Is this piece better for me", you want to know "Do I meet the NEH requirements for this fight". Doing that should be a simple matter of plugging
the fight's coordinates in Pnmit-Pmit-Mmit space to the NEH definition, setting it equal to the required NEH and reducing all the coefficients as much as possible.
Once again, this would be easily possible to do for your specific armor/health/resist using only one NEH formula calculation, but plotting the iso-NEH surface would let you see what other gear choices you could make, as well as which gear possibility uses the least itemization points to meet the NEH requirements. Furthermore, you can plot a similar iso-NEH surface for
each class (and spec when considering death knights) to see where they lie in relation to each other if you are really curious.
Posts: 244
Joined: Thu Jun 25, 2009 4:56 am
I had a nagging doubt about your calculation, and now that I've sat down and started to work through it I'm certain that there's a mistake in the derivation. Mostly because it's the same one that I
battled through for a few hours while working on the derivation in the OP.
To make it obvious where the error is, let's go back to the EH expression. To make it easier to read, I'm going to convert back to an abbreviated notation. The particular damage types aren't
important at the moment to identify the error. Instead of P-NEH, Pn-NEH, M-NEH, and Mn-NEH, let's just let the damage types be X, Y, Z, and W, with total mitigations Mx, My, Mz, and Mw, so that the
EH formula is:

EH = X*H/(1-Mx) + Y*H/(1-My) + Z*H/(1-Mz) + W*H/(1-Mw)

or, defining Px through Pw as the damage-specific EH contributions (Px = H/(1-Mx), and so on):

EH = X*Px + Y*Py + Z*Pz + W*Pw
Here's where the error occurs. You then say you want to talk about gear changes which change the damage-specific EH contributions, so you introduce dPx, dPy, dPz, and dPw to represent those changes, and you're looking for the plane with zero change in EH. In other words, you're differentiating this equation to get:

0 = X*dPx + Y*dPy + Z*dPz + W*dPw

or, using 1 = X+Y+Z+W,

0 = X*(dPx - dPw) + Y*(dPy - dPw) + Z*(dPz - dPw) + dPw

From which you can easily find your intercepts Xo, Yo, and Zo:

Xo = dPw/(dPw - dPx), Yo = dPw/(dPw - dPy), Zo = dPw/(dPw - dPz)
Unfortunately, all of those equations are wrong. While the EH equation is linear in X, Y, Z, and W, it is NOT linear in Mx, My, Mz, or Mw. Any change in Mx will change both Px and X due to the way X
is defined (as a post-mitigation value).
To do it properly, you'd have to differentiate like this:

dEH = X*dPx + Y*dPy + Z*dPz + W*dPw + Px*dX + Py*dY + Pz*dZ + Pw*dW
Where dX would be the change in the observed damage intake percentage due to the change in mitigation for damage type X, dY would be the change in observed damage intake percentage due to the change
in mitigation for damage type Y, and so forth.
Needless to say, this can get very messy very fast, and the algebra became tedious enough that after an hour or so of trying to make it work out nicely I gave up. I'm sure it's possible, but the
algebra is so annoying that you're much more likely to make a mistake.
Note that for small changes, such that the observed damage intake percentages dX, dY, dZ, and dW are small, your formula will give a close enough approximation. But for anything more than a few
percent (i.e. the most trivial gear changes), it starts to give erroneous results.
This is why I went back to the definition in equation (12) to start differentiating:

EH = H / [P*(1-Ma)*(1-Mt) + B*(1-Mt) + G*(1-Mr)*(1-Mg)]
Here P, B, and G are the percentages of raw boss output, which are constants. So when differentiating this equation, we get an accurate representation of how changing any of those mitigation factors
Mi changes EH.
The general form of the differential equation would look like this, using <stuff> to represent the denominator of equation 12:

dEH = dH/<stuff> + (H/<stuff>^2) * [P*((1-Mt)*dMa + (1-Ma)*dMt) + B*dMt + G*((1-Mg)*dMr + (1-Mr)*dMg)]
You could convert this to a general form in X, Y, and Z:

dEH = (H/<stuff>) * [dH/H + X*dMa/(1-Ma) + (X+Y)*dMt/(1-Mt) + Z*dMr/(1-Mr) + Z*dMg/(1-Mg)]
But as you see, that's not the same as X*dPx + Y*dPy + Z*dPz + W*dPw (and, sadly, still has <stuff> in it, which requires knowledge of P, B, and G, though these can also be solved in terms of X, Y,
and Z).
Just to check, you can plug in algebraically if you want - dPx is [dH + dMa*H/(1-Ma) + dMt*H/(1-Mt)]/[(1-Ma)*(1-Mt)], and so on. If you work through it, you get:

X*dPx + Y*dPy + Z*dPz + W*dPw = dEH - (Px*dX + Py*dY + Pz*dZ + Pw*dW)
Last edited on Wed Dec 02, 2009 9:09 am, edited 1 time in total.
Posts: 7655
Joined: Thu Jul 31, 2008 3:06 pm
Location: Harrisburg, PA
As a separate thought - if your interest is doing things numerically in Matlab, there's very little point in bothering with the differentiation. We could just start from Equation 12, or a variation
that included non-resistable magic damage (i.e. spellfire):

EH = H / [P*(1-Ma)*(1-Mt) + B*(1-Mt) + G*(1-Mr)*(1-Mg) + J*(1-Mg)]
We have H, and all of the Mi's. What we don't have are P, B, G, and J - instead we have X, Y, Z, and W, along with the definitions (again using <stuff> for the denominator):

X = P*(1-Ma)*(1-Mt)/<stuff>
Y = B*(1-Mt)/<stuff>
Z = G*(1-Mr)*(1-Mg)/<stuff>
Those are three equations in four variables (since one is redundant due to the constraint of W=1-X-Y-Z). But with P+B+G+J=1 we have four, which means we can solve for P, B, G, and J in terms of X, Y,
Z, and W. Putting this system of linear equations in matrix form and solving it (I used Mathematica, but you could do it by hand if you wanted) gives us:

P = X*<stuff>/[(1-Ma)*(1-Mt)], B = Y*<stuff>/(1-Mt), G = Z*<stuff>/[(1-Mr)*(1-Mg)], J = W*<stuff>/(1-Mg)

Plugging these into P+B+G+J=1 gives:

<stuff> = 1 / [X/((1-Ma)*(1-Mt)) + Y/(1-Mt) + Z/((1-Mr)*(1-Mg)) + W/(1-Mg)]

You could then plug this in to get definitions of P, B, G, and J in terms of only X, Y, and Z (with W = 1-X-Y-Z). That's a lot of algebra, so again I'll enlist the help of Mathematica to simplify all four expressions; for example,

P = [X/((1-Ma)*(1-Mt))] / [X/((1-Ma)*(1-Mt)) + Y/(1-Mt) + Z/((1-Mr)*(1-Mg)) + (1-X-Y-Z)/(1-Mg)]

and similarly for B, G, and J.
So now we have expressions for P, B, G, and J that depend only on X, Y, Z, and the Mi's (i.e. known values).
This means that to answer questions about how changing our gear affects a fight, we can follow this procedure:
1) Take X, Y, Z, and the Mi's to calculate P, B, G, and J for a given boss fight.
2) Using those known values of P, B, G, and J, use equation (12) to calculate EH based on the mitigation factors Mi. For example, this could plot EH as you vary any of the Mi's.
3) Rather than differentiate, we can just calculate EH twice for the two cases we're interested in and subtract. For example, if you want to find out if A armor is better or worse than S stamina,
calculate EH for both cases and subtract to get the net difference in EH. This can easily be done for a variety of P, B, G, and J values just as you've done already, and plotted against X, Y, Z, and
W instead of P, B, G, or J just by using the definitions.
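The three-step procedure is easy to carry out in any language; here is a Python sketch (the thread's own code is Matlab and is not reproduced here). The Mt/Mg/Mr constants are the with-BoSanc values quoted later in the thread, K = 16635 is the commonly quoted armor constant against a level-83 attacker, and the health, armor, damage mix, and 12.54 health-per-stamina figures are illustrative assumptions:

```python
K = 16635.0                         # armor constant vs. a level-83 attacker
MT, MG, MR = 0.1421, 0.1936, 0.10   # talent/resist mitigation (with BoSanc)

def raw_fractions(X, Y, Z, Ma):
    """Step 1: recover pre-mitigation fractions P, B, G, J from the
    observed post-mitigation fractions X, Y, Z (and W = 1 - X - Y - Z)."""
    W = 1.0 - X - Y - Z
    raw = [X / ((1 - Ma) * (1 - MT)), Y / (1 - MT),
           Z / ((1 - MR) * (1 - MG)), W / (1 - MG)]
    total = sum(raw)
    return [r / total for r in raw]

def eh(H, A, P, B, G, J):
    """Step 2: equation (12), extended with the non-resistable term J."""
    Ma = A / (A + K)
    return H / (P * (1 - Ma) * (1 - MT) + B * (1 - MT)
                + G * (1 - MR) * (1 - MG) + J * (1 - MG))

# Step 3: compare two upgrades by direct subtraction, no differentiation.
H, A = 45000.0, 25000.0             # example health and armor
P, B, G, J = raw_fractions(X=0.6, Y=0.2, Z=0.2, Ma=A / (A + K))
base = eh(H, A, P, B, G, J)
gain_from_armor = eh(H, A + 500, P, B, G, J) - base
gain_from_stam = eh(H + 500 * 12.54, A, P, B, G, J) - base
print(gain_from_armor > 0 and gain_from_stam > 0)   # True
```

Whichever of the two gains is larger tells you which upgrade is better for that particular fight's damage mix.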
Last edited on Wed Dec 02, 2009 9:08 am, edited 1 time in total.
Posts: 7655
Joined: Thu Jul 31, 2008 3:06 pm
Location: Harrisburg, PA
You guys lost me somewhere on page 5....
Can you guys confirm some numbers for Theck's variables for Mt, Mg, and Mr. Currently in my spreadsheet, I am using the following:
Posts: 548
Joined: Sat Feb 28, 2009 4:17 pm
Wrathy wrote:You guys lost me somewhere on page 5....
Can you guys confirm some numbers for Theck's variables for Mt, Mg, and Mr. Currently in my spreadsheet, I am using the following:
Those can't be right; Mg should be larger than Mt thanks to Guarded by the Light. That also means I made a typo in the article when I re-wrote the long part of the derivation. I'll fix that when I get home and can make sure the m-file I used is accurate.
It looks like:
Mg = 0.1686 without BoSanc, 0.1936 with BoSanc, and 0.2178 with BoSanc and Renewed Hope
Mt = 0.1156 without BoSanc, 0.1421 with BoSanc, and 0.1678 with BoSanc and Renewed Hope
Ma is just A/(A+K) as in the article
Mr should be 0.1 for having an aura up (128 resistance to guarantee 10% reduction).
I would say it's probably safe to use the BoSanc values since we bring that to the table automatically. Renewed Hope is a toss-up, there's no guarantee you'll have a Disc priest around, so it's
probably better to ignore it.
Posts: 7655
Joined: Thu Jul 31, 2008 3:06 pm
Location: Harrisburg, PA
Use the numbers without renewed hope:
BoSanc and Renewed Hope Don't Stack
So 0.1421 and 0.1936 should be the numbers of choice.
Posts: 1420
Joined: Wed Mar 12, 2008 10:15 am
Ah, ok. So they did finally get around to fixing that.
Posts: 7655
Joined: Thu Jul 31, 2008 3:06 pm
Location: Harrisburg, PA
By the way, just to drive home the point about the error in the derivation, I whipped up a quick matlab script to demonstrate it:
After doing a few test plots, it calculates the Armor:Stamina ratio three ways:
• The analytical form dA/dS = 12.54*(K+A)/H*1/(1-X-Y-Z)
• The "computational" form, where it calculates P,B,G,J and then calculates EH for (H,A), (H+dH,A), and (H,A+dA) and from that calculates the ratio [EH(H,A+dA)-EH(H,A)]/[EH(H+dH,A)-EH(H,A)]
• Analytically using Xenix's form, where we differentiate incorrectly by ignoring dX, dY, and dZ
This is the result:
As you can see, the computational form is very accurate (the difference between the green line and the blue line is around 0.02 armor/stam, which is good considering we're effectively doing a coarse
differentiation rather than a proper one). Xenix's form agrees for small Y, but deviates considerably once Y grows appreciably.
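For readers without Matlab, the same comparison can be reproduced in a few lines of Python (the original script is not shown in the thread). Note that in the formula and plots here, X/Y/Z denote the non-physical damage fractions, so the physical fraction is 1-X-Y-Z; the health, armor, and damage-mix inputs below are arbitrary test values:

```python
K = 16635.0                         # armor constant vs. a level-83 attacker
MT, MG, MR = 0.1421, 0.1936, 0.10   # with-BoSanc mitigation values
HP_PER_STAM = 12.54                 # health per point of stamina

def eh(H, A, P, B, G, J):
    """Equation (12) plus the non-resistable magic (J) term."""
    Ma = A / (A + K)
    return H / (P * (1 - Ma) * (1 - MT) + B * (1 - MT)
                + G * (1 - MR) * (1 - MG) + J * (1 - MG))

H, A = 45000.0, 25000.0
P, B, G, J = 0.7, 0.1, 0.1, 0.1     # pre-mitigation damage mix (arbitrary)

# Post-mitigation physical fraction, i.e. the (1 - X - Y - Z) in the formula
Ma = A / (A + K)
phys_num = P * (1 - Ma) * (1 - MT)
phys = phys_num / (phys_num + B * (1 - MT)
                   + G * (1 - MR) * (1 - MG) + J * (1 - MG))

# Analytic form: dA/dS = 12.54*(K+A)/H * 1/(1 - X - Y - Z)
analytic = HP_PER_STAM * (K + A) / H / phys

# "Computational" form: coarse finite differences on the EH formula itself
base = eh(H, A, P, B, G, J)
computational = ((eh(H + HP_PER_STAM, A, P, B, G, J) - base)
                 / (eh(H, A + 1, P, B, G, J) - base))

print(round(analytic, 2), round(computational, 2))  # the two agree closely
```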
Anyhow, here are some pretty plots to demonstrate how you'd do the type of calculations Xenix is talking about. If we let Z=0 and let X and Y each vary from 0 to 0.4, we can convert the 2-dimensional
plot above into a 3-dimensional one:
And perhaps more conveniently, we can invert the ratio (i.e. plot Stamina:Armor). The advantage to doing this is that instead of blowing up like the Armor:Stam ratio does (since it varies as 1/
(1-X-Y)), it's a linear function of Y and X, so we can plot it out to Y=1 and X=1. This is probably more conveniently viewed in two dimensions with a colorbar:
In other words, if every point of armor is worth 1/11.6961 = 0.0855 stamina on a purely physical fight, it's worth less on a fight with magical or bleed damage. The exact amount can be figured out by
picking X and Y values, locating the color on the plot, matching that up to an approximate value on the colorbar.
The effectiveness could be found by dividing that number by 0.0855, or I could just plot that for you:
Posts: 7655
Joined: Thu Jul 31, 2008 3:06 pm
Location: Harrisburg, PA
Edited after doing the math:
After looking at it, you are correct - I used the NEH formula which had X/Y/Z as the post-mitigation damage-type ratio thinking that it was in fact the pre-mitigation damage% ratio. The whole "this
is equal to XX differential equation" thing was something I came up with after seeing the plot in mitigation%-space of a change in stats calculated from the NEH formula. If you make such a 3-d plot,
you will see that the delta-NEH = 0 surface is in fact a plane with the given equation, but that equation doesn't quite represent what I thought it did since my axis variables were different from what
I thought they were.
The formula can still be used to calculate for which fights the gear change you made is good/bad, but using the formula to extrapolate beyond that will not work unless you transform back to
pre-mitigation% space.
Last edited by
on Wed Dec 02, 2009 2:21 pm, edited 2 times in total.
Posts: 244
Joined: Thu Jun 25, 2009 4:56 am
eh... woosh?
wtb tankadin plugin, lol
"Do I have any gear that would've boosted my EH for the past fight?"
Posts: 671
Joined: Wed Jul 16, 2008 3:02 am
Location: Silvermoon, EU
Awesome work. I look forward to going through this line by line.
If cataclysm wasn't now on the horizon with its promised simplifications, I would advocate the creation of a Theck style Ratingbuster addon, complete with a layman's term plugin :p
Edit: The 3-D color graphs are awesome btw, just watch out as you may start giving people seizures.
Posts: 65
Joined: Tue Jun 09, 2009 9:38 am
<edit> Looks like you were editing your post while I was replying to it.
I'm going to leave my response here just so I have something to point to in case anyone else makes the same mistake later. Which is likely - I made the same mistake in my first few tries at this
calculation, and only realized something was up when I started plotting the results in Matlab and getting weird answers.
Xenix wrote: My own X-Y-Z-W were pre-mitigation percentages for exactly that reason - so they would not change with any change in the gear variables. Remember that EH is how much un-mitigated
damage of a specific type you can take before dying. That would necessitate the percentages in my own formula being pre-mitigation ones.
Except that's not the formula. If you write it as:
Code:
EH = X*Px + Y*Py + Z*Pz + W*Pw
Then X, Y, Z, and W are by definition post-mitigation. This is very clear from the derivation - look back at the section going from equations (8) to (12) in the original post.
If you're going to use pre-mitigation values, then the form has to be the same as eq. (12), since that's the formula for effective health derived from first principles. The only way to get it in the
nice, intuitive form of equation (19) is to use post-mitigation values.
The first plot in my previous post demonstrates the discrepancy as well. It wouldn't matter whether I used the analytical form of your expression or the "computational" equivalent - it would still
give the wrong result.
Xenix wrote:However, if you were to start with the NEH equation and differentiate it, this is what you would do (to get my equations):
Code:
Your starting equation, where Px, Py, Pz and Pw are your 100% damage-specific EH's, and X, Y, Z and W are your pre-mitigation damage percentages:
EH = X*Px + Y*Py + Z*Pz + W*Pw
Take the partial derivative with respect to Pw for:
dEH/dPw = X*(dPx/dPw) + Y*(dPy/dPw) + Z*(dPz/dPw) + W*(dPw/dPw) + Px*(dX/dPw) + Py*(dY/dPw) + Pz*(dZ/dPw) + Pw*(dW/dPw)
Now, in the plane I plotted, EH = constant. Also, dPw/dPw = 1, and the last four terms are all zero since I'm using pre-mitigation numbers for them (If you're taking 50,000 raw P-mit damage and 50,000 raw M-mit damage, changing your gear still means you're taking that much raw damage and still need 100,000 NEH to survive, and you'll always be on the 50%X,0%Y,50%Z point in mitigation% space).
This is the fundamental error highlighted: dX/dPw through dW/dPw are not zero if you define EH the way you just have. This is also why it doesn't matter whether you computationally determine the dPi/dPw's or whether you do it analytically, because you've just thrown away half of the relevant information in the equation.
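Toy numbers make the thrown-away terms concrete. This is my own stand-in model, not the thread's matlab code: armor is the only mitigation, K = 16635 is an assumed constant, f an assumed raw physical fraction, and there are just two damage types, so EH = X*Px + W*Pw with post-mitigation weights X and W.

```python
K = 16635.0

def parts(H, A, f):
    m = A / (A + K)                      # armor mitigation
    q = f * (1 - m) + (1 - f)            # weighted post-mitigation intake
    Px, Pw = H / (1 - m), H              # damage-specific EHs (physical, magic)
    X, W = f * (1 - m) / q, (1 - f) / q  # post-mitigation damage fractions
    return Px, Pw, X, W

H, A, f, dA = 40000.0, 25000.0, 0.6, 1.0

Px0, Pw0, X0, W0 = parts(H, A, f)
Px1, Pw1, X1, W1 = parts(H, A + dA, f)

true_dEH  = (X1 * Px1 + W1 * Pw1) - (X0 * Px0 + W0 * Pw0)
xenix_dEH = X0 * (Px1 - Px0)             # keeps only X*dPx, drops dX and dW
full_dEH  = xenix_dEH + Px0 * (X1 - X0) + Pw0 * (W1 - W0)

print(true_dEH, full_dEH, xenix_dEH)     # first two agree; the third doesn't
```

Keeping only X*dPx overstates the armor gain substantially in this example, while the full product rule reproduces the true change in EH.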
Posts: 7655
Joined: Thu Jul 31, 2008 3:06 pm
Location: Harrisburg, PA
Yeah, definitely a good idea to keep that post for reference.
Also, I could re-do my original visualization by just changing the neh.m file to the correct formula which would give me the same plots for the corrected axes, but with the derivations you did, it's
much simpler to avoid the brute-force method and instead just calculate the dNEH=0 surface directly for a certain gear change and compare that to the full range of possible fights.
Once again, this will not be worth doing if you only care about a single fight - in that case, you should just calculate the dNEH for that fight for each piece of gear available and sort them in
order. It would only be worth doing if you care about a single piece of gear and want to see over what ranges of fights it's an upgrade.
Also, if you want to try and avoid confusion, the following image shows the generalized formulas 12 and 19 in a format that's a bit easier to read for people who are used to math notation:
In these formulas, H is health, Phi_i is the percent unmitigated damage (of the total unmitigated damage) of the i'th type, Phi-hat_i is the percent mitigated damage (of the total mitigated damage)
of the i'th type, and M_j is the percent reduction in damage of that type from your j'th source. For damage types we're used to these products are well-known, but I left it as a product over j types
of damage reduction since that's the true form of the equation.
Furthermore, in these forms you can manually differentiate with respect to any variable U and find a closed-form solution that's only slightly messy (what Theckd ended up using Mathematica for when
considering the four damage types we currently encounter), but that might be taking it a bit far. I'll see how bored I get studying for finals and might end up doing it to see if the result is
<Edit>: I really was bored tonight, enough to complete the derivation of the full differential form of the NEH equation using pre-mitigation variables (e.g. the -first- one.) So as not to scare
everyone away from this thread (the final equation should do that alone), I won't post all 10 steps of the derivation, but if you want to check my work, I made a PDF of the derivation.
The final equation is the following, when you differentiate with respect to ANY variable U:
where Q-bar is what I am calling your "Weighted Damage Intake %", defined as the denominator of the original NEH equation (or simply your Health divided by your NEH):
Now, before your eyes glaze over from trying to comprehend that cold, let's see what it says. The change in NEH with any other variable can be broken down into three terms:
• Term 1: How your health changes with that variable.
• Term 2: How the damage percentages change with that variable.
• Term 3: How the mitigation percentages change with that variable.
In most cases, two of those terms will be zero, simplifying matters greatly.
As a quick example of an easy use of this, let's see how NEH changes with a change in stamina. Terms 2 and 3 would be zero in this case because stamina has no effect on the damage percentages or
mitigation percentages, which means the two scary terms disappear, leaving you with:
And even better, dH/dS is a simple constant for any class (12.54 for paladins), which means that your change in NEH per point of stamina is equal to 12.54 divided by Q-bar.
To check my work against Theckd's equations, I then took dNEH/dA. For this case, the first two terms disappear as those values do not change with armor, and all of the non-pmit damage parts of the
final summation term disappear as well. In the end, you are left with:
Where Phi_p is the % of raw damage affected by armor, Mp is your physical damage mitigation, Ma is your armor mitigation and K is the armor constant.
Now, you can invert one and multiply the two to get the following:
Code:
dA/dS = (dA/dNEH) * (dNEH/dS)
which gives for dA/dS:
Note that the first equation is only in terms of pre-mitigation variables, but as Theckd noticed in his derivation of the same equation, the second fraction is the same as 1/(Phi-hat_p), or 1 over
the post-mitigation % of damage that is physical. Also, dH/dS is how much health you get from a point of stamina, or 12.54 for paladins, and you set dA/dS equal to zero to find the equivalence
points. Furthermore, just to make it clear (since I didn't realize this initially from Theckd's previous posts), dS/dA is linear only with the % of physical damage post-mitigation. If you're using
raw damage percentages, you'll have to use the first version of the equation.
Now, you may ask yourself - what is the big deal since I just re-derived an equation that we already knew (aside from proving the differential form is correct)? In short, we can use the same method
to calculate how ANY two variables relate to each other.
Want to know dPhi_p/dPhi_m for some strange reason? Just take (dNEH/dPhi_m) / (dNEH/dPhi_p). This equation lets you analytically determine the differential ratio between any two variables related to
NEH, or even just the change in NEH versus one variable. It's probably not something more than a few people will use, but we now actually have it posted if someone were to want to derive a new
relationship analytically instead of numerically, and it is valid for any number of damage types.
Last edited by
on Thu Dec 03, 2009 7:22 am, edited 1 time in total.
Posts: 244
Joined: Thu Jun 25, 2009 4:56 am
You two are both totally batshit insane. And I love you for it.
Posts: 1420
Joined: Wed Mar 12, 2008 10:15 am
ACT SparkNotes Test Prep: The Format of the Math Test
The Format of the Math Test
The format of the ACT Math Test is straightforward. ACT simply lumps all the problems into one big list of math questions. The only visible quirk in formatting is that all the questions are printed
in the left half of the page, while the right half is reserved for “your figuring.” We’ll discuss this empty space and what you should do with it in the Strategies section of this chapter. There are
two other aspects of Math Test questions that you should keep in mind. We’ll describe them to you below.
Five, Not Four, Multiple Choice Answers
Unlike the three other ACT Subject Tests, the Math Test offers you five, not four, multiple choice answers. You should be aware of this fact when filling in the bubbles on the answer sheet. If you
are answering choices D or J, don’t automatically fill in the last bubble in the row because you’ll really be filling in E or K. Again, this is just another reason to verbalize to yourself which
blank you want to be filling in as you actually fill it in.
Guessing with the Extra Answer Choice
The additional answer choice will also affect your chances of guessing the right answer. If you plan to guess blindly on a math problem, your odds of getting the correct answer are one in five, or 20
percent. On the other Subject Tests, your chances are higher: one in four, or 25 percent. This difference of 5 percent really isn’t that big of a deal and shouldn’t change your guessing strategy. You
should still guess on any question you can’t answer. Guess blindly if you have no clue about how to answer the question. But your best bet is always to eliminate whatever answer choices you can and
then guess.
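The arithmetic generalizes: your chance on a blind guess is one over the number of choices still in play, which is why eliminating even one option beats guessing cold. A quick sketch (the helper function is ours, not ACT's):

```python
def guess_probability(choices=5, eliminated=0):
    """Chance that a blind guess among the remaining options is correct."""
    return 1 / (choices - eliminated)

print(guess_probability())               # 0.2  -> the 20% quoted above
print(guess_probability(choices=4))      # 0.25 -> the other Subject Tests
print(guess_probability(eliminated=2))   # about 0.33 after ruling out two
```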
Question Types: Basic Problems and Word Problems
There are two kinds of questions on the ACT Math Test: basic problems and word problems. Word problems tend to be more difficult than basic problems simply because they require the additional step of
translating the words into a numerical problem that you can solve. Of course, a basic problem on a complex topic will still likely be more difficult than a word problem on a very easy topic.
Basic Problems
Basic math problems are exactly how they sound: basic. You won’t see any complicated wording or context in these problems. They simply present you with a math problem in a no-frills fashion. If you
encounter a basic math problem that asks you to calculate what two plus two is (you won’t), the question would look like this:
That’s pretty straightforward; you shouldn’t have a problem figuring out what this question wants you to do.
Word Problems
Word problems are so named because they use words to describe a math problem. These questions are by nature more complicated than basic math problems because you have to sort through the words to
figure out the math problem beneath them. In essence, you have two steps: figuring out what the math problem is and then solving it.
For example, if you were asked for the same calculation of two plus two as a word problem, it might look something like this:
Jane has two green marbles, and Beth has two red marbles. Together, how many marbles do Jane and Beth have?
This question isn’t exactly complicated, but it is certainly more complicated than the basic version of the problem. The setting of the problem, rather than elucidating the question, only adds to its
complexity. Your job on this and all word problems is to sort through the muck and translate the words into a straightforward math problem. A question like “Together, how many marbles do Jane and
Beth have?” really means “Jane’s marbles plus Beth’s marbles equals what?” or ultimately “Two plus two equals what?”
Posts from October 2011 on Cycle of Futility
The Evening Standard reports that Transport for London’s staff have made expense claims for over a million miles of journeys in their own cars since April 2008.
This includes some incredibly short trips – my favourite of those listed being Moorgate to Liverpool Street (about 300 metres).
Having seen the Freedom of Information responses (see below), I can reveal that other trips include Bank to Moorgate, Pimlico to Victoria and Baker Street to Marylebone. I’ve fallen over and
travelled further than some of these.
Except if you’re TfL staff
The Standard article notes Transport for London’s defence that the journeys were “made at night when the Tube had stopped”.
Yet how they’re able to make this claim, I don’t know. When I asked TfL for the details of when specific trips were made, under Freedom of Information legislation, they responded that they “do not
record” this information.
Insulated from London
Of the 1069 journeys of five miles or under that were claimed for in 2010/11, [DEL:not a single one was cycled.:DEL] (Correction 9.14am: Only 8 were cycled. Thanks to the beady-eyed Twitter user who
pointed this out.)
Is it any surprise that cyclists are dying on TfL’s dangerously designed streets? Can we realistically expect that an organisation so entrenched in car culture will actually try to sort this out?
These expense claims raise a number of questions:
1. Why aren’t employees being encouraged to avoid using cars for short trips, as outlined in the Travel at Work Policy?
2. Why aren’t TfL recording the times of trips made? And why is their Press Office then allowed to claim that journeys are made at night?
3. Who is making expense claims for car trips of less than 1 mile in Central London? And who is signing them off?
Something is deeply wrong at the core of TfL. The organisation’s Chair, Boris Johnson, needs to get a grip on it.
For the main FOI response, including total miles travelled and cash paid in the last five years click here. For the spreadsheet of all 6000 trips made in the last year, click here.
Lighting on 3D Objects - Page 3
In-depth: The Math
We'll have to calculate a number of things:
• the angle of the facets on the z-axis
• the angle of the facets on the xy-axis
• the angle of the light source on the z-axis - the light source has a fixed distance: the radius of the sphere
• the angle of the light source on the xy-axis
• the whiteness of each facet - depending on both angle (50%) and distance (50%) of the light source
• the path the light source will follow in 'automatic' mode
What was the deal with these trigonometric functions again?
sin(a) = B / C    cos(a) = A / C    tan(a) = B / A
sin(b) = 1        cos(b) = 0        tan(b) = undefined   (b = 90 degrees)
sin(c) = A / C    cos(c) = B / C    tan(c) = A / B
And this pythagoras thingy?
C^2 = A^2 + B^2, so C = sqrt(A^2 + B^2)
Important! - Flash works in radians!
The formula for converting degrees to radians is:
radian = Math.PI/180 * degree
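A quick sanity check of the conversion in Python, with math.pi standing in for Flash's Math.PI:

```python
import math

CONVRAD = math.pi / 180            # the tutorial's convrad constant

print(90 * CONVRAD)                # about 1.5708: 90 degrees in radians
print(math.degrees(90 * CONVRAD))  # back to (approximately) 90.0
```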
Calculating the facet angles
To save me some serious paper filling calculations I've built in an angle detector (it's the lower left button). Just drag point1 and point2 parallel with the desired angle ;-)
The front and side view can be accesed with the top right button.
The use of atan is discussed in the next topic. The formula code is on the angle detection line mc:
onClipEvent (enterFrame) {
	// place the line between mcs point1 and point2
	// notice the width and height of this line are 100
	this._x = _root.angledetect.point1._x;
	this._y = _root.angledetect.point1._y;
	this._xscale = (_root.angledetect.point2._x - _root.angledetect.point1._x);
	this._yscale = (_root.angledetect.point2._y - _root.angledetect.point1._y);
	// Math.atan for angles, converted to degrees
	xa = this._width;
	ya = this._height;
	_parent.angle = int(Math.atan(ya/xa) / (Math.PI/180));
	// make less decimals and calculate angle2
	_parent.point1.angle = _parent.angle;
	_parent.point2.angle = 90 - _parent.angle;
}
The light source angle on the xy-axis
We'll use Math.atan to calculate this angle: we know A and B (distance base to light source on x and on y):
xt= xm - xb;
yt= ym - yb;
xb and yb are the base coordinates, xm and ym the coordinates of the light source.
_root.DgrMouseX = int(Math.atan(yt/xt) / _root.convrad);
convrad = Math.PI/180;
This function gives a flash angle, further on we'll recalculate this angle to an angle between 0 and 360 degrees.
The light source angle on the z axis
The fixed distance of the light source lets us calculate the angle it has. We know that side (C) will be the radius of the sphere, in this case 180. We calculate A (DeltaX) by using pythagoras on xt
and yt.
Then we use the Math.acos on A and C to calculate the z-angle:
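The acos snippet itself appears to have been lost from the page, but following the text it would look roughly like this in Python (variable and function names are ours):

```python
import math

RADIUS = 180.0   # the fixed distance of the light source (sphere radius)

def z_angle_deg(xt, yt):
    """Light-source elevation: acos(planar distance / radius), in degrees."""
    planar = math.hypot(xt, yt)        # Pythagoras on xt and yt
    planar = min(planar, RADIUS)       # clamp so acos gets a value <= 1
    return math.degrees(math.acos(planar / RADIUS))

print(z_angle_deg(0, 0))     # about 90: light directly over the centre
print(z_angle_deg(180, 0))   # 0: light sits on the rim of the sphere
print(z_angle_deg(90, 0))    # about 60, since acos(0.5) = 60 degrees
```

The clamp guards against the mouse wandering past the sphere's radius, where acos would otherwise receive a value greater than 1.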
Flash and angles
Flash treats degrees a little different than we're used to, so we'll re-calculate them to something more handable:
if (xt < 0 && yt > 0) {
	_root.DgrMouseX += 270;
} else if (xt < 0 && yt <= 0) {
	_root.DgrMouseX += 270;
} else if (xt >= 0 && yt <= 0) {
	_root.DgrMouseX += 90;
} else if (xt >= 0 && yt >= 0) {
	_root.DgrMouseX += 90;
}
This maps any angle to one between 0 and 360 degrees, measured clockwise from 0 at the top.
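For anyone porting this outside Flash: the four branches collapse into a single atan2 call (the "clockwise from the top" reading relies on screen coordinates, where y grows downward). A quick Python check, with function names of our choosing:

```python
import math

def tutorial_angle(xt, yt):
    """Math.atan plus the tutorial's quadrant fix-up (xt must be non-zero)."""
    deg = math.degrees(math.atan(yt / xt))
    return deg + (270 if xt < 0 else 90)

def atan2_angle(xt, yt):
    """Same result in one call: 0 at the top, growing clockwise to 360."""
    return (math.degrees(math.atan2(yt, xt)) + 90) % 360

for p in [(10, 5), (-10, 5), (-10, -5), (10, -5)]:
    print(p, tutorial_angle(*p), atan2_angle(*p))  # the two columns match
```

atan2 also handles xt = 0 gracefully, which the plain atan version cannot.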
On to the whiteness!
Each facet calculates it's own whiteness. The whiteness is done by setting the _alpha of a white facet MC over the background of the object.
Each facet has 2 variables: FacetX (its xy-angle) and FacetZ (its z-angle).
We'll use the Math.cos function so that one side of the object is lit (cos(x)=1) and the other side is not (cos(x)=-1).
function SinMouseX(AngleX) {
return Math.cos(Math.PI/180 * (_root.DgrMouseX - AngleX));
AngleX= FacetX
This function sets the most fierce point at FacetX and the less fierce point at (FacetX - 180) degrees.
Then we set the _alpha. The ratio is 50% so z-angle and xy-angle have an equally strong influence:
function AlphaMouseX(Z,X) {
return ((_root.DgrMouseZ/Z) * _root.ratio + X * (100 - _root.ratio));
Z= FacetZ
X= FacetX
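Putting the two functions together, here is a rough Python transcription. Note our reading is an assumption: we take the X fed into AlphaMouseX to be the cosine factor returned by SinMouseX rather than the raw facet angle, and take ratio = 50 per the "50%" remark above.

```python
import math

RATIO = 50.0   # 50% weighting between z-angle and xy-angle influence

def brightness(light_xy, facet_xy):
    """cos of the angular difference: 1 facing the light, -1 facing away."""
    return math.cos(math.radians(light_xy - facet_xy))

def facet_alpha(light_z, facet_z, light_xy, facet_xy):
    """_alpha of the white facet overlay, per the tutorial's AlphaMouseX."""
    a = (light_z / facet_z) * RATIO + brightness(light_xy, facet_xy) * (100 - RATIO)
    return max(0.0, min(100.0, a))   # clamp to Flash's 0..100 alpha range

print(facet_alpha(90, 90, 0, 0))    # facet faces the light: fully lit
print(facet_alpha(90, 90, 0, 180))  # facet faces away: overlay off
```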
» Level Advanced
Added: 2001-11-06
Rating: 8 Votes: 204
» Author
Mr10 stands for Maarten van de Voorde, Graphic Design student from Holland. Being in his forth year he hopes to be employed by January, but for now he has some sparetime to do some free-lance
assignments while graduating.
» Download
Download the files used in this tutorial.
Download (99 kb)
Similar Searches: algebra, college algebra, introductory algebra, elementary and intermediate algebra, begin algebra, elementary & intermediate algebra, lial, algebra 2 teacher edition, john hornsby,
blitzer, algebra for college student, intermediate algebra 10th edition, algebra and trigonometry fourth edition, tom carson, intermediate account 12 edition, intermediate algebra 11th ed,
intermediate alegbra, 4th edition, elementary & intermediate algebra second edition pearson publisher, intermediate algebra 2nd edition, and intermediate algebra 8th edition
We strive to deliver the best value to our customers and ensure complete satisfaction for all our textbook rentals.
As always, you have access to over 5 million titles. Plus, you can choose from 5 rental periods, so you only pay for what you’ll use. And if you ever run into trouble, our top-notch U.S. based
Customer Service team is ready to help by email, chat or phone.
For all you procrastinators, the Semester Guarantee program lasts through January 11, 2012, so get going!
*It can take up to 24 hours for the extension to appear in your account. **BookRenter reserves the right to terminate this promotion at any time.
With Standard Shipping for the continental U.S., you'll receive your order in 3-7 business days.
Need it faster? Our shipping page details our Express & Express Plus options.
Shipping for rental returns is free. Simply print your prepaid shipping label available from the returns page under My Account. For more information see the How to Return page.
Since launching the first textbook rental site in 2006, BookRenter has never wavered from our mission to make education more affordable for all students. Every day, we focus on delivering students
the best prices, the most flexible options, and the best service on earth. On March 13, 2012 BookRenter.com, Inc. formally changed its name to Rafter, Inc. We are still the same company and the same
people, only our corporate name has changed.
Matrices, Equivalent Conditions
September 27th 2009, 01:11 PM #1
Junior Member
Aug 2009
Matrices, Equivalent Conditions
My problem is this:
State whether the following is true or false, provide a reason.
1. Ax = O has only the trivial solution if and only if Ax = b has a unique solution for every n x 1 column matrix b.
I am leaning towards false. The reason for this lies with a list of equivalent conditions in my book:
If A in an n x n matrix, then the following statements are equivalent:
1. A is invertible
2. Ax = b has a unique solution for every n x 1 column matrix b
3. Ax = O has only the trivial solution
4. A is row equivalent to $I_n$
5. A can be written as a product of elementary matrices.
The reason I am leaning towards false is because the problem makes it seem that part one of it (Ax = O ...) can only be true if part two (Ax = b...) is. Perhaps I am misinterpreting however.
Thanks for input.
September 27th 2009, 10:05 PM #2
MHF Contributor
May 2008
it's true because $A\bold{x} = \bold{o}$ has only the trivial solution $\bold{x}=\bold{0}$ if and only if $A$ is invertible, and so $\bold{x}=A^{-1}\bold{b}$ is the unique solution of $A\bold{x}=\bold{b}$.
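A concrete sanity check of the equivalence, using numpy and a hypothetical 3x3 example: when A is invertible, Ax = b has exactly one solution and Ax = 0 only the trivial one; when A is singular, a nontrivial null vector exists, so both conditions fail together.

```python
import numpy as np

# An invertible A: Ax = b has exactly one solution, Ax = 0 only x = 0.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.linalg.solve(A, b)          # the unique solution A^{-1} b
print(np.allclose(A @ x, b))       # True

# A singular S: Sx = 0 now has nontrivial solutions, so Sx = b cannot
# have a unique solution for every b.
S = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])    # third row = first + second

_, sv, Vt = np.linalg.svd(S)
n = Vt[-1]                         # unit vector with S @ n very near 0
print(np.allclose(S @ n, 0.0))     # True: a nontrivial solution of Sx = 0
```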
On the unity of duality
- In Dynamic Languages Symposium (DLS), 2007
"... Types are the central organizing principle of the theory of programming languages. Language features are manifestations of type structure. The syntax of a language is governed by the constructs
that define its types, and its semantics is determined by the interactions among those constructs. The sou ..."
Cited by 11 (4 self)
Types are the central organizing principle of the theory of programming languages. Language features are manifestations of type structure. The syntax of a language is governed by the constructs that
define its types, and its semantics is determined by the interactions among those constructs. The soundness of a language design—the absence of ill-defined programs— follows naturally. The purpose of
this book is to explain this remark. A variety of programming language features are analyzed in the unifying framework of type theory. A language feature is defined by its statics, the rules
governing the use of the feature in a program, and its dynamics, the rules defining how programs using this feature are to be executed. The concept of safety emerges as the coherence of the statics
and the dynamics of a language. In this way we establish a foundation for the study of programming languages. But why these particular methods? Though it would require a book in itself to
substantiate this assertion, the type-theoretic approach
- SUBMITTED TO POPL ’09 , 2008
"... We define a dependent programming language in which programmers can define and compute with domain-specific logics, such as an access-control logic that statically prevents unauthorized access
to controlled resources. Our language permits programmers to define logics using the LF logical framework, ..."
Cited by 6 (3 self)
We define a dependent programming language in which programmers can define and compute with domain-specific logics, such as an access-control logic that statically prevents unauthorized access to
controlled resources. Our language permits programmers to define logics using the LF logical framework, whose notion of binding and scope facilitates the representation of the consequence relation of
a logic, and to compute with logics by writing functional programs over LF terms. These functional programs can be used to compute values at run-time, and also to compute types at compiletime. In
previous work, we studied a simply-typed framework for representing and computing with variable binding [LICS 2008]. In this paper, we generalize our previous type theory to account for dependently
typed inference rules, which are necessary to adequately represent domain-specific logics, and we present examples of using our type theory for certified software and mechanized metatheory.
- SUBMITTED TO PLPV ’09 , 2008
"... This paper is part of a line of work on using the logical techniques of polarity and focusing to design a dependent programming language, with particular emphasis on programming with deductive
systems such as programming languages and proof theories. Polarity emphasizes the distinction between posit ..."
Cited by 4 (0 self)
This paper is part of a line of work on using the logical techniques of polarity and focusing to design a dependent programming language, with particular emphasis on programming with deductive
systems such as programming languages and proof theories. Polarity emphasizes the distinction between positive types, which classify data, and negative types, which classify computation. In previous
work, we showed how to use Zeilberger’s higher-order formulation of focusing to integrate a positive function space for representing variable binding, an essential tool for specifying logical
systems, with a standard negative computational function space. However, our previous work considers only a simply-typed language. The central technical contribution of the present paper is to extend
higher-order focusing with a form of dependency that we call positively dependent types: We allow dependency on positive data, but not negative computation, and we present the syntax of dependent
pair and function types using an iterated inductive definition, mapping positive data to types, which gives an account of type-level computation. We construct our language inside the dependently
typed programming language Agda 2, making essential use of coinductive types and induction-recursion.
- In: ACM SIGPLAN-SIGACT Workshop on Programming Languages Meets Program Verification , 2009
"... One lesson learned painfully over the past twenty years is the perilous interaction of Curry-style typing with evaluation order and side-effects. This led eventually to the value restriction on
polymorphism in ML, as well as, more recently, to similar artifacts in type systems for ML with intersecti ..."
Cited by 2 (1 self)
One lesson learned painfully over the past twenty years is the perilous interaction of Curry-style typing with evaluation order and side-effects. This led eventually to the value restriction on
polymorphism in ML, as well as, more recently, to similar artifacts in type systems for ML with intersection and union refinement types. For example, some of the traditional subtyping laws for unions
and intersections are unsound in the presence of effects, while union-elimination requires an evaluation context restriction in addition to the value restriction on intersection-introduction. Our aim
is to show that rather than being ad hoc artifacts, phenomena such as the value and evaluation context restrictions arise naturally in type systems for effectful languages, out of principles of
duality. Beginning with a review of recent work on the Curry-Howard interpretation of focusing proofs as pattern-matching programs,
, 2011
"... Focusing, introduced by Jean-Marc Andreoli in the context of classical linear logic, defines a normal form for sequent calculus derivations that cuts down on the number of possible derivations
by eagerly applying invertible rules and grouping sequences of non-invertible rules. A focused sequent calc ..."
Cited by 1 (1 self)
Focusing, introduced by Jean-Marc Andreoli in the context of classical linear logic, defines a normal form for sequent calculus derivations that cuts down on the number of possible derivations by
eagerly applying invertible rules and grouping sequences of non-invertible rules. A focused sequent calculus is defined relative to some non-focused sequent calculus; focalization is the property
that every non-focused derivation can be transformed into a focused derivation. In this paper, we present a focused sequent calculus for polarized propositional intuitionistic logic and prove the
focalization property relative to a standard presentation of propositional intuitionistic logic. Compared to existing approaches, the proof is quite concise, depending only on the internal soundness
and completeness of the focused logic. In turn, both of these properties can be established (and mechanically verified) by structural induction in the style of Pfenning’s structural cut elimination
without the need for any tedious and repetitious invertibility lemmas. The proof of cut admissibility for the focused system, which establishes internal soundness, is not particularly novel. The
proof of identity expansion, which establishes internal completeness, is the principal contribution of this work.
"... strategies in the sequent calculus ..."
"... Abstract. In previous work, the author gave a higher-order analysis of focusing proofs (in the sense of Andreoli’s search strategy), with a role for infinitary rules very similar in structure to
Buchholz’s Ω-rule. Among other benefits, this “pattern-based ” description of focusing simplifies the cut ..."
Add to MetaCart
Abstract. In previous work, the author gave a higher-order analysis of focusing proofs (in the sense of Andreoli's search strategy), with a role for infinitary rules very similar in structure to Buchholz's Ω-rule. Among other benefits, this "pattern-based" description of focusing simplifies the cut-elimination procedure, allowing cuts to be eliminated in a connective-generic way. However, interpreted literally, it is problematic as a representation technique for proofs, because of the difficulty of inspecting and/or exhaustively searching over these infinite objects. In the spirit of infinitary proof theory, this paper explores a view of pattern-based focusing proofs as façons de parler, describing how to compile them down to first-order derivations through defunctionalization, Reynolds' program transformation. Our main result is a representation of pattern-based focusing in the Twelf logical framework, whose core type theory is too weak to directly encode infinitary rules—although this weakness directly enables so-called "higher-order abstract syntax" encodings. By applying the systematic defunctionalization transform, not only do we retain the benefits of the higher-order focusing analysis, but we can also take advantage of HOAS within Twelf, ultimately arriving at a proof representation with surprisingly little bureaucracy.
"... 3.0 United States License. To view a copy of this license, visit ..."
"... The duality of computation under focus ..."
2010
"... Abstract We develop a polarised variant of Curien and Herbelin’s ¯ λµ˜µ calculus suitable for sequent calculi that admit a focalising cut elimination (i.e. whose proofs are focalised when
cut-free), such as Girard’s classical logic LC or linear logic. This gives a setting in which Krivine’s classica ..."
Add to MetaCart
Abstract We develop a polarised variant of Curien and Herbelin's λ̄μμ̃ calculus suitable for sequent calculi that admit a focalising cut elimination (i.e. whose proofs are focalised when cut-free), such as Girard's classical logic LC or linear logic. This gives a setting in which Krivine's classical realisability extends naturally (in particular to call-by-value), with a presentation in terms of orthogonality. We give examples of applications to the theory of programming languages. In this version extended with appendices, we in particular give the two-sided formulation of classical logic with the involutive classical negation. We also show that there is, in classical realisability, a notion of internal completeness | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=4785198","timestamp":"2014-04-20T20:01:38Z","content_type":null,"content_length":"36724","record_id":"<urn:uuid:afc175a6-1852-4389-9eda-4441d0b841b6>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
Newcastle, WA Trigonometry Tutor
Find a Newcastle, WA Trigonometry Tutor
...I am uniquely qualified to tutor trigonometry, with a PhD in Aeronautical and Astronautical Engineering from the University of Washington and more than 40 years of project experience in
science and engineering. The coursework for my Ph.D. included an extensive amount of mathematics, including ca...
21 Subjects: including trigonometry, chemistry, English, calculus
...With my years of tutoring experience, I've helped many students improve their math scores. If you are not sure about something or can't solve a math problem, I can simplify it so that you can
understand it well and solve the problem all by yourself. I enjoy tutoring math.
13 Subjects: including trigonometry, geometry, Chinese, algebra 1
...Schedule: My current schedule (as of March 2014) is fairly competitive, and any scheduled sessions should be made a week in advance. Cancellations should be made 6 hours in advance. Any later can result in a charge of a late-cancellation fee. I have been using Windows since Windows 95 when I was 6.
17 Subjects: including trigonometry, chemistry, physics, calculus
...I have tutored high school level Algebra II for both Public and Private School courses. I also volunteer my time in the Seattle area assisting at-risk students on their mathematics homework.
As an aspiring physician I spent great amounts of time thoroughly studying Biology during my undergradua...
27 Subjects: including trigonometry, chemistry, reading, writing
...I trained for a year as a yoga instructor at the oldest institute for yoga training in Mumbai - The Yoga Institute (Santa Cruz). I worked with children and adults, showing them how to
incorporate yoga into their daily routines. I also addressed specific ailment cures through yoga at camps held a...
16 Subjects: including trigonometry, geometry, algebra 1, algebra 2
Related Newcastle, WA Tutors
Newcastle, WA Accounting Tutors
Newcastle, WA ACT Tutors
Newcastle, WA Algebra Tutors
Newcastle, WA Algebra 2 Tutors
Newcastle, WA Calculus Tutors
Newcastle, WA Geometry Tutors
Newcastle, WA Math Tutors
Newcastle, WA Prealgebra Tutors
Newcastle, WA Precalculus Tutors
Newcastle, WA SAT Tutors
Newcastle, WA SAT Math Tutors
Newcastle, WA Science Tutors
Newcastle, WA Statistics Tutors
Newcastle, WA Trigonometry Tutors
Nearby Cities With trigonometry Tutor
Beaux Arts Village, WA trigonometry Tutors
Bellevue, WA trigonometry Tutors
Burien, WA trigonometry Tutors
Des Moines, WA trigonometry Tutors
Hazelwood, WA trigonometry Tutors
Issaquah trigonometry Tutors
Kenmore trigonometry Tutors
Mercer Island trigonometry Tutors
Mill Creek, WA trigonometry Tutors
Newport Hills, WA trigonometry Tutors
Normandy Park, WA trigonometry Tutors
Renton trigonometry Tutors
Sammamish trigonometry Tutors
Seatac, WA trigonometry Tutors
Tukwila, WA trigonometry Tutors | {"url":"http://www.purplemath.com/newcastle_wa_trigonometry_tutors.php","timestamp":"2014-04-19T05:23:07Z","content_type":null,"content_length":"24337","record_id":"<urn:uuid:c0c0b171-8a2f-47e5-bbb7-3fbe1d69c006>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lanham Algebra Tutor
Find a Lanham Algebra Tutor
...Collaboratively and patiently, we can identify, utilize, and capitalize on critical elements of study skills in order to show that you know the material. Start with me now to check-off the
first step to overcoming the barriers of academic frustration and anxiety and the disappointment associated...
64 Subjects: including algebra 1, English, algebra 2, chemistry
...I am able to tutor young children in most areas of mathematics. I consider myself to be patient. Some students are intimidated by math, and most times I am able to help them get rid of the fear that comes with math.
12 Subjects: including algebra 2, algebra 1, calculus, elementary science
...I minored in Art Studio. I concentrated in black & white media, and I understand perspective as well as art theory as it pertains to Socrates. I currently have 8+ years of experience playing
31 Subjects: including algebra 1, algebra 2, reading, English
...I was a volunteer science facilitator for five years with a non-profit organization based in Philadelphia, teaching students in grades 2-6 various science concepts through experimentation as
well as aiding in the creation and execution of city-wide science fair projects. In addition to teaching ...
14 Subjects: including algebra 2, geometry, algebra 1, reading
...I have a wide experience of working with pupils of many different ages, in small groups or on an individual basis, who have struggled with mathematics. I am able to work at home or in the
pupil's home and can travel up to about 8 miles. I currently work in the Prince George's County School District.
4 Subjects: including algebra 1, algebra 2, prealgebra, elementary math
Nearby Cities With algebra Tutor
Bowie, MD algebra Tutors
Cheverly, MD algebra Tutors
College Park algebra Tutors
Glenarden, MD algebra Tutors
Glenn Dale algebra Tutors
Greenbelt algebra Tutors
Hyattsville algebra Tutors
Landover Hills, MD algebra Tutors
Lanham Seabrook, MD algebra Tutors
New Carrollton, MD algebra Tutors
Riverdale Park, MD algebra Tutors
Riverdale Pk, MD algebra Tutors
Riverdale, MD algebra Tutors
Seabrook, MD algebra Tutors
Takoma Park algebra Tutors | {"url":"http://www.purplemath.com/Lanham_Algebra_tutors.php","timestamp":"2014-04-20T01:59:48Z","content_type":null,"content_length":"23662","record_id":"<urn:uuid:b9f19e7e-69ce-4970-ab7b-6549e1606d29>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00442-ip-10-147-4-33.ec2.internal.warc.gz"} |
Where have the .400 hitters gone?
This is a question that comes up from time to time among baseball fans who lament that the giants of yesteryear (Ty Cobb, Rogers Hornsby, and Ted Williams among them) are long gone and that we'll never see their kind again.
If one doesn't accept the premise that these immortals were truly superior to modern stars such as George Brett, Tony Gwynn, and Ichiro Suzuki, what accounts for the disappearance of the .400 hitter?
Some of the factors discussed in recent years that might put modern hitters at a disadvantage include changes in the game itself: the increase of pitching specialists, the development of the slider and split-finger fastball, the prevalence of night games, and transcontinental travel. But do any of these hold water?
Recently, this topic came up again on the SABR listserv in the context of sabermetric references made by the late Harvard paleontologist and baseball fan Stephen Jay Gould in his book Triumph and
Tragedy in Mudville: A Lifelong Passion for Baseball. The discussion revolved around an essay that Gould wrote for Discover magazine in 1986 and that was reprinted both in his 1996 book Full House
and Triumph and Tragedy under the title "Why No One Hits .400 Any More?"
In epitome, Gould's argument was that .400 hitters haven't disappeared because of cosmetic changes in the game, nor because the heroes of the past were supermen, but rather as the natural consequence of an increasing level of play that comes closer to the "right wall" of human ability, coupled with stabilization of the game itself. These factors tend to decrease the differences between average and stellar performers. As a result, although the mean batting average has remained roughly .260 since the 1940s, there are now fewer players at both the left and right ends of the spectrum. In other words, "variation in batting averages must decrease as improving play eliminates the rough edges that great players could exploit, and as average performance moves towards the limits of human possibility and compresses great players into an ever decreasing space between average play and the immovable right wall."
To support his argument Gould (actually his research assistant) calculated the standard deviation (a measure of the spread of values about the mean) of batting averages over time, plotted them in a graph, and presented the following table, which also shows the coefficient of variation (the standard deviation divided by the mean, useful for comparing distributions with different means):
Decade Stdev Coeff
1870s .0496 19.25
1880s .0460 18.45
1890s .0436 15.60
1900s .0386 14.97
1910s .0371 13.97
1920s .0374 12.70
1930s .0340 12.00
1940s .0326 12.23
1950s .0325 12.25
1960s .0316 12.31
1970s .0317 12.13
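The two statistics behind these tables are simple to compute. A quick Python sketch (the averages below are invented for illustration, not actual Lahman data):

```python
import statistics

def dispersion(averages):
    """Return (standard deviation, coefficient of variation) for one
    decade's qualifying batting averages.

    The coefficient of variation is the standard deviation divided by
    the mean (here expressed as a percentage), which makes decades with
    different mean batting averages comparable."""
    mean = statistics.mean(averages)
    stdev = statistics.pstdev(averages)  # population stdev over all seasons
    return stdev, 100.0 * stdev / mean

# A hypothetical decade of qualifying averages
decade = [0.310, 0.275, 0.242, 0.298, 0.260, 0.330, 0.251, 0.286]
sd, cv = dispersion(decade)
```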
This certainly shows a trend towards decreasing variability over time. Gould's conclusion was that this decreasing variability was due to refinements in the game (standardized techniques for pitching starting in the 1880s, the introduction of gloves, stabilization of the number of balls and strikes, and refinement of strategies) coupled with the entire system moving farther towards the limits of human ability, in much the same way that track athletes move closer to that wall with each Olympics, thereby decreasing the variation in sprint times. However, since Gould's data was published almost 20 years ago, I decided to take a look and see if I could reproduce Gould's data and add data for the last two-plus decades.
To do so I used the Lahman database and calculated the league batting average and OPS for the 250 league seasons included in the database. I then selected the 74,277 seasons where a player batted at least once, calculating their batting average, OPS, and plate appearances along with their respective league averages. Finally, I selected all of the players with more than 2 at bats per game (relative to the league schedule), which pared the list down to 18,104 seasons. From these I calculated the standard deviation (using the league average) and coefficient of variation by decade and produced the following table.
Decade Seasons Stdev Coeff
1870s 587 .0508 18.60
1880s 1189 .0423 16.85
1890s 1004 .0402 14.57
1900s 1110 .0373 14.68
1910s 1220 .0372 14.56
1920s 1194 .0369 12.93
1930s 1233 .0349 12.53
1940s 1149 .0329 12.64
1950s 1145 .0334 12.88
1960s 1448 .0319 12.83
1970s 1867 .0316 12.33
1980s 1970 .0299 11.54
1990s 2103 .0311 11.73
2000s 885 .0310 11.71
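For concreteness, the qualifying-season filter described above can be sketched in Python (the tuple layout here is illustrative, not the actual Lahman schema):

```python
def qualifying_seasons(seasons, league_games):
    """Keep player-seasons with more than two at bats per scheduled
    league game, the cutoff used in this study.

    `seasons` is a list of (at_bats, hits) tuples; `league_games` is
    the length of that league-season's schedule."""
    return [(ab, h) for ab, h in seasons if ab > 2 * league_games]

# A 154-game schedule requires more than 308 at bats to qualify
sample = [(550, 170), (309, 80), (120, 30), (400, 128)]
kept = qualifying_seasons(sample, 154)
```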
As you can see I wasn't able to recreate Gould's results precisely for some reason but got very close in several decades including the 1970s (.317 to .316) and the 1910s (.371 to .372). Overall there
appears to be less spread in my data in the early years and more in the later years, again for unknown reasons. I've tried several different cutoffs (200 at bats, 250 at bats, etc.) but none have
come any closer to reproducing Gould's numbers. It is also possible that Gould used a different dataset that was less complete for the years prior to 1900. The Lahman database includes the National
Association for 1871-1875 and the American Association for 1882-1891, the Player's League for 1890 and the Federal League for 1914-1915. Adding more player seasons will tend to decrease the standard
deviation. Overall though, the trend towards lower standard deviations seems to continue with the addition of the 1980s through 2003 as the three lowest standard deviations and coefficients of
variation are for those three decades.
I then produced the following scatter plot that shows the standard deviations for each year.
Intuitively, it seems as if Gould's argument holds. However, several suggestions and points of discussion that were brought out on the SABR-L list included:
• The analysis shows that standard deviations have fallen over time but much less so since the 1940s. Many then agreed that the stabilization of the game had occurred by the 1940s.
• In order to test Gould's hypothesis some argued that higher standard deviations should be found for expansion years since the talent pool expands letting in more players, some of whom would not
have been in the major leagues the previous year. When looking at the years 1901, 1961, 1962, 1969, 1977, and 1993 there is no evidence that the standard deviations were greater in these years.
The reason this study doesn't find that result may be because when looking at players with 2 at bats a game or more in an expansion year you're really looking at players who were already bona fide major leaguers but who simply didn't get the at bats before expansion. When lowering the cutoff to 50 at bats you do see small increases in the stdev in 1901, 1962, 1969, 1977, and 1993.
• The league average has not been consistently .260 and so some argued that in order to perform the calculation you need to standardize the averages. I reran the numbers computing the average as
AVG/lgAVG*0.260 and produced the following table. These results are very similar to the first set although the variation appears more consistent from the 1920s through the 1970s before dropping
in the 1980s and later:
Decade # Stdev
1870s 587 .0483
1880s 1189 .0439
1890s 1004 .0379
1900s 1110 .0381
1910s 1220 .0379
1920s 1194 .0336
1930s 1233 .0325
1940s 1149 .0328
1950s 1145 .0335
1960s 1448 .0334
1970s 1867 .0321
1980s 1970 .0300
1990s 2103 .0305
2000s 885 .0305
• To be more precise some argued that you should weight the averages by the number of at bats. However, since we're already selecting those who garnered significant playing time with greater than 2
at bats per game (and I'm too lazy to do the weighted calculation) I doubt weighting would change the results very much.
• Perhaps the disappearance of the .400 hitter has more to do with an emphasis on power over average since the 1920s as some speculated. In other words, hitters are knowingly sacrificing average
for power in the modern era. I have no doubt that generally this is true as both the increase in strikeouts and the increase in the diversity of skills of batting champions shows. I'm just not
sure that there hasn't always been a substantial population of players who have focused on average and who would test the limits of singles hitting. In addition, major league baseball and the
general public still hold batting average in high regard and so players are still rewarded for high averages over on base percentage.
• Others contended that the stdev of other measures such as OPS and SLUG have increased over time or held steady and so would put a hole in Gould's hypothesis. I don't think that's the case since
clearly the increase in slugging percentage (for example after 1920) doesn't reflect on the talent level but rather on many players adopting a different style of play, coupled with rule changes, which would naturally increase the standard deviation. On a side note, some mentioned that Gould's analysis was fatally flawed since it takes into consideration only batting average, a dubious, although ubiquitous, measure of offensive value. I wouldn't disagree if the discussion were about pure offensive value. The question, however, is what happened to .400 hitters, which by definition looks at batting average.
• Others argued that in order to test Gould's hypothesis you should really be looking at the percentage of players several standard deviations above the
population are not a normal distribution but rather the right hand tail of the distribution (since players much below the league average won't get enough at bats while players above the league
average will). I performed this calculation selecting only those players who had a batting average greater than league average plus 2.5 times the standard deviation. The percentage of players per
decade is shown below. Interestingly, one would expect a higher percentage of players in the early years although this is not the case. The percentage begins to drop only in the 1960s. Why this
is the case I don't know.
Decade %Players #Players
1870s 0.0153 26
1880s 0.0162 86
1890s 0.0127 112
1900s 0.0172 153
1910s 0.0181 248
1920s 0.0167 226
1930s 0.0161 166
1940s 0.0174 261
1950s 0.0177 236
1960s 0.0138 322
1970s 0.0110 311
1980s 0.0108 328
1990s 0.0097 407
2000s 0.0091 198
• Finally, another way to look at this problem as pointed out by Bill James is to calculate the difference in standard deviations from .400 for the various decades. To do this I took the players
with greater than 2 at bats per game and calculated their average batting average by decade. Then I subtracted the average from .400 and divided by the standard deviation for the decade to figure
out how many standard deviations the average player in the study was away from .400 during that time period. This analysis showed that indeed players in the 1920s who qualified hit .300 with a
league standard deviation of .0369, which puts them only 2.7 standard deviations away from .400. By contrast, in the 1990s the players that qualified hit .276 with a stdev of .0311, putting them
3.98 standard deviations away from .400. The higher average combined with higher standard deviations made it statistically more likely that someone would hit .400 as evidenced by the number of
.400 hitters per decade (7 in the 1920s, 0 in the 1990s). It appears from this that both factors, the higher relative averages and the increased standard deviations, were in play to account for
the prevalence of .400 hitters. What this also shows is that since league averages have risen in the past 11 years the odds are now slightly better that someone will hit .400 than they were in
the 1970s and 80s (although not as high as in the 1930s through 1950s).
Decade Seasons .400 hitters AVG Stdev Stdevs from .400
1870s 587 9 0.275 0.0505 2.466
1880s 1189 3 0.261 0.0423 3.283
1890s 1004 11 0.286 0.0402 2.837
1900s 1110 1 0.268 0.0373 3.544
1910s 1220 3 0.272 0.0372 3.445
1920s 1194 7 0.300 0.0369 2.705
1930s 1233 1 0.294 0.0349 3.050
1940s 1149 1 0.275 0.0329 3.800
1950s 1145 0 0.275 0.0334 3.733
1960s 1448 0 0.265 0.0319 4.218
1970s 1867 0 0.269 0.0316 4.138
1980s 1970 0 0.269 0.0299 4.369
1990s 2103 0 0.276 0.0311 3.983
2000s 885 0 0.278 0.0310 3.941
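James's yardstick is just a z-score, and it is easy to attach a rough probability to it. A Python sketch, with a normal-tail probability added purely for illustration (qualifying averages are only approximately normal):

```python
import math

def stdevs_from_400(mean_avg, stdev):
    """How many standard deviations the average qualifying hitter
    sits below a .400 batting average."""
    return (0.400 - mean_avg) / stdev

def normal_tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

z_1920s = stdevs_from_400(0.300, 0.0369)  # roughly 2.71, as in the table
z_1990s = stdevs_from_400(0.276, 0.0311)  # roughly 3.99
# The 1920s tail probability is about two orders of magnitude larger,
# consistent with seven .400 seasons then and none in the 1990s.
```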
So in the final analysis what does it all mean? My tentative conclusions are:
• Based on intuition and observation Gould was certainly correct that baseball players are better than they were in the past and that the game is better played now than ever before. Analogs with
basketball and track make this apparent.
• Gould was partially correct that decreasing standard deviations do in fact record an increasing and standardized level of play. However, decreased league batting averages also played a role in
the disappearance of the .400 hitter.
For other analyses on this question see this article and this one.
| {"url":"http://danagonistes.blogspot.com/2004/08/where-have-400-hitters-gone.html","timestamp":"2014-04-18T05:31:31Z","content_type":null,"content_length":"119446","record_id":"<urn:uuid:20f66297-9e2a-4c58-b80e-02a88b64a67b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Assessing the efficacy of molecularly targeted agents on cell line-based platforms by using system identification
BMC Genomics. 2012; 13(Suppl 6): S11.
Molecularly targeted agents (MTAs) are increasingly used for cancer treatment, the goal being to improve the efficacy and selectivity of cancer treatment by developing agents that block the growth of
cancer cells by interfering with specific targeted molecules needed for carcinogenesis and tumor growth. This approach differs from traditional cytotoxic anticancer drugs. The lack of specificity of
cytotoxic drugs allows a relatively straightforward approach in preclinical and clinical studies, where the optimal dose has usually been defined as the "maximum tolerated dose" (MTD). This
toxicity-based dosing approach is founded on the assumption that the therapeutic anticancer effect and toxic effects of the drug increase in parallel as the dose is escalated. On the contrary, most
MTAs are expected to be more selective and less toxic than cytotoxic drugs. Consequently, the maximum therapeutic effect may be achieved at a "biologically effective dose" (BED) well below the MTD.
Hence, dosing studies for MTAs should be different from those for cytotoxic drugs. Enhanced efforts to molecularly characterize the drug efficacy of MTAs in preclinical models will be valuable for successfully
designing dosing regimens for clinical trials.
A novel preclinical model combining experimental methods and theoretical analysis is proposed to investigate the mechanism of action and identify pharmacodynamic characteristics of the drug. Instead of relating drug exposure to drug effect at a fixed time point, the time course of drug effect at different doses is quantitatively studied on cell line-based platforms using system identification, with tumor cells' responses to drugs sampled over time through fluorescent reporters. Results show that drug effect is time-varying and that higher dosages induce faster and stronger responses, as expected. However, drug efficacy does not change linearly with dose; rather, there exist certain thresholds. This kind of preclinical study can provide valuable suggestions about dosing regimens for the in vivo experimental stage and so increase productivity.
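As a toy illustration of what system identification means here (this is not the model used in the study), a first-order step response E(t) = E_ss * (1 - exp(-t/tau)) can be fitted to sampled reporter readouts, with tau capturing how fast and E_ss how strongly the cells respond; the function name and sampling times below are assumptions made for the sketch:

```python
import math

def fit_first_order(times, readouts):
    """Estimate (E_ss, tau) for E(t) = E_ss * (1 - exp(-t / tau)) from
    low-noise samples, assuming the last sample is near steady state.

    Uses a log-linear fit: log(1 - E / E_ss) = -t / tau."""
    e_ss = readouts[-1]
    xs, ys = [], []
    for t, e in zip(times, readouts):
        frac = 1.0 - e / e_ss
        if frac > 1e-9:  # skip points already at steady state
            xs.append(t)
            ys.append(math.log(frac))
    # least-squares slope through the origin: slope = sum(x*y) / sum(x*x)
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return e_ss, -1.0 / slope

# Synthetic time course (hours) with tau = 4 and steady-state effect 0.8
times = [1, 2, 3, 4, 6, 8, 40]
data = [0.8 * (1 - math.exp(-t / 4.0)) for t in times]
e_ss, tau = fit_first_order(times, data)
```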
Drug development is currently an expensive and prolonged process with a high attrition rate. The rate of new drug approvals in the U.S. has remained essentially constant since 1950, while the costs of
drug development have soared [1]. Industry analysts estimate that it takes $1 billion to $4 billion in R&D and 10-15 years for every new drug brought to market [1-3]. In aggregate, the industrial
average rate of attrition measured from first trials in humans to registration seems to be locked at ~85-90% [4,5]. The situation in oncology drug development is even worse [3,6,7]. By contrast, the
overall clinical success rate for new anticancer agents (~5%) is much lower than in other therapeutic areas (e.g. the success rate for cardiovascular diseases is ~20%) [8]. As a result, the American Cancer
Society's 2005 statistical report shows that cancer is now the leading cause of death for Americans under age 85 [9]. One common explanation for the recent shrinking of oncology drug pipelines is
that discovery is moving into more complex areas of human health [10,11], such as cancer, which is more likely to result from the interaction of several different genes/pathways [12,13]. The
conundrum confronting the cancer research community is twofold: first, the pharmaceutical industry is facing difficult times owing to low productivity and spiraling costs [4]; second, on the consumer front, patients await better treatments and cancer drugs are an unaffordable luxury for many consumers [14]. To move ahead, scientists realize that they need some fresh thinking in basic,
translational and clinical research [15] to improve R&D productivity and reduce attrition rates, and such efforts calls for joint collaboration from different disciplines [5,16-20].
The focus of anticancer drug development in recent years has shifted from cytotoxic drugs to targeted therapy [16,19,21-23]. The goal of this target-based approach is to improve the efficacy and
selectivity of cancer treatment by developing agents that block the growth of cancer cells by interfering with specific targeted molecules needed for carcinogenesis and tumor growth [21,22]. This
approach is different from traditional cytotoxic anticancer drugs, where most compounds are targeted against molecules required for the maintenance of structural and genetic integrity of rapidly
dividing cells. However, despite advances in understanding of the molecular mechanisms of cancer, the promise of targeted cancer therapy remains largely unfulfilled [8,24], with only a few well-known
examples, such as imatinib [25] and trastuzumab [26], currently approved [27]. Many promising candidates prove ineffective or toxic owing to a poor understanding of the molecular mechanisms of
biological systems they target. Different reasons have been proposed to explain this limited effectiveness of anticancer drug development, including insufficient translational research and lack of
adequate preclinical models that recapitulate disease complexity and molecular heterogeneity [8,16,28,29]. Ideally, preclinical models should validate the target, provide information about the
mechanism of action of the drug, and identify pharmacodynamic markers of activity. Once the target and mechanism of action have been identified using in vitro models, experiments should be undertaken
to ensure that inhibition of the target can be achieved at tolerated doses in vivo and to identify possible biomarkers of response. Improved preclinical evaluation of compounds has the potential to
augment the detection of activity and toxicity, and to reduce the high attrition rate.
While the lack of specificity of the traditional cytotoxic anticancer agents allows a relatively straightforward, well-established approach, developing a paradigm to better analyze the efficacy of
molecularly targeted agents (MTAs) is substantially more complex [18,22,30-32]. Many targets are involved in cell signaling pathways, which are most often not linear, but connected and redundant [33
]. Control strategies typically involve a higher multiplicity of inputs and a multiple layer of feedback [34]. As a result, strategies traditionally applied to the development of cytotoxic drugs may
not be appropriate for MTAs [32]. Current treatment plan and efficacy evaluations are usually designed empirically for MTAs, without adequate knowledge of the optimal dose and the appropriate
schedule [32]. A novel preclinical model combining experimental methods and theoretical analysis is proposed in this study to investigate the mechanism of action and identify pharmacodynamic
characteristics of the drug. It is expected that through such preclinical study, valuable suggestions about dosing regimens could be furnished for the in vivo experimental stage to increase
productivity. We consider several challenges for MTA dosing.
Firstly, the optimal dose has usually been defined as the "maximum tolerated dose" (MTD) for conventional cytotoxic anticancer drugs rather than the dose that produces a quantifiable therapeutic
effect. This toxicity-based dosing approach is founded on the assumption that the therapeutic anticancer effect and toxic effects of the drug increase in parallel as the dose is escalated [22]. Such
an assumption is sound if the mechanisms of action of the toxic and therapeutic effects are the same, as is often the case with cytotoxic agents. However, most MTAs are expected to be more selective
and less toxic than conventional cytotoxic drugs [23]. As a result, the maximum therapeutic effect may be achieved at a dose, defined as the "biologically effective dose" (BED), which could be
substantially lower than the traditionally established MTD, as discussed by Johnston [31]. A hypothetical dose-effect curve is shown in Figure 1. In addition, the toxic effect may not parallel the therapeutic effect and may not be predictive of the therapeutic effect [22]. Hence, the dosing study for MTAs should be based on both drug efficacy and toxicity considerations. Enhanced
efforts to molecularly characterize the drug efficacy for MTAs in preclinical models will be valuable for successfully estimating the BED for clinical trials.
A hypothetical dose-effect curve for targeted therapy.
Secondly, the pharmacodynamics (PD) of drugs have been extensively investigated in vitro and in vivo; however, most analyses have reported the relationship of drug exposure to drug effect at a fixed
time point. When drug effect is examined at a fixed time point, the drug concentration-effect relationship can be characterized through well established models, such as the Hill equation [35], also
called the sigmoidal E[max] model [36]. However, characterization of the entire time course of drug effect may provide additional information [37]. For example, it may help to design the optimal
schedule for drug administration.
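The fixed-time-point concentration-effect relationship described above can be sketched directly with the Hill equation. The parameter values below are purely illustrative and are not fitted to any data in this study:

```python
import numpy as np

def hill_effect(conc, e_max, ec50, n):
    """Hill equation (sigmoidal E_max model): drug effect at a fixed
    time point as a function of drug concentration.
    e_max: maximal effect; ec50: concentration producing half-maximal
    effect; n: Hill coefficient (steepness of the sigmoid)."""
    c = np.asarray(conc, dtype=float)
    return e_max * c**n / (ec50**n + c**n)

# Illustrative values only (not taken from the experiments in this study):
effects = hill_effect([0.0, 2.0, 32.0], e_max=1.0, ec50=2.0, n=1.5)
```

A quick sanity check when fitting such a model: at c = ec50 the effect equals e_max/2 regardless of the Hill coefficient.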
Thirdly, traditional design of the dosing regimen to achieve some desired target goal such as relatively constant serum concentration may not be optimal because MTA targets mostly sit in interacting
complex dynamical regulatory networks, and such complex target contexts pose significant challenges for assessing the mechanisms of action of MTAs [30]. For example, Shah and co-workers [38] demonstrated that the BCR-ABL inhibitor dasatinib, which has greater potency and a short half-life, can achieve deep clinical remission in CML patients through transient potent BCR-ABL inhibition, whereas traditionally approved tyrosine kinase inhibitors usually have prolonged half-lives, resulting in continuous target inhibition. A similar study of whether short pulses at a higher dose or persistent dosing at lower doses produce the most favorable outcomes has been carried out by Amin and co-workers in the setting of inactivation of HER2-HER3 signaling [39].
In sum, it is difficult and expensive to optimize dosing regimens using strictly empirical methods for MTAs. A novel preclinical model combining experimental methods and theoretical analysis is
proposed in this study to investigate the mechanisms of action and identify the pharmacodynamic characteristics of MTAs. As a first step, the time courses of drug effect for different doses are
quantitatively studied on cell line-based platforms using system identification, where a tumor cell's response to investigational drugs through the use of fluorescent reporters is sampled frequently
over a time course. A dynamic model is proposed to study the time course of drug efficacy for MTAs and then the experimental data are analyzed by our proposed model using a Kalman filter. Through
such preclinical study, valuable suggestions about dosing regimens may be furnished for the in vivo experimental stage to increase productivity.
The proposed approach integrates experiment and theory to investigate regulatory process dynamics by combining multiple complementary disciplines: (i) fluorescent reporters from molecular technology are used to study cells' transcriptional activities under drug perturbation; (ii) these activities are captured by an automated epifluorescent microscope over a time course; and (iii) the resulting data are processed by large-scale image analysis for dynamic study. A truly multi-dimensional dynamics of tumor cell response to drugs can be characterized through systematic perturbations testing different combinations of cell types, reporters, and drugs/dosages, augmented by iterative systematic theoretical analysis. This methodology differs from high-throughput techniques like RNA expression profiling with microarrays, which provide a snapshot of one aspect of the system at a single time point.
Experimental methodology
Understanding cell response to a drug requires experimental designs that ask very specific questions about what is happening in a cell in the absence of a drug and how the cell activities change when
the drug is present. The objective of the experimental protocol is to efficiently capture cell process dynamics in response to drugs and thereby obtain a deeper understanding of the genetic
regulatory mechanisms, the point being to make preclinical research more predictive. Fluorescent reporters have long been used in molecular technology to study cells' transcriptional activities or
the cellular localization of components, either in a population of cells or a single cell [40-42]. In this study, we track the transcriptional activities of particular genes. A fluorescent reporter
to serve this purpose can be constructed by fusing the promoter region of a gene of interest with the coding sequence of a fluorescent protein, most commonly a green fluorescent protein (GFP). By
delivering a single cassette bearing the promoter/GFP reporter into the genome of each cell in a population of cells, any change in the expression levels of the native coding sequence driven by that
promoter will be reflected in the transcriptional activity of the cassette. This allows the estimation of the total fluorescence of the reporter in the cell, captured by imaging with an
epifluorescent microscope, which is then used as a relative measure of the transcriptional activity of the native gene. Because this procedure is non-invasive to the cell, it allows tracking of the
same cell population for an extended period of time by imaging the same site repeatedly. The recent introduction of automated digital microscopes allows researchers to use multi-well microtiter
plates and sequentially capture the transcriptional activities in all wells. In our experimental protocol, a single assay is carried out by epifluorescent imaging of a site at the bottom of each well
in a 384 well plate, producing an image of the cells in that region (~200-400 cells) bearing fluorescent reporters. The imaging speed of automated systems easily accommodates sampling an entire 384
well plate at hourly intervals. If needed, the experiment can be extended to multiple plates to cover a wider range of cell types and reporters.
In this experimental set-up, using different wells to test different combinations of cell type, GFP reporter and experimental condition allows this approach to provide a multi-dimensional examination
of the cells' responses to a variety of stimuli. Not only can it follow multiple genes simultaneously, but it can also compare cellular activities under various conditions. Furthermore, it captures
the dynamics of transcriptional regulation. This produces data on ~200-400 individual cells per well that can be analyzed either individually, as a distribution, or in aggregate, as an average. Fluorescent intensity data can be extracted from these images using specialized image analysis tools developed for this application [43]. The image processing procedure includes finding cells, identifying individual cells, and quantifying the fluorescence associated with each cell. The objective is to extract gene expression levels from the fluorescent images and track them over the time course. We approach this goal through morphology-based image processing methods.
Image processing
Typical fluorescent images are shown in Figure 2 (left panels), where nuclei are detected in the blue channel and promoter reporters used to study cells' transcriptional activities are detected in
the green channel. With a 384-well plate there will be at least 384 videos for evaluation and the number can be much higher if the experiment requires multiple plates to cover all experimental
conditions. Visual evaluation is unreliable when one needs to quantitatively compare different conditions and the high-throughput nature of the green fluorescent protein reporter approach calls for a
more automatic and quantitative solution to efficiently extract gene-expression levels from the fluorescent images and track them over the time course.
Time course response to lapatinib by HCT116 with reporter for MKI67: Left panels show 2 typical fluorescent images (nuclei: blue, GFP: green) sampled for the same site in a 48-hour lapatinib
treatment. a) The upper panels show the case before any drug ...
To facilitate automatic processing of the experimental results, the transcriptional levels in the fluorescent images need to be properly extracted, quantized, and saved, and the image processing algorithm should be fast, with a good balance between performance and robustness [43]. An algorithm based on morphological image processing [44], in particular the watershed transformation [45], is currently adopted in our study. Overall, the image processing breaks down into three major components: (i) nuclei channel segmentation, (ii) reporter channel segmentation, and (iii) measurement of cell-by-cell promoter activity levels. Figure 3 shows the segmentation results for a typical fluorescent image pair, where only a portion of the full image is shown to make the segmentation details visible. Once the individual cells are identified, the transcriptional activity represented by the reporter is extracted for every cell by summing the background-subtracted pixel intensities over the whole cell area and taking a log[2] transform before being exported.
Segmentation Results: a) left panel: nuclear channel, where red lines are the identified nuclei boundaries; b) right panel: reporter channel, where green lines are identified cell boundaries, while
the red objects are the nuclei used as markers.
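The per-cell quantification step can be sketched with plain NumPy. The segmentation itself is assumed to have already produced a label mask (0 = background, k = cell k); the function name, background level, and toy arrays below are hypothetical:

```python
import numpy as np

def cell_log2_intensity(gfp, labels, background):
    """For each labeled cell region, sum the background-subtracted GFP
    pixel intensities and return the log2-transformed totals per cell."""
    out = {}
    for cell_id in np.unique(labels):
        if cell_id == 0:              # 0 marks background in the label mask
            continue
        mask = labels == cell_id
        total = np.maximum(gfp[mask] - background, 0.0).sum()
        out[int(cell_id)] = np.log2(total + 1.0)   # +1 guards against log2(0)
    return out

# Toy 4x4 image with two labeled "cells"
gfp = np.array([[10, 10, 0, 0],
                [10, 10, 0, 0],
                [0, 0, 20, 20],
                [0, 0, 20, 20]], dtype=float)
labels = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 0, 2, 2],
                   [0, 0, 2, 2]])
levels = cell_log2_intensity(gfp, labels, background=2.0)
```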
Experimental set-up for the dosing study
The dosing study is carried out on the colon cancer cell line HCT116 with a reporter for the MKI67 gene, a nuclear antigen tightly correlated with proliferation [46,47], measuring responses to lapatinib treatment at 6 dosages (1 to 32 µM). First we infect the HCT116 cell lines with the desired packaged reporter (packaged as lentiviral particles). The cell/reporter pairs are then plated in media containing a live-cell nuclear stain, and the cells are allowed to attach to the plate and grow overnight. Drugs are added to the appropriate wells (6 wells, as biological replicates, for each dosage). In order to remove environmental effects, such as growth factor depletion, there are 6 control wells for each dosage (no drug added, 36 wells in total). We image the plate once an hour for 48 hours to characterize the response of each cell/reporter pair to the drug over time. Note that the fluorescence intensity of cells without an expressed GFP reporter is not zero, since cells contain numerous small molecules that fluoresce at the same wavelengths as GFP when excited with 488 nm light. This defines the minimum fluorescence, which is approximately 2^14. One of the time courses from the experiment (dosage = 8 µM) is shown in Figure 2. The left panels of Figure 2 show two fluorescent images sampled at the same site during a 48-hour lapatinib treatment at the 8 µM dosage, and the right panels show the log[2](GFP) intensity histogram at each time point.
Since MKI67 is turned on during proliferation and off when the cells are not cycling, it is expected to show a binary, switch-like histogram of cell intensities rather than a graded transition. This behavior is observed in Figure 2. We have the readout of the GFP intensity level for each individual cell/dosage pair at 48 time points. These can be compared with a threshold value to determine whether each cell has shifted or not [37,43]. Such a reporter assay allows one to determine the dynamics of drug responses at different dosages. Consequently, we propose a time-varying model for the cell shifting process, in which the drug effect coefficient is assumed to change with time. This is in contrast to many existing approaches, where the drug effect coefficient is treated as a constant and the experiment provides just one reading rather than a time-series characterization.
Mathematical model formulation
The experimental results provide information on the percentage of cells shifted as a consequence of the drug activity. The measurements facilitate asking important questions in drug development. For
instance, does dosing alter the extent of response, the timing of response, or both? In addition to qualitative questions, we are interested in modeling the drug effect quantitatively, which requires
a novel mathematical model that is biologically sound and fits the experimental setup. Our experiments and the proposed modeling have two important features: (i) our experiment is based on the readout of the intensity level of each individual cell, which is compared with a threshold value to determine whether that cell has shifted or not. Although we count the number of shifted cells at each sampling time point, the proposed model is not a population model that merely gives the average readout of all the cells. (ii) Our experiment collects time-series data under drug perturbation for 48 hours, with one sample per hour. A time-varying model is proposed for the cell shifting process, in which the drug effect coefficient is assumed to change with time.
Because there are different numbers of cells in different wells (the range is about ~200-400 cells per well), we perform normalization to calculate the percentage of cells shifted. Since there are
many factors besides the drug effect that contribute to cell shifting, calibration is performed by comparison to the control group to exclude these other contributing factors. The notation used in this work is listed below:
• N: total number of cells
• N[1](t): number of shifted cells at time t after applying drug
• ρ[1](t) = N[1](t)/N: percentage of cells shifted at time t after applying drug
• N[c]: total number of cells in the control group (no drug applied)
• N[1c](t): number of shifted cells at time t in the control group
• ρ[1c](t) = N[1c](t)/N[c]: percentage of cells shifted at time t in the control group
• ρ(t) = ρ[1](t) − ρ[1c](t): calibrated percentage of cells shifted at time t after applying drug
• ρ[av](t) = E[ρ(t)]: mean of the calibrated percentage of cells shifted at time t after applying drug
• X[i](t): state of cell i at time t after applying drug (either shift-ready or not)
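In this notation, the calibration step is a per-time-point subtraction of the control-well shift fraction. A small sketch, with made-up cell counts:

```python
import numpy as np

def calibrated_shift_fraction(n_shifted, n_total, n_shifted_ctrl, n_total_ctrl):
    """rho(t) = rho_1(t) - rho_1c(t): the drug-well shift fraction minus
    the control-well shift fraction, computed at each time point."""
    rho1 = np.asarray(n_shifted, dtype=float) / n_total       # rho_1(t)
    rho1c = np.asarray(n_shifted_ctrl, dtype=float) / n_total_ctrl  # rho_1c(t)
    return rho1 - rho1c

# Hypothetical counts at three time points: 300 cells in the drug well,
# 250 cells in the control well.
rho = calibrated_shift_fraction([0, 30, 120], 300, [0, 10, 20], 250)
```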
We justify modeling N[1](t) as a Gaussian process when the number of cells per well is sufficiently large. A model is then proposed for the cell shifting process, in which the calibrated percentage of shifted cells follows a Gaussian process.
N[1](t) is a Gaussian process when the number of cells per well is large enough
In general, N is a random variable since N may be different from well to well in the experiment; however, N can be treated as a known constant for each specific well, as can N[c]. At any given time
point t[j ]in the experiment, X[i](t[j]) can be considered as either shift-ready or not. Thus, the experiment of drug effect on each cell can be treated as a Bernoulli trial and X[i](t[j]) can be
modeled as a Bernoulli random variable, i.e., the Probability Mass Function (PMF) of X[i](t[j]) is given by
$P(X_i(t_j) = 1) = p, \quad P(X_i(t_j) = 0) = 1 - p$
where 0 ≤ p ≤ 1 and t[j] is dropped for simplicity of presentation. Under this definition, $N_1 = \sum_{i=1}^{N} X_i$. Assuming that all cell states are independent, N[1] has the binomial PMF given by
$P(N_1 = k) = \binom{N}{k} p^k (1-p)^{N-k}$
When the number of cells per well is large, say N > 100, the PMF of N[1] at any given time instant can be accurately approximated by a Gaussian distribution, due to the central limit theorem. Next we show that N[1](t) is a Gaussian process.
Proposition 1. The random process N[1](t) is approximately Gaussian when the number of cells per well is large.
Proof. At the beginning of the experiment, t[0], N[1](t[0]) is a Gaussian random variable. For any sampling point at time t[j], N[1](t[j]) can be expressed as
$N_1(t_j) = N_1(t_{j-1}) + \Delta N_1(t_j)$
where N[1](t[j−1]) is the total number of shifted cells at time t[j−1], and the additional number of shifted cells in the time interval [t[j−1], t[j]] is given by
$\Delta N_1(t_j) = \sum_{i=1}^{N - N_1(t_{j-1})} X_i$
If N − N[1](t[j−1]) is sufficiently large, say N − N[1](t[j−1]) > 32, then ΔN[1](t[j]) is well approximated by a Gaussian random variable. Since N[1](t[0]) is Gaussian, N[1](t[j]) is Gaussian as well, by mathematical induction. □
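The binomial-to-Gaussian approximation underlying Proposition 1 can be checked with a quick Monte Carlo experiment; the well size and shift probability below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, trials = 300, 0.3, 20000    # well size, shift probability (assumed)

# N1 = sum of N independent Bernoulli(p) cell states; by the central limit
# theorem its distribution approaches Normal(N*p, N*p*(1-p)) for large N.
n1 = rng.binomial(N, p, size=trials)
mean_err = abs(n1.mean() - N * p)
std_err = abs(n1.std() - np.sqrt(N * p * (1 - p)))
```

With N = 300 the sample mean and standard deviation of N1 land very close to the Gaussian limit Np = 90 and sqrt(Np(1−p)) ≈ 7.9.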
Modeling the cell shifting process
From our previous experimental observations, the cell shifting process in the colon cancer cell line HCT116 with a reporter for the MKI67 gene under lapatinib treatment shows a binary shifting characteristic. It is assumed that the number of shifting cells is related to: (i) the drug effect corresponding to different dosages; and (ii) the number of proliferating cells (non-shifted cells, N − N[1]). Since N[1](t) is a Gaussian process when the number of cells per well is large and N is a constant, the percentage of cells shifted at time t after applying the drug, ρ[1](t) = N[1](t)/N, is a Gaussian process normalized to [0, 1]. Similarly, for the control group, ρ[1c](t) = N[1c](t)/N[c] is also a Gaussian process normalized to [0, 1]. Then ρ(t) = ρ[1](t) − ρ[1c](t), the calibrated percentage of cells shifted at time t after applying the drug, is a Gaussian process too. We are interested in the distribution of ρ(t), specifically, how the mean value of ρ(t), ρ[av](t), changes over time under different dosages. Based on the above discussion, we propose the following model for cell shifting:
$\frac{d\rho_{av}(t)}{dt} = \gamma_1^u(t)\,(1 - \rho_{av}(t)) - \beta(t)\,\rho_{av}(t) \quad (5)$
where $\gamma_1^u$ is the drug effect coefficient, which depends on the dosage d, and β > 0 is a balancing factor. ρ[av](t) changes over time since the corresponding random process ρ(t) is non-stationary, and thus its mean changes with time. Specifically, the change of ρ[av](t) follows a linear differential equation (Eq. (5)), reflecting the fact that the change is positively affected by the product of drug effectiveness and the percentage of cells not shifted (1st term in Eq. (5)), and negatively affected by the percentage of cells already shifted (2nd term in Eq. (5)); hence the term "balancing factor" for β, since more shifted cells mean fewer non-shifted cells that the drug may affect.
In this model, we assume that both $\gamma_1^u$ and β change over time, so the proposed model is a time-varying system. It is also assumed that the number of non-shifted cells, N − N[1], decreases exponentially with the factor $\gamma_1^u$. $\mu = [\mu_1\ \mu_2]^T$ and ν are independent Gaussian white noise processes. µ represents the process noise. Its covariance matrix is
ν is the measurement noise. Its covariance matrix is
The noise terms account for the various uncertainties introduced by the experiment. For instance, the cells may not be at the same point in the cell cycle during the experiments, and thus may not be affected by the drug if some of the cells are actually dormant. This kind of uncertainty is modeled by the process noise µ. Another type of uncertainty is due to the measurement procedures, such as the imperfect photographic device and the image processing software; this type is modeled by the measurement noise ν.
To observe the relationship between the drug effect coefficient $\gamma_1^u$ and the dosage d, we need to estimate $\gamma_1^u$ for each dosage. Since this is a time-varying model, $\gamma_1^u$ changes with time.
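Stripping out the noise terms, Eq. (5) reduces to the deterministic rate equation dρ_av/dt = γ(1 − ρ_av) − βρ_av, which can be integrated numerically. The constant coefficient values below are purely illustrative:

```python
import numpy as np

def simulate_shift(gamma, beta, t_end=48, dt=1.0, rho0=0.0):
    """Forward-Euler integration of the noise-free version of Eq. (5):
    d(rho_av)/dt = gamma*(1 - rho_av) - beta*rho_av.
    gamma and beta may be callables of time to model time-varying
    coefficients; constants are promoted automatically."""
    g = gamma if callable(gamma) else (lambda t: gamma)
    b = beta if callable(beta) else (lambda t: beta)
    ts = np.arange(0.0, t_end + dt, dt)
    rho = np.empty_like(ts)
    rho[0] = rho0
    for k in range(len(ts) - 1):
        drho = g(ts[k]) * (1.0 - rho[k]) - b(ts[k]) * rho[k]
        rho[k + 1] = rho[k] + dt * drho
    return ts, rho

# Illustrative constants: the trajectory rises monotonically toward the
# steady state gamma/(gamma + beta) = 5/7.
ts, rho_av = simulate_shift(gamma=0.05, beta=0.02)
```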
System identification from time-series data using Kalman filter
Kalman filtering [48] provides minimum-mean-square-error estimation of the state of a stochastic linear system disturbed by Gaussian white noise. In our proposed scheme, a Kalman filter is applied to estimate the coefficients, $\gamma_1^u$ and β, of the proposed cell shifting model. The corresponding state and measurement equations are
$w(n+1) = w(n) + \mu(n), \qquad \delta(n) = C(n)\,w(n) + \nu(n)$
where the 2-dimensional state vector (containing the parameters to be estimated) is $w = [\gamma_1^u\ \ \beta]^T$, δ can be calculated as $\delta(n) = \frac{\rho_{av}(n+1) - \rho_{av}(n)}{\Delta t}$, and $C = [1 - \rho_{av}\ \ {-\rho_{av}}]$.
The implementation of the Kalman filter is given by the following equations [48]:
where K(n) is the Kalman filter gain and P is the covariance matrix of the error. The superscripts − and + indicate the a priori and a posteriori values of the variables, respectively; $\hat{w}^-$ and $\hat{w}^+$ are the prior and posterior estimates. Q and R are the covariance matrices of the parameter noise and external noise, respectively. The initial conditions are $\hat{w}(0|\delta_0) = E[w(0)]$ and $P(0) = E[(w(0) - \hat{w}(0))(w(0) - \hat{w}(0))^T]$.
In general, a Kalman filter may be interpreted as a one-step predictor with an appropriate gain calculator [49]. Specifically, Eq.(10) is the one-step predictor, Eq.(11) calculates the Kalman filter
gain, and Eq.(12) solves the corresponding Riccati equation.
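The filter equations above can be sketched as a recursive estimator of w = [γ, β]. The random-walk state model, the noise covariances q and r, and the synthetic noise-free trajectory below are illustrative assumptions, not the settings used for the TGen data:

```python
import numpy as np

def kalman_track(rho, dt=1.0, q=1e-6, r=1e-4):
    """Kalman filter tracking the time-varying parameters w = [gamma, beta].
    State model (random walk): w(n+1) = w(n) + mu,   cov(mu) = q*I.
    Measurement: delta(n) = C(n) w(n) + nu,  C(n) = [1 - rho(n), -rho(n)],
    where delta(n) = (rho(n+1) - rho(n)) / dt is the observed rate of change."""
    rho = np.asarray(rho, dtype=float)
    delta = np.diff(rho) / dt
    w = np.zeros(2)                 # a priori estimate of [gamma, beta]
    P = np.eye(2)                   # a priori error covariance
    Q = q * np.eye(2)
    estimates = []
    for n, d in enumerate(delta):
        P = P + Q                                 # predict (state unchanged)
        C = np.array([1.0 - rho[n], -rho[n]])
        K = P @ C / (C @ P @ C + r)               # Kalman gain
        w = w + K * (d - C @ w)                   # a posteriori update
        P = P - np.outer(K, C) @ P
        estimates.append(w.copy())
    return np.array(estimates)

# Noise-free sanity check: a trajectory generated by the model itself
# (gamma = 0.05, beta = 0.02) should be tracked back to those values.
gamma_true, beta_true = 0.05, 0.02
traj = [0.0]
for _ in range(48):
    traj.append(traj[-1] + gamma_true * (1 - traj[-1]) - beta_true * traj[-1])
est = kalman_track(traj)
```

Because the regressor C(n) changes as ρ grows, both parameters become identifiable from the scalar measurements, and on noise-free data the estimates settle close to the true values within the 48 samples.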
Convergence of the Kalman filter is an important issue [48]. The rate of convergence is defined as the number of iterations needed to obtain the optimum estimates. Convergence of the Kalman filter involves both the convergence of the estimates $\hat{w}(n)$ and the convergence of the estimation error e(n). Convergence will be studied in detail in the simulations.
In practice, noise statistics (such as the covariance matrices) may not be known and need to be estimated. The Kalman filter is sensitive to estimation errors in the noise statistics, and poor estimates of the noise covariance can result in filter divergence. An alternative would be to use an H[∞] filter [50,51].
A two-step analysis is performed to evaluate the drug effect for different dosages. First, we performed a proof-of-concept experiment using Monte Carlo simulation to demonstrate that the proposed model can mimic the experimental observations. Second, we analyzed the time-varying drug effect for different dosages based on real experimental data from Dr. Bittner's lab at the Translational Genomics Research Institute (TGen).
Proof-of-concept experiment using Monte Carlo simulation
It is assumed that a group of 200 cells has a mean GFP intensity of 2^18. When the drug is applied, each cell decides individually whether or not to shift to a lower intensity by flipping a coin (Bernoulli trial) at each time point, as assumed in the theoretical model. The histograms of the percentage of cells at intensities in the range [2^14, 2^19] over time are shown in Figure 4. The resulting histograms from the Monte Carlo simulation of the theoretical model match the measurement results from the TGen experiments performed on the cell line. This suggests that cell shifting is indeed a binary decision, which lays the groundwork for our proposed theoretical model, where a group of cells' decisions can be modeled as binomial and closely approximated by a Gaussian distribution when the number of cells is large.
Change over time of the histogram of the percentage of cells at intensities in [2^14, 2^19] under drug treatment, from Monte Carlo simulation.
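A minimal version of this simulation can be written in a few lines; the per-hour shift probability below is an assumed value, not the one used in the study:

```python
import numpy as np

rng = np.random.default_rng(7)
n_cells, hours, p_shift = 200, 48, 0.05   # p_shift: assumed for illustration
high, low = 18.0, 14.0                    # log2 intensities: expressing vs. floor

shifted = np.zeros(n_cells, dtype=bool)
frac_shifted = []
for _ in range(hours):
    # each cell independently "flips a coin" every hour; once shifted, it stays
    shifted |= rng.random(n_cells) < p_shift
    frac_shifted.append(shifted.mean())

# binary, switch-like histogram: every cell sits at one of two intensity levels
log2_intensity = np.where(shifted, low, high)
```

The fraction of shifted cells rises monotonically toward 1 − (1 − p_shift)^48, and the intensity histogram stays two-peaked throughout, matching the switch-like behavior described above.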
Drug effect analysis for the dosing study performed at TGen
For the experiments performed on the cell line at TGen, 6 different dosages of the drug lapatinib were tested, from 1 µM to 32 µM, with 6 biological replicates per dosage, each containing 200 to 400 cells. The obtained experimental data set contains time-series intensity readings for each cell, one per hour over a 48-hour period. There is also a corresponding experimental data set for the control group (without drug), for the purpose of calibration. The calibrated percentage of shifted cells is used as the measurement data in the proposed Kalman filtering algorithm. The obtained estimates of the drug effect coefficient $(\gamma_1^u)$ and the balancing factor (β) over 48 hours for the 6 dosages are shown in Figures 5 and 6, respectively.
The estimate of the drug effect coefficient over time for 6 different dosages.
The estimate of the balancing factor over time for 6 different dosages.
It is observed from Figure 5 that, in general, the drug effect coefficient $(\gamma_1^u)$ increases with the applied dosage, as expected. There appear to be certain thresholds for $\gamma_1^u$: for instance, $\gamma_1^u$ is much larger for dosages of 8 µM and above. It is also observed that $\gamma_1^u$ increases with time, revealing the time-varying nature of the drug effect. Furthermore, Figure 5 shows that higher dosages correspond to faster response times, e.g., $\gamma_1^u$ increases earlier and faster for higher dosages starting at ~10 hours. It is worth pointing out that, ideally, the percentage of shifted cells should be no less than that in the control group without drug input, i.e., 0 ≤ ρ(t) ≤ 1. However, due to uncertainties and noise in the experiments, we actually observe that ρ(t) may be negative, especially during the first ~10 hours, before the drug takes effect.
Unlike $\gamma_1^u$, it is observed in Figure 6 that β remains roughly flat over time for a given dosage, as expected, since β is the balancing factor and should not change with time. However, β differs across applied dosages, since a higher dosage requires a higher balancing factor to maintain the stability of the system. Again, uncertainties and noise may dominate the system during the first ~10 hours (before the drug takes effect).
Figure 7 shows the convergence of the Kalman filter; it converges within a few iterations in all cases.
The convergence result of the proposed algorithm using the Kalman filter.
Post data processing for the dosing study performed at TGen
From Figures 5 and 6, it is observed that the drug effect coefficient $(\gamma_1^u)$ and the balancing factor (β) are very "jittery," especially during the initial ~10 hours. This phenomenon may result from experimental noise, or from the cells needing a certain "commitment time" after the drug is added. In order to better compare the drug effect across dosages, we smooth the results and only take into account data after the first 10 hours. We apply a moving-average filter with filter coefficients determined by an unweighted linear least-squares regression and a 2nd-degree polynomial model; the span of the moving average is 5. Figure 8 shows the smoothed drug effect coefficient $(\gamma_1^u)$ over time for each of the 6 dosages. It can be observed that the drug effect is more jittery for small dosages, such as 1 µM. The smoothed $\gamma_1^u$ curves for the 6 dosages are compared in Figure 9. It is observed that there exists a "plateau" $(\gamma_1^u \approx 0.01)$ for dosages of 8 µM and above. The plateau is reached at 38 hours, 30 hours, and 24 hours for dosages of 8 µM, 16 µM, and 32 µM, respectively. The smoothed balancing factor (β) for each individual dosage can be found in Figure 10, and the smoothed β curves for the 6 dosages are compared in Figure 11.
The smoothed drug effect coefficient over time for each individual dosage.
The smoothed drug effect coefficient over time for 6 different dosages.
The smoothed balancing factor over time for each individual dosage.
The smoothed balancing factor over time for 6 different dosages.
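A span-5 moving average with coefficients from an unweighted 2nd-degree least-squares fit is the classic Savitzky-Golay smoother, whose interior coefficients are [−3, 12, 17, 12, −3]/35. A sketch on a synthetic noisy coefficient trace (the curve shape and noise level are invented for illustration):

```python
import numpy as np

# Savitzky-Golay coefficients: span 5, 2nd-degree polynomial, interior points
SG5 = np.array([-3.0, 12.0, 17.0, 12.0, -3.0]) / 35.0

def smooth_sg5(x):
    """Least-squares quadratic smoothing over a 5-sample span; the two
    samples at each edge are left unsmoothed for simplicity."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    y[2:-2] = np.convolve(x, SG5, mode="valid")
    return y

rng = np.random.default_rng(3)
t = np.arange(48.0)
# hypothetical jittery estimate of the drug effect coefficient over 48 hours
gamma_noisy = 0.01 * (1.0 - np.exp(-t / 15.0)) + rng.normal(0.0, 0.002, t.size)
# discard the first 10 hours (before the drug takes effect), then smooth
gamma_smooth = smooth_sg5(gamma_noisy[10:])
```

Unlike a plain 5-point average, the quadratic least-squares coefficients preserve local curvature while still suppressing sample-to-sample jitter.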
Conclusions and future work
The ultimate goal of target-based cancer drug development is to improve the efficacy and selectivity of cancer treatment by exploiting the differences between cancer cells and normal cells. The
current cancer drug development process faces huge challenges, such as how to better understand the target in context and how to develop predictive preclinical models that capture the molecular mechanisms of the targeted biological systems, thereby reducing the attrition rate. An integrated experimental and theoretical approach is proposed to assess the efficacy of molecularly targeted agents based on cell-line platforms. As a first step, drug efficacies for different dosages are characterized over time. Specifically, tumor cells' responses are analyzed through the use of
fluorescent reporters sampled frequently over a time course; quantification is done by microscopic scanning of cells in culture in multi-well plates using the automated epifluorescent imager;
fluorescent intensity data are extracted from these images using specialized large-scale image analysis tools developed for this application; the dynamics of drug efficacy for different dosages are
studied using dynamic modeling; and time-varying parameters are estimated using system identification techniques. It is observed that the drug efficacy is both time and dosage dependent. The objectives are two-fold: (i) the dosing study for MTAs should be based on both efficacy and toxicity considerations, in order to find the biologically effective dose (BED) rather than the maximum tolerated dose (MTD) used for cytotoxic agents; the time course of drug effect at different dosages provides information on the gradient of drug effect vs. dosage, and thus on the BED. (ii) Instead of a fixed-time-point pharmacodynamic study of an MTA, characterization of the entire time course of drug effect provides insight for designing an optimal schedule for drug administration.
Based on a similar experimental set-up and measurements to follow the cell/drug (dosages) dynamics, a truly multi-dimensional dynamics of tumor cell responses to drugs can be characterized through
systematic perturbations to test different combinations of cell types, reporters, and drugs/dosages, augmented by iterative systematic theoretical analysis. Such an approach would facilitate the
study of optimal dose and schedule, such as whether short pulses of higher dose, persistent dosing with lower dose, or some other regimen would have the most favorable outcomes. Moreover, the complex
target context can be inferred with multi-dimensional cell response dynamics with the help of advanced system identification methods. In sum, better intervention strategies can be designed. Such
topics are either currently being pursued or will be in future projects.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
XL and LQ developed and implemented the algorithm, conducted all simulations and data processing and wrote the initial draft of the paper. JH performed the image analysis. MB performed the
experiments. ED advised XL on algorithm development and revised the paper. All authors read and approved the final manuscript.
Based on “Assessing the efficacy of molecularly targeted agents by using Kalman filter”, by Xiangfang Li, Lijun Qian, Michael L Bittner and Edward R Dougherty which appeared in Genomic Signal
Processing and Statistics (GENSIPS), 2011 IEEE International Workshop on. © 2011 IEEE [37].
Xiangfang Li has been supported by the National Cancer Institute (2 R25CA090301-06). The experimental and image analysis work was supported in part by the W. M. Keck Foundation and Predictive
Biomarker Sciences.
This article has been published as part of BMC Genomics Volume 13 Supplement 6, 2012: Selected articles from the IEEE International Workshop on Genomic Signal Processing and Statistics (GENSIPS)
2011. The full contents of the supplement are available online at http://www.biomedcentral.com/bmcgenomics/supplements/13/S6.
• FitzGerald GA. Re-engineering Drug Discovery and Development. LDI Issue Brief. 2011;17(2):1–4. [PubMed]
• DiMasi E. et al. The price of innovation: new estimates of drug development costs. J Health Econ. 2003;22:151–185. doi: 10.1016/S0167-6296(02)00126-1. [PubMed] [Cross Ref]
• DiMasi J, Grabowski H. Economics of new oncology drug development. Journal of Clinical Oncology. 2007;25(2):209–216. doi: 10.1200/JCO.2006.09.0803. [PubMed] [Cross Ref]
• Federsel HJ. In search of sustainability: process R&D in light of current pharmaceutical industry challenges. Drug Discovery Today. 2006;11:966–974. doi: 10.1016/j.drudis.2006.09.012. [PubMed] [
Cross Ref]
• Kola I, Landis J. Can the pharmaceutical industry reduce attrition rates? Nat Rev Drug Discov. 2004;3(8):711–715. doi: 10.1038/nrd1470. [PubMed] [Cross Ref]
• Hait WN. Anticancer drug development: the grand challenges. Nature Reviews Drug Discovery. 2010;9:253–254. doi: 10.1038/nrd3144. [PubMed] [Cross Ref]
• Ocana A, Pandiella A, Siu LL, Tannocky IF. Preclinical development of molecular-targeted agents for cancer. Nat Rev Clin Oncols. 2010;8(4):200–209. doi: 10.1038/nrclinonc.2010.194. [PubMed] [
Cross Ref]
• Twombly R. Cancer Surpasses Heart Disease as Leading Cause of Death for All But the Very Elderly. Journal of the National Cancer Institute. 2005;97(5):330–331. [PubMed]
• Horrobin D. Modern biomedical research: an internally self-consistent universe with little contact with medical reality? Nat Rev Drug Discov. 2003;2(2):151–154. doi: 10.1038/nrd1012. [PubMed] [
Cross Ref]
• Stoffels P. Collaborative Innovation for the Post-Crisis World. The Boston Globe. 2009.
• Vogelstein B, Kinzler K. Cancer genes and the pathways they control. Nature Medicine. 2004;10:789–799. doi: 10.1038/nm1087. [PubMed] [Cross Ref]
• Hanahan D, Weinberg R. The hallmarks of cancer. Cell. 2000;100:57–70. doi: 10.1016/S0092-8674(00)81683-9. [PubMed] [Cross Ref]
• Communications S. TI Pharma Escher Workshop: Barriers to Pharmaceutical Innovation. Leiden, The Netherlands; 2010. The sustainability of the current drug development process: barriers and new
orientations; pp. 1–6.
• Abbott A. The drug deadlock. Nature. 2010;468:158–159. doi: 10.1038/468158a. [PubMed] [Cross Ref]
• Kummar S, Chen HX, Wright J, Holbeck S, Millin MD, Tomaszewski J, Zweibel J, Collins J, Doroshow JH. Utilizing targeted cancer therapeutic agents in combination: novel approaches and urgent
requirements. Nature Reviews Drug Discovery. 2010;9:843–856. doi: 10.1038/nrd3216. [PubMed] [Cross Ref]
• Li X, Qian L, Bittner M, Dougherty E. Characterization of Drug Efficacy regions based on dosage and frequency schedules. IEEE Trans Biomed Eng. 2011;58(3):488–498. [PubMed]
• Paul SM, Mytelka DS, Dunwiddie CT, Persinger CC, Munos BH, Lindborg SR, Schacht AL. How to improve R&D productivity: the pharmaceutical industry's grand challenge. Nature Reviews Drug Discovery.
2010;9:203–214. [PubMed]
• Collins I, Workman P. New approaches to molecular cancer therapeutics. Nature Chemical Biology. 2006;2:689–700. doi: 10.1038/nchembio840. [PubMed] [Cross Ref]
• Davidov E, Holland J, Marple E, Naylor S. Advancing drug discovery through systems biology? Drug Discov Today. 2003;8(8):175–183. [PubMed]
• Balis F. Evolution of anticancer drug discovery and the role of cell-based screening. J Natl Cancer Inst. 2002;94(2):78–79. doi: 10.1093/jnci/94.2.78. [PubMed] [Cross Ref]
• Fox E, Curt G, Balis F. Clinical trial design for target based therapy. The Oncologist. 2002;7(5):401–409. doi: 10.1634/theoncologist.7-5-401. [PubMed] [Cross Ref]
• Hait WN, Hambley T. Targeted cancer therapeutics. Cancer Res. 2009;69:1263–1267. doi: 10.1158/0008-5472.CAN-08-3836. [PubMed] [Cross Ref]
• Hambley T. Is anticancer drug development heading in the right direction? Cancer Res. 2009;69(4):1259–1262. doi: 10.1158/0008-5472.CAN-08-3786. [PubMed] [Cross Ref]
• Druker B. Perspectives on the development of imatinib and the future of cancer research. Nature Med. 2009;15:1149–1152. doi: 10.1038/nm1009-1149. [PubMed] [Cross Ref]
• Vogel C. et al. Efficacy and safety of trastuzumab as a single agent in first-line treatment of HER2-overexpressing metastatic breast cancer. J Clin Oncol. 2002;20:719–726. doi: 10.1200/
JCO.20.3.719. [PubMed] [Cross Ref]
• McClellan M, Benner J, Schilsky R, Epstein D, Woosley R, Friend S, Sidransky D, Geoghegan C, Kessler D. An accelerated pathway for targeted cancer therapies. Nature Reviews Drug Discovery. 2011;
10:79–80. doi: 10.1038/nrd3360. [PubMed] [Cross Ref]
• Dougherty E, Bittner M. Epistemology of the Cell: A Systems Perspective on Biological Knowledge. Wiley-IEEE Press; 2011.
• Sawyers C. Translational research: are we on the right track? J Clin Invest. 2008;118(11):3798–3801. doi: 10.1172/JCI37557. [PMC free article] [PubMed] [Cross Ref]
• Millar A, Lynch K. Rethinking clinical trials for cytostatic drugs. Nature Reviews Cancer. 2003;3:540–545. doi: 10.1038/nrc1124. [PubMed] [Cross Ref]
• Johnston S. Farnesyl transferase inhibitors: a novel targeted therapy for cancer. The Lancet Oncology. 2001;2:18–26. doi: 10.1016/S1470-2045(00)00191-1. [PubMed] [Cross Ref]
• Kummar S, Gutierrez M, Doroshow J, Murgo A. Drug development in oncology: classical cytotoxics and molecularly targeted agents. British Journal of Clinical Pharmacology. 2006;62:15–26. doi:
10.1111/j.1365-2125.2006.02713.x. [PMC free article] [PubMed] [Cross Ref]
• Kholodenko BN. Cell-signalling dynamics in time and space. Nat Rev Mol Cell Biol. 2006;7:165–176. doi: 10.1038/nrm1838. [PMC free article] [PubMed] [Cross Ref]
• Dougherty E, Brun M, Trent J, Bittner M. Conditioning-Based Modeling of Contextual Genomic Regulation. IEEE/ACM Trans Comput Biol Bioinform. 2009;6(2):310–320. [PubMed]
• Hill A. The possible effects of the aggregation of the molecules of haemoglobin on its dissociation curves. J Physiol. 1910;40:iv–vii.
• Holford N, Sheiner L. Understanding the dose-effect relationship: clinical application of pharmacokinetic-pharmacodynamic models. Clin Pharmacokinet. 1981;6(6):429–53. doi: 10.2165/
00003088-198106060-00002. [PubMed] [Cross Ref]
• Li X, Qian L, Bittner ML, Dougherty ER. Assessing the efficacy of molecularly targeted agents by using Kalman filter. Genomic Signal Processing and Statistics (GENSIPS), 2011 IEEE International
Workshop on: 4-6 December 2011. 2011. pp. 50–51. [Cross Ref]
• Shah N, Kasap C, Weier C, Balbas M, Nicoll J, Bleickardt E, Nicaise C, Sawyers C. Transient Potent BCR-ABL Inhibition Is Sufficient to Commit Chronic Myeloid Leukemia Cells Irreversibly to
Apoptosis. Cancer cell. 2008;14(6):485–493. doi: 10.1016/j.ccr.2008.11.001. [PubMed] [Cross Ref]
• Amin D, Sergina N, Ahuja D, McMahon M, Blair J, Wang D, Hann B, Koch K, Shokat K, Moasser M. Resiliency and Vulnerability in the HER2-HER3 Tumorigenic Driver. Science Transitional Medicine. 2010;
2(16):16ra7. doi: 10.1126/scitranslmed.3000389. [PMC free article] [PubMed] [Cross Ref]
• Chalfie M, Tu Y, Euskirchen G, Ward WW, Prasher DC. Green fluorescent protein as a marker for gene expression. Science. 1994;263:802–805. doi: 10.1126/science.8303295. [PubMed] [Cross Ref]
• Hao L, Johnsen R, Lauter G, Baillie D, Brglin TR. Comprehensive analysis of gene expression patterns of hedgehog-related genes. BMC Genomics. 2006;7:280. doi: 10.1186/1471-2164-7-280. [PMC free
article] [PubMed] [Cross Ref]
• Kanda T, Sullivan KF, Wahl GM. Histone-GFP fusion protein enables sensitive analysis of chromosome dynamics in living mammalian cells. Curr Biol. 1998;8:377–385. doi: 10.1016/S0960-9822(98)
70156-3. [PubMed] [Cross Ref]
• Hua J, Chao S, Cypert M, Gooden C, Shack S, Alla L, Smith E, Trent JM, Dougherty ER, Bittner ML. Tracking Transcriptional Activities with High-throughput Epifluorescent Imaging. Journal of
Biomedical Optics. 2012;17(4):046008. doi: 10.1117/1.JBO.17.4.046008. [PubMed] [Cross Ref]
• Dougherty E, Lotufo R. Hands-on morphological image processing. SPIE Optical Engineering Press; 2003.
• Vincent L, Soille P. Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1991;13(6):583–598. doi:
10.1109/34.87344. [Cross Ref]
• Walker R, Camplejohn R. Comparison of monoclonal antibody Ki-67 reactivity with grade and DNA flow cytometry of breast carcinomas. Br J Cancer. 1988;57(3):281–283. doi: 10.1038/bjc.1988.60. [PMC
free article] [PubMed] [Cross Ref]
• Spyratos F. et al. Correlation between MIB-1 and other proliferation markers: clinical implications of the MIB-1 cutoff value. Cancer. 2002;94(8):2151–2159. doi: 10.1002/cncr.10458. [PubMed] [
Cross Ref]
• Grewal M, Andrews A. Kalman Filtering: Theory and Practice. Englewood Cliffs, N.J.: Prentice Hall; 1993.
• Haykin S. Adaptive Filter Theory (4th Ed) Prentice Hall; 2001.
• Shaked U, Theodor Y. H[∞ ]optimal estimation: A tutorial. IEEE CDC. 1992. pp. 2278–2286.
• Qian L, Wang H, Li X. Applied Statistics for Network Biology: Methods in Systems Biology. Wiley; 2011. Genetic Regulatory Networks Inference: Combining a genetic programming and H[∞ ]Filtering
Approach; pp. 133–153.
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3481481/?tool=pubmed","timestamp":"2014-04-19T12:32:02Z","content_type":null,"content_length":"164367","record_id":"<urn:uuid:0d23b118-9877-4a9c-a9e4-16ebd776cf16>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
R Software
This page provides brief descriptions of R packages related to my work on data visualization and the history of statistical graphics.
The heplots package provides functions for visualizing hypothesis tests in multivariate linear models. They represent sums-of-squares-and-products matrices for linear hypotheses and for error using
ellipses (in two dimensions) and ellipsoids (in three dimensions). See Fox, Friendly and Monette (2009) for a brief introduction.
The candisc package includes functions for computing and visualizing generalized canonical discriminant analyses for a multivariate linear model (mlm). They are designed to provide low-rank
visualizations of terms in a mlm via the plot method and the heplots package.
The vcd package, by David Meyer, Achim Zeileis, Kurt Hornik provides a fully-general implementation of the graphical methods for categorical data analysis described in my book, Visualizing
Categorical Data. In particular, mosaic plots, association plots, sieve diagrams and related methods are implemented in a common, general framework of the "strucplot".
The vcdExtra package extends these methods in a variety of ways. In particular, vcdExtra extends mosaic, assoc and sieve plots from vcd to handle glm() and gnm() models and adds a 3D version in
The genridge package introduces generalizations of the standard univariate ridge trace plot used in ridge regression and related methods. These graphical displays show both bias and precision, by
plotting covariance ellipsoids of the estimated coefficients, rather than just the estimates themselves.
The mvinfluence package calculates regression deletion diagnostics for multivariate linear models that are close analogs of methods for univariate and generalized linear models. Some new plotting
methods are included, among these, the LR plot of generalized leverage and residuals.
The Guerry package comprises maps of France in 1830, data from Andre-Michel Guerry and others, and statistical and graphic methods related to Guerry's Moral Statistics of France (1833). The goal of
providing these as an R package is to facilitate the exploration and development of statistical and graphic methods for multivariate data in a geo-spatial context.
The package contains a vignette, Spatial multivariate analysis of Guerry's data in R [vignette("MultiSpat")] by Stéphane Dray, demonstrating both classical approaches and modern methods that attempt
to integrate geographical and multivariate aspects simultaneously.
The HistData package provides a collection of data sets that are interesting and important in the history of statistics and data visualization. The goal of the package is to make these available,
both for instructional use and for historical research.
Some of the data sets have examples which reproduce an historical graph or analysis. These are meant mainly as starters for more extensive re-analysis or graphical elaboration. Some of these present
graphical challenges to reproduce in R.
A tableplot (developed by Ernest Kwan) is a semi-graphic display in the form of a table with numeric values, supplemented by symbols with size proportional to cell value(s), and with visual
attributes that can be used to encode other information. The tableplot package provides an implementation.
An R package collecting several classical word pools used in studies of learning and memory (Paivio word list, Toronto Word Pool, Battig and Montague categorized words) and functions for selecting
word lists with given ranges on variables. [Under development]
Other R packages
Some links to a few important R packages for data visualization and statistical analysis | {"url":"http://datavis.ca/R/index.php","timestamp":"2014-04-19T22:07:47Z","content_type":null,"content_length":"22759","record_id":"<urn:uuid:5145f976-2f08-47b8-885f-536c7238685e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the mass in grams of a sample of Fe2(SO4)3 that contains 3.59x10^23 sulfate ions, SO4^2-?
Every mole of any compound has a fixed mass in grams and a fixed number of molecules, namely 6.02x10^23. What we need to do here is:
find out how many moles of the compound we have
find out how many grams in a mole of the compound
multiply moles times grams per mole.
Each molecule of Fe2(SO4)3 has 3 sulfate ions SO4. So a mole of iron sulfate would contain 3 x 6.02x10^23=18.06 x 10^23 sulfate ions. We have 3.59 x 10^23 sulfate ions so the number of moles we have
is 3.59/18.06 = 0.19878 moles of iron sulfate.
Each mole of iron sulfate has a molar mass which is the sum of the atomic weights of all the atoms that make up a molecule:
molar mass = 2 Fe + 3 S + 12 O = 2(55.847) + 3(32.06) + 12(15.9994) = 399.8668. The mass in grams of one mole of iron sulfate is 399.8668 g.
The mass of 0.19878 moles = 0.19878 x 399.8668 g = 79.486 g.
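As a check, the three steps can be coded directly. A minimal sketch, using the same Avogadro's number and atomic weights as the answer above (the function names are just for illustration):

```cpp
#include <cassert>
#include <cmath>

// Avogadro's number as used in the answer above.
const double kAvogadro = 6.02e23;

// Step 1: moles of Fe2(SO4)3, given that each formula unit has 3 sulfate ions.
double molesFromSulfateIons(double sulfateIons) {
    return sulfateIons / (3.0 * kAvogadro);
}

// Step 2: grams per mole of Fe2(SO4)3 = 2 Fe + 3 S + 12 O.
double molarMassFe2SO4x3() {
    return 2.0 * 55.847 + 3.0 * 32.06 + 12.0 * 15.9994;  // 399.8668 g/mol
}

// Step 3: moles times grams per mole gives the sample mass in grams.
double sampleMassGrams(double sulfateIons) {
    return molesFromSulfateIons(sulfateIons) * molarMassFe2SO4x3();
}
```

Evaluating sampleMassGrams(3.59e23) reproduces the ~79.49 g result.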
{"url":"http://www.enotes.com/homework-help/what-mass-grams-sample-fe2-so4-3-that-contains-3-182489","timestamp":"2014-04-21T15:39:30Z","content_type":null,"content_length":"25637","record_id":"<urn:uuid:8abc3a40-de15-4... record_id":"<urn:uuid:8abc3a40-da15-49b8-a57...","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Aerodynamics of Discus Throw
We explain how the rotation of a discus makes it into a reasonably efficient airfoil generating substantial lift at a lift/drag ratio ~ 3, thus increasing the length of the throw by 5-10 meters.
The rotation makes the boundary layer turbulent which delays separation at the high angle of attack in descent.
A Discus Acts Like a Wing
A properly thrown discus acts like a symmetric wing generating lift with a lift/drag ratio ~ 3 at an angle of attack ~ 30 degrees, as explained in the Knol Why It Is Possible to Fly, which can
increase the length of the throw by 5 meters in a head wind of 10m/s.
Elementary Calculus
Assuming that lift and drag are constant during the flight and that the discus has unit mass, it follows by elementary mechanics that the time of flight T and traveled distance d are given by the
following formulas:
T = (V sin(a) + sqrt(V² sin²(a) + 2gh))/G
d = V cos(a) T - DT²/2
where V is the initial speed, a is the launch angle, h the launch height, G = g – L the effective vertical force with g the gravitational force and L the vertical lift force, and D the horizontal drag force.
The maximal lift coefficient at 30 degrees of angle of attack is ~ 1.0 with lift/drag ratio ~ 3 [1].
Typical values are V = 20 m/s, a = 35 degrees, h=1.5 m, G = 0.8g which gives T ~ 4 s and d ~ 80 m, see also Optimal discus trajectories.
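The two formulas can be evaluated directly. A minimal sketch (g = 9.81 m/s² is assumed here, as it is not stated above); with release speeds in the elite 20-25 m/s range the quoted order of magnitude comes out, and the effect of lift is easy to see: shrinking G lengthens both the flight time and, absent drag, the throw.

```cpp
#include <cassert>
#include <cmath>

// Direct evaluation of the flight formulas for a unit-mass discus, with
// lift and drag treated as constants over the flight as in the text.
// G = g - L is the effective vertical force, D the horizontal drag force.
struct Flight { double T; double d; };

Flight simulate(double V, double launchDeg, double h, double G, double D) {
    const double pi = 3.14159265358979;
    const double g = 9.81;                 // assumed gravitational acceleration
    const double a = launchDeg * pi / 180.0;
    const double T = (V * std::sin(a)
                      + std::sqrt(V * V * std::sin(a) * std::sin(a) + 2.0 * g * h)) / G;
    const double d = V * std::cos(a) * T - D * T * T / 2.0;
    Flight f = { T, d };
    return f;
}
```

For example, comparing G = 0.8g (with lift) against G = g (no lift) at V = 20 m/s, a = 35 degrees, h = 1.5 m shows the lift prolonging the flight.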
Shortcut to the Action of a Wing
In the following pictures we describe how the flow of air around a wing generates large lift and small drag by a perturbation of zero lift/drag potential flow arising from a mechanism of instability
at separation changing the pressure distribution around the trailing edge. The perturbed flow does not separate at the crest because the boundary layer is turbulent which in a fluid of small
viscosity acts like a slip boundary condition. On the other hand, viscous flow with a laminar boundary layer separates at the crest and gives poor lift and large drag.
Sideview of velocity and pressure, and topview of streamwise vorticity of Naca0012 wing at aoa = 14. Observe the turbulent streamwise vorticity emanating from separation instability. Computed
solution of the Navier-Stokes equations with slip boundary condition [1]. It is possible that the rims (and holes of some frisbees) of a frisbee trigger transition to turbulence in the boundary
layer and thus improves
the flight.
Principle of action of a wing: Potential flow (upper left) with zero lift/drag modified by low-pressure counter-rotating rolls of streamwise vorticity from instability mechanism at separation
upper right), switching the pressure on the rear wing (bottom left) to give both lift and drag (H high, L low pressure). Viscous flow separating at the crest with low lift and large drag (bottom right).
Lift/drag ratio of a Naca0012 airfoil as function of the angle of attack
Lift (and circulation) as function of the angle of attack
Drag as function of the angle of attack
We see that lift peaks at 20 degrees angle of attack with lift/drag ratio ~ 3.
Flight of a Discus
The rotation of a discus has several effects:
• It stabilizes the flight into maintaining the launch angle, although it increases slightly due to precession as explained in Why a Frisbee Flies so Well.
• It makes the boundary layer turbulent which delays separation and maintains a useful lift/drag ratio.
The angle of attack changes during the flight since the flight direction changes, and is in fact negative at launch but increases to a positive maximum in descent, during which the lift helps to
prolong the flight.
Assuming G = 0.8g during half of the flight increases d by 10% or ~ 8m from lift, while with D ~ 0.4 the
reduction is ~ 3m from drag , altogether ~ 5m increase.
The World Record of discus throw is 74.08m set in 1986 by Jürgen Schult (GER/GDR), while for hammer throw it is 86.74m and for javelin 98.48m.
What determines if the boundary layer is turbulent (which is good) or laminar (which is bad) is the
Reynolds number Re = UL/v, where U is a relevant speed, L is a relevant length scale and v is the
(kinematic) viscosity, which for air is about 0.00001. The switch from a laminar to a turbulent boundary layer occurs at Re ~ 100,000. The rotation increases the effective Reynolds number and helps the
boundary layer turn turbulent, thus improving lift and reducing drag. | {"url":"http://claesjohnsonmathscience.wordpress.com/article/aerodynamics-of-discus-throw-yvfu3xg7d7wt-32/","timestamp":"2014-04-20T15:52:17Z","content_type":null,"content_length":"47659","record_id":"<urn:uuid:221ea97d-122b-46ac-81aa-bd81df733d05>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Highland Village, TX Algebra Tutor
Find a Highland Village, TX Algebra Tutor
...I have always loved math and took a variety of math classes throughout high school and college. I taught statistics classes at BYU for over 2 years as a TA and also tutored on the side. I
really enjoy tutoring because with every student there is a challenge of figuring out the best way to explain concepts so that they will understand.
7 Subjects: including algebra 2, algebra 1, statistics, geometry
...He, of course, has a great grasp of English, having spent many years as an ESL instructor. In his free time, Drew enjoys either picking up his guitar or sitting down at the piano. Between
work, school, and family, those times do not come frequently.
37 Subjects: including algebra 1, English, algebra 2, Spanish
...I’ve tutored over 100 hours of ACT and SAT prep, including over 30 hours of ACT and SAT Reading. ACT Reading questions range from finding specific facts to making inferences from the text.
While it’s important to read these passages and questions carefully, slower readers may have trouble finishing in time.
15 Subjects: including algebra 1, algebra 2, reading, geometry
My background is in physics. I have a undergraduate and masters degree in Physics. I am currently pursuing my PhD in Physics at the University of Texas at Dallas.
25 Subjects: including algebra 1, algebra 2, chemistry, physics
...I hold a B Sc in Electrical Engineering and have worked in the telecom and technology sector as a design engineer and program manager in Canada, Australia and the U.S.I taught both Geometry
and Pre-AP Geometry for almost five years with the Plano ISD. From experience, I know what concepts studen...
11 Subjects: including algebra 1, geometry, GRE, SAT math
Related Highland Village, TX Tutors
Highland Village, TX Accounting Tutors
Highland Village, TX ACT Tutors
Highland Village, TX Algebra Tutors
Highland Village, TX Algebra 2 Tutors
Highland Village, TX Calculus Tutors
Highland Village, TX Geometry Tutors
Highland Village, TX Math Tutors
Highland Village, TX Prealgebra Tutors
Highland Village, TX Precalculus Tutors
Highland Village, TX SAT Tutors
Highland Village, TX SAT Math Tutors
Highland Village, TX Science Tutors
Highland Village, TX Statistics Tutors
Highland Village, TX Trigonometry Tutors
Nearby Cities With algebra Tutor
Addison, TX algebra Tutors
Bartonville, TX algebra Tutors
Coppell algebra Tutors
Copper Canyon, TX algebra Tutors
Corinth, TX algebra Tutors
Double Oak, TX algebra Tutors
Flower Mound algebra Tutors
Hickory Creek, TX algebra Tutors
Lake Dallas algebra Tutors
Lewisville, TX algebra Tutors
Little Elm algebra Tutors
Northlake, TX algebra Tutors
Oak Point, TX algebra Tutors
Shady Shores, TX algebra Tutors
Southlake algebra Tutors | {"url":"http://www.purplemath.com/Highland_Village_TX_Algebra_tutors.php","timestamp":"2014-04-20T04:22:44Z","content_type":null,"content_length":"24245","record_id":"<urn:uuid:f03244c3-ec0f-4771-81a4-e20b60fe37e4>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00236-ip-10-147-4-33.ec2.internal.warc.gz"} |
A generic, reusable, and extendable matrix class
There are times when it is necessary to store data in a matrix so as to make programming easier. This is especially true in scientific programming where matrices are often used in calculations.
However, there is no built-in Matrix type in C++.
There are several ways to implement a matrix in C++. The easiest involve arrays or pointers. For example, to create a simple matrix using an array, just do double matrix[10][10]; to create a variable
‘matrix’ that has 10 rows and 10 columns. Programming books will tell you that the data isn't exactly stored in a two-dimensional form, but that is of no interest to us. What is important is that we
can easily access the elements in the matrix; for example, to access the element in row 2 and column 5, we do matrix[1][4], taking into account that arrays start from 0. For ease of explanation, from
now on all matrices start from zero; thus if I refer to row 2 and column 5, it is matrix[2][5]. The disadvantage of using arrays as matrices is that the size of the rows and columns must be fixed at
To dynamically adjust the size of the matrix, we can create a matrix using pointers. We can use:
double* matrix = new double[100];

or

double** matrix = new double*[10];
for(int i=0; i<10; ++i)
    matrix[i] = new double[10];
The first form is a special type of matrix in that, to access an element in row i and column j, you need to do matrix[i*10+j]. This is obviously not as intuitive as matrix[i][j]. However, it has its
own merits in that the matrix is simpler to create. The second form is slightly harder to create a matrix, but accessing an element is the same as in the array, i.e. matrix[i][j]. The disadvantage of
using pointers is that you must remember to call delete[] on the matrix after using it, otherwise there will be a memory leak.
A third way of creating a matrix is to use STL containers like vectors. We can nest two vectors together, like vector<vector<double> > matrix. To access an element, we can still use matrix[i][j]. In
addition, we can get the size of the matrix relatively easily, through the size function of the vector container. We can also easily increase the size of the matrix by using the resize function.
Another advantage of using STL containers is it is not prone to memory leaks unlike pointers.
Mathematical operations are harder to perform on matrices implemented using arrays, pointers, or STL containers. For example, to add two matrices together, you can't write "matrixA + matrixB".
Instead, you must write two loops to add the individual elements together. To enable ease of calculations and also to provide useful functions for manipulating matrices, we can create a matrix class.
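The nested loops look like this in the vector-of-vectors representation (a sketch; it assumes the two matrices have the same dimensions):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

typedef std::vector<std::vector<double> > Mat;

// Without a matrix class there is no "a + b"; the element-wise
// addition has to be written out as two nested loops.
Mat add(const Mat& a, const Mat& b) {
    Mat c(a.size(), std::vector<double>(a.empty() ? std::size_t(0) : a[0].size()));
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < a[i].size(); ++j)
            c[i][j] = a[i][j] + b[i][j];
    return c;
}
```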
Many programmers have created their own matrix classes. Among the more notable are the simple matrix class implemented for use in Numerical Recipes in C++^1, the Matrix Template Library (MTL)
developed by Open Systems Laboratory at Indiana University^2, and the Matrix Expression Template implemented by Masakatsu Ito^3.
Since there are already excellent matrix classes available, why is there a need to create another one? There are several motivations for creating a new matrix class which is the focus of this
article. First, the available matrix classes doesn't have all the features that I need. To add extra functions to these matrix classes will require an understanding of the implementation of the
classes, which is time-consuming. Second, as an amateur programmer, creating a new matrix class will improve my programming skills as I will have to consider the implementation problems thoroughly.
The third reason is that I will not have to spend time learning about how to use the other matrix classes. Although some are quite straightforward, some are not. Implementing my own matrix will
eliminate this learning curve.
The matrix class that I have implemented is not perfect. It is not meant to be the best matrix class. Before proceeding further, the deficiencies of the matrix class are listed below:
1. Only dense matrix storage type has been implemented. Other storage types like sparse matrix, triangular matrix, diagonal matrix etc., have not been implemented.
2. The performance of the matrix is not the fastest among the available matrix classes. Although I have not done extensive comparison tests, I believe that my matrix class is not the fastest in
terms of calculations.
3. The number of functions available for manipulating matrices is less than that in some available matrix classes.
4. The most important deficiency is that this class can only be compiled by compilers that are compliant with the C++ standard. I have only tested the present version on Visual C++ 2005.
Besides the motivations for creating this matrix class, my matrix class has advantages over other matrix classes. Otherwise, I would not have published it. The advantages are:
1. The matrix can be used to store numerical types as well as non-numerical types. When storing non-numerical types, the mathematical functions are not available for use, and will give compiler
errors if accidentally used, instead of causing errors during runtime.
2. The functions available for performing matrix calculations are intuitive to use, unlike those in some matrix classes.
3. Although other storage types are not implemented, they can be created relatively easily by the user by following the MatrixStorageTemplate provided.
4. STL-like functions and typedefs are available for users familiar with STL containers. Certain STL algorithms can also be performed on the matrix.
For those who have decided that this class may be suitable for them or who wishes to learn certain programming techniques like using policies with templates and expression templates, the rest of the
article is divided into the following sections. The first section gives an overview of the functions available. The second section gives an example of how to use the matrix class. The third section
discusses the implementation of the matrix class, giving details of using policies for generic programming and the use of expression templates to speed up certain matrix calculations.
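As a taste of the expression-template technique mentioned here, the following is a minimal, self-contained sketch, independent of this article's actual implementation: operator+ returns a lightweight node instead of computing anything, and the elements are evaluated lazily during assignment, so chained sums build no full-size temporaries.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A toy 1-D container and a lazy "+" node (illustrative names only).
struct Vec {
    std::vector<double> v;
    explicit Vec(std::size_t n, double x = 0.0) : v(n, x) {}
    double operator[](std::size_t i) const { return v[i]; }

    template<typename E>
    Vec& operator=(const E& e) {                 // evaluation happens here
        for (std::size_t i = 0; i < v.size(); ++i) v[i] = e[i];
        return *this;
    }
};

// Node representing "l + r"; nothing is computed until operator[] is called.
template<typename L, typename R>
struct Add {
    const L& l; const R& r;
    Add(const L& lhs, const R& rhs) : l(lhs), r(rhs) {}
    double operator[](std::size_t i) const { return l[i] + r[i]; }
};

inline Add<Vec, Vec> operator+(const Vec& a, const Vec& b) {
    return Add<Vec, Vec>(a, b);
}
// Allows chaining: (a + b) + c builds a nested Add node.
template<typename L, typename R>
Add<Add<L, R>, Vec> operator+(const Add<L, R>& a, const Vec& b) {
    return Add<Add<L, R>, Vec>(a, b);
}
```

With this, `out = a + b + c;` walks the expression tree once per element instead of allocating an intermediate vector for `a + b`.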
Available Matrix Functions
The only class that the user needs to use is the template Matrix<ValueT,MatrixTypeP,MathP> class. ValueT is a type name for the type of data that will be stored in the matrix. It can be numerical
types like int, double, or complex, non-numerical types like char*, or even user-defined classes. MatrixTypeP is a template class policy for the data storage type. At the present moment, only
DenseMatrix is available. MathP is a template class policy for mathematical functions. Three choices are available: MathMatrix, MathMETMatrix, and NonMathMatrix.
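The policy idea can be sketched independently of this class's actual source (the names WithMath and NoMath below are illustrative, not the article's): the arithmetic lives in a policy base class selected by a template template parameter, so a matrix of a non-numerical type simply picks the empty policy, and any accidental use of arithmetic becomes a compile-time error rather than a runtime one, which is exactly the behaviour advertised above.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Policy 1: supplies element-wise arithmetic to its host via CRTP.
template<typename Host, typename T>
struct WithMath {
    Host& operator+=(const Host& rhs) {
        Host& self = static_cast<Host&>(*this);
        for (std::size_t i = 0; i < self.data.size(); ++i)
            self.data[i] += rhs.data[i];
        return self;
    }
};

// Policy 2: empty, so arithmetic on the host simply fails to compile.
template<typename Host, typename T>
struct NoMath {};

// The host class picks its math behaviour through a template parameter.
template<typename T, template<typename, typename> class MathP>
struct Mat : MathP<Mat<T, MathP>, T> {
    std::vector<T> data;
    Mat(std::size_t n, const T& x) : data(n, x) {}
};
```

A Mat<double, WithMath> supports +=, while for a Mat<const char*, NoMath> the line `s += s;` would not compile at all, since NoMath provides no operator+=.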
No matter what type of matrix is created, the following are common for all types of matrices:
STL-like typedefs:
│value_type │type of data stored in matrix. │
│reference │reference to type of data stored in matrix. │
│const_reference│const reference to type of data stored in matrix. │
│pointer │pointer to type of data stored in matrix. │
│const_pointer │const pointer to type of data stored in matrix. │
│difference_type│difference types for pointer calculations. │
│size_type │size type. │
Constructors, destructor, copy constructors, and assignment:
│Matrix<ValueT, MatrixTypeP, MathP> │Default constructor │
│Matrix<ValueT, MatrixTypeP, MathP>(size_type sizeRow, size_type │Constructor requiring number of rows, columns, and initial value for elements. │
│sizeCol, value_type x=value_type) │ │
│Matrix<ValueT, MatrixTypeP, MathP>(size_type sizeRow, size_type │Constructor requiring number of rows, columns, and a 1D array containing initial values for elements. │
│sizeCol, value_type* am) │ │
│Matrix<ValueT, MatrixTypeP, MathP>(size_type sizeRow, size_type │Constructor requiring number of rows, and a 2D array containing initial values for elements. │
│sizeCol, value_type** am) │ │
│~Matrix<ValueT, MatrixTypeP, MathP> │Destructor. │
│Matrix<ValueT, MatrixTypeP, MathP>(const Matrix<ValueT, │Copy constructor. │
│MatrixTypeP, MathP>& m) │ │
│template<typename ValueTT, template<typename> class MatrixTypePT, │Copy constructor taking any matrix of inter-convertible ValueT. │
│template<typename, typename> class MathPT> │ │
│ │ │
│Matrix(const Matrix<ValueTT, MatrixTypePT, MathPT>& m) │ │
│Matrix<ValueT, MatrixTypeP, MathP>& operator=(Matrix<ValueT, │Assignment operator taking the same type of matrix. │
│MatrixTypeP, MathP> m) │ │
│template<typename ValueTT,template<typename> class MatrixTypePT, │Assignment operator taking any matrix of inter-convertible ValueT. │
│template<typename,typename> class MathPT> │ │
│ │ │
│Matrix<ValueT,MatrixTypeP,MathP>& operator=(Matrix< │ │
│ValueTT,MatrixTypePT,MathPT> m) │ │
│Matrix<ValueT, MatrixTypeP,MathP>& operator=(const value_type* am)│Copy a 1D array into the matrix. The number of array elements must be equal or larger than the number of matrix elements, and the │
│ │sequence of array elements must be a concatenation of rows to be put into the matrix. │
│Matrix<ValueT, MatrixTypeP, MathP>& operator=(const value_type** │Copy a 2D array into the matrix. The size of the array must be the same as the matrix. │
│am) │ │
Matrix member functions
│void clear() │Erase matrix. │
│bool empty() const │Check if matrix is empty. │
│void resize(size_type sizeRow, size_type sizeCol, value_type x=value_type)│Resize matrix, filling new elements with supplied or default values and removing extra elements. │
│void swap(Matrix<ValueT, MatrixTypeP,MathP>& m) │Swap the contents of another matrix with the present matrix. The matrix to be swapped must be of the same type.│
│void swaprows(size_type i1, size_type i2) │Swap two rows in the matrix. │
│void swapcols(size_type j1, size_type j2) │Swap two columns in the matrix. │
│bool operator==(const Matrix<ValueT,MatrixTypeP,MathP>& m) const │Comparison operators. │
│ │ │
│bool operator!=(const Matrix<ValueT,MatrixTypeP,MathP>& m) const │ │
│void Update() │Update matrix. │
│const_reference operator()(size_type posRow, size_type posCol) const │Member accessor functions │
│ │ │
│reference operator()(size_type posRow, size_type posCol) │ │
│ │ │
│const_reference at(size_type posRow, size_type posCol) const │ │
│ │ │
│reference at(size_type posRow, size_type posCol) │ │
Row and Column iterator functions
│size_type size() const │Return the total number of rows/columns. │
│template<typename ForwardIterator> void insert(size_type rowNo, const ForwardIterator& first) │Insert row/column at specified row/column. │
│void erase(size_type rowNo) │Erase the row/column at the specified position. │
│template<typename ForwardIterator> void push_back(const ForwardIterator& first) │Add row/column at end of matrix. │
│template<typename ForwardIterator> void push_front(const ForwardIterator& first) │Add row/column at beginning of matrix. │
│void pop_back() │Remove row/column at end of matrix. │
│void pop_front() │Remove row/column at beginning of matrix. │
│iterator begin() │Functions to obtain iterators to data in matrix.│
│ │ │
│const_iterator begin() const │ │
│ │ │
│iterator end() │ │
│ │ │
│const_iterator end() const │ │
│ │ │
│reverse_iterator rbegin() │ │
│ │ │
│const_reverse_iterator rbegin() const │ │
│ │ │
│reverse_iterator rend() │ │
│ │ │
│const_reverse_iterator rend() const │ │
Functors operating on Matrix
│MatrixT Transpose<MatrixT>()(const MatrixT& matrix) │Get the transpose of a matrix, i.e., swap rows with columns and vice versa. The first form returns the transposed matrix. The second │
│ │saves the transposed matrix into a variable which is passed in. │
│void Transpose<MatrixT>()(const MatrixT& matrix, MatrixT& │ │
│transposedMatrix) │ │
│MatrixT Diagonal<MatrixT>()(const MatrixT& matrix) │Get the main diagonal of a matrix, or put a vector into the main diagonal of a matrix. The first form returns the diagonal matrix. The │
│ │second saves the diagonal matrix into a variable which is passed in. │
│void Diagonal<MatrixT>()(const MatrixT& matrix, MatrixT& │ │
│diagonalMatrix) │ │
│MatrixT Covariance<MatrixT>()(const MatrixT& matrix) │Get the covariance of a matrix. The first form returns the covariance matrix. The second saves the covariance matrix into a variable │
│ │which is passed in. │
│void Covariance<MatrixT>()(const MatrixT& matrix, MatrixT& │ │
│covarianceMatrix) │ │
│MatrixT Power<MatrixT>()(const MatrixT& matrix, const │Get a matrix with all its elements raised to a defined power. The first form returns the powered matrix. The second saves the powered │
│value_type& power) │matrix into a variable which is passed in. │
│ │ │
│void Power<MatrixT>()(const MatrixT& matrix, MatrixT& │ │
│powerMatrix, const value_type& power) │ │
│MatrixT Mean<MatrixT>()(const MatrixT& matrix) │Get a row vector with the mean of each column of a matrix. The first form returns the row vector. The second saves the row vector into │
│ │a variable which is passed in. │
│void Mean<MatrixT>()(const MatrixT& matrix, MatrixT& │ │
│meanMatrix) │ │
│MatrixT Median<MatrixT>()(const MatrixT& matrix) │Get a row vector with the median of each column of a matrix. The first form returns the row vector. The second saves the row vector │
│ │into a variable which is passed in. │
│void Median<MatrixT>()(const MatrixT& matrix, MatrixT& │
│medianMatrix) │ │
│MatrixT Sum<MatrixT>()(const MatrixT& matrix) │Get a row vector with the sum over each column of a matrix. The first form returns the row vector. The second saves the row vector into│
│ │a variable which is passed in. │
│void Sum<MatrixT>()(const MatrixT& matrix, MatrixT& │ │
│sumMatrix) │ │
│MatrixT CumulativeSum<MatrixT>()(const MatrixT& matrix) │Get the cumulative sum of elements of a matrix. The first form returns the cumulative sum matrix. The second saves the cumulative sum │
│ │matrix into a variable which is passed in. │
│void CumulativeSum<MatrixT>()(const MatrixT& matrix, MatrixT&│ │
│cumulativeSumMatrix) │ │
Matrix Class Example
#include <string>
#include <vector>
#include <iostream>
#include <iterator>
#include <algorithm>
#include "Matrix.hpp"
using namespace std;
using namespace YMatrix;

int main(int argc, char* argv[])
{
    // typedef for non-mathematical matrix with string as datatype.
    typedef Matrix<string,DenseMatrix,NonMathMatrix> MatrixNM;
    // typedef for normal mathematical matrix with double as datatype.
    typedef Matrix<double,DenseMatrix,MathMatrix> MatrixM;
    // typedef for mathematical matrix implementing expression templates.
    typedef Matrix<double,DenseMatrix,MathMETMatrix> MatrixEM;
    // typedef for normal mathematical row vector with double as datatype.
    typedef RowVector<double,DenseMatrix,MathMatrix> RVectorM;
    // typedef for normal mathematical column vector with double as datatype.
    typedef ColVector<double,DenseMatrix,MathMatrix> CVectorM;

    int i, j;

    // construct a 3 x 4 matrix and fill it with "Hello" for all elements.
    MatrixNM stringMatrix(3, 4, "Hello");
    // resize matrix to 2 x 5 and fill new elements with "World".
    stringMatrix.resize(2, 5, "World");

    // print matrix.
    cout << "First way to print matrix\n";
    for (i=0; i<stringMatrix.row.size(); ++i)
    {
        for (j=0; j<stringMatrix.col.size(); ++j)
            cout << stringMatrix(i,j) << "\t";
        cout << "\n";
    }
    cout << endl;

    vector<string> s(5, "New");
    stringMatrix.row.insert(1, s.begin());
    stringMatrix.col.insert(2, s.begin());

    // another way to print matrix.
    cout << "Second way to print matrix\n";
    for (i=0; i<stringMatrix.row.size(); ++i)
    {
        copy(stringMatrix.row(i).begin(), stringMatrix.row(i).end(),
             ostream_iterator<string>(cout, "\t"));
        cout << "\n";
    }
    cout << endl;

    // another way to print matrix.
    cout << "Third way to print matrix\n";
    cout << stringMatrix;

    // erase matrix.
    cout << "Deleting contents of matrix\n";
    stringMatrix.clear();
    // check if matrix is empty.
    cout << (stringMatrix.empty() ?
        "Matrix is empty\n" : "Matrix is not empty\n");

    // prepare two 3 x 3 arrays.
    double a1[9] = {1,2,3, 4,5,6, 7,8,9};
    double** a2 = new double*[3];
    for (i=0; i<3; ++i) a2[i] = new double[3];
    for (i=0; i<3; ++i)
        for (j=0; j<3; ++j)
            a2[i][j] = 9 - (i*3+j);

    // construct three 3 x 3 matrices with no initial values.
    MatrixM mathMatrix1(3,3), mathMatrix2(3,3), mathMatrix3;
    // fill matrices with the arrays.
    mathMatrix1 = a1;
    mathMatrix2 = a2;
    cout << "First matrix:\n" << mathMatrix1;
    cout << "Second matrix:\n" << mathMatrix2;
    cout << "Element(1,2) of first matrix: " << mathMatrix1.at(1,2) << endl;

    // perform mathematical operations on matrices and print the results.
    cout << "Some mathematical operations\n";
    mathMatrix3 = mathMatrix1 + mathMatrix2; cout << mathMatrix3;
    mathMatrix3 = mathMatrix1 - mathMatrix2; cout << mathMatrix3;
    mathMatrix3 = mathMatrix1 * mathMatrix2; cout << mathMatrix3;
    mathMatrix3 = mathMatrix1 * 2; cout << mathMatrix3;
    mathMatrix3 = mathMatrix1 + mathMatrix2 * mathMatrix2;
    cout << mathMatrix3;
    mathMatrix3 += mathMatrix1; cout << mathMatrix3;
    mathMatrix3 -= mathMatrix1; cout << mathMatrix3;
    mathMatrix3 *= mathMatrix1; cout << mathMatrix3;
    mathMatrix3 *= mathMatrix1 + mathMatrix2; cout << mathMatrix3;
    mathMatrix3 *= 2; cout << mathMatrix3;

    // swap mathMatrix1 and mathMatrix2.
    cout << "Swapping first and second matrix:\n";
    mathMatrix1.swap(mathMatrix2);
    cout << "First matrix:\n" << mathMatrix1;
    cout << "Second matrix:\n" << mathMatrix2;

    // swap row 0 and 1 of mathMatrix1.
    cout << "Swapping first and second row of first matrix:\n";
    mathMatrix1.swaprows(0,1); cout << mathMatrix1;
    // swap column 0 and 2 of mathMatrix1.
    cout << "Swapping first and third column of first matrix:\n";
    mathMatrix1.swapcols(0,2); cout << mathMatrix1;

    // print transpose of mathMatrix1.
    cout << "Transpose\n" << Transpose<MatrixM>()(mathMatrix1);
    // print diagonal of mathMatrix1.
    cout << "Diagonal\n" << Diagonal<MatrixM>()(mathMatrix1);
    // print covariance of mathMatrix1.
    cout << "Covariance\n" << Covariance<MatrixM>()(mathMatrix1);
    // print power 2 of mathMatrix1.
    cout << "Power 2\n" << Power<MatrixM>()(mathMatrix1, 2);
    // print mean of mathMatrix1.
    cout << "Mean\n" << Mean<MatrixM>()(mathMatrix1);
    // print median of mathMatrix1.
    cout << "Median\n" << Median<MatrixM>()(mathMatrix1);
    // print sum of mathMatrix1.
    cout << "Sum\n" << Sum<MatrixM>()(mathMatrix1);
    // print cumulative sum of mathMatrix1.
    cout << "CumulativeSum\n" << CumulativeSum<MatrixM>()(mathMatrix1);

    // construct three 3 x 3 matrices using the two arrays and mathMatrix1.
    MatrixEM mathEmatrix1(3, 3, a1);
    MatrixEM mathEmatrix2(3, 3, a2);
    MatrixEM mathEmatrix3(mathMatrix1);
    cout << "First matrix:\n" << mathEmatrix1;
    cout << "Second matrix:\n" << mathEmatrix2;

    // perform mathematical operations on matrices and print the results.
    cout << "Some mathematical operations\n";
    mathEmatrix3 = mathEmatrix1 + mathEmatrix2; cout << mathEmatrix3;
    mathEmatrix3 = mathEmatrix1 - mathEmatrix2; cout << mathEmatrix3;
    mathEmatrix3 = mathEmatrix1 * mathEmatrix2; cout << mathEmatrix3;
    mathEmatrix3 = mathEmatrix1 * 2; cout << mathEmatrix3;
    mathEmatrix3 = mathEmatrix1 + mathEmatrix2 * mathEmatrix2;
    cout << mathEmatrix3;
    mathEmatrix3 += mathEmatrix1; cout << mathEmatrix3;
    mathEmatrix3 -= mathEmatrix1; cout << mathEmatrix3;
    mathEmatrix3 *= mathEmatrix1; cout << mathEmatrix3;
    mathEmatrix3 *= 2; cout << mathEmatrix3;
    // invalid mathematical operation (compare with above):
    // mathEmatrix3 *= mathEmatrix1 + mathEmatrix2; cout << mathEmatrix3;

    // demonstrate speed of expression templates
    // (can't tell unless timing is done).
    mathEmatrix3 = mathEmatrix1 + mathEmatrix2 -
                   mathEmatrix1 + mathEmatrix2;
    cout << mathEmatrix3;

    // demonstrate interchangeability of matrices.
    mathEmatrix3 = mathMatrix1 + mathEmatrix2; cout << mathEmatrix3;

    // get transpose of mathEmatrix3.
    MatrixEM mathEmatrix3Transposed;
    Transpose<MatrixEM>()(mathEmatrix3, mathEmatrix3Transposed);
    cout << "Transposed:\n" << mathEmatrix3Transposed;

    RVectorM rv(5, 3.0);
    cout << rv;
    rv.push_back(8); cout << "push_back 8\n" << rv;
    rv.push_front(1); cout << "push_front 1\n" << rv;
    rv.pop_back(); cout << "pop_back\n" << rv;
    rv.pop_front(); cout << "pop_front\n" << rv;
    cout << endl;

    CVectorM cv(5, 2.6);
    cout << cv;
    cv.push_back(3.5); cout << "push_back 3.5\n" << cv;
    cv.push_front(8.5); cout << "push_front 8.5\n" << cv;
    cv.pop_back(); cout << "pop_back\n" << cv;
    cv.pop_front(); cout << "pop_front\n" << cv;
    cout << endl;

    // release the 2D array.
    for (i=0; i<3; ++i) delete[] a2[i];
    delete[] a2;
    return 0;
}
Implementation Details
This matrix class was designed to be generic and with reusability in mind. The inspiration for this design comes from Andrei Alexandrescu^4. Since the matrix is to be generic, templates are
essential for the implementation. The matrix must be able to store different types of data, not just numerical data. Thus, a typename ValueT must be present to indicate the type of data that is stored.
There are different types of matrices: dense matrix, sparse matrix, triangular matrix, diagonal matrix, etc. Each differs in how the matrix is organized, and how the data is stored. Thus, there
should be a policy which determines how the matrix is stored and how the data can be accessed. Hence, a separate class for each kind of matrix should be created. This class will handle the storage
and access of the data. The second element in the matrix class template parameter list is thus template<typename> class MatrixTypeP. At the present moment, MatrixTypeP can only be DenseMatrix.
The third thing that must be considered when designing the matrix class is that, for the matrix to be useful in scientific programming, it should support mathematical operations. However, if the
matrix contains non-numerical types, mathematical operations should not be available. This problem can be solved by using a mathematical policy: matrices containing non-numerical types cannot
perform mathematical operations and will give a compiler error if this is attempted, whereas mathematical operations are available for numerical types. Initially, two classes were created to
implement this policy, MathMatrix and NonMathMatrix, and the third element in the matrix class template parameter list is template<typename,typename> class MathP.
In the initial version of the matrix class, MathP derived from MatrixTypeP, and the matrix class then derived from one of the math classes. This was later changed to the present design, in which
the matrix class derives from both MatrixTypeP and MathP, and MathP no longer derives from MatrixTypeP. In addition, a third math class, MathMETMatrix, was created, which implements
expression templates to increase the performance when performing certain types of matrix calculations. The use of expression templates will be discussed later.
Thus, the skeleton of the matrix class is formed. The matrix class will take in three template arguments which denote the type of data which will be stored, the policy for storing and accessing the
data, and the policy for determining whether mathematical operations are available for the matrix. The only thing not considered was the dimension of the matrix: it was assumed to
be 2-dimensional, as multi-dimensional matrices would have increased the difficulty of the implementation considerably.
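To make the skeleton concrete, here is a minimal sketch of the three-parameter design. The class and policy names follow the article, but the bodies are illustrative stubs; note also that the sketch uses CRTP to reach the derived matrix from the math policy, whereas the actual MathP classes store a pointer to their child matrix, as described later.

```cpp
#include <cstddef>
#include <vector>

// Storage policy: owns the elements and knows how they are laid out.
template<typename ValueT>
class DenseStorage {            // stands in for the article's DenseMatrix
public:
    typedef ValueT value_type;
    DenseStorage() : rows_(0), cols_(0) {}
    DenseStorage(std::size_t r, std::size_t c, ValueT v = ValueT())
        : rows_(r), cols_(c), data_(r * c, v) {}
    ValueT& operator()(std::size_t i, std::size_t j) { return data_[i * cols_ + j]; }
    std::size_t rows() const { return rows_; }
    std::size_t cols() const { return cols_; }
private:
    std::size_t rows_, cols_;
    std::vector<ValueT> data_;
};

// Math policy: operators exist only when this policy is chosen.
template<typename ValueT, typename MatrixT>
class MathOps {                 // stands in for MathMatrix
public:
    MatrixT& operator+=(MatrixT& rhs) {
        MatrixT& self = static_cast<MatrixT&>(*this);   // CRTP downcast
        for (std::size_t i = 0; i < self.rows(); ++i)
            for (std::size_t j = 0; j < self.cols(); ++j)
                self(i, j) += rhs(i, j);
        return self;
    }
};

template<typename ValueT, typename MatrixT>
class NoMathOps {};             // stands in for NonMathMatrix

// The host class inherits from both policies.
template<typename ValueT,
         template<typename> class StorageP,
         template<typename, typename> class MathP>
class Matrix
    : public StorageP<ValueT>,
      public MathP<ValueT, Matrix<ValueT, StorageP, MathP> > {
public:
    Matrix(std::size_t r, std::size_t c, ValueT v = ValueT())
        : StorageP<ValueT>(r, c, v) {}
};
```

With this shape, `Matrix<double, DenseStorage, MathOps>` supports `+=`, while a `NoMathOps` instantiation simply has no arithmetic operators to misuse, so attempting them fails at compile time.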
The operations of the matrix class were designed to be similar to those of an STL container. The matrix class implements the various STL-like typedefs such as value_type, size_type, etc. Iterators for iterating
through the elements in the matrix were also implemented. Certain common STL container functions like clear, empty, resize, and swap were implemented. Others like assign, erase, insert, front, back,
push_back, push_front, pop_back, pop_front, begin, and end were implemented in the row and column iterators.
Implementation of MatrixTypeP
There are a few basic public functions that any MatrixTypeP matrix must have. There must be a default constructor which takes no arguments. There should be constructors which take in the initial
size of the matrix as well as the initial values. The initial values can be a default value for all the elements, or from a matrix implemented using arrays or pointers. A copy constructor is
essential because of the presence of a raw pointer as a member variable. There should be an assignment operator for copying another similar MatrixTypeP matrix, and an assignment operator for copying
any matrix class. The MatrixTypeP matrix must also implement the resize, swap, and comparison operators. In addition, an Update function was added to allow the user to inform the matrix to update
itself whenever data is changed. This Update function is not used by DenseMatrix but it may be necessary for other MatrixTypeP.
There are other basic public functions which could theoretically be placed in the final matrix class instead of the MatrixTypeP matrix: clear, empty, and the various member accessor functions such as
operator() and at. They could be placed in the final matrix class because their code does not need to change across MatrixTypeP matrices; their implementation relies only on member variables and
functions common to all MatrixTypeP matrices. However, they are still placed in the MatrixTypeP matrix at the present moment, because they provide functionality which the MatrixTypeP matrix may
find useful when implementing other functions.
There are two more private functions which should be present in a MatrixTypeP matrix: ResizeIterators and UpdateIterators. ResizeIterators is used to resize the four private member variables
rowIteratorStart, rowIteratorFinish, colIteratorStart, and colIteratorFinish; UpdateIterators is used to update the data stored by these four member variables. Initially, there was only
UpdateIterators, and the resizing of the four private member variables was done inside it. Later, the resizing was separated out into ResizeIterators in order to increase the performance of the
matrix. The four private member variables are raw pointers which keep information on the iterators pointing to the start and end of each row and column. By storing these iterators, the member
accessor functions and the various iterator functions can be sped up and made generic for all MatrixTypeP. Initially, the four private member variables were implemented using the STL vector;
this was later changed to raw pointers to increase performance.
Finally, each MatrixTypeP matrix must implement a row iterator class and a column iterator class. The row iterator is responsible for iterating through a row in the matrix, and the column iterator
for iterating through a column. These two iterator classes are the most important aspects of the implementation of a MatrixTypeP matrix. I will try to give a detailed explanation on how to implement
the iterator classes.
Since the iterator classes are to be similar to those of an STL container, they should have the various STL-like typedefs. Thus, iterator_category, value_type, reference, pointer, size_type, and
difference_type are defined.
The iterator should have appropriate constructors. It must have a default constructor which takes in no arguments. It should have a constructor which takes in an iterator (VectorTypeIterator) which
iterates through a row or column of the type which stores the matrix data. For example, in DenseMatrix, the matrix data is stored by a raw pointer of type value_type*. Thus, the VectorTypeIterator
should be of type value_type*. If the matrix data were stored in an STL vector, the VectorTypeIterator would be of type std::vector<ValueT>::iterator; if in an STL deque, of type
std::deque<ValueT>::iterator. This VectorTypeIterator is stored in a member variable for use in various functions. In addition to the VectorTypeIterator, the constructor may
take in any variable which may help in the operation of the iterator. For example, the constructor in the column iterator of the DenseMatrix takes in the total number of columns in the matrix. It
uses this information in its Subtract, Increment, Decrement, and Advance functions.
Since the default constructor may be used to construct the iterator, there should be an assignment operator which takes in a VectorTypeIterator. This allows the member variable which stores the
VectorTypeIterator to be updated. Other functions to update any variables necessary for the iterator's operation should also be available. For example, there is a SetCols function to update the
total-number-of-columns variable in the column iterator of DenseMatrix.
The functions which should be present in the iterator depend on the iterator category, i.e., input, output, bidirectional, or random access. All iterator categories should implement operator* to
allow dereferencing, and operator-> to allow access to the matrix elements' member variables or functions. Friend comparison functions must also be implemented for all iterator categories.
A random access iterator should have pre- and post-increment and decrement operators. It should also have operator- to determine the distance between two similar iterators. In addition, it
should implement operator+=, operator-=, operator+, and operator- to allow the iterator to move by a given distance. These functions can be made generic by delegating the work to helper
functions such as Subtract, Increment, Decrement, and Advance. A random access iterator should also implement operator[] to allow it to act like an array.
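For a row-major dense layout, such a column iterator is essentially a strided pointer: each step jumps a full row forward. A reduced, hypothetical sketch showing only the random access essentials (the article's real class also carries the STL typedefs, a SetCols updater, and the Subtract/Increment/Decrement/Advance helpers):

```cpp
#include <cstddef>

// Column iterator over a row-major dense buffer: advancing by one
// element means advancing the underlying pointer by `cols` slots.
template<typename ValueT>
class ColIterator {
public:
    ColIterator(ValueT* p = 0, std::ptrdiff_t cols = 0) : p_(p), cols_(cols) {}

    ValueT& operator*() const { return *p_; }
    ValueT* operator->() const { return p_; }
    ValueT& operator[](std::ptrdiff_t n) const { return p_[n * cols_]; }

    ColIterator& operator++() { p_ += cols_; return *this; }
    ColIterator& operator--() { p_ -= cols_; return *this; }
    ColIterator& operator+=(std::ptrdiff_t n) { p_ += n * cols_; return *this; }
    ColIterator operator+(std::ptrdiff_t n) const { ColIterator t(*this); return t += n; }

    // Distance between two iterators walking the same column.
    std::ptrdiff_t operator-(const ColIterator& o) const { return (p_ - o.p_) / cols_; }

    friend bool operator==(const ColIterator& a, const ColIterator& b) { return a.p_ == b.p_; }
    friend bool operator!=(const ColIterator& a, const ColIterator& b) { return a.p_ != b.p_; }

private:
    ValueT* p_;            // current element
    std::ptrdiff_t cols_;  // stride: total columns in the matrix
};
```

This is also why the column iterator needs a real class while the row iterator can be a bare pointer: the stride `cols_` has to travel with the iterator.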
template<typename ValueT> class DenseMatrix
The implementation of the dense matrix has gone through several stages. Initially, deque<deque<ValueT> > was used to store the matrix elements. Then, deque<ValueT> was used. In the end, ValueT* was
used to increase the performance of the matrix, as the use of STL containers slows matrix calculations greatly. However, the use of raw pointers for storage slightly increases the difficulty of
implementing the DenseMatrix class. Care must be taken during the implementation to avoid memory leaks, which may also occur when exceptions are thrown. Since performance is crucial in scientific
programming, this disadvantage of using raw pointers is accepted in this case.
The implementation of the iterators is slightly different from the template described above. The column iterator follows the template, but the row iterator uses a raw pointer ValueT*. In the
initial version, a class following the template format was used for the row iterator as well. Later, it was found that in matrix calculations, repeated use of the functions in the row iterator
class actually slows down the calculations. Since the storage layout allows raw pointers to serve as row iterators, raw pointers are used to increase the speed of matrix calculations. The same
cannot be done for the column iterator, because iterating through a column requires knowledge of the total number of columns in the matrix, and this information must be stored and used; thus, a
proper class is needed for the column iterator. Since the row iterator is faster than the column iterator, the row iterator should be preferred with the DenseMatrix storage policy unless the
column iterator is absolutely necessary, so as to keep the matrix efficient.
Approximate analysis of the speed of matrix calculations using the DenseMatrix storage policy indicates that it is nowhere near as fast as hand-written loops over raw pointers or arrays. This is
to be expected, since the underlying matrix elements are stored under many layers of code, which add significant overhead. In order to make this matrix class appealing to scientific programmers who
require the fast speed of raw pointers, three member variables and a function, which normally should be private or protected, are made public. This goes against the principle of encapsulation, but
the end justifies the means. The three variables are rows_ and cols_, which contain the total number of rows and columns in the matrix, respectively, and matrix_, which is a raw pointer ValueT* containing
the matrix elements. The function which is made public is ResizeIterators. Scientific programmers who desire speed as well as convenience can use the various functions of the matrix class to help
maintain the matrix and, when doing calculations, can directly access the matrix storage variable matrix_ to perform fast lookups, assignments, or calculations, using it as an ordinary raw
pointer. ResizeIterators is exposed so that if the programmer ever resizes matrix_ or points it to another matrix manually, the internal iterators can be updated and the structure of the matrix
maintained.
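The intended usage pattern can be sketched with a minimal stand-in class (the member names rows_, cols_, matrix_, and ResizeIterators follow the article; the class itself and SumAll are illustrative, and copy control is omitted for brevity):

```cpp
#include <cstddef>

// Minimal stand-in illustrating the "exposed internals" trade-off:
// bookkeeping through member functions, a public raw buffer for speed.
struct FastDense {
    std::size_t rows_, cols_;
    double* matrix_;   // deliberately public, as in the article

    FastDense(std::size_t r, std::size_t c)
        : rows_(r), cols_(c), matrix_(new double[r * c]()) {}
    ~FastDense() { delete[] matrix_; }

    // In the real class this rebuilds the stored row/column iterators
    // after the user has touched matrix_ directly.
    void ResizeIterators() { /* bookkeeping elided in this sketch */ }

private:
    FastDense(const FastDense&);            // copying omitted in the sketch
    FastDense& operator=(const FastDense&);
};

// Hot loop written against the raw pointer, as a speed-hungry user would.
double SumAll(const FastDense& m) {
    const double* p = m.matrix_;
    const double* end = p + m.rows_ * m.cols_;
    double s = 0.0;
    for (; p != end; ++p) s += *p;
    return s;
}
```

The pattern is: fill or rearrange `matrix_` directly in the hot path, then call `ResizeIterators()` once so the class's internal iterator bookkeeping catches up.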
Implementation of MathP
The functions which are to be implemented in a MathP class must support addition, subtraction, dot multiplication, and scalar multiplication of the matrix when the data is of a numerical type. When
the data is of a non-numerical type, none of these functions should be available. There are two MathP classes which provide mathematical operations, and one which does not. The two that do implement
operator+=, operator+, operator-=, operator-, operator*=, and operator*.
The implementation of a MathP class presents a unique problem. The MathP class is a base class from which the final matrix class inherits. Because it is separate from the storage class, it has no
idea of how the matrix elements are stored. Since the storage class is also in charge of providing the accessor functions, the MathP class effectively has no way of accessing the matrix elements.
But to perform matrix calculations, it must have access to them. The way to overcome this problem is to pass a pointer to the final matrix class into the MathP class, so that it can access the
matrix elements. The MathP class must maintain a pointer variable that points to its child matrix. Thus, the final declaration of a MathP class is template<typename ValueT, typename MatrixT>,
where MatrixT is the type of the final matrix class. Since the MathP class cannot exist without a pointer to its child matrix, there is no default constructor for a MathP class which takes no
arguments; there is only a constructor which requires a pointer to the child matrix.
template<typename ValueT, typename MatrixT> class MathMatrix
The MathMatrix class is a relatively simple class. The main function of this class is to provide mathematical operators to the final matrix class so that matrix calculations can be done intuitively
using mathematical operators.
The mathematical operators are implemented in such a way that operator+ and operator- call operator+= and operator-=, respectively, to perform their work. The same technique is used to implement
operator*= and operator* for scalar multiplication. For dot multiplication, however, the situation is reversed: operator*= calls operator* to perform its work. The reason for this reversal is
efficiency. During matrix dot multiplication, a temporary matrix must be created to store the results of the multiplication. If this code resided in operator*= instead of operator*, then using
operator* would create three temporary matrices, instead of the usual two created in operator+ and operator-. This would reduce the efficiency of the matrix dot multiplication. By putting the
code in operator*, only two temporary matrices are created, which is more efficient. Approximate analysis reveals that this does not degrade the performance of operator*=, so there is no need to
duplicate the multiplication code in operator*=.
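The two delegation directions can be sketched with a small stand-in matrix class (illustrative only; this is not the library's MathMatrix):

```cpp
#include <cstddef>
#include <vector>

struct M {                               // tiny n x n dense matrix
    std::size_t n;
    std::vector<double> d;
    M(std::size_t n_, double v = 0.0) : n(n_), d(n_ * n_, v) {}

    M& operator+=(const M& r) {
        for (std::size_t k = 0; k < d.size(); ++k) d[k] += r.d[k];
        return *this;
    }
    // operator+ delegates to operator+= : copy the left operand, add in place.
    friend M operator+(M l, const M& r) { l += r; return l; }

    // Dot multiplication needs a fresh result matrix anyway, so the real
    // work lives in operator*, and operator*= delegates to it instead.
    friend M operator*(const M& l, const M& r) {
        M out(l.n);                      // the one unavoidable temporary
        for (std::size_t i = 0; i < l.n; ++i)
            for (std::size_t j = 0; j < l.n; ++j)
                for (std::size_t k = 0; k < l.n; ++k)
                    out.d[i * l.n + j] += l.d[i * l.n + k] * r.d[k * l.n + j];
        return out;
    }
    M& operator*=(const M& r) { M out = *this * r; d.swap(out.d); return *this; }
};
```

Putting the triple loop in `operator*` means `a * b` builds exactly one result matrix, and `a *= b` reuses that path with a swap rather than duplicating the multiplication code.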
The MathMatrix class implements checking of the matrix sizes before performing the various matrix calculations. It will throw an exception if two matrices cannot be added, subtracted, or dot
multiplied together because of mismatched sizes. This checking does not slow down the matrix calculations appreciably, but it can be removed if really necessary for performance.
template<typename ValueT, typename MatrixT> class MathMETMatrix
MathMETMatrix uses a technique called expression templates to increase the performance of certain types of matrix addition and subtraction. Expression templates were introduced by Todd Veldhuizen^5.
To explain expression templates, consider the problem below.
A, B, C, D, E, and F are matrices. Consider the code A = B + C + D + E + F;. To perform this calculation, the compiler will first add B and C together, then add the result to D, then to E, and
then to F, before putting the final result into A. Thus, several temporary matrices are created during the addition, and this decreases performance greatly. What expression templates hope to
achieve is to eliminate all these unnecessary temporary matrices by generating code equivalent to the following:
for (size_t i=0; i<rows; ++i)
    for (size_t j=0; j<cols; ++j)
        A(i,j) = B(i,j) + C(i,j) + D(i,j) + E(i,j) + F(i,j);
To achieve the above, templates are used, and several supporting classes are needed. I will not go into the theory of expression templates; rather, I will explain the creation of the various
supporting template classes.
The first template class that is needed is a wrapper class for the Operations types. It is named MET. In the MathMETMatrix class, there is only one Operations type, METElementBinaryOp, which
performs binary operations on the elements of the matrix. MET's main job is to wrap around METElementBinaryOp, providing access to the functions present in METElementBinaryOp. To do that, it
maintains a variable of the template class METElementBinaryOp. MET is necessary because it helps to bind different METElementBinaryOp template classes together; without MET, the whole concept of
expression templates cannot work.
The next template class is the Operations type. Presently, there is only METElementBinaryOp. METElementBinaryOp takes two template variables in its constructor and saves pointers to the data in
those variables. The two template variables can be either a matrix class or a MET template class. METElementBinaryOp's template argument list also takes an operator type class, which determines
the kind of operation to perform on the two template variables.
There are three operator types present in MathMETMatrix: METAdd, METSubtract, and METMultiply. METMultiply is not used yet, because a good algorithm implementing expression templates for dot
multiplication has not been found. METAdd and METSubtract are structures which provide a static function, Evaluate, which takes two matrix elements and returns the result of their addition or
subtraction.
To implement expression templates, operator+ and operator- are overloaded so that they return an instance of a MET object. There are an operator+ and an operator- which perform operations between
a MathMETMatrix-derived matrix and any other matrix. There are also friend operator+ and operator- which perform operations between any matrix and any MET object, and between two MET objects. Each
operator overload returns a MET object which contains information on the Operations type to apply (only METElementBinaryOp at the present moment), the operator type to perform (METAdd for
operator+ and METSubtract for operator-), and the variables to perform the operator type on (a matrix or an expression).
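Stripped to its essentials, the machinery can be sketched as follows. The names echo the article's classes, but this is a heavily reduced illustration, not the actual implementation: the MET wrapper layer is folded into the node class, the node holds references instead of pointers, and only addition is shown.

```cpp
#include <cstddef>
#include <vector>

// Operator type: a static Evaluate over two elements.
struct METAdd { static double Evaluate(double a, double b) { return a + b; } };

struct Mat {   // minimal dense matrix for the demo
    std::size_t rows, cols;
    std::vector<double> d;
    Mat(std::size_t r, std::size_t c, double v = 0.0) : rows(r), cols(c), d(r * c, v) {}
    double operator()(std::size_t i, std::size_t j) const { return d[i * cols + j]; }
    double& operator()(std::size_t i, std::size_t j) { return d[i * cols + j]; }
    std::size_t rowsize() const { return rows; }
    std::size_t colsize() const { return cols; }

    // Assignment from an expression: the single loop the technique aims for.
    template<typename ExprT> Mat& operator=(const ExprT& e) {
        Mat tmp(e.rowsize(), e.colsize());
        for (std::size_t i = 0; i < tmp.rows; ++i)
            for (std::size_t j = 0; j < tmp.cols; ++j)
                tmp(i, j) = e(i, j);             // whole expression, one element at a time
        d.swap(tmp.d); rows = tmp.rows; cols = tmp.cols;
        return *this;
    }
};

// Binary node: remembers its operands, evaluates lazily element by element.
template<typename Op, typename L, typename R>
struct METElementBinaryOp {
    const L& l; const R& r;
    METElementBinaryOp(const L& l_, const R& r_) : l(l_), r(r_) {}
    double operator()(std::size_t i, std::size_t j) const { return Op::Evaluate(l(i, j), r(i, j)); }
    std::size_t rowsize() const { return l.rowsize(); }  // delegate to left_, as in the article
    std::size_t colsize() const { return l.colsize(); }
};

// operator+ builds a node instead of computing a temporary matrix.
template<typename L, typename R>
METElementBinaryOp<METAdd, L, R> operator+(const L& l, const R& r) {
    return METElementBinaryOp<METAdd, L, R>(l, r);
}
```

With this, `A = B + C + D;` builds only lightweight nodes; the arithmetic happens in the single loop inside `operator=`, with no intermediate matrices.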
Thus, when the compiler meets the code above: A = B + C + D + E + F;, it will generate the following code:
A = MET< METElementBinaryOp< METAdd<value_type>, Matrix,
Matrix> >(METElementBinaryOp< METAdd<value_type>,
MET< METElementBinaryOp< METAdd<value_type>,
Matrix, Matrix> >(METElementBinaryOp< METAdd<value_type>,
MET< METElementBinaryOp< METAdd<value_type>, Matrix,
Matrix> >(METElementBinaryOp< METAdd<value_type>, MET<
METElementBinaryOp< METAdd<value_type>, Matrix, Matrix> >,
Matrix>, Matrix>(& MET< METElementBinaryOp<
METAdd<value_type>, Matrix, Matrix> >(METElementBinaryOp<
METAdd<value_type>, MET< METElementBinaryOp<
METAdd<value_type>, Matrix, Matrix> >, Matrix>,
Matrix>(& MET< METElementBinaryOp<
METAdd<value_type>, Matrix, Matrix>
>(METElementBinaryOp< METAdd<value_type>, MET<
METElementBinaryOp< METAdd<value_type>, Matrix, Matrix>
>(METElementBinaryOp< METAdd<value_type>, MET<
METElementBinaryOp< METAdd<value_type>,
Matrix, Matrix> >, Matrix>,
Matrix>(& MET< METElementBinaryOp<
METAdd<value_type>, Matrix, Matrix>
>(METElementBinaryOp< METAdd<value_type>,
MET< METElementBinaryOp< METAdd<value_type>,
Matrix, Matrix> >, Matrix>(& MET<
METElementBinaryOp< METAdd<value_type>,
Matrix, Matrix> >(METElementBinaryOp<
METAdd<value_type>, Matrix, Matrix>(&B,
&C)), &D)), &E)), &F));
Don’t worry about the above code. What it basically does is generate a MET object which encapsulates all the information required for computing B + C + D + E + F. All that remains is to
implement the calculation of the result and put it into matrix A.
To implement the calculation, the assignment operator (operator=) is overloaded in the final matrix class. The assignment operator takes a MET object as its argument. Since there are many
different types of MET objects, operator= has to be a template function. The resulting code is as follows:
template<typename ExprT> Self& operator=(MET<ExprT> expr)
{
    size_t sizeRow = expr.rowsize();
    size_t sizeCol = expr.colsize();
    Self tempMatrix(sizeRow, sizeCol);
    for (size_t i=0; i<sizeRow; ++i)
        for (size_t j=0; j<sizeCol; ++j)
            tempMatrix(i,j) = expr(i,j);
    this->swap(tempMatrix);
    return *this;
}
The code is relatively simple to understand. It gets the required total number of rows and columns from the MET object and creates a temporary matrix to hold the results of the matrix
calculation. Then it loops through each individual element and computes its result; using the example above, the result for each element is B(i,j) + C(i,j) + D(i,j) + E(i,j) + F(i,j). The
temporary matrix is then swapped with the current matrix (which is matrix A in the example), so that the results of the calculation are now assigned to the current matrix.
To digress slightly, this function is slower than that of the Matrix Expression Template^3 because of the creation of the temporary matrix. However, this temporary matrix is necessary to make the
operation more intuitive, i.e., it doesn’t matter what the size of matrix A is before the calculation, because it will be resized to the correct size required by the matrix calculation of B + C +
D + E + F. This is not the case for the Matrix Expression Template class: in that class, matrix A is required to be the same size as B, C, D, E, and F, otherwise the operation will fail. The
present matrix class could achieve better efficiency than the Matrix Expression Template if the same stance were adopted; the temporary matrix just needs to be removed, and tempMatrix(i,j)
replaced with (*this)(i,j). However, this was not done, because it is more important for the operation to be intuitive in this case than for it to be fast.
Now, let’s go back to our discussion of expression templates. From the operator= code, it can be seen that some functions need to be implemented in the various supporting expression template classes. Two functions, row.size and col.size, are needed to find the total number of rows and columns for the matrix receiving the results. An operator which takes in a matrix element position and outputs the result of the mathematical operation is also needed.
For the MET class, implementing these three functions is simple: they are just passed on to METElementBinaryOp for processing.
METElementBinaryOp implements row.size and col.size by passing them on to one of its two template variables. The variable left_ is chosen; there is no reason why right_ could not be chosen instead. If left_ is a matrix, it has the functions row.size and col.size and thus returns the number of rows and columns respectively. If left_ is a MET object, it passes the call on to its METElementBinaryOp, which passes it on to its own left_, and so forth recursively, until a matrix is reached.
METElementBinaryOp implements the element operator by applying it to the left_ and right_ variables. If left_ and right_ are matrices, they return the element at position (i,j). If they are MET objects, they pass the call on to their METElementBinaryOp until a matrix is finally reached and the element at position (i,j) is returned. The returned elements are then sent to the Evaluate function of the operator type, which adds or subtracts them and returns the result. Thus, the innocent-looking expr(i,j) actually performs all the necessary calculations on the individual elements (e.g., B(i,j) + C(i,j) + D(i,j) + E(i,j) + F(i,j)).
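To make the recursion above concrete, here is a minimal, self-contained sketch of the same idea. The names (Mat, Sum, expression_demo) are hypothetical stand-ins for the article's MET/METElementBinaryOp machinery, only addition of left-associated matrix sums is supported, and it is an illustration of the technique rather than the article's actual implementation.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Minimal dense matrix; operator= from any expression evaluates it element
// by element into a temporary and then swaps, as described in the text.
struct Mat {
    std::size_t rows, cols;
    std::vector<double> data;
    Mat(std::size_t r, std::size_t c, double v = 0.0)
        : rows(r), cols(c), data(r * c, v) {}
    double operator()(std::size_t i, std::size_t j) const { return data[i * cols + j]; }
    double& operator()(std::size_t i, std::size_t j) { return data[i * cols + j]; }
    std::size_t rowsize() const { return rows; }
    std::size_t colsize() const { return cols; }

    template <typename Expr>
    Mat& operator=(const Expr& e) {
        Mat tmp(e.rowsize(), e.colsize());
        for (std::size_t i = 0; i < tmp.rows; ++i)
            for (std::size_t j = 0; j < tmp.cols; ++j)
                tmp(i, j) = e(i, j);        // triggers the recursive evaluation
        std::swap(rows, tmp.rows);
        std::swap(cols, tmp.cols);
        data.swap(tmp.data);
        return *this;
    }
};

// Sum plays the role of MET/METElementBinaryOp: it holds references to its
// operands, delegates the sizes to its left operand, and evaluates lazily.
template <typename L, typename R>
struct Sum {
    const L& left;
    const R& right;
    Sum(const L& l, const R& r) : left(l), right(r) {}
    std::size_t rowsize() const { return left.rowsize(); }
    std::size_t colsize() const { return left.colsize(); }
    double operator()(std::size_t i, std::size_t j) const {
        return left(i, j) + right(i, j);    // recursion bottoms out at a Mat
    }
};

// operator+ builds the expression tree instead of computing anything.
inline Sum<Mat, Mat> operator+(const Mat& l, const Mat& r) {
    return Sum<Mat, Mat>(l, r);
}
template <typename L, typename R>
Sum<Sum<L, R>, Mat> operator+(const Sum<L, R>& l, const Mat& r) {
    return Sum<Sum<L, R>, Mat>(l, r);
}

// a = b + c + d builds a Sum<Sum<Mat,Mat>,Mat> and evaluates it in one pass.
inline double expression_demo() {
    Mat a(1, 1), b(2, 2, 1.0), c(2, 2, 2.0), d(2, 2, 3.0);
    a = b + c + d;
    return a(1, 1);
}
```

Note how assigning the expression to a, whatever its prior size, resizes it through the temporary-and-swap step, matching the behaviour of the article's operator=.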
That concludes the discussion on the implementation of expression templates for the matrix class. One problem which can be seen is that there is no proper way to ensure that all the matrices are of the same size before the matrix calculation is done. Thus, it is up to the user to ensure that the matrices are of the same size before performing the calculations; otherwise, unexpected errors may arise. Another problem is that expression templates may not speed up all matrix additions or subtractions. For short additions like A = B + C;, using expression templates may actually slow down the calculation. Thus, the user should experiment with both MathMatrix and MathMETMatrix to determine the best mathematical matrix to use for each situation. Both contain code to convert one form to the other, and both types of matrices can be used in the same calculation, so there is no penalty incurred in having both types available for use in the final matrix class. There is currently no good algorithm for implementing dot multiplication using expression templates, so at the present moment, dot multiplication for MathMETMatrix uses the same code as MathMatrix.
template<typename ValueT, typename MatrixT> NonMathMatrix
This is the simplest of the three MathP classes. It contains only the bare essential functions needed to present the same interface as MathMatrix and MathMETMatrix. Other than that, it contains no mathematical operators, so a matrix deriving from it cannot perform any mathematical operations and will give a compiler error if one is attempted.
Implementation of the Matrix
template<typename ValueT, template<typename> class MatrixTypeP, template<typename, typename> class MathP> class Matrix
We have reached the most important class, the Matrix class: the class that the user interacts with. The final functions present in this class depend on which storage policy and which mathematical policy it inherits from.
Even though Matrix inherits most of its functions from MatrixTypeP and MathP, constructors, the destructor, copy constructors, and assignment operators are not inherited, so Matrix must implement these itself. Since it is meant to be an STL-like container, it also has to define the various STL-like typedefs; these are obtained from MatrixTypeP by redefining them. The MathP policy has no constructors or assignment operators that must be mirrored by Matrix, so Matrix only has to define the constructors, copy constructors, and assignment operators present in MatrixTypeP and pass the operations on to it. Hence, there must be a default constructor which takes no arguments. There should be constructors which take in the initial size of the matrix as well as the initial values; the initial values can be a default value for all elements, or come from a matrix implemented using arrays or pointers. A copy constructor should also be present, along with an assignment operator for copying another similar matrix and an assignment operator for copying any matrix class.
In addition, Matrix implements one additional copy constructor not found in MatrixTypeP: a template copy constructor that takes in any matrix class. It is implemented by passing the matrix to be copied to the template operator= in MatrixTypeP, rather than to a similar copy constructor in MatrixTypeP. The reason is that a template copy constructor in MatrixTypeP would prevent the normal copy constructor in Matrix from working properly, because all matrices passed into MatrixTypeP would be processed by the template version instead of the normal version. To see why, note that the normal copy constructor in Matrix is declared as Matrix(const Self& m) : MathBase(this), StorageBase(m); as can be seen, StorageBase is passed a const reference to m. If there is no template copy constructor in StorageBase, m is processed by the normal copy constructor in StorageBase. If a template version were present, however, StorageBase would use it to process m, since m is of type Matrix and therefore does not exactly match the normal copy constructor, which requires the type StorageBase. Hence, a template copy constructor should not be present in StorageBase; otherwise, all copies may result in slow operations or errors. A template copy constructor is nevertheless useful, since there can be different Matrix types, each differing only in its MatrixTypeP or MathP, and it should be possible to copy one type to another. Thus, the template copy constructor is created only in the Matrix class, and it passes the operation to a template operator= in MatrixTypeP, which performs the copy using the common functions available to all matrices.
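The overload-resolution hazard described above can be reproduced in a few lines. The toy names below (StorageWithTemplate, ToyMatrix) are hypothetical; the point is that when a derived object is passed to a base that has a template constructor, the template deduces the derived type exactly and therefore beats the real copy constructor, which needs a derived-to-base conversion.

```cpp
#include <string>

// Base with both a normal copy constructor and a template "copy" constructor.
struct StorageWithTemplate {
    std::string chosen;
    StorageWithTemplate() : chosen("default") {}
    StorageWithTemplate(const StorageWithTemplate&) : chosen("normal copy") {}
    template <typename M>
    StorageWithTemplate(const M&) : chosen("template copy") {}
};

struct ToyMatrix : StorageWithTemplate {
    ToyMatrix() {}
    // Mirrors Matrix(const Self& m) : StorageBase(m): m has type ToyMatrix,
    // so the base's template constructor is an exact match and wins over
    // the normal copy constructor, which requires a derived-to-base binding.
    ToyMatrix(const ToyMatrix& m) : StorageWithTemplate(m) {}
};

inline std::string which_constructor() {
    ToyMatrix a;
    ToyMatrix b(a);
    return b.chosen;
}
```

Calling which_constructor() returns "template copy", confirming the article's claim that a template copy constructor in the storage base would hijack every copy; that is why the template version lives only in Matrix and forwards to a template operator= instead.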
Matrix also implements an additional assignment operator (operator=) which takes in a MET object. The reason is discussed above in MathMETMatrix.
Two other functions, which are useful for all matrices but not essential for MatrixTypeP operations, are implemented in Matrix: swaprows and swapcols. These two functions use only iterators for their operations, so that they are applicable to all types of MatrixTypeP matrices.
Implementation of RowVector and ColVector
template<typename ValueT,template<typename> class MatrixTypeP,template<typename,typename> class MathP> class RowVector
template<typename ValueT,template<typename> class MatrixTypeP,template<typename,typename> class MathP> class ColVector
Both RowVector and ColVector are derived from the Matrix class. Their existence is almost unnecessary, as the Matrix class can act as a row vector or a column vector by setting the number of rows or columns to 1, respectively. Also, the STL already implements three useful and powerful one-dimensional containers. So why is there a need for RowVector and ColVector? They were created for performance and ease of use.
RowVector and ColVector can be faster than STL containers when they use the DenseMatrix class as a storage policy. As mentioned earlier, DenseMatrix exposes its storage mechanism to
performance-hungry users. Thus, users can directly access the elements in the matrix using the exposed variable instead of passing through accessor functions. STL containers do not expose their
storage mechanism, thus the only way to access their elements is through accessor functions. This adds overhead and slows down performance.
For variables destined to be a vector throughout their lifetime, RowVector and ColVector are easier to use than the Matrix class. For these variables, either their number of rows or columns is always
1. Thus, the flexibility offered by the Matrix class in accessing the elements or obtaining iterators is unnecessary. To enable ease of use, RowVector and ColVector implement several functions
present in STL containers: operator[], at, push_back, resize, size, begin, end, rbegin, and rend. RowVector and ColVector also implement constructors requiring only a single size variable instead of
the two in the Matrix class (one for the number of rows and one for the number of columns).
Although RowVector and ColVector have simplified functions for accessing and modifying their elements and iterators, they retain all the functions available in the Matrix class in order to remain compatible with it. They are also inter-convertible with the Matrix class. However, when copying or assigning a Matrix to a RowVector or ColVector, the user must ensure that the matrix is a row vector or column vector respectively (i.e., its number of rows or columns must be 1). Otherwise, the RowVector or ColVector may end up holding a matrix instead. This will not cause any errors when using the class, and the user may not even notice it, but it is logically wrong, as a row vector or column vector should not store a matrix.
Functions Operating On Matrix
template<typename MatrixT> struct Transpose
Transpose is a functor which finds the transpose of a matrix (a transposed matrix is one whose rows and columns are swapped). It has two overloaded operators: the first takes in a matrix and returns the transposed matrix; the second takes in both a matrix to be transposed and a matrix to hold the result. The first operator is suitable for use in matrix calculations, where it can be used intuitively, or in any appropriate STL algorithm. The second operator is faster, and should be used when the transpose of a matrix needs to be found but need not be used immediately in calculations.
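The two-overload pattern shared by all the functors below can be sketched as follows. SimpleMatrix here is a hypothetical minimal matrix, not the article's Matrix class; the shape of Transpose is what matters.

```cpp
#include <cstddef>
#include <vector>

// Minimal matrix used only to demonstrate the functor pattern.
struct SimpleMatrix {
    std::size_t rows, cols;
    std::vector<double> data;
    SimpleMatrix(std::size_t r = 0, std::size_t c = 0)
        : rows(r), cols(c), data(r * c) {}
    double operator()(std::size_t i, std::size_t j) const { return data[i * cols + j]; }
    double& operator()(std::size_t i, std::size_t j) { return data[i * cols + j]; }
};

template <typename MatrixT>
struct Transpose {
    // Convenient form: returns the transposed matrix by value, suitable
    // for use inside larger expressions or STL algorithms.
    MatrixT operator()(const MatrixT& m) const {
        MatrixT result;
        (*this)(m, result);
        return result;
    }
    // Faster form: writes into a caller-supplied result matrix, avoiding
    // the extra copy when the transpose is stored for later use.
    void operator()(const MatrixT& m, MatrixT& result) const {
        result = MatrixT(m.cols, m.rows);
        for (std::size_t i = 0; i < m.rows; ++i)
            for (std::size_t j = 0; j < m.cols; ++j)
                result(j, i) = m(i, j);
    }
};

inline double transpose_demo() {
    SimpleMatrix m(2, 3);
    m(0, 2) = 7.0;                                  // row 0, column 2
    SimpleMatrix t = Transpose<SimpleMatrix>()(m);  // convenient form
    return t(2, 0);                                 // now at row 2, column 0
}
```

Diagonal, Covariance, Power, Mean, Median, Sum, and CumulativeSum all follow this same two-overload shape, differing only in the computation performed.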
template<typename MatrixT> struct Diagonal
Diagonal is a functor which finds the main diagonal of a matrix or puts a vector into the main diagonal of a matrix. It has two overloaded operators: the first takes in a matrix and returns the diagonal matrix; the second takes in both a matrix and a matrix to hold the result. The first operator is suitable for use in matrix calculations, where it can be used intuitively, or in any appropriate STL algorithm. The second operator is faster, and should be used when the diagonal of a matrix needs to be found but need not be used immediately in calculations.
template<typename MatrixT> struct Covariance
Covariance is a functor which finds the covariance of a matrix. It has two overloaded operators: the first takes in a matrix and returns the covariance matrix; the second takes in both a matrix and a matrix to hold the result. The first operator is suitable for use in matrix calculations, where it can be used intuitively, or in any appropriate STL algorithm. The second operator is faster, and should be used when the covariance of a matrix needs to be found but need not be used immediately in calculations.
template<typename MatrixT> struct Power
Power is a functor which calculates the matrix with all its elements raised to a given power. It has two overloaded operators: the first takes in a matrix and returns the powered matrix; the second takes in both a matrix and a matrix to hold the result. The first operator is suitable for use in matrix calculations, where it can be used intuitively, or in any appropriate STL algorithm. The second operator is faster, and should be used when the powered matrix needs to be found but need not be used immediately in calculations.
template<typename MatrixT> struct Mean
Mean is a functor which finds a row vector containing the mean of each column of a matrix. It has two overloaded operators: the first takes in a matrix and returns the row vector; the second takes in both a matrix and a matrix to hold the result. The first operator is suitable for use in matrix calculations, where it can be used intuitively, or in any appropriate STL algorithm. The second operator is faster, and should be used when the row vector needs to be found but need not be used immediately in calculations.
template<typename MatrixT> struct Median
Median is a functor which finds a row vector containing the median of each column of a matrix. It has two overloaded operators: the first takes in a matrix and returns the row vector; the second takes in both a matrix and a matrix to hold the result. The first operator is suitable for use in matrix calculations, where it can be used intuitively, or in any appropriate STL algorithm. The second operator is faster, and should be used when the row vector needs to be found but need not be used immediately in calculations.
template<typename MatrixT> struct Sum
Sum is a functor which finds a row vector containing the sum of each column of a matrix. It has two overloaded operators: the first takes in a matrix and returns the row vector; the second takes in both a matrix and a matrix to hold the result. The first operator is suitable for use in matrix calculations, where it can be used intuitively, or in any appropriate STL algorithm. The second operator is faster, and should be used when the row vector needs to be found but need not be used immediately in calculations.
template<typename MatrixT> struct CumulativeSum
CumulativeSum is a functor which finds a matrix containing the cumulative sums of the elements of a matrix. It has two overloaded operators: the first takes in a matrix and returns the cumulative sum matrix; the second takes in both a matrix and a matrix to hold the result. The first operator is suitable for use in matrix calculations, where it can be used intuitively, or in any appropriate STL algorithm. The second operator is faster, and should be used when the cumulative sum matrix needs to be found but need not be used immediately in calculations.
This concludes the discussion on the Matrix class. The Matrix class makes use of concepts like policies and expression templates in its implementation, and is designed to be generic, reusable, and easily extendable. Though some of the concepts may not be implemented correctly or in the best possible ways, it is an attempt to consolidate various C++ techniques. This article serves as an example of how to use the Matrix class, as documentation for all the various classes and functions, and as a record of the various phases and thoughts in the design and implementation of the classes. It is hoped that this article will provide sufficient information for all who wish to use, modify, or extend this class, and that the brief explanations of policies, iterators, and expression templates will spur those who are interested to read more about these techniques and use them in their own work.
Any comments about this class are greatly appreciated, and any requests for new functions will be considered and added as soon as possible, if appropriate.
1. William H. Press, Numerical Recipes in C++: The Art of Scientific Computing (Cambridge University Press, 2002).
2. Matrix Template Library.
3. Matrix Expression Template.
4. Andrei Alexandrescu, Modern C++ Design (Addison-Wesley, 2001).
5. Todd Veldhuizen, "Expression Templates", C++ Report, Vol. 7, No. 5, June 1995, pp. 26-31.
Change logs
• Version 1.1
□ Removed the identity matrix implementation because it was not well implemented.
• Version 1.2
□ Added RowVector and ColVector classes.
□ Fixed errors involving reverse iterators.
□ Changed function empty().
• Version 1.3
□ Added member spaces row and col to StoragePolicy and Expression Templates.
□ Replaced rowsize, colsize, rowbegin, rowend, colbegin, and colend with row::size, col::size, row(i).begin, row(i).end, col(i).begin, and col(i).end.
□ Added row::insert, row::erase, row::push_back, row::pop_back, col::insert, col::erase, col::push_back, and col::pop_back.
□ Added pop_back for RowVector and ColVector.
• Version 1.4
□ Changed member spaces row and col to Row and Col to prevent conflicts causing row(i) not to work when RowVector or ColVector is declared.
• Version 1.5 (16 April 2006)
□ Fixed code so that it is no longer dependent on STLport and will work on Visual Studio 2005, and GCC 3.3.2.
□ Because of the fix, the constness of the row and col iterators may not be fully enforced.
□ Added row::push_front, row::pop_front, col::push_front, and col::pop_front.
□ Added push_front and pop_front for RowVector and ColVector.
Cumberland, RI Prealgebra Tutor
Find a Cumberland, RI Prealgebra Tutor
...Some topics I work with include solving equations, absolute value, inequalities, proportions and factoring. The study of anatomy involves a good deal of learning terminology. I try to help
students by teaching them the meaning of common anatomical prefixes, suffixes, and root words.
15 Subjects: including prealgebra, chemistry, physics, geometry
...Though I did not tutor much during college, I started up with tutoring again with a company as soon as I had my degree (and more free time). I enjoy working with children of all ages and of
all levels, whether it is to help them catch up when they have fallen behind or help them ace the class. M...
17 Subjects: including prealgebra, calculus, geometry, statistics
...I started teaching again in 2008 and have been steadily growing my own studio of students. I have been salsa dancing since 2003 and I have been teaching salsa since 2006. I enjoy going out
salsa dancing at various venues and I have performed at a few places.
13 Subjects: including prealgebra, English, writing, geometry
...Thank you for considering me to help your student achieve their academic goals -- I hope to hear from you soon!I have studied music for over 12 years by means of intensive piano and French
horn lessons and am an experienced performer. I held the position of principal horn in the Thames Valley Yo...
21 Subjects: including prealgebra, English, reading, French
...I received a high score on the U.S. AP history test in high school, then continued studying American history from the beginning of the twentieth century to the present in my undergraduate
classes. Math and science used to be my passions in high school and Biology was my favorite.
17 Subjects: including prealgebra, chemistry, geometry, biology
☆ com.hp.hpl.jena.enhanced.EnhNode
□ Method Summary
Modifier and Type Method and Description
<T extends RDFNode> T as(Class<T> t)
Allow subclasses to implement RDFNode and its subinterfaces.
Node asNode()
Answer the graph node that this enhanced node wraps.
<X extends RDFNode> boolean canAs(Class<X> t)
API-level method for polymorphic testing.
boolean equals(Object o)
An enhanced node is equal to another enhanced node n iff the underlying nodes are equal.
EnhGraph getGraph()
Answer the graph containing this node.
int hashCode()
The hash code of an enhanced node is defined to be the same as that of the underlying node.
boolean isAnon()
An enhanced node is Anon[ymous] iff its underlying node is Blank.
boolean isLiteral()
An enhanced node is Literal iff its underlying node is too.
boolean isResource()
An enhanced node is a resource if its node is a URI node or a blank node.
boolean isURIResource()
An enhanced node is a URI resource iff its underlying node is too.
boolean isValid()
Answer true iff this enhanced node is still underpinned in the graph by triples appropriate to its type.
<X extends RDFNode> X viewAs(Class<X> t)
Answer a facet of this node, where that facet is denoted by the given type.
□ Method Detail
☆ getGraph
public EnhGraph getGraph()
Answer the graph containing this node
An enhanced graph
☆ isAnon
public final boolean isAnon()
An enhanced node is Anon[ymous] iff its underlying node is Blank.
☆ isLiteral
public final boolean isLiteral()
An enhanced node is Literal iff its underlying node is too.
☆ isURIResource
public final boolean isURIResource()
An enhanced node is a URI resource iff its underlying node is too.
☆ isResource
public final boolean isResource()
An enhanced node is a resource if its node is a URI node or a blank node.
☆ viewAs
public <X extends RDFNode> X viewAs(Class<X> t)
Answer a facet of this node, where that facet is denoted by the given type.
t - A type denoting the desired facet of the underlying node
An enhanced node that corresponds to t; this may be this Java object, or a different object.
☆ as
public <T extends RDFNode> T as(Class<T> t)
allow subclasses to implement RDFNode & its subinterface
☆ canAs
public <X extends RDFNode> boolean canAs(Class<X> t)
API-level method for polymorphic testing
☆ hashCode
public final int hashCode()
The hash code of an enhanced node is defined to be the same as the underlying node.
hashCode in class Object
The hashcode as an int
☆ equals
public final boolean equals(Object o)
An enhanced node is equal to another enhanced node n iff the underlying nodes are equal. We generalise to allow the other object to be any class implementing asNode, because we allow other implementations of Resource than EnhNodes, at least in principle. This is deemed to be a complete and correct interpretation of enhanced node equality, which is why this method has been marked final.
Specified by:
equals in class Polymorphic<RDFNode>
o - An object to test for equality with this node
True if o is equal to this node.
Licenced under the Apache License, Version 2.0
the encyclopedic entry of upper-bound
The least upper bound axiom, also abbreviated as the LUB axiom, is an axiom of real analysis stating that if a nonempty subset of the real numbers has an upper bound, then it has a least upper bound. It is an axiom in the sense that it cannot be proven within the system of real analysis. However, like other axioms of classical fields of mathematics, it can be proven from Zermelo-Fraenkel set theory, an external system. This axiom is very useful since it is essential to the proof that the real number line is a complete metric space. The rational number line does not satisfy the LUB axiom and hence is not complete.
An example is $S = \{x\in\mathbb{Q} \mid x^2 < 2\}$. 2 is certainly an upper bound for the set. However, this set has no least upper bound in $\mathbb{Q}$: for any upper bound $x\in\mathbb{Q}$, we can find another upper bound $y\in\mathbb{Q}$ with $y < x$.
Proof that the real number line is complete
Let $\{s_n\}_{n\in\mathbb{N}}$ be a Cauchy sequence. Let $S$ be the set of real numbers that are bigger than $s_n$ for only finitely many $n$. Let $\varepsilon\in\mathbb{R}^+$, and let $N$ be such that $\forall n,m\ge N$, $|s_n - s_m| < \varepsilon$. So, the sequence passes through the interval $(s_N-\varepsilon,\, s_N+\varepsilon)$ infinitely many times and through its complement at most a finite number of times. That means that $s_N-\varepsilon\in S$ and hence $S\not=\emptyset$. Clearly, $s_N+\varepsilon$ is an upper bound for $S$. By the LUB axiom, let $b$ be the least upper bound; then $s_N-\varepsilon\le b\le s_N+\varepsilon$. By the triangle inequality, $\forall n\ge N$, $d(s_n,b)\le d(s_n,s_N)+d(s_N,b)\le\varepsilon+\varepsilon = 2\varepsilon$. Therefore $s_n\longrightarrow b$, and so $\mathbb{R}$ is complete. Q.E.D.
See also: supremum; Dedekind cut; Completeness (order theory)
References: upper and lower bounds (including the LUB axiom) at Springer's Encyclopedia of Mathematics
MathGroup Archive: August 1999 [00268]
[Date Index] [Thread Index] [Author Index]
Re: Defining a function P[A_ | B_]:=f[A,B] overriding Alternatives
• To: mathgroup at smc.vnet.net
• Subject: [mg19241] Re: [mg19205] Defining a function P[A_ | B_]:=f[A,B] overriding Alternatives
• From: Carl Woll <carlw at u.washington.edu>
• Date: Wed, 11 Aug 1999 02:06:53 -0400
• Organization: Physics Department, U of Washington
• References: <199908100652.CAA18375@smc.vnet.net.>
• Sender: owner-wri-mathgroup at wolfram.com
Why don't you use one of Mathematica's other characters instead, such as esc |
esc, which can also be entered in as \[VerticalSeparator]. One thing to note
here is that
p[a_ esc | esc b_]
is automatically parsed as p[VerticalSeparator[a_,b_]].
Carl Woll
Dept of Physics
U of Washington
Peltio wrote:
> Is there a general way to call a function with arguments separated not by
> commas but by user defined symbols (particularly Mathematica sacred symbols)
> ?
> _________________________________________________
> Here's what I mean:
> I was willing to define a function for the calculation of the conditional
> probabilty, via the Bayes formula, and I wanted to define it in such a way
> that the function call would be:
> p[A|B]
> but Mathematica correctly interpreted the lhs of the definition
> p[A_|B_] as if there were two alternative arguments.
> I can unprotect Alternatives and clear it and that would work,
> Unprotect[Alternatives];
> Clear[Alternatives];
> p[A_|B_]:= {A,B}
> p[A|B]
> {A,B}
> but, is there a less traumatic way to achieve this without fully sacrificing
> Alternatives? It would be fine to avoid its evaluation only when inside a
> [ ] function, by means of upvalues. But the following does not seems to
> work:
> Unprotect[Alternatives];
> Alternatives /: p[A_|B_]:={A,B}
> And yet I wanted to be able to define my function without showing the
> Alternatives /: part. Is there a way to inhibit the Alternatives evaluation
> inside p[] in a single statement (that could be hidden in a initialization
> cell)?
> Thanks for any help.
> Peltio
> peltio AT usa DOT net
> Please don't do a reply to : spammers forced me to hide my address. Use
> instead the address shown above substituting @ to AT and . to DOT.
ShareMe - free Pocket Pc Calculator download
Pocket Pc Calculator
From Title
1. pocket Loan calculator for pocket pc 2002 - Business & Productivity Tools/Accounting & Finance
... Pocket Loan Manager is an easy-to-use program that helps you keep track of multiple loans for all your customers or properties and keep your paid log, paid amount, and loan balance. It is also a useful tool for calculating monthly payments when you need a loan for your car, house, or business. This program includes a loan calculator and amortization schedules. It also displays paid amount, remaining amount, total interest, and principal. You can try various payments and export calculation results to ...
2. IP calculator for pocket pc - Utilities/Other Utilities
... IP Calculator takes an IP address and netmask and calculates the resulting broadcast and network addresses. By giving the IP address and netmask, you can design subnets. It is also intended to be a teaching tool and presents the subnetting results as easy-to-understand binary values. Features: (1) Subnet Mask Calculator: this calculator will calculate what subnet mask to use to get the required number of networks or nodes. You'll need to enter the network portion of an IP address and the required number of ...
3. Capacitor calculator plus for pocket pc - Educational/Other
... Capacitor calculator Plus is designed to give the value of color coded poly capacitors. They come in various shapes and types. Most capacitors actually have the numeric values stamped on
them. The capacitor's first and second bands are the significant number bands, the third is the multiplier, followed by the percentage tolerance band and the voltage band. Select the band
values from the individual fields. After clicking on Calculate, the calculator will use the selected conditions for all of ...
4. Idea Value calculator (pocket pc OS) - Educational/Other
... The idea value calculator (IVC) allows the value of an idea to be estimated.The method of calculation is based on the assumption that the value of an idea is equal to the sum of all
royalties that would be collected on an equivalent valid patent based on this idea. It is assumed that the patent will not be invalid during its life. If there is a need to estimate an idea's
value under the assumption that the equivalent patent would be invalid after t years, a user needs to decrease the term (on t ...
5. Photo calculator for pocket pc - Home & Personal/Misc
... This software with its unique features for the still photographer, gives instant calculations including the angle of view for selected lens and film format combination as well as depth of
field. In addition to near and far focus, total depth of field and hyperfocal distance, the width and height of the image are also calculated. The user can select between 30 digital formats,
9 digital backs and 12 conventional film formats. ...
6. Calories calculator for pocket pc - Educational/Teaching Tools
... The Calories calculator for pocket pc is a food database for calculating the amount of calories. The application lets you know the calories of foods and calculates the amount of
calories. Moreover, it tells you how many calories are in each meal, and you can limit the amount of calories that you need for each meal. The application contains the standard food database and you
can add more foods into the database yourself. ...
7. Smart Investor's calculator for pocket pc - Utilities/Other Utilities
... Smart Investor's calculator (SIC) allows users to find optimal investment portfolios through solving allocation problems with the selected criterion and restrictions on shares of
investment in each instrumentgroup. In other words, SIC helps investors to find investment portfolios that, maximize returns and limit risks.A user may choose to maximize a pre-tax or
post-tax rate of return on a portfolio, subject to restrictions ...
8. Forex Arbitrage calculator for pocket pc - Business & Productivity Tools/Accounting & Finance
... Forex Arbitrage calculator allows to determine risk free arbitrage opportunities on forex cross rates. ...
9. Financial calculator Suite for pocket pc - Utilities/Other Utilities
... Financial calculator Suite contains 7 Financial Calculators that you frequently use. Let the calculators perform all of the complex calculations for you.***Features:***1)Minimum Payment
Credit-Card Interest Calculator2)Savings Goals calculator 3)Deposit Savings Calculator4)Debt Investment Calculator5)Loan Calculator6)Mortgage Calculator7)Wage Conversion calculator
***Descriptions***>Minimum Payment Credit-Card Interest calculator. This calculator will show you how much interest you will end up paying ...
10. Steam Table calculator for pocket pc - Educational/Other
... Steam Table MX is an application that calculates temperature, pressure, specific volume, and enthalpy for saturated steam and water. This innovative program provides a fast and
easy-to-use reference tool. This is an essential tool for students, engineers, technicians, process operators, and power plant personnel who frequently use hard copy steam
tables.***Features***1) Vapor pressure (PSIA and Bar)2) Enthalpy of saturated steam vapor and water.3) Input temperature in multiple ...
Pocket Pc Calculator
From Short Description
1. calculator - Educational/Mathematics
... calculator Pro is a calculator tool for windows like a common pocket calculator. calculator Pro gives you the possibility to calculate substantial mathematical terms in an easy and
efficient way. Therefore calculator Pro supports a lot of mathematical functions like trigonometric, hyperbolic functions etc. The advantage of the calculator Pro compared with the other
calculators like the Standard Windows calculator for example, is that you can see the term you want to solve and you can combine ...
2. calculator Pro - Educational/Mathematics
... calculator Pro is a calculator tool for windows like a common pocket calculator. calculator Pro gives you the possibility to calculate substantial mathematical terms in an easy and
efficient way. Therefore calculator Pro supports a lot of mathematical functions like trigonometric, hyperbolic functions etc. The advantage of the calculator Pro compared with the other
calculators like the Standard Windows calculator for example, is that you can see the term you want to solve and you can combine ...
3. calculator-7 - Business & Productivity Tools/Accounting & Finance
... calculator-7 is full analog of 24 digits pocket calculator with many additional features. For example, calculate total from clipboard (copy data to clipboard from table or text and press
one button), customize an appearance (scale, fonts, background color etc.). ...
4. MatrixCalculator - Educational/Mathematics
... Morello Matrix calculator is a simple, pocket calculator style program which performs most standard matrix calculations. It supports matrices up to 20 elements square (shareware version
is limited to 2 elements square). It also allows data to be exchanged with most common spreadsheets and other programs via the clipboard or text files ...
5. tApCalc Scientific tape calculator(Arm) - Home & Personal/Misc
... tApCalc SciFi is a handy Scientific calculator for pocket pc. It provides a simulated paper tape that allows users to record calculations and save them for future reference. Paper tape
simulation has many advantages. You can start a calculation and continue adding new calculations till you have entered all data, you can edit wrong data in the calculation without re-entering
all data again, you can recalculate the calculations recorded and edited on the tape. The tape can also be saved as a txt ...
6. tApCalc Suite, Financial,Desk,Scientific tape Calculators (Arm,xScale) - Business & Productivity Tools/Accounting & Finance
... tApCalc Suite is a handy calculator value pack for pocket pc. It contains all 3 calculators i.e. tApCalc Desk, tApCalc Finance and tApCalc Scientific. All 3 calculators provide: - Paper
tape to save,load,edit,run calculations done before - Print and beam options for paper tapes - Easy access to all 3 calculators from New menu - Preferences specific to functionality - They
provide all standard Financial,Accounting, Scientific keys ...
7. tApCalc Financial tape calculator(Arm & xScale - Business & Productivity Tools/Accounting & Finance
... tApCalc Finance is a handy financial calculator for pocket pc. It provides a simulated paper tape that allows users to record calculations and save them for future reference. Paper tape
simulation has many advantages. You can start a calculation and continue adding new calculations till you have entered all data, you can edit wrong data in the calculation and recalculate
without re-entering all data again. The tape can also be saved as a txt file for editing with other applications such as ...
8. MxCalc 10B Financial calculator PocketPC - Business & Productivity Tools/Accounting & Finance
... Windows Mobile pocket pc Business ( Financial ) calculator software. Calculate loan payments, interest rates, amortization, discounted cash flow analysis, interest rate conversions,
standard deviation, percent, percent change, mark-up as a percent of cost price, margin as a percent of price, forecasting based on linear regression & many other functions. Ideal for
Finance, Accounting, Economics & Business Studies related work. Free Desktop version ...
9. EngCalc- Engineering calculator WM PPC - Business & Productivity Tools/Accounting & Finance
... Engineers Engineering Formula calculator Software for Windows Mobile pocket pc. Comprehensive Unit Converter , 50000+ Conversions, Scientific Evaluator,100+ Property tables, Periodic
chart. 500+ formulae calculators catering to Mechanical Eng., Hydraulic Eng., Structural Eng., Machine Design, Electrical Eng., Fluid Mechanics, Heat and Mass Transfer, Thermodynamics, Pulp
and Paper, HVAC, Heat Exchanger and more... ...
10. Calc98 - Business & Productivity Tools/Accounting & Finance
... Calc98 is a pocket calculator simulator program for the Microsoft Windows operating system. It includes a comprehensive set of conversions, constants and physical property data, a
built-in periodic table of the elements, number base conversions, vectors, matrices and complex numbers. It is especially designed for scientific and engineering users, students and teachers
and it is also widely used in finance and medicine. ...
Pocket Pc Calculator
From Long Description
1. HP25c - BlackBerry Storm RPN calculator - Utilities/Other Utilities
... This application emulates an HP25c RPN calculator on a BlackBerry Storm/Storm 2. It is 100% compatible with the classic pocket calculator of the late 1970s, is fully programmable, and
will include a program librarian to save and recall user programs. <br>100% compatible with the classic HP25c pocket calculator<br>Fully programmable<br>Expanded program and register memory
2. calculator.NET - Utilities/System Utilities
... calculator.NET is an enhanced version of the standard calculator in Microsoft Windows. It uses RPN (Reverse Polish Notation), which is used by many pocket calculators, e.g. from HP. If
you have used a HP 11/28/32/42/48, or any other RPN calculator, you will feel right at home. Some of the enhancements included: RPN, extended function set, and customizable user interface.
This program is completely free. ...
3. AB-Euro - Business & Productivity Tools/Accounting & Finance
... AB-Euro is the Free Currency calculator for your desktop. You may convert between all Euro-based and 28 foreign currencies. A quick pocket calculator is integrated. Just minimize AB-Euro
to a small icon in your task bar, so it's always available. The extravagant look makes AB-Euro an eye-catcher. Includes automatic update of the foreign rates via the internet. ...
4. tApCalc Financial tape calculator(Arm) - Business & Productivity Tools/Accounting & Finance
... tApCalc Finance is a handy financial calculator for pocket pc with simulated paper tape that allows users save calculations, load,edit and rerun them without reentering all data again.
Tapes can also be directly beamed and printed from within the program. Features include:- Time Value of Money, NPV,IRR,BOND Price/Yield,Depreciation, Amortization along with all standard
calculator keys and memory keys. ...
5. Tibi's Mathematics Suite - Utilities/Other Utilities
... The Tibi's Mathematics Suite is a package of various mathematical applications. Included in the package are: a scientific calculator, a graphing calculator, a matrix calculator, and a
factorizator. <br><br>Requirements: .NET Framework 4.0 <br>Scientific calculator<br>Graphing calculator<br>Matrix calculator<br>Numeric factorization ...
6. SeeThru calculator - Desktop Enhancements/Themes & Wallpaper
... SeeThru calculator v7.1, the enhanced Windows desktop calculator. The calculator utilizes the desktop wallpaper as its background to produce a fun, customizable interface. SeeThru
calculator comes with all standard Windows calculator features, along with 50 unit conversions, cooking conversions, calendar, date stamp, printable tape, fraction converter, and sales tax
calculator for home or work use. ...
7. DVD to pocket pc converter - Multimedia & Design/Animation
... Easiestutils DVD to pocket pc converter - is a powerful, easiest and fastest DVD to pocket pc ripper application for converting DVDs to pocket pc playable movie and video with excellent
output quality. You can enjoy your favorite DVD on your pocket pc as a portable DVD Player. With integrated advanced MPEG4 encoder, it is faster than other DVD to pocket pc Converter
software. ...
8. pocket pc Contacts Synchronizer - Home & Personal/Clocks, Calendars & Planners
... With pocket pc Contacts Synchronizer, you can view and synchronize desktop databases, such as MDB, Oracle, MySQL. One unique feature is that you can transfer data to a specific category of
pocket pc Contacts Synchronizer contacts, such as Business, Personal etc. It follows a very simple approach. pocket pc Contacts Synchronizer provides you with two programs. One works on a desktop
computer running Microsoft Windows and the other on the pocket pc device. You just have to use the desktop side program (pocket pc ...
9. java scientific calculator - Utilities/Other Utilities
... A desktop calculator/scientific calculator. Use as desktop/graphical calculator, web applet, or to show calculator operations: arithmetic, trigonometry, logarithms, complex numbers,
memory, statistics. Features four bases, copy and paste, and history ...
10. Troy Ounce Conversion Tool - Home & Personal/Misc
... Troy Oz Conversion Tool converts, computes, and gives the complete breakdown of Troy ounces in Kilos, Oz, Dwt, grams, and grains. Just enter the weight and Troy Oz Conversion Tool will
do the rest. Our other related products are Gold calculator, Gold calculator Gold, Gold calculator Lite, Silver calculator and Diamonds calculator. ...
Hermiticity of alpha (Dirac equation)
Thank you. I think I need to clarify something though. There are three matrices. We can ignore [tex]\left[\begin{array}{cc}0&\sigma_x\\\sigma_x&0\end{array}\right][/tex] and [tex]\left[\begin{array}{cc}0&\sigma_z\\\sigma_z&0\end{array}\right][/tex] because they are real and symmetric and hence hermitian, and focus on [tex]\left[\begin{array}{cc}0&\sigma_y\\\sigma_y&0\end{array}\right][/tex]
If I treat it as a 2x2 matrix, and use the definition of hermiticity (that you stated), then [tex]\sigma_y[/tex] must be equal to its complex conjugate. This is not the case. However, it IS equal to
its hermitian conjugate, but that's not what the definition of hermiticity states!
Thus, to agree with the definition, [tex]\mathbf\alpha[/tex] must be a 4x4 matrix. This would mean that the zeros in [tex]\mathbf\alpha[/tex] are really 2x2 null matrices.
The only other way I can see it working is if the definition of hermiticity is only valid when the elements of the matrix are all scalars. In this case, the definition would need to be modified to say
that the hermitian conjugate is the transposed matrix of hermitian conjugates, rather than complex conjugates. This would obviously require the condition that the hermitian conjugate of a scalar is
the same as its complex conjugate, to avoid the argument being circular.
To sum up, the main point of my question is whether [tex]\mathbf\alpha[/tex] as defined in my first post is actually shorthand for a 4x4 matrix.
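A quick numerical check of this point (a sketch in NumPy, not part of the original thread): entrywise complex conjugation fails for [tex]\sigma_y[/tex] alone, but reading the zeros as 2x2 null blocks gives a 4x4 matrix that is hermitian, because [tex]\sigma_y[/tex] equals its own conjugate transpose.

```python
import numpy as np

sigma_y = np.array([[0, -1j], [1j, 0]])

# sigma_y is NOT equal to its complex conjugate (the naive 2x2 reading fails)
assert not np.allclose(sigma_y, sigma_y.conj())

# ...but alpha_y = [[0, sigma_y], [sigma_y, 0]], with the zeros read as
# 2x2 null blocks, IS hermitian as a 4x4 matrix
zero = np.zeros((2, 2))
alpha_y = np.block([[zero, sigma_y], [sigma_y, zero]])
assert np.allclose(alpha_y, alpha_y.conj().T)
```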
Cross and dot product vectors
July 5th 2010, 12:44 AM
Cross and dot product vectors
$b - 2c = \lambda a$
given that |a|=|c|=1 and |b|=4
and the angle between b and c is $\arccos \frac{1}{4}$
show that $\lambda = +4$ or $\lambda = -4$
For each of these cases find the cosine of the angle between a and c.
I don't know where to begin
July 5th 2010, 02:50 AM
I suppose your first step would be to work out what to do with the arccos. So, where should that go? What does this tell you?
Secondly, you should work out that $a \times (b-2c) = 0$, and so that means the second proposition makes sense - the vectors are in the same direction. That is to say, they only differ by a scalar.
What you now want to do is prove that $|b-2c|=4$ (why?). So, work out what $\sqrt{(b-2c).(b-2c)}$ is, and you will be done (why?)! (You will need the information you gleaned from my first line.)
Does that make sense?
July 5th 2010, 03:11 AM
I'm not sure I exactly understand. Cross products have the sin value multiplied by the magnitudes, right? So that's confusing me. I did work out $a \times (b-2c) = 0$ and so $(b-2c)=\lambda a$
and $\lambda$ is a scalar. Ok, I understand why $|b-2c|=4$, since $\lambda=\pm 4$. I don't get the last one about $\sqrt{(b-2c).(b-2c)}$.
July 5th 2010, 03:17 AM
I'm not sure I exactly understand. Cross products have the sin value multiplied by the magnitudes, right? So that's confusing me. I did work out $a \times (b-2c) = 0$ and so $(b-2c)=\lambda a$
and $\lambda$ is a scalar. Ok, I understand why $|b-2c|=4$, since $\lambda=\pm 4$. I don't get the last one about $\sqrt{(b-2c).(b-2c)}$.
You do not know that $|b-2c| = 4$, this is what you have to prove!
To do this, you need to remember that $|v| = \sqrt{v.v}$. To work out what this is, you need to know some dot products. The arccos that you have been given will help you in this.
July 5th 2010, 03:55 PM
$|b-2c|=\sqrt{b.b-4b.c+4c.c}=\sqrt{5-4|4||1|(\frac{1}{4})}=\sqrt{1}=\pm 1$
I'm still doing something wrong.
July 5th 2010, 04:01 PM
I think this is what you are missing.
$\left( {b - 2c} \right) \cdot \left( {b - 2c} \right) = b \cdot b - 4b \cdot c + 4c \cdot c = \left\| b \right\|^2 - 4b \cdot c + 4\left\| c \right\|^2$
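For readers who want to sanity-check the algebra numerically, here is a quick sketch in NumPy (the coordinates below are one arbitrary choice satisfying the given constraints; they are not from the thread):

```python
import numpy as np

# pick concrete vectors with |c| = 1, |b| = 4 and angle(b, c) = arccos(1/4)
c = np.array([1.0, 0.0, 0.0])
cos_t = 0.25
sin_t = np.sqrt(1 - cos_t ** 2)
b = 4 * np.array([cos_t, sin_t, 0.0])

assert abs(np.linalg.norm(c) - 1) < 1e-12
assert abs(np.linalg.norm(b) - 4) < 1e-12
assert abs(np.dot(b, c) - 1) < 1e-12        # b.c = |b||c|cos(t) = 1

# |b - 2c|^2 = |b|^2 - 4 b.c + 4|c|^2 = 16 - 4 + 4 = 16
d = b - 2 * c
assert abs(np.linalg.norm(d) - 4) < 1e-12   # so |lambda| |a| = 4, i.e. lambda = +/- 4
```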
July 5th 2010, 04:04 PM
Revision #1 to TR12-020 | 1st May 2012 18:06
Inapproximability of the Shortest Vector Problem: Toward a Deterministic Reduction
We prove that the Shortest Vector Problem (SVP) on point lattices is NP-hard to approximate for any constant factor under polynomial time randomized reductions with one-sided error that produce no
false positives. We also prove inapproximability for quasi-polynomial factors under the same kind of reductions running in subexponential time. Previous hardness results for SVP either incurred
two-sided error, or only proved hardness for small constant approximation factors. Close similarities between our reduction and recent results on the complexity of the analogous problem on linear
codes, make our new proof an attractive target for derandomization, paving the road to a possible NP-hardness proof for SVP under deterministic polynomial time reductions.
Changes to previous version:
Fixed some bugs and improved presentation.
TR12-020 | 3rd March 2012 02:56
Inapproximability of the Shortest Vector Problem: Toward a Deterministic Reduction
We prove that the Shortest Vector Problem (SVP) on point lattices is NP-hard to approximate for any constant factor under polynomial time reverse unfaithful random reductions. These are probabilistic
reductions with one-sided error that produce false negatives with small probability, but are guaranteed not to produce false positives regardless of the value of the randomness used in the reduction
process. We also prove inapproximability for quasi-polynomial factors under the same kind of reductions running in subexponential time. Previous hardness results for SVP either incurred 2-sided
error, or only proved hardness for some small constant approximation factors. Close similarities between our reduction and recent results on the complexity of analogous problems on linear codes, make
our new proof an attractive target for derandomization, paving the road to a possible NP-hardness proof for SVP under deterministic polynomial time reductions.
Summary: 810 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 2, MARCH 1999
Noise Conditions for Prespecified Convergence
Rates of Stochastic Approximation Algorithms
Edwin K. P. Chong, Senior Member, IEEE,
I-Jeng Wang, Member, IEEE, and Sanjeev R. Kulkarni
Abstract-- We develop deterministic necessary and sufficient
conditions on individual noise sequences of a stochastic approximation
algorithm for the error of the iterates to converge at a given rate.
Specifically, suppose $\{\mu_n\}$ is a given positive sequence converging
monotonically to zero. Consider a stochastic approximation algorithm
$x_{n+1} = x_n - a_n(A_n x_n - b_n) + a_n e_n$, where $\{x_n\}$ is the iterate
sequence, $\{a_n\}$ is the step size sequence, $\{e_n\}$ is the noise sequence,
and $x^*$ is the desired zero of the function $f(x) = Ax - b$. Then, under
appropriate assumptions, we show that $x_n - x^* = o(\mu_n)$ if and only
if the sequence $\{e_n\}$ satisfies one of five equivalent conditions. These
conditions are based on well-known formulas for noise sequences:
Kushner and Clark's condition, Chen's condition, Kulkarni and
Horn's condition, a decomposition condition, and a weighted averaging
condition. Our necessary and sufficient condition on $\{e_n\}$ to achieve
a convergence rate of $\{\mu_n\}$ is basically that the sequence $\{e_n/\mu_n\}$
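As a toy illustration of the recursion in the abstract (my own sketch, not from the paper), the iterates of x_{n+1} = x_n - a_n(A x_n - b) + a_n e_n, with standard step sizes a_n = 1/n and zero-mean noise, drift toward the zero x* = A^{-1} b:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[2.0, 0.3], [0.1, 1.5]])   # fixed matrix (A_n = A for simplicity)
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(A, b)           # desired zero of f(x) = Ax - b

x = np.zeros(2)
for n in range(1, 20001):
    a_n = 1.0 / n                        # diminishing step sizes
    e_n = rng.normal(scale=0.1, size=2)  # zero-mean noise sequence
    x = x - a_n * (A @ x - b) + a_n * e_n

assert np.linalg.norm(x - x_star) < 0.05
```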
WebDiarios de Motocicleta
I suspect I got a lot of bad karma that day (I'm the evil mind behind "Crocodile" and "Elephant".) Congratulations to the winners! See here for final
Problem Crocodile.
You have a weighted undirected graph with a start node and
designated target nodes. Two players play a full-information game, taking turns:
1. Player 1 moves on the nodes of the graph, starting at the designated start node, and aiming to reach a target node. In each turn, she can traverse any edge out of her current node [except the
blocked edge / see below], incurring a cost equal to the weight of the edge.
2. Player 2 can block one edge at every turn. When an edge is blocked, Player 1 cannot traverse it in the next turn. An edge can be blocked multiple times, and any past edge is "unblocked" when a
new edge is blocked.
Find the minimum budget B such that Player 1 can reach a target node with cost ≤ B, regardless of what Player 2 does. Running time: O~(n+m).
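Here's a sketch of what I believe is the standard approach (my own code and names, not the official solution): run a Dijkstra-like search backwards from the targets, but only finalize a node the second time it is popped from the heap, since Player 2 can always block a node's single cheapest escape. Each node's guaranteed cost is therefore its second-best option.

```python
import heapq

def crocodile(n, edges, targets, start):
    """Minimum budget guaranteeing `start` reaches a target; nodes 0..n-1.

    edges: list of (u, v, w) undirected weighted edges.
    """
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    INF = float("inf")
    best = [INF] * n                  # finalized second-best distance to a target
    pops = [0] * n
    heap = [(0, t) for t in targets for _ in range(2)]   # targets cost 0, pushed twice
    heapq.heapify(heap)
    while heap:
        d, v = heapq.heappop(heap)
        if pops[v] >= 2:
            continue
        pops[v] += 1
        if pops[v] == 2:              # second arrival: d is the guaranteed cost
            best[v] = d
            for u, w in adj[v]:
                heapq.heappush(heap, (d + w, u))
    return best[start]
```

For example, with targets {2, 3} and edges (0,2,3), (0,3,5), (0,1,1), (1,2,2), (1,3,2), this returns 3 from node 0: the direct cost-3 edge and the detour through node 1 (cost 1 + 2) back each other up, so blocking either still leaves a cost-3 route.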
Problem Elephants. You have n elephants on the line at given coordinates. The elephants make n moves: in each unit of time, some elephant moves to another position. You have cameras that can
photograph any interval of length L of the line. After each move, output the minimum number of cameras needed to photograph all elephants in the current configuration.
Running time: subquadratic in n.
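The static primitive — counting cameras for one fixed configuration — is a simple greedy (my sketch; the full subquadratic solution answers the n queries faster by batching updates, e.g. with sqrt-decomposition over blocks of sorted positions, rather than re-running this from scratch):

```python
def min_cameras(positions, L):
    """Minimum number of length-L intervals covering all points (greedy)."""
    count = 0
    covered_until = float("-inf")
    for x in sorted(positions):
        if x > covered_until:        # leftmost uncovered elephant:
            count += 1               # place a camera starting at x,
            covered_until = x + L    # covering the interval [x, x + L]
    return count
```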
Problem Parrots. You are given a message of M bytes (0..255) to transmit over a weird channel. The channel can carry elements between 1 and R, but elements don't arrive in order (they are permuted
adversarially). Send the message, minimizing the number L of elements sent across the channel.
Running time: polynomial. (Yes, this is an easy problem for a trained theorist.)
The problems for Day 1 of the IOI have been posted (in many languages). Exciting! Here are the executive summaries. (Though I am on the International Scientific Committee (ISC), I do not know all the
problems, so my apologies if I'm misrepresenting some of them. In particular, the running times may easily be wrong, since I'm trying to solve the problems as I write this post.)
The usual rules apply: if you're the first to post a solution in the comments, you get eternal glory. (If you post annonymously, claim your glory some time during the next eternity.)
Problem Fountain. Consider an undirected graph with costs (all costs are distinct) and the following traversal procedure. If we arrived at node v on edge e, continue on the cheapest edge different
from e. If the node has degree 1, go back on e. A walk following these rules is called a "valid walk."
You also have a prescribed target node, and an integer k. Count the number of valid walks of exactly k edges which end at the prescribed target node. Running time O~(n+m) ["O~" means "drop logs"]
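A brute-force simulation makes the traversal rule concrete (my sketch, with my own assumption that each walk starts at some node and takes the cheapest edge for its first move, since the statement leaves the first step underspecified; the intended O~(n+m) solution jumps along the functional graph of (node, arrival-edge) states instead of simulating k steps):

```python
def count_walks(n, edges, target, k):
    """O(n*k) simulation: count start nodes whose k-edge walk ends at target."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    for a in adj:
        a.sort()                     # cheapest edge first (weights are distinct)

    count = 0
    for s in range(n):
        cur, came_w = s, None        # came_w: weight of the arrival edge
        for _ in range(k):
            opts = adj[cur]
            if came_w is None or len(opts) == 1:
                w, nxt = opts[0]     # first move / degree 1: forced choice
            else:
                # cheapest edge different from the arrival edge
                w, nxt = opts[0] if opts[0][0] != came_w else opts[1]
            cur, came_w = nxt, w
        if cur == target:
            count += 1
    return count
```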
Problem Race. You have a tree with integer weights on the edges. Find a path with as few edges as possible and total length exactly X. Running time O(n log n) [log^2 may be easier.]
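A quadratic brute force makes the specification concrete (my sketch, assuming positive weights; the intended O(n log n) solution uses centroid decomposition to combine path halves meeting at each centroid):

```python
def race_bruteforce(n, edges, X):
    """Fewest edges on a tree path of total weight exactly X; None if none."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    best = None
    for s in range(n):                       # DFS from every node: O(n^2)
        stack = [(s, -1, 0, 0)]              # (node, parent, length, #edges)
        while stack:
            v, parent, length, k = stack.pop()
            if length == X and k > 0:
                best = k if best is None else min(best, k)
            if length < X:                   # positive weights: safe to prune
                for u, w in adj[v]:
                    if u != parent:
                        stack.append((u, v, length + w, k + 1))
    return best
```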
Problem RiceHub. N rice fields are placed at integer coordinates on the line. The rice must be gathered at some hub, which must also be at an integer coordinate on the line (to be determined). Each
field produces one truck-load of rice, and driving the truck a distance d costs exactly d. You have a hard budget constraint of B. Find the best location for the hub, maximizing the amount of rice
that can be gathered in the budget constraint.
Running time: O~(N).
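A sliding-window sketch (my own, assuming the coordinates arrive sorted): for any window of fields, the cheapest hub is the window's median, so grow the window with two pointers and check the cost with prefix sums.

```python
from itertools import accumulate

def ricehub(X, B):
    """Max number of fields gatherable within budget B; X sorted."""
    n = len(X)
    pre = [0] + list(accumulate(X))          # prefix sums of coordinates

    def cost(i, j):                          # gather X[i..j] at their median
        m = (i + j) // 2
        left = X[m] * (m - i + 1) - (pre[m + 1] - pre[i])
        right = (pre[j + 1] - pre[m + 1]) - X[m] * (j - m)
        return left + right

    best, i = 0, 0
    for j in range(n):
        while cost(i, j) > B:                # shrink until affordable
            i += 1
        best = max(best, j - i + 1)
    return best
```

For instance, ricehub([1, 2, 10, 12, 14], 6) gives 3: the fields at 10, 12, 14 gathered at 12 cost 2 + 0 + 2 = 4, within the budget.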
Now it is time for the problems on day 2 (of 2). See here for day 1. Feel free to post solutions in the comments (you get eternal glory if you're the first to post one) and discuss anything related to
the problems.
Day2–Problem TimeIsMoney. You have a graph with two costs (A and B) on each edge. Find a spanning tree that minimizes (its total A-cost) * (its total B-cost) [that's "times" in the middle]. Desired
running time: polynomial. Sharper bounds are possible (I think I get O(n^2m log)) but this is hard enough. To be entirely fair, the contestants just need to find an algorithm, not to prove it runs in
poly-time, which may be easier (but I'm writing for a theory audience so consider yourself challenged).
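For tiny instances the objective can be checked by brute force (my sketch; the polynomial contest solution instead repeatedly minimizes the linear objective αA + βB for directions taken from the lower-left hull of achievable (A-cost, B-cost) pairs):

```python
from itertools import combinations

def min_product_spanning_tree(n, edges):
    """Brute force over all (n-1)-edge subsets; edges are (u, v, a, b)."""
    def is_spanning_tree(subset):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        for u, v, _, _ in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False          # cycle
            parent[ru] = rv
        return True                   # n-1 acyclic edges connect everything

    best = None
    for subset in combinations(edges, n - 1):
        if is_spanning_tree(subset):
            a = sum(e[2] for e in subset)
            b = sum(e[3] for e in subset)
            best = a * b if best is None else min(best, a * b)
    return best
```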
Day2–Problem Trapezoid. Consider two horizontal lines, and n trapezoids stretching from one line to the other. The proverbial picture worth 1000 words:
Day 2–Problem Compare. Alice gets a number a, and Bob gets a number b. Both are integers in {0, ..., 4095}. Bob's goal is to compare b to a (and output "greater than", "less than" or "equal").
Charlie is helping them. Alice can send Charlie a set A ⊂ {1, ..., 10240} (intuitively, think of Alice marking some bits in an array of 10Kbits). Bob can ask Charlie whether some x is in A or not
(think of Bob as querying some bits of the bit vector). The goal is to minimize (for worst-case a and b)
T=|A| + the number of queries made by Bob
Desired solution: We know how to get T=9. In the Olympiad, T=10 was enough for full score, but we left the problem open-ended, so T=9 would've earned you 109 points.
Somewhat unusually for computer olympiads, in this problem we could test contestant solutions exhaustively: we ran their code for all 4096^2 choices for (a,b) and really computed the worst-case T.
The Balkan Olympiad is over, and I am slowly recovering from the experience of chairing the Scientific Commitee. For your enjoyment, I will post summaries of the competition problems. Perhaps this
post should be titled "Are you smarter than a high school student?"
Day 1 – Problem 2Circles. You are given a convex polygon. Find the largest R such that two circles of radius R can be placed inside the polygon without overlapping. Target running time: O~(N). I
think this problem illustrates the rather significant difference between coming up with an algorithm on paper and actually implementing it (only 4 contestants scored nonzero; the committee says
Day 1– Problem Decryption. We define the following pseudorandom number generator:
• initialize R[0], R[1], R[2] with random values in {0,..., 255}
• let R[n] = R[n-3] XOR R[n-2].
We also define the following cypher:
• let π be a random permutation of {0,...,255}
• to encrypt the i-th byte of the message, output π(Message[i] XOR R[i])
You have to implement a chosen plaintext attack. You get a device implementing this cypher. You may feed it a message of at most 320 bytes (you give the device one byte at a time, and observe the
encryption immediately). Your goal is to recover the secret keys (R[0..2], and π).
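A sketch of the generator and cipher (my own code and names): note that each observed output byte pins down one entry of π composed with the keystream, which is the lever for the chosen-plaintext attack — with the identity permutation, encrypting zeros exposes the keystream directly.

```python
def make_cipher(r0, r1, r2, perm):
    """Device model: R[n] = R[n-3] XOR R[n-2]; E(m_i) = perm[m_i XOR R[i]]."""
    R = [r0, r1, r2]
    def encrypt(message):
        while len(R) < len(message):
            R.append(R[-3] ^ R[-2])          # extend the keystream as needed
        return [perm[m ^ r] for m, r in zip(message, R)]
    return encrypt

identity = list(range(256))
enc = make_cipher(1, 2, 3, identity)
# keystream: R = 1, 2, 3, 1^2 = 3, 2^3 = 1
assert enc([0, 0, 0, 0, 0]) == [1, 2, 3, 3, 1]
```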
Day 1–Problem Medians. Given an array A[1..n] of integers, we define the prefix medians M[0..(n-1)/2] as M[k] = median(A[1..2k+1]).
The problem is: given M, recover A. O~(n) running time.
PS: This was by far the best prepared Committee that I've been on, and I think we conclusively proved that, no matter how much work you do before the Olympiad, there's still a lot left to do during
the Olympiad itself. Many thanks to everybody who volunteered their time. I wish I could do more to express my gratitude, but for lack of other ideas, it is my pleasure to acknowledge the committee
• Marius Andrei - Senior Software Engineer, Google Inc.
• Radu Ștefan - Researcher, Technische Universiteit Eindhoven
• Dana Lica - "I.L. Caragiale" High School, Ploiești
• Cosmin Negrușeri - Senior Software Engineer, Google Inc.
• Mugurel Ionuț Andreica - PhD, Assist., Polytechnic University, Bucharest
• Cosmin Tutunaru- Student, Babes-Bolyai University, Cluj-Napoca
• Mihai Damaschin - Student, Bucharest University
• Gheorghe Cosmin - Student, MIT
• Bogdan Tătăroiu - Student, Cambridge University,
• Filip Buruiană - Student, Polytechnic University, Bucharest
• Robert Hasna - Student, Bucharest University
• Marius Dragus - Student, Colgate University NY
• Aleksandar and Andreja Ilic -- University of Nis, Serbia; external problem submitters, who were not intimidated by the fact that I only posted the problem call in Romanian :)
[FOM] Sets and Proper Classes
Robert Lindauer robbie at biginteractive.com
Mon Feb 23 12:33:05 EST 2004
>> A set is an arbitrary collection of objects.
>> A property of sets
>> for which there is no set of all sets that satisfy the property is
>> known
>> as a proper class.
Can you explain what you mean by "arbitrary" and how it is to be
distinguished from "any old ... whatsoever" and why this wouldn't
include the simple predicate "everything" or "all" as in the extension
of the inverted A in (standard?) logic, eg: "A(x) x = x" or " A(P)
P<->P", etc. (Here we'll have to appeal to Wittgenstein's "fact-world"
in order to ensure that the second is always true, I'm not sure how I
feel about it - in any case, if you can't express it, it's not going to
be governed by the rules of syntactic logic anyway.)
That is, the force of "arbitrary" and "collection" must have their
meaning pre-theoretically if they are to have the force being put on
them here. Otherwise, they seem to be restricting what has already
been claimed to be arbitrary. If "arbitrary" has a technical meaning
for you - e.g. doesn't mean what we innocent onlookers think it does,
that technical meaning will not be anything like what it is in English.
Once you invoke "arbitrary" in English, anything goes. "His wives are
arbitrary" might mean that he's married a dog.
Perhaps the better way of saying "arbitrary" in this case is: "If we
call everything a set, then the set of everything has a cardinality,
according to our normal rules, which is higher than itself and lesser
than itself. We therefore call things that aren't everything "sets",
and things that are "proper classes". " And therefore take on the
immediately axiomatic attitude.
The alternate desire to retain some semblance of "intuitiveness" is
mistaken - who would you be kidding?
>> We know that for every set s, there is the set of all subsets of s.
"We" take this on faith or stipulate it, it is an axiom. I don't know
what the sense of the word "know" is when applied to axioms. This is
like "I know how to speak English" not "I know that my car is in the
Garage." How is it like the one, not the other? The contrary isn't
just not true in the language in question, it's simply nonsense in the
language in question. "I don't know how to speak English" is like "The
powerset axiom is false in ZFC". Is it even meaningful outside of
ZFC? In what sense is it the "same" axiom if the rules for how it is
applied are all different - same in some ways, different in others.
Again, the problem is mixing the pre-theoretic "know" with the
axiomatic "powerset" to derive the appearance of having
"known-in-English" that the powerset axiom is true when in fact what we
know-in-English is that it is an axiom in some axiomatic systems and
not others and has such-and-such expression and has these consequences
in this system, but not in some others, etc.
Best Wishes,
Robbie Lindauer
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2004-February/007935.html","timestamp":"2014-04-21T13:26:05Z","content_type":null,"content_length":"5561","record_id":"<urn:uuid:6fdd6b95-ccfa-4970-a4f0-57b8a7356446>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00289-ip-10-147-4-33.ec2.internal.warc.gz"} |
Function satisfying $f^{-1} =f'$
MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required.
How many functions are there which are differentiable on $(0,\infty)$ and satisfy the relation $f^{-1}=f'$?
What is the motivation for this question?
JBL Jul 31 '10 at 20:07
Unless I am missing something, this is an elementary differential equations question, and so might be more appropriately asked at math.stackexchange.com .
Emerton Jul 31 '10 at 20:34
If $f^{-1}$ means $1/f$, then yes, it is an easy differential equations question. On the other hand if $f^{-1}$ is the functional inverse of $f$, then it looks pretty hard.
Robin Chapman Jul 31 '10 at 21:18
$f^{-1}$ means inverse of $f$
Chandrasekhar Jul 31 '10 at 21:30
Dear Chandru1, My apologies; I misunderstood the notation (in exactly the way that Robin Chapman suggested).
Emerton Aug 1 '10 at 1:20
Let $a=1+p>1$ be given. We shall construct a function $f$ of the required kind with $f(a)=a$ by means of an auxiliary function $h$, defined in the neighborhood of $t=0$ and coupled to $f$ via $x=h(t)$,
$f(x)=h(a t)$, $f^{-1}(x)=h(t/a)$. The condition $f'=f^{-1}$ implies that $h$ satisfies the functional equation $$(*)\quad h(t/a) h'(t)=a h'(at).$$ Writing $h(t)=a+\sum_{k \ge 1} c_k t^k$ we
obtain from $(*)$ a recursion formula for the $c_k$, and one can show that $0< c_r<1/p^{r-1}$ for all $r\ge 1$. This means that $h$ is in fact analytic for $|t|< p$, satisfies $(*)$ and possesses an
inverse $h^{-1}$ in the neighborhood of $t=0$. It follows that the function $f(x):=h(ah^{-1}(x))$ has the required properties.
I know little about analysis, so my apologies if this is a silly question. Are you solving a version of the problem where the function f is not necessarily defined on (0,∞) but is defined on some
interval (0,b) (which is also an interesting problem)?
Tsuyoshi Ito Aug 1 '10 at 14:16
The function $f$ constructed here is a priori only defined in a neighborhood of the point $a$.
Christian Blatter Aug 1 '10 at 14:57
Thanks for the clarification.
Tsuyoshi Ito Aug 1 '10 at 14:59
Wow. I remember that I thought exactly the same problem out of curiosity as a high school student but did not reach an answer. In fact, I was thinking about posting this problem on MathOverflow.
At least it is easy to construct one solution: $f(x)=x^{\varphi}/\varphi^{\varphi-1}$, where $\varphi=(1+\sqrt{5})/2$ is the golden ratio.
Edit: Corrected the calculation. Thanks to Aaron Meyerowitz for spotting the error!
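One can check this closed form numerically (a sketch added here, not part of the original thread; the function names are invented):

```python
# Verify numerically that f(x) = x**phi / phi**(phi - 1) satisfies
# f' = f^(-1) on (0, infinity), where phi is the golden ratio.
phi = (1 + 5 ** 0.5) / 2

def f(x):
    return x ** phi / phi ** (phi - 1)

def f_prime(x, h=1e-6):
    # central finite-difference approximation of the derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def f_inverse(y):
    # invert y = x**phi / phi**(phi - 1) algebraically
    return (y * phi ** (phi - 1)) ** (1 / phi)

for x in (0.5, 1.0, 2.0, 10.0):
    assert abs(f_prime(x) - f_inverse(x)) < 1e-4
```

The key identities are $1/\varphi = \varphi - 1$ and $\varphi^2 = \varphi + 1$, which make the exponents of the derivative and the inverse coincide.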
Which does not answer the question, as far as I can tell.
Did Nov 18 '12 at 12:53
@Didier Piau: It clearly does not. In case anyone is wondering, the asker posted the answer shortly after Christian Blatter posted his related analysis, and deleted it after I asked
him if he had posted the question knowing the answer.
Tsuyoshi Ito Nov 20 '12 at 1:04
Not the answer you're looking for? Browse other questions tagged real-analysis or ask your own question. | {"url":"http://mathoverflow.net/questions/34052/function-satisfying-f-1-f?sort=votes","timestamp":"2014-04-18T18:57:36Z","content_type":null,"content_length":"65129","record_id":"<urn:uuid:64bcbac8-1f07-4760-b1c3-4d79325978d7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00433-ip-10-147-4-33.ec2.internal.warc.gz"} |
Taunton, MA Trigonometry Tutor
Find a Taunton, MA Trigonometry Tutor
I have been teaching math at the high school level for the past 14 years. I teach at a vocational school so I must be able to adapt to all learning styles. I feel it is critical to understand why
something is and not just how to go through the motions to get the correct answer.
6 Subjects: including trigonometry, geometry, algebra 1, algebra 2
...I include a lot of positive reinforcement during my tutoring sessions, reassuring my students that they can learn and succeed. My positive approach allows my students to relax and start
focusing on the material to be learned. I have had excellent success in motivating students and creating real results in academic achievement.
13 Subjects: including trigonometry, physics, calculus, geometry
...Algebra, Trigonometry, Geometry, precalculus, and calculus. In some cases, I've worked under difficult circumstances, either because of time pressure or otherwise, where we had a lot of
material to cover quickly.
47 Subjects: including trigonometry, reading, chemistry, geometry
...Home-schooling is, in essence, tutoring. I need to work with my children one-on-one until they feel confident enough to try the material independently. My oldest is dyslexic.
25 Subjects: including trigonometry, English, reading, calculus
...I have experience providing in-class support and homework help with kids in 1st-10th grades. I am very friendly and outgoing. I love kids and like teaching them new things.
19 Subjects: including trigonometry, reading, writing, geometry
Related Taunton, MA Tutors
Taunton, MA Accounting Tutors
Taunton, MA ACT Tutors
Taunton, MA Algebra Tutors
Taunton, MA Algebra 2 Tutors
Taunton, MA Calculus Tutors
Taunton, MA Geometry Tutors
Taunton, MA Math Tutors
Taunton, MA Prealgebra Tutors
Taunton, MA Precalculus Tutors
Taunton, MA SAT Tutors
Taunton, MA SAT Math Tutors
Taunton, MA Science Tutors
Taunton, MA Statistics Tutors
Taunton, MA Trigonometry Tutors | {"url":"http://www.purplemath.com/taunton_ma_trigonometry_tutors.php","timestamp":"2014-04-21T15:01:04Z","content_type":null,"content_length":"23938","record_id":"<urn:uuid:c8e453ae-1b8d-4ac0-99f3-9f6bdc535872>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Systems of explicit mathematics with non-constructive µ-operator
Results 11 - 20 of 39
, 1998
"... This paper is about two topics: 1. systems of explicit mathematics with universes and a non-constructive quantification operator ¯; 2. iterated fixed point theories with ordinals. We give a
proof-theoretic treatment of both families of theories; in particular, ordinal theories are used to get upper ..."
Cited by 5 (3 self)
This paper is about two topics: 1. systems of explicit mathematics with universes and a non-constructive quantification operator ¯; 2. iterated fixed point theories with ordinals. We give a
proof-theoretic treatment of both families of theories; in particular, ordinal theories are used to get upper bounds for explicit theories with finitely many universes. 1 Introduction The two major
frameworks for explicit mathematics that were introduced in Feferman [4, 5] are the theories T 0 and T 1 . T 1 results from T 0 by strengthening the applicative axioms by the so-called
non-constructive ¯ operator. Although highly non-constructive, ¯ is predicatively acceptable and makes quantification over the natural numbers explicit. While the proof theory of T 0 is well-known
since the early eighties (cf. Feferman [4, 5], Feferman and Sieg [10], Jager [14], Jager and Pohlers [17]), the corresponding investigations of subystems of T 1 have been completed only recently by
Feferman and Jager [9, 8] and G...
, 2000
"... The unfolding of schematic formal systems is a novel concept which was initiated in Feferman [6]. This paper is mainly concerned with the proof-theoretic analysis of various unfolding systems
for non-finitist arithmetic NFA. In particular, we examine two restricted unfoldings U 0 (NFA) and U 1 (NFA ..."
Cited by 5 (3 self)
The unfolding of schematic formal systems is a novel concept which was initiated in Feferman [6]. This paper is mainly concerned with the proof-theoretic analysis of various unfolding systems for
non-finitist arithmetic NFA. In particular, we examine two restricted unfoldings U 0 (NFA) and U 1 (NFA), as well as a full unfolding, U(NFA). The principal results then state: (i) U 0 (NFA) is
equivalent to PA; (ii) U 1 (NFA) is equivalent to RA<ω ; (iii) U(NFA) is equivalent to RA<Γ0 . Thus U(NFA) is proof-theoretically equivalent to predicative analysis.
, 1999
"... The system EMU of explicit mathematics incorporates the uniform construction of universes. In this paper we give a proof-theoretic treatment of EMU and show that it corresponds to transfinite
hierarchies of fixed points of positive arithmetic operators, where the length of these fixed point hierarc ..."
Cited by 5 (2 self)
The system EMU of explicit mathematics incorporates the uniform construction of universes. In this paper we give a proof-theoretic treatment of EMU and show that it corresponds to transfinite
hierarchies of fixed points of positive arithmetic operators, where the length of these fixed point hierarchies is bounded by # 0 . 1 Introduction Metapredicativity is a new general term in proof
theory which describes the analysis and study of formal systems whose proof-theoretic strength is beyond the Feferman-Schütte ordinal Γ0 but which are nevertheless amenable to purely predicative
methods. Typical examples of formal systems which are apt for scaling the initial part of metapredicativity are the transfinitely iterated fixed point theories # ID # whose detailed proof-theoretic
analysis is given by Jager, Kahle, Setzer and Strahm in [18]. In this paper we assume familiarity with [18]. For natural extensions of Friedman's ATR that can be measured against transfinitely
iterated fixed point ...
- The Journal of Symbolic Logic , 2000
"... We define a novel interpretation R of second order arithmetic into Explicit Mathematics. As a difference from standard D-interpretation, which was used before and was shown to interpret only
subsystems proof-theoretically weaker than T0 , our interpretation can reach the full strength of T0 . The ..."
Cited by 4 (2 self)
We define a novel interpretation R of second order arithmetic into Explicit Mathematics. As a difference from standard D-interpretation, which was used before and was shown to interpret only
subsystems proof-theoretically weaker than T0 , our interpretation can reach the full strength of T0 . The R-interpretation is an adaptation of Kleene's recursive realizability, and is applicable
only to intuitionistic theories. Introduction Systems of Explicit Mathematics were introduced by S. Feferman in the 70-es as a logical framework for Bishop-style constructive mathematics (see
[Fef75], [Fef79]). In [Fef79] he gave an embedding of the basic theory T 0 into a subsystem # 1 2 -CA+BI of 2-nd order arithmetic and conjectured that the converse also holds. In [Ja83] G. Jager
carried out a necessary well-ordering proof in T 0 , which together with [JP82] completed its proof-theoretical analysis and established prooftheoretic equivalence of the system of Explicit
Mathematics T 0 , system o...
- Transactions American Math. Soc , 2000
"... We define a realizability interpretation of Aczel's Constructive Set Theory CZF into Explicit Mathematics. The final results are that CZF extended by Mahlo principles is realizable in
corresponding extensions of T0 , thus providing relative lower bounds for the proof-theoretic strength of the latter ..."
Cited by 4 (2 self)
We define a realizability interpretation of Aczel's Constructive Set Theory CZF into Explicit Mathematics. The final results are that CZF extended by Mahlo principles is realizable in corresponding
extensions of T0 , thus providing relative lower bounds for the proof-theoretic strength of the latter. Introduction Several different frameworks have been founded in the 70-es aiming to give a
foundation for constructive mathematics. The most well-developed of them nowadays are Martin-Lof type theory, Aczel's constructive set theory, and Feferman's explicit mathematics. While constructive
set theory was built to have an immediate type interpretation, no theory stronger than # 1 2 -CA, which proof-theoretically is still far below the basic system T 0 of Explicit Mathematics, have been
shown up to now to be directly embeddable into explicit systems. It also yielded that the only method for establishing lower bounds for T 0 and its extensions remained to be well-ordering proofs.
This omissi...
"... This is a survey paper on various weak systems of Feferman’s explicit mathematics and their proof theory. The strength of the systems considered is measured in terms of their provably
terminating operations typically belonging to some natural classes of computational time or space complexity. Keywor ..."
Cited by 4 (3 self)
This is a survey paper on various weak systems of Feferman’s explicit mathematics and their proof theory. The strength of the systems considered is measured in terms of their provably terminating
operations typically belonging to some natural classes of computational time or space complexity. Keywords: Proof theory, Feferman’s explicit mathematics, applicative theories, higher types, types
and names, partial truth, feasible operations 1
"... After briefly discussing the concepts of predicativity, metapredicativity and impredicativity, we turn to the notion of Mahloness as it is treated in various contexts. Afterwards the ..."
Cited by 3 (2 self)
After briefly discussing the concepts of predicativity, metapredicativity and impredicativity, we turn to the notion of Mahloness as it is treated in various contexts. Afterwards the
"... [t]he analysis of the phrase “how many ” unambiguously leads to a definite meaning for the question [“How many different sets of integers do their exist?”]: the problem is to find out which one
of the א’s is the number of points of a straight line … Cantor, after having proved that this number is gr ..."
Cited by 3 (0 self)
[t]he analysis of the phrase “how many ” unambiguously leads to a definite meaning for the question [“How many different sets of integers do their exist?”]: the problem is to find out which one of
the א’s is the number of points of a straight line … Cantor, after having proved that this number is greater than א0, conjectured that it is א1. An equivalent proposition is this: any infinite subset
of the continuum has the power either of the set of integers or of the whole continuum. This is Cantor’s continuum hypothesis. … But, although Cantor’s set theory has now had a development of more
than sixty years and the [continuum] problem is evidently of great importance for it, nothing has been proved so far relative to the question of what the power of the continuum is or whether its
subsets satisfy the condition just stated, except that … it is true for a certain infinitesimal fraction of these subsets, [namely] the analytic sets. Not even an upper bound, however high, can be
assigned for the power of the continuum. It is undecided whether this number is regular or singular, accessible or inaccessible, and (except for König’s negative result) what its character of
cofinality is. Gödel 1947, 516-517 [in Gödel 1990, 178]
- In Hendricks et al
"... Both the constructive and predicative approaches to mathematics arose during the period of what was felt to be a foundational crisis in the early part of this century. Each critiqued an
essential logical aspect of classical mathematics, namely concerning the unrestricted use of the law of excluded ..."
Cited by 3 (0 self)
Both the constructive and predicative approaches to mathematics arose during the period of what was felt to be a foundational crisis in the early part of this century. Each critiqued an essential
logical aspect of classical mathematics, namely concerning the unrestricted use of the law of excluded middle on the one hand, and of apparently circular "impredicative" definitions on the
other. But the positive redevelopment of mathematics along constructive, resp. predicative grounds did not emerge as really viable alternatives to classical, set-theoretically based mathematics
until the 1960s. Now we have a massive amount of information, to which this lecture will constitute an introduction, about what can be done by what means, and about the theoretical interrelationships
between various formal systems for constructive, predicative and classical analysis. In this final lecture I will be sketching some redevelopments of classical analysis on both constructive and
predicative grounds, with an emphasis on modern approaches. In the case of constructivity, I have very little to say about Brouwerian intuitionism, which has been discussed extensively in other
lectures at this conference, and concentrate instead on the approach since 1967 of Errett Bishop and his school. In the case of predicativity, I concentrate on developments (also since the 1960s)
which take up where Weyl's work left off, as described in my second lecture. In both cases, I first look at these redevelopments from a more informal, mathematical, point of view. This is the last
of my three lectures for the conference, Proof Theory: History and
MACD, the Moving Average Convergence/Divergence Indicator
MACD, the Moving Average Convergence/Divergence indicator, developed and popularized by Gerald Appel, provides a uniquely sensitive measurement of the intensity of the trading public's sentiment and
provides early clues to trend continuation or reversal. According to Appel, this indicator is particularly dependable in signaling entry points after a sharp decline. The MACD indicator may be
applied to the stock market as a whole or to individual stocks or mutual funds.
The MACD indicator uses three exponential moving averages: a short or fast average, a long or slow average, and an exponential average of the difference between the short and long moving averages,
which is used as a signal line. (See Moving Averages below for a discussion on simple and exponential moving averages.)
• MACD reveals overbought and oversold conditions for securities and market indexes, and generates signals that predict trend reversals with significant accuracy.
• MACD produces less frequent whipsaws, as compared with moving averages.
• Telescan uses a type of shorthand to refer to MACD indicators. An "8-17-9 MACD", for example, uses a short (fast) moving average of eight days or weeks, a long (slow) moving average of 17 days or
weeks, and an exponential moving average of nine days or weeks. (The use of days or weeks depends on the time span of the stock graph.)
• Gerald Appel recommends an 8-17-9 MACD to generate buy signals and a 12-25-9 MACD to confirm a sell signal for a stock, which has had a strong bullish move.
• Regardless of the accuracy of this indicator, one should not rely on a single indicator. Study as many technical and fundamental indicators as possible before arriving at your investment
Moving Averages
Simple Moving Average
A simple moving average is calculated by totaling the closing prices of a stock over a prescribed period (say, 30 days) and dividing that total by the number of days in the period (i.e., 30). The
resulting number is the average. In order for the average to "move", the most recent closing price is added to the previous total and the oldest closing price used in that total is subtracted. The
new total is then divided by the number of days of the moving average, and the process repeated.
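The running calculation just described can be sketched in a few lines of Python (an illustration added here, not Telescan's actual code; the function name and sample prices are invented):

```python
def simple_moving_average(prices, period):
    """Return the list of simple moving averages over `prices`.

    Each new average adds the most recent close and drops the oldest
    close in the window, as described above.
    """
    averages = []
    total = sum(prices[:period])                 # first window
    averages.append(total / period)
    for i in range(period, len(prices)):
        total += prices[i] - prices[i - period]  # slide the window
        averages.append(total / period)
    return averages

# Example: 3-day moving average of five closing prices.
closes = [10.0, 11.0, 12.0, 13.0, 14.0]
print(simple_moving_average(closes, 3))   # -> [11.0, 12.0, 13.0]
```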
Changes in the upward or downward trend of the stock being measured are identified by the stock price or index crossing over its moving average, rather than a change in direction of the moving
average itself. According to the moving average theory, when a stock price moves below its moving average, a change is signaled from a rising to a declining market; when a stock price moves above its
moving average, the end of the declining trend is signaled.
A disadvantage of the simple moving average approach is that it will allow an extreme high or extreme low to distort the true value of the stock, possibly giving false buy or sell signals or rapid whipsaws.
Exponential Moving Average
To overcome the distortion caused by extreme highs or lows, the exponential moving average weights recent closing prices more heavily than earlier closing prices. Many market technicians consider the
exponential moving average to be a more accurate indicator than a simple moving average.
To calculate an exponential moving average, Telescan first calculates a simple average for the desired period. Then it uses the following formula for each new moving average:
[ Last MA Value x ( 1 - 2/L+1 )] + [ NP x 2/L+1 ]
MA = Moving Average
L = Length of Moving Average
NP = Most Recent Closing Price of Stock
For example, let's say the simple moving average of a certain stock over a 19-day period is 100 and the stock closed today at 105. If we plug these figures into the above formula (Last MA Value =
100, L = 19, and NP (New Price) = 105), the New Moving Average will be 100.5. The same formula is used to figure consecutive values for the remaining periods.
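The update formula and the worked example above can be checked with a short sketch (an added illustration; the function name is invented):

```python
def next_ema(last_ma, new_price, length):
    """Apply the exponential moving average update formula above:
    MA_new = last_ma * (1 - 2/(L+1)) + new_price * 2/(L+1)."""
    alpha = 2 / (length + 1)
    return last_ma * (1 - alpha) + new_price * alpha

# The worked example: last MA = 100, length L = 19, new close NP = 105.
print(next_ema(100, 105, 19))   # -> 100.5
```

With L = 19 the smoothing factor is 2/20 = 0.1, so the new average is 100 * 0.9 + 105 * 0.1 = 100.5, matching the text.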
• A buy signal (positive breakout) is given when the MACD graph is in an oversold condition below the origin and the MACD line crosses above the signal line.
• A sell signal (negative breakout) is given when the MACD graph is in an overbought condition above the origin and the MACD line falls below the signal line.
• Significant MACD signals occur far from the zero line. When the MACD line is far from the zero line, it shows that the public is reacting to the emotion of the trend. Thus, when the crowd surges
in the opposite direction and a crossover occurs, the implication is strong. Crossovers in the vicinity of the zero line suggest that public emotion is flat and disinterested and often do not
lead to productive moves.
• The amount of divergence between the MACD line and the signal line is important: the greater the divergence, the stronger the signal.
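Putting the pieces together, an 8-17-9 MACD of the kind described above can be sketched as follows (an illustrative implementation added here, not Appel's or Telescan's actual code; it seeds each exponential average with a simple average of the first few closes, which is one common convention):

```python
def ema_series(prices, length):
    """Exponential moving average of a price series, seeded with the
    simple average of the first `length` values."""
    alpha = 2 / (length + 1)
    ema = sum(prices[:length]) / length
    out = [ema]
    for p in prices[length:]:
        ema = ema * (1 - alpha) + p * alpha
        out.append(ema)
    return out

def macd(prices, fast=8, slow=17, signal=9):
    """Return (macd_line, signal_line) for an 8-17-9 MACD.

    macd_line = fast EMA - slow EMA; the signal line is an EMA of the
    macd_line itself. A crossover of the MACD line above the signal
    line in oversold territory suggests a buy, per the rules above."""
    fast_ema = ema_series(prices, fast)
    slow_ema = ema_series(prices, slow)
    # Align the two series on their common (most recent) part.
    n = min(len(fast_ema), len(slow_ema))
    macd_line = [f - s for f, s in zip(fast_ema[-n:], slow_ema[-n:])]
    signal_line = ema_series(macd_line, signal)
    return macd_line, signal_line

closes = [100 + 0.5 * i for i in range(40)]   # a steadily rising series
macd_line, signal_line = macd(closes)
```

For a steadily rising series the fast average lags the price less than the slow one, so the MACD line stays positive.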
[Numpy-discussion] cross product of two 3xn arrays
Ian Harrison harrison.ian at gmail.com
Thu Feb 16 19:21:03 CST 2006
On 2/15/06, Travis Oliphant <oliphant.travis at ieee.org> wrote:
> Ian Harrison wrote:
> >Hello,
> >
> >I have two groups of 3x1 arrays that are arranged into two larger 3xn
> >arrays. Each of the 3x1 sub-arrays represents a vector in 3D space. In
> >Matlab, I'd use the function cross() to calculate the cross product of
> >the corresponding 'vectors' from each array. In other words:
> >
> >
> Help on function cross in module numpy.core.numeric:
> cross(a, b, axisa=-1, axisb=-1, axisc=-1)
> Return the cross product of two (arrays of) vectors.
> The cross product is performed over the last axis of a and b by default,
> and can handle axes with dimensions 2 and 3. For a dimension of 2,
> the z-component of the equivalent three-dimensional cross product is
> returned.
> It's the axisa, axisb, and axisc that you are interested in.
> The default is to assume you have Nx3 arrays and return an Nx3 array.
> But you can change the axis used to find vectors.
> cross(A,B,axisa=0,axisb=0,axisc=0)
> will do what you want. I suppose, a single axis= argument might be
> useful as well for the common situation of having all the other axis
> arguments be the same.
> -Travis
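As a runnable illustration of the suggestion above (added here, not part of the original thread):

```python
import numpy as np

# Two groups of vectors stored as 3 x n arrays: each column is a
# vector in 3D space.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # columns: x-hat, y-hat
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])          # columns: y-hat, z-hat

# Tell cross() that axis 0 holds the vector components.
C = np.cross(A, B, axisa=0, axisb=0, axisc=0)
# Column 0 is x-hat x y-hat = z-hat; column 1 is y-hat x z-hat = x-hat.
```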
Thanks for your patience. This is what I was looking for.
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-February/018717.html","timestamp":"2014-04-21T09:52:04Z","content_type":null,"content_length":"4272","record_id":"<urn:uuid:ed57d976-1f64-4335-a4f4-1112feee893d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
Verizon Thinkfinity
Recent notes and tags
kelseyk10 on May 03, 2013
"The purpose of this site is to provide vision and leadership in improving the teaching and learning of mathematics so that every student is ensured of an equitable standards-based mathematics
education and every teacher of mathematics is ensured the opportunity to grow professionally. I will be able to gain knowledge and ideas about Math. The rest of the world of Education can benefit
from this as well if they need help with math lessons."
tyler.greve@smail.astate.edu on May 03, 2013
"1. This is a website that helps in teaching mathematics in all grades. 2. I will use this in my classroom to help me teach the students in my class the common core mathematics they are required
to learn. 3. This is a great website for educators because it displays many ways to incorporate math into your teachings, even those that do not specifically have to deal with mathematics."
hannah.arnold on May 03, 2013
"1. This site provides vision and leadership in improving the teaching and learning of mathematics so that every student is ensured of an equitable standards-based mathematics education and every
teacher of mathematics is ensured the opportunity to grow professionally. ACTM sponsors major professional development events for mathematics teachers every year:"
mnmccord on May 02, 2013
"This website is dedicated to providing instructors with increased knowledge in mathematics, and also to ensuring that students receive the best education possible pertaining to mathematics. This is
important because so many students have problems in math, and this website can be very useful to accommodate their needs."
What Do You Need?
Copyright © University of Cambridge. All rights reserved.
We received many, many solutions to this What Do You Need? problem, but not so many of you explained how you came to the solution and which clues were useful and which weren't.
Just to warn you - the clues are ordered slightly differently on the printable sheet compared with in the problem itself so do read the solutions below carefully!
Holly from Sacred Heart School in New Zealand told us:
The answer is 35
* statement 1 does not help because by following the other clues you can tell that you need more than 1 digit to find the answer and the only multiple of 7 smaller than 9 is 7, which consists of only
1 digit.
* statement 2 does not help because the ones digit has to be larger than the tens digit and the only multiple of 7 and 10 is 70 and the 0 is smaller than the 7
* statement 3 helps because being a multiple of 7 cancels out a lot of numbers that could have been possibilities.
* statement 4 helps because by being an odd number it too cancels out a lot of other possibilities
* statement 5 does not help because the only multiple of 7 and 11 is 77 and the ones digit has to be bigger than the tens digit and the two digits in 77 are even
(even - meaning the same, I think)
* statement 6 does not help because you can only choose the numbers from 1-100, and those numbers are all below 200 anyway, so that statement is completely worthless.
* statement 7 helps because by using it there will only be a few numbers left to choose from.
* statement 8 helps because by using it you can easily narrow the number down so that there is only one left.
The only number left after using all of the useful clues is 35
Children at St Faith's School worked together and sent us the following solution
The rules you need (in order) are:
- The number is odd (so cross out all the columns of even numbers). This is the most important rule because you can get rid of half of the numbers very quickly.
- The tens digit is odd (so cross out all the rows that have even tens numbers. These are 0-9; 20-29; 40-49; 60-69 and 80-89). We decided that all the numbers in the row 0-9 counted as numbers with
an even tens number as the tens numbers in this row all equal zero and if we say that an even number is one which doesn't leave a remainder when you divide by 2 then zero counts as an even number.
This rule is important because it helps us cross out 25 more numbers.
- The number is a multiple of 7 (so cross out all numbers that are not in the seven times table). There are now only three numbers left: 35, 77 and 91.
- Its ones digit is larger than its tens digit (so cross out all numbers that have a number in the ones (units) column the same as or less than the number in the tens). This rule lets you cross out
two of the remaining three numbers to leave the correct answer.
This leaves the number 35 so 35 is the correct answer.
The rules we didn't need were:
- The number is less than 200: you don't need this rule because all the numbers are less than 200 already so none would get crossed out!
- The number is greater than 9: you don't need this rule because numbers 0-9 have zero tens and all these numbers get crossed out later in the problem when you cross out all the numbers that have an
even number in the tens.
- The number is not a multiple of ten: you don't need this rule because you have already crossed out all the multiples of tens when you crossed out all the even numbers.
- The number is not a multiple of 11: you don't need this rule because at the end when there are three numbers left (35, 77 and 91) if you used this rule you would only cross out 77 and still have
two numbers left and need to use a 5th rule to choose between them so it is better for the last rule to be 'Its ones digit is larger than its tens digit' as this gets rid of two numbers and leaves
the right answer.
Cong also explained clearly how he went about the problem:
The number is 35. The way I worked it out is:
I started with 'The number is a multiple of 7' : With that clue only 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 91 and 98 are left.
Next, 'The number is odd' : With that clue only 7, 21, 35, 49, 63, 77 and 91 are left.
Next, 'Its ones digit is larger than its tens digit' : With that clue only 35 and 49 are left.
Lastly, 'Its tens digit is odd' left me with 35.
The clues that you need are:
The number is a multiple of 7.
The number is odd.
Its ones digit is larger than its tens digit.
Its tens digit is odd.
The clues that you don't need are:
The number is greater than 9.
The number is not a multiple of 10.
The number is not a multiple of 11.
The number is less than 200.
The number is 35.
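The four useful clues can also be checked with a short brute-force search over the numbers 1 to 100 (an illustration added here, not part of the pupils' solutions):

```python
# Keep only the numbers from 1 to 100 that satisfy the four useful clues.
candidates = [
    n for n in range(1, 101)
    if n % 7 == 0                    # a multiple of 7
    and n % 2 == 1                   # odd
    and n % 10 > (n // 10) % 10      # ones digit larger than tens digit
    and (n // 10) % 2 == 1           # tens digit odd
]
print(candidates)   # -> [35]
```

Only 35 survives all four clues, agreeing with the solutions above.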
Thank you in particular to Vasil from Slivern, Bulgaria; Isobel from Springfield Primary; and Alex who also sent us clear solutions. | {"url":"http://nrich.maths.org/5950/solution?nomenu=1","timestamp":"2014-04-16T22:20:16Z","content_type":null,"content_length":"8801","record_id":"<urn:uuid:5315ab68-b516-4fdb-a9c7-71258989e3ba>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00429-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plainfield, NJ Algebra 1 Tutor
Find a Plainfield, NJ Algebra 1 Tutor
...I also work part time at a tutoring company. I have experience tutoring students from kindergarten to advanced mathematics at the undergraduate level. I believe that developing a "number sense"
is very important to succeed in math and spend more time developing this sense rather than having students memorize formulas and algorithms.
16 Subjects: including algebra 1, calculus, algebra 2, geometry
...You will see your grades go up from the very first meeting. I have never had a disappointed student, so press the button and see your grades soar. I have received a 100% on all the
algebra qualifications. I have tutored over 20 people in Algebra II in the last few years.
13 Subjects: including algebra 1, statistics, geometry, trigonometry
...I am able to be reached at most hours of the day for questions! I have experience doing research in a genetics lab and would be able to help anyone struggling in this subject. I've been using
Apple products ever since I received my iPhone years ago. I currently use a MacBook Air and have taught friends and family to use their iPhones, Apple computers, iPads, and iTunes.
26 Subjects: including algebra 1, chemistry, organic chemistry, physics
...I will work with you until you learn the concept, task, or skill, trying various approaches to find the one that works best with your learning style. If you do not learn it, I did not do my
job. I have been drawing since I was a child. I took classes at Pratt Institute in Brooklyn, NY.
21 Subjects: including algebra 1, reading, algebra 2, geometry
I am a retired Bell Labs researcher who really enjoys teaching. I have taught courses both at Bell Labs and at Rutgers. I've taught both groups of secretaries and PhDs.
18 Subjects: including algebra 1, physics, calculus, geometry
Homework Help
Posted by Sella on Wednesday, July 11, 2012 at 9:52am.
Mang Pedring wanted to construct a square table such that the length of its side is 30 cm longer than its height.
What expression represents the area of the table? Suppose the table is 90 cm high, what would be the area?
• algebra - Jai, Wednesday, July 11, 2012 at 10:15am
Let x = height of table
Let x+30 = length of table
Recall that area of square is given by
A = L^2
where L = length of one side.
A = (x+30)^2
A = x^2 + 60x + 900
If the table is 90 cm high, then the length of the square is 120 cm. Substituting this into the equation for the area,
A = 120^2 + 60(120) + 900
A = 22500 cm^2
hope this helps~ :)
..btw, are you Filipino too? (I got curious because of the Filipino-sounding name Mang Pedring.) But you don't have to answer if you don't want to. :)
• algebra - Sella, Wednesday, July 11, 2012 at 10:34am
yes, I'm Filipino. btw, thanks :)
• algebra_correction - Jai, Wednesday, July 11, 2012 at 11:03am
*wait. I had a correction:
If the table is 90 cm high, then the length of the square is 120 cm. Substituting this into the equation for the area,
A = 90^2 + 60(90) + 900
A = 14400 cm^2
*sorry about that! The x value substituted must be 90, not 120. x__x
..haha wow, that's great, a fellow Filipino :D :D :D
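To double-check the corrected arithmetic above, here is a quick sketch (our own addition):

```python
# Area of the square table: side = height + 30 cm, so A = side^2.
def area(height_cm):
    side = height_cm + 30
    return side ** 2

x = 90
expanded = x**2 + 60*x + 900        # (x + 30)^2 multiplied out
print(area(x), expanded)            # 14400 14400 -> both give 14400 cm^2
```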
Relevance of the complex structure of a function algebra for capturing the topology on a space.
This question is the outcome of a few naive thoughts, without reading the proof of Gelfand-Neumark theorem.
Given a compact Hausdorff space $X$, the algebra of complex continuous functions on it is enough to capture everything on its space. In fact, by the Gelfand-Neumark theorem, it is enough to consider
the commutative C*-algebras instead of considering compact Hausdorff spaces.
The important thing here is that C*-algebras have a complex structure. The real structure is not enough. Given the algebra $C(X, \mathbb R)$ of real continuous functions on
$X$, the algebra $C(X, \mathbb{C})$ is simply the direct sum $C(X, \mathbb R) \oplus C(X, \mathbb R)$ as a Banach algebra (and this can be given a complex structure, seeing it as the
complexification). But to obtain a C*-algebra, we need an additional *-structure, and the obvious way, i.e., defining $(f + ig)^* = (f - ig)$, does not work out. More precisely, the C* identity does
not hold.
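To make the last claim concrete, here is a small numeric illustration (my own addition, not part of the original question). Model constant functions in $C(X, \mathbb R) \oplus C(X, \mathbb R)$ as pairs $(f,g)$ standing for $f+ig$, take the direct-sum norm to be the max of the sup norms (one common convention), and use the involution $(f+ig)^* = f-ig$: the element $1+i$ already violates the C*-identity $\|a^*a\|=\|a\|^2$.

```python
# Pairs (f, g) of constants stand for f + i g, with the direct-sum max norm.
def mult(a, b):
    (f, g), (h, k) = a, b           # (f+ig)(h+ik) = (fh - gk) + i(fk + gh)
    return (f*h - g*k, f*k + g*h)

def star(a):
    f, g = a
    return (f, -g)                  # (f+ig)* = f - ig

def max_norm(a):                    # candidate norm: max(|f|, |g|)
    f, g = a
    return max(abs(f), abs(g))

a = (1.0, 1.0)                      # the constant function 1 + i
lhs = max_norm(mult(star(a), a))    # ||a* a|| = ||(2, 0)|| = 2
rhs = max_norm(a) ** 2              # ||a||^2 = 1
print(lhs, rhs)                     # 2.0 1.0 -> the C*-identity fails
```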
So one cannot weaken (as it stands) the condition in the Gelfand-Neumark theorem that we need the algebra of complex continuous functions on the space $X$, since we do need the C* structure. Of
course, this is without an explicit counterexample. Which brings us to:
Qn 1. Please give an example of two non-homeomorphic compact Hausdorff spaces $X$ and $Y$ such that the function algebras $C(X, \mathbb R)$ and $C(Y, \mathbb R)$ are isomorphic (as real Banach algebras).
(Here I am hoping that such an example exists.)
Then again,
Qn 2. From the above it appears that the structure of complex numbers is involved when the algebra of complex functions captures the topology on the space. So how exactly is this happening?
(The vague notions concerning this are something like: the complex plane minus a point contains nontrivial $1$-cycles, so perhaps the continuous maps to the complex plane might capture all
the information in the first homology, etc.)
Note : Edited in response to the answers. Fixed the concerns of Andrew Stacey, and changed Gelfand-Naimark to Gelfand-Neumark, as suggested by Dmitri Pavlov.
fa.functional-analysis noncommutative-geometry
If you know a little algebraic geometry, an exercise toward the end of chapter 1 in Atiyah-Macdonald walks you through an attractive proof that a CH space is determined by its ring of real valued
functions. What the C* algebra formalism gets you in this context is the "image" of the functor $X \mapsto C(X, \mathbb{C})$ from CH spaces to $\mathbb{C}$-algebras. It is possible to formulate an
analogous condition for $\mathbb{R}$-algebras, but the entire theory is more awkward (though more powerful in some ways!). – Paul Siegel May 25 '10 at 23:48
Let me try to justify my last remark. Aside from the awkwardness of the definition, the Bott periodicity theorem for the K-theory of real C* algebras has period 8 rather than period 2 in the
complex case, and thus many arguments are much more arduous. But there is also room for more subtlety. For example, the complex structure of C* algebras is at some level responsible for the fact
that the Atiyah Singer index theorem is most naturally formulated for even dimensional manifolds. One way to formulate an odd dimensional analogue is to use real C* algebras. This is not without
applications. – Paul Siegel May 25 '10 at 23:57
@Paul: That is a nice exercise, but I am confused by "If you know a little algebraic geometry". I guess you mean because of the topology placed on the maximal ideal space? Also, thanks for
addressing some differences between the real and complex case. @Akela: In spite of Paul's comments, I must say I'm a little confused by the noncommutative-geometry tag in light of the fact that
there is nothing noncommutative in your question. – Jonas Meyer May 26 '10 at 2:40
@Jonas: In AM, $X$ is recovered from $C(X)$ as the subspace of $Spec(C(X))$ (with the Zariski topology) consisting of maximal ideals. So the exercise requires a casual familiarity with $Spec$,
which one usually accumulates via AG. For what it's worth, I support the NCG tag for this question. I don't always work directly with NC C*-algebras in a substantial way, but I usually tell people
I work in NCG because I often use $C(X)$ as a proxy for $X$. Maybe I am "mistagging" myself. :) – Paul Siegel May 26 '10 at 14:03
Makes sense. This isn't the first time I've heard of someone working in commutative NCG, but I still appreciate the novelty. – Jonas Meyer May 26 '10 at 19:34
3 Answers
Here is a slightly different, perhaps simpler take on showing that $C(X,\mathbb{R})$ determines $X$ if $X$ is compact Hausdorff. For each closed subset $K$ of $X$, define $\mathcal{I}_K$
to be the set of elements of $C(X,\mathbb{R})$ that vanish on $K$. The map $K\mapsto\mathcal{I}_K$ is a bijection from the set of closed subsets of $X$ to the set of closed ideals of $C(X,
\mathbb{R})$. Urysohn's lemma and partitions of unity are enough to see this, with no complexification, Gelfand-Neumark, or (explicitly) topologized ideal spaces required. I remember doing
this as an exercise in Douglas's Banach algebra techniques in operator theory in the complex setting, but the same proof works in the real setting.
Here are some details in response to a prompt in the comments. (Added later: See Theorem 3.4.1 in Kadison and Ringrose for another proof. Again, the functions are assumed complex-valued
there, but you can just ignore that, read $\overline z$ as $z$ and $|z|^2$ as $z^2$, to get the real case.)
I will take it for granted that each $\mathcal{I}_K$ is a closed ideal. This doesn't require that the space is Hausdorff (nor that $K$ is closed). Suppose that $K_1$ and $K_2$ are unequal
closed subsets of $X$, and without loss of generality let $x\in K_2\setminus K_1$. Because $X$ is compact Hausdorff and thus normal, Urysohn's lemma yields an $f\in C(X,\mathbb{R})$ such
that $f$ vanishes on $K_1$ but $f(x)=1.$ Thus, $f$ is in $\mathcal{I}_{K_1}\setminus\mathcal{I}_{K_2}$, and this shows that $K\mapsto \mathcal{I}_K$ is injective. The work is in showing
that it is surjective.
Let $\mathcal{I}$ be a closed ideal in $C(X,\mathbb{R})$, and define $K_\mathcal{I}=\cap_{f\in\mathcal{I}}f^{-1}(0)$, so that $K_\mathcal{I}$ is a closed subset of $X$. Claim: $\mathcal{I}=\mathcal{I}_{K_\mathcal{I}}$.
It is immediate from the definition of $K_\mathcal{I}$ that each element of $\mathcal{I}$ vanishes on $K_\mathcal{I}$, so that $\mathcal{I}\subseteq\mathcal{I}_{K_\mathcal{I}}.$ Let $f$ be
an element of $\mathcal{I}_{K_\mathcal{I}}$. Because $\mathcal{I}$ is closed, to show that $f$ is in $\mathcal{I}$ it will suffice to find for each $\epsilon>0$ a $g\in\mathcal{I}$ with $\
|f-g\|_\infty<3\epsilon$. Define $U_0=f^{-1}(-\epsilon,\epsilon)$, so $U_0$ is an open set containing $K_\mathcal{I}$. For each $y\in X\setminus U_0$, because $y\notin K_\mathcal{I}$ there
is an $f_y\in \mathcal{I}$ such that $f_y(y)\neq0$. Define $$g_y=\frac{f(y)}{f_y(y)}f_y$$ and $U_y=\{x\in X:|g_y(x)-f(x)|<\epsilon\}$. Then $U_y$ is an open set containing $y$. The closed
set $X\setminus U_0$ is compact, so there are finitely many points $y_1,\dots,y_n\in X\setminus U_0$ such that $U_{y_1},\ldots,U_{y_n}$ cover $X\setminus U_0$. Relabel: $U_k = U_{y_k}$ and
$g_k=g_{y_k}$. Let $\varphi_0,\varphi_1,\ldots,\varphi_n$ be a partition of unity subordinate to the open cover $U_0,U_1,\ldots,U_n$. Finally, define $g=\varphi_1 g_1+\cdots+\varphi_n
g_n$. That should do it.
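For completeness (my own addition, not part of the original answer), the final estimate runs as follows. For every $x\in X$, since $\varphi_0$ is supported in $U_0$ where $|f|<\epsilon$, and each $\varphi_k$ is supported in $U_k$ where $|f-g_k|<\epsilon$,
$$|f(x)-g(x)|=\Big|\varphi_0(x)f(x)+\sum_{k=1}^n\varphi_k(x)\big(f(x)-g_k(x)\big)\Big|\le\epsilon\sum_{k=0}^n\varphi_k(x)=\epsilon<3\epsilon,$$
so the bound is in fact even better than the $3\epsilon$ target.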
In particular, a closed ideal is maximal if and only if the corresponding closed set is minimal, and because points are closed this means that maximal ideals correspond to points. (Maximal
ideals are actually always closed in a Banach algebra, real or complex.)
To wrap things up, I might as well purchase and wear a donkey costume.. – Akela May 25 '10 at 22:09
I'm not sure how to respond to that, but I hope you don't think I am disparaging your question. – Jonas Meyer May 25 '10 at 22:25
No I didn't mean that. I got thinking into all absurd directions with the wrong premises. I like your answer. This is a complete solution modulo the solution of the exercise. Would you
please include a sketch of the proof? Also it may be a good idea to include explicitly that the maximal ideals in C(X,R) are precisely the points of the space. – Akela May 25 '10 at
I mean I accepted your answer. Since it's an accepted answer, it would be more helpful to any readers when it's complete with more details .. – Akela May 25 '10 at 22:30
Sure, I'll add some details. – Jonas Meyer May 25 '10 at 22:36
Qn 1 is trivial because you said "topological spaces" rather than "compact Hausdorff spaces" (or "locally compact Hausdorff" would be okay, I guess). Simply $\lbrace 0,1\rbrace$ with the
order topology and $\lbrace 0\rbrace$ will do.
If we refine to "compact Hausdorff spaces" then I take $C(X,\mathbb{R})$ and $C(Y,\mathbb{R})$, complexify, and apply GN to recover $X$ and $Y$, thus I claim that no counterexample exists.
I think that the issue stems from a confusion between the complexification of a real algebra and the underlying real algebra of a complex one. Since I can recover $C(X,\mathbb{C})$ from $C(X,\mathbb{R})$, all the information about the former is captured in the latter. However, since I can find several complex structures on the same real algebra, $C(X,\mathbb{C})_{\mathbb{R}}$ does not contain all the information that is contained in $C(X,\mathbb{C})$. There is a reason why they are called forgetful functors!
So $C(X,\mathbb{R})$ is not the underlying real algebra of $C(X,\mathbb{C})$, but $C(X,\mathbb{C})$ is the complexification of $C(X,\mathbb{R})$.
Sorry, I unconsciously typed "topological" instead of "compact Hausdorff". How do you get the *-structure after complexification? – Akela May 25 '10 at 14:42
The next-to-last R should be a C. – S. Carnahan♦ May 25 '10 at 15:01
@Scott: Thanks, fixed. @Akela: since C(X,R)oC = C(X,C), you get it from the same place as normally. – Andrew Stacey May 25 '10 at 20:14
I understand that C(X,R)oC is a complex Banach algebra. What I do not understand is how you gave it the structure of a C*-algebra, as "normal". As I understand Gelfand-Naimark theorem,
you need commutative C*-algebras, not commutative Banach algebras. – Akela May 25 '10 at 20:21
Dear Akela, $C(X,\mathbb R)\otimes_{\mathbb R} \mathbb C$ is isomorphic to $C(X,\mathbb C)$ as a $\mathbb C$-algebra. This is all that is needed to then recover $X$ from the usual
Gelfand--Neumark theorem: one takes all maximal ideals, topologizes them via the weak topology, and this is $X$, by Gelfand--Neumark. – Emerton May 25 '10 at 20:55
The noncommutative Gelfand-Neumark theorem can be stated and proved for real C*-algebras. See Corollary 4.10 in Johnstone's book “Stone Spaces”.
P.S. “Gelfand-Naimark” theorem is a misnomer. Take a look at the original paper and note how Gelfand and Neumark spell their names. In fact, they consistently use these spellings
throughout all of their non-Russian papers.
I don't think 'oxymoron' means what you think it means. – HJRW May 25 '10 at 22:04
"Misnomer" instead of "oxymoron" seems apt. However, it is confusing to us ignorant of Russian and the subtleties of its transliteration to Latin characters, because the name is written
as "Naĭmark" in some of his other work. (For example, springerlink.com/content/v71158h17p227p39) I believe some choose "Gelfand-Naimark" for simplicity and consistency, but I appreciate
your point. – Jonas Meyer May 25 '10 at 22:15
@Henry: Inconceivable! – Yemon Choi May 25 '10 at 22:19
@Henry: By an oxymoron I meant “a combination of contradictory or incongruous words”. – Dmitri Pavlov May 26 '10 at 4:46
@Jonas: It's either Gelfand-Neumark if you cite their non-Russian papers, or Gelʹfand-Naĭmark if you cite one of their Russian papers and use the AMS transliteration system. The
spelling Naimark is incorrect and should never be used. As for the cited errata, note that the original paper springerlink.com/content/n3m5656p81712676 lists both spellings. – Dmitri
Pavlov May 26 '10 at 5:23
Ivo Dinov
UCLA Statistics, Neurology, LONI
│ STAT 13 (2a, 2b, 2c) │ Introduction to Statistical Methods for the Life and Health Science │
│ Department of Statistics │ Instructor: Ivo Dinov │
│ Homework 6 │ Due Date: Friday, Dec. 05, 2003 │
• (HW_7_1) [This is a creative project] You have to gather, analyze and present the data, the results and the conclusions that follow from your study. It is completely up to you to come up with an
interesting project that you need to complete by the deadline. Here are the basic requirements your project must satisfy:
□ Please submit your project nicely typeset, including text/tables/graphs, in the form according to our HW assignment policies. No hand-written reports will be graded.
□ Examples of projects (this is not an exclusive list):
☆ Investigate the change of the level of various disinfectants, by-products, contaminants, chemicals, micro-organisms, pesticides, radioactive contaminants or other substances which may be
added or naturally occurring in H₂O, across the years (say 1996-2001) in our tap drinking water.
☆ Study the dynamics of the human populations for the past 100 years. Make predictions for the future.
☆ Analyze the prevalence or gender preference of one particular type of cancer.
☆ Study the effects of overfishing.
☆ Crime rates, geographic distributions and severity.
☆ Education changes in the past decade.
☆ Stock market volatility.
□ Examples of online resources containing interesting data:
□ Format: Include the regular HW project cover page. Start with a one paragraph abstract, followed by an intro/background of the problem, methods, results, discussion/conclusion and
acknowledgements/references. Clearly state the problem you have chosen to investigate. List the resources you used to come up with the project and reference all sources you used to complete
the project.
☆ Clearly state your hypotheses, prior to interrogating the data.
☆ Use statistical techniques from the list of techniques we have discussed in Stats 13 to convey whether or not there is statistical evidence in support of your original hypotheses (e.g.,
normal approximation, confidence intervals, hypothesis testing, linear regression, analysis of variance, goodness-of-fit, etc.).
☆ Explicitly state your approach to answer your research hypotheses. Write all formulas/tests/statistics you need.
☆ Interpret your statistical (numerical) results in plain, non-technical language. Write conclusions and discussions at the end of your report and acknowledge outside help. Describe how this project
can be extended in the future.
☆ One or two people can work on a project as a team. If two work together, both must have contributed equally to the completion and submit separate copies of the project, with their names
on top (the names of both students should be on both papers).
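As a minimal illustration of one of the techniques listed above (a sketch of our own with made-up numbers, not part of the assignment), here is a 95% confidence interval for a mean via the normal approximation:

```python
# A 95% confidence interval for a mean using the normal approximation.
# The data below are hypothetical, purely for illustration.
import math

data = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1]
n = len(data)
mean = sum(data) / n
var = sum((x - mean) ** 2 for x in data) / (n - 1)    # sample variance
se = math.sqrt(var / n)                               # standard error
z = 1.96                                              # 95% normal quantile
lo_ci, hi_ci = mean - z * se, mean + z * se
print("mean = %.3f, 95%% CI = (%.3f, %.3f)" % (mean, lo_ci, hi_ci))
```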
│ Ivo Dinov's Home │ http://www.stat.ucla.edu/~dinov │ © I.D.Dinov 1997-2003 │
Physics Forums - View Single Post - Power transferred from the sources to the electromagnetic field
In essence, this is equivalent to P = V * I in a conventional circuit, where voltage V and current I are independent of each other as determined by the circuit and the source drive. So the coupling
that you are worried about is not really an issue.
Being careful reveals that [itex]\vec{E}\cdot\vec{J}[/itex] is power density, not power. It comes from applying the divergence theorem to the Poynting vector. The derivation starts from two Maxwell
equations [tex]\nabla\times\vec{E}=-\vec J_m,\\
\nabla\times\vec{H}=\vec J_e[/tex] where J_e is current density and J_m is effective magnetic current density. Multiply the first equation by H* and the second by E, take their difference and apply a
vector identity to get [tex]\nabla\cdot(\vec E\times\vec H^*) + \vec E\cdot\vec J_e^*+\vec H^*\cdot\vec J_m=0.[/tex] Integrate over all space and apply the divergence theorem to get
[tex]\oint \vec E \times \vec H^* \cdot d\vec A + \int(\vec E\cdot\vec J_e^*+\vec H^*\cdot\vec J_m)dV=0.[/tex] The last term is zero in the absence of magnetic material, so the volume integral of
[itex]\vec E \cdot \vec J_e[/itex] equals the surface integral of the Poynting vector, or, in other words, radiated power. That makes EJ itself a power density.
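The vector identity invoked above, div(E x H) = H . curl(E) - E . curl(H), can be spot-checked numerically (my own addition; the sample fields below are arbitrary smooth choices, not physical solutions):

```python
# Numerical spot-check of the identity div(E x H) = H . curl(E) - E . curl(H)
# at one point, via central finite differences on arbitrary smooth fields.
import math

def E(p):
    x, y, z = p
    return (y * z, x * x, math.sin(x * y))

def H(p):
    x, y, z = p
    return (z * z, x * y, math.cos(y))

h = 1e-5

def partial(F, i, j, p):                      # d F_i / d x_j at p
    q1, q2 = list(p), list(p)
    q1[j] += h
    q2[j] -= h
    return (F(q1)[i] - F(q2)[i]) / (2 * h)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def curl(F, p):
    return (partial(F, 2, 1, p) - partial(F, 1, 2, p),
            partial(F, 0, 2, p) - partial(F, 2, 0, p),
            partial(F, 1, 0, p) - partial(F, 0, 1, p))

def div(F, p):
    return sum(partial(F, i, i, p) for i in range(3))

p = (0.3, 0.7, 1.1)
S = lambda q: cross(E(q), H(q))               # E x H at the point q
lhs = div(S, p)
rhs = dot(H(p), curl(E, p)) - dot(E(p), curl(H, p))
print(abs(lhs - rhs) < 1e-6)                  # True
```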
Math and the Fantastic: A Survey
Mathematics as Plot
Science fiction deals with speculative science, and given how close math is to science, it's no surprise that some science fiction involves creative uses of math.
One anthology of short stories features several of this kind. For example, the title of Larry Niven's “Convergent Series” refers to a series of numbers such as 1/2 + 1/4 + 1/8 … The numbers get so
small, so quickly, that in some sense they do “add up” to one, even though there are infinitely many of them. The narrator invokes a similar process to banish a pesky demon.
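A quick numerical sketch (our own addition) shows how fast those partial sums approach 1:

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... ; after n terms the sum is 1 - 2**-n.
total = 0.0
for k in range(1, 31):
    total += 0.5 ** k
    if k in (1, 2, 5, 10, 30):
        print(k, total)
# after 30 terms the sum is within a billionth of 1
```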
Mathematics as Distraction
Niven opens his story with a discussion of magic, only reaching the mathematical image at the very end. In contrast, stories can set up a mathematical tone only to have the numbers be overshadowed
later on. The anthology's first
story, Isaac Asimov's “1 to 999,” at first searches for meaning in a plain sequence of numerals, 1 through 999. In fact, there is no mathematical significance to them at all—the twist of the story is
that the numbers are meant to be interpreted as words. Here, mathematics provides a roadblock, rather than a solution.
Another short story (in a different anthology) called “The Mathematicians” also uses this tactic. Over three-and-a-half pages, Arthur Feldman builds to a punchline of “they were great mathematicians.
So, they multiplied.”
There's a case to be made for including the use of one particular number in the Hitchhiker's Guide to the Galaxy series under this category, rather than strictly “humor.” The characters' (and readers') futile attempts to ascribe meaning to a number that appears out of the blue could qualify this as a deflection
on Douglas Adams' part (although not, like Asimov, to a new meaning that eventually solves the mystery).
Mathematics as Characterization
This is by no means limited to science fiction or fantasy, but portraying someone as a mathematician can open the door to various nerdy/aloof stereotypes. One of Greg Egan's novels uses lots of mathematics as a plot device (which goes over my head), but the final pages portray a character seeking out the isolation of working on math. Similarly, Feldman's alien mathematicians
are not initially aware of human emotions.
Cultural allusions
At least in series set in some variant of the real world as we know it, superstitions attached to real-life numbers are likely to carry over to the fantastic world. Just as seven is considered a
lucky number by many people today, seven is considered “the most powerfully magical number” in Harry Potter. Among the many uses of seven in the books are the seven school years of Hogwarts, with the real-world significance of there being seven books in the series. Similarly, thirteen is viewed as an
unlucky number in Hogwarts as well as in reality.
Some hints in the Wheel of Time series point to that world being a far-flung past or future of our own, in which case the seven Ajahs of the White Tower or the Seven Towers of Malkier might qualify.
Numbers as organizing themes
According to Planet Narnia, there's an underlying organizing principle for the Chronicles of Narnia that also explains why that series has seven books. D. J. MacHale's Pendragon Adventure divides the multiverse into ten territories, and there are ten corresponding books. Each territory has a Traveler, each Traveler has a ring, for several corresponding sets of tens. (This backfired on
me, reading it; at one point I was convinced that a message that “the bad guy can only be defeated when he thinks he's won” referred to him needing to wear one ring on each finger. Nope.)
On the other hand, Garth Nix's Seventh Tower series (there are seven towers, one for each color of the rainbow) has six books.
Alien units
Aliens are not from Earth; it doesn't make sense to expect their systems of measurement to coincide with Earth units. In the Animorphs series, the alien Ax consistently notes that Earth units are alien to him. From book 18, The Decision:
<Actually, it has only been one of your hours and eighteen minutes,> [Ax] said helpfully.
<One of our hours,> Marco said. <You know, they really are our hours now, too. This is Earth. You're stuck here. Go ahead and set your watch to local time.>
I'm surprised I can't think of more aliens who use base 8 or 12 or anything other than 10 off the top of my head. This is probably a function of my taste, as demonstrated by the overlap with the next
large category...
Numbers and humor
Playing the “different units” game for laughs can get even more outlandish than Ax and Marco, above. One humorous series does this as well. When characters aren't asking each other to “give me eighteen,” we have lines like:
“On the planet Zhilqueeg, for example, there are eighty neehees in a gajug, fifty gajugs in a twip, twenty-two twips in a yuryur, and so on, bringing about the common expression on Zhilqueeg: ‘There just aren't enough twips in the yuryur.’”
At least two of those three examples have the decency to be in multiples of ten, a trait not shared by the Harry Potter series' use of 29 Knuts to a Sickle, and 17 Sickles to a Galleon. (Although I've heard this is actually supposed to be a parody of the British monetary system.)
The first few Discworld books written make a running joke of characters not saying the number that's one more than seven, because of its powerful magical properties.
The Hitchhiker's Guide to the Galaxy's use of a number might qualify, too (see above).
Misuses of math
Not all uses of numbers are, in my opinion, very well-written ones. Examples that might otherwise fall under a different category can stretch the limits of belief.
In Animorphs, the alienness of units is called into question when the protagonists receive the power to become animals. This comes with a warning: they will be trapped in the form of an animal they morph into if
they stay longer than two hours. Since this technology was invented by Ax's species, and we know that Earth units are alien to them, it's implausible that the limit should have worked out to exactly
two of our Earth hours.
Inconsistency, as well as implausibility, can plague descriptions of alien units. For humorous effect, one series defines a “standard galactic year” as “About twenty Earth minutes” (to exaggerate a character's problems with interest rate) in book two. By book four, however, “100 Gala-years” is being used
interchangeably with “a hundred years.”
In The Marvelous Land of Oz, the characters need to “count seventeen by twos” before they can use magical wishing pills. They first dismiss this as impossible, but then realize that you can “start counting at a half of
one...for twice one-half is one, and if you get to one it is easy to count from one up to seventeen by twos.” I'd agree with the latter part, but generally doubling is not undertaken when counting by
twos. (No one suggested starting at minus one and proceeding to one, which would have been a much more satisfying tactic in my book.)
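For what it's worth, both routes to seventeen can be written out (our own sketch): the book's doubling trick, and the minus-one start suggested above.

```python
# Two ways to "count seventeen by twos".
doubling = [0.5] + list(range(1, 18, 2))     # 1/2, then 1, 3, 5, ..., 17
from_minus_one = list(range(-1, 18, 2))      # -1, 1, 3, ..., 17
print(doubling[-1], from_minus_one[-1])      # 17 17 -> both reach seventeen
```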
Moreover, the “underlying theme” of an alien world can feel a little too much like home. In the Mistborn trilogy, sixteen is introduced as an important number for the planet (and, Sanderson's other works suggest, the universe as a whole); there are more or less sixteen basic metals used in the magic system,
and characters stricken by a mysterious illness fall sick for sixteen days. Fine, but having the protagonist realize “oh my goodness! Sixteen percent of our armies, exactly, got sick! This is a sign!
” felt a little strained to me. Perhaps this is just my mathematical nitpickery coming into play, but sixteen percent makes the slight leap of faith that the characters would be as familiar with
percents and the base-10 system as we are. Since they're humans, that's not too farfetched, but I would have preferred “one in sixteen,” which feels a little more natural.
Old School
It's possible that the two biggest-deal uses of math in fantasy or science fiction both came in the 1800s. One of them is Flatland, which uses a mathematical adventure in the second dimension to poke fun at Edwin Abbott Abbott's Victorian society. The other began life as a short draft, before its author, a conservative math
professor, added some more important scenes.
And the rest is history
1 comment:
1. Well, and then there's Lewis Carroll with both Alice books packed with logic and mathematical puzzles. I also recall his tale "What the Tortoise said to Achilles". More logic than math, but worth
the quote.
John Updike's "Roger's Version" doesn't really qualify as sci-fi/fantasy, but one of its main characters is a computer scientist who wants to prove God's existence using mathematical models.
Borges' tale "The Babylon Lottery" (from his book "Ficciones", 1941) is all about combinatorics. Another tale, "The Book of Sand" (from the eponymous collection, 1975), deals with the problem of infinity.
[FOM] Questions on Cantor
Frode Bjørdal frode.bjordal at ifikk.uio.no
Wed Jan 30 11:19:24 EST 2013
Thank you very much for your useful accounts concerning well foundedness
and well orderings, Vaughan. Indeed, from an iterative conception of sets
the notions undeniably may be taken to have a conceptual priority ordering
as you suggest. For me it was not quite clear what you meant as Mirimanoff
did not prohibit what he called “ensembles extraordinaires” that are sets
that do not abide by the iterative conception.
When I wrote that EPR had been dismantled by physics I was very hesitant as
to whether I should include a modifier such as “prevailing”, but I decided
against it as I considered an impetus to a discussion concerning
foundational problems in physics to be somewhat untoward for this list. It
was nevertheless very interesting to me to read your considerations
concerning such issues. As many others, I do not find the prevailing
foundational ideas such as the Copenhagen interpretation entirely satisfactory.
Frode Bjørdal
2013/1/29 Vaughan Pratt <pratt at cs.stanford.edu>
> (While the following strays from the original question about well-founded
> sets, this comparison between QM, that pure probability enters into
> physics, and the well-ordering theorem, that every set can be well-ordered,
> may be of independent interest to FOM.)
> On 1/27/2013 5:22 PM, Frode Bjørdal wrote:
>> By the way, it took many years for the EPR paradox to be dismantled in
>> physics.
> Certainly; in fact the "dismantling" is still on-going. Much the same can
> be said about the well-ordering theorem.
> Koenig's proffered refutation of the theorem was rejected only because it
> made unjustified use of a result from Bernstein's thesis three years
> earlier, pointed out by Zermelo within a day of Koenig's presentation. The
> status of the theorem itself took decades to emerge even as far as it has.
> As a proposition equivalent (in ZF at least) to Choice, we understand
> today that it is independent of ZF, although my impression is that some
> constructivists consider the well-ordering theorem false regardless of
> their opinion about Choice.
> Re-reading the history of the long-running Bohr-Einstein debate at
> http://en.wikipedia.org/wiki/Bohr%E2%80%93Einstein_debates
> I see I may have misremembered which of Einstein's arguments was disposed
> of within a day when I made the comparison with Koenig's 1904 argument.
> The incident in question took place three years into the debate but five
> years before the 1935 EPR paper, namely at the 1930 Solvay meeting. There
> Einstein presented his box argument, which sought to establish the precise
> time and energy of light emitted from a spring-supported box through a
> shutter by (i) opening the shutter arbitrarily briefly to allow a photon to
> escape, (ii) adding mass m to the box to exactly compensate for the
> resulting loss of mass of the box by bringing the box back down to its
> original height, and (iii) inferring the energy as E = mc^2. Since
> (according to Einstein) the shutter speed can be arbitrarily fast and m
> measured to arbitrary accuracy, we have a violation of Heisenberg's
> uncertainty principle as applied to time and energy.
> The article quotes Leon Rosenfeld:
> "It was a real shock for Bohr...who, at first, could not think of a
> solution. For the entire evening he was extremely agitated, and he
> continued passing from one scientist to another, seeking to persuade them
> that it could not be the case, that it would have been the end of physics
> if Einstein were right; but he couldn't come up with any way to resolve the
> paradox. I will never forget the image of the two antagonists as they left
> the club: Einstein, with his tall and commanding figure, who walked
> tranquilly, with a mildly ironic smile, and Bohr who trotted along beside
> him, full of excitement...The morning after saw the triumph of Bohr."
> Bohr chose to attack Einstein's claim that the mass defect could be
> measured to arbitrary accuracy. This defect being h/(c*lambda) per photon
> of wavelength lambda, if the emitted photon is visible light, say with
> lambda = 500 nm, then an electron (of mass 9.11 x 10^{-31} kg) is more than
> two hundred thousand times heavier than the reduction in box weight
> Einstein is proposing to measure. Einstein would then need to propose a
> method of measuring such a tiny displacement without running afoul of
> Heisenberg uncertainty at some point in the method, a very tall order.
> Bohr could however have argued just as well that the shutter cannot be
> opened arbitrarily briefly if there is to be any reasonable chance of a
> photon escaping. That's simpler than going through the math of the masses
> involved.
> Simpler yet is to point out that classical reasoning like Einstein's is
> unsound in the quantum world. I don't know when that point of view became
> sufficiently accepted as to make it a defense against Einstein's arguments,
> but it seems pretty standard today.
> That doesn't mean that it has the force of logic, in fact as it stands
> it's circular. Had the EPR team thrown in the towel and turned to classical
> chaotic dynamics in competition with quantum chaotic dynamics they might
> have done better, see e.g. Arjendu Pattanayak's pages at
> http://www.people.carleton.edu/~apattana/Research/RiceTalk.html
> and
> http://www.people.carleton.edu/~apattana/Research/index.html
> Quantum mechanics is by no means a closed book yet, and not necessarily on
> the ideological grounds that seem to differentiate the various
> interpretations of quantum mechanics. There may be more hope for this than
> finding non-ideological grounds for deciding Choice and the Well-Ordering
> Theorem.
> Vaughan Pratt
> _______________________________________________
> FOM mailing list
> FOM at cs.nyu.edu
> http://www.cs.nyu.edu/mailman/listinfo/fom
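The mass-defect figure quoted above ("more than two hundred thousand times heavier") is easy to check numerically. The following is a sketch using rounded physical constants; the 500 nm wavelength is the example given in the email:

```python
# Mass equivalent of a 500 nm photon, m = h / (c * lambda), compared
# with the electron mass. Constants are rounded standard values.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
m_e = 9.11e-31   # electron mass, kg (as quoted in the email)
lam = 500e-9     # wavelength of the emitted photon, m

m_photon = h / (c * lam)   # ~4.4e-36 kg
ratio = m_e / m_photon     # ~2.1e5: "more than two hundred thousand"

print(f"photon mass equivalent: {m_photon:.2e} kg")
print(f"electron / photon mass ratio: {ratio:.2e}")
```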
Electromagnetic waves illustration: Origin and sources
Electromagnetic waves
When the wave theory of light first gained general acceptance, it was considered that waves were conveyed through a transparent medium which filled the whole of space, even a vacuum. This substance was called the ether.
Michael Faraday's move
A further step forward was made in 1845, when Michael Faraday showed that, under certain conditions, light waves passing through a material medium were affected by a magnetic field. Now by that time, it was known that there was an inseparable connection between magnetism and electricity. Faraday's experiment gave a strong hint that light might well have electrical properties.
It is James Clerk Maxwell's move
Some years later, the eminent mathematician and physicist James Clerk Maxwell became very interested in Faraday's work on electricity and eventually put forward a mathematical theory suggesting that an oscillating electric current should be capable of radiating energy in the form of electromagnetic waves (e.m. waves). An electromagnetic wave can be visualized as an oscillating electric force traveling through space accompanied by a similarly oscillating magnetic force in a plane at right angles to it. More importantly, Maxwell's equations led to the conclusion that such waves, if they existed, would travel with the same velocity as light, c = 3 × 10^8 m/s.
The work of Heinrich Hertz
Some twenty years after the publication of Maxwell's theory, the German scientist Heinrich Hertz showed that electromagnetic waves could indeed be produced by means of an oscillating electric spark. Moreover, he performed numerous experiments to demonstrate that the newly discovered waves underwent reflection, refraction, diffraction and interference: in short, they behave exactly like light waves but with a much greater wavelength. The inference was that light waves themselves were also electromagnetic, and further experimental and theoretical studies have since confirmed this belief. The work of Hertz was developed by Marconi and others, who laid the foundations of our present-day use of electromagnetic waves in radio communication.
The image shows the whole range of electromagnetic waves in order of increasing wavelengths. Any particular range of wavelengths is referred to as a waveband. You'll notice that the visible wavelengths occupy a very small band in the complete electromagnetic spectrum.
The SI unit of frequency
In recognition of the importance of Hertz's researches into electromagnetic waves, his name has been given to the unit of frequency.
The SI unit of frequency is called the hertz (Hz) and is equal to a frequency of 1 cycle per second (formerly written as 1 c /s).
The term hertz is not restricted to wave frequencies only but is used for any regularly event, e.g., the frequency of a pendulum, on alternating current, a musical note and so on.
Larger frequency units in common use are: the kilohertz (kHz) = 1000 Hz; the megahertz (MHz) = 10^6 Hz; the gigahertz (GHz) = 10^9 Hz.
Example. Calculate the frequency of a radio wave of wavelength 150 meter.
The velocity of all e.m. waves in free space = 3 × 10^8 m/s.
For a wave, v = frequency × wavelength = f λ, so
f = v/λ = 3 × 10^8/150 = 2 × 10^6 Hz
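The example calculation can be written as a one-line helper (a Python sketch; `C` is the free-space velocity used above):

```python
C = 3e8  # velocity of e.m. waves in free space, m/s

def frequency_hz(wavelength_m):
    """Frequency f = v / lambda for an e.m. wave in free space."""
    return C / wavelength_m

print(frequency_hz(150))  # 2000000.0 Hz, i.e. 2 MHz, matching the example
```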
The inverse square law for electromagnetic waves
wave intensity
Let us suppose that an area of 1 m² which forms part of the surface of a sphere of radius 1 m receives wave energy from a point source placed at the center.
If P is the energy passing through this unit area in joules per second, i.e, if P is the power in watts, then we say that P is the
wave intensity in W/m²
If the distance is increased to 2 m it is clear from the geometry of the figure that the same power is now spread over an area of 2² = 4 m². Consequently the wave intensity is reduced to P/4. Similarly, when the distance is increased to 3 m the wave intensity in W/m² is now only P/9.
Wave intensity is defined as the power transmitted per unit area of the wavefront
In general, the wave intensity at any distance x from a point source is given by the equation: I = P/x² (in W/m²).
The relation between wave intensity and distance is expressed by
the inverse square law, which states that,
The electromagnetic wave intensity from a point source in free space is inversely proportional to the square of the distance from the source.
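The P/x² relation above can be sketched directly (Python; P is the intensity measured at 1 m from the source, as in the text's figure):

```python
def intensity(P, x):
    """Wave intensity in W/m^2 at x metres from a point source, where P
    is the power through 1 m^2 measured at a distance of 1 m."""
    return P / x**2

P = 36.0
# Doubling the distance quarters the intensity; tripling it gives 1/9.
print(intensity(P, 1), intensity(P, 2), intensity(P, 3))  # 36.0 9.0 4.0
```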
Light as a special case
The measurement of light wave energy ('photometry') is complicated by the fact that light sources contain non-visible radiation as well, e.g., ultraviolet and infrared. Special units called lux (not W/m²) therefore have to be used, but all the same, the inverse square law still holds.
The inverse square law for radiation as represented above holds strictly for waves through free space from a point source. If the radiation passes through a material medium of some kind then the law is modified by the fact that some of the wave energy is progressively absorbed. The loss of power from this cause is described as attenuation.
Origin and sources of electromagnetic waves
The whole range of electromagnetic radiation pours on to the earth from the sun and other heavenly bodies in outer space. Those frequencies which are stopped by the earth's atmosphere have been detected by instruments in man-made satellites orbiting the earth.
Otherwise some of the main sources on earth are given in the table below.
│ Wave-Band │ Origin │ Sources │
│ X-radiation │ 1- High energy changes in electron structure of atoms. │ X-ray tubes │
│ │ 2- Decelerated electrons │ │
│ Gamma radiation │ Energy changes in nuclei of atoms │ Radioactive substances │
│ Ultraviolet Radiation │ Fairly high energy changes in electron structure of atoms │ 1- Very hot bodies, e.g., the electric arc. │
│ │ │ 2- Electric discharge through gases, particularly mercury vapor in quartz envelopes. │
│ Visible Radiation │ Energy changes in electron structure of atoms │ Various lamps, flames and anything at or above red-heat. │
│ Infrared Radiation │ Low energy changes in electron structure of atoms │ All matter over a wide range of temperature from absolute zero upwards │
│ Radio Waves │ 1- High-frequency oscillatory electric currents │ Radio transmitting circuits and aerial equipment │
│ │ 2- Very low energy changes in electron structure of atoms │ │
Re: st: RE: Economic Intuition of IV estimates
Re: st: RE: Economic Intuition of IV estimates
From Antoine Terracol <Antoine.Terracol@univ-paris1.fr>
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: Economic Intuition of IV estimates
Date Fri, 12 Feb 2010 13:23:00 +0100
Erasmo, in your IV estimation, the parameter on the instrumented variable gives you the causal effect of the said variable (provided the instruments are valid), NOT of the instruments.
Erasmo Giambona wrote:
Thank you very much David.
I am more concerned with the economic intution of the IV estimation.
In the first stage, I regress tangible on demand for tangible assets .
Then I use the predicted value in the second stage. But now this
predicted value smells more like demand for tangible assets. So, can I
say that the second stage is generally telling me how tangible assets
affect leverage and more specifcally how demand for tangible assets
affect leverage?
Thanks again,
On Fri, Feb 12, 2010 at 12:26 PM, Vincent, David <david.vincent@hp.com> wrote:
Most linear economic models describe a causal relationship, where the parameters 'b' are interpreted as the causal-effects of the x-variables on the expected value of y. So in your model, if b=0.5, then an increase in tangible assets of 1 would lead to an expected rise in the leverage of 0.5. The OLS estimator will consistently estimate the expected leverage given tangible assets, or at least provide a best linear approximation, but will not be a consistent estimator for the causal parameter 'b' when the error term is correlated with the rhs variable. In this case we use IV/2SLS with instruments that are correalted with tangible assets but uncorrelated with the error term to identify 'b'. For more info, see any econometrics text (verbeek/Greene etc).
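The two-stage logic described above can be simulated in a few lines (a numpy sketch, not Stata code; the data-generating process, coefficients, and variable roles are invented for illustration — `z` stands in for the demand instrument, `x` for tangible assets, `y` for leverage):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
z = rng.normal(size=n)                    # instrument (e.g. demand proxy)
u = rng.normal(size=n)                    # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)      # endogenous regressor
y = 0.5 * x + u + rng.normal(size=n)      # outcome; true causal b = 0.5

def fit(xv, yv):
    """OLS of yv on a constant and xv; returns (intercept, slope)."""
    X = np.column_stack([np.ones(len(yv)), xv])
    return np.linalg.lstsq(X, yv, rcond=None)[0]

b_ols = fit(x, y)[1]          # biased: u enters both x and y
a1, g1 = fit(z, x)            # first stage: x on the instrument
x_hat = a1 + g1 * z           # predicted values of x
b_iv = fit(x_hat, y)[1]       # second stage: recovers the causal b

print(f"OLS slope:  {b_ols:.3f} (biased away from 0.5)")
print(f"2SLS slope: {b_iv:.3f} (close to the true 0.5)")
```

As Antoine notes above, the recovered parameter is the causal effect of the instrumented variable, not of the instrument; also, standard errors from a manual second stage are invalid and need the usual 2SLS correction.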
David Vincent
Advanced Analytics Practice
Hewlett-Packard Limited
Mobile: +44 (0)7939 200 747
Internet: mailto:david.vincent@hp.com
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Erasmo Giambona
Sent: 12 February 2010 11:04
To: statalist
Subject: st: Economic Intuition of IV estimates
Dear Statalist,
I am trying to gain more economic intuition on IV estimation. I am
estimating the following model using a panel dataset of firm-year
Leverage Ratio = a + b*Tangible Assets+e.
Suppose Tangible Assets is endogenous. My instrument is a proxy for
Demand of Tangible Assets (Instrument1). Question 1) If I estimate the
model using 2SLS, how do I interpret "b"? In particular, is it
possible to state that "b" tells me how Demand of Tangible Assets
affects the leverage ratio? Question 2) Suppose I have an additional
instrument (e.g., Firm Age - Instrument 2) and let's assume this in
unrelated to Leverage Ratio. If I estimate the model again using both
Instruments, it seems that "b" does not tell me anymore ONLY how
Demand of Tangible Assets affects the leverage ratio. Is my
interpretation correct?
Any thoughts on the issue is highly appreciated,
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Given the equation (x+5)^2 + y^2 = 1
a) The Center C
b) Length of Major Axis
c) Length of Minor Axis
d) Distance from C to foci
The given equation, `(x+5)^2+y^2=1`, represents a circle.
a. Centre at (-5, 0).
b and c. Length of major axis = length of minor axis = diameter of the circle. The radius of the circle is 1 unit, so the diameter is 2 units (double the radius).
d. The distance between the foci and the centre is zero.
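A quick numeric check of this answer (a Python sketch): every point of the curve lies at distance 1 from (-5, 0), so both "axes" equal the diameter and the foci collapse onto the centre.

```python
import math

cx, cy, r = -5.0, 0.0, 1.0   # centre and radius read off (x+5)^2 + y^2 = 1
for deg in range(0, 360, 30):
    t = math.radians(deg)
    x, y = cx + r * math.cos(t), cy + r * math.sin(t)
    # each sampled point satisfies the original equation
    assert abs((x + 5) ** 2 + y ** 2 - 1) < 1e-12

print("diameter (= major = minor axis):", 2 * r)   # 2.0
print("distance from centre to foci:", 0.0)
```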
remainder theorem issue
August 7th 2012, 04:54 AM #1
Senior Member
Jul 2010
remainder theorem issue
When using the remainder theorem do I always plug the root in?
If yes why?
ie dividing by (x - 1) plug in a 1 and (x + 1) plug in -1
Re: remainder theorem issue
If you divide a polynomial $P(x)$ by $ax + b$, you will get a quotient of one less degree plus a remainder with a constant numerator. So

$$\frac{P(x)}{ax + b} = Q(x) + \frac{R}{ax + b} \quad\Rightarrow\quad P(x) = (ax + b)\,Q(x) + R$$

Now if you let $x = -\frac{b}{a}$, you have $ax + b = 0$, so

$$P\left(-\frac{b}{a}\right) = 0 \cdot Q\left(-\frac{b}{a}\right) + R = R$$

So yes, when you want to find the remainder when dividing a polynomial by $ax + b$, you plug in $-\frac{b}{a}$.
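A quick numeric illustration of the result above (Python/numpy; the polynomial is an arbitrary example): dividing P(x) = x^3 - 2x + 5 by (x - 1) leaves the same remainder as evaluating P(1).

```python
import numpy as np

P = [1, 0, -2, 5]                              # x^3 - 2x + 5, highest power first
quotient, remainder = np.polydiv(P, [1, -1])   # divide by (x - 1)

print(remainder)            # [4.]
print(np.polyval(P, 1))     # 4, i.e. P(1): the Remainder Theorem at work
```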
Re: remainder theorem issue
Thank you.
Osmolarity (abbreviated osmol) can be calculated with the following equations, using SI units (all quantities in mmol/L):
Calculated osmolarity = 2 Na + Glucose + Urea (all in mmol/L).
Calculated osmolarity = 2 Na + 2 K + Glucose + Urea (all in mmol/L).
Most covalent compounds do not dissociate or ionize in water. However, an ionic compound separates into cations and anions on dissolving. In some chemical processes it is important to know the
concentration of the total number of particles dissolved in solution. This is where the concentration unit osmolarity is useful. The osmolarity of a solute in solution refers to the Molarity of the
compound x the number of particles produced per mole of compound when it dissolves in water.
Example 8:
What is the osmolarity of a 1.0 M solution of ethyl alcohol (C2H6O) in water? Ethyl alcohol is a covalent compound that does not ionize in water. Each mole of C2H6O which dissolves produces only one
mole of particles. So, osmolarity = 1.0 M x 1 mole of particles/mole compound = 1.0 osmol C2H6O.
Example 9:
What is the osmolarity of a 1.0 M solution of CaCl2? When 1 mole of CaCl2 dissolves it dissociates: CaCl2 --> Ca^+2 + 2 Cl^-. So, osmolarity = 1.0 M x 3 moles ions/mole compound = 3.0 osmol CaCl2
Use sucrose (glucose) to increase osmolarity, and distilled/deionized water (e.g., from a MilliQ system) to reduce osmolarity. Sucrose isn't charged, so you don't have to worry about it messing with your reversal potentials, etc. It's also not a salt, so 1 mM of sucrose increases osmolarity by 1 mOsmol.
To calculate plasma osmolarity use the following equation (typical in the US. Normal serum osmolality levels are 285-295 mOsm/L):
• = 2[Na^+] + [Glucose]/18 + [BUN]/2.8, where [Glucose] and [BUN] are measured in mg/dL.
Simplifications are sometimes used:
• = 2[Na^+] + [Glucose]/20 + BUN/3 – 2
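The formulas above translate directly into code (a Python sketch; the sample inputs are illustrative values inside the normal ranges quoted, not patient data):

```python
def osmolarity_si(na, glucose, urea):
    """Calculated osmolarity, SI form: 2*Na + Glucose + Urea (mmol/L)."""
    return 2 * na + glucose + urea

def osmolarity_us(na, glucose_mgdl, bun_mgdl):
    """US form: 2*[Na] + [Glucose]/18 + [BUN]/2.8 (glucose, BUN in mg/dL)."""
    return 2 * na + glucose_mgdl / 18 + bun_mgdl / 2.8

print(osmolarity_si(140, 5, 5))              # 290 mOsm/L
print(round(osmolarity_us(140, 90, 14), 1))  # 290.0, inside 285-295
```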
How to prepare 6 N NaOH (sodium hydroxide)
6 N NaOH = 6 M NaOH. 6 M NaOH is 6 moles in 1 L. MW(NaOH) ≈ 40.0 g/mole, so: m = n x MW = 6 x 40.0 = 240 gram NaOH.
1 M HCl = 1 N HCl (36.5 gram/L)
Calculations Using Molarity Page 1 of 4
Calculations Using Molarity
There are several types of calculations that you need to be able to do with molarity.
First, you should be able to calculate the molarity if you are given the components of the solution.
Second, you should be able to calculate the amount of solute in (or needed to make) a certain volume of solution.
Third, you might need to calculate the volume of a particular solution sample.
Fourth, you might need to calculate the concentration of a solution made by the dilution of another solution. This and related calculations will be covered in a separate page.
In either of the first two cases, the amount of solute might be in moles or grams and the amount of solution might be in liters or milliliters. Please note that with molarity we are concerned with
how much solute there is and with how much solution there is, but not with how much solvent there is.
Examples (Ex. 3)
The examples that follow are also shown in example 3 in your workbook. You may want to look through the workbook examples on your own and if they make perfect sense to you, test your understanding by
doing exercise 4. Then check your answers below before continuing on to dilution calculations.
General Relationship (Ex. 3a)
Here is the general relationship that you will be using over and over again: the molarity is equal to the number of moles of solute divided by the volume of the solution measured in liters,
molarity (M) = moles of solute / liters of solution.
If you like to think of numbers and units instead of quantities, look at the second version of the equation, x M = y moles / z L. In this equation x, y and z represent numbers: 2, 6 and 3 for example.
Calculating Molarity from Moles and Volume (Ex. 3b)
Here we are given something to figure out. To get the molarity we need to divide the number of moles of NaCl by the volume of the solution. In this case that is 0.32 moles NaCl divided by 3.4 L, and
that gives 0.094 M NaCl.
Calculating Molarity from Mass and Volume (Ex. 3c)
This one is a bit more difficult. To get molarity we still need to divide moles of solute by volume of solution. But this time we're not given the moles of solute. We have to calculate it from the
mass of NaCl. We multiply 2.5 g NaCl by the conversion factor of 1 mole NaCl over the formula weight of NaCl, 58.5 g. That tells us that we have 0.0427 mole of NaCl. I kept an extra digit here
because we are not done with the calculations. When we are done I'll round off to two digits, the same as in the 2.5 g weight of NaCl. Now that we know the moles we can calculate the molarity. Moles
of solute (0.0427) divided by the volume of the solution (0.125 L) gives us 0.34 M NaCl.
Calculating Mass of Solute from Molarity (Ex. 3d)
This question asks how you would prepare 400. ml of 1.20 M solution of sodium chloride. In this case what you need to find out is how much NaCl would have to be dissolved in 400 ml to give the concentration that is specified. This amount is going to have to be in grams because we don't have any balances that weigh in moles. So there is more than one step to this problem.
The approach shown here is a conversion factor approach. It involves remembering that molarity is a relationship between moles and liters. 1.20 M NaCl means there is 1.2 moles of NaCl per 1.00 liter of solution. We can use that as a conversion factor to set up the calculation that relates 400. ml (or .400 L) to the appropriate number of moles of NaCl. So we take .400 L and multiply by the conversion factor to get .480 moles NaCl. The next step is to find out how many grams that is. We change from moles of NaCl to grams by using the formula weight. It comes out to 28.1 g NaCl. So the answer is that you would make the solution by dissolving 28.1 g NaCl in enough water to make 400 ml of solution.
There is also more than one way to do this problem. If you like the algebraic approach, you would write down the general equation shown in part a, substitute in the known values, solve for moles of NaCl (1.20 M × 0.400 L = 0.480 moles NaCl), and then change that into grams.
Calculating Moles of Solute from Molarity (Ex. 3e)
This question is a little easier. We do it the same way as the first step of the previous problem and then we stop. To find out how many moles of salt are contained in 300. ml of a 0.40 M NaCl
solution, we start with the volume in liters (0.300 L) and multiply it by the number of moles per liter of solution, which is 0.40 moles over 1.00 L. The answer is 0.12 moles of NaCl. This could also
have been done using algebra by writing down the general equation relating molarity, moles and liters, substituting the known values, and then solving the equation for moles.
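The worked examples (3b-3e) can be collected into two small helpers (a Python sketch using the same numbers and the text's formula weight of 58.5 g/mol for NaCl):

```python
def molarity(moles, litres):
    """M = moles of solute / litres of solution."""
    return moles / litres

def grams_needed(molar, litres, formula_weight):
    """Grams of solute required for the given volume and molarity."""
    return molar * litres * formula_weight

print(round(molarity(0.32, 3.4), 3))              # 0.094 M   (Ex. 3b)
print(round(molarity(2.5 / 58.5, 0.125), 2))      # 0.34 M    (Ex. 3c)
print(round(grams_needed(1.20, 0.400, 58.5), 1))  # 28.1 g    (Ex. 3d)
print(round(0.40 * 0.300, 2))                     # 0.12 mol  (Ex. 3e)
```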
Practice (Ex. 4)
Now you should take some time to review the calculations above (ex. 3). If you have any questions, check with your instructor. Once you are familiar with how those are done then you should try answering the following questions (exercise 4 in your workbook). Get help from the instructor if you need it. Check your answers below before you continue with the lesson.
Molarity Calculations: Practice
1. How would you prepare 100. mL of 0.25 M KNO3 solution?
2. A chemist dissolves 98.4 g of FeSO4 in enough water to make 2.000 L of solution. What is the molarity of the solution?
3. How many moles of KBr are in 25.0 mL of a 1.23 M KBr solution?
4. Battery acid is generally 3 M H2SO4. Roughly how many grams of H2SO4 are in 400. mL of this solution?
Here are the answers to exercise 4.
1. How would you prepare 100. mL of 0.25 M KNO[3 ]solution? Dissolve 2.53 g of KNO[3 ]
2. in enough water to make 100 ml of solution.
1. A chemist dissolves 98.4 g of FeSO[4 ]in enough water to make 2.000 L of solution.
What is the molarity of the solution? 0.324 M
How many moles of KBr are in 25.0 mL of a 1.23 M KBr solution?
Battery acid is generally 3 M H[2]SO[4]. Roughly how many grams of H[2]SO[4] are in 400.
mL of this solution? 120 g
Top of Page
Back to Course Homepage
E-mail instructor: Eden Francis
Clackamas Community College ©1998, 2003 Clackamas Community College, Hal Bender
Species Concentration Page 1 of 2
Species Concentration
Ionization and Species Concentration
There are times when you need to deal with the concentration of chemical species in solution rather than the concentration of a chemical compound. Perhaps these examples will show why this is
When electrolytes dissolve in water, they
dissociate into ions. When sodium chloride
dissolves in water, sodium and chloride
ions are formed. When calcium chloride
dissolves in water, calcium and chloride
ions are formed. Note that different
amounts of chloride ions are formed. Equal
concentrations of NaCl and CaCl[2] generate
different concentrations of Cl^-ion.
With weak electrolytes, this issue is complicated by the fact that some of the chemical remains undissociated. For example when HF dissolves in water HF(aq), H^+(aq) and F^(aq) are all formed and
present in the solution. Depending on the conditions under which the solution is formed, the concentration of each of those chemical species might be different.
When dealing with the concentrations of chemical species, it is customary to use brackets around the formula of the chemical specie as a symbol for the concentration of that specie. For example, [Cl^
-] = 2 M means the concentration of chloride ion is 2 M.
Later in this lesson (e.g. equilibrium constants) and in later lessons (e.g. reaction rates) we will need to focus on the concentration of the particular chemical species that are reacting with one
another or of particular interest to us for some other reason. Molarity can be used for this. It is simply a matter of specifying that the concentration refers to a particular chemical specie rather
than a chemical compound. | {"url":"http://people.tamu.edu/~xinwu/lab_bench.htm","timestamp":"2014-04-20T18:22:56Z","content_type":null,"content_length":"50410","record_id":"<urn:uuid:6f1e9a9f-ce54-4633-a9ce-17a30a1de08f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00267-ip-10-147-4-33.ec2.internal.warc.gz"} |
Problem Set 2
Programming Language BSL
Purpose The purpose of this problem set is to practice developing simple and conditional functions. The functions deal with atomic forms of data (numbers, symbols, images).
Finger Exercises HtDP/2e: 13, 18, 19, 20, 21; 30, 31; 61, 64
Problem 1 Design favorite-star. Given the side-length of the star, the number of corners, the color of the star, and a length l, it places an appropriately sized and colored star in the middle of a square of the given length l.
Problem 2 HtDP/2e: 44. You will also have to solve preceding exercises but all you need to hand in is the solution of exercise 44.
Problem 3 People consider a climate humid if the average humidity is above 65%; in contrast, when the humidity is below 20%, people say it is dry. For percentages in between those two, people find it
Design a function that translates measured percentages of humidity into words that describe human sensations.
Problem 4 Design a program that moves a blue object closer to a red object on a 200 x 200 plane. Specifically, the main function consumes two Posns, one called red and one called blue. It produces an image that contains:
1. a blue dot
2. a red dot
3. a black line between the blue and the red dots
4. a green dot on this line, 10% closer to the red dot than the blue one.
Here is a sample output:
Domain Knowledge (Geometry) To find the green point between the red and the blue one, cut the "distance" in each dimension to 90% and add the results to the respective coordinates of the red point.
For this "formula" to work, you need to keep in mind that the blue point is to be translated (that’s the geometric term) in the direction of the red point.
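A sketch of that computation in Python rather than BSL (the function name and test points are mine, not the problem set's; an actual solution should follow the design recipe in BSL):

```python
# Hypothetical helper illustrating the geometry note above: the green
# point keeps 90% of the blue-to-red distance in each dimension, added
# to the red point's coordinates, i.e. it sits 10% of the way from
# blue toward red.
def translate_toward(blue, red, fraction=0.10):
    bx, by = blue
    rx, ry = red
    keep = 1.0 - fraction              # 0.9 of the distance remains
    return (rx + keep * (bx - rx), ry + keep * (by - ry))

print(translate_toward((0.0, 0.0), (100.0, 200.0)))   # (10.0, 20.0)
```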
Follow the design recipe for programs.
Retraction of a Riemannian manifold with boundary to its cut locus
This question is edited following the comment of Joseph. He pointed out that the main object of the first version of this question is the cut locus.
Recall that the cut locus of a set $S$ in a geodesic space $X$ is the closure of the set of all points $p$ that have two or more distinct shortest paths in $X$ from $S$ to $p$. http://
A simple lemma shows that for a disk $D^2$ with a Riemannian metric the cut locus of $D^2$ with respect to its boundary is a tree. A picture of such tree can be found on page 542, figure 17 of the
article of Thurston "Shapes of polyhedra". The tree is of white colour. http://arxiv.org/PS_cache/math/pdf/9801/9801088v2.pdf For an ellipse on the 2-plane the tree is the segment that joins its
focal points.
More generically, for a Riemannian manifold $M^n$ with a boundary, the cut locus of $\partial M$ should be a deformation retract of $M$ (I guess it is a CW complex of dimension less than $n$). To prove this lemma, notice that $M^n\setminus \mathrm{CutLocus}(\partial M^n)$ is canonically foliated by geodesic segments that join $X$ with $\partial M$.
I wonder if this lemma has a name or maybe it is contained in some textbook on Riemannian geometry?
riemannian-geometry dg.differential-geometry reference-request gt.geometric-topology
1 Dima, could you please comment why "it is a CW complex". – Petya Jul 14 '10 at 1:22
1 @Dimitri: Are you defining the cut locus of the boundary of the disk? – Joseph O'Rourke Jul 14 '10 at 1:36
1 @Dimitri: Pardon me for repeating the point, but I think the cut locus of an ellipse is a segment. It also goes under the name medial axis. See the figure at the Wikipedia entry: en.wikipedia.org/wiki/Medial_axis . – Joseph O'Rourke Jul 14 '10 at 10:13
1 If it really is a retract of a smooth compact manifold (equivalently, a retract of a finite CW complex), then that's almost as good as being finite CW. By the way, what's an example of such a
space (compact ENR) not admitting a CW structure? – Tom Goodwillie Jul 14 '10 at 11:22
2 Beware that cut loci can be non-triangulable, even on strictly convex revolution surfaces, as has been shown by H. Gluck and D. Singer ams.org/journals/bull/1976-82-04/S0002-9904-1976-14125-0/… –
BS. Jul 14 '10 at 12:56
1 Answer
Let me continue the comments above here so I can include a figure. Here are examples of the medial axis of two different convex polygons (from my own work):
The term medial axis is used in computer science to denote the same concept as the cut locus.
Franz-Erich Wolter wrote his Ph.D. dissertation on "Cut loci in bordered and unbordered Riemannian manifolds." That might contain some useful information.
Joseph, thank you very much for the answer! Unfortunately the link that you give does not seem to work... – Dmitri Jul 14 '10 at 11:43
@Dmitri: Sorry about the broken link. Tried to fix it. In any case, Google search for the exact title of his thesis within quotes, and it is the #1 hit. – Joseph O'Rourke
Jul 14 '10 at 11:47
Notices of the American Mathematical Society :: Cover Art
Explore the world of mathematics and art, share an e-postcard, and bookmark this page to see new featured works.
Click on the thumbnails to see larger image and share.
Probability Lab
I am Thomas Peterffy, founder of Interactive Brokers.
As I promised in our advertisement, I will tell you about a practical
way to think about options without complicated mathematics.
I will introduce the following concepts:
Please feel free to skip the parts you already know.
The first concept to understand is the probability distribution (PD), which are just fancy words for saying that all possible future outcomes have a chance or likelihood or probability of coming
true. The PD tells us exactly what the chances are for certain outcomes. For example:
What is the probability that the daily high temperature in Hong Kong will be between 21.00 and 22.00 Celsius on November 22 next year?
We can take the temperature readings for November 22 for the last hundred years. Draw a horizontal line and mark it with 16 to 30 degrees and count how many readings fall into each one degree
interval. The number of readings in each interval is the % probability that the temperature will be in that interval on November 22, assuming that the future will be like the past. It works out that
way because we took 100 readings. Otherwise you must multiply by 100 and divide by the number of data points to get the percentages. In order to achieve greater accuracy we would need more points, so
we could use data for November 20 through 24.
Let us draw a horizontal line spanning each one degree segment at the height corresponding to the number of data points in that segment. If we used data from November 20 through 24 we would get more
data and greater accuracy but would need to multiply by 100 and divide by 500.
These horizontal lines compose a graph of our PD. They indicate the percentage likelihood that the temperature will be in any one interval. If we want to know the probability that the temperature
will be below a certain level, we must add up all the probabilities in the segments below that level. In the same way we add up all the probabilities above the level if we want to know the
probability of a higher temperature.
Accordingly, the graph indicates the probability for the temperature to be between 21 and 22 Celsius is 15% and the probability that it will be anywhere under 22 degrees is 2+5+6+15=28% and above 22
degrees is 100-28=72%.
Please note that the sum of the probabilities in all segments must add up to 1.00, i.e. there is a 100% chance that there will be some temperature in Hong Kong on that date.
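The counting procedure described above fits in a few lines of Python; the readings here are invented to reproduce the percentages in the example, not actual Hong Kong data:

```python
# Build a PD by counting readings per one-degree interval, then
# dividing by the total number of readings.
from collections import Counter

readings = ([18] * 2 + [19] * 5 + [20] * 6 + [21] * 15 +
            [22] * 30 + [23] * 25 + [24] * 12 + [25] * 5)   # 100 samples

counts = Counter(readings)
pd = {deg: n / len(readings) for deg, n in counts.items()}

print(pd[21])                                        # 0.15 -> 15% chance of 21-22 C
print(sum(p for deg, p in pd.items() if deg < 22))   # ~0.28 -> chance of under 22 C
```

The probabilities necessarily sum to 1.00, since every reading lands in exactly one interval.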
If we had more data we could make our PD more precise by making the intervals narrower, and as we narrowed the intervals the horizontal lines would shrink to points forming a smooth bell-shaped curve.
Just the same way as future temperature ranges can be assigned probabilities, so can ranges of future stock prices or commodities or currencies. There is one crucial difference however. While
temperature seems to follow the same pattern year after year, that is not true for stock prices which are more influenced by fundamental factors and human judgment.
So the answer to the question, "What is the probability that the price of ABC will be between 21.00 and 22.00 on November 22?" has to be more of an informed guess than the temperature in Hong Kong.
The information we have to work with is the current stock price, how it has moved in the past and fundamental data about the prospects of the company, the industry, the economy, currency,
international trade and political considerations and so on, that may influence people's thinking about the stock price.
Forecasting the future stock price is an imprecise process. Forecasting the PD of future stock prices seems to allow more flexibility, or at least we become more aware of the probabilistic nature of
the process. The more information and insight we have the more likely we are to get it right.
The prices of put and call options on a stock are determined by the PD but the interesting fact is that we can reverse engineer the process. Namely, given the prices of options, a PD implied by those
prices can easily be derived. It is not necessary that you know how and you can skip to the next section, but if you would like to know then here is one method that any high school student should be
able to follow.
Assume that AAPL is trading around $500 per share. What is the percentage probability that the price will be between 510 and 515 at the time the option expires about a month from now? Assume the 510
call trades at $6.45 and the 515 call trades at $4.40. You can buy the 510 call and sell the 515 call and pay $2.05.
• If at expiration time the stock is under 510, you lose $2.05
• If it is between 510 and 515, your gain is the average of your loss at 510 of $2.05 and your gain
at 515 of $2.95 or $0.45
• If it is above 515, you make $2.95
Further assume that we previously calculated that the probability for the stock to be below 510 is 56% or 0.56.*
Provided that options are "fairly" priced, i.e. there is no profit or loss that can be made if the market's PD is correct, then 0.56*-2.05+X*0.45+Y*2.95=0 where X=the probability that the stock will
be between 510 and 515 and Y= the probability that it will be above 515.
Since all possible prices occurring have a probability of 100%, then 0.56+X+Y=1.00 gives us 0.06 for X and 0.38 for Y.
*To calculate an entire PD you need to start at the lowest strike and you need to take a guess as to the probability below that price. That will be a small number, so that you will not make too great
an error.
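The two-equation solve in the example can be checked mechanically; the numbers below are taken straight from the text:

```python
# Recover X = P(510 <= price < 515) and Y = P(price >= 515) from the
# AAPL call-spread example: fair pricing makes the expected profit
# zero, and all the probabilities sum to one.
p_below = 0.56                              # assumed P(price < 510)
loss_below, gain_mid, gain_above = -2.05, 0.45, 2.95

rest = 1.0 - p_below                        # X + Y = 0.44
# 0.56*(-2.05) + X*0.45 + Y*2.95 = 0, with X = rest - Y:
y = (-p_below * loss_below - rest * gain_mid) / (gain_above - gain_mid)
x = rest - y
print(round(x, 2), round(y, 2))             # 0.06 0.38
```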
If you've read this far then you will also be interested to know how you can derive the price of any call or put from the PD.
For a call you can take the stock price in the middle of each segment above the strike price, subtract the strike price and multiply the result by the probability of the price ending up in that
segment. For the tail end you need to take a guess at the small probability and use a price about 20% higher than the high strike. Summing all the results gives you the call price.
For puts you can take the stock price in the middle of each interval below the strike, subtract it from the strike and multiply by the probability. For the last segment, between zero and the lowest
strike I would use 2/3 of the lowest strike and guess the probability. Again, add all the results together to get the price of the put.
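As a sketch of the call recipe just described (the PD here is invented, and the tail is lumped crudely into one wide segment instead of using the 20%-above-the-high-strike guess):

```python
# Call price = sum over segments above the strike of
#   (segment midpoint - strike) * segment probability.
pd_segments = [            # (low, high, probability) -- made-up numbers
    (490, 500, 0.25),
    (500, 510, 0.31),
    (510, 515, 0.06),
    (515, 545, 0.38),      # crude stand-in for the upper tail
]

def call_price(strike, segments):
    total = 0.0
    for low, high, p in segments:
        mid = (low + high) / 2.0
        if mid > strike:
            total += p * (mid - strike)
    return total

print(round(call_price(510, pd_segments), 2))   # 7.75 with this toy PD
```

Put pricing is the mirror image: use the segments below the strike and weight (strike minus midpoint) instead.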
Some may say that these are all very sloppy approximations. Yes, that is the nature of predicting prices; they are sloppy and there is no point in pretending otherwise. Everybody is guessing. Nobody
knows. Computer geeks with complex models appear to the uninitiated to be doing very precise calculations, but the fact is that nobody knows the probabilities and your educated guess based on your
understanding of the situation may be better than theirs based on statistics of past history.
Note that we are ignoring interest effects in this discussion, but with current interest rates, that is a small effect. We are also adjusting for the fact that options may be exercised early which
makes them more valuable. When calculating the whole PD, this extra value needs to be accounted for but it is only significant for deep-in-the-money options. By using calls to calculate the PD for
high prices and using puts to calculate the PD for low prices, you can avoid the issue.
THE PD AS IMPLIED BY THE MARKET and YOUR OPINION
Given that puts and calls on most stocks are traded in the option markets, we can calculate the PD for those stocks as implied by the prevailing option prices. I call this the "market's PD," as it is
arrived at by the consensus of option buyers and sellers, even if many may be unaware of the implications.
The highest point on the graph of the market's implied PD curve tends to be close to the current stock price plus interest minus dividends, and as you go in either direction from there the
probabilities diminish, first slowly, then more rapidly and then slowly again, approaching but never quite reaching zero. The ForwardPrice is the expected price at expiration as implied by the
probability distribution.
The curve is almost symmetrical except that slightly higher prices have higher probability than slightly lower ones and much higher prices have lesser probability than near zero ones. That's because
prices tend to fall faster than they rise and all organizations have some chance of some catastrophic event happening to them.
In the Interactive Brokers Probability Lab℠ (Patent Pending) you can view the PD we calculate using option prices currently prevailing in the market for any stock or commodity on which options are
listed. All you need to do is to enter the symbol.
This is a live PD graph that changes as option bids and offers change at the exchanges. You can now grab the horizontal bar in any interval and move it up or down if you think that the price ending
up in that interval has a higher or lower probability than the consensus guess as expressed by the market. You will notice that as soon as you move any of the bars, all the other bars will
simultaneously move, with the more distant bars moving in the opposite direction as all the probabilities must add up to 1.00. Also notice that the market's PD remains on the display in blue while
yours is red and the reset button will wipe out all of your doodling.
The market tends to assume that all PDs are close to the statistical average of past outcomes unless a definitive corporate action, such as a merger or acquisition, is in the works. If you follow the
market or the particulars of certain stocks, industries or commodities, you may not agree with that. From time to time you may have a different view of the likelihood of certain events and therefore
how prices may evolve. This tool gives you the facility to illustrate, to graphically express that view and to trade on that view. If you do not have an opinion of the PD as being different than the
market's then you should not do a trade because any trade you do has a zero expected profit (less transaction costs) under the market's PD. The sum of each possible outcome (profit or loss in each
interval) multiplied by its associated probability is the statistically Expected Profit and under the market's PD, it equals zero for any trade. You can pick any actual trade and calculate the
expected profit to prove that to yourself. Thus, any time you do a trade with an expectation of profit, you are taking a bet that the market's PD is wrong and yours is right. This is true whether you
are aware of it or not, so you may as well be aware of what you are doing and sharpen your skills with this tool.
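The definition used here, each outcome times its probability, summed over all intervals, is easy to sanity-check. Reusing the call-spread numbers from the earlier example (the "personal view" PD is invented):

```python
# Expected profit of the call spread under two PDs: the market's
# (expectation zero, by fair pricing) and a hypothetical personal one.
outcomes  = [-2.05, 0.45, 2.95]     # spread P&L per price interval
market_pd = [0.56, 0.06, 0.38]      # market-implied probabilities
my_pd     = [0.40, 0.10, 0.50]      # an invented personal view

def expected_profit(pl, probs):
    return sum(x * p for x, p in zip(pl, probs))

print(round(expected_profit(outcomes, market_pd), 4))   # ~0.0
print(round(expected_profit(outcomes, my_pd), 4))       # 0.7: an edge only if your PD is right
```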
Please go ahead and play with the PD by dragging the distribution bars below. We display combination trades that are likely to have favorable outcomes under your PD. You can specify if you would like
to see the "optimal trades" that are a combination of up to two, three or four option legs. We will show you the three best combination trades along with the corresponding expected profit, Sharpe
ratio, net debit or credit, percentage likelihood of profit, maximum profit and maximum loss and associated probabilities for each trade, given your PD, and the margin requirement.
The best trades are the ones with the highest Sharpe ratio, or the highest ratio of expected profit to variability of outcome. Please remember that the expected profit is defined as the sum of the
profit or loss when multiplied by the associated probability, as defined by you, across all prices. On the bottom graph you will see your predicted profit or loss that would result from the trade and
the associated probability, corresponding to each price point.
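A sketch of that ranking rule under stated assumptions: the trades and PD below are invented, and the dispersion measure is my simple stand-in for whatever the TWS tool actually computes.

```python
# Rank candidate trades by expected profit divided by the standard
# deviation of outcomes (a Sharpe-style ratio) under your PD.
import math

def expectation(pl, probs):
    return sum(x * p for x, p in zip(pl, probs))

def sharpe_like(pl, probs):
    mu = expectation(pl, probs)
    var = sum(p * (x - mu) ** 2 for x, p in zip(pl, probs))
    return mu / math.sqrt(var)

probs = [0.40, 0.10, 0.50]                    # your PD over three intervals
trades = {
    "call spread": [-2.05, 0.45, 2.95],       # P&L per interval, per trade
    "naked call":  [-6.45, -3.95, 12.55],
}
best = max(trades, key=lambda name: sharpe_like(trades[name], probs))
print(best)
```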
The interactive graph below is a crude simulation of our real-time TWS Probability Lab application available under the Trading Tools menu, to our customers. Similarly, the "best trades" are displayed
for illustrative purposes only. Unlike in the actual application, they are not optimized for your distribution.
When you like a trade in our TWS application, you may increase the quantity and submit the order.
In subsequent releases of this tool we'll address buy writes, rebalancing for delta, multi-expiration combination trades, rolling forward of expiring positions and further refinements of the
Probability Lab.
Please play around with this interactive tool. As you do so, your understanding of options pricing and your so called "feel for the options market" will deepen.
Thomas Peterffy
Interactive Brokers
NOTE: To access the Probability Lab (Patent Pending), please run Trader Workstation Latest release (942 or newer).
Total of $20,000 in cash prizes:
• First place: $10,000
Thomas Halikias
• Second place: $5,000
Tianhua Wu
• Five (5) third place prizes: $1,000 each
Bernard Kruyne
Chung-Lin Tang
Gaudenz Schneider
Qicheng Ma
Qing Hui
"Even though he has essentially developed a proprietary formula that is the financial equivalent of Coca-Cola, Peterffy is giving away the animating power to anyone who takes the time to study probabilistic thinking and explore his lab."
Steven M. Sears, "Peterffy's Latest Creation",
from the November 18, 2013 edition of Barron's.
Interactive Brokers LLC is a member of NYSE, FINRA, SIPC. Any trading symbols displayed are for illustrative purposes only and are not intended to portray recommendations. Options involve risk and
are not suitable for all investors. For more information read the "Characteristics and Risks of Standardized Options". For a copy visit Interactivebrokers.com/disclosures
A New Kind of Science: The NKS Forum - Edward Witten wins the Nobel Prize? Why? Because of the genius of Wolfram
David Brown
Registered: May 2009
Posts: 173
Edward Witten wins the Nobel Prize? Why? Because of the genius of Wolfram
… Finite Nature assumes that there is no thing that is smooth or continuous and that there are no infinitesimals. — Edward Fredkin
Why should anyone pay attention to Edward Fredkin, Stephen Wolfram, and David Brown? Consider the following 3 conjectures:
(1) The one and only explanation of dark matter is the f(div) theory of modified general relativity theory. A consequence of the f(div) theory is that for a large set of photons, some would travel
very slightly faster than the speed of light — these would escape into alternate universes and contribute to dark energy in our universe and also contribute to dark matter elsewhere in the
multiverse; other photons would travel very slightly slower than the speed of light — these would be associated with virtual photon mass and contribute to dark matter in our universe and also
contribute to dark energy elsewhere in the multiverse. According to the f(div) theory, dark matter is the result of a steady influx of virtual photons from alternate universes.
(2) The one and only explanation of dark energy is the nonzero cosmological constant in general relativity theory.
(3) The one and only explanation of the GZK paradox consists of paradigm-breaking photons emitted by black holes. (Black holes are digital in nature.)
What is wrong with M-theory as of the year 2009 CE? Consider Wolfram’s cosmological hypothesis:
Is Wolfram’s NKS Chapter 9 the first basically accurate conception of the mobile automaton that gradually builds time, space, and energy from an informational substrate below the Planck scale? Do
M-theorists need Wolfram’s ideas?
Consider Wolframian proto-physics, Nambu quantum field theory, and quantum field theory. Have M-theorists made the mistake of assuming that Wolframian proto-physics does not exist? Have M-theorists
made the mistake of believing that Nambu quantum field theory has some type of generalization of the gauge theory found in quantum field theory? Is Nambu quantum field theory the structural theory
obtained by smoothing out the Nambu transfer machine? Is Wolfram the supreme genius whose ideas allow Nambu quantum field theory to be used to make empirical predictions?
"Crudely speaking there is wave-particle duality in physics, but in reality everything comes from the description by waves, which are then quantized to give particles. Thus a massless classical
particle follows a lightlike geodesic (a sort of shortest path in curved spacetime), while the wave description of such particles involves the Einstein, Maxwell or Yang-Mills equations, which are
much closer to the fundamental conceptions of physics. Unfortunately, in string theory so far, one has generalized only the less fundamental point of view. As a result, we understand in a practical
sense how to do many computations in string theory, but we do not yet understand the new underlying principles analogous to gauge invariance." — Edward Witten, “Reflections on the Fate of Spacetime”
Does Nambu quantum field theory possess underlying principles analogous to gauge invariance? Does Nambu quantum field theory enable the computation of empirical corrections to the Bekenstein-Hawking
radiation law? Is the main game in Nambu quantum field theory to calculate the probability distributions for paradigm-breaking photons?
Note added Feb. 23, 2011 CE: I now think that M-theory has two physical interpretations: a Bohrian interpretation within quantum field theory and an Einsteinian interpretation within hidden
determinism. My guess is that M-theory predicts the Rañada-Milgrom effect, which implies that the -1/2 in the standard form of Einstein's field equations should be replaced by -1/2 + sqrt(15) *
10**-5. (See the posting "Dark matter: why should Rañada and Milgrom win the Nobel prize?" at nks forum applied nks.)
Last edited by David Brown on 02-23-2011 at 11:55 AM
Posts about Science on Ramani's blog
Speed Of Light =Gravity=Infinity = 0 = Science?
In Science on March 26, 2014 at 09:00
The speed of light (I am using this term to enable everyone to understand this) is 299,792,458 metres per second.
This is equal to the Speed of Gravity.
Speed of Light is equal to Speed of Gravity(+ or – 1%)
The Speed of Light, according to Newton is Infinite.
That is, you cannot comprehend it, because there are innumerable digits; it is limitless.
Zero cannot be comprehended because there is nothing to comprehend. (Some say "a positive or negative number when divided by zero is a fraction with the zero as denominator".)
So, in terms of comprehending, Infinity is as comprehensible as Zero is.
Therefore Infinity is equal to Zero.
Speed of Light is Infinite.
Therefore, Speed of Light is equal to Zero.
I am aware that the logical fallacy of the undistributed middle is present in my argument.
So do the scientists' theories!
We take Science because it is convenient.
We do not question it.
Then why not God?
* Follow the Theory of Relativity; it is more fun.
Newtonian Gravity.
Therefore the theory assumes the speed of gravity to be infinite. This assumption was adequate to account for all phenomena with the observational accuracy of that time. It was not until the 19th
century that an anomaly in astronomical observations which could not be reconciled with the Newtonian gravitational model of instantaneous action was noted.
Relativity Gravity.
General relativity predicts that gravitational radiation should exist and propagate as a wave at lightspeed: a slowly evolving and weak gravitational field will produce, according to general
relativity, effects like those of Newtonian gravitation.
Infinity (symbol: ∞) is an abstract concept describing something without any limit and is relevant in a number of fields, predominantly mathematics and physics. The English word infinity derives from
Latin infinitas, meaning “the state of being without finish”, and which can be translated as “unboundedness”, itself calqued from the Greek word apeiros, meaning “endless”
The Indian mathematical text Surya Prajnapti (c. 3rd–4th century BCE) classifies all numbers into three sets: enumerable, innumerable, and infinite. Each of these was further subdivided into three
• Enumerable: lowest, intermediate, and highest
• Innumerable: nearly innumerable, truly innumerable, and innumerably innumerable
• Infinite: nearly infinite, truly infinite, infinitely infinite
In the Indian work on the theory of sets, two basic types of infinite numbers are distinguished. On both physical and ontological grounds, a distinction was made between asaṃkhyāta (“countless,
innumerable”) and ananta (“endless, unlimited”), between rigidly bounded and loosely bounded infinities.
Minecraft can be a great tool for visualizing complicated subjects, such as the speed of light (aka “c”). Using a straight track and simple math, we can see how the universe might be limiting speeds
for very fast things, such as light.
The “doors” metaphor is admittedly, imperfect. While they do limit speed outside of “acceleration”, they falsely imply that there is something “in space” that slows things down, which does not appear
to be the case at the moment. The actual mechanisms that limit objects to light speed will be the topic for a future video. (Hint: It has to do with time!)
The rules governing the use of zero appeared for the first time in Brahmagupta's book Brahmasphuta Siddhanta (The Opening of the Universe),[25] written in 628 AD. Here Brahmagupta considers not only zero, but negative numbers, and the algebraic rules for the elementary operations of arithmetic with such numbers. In some instances, his rules differ from the modern standard. Here are the rules of Brahmagupta:
• The sum of zero and a negative number is negative.
• The sum of zero and a positive number is positive.
• The sum of zero and zero is zero.
• The sum of a positive and a negative is their difference; or, if their absolute values are equal, zero.
• A positive or negative number when divided by zero is a fraction with the zero as denominator.
• Zero divided by a negative or positive number is either zero or is expressed as a fraction with zero as numerator and the finite quantity as denominator.
• Zero divided by zero is zero.
In saying zero divided by zero is zero, Brahmagupta differs from the modern position. Mathematicians normally do not assign a value to this, whereas computers and calculators sometimes assign NaN, which means "not a number." Moreover, non-zero positive or negative numbers when divided by zero are either assigned no value, or a value of unsigned infinity, positive infinity, or negative infinity.
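The mismatch described here, mathematicians leaving division by zero undefined while computers sometimes answer with NaN or infinity, shows up directly in Python's standard floats (a small illustration of modern conventions; nothing below comes from Brahmagupta's text):

```python
import math

print(0.0 / 5.0)             # 0.0: zero divided by a nonzero number is zero
try:
    print(5.0 / 0.0)         # modern arithmetic assigns no value here...
except ZeroDivisionError:
    print("undefined")       # ...so Python raises instead of answering

print(1.0 / math.inf)        # 0.0: IEEE-754 infinity behaves like a limit
print(math.inf - math.inf)   # nan: the "not a number" mentioned above
```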
Read my posts on Time, Astrophysics and Science.
Electricity In Hinduism Texts Agashtya Samhita
In Hinduism, Science on November 11, 2013 at 08:33
This post is a sequel to Hinduism, Mathematics Calculations and Measurements.
One who reads the Indian Puranas and Ithihasas like the Ramayana would find that if a Rishi wants to curse or bless someone, he or she sprinkles water thrice, taken from his or her Kamandalu, which is made of copper. (A Kamandalu is shaped like a small teapot, without the nose/snout.)
In the Vedas, one of the most powerful Mantras, recited after the Pooja is performed, is the Mantra Pushpa.
It is the worship of water.
Water, Aapah, is extolled and prayed to for welfare.
Water is used for all ceremonies and functions.
Water is used for purification ceremonies, like marriages, and for cleansing the home after a death.
Temples in India are closed only between 1 and 4 pm and finally at 10:30 pm.
The closing time after noon is called the closure after the Uchchi Kala Pooja, and the closure after 10:30 pm is the closure after the Artha Jama Pooja.
A temple is never closed at other times.
(Also, a temple is not sanctioned to be open after the Artha Jama Pooja, after 10:30 pm; we now have even the Tirupathi Temple being kept open throughout the day during English New Year Day and other important days. This is wrong, according to the Sastras.)
However, the temples are closed if a death occurs in the street where the temple is located, and they are opened after the body is taken out.
To cleanse the temple, a Punyahavachana is performed with water.
Temples are closed during the eclipses, Grahans, and they are opened after the eclipse is over; a Punyahavachana is performed with water before opening.
The Sandhyavandana, which is a daily duty of the three Varnas (Brahmins, Kshatriyas, Vaisyas), is performed with water, and the purification of food is done with water, Parishechanam.
I searched for and found references to electricity in the Agastya Samhita. Here they are.

Formula for an electric battery in the Agastya Samhita, an ancient Hindu text. The text describes a method of making an electric battery, and states that water can be split into oxygen and hydrogen. A modern battery cell resembles Agastya's method of generating electricity. For generating electricity, Sage Agastya used the following materials:

1. One earthen pot
2. Copper plate
3. Copper sulphate
4. Wet sawdust
5. Zinc amalgam

His text says:

"Sansthapya Mrinmaya Patre Tamrapatram Susanskritam Chhadyechhikhigriven Chardrarbhih Kashthpamsubhih. Dastaloshto Nidhatavyah Pardachhaditastah Sanyogajjayte Tejo Mitravarunsangyitam"

संस्थाप्य मृण्मये पात्रे ताम्रपत्रं सुसंस्कृतम्। छादयेच्छिखिग्रीवेन चार्दाभि: काष्ठापांसुभि:॥
दस्तालोष्टो निधात्वय: पारदाच्छादितस्तत:। संयोगाज्जायते तेजो मित्रावरुणसंज्ञितम्॥

Which means: "Place a well-cleaned copper plate in an earthenware vessel. Cover it first by copper sulfate and then by moist sawdust. After that, put a mercury-amalgamated zinc sheet on top of the sawdust to avoid polarization. The contact will produce an energy known by the twin name of Mitra-Varuna. Water will be split by this current into Pranavayu and Udanavayu. A chain of one hundred jars is said to give a very effective force. (p. 422)"

When a cell was prepared according to the Agastya Samhita and measured, it gave an open-circuit voltage of 1.138 volts and a short-circuit current of 23 mA.

"Anen Jalbhangosti Prano Daneshu Vayushu Evam Shatanam Kumbhanamsanyogkaryakritsmritah." That is: if we use the power of 100 earthen pots on water, then the water will change its form into life-giving oxygen and floating hydrogen.

"Vayubandhakvastren Nibaddho Yanmastake Udanah Swalaghutve Bibhartyakashayanakam." That is: if hydrogen is contained in an airtight cloth, it can be used in aerodynamics, i.e. it will fly in air (today's hydrogen balloon).

Process of Electroplating by Maharshi Agastya in the Agastya Samhita. Excerpt from "Technology of the Gods: The Incredible Sciences of the Ancients" by David Hatcher Childress:

"In the temple of Trivandrum, Travancore, the Reverend S. Mateer of the London Protestant Mission saw 'a great lamp which was lit over one hundred and twenty years ago', in a deep well inside the temple. ……. On the background of the Agastya Samhita text's giving precise directions for constructing electrical batteries, this speculation is not extravagant."
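Taking the quoted figures at face value, a back-of-the-envelope calculation (mine, not the blog's, and the ideal-source model is a simplification) gives the cell's implied internal resistance and the voltage of the hundred-jar chain:

```python
# Figures quoted above: 1.138 V open-circuit, 23 mA short-circuit.
v_open = 1.138                     # volts
i_short = 0.023                    # amperes
r_internal = v_open / i_short      # ideal-source internal resistance
print(round(r_internal, 1))        # 49.5 ohms per cell
print(round(100 * v_open, 1))      # 113.8 V for a series chain of 100 jars
```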
The Agastya Samhita contains the sutra quoted above, which means: take an earthen pot and place in it a well-cleaned copper plate together with shikhigriva (a substance the colour of a peacock’s neck, i.e. copper sulphate). Then fill the vessel with moist wooden sawdust. On top of the moist sawdust place a mercury-coated zinc sheet (a mercury-amalgamated zinc sheet). When the two are connected by wires, the Mitra-Varuna energy is produced.
It is worth noting that this experiment has actually been performed, producing electricity at 1.138 volts and 23 mA. The Swadeshi Vigyan Sanshodhan Sanstha (Nagpur) demonstrated the experiment before scholars and the general public at its fourth annual meeting on 7 August 1990.
The Agastya Samhita further says:
अनेन जलभंगोस्ति प्राणो दानेषु वायुषु।
एवं शतानां कुंभानांसंयोगकार्यकृत्स्मृत:॥
That is, if the power of a hundred kumbhas (i.e. a hundred cells, made as above and connected in series) is applied to water, the water will change its form into prana vayu (oxygen) and udana vayu (hydrogen).
It is then written:
वायुबन्धकवस्त्रेण निबद्धो यानमस्तके उदान स्वलघुत्वे बिभर्त्याकाशयानकम्।
That is, if udana vayu (hydrogen) is confined in an airtight cloth, it can be used for the science of flight (aerodynamics).
Clearly, this is nothing less than the formula for today’s electric battery. It also supports the existence of a science of flight (vimana vidya) in ancient India.
Process of Electroplating by Maharshi Agastya in the Agastya Samhita:
You may be surprised to learn that many of today’s modern technologies are described in our ancient texts. The Shukra Niti refers to what we now call electroplating with the term “kritrima-svarna-rajata-lepah” (artificial coating of gold and silver) and gives it the name “satkriti”: “कृत्रिमस्वर्णरजतलेप: सत्कृतिरुच्यते”.
The Agastya Samhita gives the following sutra, describing the use of electricity for electroplating:
यवक्षारमयोधानौ सुशक्तजलसन्निधो॥
आच्छादयति तत्ताम्रं स्वर्णेन रजतेन वा।
सुवर्णलिप्तं तत्ताम्रं शातकुंभमिति स्मृतम्॥
That is: in the presence of sushakta jala (an acid solution) kept in an iron vessel, yavakshara (gold or silver nitrate) coats the copper with gold or silver. Copper thus coated with gold is called shatakumbha gold.
Related Articles
• From Where Did The Brahmins Come? (ramanan50.wordpress.com)
• Atomic Explosion Mahabharata Harappa Evidence. (ramanan50.wordpress.com)
Uncertainty Thy Name Is Science
In Science on September 26, 2013 at 09:39
Science, its protagonists often claim, is the authority on everything, and what it says is the truth.
If so, why have there been so many revisions – and, in many cases, the total negation and repudiation of what had been theorized earlier?
The operative word is ‘theory’.
Science, in my view, is just that, nothing more.
Here is a list of mistakes by Science/scientists (even this is likely to change, as what is now said to be incorrect may later be called correct).
Seth Borenstein, the Associated Press‘s science correspondent, has given us a fine barometer by which to measure the scientific certainty that humans are heating the planet. He reports that the
world’s climatologists are now gearing up to officially proclaim that they are 95 percent certain that humans are to blame for global warming.
That 5 percent gap may seem large. It is not. In science, nothing is 100 percent sure—not even the law of gravity.
According to Borenstein, here are a few things that scientists are just as or less certain of than climate change:
• that cigarettes kill
• the age of the universe
• that vitamins make you healthy
• that dioxin in Superfund sites is dangerous
Here are a couple I’ll add myself. Scientists are more certain that humans are causing climate change than:
• the rate the universe expanded after the big bang.
The Periodic Table, The Bible of Chemistry.
We like to think of the periodic table as immutable. It isn’t – atoms don’t always weigh the same, says Celeste Biever.
No such thing as Reptiles.
The traditional group Reptilia – things like lizards, crocodiles, snakes and tortoises, plus many extinct groups – is not a true clade, says Graham Lawton.
Nuclear Fission Confusion.
We’ve built the bomb. We’ve built reactors. But the whole enterprise of nuclear fission is based on a misunderstanding.
Chemistry a confusion.
Chemistry is a much fuzzier business than we thought.
Gene is not a Gene.
What defines life’s building blocks? It depends who you ask, says Michael Le Page.
Ask a taxonomist to estimate Earth’s total inventory of species, and they’ll probably say 30 million. That is almost certainly a huge overestimate, says Kate Douglas.
Whatever undergraduate physicists are told, magnetic poles do indeed enjoy the single life, says Richard Webb.
Albert Einstein’s towering reputation is only enhanced by his self-styled biggest blunder. It might not have been a mistake after all, says Richard Webb.
Related Articles
• Science: As certain of climate change as of smoking (achangeinthewind.com)
What Is Certainty ?
In Philosophy, Science on August 20, 2013 at 12:12
I saw a Post on Truth and Beauty.
I am providing the link towards the end of the post.
I have been planning to write on this for some time.
Whenever we talk of certainty what do we mean?
That an event or thing will happen the way we expect it to happen?
Again, what is ‘expected to happen’?
That we expect things to happen indicates that we have seen some things and events follow other events – some experienced by us, some by others –
and we think the same pattern will follow.
Just how scientific is this?
I am talking about Science here because it is what people think is the solution for everything, and Science is the club used to beat Philosophy and Religion.
When a piece of religious information, or even a fact, is presented, the immediate question from people – especially those who profess to have a scientific temper(?) – is:
How certain are you?
The same question is addressed to Philosophy.
You talk of Reality, of God.
The retort is:
Is it certain?
Does it hold good for all times – the past, present and the future?
Philosophy and Religion take this very seriously and reply:
Yes, it holds good for all times and it is Eternal.
I think this answer cannot be reasoned out, for we cannot verify it.
As of now, we know things and events as they are, or as they are reported to have been –
not as they ‘will be’.
What we know is the Finite, and we presume that, since there is something Finite, there has to be something Infinite.
Because we are conditioned to think – I do not know how – that there
should be a pair of opposites.
True, there are pairs of opposites in what we experience in Life – pleasure and Pain, Darkness and Light, Good and Bad; the list goes on.
But by logic it need not be so.
What has happened yesterday need not happen tomorrow.
Our mind is programmed to find similarities, categories and uniformity, to make it easier for the Mind to categorize.
It is the mechanism of the Mind.
If we think it is in the outside world, then one has to accept the Law of Uniformity of Nature and, as a consequence, must accept the Theory of Causation as well.
The Law of Uniformity, at its best, can only say that an event has happened, or is happening in the present – no more.
The Law of Causation can tell you an event is caused by another.
One event may be the Cause for many events, and one result may be caused by more than one Cause (event).
There are many sub causes needed for a particular Cause to produce one particular effect.
This I shall deal with in a separate post.
These sub or attendant causes, along with the Primary Cause, again assume the Law of Causation.
So Causation assumes Uniformity and Uniformity assumes Causation.
This is a logical fallacy, for each assumes the other as proven.
So ‘expecting’, including the results of scientific experiments, is not logical.
One may say some result is expected, that’s all.
So, since our definition of certainty is based on Expectation, it is equally untenable.
Therefore, certainty is a Myth.
It cannot be verified.
Curiously, I find definitions of Uncertainty, not of Certainty.
I am providing information on this at the end.
This Uncertainty Principle gained recognition after the advent of Quantum Theory.
Here it is:
In 1927 Heisenberg suggested the uncertainty principle, which can be formulated now as follows:
If one tries to describe the dynamical state of a quantum particle by methods of classical mechanics, then precision of such description is limited in principle. The classical state of the
particle turns out to be badly defined.
In 2005 the certainty principle was suggested, which is formulated as follows:
If one describes the dynamical state of a quantum particle (system) by methods of quantum mechanics, then the quantum state of the particle (system) turns out to be well defined. This certainty
of the quantum dynamical state means that “small” space-time transformations can not substantially change the quantum state.
Both principles are not just some misty philosophy about uncertainty and certainty, but they have quite rigorous mathematical formulations in the form of the following inequalities:
(*Note the smirk behind the word ‘philosophy’, while it says the same thing under the garb of Science.)
And the Heisenberg uncertainty principle is a consequence of the certainty principle. The certainty principle generalizes on the unified base both the uncertainty principle and the Mandelshtam-Tamm
relation for energy and time (discovered in 1945).
A more detailed answer you can find in the paper The certainty principle (review).
An explanation for dummies is given in the article The certainty principle for dummies.
Should the certainty principle be considered more fundamental than the uncertainty principle?
From the point of view of non-relativistic quantum mechanics, the certainty principle is just more general.
But from the point of view of relativistic quantum theory, it is more fundamental.
The matter is that for the theory of relativistic quantum systems the notion of “space coordinate”, as a quantum-mechanical observable (a self-adjoint operator), is not natural. Correspondingly, the
uncertainty principle turns out to be sapless.
Why did not Heisenberg, Bohr, Schrödinger, Fock… have a hunch of the certainty principle?
Because they did not know relativistic canonical quantization.
Fortunately, now that the certainty principle is already discovered, a usual introductory course of quantum mechanics is sufficient for its understanding (knowledge of the RCQ theory is not required).
A more concrete answer is given in the popular article The certainty principle for dummies.”
In simple English, it means that if you accept that the world alone is Real and Absolute, then the Theories are Certain.
If there is more than one world, then the Certainty theory is not correct.
Now Quantum is proving that there are Multiverses (please read my posts on this subject under Astrophysics).
In conclusion there is no such thing as certainty at all, except in our Mind.
Shankaracharya deals with this subject very eloquently in his Mayavada theory, where he shows that both worlds, the Relative and the Real, exist side by side.
The Uncertainty Principle.
In quantum mechanics, the uncertainty principle is any of a variety of mathematical inequalities asserting a fundamental limit to the precision with which certain pairs of physical properties of a
particle known as complementary variables, such as position x and momentum p, can be known simultaneously. For instance, the more precisely the position of some particle is determined, the less
precisely its momentum can be known, and vice versa.^[1] The original heuristic argument that such a limit should exist was given by Werner Heisenberg in 1927, after whom it is sometimes named the
Heisenberg principle. A more formal inequality relating the standard deviation of position σ[x] and the standard deviation of momentum σ[p] was derived by Earle Hesse Kennard^[2] later that year and
by Hermann Weyl^[3] in 1928: σ[x]·σ[p] ≥ ħ/2.
Related Articles
• Nothing to See Here: Demoting the Uncertainty Principle (opinionator.blogs.nytimes.com)
Pyramids Show Astronomical Alignment
In Astrophysics, Interesting and funny, Science on August 19, 2013 at 19:44
I posted a blog on 16 October 2011 saying that NASA satellites stumble over Tirunallaru, Tamil Nadu, and that there is no reference to this on the NASA site.
It remains a mystery, though some have called it a hoax.
NASA is silent.
NASA is notorious for not acknowledging inconvenient questions, like UFOs and aliens.
Now comes news that the alignment of some Pyramids represent an ancient Astronomical phenomenon.
Let me add that there are ancient temples in India, especially in the South, that allow sunlight to fall on the idol at the Sanctum Sanctorum on one particular day each year; a tree (at the temple premises) that yields fruit straight from the flower stage (outside the temple it follows the normal process); a stone idol where, if you insert an iron string through one ear, it comes out through the other ear; idols changing colors every two and a half hours… the list is endless.
The two stone lines, called geoglyphs, are located about 1.2 miles (2 kilometers) east-southeast from the pyramid.
They run for about 1,640 feet (500 meters), and researchers say the lines were “positioned in such a way as to frame the pyramid as one descended down the valley from the highlands.”
Using astronomical software and 3D modeling, the researchers determined that a remarkable event would have occurred during the time of the winter solstice…..
“When viewed in 3D models, these lines appear to converge at a point beyond the horizon and frame not only the site of Cerro del Gentil [where the pyramid is], but also the setting sun during the
time of the winter solstice,” the research team wrote in a poster presentation given recently at the Society for American Archaeology annual meeting in Honolulu.
“Thus someone viewing the sunset from these lines during the winter solstice would have seen the sun setting directly behind, or sinking into, the adobe pyramid,” they write. “Thus the pyramid and
the linear geoglyph constitute part of a single architectural complex, with potential cosmological significance, that ritualized the entire pampa landscape.” (The word “pampa” stands for plain.)
The flattop pyramid is 16 feet (5 m) high and was built sometime between 600 B.C. and 50 B.C., being reoccupied somewhere between A.D. 200 and 400. Finds near the pyramid include textiles, shells and
ceramics. The stone lines were constructed at some point between 500 B.C. and A.D. 400.
Check for Image Gallery at the Link.
Related Articles
• Shadow of the Snake God – Winter solstice at Chichen Itza (yourenotfromaroundhere.com)
Platonic Solids - Why Five?
mathsyperson wrote:
Or can you define a set of properties so that any solid that meets them will make F+V-E=2 work?
For all solids that can be "deformed" to the sphere it still holds. So that is an entire class of solids.
Another class of solids can be deformed to a torus, and for them F+V-E=0
I didn't go into on that page, but there is a 2D equivalent of F+V-E=2, and it is V-E=0
And the 1D version is V=2
In fact it goes like this for "non intersecting" shapes:
0D: 0
1D: V=2
2D: V-E=0
3D: F+V-E=2
4D: F+V-E-(something)=0
Each dimension needs a new parameter, and the sum goes 0,2,0,2,...
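The 3D relation is easy to check numerically; here is a quick sketch (the solids and counts below are standard values, not taken from this thread):

```python
# F + V - E = 2 holds for any polyhedron that can be deformed to a sphere.
# (faces, vertices, edges) for three of the five Platonic solids:
solids = {
    "tetrahedron": (4, 4, 6),
    "cube": (6, 8, 12),
    "icosahedron": (20, 12, 30),
}
for name, (f, v, e) in solids.items():
    print(name, f + v - e)  # prints 2 for each solid
```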
Perhaps I should mention that?
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Numbering Systems Tutorial
Number Systems
Decimal System
Most people today use decimal representation to count. In the decimal system there are 10 digits:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9
These digits can represent any value, for example:
The value is formed by the sum of each digit multiplied by the base (in this case 10, because there are 10 digits in the decimal system) raised to the power of the digit's position (counting from zero):
Position of each digit is very important! For example, if you move the "7" to the end,
it will be another value.
Important note: any number raised to the power of zero is 1; even zero raised to the power of zero is 1:
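The positional rule can be coded directly; a small sketch (the digits below are my own illustration, not the tutorial's original example):

```python
def digits_to_value(digits, base=10):
    """Sum each digit times the base raised to its position, counting positions from zero on the right."""
    value = 0
    for position, digit in enumerate(reversed(digits)):
        value += digit * base ** position
    return value

print(digits_to_value([7, 5, 4]))  # 7*10**2 + 5*10**1 + 4*10**0 = 754
print(digits_to_value([5, 4, 7]))  # moving the 7 to the end gives a different value: 547
```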
Binary System
Computers are not as smart as humans are (or not yet), but it is easy to make an electronic machine with two states: on and off, or 1 and 0.
Computers use the binary system; the binary system uses 2 digits:
0, 1
And thus the base is 2.
Each digit in a binary number is called a BIT, 4 bits form a NIBBLE, 8 bits form a BYTE, two bytes form a WORD, two words form a DOUBLE WORD (rarely used):
There is a convention to add "b" at the end of a binary number; this way we can determine that 101b is a binary number with a decimal value of 5.
The binary number 10100101b equals the decimal value 165:
Hexadecimal System
Hexadecimal System uses 16 digits:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F
And thus the base is 16.
Hexadecimal numbers are compact and easy to read.
It is very easy to convert numbers from binary system to hexadecimal system and vice-versa, every nibble (4 bits) can be converted to a hexadecimal digit using this table:
│Decimal │Binary │Hexadecimal│
│(base 10)│(base 2)│(base 16) │
│0 │0000 │0 │
│1 │0001 │1 │
│2 │0010 │2 │
│3 │0011 │3 │
│4 │0100 │4 │
│5 │0101 │5 │
│6 │0110 │6 │
│7 │0111 │7 │
│8 │1000 │8 │
│9 │1001 │9 │
│10 │1010 │A │
│11 │1011 │B │
│12 │1100 │C │
│13 │1101 │D │
│14 │1110 │E │
│15 │1111 │F │
There is a convention to add "h" at the end of a hexadecimal number; this way we can determine that 5Fh is a hexadecimal number with a decimal value of 95.
We also add "0" (zero) at the beginning of hexadecimal numbers that begin with a letter (A..F), for example 0E120h.
The hexadecimal number 1234h is equal to decimal value of 4660:
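The nibble table above is exactly what makes binary-to-hexadecimal conversion mechanical. A short sketch (my own illustration, not code from the tutorial):

```python
def binary_to_hex(bits: str) -> str:
    """Convert a binary string to hexadecimal, one 4-bit nibble at a time."""
    bits = bits.zfill(-(-len(bits) // 4) * 4)  # pad on the left to a whole number of nibbles
    table = {format(i, "04b"): format(i, "X") for i in range(16)}
    return "".join(table[bits[i:i + 4]] for i in range(0, len(bits), 4))

print(binary_to_hex("10100101"))          # A5 -> decimal 165, as in the binary example above
print(binary_to_hex("0001001000110100"))  # 1234 -> 1234h, decimal 4660
```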
Converting from Decimal System to Any Other
In order to convert from the decimal system to any other system, you divide the decimal value by the base of the desired system; each time, remember the result and keep the
remainder. The division process continues until the result is zero.
The remainders are then used to represent a value in that system.
Let's convert the value of 39 (base 10) to Hexadecimal System (base 16):
As you see we got this hexadecimal number: 27h.
All remainders were below 10 in the above example, so we do not use any letters.
Here is another more complex example:
let's convert decimal number 43868 to hexadecimal form:
The result is 0AB5Ch, we are using the above table to convert remainders over 9 to corresponding letters.
Using the same principle we can convert to binary form (using 2 as the divider), or convert to hexadecimal number, and then convert it to binary number using the above table:
As you see we got this binary number: 1010101101011100b
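The repeated-division procedure can be sketched in a few lines (an illustration, not code from the tutorial), reproducing the three conversions above:

```python
DIGITS = "0123456789ABCDEF"

def to_base(value: int, base: int) -> str:
    """Divide by the target base until the result is zero; the remainders, read backwards, give the answer."""
    if value == 0:
        return "0"
    remainders = []
    while value > 0:
        value, remainder = divmod(value, base)
        remainders.append(DIGITS[remainder])
    return "".join(reversed(remainders))

print(to_base(39, 16))     # 27  (i.e. 27h)
print(to_base(43868, 16))  # AB5C (written 0AB5Ch)
print(to_base(43868, 2))   # 1010101101011100
```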
Signed Numbers
There is no way to say for sure whether the hexadecimal byte 0FFh is positive or negative; it can represent both the decimal value "255" and "-1".
8 bits can be used to create 256 combinations (including zero), so we simply presume that the first 128 combinations (0..127) will represent positive numbers and the next 128 combinations (128..255) will
represent negative numbers.
In order to get "-5", we should subtract 5 from the number of combinations (256), so we get: 256 - 5 = 251.
Using this complex way to represent negative numbers has some meaning: in math, when you add "-5" to "5" you should get zero.
This is what happens when the processor adds the two bytes 5 and 251: the result goes over 255, and because of the overflow the processor gets zero!
When the combinations 128..255 are used, the high bit is always 1, so this may be used to determine the sign of a number.
The same principle is used for words (16-bit values): 16 bits create 65536 combinations; the first 32768 combinations (0..32767) are used to represent positive numbers, and the next 32768 combinations
(32768..65535) represent negative numbers.
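The byte arithmetic described above can be verified with a short sketch (my own illustration, not from the tutorial):

```python
def signed_to_byte(n: int) -> int:
    """Map a signed value into one byte: negative values are taken modulo 256 (i.e. subtracted from 256)."""
    return n % 256

minus_five = signed_to_byte(-5)
print(minus_five)              # 251, since 256 - 5 = 251
print((5 + minus_five) % 256)  # 0 -- the overflow wraps the sum 256 back to zero
print(bool(minus_five & 0x80)) # True -- the high bit is set for negative numbers
```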
Copyright © 2003-2005 Emu8086, Inc.
All rights reserved. | {"url":"http://www.electronics.dit.ie/staff/tscarff/number_systems/number_systems.html","timestamp":"2014-04-16T22:20:13Z","content_type":null,"content_length":"12813","record_id":"<urn:uuid:bc23386c-d2b8-43eb-9150-a56a0481bc58>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00528-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
Pure and Applied Mathematics, Marcel Dekker. 235. New York, NY: Marcel Dekker. ix, 401 p. (2001).
Hopf algebras arose in algebraic topology in the work of Heinz Hopf. The paper of J. W. Milnor and J. C. Moore [Ann. Math., II. Ser. 81, 211-264 (1965; Zbl 0163.28202)] on graded Hopf algebras
perhaps was the first to attract the attention of algebraists. The first book on the purely algebraic aspects (for the most part ungraded) was that of M. E. Sweedler [Hopf Algebras, Benjamin, New
York (1969; Zbl 0194.32901)]. Subsequent books of this nature were those of E. Abe [Hopf Algebras, Cambridge Univ. Press (1980; Zbl 0476.16008)] and S. Montgomery [Hopf Algebras and their Actions on
Rings, Reg. Conf. Ser. Math. 82, Am. Math. Soc., Providence RI (1993; Zbl 0793.16029)]. The subject received a huge impetus with the discovery by physicists and mathematicians starting about the mid
1980’s of quantum groups, which are certain noncommutative and noncocommutative Hopf algebras. Sweedler’s book is somewhat out of date and Abe’s book is written in a style which is difficult to access,
especially for students. The book of Montgomery is an excellent outline, but does not have complete proofs and contains very little directly on quantum groups. Thus there is a need for an
introductory book which could be used in a graduate course on Hopf algebras, and the book under review is hence a timely addition to the literature. It can serve as a textbook on the algebraic theory
of Hopf algebras. It does not deal with quantum groups, e.g., the quantum Yang-Baxter equation, so that if the instructor wishes to include some quantum groups, he or she would have to add
supplementary material. A book which could serve as a textbook for a course on quantum groups is that of C. Kassel [Quantum Groups, Graduate Texts Math. 155, Springer-Verlag (1995; Zbl 0808.17003)].
The titles of the seven chapters are: 1. Algebras and coalgebras. 2. Comodules. 3. Special classes of coalgebras. 4. Bialgebras and Hopf algebras. 5. Integrals. 6. Actions and coaction of Hopf
algebras. 7. Finite-dimensional Hopf algebras. There are two appendices on category theory language and on $C$-groups and $C$-cogroups.
The presentation is partly categorical and partly concrete algebra, including, for example some Hopf Galois theory and some classification results for finite-dimensional Hopf algebras. Various
classes of coalgebras are studied, and investigated for Hopf algebras, with applications given to integrals, Hopf actions and Galois extensions. The idea of duality is extensively used. Some
fundamental theorems for finite-dimensional Hopf algebras which are given complete discussions include the Nichols-Zoeller theorem on Hopf subalgebras, the Taft-Wilson theorem on pointed Hopf
algebras and the Kac-Zhu theorem on Hopf algebras of prime dimension.
This book is ideal as an up-to-date introduction to the algebraic theory of Hopf algebras. It can be used as a textbook for a one or two semester course. It has exercises of various levels of
difficulty scattered in the text, with solutions at the end of each chapter. There are bibliographical notes at the end of each chapter, as well as a bibliography at the end of the book. If such a
course wanted to give roughly equal weight to Hopf algebras and to quantum groups, the book under review could be used together with the one of Kassel mentioned above.
16W30 Hopf algebras (assoc. rings and algebras) (MSC2000)
16-02 Research monographs (associative rings and algebras)
This paper is a survey of recent results on the study of the Steiner ratio of metric spaces, in particular Riemannian manifolds and normed spaces. Some of these results belong to the authors (see,
for example, [A.O. Ivanov, A.A. Tuzhilin and D. Cieslik, Math. Notes 74, No. 3, 367–374; translation from Mat. Zametki 74, No. 3, 387–395 (2003; Zbl 1066.52009)]).
The Steiner ratio can be considered as the characteristic of ‘goodness’ of approximated solutions to the well-known Steiner problem, which searches for a network of minimal length that spans a finite
set of points $N$ in a metric space ($X,\rho$). This shortest network has to be a tree and is called the Steiner minimum tree. The Steiner ratio is the greatest lower bound for the ratio of the
length of the Steiner minimum tree to the length of the minimum spanning tree for the given point set $N$. The interest in approximated solutions to the Steiner problem is due, on the one hand, to
the wealth of applications of the shortest networks, and, on the other hand, to the fact that the Steiner problem is NP-complete.
The authors start with necessary definitions and proceed with the estimates and exact values of the Steiner ratio for general metric spaces and Riemannian manifolds. Then they discuss the problems
related to the Steiner ratio in normed spaces. At the end of the paper some relations between the Steiner ratio and some known problems of discrete geometry are discussed. For example, the authors
show how the Steiner ratio can be used in the study of packing and covering problems in Euclidean space. Some open problems are also listed and a comprehensive bibliography is given.
[Reviewed by Lyuba S. Alboul (MR2269103).]
05C05 Trees
05B40 Packing; covering (combinatorics)
46B20 Geometry and structure of normed linear spaces
52B55 Computational aspects related to geometric convexity
68R10 Graph theory in connection with computer science (including graph drawing)
Parameter Substitution within Expression Trees
This article describes a method to transform a System.Linq.Expression object by substituting some of its parameters by different expressions. This technique can be applied, e.g., when one uses lambda
expressions to describe mathematical functions - it will allow generating new functions by argument substitution technique. Correspondingly, it can be useful in creating plots, 2-D and 3-D graphics.
This article will only show how to apply this technique to generating new functions and creating Silverlight plots of these functions. A future article will describe applying this technique to 2-D
graphics and animations.
There is a very common problem in mathematics. Suppose, we have a function y = f(x). There is also another function x = g(t); we can combine these two functions to obtain dependency y = (f*g)(t)
where f*g is a combined function.
This problem can be generalized for functions with multiple arguments: suppose we have a function y = F(x[1], x[2], ..., x[n]). Suppose there is also a function that makes one of its arguments x[j]
dependent on a set of other arguments: x[j] = G(t[1], ..., t[m]). The functions F and G can be combined into the function (F*G) operating on the same arguments as function F, except that argument x
[j] will be replaced by t[1], ..., t[m]:
y = (F*G)(x[1], ..., x[j-1], t[1], ..., t[m], x[j+1], ... ,x[n]).
In .NET, mathematical functions can be described by lambda expressions or corresponding expression trees. The problem we are trying to solve is best shown by an example: suppose we have an expression
corresponding to e.g. some polynomial:
Expression<Func<double, double>> polynomialExpression =
x => x*x*x*x + 3*x*x*x + 5*x*x + 10*x + 20;
Suppose we want to create and use other expressions that represent the arbitrary shifts of the expression above along variable x. In a straightforward way, we can simply create another expression
dependent on 2 arguments: x and shift:
Expression<Func<double, double>> polynomialExpressionWithShifts =
(x, shift) =>
(x-shift)*(x-shift)*(x-shift)*(x-shift) +
3*(x-shift)*(x-shift)*(x-shift) +
5*(x-shift)*(x-shift) +
10*(x-shift) + 20;
This, however, is not a very pleasant expression to write. Furthermore, if more degrees of freedom are required – e.g., if we need arbitrary dilations (making the function thinner or wider along
axis x) in addition to arbitrary shifts – the expression becomes increasingly more complex.
Using the approach described in this article, all one has to do is to write the following code in order to obtain the polynomialExpressionWithShifts:
Expression<Func<double, double, double>> shiftExpression = (x, shift) => x - shift;
Expression<Func<double, double, double>> polynomialExpressionWithShifts =
    (Expression<Func<double, double, double>>)polynomialExpression.Substitute
    ("x", shiftExpression);
The extension method Substitute will replace the ParameterExpression x in polynomialExpression with ParameterExpressions x and shift and modify the code of the expression replacing any occurrence of
variable 'x' with 'x - shift' (the body of shiftExpression).
To add the dilation functionality, all we have to do is to write the following code:
Expression<Func<double, double, double, double>> dilationAndShiftExpression =
    (x, dilationFactor, shift) => x * dilationFactor - shift;
Expression<Func<double, double, double, double>> polynomialExpressionWithShifts =
    (Expression<Func<double, double, double, double>>)polynomialExpression.Substitute
    ("x", dilationAndShiftExpression);
The approach is generic enough to handle multiple variables also in the original expression. This is the reason for specifying the variable name in the extension function Substitute - we only replace
a variable whose name is passed to the Expression as one of its parameters.
Using the Code
The attached code consists of the following four projects:
1. ExpressionModifier - Contains the core functionality - it produces .NET 4.0 DLL
2. ExpressionModifierTester - Contains Microsoft tests for ExpressionModifier functionality
3. SilverlightExpressionModifier - The same as ExpressionModifier, but it is a Silverlight 4.0 project
4. SimplePlots - Silverlight 4.0 project that plots different expressions - its purpose is to give some visual examples of parameter substitutions.
The usage examples can be taken from SimplePlot and ExpressionModifierTester projects.
One should set SimplePlot as the startup project of the solution.
The substitution within SimplePlot project takes place within the function MainPage.PlotMainAndModified:
public void PlotMainAndModified(
Expression<Func<double, double>> mainExpression,
Expression<Func<double, double>> substExpression,
double delta = 0.1,
double pointSize = POINT_SIZE)
{
AddPlot(Colors.Red, delta, mainExpression.Compile(), pointSize);
LambdaExpression modifiedExpression =
mainExpression.Substitute("x", substExpression);
AddPlot(Colors.Blue, delta,
modifiedExpression.Compile() as Func<double, double>,
pointSize);
}
This function plots the original function in red and the resulting function in blue.
Make sure that the sinusoidal example lines are uncommented at the end of MainPage.MainPage constructor while the parabola example lines are commented out as below:
// uncomment the two lines below for a sinusoidal example
PlotMainAndModified(x => Math.Sin(-x), x => x * 2 + 3, 0.01, 10d);
AddPlot(Colors.Yellow, 0.01, x => Math.Sin(-(x * 2 + 3)), 3d);
// uncomment the two lines below for a parabola example
// PlotMainAndModified(x => x * x, x => x * x - 2, 0.01, 10d);
// AddPlot(Colors.Yellow, 0.01, x => (x * x - 2) * (x * x - 2), 3d);
Rebuilding and running SimplePlot project will then result in the following plot:
The original function sin(-x) is shown in red. The function that was a result of substituting 2x + 3 in place of x into the original function is shown in blue. Finally, we plot the combined function
sin(-(2x + 3)) in a thinner yellow to show that it does match the result of the substitution (in thicker blue).
Now, if we comment out the two lines related to the sinusoidal example and uncomment the lines related to a parabola example as shown below:
// uncomment the two lines below for a sinusoidal example
// PlotMainAndModified(x => Math.Sin(-x), x => x * 2 + 3, 0.01, 10d);
// AddPlot(Colors.Yellow, 0.01, x => Math.Sin(-(x * 2 + 3)), 3d);
// uncomment the two lines below for a parabola example
PlotMainAndModified(x => x * x, x => x * x - 2, 0.01, 10d);
AddPlot(Colors.Yellow, 0.01, x => (x * x - 2) * (x * x - 2), 3d);
and then rebuild and re-run the project, we'll have the following plot displayed:
As in the first example, the original parabola in red corresponds to the x^2 function. The thick blue line is a plot of the result of substituting x^2 - 2 for x into the first function. The thin
yellow line is a plot of the (x^2 - 2)^2 function, just to show that it matches the result of the substitution.
ExpressionModifierTester project can also be used to learn how to use the substitution functionality. Below the code for TestSimpleFunction is shown:
public void TestSimpleFunction()
{
Expression<Func<double, double>>
originalExpression = t => t * t * t + 20;
Expression<Func<double, double>> substituteExpression = t => t * t;
LambdaExpression modifiedExpression =
originalExpression.Substitute("t", substituteExpression);
Func<double, double> modifiedFunction =
(Func<double, double>) modifiedExpression.Compile();
Assert.AreEqual((int) modifiedFunction(1), 21);
Assert.AreEqual((int) modifiedFunction(2), 2 * 2 * 2 * 2 * 2 * 2 + 20);
}
One can see that our original expression is t^3 + 20. We substitute t^2 for t. The resulting expression corresponds to t^6 + 20. Then we test that the resulting expression indeed produces correct values.
TestComplexPolynomial has similar functionality except that its original function is much more complex.
Review of the Core Substitution Functionality
The core functionality is located within two files: ExpressionVariableSubstituteHelper.cs and SubstExpressionVisitor.cs which contain classes that have the same names. The files are located under
ExpressionModifier project. There are links to the same files under SilverlightExpressionModifier project.
Static class ExpressionVariableSubstituteHelper contains the Substitute extension function. It finds the ParameterExpression corresponding to the parameter to be substituted within the main function
(it throws an exception if it cannot find such parameter by name). It also does some other checks, e.g. it checks that with the possibly expanded list of parameters after substitution, there will be
no name clashes. Then it calls the Visit function of SubstExpressionVisitor class to actually create the resulting Expression tree.
SubstExpressionVisitor class is derived from ExpressionVisitor with three functions overridden: VisitUnary, VisitMethodCall and VisitBinary. There may be other Expression node types with
ParameterExpression children that I have overlooked, in which case the code could break on some complex Expression, but most Expressions seem to be covered by the above three functions. Each of
these functions checks whether one or more of its children is the ParameterExpression being substituted, and if so, returns a new Expression of the same type with that
ParameterExpression child replaced by the body of the substituting Expression.
Points of Interest
This new way of manipulating the Expression trees can become very helpful in math intensive projects e.g. in 2-D and 3-D graphics and animations. This article described the application of this
approach to generating plots for different functions. My next article will use this approach for 2-D animations along arbitrary paths.
• 6th January, 2011: Initial post
Tape Measure Markings
Date: 08/11/97 at 11:10:39
From: Anonymous
Subject: Tape measure
As a construction worker for years, I have noticed that there is a
small diamond (or triangle according to the brand) on all tape
measures every 19.2 inches, much like the mark at 16 inches (for
marking the placement of studs at 16" on center). I had first thought
that it was 1/10th of a rod, as 10 of the marks equals 16 feet (192
inches), but a rod is equal to 5 1/2 yards (16.5 feet). It is not
really an important issue here, but just something that keeps
bothering me, not knowing what it is for, and also knowing that it
is put there for a reason, for some sort of measurement. Any help
would be greatly appreciated.
- Greg
Date: 08/11/97 at 11:46:44
From: Johnny Hamilton
Subject: Re: Tape Measure
The diamonds or triangles are for placing five studs in a wall over
an eight foot space rather than four or six studs.
If you divide five into 96 inches, it will give 19.2 inches. In other
words, 4 into 96" = 24", 5 into 96" = 19.2", and 6 into 96" = 16".
These are the normal centerings of wall studs.
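The divisions behind those spacings are easy to check; for instance, in Python:

```python
span = 96  # inches in an eight-foot wall section
spacings = {studs: span / studs for studs in (4, 5, 6)}
print(spacings)  # {4: 24.0, 5: 19.2, 6: 16.0}
```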
- Johnny
Johnny E. Hamilton <jhamilton@constructpress.com>
Construction Trades Press
Publishers of Math to Build On and Pipe Fitter's Math Guide
You asked: 02:16 am est in gmt
Greenwich Mean Time
7:16:00am Greenwich Mean Time
7:16:00am Western European Time (the European timezone equal to UTC)
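The conversion is a fixed five-hour offset (EST is UTC-5 when daylight saving is not in effect); a quick check with Python's standard library:

```python
from datetime import datetime, timedelta, timezone

est = timezone(timedelta(hours=-5), "EST")        # EST = UTC-5
t_est = datetime(2011, 1, 15, 2, 16, tzinfo=est)  # 02:16 EST (arbitrary date)
t_gmt = t_est.astimezone(timezone.utc)            # GMT/UTC equivalent

print(t_gmt.strftime("%I:%M %p"))  # 07:16 AM
```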
Statistical Foundations of Audit Trail Analysis for the Detection of Computer Misuse
September 1993 (vol. 19 no. 9)
pp. 886-901
P. Helman, G. Liepins, "Statistical Foundations of Audit Trail Analysis for the Detection of Computer Misuse," IEEE Transactions on Software Engineering, vol. 19, no. 9, pp. 886-901, September 1993.
We model computer transactions as generated by two stationary stochastic processes, the legitimate (normal) process N and the misuse process M. We define misuse (anomaly) detection to be the
identification of transactions most likely to have been generated by M. We formally demonstrate that the accuracy of misuse detectors is bounded by a function of the difference of the densities of
the processes N and M over the space of transactions. In practice, detection accuracy can be far below this bound, and generally improves with increasing sample size of historical (training) data.
Careful selection of transaction attributes also can improve detection accuracy; we suggest several criteria for attribute selection, including adequate sampling rate and separation between models.
We demonstrate that exactly optimizing even the simplest of these criteria is NP-hard, thus motivating a heuristic approach. We further differentiate between modeling (density estimation) and
nonmodeling approaches.
Index Terms:
audit trail analysis; computer misuse; computer transactions; stationary stochastic processes; misuse detectors; detection accuracy; transaction attributes; NP-hard; heuristic approach; density
estimation; modeling; statistical foundations; system security; auditing; computer crime; security of data; stochastic processes; transaction processing
A primal-dual mixed finite element method for accurate and efficient atmospheric modelling on massively parallel computers
Efficient modelling of the atmosphere using massively parallel computers will require a quasi-uniform grid to avoid the communication bottleneck associated with the poles of the traditional
latitude-longitude grid. However, achieving an accurate solution on a quasi-uniform grid is non-trivial. A mixed finite element method can provide the following desirable properties: mass
conservation; a C-grid-like placement of variables for accurate wave dispersion and adjustment; vanishing curl of gradient; linear energy conservation; and steady geostrophic modes in the linear
f-plane case. A further desirable property is that the potential vorticity (PV) should evolve as if advected by some chosen (accurate) advection scheme. This can be achieved by inserting the PV
fluxes into the nonlinear Coriolis term that appears in the `vector invariant' form of the momentum equation, provided the PV fluxes themselves can be constructed. Introducing a dual family of
function spaces, in which the PV lives in a piecewise constant function space, along with suitable maps between primal and dual spaces, provides a convenient framework in which the PV fluxes can be
computed by a finite volume advection scheme in the dual space. The scheme can be implemented in terms of a small number of sparse matrices that can be precomputed off-line, avoiding the need for
numerical quadrature at run time. A mass matrix and two dual-primal mapping operators need to be inverted at each time step, but these are well conditioned and the inversion can be absorbed into the
iterative solver used for implicit time stepping at only a modest increase in cost. Some sample shallow water model results on a hexagonal icosahedral grid and a cubed sphere grid will be presented.
Lake Forest Park, WA Algebra 2 Tutor
Find a Lake Forest Park, WA Algebra 2 Tutor
...For the multiple choice grammar questions, we review the rules students likely haven't been explicitly taught in years, then work on identifying those rules in the context of SAT questions.
I've taught the SAT/PSAT since 2003, and have ample experience helping students raise their scores. Wheth...
32 Subjects: including algebra 2, English, reading, writing
...I believe in the importance of differentiated learning or tailoring lessons for each particular student so that it successfully meets the needs of every unique student, allowing them the
opportunity to reach their full potential in terms of understanding and applying the material at hand. I have...
27 Subjects: including algebra 2, chemistry, reading, writing
...I have been tutoring for over a year now and truly enjoy getting others excited about the world of mathematics and science. I work as a tutor at my school's Math Learning Center and have
tutored students in a one-on-one setting in physics, math, and beginning chemistry. I am currently enrolled in Calculus 4, Physics: Electricity and Magnetism, and Engineering Statics.
14 Subjects: including algebra 2, chemistry, calculus, physics
...I have an A.A. in Mathematics, and almost completed a B.S. in Mathematics as well. I hold a clear California Teaching Credential, where I have been teaching professionally for the past 3
years. One of the requirements of the credential is to pass a rigorous test in subject matter competency.
11 Subjects: including algebra 2, reading, writing, geometry
...My professor also recruited me to work as her assistant for the introductory class and to teach the junior-level conversational German class. I understand the rules of German and know how to
get them across to a student. Because of the time I spent intensively learning German, I also learned the rules of English and am very capable in teaching and helping to explain the rules.
12 Subjects: including algebra 2, reading, geometry, accounting
Veterinary epidemiology: software for sampling and surveys
From WikiVet
When conducting a descriptive or analytic study which requires sampling a subset of the population, it is vitally important to consider the required sample size. Although this is straightforward
for simple random sampling, it becomes more involved for complex sampling strategies. The following pieces of software are available to assist with these calculations:
Also found in Bayesfreecalc2 (part of FreeBS) http://www.epi.ucdavis.edu/diagnostictests/module02.html
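For the simple random sampling case, the required sample size to estimate a prevalence can be computed directly from the standard formula; the sketch below is a generic illustration and is not tied to any of the packages listed:

```python
import math

def srs_sample_size(p=0.5, z=1.96, error=0.05):
    """Simple-random-sample size needed to estimate a true prevalence p
    to within +/- error at ~95% confidence (z = 1.96).
    p = 0.5 gives the most conservative (largest) answer."""
    return math.ceil(z ** 2 * p * (1 - p) / error ** 2)

print(srs_sample_size())       # 385
print(srs_sample_size(p=0.1))  # 139
```

More complex designs (stratified or cluster sampling, imperfect diagnostic tests) need the dedicated tools listed above.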
Numerical Methods for Inductance Calculation
Part 1 – Elliptic Integrals
C) Multi-Layer Coils
Until now, we have dealt with single layer solenoid coils. However, Maxwell's elliptic integral formula [11a] for circular loops can be applied just as easily to coils consisting of multiple layers
of windings. It's just a matter of calculating the mutual inductance between every pair of turns and taking the sum. However, while this may be easy to code it can result in a program which takes a
very long time to run. For a coil with N turns, there are N2 pairs of turns. As long as N is not too large, this won't matter much. However, as N gets larger, the program will get slower and slower,
eventually becoming too slow to be practical. In this section we will look at ways of improving performance.
The naïve approach to code a program to calculate the mutual inductance between all pairs of turns in a coil is to use two nested loops as in the following pseudo-program:
' Calculate mutual inductance between
' every pair of turns in the coil
' turns are designated 'i' and 'j' respectively
for i=1 to N
for j=1 to N
<< Code to calculate mutual inductance between turns i and j >>
Now, since the mutual inductance between turn i and turn j will be the same value as the mutual inductance between turn j and turn i, it's only necessary to calculate one of these values and then
multiply by two. This is done very easily by changing the limits of the loops, as follows:
' Calculate mutual inductance between
' every pair of turns in the coil
' turns are designated 'i' and 'j' respectively
for i=1 to N-1
  for j=i+1 to N
    << Code to calculate mutual inductance between turns i and j >>
  next j
next i
M=M*2
This reduces the number of iterations to slightly less than half the number of iterations of the previous example. The exact number of iterations in the second example is equal to (N^2-N)/2. In the
last program line, the value of the sum of mutual inductances, M, is doubled to account for the reduced number of iterations. There is another difference between the two programs. The second program
will never calculate the mutual inductance between one turn and itself, because i will never be equal to j. We still need to calculate this value, but it is calculated differently from the others, as
was discussed in Part 1B. When using the first program, it is necessary to add an if then...else construct to check for this condition and then handle it differently. Having to add this extra condition
test in the inner loop will further decrease performance. Using the second technique, additional code still needs to be added to sum the self inductance of each turn, but the number of additional
iterations is simply N. The modified pseudo program is shown below:
' Calculate mutual inductance between
' every pair of turns in the coil
' turns are designated 'i' and 'j' respectively
for i=1 to N-1
  for j=i+1 to N
    << Code to calculate mutual inductance between turns i and j >>
  next j
next i
M=M*2
for i=1 to N
  << Code to calculate self inductance of each turn >>
next i
The total number of iterations is now (N^2+N)/2, which is still significantly smaller than in the first example. By comparison, for the single layer solenoid which was treated in the previous section,
we were able to reduce the number of iterations to just N. For a multi-layer coil, the (N^2+N)/2 iteration count is the worst case; there are certain special cases where we can do much better.
However, we will begin with a general case based on the last example, and address the special cases later.
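These iteration counts are easy to confirm by brute force; a short Python check:

```python
def loop_counts(N):
    """Iterations executed by the naive and the triangular pseudo-programs."""
    naive = sum(1 for i in range(1, N + 1) for j in range(1, N + 1))
    pairs = sum(1 for i in range(1, N) for j in range(i + 1, N + 1))
    return naive, pairs

for N in (2, 7, 50):
    naive, pairs = loop_counts(N)
    assert naive == N * N                 # first program
    assert pairs == (N * N - N) // 2      # second program
    assert pairs + N == (N * N + N) // 2  # plus the N self-inductance iterations

print(loop_counts(50))  # (2500, 1225)
```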
General Case Multi-Layer Coil
The general case is for a multi-layer coil with an arbitrary number of turns on each layer; i.e., each layer may have a different number of turns. This diagram shows an example cross section of a
seven layer winding where the innermost layer has four turns, with each successive layer increasing by one turn up until layer 4, and then decreasing by one turn thereafter, giving a hexagonal cross
section. Wire diameter is d. The radius of the innermost layer is r0. Spacing between layers (radial pitch) is ky. Spacing between turns on each layer (axial pitch) is kx. It is assumed that each
layer is centred over the previous layer.
The Open Office BASIC code is as follows:
Function LayeredCoil (d,r0,kx,ky,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10) as double
' Calculate inductance of multi-layer coil
' with arbitrary number of turns on each layer.
' d= wire diameter, r0=radius of inner winding, kx=axial pitch
' ky=radial pitch, n1..n10=number of turns on each respective layer
' dimensions in cm, inductance in microHenries
Dim xoffset(10),yoffset(10),Nturns(10)
Nturns(1)=n1 : Nturns(2)=n2 : Nturns(3)=n3 : Nturns(4)=n4 : Nturns(5)=n5
Nturns(6)=n6 : Nturns(7)=n7 : Nturns(8)=n8 : Nturns(9)=n9 : Nturns(10)=n10
g=exp(-0.25)*d/2 'geometric mean distance of round conductor (see Part 1B)
'find the layer with the maximum number of turns and set MaxN to that number
MaxN=Nturns(1)
for i=2 to 10
  if MaxN<Nturns(i) then MaxN=Nturns(i)
next i
N=0 'total number of turns
L=0
'calculate the x and y offsets for each layer, total turns, and self-L
for i=1 to 10
  xoffset(i)=kx*(MaxN-Nturns(i))/2 'centre this layer relative to the widest layer
  yoffset(i)=r0+(i-1)*ky 'radius of layer i
  N=N+Nturns(i)
  L=L+Nturns(i)*Mut(yoffset(i),yoffset(i),g) 'self inductance of each turn in current layer
next i
'Change Nturns array to running total of turns
for i=2 to 10
  Nturns(i)=Nturns(i)+Nturns(i-1)
next i
'Calc mutual inductance for every pair of turns i and j
M=0 'Initialize sum to zero
iLayer=1 : ix=xoffset(1) : iy=yoffset(1)
for i=1 to N-1
  if Nturns(iLayer)<i then 'check and update the layer for turn i
    iLayer=iLayer+1
    ix=xoffset(iLayer)
    iy=yoffset(iLayer)
  end if
  jLayer=iLayer : jx=ix+kx : jy=iy
  for j=i+1 to N
    if Nturns(jLayer)<j then 'check and update the layer for turn j
      jLayer=jLayer+1
      jx=xoffset(jLayer)
      jy=yoffset(jLayer)
    end if
    M=M+Mut(iy,jy,sqr((jx-ix)^2+(jy-iy)^2)) 'mutual inductance of turns i and j
    jx=jx+kx 'step conductor j for next iteration
  next j
  ix=ix+kx 'step conductor i for next iteration
next i
LayeredCoil=L+2*M 'each pair was summed once, so M is doubled
end Function
The program allows for up to ten layers of windings. It can be easily modified to accommodate more layers. Each layer may have a different number of turns. It is assumed that each layer is centred
over the previous layer, and the turns on every layer will have the same pitch kx. The centre to centre spacing between layers is ky for all layers.
The shortcomings of Open Office BASIC make for somewhat clunky programming. Several lines are devoted to loading an array with the n1..n10 (turns per layer) arguments. Variable n1 is the number of
turns on the innermost layer, and n10 the turns on the outermost layer.
The method of keeping track of layers could be approached a number of ways. A for..next loop to count layers could be used. However, it was decided that it would be simpler overall just to count
turns and then determine the applicable layer from the absolute turn number. To make this simpler, the array Nturns() is converted from turns per layer to a running total of turns on the current
layer plus total of turns on all previous layers. Then the variables iLayer and jLayer (for turns i and j respectively) are easily calculated.
Variables ix, iy, jx and jy, are the axial position and radius of turn i and turn j respectively.
The mutual inductance between turns is calculated using function Mut() which was described in Part 1B.
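For readers without Part 1B at hand, the sketch below shows what such a routine can look like in Python. It evaluates Maxwell's formula for two coaxial circular filaments, with the complete elliptic integrals computed by the arithmetic-geometric mean (AGM) iteration; the actual Mut() in Part 1B may be implemented differently.

```python
import math

def ellipKE(m):
    """Complete elliptic integrals K(m) and E(m), m = k^2 with 0 <= m < 1,
    via the arithmetic-geometric mean (Abramowitz & Stegun 17.6)."""
    a, b, c = 1.0, math.sqrt(1.0 - m), math.sqrt(m)
    csum, pow2 = 0.5 * c * c, 0.5  # running sum of 2^(n-1)*c_n^2
    while c > 1e-15:
        a, b, c = (a + b) / 2.0, math.sqrt(a * b), (a - b) / 2.0
        pow2 *= 2.0
        csum += pow2 * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - csum)

def mut(r1, r2, g):
    """Mutual inductance in microhenries of two coaxial circular filaments,
    radii r1 and r2 in cm, centre-to-centre distance g in cm,
    by Maxwell's elliptic integral formula."""
    k2 = 4.0 * r1 * r2 / ((r1 + r2) ** 2 + g * g)
    k = math.sqrt(k2)
    K, E = ellipKE(k2)
    # mu0 = 4*pi*1e-3 microhenry per cm
    return 4.0e-3 * math.pi * math.sqrt(r1 * r2) * ((2.0 / k - k) * K - (2.0 / k) * E)

# self-L of a single 1 cm radius turn of 0.2 cm wire, via the wire's GMD
print(round(mut(1.0, 1.0, 0.1 * math.exp(-0.25)), 5))
```

Calling it with equal radii and the wire's geometric mean distance (g = exp(-1/4) times the wire radius) in place of the spacing gives the self inductance of a single turn, as described in Part 1B.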
This program could be made even more general by adding more arguments to allow the user to specify a different axial offset, radial offset and pitch for each layer individually. However, the program
as it is, should be general enough for most situations.
Execution speed, while not spectacular, is not too bad as long as the total number of turns is not too great (i.e., 100 or fewer). This is the cost of trying to keep things general. Performance can
be improved considerably for certain special cases. For example, as discussed in Part 1B, the N^2 iterations were reduced to N iterations in the special case of the single layer solenoid.
Multi-Layer Coil With Rectangular Winding Cross Section
For the next case we will consider a multi-layer coil where each layer has the same number of turns. Hence, the cross section of the winding is rectangular as shown in this next diagram. In this
case, the variables d, kx, ky and r0 have the same meaning as before. The variables n1..n10 are replaced by u and v. The number of turns per layer is now constant and is designated by variable u. The
number of layers is designated by variable v. For the purpose of the following discussion, each layer will be designated by a number (starting at 0), and each turn in the layer will be designated by
a letter, as indicated by the numbers and letters at the left and top of the diagram. Hence the innermost layer is 0 and the outermost layer is 3. The lower left turn is A0 and the upper right turn
is E3.
Now, let's consider the pair of turns A0-B2 shown in red, and also the pair D0-E2 also shown in red. It is readily apparent that the mutual inductance for pair A0-B2 will be the same as the mutual
inductance for pair D0-E2, because their respective radii are identical and their respective axial spacings are also identical.
Before we go any further, let's define a convention for specifying pair configurations that are equivalent for calculating mutual inductance. Since horizontal (axial) spacing will always be multiples
of kx, let us define the first parameter nx as the number of multiples of kx for this spacing. And since the radial spacing will always be a multiple of ky, then we can define a second parameter ny
as the number of multiples of ky for the radial spacing. We will write the configuration in the format: {nx,ny}. Hence, for pair A0-B2, the axial spacing is 1×kx and the radial spacing is 2×ky, so we
would write the configuration as {1,2}. Pair D0-E2 also has configuration {1,2}. For the purpose of calculation of mutual inductance, it makes no difference whether the values are positive or
negative. The mutual inductance is the same. So, all values used in the configuration specification will be absolute values. Consequently, the pair D2-E0 shown is green, which is a mirror image of
pairs A0-B2 and D0-E2, has the same configuration {1,2}.
We soon run into a problem though. According to this definition, pair A1-B3 would also have a configuration {1,2}, but since the radii of A1-B3 are different than those of A0-B2 it won't have the
same mutual inductance. We need to add an additional parameter. We will define parameter y as the row number on which the innermost turn of pair is situated. The new format will be {nx,ny,y}.
Therefore pairs A0-B2, D0-E2, and D2-E0 will all have a configuration of {1,2,0} and pair A1-B3 will have a configuration of {1,2,1}.
We can see that there are eight pairs which have configuration {1,2,0}, eight pairs which have configuration {1,2,1}, and we can continue onwards and upwards to count all of the pairs for every
possible configuration. Having done this, we only need to calculate the inductance value once for each configuration, and then multiply it by the number of times the configuration occurs. One would
hope that there is a systematic way of specifying all of the configuration patterns and of figuring out how many there are of each. With a little perseverance and much coffee consumption, a pattern
does emerge. For a coil of u turns per layer and v layers, the value ny will range from 0 to v-1. For each of these values of ny, the variable nx will range from 0 to u-1, with the one exception that
when ny=0, then nx will range from 1 to u-1 (the zero case is omitted). For each of these combinations of ny and nx, the variable y will range from 0 to v-ny-1. Naturally, three nested program loops
will be used to control the values of nx, ny and y, using the ranges discussed here. This will then loop through each possible configuration exactly once.
Now that we know how to loop through each possible configuration, we also need to know how many times each configuration actually occurs in the coil. Again a bit of manual simulation and analysis
reveals a pattern. In general, configuration {nx,ny,y} occurs 4(u-nx) times, except when either nx=0 or ny=0 (mirror images disappear in these cases), then the configuration occurs 2(u-nx) times.
This also takes into account the fact that every pair must be counted twice (e.g., we need to count both pair A0-B1 as well as B1-A0). Although this has been a bit complicated to figure out, the
resulting program turns out to be surprisingly simple. The Open Office BASIC code is as follows:
Function RectangleCoil (d,v,u,r0,kx,ky) as double
'Calculate inductance of multi-layer coil
'd=wire diameter, u=turns/layer, v=number of layers
'r0=radius of innermost layer
'kx=axial pitch, ky=radial pitch
'All dimensions are in cm, inductance result is in microHenries
for ny=0 to v-1 'Calculate all mutual inductances
  if ny=0 then nxMin=1 else nxMin=0 'the {0,0,y} case is omitted
  for nx=nxMin to u-1
    MF=4 'multiplication factor
    if ny=0 or nx=0 then
      MF=2 'mirror images disappear in these cases
    end if
    for y=0 to v-ny-1
      '(mutual inductance of configuration {nx,ny,y}, weighted
      ' by MF*(u-nx), is accumulated here)
    next y
  next nx
next ny
for y=0 to v-1 'Calculate all self inductances
  '(self inductance of the turns on row y is accumulated here)
next y
end Function
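The loop structure and the occurrence counts described above can be cross-checked with a short sketch (Python here for convenience; the function name is ours):

```python
def configurations(u, v):
    """Yield each distinct pair configuration {nx, ny, y} exactly once,
    together with the number of times it occurs in the winding."""
    for ny in range(v):
        nx_min = 1 if ny == 0 else 0          # the {0,0,y} case is omitted
        for nx in range(nx_min, u):
            # mirror images disappear when nx=0 or ny=0
            count = 2 * (u - nx) if (nx == 0 or ny == 0) else 4 * (u - nx)
            for y in range(v - ny):
                yield (nx, ny, y), count

# Sanity check for a 10x10 winding (N = 100 turns): the occurrence counts,
# which count each pair twice, must cover every ordered pair of distinct
# turns, i.e. N*(N-1).
total_pairs = sum(count for _, count in configurations(10, 10))
print(total_pairs)  # 9900 = 100 * 99
```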
The number of iterations required to calculate the self inductances is always equal to v, regardless of the number of turns per layer. The number of iterations required to calculate the pair mutual
inductances is equal to:
(u-1)v + uv(v-1)/2
Hence, the total number of iterations is:
v + (u-1)v + uv(v-1)/2 = uv(v+1)/2
When v=1, the coil is a single layer solenoid, and the number of iterations works out to be simply N, which is exactly the same as for Lcoil() in Part 1B. When u=1, the coil is a flat spiral, and
the number of iterations works out to be:
v(v+1)/2 = N(N+1)/2
which is the same as for the previous function LayeredCoil(). This is the worst case because every turn has a different radius and therefore every pair of turns is a unique configuration. For the
remaining cases where u>1 and v>1 the performance falls somewhere between the two extremes. For the case u=v, where the number of turns per layer equals the number of layers, the number of
iterations is:
v^2(v+1)/2 = N(√N+1)/2
So, for multi-layer coils with an approximately square cross section, function RectangleCoil() offers a significant speed advantage over function LayeredCoil(). As an example, for u=v=10, then N=100,
and LayeredCoil() would perform 5050 iterations while RectangleCoil() performs only 550 iterations. The trade-off, as it often is, is speed versus flexibility.
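The iteration counts quoted for u = v = 10 can be reproduced directly (a sketch; the function names are ours):

```python
def iterations_layered(N):
    """LayeredCoil(): every pair of turns is distinct -> N*(N+1)/2 iterations."""
    return N * (N + 1) // 2

def iterations_rectangle(u, v):
    """RectangleCoil(): distinct pair configurations plus v self-inductance terms."""
    pairs = (u - 1) * v + u * v * (v - 1) // 2
    return pairs + v

print(iterations_layered(100), iterations_rectangle(10, 10))  # 5050 550
```

The degenerate cases check out as well: with v = 1 the count collapses to N, and with u = 1 it matches the N(N+1)/2 of LayeredCoil().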
Thus, the main virtue of this program, is that it always performs the minimum required number of iterations, regardless of number of layers and turns per layer, thus making it unnecessary to
substitute more efficient programs for special cases.
Multi-Layer Coils on Polygonal Coil Forms
Close wound multi-layer coils can easily be wound on circular forms, each layer sitting directly on top of the previous layer. However, in cases where it is desired to have space between the layers,
some means must be provided to separate them. One option is to place a thick sheet of insulating material between each layer. However, this has at least a couple of disadvantages. This material will
reduce the Q-factor of the coil, and will increase the self-capacitance of the coil. Also, for coils in power circuits, spacing may be necessary to allow coolant (air or some liquid) to circulate
between the windings, and a thick sheet of insulating material will interfere with this. A common alternative approach is to use a polygonal coil form where the wire is wound on insulating pegs or
other guides located at the vertices of the polygon. While this may make winding the coil easier, it introduces additional complications.
One thing which may not be immediately obvious is that the spacing of the winding guides will not be the same as the radial pitch. This is illustrated in the following diagram.
Here, the coil form is a 7-sided polygon and has two layers of windings. We are looking at it in the direction of the axis. Radial spacing guides are separated by a distance of 0.500 (arbitrary
units). However, notice that the spacing between conductor 1 and conductor 2 is only 0.450. The reduction factor is equal to the cosine of half of the angle between the spokes of the coil form. Or,
more simply, for a polygonal form with the number of sides equal to NS, the spacing reduction factor is equal to cos(180/NS), when working in degrees, or cos(π/NS) when working in radians.
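The reduction is easy to verify against the 7-sided example above; a short sketch (the function name is ours):

```python
import math

def radial_pitch(guide_spacing, n_sides):
    """Effective radial pitch ky on an n-sided polygonal coil form:
    guide spacing reduced by cos(pi / n_sides)."""
    return guide_spacing * math.cos(math.pi / n_sides)

# Guides 0.500 apart on a 7-sided form give a pitch of about 0.450
print(radial_pitch(0.500, 7))
```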
Now, looking at the actual inductance calculation, we refer back to the treatment given at the end of Part 1b regarding polygonal single layer coils. The method given is that of Grover [2], and
unfortunately, he doesn't say this method is valid for multi-layer coils. So, our only justification for using this method is that there doesn't appear to be any other method available. At the same
time, it's reasonable to expect that it should give better correction than no correction at all.
The numbers of the last section won't be repeated here. Suffice to say, the inner and outer radii of the polygonal form are converted to equivalent radii using the weighted formula of Part 1b, and
then the radii of any intermediate layers are interpolated from those values. An equivalent ky value can then be calculated, and then these equivalent values are used in the above RectangleCoil() and
LayeredCoil() functions to calculate the inductance.
There are two degenerate cases of a multi-layer polygonal coil which can be checked. The first is the case of a single layer coil treated in Part 1b using the weighted formula, which has already
been verified to an accuracy of within 1% for most coils. The second degenerate case is that of a coil with only one turn per layer, i.e., a spiral coil. Fortunately, there are formulae available [2]
(page 176) for flat spirals with polygonal turns. So, these cases can be checked. From comparing numbers with various examples of flat polygonal coils, it appears that the true value of inductance
lies between the value calculated using the weighted formula in Part 1b, and the value calculated assuming a round coil enclosing the same area. Therefore, it is suggested that both methods be used
to get the maximum and minimum values. A simple average of the two values is likely to be close enough for all practical purposes.
Continue to:
Back to:
This page last updated: June 28, 2012
Copyright 2011, 2012, Robert Weaver
Homework Help
Posted by Surbhi on Sunday, March 10, 2013 at 10:42am.
5 _
3 _ ETC
• maths - annabelle, Wednesday, March 13, 2013 at 8:17pm
v h vhv g g
• maths - Manboob, Wednesday, March 20, 2013 at 5:24pm
Please specify its hard to know what it means
• maths - Shaniqua, Sunday, March 24, 2013 at 5:51am
Oh sweetie you are so cute
• maths - Anonymous, Thursday, May 30, 2013 at 9:48pm
oh you so hot
• maths - SLICEtheBALLSoffMATH#Einstein, Thursday, May 30, 2013 at 9:50pm
whats wrong with u people? cute and HHot?
please explain your question Surbhi?
• maths - Kris, Monday, January 20, 2014 at 6:40pm
I think that what they mean is to lay out something to count the number, they give you the number 5, find something (beans, legos, something) to act as a counter. Use those counters to count to
the number 5, or the number 3, use more counters to count above those numbers.
That is what my Kinder's teacher would have them do.
the first resource for mathematics
Lyapunov exponents and stationary solutions for affine stochastic delay equations.
(English) Zbl 0704.60062
The authors consider stochastic functional DEs of the type of affine delay equations
$$(*)\qquad dX(t)=HX_t\,dt+dM(t),\qquad X_0=y,\qquad t\ge 0,$$
where the driving force M is a stochastic process with stationary increments or locally integrable deterministic, y and $X_t$ with $X_t(u)=X(t+u)$, $t\ge 0$, are $\mathbb{R}^n$-valued Lebesgue
integrable functions on a fixed interval $J=[-r,0]$, and H is the bounded linear operator defined by $Hy=\int_J dm(u)\,y(u)$ with an $n\times n$-matrix valued function m of bounded variation.
The Lyapunov spectrum of the associated homogeneous DE is used to decompose the state space into finite-dimensional and finite- codimensional subspaces and to investigate the (asymptotics and)
Lyapunov exponents of the solution X(t) via projections onto these subspaces. If the associated homogeneous DE has no vanishing Lyapunov exponents and the driving force M has stationary increments,
there exists a unique stationary solution X(t). For this situation a description of the almost sure Lyapunov spectrum and results on the $p$th moment Lyapunov exponents of (*) are given.
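As a concrete illustration (not part of the review), a scalar instance of (*) with H a single point delay, dX(t) = aX(t−r)dt + σdW(t), can be simulated by an Euler–Maruyama scheme; the function name and the constant initial segment are our assumptions:

```python
import math
import random

def euler_delay(a, r, sigma, y0, T, dt, seed=0):
    """Euler-Maruyama for dX(t) = a*X(t-r) dt + sigma dW(t),
    with constant initial segment X(u) = y0 for u in [-r, 0]."""
    rng = random.Random(seed)
    n_steps = int(round(T / dt))
    lag = int(round(r / dt))          # delay measured in time steps
    path = [y0]
    for n in range(n_steps):
        delayed = path[n - lag] if n - lag >= 0 else y0
        dW = rng.gauss(0.0, math.sqrt(dt))
        path.append(path[-1] + a * delayed * dt + sigma * dW)
    return path

# With r = 0 and sigma = 0 this collapses to Euler for x' = a*x,
# so the endpoint should be close to exp(a*T).
print(euler_delay(-1.0, 0.0, 0.0, 1.0, 1.0, 0.001)[-1])
```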
60H25 Random operators and equations
34K25 Asymptotic theory of functional-differential equations
60G10 Stationary processes
93E20 Optimal stochastic control (systems)
A Conversion between Utility and Information
Pedro Alejandro Ortega and Daniel Alexander Braun
In: AGI 2010, 5-8 March 2010, Lugano, Switzerland.
Rewards typically express desirabilities or preferences over a set of alternatives. Here we propose that rewards can be defined for any probability distribution based on three desiderata, namely that
rewards should be real-valued, additive and order-preserving, where the latter implies that more probable events should also be more desirable. Our main result states that rewards are then uniquely
determined by the negative information content. To analyze stochastic processes, we define the utility of a realization as its reward rate. Under this interpretation, we show that the expected
utility of a stochastic process is its negative entropy rate. Furthermore, we apply our results to analyze agent-environment interactions. We show that the expected utility that will actually be
achieved by the agent is given by the negative cross-entropy from the input-output (I/O) distribution of the coupled interaction system and the agent's I/O distribution. Thus, our results allow for
an information-theoretic interpretation of the notion of utility and the characterization of agent-environment interactions in terms of entropy dynamics.
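The claimed identity between expected reward and negative entropy can be checked numerically. Here reward is defined as the negative information content of an event, per the abstract; the base-2 logarithm and the toy distribution are our choices:

```python
import math

def reward(p):
    """Reward of an event of probability p: its negative information content."""
    return math.log2(p)

dist = {"a": 0.5, "b": 0.25, "c": 0.25}
expected_reward = sum(p * reward(p) for p in dist.values())
entropy = -sum(p * math.log2(p) for p in dist.values())
print(expected_reward, -entropy)  # both equal -1.5 for this distribution
```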
EPrint Type: Conference or Workshop Item (Paper)
Subjects: Theory & Algorithms
ID Code: 5530
Deposited By: Pedro Ortega
Deposited On: 20 February 2010
Solving system of nonlinear equations
Dear all, Can anyone tell me all the algorithms that are available for finding all solutions of a system of nonlinear equations?
I am particularly interested in solving problems of the form:
where f1, f2,..., fn are nonlinear functions of their arguments and X1,X2,...,Xn are matrices.
Thanks, Pat.
Is your system known to have a solution? That, in fact matters. What are the properties of $f_n$? That matters as well. Finally, do you want an algorithm that 1. approximates a solution? 2.
computes a solution? 3. approximates and proves that a solution exists. – Rabee Tourky Oct 13 '12 at 2:22
2 "Can anyone tell me all the algorithms?" -- I strongly doubt it. This question strikes me as far too vague; see mathoverflow.net/howtoask I suspect that you don't want to consider all possible
non-linear functions, but rather some subclass such as noncommutative polynomials... – Yemon Choi Oct 13 '12 at 2:23
Hi, Depending on the parameterization of f, the system may not have a solution. In my case, f has the form fi(X1,X2,...,Xn)=inv(E0i+E1i*X1+...+Eni*Xn)*Hi, where E0i,...Eni and Hi are matrices.
There are potentially many ways in which one can get a solution to the system above. An obvious one is successive approximation. This method may fail to produce a solution even when a solution
exists. The ideal algorithm would characterize all possible solutions and show how to construct a particular solution. Thanks, Pat. – Pat M Oct 13 '12 at 10:02
Look, if your nonlinear function is the square root function, how are you proposing to define the square root of a matrix? – Yemon Choi Oct 14 '12 at 4:00
@David: This problem is not solved in undergraduate numerical analysis texts, unless I am mistaken. @Yemon: I do not have a square root function in my problem. – Pat M Oct 18 '12 at 8:41
1 Answer
You can use Newton's method to solve $f(X)-X=0$: in your case, it means simply to study the recursively defined sequence $$ X_{k+1}=f(X_k),\quad\text{along with a clever choice for $X_0$.} $$
Of course here $X_k\in \mathbb R^n$. Assuming that you know that you have a solution $f(Y)=Y$ at which $f'(Y)=0$. Then $$ X_{k+1}-Y=f(X_{k})-f(Y)=\int_0^1(1-\theta)f''(Y+\theta(X_k-Y))d\theta
(X_k-Y)^2 $$ so that assuming for instance that $f''$ is a bounded quadratic form (this could be only a local assumption) you get the so-called quadratic convergence to $Y$ (very fast
convergence) $$ \Vert X_{k+1}-Y\Vert\le C\Vert X_{k}-Y\Vert^2\Longrightarrow \Vert X_{k}-Y\Vert\le C^{2^k-1}\Vert X_{0}-Y\Vert^{2^k}. $$ To make only a local hypothesis, you must choose $X_0$
not too far from $Y$, which in practice is not so difficult to achieve.
On the other hand, to solve $\Phi(X)=0$, Newton's method requires only that at a solution $\Phi(Y)=0$ the differential $\Phi'(Y)$ is invertible: then your equation becomes $$\Phi(X)=0\Longleftrightarrow -\Phi'(Y)^{-1}\Phi(X)+X=X\Longleftrightarrow f(X)=X$$ with $f(X)=-\Phi'(Y)^{-1}\Phi(X)+X$, $f(Y)=Y$, $f'(Y)=0$ and you are back to the previous setting.
A simple 1D example is $$ f(x)=\frac{x}{2}+\frac{a}{2x},\quad\text{$a>0$, $x_{k+1}=f(x_k)$ converging to $\sqrt{a}$} $$ an excellent algorithm to compute the square root, anyhow much faster
than the high-school tedious method of extraction. Try your hand with $a=2$, you will see how accurate is the approximation of $\sqrt 2$for simply $k=2$, starting with $x_0=2$.
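The numerics of this closing example are easy to reproduce (a sketch; the function name is ours):

```python
import math

def heron_step(x, a):
    """One step of the iteration x -> x/2 + a/(2x), which converges to sqrt(a)."""
    return x / 2 + a / (2 * x)

# a = 2, x0 = 2: each step roughly doubles the number of correct digits
a, x = 2.0, 2.0
for k in range(1, 4):
    x = heron_step(x, a)
    print(k, x)
```

Already at k = 2 the iterate agrees with √2 to about three decimal places, and at k = 3 to about six, illustrating the quadratic convergence described above.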
Hi Bazin, writing down the Newton for the problem I want to solve is rather complicated and expensive. In addition, the newton algorithm would be sensitive to the starting value and might
deliver a different answer for every start value. The reason I am interested in an algorithm that could possibly give me all solutions is that I do not know in advance which solution I am
computing. – Pat M Oct 18 '12 at 8:37
This certainly won't give all solutions. – David Ketcheson Oct 22 '12 at 4:00
Yes: the method devised by Newton solves the equation $\Phi(X)=0$ at simple zeroes. Somehow these zeroes are stable by small perturbation and are much easier to find than the zeroes at
which the differential is vanishing. – Bazin Oct 24 '12 at 21:56
Circles & Vectors
October 16th 2008, 05:34 AM #1
Junior Member
Aug 2008
Circles & Vectors
In circle geometry, there is a mathematical property that states a radius, drawn to bisect a chord, will meet the chord at 90 degrees.
I must prove this property using vector methods. How do I do this? I posted a similar question here some time ago that uses non-vector methods. I did eventually figure that out, but now I need
to use vector methods.
Any help would be appreciated.
1. Let C denote the center of the circle and the origin.
Then the vectors $\vec a$ and $\vec b$ are position vectors pointing at points on the circle line. Therefore
$|\vec a| = |\vec b| = r$
2. The vector $\overrightarrow{AB} = \vec b - \vec a$
3. The vector $\vec m = \frac12(\vec a + \vec b)$ has the same direction as the line passing through C and M, the midpoint of $\overline{AB}$
4. Calculate
$\vec m \cdot \overrightarrow{AB} = \frac12(\vec a + \vec b) \cdot (\vec b - \vec a) = \frac12\left( \vec a \cdot \vec b + |\vec b|^2 - |\vec a|^2 - \vec a \cdot \vec b \right)$
Since $|\vec a| = |\vec b|$ the value in the bracket is zero. Therefore $\vec m$ and $\overrightarrow{AB}$ are perpendicular.
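A quick numerical check of the identity (not part of the original post): for any two points on a circle centred at the origin, $\vec m \cdot \overrightarrow{AB} = \frac12(|\vec b|^2 - |\vec a|^2) = 0$ up to rounding.

```python
import math
import random

random.seed(1)
r = 1.0
# Two points A and B on a circle of radius r centred at C = origin
ta = random.uniform(0.0, 2.0 * math.pi)
tb = random.uniform(0.0, 2.0 * math.pi)
a = (r * math.cos(ta), r * math.sin(ta))
b = (r * math.cos(tb), r * math.sin(tb))
m = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)   # direction of C -> midpoint of AB
ab = (b[0] - a[0], b[1] - a[1])                   # chord vector AB
dot = m[0] * ab[0] + m[1] * ab[1]
print(abs(dot) < 1e-12)  # True: the bisecting radius is perpendicular to the chord
```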
October 16th 2008, 12:00 PM #2
Econometric Theory/Serial Correlation
There are times, especially in time-series data, when the CLR assumption of $corr(\epsilon_t, \epsilon_{t-1})=0$ is broken. This is known in econometrics as Serial Correlation or Autocorrelation.
This means that $corr(\epsilon_t, \epsilon_{t-1}) \neq 0$ and there is a pattern across the error terms. The error terms are then not independently distributed across the observations and are not
strictly random.
Examples of Autocorrelation
Functional Form
When the error term is related to the previous error term, it can be written in an algebraic equation. $\epsilon_t = \rho \epsilon_{t-1} + u_t$ where ρ is the autocorrelation coefficient between the
two disturbance terms, and u is the disturbance term for the autocorrelation. This is known as an Autoregressive Process. $-1< \rho = corr(\epsilon_t, \epsilon_{t-1}) < 1$ The u is needed within the
equation because although the error term is less random, it still has a slight random effect.
Serial Correlation of the Nth Order
Autoregressive model
• First order Autoregressive Process, AR(1):$\epsilon_t = \rho \epsilon_{t-1} + u_t$
□ This is known as the first order autoregression, due to the error term only depending on the previous error term.
• nth order Autoregressive Process, AR(n):$\epsilon_t = \rho_1 \epsilon_{t-1} + \rho_2 \epsilon_{t-2} + \cdots + \rho_n \epsilon_{t-n} + u_t$
Moving-average model
The notation MA(q) refers to the moving average model of order q:
$X_t = \mu + \varepsilon_t + \sum_{i=1}^q \theta_i \varepsilon_{t-i}\,$
where the $\theta_1, \ldots, \theta_q$ are the parameters of the model, μ is the expectation of $X_t$ (often assumed to equal 0), and the $\varepsilon_t$, $\varepsilon_{t-1}$,... are again white-noise
error terms. The moving-average model is essentially a finite impulse response filter with some additional interpretation placed on it.
Autoregressive–moving-average model
The notation ARMA(p, q) refers to the model with p autoregressive terms and q moving-average terms. This model contains the AR(p) and MA(q) models,
$X_t = c + \varepsilon_t + \sum_{i=1}^p \varphi_i X_{t-i} + \sum_{i=1}^q \theta_i \varepsilon_{t-i}.\,$
Causes of Autocorrelation
1. Spatial Autocorrelation
$corr(\epsilon_t, \epsilon_{t-1}) \neq 0$ Spatial Autocorrelation occurs when the two errors are spatially and/or geographically related. In simpler terms, they are "next to each other." Example: The city of St. Paul has a spike of crime and so it hires additional police. The following year, it finds that the crime rate decreased significantly. Amazingly, the city of Minneapolis, which had not adjusted its police force, finds that it has an increase in the crime rate over the same period.
• Note: this type of Autocorrelation occurs over cross-sectional samples.
2. Inertia/Time to Adjust
This often occurs in Macro, time series data. The US interest rate unexpectedly increases and so there is an associated change in exchange rates with other countries. Reaching a new equilibrium could take some time.
3. Prolonged Influences
This is again a Macro, time series issue dealing with economic shocks. It is now expected that the US interest rate will increase. The associated exchange rates will slowly adjust up until the announcement by the Federal Reserve and may overshoot the equilibrium.
4. Data Smoothing/Manipulation
Using functions to smooth data will bring autocorrelation into the disturbance terms.
5. Misspecification
A regression will often show signs of autocorrelation when there are omitted variables. Because the missing independent variable now exists in the disturbance term, we get a disturbance term that looks like $\epsilon_t = \beta_2 X_2 + u_t$ when the correct specification is $Y_t = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + u_t$.
Consequences of Autocorrelation
The main problem with autocorrelation is that it may make a model look better than it actually is.
List of consequences
1. Coefficients are still unbiased $E(\epsilon_t) = 0, cov(X_t, u_t) = 0$
2. True variance of $\hat{\beta}$ is increased, by the presence of autocorrelation.
3. Estimated variance of $\hat{\beta}$ is smaller due to autocorrelation (biased downward).
4. A decrease in $se(\hat{\beta})$ and an increase of the t-statistics; this results in the estimator looking more accurate than it actually is.
5. R² becomes inflated.
All of these problems result in hypothesis tests becoming invalid.
Testing for Autocorrelation
1. While not conclusive, an impression can be gained by viewing a graph of the dependent variable against the error term (namely, a residual scatter-plot).
2. Durbin-Watson test:
1. Assume $\epsilon_t = \epsilon_{t-1} \rho + u_t$
2. Test H(0): ρ = 0 (no AC) against H(1): ρ > 0 (one-tailed test)
3. Test statistic $DW = \frac{\sum (\epsilon_t - \epsilon_{t-1})^2}{\sum \epsilon_t^2} \approx 2 - 2 \rho$
• Any value under D(L) (in the D-W table) rejects the null hypothesis and AC exists.
• Any value between D(L) and D(W) leaves us with no conclusion of AC.
• Any value larger than D(W) accepts the null hypothesis and AC does not exist.
• Note, this is a one-tailed test. To test the other tail, use 4 - DW as the test statistic instead.
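The statistic is easy to compute directly. A sketch (pure Python, names ours) shows DW near 2 for white noise and near 2(1 − ρ) for a simulated AR(1) series with ρ = 0.7:

```python
import random

def durbin_watson(e):
    """DW = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2."""
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    return num / sum(x * x for x in e)

rng = random.Random(0)
white = [rng.gauss(0.0, 1.0) for _ in range(5000)]       # no autocorrelation
ar1 = [0.0]                                              # AR(1), rho = 0.7
for _ in range(5000):
    ar1.append(0.7 * ar1[-1] + rng.gauss(0.0, 1.0))

print(durbin_watson(white))  # close to 2
print(durbin_watson(ar1))    # close to 2*(1 - 0.7) = 0.6
```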
Last modified on 20 June 2012, at 23:37
Competition between choices over time
Measuring choices over time implies competition between alternates.
This is a fairly obvious statement. However, some of the mathematical properties of this system are less well known. These inform the expected behaviour of observations, helping us correctly specify
null hypotheses.
• The proportion of {shall, will} utterances where shall is chosen, p(shall | {shall, will}), is in competition with the alternative probability of will (they are mutually exclusive) and bounded on
a probabilistic scale.
• The probability associated with each member of a set of alternates X = {x[i]}, which we might write as p(x[i] | X), is bounded, 0 ≤ p(x[i] | X) ≤ 1, and exhaustive, Σ p(x[i] | X) = 1.
A bounded system behaves differently from an unbounded one. Every child knows that a ball bouncing in an alley behaves differently than in an open playground. ‘Walls’ direct motion toward the centre.
In this short paper we discuss two properties of competitive choice:
1. the tendency for change to be S-shaped rather than linear, and
2. how this has an impact on confidence intervals.
S curves and Wilson intervals
We can sketch the overall behaviour of the system by plotting Wilson intervals for the ‘S’ curve (below). We have plotted a logistic curve for p (k = 1) and added intervals for n = 10 and n = 100.
Observe that with a small n the confidence interval is large and more highly skewed. The difference (w⁺ – w⁻) is greatest for p = 0.5.
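The Wilson score interval behind these plots can be computed directly (the helper name is ours; z = 1.96 gives a 95% interval):

```python
import math

def wilson(p, n, z=1.96):
    """Wilson score interval (w-, w+) for an observed proportion p out of n."""
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2.0 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n))
    return centre - half, centre + half

# The interval stays inside [0, 1] and is much wider at n = 10 than n = 100
for n in (10, 100):
    lo, hi = wilson(0.5, n)
    print(n, round(lo, 3), round(hi, 3))
```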
1. The S curve
2. Poisson error bars
3. S curves and Wilson intervals
Wallis, S.A. 2010. Competition between choices over time. London: Survey of English Usage, UCL. http://www.ucl.ac.uk/english-usage/statspapers/competition-over-time.pdf
VLSI Designs for Multiplications over Finite Fields GF(2^m)
Results 1 - 10 of 24
- IEEE Transactions on Computers , 1996
"... In this paper a new bit-parallel structure for a multiplier with low complexity in Galois fields is introduced. The multiplier operates over composite fields GF((2^n)^m), with k = nm. The
Karatsuba-Ofman algorithm is investigated and applied to the multiplication of polynomials over GF(2^n ..."
In this paper a new bit-parallel structure for a multiplier with low complexity in Galois fields is introduced. The multiplier operates over composite fields GF((2^n)^m), with k = nm. The
Karatsuba-Ofman algorithm is investigated and applied to the multiplication of polynomials over GF(2^n). It is shown that this operation has a complexity of order O(k^(log_2 3)) under certain
constraints regarding k. A complete set of primitive field polynomials for composite fields is provided which perform modulo reduction with low complexity. As a result, multipliers for fields GF(2^k)
up to k = 32 with low gate counts and low delays are listed. The architectures are highly modular and thus well suited for VLSI implementation. This paper was presented in part at the
Swedish-Russian Workshop on Information Theory, August 22-27, 1993, Molle, Sweden. † The author is with the Electrical and Computer Engineering Department, Worcester Polytechnic Institute, Worcester,
MA 01609. E-mail: ...
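The Karatsuba–Ofman step referred to above — three half-size multiplications instead of four — can be sketched for GF(2)[x] polynomials packed into integers (the recursion threshold and names are illustrative, not the paper's construction):

```python
def clmul(a, b):
    """Schoolbook carry-less multiplication in GF(2)[x] (bit-packed)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a, b, bits):
    """Recursive Karatsuba over GF(2)[x]: split at h = bits//2 and use
    a1*b0 + a0*b1 = (a0+a1)*(b0+b1) + a0*b0 + a1*b1 (addition is XOR)."""
    if bits <= 8:
        return clmul(a, b)
    h = bits // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    lo = karatsuba_gf2(a0, b0, h)
    hi = karatsuba_gf2(a1, b1, h)
    mid = karatsuba_gf2(a0 ^ a1, b0 ^ b1, h)
    return (hi << (2 * h)) ^ ((lo ^ hi ^ mid) << h) ^ lo

# (x^2 + 1)(x + 1) = x^3 + x^2 + x + 1
print(bin(karatsuba_gf2(0b101, 0b11, 16)))  # 0b1111
```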
- IEEE Transactions on Computers , 1999
"... This contribution describes a new class of arithmetic architectures for Galois fields GF(2^k). The main applications of the architecture are public-key systems which are based on the discrete
logarithm problem for elliptic curves. The architectures use a representation of the field GF(2^k ..."
Cited by 24 (2 self)
This contribution describes a new class of arithmetic architectures for Galois fields GF(2^k). The main applications of the architecture are public-key systems which are based on the discrete
logarithm problem for elliptic curves. The architectures use a representation of the field GF(2^k) as GF((2^n)^m), where k = n · m. The approach explores bit parallel arithmetic in the subfield
GF(2^n), and serial processing for the extension field arithmetic. This mixed parallel-serial (hybrid) approach can lead to fast implementations. As the core module, a hybrid multiplier is
introduced and several optimizations are discussed. (This paper is an extension of [1]. The bit parallel squarer architectures have been completely revised.) We provide two different approaches to
squaring. We develop exact expressions for the complexity of parallel squarers in composite fields which can have a surprisingly low complexity. The hybrid architectures are capable of explori...
- IEEE Transactions on Computers , 2004
"... We introduce a generalized method for constructing sub-quadratic complexity multipliers for even characteristic field extensions. The construction is obtained by recursively extending short
convolution algorithms and nesting them. To obtain the short convolution algorithms the Winograd short convolu ..."
Cited by 22 (0 self)
We introduce a generalized method for constructing sub-quadratic complexity multipliers for even characteristic field extensions. The construction is obtained by recursively extending short
convolution algorithms and nesting them. To obtain the short convolution algorithms the Winograd short convolution algorithm is reintroduced and analyzed in the context of polynomial multiplication.
We present a recursive construction technique that extends any d point multiplier into an n = d^k point multiplier with area that is sub-quadratic and delay that is logarithmic in the bit-length n.
We present a thorough analysis that establishes the exact space and time complexities of these multipliers. Using the recursive construction method we obtain six new constructions, among which one
turns out to be identical to the Karatsuba multiplier. All six algorithms have sub-quadratic space complexities and two of the algorithms have significantly better time complexities than the
Karatsuba algorithm. Keywords: Bit-parallel multipliers, finite fields, Winograd convolution
- MASTER’S THESIS, WORCESTER POLYTECHNIC INST , 1998
"... Security issues will play an important role in the majority of communication and computer networks of the future. As the Internet becomes more and more accessible to the public, security
measures will have to be strengthened. Elliptic curve cryptosystems allow for shorter operand lengths than other ..."
Cited by 19 (0 self)
Security issues will play an important role in the majority of communication and computer networks of the future. As the Internet becomes more and more accessible to the public, security measures
will have to be strengthened. Elliptic curve cryptosystems allow for shorter operand lengths than other public-key schemes based on the discrete logarithm in finite fields and the integer
factorization problem and are thus attractive for many applications. This thesis describes an implementation of a crypto engine based on elliptic curves. The underlying algebraic structures are
composite Galois fields GF((2^n)^m) in a standard base representation. As a major new feature, the system is developed for a reconfigurable platform based on Field Programmable Gate Arrays (FPGAs).
FPGAs combine the flexibility of software solutions with the security of traditional hardware implementations. In particular, it is possible to easily change all algorithm parameters such as curve
coefficients, field order, or field representation. The thesis deals with the design and implementation of elliptic curve point multiplication architectures. The architectures are described in VHDL
and mapped to Xilinx FPGA devices. Architectures over Galois fields of different order and representation were implemented and compared. Area and timing measurements are provided for all
architectures. It is shown that a full point multiplication on elliptic curves of real-world size can be implemented on commercially available FPGAs.
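The subfield arithmetic underlying such designs is a polynomial-basis multiply-then-reduce step. A hedged pure-Python sketch, using GF(2^8) with the polynomial x^8 + x^4 + x^3 + x + 1 (0x11B, the AES choice) as a stand-in field rather than the thesis's GF((2^n)^m) tower:

```python
def gf2m_mul(a, b, poly=0x11B, m=8):
    """Polynomial-basis multiplication in GF(2^m): a shift-and-XOR
    product interleaved with reduction by the irreducible polynomial.
    Default field: GF(2^8) with x^8 + x^4 + x^3 + x + 1 (0x11B) -- an
    illustrative choice, not one mandated by the thesis above."""
    r = 0
    while b:
        if b & 1:
            r ^= a                # accumulate current multiple of a
        b >>= 1
        a <<= 1                   # multiply a by x ...
        if (a >> m) & 1:
            a ^= poly             # ... reducing when the degree reaches m
    return r

print(hex(gf2m_mul(0x53, 0xCA)))  # 0x1: {53} and {CA} are inverses in this field
```

A curve point multiplication then bottoms out in many such field multiplications, which is why the hardware multiplier architecture dominates area and speed.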
- IEEE Transactions on Computers , 2004
"... Abstract—Representing the field elements with respect to the polynomial (or standard) basis, we consider bit parallel architectures for multiplication over the finite field GFð2 m Þ. In this
effect, first we derive a new formulation for polynomial basis multiplication in terms of the reduction matri ..."
Cited by 17 (2 self)
Abstract—Representing the field elements with respect to the polynomial (or standard) basis, we consider bit parallel architectures for multiplication over the finite field GF(2^m). In this effect,
first we derive a new formulation for polynomial basis multiplication in terms of the reduction matrix Q. The main advantage of this new formulation is that it can be used with any field defining
irreducible polynomial. Using this formulation, we then develop a generalized architecture for the multiplier and analyze the time and gate complexities of the proposed multiplier as a function of
degree m and the reduction matrix Q. To the best of our knowledge, this is the first time that these complexities are given in terms of Q. Unlike most other articles on bit parallel finite field
multipliers, here we also consider the number of signals to be routed in hardware implementation and we show that, compared to the well-known Mastrovito’s multiplier, the proposed architecture has
fewer routed signals. In this article, the proposed generalized architecture is further optimized for three special types of polynomials, namely, equally spaced polynomials, trinomials, and
pentanomials. We have obtained explicit formulas and complexities of the multipliers for these three special irreducible polynomials. This makes it very easy for a designer to implement the proposed
multipliers using hardware description languages like VHDL and Verilog with minimum knowledge of finite field arithmetic. Index Terms—Finite or Galois field, Mastrovito multiplier, all-one
polynomial, polynomial basis, trinomial, pentanomial and equally spaced polynomial.
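A small illustration of the reduction-matrix idea, assuming the trinomial f(x) = x^4 + x + 1 (the paper's Q is defined for an arbitrary irreducible f; the helper names here are hypothetical):

```python
def reduction_matrix(m, f):
    """Row Q[i] = coordinates (an m-bit mask) of x^(m+i) mod f(x) in the
    basis 1, x, ..., x^(m-1), for i = 0 .. m-2.  f is the bit mask of
    the field polynomial (bit j = coefficient of x^j)."""
    q = []
    cur = f ^ (1 << m)            # x^m mod f: drop f's leading term
    for _ in range(m - 1):
        q.append(cur)
        cur <<= 1                 # next power of x ...
        if (cur >> m) & 1:
            cur ^= f              # ... reduced mod f
    return q

def reduce_with_q(d, m, q):
    """Reduce a raw degree <= 2m-2 product d to a field element: keep the
    low m bits and XOR in the Q row for every set high-part bit."""
    lo = d & ((1 << m) - 1)
    for i in range(m - 1):
        if (d >> (m + i)) & 1:
            lo ^= q[i]
    return lo

q = reduction_matrix(4, 0b10011)   # f(x) = x^4 + x + 1
print([bin(row) for row in q])     # x^4 -> 0b11, x^5 -> 0b110, x^6 -> 0b1100
```

In the bit-parallel multiplier the rows of Q become fixed XOR trees, which is why the gate and delay complexities can be stated directly in terms of Q.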
- IEEE Transactions on Computers , 1998
"... This contribution introduces a new class of multipliers for finite fields GF ((2 n ) 4 ). The architecture is based on a modified version of the Karatsuba-Ofman algorithm (KOA). By determining
optimized field polynomials of degree four, the last stage of the KOA and the modulo reduction can b ..."
Cited by 13 (0 self)
This contribution introduces a new class of multipliers for finite fields GF((2^n)^4). The architecture is based on a modified version of the Karatsuba-Ofman algorithm (KOA). By determining
optimized field polynomials of degree four, the last stage of the KOA and the modulo reduction can be combined. This saves computation and area in VLSI implementations. The new algorithm leads to
architectures which show a considerably improved gate complexity compared to traditional approaches and reduced delay if compared with KOA-based architectures with separate modulo reduction. The new
multipliers lead to highly modular architectures and are thus well suited for VLSI implementations. Three types of field polynomials are introduced and conditions for their existence are established.
For the small fields where n = 2, 3, ..., 8, which are of primary technical interest, optimized field polynomials were determined by an exhaustive search. For each field order, exact space and ...
- IEEE Transactions on Computers , 1997
"... Reed-Solomon (RS) error correction codes are being widely used in modern communication systems such as compact disk players or satellite communication links. RS codes rely on arithmetic in
finite, or Galois fields. The specific field GF (2 8 ) is of central importance for many practical systems. T ..."
Cited by 13 (2 self)
Reed-Solomon (RS) error correction codes are being widely used in modern communication systems such as compact disk players or satellite communication links. RS codes rely on arithmetic in finite, or
Galois fields. The specific field GF(2^8) is of central importance for many practical systems. The most costly, and thus most critical, elementary operations in RS decoders are multiplication and
inversion in Galois fields. Although there have been considerable efforts in the area of Galois field arithmetic architectures, there appears to be very little reported work for Galois field
arithmetic for reconfigurable hardware. This contribution provides a systematic comparison of two promising arithmetic architecture classes. The first one is based on a standard base representation,
and the second one is based on composite fields. For both classes a multiplier and an inverter for GF(2^8) are described and theoretical gate counts are provided. Using a design entry based on a
VHDL descr...
, 1993
"... Interest in normal bases over finite fields stems both from mathematical theory and practical applications. There has been a lot of literature dealing with various properties of normal bases
(for finite fields and for Galois extension of arbitrary fields). The advantage of using normal bases to repr ..."
Cited by 9 (0 self)
Interest in normal bases over finite fields stems both from mathematical theory and practical applications. There has been a lot of literature dealing with various properties of normal bases (for
finite fields and for Galois extension of arbitrary fields). The advantage of using normal bases to represent finite fields was noted by Hensel in 1888. With the introduction of optimal normal bases,
large finite fields, that can be used in secure and efficient implementation of several cryptosystems, have recently been realized in hardware. The present thesis studies various theoretical and
practical aspects of normal bases in finite fields. We first give some characterizations of normal bases. Then by using linear algebra, we prove that F_(q^n) has a basis over F_q such that any element
of F_(q^n) represented in this basis generates a normal basis if and only if some groups of coordinates are not simultaneously zero. We show how to construct an irreducible polynomial of degree 2^n with
linearly i...
- In 1995 IEEE International Symposium on Information Theory , 1995
"... This contribution is concerned with bit parallel inverters over finite fields. Two alternative approaches for inversion with low complexity which were proposed in the late nineteen eighties will
be reviewed. Previously they seem to have received relatively little attention in the scientific communit ..."
Cited by 8 (1 self)
This contribution is concerned with bit parallel inverters over finite fields. Two alternative approaches for inversion with low complexity which were proposed in the late nineteen eighties will be
reviewed. Previously they seem to have received relatively little attention in the scientific community. Both methods are based on multiple field extension of GF (2). We will try to restate the two
algorithms in a clear fashion. It will be shown that one architecture is a generalization of the other's architecture core algorithm. As an impressive example of the advantage of inverters operating
over extension fields, the optimized complexity of a bit parallel inverter in the important field GF(2^8) will be computed, resulting in a surprisingly low gate count. 1 Introduction Galois field
arithmetic has wide spread applications in contemporary communication systems, in particular in cryptography and in channel coding. Modern applications in many cases call for VLSI implementations of
the a...
- in Advances in Cryptography --- EUROCRYPT '97 , 1997
"... This contribution describes a new class of arithmetic architectures for Galois fields GF (2 k ). The main applications of the architecture are public-key systems which are based on the discrete
logarithm problem for elliptic curves. The architectures use a representation of the field GF (2 k ) ..."
Cited by 7 (3 self)
This contribution describes a new class of arithmetic architectures for Galois fields GF(2^k). The main applications of the architecture are public-key systems which are based on the discrete
logarithm problem for elliptic curves. The architectures use a representation of the field GF(2^k) as GF((2^n)^m), where k = n * m. The approach explores bit parallel arithmetic in the
subfield GF(2^n), and serial processing for the extension field arithmetic. This mixed parallel-serial (hybrid) approach can lead to very fast implementations. The principle of this approach was
initially suggested by Mastrovito. As the core module, a hybrid multiplier is introduced and several optimizations are discussed. We provide two different approaches to squaring which, in conjunction
with the multiplier, yield fast exponentiation architectures. The hybrid architectures are capable of exploring the time-space trade-off paradigm in a flexible manner. In particular, the number of ...
Domain of definition
I am having trouble solving this:
Let f(x, y) = x*y*e^(-(x-a)^2 - (y-b)^2). In which direction should one move from the point (a, b) in the domain of definition if one wants the function values to increase as quickly as possible?
Determine an equation for the tangent plane to the surface z = f(x, y) at the point (a, b, ab). Use the differential of f to compute an approximate value of f(9a/10, 6b/5). At which points is the
tangent plane to the surface z = f(x, y) horizontal?
Anyone who knows how to solve this?
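For the direction-of-fastest-increase part, the answer is the gradient of f at (a, b). A quick numerical check with central differences (sample values a = 1, b = 2 chosen here for illustration; the gradient at (a, b) should come out as (b, a)):

```python
import math

def f(x, y, a, b):
    """f(x, y) = x*y*exp(-(x-a)^2 - (y-b)^2)."""
    return x * y * math.exp(-(x - a) ** 2 - (y - b) ** 2)

def grad(x, y, a, b, h=1e-5):
    """Central-difference approximation of the gradient of f at (x, y)."""
    gx = (f(x + h, y, a, b) - f(x - h, y, a, b)) / (2 * h)
    gy = (f(x, y + h, a, b) - f(x, y - h, a, b)) / (2 * h)
    return gx, gy

# At (a, b) the exponential factor equals 1 and its derivative vanishes,
# so grad f(a, b) = (b, a): the steepest ascent direction is (b, a).
a, b = 1.0, 2.0
print(grad(a, b, a, b))  # approximately (2.0, 1.0)
```

The same gradient gives the tangent plane z = ab + b(x - a) + a(y - b) at (a, b, ab), and the differential df = b dx + a dy handles the approximation of f(9a/10, 6b/5).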
# ---------------------------------
# Available SIMPLE DRIVER routines:
# ---------------------------------
file cgbsv.f cgbsv.f plus dependencies
prec complex
CGBSV computes the solution to a complex system of linear equations
A * X = B, where A is a band matrix of order N with KL subdiagonals
and KU superdiagonals, and X and B are N-by-NRHS matrices.
The LU decomposition with partial pivoting and row interchanges is
used to factor A as A = L * U, where L is a product of permutation
and unit lower triangular matrices with KL subdiagonals, and U is
upper triangular with KL+KU superdiagonals. The factored form of A
is then used to solve the system of equations A * X = B.
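For the special case KL = KU = 1 (tridiagonal), banded LU collapses to the classic Thomas algorithm. A pure-Python sketch, without the partial pivoting CGBSV performs (so it assumes e.g. a diagonally dominant matrix where pivoting is unnecessary):

```python
def solve_tridiag(dl, d, du, b):
    """Solve A*x = b for tridiagonal A given by its sub- (dl), main (d)
    and super- (du) diagonals; complex entries allowed.  This is banded
    LU *without* pivoting, so it assumes e.g. diagonal dominance --
    CGBSV adds partial pivoting and general bandwidths KL/KU."""
    n = len(d)
    c, dd, bb = list(du), list(d), list(b)
    for i in range(1, n):                 # forward elimination
        m = dl[i - 1] / dd[i - 1]
        dd[i] = dd[i] - m * c[i - 1]
        bb[i] = bb[i] - m * bb[i - 1]
    x = [0] * n
    x[n - 1] = bb[n - 1] / dd[n - 1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (bb[i] - c[i] * x[i + 1]) / dd[i]
    return x
```

The cost is O(N) rather than the dense O(N^3), which is the whole point of exploiting band structure.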
file cgees.f cgees.f plus dependencies
prec complex
CGEES computes for an N-by-N complex nonsymmetric matrix A, the
eigenvalues, the Schur form T, and, optionally, the matrix of Schur
vectors Z. This gives the Schur factorization A = Z*T*(Z**H).
Optionally, it also orders the eigenvalues on the diagonal of the
Schur form so that selected eigenvalues are at the top left.
The leading columns of Z then form an orthonormal basis for the
invariant subspace corresponding to the selected eigenvalues.
A complex matrix is in Schur form if it is upper triangular.
file cgeev.f cgeev.f plus dependencies
prec complex
CGEEV computes for an N-by-N complex nonsymmetric matrix A, the
eigenvalues and, optionally, the left and/or right eigenvectors.
The right eigenvector v(j) of A satisfies
A * v(j) = lambda(j) * v(j)
where lambda(j) is its eigenvalue.
The left eigenvector u(j) of A satisfies
u(j)**H * A = lambda(j) * u(j)**H
where u(j)**H denotes the conjugate transpose of u(j).
The computed eigenvectors are normalized to have Euclidean norm
equal to 1 and largest component real.
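CGEEV computes the full spectrum with eigenvectors; as a rough illustration of the eigenvalue equation above, here is only the textbook power iteration estimating the dominant eigenpair (a sketch, not CGEEV's QR-based algorithm):

```python
def power_iteration(A, iters=100):
    """Estimate the dominant eigenpair (largest |lambda|) of a square
    matrix A given as a list of lists, by repeated multiplication and a
    final Rayleigh quotient.  Assumes one eigenvalue strictly dominates
    in modulus; complex entries are fine."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = max(abs(x) for x in w)        # normalize to avoid overflow
        v = [x / s for x in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    num = sum(v[i].conjugate() * Av[i] for i in range(n))
    den = sum(v[i].conjugate() * v[i] for i in range(n))
    return num / den, v                   # Rayleigh quotient, eigenvector

lam, v = power_iteration([[2, 1], [1, 2]])
print(lam)  # 3.0, with eigenvector proportional to (1, 1)
```

The returned vector is normalized so its largest-modulus component has modulus 1, a looser convention than CGEEV's unit-norm, largest-component-real normalization.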
file cgegs.f cgegs.f plus dependencies
prec complex
This routine is deprecated and has been replaced by routine CGGES.
CGEGS computes the eigenvalues, Schur form, and, optionally, the
left and or/right Schur vectors of a complex matrix pair (A,B).
Given two square matrices A and B, the generalized Schur
factorization has the form
A = Q*S*Z**H, B = Q*T*Z**H
where Q and Z are unitary matrices and S and T are upper triangular.
The columns of Q are the left Schur vectors
and the columns of Z are the right Schur vectors.
If only the eigenvalues of (A,B) are needed, the driver routine
CGEGV should be used instead. See CGEGV for a description of the
eigenvalues of the generalized nonsymmetric eigenvalue problem (GNEP).
file cgegv.f cgegv.f plus dependencies
prec complex
This routine is deprecated and has been replaced by routine CGGEV.
CGEGV computes the eigenvalues and, optionally, the left and/or right
eigenvectors of a complex matrix pair (A,B).
Given two square matrices A and B,
the generalized nonsymmetric eigenvalue problem (GNEP) is to find the
eigenvalues lambda and corresponding (non-zero) eigenvectors x such that
A*x = lambda*B*x.
An alternate form is to find the eigenvalues mu and corresponding
eigenvectors y such that
mu*A*y = B*y.
These two forms are equivalent with mu = 1/lambda and x = y if
neither lambda nor mu is zero. In order to deal with the case that
lambda or mu is zero or small, two values alpha and beta are returned
for each eigenvalue, such that lambda = alpha/beta and
mu = beta/alpha.
The vectors x and y in the above equations are right eigenvectors of
the matrix pair (A,B). Vectors u and v satisfying
u**H*A = lambda*u**H*B or mu*v**H*A = v**H*B
are left eigenvectors of (A,B).
Note: this routine performs "full balancing" on A and B -- see
"Further Details", below.
file cgels.f cgels.f plus dependencies
prec complex
CGELS solves overdetermined or underdetermined complex linear systems
involving an M-by-N matrix A, or its conjugate-transpose, using a QR
or LQ factorization of A. It is assumed that A has full rank.
The following options are provided:
1. If TRANS = 'N' and m >= n: find the least squares solution of
an overdetermined system, i.e., solve the least squares problem
minimize || B - A*X ||.
2. If TRANS = 'N' and m < n: find the minimum norm solution of
an underdetermined system A * X = B.
3. If TRANS = 'C' and m >= n: find the minimum norm solution of
an undetermined system A**H * X = B.
4. If TRANS = 'C' and m < n: find the least squares solution of
an overdetermined system, i.e., solve the least squares problem
minimize || B - A**H * X ||.
Several right hand side vectors b and solution vectors x can be
handled in a single call; they are stored as the columns of the
M-by-NRHS right hand side matrix B and the N-by-NRHS solution
matrix X.
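A tiny illustration of case 1 (TRANS = 'N', m >= n) for two unknowns, solving the normal equations A^H A x = A^H b directly. This is only a sketch: CGELS itself uses a QR factorization, which avoids squaring the condition number.

```python
def lstsq_2col(A, b):
    """Least-squares solution of an overdetermined m x 2 complex system:
    minimize ||b - A*x|| via the normal equations A^H A x = A^H b,
    with the resulting 2x2 system solved by Cramer's rule."""
    m = len(A)
    G = [[sum(A[k][i].conjugate() * A[k][j] for k in range(m))
          for j in range(2)] for i in range(2)]           # Gram matrix A^H A
    c = [sum(A[k][i].conjugate() * b[k] for k in range(m)) for i in range(2)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    return [(c[0] * G[1][1] - G[0][1] * c[1]) / det,
            (G[0][0] * c[1] - c[0] * G[1][0]) / det]

x = lstsq_2col([[1, 0], [0, 1], [1, 1]], [1 + 1j, 2, 3 + 1j])
# this system is consistent, so the residual is zero and x = (1+1j, 2)
```

For a consistent overdetermined system the least-squares solution reproduces the exact one, as here.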
file cgelsd.f cgelsd.f plus dependencies
prec complex
CGELSD computes the minimum-norm solution to a complex linear least
squares problem:
minimize 2-norm(| b - A*x |)
using the singular value decomposition (SVD) of A. A is an M-by-N
matrix which may be rank-deficient.
Several right hand side vectors b and solution vectors x can be
handled in a single call; they are stored as the columns of the
M-by-NRHS right hand side matrix B and the N-by-NRHS solution
matrix X.
The problem is solved in three steps:
(1) Reduce the coefficient matrix A to bidiagonal form with
Householder tranformations, reducing the original problem
into a "bidiagonal least squares problem" (BLS)
(2) Solve the BLS using a divide and conquer approach.
(3) Apply back all the Householder tranformations to solve
the original least squares problem.
The effective rank of A is determined by treating as zero those
singular values which are less than RCOND times the largest singular value.
The divide and conquer algorithm makes very mild assumptions about
floating point arithmetic. It will work on machines with a guard
digit in add/subtract, or on those binary machines without guard
digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
Cray-2. It could conceivably fail on hexadecimal or decimal machines
without guard digits, but we know of none.
file cgelss.f cgelss.f plus dependencies
prec complex
CGELSS computes the minimum norm solution to a complex linear
least squares problem:
Minimize 2-norm(| b - A*x |).
using the singular value decomposition (SVD) of A. A is an M-by-N
matrix which may be rank-deficient.
Several right hand side vectors b and solution vectors x can be
handled in a single call; they are stored as the columns of the
M-by-NRHS right hand side matrix B and the N-by-NRHS solution matrix X.
The effective rank of A is determined by treating as zero those
singular values which are less than RCOND times the largest singular value.
file cgelsy.f cgelsy.f plus dependencies
prec complex
CGELSY computes the minimum-norm solution to a complex linear least
squares problem:
minimize || A * X - B ||
using a complete orthogonal factorization of A. A is an M-by-N
matrix which may be rank-deficient.
Several right hand side vectors b and solution vectors x can be
handled in a single call; they are stored as the columns of the
M-by-NRHS right hand side matrix B and the N-by-NRHS solution
matrix X.
The routine first computes a QR factorization with column pivoting:
A * P = Q * [ R11 R12 ]
[ 0 R22 ]
with R11 defined as the largest leading submatrix whose estimated
condition number is less than 1/RCOND. The order of R11, RANK,
is the effective rank of A.
Then, R22 is considered to be negligible, and R12 is annihilated
by unitary transformations from the right, arriving at the
complete orthogonal factorization:
A * P = Q * [ T11 0 ] * Z
[ 0 0 ]
The minimum-norm solution is then
X = P * Z' [ inv(T11)*Q1'*B ]
[ 0 ]
where Q1 consists of the first RANK columns of Q.
This routine is basically identical to the original xGELSX except
three differences:
o The permutation of matrix B (the right hand side) is faster and
more simple.
o The call to the subroutine xGEQPF has been substituted by the
the call to the subroutine xGEQP3. This subroutine is a Blas-3
version of the QR factorization with column pivoting.
o Matrix B (the right hand side) is updated with Blas-3.
file cgesdd.f cgesdd.f plus dependencies
prec complex
CGESDD computes the singular value decomposition (SVD) of a complex
M-by-N matrix A, optionally computing the left and/or right singular
vectors, by using divide-and-conquer method. The SVD is written
A = U * SIGMA * conjugate-transpose(V)
where SIGMA is an M-by-N matrix which is zero except for its
min(m,n) diagonal elements, U is an M-by-M unitary matrix, and
V is an N-by-N unitary matrix. The diagonal elements of SIGMA
are the singular values of A; they are real and non-negative, and
are returned in descending order. The first min(m,n) columns of
U and V are the left and right singular vectors of A.
Note that the routine returns VT = V**H, not V.
The divide and conquer algorithm makes very mild assumptions about
floating point arithmetic. It will work on machines with a guard
digit in add/subtract, or on those binary machines without guard
digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
Cray-2. It could conceivably fail on hexadecimal or decimal machines
without guard digits, but we know of none.
file cgesv.f cgesv.f plus dependencies
prec complex
CGESV computes the solution to a complex system of linear equations
A * X = B,
where A is an N-by-N matrix and X and B are N-by-NRHS matrices.
The LU decomposition with partial pivoting and row interchanges is
used to factor A as
A = P * L * U,
where P is a permutation matrix, L is unit lower triangular, and U is
upper triangular. The factored form of A is then used to solve the
system of equations A * X = B.
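The factorization CGESV performs can be sketched in a few lines of pure Python: an O(n^3) Gaussian elimination with partial pivoting on an augmented matrix (a teaching sketch, not LAPACK's blocked algorithm; complex entries work unchanged):

```python
def lu_solve(A, b):
    """Solve A*x = b by Gaussian elimination with partial pivoting --
    the same PLU strategy CGESV uses, on plain Python lists."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]          # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivot
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0] * n
    for i in range(n - 1, -1, -1):                        # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

x = lu_solve([[0, 1], [1 + 1j, 2]], [2, 5 + 1j])
# x = (1, 2); the zero in A[0][0] forced a row swap (the P in P*L*U)
```

The row swaps are exactly the permutation matrix P of the factorization A = P * L * U described above.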
file cgesvd.f cgesvd.f plus dependencies
prec complex
CGESVD computes the singular value decomposition (SVD) of a complex
M-by-N matrix A, optionally computing the left and/or right singular
vectors. The SVD is written
A = U * SIGMA * conjugate-transpose(V)
where SIGMA is an M-by-N matrix which is zero except for its
min(m,n) diagonal elements, U is an M-by-M unitary matrix, and
V is an N-by-N unitary matrix. The diagonal elements of SIGMA
are the singular values of A; they are real and non-negative, and
are returned in descending order. The first min(m,n) columns of
U and V are the left and right singular vectors of A.
Note that the routine returns V**H, not V.
file cgges.f cgges.f plus dependencies
prec complex
CGGES computes for a pair of N-by-N complex nonsymmetric matrices
(A,B), the generalized eigenvalues, the generalized complex Schur
form (S, T), and optionally left and/or right Schur vectors (VSL
and VSR). This gives the generalized Schur factorization
(A,B) = ( (VSL)*S*(VSR)**H, (VSL)*T*(VSR)**H )
where (VSR)**H is the conjugate-transpose of VSR.
Optionally, it also orders the eigenvalues so that a selected cluster
of eigenvalues appears in the leading diagonal blocks of the upper
triangular matrix S and the upper triangular matrix T. The leading
columns of VSL and VSR then form an unitary basis for the
corresponding left and right eigenspaces (deflating subspaces).
(If only the generalized eigenvalues are needed, use the driver
CGGEV instead, which is faster.)
A generalized eigenvalue for a pair of matrices (A,B) is a scalar w
or a ratio alpha/beta = w, such that A - w*B is singular. It is
usually represented as the pair (alpha,beta), as there is a
reasonable interpretation for beta=0, and even for both being zero.
A pair of matrices (S,T) is in generalized complex Schur form if S
and T are upper triangular and, in addition, the diagonal elements
of T are non-negative real numbers.
file cggev.f cggev.f plus dependencies
prec complex
CGGEV computes for a pair of N-by-N complex nonsymmetric matrices
(A,B), the generalized eigenvalues, and optionally, the left and/or
right generalized eigenvectors.
A generalized eigenvalue for a pair of matrices (A,B) is a scalar
lambda or a ratio alpha/beta = lambda, such that A - lambda*B is
singular. It is usually represented as the pair (alpha,beta), as
there is a reasonable interpretation for beta=0, and even for both
being zero.
The right generalized eigenvector v(j) corresponding to the
generalized eigenvalue lambda(j) of (A,B) satisfies
A * v(j) = lambda(j) * B * v(j).
The left generalized eigenvector u(j) corresponding to the
generalized eigenvalues lambda(j) of (A,B) satisfies
u(j)**H * A = lambda(j) * u(j)**H * B
where u(j)**H is the conjugate-transpose of u(j).
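For a 2-by-2 pair the generalized eigenvalues can be read off the quadratic det(A - lambda*B) = 0. A sketch which, unlike CGGEV's (alpha, beta) representation, assumes B is nonsingular so no infinite eigenvalues arise:

```python
import cmath

def gen_eigvals_2x2(A, B):
    """Generalized eigenvalues of a 2x2 pair (A, B): the roots of
    det(A - t*B) = 0 expanded as a quadratic in t.  Assumes B is
    nonsingular (leading coefficient nonzero)."""
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    e, f, g, h = B[0][0], B[0][1], B[1][0], B[1][1]
    # det(A - t*B) = (e*h - f*g)*t^2 - (a*h + e*d - b*g - f*c)*t + (a*d - b*c)
    p2 = e * h - f * g
    p1 = -(a * h + e * d - b * g - f * c)
    p0 = a * d - b * c
    disc = cmath.sqrt(p1 * p1 - 4 * p2 * p0)
    return (-p1 + disc) / (2 * p2), (-p1 - disc) / (2 * p2)

print(gen_eigvals_2x2([[2, 0], [0, 3]], [[1, 0], [0, 1]]))  # ((3+0j), (2+0j))
```

When B is singular, p2 = 0 and the quadratic degenerates, which is exactly the beta = 0 case the (alpha, beta) pairs are designed to represent.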
file cggglm.f cggglm.f plus dependencies
prec complex
CGGGLM solves a general Gauss-Markov linear model (GLM) problem:
minimize || y ||_2 subject to d = A*x + B*y
where A is an N-by-M matrix, B is an N-by-P matrix, and d is a
given N-vector. It is assumed that M <= N <= M+P, and
rank(A) = M and rank( A B ) = N.
Under these assumptions, the constrained equation is always
consistent, and there is a unique solution x and a minimal 2-norm
solution y, which is obtained using a generalized QR factorization
of the matrices (A, B) given by
A = Q*(R),   B = Q*T*Z,
      (0)
In particular, if matrix B is square nonsingular, then the problem
GLM is equivalent to the following weighted linear least squares problem:
minimize || inv(B)*(d-A*x) ||_2
where inv(B) denotes the inverse of B.
file cgglse.f cgglse.f plus dependencies
prec complex
CGGLSE solves the linear equality-constrained least squares (LSE) problem:
minimize || c - A*x ||_2 subject to B*x = d
where A is an M-by-N matrix, B is a P-by-N matrix, c is a given
M-vector, and d is a given P-vector. It is assumed that
P <= N <= M+P, and
rank(B) = P and rank( (A) ) = N.
( (B) )
These conditions ensure that the LSE problem has a unique solution,
which is obtained using a generalized RQ factorization of the
matrices (B, A) given by
B = (0 R)*Q, A = Z*T*Q.
file cggsvd.f cggsvd.f plus dependencies
prec complex
CGGSVD computes the generalized singular value decomposition (GSVD)
of an M-by-N complex matrix A and P-by-N complex matrix B:
U'*A*Q = D1*( 0 R ), V'*B*Q = D2*( 0 R )
where U, V and Q are unitary matrices, and Z' means the conjugate
transpose of Z. Let K+L = the effective numerical rank of the
matrix (A',B')', then R is a (K+L)-by-(K+L) nonsingular upper
triangular matrix, D1 and D2 are M-by-(K+L) and P-by-(K+L) "diagonal"
matrices and of the following structures, respectively:
If M-K-L >= 0,
K L
D1 = K ( I 0 )
L ( 0 C )
M-K-L ( 0 0 )
K L
D2 = L ( 0 S )
P-L ( 0 0 )
N-K-L K L
( 0 R ) = K ( 0 R11 R12 )
L ( 0 0 R22 )
C = diag( ALPHA(K+1), ... , ALPHA(K+L) ),
S = diag( BETA(K+1), ... , BETA(K+L) ),
C**2 + S**2 = I.
R is stored in A(1:K+L,N-K-L+1:N) on exit.
If M-K-L < 0,
K M-K K+L-M
D1 = K ( I 0 0 )
M-K ( 0 C 0 )
K M-K K+L-M
D2 = M-K ( 0 S 0 )
K+L-M ( 0 0 I )
P-L ( 0 0 0 )
N-K-L K M-K K+L-M
( 0 R ) = K ( 0 R11 R12 R13 )
M-K ( 0 0 R22 R23 )
K+L-M ( 0 0 0 R33 )
C = diag( ALPHA(K+1), ... , ALPHA(M) ),
S = diag( BETA(K+1), ... , BETA(M) ),
C**2 + S**2 = I.
(R11 R12 R13 ) is stored in A(1:M, N-K-L+1:N), and R33 is stored
( 0 R22 R23 )
in B(M-K+1:L,N+M-K-L+1:N) on exit.
The routine computes C, S, R, and optionally the unitary
transformation matrices U, V and Q.
In particular, if B is an N-by-N nonsingular matrix, then the GSVD of
A and B implicitly gives the SVD of A*inv(B):
A*inv(B) = U*(D1*inv(D2))*V'.
If ( A',B')' has orthonormal columns, then the GSVD of A and B is also
equal to the CS decomposition of A and B. Furthermore, the GSVD can
be used to derive the solution of the eigenvalue problem:
A'*A x = lambda* B'*B x.
In some literature, the GSVD of A and B is presented in the form
U'*A*X = ( 0 D1 ), V'*B*X = ( 0 D2 )
where U and V are orthogonal and X is nonsingular, and D1 and D2 are
``diagonal''. The former GSVD form can be converted to the latter
form by taking the nonsingular matrix X as
X = Q*( I 0 )
( 0 inv(R) )
file chbev.f chbev.f plus dependencies
prec complex
CHBEV computes all the eigenvalues and, optionally, eigenvectors of
a complex Hermitian band matrix A.
file chbevd.f chbevd.f plus dependencies
prec complex
CHBEVD computes all the eigenvalues and, optionally, eigenvectors of
a complex Hermitian band matrix A. If eigenvectors are desired, it
uses a divide and conquer algorithm.
The divide and conquer algorithm makes very mild assumptions about
floating point arithmetic. It will work on machines with a guard
digit in add/subtract, or on those binary machines without guard
digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
Cray-2. It could conceivably fail on hexadecimal or decimal machines
without guard digits, but we know of none.
file chbgv.f chbgv.f plus dependencies
prec complex
CHBGV computes all the eigenvalues, and optionally, the eigenvectors
of a complex generalized Hermitian-definite banded eigenproblem, of
the form A*x=(lambda)*B*x. Here A and B are assumed to be Hermitian
and banded, and B is also positive definite.
file chbgvd.f chbgvd.f plus dependencies
prec complex
CHBGVD computes all the eigenvalues, and optionally, the eigenvectors
of a complex generalized Hermitian-definite banded eigenproblem, of
the form A*x=(lambda)*B*x. Here A and B are assumed to be Hermitian
and banded, and B is also positive definite. If eigenvectors are
desired, it uses a divide and conquer algorithm.
The divide and conquer algorithm makes very mild assumptions about
floating point arithmetic. It will work on machines with a guard
digit in add/subtract, or on those binary machines without guard
digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
Cray-2. It could conceivably fail on hexadecimal or decimal machines
without guard digits, but we know of none.
file cheev.f cheev.f plus dependencies
prec complex
CHEEV computes all eigenvalues and, optionally, eigenvectors of a
complex Hermitian matrix A.
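For n = 2 the eigenvalues CHEEV computes have a closed form; a sketch for the Hermitian matrix [[a, b], [conj(b), c]] with a, c real (CHEEV itself works for any n via reduction to tridiagonal form):

```python
import math

def eig2_hermitian(a, b, c):
    """Eigenvalues of the 2x2 Hermitian matrix [[a, b], [conj(b), c]],
    with a, c real and b complex.  Both eigenvalues are real, as for
    every Hermitian matrix:
        lambda = (a + c)/2 +- sqrt(((a - c)/2)^2 + |b|^2)."""
    mid = (a + c) / 2
    rad = math.sqrt(((a - c) / 2) ** 2 + abs(b) ** 2)
    return mid + rad, mid - rad

print(eig2_hermitian(2, 1j, 0))  # -> 1 + sqrt(2) and 1 - sqrt(2)
```

Note that the off-diagonal entry enters only through its modulus |b|, which is why Hermitian eigenvalues stay real even for complex b.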
file cheevd.f cheevd.f plus dependencies
prec complex
CHEEVD computes all eigenvalues and, optionally, eigenvectors of a
complex Hermitian matrix A. If eigenvectors are desired, it uses a
divide and conquer algorithm.
The divide and conquer algorithm makes very mild assumptions about
floating point arithmetic. It will work on machines with a guard
digit in add/subtract, or on those binary machines without guard
digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
Cray-2. It could conceivably fail on hexadecimal or decimal machines
without guard digits, but we know of none.
file cheevr.f cheevr.f plus dependencies
prec complex
CHEEVR computes selected eigenvalues and, optionally, eigenvectors
of a complex Hermitian matrix A. Eigenvalues and eigenvectors can
be selected by specifying either a range of values or a range of
indices for the desired eigenvalues.
CHEEVR first reduces the matrix A to tridiagonal form T with a call
to CHETRD. Then, whenever possible, CHEEVR calls CSTEMR to compute
the eigenspectrum using Relatively Robust Representations. CSTEMR
computes eigenvalues by the dqds algorithm, while orthogonal
eigenvectors are computed from various "good" L D L^T representations
(also known as Relatively Robust Representations). Gram-Schmidt
orthogonalization is avoided as far as possible. More specifically,
the various steps of the algorithm are as follows.
For each unreduced block (submatrix) of T,
(a) Compute T - sigma I = L D L^T, so that L and D
define all the wanted eigenvalues to high relative accuracy.
This means that small relative changes in the entries of D and L
cause only small relative changes in the eigenvalues and
eigenvectors. The standard (unfactored) representation of the
tridiagonal matrix T does not have this property in general.
(b) Compute the eigenvalues to suitable accuracy.
If the eigenvectors are desired, the algorithm attains full
accuracy of the computed eigenvalues only right before
the corresponding vectors have to be computed, see steps c) and d).
(c) For each cluster of close eigenvalues, select a new
shift close to the cluster, find a new factorization, and refine
the shifted eigenvalues to suitable accuracy.
(d) For each eigenvalue with a large enough relative separation compute
the corresponding eigenvector by forming a rank revealing twisted
factorization. Go back to (c) for any clusters that remain.
The desired accuracy of the output can be specified by the input
parameter ABSTOL.
For more details, see DSTEMR's documentation and:
- Inderjit S. Dhillon and Beresford N. Parlett: "Multiple representations
to compute orthogonal eigenvectors of symmetric tridiagonal matrices,"
Linear Algebra and its Applications, 387(1), pp. 1-28, August 2004.
- Inderjit Dhillon and Beresford Parlett: "Orthogonal Eigenvectors and
Relative Gaps," SIAM Journal on Matrix Analysis and Applications, Vol. 25,
2004. Also LAPACK Working Note 154.
- Inderjit Dhillon: "A new O(n^2) algorithm for the symmetric
tridiagonal eigenvalue/eigenvector problem",
Computer Science Division Technical Report No. UCB/CSD-97-971,
UC Berkeley, May 1997.
Note 1 : CHEEVR calls CSTEMR when the full spectrum is requested
on machines which conform to the ieee-754 floating point standard.
CHEEVR calls SSTEBZ and CSTEIN on non-ieee machines and
when partial spectrum requests are made.
Normal execution of CSTEMR may create NaNs and infinities and
hence may abort due to a floating point exception in environments
which do not handle NaNs and infinities in the ieee standard default manner.
file chegv.f chegv.f plus dependencies
prec complex
CHEGV computes all the eigenvalues, and optionally, the eigenvectors
of a complex generalized Hermitian-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x.
Here A and B are assumed to be Hermitian and B is also
positive definite.
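As an illustration, SciPy's `eigh` wraps this family of generalized Hermitian-definite drivers for complex input. The 3x3 matrices below are made-up examples, and the claim that `eigh` dispatches to the *HEGV* family is based on SciPy's documented behavior; this is a sketch, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical Hermitian A and Hermitian positive definite B.
A = np.array([[2.0, 1j, 0.0],
              [-1j, 3.0, 1.0],
              [0.0, 1.0, 4.0]], dtype=complex)
B = np.array([[2.0, 0.5j, 0.0],
              [-0.5j, 2.0, 0.0],
              [0.0, 0.0, 1.0]], dtype=complex)

# eigh with two arguments solves the type-1 problem A*x = lambda*B*x,
# delegating to the LAPACK *HEGV* drivers for complex input.
w, V = eigh(A, B)

# Each eigenpair should satisfy A v = lambda B v.
residual = np.linalg.norm(A @ V - B @ V * w)
```

The eigenvalues `w` come back real and ascending, as the theory for Hermitian-definite pencils guarantees.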
file chegvd.f chegvd.f plus dependencies
prec complex
CHEGVD computes all the eigenvalues, and optionally, the eigenvectors
of a complex generalized Hermitian-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x. Here A and
B are assumed to be Hermitian and B is also positive definite.
If eigenvectors are desired, it uses a divide and conquer algorithm.
The divide and conquer algorithm makes very mild assumptions about
floating point arithmetic. It will work on machines with a guard
digit in add/subtract, or on those binary machines without guard
digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
Cray-2. It could conceivably fail on hexadecimal or decimal machines
without guard digits, but we know of none.
file chesv.f chesv.f plus dependencies
prec complex
CHESV computes the solution to a complex system of linear equations
A * X = B,
where A is an N-by-N Hermitian matrix and X and B are N-by-NRHS matrices.
The diagonal pivoting method is used to factor A as
A = U * D * U**H, if UPLO = 'U', or
A = L * D * L**H, if UPLO = 'L',
where U (or L) is a product of permutation and unit upper (lower)
triangular matrices, and D is Hermitian and block diagonal with
1-by-1 and 2-by-2 diagonal blocks. The factored form of A is then
used to solve the system of equations A * X = B.
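A hedged sketch of using the diagonal pivoting path from SciPy: `solve` with `assume_a='her'` selects the Bunch-Kaufman factorization (the *HESV route) rather than general LU. The matrix and right-hand side are invented for illustration:

```python
import numpy as np
from scipy.linalg import solve

# Hypothetical Hermitian (indefinite) matrix and right-hand side.
A = np.array([[1.0, 2 + 1j],
              [2 - 1j, -3.0]], dtype=complex)
b = np.array([1.0 + 0j, 2.0])

# assume_a='her' requests the diagonal pivoting (Bunch-Kaufman)
# factorization A = U*D*U**H instead of general LU.
x = solve(A, b, assume_a='her')

residual = np.linalg.norm(A @ x - b)
```

Note that A here is indefinite, which is exactly the case the diagonal pivoting method handles and plain Cholesky cannot.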
file chpev.f chpev.f plus dependencies
prec complex
CHPEV computes all the eigenvalues and, optionally, eigenvectors of a
complex Hermitian matrix in packed storage.
file chpevd.f chpevd.f plus dependencies
prec complex
CHPEVD computes all the eigenvalues and, optionally, eigenvectors of
a complex Hermitian matrix A in packed storage. If eigenvectors are
desired, it uses a divide and conquer algorithm.
The divide and conquer algorithm makes very mild assumptions about
floating point arithmetic. It will work on machines with a guard
digit in add/subtract, or on those binary machines without guard
digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
Cray-2. It could conceivably fail on hexadecimal or decimal machines
without guard digits, but we know of none.
file chpgv.f chpgv.f plus dependencies
prec complex
CHPGV computes all the eigenvalues and, optionally, the eigenvectors
of a complex generalized Hermitian-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x.
Here A and B are assumed to be Hermitian, stored in packed format,
and B is also positive definite.
file chpgvd.f chpgvd.f plus dependencies
prec complex
CHPGVD computes all the eigenvalues and, optionally, the eigenvectors
of a complex generalized Hermitian-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x. Here A and
B are assumed to be Hermitian, stored in packed format, and B is also
positive definite.
If eigenvectors are desired, it uses a divide and conquer algorithm.
The divide and conquer algorithm makes very mild assumptions about
floating point arithmetic. It will work on machines with a guard
digit in add/subtract, or on those binary machines without guard
digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
Cray-2. It could conceivably fail on hexadecimal or decimal machines
without guard digits, but we know of none.
file chpsv.f chpsv.f plus dependencies
prec complex
CHPSV computes the solution to a complex system of linear equations
A * X = B,
where A is an N-by-N Hermitian matrix stored in packed format and X
and B are N-by-NRHS matrices.
The diagonal pivoting method is used to factor A as
A = U * D * U**H, if UPLO = 'U', or
A = L * D * L**H, if UPLO = 'L',
where U (or L) is a product of permutation and unit upper (lower)
triangular matrices, D is Hermitian and block diagonal with 1-by-1
and 2-by-2 diagonal blocks. The factored form of A is then used to
solve the system of equations A * X = B.
file cpbsv.f cpbsv.f plus dependencies
prec complex
CPBSV computes the solution to a complex system of linear equations
A * X = B,
where A is an N-by-N Hermitian positive definite band matrix and X
and B are N-by-NRHS matrices.
The Cholesky decomposition is used to factor A as
A = U**H * U, if UPLO = 'U', or
A = L * L**H, if UPLO = 'L',
where U is an upper triangular band matrix, and L is a lower
triangular band matrix, with the same number of superdiagonals or
subdiagonals as A. The factored form of A is then used to solve the
system of equations A * X = B.
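The banded Cholesky solve is available in SciPy as `solveh_banded`, which takes the matrix in the LAPACK upper band storage layout (diagonals stored as rows). The tridiagonal Hermitian positive definite example below is made up; a sketch, assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import solveh_banded

# Hypothetical Hermitian positive definite tridiagonal matrix in upper
# band storage: row 0 = superdiagonal (padded on the left), row 1 = diagonal.
ab = np.array([[0.0, 1j, 1j],
               [4.0, 4.0, 4.0]], dtype=complex)
b = np.array([1.0, 2.0, 3.0], dtype=complex)

# solveh_banded performs the banded Cholesky factorization and solve
# (the *PBSV path for complex Hermitian input).
x = solveh_banded(ab, b)

# Rebuild the full matrix to verify the residual.
A = np.array([[4.0, 1j, 0.0],
              [-1j, 4.0, 1j],
              [0.0, -1j, 4.0]], dtype=complex)
residual = np.linalg.norm(A @ x - b)
```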
file cposv.f cposv.f plus dependencies
prec complex
CPOSV computes the solution to a complex system of linear equations
A * X = B,
where A is an N-by-N Hermitian positive definite matrix and X and B
are N-by-NRHS matrices.
The Cholesky decomposition is used to factor A as
A = U**H * U, if UPLO = 'U', or
A = L * L**H, if UPLO = 'L',
where U is an upper triangular matrix and L is a lower triangular
matrix. The factored form of A is then used to solve the system of
equations A * X = B.
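The factor-then-solve structure described above maps onto SciPy's `cho_factor`/`cho_solve` pair: the factor corresponds to the *POTRF step inside *POSV, and the solve can be reused for any number of right-hand sides. The 2x2 example is hypothetical:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Hypothetical Hermitian positive definite matrix.
A = np.array([[4.0, 1j],
              [-1j, 3.0]], dtype=complex)
b = np.array([1.0, 1j])

# cho_factor computes the Cholesky factor (A = U**H * U by default);
# cho_solve reuses it to solve A * x = b.
c, low = cho_factor(A)
x = cho_solve((c, low), b)

residual = np.linalg.norm(A @ x - b)
```

Reusing the factor is the point of the split interface: factoring costs O(n^3), each additional solve only O(n^2).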
file cppsv.f cppsv.f plus dependencies
prec complex
CPPSV computes the solution to a complex system of linear equations
A * X = B,
where A is an N-by-N Hermitian positive definite matrix stored in
packed format and X and B are N-by-NRHS matrices.
The Cholesky decomposition is used to factor A as
A = U**H * U, if UPLO = 'U', or
A = L * L**H, if UPLO = 'L',
where U is an upper triangular matrix and L is a lower triangular
matrix. The factored form of A is then used to solve the system of
equations A * X = B.
file cspsv.f cspsv.f plus dependencies
prec complex
CSPSV computes the solution to a complex system of linear equations
A * X = B,
where A is an N-by-N symmetric matrix stored in packed format and X
and B are N-by-NRHS matrices.
The diagonal pivoting method is used to factor A as
A = U * D * U**T, if UPLO = 'U', or
A = L * D * L**T, if UPLO = 'L',
where U (or L) is a product of permutation and unit upper (lower)
triangular matrices, D is symmetric and block diagonal with 1-by-1
and 2-by-2 diagonal blocks. The factored form of A is then used to
solve the system of equations A * X = B.
file cstemr.f cstemr.f plus dependencies
prec complex
CSTEMR computes selected eigenvalues and, optionally, eigenvectors
of a real symmetric tridiagonal matrix T. Any such unreduced matrix has
a well defined set of pairwise different real eigenvalues, the corresponding
real eigenvectors are pairwise orthogonal.
The spectrum may be computed either completely or partially by specifying
either an interval (VL,VU] or a range of indices IL:IU for the desired eigenvalues.
Depending on the number of desired eigenvalues, these are computed either
by bisection or the dqds algorithm. Numerically orthogonal eigenvectors are
computed by the use of various suitable L D L^T factorizations near clusters
of close eigenvalues (referred to as RRRs, Relatively Robust
Representations). An informal sketch of the algorithm follows.
For each unreduced block (submatrix) of T,
(a) Compute T - sigma I = L D L^T, so that L and D
define all the wanted eigenvalues to high relative accuracy.
This means that small relative changes in the entries of D and L
cause only small relative changes in the eigenvalues and
eigenvectors. The standard (unfactored) representation of the
tridiagonal matrix T does not have this property in general.
(b) Compute the eigenvalues to suitable accuracy.
If the eigenvectors are desired, the algorithm attains full
accuracy of the computed eigenvalues only right before
the corresponding vectors have to be computed, see steps c) and d).
(c) For each cluster of close eigenvalues, select a new
shift close to the cluster, find a new factorization, and refine
the shifted eigenvalues to suitable accuracy.
(d) For each eigenvalue with a large enough relative separation compute
the corresponding eigenvector by forming a rank revealing twisted
factorization. Go back to (c) for any clusters that remain.
For more details, see:
- Inderjit S. Dhillon and Beresford N. Parlett: "Multiple representations
to compute orthogonal eigenvectors of symmetric tridiagonal matrices,"
Linear Algebra and its Applications, 387(1), pp. 1-28, August 2004.
- Inderjit Dhillon and Beresford Parlett: "Orthogonal Eigenvectors and
Relative Gaps," SIAM Journal on Matrix Analysis and Applications, Vol. 25,
2004. Also LAPACK Working Note 154.
- Inderjit Dhillon: "A new O(n^2) algorithm for the symmetric
tridiagonal eigenvalue/eigenvector problem",
Computer Science Division Technical Report No. UCB/CSD-97-971,
UC Berkeley, May 1997.
Further Details
1. CSTEMR works only on machines which follow the IEEE-754
floating-point standard in their handling of infinities and NaNs.
This permits the use of efficient inner loops avoiding a check for
zero divisors.
2. LAPACK routines can be used to reduce a complex Hermitian matrix to
real symmetric tridiagonal form.
(Any complex Hermitian tridiagonal matrix has real values on its diagonal
and potentially complex numbers on its off-diagonals. By applying a
similarity transform with an appropriate diagonal matrix
diag(1, e^{i \phi_1}, ..., e^{i \phi_{n-1}}), the complex Hermitian
matrix can be transformed into a real symmetric matrix and complex
arithmetic can be entirely avoided.)
While the eigenvectors of the real symmetric tridiagonal matrix are real,
the eigenvectors of the original complex Hermitian matrix have complex
entries in general.
Since LAPACK drivers overwrite the matrix data with the eigenvectors,
CSTEMR accepts complex workspace to facilitate interoperability
with CUNMTR or CUPMTR.
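The IL:IU style of partial spectrum request described above is exposed in SciPy as `eigh_tridiagonal` with `select='i'`; depending on the driver, SciPy dispatches such requests to the LAPACK tridiagonal solvers (bisection/inverse iteration or MRRR). The 4x4 tridiagonal matrix is a made-up example:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Hypothetical real symmetric tridiagonal matrix: diagonal d, off-diagonal e.
d = np.array([2.0, 2.0, 2.0, 2.0])
e = np.array([-1.0, -1.0, -1.0])

# Compute only the eigenpairs with indices 0..1 (an IL:IU style request).
w, V = eigh_tridiagonal(d, e, select='i', select_range=(0, 1))

# Check the first eigenpair against the assembled full matrix.
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
residual = np.linalg.norm(T @ V[:, 0] - w[0] * V[:, 0])
```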
file csysv.f csysv.f plus dependencies
prec complex
CSYSV computes the solution to a complex system of linear equations
A * X = B,
where A is an N-by-N symmetric matrix and X and B are N-by-NRHS matrices.
The diagonal pivoting method is used to factor A as
A = U * D * U**T, if UPLO = 'U', or
A = L * D * L**T, if UPLO = 'L',
where U (or L) is a product of permutation and unit upper (lower)
triangular matrices, and D is symmetric and block diagonal with
1-by-1 and 2-by-2 diagonal blocks. The factored form of A is then
used to solve the system of equations A * X = B.
# ---------------------------------
# Available EXPERT DRIVER routines:
# ---------------------------------
file cgbsvx.f cgbsvx.f plus dependencies
prec complex
CGBSVX uses the LU factorization to compute the solution to a complex
system of linear equations A * X = B, A**T * X = B, or A**H * X = B,
where A is a band matrix of order N with KL subdiagonals and KU
superdiagonals, and X and B are N-by-NRHS matrices.
Error bounds on the solution and a condition estimate are also provided.
The following steps are performed by this subroutine:
1. If FACT = 'E', real scaling factors are computed to equilibrate
the system:
TRANS = 'N': diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
Whether or not the system will be equilibrated depends on the
scaling of the matrix A, but if equilibration is used, A is
overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N')
or diag(C)*B (if TRANS = 'T' or 'C').
2. If FACT = 'N' or 'E', the LU decomposition is used to factor the
matrix A (after equilibration if FACT = 'E') as
A = L * U,
where L is a product of permutation and unit lower triangular
matrices with KL subdiagonals, and U is upper triangular with
KL+KU superdiagonals.
3. If some U(i,i)=0, so that U is exactly singular, then the routine
returns with INFO = i. Otherwise, the factored form of A is used
to estimate the condition number of the matrix A. If the
reciprocal of the condition number is less than machine precision,
INFO = N+1 is returned as a warning, but the routine still goes on
to solve for X and compute error bounds as described below.
4. The system of equations is solved for X using the factored form
of A.
5. Iterative refinement is applied to improve the computed solution
matrix and calculate error bounds and backward error estimates
for it.
6. If equilibration was used, the matrix X is premultiplied by
diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so
that it solves the original system before equilibration.
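The core banded LU solve (steps 2 and 4 above) is available as SciPy's `solve_banded`; the simple driver stops there, while the expert driver additionally equilibrates and estimates the condition number. The band matrix below, with KL = 1 and KU = 1, is invented and stored in the LAPACK band layout (diagonals as rows):

```python
import numpy as np
from scipy.linalg import solve_banded

# Hypothetical 4x4 complex band matrix with 1 subdiagonal and 1 superdiagonal.
ab = np.array([[0.0, 1j, 1j, 1j],     # superdiagonal (padded on the left)
               [4.0, 4.0, 4.0, 4.0],  # main diagonal
               [2.0, 2.0, 2.0, 0.0]], # subdiagonal (padded on the right)
              dtype=complex)
b = np.array([1.0, 2.0, 3.0, 4.0], dtype=complex)

# solve_banded((KL, KU), ab, b) performs the banded LU factorization
# and solve (the *GBSV path).
x = solve_banded((1, 1), ab, b)

# Rebuild the full matrix to verify the residual.
A = np.diag([4.0] * 4) + np.diag([1j] * 3, 1) + np.diag([2.0] * 3, -1)
residual = np.linalg.norm(A @ x - b)
```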
file cgbsvxx.f cgbsvxx.f plus dependencies
prec complex
CGBSVXX uses the LU factorization to compute the solution to a
complex system of linear equations A * X = B, where A is an
N-by-N matrix and X and B are N-by-NRHS matrices.
If requested, both normwise and maximum componentwise error bounds
are returned. CGBSVXX will return a solution with a tiny
guaranteed error (O(eps) where eps is the working machine
precision) unless the matrix is very ill-conditioned, in which
case a warning is returned. Relevant condition numbers also are
calculated and returned.
CGBSVXX accepts user-provided factorizations and equilibration
factors; see the definitions of the FACT and EQUED options.
Solving with refinement and using a factorization from a previous
CGBSVXX call will also produce a solution with either O(eps)
errors or warnings, but we cannot make that claim for general
user-provided factorizations and equilibration factors if they
differ from what CGBSVXX would itself produce.
The following steps are performed:
1. If FACT = 'E', real scaling factors are computed to equilibrate
the system:
TRANS = 'N': diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
Whether or not the system will be equilibrated depends on the
scaling of the matrix A, but if equilibration is used, A is
overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N')
or diag(C)*B (if TRANS = 'T' or 'C').
2. If FACT = 'N' or 'E', the LU decomposition is used to factor
the matrix A (after equilibration if FACT = 'E') as
A = P * L * U,
where P is a permutation matrix, L is a unit lower triangular
matrix, and U is upper triangular.
3. If some U(i,i)=0, so that U is exactly singular, then the
routine returns with INFO = i. Otherwise, the factored form of A
is used to estimate the condition number of the matrix A (see
argument RCOND). If the reciprocal of the condition number is less
than machine precision, the routine still goes on to solve for X
and compute error bounds as described below.
4. The system of equations is solved for X using the factored form
of A.
5. By default (unless PARAMS(LA_LINRX_ITREF_I) is set to zero),
the routine will use iterative refinement to try to get a small
error and error bounds. Refinement calculates the residual to at
least twice the working precision.
6. If equilibration was used, the matrix X is premultiplied by
diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so
that it solves the original system before equilibration.
file cgeesx.f cgeesx.f plus dependencies
prec complex
CGEESX computes for an N-by-N complex nonsymmetric matrix A, the
eigenvalues, the Schur form T, and, optionally, the matrix of Schur
vectors Z. This gives the Schur factorization A = Z*T*(Z**H).
Optionally, it also orders the eigenvalues on the diagonal of the
Schur form so that selected eigenvalues are at the top left;
computes a reciprocal condition number for the average of the
selected eigenvalues (RCONDE); and computes a reciprocal condition
number for the right invariant subspace corresponding to the
selected eigenvalues (RCONDV). The leading columns of Z form an
orthonormal basis for this invariant subspace.
For further explanation of the reciprocal condition numbers RCONDE
and RCONDV, see Section 4.10 of the LAPACK Users' Guide (where
these quantities are called s and sep respectively).
A complex matrix is in Schur form if it is upper triangular.
file cgeevx.f cgeevx.f plus dependencies
prec complex
CGEEVX computes for an N-by-N complex nonsymmetric matrix A, the
eigenvalues and, optionally, the left and/or right eigenvectors.
Optionally also, it computes a balancing transformation to improve
the conditioning of the eigenvalues and eigenvectors (ILO, IHI,
SCALE, and ABNRM), reciprocal condition numbers for the eigenvalues
(RCONDE), and reciprocal condition numbers for the right
eigenvectors (RCONDV).
The right eigenvector v(j) of A satisfies
A * v(j) = lambda(j) * v(j)
where lambda(j) is its eigenvalue.
The left eigenvector u(j) of A satisfies
u(j)**H * A = lambda(j) * u(j)**H
where u(j)**H denotes the conjugate transpose of u(j).
The computed eigenvectors are normalized to have Euclidean norm
equal to 1 and largest component real.
Balancing a matrix means permuting the rows and columns to make it
more nearly upper triangular, and applying a diagonal similarity
transformation D * A * D**(-1), where D is a diagonal matrix, to
make its rows and columns closer in norm and the condition numbers
of its eigenvalues and eigenvectors smaller. The computed
reciprocal condition numbers correspond to the balanced matrix.
Permuting rows and columns will not change the condition numbers
(in exact arithmetic) but diagonal scaling will. For further
explanation of balancing, see section 4.10.2 of the LAPACK
Users' Guide.
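The left/right eigenvector relations quoted above can be checked through SciPy's `eig`, which wraps the *GEEV/*GEEVX drivers; in SciPy's convention the columns of the left-vector matrix satisfy u**H A = lambda u**H. The 2x2 matrix is a made-up example:

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical nonsymmetric complex matrix.
A = np.array([[1.0, 2.0 + 1j],
              [0.5j, 3.0]], dtype=complex)

# Request both left and right eigenvectors.
w, VL, VR = eig(A, left=True, right=True)

# Right eigenvectors satisfy A v = lambda v ...
res_right = np.linalg.norm(A @ VR - VR * w)
# ... and left eigenvectors satisfy u**H A = lambda u**H.
res_left = np.linalg.norm(VL.conj().T @ A - np.diag(w) @ VL.conj().T)
```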
file cgelsx.f cgelsx.f plus dependencies
prec complex
This routine is deprecated and has been replaced by routine CGELSY.
CGELSX computes the minimum-norm solution to a complex linear least
squares problem:
minimize || A * X - B ||
using a complete orthogonal factorization of A. A is an M-by-N
matrix which may be rank-deficient.
Several right hand side vectors b and solution vectors x can be
handled in a single call; they are stored as the columns of the
M-by-NRHS right hand side matrix B and the N-by-NRHS solution
matrix X.
The routine first computes a QR factorization with column pivoting:
A * P = Q * [ R11 R12 ]
[ 0 R22 ]
with R11 defined as the largest leading submatrix whose estimated
condition number is less than 1/RCOND. The order of R11, RANK,
is the effective rank of A.
Then, R22 is considered to be negligible, and R12 is annihilated
by unitary transformations from the right, arriving at the
complete orthogonal factorization:
A * P = Q * [ T11 0 ] * Z
[ 0 0 ]
The minimum-norm solution is then
X = P * Z' [ inv(T11)*Q1'*B ]
[ 0 ]
where Q1 consists of the first RANK columns of Q.
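The RCOND-controlled rank decision above survives in the replacement routine CGELSY, which SciPy exposes through `lstsq` with `lapack_driver='gelsy'`; the `cond` argument plays the role of RCOND. The rank-deficient system below is constructed by hand (column 2 equals column 0 plus column 1):

```python
import numpy as np
from scipy.linalg import lstsq

# Hypothetical rank-deficient 3x3 matrix: column 2 = column 0 + column 1.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
b = np.array([1.0, 1.0, 2.0])

# gelsy uses the complete orthogonal factorization with column pivoting;
# cond is the threshold for the effective rank (RCOND above).
x, residues, rank, sv = lstsq(A, b, cond=1e-10, lapack_driver='gelsy')

residual = np.linalg.norm(A @ x - b)
```

Because b lies in the column space of A, the residual is essentially zero even though A is singular, and the reported effective rank is 2.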
file cgesvx.f cgesvx.f plus dependencies
prec complex
CGESVX uses the LU factorization to compute the solution to a complex
system of linear equations
A * X = B,
where A is an N-by-N matrix and X and B are N-by-NRHS matrices.
Error bounds on the solution and a condition estimate are also provided.
The following steps are performed:
1. If FACT = 'E', real scaling factors are computed to equilibrate
the system:
TRANS = 'N': diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
Whether or not the system will be equilibrated depends on the
scaling of the matrix A, but if equilibration is used, A is
overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N')
or diag(C)*B (if TRANS = 'T' or 'C').
2. If FACT = 'N' or 'E', the LU decomposition is used to factor the
matrix A (after equilibration if FACT = 'E') as
A = P * L * U,
where P is a permutation matrix, L is a unit lower triangular
matrix, and U is upper triangular.
3. If some U(i,i)=0, so that U is exactly singular, then the routine
returns with INFO = i. Otherwise, the factored form of A is used
to estimate the condition number of the matrix A. If the
reciprocal of the condition number is less than machine precision,
INFO = N+1 is returned as a warning, but the routine still goes on
to solve for X and compute error bounds as described below.
4. The system of equations is solved for X using the factored form
of A.
5. Iterative refinement is applied to improve the computed solution
matrix and calculate error bounds and backward error estimates
for it.
6. If equilibration was used, the matrix X is premultiplied by
diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so
that it solves the original system before equilibration.
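Steps 2-4 above can be sketched with SciPy's `lu_factor`/`lu_solve`. One deliberate simplification: *GESVX estimates the reciprocal condition number cheaply from the LU factors, whereas the sketch computes it exactly with `numpy.linalg.cond` for illustration. The matrix and right-hand side are hypothetical:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical well-conditioned complex matrix and right-hand side.
A = np.array([[3.0, 1j],
              [-2j, 4.0]], dtype=complex)
b = np.array([1.0, 2.0], dtype=complex)

# Step 2: factor A = P * L * U (the *GETRF step inside *GESVX).
lu, piv = lu_factor(A)

# Step 3 (sketch): flag the system if 1/cond(A) drops below machine
# precision, as the driver does via INFO = N+1.
rcond = 1.0 / np.linalg.cond(A)
ill_conditioned = rcond < np.finfo(np.float64).eps

# Step 4: solve using the factored form.
x = lu_solve((lu, piv), b)
residual = np.linalg.norm(A @ x - b)
```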
file cgesvxx.f cgesvxx.f plus dependencies
prec complex
CGESVXX uses the LU factorization to compute the solution to a
complex system of linear equations A * X = B, where A is an
N-by-N matrix and X and B are N-by-NRHS matrices.
If requested, both normwise and maximum componentwise error bounds
are returned. CGESVXX will return a solution with a tiny
guaranteed error (O(eps) where eps is the working machine
precision) unless the matrix is very ill-conditioned, in which
case a warning is returned. Relevant condition numbers also are
calculated and returned.
CGESVXX accepts user-provided factorizations and equilibration
factors; see the definitions of the FACT and EQUED options.
Solving with refinement and using a factorization from a previous
CGESVXX call will also produce a solution with either O(eps)
errors or warnings, but we cannot make that claim for general
user-provided factorizations and equilibration factors if they
differ from what CGESVXX would itself produce.
The following steps are performed:
1. If FACT = 'E', real scaling factors are computed to equilibrate
the system:
TRANS = 'N': diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
Whether or not the system will be equilibrated depends on the
scaling of the matrix A, but if equilibration is used, A is
overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N')
or diag(C)*B (if TRANS = 'T' or 'C').
2. If FACT = 'N' or 'E', the LU decomposition is used to factor
the matrix A (after equilibration if FACT = 'E') as
A = P * L * U,
where P is a permutation matrix, L is a unit lower triangular
matrix, and U is upper triangular.
3. If some U(i,i)=0, so that U is exactly singular, then the
routine returns with INFO = i. Otherwise, the factored form of A
is used to estimate the condition number of the matrix A (see
argument RCOND). If the reciprocal of the condition number is less
than machine precision, the routine still goes on to solve for X
and compute error bounds as described below.
4. The system of equations is solved for X using the factored form
of A.
5. By default (unless PARAMS(LA_LINRX_ITREF_I) is set to zero),
the routine will use iterative refinement to try to get a small
error and error bounds. Refinement calculates the residual to at
least twice the working precision.
6. If equilibration was used, the matrix X is premultiplied by
diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so
that it solves the original system before equilibration.
file cggesx.f cggesx.f plus dependencies
prec complex
CGGESX computes for a pair of N-by-N complex nonsymmetric matrices
(A,B), the generalized eigenvalues, the complex Schur form (S,T),
and, optionally, the left and/or right matrices of Schur vectors (VSL
and VSR). This gives the generalized Schur factorization
(A,B) = ( (VSL) S (VSR)**H, (VSL) T (VSR)**H )
where (VSR)**H is the conjugate-transpose of VSR.
Optionally, it also orders the eigenvalues so that a selected cluster
of eigenvalues appears in the leading diagonal blocks of the upper
triangular matrix S and the upper triangular matrix T; computes
a reciprocal condition number for the average of the selected
eigenvalues (RCONDE); and computes a reciprocal condition number for
the right and left deflating subspaces corresponding to the selected
eigenvalues (RCONDV). The leading columns of VSL and VSR then form
an orthonormal basis for the corresponding left and right eigenspaces
(deflating subspaces).
A generalized eigenvalue for a pair of matrices (A,B) is a scalar w
or a ratio alpha/beta = w, such that A - w*B is singular. It is
usually represented as the pair (alpha,beta), as there is a
reasonable interpretation for beta=0 or for both being zero.
A pair of matrices (S,T) is in generalized complex Schur form if T is
upper triangular with non-negative diagonal and S is upper triangular.
file cggevx.f cggevx.f plus dependencies
prec complex
CGGEVX computes for a pair of N-by-N complex nonsymmetric matrices
(A,B) the generalized eigenvalues, and optionally, the left and/or
right generalized eigenvectors.
Optionally, it also computes a balancing transformation to improve
the conditioning of the eigenvalues and eigenvectors (ILO, IHI,
LSCALE, RSCALE, ABNRM, and BBNRM), reciprocal condition numbers for
the eigenvalues (RCONDE), and reciprocal condition numbers for the
right eigenvectors (RCONDV).
A generalized eigenvalue for a pair of matrices (A,B) is a scalar
lambda or a ratio alpha/beta = lambda, such that A - lambda*B is
singular. It is usually represented as the pair (alpha,beta), as
there is a reasonable interpretation for beta=0, and even for both
being zero.
The right eigenvector v(j) corresponding to the eigenvalue lambda(j)
of (A,B) satisfies
A * v(j) = lambda(j) * B * v(j) .
The left eigenvector u(j) corresponding to the eigenvalue lambda(j)
of (A,B) satisfies
u(j)**H * A = lambda(j) * u(j)**H * B.
where u(j)**H is the conjugate-transpose of u(j).
file chbevx.f chbevx.f plus dependencies
prec complex
CHBEVX computes selected eigenvalues and, optionally, eigenvectors
of a complex Hermitian band matrix A. Eigenvalues and eigenvectors
can be selected by specifying either a range of values or a range of
indices for the desired eigenvalues.
file chbgvx.f chbgvx.f plus dependencies
prec complex
CHBGVX computes all the eigenvalues, and optionally, the eigenvectors
of a complex generalized Hermitian-definite banded eigenproblem, of
the form A*x=(lambda)*B*x. Here A and B are assumed to be Hermitian
and banded, and B is also positive definite. Eigenvalues and
eigenvectors can be selected by specifying either all eigenvalues,
a range of values or a range of indices for the desired eigenvalues.
file cheevx.f cheevx.f plus dependencies
prec complex
CHEEVX computes selected eigenvalues and, optionally, eigenvectors
of a complex Hermitian matrix A. Eigenvalues and eigenvectors can
be selected by specifying either a range of values or a range of
indices for the desired eigenvalues.
file chegvx.f chegvx.f plus dependencies
prec complex
CHEGVX computes selected eigenvalues, and optionally, eigenvectors
of a complex generalized Hermitian-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x. Here A and
B are assumed to be Hermitian and B is also positive definite.
Eigenvalues and eigenvectors can be selected by specifying either a
range of values or a range of indices for the desired eigenvalues.
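The index-range selection (RANGE='I' in the LAPACK interface) is mirrored by SciPy's `eigh` via `subset_by_index`, which for generalized problems with a subset uses the *HEGVX-style expert driver. The matrices below are made-up examples:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical Hermitian A and Hermitian positive definite (diagonal) B.
A = np.array([[2.0, 1j, 0.0],
              [-1j, 3.0, 0.5],
              [0.0, 0.5, 5.0]], dtype=complex)
B = np.diag([2.0, 1.0, 1.0]).astype(complex)

# subset_by_index=(IL, IU) computes only the two smallest eigenpairs.
w, V = eigh(A, B, subset_by_index=(0, 1))

residual = np.linalg.norm(A @ V - B @ V * w)
```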
file chesvx.f chesvx.f plus dependencies
prec complex
CHESVX uses the diagonal pivoting factorization to compute the
solution to a complex system of linear equations A * X = B,
where A is an N-by-N Hermitian matrix and X and B are N-by-NRHS matrices.
Error bounds on the solution and a condition estimate are also provided.
The following steps are performed:
1. If FACT = 'N', the diagonal pivoting method is used to factor A.
The form of the factorization is
A = U * D * U**H, if UPLO = 'U', or
A = L * D * L**H, if UPLO = 'L',
where U (or L) is a product of permutation and unit upper (lower)
triangular matrices, and D is Hermitian and block diagonal with
1-by-1 and 2-by-2 diagonal blocks.
2. If some D(i,i)=0, so that D is exactly singular, then the routine
returns with INFO = i. Otherwise, the factored form of A is used
to estimate the condition number of the matrix A. If the
reciprocal of the condition number is less than machine precision,
INFO = N+1 is returned as a warning, but the routine still goes on
to solve for X and compute error bounds as described below.
3. The system of equations is solved for X using the factored form
of A.
4. Iterative refinement is applied to improve the computed solution
matrix and calculate error bounds and backward error estimates
for it.
file chesvxx.f chesvxx.f plus dependencies
prec complex
CHESVXX uses the diagonal pivoting factorization to compute the
solution to a complex system of linear equations A * X = B, where
A is an N-by-N Hermitian matrix and X and B are N-by-NRHS matrices.
If requested, both normwise and maximum componentwise error bounds
are returned. CHESVXX will return a solution with a tiny
guaranteed error (O(eps) where eps is the working machine
precision) unless the matrix is very ill-conditioned, in which
case a warning is returned. Relevant condition numbers also are
calculated and returned.
CHESVXX accepts user-provided factorizations and equilibration
factors; see the definitions of the FACT and EQUED options.
Solving with refinement and using a factorization from a previous
CHESVXX call will also produce a solution with either O(eps)
errors or warnings, but we cannot make that claim for general
user-provided factorizations and equilibration factors if they
differ from what CHESVXX would itself produce.
The following steps are performed:
1. If FACT = 'E', real scaling factors are computed to equilibrate
the system:
diag(S)*A*diag(S) *inv(diag(S))*X = diag(S)*B
Whether or not the system will be equilibrated depends on the
scaling of the matrix A, but if equilibration is used, A is
overwritten by diag(S)*A*diag(S) and B by diag(S)*B.
2. If FACT = 'N' or 'E', the LU decomposition is used to factor
the matrix A (after equilibration if FACT = 'E') as
A = U * D * U**H, if UPLO = 'U', or
A = L * D * L**H, if UPLO = 'L',
where U (or L) is a product of permutation and unit upper (lower)
triangular matrices, and D is Hermitian and block diagonal with
1-by-1 and 2-by-2 diagonal blocks.
3. If some D(i,i)=0, so that D is exactly singular, then the
routine returns with INFO = i. Otherwise, the factored form of A
is used to estimate the condition number of the matrix A (see
argument RCOND). If the reciprocal of the condition number is
less than machine precision, the routine still goes on to solve
for X and compute error bounds as described below.
4. The system of equations is solved for X using the factored form
of A.
5. By default (unless PARAMS(LA_LINRX_ITREF_I) is set to zero),
the routine will use iterative refinement to try to get a small
error and error bounds. Refinement calculates the residual to at
least twice the working precision.
6. If equilibration was used, the matrix X is premultiplied by
diag(S) so that it solves the original system before equilibration.
file chpevx.f chpevx.f plus dependencies
prec complex
CHPEVX computes selected eigenvalues and, optionally, eigenvectors
of a complex Hermitian matrix A in packed storage.
Eigenvalues/vectors can be selected by specifying either a range of
values or a range of indices for the desired eigenvalues.
file chpgvx.f chpgvx.f plus dependencies
prec complex
CHPGVX computes selected eigenvalues and, optionally, eigenvectors
of a complex generalized Hermitian-definite eigenproblem, of the form
A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x. Here A and
B are assumed to be Hermitian, stored in packed format, and B is also
positive definite. Eigenvalues and eigenvectors can be selected by
specifying either a range of values or a range of indices for the
desired eigenvalues.
file chpsvx.f chpsvx.f plus dependencies
prec complex
CHPSVX uses the diagonal pivoting factorization A = U*D*U**H or
A = L*D*L**H to compute the solution to a complex system of linear
equations A * X = B, where A is an N-by-N Hermitian matrix stored
in packed format and X and B are N-by-NRHS matrices.
Error bounds on the solution and a condition estimate are also provided.
The following steps are performed:
1. If FACT = 'N', the diagonal pivoting method is used to factor A as
A = U * D * U**H, if UPLO = 'U', or
A = L * D * L**H, if UPLO = 'L',
where U (or L) is a product of permutation and unit upper (lower)
triangular matrices and D is Hermitian and block diagonal with
1-by-1 and 2-by-2 diagonal blocks.
2. If some D(i,i)=0, so that D is exactly singular, then the routine
returns with INFO = i. Otherwise, the factored form of A is used
to estimate the condition number of the matrix A. If the
reciprocal of the condition number is less than machine precision,
INFO = N+1 is returned as a warning, but the routine still goes on
to solve for X and compute error bounds as described below.
3. The system of equations is solved for X using the factored form
of A.
4. Iterative refinement is applied to improve the computed solution
matrix and calculate error bounds and backward error estimates
for it.
file cpbsvx.f cpbsvx.f plus dependencies
prec complex
CPBSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to
compute the solution to a complex system of linear equations
A * X = B,
where A is an N-by-N Hermitian positive definite band matrix and X
and B are N-by-NRHS matrices.
Error bounds on the solution and a condition estimate are also provided.
The following steps are performed:
1. If FACT = 'E', real scaling factors are computed to equilibrate
the system:
diag(S) * A * diag(S) * inv(diag(S)) * X = diag(S) * B
Whether or not the system will be equilibrated depends on the
scaling of the matrix A, but if equilibration is used, A is
overwritten by diag(S)*A*diag(S) and B by diag(S)*B.
2. If FACT = 'N' or 'E', the Cholesky decomposition is used to
factor the matrix A (after equilibration if FACT = 'E') as
A = U**H * U, if UPLO = 'U', or
A = L * L**H, if UPLO = 'L',
where U is an upper triangular band matrix, and L is a lower
triangular band matrix.
3. If the leading i-by-i principal minor is not positive definite,
then the routine returns with INFO = i. Otherwise, the factored
form of A is used to estimate the condition number of the matrix
A. If the reciprocal of the condition number is less than machine
precision, INFO = N+1 is returned as a warning, but the routine
still goes on to solve for X and compute error bounds as
described below.
4. The system of equations is solved for X using the factored form
of A.
5. Iterative refinement is applied to improve the computed solution
matrix and calculate error bounds and backward error estimates
for it.
6. If equilibration was used, the matrix X is premultiplied by
diag(S) so that it solves the original system before equilibration.
file cposvx.f cposvx.f plus dependencies
prec complex
CPOSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to
compute the solution to a complex system of linear equations
A * X = B,
where A is an N-by-N Hermitian positive definite matrix and X and B
are N-by-NRHS matrices.
Error bounds on the solution and a condition estimate are also provided.
The following steps are performed:
1. If FACT = 'E', real scaling factors are computed to equilibrate
the system:
diag(S) * A * diag(S) * inv(diag(S)) * X = diag(S) * B
Whether or not the system will be equilibrated depends on the
scaling of the matrix A, but if equilibration is used, A is
overwritten by diag(S)*A*diag(S) and B by diag(S)*B.
2. If FACT = 'N' or 'E', the Cholesky decomposition is used to
factor the matrix A (after equilibration if FACT = 'E') as
A = U**H* U, if UPLO = 'U', or
A = L * L**H, if UPLO = 'L',
where U is an upper triangular matrix and L is a lower triangular matrix.
3. If the leading i-by-i principal minor is not positive definite,
then the routine returns with INFO = i. Otherwise, the factored
form of A is used to estimate the condition number of the matrix
A. If the reciprocal of the condition number is less than machine
precision, INFO = N+1 is returned as a warning, but the routine
still goes on to solve for X and compute error bounds as
described below.
4. The system of equations is solved for X using the factored form
of A.
5. Iterative refinement is applied to improve the computed solution
matrix and calculate error bounds and backward error estimates
for it.
6. If equilibration was used, the matrix X is premultiplied by
diag(S) so that it solves the original system before equilibration.
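The six steps map almost line-for-line onto code. The following NumPy sketch illustrates the sequence for a real symmetric positive definite system (the name `posvx_sketch` is invented here, the condition estimate of step 3 is skipped, and the actual LAPACK routine is considerably more careful than this):

```python
import numpy as np

def posvx_sketch(A, b):
    """Minimal sketch of the xPOSVX steps for a real SPD matrix:
    equilibrate, Cholesky-factor, solve, refine once, unscale."""
    # 1. Equilibration: diag(s)*A*diag(s) gets a unit diagonal.
    s = 1.0 / np.sqrt(np.diag(A))
    A_eq = A * np.outer(s, s)
    b_eq = s * b
    # 2. Cholesky factorization A_eq = L * L^T.
    L = np.linalg.cholesky(A_eq)
    # 3. (Condition estimation / RCOND check skipped in this sketch.)
    # 4. Solve using the factored form; LAPACK does two triangular
    #    solves, a generic solve is enough for the sketch.
    y = np.linalg.solve(L.T, np.linalg.solve(L, b_eq))
    # 5. One pass of iterative refinement on the residual.
    r = b_eq - A_eq @ y
    y += np.linalg.solve(L.T, np.linalg.solve(L, r))
    # 6. Undo the equilibration: x = diag(s) * y solves the original system.
    return s * y

A = np.array([[4.0, 1.0],
              [1.0, 9.0]])
b = np.array([1.0, 2.0])
x = posvx_sketch(A, b)
```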
file cposvxx.f cposvxx.f plus dependencies
prec complex
CPOSVXX uses the Cholesky factorization A = U**T*U or A = L*L**T
to compute the solution to a complex system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite matrix
and X and B are N-by-NRHS matrices.
If requested, both normwise and maximum componentwise error bounds
are returned. CPOSVXX will return a solution with a tiny
guaranteed error (O(eps) where eps is the working machine
precision) unless the matrix is very ill-conditioned, in which
case a warning is returned. Relevant condition numbers also are
calculated and returned.
CPOSVXX accepts user-provided factorizations and equilibration
factors; see the definitions of the FACT and EQUED options.
Solving with refinement and using a factorization from a previous
CPOSVXX call will also produce a solution with either O(eps)
errors or warnings, but we cannot make that claim for general
user-provided factorizations and equilibration factors if they
differ from what CPOSVXX would itself produce.
The following steps are performed:
1. If FACT = 'E', real scaling factors are computed to equilibrate
the system:
diag(S)*A*diag(S) *inv(diag(S))*X = diag(S)*B
Whether or not the system will be equilibrated depends on the
scaling of the matrix A, but if equilibration is used, A is
overwritten by diag(S)*A*diag(S) and B by diag(S)*B.
2. If FACT = 'N' or 'E', the Cholesky decomposition is used to
factor the matrix A (after equilibration if FACT = 'E') as
A = U**T* U, if UPLO = 'U', or
A = L * L**T, if UPLO = 'L',
where U is an upper triangular matrix and L is a lower triangular matrix.
3. If the leading i-by-i principal minor is not positive definite,
then the routine returns with INFO = i. Otherwise, the factored
form of A is used to estimate the condition number of the matrix
A (see argument RCOND). If the reciprocal of the condition number
is less than machine precision, the routine still goes on to solve
for X and compute error bounds as described below.
4. The system of equations is solved for X using the factored form
of A.
5. By default (unless PARAMS(LA_LINRX_ITREF_I) is set to zero),
the routine will use iterative refinement to try to get a small
error and error bounds. Refinement calculates the residual to at
least twice the working precision.
6. If equilibration was used, the matrix X is premultiplied by
diag(S) so that it solves the original system before equilibration.
file cppsvx.f cppsvx.f plus dependencies
prec complex
CPPSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to
compute the solution to a complex system of linear equations
A * X = B,
where A is an N-by-N Hermitian positive definite matrix stored in
packed format and X and B are N-by-NRHS matrices.
Error bounds on the solution and a condition estimate are also provided.
The following steps are performed:
1. If FACT = 'E', real scaling factors are computed to equilibrate
the system:
diag(S) * A * diag(S) * inv(diag(S)) * X = diag(S) * B
Whether or not the system will be equilibrated depends on the
scaling of the matrix A, but if equilibration is used, A is
overwritten by diag(S)*A*diag(S) and B by diag(S)*B.
2. If FACT = 'N' or 'E', the Cholesky decomposition is used to
factor the matrix A (after equilibration if FACT = 'E') as
A = U'* U , if UPLO = 'U', or
A = L * L', if UPLO = 'L',
where U is an upper triangular matrix, L is a lower triangular
matrix, and ' indicates conjugate transpose.
3. If the leading i-by-i principal minor is not positive definite,
then the routine returns with INFO = i. Otherwise, the factored
form of A is used to estimate the condition number of the matrix
A. If the reciprocal of the condition number is less than machine
precision, INFO = N+1 is returned as a warning, but the routine
still goes on to solve for X and compute error bounds as
described below.
4. The system of equations is solved for X using the factored form
of A.
5. Iterative refinement is applied to improve the computed solution
matrix and calculate error bounds and backward error estimates
for it.
6. If equilibration was used, the matrix X is premultiplied by
diag(S) so that it solves the original system before equilibration.
file cspsvx.f cspsvx.f plus dependencies
prec complex
CSPSVX uses the diagonal pivoting factorization A = U*D*U**T or
A = L*D*L**T to compute the solution to a complex system of linear
equations A * X = B, where A is an N-by-N symmetric matrix stored
in packed format and X and B are N-by-NRHS matrices.
Error bounds on the solution and a condition estimate are also provided.
The following steps are performed:
1. If FACT = 'N', the diagonal pivoting method is used to factor A as
A = U * D * U**T, if UPLO = 'U', or
A = L * D * L**T, if UPLO = 'L',
where U (or L) is a product of permutation and unit upper (lower)
triangular matrices and D is symmetric and block diagonal with
1-by-1 and 2-by-2 diagonal blocks.
2. If some D(i,i)=0, so that D is exactly singular, then the routine
returns with INFO = i. Otherwise, the factored form of A is used
to estimate the condition number of the matrix A. If the
reciprocal of the condition number is less than machine precision,
INFO = N+1 is returned as a warning, but the routine still goes on
to solve for X and compute error bounds as described below.
3. The system of equations is solved for X using the factored form
of A.
4. Iterative refinement is applied to improve the computed solution
matrix and calculate error bounds and backward error estimates
for it.
file csysvx.f csysvx.f plus dependencies
prec complex
CSYSVX uses the diagonal pivoting factorization to compute the
solution to a complex system of linear equations A * X = B,
where A is an N-by-N symmetric matrix and X and B are N-by-NRHS matrices.
Error bounds on the solution and a condition estimate are also provided.
The following steps are performed:
1. If FACT = 'N', the diagonal pivoting method is used to factor A.
The form of the factorization is
A = U * D * U**T, if UPLO = 'U', or
A = L * D * L**T, if UPLO = 'L',
where U (or L) is a product of permutation and unit upper (lower)
triangular matrices, and D is symmetric and block diagonal with
1-by-1 and 2-by-2 diagonal blocks.
2. If some D(i,i)=0, so that D is exactly singular, then the routine
returns with INFO = i. Otherwise, the factored form of A is used
to estimate the condition number of the matrix A. If the
reciprocal of the condition number is less than machine precision,
INFO = N+1 is returned as a warning, but the routine still goes on
to solve for X and compute error bounds as described below.
3. The system of equations is solved for X using the factored form
of A.
4. Iterative refinement is applied to improve the computed solution
matrix and calculate error bounds and backward error estimates
for it.
file csysvxx.f csysvxx.f plus dependencies
prec complex
CSYSVXX uses the diagonal pivoting factorization to compute the
solution to a complex system of linear equations A * X = B, where
A is an N-by-N symmetric matrix and X and B are N-by-NRHS matrices.
If requested, both normwise and maximum componentwise error bounds
are returned. CSYSVXX will return a solution with a tiny
guaranteed error (O(eps) where eps is the working machine
precision) unless the matrix is very ill-conditioned, in which
case a warning is returned. Relevant condition numbers also are
calculated and returned.
CSYSVXX accepts user-provided factorizations and equilibration
factors; see the definitions of the FACT and EQUED options.
Solving with refinement and using a factorization from a previous
CSYSVXX call will also produce a solution with either O(eps)
errors or warnings, but we cannot make that claim for general
user-provided factorizations and equilibration factors if they
differ from what CSYSVXX would itself produce.
The following steps are performed:
1. If FACT = 'E', real scaling factors are computed to equilibrate
the system:
diag(S)*A*diag(S) *inv(diag(S))*X = diag(S)*B
Whether or not the system will be equilibrated depends on the
scaling of the matrix A, but if equilibration is used, A is
overwritten by diag(S)*A*diag(S) and B by diag(S)*B.
2. If FACT = 'N' or 'E', the LU decomposition is used to factor
the matrix A (after equilibration if FACT = 'E') as
A = U * D * U**T, if UPLO = 'U', or
A = L * D * L**T, if UPLO = 'L',
where U (or L) is a product of permutation and unit upper (lower)
triangular matrices, and D is symmetric and block diagonal with
1-by-1 and 2-by-2 diagonal blocks.
3. If some D(i,i)=0, so that D is exactly singular, then the
routine returns with INFO = i. Otherwise, the factored form of A
is used to estimate the condition number of the matrix A (see
argument RCOND). If the reciprocal of the condition number is
less than machine precision, the routine still goes on to solve
for X and compute error bounds as described below.
4. The system of equations is solved for X using the factored form
of A.
5. By default (unless PARAMS(LA_LINRX_ITREF_I) is set to zero),
the routine will use iterative refinement to try to get a small
error and error bounds. Refinement calculates the residual to at
least twice the working precision.
6. If equilibration was used, the matrix X is premultiplied by
diag(R) so that it solves the original system before equilibration.
Series/logarithm questions
October 14th 2012, 07:18 PM #1
Sep 2012
behind you
Series/logarithm questions
Hi, I was wondering how to solve these 2 questions:
c) use the results from parts a) and b) [e^x ≥ x+1 and (1+1)(1+1/2)...(1+1/n) = n+1] to prove that e^(1+1/2+1/3+...+1/n) > n
d) Find a value of n for which 1+1/2+1/3+...+1/n>100
You don't have to provide the entire answer, but some hints or part of the work would be nice.
Thanks in advance!
Re: Series/logarithm questions
Having established that:
$(1+1)\left(1+\frac{1}{2} \right)\left(1+\frac{1}{3} \right)\cdots\left(1+\frac{1}{n} \right)=n+1$
$e^x\ge x+1$
we may write:
$e^{1}\ge 1+1$
$e^{\frac{1}{2}}\ge 1+\frac{1}{2}$
$e^{\frac{1}{3}}\ge 1+\frac{1}{3}$
$e^{\frac{1}{n}}\ge 1+\frac{1}{n}$
Since all values are positive, we may multiply down. What do you get?
d) Having established that:
$e^{1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n}}> n$
this implies:
$1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n}>\ln(n )$
Now, let:
$\ln(n)=100.$
What does this say about $n$?
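As a quick numerical sanity check (not a proof) of the two facts used above, the product really does telescope to $n+1$, and the harmonic sum really does exceed $\ln(n)$:

```python
import math

n = 50
# Telescoping product: (1+1)(1+1/2)...(1+1/n) = n+1
prod = 1.0
for k in range(1, n + 1):
    prod *= 1 + 1 / k
# Harmonic sum H_n = 1 + 1/2 + ... + 1/n, which the argument shows
# exceeds ln(n), so exp(H_n) > n.
H = sum(1 / k for k in range(1, n + 1))
```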
Re: Series/logarithm questions
Oh...thanks so much. I can't believe I didn't think that way for c)
October 14th 2012, 08:18 PM #2
October 14th 2012, 08:25 PM #3
Linear Interpolation FP1 Formula
Re: Linear Interpolation FP1 Formula
You have not finished it?
Re: Linear Interpolation FP1 Formula
Nope. It is very tough and weird.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
Challenge accepted.
Re: Linear Interpolation FP1 Formula
Good luck, if you do you can explain it to me.
Re: Linear Interpolation FP1 Formula
I got only to the first fifth of the third chapter, plus the really useful bit on snake oil method.
Last edited by anonimnystefy (2013-06-10 08:56:12)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Linear Interpolation FP1 Formula
Snake oil method? Well that's a phrase I never thought I'd hear in a mathematical context...!
Re: Linear Interpolation FP1 Formula
My brother is a big fan of his snake oil method. There is useful stuff on asymptotics or maybe I am thinking of A=B.
Re: Linear Interpolation FP1 Formula
Hi zf.
That part is really easy to understand, and is very useful.
Hi bobbym
Yes, there are asymptotics in both the gfology and A=B. And both talk about the WZ method. It seems cool, but I haven't tried it yet...
Last edited by anonimnystefy (2013-06-10 09:02:38)
Re: Linear Interpolation FP1 Formula
Well that's a phrase I never thought I'd hear in a mathematical context...!
The way an experimental mathematician like Herbie worked is well described by that.
And both talk about the WZ method. It seems cool, but I haven't tried it yet...
That is the meat of their idea.
Re: Linear Interpolation FP1 Formula
Hi bobbym
For A=B I'm sure it is. Not as much for gfology.
Re: Linear Interpolation FP1 Formula
The definitively good book on gf's has not been written yet. After the Tucker book you have to scrounge around for additional ideas.
Re: Linear Interpolation FP1 Formula
Phew, 2 exams down, 7 to go...
There are huge complaints about the C3 exam (a pure maths exam over here) because it was leaked, but I don't think the amount of complaining is justifiable...
Re: Linear Interpolation FP1 Formula
How did you do?
Because it was leaked?
Re: Linear Interpolation FP1 Formula
I thought I did well, but you never know...
Well, people are making false claims that the paper was unreasonably hard, or contained material not on the syllabus, and over 10,000 people have complained. But Edexcel issued a replacement paper
(for emergencies such as these) and had the students sit that one instead. But I feel that there was nothing wrong with the replacement, and people are simply using the fact that the original paper
was leaked to as an excuse for their poor performance (and so, they're claiming it was impossible or whatever). Google "Missing exam paper sparks re-sit row" and click the article from BBC News.
Having done the paper (I found a copy online), it just seems like a standard paper with some parts slightly trickier than usual, but I just can't see how the whining is justifiable.
Still, I think it's really stupid that they leak the papers 2 years in a row.
Re: Linear Interpolation FP1 Formula
As long as it did not bother you... Let them whine and whine some more.
Re: Linear Interpolation FP1 Formula
The whining is irritating though, they are acting like it is the end of the world... with all due respect to them, that is just what happens if you only prepare for an easy paper.
Re: Linear Interpolation FP1 Formula
I agree, listening to the crying makes my back hurt even more. There was a guy on another forum crying over the beatings he got when he was 8. He is 50 now and still sobbing. Turns out he got smacked
on the knuckles with a thin ruler and he calls that a beating?
Re: Linear Interpolation FP1 Formula
Isn't that a bit different? A thin metal ruler could probably cut...
Re: Linear Interpolation FP1 Formula
Yes, I am sure he is cut and probably he can not play the piano and wants to sue for 450 million dollars.
I know it is not the same as what you were saying.
Re: Linear Interpolation FP1 Formula
I hate to admit it but the education reformists must be loving this...
Re: Linear Interpolation FP1 Formula
What do they want to reform? The system? The test? The students? The teachers? The cuisine?
Re: Linear Interpolation FP1 Formula
The reforming has already started. Currently, 47% of students here achieve an A in A-level Maths, and there is evidence to suggest that the exams have been getting easier (grade inflation). So
Michael Gove, the secretary of state for education, has banned exams from being taken in January, and also wants to stop students from being able to re-sit a paper. He is aiming to do away with the
modular system, and lump everything into one final exam at the end of 2 years (like how it used to be).
Re: Linear Interpolation FP1 Formula
So he seeks to have less people get an A? This jumping up and down on standards is also unfair. Plenty of people here complain that if they would have taken their tests 3 years ago or 3 years hence
they would do much better. This is due to the seesawing of testing. One guy gets in and thinks it is too tough, the next it is too soft.
Re: Linear Interpolation FP1 Formula
Yes -- I am in favour of some of his 'reforms' but I am still fundamentally against the implementation, which is the reason he is disliked. We haven't had such big changes in about 30 years -- GCSEs
are being replaced now!!
Re: Linear Interpolation FP1 Formula
Do you think that system has produced high quality people?
Ergodicity for a Probabilistic Cellular Automaton on a finite space
Let's consider a Probabilistic Cellular Automaton on a one dimensional lattice $S$. Each site of the lattice can have two states, $0$ and $1$. The transition probability acting on each site is: $P
(x_i=1 | x_{i-1}, x_i, x_{i+1}) = 1$ when $x_{i-1}= x_i = x_{i+1} = 1$ and it is $P(x_i=1 | x_{i-1}, x_i, x_{i+1}) = \epsilon$ otherwise.
How can I prove that if $S$ is finite (in particular, in my case it is the one dimensional chain from site $-N$ to $N$ with periodic boundary conditions) then the stochastic process is ergodic for any value of $\epsilon$?
By ergodic I mean that "from any initial probability distribution, the system always converges to the same invariant measure".
The invariant measure, in this case, is obviously the one which gives weight "$1$" to the configuration $1, 1, 1, \ldots, 1$. Then one should just prove that the probability to get in any site $i$
the state $1$ at time $T$, $P_T(x_i=1)$, converges to $1$ as $T \rightarrow \infty$. But I cannot formalize how to see this.
stochastic-processes cellular-automata percolation probability-distributions
In the question: "the stochastic procecess IS ergodic" – QuantumLogarithm Sep 26 '12 at 8:23
in the title, "a probabilistic cellular AutomatON" – Igor Rivin Sep 26 '12 at 14:39
2 Answers
Accepted answer:
Define a random variable $Y\in \{0,1\}^N$ by $P(y_i^{(n)}=1) = \epsilon$ for all $1\leq i\leq N$ and $n\in\mathbb{N}$ and observe that $x_i^{(n)} \geq y_i^{(n)}$ for all $i,n$ (as long
as $X,Y$ are being driven by the same random process). With probability 1 there exists a time $n$ such that $y_i^{(n)}=1$ for all $i$ (since all the events are independent and the
lattice is finite), and then from this time on $X$ is in the state where every $x_i=1$.
The point $x=(1,1,\dots,1)$ is an absorbing state, and $p(y,x)$ is bounded away from 0 for any other state $y$. The only stationary measure of any countable Markov chain with this property is $\delta_x$.
Electric Field due to a line of charge
Yeah, Griffiths' book is good.
combining the two yields
[tex] \vec{E} = \frac{\lambda}{4 \pi \epsilon_{0}} \left[ \frac{1}{z} -\frac{1}{\sqrt{z^2 + L^2}} \right] \hat{x} + \frac{\lambda}{4 \pi \epsilon_{0}z} \frac{L}{\sqrt{L^2 + z^2}} \hat{z} [/tex]
Is there something wrong with the calculation for the x (horizontal) direction? Please help.
Thank you in advance for your help and advice!
From your calculation for [tex]\vec{E}[/tex] (yes, I think it's correct for both directions), for z >> L the x component of the electric field will disappear (since [tex]\frac{1}{z} - \frac{1}{z} = 0[/tex]), which leaves us the z component as below:
after simplifying, [tex]\vec{E} = \frac{\lambda L}{4 \pi \epsilon_{0}z^2}\hat{z}[/tex] for z >> L.
The case is the same as the example above from Griffiths' book (that is, for E at a distance z above the midpoint of the line). I quote it: From far away the line
"looks" like a point charge [tex]q = 2\lambda L[/tex].
I'm studying from Griffiths too :) and I've a question about this exercise.
The electric potential:
[tex]V = \frac{\lambda}{4\pi\epsilon_0} \ln\left(\frac{L+\sqrt{z^2+L^2}}{z}\right)[/tex]
Calculating the electric field from the gradient of V, the x component is always zero.
Where is the problem?
I think the x and y components will be gone since you apply the grad ([tex]\nabla[/tex]) operation to a potential that only depends on z. So,
[tex]\vec{E} = -\nabla V = -\frac{\lambda}{4\pi \epsilon_{0}}\frac{\partial}{\partial z}\ln \left(\frac{L + \sqrt{z^2 + L^2}}{z}\right)\hat{z}[/tex].
suppose [tex]k = \sqrt{z^2 + L^2}[/tex],
it will give [tex]\vec{E} = \frac{\lambda}{4 \pi \epsilon_{0}}\left(\frac{1}{z} - \frac{z}{k(L + k)}\right)\hat{z} = \frac{\lambda}{4 \pi \epsilon_{0}}\frac{L}{zk}\hat{z}[/tex], matching the z component found earlier.
BTW, I can't find this problem in Griffiths' book...
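Both closed forms in this thread are easy to sanity-check against brute-force numerical integration (a quick check in units where $\lambda/(4\pi\epsilon_0)=1$, comparing component magnitudes and leaving the sign conventions aside):

```python
import math

# Line runs from x = 0 to x = L; field point a height z above the x = 0 end.
L, z = 2.0, 1.5

# Closed forms quoted in the thread (magnitudes of the components):
Ex_closed = 1 / z - 1 / math.sqrt(z**2 + L**2)
Ez_closed = (1 / z) * L / math.sqrt(L**2 + z**2)

# Midpoint-rule integration of dE = dx / (x^2 + z^2) * r_hat
n = 100_000
dx = L / n
Ex = Ez = 0.0
for i in range(n):
    x = (i + 0.5) * dx
    r2 = x * x + z * z
    r = math.sqrt(r2)
    Ex += dx / r2 * (x / r)   # horizontal component magnitude
    Ez += dx / r2 * (z / r)   # vertical component magnitude
```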
A. Dynamical equations
B. Network models
A. Linear stability
B. Transition to synchronization
C. Desynchronization by noise
D. Effective synchronization clusters
E. Analysis of hierarchical synchronization
A. Weak coupling:Nonsynchronization regime
B. Intermediate coupling:Phase synchronization
C. Strong coupling:Almost complete synchronization
D. Analysis of the coherent regime of HN
Braingle: 'Circle of Children' Brain Teaser
Circle of Children
Math brain teasers require computations to solve.
Puzzle ID: #23713
Category: Math
Submitted By: sk8ergrl
Corrected By: shenqiang
A number of children are standing in a circle. They are evenly spaced and the 7th child is directly opposite the 18th child. How many children are there altogether?
Answer: 22. In half of the circle there are 11 children, because 18 − 7 = 11; multiply 11 × 2 = 22.
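The "directly opposite" condition is easy to verify in a couple of lines of code (a tiny sketch; the `opposite` helper is invented for this check):

```python
# Children numbered 1..n stand evenly spaced in a circle; a and b are
# directly opposite exactly when their positions differ by n/2.
def opposite(a, b, n):
    return n % 2 == 0 and (b - a) % n == n // 2

n = 2 * (18 - 7)   # the teaser's reasoning: 11 children per half-circle
```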
onlyeeyore As usual I got this math teaser the long way. I've got to bone up on my algebra!
Jun 15, 2005
Riddlerman Algebra ain't anything to do with it. Just figure 10 kids either side of them (10) Double it as there are two sides (20), then add the two kids themselves (22)
Jun 15, 2005
Bit EZ but i enjoyed it. Thanks.
sk8ergrl riddle is right. There is nothing to do with Algebra. I'm glad everyone seems to like it
Jun 15, 2005
BrUnEttEcUtiE We do these as bonus questions in school all the time!
Jun 15, 2005
sweetime my brain hurts
Jun 15, 2005
i got 20 but i'm not entirely sure how
Jun 16, 2005
but at least then I got it!
good one!
PCDguitar good 1 even tho i dont understand y u need to subtract 7 from 18 to get 11 to multiplay it by two?
Jun 17, 2005
Question_Mark Simple, but good.
Jun 17, 2005
froggygg Good teaser!
Jun 22, 2005
amanduh05 good 1
Jun 23, 2005
sk8ergrl Thanx!!!
Mar 20, 2006
Aug 06, 2006
opqpop I'm such an idiot. I figured out there were 10 kids on the right, and then figured out there were 6 kids before the 7th kid on the left, and hence 4 kids after the 18th kid to the left, for a total of 18 + 4 = 22 kids.
Then I realized I could have just doubled 10 and added 2.
Sigh, I'm ashamed of my stupidity.
Sep 11, 2010
doctapeppaman The solution assumes that the children are arranged consecutively around the circle.
Sep 27, 2012
Jimbo The question may be considered by solving a simpler problem. If 4 kids stand in a circle, 1 is opposite 3 and 2 is opposite 4. The difference in position is always 2. Double it to find the number.
So 18 - 7 = 11. 2 * 11 = 22.
Jun 03, 2013
Jimbo Good teaser BTW.
Jun 03, 2013
Function To Create MxRAMObjective Object
mxRAMObjective {OpenMx} R Documentation
This function creates a new MxRAMObjective object.
mxRAMObjective(A, S, F, M = NA, thresholds = NA)
A A character string indicating the name of the 'A' matrix.
S A character string indicating the name of the 'S' matrix.
F A character string indicating the name of the 'F' matrix.
M An optional character string indicating the name of the 'M' matrix.
thresholds An optional character string indicating the name of the thresholds matrix.
Objective functions are functions for which free parameter values are chosen such that the value of the objective function is minimized. The mxRAMObjective provides maximum likelihood estimates of
free parameters in a model of the covariance of a given MxData object. This model is defined by reticular action modeling (McArdle and McDonald, 1984). The 'A', 'S', and 'F' arguments must refer to
MxMatrix objects with the associated properties of the A, S, and F matrices in the RAM modeling approach.
The 'A' argument refers to the A or asymmetric matrix in the RAM approach. This matrix consists of all of the asymmetric paths (one-headed arrows) in the model. A free parameter in any row and column
describes a regression of the variable represented by that row regressed on the variable represented in that column.
The 'S' argument refers to the S or symmetric matrix in the RAM approach, and as such must be square. This matrix consists of all of the symmetric paths (two-headed arrows) in the model. A free
parameter in any row and column describes a covariance between the variable represented by that row and the variable represented by that column. Variances are covariances between any variable at
itself, which occur on the diagonal of the specified matrix.
The 'F' argument refers to the F or filter matrix in the RAM approach. If no latent variables are included in the model (i.e., the A and S matrices are both of the same dimension as the data matrix), then the 'F' argument should refer to an identity matrix. If latent variables are included (i.e., the A and S matrices are not of the same dimension as the data matrix), then the 'F' argument should consist of a horizontal concatenation of an identity matrix and a matrix of zeros.
The 'M' argument refers to the M or means matrix in the RAM approach. It is a 1 x n matrix, where n is the number of manifest variables + the number of latent variables. The M matrix must be
specified if either the mxData type is “cov” or “cor” and a means vector is provided, or if the mxData type is “raw”. Otherwise the M matrix is ignored.
The MxMatrix objects included as arguments may be of any type, but should have the properties described above. The mxRAMObjective will not return an error for incorrect specification, but incorrect
specification will likely lead to estimation problems or errors in the mxRun function.
mxRAMObjective evaluates with respect to an MxData object. The MxData object need not be referenced in the mxRAMObjective function, but must be included in the MxModel object. mxRAMObjective requires
that the 'type' argument in the associated MxData object be equal to 'cov', 'cor' or 'sscp'.
To evaluate, place MxRAMObjective objects, the mxData object for which the expected covariance approximates, referenced MxAlgebra and MxMatrix objects, and optional MxBounds and MxConstraint objects
in an MxModel object. This model may then be evaluated using the mxRun function. The results of the optimization can be found in the 'output' slot of the resulting model, and may be obtained using
the mxEval function.
Returns a new MxRAMObjective object. MxRAMObjective objects should be included with models with referenced MxAlgebra, MxData and MxMatrix objects.
McArdle, J. J. and McDonald, R. P. (1984). Some algebraic properties of the Reticular Action Model for moment structures. British Journal of Mathematical and Statistical Psychology, 37, 234-251.
The OpenMx User's guide can be found at http://openmx.psyc.virginia.edu/documentation.
matrixA <- mxMatrix("Full", values=c(0,0.2,0,0), name="A", nrow=2, ncol=2)
matrixS <- mxMatrix("Full", values=c(0.8,0,0,0.8), name="S", nrow=2, ncol=2, free=TRUE)
matrixF <- mxMatrix("Full", values=c(1,0,0,1), name="F", nrow=2, ncol=2)
# Create a RAM objective with default A, S, F matrix names
objective <- mxRAMObjective("A", "S", "F")
model <- mxModel(matrixA, matrixS, matrixF, objective)
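For intuition, the expected covariance implied by the A, S, and F matrices is Sigma = F (I - A)^-1 S (I - A)^-T F^T (McArdle & McDonald, 1984). The following stdlib-only Python sketch evaluates that formula for the example above; it is illustrative only and is not OpenMx code. Note that values=c(0,0.2,0,0) fills the matrix column-major in R, so the 0.2 path sits in row 2, column 1.

```python
# Expected covariance under the RAM model: Sigma = F (I-A)^-1 S (I-A)^-T F^T.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):  # closed-form inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def transpose(M):
    return [list(r) for r in zip(*M)]

A = [[0.0, 0.0], [0.2, 0.0]]   # one regression path (row 2 on column 1)
S = [[0.8, 0.0], [0.0, 0.8]]   # residual variances on the diagonal
F = [[1.0, 0.0], [0.0, 1.0]]   # no latent variables -> identity filter
I_minus_A = [[1.0 - A[0][0], -A[0][1]], [-A[1][0], 1.0 - A[1][1]]]
B = inv2(I_minus_A)            # total-effects matrix (I - A)^-1
Sigma = matmul(matmul(matmul(matmul(F, B), S), transpose(B)), transpose(F))
print(Sigma)  # the model-implied covariance of the two manifest variables
```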
version 0.2.2-951
Pi Cubed
Pi Cubed is a visual math application for the iPhone, iPod touch, and now iPad that lets you perform calculations as you would on a piece of paper. By using an animated, touch-based interface, even
the most complex mathematical expressions can be entered and instantly evaluated. These expressions are typeset as they would be in a textbook or on a blackboard. Calculations are displayed with up
to 34 digits of decimal precision. Additionally, a library of over 150 equations ships with the application.
Download video (h.264, iPhone / iPod touch)
Equations can be easily edited onscreen using simple touch interactions. All editing and entry operations are animated through heavy use of the Core Animation framework. For fine editing or
inspection, pinch-zooming gestures can be used to magnify the equations. Equations can be scrolled around using swipes of a finger.
Any calculation or equation can be stored to an internal database for later retrieval or editing. In addition, a built-in library of over 150 equations is provided covering fields as varied as
finance, geometry, and fluid mechanics. As an aid, tapping on a variable in an equation will display a text description of that variable. This is handy when dealing with complex engineering
calculations or as an educational aid.
Equations that are entered or loaded from the library can be exported via email in plain text, LaTeX, or PDF formats for use in research papers or other documents. Equations expressed in terms of x,
y, and z can be exported to Grafly for plotting. Additionally, equations can be copied to the clipboard and pasted in other applications on the device.
A diverse range of mathematical operations are supported. These include:
• Numbers, including basic constants
• Arithmetic operations (addition, subtraction, division, multiplication)
• Parentheses and brackets
• Logarithms (base 10), natural logarithms, and logarithms with arbitrary bases
• Square roots, and roots with arbitrary powers
• Exponentials
• Trigonometry operations (sine, cosine, tangent, hyperbolic operations, inverse operations)
• The error function
• Factorials
If you want to try out the capabilities of the application before you purchase, Pi Cubed Lite is available on the App Store for free. The Lite version has many of the calculation capabilities of Pi
Cubed, but lacks the built-in and user-defined equation libraries, can't export equations via email or for plotting, and can only store up to five recent calculations. It also lacks the iPad
interface of the full version.
Pi Cubed uses several of the high-quality icons from the set published by Glyphish.
Sunset Lake Software has elected to donate 10% of the net proceeds from the sale of Pi Cubed to the Child's Play charity. Child's Play provides video games, toys, and other forms of entertainment to
children's hospitals around the world, in order to put the children there at ease and take their minds off of their immediate circumstances. Even if Pi Cubed is not for you, please take the time to
read up on Child's Play and help them out if you can.
Related posts
iPhone or iPod touch with the 3.0 software update applied iPad running iPhone OS 3.2
Price: $9.99 (or equivalent)
Version: 2.0
Simple Integral question w/ parts
September 2nd 2008, 11:10 PM #1
Sep 2008
Okay, I have a question. Is $\int arctan{5y}$ equal to $\frac{1}{1+5y^2}\times10y$?
I am working on this problem,
$\int x^2arctan{2x}dx$, it's actually tan^-1(2x) but I'm not sure how to insert the negative to make it the inverse.
I made $u=2y,\ du=2dy,\ dv=arctan{5y},\ and\ v=\frac{10y}{a+5y^2}$
Am I heading in the right direction? Thanks in advance.
No, we have $\frac{\mathrm{d}}{\mathrm{d}y}\left[\arctan (5y)\right]=\frac{5}{1+25y^2}$ or $\arctan (5y) = \int_0^{5y}\frac{1}{1+t^2}\,\mathrm{d}t$.
I am working on this problem,
$\int x^2arctan{2x}dx$, it's actually tan^-1(2x) but I'm not sure how to insert the negative to make it the inverse.
Write [tex]\tan^{-1}[/tex] or [tex]\arctan[/tex] .
I made $u=2y,\ du=2dy,\ dv=arctan{5y},\ and\ v=\frac{10y}{a+5y^2}$
Am I heading in the right direction?
I don't think so since you don't know an antiderivative of $\arctan$. One possibility is to see that if you let $v=\arctan (2x)$ and $u'=x^2$ then $v'=\frac{2}{1+4x^2}$ and $u=\frac{x^3}{3}$ so
$uv'$ is a rational fraction : we don't have to integrate $\arctan$ anymore.
Does it help ?
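The derivative quoted above is easy to spot-check numerically; here is a stdlib-only Python sketch (my addition, not part of the original thread):

```python
import math

# Check d/dy arctan(5y) = 5 / (1 + 25 y^2) with a central difference.
def numderiv(f, y, h=1e-6):
    return (f(y + h) - f(y - h)) / (2 * h)

for y in (0.0, 0.7, -1.3):
    approx = numderiv(lambda t: math.atan(5 * t), y)
    exact = 5 / (1 + 25 * y**2)
    assert abs(approx - exact) < 1e-5
```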
Okay, I have a question. Is $\int arctan{5y}$ equal to $\frac{1}{1+5y^2}\times10y$?
I am working on this problem,
$\int x^2arctan{2x}dx$, it's actually tan^-1(2x) but I'm not sure how to insert the negative to make it the inverse.
I made $u=2y,\ du=2dy,\ dv=arctan{5y},\ and\ v=\frac{10y}{a+5y^2}$
Am I heading in the right direction? Thanks in advance.
According to my book of integrals, an integral of $\arctan x$...
$\int \arctan x\, dx = x\arctan x - \frac{1}{2} \log (1+x^2)$.
Does that help?
Oops, I mixed my problem up. This is what I am currently trying to figure out: $\int2ytan^{-1}{5y}dy$
Okay so, $u=\ 2y,\ du=\ 2dy,\ dv=\ tan^{-1}{5y},\ and\ v=5ytan^{-1}{5y}-\frac {1}{2}\ln{(1+25y^4)}+c$ correct? This is looking more difficult then I thought! So now: $2y\times 5ytan^{-1}{5y}-\
frac {1}{2}\ln{(1+25y^4)}-\frac {1}{2}\int 5ytan^{-1}{5y}-\frac {1}{2}\ln{(1+25y^4)}dy$ I put the $\frac {1}{2}$ in front of the integral to get rid of the 2 from du which is 2dy...
Okay so, $u=\ 2y,\ du=\ 2dy,\ dv=\ tan^{-1}{5y},\ and\ v=5ytan^{-1}{5y}-\frac {1}{2}\ln{(1+25y^{\color{red}4})}+c$ correct? This is looking more difficult then I thought! So now: $2y\times 5ytan^
{-1}{5y}-\frac {1}{2}\ln{(1+25y^{\color{red}4})}-\frac {1}{2}\int 5ytan^{-1}{5y}-\frac {1}{2}\ln{(1+25y^{\color{red}4})}dy$ I put the $\frac {1}{2}$ in front of the integral to get rid of the 2
from du which is 2dy...
The terms in red should be 2, not 4.
Now, from here, do the following:
We found that $\int 5y\tan^{-1}(5y)\,dy=10y^2\tan^{-1}(5y)-\tfrac {1}{2}\ln{(1+25y^2)}-\frac {1}{2}{\color{red}\int} \left({\color{red}5y\tan^{-1}(5y)}-\tfrac {1}{2}\ln{(1+25y^2)}\right){\color{red}\,dy}$
The integral reappears on the right side! So get it on the left side!
$\implies \tfrac{3}{2}\int 5y\tan^{-1}(5y)\,dy=10y^2\tan^{-1}(5y)-\tfrac {1}{2}\ln{(1+25y^2)}+\tfrac{1}{4}{\color{red}\int \ln{(1+25y^2)}\,dy}$
Before we get our answer, try to evaluate the integral I highlighted in red.
Hint : apply integration by parts! Let $u=\ln(1+25y^2)$.....
$\int 2y \tan^{-1} (5y) \ dy$
First, let's make the sub: $\begin{array}{lll} s = 5y & \Rightarrow & ds = 5 dy \iff dy = \displaystyle \frac{ds}{5} \\ y = \displaystyle \frac{s}{5} \end{array}$
and we get: $\int 2 \left(\frac{s}{5}\right) \tan^{-1} s \frac{ds}{5} \ \ = \ \ \frac{1}{25} \int 2s \tan^{-1}s \ ds$
Now you can use integration by parts: $\begin{array}{ll} u = \tan^{-1} s & dv = 2s ds \\ du = \displaystyle \frac{1}{1+s^2} \ ds & v = s^2 \end{array}$
Bad news: tan isn't a variable.
The $5y\tan^{-1}{5y}$ is an antiderivative of $\tan^{-1}{5y}$. Try differentiating and see for yourself.
You have to be wise when you choose what u and dv should be. You're integrating $\tan^{-1}{5y}$, but you don't know its integral, so how could you think of it as dv?!
Yea, I see what you mean!
The terms in red should be 2, not 4.
Now, from here, do the following:
We found that $\int 5y\tan^{-1}(5y)\,dy=10y^2\tan^{-1}(5y)-\tfrac {1}{2}\ln{(1+25y^2)}-\frac {1}{2}{\color{red}\int} \left({\color{red}5y\tan^{-1}(5y)}-\tfrac {1}{2}\ln{(1+25y^2)}\right){\color{red}\,dy}$
The integral reappears on the right side! So get it on the left side!
$\implies \tfrac{3}{2}\int 5y\tan^{-1}(5y)\,dy=10y^2\tan^{-1}(5y)-\tfrac {1}{2}\ln{(1+25y^2)}+\tfrac{1}{4}{\color{red}\int \ln{(1+25y^2)}\,dy}$
Before we get our answer, try to evaluate the integral I highlighted in red.
Hint : apply integration by parts! Let $u=\ln(1+25y^2)$.....
How does it become 3/2 on the left side? I know its because you put the integral to the left though like you said, but how exactly?
$\int 2y \tan^{-1} (5y) \ dy$
First, let's make the sub: $\begin{array}{lll} s = 5y & \Rightarrow & ds = 5 dy \iff dy = \displaystyle \frac{ds}{5} \\ y = \displaystyle \frac{s}{5} \end{array}$
and we get: $\int 2 \left(\frac{s}{5}\right) \tan^{-1} s \frac{ds}{5} \ \ = \ \ \frac{1}{25} \int 2s \tan^{-1}s \ ds$
Now you can use integration by parts: $\begin{array}{ll} u = \tan^{-1} s & dv = 2s ds \\ du = \displaystyle \frac{1}{1+s^2} \ ds & v = s^2 \end{array}$
Okay, I never thought of it like that, I'm going to try that now.
The terms in red should be 2, not 4.
Now, from here, do the following:
We found that $\int 5y\tan^{-1}(5y)\,dy=10y^2\tan^{-1}(5y)-\tfrac {1}{2}\ln{(1+25y^2)}{\color{red}-\tfrac{1}{2}\int} \left({\color{red}5y\tan^{-1}(5y)}-\tfrac {1}{2}\ln{(1+25y^2)}\right){\color{red}\,dy}$
The integral reappears on the right side! So get it on the left side!
$\implies \tfrac{3}{2}\int 5y\tan^{-1}(5y)\,dy=10y^2\tan^{-1}(5y)-\tfrac {1}{2}\ln{(1+25y^2)}+\tfrac{1}{4}{\color{red}\int \ln{(1+25y^2)}\,dy}$
Before we get our answer, try to evaluate the integral I highlighted in red.
Hint : apply integration by parts! Let $u=\ln(1+25y^2)$.....
We see that we have the term $-\tfrac{1}{2}\int (5y\tan^{-1}(5y))\,dy$ on the right side of the equation and $\int 5y\tan^{-1}(5y)\,dy$ on the left side.
Add $\tfrac{1}{2}\int (5y\tan^{-1}(5y))\,dy$ to both sides. The integral will disappear on the right and on the left side of the equation we get $\int 5y\tan^{-1}(5y)\,dy+\tfrac{1}{2}\int (5y\tan
^{-1}(5y))\,dy=\tfrac{3}{2}\int (5y\tan^{-1}(5y))\,dy$
I hope this makes sense.
We see that we have the term $-\tfrac{1}{2}\int (5y\tan^{-1}(5y))\,dy$ on the right side of the equation and $\int 5y\tan^{-1}(5y)\,dy$ on the left side.
Add $\tfrac{1}{2}\int (5y\tan^{-1}(5y))\,dy$ to both sides. The integral will disappear on the right and on the left side of the equation we get $\int 5y\tan^{-1}(5y)\,dy+\tfrac{1}{2}\int (5y\tan
^{-1}(5y))\,dy=\tfrac{3}{2}\int (5y\tan^{-1}(5y))\,dy$
I hope this makes sense.
Ah, I see. Thank you. I should have just remembered the 2/2 on the left to add to 1/2
$\int 2y \tan^{-1} (5y) \ dy$
First, let's make the sub: $\begin{array}{lll} s = 5y & \Rightarrow & ds = 5 dy \iff dy = \displaystyle \frac{ds}{5} \\ y = \displaystyle \frac{s}{5} \end{array}$
and we get: $\int 2 \left(\frac{s}{5}\right) \tan^{-1} s \frac{ds}{5} \ \ = \ \ \frac{1}{25} \int 2s \tan^{-1}s \ ds$
Now you can use integration by parts: $\begin{array}{ll} u = \tan^{-1} s & dv = 2s ds \\ du = \displaystyle \frac{1}{1+s^2} \ ds & v = s^2 \end{array}$
So, I get $s^2(tan^{-1}s)-\int \frac {s^2}{1+s^2}ds$ but I need some help integrating the integral, it looks very similar to $tan^{-1}{s}$ except with the $s^2$ on the numerator.
One way to deal with the integral would be to apply long division to the integrand:
I leave it for you to verify that $\frac{s^2}{1+s^2}=1-\frac{1}{1+s^2}$
Now integrate $\int\left(1-\frac{1}{1+s^2}\right)\,ds$. It should be a piece of cake now.
I hope this makes sense!
Try $\int{ds} - \int\frac{ds}{s^2+1} = \int\frac{s^2}{1+s^2}\,ds$
oops I was too slow
Alright, so I finally get: $\frac {25y^2(tan^{-1}{5y})+tan^{-1}{5y}-5y}{25}$ Now I'm a little confused about the division though, 11rdc11, could you explain your last post a little more?
Thank you guys for all of your help! I really do appreciate it.
Where are you stuck in the division process?
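For the record, the final antiderivative reached in this thread can be checked numerically: differentiating F(y) = (25y^2 tan^-1(5y) + tan^-1(5y) - 5y)/25 should recover the integrand 2y tan^-1(5y). A stdlib-only Python sketch (my addition, not from the thread):

```python
import math

# F(y) is the antiderivative reached above (constant of integration omitted).
def F(y):
    return (25 * y**2 * math.atan(5 * y) + math.atan(5 * y) - 5 * y) / 25

def integrand(y):
    return 2 * y * math.atan(5 * y)

h = 1e-6
for y in (0.3, 1.0, 2.5):
    deriv = (F(y + h) - F(y - h)) / (2 * h)   # central difference
    assert abs(deriv - integrand(y)) < 1e-5
print("F'(y) matches 2*y*atan(5*y) at the sample points")
```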
Re: the RM and Godel
Oracle FAQ Your Portal to the Oracle Knowledge Grid
Newsgroup: comp.databases.theory
From: Jan Hidders <jan.hidders_at_REMOVETHIS.pandora.be> Date: Sat, 25 Jun 2005 09:34:19 GMT Message-ID: <vy9ve.129657$F62.7015496@phobos.telenet-ops.be>
mountain man wrote:
> I seem to recall there is no "solid ground" in mathematics
> in that the formalisms of mathematics cannot lead to
> anything resembling "absolute truth".
That's a grave oversimplification and a very misleading statement. A slightly less oversimplified version would be that if you formalize the theory of natural numbers (or sets, for that matter) you
cannot have "the truth, the whole truth, and nothing but the truth" because you have to choose between either "the whole truth" or "nothing but the truth". Most tend to choose the "nothing but the truth" option.
> How do RM theorists view the work of Godel, Turing and
> Chaitin? What are the implications of Godels theorem of
> incompleteness, or Chaitin's random truth, to the RM?
Practically zero. Note that there is Goedel's *completeness* result for first-order logic (i.e., the flat relational model) that tells us that for uninterpreted predicates we in fact can and do have a
complete axiomatization. So whether there is going to be a problem in this respect depends upon what you take as your domains, and that decision's not really part of the RM anyway. But even if the
problem would occur, that would be practically meaningless in practice. Would it stop us from proving things? No. Would it stop us from being able to ask certain queries or reason correctly about
them? No.
Received on Sat Jun 25 2005 - 04:34:19 CDT
Igor Stamenkovic
95 followers|207,432 views
A classic of physics -- the first systematic presentation of Einstein's theory of relativity.
At last - a textbook on category theory for scientists! And it's free!
• David Spivak, Category Theory for Scientists
"This course is an attempt to extol the virtues of a new branch of mathematics, called
category theory
, which was invented for powerful communication of ideas between different fields and subfields within mathematics. By powerful communication of ideas I actually mean something precise. Different
branches of mathematics can be formalized into categories. These categories can then be connected together by functors. And the sense in which these functors provide powerful communication of ideas
is that facts and theorems proven in one category can be transferred through a connecting functor to yield proofs of an analogous theorem in another category. A functor is like a conductor of
mathematical truth."
"I believe that the language and toolset of category theory can be useful throughout science. We build scientific understanding by developing models, and category theory is the study of basic
conceptual building blocks and how they cleanly fit together to make such models. Certain structures and conceptual frameworks show up again and again in our understanding of reality. No one would
dispute that vector spaces are ubiquitous. But so are hierarchies, symmetries, actions of agents on objects, data models, global behavior emerging as the aggregate of local behavior, self-similarity,
and the effect of methodological context."
"Some ideas are so common that our use of them goes virtually undetected, such as set-theoretic intersections. For example, when we speak of a material that is both lightweight and ductile, we are
intersecting two sets. But what is the use of even mentioning this set-theoretic fact? The answer is that when we formalize our ideas, our understanding is almost always clarified. Our ability to
communicate with others is enhanced, and the possibility for developing new insights expands. And if we are ever to get to the point that we can input our ideas into computers, we will need to be
able to formalize these ideas first."
"It is my hope that this course will offer scientists a new vocabulary in which to think and communicate, and a new pipeline to the vast array of theorems that exist and are considered immensely
powerful within mathematics. These theorems have not made their way out into the world of science, but they are directly applicable there. Hierarchies are partial orders, symmetries are group
elements, data models are categories, agent actions are monoid actions, local-to-global principles are sheaves, self-similarity is modeled by operads, context can be modeled by monads."
David Spivak asks readers from different subjects for help in finding new ways to apply category theory to those subjects. And that's the right attitude to take when reading this book. I've found category theory immensely valuable in my work. But it took a while to learn category theory and see how it can apply to different subjects. People are just starting to figure out these things, so don't expect instant solutions to the problems in your own favorite field. But Spivak does the best job I've seen so far at explaining category theory as a general-purpose tool for thinking clearly.
Thanks to +Charlie Clingen for pointing this out!
Richard Feynman explains how a train stays on the tracks.
Professor Coleman's wit and teaching style are legendary and, despite all that may have changed in the 35 years since these lectures were recorded, many students today are excited at the prospect of
being able to view them and experience Sidney's particular genius second-hand.
Oxford University - basic quantum mechanics course, 27 lecture videos
A foundations of mathematics for the 21st century
It's here! For decades, mathematicians have been dreaming of an approach to math where different proofs that x = y can be seen as different paths in a space. It's finally been made precise, thanks to Vladimir Voevodsky and a gang of mathematicians who holed up for a year at the Institute for Advanced Study, at Princeton.
I won't try to explain it, since that's what the book does. I'll just mention a few of the radical new features:
• It includes set theory as a special case, but it's founded on more general things called 'types'. Types include sets, but also propositions. Proving a proposition amounts to constructing an
element of a certain type. So, proofs are no longer 'outside' the mathematics being discussed, they're inside it just like everything else.
• The logic is 'constructive', meaning that to prove something exists amounts to giving a procedure for constructing it. As a result, the whole system can be (and is being) computerized with the help of programs like Coq and Agda.
• Types can be seen as 'spaces', and their elements as 'points'. A proof that two elements of a type are equal can be seen as constructing a path between two points. Sets are just a special case:
the '0-types', which have no interesting higher-dimensional aspect. There are also types that look like spheres and tori! Technically speaking, the branch of topology called homotopy theory is now a part of logic! That's why the subject is called homotopy type theory.
• Types can also be seen as ∞-groupoids. Very roughly, these are structures with elements, isomorphisms between elements, isomorphisms between isomorphisms, and so on ad infinitum. So, a certain chunk of the important new branch of math called 'higher category theory' is now part of logic, too.
• The most special contribution of Voevodsky is the univalence axiom. Very roughly, this expands the concept of 'equality' so that it's just as general as the hitherto more flexible concept of 'isomorphism' - or, if you know some more fancy math, 'equivalence'.
Mathematicians working on homotopy theory and higher category theory have known for decades that equality is too rigid a concept to be right - for certain applications. The univalence axiom updates
our concept of equality so that it's good again!
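The 'propositions as types' idea from the first bullet can be made concrete in a proof assistant. A tiny Lean 4 sketch (my illustration, not taken from the book): a proof of A ∧ B → B ∧ A is literally a term inhabiting that type.

```lean
-- A proof is a program: the term below inhabits the type A ∧ B → B ∧ A,
-- built by swapping the two components of the conjunction.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.2, h.1⟩
```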
Since this is all about foundations, and it's all quite new, please don't ask me yet what its practical applications are. Ask me in a hundred years. For now, I can tell you that this is the 'upgrade' that the foundations of math has
needed ever since the work of Grothendieck. It's truly 21st-century math.
It's also a book for the 21st century, because it's escaped the grip of expensive publishers! While it's 600 pages long, a hardback copy costs less than $27. Paperback costs less than $18, and an
electronic copy is free!
#spnetwork #homotopytheory #ncategories #logic #foundations
Preserve dark skies by counting the Orion stars that you can see from where you live.
Looking for
Friends, Networking
Engineer, science lover, amateur astronomer, lazy reader...
Program equivalences for concurrency abstractions in a concurrent lambda calculus with buffers, cells and futures
Various concurrency primitives have been added to functional programming languages in different ways. In Haskell such a primitive is the MVar, joins are described in JoCaml, and AliceML uses futures to provide concurrent behaviour. Although these concurrency libraries seem to behave well, their equivalence to one another has not yet been proven; an expressive formal system is needed. In their paper "On proving the equivalence of concurrency primitives", Jan Schwinghammer, David Sabel, Joachim Niehren, and Manfred Schmidt-Schauß define a universal calculus for concurrency primitives known as the typed lambda calculus with futures, in which equivalence of processes has been proved and an encoding of simple one-place buffers has been worked out. This bachelor's thesis is about encoding more complex concurrency abstractions in the lambda calculus with futures and proving the correctness of their operational semantics. Given the new abstractions, we discuss program equivalence between them. Finally, we present a library written in Haskell that exposes futures and our concurrency abstractions as a proof of concept.
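To give a concrete feel for the one-place buffer abstraction discussed above, here is a rough Python/threading analogue of an MVar-style buffer. It is illustrative only; the thesis encodes buffers in the lambda calculus with futures, and its proof-of-concept library is written in Haskell.

```python
import threading

class OnePlaceBuffer:
    """MVar-style one-place buffer: put blocks while full, take blocks while empty."""
    def __init__(self):
        lock = threading.Lock()
        # Two condition queues sharing one lock, as in a classic bounded buffer.
        self._nonempty = threading.Condition(lock)
        self._nonfull = threading.Condition(lock)
        self._full = False
        self._value = None

    def put(self, value):
        with self._nonfull:
            while self._full:
                self._nonfull.wait()
            self._value, self._full = value, True
            self._nonempty.notify()

    def take(self):
        with self._nonempty:
            while not self._full:
                self._nonempty.wait()
            value, self._value, self._full = self._value, None, False
            self._nonfull.notify()
            return value
```

A put into a full buffer (or a take from an empty one) blocks the calling thread until the matching operation occurs, mirroring the blocking behaviour of Haskell's putMVar/takeMVar.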
Author: Martina Willig
URN: urn:nbn:de:hebis:30-67958
Document Type: Bachelor Thesis
Language: English
Date of Publication (online): 2009/10/27
Year of first Publication: 2009
Publishing Institution: Univ.-Bibliothek Frankfurt am Main
Release Date: 2009/10/27
Tag: concurrency ; futures ; haskell; lambda calculus ; programming languages design
SWD-Keyword: Haskell 98; Lambda-Kalkül; Nebenläufigkeit
HeBIS PPN: 217316905
Institutes: Informatik
Dewey Decimal Classification: 004 Datenverarbeitung; Informatik
Sammlungen: Universitätspublikationen
Licence (German): Veröffentlichungsvertrag für Publikationen (publication agreement)
Chapter 3
Development and Implementation of a Performance-Related Specification for SR 9a, Florida: Final Report
Chapter 3: Development of the Performance-Related Specification
In developing the PRS for the SR 9A project, the latest FHWA procedures (Hoerner and Darter, 1999) and software (PaveSpec 3.0) were used. A level 1 (simplified) specification was chosen to minimize
deviation from the department's existing specifications and testing practices and thus provide the best chance possible for successful implementation.
To begin the development process, much information about the department's current specifications and design criteria, construction sampling and testing techniques, pavement performance measures, and
typical maintenance and rehabilitation strategies and costs was collected and carefully reviewed. This information, along with data specific to the SR 9A project, was used to create the framework for
the specification and provide the necessary inputs to the PaveSpec program, which are provided in appendix A.
This chapter discusses in detail the various types of data collected in the study and how the data were used to develop the SR 9A PRS. It also presents the resulting PRS pay factor curves used in
compensating the contractor for the level of quality achieved on the project. The final, binding version of the PRS, in the form of a Technical Special Provision, is provided in appendix B.
Selection of Acceptance Quality Characteristics and As-Designed Quality Levels
In the construction of its concrete pavements, the department calls for the inspection and testing of several quality characteristics. Among these characteristics are slump, air content, slab
thickness, strength, dowel and tie bar placement, and surface smoothness. For the SR 9A PRS implementation, Florida DOT decided that three of the five AQCs considered by PaveSpec would provide the
basis for concrete pavement pay adjustments. These AQCs included slab thickness, 28-day compressive strength, and surface smoothness, as determined using a California-type profilograph with a 0.2-in.
(5 mm) blanking band.
To define for each AQC the levels of quality for which the department is willing to pay 100 percent of the bid price (i.e., target values) and the levels it considers unacceptable (minimum and
maximum values), the department's applicable concrete specifications (Florida DOT, 2000) and design methodology were examined. In addition, actual AQC data from three nearby concrete paving projects
completed in 2000 were obtained and analyzed. The sections below discuss how the gathered information was used to establish as-designed target values (i.e., mean and standard deviation) and
corresponding rejectable and maximum quality limits (RQLs and MQLs) for each of the three AQCs included in the PRS.
Slab Thickness
Section 350-16 of the department's 2000 Standard Specifications discusses how slab thickness is measured and evaluated for acceptance. The specification requires that the contractor take cores at
randomly selected locations, with each core representing no more than 2,500 yd^2 (2,090 m^2) of pavement area. The department determines the average thickness of pavement from the lengths of all
cores taken from the entire job. In this computation, cores measuring more than 0.5 in. (12.7 mm) greater than the specified thickness are assigned a thickness equal to the specified thickness plus
0.5 in. (12.7 mm).
Areas of pavement found by the department to be deficient in thickness by more than 0.5 in. (12.7 mm) are handled in one of two ways. The first option allows the contractor to remove and replace the
deficient area with concrete of the thickness shown in the plans. No compensation is given for the removal and replacement. The second option allows the contractor to leave the deficient pavement in
place, but to receive zero compensation for the subject pavement area.
The final pay quantity is determined by multiplying the area of pavement to be paid for by the ratio of the average thickness to the specified thickness. This prorated amount of pavement is then
multiplied by the bid unit price for concrete pavement. The final pay quantity is capped, however, by a maximum average of over-thickness of 0.25 in. (6.4 mm).
As discussed in chapter 2, the specified pavement thickness on the SR 9A project is 12.5 in. (317.5 mm). Because the department will not pay for, and may require replacement for, pavement that is
more than 0.5 in. (12.7 mm) below the specified thickness, the department's RQL for slab thickness for the SR 9A project is assumed to be 12.0 in. (304.8 mm) (12.5 in. - 0.5 in. [317.5 - 12.7 mm]).
This value was deemed appropriate by the department for use in the PRS.
Department specifications indicate that the MQL for thickness for the SR 9A project is 12.75 in. (323.9 mm) (12.50 in. + 0.25 in. [317.5 + 6.4 mm]). No additional bonus money is paid to the
contractor for achieving an average thickness for the project greater than 12.75 in. (323.9 mm). For PRS development and implementation, however, the department determined that the MQL should be
increased from 12.75 in. (323.9 mm) to 13.5 in. (342.9 mm) to allow for more incentive opportunity.
The logical target mean for thickness for the SR 9A project is represented by the specified thickness of 12.5 in. (317.5 mm). To determine the appropriate standard deviation target, slab thickness
data from three previous SR 9A jobs (Financial Projects 20959315201, 20929615201, and 20929315201) were analyzed. These projects represented approximately 21 lane-miles (34 lane-kilometers) of
mainline pavement, extending from MP 24.496 northeasterly to MP 20.917. For each project, only the core thickness measurements taken on mainline pavement (specified thickness of 12.5 in. [317.5 mm])
were evaluated.
Table 1 provides a statistical breakdown of the measured slab thicknesses for each project, while figure 4 shows the corresponding thickness distributions. Because of the unusually high variation in
thickness for project 1, only the data from projects 2 and 3 were considered in establishing the target standard deviation. The weighted average thickness (based on number of independent cores) for
these two projects was computed to be 12.67 in. (321.8 mm), and the standard deviation from the pooled variances of thickness was computed to be 0.49 in. (12.5 mm). Based on these results, the department
recommended establishing the target standard deviation at 0.50 in. (12.7 mm). These are the target mean and standard deviations for which the department is willing to pay 100 percent bid price.
Table 1. Statistics for Slab Thickness Data From Three Florida
Concrete Pavement Projects
│ Statistic │Project 1│Project 2│Project 3│
│ Number of independent cores │33 │20 │31 │
│ Average, in. │11.99 │12.83 │12.57 │
│ Standard deviation, in. │0.71 │0.40 │0.54 │
│Coefficient of variation (COV) ^a │0.06 │0.03 │0.04 │
│1 in. = 25.4 mm │
│^a COV = standard deviation/average │
Figure 4. Slab thickness distributions for three Florida concrete pavement projects.
1 in. = 25.4 mm.
Compressive Strength
Acceptance sampling and testing protocol and requirements for concrete strength are provided in Sections 347-4 and 347-5 of the department's 2000 Standard Specifications. According to the protocol,
at least one representative sample of concrete must be obtained from each day's production of each design mix from each production facility. From that sample, the contractor must cast four concrete
cylinders, 6 in. (152.4 mm) in diameter by 12 in. (304.8 mm) long. Two of the cylinders must then be tested for compressive strength 7 days after casting, while the other two cylinders must be tested
28 days after casting. For each pair of cylinders tested, the average compressive strength is determined. Concrete below the 28-day minimum compressive strength requirement of 2,700 lbf/in^2 (18.62
MPa) is subject to removal and replacement by the contractor. This strength value represents the department's existing RQL, and the department recommended that it be applied to the SR 9A PRS.
A corresponding MQL for strength was found to not exist. However, based on the department's target for strength and the variability of strength observed in past projects (see discussion below), the
department determined that 5,500 lbf/in^2 (37.92 MPa) would be a suitable MQL value for the 9A PRS.
The Florida DOT's current procedure for designing JPCPs is based on the 1993 AASHTO Design Guide. The procedure and the standard design input values used by the department are presented in its 1996
Rigid Pavement Design Manual. In this manual, the design concrete strength is represented by the 28-day modulus of rupture determined through third-point loading. The standard design value is given
as 4,400 kPa (638 lbf/in^2). Using the following equation for converting flexural strength to compressive strength, the corresponding 28-day design compressive strength was computed to be 4,510 lbf/
in^2 (31.10 MPa):
M[R,28-day] = 9.5 * (f'[C,28-day])^0.5 Eq. 2
M[R,28-day] = Estimated modulus of rupture at 28 days, lbf/in^2.
f'[C,28-day] = Estimated compressive strength at 28 days, lbf/in^2.
For PRS purposes, the target mean for compressive strength was set at 4,500 lbf/in^2 (31.03 MPa).
Evaluation of 28-day compressive strength data on cylinders tested in the three previous SR 9A projects yielded the strength statistics listed in table 2 and the strength distributions shown in
figure 5. Again, because of the unusually high variation in strength for project 1, only the data from projects 2 and 3 were considered in establishing the target standard deviation. The weighted
average strength (based on number of pairs of cylinders) for these two projects was computed to be 5,548 lbf/in^2 (38.25 MPa), and the standard deviation of strength was computed from pooled
variances to be 610 lbf/in^2 (4,206 kPa). Based on these results, the department recommended establishing the target standard deviation at 610 lbf/in^2 (4,206 kPa). As previously stated, these are
the means and standard deviations for which the department is willing to pay 100 percent of bid price.
Table 2. Statistics for Compressive Strength Data From Three
Florida Concrete Pavement Projects
│ Statistic │Project 1│Project 2│Project 3│
│ Number of pairs of cylinders │26 │45 │45 │
│ Average, lbf/in^2 │5,471 │5,419 │5,698 │
│ Standard deviation, lbf/in^2 │833 │649 │570 │
│Coefficient of variation (COV) ^a │0.15 │0.12 │0.10 │
│1 lbf/in^2 = 6.89 kPa │
│^a COV = standard deviation/average │
Figure 5. Compressive strength distributions for three Florida concrete pavement projects.
1 lbf/in^2 = 6.89 kPa.
Surface Smoothness
Sections 350-14 and 352-4c of the department's 2000 Standard Specifications describe how concrete smoothness is tested and evaluated for acceptance. The procedure requires that the contractor furnish
and operate an electronic California-type profilograph along each wheel path of each traffic lane longer than 250 ft (76.2 m). The profilograph must be capable of producing profile traces and
computing profile index (PI) based on a 0.2-in. (5 mm) blanking band (herein denoted as PI[0.2-in]).
Profilograph test results are examined by the department's field engineer. Individual high points in excess of 0.3 in. (7.6 mm) per 25-ft (7.6 m) length are identified for grinding, and the average
PI[0.2-in] for each 0.1-mi (0.16 km) section is computed using the left and right wheel path PI[0.2-in] values. Each 0.1-mi (0.16 km) tangent or slightly curved (centerline radius of curvature ≥
2,000 ft [609.6 m]) section with an average PI[0.2-in] greater than 7 in./mi (111 mm/km) must be corrected by the contractor via grinding. Contract unit price adjustments for smoothness, prior to
grinding, are made according to the schedule shown in table 3.
The information in table 3 indicates that the department's RQL and MQL values for smoothness are 7 in./mi (111 mm/km) and 3 in./mi (47 mm/km), respectively. These values were deemed appropriate for use in the
SR 9A PRS. Table 3 also shows that the DOT's target mean smoothness (prior to grinding) is 5.5 in./mi (87 mm/km), which is the midpoint of the range (5.0 < PI[0.2-in] ≤ 6.0) that corresponds to 100
percent payment. However, because the SR 9A contract was let with the requirement that all concrete pavement be diamond ground and that contractor bid prices for concrete pavement include the cost of
grinding, a different target mean was sought for the PRS.
Table 3. Price Adjustment Schedule for Pavement Smoothness
Prior to Grinding (FDOT, 2000)
│Average Profile Index (PI)│ Contract Unit Price Adjustments, │
│per 0.1-mi Section, in./mi│Percent of Pavement Unit Bid Price │
│ PI[0.2-in] ≤ 3.0 │ 103 │
│ 3.0 < PI[0.2-in] ≤ 4.0 │ 102 │
│ 4.0 < PI[0.2-in] ≤ 5.0 │ 101 │
│ 5.0 < PI[0.2-in] ≤ 6.0 │ 100 │
│ 6.0 < PI[0.2-in] < 7.0 │ 99 │
│ PI[0.2-in] = 7.0 │ 98 │
│ PI[0.2-in] > 7.0 │ Corrective work required │
After-grinding smoothness data for the three previous SR 9A jobs were examined for this purpose. Table 4 shows a statistical breakdown of the measured PI[0.2-in] values for several 0.1-mi (0.16 km)
test segments from each project, while figure 6 shows the corresponding PI[0.2-in] distributions. The weighted average PI[0.2-in] (based on number of 0.1-mi (0.16 km) test segments) for these three
projects was computed to be 2.7 in./mi (42 mm/km), and the pooled standard deviation was computed to be 1.2 in./mi (19 mm/km). Based on these results, the department recommended establishing the PRS
target mean at 3.0 in./mi (47 mm/km) and the target standard deviation at 1.0 in./mi (16 mm/km).
Table 4. Statistics for Smoothness (PI[0.2-In]) Data From Three
Florida Ground Concrete Projects
│ Statistic │Project 1│Project 2│Project 3│
│Number of pairs of 0.1-mi sections │ 33 │ 60 │ 78 │
│Average, in./mi │ 3.0 │ 2.1 │ 3.0 │
│Standard deviation, in./mi │ 1.2 │ 1.3 │ 1.2 │
│Coefficient of variation (COV) ^a │ 0.40 │ 0.65 │ 0.40 │
│1 mi = 1.6 km; 1 in./mi = 16 mm/km │
│^aCOV = standard deviation/average │
Figure 6. Smoothness distributions for three Florida concrete pavement projects.
1 in./mi = 16 mm/km
Summary of Acceptance Quality Characteristics Target Values, Rejectable Quality Levels, and Maximum Quality Levels
Table 5 summarizes the target means and standard deviations established for each AQC for the SR 9A PRS. It also lists the established RQLs and MQLs for each AQC. These values apply to each lot of
concrete pavement.
Table 5. Summary of Target, Rejectable (RQL), and Maximum (MQL)
Quality Levels for the SR 9A Performance-Related Specification
│ │Lot Target Values │ │ │
│ Acceptance Quality Level ├──────┬───────────┤Rejectable Quality Level│Maximum Quality Level│
│ │ Mean │ Std. Dev. │ │ │
│ Slab thickness, in. │12.5 │0.5 │12.0 │13.5 │
│28-day PCC compressive strength, lbf/in^2│4,500 │610 │2,700 │5,500 │
│ PI[0.2-in], in./mi │3.0 │1.0 │5.0 │0.0 │
│PCC = portland cement concrete; 1 in. = 25.4 mm; 1 lbf/in^2 = 6.89 kPa; 1 in./mi = 16 mm/km │
Pavement Performance Indicators and Models
The Florida DOT monitors JPCP performance through annual visual distress surveys and ride quality tests. The distress surveys identify the amount and severity level of up to 10 different surface
distress types, including slab cracking, joint faulting, and joint spalling, that have developed over time. Ride quality is tracked through the loss of smoothness over time; smoothness is measured
with an inertial profiler and is reported in terms of the International Roughness Index (IRI). The collected distress and smoothness data are entered into the department's pavement management system, which is used to
track deterioration rates and predict future conditions and corresponding rehabilitation needs.
For the SR 9A PRS, all four performance indicators-slab cracking, joint spalling, joint faulting, and smoothness-available in PaveSpec 3.0 were selected for predicting pavement service life. In
addition, the PaveSpec default performance models linking the three AQCs (thickness, strength, and smoothness) with the four performance indicators were selected for developing the PRS pay factor curves.
Constant Input Values
Constant inputs represent those PaveSpec parameters that do not differ between as-designed and as-constructed pavements. They include various design, traffic, and climatic parameters, as well as the
maintenance and rehabilitation strategies and costs used to compute LCCs and corresponding pay factor amounts.
Table 6 lists the constant input values established for the SR 9A PRS. Many of these values were defined in the contract plans, while others represent standard values given in the department's rigid
design manual.
Climatic data were derived from two sources: the NOAA 1983 Climatic Atlas of the United States, which includes statistics based on roughly 30 years of U.S. weather data, and the FHWA LTPP database,
which includes weather statistics for thousands of test pavements in the United States and Canada. For this latter source, climatic data from three LTPP test sections in the Jacksonville area and
covering the last 15 to 20 years were analyzed. The climatic values shown in table 6 represent the best estimates for the SR 9A project.
Table 6. Constant Inputs for PaveSpec 3 Defining the SR 9A Project.
Input Parameter Value Source
Project Location and Design Information
Setting Urban Contract plans
Functional class Freeway Contract plans
Directions 2 (EB, WB) Contract plans
Lanes per direction 3 Contract plans
Lane widths 12 ft (14 ft outside) Contract plans
Pavement type Plain, doweled Contract plans
Dowel bar diameter 1.25 in. Contract plans
Joint spacing 16 ft Contract plans
Shoulder type Tied PCC Contract plans
Base type and thickness 48-in. permeable (5x10^-5 cm/sec) Contract plans
Transverse joint seal type Silicone Florida Department of Transportation (DOT) Rigid Design Manual
Design life 20 years Florida DOT Rigid Design Manual
Traffic Information
Initial ADT 28,500 veh/day Contract plans
Traffic growth rate 2.4% (compound) Computed from contract plan ADT estimates (18,100 in 1995; 28,500 in 2000; 37,400 in 2010; 45,800 in 2020)
Directional traffic factor 58% Contract plans
Percent trucks 14% Contract plans
Percent trucks in outer lane 65% Florida DOT Rigid Design Manual
Truck load equivalency factor 1.67 ESALs/truck Florida DOT Rigid Design Manual
Climatic and Materials Information
Mean annual precipitation 51 in. U.S. Climatic Atlas (NOAA, 1983), LTPP database (ERES, 2001)
Mean annual days above 90°F 57 U.S. Climatic Atlas (NOAA, 1983), LTPP database (ERES, 2001)
Mean annual air freeze-thaw cycles 18 LTPP database (ERES, 2001)
Mean annual freezing index 0 LTPP database (ERES, 2001)
PCC modulus of elasticity 4 x 10^6 lbf/in^2 Florida DOT Rigid Design Manual
PCC water/cementitious materials ratio 0.42 Florida DOT Rigid Design Manual
Modulus of subgrade reaction (k) 200 lbf/in^2 Florida DOT Rigid Design Manual
% subgrade material passing #200 14% Florida DOT
1 ft = 0.305 m; 1 in. = 25.4 mm; 1 lbf/in^2 = 6.89 kPa; 1 in./mi = 16 mm/km
Maintenance and Rehabilitation Strategies and Costs
The Florida DOT exercises several different options for maintaining and rehabilitating concrete pavements. They include various concrete pavement restoration activities, such as joint resealing, slab
replacement, edge drain installation, and diamond grinding, and more extensive measures, such as conventional asphalt concrete (AC) overlays and AC overlays over cracked-and-seated PCC.
Based on discussions with key DOT staff, the following maintenance and rehabilitation activities were established for use in the SR 9A PRS:
Maintenance Plan Summary
• Reseal 50 percent of the transverse joints every 20 years.
• Reseal 50 percent of the longitudinal joints every 20 years.
• Reseal 100 percent of the cracks every 20 years.
Localized Rehabilitation Plan Summary
• If the lot average percent cracked slabs exceeds 10 percent, apply full slab replacement to 100 percent of cracked slabs.
• If the lot average percent spalled joints exceeds 10 percent, apply partial-depth repairs to 100 percent of spalled joints.
Sublot Failure Thresholds
• Consider the sublot failed if the cumulative percent of cracked slabs exceeds 15 percent.
• Consider the sublot failed if the average transverse joint faulting exceeds 0.10 in. (2.5 mm).
• Consider the sublot failed if the IRI exceeds 150 in./mi (2,366 mm/km).
• Consider the sublot failed if the cumulative amount of spalled joints exceeds 30 percent.
If 25 percent of the sublots have failed, apply the global rehabilitation procedures listed in table 7.
Table 7. Global Rehabilitation Activities If 25 Percent of Sublots Are Failed
│Global Rehabilitation Activity          │Activities                                                            │
│Prior to Phase I                        │Repair 100% of outstanding spalled joints with partial-depth repairs. │
│                                        │Repair 100% of outstanding cracked slabs with full slab replacements. │
│                                        │Assumed Life: 10 years                                                │
│Phase I (diamond grinding)              │Starting International Roughness Index (IRI): 60 in./mi               │
│                                        │Ending IRI: 150 in./mi                                                │
│                                        │Assumed Life: 10 years                                                │
│Phase II (asphalt concrete [AC] overlay)│Starting IRI: 60 in./mi                                               │
│                                        │Ending IRI: 150 in./mi                                                │
│                                        │Assumed Life: 10 years                                                │
│Phase III (AC overlay)                  │Starting IRI: 60 in./mi                                               │
│                                        │Ending IRI: 150 in./mi                                                │
│                                        │Assumed Life: 10 years                                                │
│Phase IV (AC overlay)                   │Starting IRI: 60 in./mi                                               │
│                                        │Ending IRI: 150 in./mi                                                │
1 in./mi = 16 mm/km
Unit Costs
Unit cost data, shown in table 8, were provided by Florida DOT in 2001 dollars. Definitions for the cost items are shown below.
Table 8. Design Feature Mean Cost Inputs Used in PaveSpec 3.0
│Cost Item                                        │Cost (in 2003 Dollars)                                                      │
│Transverse joint sealing                         │$1.20/ft                                                                    │
│Longitudinal joint sealing                       │$1.00/ft                                                                    │
│Transverse crack sealing                         │$1.00/ft                                                                    │
│Local: Partial-depth repairs of transverse joints│$364.00/yd^2                                                                │
│Local: Full slab replacements                    │$137.76/yd^2                                                                │
│Local: Partial slab replacements                 │$135.00/yd^2                                                                │
│Global: Asphalt concrete overlay                 │$11.00/yd^2                                                                 │
│Global: Portland cement concrete overlay         │$15.00/yd^2                                                                 │
│Global: Diamond grinding                         │$3.01/yd^2                                                                  │
│Percent user cost                                │5 (provides about the right amount of user impact on pay factor)            │
│Estimated bid price                              │$53.00/yd^2 (contractor's bid for 12.5-in. jointed plain concrete pavement) │
1 ft = 0.305 m; 1 yd^2 = 0.836 m^2; 1 in. = 25.4 mm
• Joint/crack sealing-Resealing of transverse and longitudinal joints and sealing of all slab cracks.
• Partial-depth joint repair-Shallow (less than half the slab depth) repairs of spalled joint segments.
• Full-depth slab replacement-Partial, full, or multiple slab removal and replacement with PCC.
• Diamond grinding-Longitudinal grinding of the concrete surface using a diamond-grinding machine.
• AC overlay-Resurfacing of existing pavement with asphalt structural course and a friction course.
Sampling and Testing Methods
As discussed previously, existing department specifications require the following:
• Cores for thickness measurement taken from randomly selected locations, with each core representing no more than 2,500 yd^2 (2,090 m^2) of pavement area.
• Casting and subsequent strength testing of four cylinders representing 1 day's production of PCC.
• Operation of California-type profilograph along each wheel path of each traffic lane longer than 250 ft (76.2 m), with average PI[0.2-in] computed for each 0.1-mi (0.16 km) section based on left
and right wheel path PI[0.2-in] values.
Under the PRS concept, pay adjustments are made on a lot-by-lot basis, with a lot being defined as a discrete quantity of constructed pavement having the same mix design, material sources, and design
characteristics (e.g., joint spacing, drainage, dowel bar size) and subjected to the same climatic, traffic, and support conditions. The size of a lot is one lane in width and between 0.1 and 1.0 mi
(0.160 and 1.61 km) long. Each lot is divided into sublots of approximately equal surface area, and all sampling and testing of concrete AQCs is performed at the sublot level.
For the SR 9A PRS, a minimum sublot length of 250 ft (76.2 m) was established, corresponding to the department's existing procedure for testing smoothness. In each sublot, it was determined that (a)
two core borings be taken at random locations after 3 days for slab thickness measurement, (b) two cylinders be cast from one truck within the sublot and be tested for compressive strength after 28
days, and (c) profilograph traces be taken for each wheel path. This defined sampling frequency is illustrated in figure 7, along with the layout of lots and sublots.
Figure 7. Illustration of lots, sublots, and sampling frequency.
1 mi = 1.61 km
It can be seen that the proposed PRS requires minimal changes to the department's existing sampling and testing procedures. The main requirement is that a complete set of AQCs be taken from each
sublot to facilitate PRS performance projection.
Table 9 shows the testing methods associated with the PRS and FDOT's existing construction specifications for concrete strength, slab thickness, and initial smoothness. The testing methods for
these AQCs are discussed further in the following sections.
Table 9. Testing Methods for the Performance-Related Specification Project
│Acceptance Quality Characteristic│No. of Samples^1│No. of Replicates^1│Sample Method│Evaluation Method│
│Concrete strength                │1               │2                  │ASTM C-31    │ASTM C-39        │
│Slab thickness                   │2               │1                  │ASTM C-42    │ASTM C-42        │
│Smoothness                       │2               │1                  │FM 5-558     │FM 5-558         │
^1 Samples and replicates per sublot.
Concrete strength-The cylindrical specimens shall be molded and cured in accordance with FM 1-T 023 (Making and Curing Test Cylinders) and tested in accordance with FM 1-T 022 (Testing Cylinders),
standard Florida Test Methods. Improper sampling, molding, handling, and curing will be handled according to FDOT's existing specifications.
Slab thickness-Thickness cores shall be a minimum diameter of 2 in. (50.8 mm). The slab thickness at a cored location shall be recorded to the nearest 0.1 in. (2.5 mm) as the average of three caliper
measurements of the core length. The three measurements shall be obtained and marked at locations spaced at approximately equal distances around the circumference of the core.
Initial smoothness-The pavement surface smoothness shall be tested using an electronic model of the California profilograph with 0.2-in. (5.1 mm) blanking band. The smoothness testing shall be
conducted after the concrete cures and grinding have been completed. Pavement profiles shall be taken at the traffic wheel paths (3 ft [0.9 m] from and parallel to each edge of pavement placed at
12-ft [3.66 m] width, or less). When pavement is placed at a greater width than 12 ft (3.7 m), the profile will be taken 3 ft (0.9 m) from and parallel to each edge and each side of the planned
longitudinal joint. When the pavement being constructed is contiguous with an existing parallel pavement that was not constructed as a part of this contract, the profile parallel with the edge of
pavement contiguous with the existing pavement shall not be taken. The profile shall be started and terminated 15 ft (4.6 m) from each bridge approach or existing pavement that is being joined.
Development of Pay Factors for the SR 9A Project
Using the PaveSpec 3.0 software program and the various inputs discussed throughout this chapter, a set of concrete thickness, strength, and smoothness pay factors were developed for use in the SR 9A
project. These resultant pay factors for slab thickness are shown in table 10. These factors are also illustrated in figure 8. The lowest noted pay factor is 93.67 percent for the RQL (12.0 in.
[304.8 mm]) with a high lot standard deviation (2.0 in. [50.8 mm]). When the mean slab thickness reaches the MQL of 13.5 in. (342.9 mm), with an ideal standard deviation of 0.0 in., the pay factor is
104.26 percent. For the target standard deviation, the pay factor between the RQL and the MQL varies 9.09 percent. There is little increase in pay factor for variability less than the target value.
Pay factors for standard deviations below the target value decrease at about twice the rate of pay factor increases for standard deviations above the target. The slab thickness pay factor curves are
fairly flat due to the conservative design of 12.5 in. (317.5 mm) resulting from the AASHTO design procedures.
Table 10. Slab Thickness Pay Adjustment Table (% Pay Factor)
│Lot Mean Slab Thickness, in.│Lot Standard Deviation (computed from independent cores), in.│
│                            │  0.0 │ 0.5^a │  1.0 │  1.5 │  2.0 │
│          12.00             │ 95.67│  95.15│ 94.58│ 94.30│ 93.67│
│          12.25             │ 98.19│  97.80│ 97.20│ 96.63│ 95.84│
│          12.50^b           │100.27│ 100.00│ 99.39│ 98.63│ 97.74│
│          12.75             │101.92│ 101.74│101.15│100.30│ 99.38│
│          13.00             │103.13│ 103.03│102.49│101.64│100.75│
│          13.25             │103.91│ 103.87│103.41│102.65│101.86│
│          13.50             │104.26│ 104.24│103.89│103.33│102.70│
^a Target standard deviation. ^b Target mean.
1 in. = 25.4 mm.
Figure 8. Slab thickness pay adjustment curves.
1 in. = 25.4 mm.
Pay factors for strength are shown in table 11 and figure 9. Obviously, PCC strength plays an important part in long-term pavement performance, particularly on the low side of the target. As a
result, the pay factors at the RQL and MQL with target standard deviations range 50.7 percent, from 57.4 to 108.1 percent. For each incremental change in standard deviation from the target value, the
pay factor changes about two times as fast for higher standard deviations compared with lower standard deviations.
Table 11. 28-Day Compressive Strength Pay Adjustment Table (% pay factor)
│Lot Mean Strength, lbf/in^2│Lot Standard Deviation (computed using means of 2 cylinders), lbf/in^2│
│                           │  100 │  325 │  550 │ 610^a │  775 │ 1,000│
│         2,700             │ 58.63│ 58.13│ 57.55│  57.40│ 57.05│ 56.27│
│         3,000             │ 68.72│ 68.13│ 67.44│  67.27│ 66.86│ 65.94│
│         3,250             │ 76.26│ 75.61│ 74.84│  74.65│ 74.20│ 73.18│
│         3,500             │ 83.03│ 82.32│ 81.49│  81.27│ 80.78│ 79.67│
│         3,750             │ 89.01│ 88.19│ 87.20│  86.95│ 86.32│ 85.09│
│         4,000             │ 94.22│ 93.33│ 92.24│  91.97│ 91.23│ 89.93│
│         4,250             │ 98.65│ 97.73│ 96.60│  96.32│ 95.53│ 94.21│
│         4,500^b           │102.31│101.40│100.28│ 100.00│ 99.20│ 97.91│
│         4,750             │105.18│104.33│103.29│ 103.02│102.25│101.05│
│         5,000             │107.28│106.53│105.61│ 105.38│104.67│103.62│
│         5,250             │108.59│108.00│107.27│ 107.08│106.48│105.62│
│         5,500             │109.13│108.73│108.24│ 108.11│107.67│107.04│
^a Target standard deviation. ^b Target mean.
1 lbf/in^2 = 6.89 kPa.
Figure 9. 28-day compressive strength pay adjustment curves.
1 lbf/in^2 = 6.89 kPa.
Computed surface smoothness PI pay factors are shown in table 12 and figure 10. The range of pay factors between the RQL and the MQL for the target standard deviation is 9.89 percent (93.59 to
103.48). Variability within the range of 0 to 3 in./mi (0 to 47 mm/km) has greater effect on the pay factors. These curves were developed with 5 percent user costs. If a greater amount had been used,
the curves would have been steeper.
Table 12. Surface Smoothness Pay Adjustment Table (% pay factor)
│Lot Mean PI[0.2-in], in./mi│Lot Standard Deviation (computed using means of 2 wheel paths), in./mi│
│                           │  0.0 │ 0.75 │ 1.0^a │  1.5 │ 2.25 │  3.0 │
│          0.0              │104.21│103.61│ 103.48│103.31│103.20│102.70│
│          1.0              │102.91│102.42│ 102.45│102.25│102.11│101.74│
│          2.0              │101.53│101.14│ 101.28│101.08│100.92│100.66│
│          3.0^b            │100.08│100.08│ 100.00│ 99.79│ 99.63│ 99.47│
│          4.0              │ 98.56│ 98.35│  98.35│ 98.35│ 98.25│ 98.16│
│          5.0              │ 97.04│ 97.04│  97.04│ 96.90│ 96.78│ 96.74│
│          6.0              │ 95.38│ 95.38│  95.38│ 95.29│ 95.21│ 95.20│
│          7.0              │ 93.59│ 93.59│  93.59│ 93.57│ 93.54│ 93.54│
^a Target standard deviation. ^b Target mean.
1 in./mi = 16 mm/km.
Figure 10. Surface smoothness pay adjustment curves.
1 in./mi = 16 mm/km
Updated: 04/07/2011
The Internal Representation
Before we start with moves and search trees, we will need an internal representation of the position of our game. For the sake of simplicity, I will define a struct position p in such a way that all
information about the position is included in it. An example for TicTacToe would be:
typedef struct
{
    int board[3][3];
    int color;
} TTTPOSITION;
// use it like this:
TTTPOSITION p;
Here the board is represented by a 3x3 array which will contain the values WHITE or BLACK, and an int color which says which side will move next, again WHITE or BLACK, which are #defined somewhere. A
more sophisticated example would be a chess program which has to take more things into account.
typedef struct
{
    int board[8][8];
    int color;
    bool whitecancastleshort;    /* bool from <stdbool.h> */
    bool whitecancastlelong;
    bool blackcancastleshort;
    bool blackcancastlelong;
    int movestodraw;             /* moves left until the 50-move rule */
} CHESSPOSITION;
Here, the board and the color are again just integers. Additionally, in chess, we must remember if we can castle and how many moves are left until the 50-move-rule kicks in. Actually, this
representation is not complete yet. For instance, we cannot check if this position has occurred three times in the game, and en-passant captures are also not yet built in.
Anyway, I do not want to talk too much about game-specific things, but rather about general algorithms. So let's just assume that we have some kind of representation of the game we want to program.
However, it is important to keep an open mind about the position representation. For instance in checkers one can do much better than using a 8x8-array. Checkers is played on 32 squares and therefore
a position representation with 32-bit words is interesting - use a 32-bit integer for black men, white men, black kings and white kings, respectively. You might ask: what's so good about this? The
reason for doing it is simple: speed. With this kind of representation you can create much faster move generators than with an array. In a later part of this series, you can read how important speed is.
Move generation
Next, we need to be able to generate all legal moves for a certain position. I will assume that I have a function
int makemovelist(POSITION *p, MOVE list[MAXMOVES])
This function generates all possible moves and stores them in the array list. It returns the number of legal moves, n. Note that the position is passed by reference rather than by value, although we
won't need to change anything in the position at all - passing structures by reference is much faster than passing them by value, because only a single pointer needs to be passed instead of all the
real position data. For some games, a movelist with a fixed number of moves is quite appropriate, e.g. for connect 4 where you mostly have 7 possible moves. Sometimes, however, you might be wasting a
lot of space with a fixed-size movelist - to avoid this problem, you can allocate a global move stack for your program. The move stack is just another array of moves, large enough to hold all
movelists for the search path you are looking at. Hopefully this gets clearer in the search tree section below!
DoMove and UndoMove
Now that we can generate moves, we still need functions to do and undo moves. We use
domove(MOVE *m, POSITION *p)
to make a move m in the position p and
undomove(MOVE *m, POSITION *p)
to undo the move again. Obviously, the struct MOVE must contain all information necessary to support these two operations. As always, the structures are passed by reference; in this case it is not
only a speed question: the position will be modified by these two functions.
The Evaluation Function
The last thing we need before we can start tree search is an evaluation function. We are going to look a couple of moves ahead, and at the end of these moves we need to get an evaluation of the
position. This will be
int evaluation(POSITION *p)
The evaluation function will return positive values if the position is good for white and negative values if the position is bad for white in the MiniMax formulation.
Many things could be said about evaluation functions; for me, the two main objectives in designing an evaluation function are speed and accuracy. The faster your evaluation function is, the better,
and the more accurate its evaluation is, the better. Obviously, these two things are somewhat at odds: an accurate evaluation function probably is slower than a 'quick-and-dirty' one. The evaluation
function I'm talking about here is a heuristic one - not an exact one. For games like checkers and chess, there are endgame databases, where positions with few pieces are listed as drawn, won or
lost. This is obviously a completely accurate evaluation function! Normally, we deal with positions which we cannot evaluate correctly. For chess, an evaluation function would consist of a large term
which describes the material balance, and many positional features, like doubled pawns, passed pawns, king safety, piece centralization and whatever else you might think of.
Tree Searching
The simplest way to search the game tree, and also the easiest to understand, is the MiniMax algorithm. It searches all possible moves up to a fixed depth, evaluates all resulting positions and uses
these evaluations to track the score down to the root of the search tree. Here it is:
int minimax(POSITION *p, int depth)
{
   MOVE list[MAXMOVES];
   int i, n, value, bestvalue;

   if(checkwin(p))
      {
      if(p->color == WHITE)
         return -INFINITY;
      else
         return INFINITY;
      }
   if(depth == 0)
      return evaluation(p);
   if(p->color == WHITE)
      bestvalue = -INFINITY;
   else
      bestvalue = INFINITY;
   n = makemovelist(p, list);
   if(n == 0)
      return handlenomove(p);
   for(i=0; i<n; i++)
      {
      domove(&list[i], p);
      value = minimax(p, depth-1);
      undomove(&list[i], p);
      if(p->color == WHITE)
         bestvalue = max(value, bestvalue);
      else
         bestvalue = min(value, bestvalue);
      }
   return bestvalue;
}
The idea here is that both players will try all possible moves in their position and then choose, respectively, the one which makes the value of the position as high as possible (the white side) or
as low as possible (black). I have called one color 'WHITE', this is the side which tries to maximize the value, and the other side tries to minimize the value. You can see that player 'WHITE' starts
with a value of -INFINITY, and then goes on to try every move, and always maximizes the best value so far with the value of the current move. The other player, BLACK, will start out with +INFINITY
and try to reduce this value. Note how I use a function checkwin(p) to detect a winning position during the tree search. If you only check winning conditions at the end of your variation, you can
generate variations where both sides have won, for instance in connect 4 you could generate a variation where first one side connects four, and later the other side does. Also, note the use of
handlenomove(p) - that's what you need to do when you have no legal move left. In checkers you will lose, in chess it's a draw.
If the (average) number of possible moves at each node is N, you see that you have to search N^D positions to search to depth D. N is called the branching factor. Typical branching factors are 40 for
chess, 7 for connect 4, 10 for checkers and 300 for go. The larger the branching factor is, the less far you will be able to search with this technique. This is the main reason that a game like
connect 4 has been solved, that checkers programs are better than humans, chess programs are very strong already, but go programs are still playing very poorly - always when compared to humans.
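To get a feel for how fast N^D grows, the node counts for the branching factors quoted above can be tabulated in a few lines. The Python sketch below uses those rough averages; the numbers are only order-of-magnitude illustrations, not exact counts for any real program:

```python
# Positions visited by a fixed-depth, full-width search: N**D,
# using the rough average branching factors quoted in the text.
factors = {"connect 4": 7, "checkers": 10, "chess": 40, "go": 300}

def nodes(branching, depth):
    return branching ** depth

for game, n in factors.items():
    print(f"{game:10s} depth 6: {nodes(n, 6):,} positions")
```

Already at depth 6, chess needs about four billion positions where connect 4 needs barely a hundred thousand - which is why the branching factor dominates how deep you can search.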
The normal MiniMax code is a bit clumsy, since one side is trying to maximize the value and the other is trying to minimize - therefore, with MiniMax we always have to check if we are the side trying
to maximize or the side trying to minimize. A neat way to get rid of this and to have a simpler function is NegaMax. With the NegaMax algorithm both sides try to maximize all the time. NegaMax is
identical to MiniMax, it's just a nicer formulation. Here's the basic NegaMax code:
int negamax(POSITION *p, int depth)
{
   MOVE list[MAXMOVES];
   int i, n, value, bestvalue = -INFINITY;

   if(checkwin(p))
      return -INFINITY;   /* the side to move has lost */
   if(depth == 0)
      return evaluation(p);
   n = makemovelist(p, list);
   if(n == 0)
      return handlenomove(p);
   for(i=0; i<n; i++)
      {
      domove(&list[i], p);
      value = -negamax(p, depth-1);
      undomove(&list[i], p);
      bestvalue = max(value, bestvalue);
      }
   return bestvalue;
}
You can see that the NegaMax algorithm is shorter and simpler than the MiniMax algorithm. The point is that the call value = -negamax(p,depth-1); takes care of the signs - or nearly. There is one further
modification we must make for this code to work: The evaluation function must be sensitive to the side to move - for a position with white to move it must return its normal evaluation, for a position
with black to move it must return -evaluation.
At first sight, NegaMax is a bit harder to understand than MiniMax, but it's in fact much easier to use. The side to move is always trying to maximize the value. NegaMax is no better or worse than
MiniMax - it's identical. It's just a better framework to use.
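The claim that NegaMax and MiniMax are identical is easy to verify on a toy example. The Python sketch below builds a small hypothetical game tree out of nested lists (leaves are evaluations from WHITE's point of view, with WHITE to move at the root) and checks that both formulations return the same value:

```python
# Toy game tree: interior nodes are lists of children, leaves are
# static evaluations from WHITE's point of view. WHITE moves first.
TREE = [[3, [5, 1]], [6, [2, 9]], [1, 2]]

def minimax(node, white_to_move):
    if not isinstance(node, list):                 # leaf
        return node
    values = [minimax(child, not white_to_move) for child in node]
    return max(values) if white_to_move else min(values)

def negamax(node, sign):
    # sign is +1 when WHITE is to move, -1 for BLACK; negating the
    # leaf value for BLACK is the 'side to move' evaluation rule.
    if not isinstance(node, list):
        return sign * node
    return max(-negamax(child, -sign) for child in node)

assert minimax(TREE, True) == negamax(TREE, +1)    # both give 6
```

The two functions visit exactly the same positions; NegaMax just folds the max/min distinction into a sign flip.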
The major improvement over MiniMax/NegaMax is the AlphaBeta algorithm: Here you realize that you don't have to go through the whole search tree. If you find one winning continuation, you don't have
to look at any others. Similarly, if you have found one continuation which will give you the value V you can stop your search along another continuation if you find only one possibility for your
opponent which gives you a lower score than V. You don't have to look at all the other possibilities your opponent might have - one refutation is enough! Here is the code for AlphaBeta, extending the
earlier NegaMax code: It receives two extra parameters, alpha and beta. They define an interval within which the evaluation has to be. If it isn't, the function will return. Your first call to
AlphaBeta will be with an interval -INFINITY...INFINITY; subsequent recursive calls to the function will make the window smaller.
int alphabeta(POSITION *p, int depth, int alpha, int beta)
{
   MOVE list[MAXMOVES];
   int i, n, value, localalpha = alpha, bestvalue = -INFINITY;

   if(checkwin(p))
      return -INFINITY;
   if(depth == 0)
      return evaluation(p);
   n = makemovelist(p, list);
   if(n == 0)
      return handlenomove(p);
   for(i=0; i<n; i++)
      {
      domove(&list[i], p);
      value = -alphabeta(p, depth-1, -beta, -localalpha);
      undomove(&list[i], p);
      bestvalue = max(value, bestvalue);
      if(bestvalue >= beta)
         break;                        /* beta cutoff */
      if(bestvalue > localalpha)
         localalpha = bestvalue;
      }
   return bestvalue;
}
Note how AlphaBeta receives the parameters alpha and beta, which tell it in what range the value of the current position should lie. Once a move has returned with a higher value than alpha, this best
value is saved in the variable localalpha and used for the next recursive call of AlphaBeta. If the best value is larger than beta, the search terminates immediately - we have found a move which
refutes the notion that this position has a value in the range from alpha to beta, and do not need to look for another one. Note how my AlphaBeta function is returning the highest value it found,
this can be higher than beta. Some people prefer to return beta instead of the best value on a fail high; that formulation of AlphaBeta is known as fail-hard. My formulation above is called fail-soft. The names come from the fact that in fail-hard, the bounds alpha and beta are "hard": the return value cannot be outside the alpha-beta window. It would seem that fail-soft is much more sensible,
as it might lead to more cutoffs: If you can return a higher value than beta (or a lower value than alpha), then perhaps you might get a cutoff in a previous instance of AlphaBeta at a lower level in
the search tree that you wouldn't get otherwise. However, the fail-hard camp says they get less search instabilities when using advanced techniques such as pruning.
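The cutoff itself can be demonstrated on a toy tree. The fail-soft sketch below (Python, mirroring the structure of the C code above, on a small hypothetical tree of nested lists) returns the same root value as a full search but skips one of the six leaves thanks to a beta cutoff:

```python
INF = 10**9
visited = []                 # leaves actually evaluated

def alphabeta(node, sign, alpha, beta):
    if not isinstance(node, list):     # leaf, evaluated for the side to move
        visited.append(node)
        return sign * node
    bestvalue, localalpha = -INF, alpha
    for child in node:
        bestvalue = max(bestvalue, -alphabeta(child, -sign, -beta, -localalpha))
        if bestvalue >= beta:          # refutation found: stop searching
            break                      # fail-soft: bestvalue may exceed beta
        localalpha = max(localalpha, bestvalue)
    return bestvalue

tree = [[3, 5], [6, 9], [1, 2]]
value = alphabeta(tree, +1, -INF, INF)
print(value, len(visited))             # 6 5 - the leaf '2' is never visited
```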
Putting things together
Now that we have all the basic building blocks for a 2-person strategy game program, we can put them all together. There are two issues I have not addressed. First, as written, my tree-searching
functions return only the evaluation of the position, not a move. The other issue is: how deep should we search?
I usually define a function firstalphabeta which is an exact copy of alphabeta, except that it also returns a best move. The second issue is resolved with something called 'iterative deepening':
First we search 1 ply deep, then 2 ply, then 3, and so on. Once the user interrupts the search we return the best move of the last iteration. Of course this can also be automated by measuring the
elapsed time and returning once this is larger than some specified value.
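A minimal iterative-deepening driver could look like the Python sketch below. The names are hypothetical - search stands in for a depth-limited AlphaBeta that returns a value and a best move - but the control flow is the one described: deepen until the time budget runs out, then play the best move of the last completed iteration:

```python
import time

def iterative_deepening(search, position, time_limit):
    """Deepen 1, 2, 3, ... plies; return the best move of the
    last iteration finished before the time budget is spent."""
    deadline = time.monotonic() + time_limit
    depth = 1
    while True:
        value, best_move = search(position, depth)  # completed iteration
        depth += 1
        if time.monotonic() >= deadline:
            break
    return best_move

# A toy stand-in for a real searcher, just to exercise the loop.
def toy_search(position, depth):
    return depth, f"move-at-depth-{depth}"

print(iterative_deepening(toy_search, None, 0.01))
```

A real program would also abort an iteration mid-search when the deadline hits, rather than waiting for it to finish.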
Comments and questions are welcome!
This page was last modified on May 5th, 2005.
Tom Siegfried, Randomness
Rules for computing classical probabilities might depend on quantum randomness
SN Prime | January 21, 2013 | Vol. 3, No. 4
For all the deference to “laws” of nature that supposedly govern everything that happens, the truth is that randomness rules the world.
Everywhere you look, randomness is at work, in all the processes described by the mathematics of probability. The temperature of the air and the capriciousness of the weather all depend on random
collisions of molecules. Computers operate on the principles of information theory, which is rooted in quantifying probabilities. Time rushes onward and disorder replaces order by virtue of the
probabilistic second law of thermodynamics. Randomness determines everything from who gets real medicine in clinical trials to which team gets the ball first at football games.
Yet despite its pervasive importance, randomness has always remained rather mysterious. It’s not easy to define, and nobody has ever articulated very clearly exactly where randomness comes from—at
least not to every scientist’s satisfaction.
There is one surefire source of randomness, though: quantum physics. For atoms and molecules, quantum physics requires randomness that cannot be evaded. An electron might be found in any of a number
of locations; quantum physics can’t tell you where it will be, but does permit computing the odds for the various possibilities.
But it’s hard to see what quantum randomness has to do with randomness in the macroscopic world. Quantum uncertainty in the location of a penny is much smaller than one of the hairs on Lincoln’s
head. It doesn’t seem likely that randomness in the quantum world is relevant to the realm of coins and dice and Wheel of Fortune. For things like that, “classical” probability theory seems to work
well enough. Quantum considerations are ignored.
Unfortunately, though, classical probability has no real claim to validity, except perhaps its success in keeping casinos in business.
“There has not been any systematic validation of purely classical probabilities,” write physicists Andreas Albrecht and Daniel Phillips. Classical probability theory, they say, just quantifies
ignorance about all the factors that determine exactly where the ball will fall in the roulette wheel or when your hand will catch a flipped penny. It doesn’t tell you why that ignorance exists.
So suppose, argue Albrecht and Phillips, that the ignorance quantified by classical probability theory is “rooted in specific physical properties of the world around us.” In that case, “the things we
call ‘classical probabilities’ can be seen as originating in the quantum probabilities that govern the microscopic world.”
Large-scale fluctuations in gases and liquids, for instance, can be traced back to quantum randomness on the molecular level, Albrecht and Phillips contend in a new paper, online at arXiv.org. They
calculate how tiny quantum uncertainties can propagate upward in a larger system. Even in billiards, these calculations show, after only eight collisions quantum uncertainty becomes a factor in
determining which balls will collide next.
Similarly, in flipping a coin, quantum uncertainties at the molecular level can influence why heads and tails turn up at random. If you flipped a coin with a perfect machine, imparting precisely the
same amount of momentum each time, you’d always get the same result. But when you flip a penny with your thumb, you can’t control exactly how many times the coin spins before you catch it. Precise
timing of the initial flip and the catch is limited by your brain’s control of your muscles, which in turn depends on protein molecules operating in nerve cells. Those protein molecules are buffeted
by water molecules with fluctuating frequency stemming from quantum randomness.
“We have a plausibility argument that the outcome of a coin flip is truly a quantum measurement,” write Albrecht and Phillips, of the University of California, Davis. “The 50-50 outcome of a coin
toss may in principle be derived from quantum physics … with no reference to classical notions of how we must ‘quantify our ignorance.’”
Albrecht and Phillips are concerned with probability because of its role in theories that picture the universe as only one in a multiplicity of spaces known as the multiverse. In analyzing multiverse
theories, physicists frequently encounter circumstances where quantum math does not permit probabilistic computation (as with questions such as how many of all possible universes could support life).
In situations where quantum math does not permit probabilities to be computed, physicists resort to classical probability theory. But if classical probabilities are actually quantum in origin, then
it makes no sense to use them, either, if the quantum math says probabilities can’t be calculated. “Our claim is that probabilities are only proven and reliable tools if they have clear values
determined from the quantum state,” Albrecht and Phillips write. Consequently current theories of the multiverse should be regarded with suspicion, Albrecht and Phillips remark. And if quantum
physics really is the basis for all real-life probabilities, no doubt there will be further ramifications of this realization. It might even be a good idea to replace football referees with quantum
automorphism proof
If G is a group for which Aut(G)={1}, prove that $|G| \leq 2$
Caveat: I am new to abstract algebra. I can't think of a slick proof. Essentially I show this case by case.

Case 1: $|G|=1$. Because the order of G is 1, the only element of G is the identity element, G={1}. Furthermore, the only automorphic map $\phi:G\rightarrow G$ is then the identity map.

Case 2: $|G|=2$. Because the order of G is 2, the only two elements of G are the identity element and some other element, G={1,a}. Because every element of a group has its inverse in the group, $a$ must be its own inverse, $a^2 = 1$. Furthermore, because a homomorphism maps identities to identities and inverses to inverses, the only possible homomorphism (and therefore automorphism) is again the identity map.

Case 3: $|G|>2$. We need to think up an automorphism that is not the identity map, and we need to show that this map is an automorphism. That is as far as I got. Sorry, hope this helps some.
My thinking was: for non-abelian groups, conjugation by an element not in the centre. For abelian groups in which not all elements are their own inverses, $\theta (x) = x^{-1}$; and for groups where all
elements are self-inverse, permuting elements should work.
so, we may assume that G is an abelian group and every element $\neq 1$ of G has order 2. looking at G as an additive group, now we can consider G as a vector space over $\mathbb{F}_2.$ if $|G| > 2,$
then $\dim_{\mathbb{F}_2} G > 1.$ let $H=\{g_{\alpha}: \ \alpha \in I \}$ be a basis for G. since $|H| \geq 2,$ we may choose $g_{\alpha} \neq g_{\beta} \in H.$ now define $\varphi:G \longrightarrow G$
by $\varphi(g_{\alpha})=g_{\beta}, \ \varphi(g_{\beta})=g_{\alpha}$ and $\varphi(g_{\gamma})=g_{\gamma},$ for any $\gamma \in I - \{\alpha, \beta \}.$ then $\varphi$ is a non-trivial automorphism of
G. Q.E.D.
stress-energy tensor in curved spacetime
The stress-energy tensor
Any curved manifold can be seen as "locally flat". In GR, in a local region, we can always replace the metric g
with a Minkowskian metric [tex]\eta_{ab}[/tex]. This ties in with the principle of "minimal substitution" as mentioned earlier, so hopefully it's the sense that Hartle had in mind.
We can use the local Minkowskian metric to define the amount of energy and momentum (the energy-momentum 4-vector) contained in a small volume element at any given time by the rules of special
relativity in this locally flat section of the manifold.
I'm not aware of any ambiguities here. The volume elements will be very small, so little "gotchas" regarding the relativity of simultaneity in a volume element shouldn't be an issue. The metric will
naturally define a local coordinate system which can be used to give the specific components of the energy and momentum. This will in general not be an orthonormal coordinate system of course.
The remaining issues boil down to - "how do we naturally specify a volume element on a 4-d manifold"?
The standard way of doing this is to represent a volume in a 4-d manifold as a vector - a vector orthogonal to the volume element. Perhaps this is ambiguous, but I'm not aware of how. It can be
justified a bit more formally in terms of geometric algebra.
Note: the following is optional and a bit advanaced, but I think it's interesting. If you are already familiar with vectors and their duals, one-forms, it can be quite illuminating. But the whole
purpose of this section of the post is only to discuss in more detail how we can represent a volume element by the vector orthogonal to it.
In geometric algebra, one can represent a volume element as the interior region of a parallelpiped formed by three vectors. One can also think of it as the interior region of three one-forms. If you
have MTW, you should have run across the visual image of one-forms as "stacks of plates'.
The idea of a volume element can be expressed as the geometric product, also known as the wedge product
of three one-forms. It's actually an oriented volume. This means that volumes can be represented as three-forms, because the wedge product of three one-forms is a three-form.
Rather than work with the three-forms, we work with their duals (this is the Hodges dual). There is a long complicated discussion in MTW that's not very intuitive, the geometric picture of a dual as
described in
is a lot easier. Every three-form has a unique "hodges dual', which is an ordinary one form. Thus the "natural" way to represent volumes is with a one form (or equivalently, a vector). This is the
Hodges dual of the three form.
Note that there are two types of duality here - there is a duality between one forms and vectors, this is a different sort of duality than the Hodges duality, which is the duality between one-forms
and three-forms.
So, in conclusion - the stress-energy tensor simply takes a vector-valued volume element and converts it into the local energy-momentum 4-vector. It can therefore be regarded as the "density" of
energy and momentum.
MathGroup Archive: September 2011 [00554]
Re: Fittings 2 sets of equations and 2 data sets with nonlinearmodelfit
• To: mathgroup at smc.vnet.net
• Subject: [mg121711] Re: Fittings 2 sets of equations and 2 data sets with nonlinearmodelfit
• From: Ray Koopman <koopman at sfu.ca>
• Date: Mon, 26 Sep 2011 20:06:24 -0400 (EDT)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com
• References: <j5pcf8$8h2$1@smc.vnet.net>
On Sep 26, 1:17 am, JamesE <haywan... at excite.com> wrote:
> Hi,
> My names is James Ellison; and I have 2 nonlinear equations of the
> type:
> y[1]=x+a*x[1]^2+b*x[1]^3 and y[2]=a*x[2]^3+b*x[2]^5 with the real
> valued parameters a and b for both equations.
> I further have 2 data sets
> data[1] for y[1] and
> data[2] for y[2]
> I can use NonlinearModelFit in order to fit each data[i] set to
> its function y[i] and get good fits. But the parameters a and b
> should be the same.
> How can I create a simple code with Mathematica, so that I can do
> a simultaneous fit resulting in the parameters a and b to be the
> same.
> I am a Mathematica beginner and would be pleased, if someone could
> answer my question for beginners.
> Best regards, James
Although your models are nonlinear in x they are linear in the
unknown parameters, and it is the latter that matters here.
Write your model as w = a*u + b*v, where {u, v, w} =
{x^2, x^3, y-x} for data[1] and {x^3, x^5, y} for data[2].
If data[1] and data[2] contain {x,y} pairs then
Join[{#1^2, #1^3, #2-#1}& @@@ data[1],
{#1^3, #1^5, #2}& @@@ data[2]]
will give you a data matrix that can be input to LinearModelFit.
Be sure to specify IncludeConstantBasis->False.
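For readers without Mathematica, the same stacking trick carries over to any least-squares solver. The Python sketch below uses made-up data generated from known parameters (a = 2, b = -0.5, no noise) and solves the 2x2 normal equations by hand, just to show that one shared (a, b) pair fits both stacked models:

```python
# Synthetic {x, y} pairs for the two models, generated from known a, b.
a_true, b_true = 2.0, -0.5
data1 = [(x, x + a_true*x**2 + b_true*x**3) for x in (1.0, 2.0, 3.0, 4.0)]
data2 = [(x, a_true*x**3 + b_true*x**5) for x in (0.5, 1.0, 1.5, 2.0)]

# Stack both data sets as rows (u, v, w) of the single model w = a*u + b*v.
rows  = [(x**2, x**3, y - x) for x, y in data1]
rows += [(x**3, x**5, y)     for x, y in data2]

# Least squares via the normal equations (U'U)(a,b)' = U'w.
suu = sum(u*u for u, v, w in rows)
svv = sum(v*v for u, v, w in rows)
suv = sum(u*v for u, v, w in rows)
suw = sum(u*w for u, v, w in rows)
svw = sum(v*w for u, v, w in rows)
det = suu*svv - suv*suv
a = (suw*svv - svw*suv) / det
b = (svw*suu - suw*suv) / det
print(a, b)                            # recovers 2.0 and -0.5
```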
Box Plots
As with histograms, box plots are very easy to make in R. They are made using the boxplot() function. By default, R orients the box plot vertically; to change this, simply add the argument horizontal
= TRUE to the function. For most simple plots, no additional arguments are required. You may also want to change the size of the R graph window to view the box plot in a more aesthetic scale.
Here is how you would create the box plot in R using the same data from the start of this section.
a = c(1, 11.5, 6, 7.2, 4, 8, 9,
10, 6.8, 8.3, 2, 2, 10, 1)
boxplot(a, horizontal = TRUE)
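The numbers a box plot displays - minimum, quartiles, median, maximum - can be checked directly. Here is a Python sketch on the same fourteen values (only the minimum, median and maximum are shown, since different tools use slightly different quartile conventions):

```python
import statistics

a = [1, 11.5, 6, 7.2, 4, 8, 9, 10, 6.8, 8.3, 2, 2, 10, 1]

# The line inside the box is the median; with no outliers flagged,
# the whiskers reach the minimum and maximum of the data.
print(min(a), statistics.median(a), max(a))    # 1 7.0 11.5
```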
A-level Physics (Advancing Physics)/Light as a Quantum Phenomenon
We have already seen how light behaves like both a wave and a particle, yet can be proven not to be either. This idea is not limited to light, but we will start our brief look at quantum physics with
light, since it is easiest to understand.
Quantum physics is the study of quanta. A quantum is, to quote Wiktionary, "The smallest possible, and therefore indivisible, unit of a given quantity or quantifiable phenomenon". The quantum of
light is the photon. We are not describing it as a particle or a wave, as such, but as a lump of energy which behaves like a particle and a wave in some cases. We are saying that the photon is the
smallest part of light which could be measured, given perfect equipment. A photon is, technically, an elementary particle. It is also the carrier of all electromagnetic radiation. However, its
behaviour - quantum behaviour - is completely weird, so we call it a quantum.
Evidence for the Quantum Behaviour of Light
Dim Photos
The easiest evidence to understand is dim photographs. When you take a photo with very little light, it appears 'grainy', such as the image on the right. This means that the light is arriving at the
camera in lumps. If light were a wave, we would expect the photograph to appear dimmer, but uniformly so. In reality, we get clumps of light distributed randomly across the image, although the
density of the random lumps is higher on the more reflective materials (the nuts). This idea of randomness, according to rules, is essential to quantum physics.
Photoelectric Effect
The second piece of evidence is more complex, but more useful since a rule can be derived from it. It can be shown experimentally that, when light of an adequate frequency falls on a metallic
surface, then the surface absorbs the light and emits electrons. Hence, a current and voltage (between the surface and a positively charged terminal nearby) are produced, which can be measured.
The amount of current produced varies randomly around a certain point. This point changes depending on the frequency of the electromagnetic radiation. Furthermore, if the frequency of the radiation
is not high enough, then there is no current at all! If light were a wave, we would expect energy to build up gradually until an electron was released, but instead, if the photons do not have enough
energy, then nothing happens. This is evidence for the existence of photons.
The Relationship between Energy and Frequency
The photoelectric effect allows us to derive an equation linking the frequency of electromagnetic radiation to the energy of each quantum (in this case, photons). This can be achieved experimentally,
by exposing the metallic surface to light of different colours, and hence different frequencies. We already know the frequencies of the different colours of light, and we can calculate the energy
each photon carries into the surface, as this is the same as the energy required to supply enough potential difference to cause the electron to move. The equation for the energy of the electron is
derived as follows:
First, equate two formulae for power:
$P = \frac{E}{t} = IV\,$
Rearrange to get:
$E = ItV\,$
We also know that:
$Q = It\,$
So, by substituting the previous equation into the equation for energy:
$E = QV = e\Delta V\,$,
where P = power, E = energy, t = time, I = current, V = potential difference, Q = charge, e = charge of 1 electron = -1.602 x 10^-19 C, ΔV = potential difference produced between anode and cathode at
a given frequency of radiation. This means that, given this potential difference, we can calculate the energy released, and hence the energy of the quanta which caused this energy to be released.
Plotting frequency (on the x-axis) against energy (on the y-axis) gives us an approximate straight line, with a gradient of 6.626 x 10^-34. This number is known as Planck's constant, is measured in
Js, and is usually denoted h. Therefore:
$E = hf\,$
In other words, the energy carried by each quantum is proportional to the frequency of the quantum. The constant of proportionality is Planck's constant.
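As a worked check of E = hf, the Python sketch below computes the energy of one photon of 500 THz light and how many such photons an idealised 60 W source would emit per second (real bulbs emit a whole spectrum and waste most of their power as heat, so treat the second number as an upper bound):

```python
h = 6.626e-34                  # Planck's constant in J s

def photon_energy(f):
    """Energy in joules of a single photon of frequency f (in Hz)."""
    return h * f

E = photon_energy(500e12)      # 500 THz, visible light
print(E)                       # about 3.3e-19 J per photon

print(60 / E)                  # about 1.8e20 photons per second from 60 W
```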
1. How much energy does a photon with a frequency of 50kHz carry?
2. A photon carries 10^-30J of energy. What is its frequency?
3. How many photons of frequency 545 THz does a 20W bulb give out each second?
4. In one minute, a bulb gives out a million photons of frequency 600 THz. What is the power of the bulb?
5. The photons in a beam of electromagnetic radiation carry 2.5μJ of energy each. How long should the phasors representing this radiation take to rotate?
Last modified on 10 July 2013, at 06:06