Category Archives: Differential Geometry

Let $\gamma: [0,1]\longrightarrow M$ be a path. Using connection $\nabla$, one can consider the notion of moving a vector in $L_{\gamma(0)}$ to $L_{\gamma(1)}$ without changing it. This is parallel transporting a vector from $L_{\gamma(0)}$ to $L_{\gamma(1)}$. The change is measured … Continue reading

A connection on a line bundle can be defined in a pretty much similar fashion to a connection on a manifold that is discussed here since sections are like vector fields. Let $L\longrightarrow M$ be a line bundle. A connection … Continue reading

A section of a line bundle is like a vector field. It is a map $s: M\longrightarrow L$ such that $s(m)\in L_m$ or $\pi\circ s(m)=m$. A section of a line bundle is one-to-one. Example. For the trivial bundle $L=M\times\mathbb{C}$, every section … Continue reading

Simply speaking, a line bundle is a complex vector bundle such that each fibre $F_x$ is a one-dimensional complex vector space, i.e. a one-dimensional vector space over the complex field $\mathbb{C}$. More specifically, Definition. A complex line bundle over a manifold … Continue reading

Let $M$ be a differentiable manifold of dimension $n$. Consider an atlas $\mathcal{U}=\{U_\alpha\}_{\alpha\in\mathcal{A}}$ along with coordinates $x_\alpha^1,\cdots,x_\alpha^n$ in $U_\alpha$. For $x=(x_\alpha^1(x),\cdots,x_\alpha^n(x))\in U_\alpha$, a tangent vector is given by $$v=\sum_{j=1}^n v_\alpha^j\frac{\partial}{\partial x_\alpha^j}.$$ If $x\in U_\alpha\cap U_\beta$, then $v$ is also written as … Continue reading

In $\mathbb{R}^n$, there is a globally defined orthonormal frame $$E_{1p}=(1,0,\cdots,0)_p,\ E_{2p}=(0,1,0,\cdots,0)_p,\cdots,E_{np}=(0,\cdots,0,1)_p.$$ For any tangent vector $X_p\in T_p(\mathbb{R}^n)$, $X_p=\sum_{i=1}^n\alpha^iE_{ip}$. Note that the coefficients $\alpha^i$ are the ones that distinguish tangent vectors in $T_p(\mathbb{R}^n)$. For a differentiable function $f$, the directional derivative … Continue reading

A fibre bundle is an object $(E,M,F,\pi)$ consisting of the total space $E$; the base space $M$ with an open covering $\mathcal{U}=\{U_\alpha\}_{\alpha\in\mathcal{A}}$; the fibre $F$; and the projection map $E\stackrel{\pi}{\longrightarrow}M$. The simplest case is $E=M\times F$. In this case, … Continue reading

Definition. The dual 1-forms $\theta_1,\theta_2,\theta_3$ of a frame $E_1,E_2,E_3$ on $\mathbb{E}^3$ are defined by $$\theta_i(v)=v\cdot E_i(p),\ v\in T_p\mathbb{E}^3.$$ Clearly $\theta_i$ is linear. Example. The dual 1-forms of the natural frame $U_1,U_2,U_3$ are $dx_1$, $dx_2$, $dx_3$ since $$dx_i(v)=v_i=v\cdot U_i(p)$$ for each … Continue reading

Tensors may be considered as a generalization of vectors and covectors. They are extremely important quantities for studying differential geometry and physics. Let $M^n$ be an $n$-dimensional differentiable manifold. For each $x\in M^n$, let $E_x=T_xM^n$, i.e. the tangent space to … Continue reading

Let $E_1, E_2, E_3$ be an arbitrary frame field on $\mathbb{E}^3$. At each $v\in T_p\mathbb{E}^3$, $\nabla_v E_i\in T_p\mathbb{E}^3$, $i=1,2,3$. So, there exist unique 1-forms $\omega_{ij}:T_p\mathbb{E}^3\longrightarrow\mathbb{R}$, $i,j=1,2,3$, such that \begin{align*} \nabla_vE_1&=\omega_{11}(v)E_1(p)+\omega_{12}(v)E_2(p)+\omega_{13}(v)E_3(p),\\ \nabla_vE_2&=\omega_{21}(v)E_1(p)+\omega_{22}(v)E_2(p)+\omega_{23}(v)E_3(p),\\ \nabla_vE_3&=\omega_{31}(v)E_1(p)+\omega_{32}(v)E_2(p)+\omega_{33}(v)E_3(p) \end{align*} for each $v\in T_p\mathbb{E}^3$. These equations are … Continue reading
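A remark on the last excerpt: because $E_1,E_2,E_3$ is an orthonormal frame, differentiating $E_i\cdot E_j=\delta_{ij}$ shows that these connection forms are antisymmetric,
$$\omega_{ij}(v)=\nabla_vE_i\cdot E_j(p)=-\omega_{ji}(v),\qquad \omega_{ii}=0,$$
so only $\omega_{12}$, $\omega_{13}$, and $\omega_{23}$ carry independent information.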
{"url":"http://www.math.usm.edu/lee/matharchives/?cat=8","timestamp":"2014-04-20T16:51:38Z","content_type":null,"content_length":"34051","record_id":"<urn:uuid:6089454b-b9d3-466b-95e9-3e5bd82f6684>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
minimal $L^2$ norm with $L^1$ norm fixed to one

Maybe this is a stupid question, but I dare it anyway: Let $\Omega$ be some bounded domain in ${\mathbb R}^n$. Then among all $L^1(\Omega)$ functions $f$ of fixed $L^1$-norm one, the constant function $\frac 1 {|\Omega|} 1_{\Omega}$ minimises the $L^2(\Omega)$ norm, as a quick Cauchy-Schwarz argument shows. Question: Is this the only minimizing function or are there others?

Comment: The proof of the Cauchy-Schwarz theorem tells you precisely when equality can occur... which should be enough to answer your question. I wouldn't say it's stupid, but unless I've missed something it is not really appropriate for MO, and would have belonged better on math.stackexchange.com – Yemon Choi Aug 17 '11 at 10:14

(The question was closed as too localized by Yemon Choi, quid, Willie Wong, Bill Johnson, and Ryan Budney on Aug 17 '11 at 16:46.)

Answer: There are lots of others -- think of the function equal to $1/|\Omega|$ on one half of $\Omega$, and to $-1/|\Omega|$ on the other half. However, if you restrict to non-negative functions then it is the only minimizer, as the equality case in Cauchy-Schwarz shows.
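For reference, the Cauchy-Schwarz step can be written out as
$$1=\|f\|_1^2=\left(\int_\Omega |f|\cdot 1\right)^2\le\left(\int_\Omega |f|^2\right)\left(\int_\Omega 1^2\right)=|\Omega|\,\|f\|_2^2,$$
so $\|f\|_2\ge|\Omega|^{-1/2}$, with equality exactly when $|f|$ is almost everywhere constant, i.e. $|f|=\frac{1}{|\Omega|}$ a.e. Any measurable choice of signs then gives a minimizer, which is what the answer's example exploits; restricting to non-negative $f$ forces $f=\frac{1}{|\Omega|}1_\Omega$.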
{"url":"http://mathoverflow.net/questions/73042/minimal-l2-norm-with-l1-norm-fixed-to-one/73043","timestamp":"2014-04-16T04:42:51Z","content_type":null,"content_length":"43727","record_id":"<urn:uuid:11cbecd2-a458-46ae-98f6-c9a8a769f255>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
August 24th 2009, 03:59 PM
The question asks: Suppose that $R > 0$, $x_{0}>0$, and $x_{n+1}= \frac{1}{2}(\frac{R}{x_{n}}+x_{n})$, $n\geq0$. Prove: For $n\geq1$, $x_{n}>x_{n+1}>\sqrt{R}$.
My attempt at a solution: I really don't know how to go about this... at first I tried to show that if $x_{n}>x_{n+1}>\sqrt{R}$ is true then $x_{n}-x_{n+1} > 0$. So that means $x_{0}- \frac{1}{2}(\frac{R}{x_{0}}+x_{0})>0$. My problem is that I don't know how big $R$ is in comparison to $x_{0}$, so unless I restrict $R$ to be less than $x_{0}$ I can't prove that $x_{n}>x_{n+1}>\sqrt{R}$. Basically, I need help with this question... can anyone help? I'm doing a self study by myself so I would appreciate a detailed solution if possible. Please and thank you.

August 24th 2009, 06:27 PM
I believe you are right; in fact if $x_0<\sqrt{R}$ (for instance $x_0=1$ and $R=4$) then the sequence increases in the first terms, $x_1>x_0$, but $x_1>x_2>2$ and then the hypothesis seems to hold (for $n\geq 1$ as stated). However, if $x_0=\sqrt{R}$ we have the constant sequence $(\sqrt{R})$. I conjecture, without doing any more calculations, that this is the unique problem with the hypothesis; we have to require $x_0\neq \sqrt{R}$. I suggest: a) Try to prove the inequality if $x_0>\sqrt{R}$; it seems an easy induction, at least for the first one. b) Try to show using differential calculus that $f(x):=\frac{1}{2}\left(\frac{R}{x}+x\right)$ maps $(0,\sqrt{R})$ into $(\sqrt{R},\infty)$. This would solve the case of "small $x_0$'s" because it would allow you to apply a). I haven't made the calculations but it seems to me the way of attacking it.

August 25th 2009, 04:21 AM
Sorry, I'm not very sure about what you're asking me to do. My point was that $x_{n}>x_{n+1}$ is not true for all $n$ if $R>x_{0}$. So do I restrict $R$ so that this case is satisfied? If I do this then it would seem that the sequence is decreasing at first, but then it would blow up to infinity. The question is puzzling to me.

August 25th 2009, 05:34 AM
My idea is to divide the problem into the following cases:
a) If $x_0>\sqrt{R}$ then $x_0>x_1>\cdots>x_n>\sqrt{R}$ and the sequence tends to $\sqrt{R}$. I believe it follows from the inequality you obtained in the first post, by an easy induction. Observe that the inequality you got is $x_0-x_1=\frac{x_0^2-R}{2x_0}$. If $x_0>\sqrt{R}$ then you have clearly $x_0>x_1>\sqrt{R}$ and you can apply induction. For showing $x_1>\sqrt{R}$ you have to use that $f(x):=\frac{1}{2}\left(\frac{R}{x}+x\right)$ is increasing in $[\sqrt{R},\infty)$ and $f(\sqrt{R})=\sqrt{R}$.
b) If $x_0=\sqrt{R}$ then $x_n=\sqrt{R}$ for all $n$.
c) If $x_0<\sqrt{R}$ then $x_1>\sqrt{R}>x_0$ and then $x_1>x_2>\cdots>x_n>\sqrt{R}$ by the same argument as a). The key for obtaining $x_1>\sqrt{R}$ is to observe that $f(x):=\frac{1}{2}\left(\frac{R}{x}+x\right)$ is decreasing in $(0,\sqrt{R}]$ and $f(\sqrt{R})=\sqrt{R}$.
I mean that $f(x):=\frac{1}{2}\left(\frac{R}{x}+x\right)$, $x\in \mathbb{R}^+$, has a unique attracting point, the minimum, and any iteration goes to this point. The case b) is an exception to your statement since it involves strict inequalities. c) is NOT an exception since you have to show $x_n>x_{n+1}>\sqrt{R}$ for $n\geq 1$. If it is still not clear I will try to write every step in detail, but I don't have the necessary time now.
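One algebraic identity makes both cases transparent: for any $x_n>0$,
$$x_{n+1}-\sqrt{R}=\frac{1}{2}\left(\frac{R}{x_n}+x_n\right)-\sqrt{R}=\frac{\left(x_n-\sqrt{R}\right)^2}{2x_n}\geq 0,
\qquad
x_n-x_{n+1}=\frac{x_n^2-R}{2x_n}.$$
So $x_n\geq\sqrt{R}$ for every $n\geq 1$ no matter what $x_0>0$ is (with equality only when $x_n=\sqrt{R}$), and once $x_n>\sqrt{R}$ the second formula gives $x_n>x_{n+1}$. This is exactly the Babylonian (Newton) iteration for $\sqrt{R}$.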
{"url":"http://mathhelpforum.com/differential-geometry/99114-induction-print.html","timestamp":"2014-04-19T16:05:34Z","content_type":null,"content_length":"17356","record_id":"<urn:uuid:6d745032-157f-4bcd-9cc4-c2a9d2f4a33a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
Applications of Parabolas

6.5: Applications of Parabolas. Created by: CK-12

Kelly is nearly done with the chapter on parabolas in her math class. She has become pretty familiar with the general shape of a parabola, and has started noticing the use of parabolas all around her. One thing that she hasn't figured out yet is why satellite-tv dishes are 3-d parabolas. She is sure it isn't a coincidence, but doesn't know what it is about a parabolic shape that is important. Do you know?

Watch This. This video is a fun example of what can be done with a parabola. Embedded Video: - Green Power Science: 46" Parabolic Mirror

There is a very interesting property of parabolas. This is the fact that all parabolas have the same shape. Or, in the language of geometry, any two parabolas are similar to one another. This means that any parabola can be scaled in or out to produce another parabola of exactly the same shape. This may come across as surprising, because parabolas with different coefficients on $x^2$ can look very different at first glance: [Figure: a. Tall and skinny; b. Short and wide.] But when one of the parabolas is scaled appropriately, these parabolas are identical: [Figure: a. Tall and skinny, zoomed in by 4x; b. Short and wide.]

This fact about parabolas can be seen from an Algebra standpoint given the fact that all parabolas are generated from a line and a point not on that line. This configuration of generating objects, a line and a point, is always the same shape. Any other line and point looks exactly the same—simply zoom in or out until the line and point are the same distance from one another. So the shapes that any two such configurations generate must also be the same shape.

From an analytic geometry standpoint, there were two factors that might affect the shape of the parabola. The first is the distance between the cutting plane and the apex of the cone. But cones have the same proportions at any scale, so no matter what this distance, the picture can be reduced or enlarged, affecting this distance but not the shape of the cone or plane. So this parameter does not actually change the shape of the conic section that results. The other factor is the shape of the actual cone. This is its steepness, defined by the angle at the apex, or equivalently by the ratio between the radius and the height at any point. This is a bit trickier. It's not at all obvious that short, squat cones and tall, skinny cones would produce parabolas of the same shape. [Figure: a. Tall and skinny cone; b. Short and wide cone.]

According to what we found, any parabola produced by slicing any cone resulted in an equation of this form: $y = ax^2$. We want to show that if we generate two such parabolas, they actually have the same shape. So suppose we use two cone constructions and come up with these parabolas: $y = a_1x^2$ and $y = a_2x^2$. Substituting $(fx, fy)$ for $(x, y)$ in the first equation—which is what zooming by a factor of $f$ amounts to—turns $y = a_1x^2$ into $(fy) = a_1(fx)^2$, which simplifies to $y = (a_1f)x^2$. For this to match the second parabola, $y = a_2x^2$, we need $a_1f = a_2$, that is, $f = \frac{a_2}{a_1}$. So one parabola is carried exactly onto the other by a zoom whose factor $f$ is determined by $a_1$ and $a_2$, and since the coefficient $a$ was arbitrary, any two parabolas $y = ax^2$ are similar.

Parabola Applications

Burning Mirrors: Diocles ($\sim 240$–$180$ BC)

For parabolas, since parabolas have only one focus, the directrix plays a role. For the parabola, the optical property is that lines perpendicular to the directrix "bounce off" the parabola and converge at the focus. Or, alternatively, lines from the focus "bounce off" the parabola and continue perpendicular to the directrix. As with the ellipse, "bouncing off" means that the two lines meet the parabola at equal angles to the tangent.
In the above diagram, the optical property states that $\angle\alpha \cong \angle\beta$. The proof uses the points $P$, $Q$, $R$, and $R'$ shown in the diagram, for which $QR = QR'$: this gives $\angle\alpha \cong \angle\gamma$ and $\angle\gamma \cong \angle\beta$, and therefore $\angle\alpha \cong \angle\beta$.

The optical property has some interesting applications. Diocles described one potential application in his document "On Burning Mirrors". He envisioned a parabolic-shaped mirror (basically a parabola rotated about its line of symmetry) which would collect light from the sun and focus it on the focal point, creating enough of a concentration of light to start a fire at that point. Some claim that Archimedes attempted to make such a contraption with copper plates to fight the Romans in Syracuse.

Headlights: The optical property is also responsible for parabola-shaped unidirectional lights, such as car headlights. If a bulb is placed at the focus of a parabolic mirror, the light rays reflect off the mirror parallel to each other, making a focused beam of light.

Cassegrain Telescopes: Satellite telescopes take advantage of the optical property of parabolas to collect as much light from a distant star as possible. The dish of the satellite below is parabolic in shape and reflects light to the point in the middle.

Concept question wrap-up: The satellite dish is a 3-d parabola so that all of the signal it collects over a wide area will be concentrated at the focus of the parabola, significantly increasing the reception.

Example A. Explain why not all ellipses are similar the way parabolas are. While enlarging or shrinking doesn't work to make two ellipses identical, how can you change the view of two ellipses that have different shapes so that they look the same?
The eccentricity of ellipses defines the shape, so when the eccentricity is different for two ellipses, the ellipses are not similar to one another. Viewing one of the ellipses at an angle, however, changes the perceived eccentricity of that ellipse, and the angle can be chosen to match the perceived eccentricity to the eccentricity of the other ellipse, producing an image that is similar to the other ellipse.

Example B. If nothing was used to deflect light before it entered a "burning mirror," where would the sun have to be in relationship to you and the place you want to start a fire? Why is this a constraint? Design a way to circumvent this constraint.
The fire-locale must lie on the segment between you and the sun. This is a problem because to start a ground fire, you would have to wait until evening when the sun is low in the sky so you could aim your lens at the ground; unfortunately, the sun is not as bright in the evening, so you would lose a significant amount of power. A lens or mirror that changes the angle of the sun's rays could help you work around this constraint.

Example C. In the above diagram of a car headlight, the lens directs the beams of light downwards to keep them out of the eyes of oncoming drivers. If that were the only purpose of the lens, the lens could be omitted and the headlight could just be angled down slightly. But there is another purpose to the lens. What is it?
The lens also expands the array of light, which is why it is called "dispersed light." Without the lens, the headlight would only illuminate a strip the width of the headlight itself, which would not be very useful for driving.

A parabolic dish is a bowl with a true parabola cross-section; it has many applications in the real world.
Unidirectional headlights focus light in one (uni-) specific direction. A burning mirror is a highly reflective parabolic dish used to focus sunlight onto a single point, the focus of the parabola.

Guided Practice

A satellite dish antenna is to be constructed in the shape of a paraboloid. The paraboloid is formed by rotating about the x-axis the parabola with focus at the point (25, 0) and directrix x = -25, where x and y are inches. The diameter of the antenna is to be 80 inches.
a) Find the equation of the parabola and the domain of x.
b) Sketch the graph of the parabola, showing the location of the focus.
c) A receiver is to be placed at the focus. The designer has warned that a user or installer should take care: the receiver would hit the ground and could be damaged if the antenna were placed "face-down". Determine algebraically whether this observation is correct.

a) $x = \frac{1}{100} y^2$, with domain $0 \leq x \leq 16$.
c) Yes; as discovered in part a and shown in the graph of part b, the focus is at x = 25, while the dish only extends to x = 16.

1. A football player is standing on a hill that is 200 feet above sea level; he throws the football with an initial vertical velocity of 96 feet per second. After how many seconds will the ball reach its maximum height above sea level? What is the maximum height?
2. A shotgun is discharged vertically upward at a height of 3 feet above the ground. If the bullet has an initial muzzle velocity of 200 feet per second, what maximum height will it reach before it starts to fall to the ground?
a) 628 feet b) 1,878 feet c) 20.87 feet d) 199.33 feet
3. An over-zealous golfer hits a flop shot with a sand wedge to get out of the corner of a sand trap with an initial vertical velocity of 45 feet per second. What is the maximum height that the golf ball will reach?
a) 45 feet b) 13.19 feet c) 36.64 feet d) 95.26 feet
4. You are standing on the top of a 1680 ft tall hill, and throw a small ball upwards. At every second, you measure the distance of the ball from the ground. Exactly t seconds after you throw the object, its height (measured in feet) is $h(t) = -16t^2 + 256t + 1680$. Find $h(3)$.
5. A student participating in a game of kick ball kicked the ball with an initial vertical velocity of 32 feet per second. Its height above the earth in feet is given by $s(t) = -16t^2 + 32t$ for $0 \leq t \leq 2$.
6. An arch over the entrance to an enchanted trail has a parabolic shape; the arch has a height of 25 feet and it is 30 feet between the support pillars. Find an equation that models the arch, using the x-axis to represent the ground of the park. State the focus and directrix.
7. A satellite dish has a parabolic shape with a diameter of 80 meters. The collected TV signals are focused on a single point, called the "focal" point, which is the focus of the paraboloid (the cross-section of the parabola). If the focal length is 45 meters, find the depth of the dish, rounded to one decimal place.
8. When new highways go in, they are often designed with parabolic surfaces which allow water to drain off. A new highway is being laid; it is 32 feet wide and is .4 feet higher in the center of the highway than on the sides.
a) Find an equation of the parabola that models the highway surface (assume that the origin is at the center of the highway).
b) How far from the center of the highway is the surface of the road .1 feet lower than in the middle?
9. The internal distance of the sketch of an arch is 8cm and the height of the arch is 9cm.
Assuming the scale 1cm = 2m, work out a formula which calculates the actual height of the inside edge of this structure (y), in metres, at any horizontal distance x measured from the origin point, which is the floor at the center of the arch.

A new bridge has been built for foot traffic across a river. The two towers on either end of the bridge are 50 feet high and 300 feet away from each other. The supporting cables (2) are connected at the top of the towers and hang in a curve that forms the shape of a parabola. There are vertical cables that connect the walkway to the supporting cables. These cables connect every 15 feet from the walkway up to the supporting cables. At the center of the bridge, the parabola is 5 feet above the walkway. Specialty Cable Company, back east, sells cable for $52.75 per 10 feet with a shipping charge of $300.00 for the entire order. Cables R US, on the West Coast, sells cable for $432.90 per 100 feet with a shipping charge of $350.00 for the entire order. Cable had to be purchased in either 10-foot lengths from SCC or in 100-foot lengths from CRUS. Once purchased, the cables can be cut or welded.
10. Write an equation for the parabola that represents each of the support cables.
11. Determine the number of vertical cables needed.
12. Determine the length of each of the vertical cables.
13. How much does it cost to purchase the needed materials from each company?

Look at the image. It shows the parabolic cross section of a satellite dish antenna. The equation for the parabola is $x = 0.01y^2$.
14. What is the focus of the parabola as shown in the image?
15. A radio signal comes in at y = 20 as marked in red. Identify where the ray strikes the parabola; give coordinates.
16. Identify the angle measure between the incoming ray and a line between the strike point and the focus.
17. An incoming ray and its reflected ray make angles of equal measure with a line tangent to the curved surface. Use the angle you just measured between the signal and the reflected line to calculate the measures of the angles between the incoming signal and the tangent at the point of impact and from the tangent to the reflected ray. Are they the same? Measure with a protractor to confirm your calculations.
18. Find the equation for the tangent line. Use angle C to find its slope.
19. Randomly choose another incoming ray. Calculate the angles as before. Is the incoming ray tangent to the parabola at the point where the incoming ray strikes the graph? How do you know?
20. What can you say about the direction a reflected ray takes whenever the incoming ray is parallel to the axis of the parabola? Can you see why the name "focus" is used and why television satellite antennas and other listening devices are made in the shape of a paraboloid?
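One relation worth keeping at hand for the guided practice and for several of the problems above is the standard focus-directrix form of a parabola opening along the x-axis:
$$x=\frac{1}{4p}\,y^2 \quad\Longleftrightarrow\quad y^2=4px,$$
with focus $(p,0)$ and directrix $x=-p$. For the guided-practice dish, $x=\frac{1}{100}y^2$ gives $4p=100$, so $p=25$ inches, matching the stated focus $(25,0)$. As another illustration of the same relation, a dish with focal length $p=45$ m and a rim at $y=\pm 40$ m has depth $x=\frac{40^2}{4\cdot 45}\approx 8.9$ m.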
{"url":"http://www.ck12.org/book/CK-12-Math-Analysis-Concepts/r4/section/6.5/","timestamp":"2014-04-17T02:37:19Z","content_type":null,"content_length":"142177","record_id":"<urn:uuid:3f76b895-5140-439a-8012-445b33084f29>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
The Orthocenter of a triangle is the point formed by the intersection of its three altitudes (or their extensions). │ │ │ Orthocenter, concurrency of the three altitudes │ Concurrence of the altitudes The three altitudes have to meet in a single point because... Cut-the-knot lists a dozen or more proofs, but the simplest to understand (for me) uses Carnot's Theorem this way: Letting AA', BB', and CC' be the altitudes of ABC, we see that each altitude is a leg of two right triangles, so that CC'^2 + AC'^2 = AC^2, -CC'^2 - BC'^2 = -BC^2, AA'^2 + BA'^2 = AB^2, -AA'^2 - CA'^2 = -AC^2, BB'^2 + CB'^2 = BC^2, -BB'^2 - AB'^2 = -AB^2 Now, just as we did to prove Carnot's theorem, we simply add up the six equations to get AC'^2 - BC'^2 + BA'^2 - CA'^2 + CB'^2 - AB'^2 = 0, and then by Carnot's theorem, the three altitudes concur on a point. Barycentric coordinates of the orthocenter ( (a^2+b^2-c^2)(c^2+a^2-b^2), (b^2+c^2-a^2)(a^2+b^2-c^2), (c^2+a^2-b^2)(b^2+c^2-a^2) ) You can think of the barycentric coordinates of the "weights" of the vertices, so the weighted average of the vertices is the circumcenter. You can calculate the circumcenter from the barycentric coordinates by multiplying each vertex (vector) by the corresponding barycentric coordinate, adding the results, and then dividing by the sum of the barycentric coordinates. If we represent A, B, and C on the coordinate plane by (a,b), (c,d), (e,f) and replace the squares of the lengths of the sides appropriately, we can calculate the orthocenter (h,k) as h = ( a*(((c-e)^2+(d-f)^2)+((a-e)^2+(b-f)^2)-((a-c)^2+(b-d)^2))* (((a-c)^2+(b-d)^2)+((c-e)^2+(d-f)^2)-((a-e)^2+(b-f)^2))+ c*(((a-e)^2+(b-f)^2)+((a-c)^2+(b-d)^2)-((c-e)^2+(d-f)^2))* (((c-e)^2+ (d-f)^2)+((a-e)^2+(b-f)^2)-((a-c)^2+(b-d)^2))+ e*(((a-c)^2+(b-d)^2)+((c-e)^2+(d-f)^2)-((a-e)^2+(b-f)^2))* (((a-e)^2+(b-f)^2)+((a-c)^2+(b-d)^2)-((c-e)^2+(d-f)^2)) ) / ( (((c-e)^2+(d-f)^2)+ ((a-e)^2+(b-f)^2)-((a-c)^2+(b-d)^2))* (((a-c)^2+(b-d)^2)+((c-e)^2+(d-f)^2)-((a-e)^2+(b-f)^2))+ (((a-e)^2+(b-f)^2)+((a-c)^2+(b-d)^2)-((c-e)^2+(d-f)^2))* (((c-e)^2+(d-f)^2)+((a-e)^2+(b-f)^2)- ((a-c)^2+(b-d)^2))+ (((a-c)^2+(b-d)^2)+((c-e)^2+(d-f)^2)-((a-e)^2+(b-f)^2))* (((a-e)^2+(b-f)^2)+((a-c)^2+(b-d)^2)-((c-e)^2+(d-f)^2)) ) which simplifies to h = ( (d-f)b^2+(f-b)d^2+(b-d)f^2+ab(c-e)+cd(e-a)+ef(a-c) ) / (bc+de+fa-cf-be-ad), and k = ( (e-c)a^2+(a-e)c^2+(c-a)e^2+ab(f-d)+cd(b-f)+ef(d-b) ) / (bc+de+fa-cf-be-ad) Other factoids about the orthocenter The orthocenter and circumcenter are isogonal conjugates of one another. If H is the orthocenter of triangle ABC, then... A is the orthocenter of triangle HBC, B is the orthocenter of triangle HCA, and C is the orthocenter of triangle HAB. Together, A, B, C, and H are said to represent an Orthocentric System Why is this true? Consider the six lines formed by points A, B, C, and H. They form three pairs of perpendicular lines: AB is perpendicular to CH, AC is perpendicular to BH, and BC is perpendicular to AH Moreover, if you pick any two of the four points A, B, C, and H, these two points determine a line, and the two points not picked determine its perpendicular. Now, let P, Q, and R be any three points selected from among A, B, C, and H to make triangle PQR, and let S be the fourth point, also selected from among A, B, C, and H. The PQ and RS are perpendicular, so RS is one of the three altitudes of PQR. Similarly, PS and QS are altitudes of PQR, so all three altitudes pass through S, making S the orthocenter of PQR. 
Let ABCH be an orthocentric system, and without loss of generality, assume H is in the interior of triangle ABC. Let H[A], H[B], and H[C] represent the three intersections of perpendicular lines in the system. Now, for your amusement, observe the following cyclic quadrilaterals: │ cyclic │ diameter of │ │ quadralateral │ its circumcircle │ │ HH[C]BH[A] │ HB │ │ HH[B]AH[C] │ HA │ │ HH[A]CH[B] │ HC │ │ ABH[A]H[B] │ AB │ │ BCH[B]H[C] │ BC │ │ CAH[C]H[A] │ CA │ Interestingly, of the seven points A, B, C, H, H[A], H[B], and H[C], for each of the six sets of collinear points, the remaining four are the vertices of a cyclic quadrilateral. The orthic triangle is the triangle formed by the feet of the altitudes, H[A], H[B], and H[C]. The incenter of the orthic triangle H[A]H[B]H[C] is the orthocenter of ABC. Internet references Cut-the-knot: Triangle Altitudes and Orthocenter has a dozen proofs that the altitudes are concurrent, establishing many fun facts along the way! Mathworld: Orthocentric System -- more fun facts about orthocenters and their relationship to the 9-point circle. Wikipedia: Orthocentric system -- additional facts about the orthocentric system. Related pages in this website Other triangle centers: Circumcenter, Incenter, Orthocenter, Centroid. The Orthocenter and Circumcenter of a triangle are isogonal conjugates, and the Incenter is its own isogonal conjugate. Summary of geometrical theorems summarizes the proofs of concurrency of the lines that determine these centers, as well as many other proofs in geometry. Barycentric Coordinates, which provide a way of calculating these triangle centers (see each of the triangle center pages for the barycentric coordinates of that center). Carnot's Theorem -- AC'^2 - BC'^2 + BA'^2 - CA'^2 + CB'^2 - AB'^2 = 0 iff perpendiculars from A', B', and C' concur on a point. Triangle Centers -- a summary of the different "centers" of triangles. The Orthocenter and Circumcenter of a triangle are isogonal conjugates. The webmaster and author of this Math Help site is Graeme McRae.
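The simplified coordinate formulas for (h, k) above drop straight into code. Here is a small illustrative sketch in Python; the function name and the test triangle are ours, not part of the source page.

import numpy as np  # only used if you want array inputs; plain tuples work too

def orthocenter(A, B, C):
    # A, B, C are (x, y) pairs; returns the orthocenter (h, k)
    # using the simplified formulas quoted above.
    a, b = A
    c, d = B
    e, f = C
    denom = b*c + d*e + f*a - c*f - b*e - a*d   # zero exactly when the points are collinear
    h = ((d - f)*b**2 + (f - b)*d**2 + (b - d)*f**2
         + a*b*(c - e) + c*d*(e - a) + e*f*(a - c)) / denom
    k = ((e - c)*a**2 + (a - e)*c**2 + (c - a)*e**2
         + a*b*(f - d) + c*d*(b - f) + e*f*(d - b)) / denom
    return (h, k)

# Quick check: for the triangle (0,0), (2,0), (1,2) the altitude from (1,2)
# is the vertical line x = 1 and the altitude from (0,0) is y = x/2,
# so the orthocenter is (1, 0.5).
print(orthocenter((0, 0), (2, 0), (1, 2)))   # -> (1.0, 0.5)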
{"url":"http://2000clicks.com/MathHelp/GeometryTriangleCenterOrthocenter.aspx","timestamp":"2014-04-17T21:26:28Z","content_type":null,"content_length":"12756","record_id":"<urn:uuid:10168357-0a3c-4dab-b299-74d1fdd2ae62>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
Pythagorean theorem?

May 8th 2008, 02:52 PM #1
Pythagorean theorem? The longer leg of a right triangle measures 5 inches less than the hypotenuse. The shorter leg measures 25 inches. Find the length of the long leg. How would I set this up?

Reply: If the hypotenuse is h and the longer leg is 5 less than h (i.e. the hypotenuse), what is the length of the longer leg in terms of h? Then plug this into the Pythagorean theorem: $a^{2} + b^{2} = c^{2}$, where c is the hypotenuse and a and b are the two legs.

May 8th 2008, 02:55 PM #2
May 8th 2008, 03:06 PM #3
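Carrying the suggested setup through (this working is not part of the original thread):
$$25^2+(h-5)^2=h^2 \;\Longrightarrow\; 625+h^2-10h+25=h^2 \;\Longrightarrow\; 10h=650 \;\Longrightarrow\; h=65,$$
so the longer leg is $h-5=60$ inches. (Check: $25^2+60^2=625+3600=4225=65^2$.)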
{"url":"http://mathhelpforum.com/geometry/37690-pythagorean-theorem.html","timestamp":"2014-04-17T10:45:34Z","content_type":null,"content_length":"34324","record_id":"<urn:uuid:f990943f-d3ce-49b9-bf15-f7b70f342d55>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
In a bucket brigade, each person hands a result to the next. Simply Scheme: Introducing Computer Science 2/e Copyright (C) 1999 MIT Chapter 3 The interaction between you and Scheme is called the "read-eval-print loop." Scheme reads what you type, evaluates it, and prints the answer, and then does the same thing over again. We're emphasizing the word "evaluates" because the essence of understanding Scheme is knowing what it means to evaluate something. Each question you type is called an expression.[1] The expression can be a single value, such as 26, or something more complicated in parentheses, such as (+ 14 7). The first kind of expression is called an atom (or atomic expression), while the second kind of expression is called a compound expression, because it's made out of the smaller expressions +, 14, and 7. The metaphor is from chemistry, where atoms of single elements are combined to form chemical compounds. We sometimes call the expressions within a compound expression its subexpressions. Compound expressions tell Scheme to "do" a procedure. This idea is so important that it has a lot of names. You can call a procedure; you can invoke a procedure; or you can apply a procedure to some numbers or other values. All of these mean the same thing. If you've programmed before in some other language, you're probably accustomed to the idea of several different types of statements for different purposes. For example, a "print statement" may look very different from an "assignment statement." In Scheme, everything is done by calling procedures, just as we've been doing here. Whatever you want to do, there's only one notation: the compound Notice that we said a compound expression contains expressions. This means that you can't understand what an expression is until you already understand what an expression is. This sort of circularity comes up again and again and again and again[2] in Scheme programming. How do you ever get a handle on this self-referential idea? The secret is that there has to be some simple kind of expression that doesn't have smaller expressions inside it—the atomic expressions. It's easy to understand an expression that just contains one number. Numbers are self-evaluating; that is, when you evaluate a number, you just get the same number back. Once you understand numbers, you can understand expressions that add up numbers. And once you understand those expressions, you can use that knowledge to figure out expressions that add up expressions-that-add-up-numbers. Then and so on. In practice, you don't usually think about all these levels of complexity separately. You just think, "I know what a number is, and I know what it means to add up any expressions." So, for example, to understand the expression (+ (+ 2 3) (+ 4 5)) you must first understand 2 and 3 as self-evaluating numbers, then understand (+ 2 3) as an expression that adds those numbers, then understand how the sum, 5, contributes to the overall expression. By the way, in ordinary arithmetic you've gotten used to the idea that parentheses can be optional; 3+4×5 means the same as 3+(4×5). But in Scheme, parentheses are never optional. Every procedure call must be enclosed in parentheses. Little People You may not have realized it, but inside your computer there are thousands of little people. Each of them is a specialist in one particular Scheme procedure. The head little person, Alonzo, is in charge of the read-eval-print loop. 
When you enter an expression, such as (- (+ 5 8) (+ 2 4)) Alonzo reads it, hires other little people to help him evaluate it, and finally prints 7, its value. We're going to focus on the evaluation step. Three little people work together to evaluate the expression: a minus person and two plus people. (To make this account easier to read, we're using the ordinary English words "minus" and "plus" to refer to the procedures whose Scheme names are - and +. Don't be confused by this and try to type minus to Scheme.) Since the overall expression is a subtraction, Alonzo hires Alice, the first available minus specialist. Here's how the little people evaluate the expression: • Alice wants to be given some numbers, so before she can do any work, she complains to Alonzo that she wants to know which numbers to subtract. • Alonzo looks at the subexpressions that should provide Alice's arguments, namely, (+ 5 8) and (+ 2 4). Since both of these are addition problems, Alonzo hires two plus specialists, Bernie and Cordelia, and tells them to report their results to Alice. • The first plus person, Bernie, also wants some numbers, so he asks Alonzo for them. • Alonzo looks at the subexpressions of (+ 5 8) that should provide Bernie's arguments, namely, 5 and 8. Since these are both atomic, Alonzo can give them directly to Bernie. • Bernie adds his arguments, 5 and 8, to get 13. He does this in his head—we don't have to worry about how he knows how to add; that's his job. • The second plus person, Cordelia, wants some arguments; Alonzo looks at the subexpressions of (+ 2 4) and gives the 2 and 4 to Cordelia. She adds them, getting 6. • Bernie and Cordelia hand their results to the waiting Alice, who can now subtract them to get 7. She hands that result to Alonzo, who prints it. How does Alonzo know what's the argument to what? That's what the grouping of subexpressions with parentheses is about. Since the plus expressions are inside the minus expression, the plus people have to give their results to the minus person. We've made it seem as if Bernie does his work before Cordelia does hers. In fact, the order of evaluation of the argument subexpressions is not specified in Scheme; different implementations may do it in different orders. In particular, Cordelia might do her work before Bernie, or they might even do their work at the same time, if we're using a parallel processing computer. However, it is important that both Bernie and Cordelia finish their work before Alice can do hers. The entire call to - is itself a single expression; it could be a part of an even larger expression: > (* (- (+ 5 8) (+ 2 4)) (/ 10 2)) This says to multiply the numbers 7 and 5, except that instead of saying 7 and 5 explicitly, we wrote expressions whose values are 7 and 5. (By the way, we would say that the above expression has three subexpressions, the * and the two arguments. The argument subexpressions, in turn, have their own subexpressions. However, these sub-subexpressions, such as (+ 5 8), don't count as subexpressions of the whole thing.) We can express this organization of little people more formally. If an expression is atomic, Scheme just knows the value.[3] Otherwise, it is a compound expression, so Scheme first evaluates all the subexpressions (in some unspecified order) and then applies the value of the first one, which had better be a procedure, to the values of the rest of them. Those other subexpressions are the We can use this rule to evaluate arbitrarily complex expressions, and Scheme won't get confused. 
No matter how long the expression is, it's made up of smaller subexpressions to which the same rule applies. Look at this long, messy example: > (+ (* 2 (/ 14 7) 3) (/ (* (- (* 3 5) 3) (+ 1 1)) (- (* 4 3) (* 3 2))) (- 15 18)) Scheme understands this by looking for the subexpressions of the overall expression, like this: (+ () ( ; One of them takes two lines but you can tell by ) ; matching parentheses that they're one expression. (Scheme ignores everything to the right of a semicolon, so semicolons can be used to indicate comments, as above.) Notice that in the example above we asked + to add three numbers. In the functions program of Chapter 2 we pretended that every Scheme function accepts a fixed number of arguments, but actually, some functions can accept any number. These include +, *, word, and sentence. Result Replacement Since a little person can't do his or her job until all of the necessary subexpressions have been evaluated by other little people, we can "fast forward" this process by skipping the parts about "Alice waits for Bernie and Cordelia" and starting with the completion of the smaller tasks by the lesser little people. To keep track of which result goes into which larger computation, you can write down a complicated expression and then rewrite it repeatedly, each time replacing some small expression with a simpler expression that has the same value. (+ (* (- 10 7) (+ 4 1)) (- 15 (/ 12 3)) 17) (+ (* 3 (+ 4 1)) (- 15 (/ 12 3)) 17) (+ (* 3 5 ) (- 15 (/ 12 3)) 17) (+ 15 (- 15 (/ 12 3)) 17) (+ 15 (- 15 4 ) 17) (+ 15 11 17) In each line of the diagram, the boxed expression is the one that will be replaced with its value on the following line. If you like, you can save some steps by evaluating several small expressions from one line to the next: (+ (* (- 10 7) (+ 4 1)) (- 15 (/ 12 3)) 17) (+ (* 3 5 ) (- 15 4 ) 17) (+ 15 11 17) Plumbing Diagrams Some people find it helpful to look at a pictorial form of the connections among subexpressions. You can think of each procedure as a machine, like the ones they drew on the chalkboard in junior high Each machine has some number of input hoppers on the top and one chute at the bottom. You put something in each hopper, turn the crank, and something else comes out the bottom. For a complicated expression, you hook up the output chute of one machine to the input hopper of another. These combinations are called "plumbing diagrams." Let's look at the plumbing diagram for (- (+ 5 8) (+ 2 4)): You can annotate the diagram by indicating the actual information that flows through each pipe. Here's how that would look for this expression: One of the biggest problems that beginning Lisp programmers have comes from trying to read a program from left to right, rather than thinking about it in terms of expressions and subexpressions. For (square (cos 3)) doesn't mean "square three, then take the cosine of the answer you get." Instead, as you know, it means that the argument to square is the return value from (cos 3). Another big problem that people have is thinking that Scheme cares about the spaces, tabs, line breaks, and other "white space" in their Scheme programs. We've been indenting our expressions to illustrate the way that subexpressions line up underneath each other. 
But to Scheme, (+ (* 2 (/ 14 7) 3) (/ (* (- (* 3 5) 3) (+ 1 1)) (- (* 4 3) (* 3 2))) (- 15 18)) means the same thing as (+ (* 2 (/ 14 7) 3) (/ (* (- (* 3 5) 3) (+ 1 1)) (- (* 4 3) (* 3 2))) (- 15 18)) So in this expression: (+ (* 3 (sqrt 49) ;; weirdly formatted (/ 12 4))) there aren't two arguments to +, even though it looks that way if you think about the indenting. What Scheme does is look at the parentheses, and if you examine these carefully, you'll see that there are three arguments to *: the atom 3, the compound expression (sqrt 49), and the compound expression (/ 12 4). (And there's only one argument to +.) A consequence of Scheme's not caring about white space is that when you hit the return key, Scheme might not do anything. If you're in the middle of an expression, Scheme waits until you're done typing the entire thing before it evaluates what you've typed. This is fine if your program is correct, but if you type this in: (+ (* 3 4) (/ 8 2) ; note missing right paren then nothing will happen. Even if you type forever, until you close the open parenthesis next to the + sign, Scheme will still be reading an expression. So if Scheme seems to be ignoring you, try typing a zillion close parentheses. (You'll probably get an error message about too many parentheses, but after that, Scheme should start paying attention again.) You might get into the same sort of trouble if you have a double-quote mark (") in your program. Everything inside a pair of quotation marks is treated as one single string. We'll explain more about strings later. For now, if your program has a stray quotation mark, like this: (+ (* 3 " 4) ; note extra quote mark (/ 8 2)) then you can get into the same predicament of typing and having Scheme ignore you. (Once you type the second quotation mark, you may still need some close parentheses, since the ones you type inside a string don't count.) One other way that Scheme might seem to be ignoring you comes from the fact that you don't get a new Scheme prompt until you type in an expression and it's evaluated. So if you just hit the return or enter key without typing anything, most versions of Scheme won't print a new prompt. Boring Exercises 3.1 Translate the arithmetic expressions (3+4)×5 and 3+(4×5) into Scheme expressions, and into plumbing diagrams. 3.2 How many little people does Alonzo hire in evaluating each of the following expressions: (+ 3 (* 4 5) (- 10 4)) (+ (* (- (/ 8 2) 1) 5) 2) (* (+ (- 3 (/ 4 2)) (sin (* 3 2)) (- 8 (sqrt 5))) (- (/ 2 3) 3.3 Each of the expressions in the previous exercise is compound. How many subexpressions (not including subexpressions of subexpressions) does each one have? For example, (* (- 1 (+ 3 4)) 8) has three subexpressions; you wouldn't count (+ 3 4). 3.4 Five little people are hired in evaluating the following expression: (+ (* 3 (- 4 7)) (- 8 (- 3 5))) Give each little person a name and list her specialty, the argument values she receives, her return value, and the name of the little person to whom she tells her result. 3.5 Evaluate each of the following expressions using the result replacement technique: (sqrt (+ 6 (* 5 2))) (+ (+ (+ 1 2) 3) 4) 3.6 Draw a plumbing diagram for each of the following expressions: (+ 3 4 5 6 7) (+ (+ 3 4) (+ 5 6 7)) (+ (+ 3 (+ 4 5) 6) 7) 3.7 What value is returned by (/ 1 3) in your version of Scheme? (Some Schemes return a decimal fraction like 0.33333, while others have exact fractional values like 1/3 built in.) 
3.8 Which of the functions that you explored in Chapter 2 will accept variable numbers of arguments? Real Exercises 3.9 The expression (+ 8 2) has the value 10. It is a compound expression made up of three atoms. For this problem, write five other Scheme expressions whose values are also the number ten: • Another compound expression made up of three atoms • A compound expression made up of four atoms • A compound expression made up of an atom and two compound subexpressions • Any other kind of expression [1] In other programming languages, the name for what you type might be a "command" or an "instruction." The name "expression" is meant to emphasize that we are talking about the notation in which you ask the question, as distinct from the idea in your head, just as in English you express an idea in words. Also, in Scheme we are more often asking questions rather than telling the computer to take some action. [2] and again [3] We'll explain this part in more detail later. BACK chapter thread NEXT Brian Harvey, bh@cs.berkeley.edu
{"url":"http://www.eecs.berkeley.edu/~bh/ssch3/people.html","timestamp":"2014-04-19T22:09:55Z","content_type":null,"content_length":"21269","record_id":"<urn:uuid:acd55eb7-9eb8-463e-8ee4-244f530d1cf8>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Carlsbad, CA Algebra 1 Tutor Find a Carlsbad, CA Algebra 1 Tutor I've been tutoring students since 2008. I am very flexible in my approach to a student. I believe that everyone is capable to learn yet different people require different approaches. 12 Subjects: including algebra 1, chemistry, geometry, biology ...That tutor is me! Art history through the epochs has been both a central and peripheral focus for my honors history-social science undergraduate theses, and my recent 8 years of graduate work/ study experience as a Kinder- grade12 educator. I am highly qualified, and genuinely enthusiastic about... 55 Subjects: including algebra 1, reading, Spanish, English ...Welcome to my WyzAnt profile, and thank you for your interest. Here I have the opportunity to give you a brief overview of my academic credentials, my experience tutoring and teaching, and my overall approach as a tutor. I graduated in 1982 from the University of the City of Manila, Philippines with a Bachelor of Science in Chemical Engineering. 4 Subjects: including algebra 1, algebra 2, trigonometry, prealgebra ...I majored in math as a student, and took the entire Calculus series, and I received A's in all my classes. So you can rely on the fact that I know the material well. You won't find me spending half the time in your book, or frequently getting stuck on problems. 12 Subjects: including algebra 1, calculus, algebra 2, geometry ...These were geared to the instruction of math and science to primarily college-aged students. Specific to elementary-aged students, I have tutored a number of elementary-aged students during high school in math and science. I am recent graduate of Grinnell College with a BA in chemistry. 16 Subjects: including algebra 1, chemistry, biology, organic chemistry Related Carlsbad, CA Tutors Carlsbad, CA Accounting Tutors Carlsbad, CA ACT Tutors Carlsbad, CA Algebra Tutors Carlsbad, CA Algebra 2 Tutors Carlsbad, CA Calculus Tutors Carlsbad, CA Geometry Tutors Carlsbad, CA Math Tutors Carlsbad, CA Prealgebra Tutors Carlsbad, CA Precalculus Tutors Carlsbad, CA SAT Tutors Carlsbad, CA SAT Math Tutors Carlsbad, CA Science Tutors Carlsbad, CA Statistics Tutors Carlsbad, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/carlsbad_ca_algebra_1_tutors.php","timestamp":"2014-04-18T05:42:55Z","content_type":null,"content_length":"24013","record_id":"<urn:uuid:376548c3-60cf-4798-8997-dee9ddfd23f6>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
Bonney Lake Math Tutor Find a Bonney Lake Math Tutor ...I am detail oriented, and very focused on ensuring that whomever I am working with has a comprehensive, worthwhile and enjoyable experience. I have worked as a laboratory chemist and as an instructor at Tacoma Community College for several years. I have also taught high school level sciences and mathematics. 12 Subjects: including geometry, ASVAB, algebra 1, algebra 2 ...I am qualified to tutor Prealgebra, and I am currently tutoring math at levels ranging from Algebra 1 through Calculus. I hold a PhD in Aeronautical and Astronautical Engineering from the University of Washington, and I have more than 40 years of project experience in science and engineering. I... 21 Subjects: including algebra 1, algebra 2, calculus, chemistry ...Be prepared to work hard and score high! Again, I'm willing to set up a free trial session so you can see how I work. This will also give me a chance to see where you are at, and what' holding you back. 16 Subjects: including algebra 2, geometry, ACT Math, algebra 1 With my teaching experience of all levels of high school mathematics and the appropriate use of technology, I will do everything to find a way to help you learn mathematics. I can not promise a quick fix, but I will not stop working if you make the effort. -Bill 16 Subjects: including discrete math, Mathematica, algebra 1, algebra 2 ...I write documentations/recommendations/accommodations for students with Learning Disabilities. I look forward to meeting you, determining what you may need to find success and participating in that process in any way I can. I have after school hours available at my home office in Lakewood, WA. 12 Subjects: including algebra 1, algebra 2, SAT math, geometry Nearby Cities With Math Tutor Algona, WA Math Tutors Auburn, WA Math Tutors Cedarview, WA Math Tutors Edgewood, WA Math Tutors Federal Way Math Tutors Fife, WA Math Tutors Graham, WA Math Tutors Milton, WA Math Tutors Normandy Park, WA Math Tutors Pacific, WA Math Tutors Puy, WA Math Tutors Puyallup Math Tutors South Prairie Math Tutors Spanaway Math Tutors Sumner, WA Math Tutors
{"url":"http://www.purplemath.com/bonney_lake_math_tutors.php","timestamp":"2014-04-19T07:13:08Z","content_type":null,"content_length":"23593","record_id":"<urn:uuid:71775e31-d587-4f53-a4eb-546e931df5cc>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
Quick question on capacitor charge/discharge - diyAudio Thanks guys (except for the snarky response repeating Ohm's law). Ripple current rating is a steady state rating, just like the rated power of a resistor. As an example, look at the actual current levels in a 20,000uF cap on a full bridge rectifier providing a steady 4 amp load current to a 70V rail. At the peak of the rectified 60 cycle source, the peak current into the cap will exceed 12A at 120Hz with a short duty cycle, and the ripple voltage on the cap will be a mere 1.5V. That's a pretty common setup for a power amp. The RMS ripple current is by definition, 4A, but the peak currents are repeated spikes of over 12A. Discharging the cap with a 5 Ohm resistor falls easily within such a scenario. I have no doubt the cap can handle that discharge cycle. It's the resistor that concerns me. The 12A initial current is an instantaneous power level that's quite a bit higher than rated, obviously, but it is very short duration. I was curious if the actual resistive element in a typical 10W wire wound resistor could withstand the initial surge. I guess I'll have to look into using a higher wattage resistor just to be safe.
{"url":"http://www.diyaudio.com/forums/solid-state/227077-quick-question-capacitor-charge-discharge.html","timestamp":"2014-04-18T20:02:53Z","content_type":null,"content_length":"75899","record_id":"<urn:uuid:742b095b-c2ae-437a-9b28-a5935ca218cc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Evaluation of Algebraic Iterative Image Reconstruction Methods for Tetrahedron Beam Computed Tomography Systems International Journal of Biomedical Imaging Volume 2013 (2013), Article ID 609704, 14 pages Research Article Evaluation of Algebraic Iterative Image Reconstruction Methods for Tetrahedron Beam Computed Tomography Systems ^1TetraImaging, 4591 Bentley Drive, Troy, MI 48098, USA ^2Department of Physics, Oakland University, 2200 N. Squirrel Road, Rochester, MI 48309, USA ^321st Century Oncology Inc., 4274 W. Main Street, Dothan, AL 36305, USA ^4Department of Radiation Oncology, William Beaumont Hospital, 3601 W. Thirteen Mile Road, Royal Oak, MI 48073, USA Received 15 February 2013; Revised 2 April 2013; Accepted 2 May 2013 Academic Editor: Habib Zaidi Copyright © 2013 Joshua Kim et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Tetrahedron beam computed tomography (TBCT) performs volumetric imaging using a stack of fan beams generated by a multiple pixel X-ray source. While the TBCT system was designed to overcome the scatter and detector issues faced by cone beam computed tomography (CBCT), it still suffers the same large cone angle artifacts as CBCT due to the use of approximate reconstruction algorithms. It has been shown that iterative reconstruction algorithms are better able to model irregular system geometries and that algebraic iterative algorithms in particular have been able to reduce cone artifacts appearing at large cone angles. In this paper, the SART algorithm is modified for the use with the different TBCT geometries and is tested using both simulated projection data and data acquired using the TBCT benchtop system. The modified SART reconstruction algorithms were able to mitigate the effects of using data generated at large cone angles and were also able to reconstruct CT images without the introduction of artifacts due to either the longitudinal or transverse truncation in the data sets. Algebraic iterative reconstruction can be especially useful for dual-source dual-detector TBCT, wherein the cone angle is the largest in the center of the field of view. 1. Introduction Image-guided radiation therapy (IGRT) is essential to ensure proper dose delivery to the target while sparing the surrounding tissue [1, 2]. Cone beam CT (CBCT) is a popular online imaging modality used for LINAC-based IGRT [3, 4]. Although CBCT is convenient to use, the performance of CBCT systems is less than ideal. The image quality for the CBCT is significantly degraded due to excessive scattered photons [5–8] as well as suboptimal performance of the flat panel detector [9]. These issues limit the use of CBCT for certain advanced radiation therapy techniques such as online adaptive radiotherapy [8, 10]. It is also well known that at large cone angles, there are artifacts caused by using approximate reconstruction methods that appear in CBCT reconstructions [11], but this issue has largely been ignored in IGRT because the scatter and detector issues are the dominant factors in the degradation of CBCT image quality. Tetrahedron beam computed tomography (TBCT) is a novel volumetric CT modality that overcomes the scatter and detector problems of CBCT [12, 13]. A TBCT system is composed of a minimum of one linear source array with one linear detector array positioned opposite and orthogonal to it. 
In TBCT, scattered photons are largely rejected due to the fan-beam geometry of the system. A TBCT system also uses the same high performance detectors that are used for helical CT scanners. Therefore, TBCT should be equivalent to diagnostic helical CT with regard to scatter rejection and detector performance. However, similar to CBCT, the data sufficiency condition [14, 15] is not satisfied with a single axial TBCT scan. TBCT still suffers from the same large cone angle artifacts that are present in CBCT images reconstructed using the conventional Feldkamp-Davis-Kress (FDK)-type approximate filtered backprojection (FBP) algorithm [16]. More importantly, in a TBCT system that is composed of two source arrays and two detector arrays, the cone reconstruction artifact is most significant in the center of the field of view (FOV). Therefore, reducing cone artifacts is more important for this arrangement. Owing to the rapid improvement in computational power, it has become practical to use iterative reconstruction methods in the clinic. Iterative image reconstruction methods have been proven to be capable of reducing imaging dose [17, 18], increasing image resolution [19, 20], and reducing artifacts [21, 22]. Most CT vendors provide different iterative image reconstruction solutions for their diagnostic CT scanners. The algebraic reconstruction technique (ART) [23] and the simultaneous ART (SART) [24] algebraic iterative methods, in particular, have been shown to reconstruct cone beam data with minimal artifacts at large cone angles [21]. In order to further improve TBCT image quality and reduce reconstruction artifacts at larger cone angles, we implemented iterative algebraic reconstruction methods for different TBCT geometries in this study. We evaluated the performance of these algorithms using various numerical phantoms as well as digitally-projected patient images. The patient reconstruction results were then compared to the reconstructed images produced using a fan-beam reconstruction method that was considered to be the ground truth for this study. 2. Material and Methods 2.1. TBCT Geometries The TBCT system geometry is flexible enough to incorporate multiple source and detector arrays if the need arises. Figure 1 shows a comparison of the geometries for the single-source single-detector TBCT system and for the dual-source dual-detector TBCT system. With the dual-source dual-detector geometry, the length of the detector and source arrays can be reduced while still being able to achieve the same FOV. However, for a TBCT system that uses two detector arrays, the approximate reconstruction artifacts would be most prominent in the central transverse plane of the image instead of at the top and bottom of the image. Therefore, reducing the cone artifact is especially important for the dual-source dual-detector TBCT system. 2.2. Algebraic Reconstruction Algorithms Detector projection measurements during a CT scan are represented by the linear system equation $$A\mathbf{x}=\mathbf{b},\qquad(1)$$ where $\mathbf{x}\in\mathbb{R}^{N}$ represents the image to be reconstructed and $\mathbf{b}\in\mathbb{R}^{M}$ represents the measured projection data. $M$ and $N$ are the total number of line integral measurements and the total number of image voxels, respectively. The system matrix $A$ has matrix elements $a_{ij}$ that map the image voxel $x_j$ onto the projection measurement $b_i$. In iterative reconstruction, the image voxel values are treated as unknowns in the system of equations given by (1).
For a 3D CT scan, the dimensions of the system matrix are enormous. Both $M$ and $N$ can be on the order of hundreds of millions. To calculate the elements of the system matrix, we implemented the distance-driven method introduced by De Man and Basu [25]. For this method, the boundaries of the detectors and voxels are mapped onto a common plane. The lengths of overlap of the detector and voxel boundaries along each of the axes of the plane are then calculated. These two values are then multiplied together to determine the value of the system matrix element. This system of equations cannot be solved directly due to the ill-posedness of the problem, the noise in the data, and the immense size of the system matrix, but it can be solved iteratively using an algebraic approach. Iterative methods begin with an initial guess of the image voxel values, which is then forward projected using the system matrix to produce an estimate of the projection data. The differences between the estimated and measured projection data are calculated and used to determine correction terms, which are then backprojected onto the image. This process is repeated until some convergence criterion has been satisfied or a preset number of iterations has been completed. For this study, we have chosen to implement the well-known SART algorithm [24], which has been shown to converge to the weighted least squares solution from any initial guess [26]. It has also been demonstrated in previous studies that the convergence of algebraic methods can be improved by varying the order in which projections are processed [27], and so we implement SART both with and without the multilevel access ordering scheme (MAS) developed by Guan and Gordon [27].

2.2.1. Simultaneous Algebraic Reconstruction Technique

The forward projection of each measurement, $\hat{p}_i = \sum_{l=1}^{N} a_{il}\,x_l$, is calculated and then compared to the measured projection value $p_i$. The difference between these values is then weighted and backprojected over the image. For the SART algorithm, all projection measurements collected at a single projection image are used to simultaneously update each image voxel value. The update term is given by

$$x_j^{(k+1)} = x_j^{(k)} + \lambda\,\frac{\sum_{i=i_1(k)}^{i_2(k)} \frac{a_{ij}}{\sum_{l=1}^{N} a_{il}}\left(p_i - \sum_{l=1}^{N} a_{il}\,x_l^{(k)}\right)}{\sum_{i=i_1(k)}^{i_2(k)} a_{ij}}, \qquad (2)$$

where $k$ is the update step, $j$ is the image voxel index, $i$ is the projection data index, $\lambda$ is the relaxation parameter, and $i_1(k)$ and $i_2(k)$ are the indices of the first and last projection data elements used for the $k$th update step. These values are defined by $i_1(k) = (k-1)D + 1$ and $i_2(k) = kD$, where $D$ is the number of projection data elements that make up a single projection image. One iteration is completed after all projection images have been used to update the image. The image converges to a stable solution after a few iterations. The relaxation parameter value chosen for this study was 0.08, which was selected by trial and error and falls within the range suggested in the literature [28, 29].
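As a minimal illustration of the update rule in (2), the R sketch below performs one SART iteration with a small dense system matrix. The function and variable names (A, p, x, n_per_view, view_order) are our own; a real TBCT system matrix would be far too large to store densely and would instead be generated on the fly, for example with the distance-driven method [25].

```r
# One SART iteration over all projection images (dense system matrix, for illustration only).
# A: M x N system matrix, p: measured projection data (length M),
# x: current image estimate (length N), lambda: relaxation parameter,
# n_per_view: number of projection data elements per projection image (D in the text),
# view_order: order in which projection images are processed
#             (sequential by default; the MAS permutation described in the next subsection can be used instead).
sart_iteration <- function(A, p, x, lambda = 0.08, n_per_view,
                           view_order = seq_len(length(p) / n_per_view)) {
  row_sums <- pmax(rowSums(A), .Machine$double.eps)     # sum_l a_il for each measurement i
  for (k in view_order) {
    idx <- ((k - 1) * n_per_view + 1):(k * n_per_view)  # i_1(k) .. i_2(k)
    Ak  <- A[idx, , drop = FALSE]
    res <- p[idx] - as.vector(Ak %*% x)                 # p_i - sum_l a_il x_l
    num <- colSums(Ak * (res / row_sums[idx]))          # sum_i a_ij (p_i - ...) / sum_l a_il
    den <- colSums(Ak)                                  # sum_i a_ij
    upd <- ifelse(den > 0, num / den, 0)
    x   <- pmax(x + lambda * upd, 0)                    # relaxed update with non-negativity
  }
  x
}
```

Repeated application of this function corresponds to the SART iterations reported in the results section.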
2.2.2. Multilevel Projection Ordering Scheme

The MAS ordering scheme was developed for algebraic reconstruction algorithms in order to minimize the correlation between sequential projection images that are used to update the image [27]. This leads to an improvement in the convergence speed of the algebraic methods. This method has been evaluated and compared against alternative ordering schemes and has been shown to provide the greatest benefit in improving the efficiency of the reconstruction algorithms [30]. For a system with $N_p$ projection views ordered sequentially as $0, 1, \ldots, N_p - 1$, the scheme determines the number of levels according to $\log_2 N_p$. If $N_p$ is not a power of two, then one is added to the (truncated) number of levels. The levels are ordered so that any two sequential views are chosen for maximum orthogonality between them. The order of the indices in the first level is set as $0$ (0°) followed by $N_p/2$ (90°). The second level again has two elements, and the indices are set as $N_p/4$ (45°) followed by $3N_p/4$ (135°). The third level contains four indices chosen in the same manner, and the process is repeated until all levels are complete. The value of an index is rounded down to the nearest integer if the division results in a decimal. This scheme was originally developed for a set of projections that covers the range 0 to 180°. For a projection set that covers a full rotation, the scheme is used to calculate the order for the projections that cover the first 180°; the indices for the set of projections that cover the 180 to 360° range can then be found by adding 180 to the set of indices covering the first 180°. No change needs to be made to (2) when using the MAS ordering scheme. To implement the MAS scheme, only the indices $i_1(k)$ and $i_2(k)$ need to be redefined so that $i_1(k) = (\sigma(k)-1)D + 1$ and $i_2(k) = \sigma(k)D$, where $\sigma(k)$ is the projection view index determined according to the MAS scheme.

2.2.3. Image Reconstruction for Dual-Source Dual-Detector TBCT

In the dual-source dual-detector configuration, four projection images are generated at each rotation angle. Each of the projection images collected at a given rotation angle is truncated both longitudinally and transversely, as can be seen in the diagram shown in Figure 1(b). This leads to the center region of the FOV being covered by more than one source array-detector array pair. Sequentially backprojecting equally weighted correction terms calculated from each of the four projection images will cause artifacts. Instead, the correction terms from each of the four projection images are first weighted and then simultaneously backprojected onto the image. The weights applied to the correction terms for image voxel $j$ are given by (3), where $w_{sd}(j)$ is the weight applied to the update term from the projection set collected using source array $s$ and detector array $d$, $t_j$ is the transverse position of voxel $j$ in the rotated reference frame, $z_j$ is the longitudinal position of voxel $j$, $c_t$ is a constant used to vary the rate at which the transverse contribution fades out, and $c_z$ is the constant used to vary the rate at which the longitudinal contribution fades out. The update term for the SART reconstruction method using the dual-source dual-detector configuration is therefore a weighted sum of the single-pair updates of (2), where $w_{sd}(j)$ is the weighting factor defined by (3), $a_{ij}$ is an element of the system matrix $A$, $p_i$ is an element of the measured projection data set $p$, and $\hat{p}_i$ is an element of the estimated projection set calculated using the system matrix.

2.3. Evaluation Method

2.3.1. System Parameters

We employed the same geometry that was used in our TBCT benchtop system [13]. The reconstructed images had an isotropic voxel size of 1 mm. A total of 360 projections were generated at one degree intervals. For a TBCT system that incorporates a multirow detector array, each TBCT projection is a 3D matrix whose dimensions correspond to the number of sources, the number of detector columns, and the number of detector rows. Therefore, the TBCT projection data dimensions for our system, containing 75 field emission X-ray sources and five detector rows with 275 detector columns per row, were 75 × 275 × 5 per projection angle. The X-ray source spacing was 4 mm, and the isotropic detector pixel size was 2.54 mm.
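Before turning to the reconstruction experiments, the multilevel access ordering of Section 2.2.2 can be sketched in R as follows. This bit-reversal formulation is our own illustrative realization of the multilevel idea (it reproduces the 0°, 90°, 45°, 135°, ... pattern described above); the exact ordering within deeper levels should be checked against Guan and Gordon [27].

```r
# Multilevel access scheme (MAS) view ordering -- illustrative sketch.
# The levels are realized by bit-reversing the view fraction; n_views covers 0-180 degrees.
mas_order <- function(n_views) {
  n_levels <- ceiling(log2(n_views))      # one level is added when n_views is not a power of two
  n_pad    <- 2^n_levels
  frac <- vapply(0:(n_pad - 1), function(i) {
    bits <- as.integer(intToBits(i))[1:n_levels]   # least significant bit first
    sum(bits / 2^(1:n_levels))                     # bit reversal expressed as a fraction in [0, 1)
  }, numeric(1))
  unique(floor(frac * n_views)) + 1       # 1-based view indices; view j corresponds to (j - 1) degrees
}

half_order <- mas_order(180)              # views at 0, 90, 45, 135, 22, 112, 67, 157, ... degrees
full_order <- c(half_order, half_order + 180)  # second half-rotation: same pattern shifted by 180 degrees
```

The resulting permutation can be passed directly as view_order to the SART sketch above, which is all that is needed to implement the redefined indices $i_1(k)$ and $i_2(k)$.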
These projections were reconstructed using a modified FDK filtered backprojection algorithm and the SART algorithm, both with and without the MAS ordering scheme.

2.3.2. Phantom

The three-dimensional Shepp-Logan phantom [31] was used to test the performance of the reconstruction algorithms. The parameters were taken from this reference, except that the density values of the ellipsoids were magnified to increase the contrast. Patient projection data were also generated by forward projecting the CT image of a real patient. The same matrix that was used for image reconstruction was also used to forward project the patient image for generation of the patient projection set.

It has been shown that the use of the SART algorithm can mitigate the large cone angle artifacts that are produced when using approximate reconstruction methods such as the FDK algorithm [11]. In order to test the effectiveness of our modified algebraic reconstruction methods at reducing the cone angle artifacts, a numerical Defrise-like phantom was created [32]. The seven identical, longitudinally stacked ellipses of uniform density that compose this phantom provided a cone angle of 20 degrees. The phantom was positioned at the isocenter, which was set to be equidistant from the source and detector positions. The distance from the source to the detector was set at 64 cm. The linear system of equations for the disk phantom has a very low rank due to the longitudinal symmetry of the phantom. We believe that the cone artifacts appearing in the reconstructions are exaggerated by the atypical geometry of the disk phantom; in reality, the symmetry and shape of the disks do not appear in regular patients' images. To test the cone artifact using a phantom with a different configuration, we created a phantom in which each disk was replaced by a set of nine small spheres. In this configuration, there is one central sphere and eight spheres equally spaced in a circular pattern around it, as shown in Figure 2. Five sets of these sphere configurations were stacked longitudinally at equal intervals and together provided the same 20 degree cone angle that was provided by the disk phantom. Similar to the disk phantom, the sphere phantom is also longitudinally symmetric, but while the disk phantom generated identical projection images at every projection angle, the sphere phantom generated data that varied sinusoidally as a function of projection angle. Figure 3 compares the sinograms of the central slices for the two phantoms. The disk and sphere projection data were generated using the same method that was used to generate the Shepp-Logan projection data.

2.3.3. Image Evaluation Metrics

The figures of merit (FOM) chosen for quantitative evaluation of the reconstructed images are the relative root mean square error (RRME) and the square Euclidean distance, both defined with respect to a reference image.
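The exact normalizations of these figures of merit are not reproduced here, so the following R helpers are only a plausible reading of the definitions, comparing a reconstruction x to a reference image x_ref:

```r
# Figures of merit comparing a reconstruction x to a reference image x_ref
# (both given as numeric arrays of identical dimensions).
sq_euclidean <- function(x, x_ref) sum((x - x_ref)^2)
rrme <- function(x, x_ref) sqrt(mean((x - x_ref)^2)) / sqrt(mean(x_ref^2))
```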
3. Results

3.1. Evaluation of the Algebraic Reconstruction Methods

We first tested the SART algorithms with data created using the single-source single-detector TBCT geometry. SART reconstructions of a transverse slice of the Shepp-Logan phantom after 1, 5, 10, and 15 iterations are displayed in Figure 4. The results are given for the SART method both with and without the MAS ordering scheme. The numerical phantom image and the FDK reconstruction are also displayed for comparison.

In Figure 5, the convergence rates of the algebraic methods are compared using the square Euclidean distance and the RRME. The FOM values for the FDK reconstruction are displayed as reference lines on the graphs. Both metrics indicate that the algebraic methods achieved their best results between four and six iterations and that the FOM values for the algebraic methods were comparable to the FDK results in that region. The SART method converged at approximately the same rate whether or not the MAS ordering scheme was used, but the SART method using the MAS scheme clearly gave better initial results. Five iterations of the SART method with the MAS ordering scheme were chosen as providing the best balance between convergence speed and image quality. The reconstructions using the FDK and SART with MAS algorithms were then compared to the original phantom by evaluating line profiles taken through different views of the image. As shown in Figure 6, the line profile results from the sagittal and coronal views show good agreement between the reconstructed images and the numerical phantom.

In the images reconstructed using the SART algorithm, ringing artifacts can be seen at both the top and bottom of the phantom in both the coronal and sagittal views. This artifact is a result of using a cubic voxel discretization of the image space [24, 33–35]. Because of the discretization of the continuous object, the modeled object edges are blurred, and therefore the projection values calculated after forward projecting the image voxels will not match the measured values. Ringing artifacts then result in areas with very high image gradients because the corrections to these voxels are incorrectly weighted, and the resulting overshoot and undershoot in the voxels at the edges propagate to the neighboring voxels, which are kept under a non-negativity constraint during further iterations. This effect is usually controlled by selecting a number of iterations that qualitatively provides the best tradeoff between edge sharpness and ringing artifact. The simplest way to mitigate this effect is to use a finer grid size [34], but this would increase the computational expense of the algorithm. Other possible methods to reduce these artifacts include the use of a spherically symmetric basis for the voxels instead of the conventional cubic basis [33, 36] and the use of a smoothing method during reconstruction [35].

A pig's head was scanned using our TBCT benchtop system, and the projections were reconstructed using the FDK and SART algorithms. With no ground truth image available, the reconstructions were evaluated qualitatively. Based on the results for the Shepp-Logan phantom, we used five iterations of the SART algorithm with the MAS projection ordering scheme. A transverse image of the pig's head reconstructions is displayed in Figure 7 for comparison. There is close visual agreement between the images reconstructed using the different methods, with slightly better contrast seen in the SART images.

3.2. Evaluation of the Cone Artifact

Projection images of the Defrise-like phantom were generated for both CBCT and TBCT geometries. Coronal images of the original phantom as well as reconstructions produced using the regular FDK method for CBCT, the modified FDK method for TBCT, and the SART with MAS method for TBCT are shown in Figures 8(a)–8(d).
Line profiles taken through the center and edges of the coronal image are displayed in Figures 8(e) and 8(f), respectively. When using the FDK method, cone artifacts appeared in the reconstructions at the larger cone angles for both the TBCT and the CBCT geometry. By contrast, the reconstructions produced using the SART algorithm did not suffer from large cone angle artifacts. There was, however, slight elongation of the disks along the longitudinal axis and a corresponding drop in CT values at the edges of the disks, though not to the extent experienced by the FDK reconstructions.

The sphere phantom was reconstructed using the FDK and SART with MAS algorithms that were modified for use with the TBCT system. The TBCT system dimensions that were used for the disk phantom were also used here. Figure 9 shows the coronal views of the FDK and SART reconstructions. Line profiles were taken through the central column of spheres to verify that there is neither elongation nor decay in CT values for the spheres along the longitudinal axis. A slight elongation of the spheres could still be observed in the FDK reconstruction but not in the reconstructed images produced by the SART algorithm. To check that the CT values were constant at the borders of the image, a line profile was also taken through a side column of spheres. The results were generally consistent with those obtained from the line profile through the central column, but the FDK reconstructions showed a slight drop in CT values toward the edges.

We further tested the reconstruction methods using a patient image that was originally reconstructed using a diagnostic CT scanner. We generated TBCT projection data using the system geometry parameters of our benchtop TBCT system. Because the inherent resolution of the original patient image would differ from that of the TBCT reconstruction, due to the differences between the scanning geometry used to create our projections and the scanning geometry of the diagnostic CT scanner, we generated a projection set using a fan-beam geometry that had exactly the same system dimensions as the central plane of our TBCT system. The resulting fan-beam reconstruction had the same inherent resolution as our TBCT system and could therefore be used for comparison with the TBCT reconstructions. The projections were reconstructed using the fan-beam filtered backprojection (FBP) algorithm, the FDK algorithm modified for the TBCT geometry, and the SART with MAS algorithm for TBCT. Figures 10(a) and 10(d) show the transverse and coronal images, respectively, of the FBP algorithm using the simulated CT data. The cone angle to the outermost slices was 25°. Figures 10(b) and 10(e) show the transverse and coronal images, respectively, reconstructed using the modified FDK algorithm on the simulated TBCT data, and Figures 10(c) and 10(f) show the same two images after five iterations of the SART algorithm, also using the simulated TBCT data. No elongation is apparent in the images produced using either the FDK or the SART algorithm. The coronal images demonstrate that the TBCT reconstructions do not show any noticeable elongation for objects at higher cone angles during a patient scan. These results are consistent with the testing performed on the sphere phantom.

3.3. Iterative Reconstruction for Dual-Source Dual-Detector TBCT

The dual-source dual-detector TBCT configuration is preferable for image-guided radiotherapy.
However, for this configuration, the maximum cone angles are in the region of the central axis, and therefore the large cone angle artifacts will be most significant at the center of the reconstructed image. Moreover, the detector and source arrays do not cover the full FOV, so the data is truncated both longitudinally and transversely. We used the Shepp-Logan phantom and clinical CT images to test the performance of the FDK and SART algorithms that were modified for the dual-source dual-detector TBCT geometry. The detector array was modified accordingly, and the size of the detectors was kept the same; the reconstructed images used an isotropic voxel length of 1 mm. We performed five iterations of the SART algorithm while using the MAS scheme and compared the results with those of the modified FDK algorithm. As seen in Figure 11, the image was reconstructed without the addition of significant artifacts that would have been caused by either longitudinal or transverse truncation.

To further evaluate the performance of the algorithms at large cone angles, we used the same disk and sphere phantoms defined above. The system parameters that were used to generate the data sets were kept constant. The reconstruction results are shown in Figure 12. As expected, the cone angle artifact in the FDK reconstruction increased towards the center of the image, where the angle was largest, while no artifact was observed in the SART reconstruction. There was a slight elongation of the disks around the central slice for both algorithms, but it was more pronounced in the FDK reconstruction. The reconstructions of the sphere phantom showed neither elongation nor cone artifacts when using the SART algorithm, but there was a slight elongation of all the spheres in the reconstructions produced by the FDK algorithm.

Similarly, a diagnostic CT image was then used to create the four simulated projection sets that would be produced by a dual-source dual-detector TBCT system. As shown in Figure 13, no significant artifacts are introduced into the image by the transverse truncation. Because the inherent resolution of the patient image would again have been different due to the use of different scanning parameters, we again used the same system parameters that were used for the central plane of our benchtop system to generate a fan-beam projection set. The spatial resolution of the image reconstructed using this new fan-beam projection set was comparable to that provided by the SART reconstruction.

4. Discussion and Conclusion

In this paper, the FDK and SART methods were implemented for the TBCT geometry. Data generated using numerical phantoms and clinical CT images, as well as data collected using our TBCT benchtop system, were reconstructed with these modified methods. The accuracy of the FDK and SART reconstructions was evaluated using the square Euclidean distance and the relative root mean square error FOMs. For small cone angles, the algebraic SART methods for both TBCT geometries were able to provide image quality comparable to that of the analytical FDK algorithm. For large cone angles, use of the algebraic image reconstruction algorithms significantly reduced the cone artifacts that were especially prominent in the FDK phantom reconstructions.
This was especially important for the dual-source dual-detector TBCT geometry, as the cone artifacts were most prominent in the center of the image for this geometry. The results given by implementing the SART method are promising, and the algorithm may be implemented in future TBCT systems. The use of model-based statistical iterative image reconstruction methods, which can more accurately model the physics of the system and better take into account the sources of noise and the statistical distribution of that noise, can potentially further improve the image quality. Because of their accurate modeling of the system geometry and data collection process, we expect that model-based statistical iterative image reconstruction methods can mitigate cone artifacts in a similar way as the SART method. Therefore, a model-based statistical reconstruction method that incorporates an accurate model of the system geometry and physics into the calculation of the system matrix is planned for a future study.

The use of the MAS ordering scheme improved the accuracy of the reconstruction during the first few iterations of the SART method, particularly in the first iteration. After that, the reconstruction results with and without the MAS scheme were almost indistinguishable. One reason that the MAS scheme did not increase the convergence rate or accuracy after the initial iterations may be that the scheme was originally designed for a parallel-beam configuration. The most straightforward way to improve the performance of the MAS scheme would then be to rebin the fan-beam data in order to create a parallel-beam projection data set. However, rebinning the data may unnecessarily complicate the calculation of the system matrix when we transition to model-based iterative reconstruction. Therefore, we kept the fan-beam geometry for this study.

The speed of the reconstruction method must improve for future implementation of the algorithm. The large computational burden of iterative reconstruction algorithms is the main reason that it has taken so long for these methods to be implemented in the clinic. Using this algorithm, reconstructions would take as long as two hours to complete. However, the literature shows promising results from the implementation of iterative algorithms on graphics processing units (GPUs); accelerations in reconstruction times on the scale of one to two orders of magnitude have been reported [20, 28, 37, 38]. For these methods, though, the system matrix is much too large to hold in the GPU's memory and therefore must be calculated during runtime. A spherical pixellation scheme has been used to take advantage of the symmetries of the circular scanning geometry in order to reduce the storage requirement of the system matrix and thereby improve the speed of reconstruction [39]. Therefore, the development of a cylindrical voxelization scheme in a separate study could potentially accelerate the reconstruction process on its own or, by reducing the size of the system matrix, make it feasible to implement the method on the GPU.

In conclusion, algebraic iterative reconstruction algorithms were successfully implemented for the TBCT system. The analytical and iterative reconstructions showed similar image quality at small cone angles, while the iterative methods were able to mitigate the cone artifacts that normally appear at large cone angles with analytical methods.
The iterative algorithms were also able to accurately account for both longitudinal and transverse truncation of the projection data without introducing new artifacts into the image.

Conflict of Interests

Tiezhi Zhang and Joshua Kim have financial interests in TetraImaging Inc. This work is supported in part by Oakland University and NIH SBIR Contract no. HHSN261201100045C.

References

1. M. W. K. Kan, L. H. T. Leung, W. Wong, and N. Lam, "Radiation dose from cone beam computed tomography for image-guided radiation therapy," International Journal of Radiation Oncology Biology Physics, vol. 70, no. 1, pp. 272–279, 2008.
2. J. A. Purdy, "Dose to normal tissues outside the radiation therapy patient's treated volume: a review of different radiation therapy techniques," Health Physics, vol. 95, no. 5, pp. 666–676, 2008.
3. R. R. Allison, H. A. Gay, H. C. Mota, and C. H. Sibata, "Image-guided radiation therapy: current and future directions," Future Oncology, vol. 2, no. 4, pp. 477–492, 2006.
4. D. A. Jaffray, J. H. Siewerdsen, J. W. Wong, and A. A. Martinez, "Flat-panel cone-beam computed tomography for image-guided radiation therapy," International Journal of Radiation Oncology Biology Physics, vol. 53, no. 5, pp. 1337–1349, 2002.
5. H. Kanamori, N. Nakamori, K. Inoue, and E. Takenaka, "Effects of scattered X-rays on CT images," Physics in Medicine and Biology, vol. 30, no. 3, pp. 239–249, 1985.
6. J. H. Siewerdsen and D. A. Jaffray, "Cone-beam computed tomography with a flat-panel imager: magnitude and effects of X-ray scatter," Medical Physics, vol. 28, no. 2, pp. 220–231, 2001.
7. R. Ning, X. Tang, and D. Conover, "X-ray scatter correction algorithm for cone beam CT imaging," Medical Physics, vol. 31, no. 5, pp. 1195–1202, 2004.
8. L. Zhu, Y. Xie, J. Wang, and L. Xing, "Scatter correction for cone-beam CT in radiation therapy," Medical Physics, vol. 36, no. 6, pp. 2258–2268, 2009.
9. T. G. Flohr, S. Schaller, K. Stierstorfer, H. Bruder, B. M. Ohnesorge, and U. J. Schoepf, "Multi-detector row CT systems and image-reconstruction techniques," Radiology, vol. 235, no. 3, pp. 756–773, 2005.
10. T. Zhang, Y. Chi, E. Meldolesi, and D. Yan, "Automatic delineation of on-line head-and-neck computed tomography images: toward on-line adaptive radiotherapy," International Journal of Radiation Oncology Biology Physics, vol. 68, no. 2, pp. 522–530, 2007.
11. C. Maaß, F. Dennerlein, F. Noo, and M. Kachelrieß, "Comparing short scan CT reconstruction algorithms regarding cone-beam artifact performance," in Proceedings of the Nuclear Science Symposium Conference Record (NSS/MIC '10), pp. 2188–2193, IEEE, November 2010.
12. T. Zhang, D. Schulze, X. Xu, and J. Kim, "Tetrahedron beam computed tomography (TBCT): a new design of volumetric CT system," Physics in Medicine and Biology, vol. 54, no. 11, pp. 3365–3378, 2009.
13. X. Xu, J. Kim, P. Laganis, D. Schulze, Y. Liang, and T. Zhang, "A tetrahedron beam computed tomography benchtop system with a multiple pixel field emission X-ray tube," Medical Physics, vol. 38, no. 10, pp. 5500–5509, 2011.
14. H. K. Tuy, "An inversion formula for cone-beam reconstruction," SIAM Journal on Applied Mathematics, vol. 43, no. 3, pp. 546–552, 1983.
15. X. Tang, J. Hsieh, A. Hagiwara, R. A. Nilsen, J. B. Thibault, and E. Drapkin, "A three-dimensional weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT under a circular source trajectory," Physics in Medicine and Biology, vol. 50, no. 16, pp. 3889–3905, 2005.
16. L. A. Feldkamp, L. C. Davis, and J. W. Kress, "Practical cone-beam algorithm," Journal of the Optical Society of America A, vol. 1, no. 6, pp. 612–619, 1984.
17. A. K. Hara, R. G. Paden, A. C. Silva, J. L. Kujak, H. J. Lawder, and W. Pavlicek, "Iterative reconstruction technique for reducing body radiation dose at CT: feasibility study," American Journal of Roentgenology, vol. 193, no. 3, pp. 764–771, 2009.
18. A. C. Martinsen, H. K. Saether, P. K. Hol, D. R. Olsen, and P. Skaane, "Iterative reconstruction reduces abdominal CT dose," European Journal of Radiology, vol. 81, no. 7, pp. 1483–1487, 2012.
19. J. S. Liow, S. C. Strother, K. Rehm, and D. A. Rottenberg, "Improved resolution for PET volume imaging through three-dimensional iterative reconstruction," Journal of Nuclear Medicine, vol. 38, no. 10, pp. 1623–1631, 1997.
20. J. B. Thibault, K. D. Sauer, C. A. Bouman, and J. Hsieh, "A three-dimensional statistical approach to improved image quality for multislice helical CT," Medical Physics, vol. 34, no. 11, pp. 4526–4544, 2007.
21. K. Mueller and R. Yagel, "Rapid 3-D cone-beam reconstruction with the simultaneous algebraic reconstruction technique (SART) using 2-D texture mapping hardware," IEEE Transactions on Medical Imaging, vol. 19, no. 12, pp. 1227–1237, 2000.
22. B. Chiang, S. Nakanishi, A. A. Zamyatin, and D. Shi, "Cone beam artifact reduction in circular computed tomography," in Proceedings of the Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC '11), pp. 4143–4144, IEEE, 2011.
23. R. Gordon, R. Bender, and G. T. Herman, "Algebraic Reconstruction Techniques (ART) for three-dimensional electron microscopy and X-ray photography," Journal of Theoretical Biology, vol. 29, no. 3, pp. 471–481, 1970.
24. A. H. Andersen and A. C. Kak, "Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm," Ultrasonic Imaging, vol. 6, no. 1, pp. 81–94, 1984.
25. B. De Man and S. Basu, "Distance-driven projection and backprojection in three dimensions," Physics in Medicine and Biology, vol. 49, no. 11, pp. 2463–2475, 2004.
26. M. Jiang and G. Wang, "Convergence of the simultaneous algebraic reconstruction technique (SART)," IEEE Transactions on Image Processing, vol. 12, no. 8, pp. 957–961, 2003.
27. H. Guan and R. Gordon, "A projection access order for speedy convergence of ART (algebraic reconstruction technique): a multilevel scheme for computed tomography," Physics in Medicine and Biology, vol. 39, no. 11, pp. 2005–2022, 1994.
28. F. Xu, W. Xu, M. Jones et al., "On the efficiency of iterative ordered subset reconstruction algorithms for acceleration on GPUs," Computer Methods and Programs in Biomedicine, vol. 98, no. 3, pp. 261–270, 2010.
29. W. M. Pang, J. Qin, Y. Lu, Y. Xie, C. K. Chui, and P. A. Heng, "Accelerating simultaneous algebraic reconstruction technique with motion compensation using CUDA-enabled GPU," International Journal of Computer Assisted Radiology and Surgery, vol. 6, no. 2, pp. 187–199, 2011.
30. H. Guan and R. Gordon, "Computed tomography using algebraic reconstruction techniques (ARTs) with different projection access schemes: a comparison study under practical situations," Physics in Medicine and Biology, vol. 41, no. 9, pp. 1727–1743, 1996.
31. A. Kak and M. Slaney, Principles of Computerized Tomographic Imaging, IEEE Press, 1988.
32. H. Kudo, F. Noo, and M. Defrise, "Cone-beam filtered-backprojection algorithm for truncated helical data," Physics in Medicine and Biology, vol. 43, no. 10, pp. 2885–2909, 1998.
33. A. H. Andersen, "Algebraic reconstruction in CT from limited views," IEEE Transactions on Medical Imaging, vol. 8, no. 1, pp. 50–55, 1989.
34. W. Zbijewski and F. J. Beekman, "Characterization and suppression of edge and aliasing artefacts in iterative X-ray CT reconstruction," Physics in Medicine and Biology, vol. 49, no. 1, pp. 145–157, 2004.
35. W. Zbijewski and F. J. Beekman, "Comparison of methods for suppressing edge and aliasing artefacts in iterative X-ray CT reconstruction," Physics in Medicine and Biology, vol. 51, no. 7, pp. 1877–1889, 2006.
36. S. Matej and R. M. Lewitt, "Practical considerations for 3-D image reconstruction using spherically symmetric volume elements," IEEE Transactions on Medical Imaging, vol. 15, no. 1, pp. 68–78, 1996.
37. F. Xu and K. Mueller, "Accelerating popular tomographic reconstruction algorithms on commodity PC graphics hardware," IEEE Transactions on Nuclear Science, vol. 52, no. 3, pp. 654–663, 2005.
38. B. Keck, H. Hofmann, H. Scherl, M. Kowarschik, and J. Hornegger, "GPU-accelerated SART reconstruction using the CUDA programming environment," in Medical Imaging 2009: Physics of Medical Imaging, E. Samei and J. Hsieh, Eds., vol. 7258 of Proceedings of SPIE, February 2009.
39. C. Mora, M. J. Rodríguez-Álvarez, and J. V. Romero, "New pixellation scheme for CT algebraic reconstruction to exploit matrix symmetries," Computers and Mathematics with Applications, vol. 56, no. 3, pp. 715–726, 2008.
Alternative regression models to assess increase in childhood BMI

BMC Med Res Methodol. 2008; 8: 59.

Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Different regression approaches to predict childhood BMI by goodness-of-fit measures and means of interpretation were compared, including generalized linear models (GLMs), quantile regression, and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 children participating in the school entry health examination in Bavaria, Germany, from 2001 to 2002. TV watching, meal frequency, breastfeeding, smoking in pregnancy, maternal obesity, parental social class, and weight gain in the first 2 years of life were considered as risk factors for obesity. GAMLSS showed a much better fit regarding the estimation of risk factor effects on transformed and untransformed BMI data than common GLMs with respect to the generalized Akaike information criterion. In comparison with GAMLSS, quantile regression allowed for additional interpretation of prespecified distribution quantiles, such as quantiles referring to overweight or obesity. The variables TV watching, maternal BMI, and weight gain in the first 2 years were directly, and meal frequency was inversely, significantly associated with body composition in every model type examined. In contrast, smoking in pregnancy was not directly, and breastfeeding and parental social class were not inversely, significantly associated with body composition in the GLM models, but they were in GAMLSS and partly in quantile regression models. Risk factor specific BMI percentile curves could be estimated from GAMLSS and quantile regression models. GAMLSS and quantile regression seem to be more appropriate than common GLMs for risk factor modeling of BMI data.

Background

The prevalence of childhood obesity increased dramatically during the last decades in industrialized countries [1, 2]. This increase in prevalence seems to be due to a shift of the upper part of the body mass index (BMI) distribution rather than to a shift of the entire BMI distribution, as observed, for example, in the NHANES III survey from 1988 to 1994 [3]. This increased positive skewness could be due to exposure to obesogenic environmental determinants among a subpopulation with a high degree of susceptibility. TV watching, formula feeding, smoking in pregnancy, maternal obesity, and parental social class are well known environmental, constitutional, or sociodemographic risk factors [4, 5]. However, it remains unknown whether these factors affect the entire BMI distribution or only parts of it. A recent descriptive study reported an effect of several risk factors for childhood obesity on upper BMI percentiles, while the middle part of the BMI distribution was virtually unaffected. However, this study did not adjust for potential confounders [6].

In the literature, most authors have used linear or logistic regression to model effects on body mass index (BMI) measures. However, BMI data are usually positively skewed, and therefore a transformation of the response variable and/or other regression methods might be more appropriate.
Possible approaches include lognormal or Box Cox power transformations of the BMI prior to linear regression modeling, gamma regression, quantile regression, or GAMLSS models. Quantile regression has been applied in various BMI-related studies [7-9]. Several risk factors for increased adult body size had different effects on specific quantiles. Comparisons between different regression models have been discussed, but not quantified by model fit criteria such as the Akaike Information Criterion (AIC) [10].

The aim of our study was to compare generalized linear models, GAMLSS models, and quantile regression models on BMI data of 4967 preschoolers in order to identify the best approach for obesity risk factor analysis. Additionally, we aimed to assess the effect of different risk factors on the BMI distribution (change of mean, variance, skewness, or kurtosis), which might have implications for preventive measures (population based approach vs. targeted approach).

Methods

Data on 7026 children participating in the school entry health examination in Bavaria, Southern Germany, were collected between September 2001 and August 2002. Children's age ranged from 54 to 88 months. Parental questionnaires on sociodemographic, lifestyle, and other risk factors for obesity were distributed together with the invitation to the compulsory school entry examination. Children's weight and height were measured in light clothing with calibrated balances and fixed stadiometers during the examination. The study has been described in detail elsewhere [4].

Sex and age were considered as confounders, while explanatory variables with previously reported associations to childhood body composition were a priori considered as exposures (abbreviations in brackets). These exposure variables included maternal smoking in pregnancy (PS), amount of watching TV (TV), breastfeeding (BF), daily meal frequency (MF), highest graduation of either parent (elementary/secondary/at least A-level) (PG), maternal BMI (MB), and child's weight gain from birth to 2 years of life (WG) [4, 5, 11]. The sample was confined to cases with complete information on these variables, leaving data of 4967 children for the analyses.

Statistical methods

Simple linear regression uses an identity link and models the relationship between a dependent variable $y_i$, independent variables $(z_1, \ldots, z_m)$, with $m$ the total number of covariates included, and residuals $(\varepsilon_1, \ldots, \varepsilon_n)$ for individual $i$, $i = 1, \ldots, n$. The model can be denoted as

$$y_i = \beta_0 + \beta_1 z_{i1} + \cdots + \beta_m z_{im} + \varepsilon_i, \quad \varepsilon_i \sim N(0, \sigma^2).$$

Generalized linear models (GLMs) allow more flexible modeling [12] of the linear predictor $\eta_i = g(\mu_i)$, which can be denoted as

$$\eta_i = \beta_0 + \beta_1 z_{i1} + \cdots + \beta_m z_{im}. \qquad (1)$$

The link function $g(\cdot)$ can be specified, for example, by

• the identity link $g(\mu) = \mu$, resulting in the simple linear regression model,
• the log link $g(\mu) = \log(\mu)$, yielding loglinear regression,
• the Box Cox power link [13]
$$g(\lambda, \mu) = \begin{cases} (\mu^\lambda - 1)/\lambda, & \text{if } \lambda \neq 0 \\ \log(\mu), & \text{if } \lambda = 0, \end{cases}$$
• or the inverse link $g(\mu) = \mu^{-1}$.

The inverse link function is the natural link function for the gamma distribution and was used in this study to perform gamma regression.
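For illustration, the four GLM variants can be specified in R roughly as follows. The data frame dat and its column names (bmi, sex, ps, tv, bf, mf, pg, age, mb, wg) are hypothetical stand-ins for the study data, and the interaction terms are omitted for brevity.

```r
# Hypothetical data frame `dat` with children's BMI and the covariates defined above.
form <- bmi ~ sex + ps + tv + bf + mf + pg + age + mb + wg

fit_lin <- glm(form, family = gaussian(link = "identity"), data = dat)  # linear regression
fit_log <- glm(form, family = gaussian(link = "log"),      data = dat)  # loglinear regression
fit_gam <- glm(form, family = Gamma(link = "inverse"),     data = dat)  # gamma regression

# Box Cox power transformation: estimate lambda by profile likelihood (MASS::boxcox),
# transform the BMI, and refit a linear model on the transformed outcome.
library(MASS)
bc     <- boxcox(lm(form, data = dat), plotit = FALSE)
lambda <- bc$x[which.max(bc$y)]
dat$bmi_bc <- if (abs(lambda) < 1e-8) log(dat$bmi) else (dat$bmi^lambda - 1) / lambda
fit_bc <- lm(update(form, bmi_bc ~ .), data = dat)
```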
One approach for model selection is the Generalized Akaike Information Criterion (GAIC),

$$\mathrm{GAIC}(c) = -2 \log L(\hat{\theta}) + c \cdot p, \qquad (2)$$

with $c = 2$ for the 'classical' Akaike Information Criterion (AIC) [10] and $c = \log(n)$ for the Bayes Information Criterion (BIC) [14]. The GAIC includes the log likelihood of the relevant parameter vector $\hat{\theta}$ (e.g. $\mu$) and a penalty term $c \times p$ for the number of parameters, with $p = m + f$, where $f$ denotes the extra degrees of freedom needed for special model fitting techniques (e.g. splines). A statistical model is considered better fitting if its GAIC is smaller than the GAIC of another statistical model.

Generalized Additive Models for Location, Scale and Shape (GAMLSS) offer an approach to model data with consideration of $\mu$ as location parameter, $\sigma$ as scale parameter, and the skewness parameter $\nu$ and the kurtosis parameter $\zeta$ as shape parameters. A GAMLSS model is based on independent observations $y_i$ for $i = 1, \ldots, n$ and monotone link functions $g_k(\cdot)$ relating the parameters $\mu$, $\sigma$, $\nu$, and $\zeta$ to the $J_k$ explanatory variables [15, 16] through semiparametric predictors. The link functions are chosen per parameter; a multiplicative rather than an additive model for $\mu$ can be obtained by setting $g_1(\mu) = \log(\mu)$ instead of the identity link. Calculations with GAMLSS in this study use the Box Cox t (BCT) distribution, which is defined via

$$z = \begin{cases} \dfrac{1}{\sigma \nu} \left[ \left( \dfrac{y}{\mu} \right)^{\nu} - 1 \right], & \nu \neq 0 \\ \dfrac{1}{\sigma} \log\!\left( \dfrac{y}{\mu} \right), & \nu = 0, \end{cases}$$

with $z$ assumed to follow a t distribution with $\zeta$ degrees of freedom ($\zeta > 0$). Under this assumption it is possible to perform likelihood calculations. Additionally, cubic and penalized splines were considered to model continuous covariates [17, 18]. Model selection can again be performed by GAIC because GAMLSS represents a general framework of regression models that includes the class of GLMs [19]. The authors of GAMLSS used values of $c$ in the range of 2 to 3 to calculate the GAIC [19].

In contrast to the above mentioned distribution based methods, quantile regression estimates conditional quantile functions. It can be used to obtain information about specific quantiles of the underlying distribution. Quantile regression for the sample quantile $\tau$ works by minimizing

$$\sum_{i=1}^{n} \rho_\tau(y_i - \eta_i) \qquad (3)$$

with the so-called check function [20]

$$\rho_\tau(u) = u\left(\tau - I(u < 0)\right) = \begin{cases} \tau \times u, & u \geq 0 \\ (\tau - 1) \times u, & u < 0. \end{cases}$$

In (3), the predictor $\eta$ of equation (1) is taken as $\eta = Q_\tau$, with $Q_\tau$ being the modeled $\tau$ quantile.

The comparison of quantile regression and generalized linear models is a major challenge due to the inapplicability of the GAIC in quantile regression. To compare GAMLSS and quantile regression, we plotted estimated values of the 90th and 97th BMI percentiles over weight gain in the first two years, while the other covariates were held at their mean values (if continuous) or their modes (if categorical). We similarly calculated the estimated percentiles for each category of meal frequency, holding the other variables fixed accordingly. All calculations were carried out with R 2.5.1 (http://cran.r-project.org).
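A minimal sketch of how such models can be specified with the gamlss and quantreg packages in R is given below. The data frame dat, the column names, and the particular terms entered for each distribution parameter are illustrative assumptions rather than the exact models selected in this study (in the gamlss package the kurtosis parameter ζ is called tau).

```r
library(gamlss)    # GAMLSS with the Box Cox t family (BCT)
library(quantreg)  # quantile regression

# GAMLSS: separate (semi)parametric predictors for mu and sigma; nu and tau terms
# are only kept if they lower the GAIC with penalty c = 3.
fit_bct <- gamlss(bmi ~ sex + tv + mf + age + mb + wg,
                  sigma.formula = ~ tv + mf + mb + wg,
                  family = BCT, data = dat)
GAIC(fit_bct, k = 3)

# Quantile regression for the 90th and 97th BMI percentiles.
fit_q90 <- rq(bmi ~ sex + ps + tv + bf + mf + age + mb + wg, tau = 0.90, data = dat)
fit_q97 <- rq(bmi ~ sex + ps + tv + bf + mf + age + mb + wg, tau = 0.97, data = dat)
summary(fit_q90)
```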
Results

The overall mean BMI of the 4967 children was 15.34 kg/m², with a median of 15.08 kg/m². The data included 2585 males (vs. 2382 females), 417 (vs. 4550) children whose mother had smoked in pregnancy, 384 children with more than 2 TV hours per day (vs. 4583 in the 3 lower categories), 1197 (vs. 3770) children who had never been breastfed, 816 children with at most 3 daily meals (vs. 4151 with 4 or more meals), and 1466 children whose parents had only an elementary school degree or less (vs. 3501 in the other categories). In addition to these categorical covariates, we considered the metric variables children's age in months with a mean of 72.86 (SD 4.77), maternal BMI (in kg/m²), which ranged from 15.9 to 49.5 (mean 23.44, SD 3.99), and children's weight gain (in kg) in the first 2 years of life, ranging from 5.5 to 15.3 (mean 9.45, SD 1.40).

Figure 1 shows univariate non-parametric kernel density estimates of the children's BMI distributions with regard to underlying risk factors. Maternal BMI and weight gain in the first 2 years were categorized by common cut points (maternal BMI > 25 kg/m², weight gain ≥ 10 kg [4]). When present, most risk factors seemed to increase BMI values in the upper BMI regions: for example, there was a higher proportion of children with a BMI > 18 among non-breastfed compared to breastfed children, although the distribution curves of both strata were of almost identical shape for BMI values below 18.

Figure 1: Univariate density distributions of children's BMI with regard to underlying risk factors. Maternal BMI and weight gain in the first two years were divided into two categories.

Simple linear models assessing the impact of certain risk factors might be limited under such varying key characteristics of the density distributions with and without underlying risk factors, due to their strong assumptions. In the multivariable regression analyses, we considered the following a priori defined interaction terms with reported or assumed interrelations: (a) sex as confounder with every covariate except age, (b) weight gain in the first 2 years with parental education [4], (c) weight gain in the first 2 years with breastfeeding [21], and (d) maternal smoking in pregnancy with breastfeeding [22].

Full multivariable linear, loglinear, gamma, and Box Cox power transformed linear regression models included all covariates and all a priori defined interaction terms. The backward elimination procedure yielded models without any interaction term and without parental graduation, maternal smoking in pregnancy, or breastfeeding for all 4 GLM types, for example

$$\eta = \beta_0 + \beta_1 \mathrm{SEX} + \beta_2 \mathrm{TV} + \beta_3 \mathrm{MF} + \beta_4 \mathrm{AGE} + \beta_5 \mathrm{MB} + \beta_6 \mathrm{WG},$$

with $\eta = \mu$ for linear regression. We chose $c = 3$ in equation (2) for the GAIC because this factor yielded stable and plausible results in a univariate preanalysis (data not shown).

We decided not to fit the multivariable GAMLSS model by including all covariates from the beginning, due to the high computational demand of this approach. Instead, we calculated separate univariate GAMLSS models for all covariates and thereafter combined the resulting models into a multivariable model, in the sense of a pre-selecting forward selection procedure. During the fitting of the univariate models, we followed the strict parameter hierarchy for GAMLSS models in four steps, according to the suggestion of the GAMLSS authors [23]: first a model for $\mu$ is fitted, after that for $\sigma$, followed by $\nu$ and $\zeta$. If a parameter term did not reduce the GAIC(3), it was not considered for the univariate model of the respective covariate. For example, $\nu$ and $\zeta$ did not enhance the fit of the univariate model for the variable watching TV, yielding

$$\eta_2 = \log(\sigma) = \beta_{02} + \beta_{12} \mathrm{TV}.$$

Table 1: Estimators (EST) and 95% confidence intervals (CI) of the multivariable GAMLSS model in the School Entry Health Examination Study in Bavaria, 2001–2002.

Cubic and penalized splines with up to three degrees of freedom were considered in the models of the continuous covariates age, maternal BMI, and weight gain in the first 2 years. Parameters that were no longer significant in the combined multivariable model were excluded from the final multivariable model.
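The GAIC(3)-based decision of whether a term for an additional distribution parameter is kept can be sketched as follows, again with hypothetical data and variable names and with TV watching as an example:

```r
# Univariate GAMLSS models for TV watching: is a sigma term worth its extra parameters?
m_mu    <- gamlss(bmi ~ tv,                       family = BCT, data = dat)
m_musig <- gamlss(bmi ~ tv, sigma.formula = ~ tv, family = BCT, data = dat)
GAIC(m_mu, m_musig, k = 3)     # keep the sigma term only if it lowers GAIC(3)

# Cubic smoothing splines with three degrees of freedom for a continuous covariate:
m_spline <- gamlss(bmi ~ cs(mb, df = 3), family = BCT, data = dat)
```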
Apart from age, an increase (or decrease) in the location parameter $\mu$ for a covariate was always associated with a significant increase (or decrease) in the scale parameter $\sigma$. The final multivariable GAMLSS model yielded the same significant covariates as the GLM methods using backward selection, with the exception of breastfeeding, for which the scale parameter $\sigma$ was significant in the GAMLSS (Tables 1 and 2). The a priori defined interaction terms were not significant in any considered model.

Table 2: Variables in the models with GLM (linear regression, lognormal regression, gamma regression, regression with Box Cox power transformation), GAMLSS, quantile regression for τ = 0.9 (QR 0.9) and for τ = 0.97 (QR 0.97) for the School Entry Health Examination Study in Bavaria, 2001–2002.

The fit of the multivariable GAMLSS was far better than the fit of the multivariable GLM models. The GAIC(3) of GAMLSS was 17 470, while linear regression with Box Cox power transformation, gamma regression, loglinear regression, and the simple linear regression model yielded increased GAICs of 17 955, 18 120, 18 219, and 18 616, respectively.

Apart from parental education, all considered covariates were significant in quantile regression for the quantile τ = 0.9 (corresponding to the 90th percentile). In the quantile regression (QR) models with τ = 0.97 (corresponding to the 97th percentile), however, only TV watching, breastfeeding, meal frequency, maternal BMI, and weight gain in the first two years of life were significantly associated with the child's BMI. For example, the model for QR, τ = 0.9, was

$$\eta = \beta_0 + \beta_1 \mathrm{SEX} + \beta_2 \mathrm{PS} + \beta_3 \mathrm{TV} + \beta_4 \mathrm{BF} + \beta_5 \mathrm{MF} + \beta_6 \mathrm{AGE} + \beta_7 \mathrm{MB} + \beta_8 \mathrm{WG}.$$

Table 3: Estimators and 95% confidence intervals (CI) of the quantile regression models with τ = 0.9 (QR 0.9) and τ = 0.97 (QR 0.97).

An overview of the significant variables in the respective models and the differences across models is shown in Table 2. The covariates TV watching, meal frequency, maternal BMI, and weight gain in the first two years of life were significantly associated with the child's BMI regardless of the method or chosen link. In contrast, parental education was not significant in any multivariable model. Its influence on offspring's BMI might sufficiently be explained by the effects of the other considered covariates. An effect of breastfeeding on the BMI distribution was only detected by GAMLSS and quantile regression. Pregnancy smoking, however, was only significant in the quantile regression model of the τ = 0.9 quantile.

In Figure 2, estimated values of the 90th and 97th BMI percentiles from GAMLSS and quantile regression are compared for weight gain with fixed values of the other covariates. Similarly, Table 4 shows percentile values estimated with both methods for different values of meal frequency. Both Figure 2 and Table 4 indicate that estimated values for the 90th percentile obtained by GAMLSS and quantile regression were similar, while the 97th percentile was slightly higher in the quantile regression models. While the percentile curves estimated by quantile regression were linear, those obtained by GAMLSS showed a shaped curve due to the combination of the additional parameters σ, ν and ζ.

Table 4: Values for the 90th and 97th BMI percentiles (τ) estimated by GAMLSS and quantile regression (QR) with respect to meal frequency (MF), with fixed values for all other covariates.
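To reproduce a comparison like the one in Figure 2 and Table 4, fitted 90th percentiles can be extracted from both model types roughly as follows. The grid newdat is a hypothetical covariate grid in which weight gain varies while all other covariates are fixed at their means or modes; its factor codings are made up and would have to match those used in dat.

```r
# Hypothetical covariate grid: weight gain varies, everything else held fixed
# (factor levels are invented and must match the coding of `dat`).
newdat <- data.frame(sex = "male", ps = "no", tv = "0-1h", bf = "yes", mf = ">=5",
                     age = 73, mb = 23.4, wg = seq(5.5, 15.3, by = 0.1))

# GAMLSS: predict all BCT parameters on the response scale, then evaluate the 90th percentile.
par_hat <- predictAll(fit_bct, newdata = newdat, type = "response")
p90_gamlss <- qBCT(0.90, mu = par_hat$mu, sigma = par_hat$sigma,
                   nu = par_hat$nu, tau = par_hat$tau)

# Quantile regression: the fitted value of the tau = 0.9 model is the 90th percentile itself.
p90_qr <- predict(fit_q90, newdata = newdat)
```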
Figure 2: Values for the 90th and 97th BMI percentiles with respect to weight gain in the first two years (in kg), estimated by GAMLSS (dark lines) and quantile regression (grey lines), with fixed values for all other covariates.

Discussion and conclusion

In our study, GAMLSS showed a much better fit than the GLM models, as judged by GAIC, when examining obesity risk factors. The same explanatory variables had significant associations with body composition across all GLM models, although the models contained either additive (linear regression) or multiplicative components (loglinear regression, Box Cox regression, and gamma regression).

In general, GAMLSS offers a flexible approach due to the large number of implemented distribution families. With GAMLSS, it is possible to assess the effect of specific parameters on the outcome variable distribution. For example, we observed that some variables did not only affect the mean, but additionally the scale of the BMI distribution. Additionally, interdependencies of the considered parameters can be examined by GAMLSS. We observed that an increase (decrease) of the mean (μ) was mostly associated with an increase (decrease) of the scale (σ). The scale parameter σ in the distribution used in GAMLSS (BCT) is an approximative centile based coefficient of variation measure [16]. Therefore, risk factors for overweight seem to affect both the BMI itself and its variation. For example, children with a high weight gain in the first 2 years of life had higher BMI values as well as a higher coefficient of variation in BMI compared to those with a low infant weight gain. Thus, low infant weight gain might be a better predictor for underweight than high infant weight gain is for overweight. A change of the skewness term ν, however, did not improve the goodness of fit for modeling the skewed BMI distribution. This might be due to a sufficient consideration of skewness by a change of both parameters μ and σ.

Quantile regression allows additional interpretation, e.g. of risk factors affecting only parts of the distribution [7]. While GAMLSS models consider the entire BMI distribution, quantile regression directly examines possible associations between explanatory variables and certain predefined percentiles. Logistic regression is in principle based on a similar idea, but in the case of overweight, for example, it has to deal with a considerable loss of information due to the transformation of the continuous BMI into a binary variable. Quantile regression, in contrast, uses the whole information of the data. Furthermore, the interpretations of logistic and quantile regression differ: logistic regression assesses the odds ratio for overweight in relation to certain risk factors, whereas quantile regression quantifies the linear impact of risk factors on overweight children.

In our study, the variables TV watching, maternal BMI, and weight gain in the first 2 years of life were directly, and meal frequency was inversely, significantly associated with body composition in every examined model type. However, the strength of the associations was of different magnitude across model types (Table 4). In our study, breastfeeding seemed to have a protective effect on the upper percentiles of the BMI estimated by quantile regression (e.g. -0.41 for the 90th percentile, see Table 3), although the generalized regression models and GAMLSS did not assess breastfeeding as being significantly associated with the mean BMI (although it was a significant predictor of σ).
The latter is in accordance with a recent study on mean BMI and DXA derived fat mass measures [24]. Additionally, different aspects might be detected by modeling different quantiles, for example quantiles referring to overweight or obesity.

We confined our sample to cases with complete information on all variables. Since underreporting with respect to pregnancy smoking and high values of maternal BMI is well known, this might have led to an underestimation of the effects of the corresponding covariates on childhood BMI. However, such an underestimation is likely to affect all examined statistical approaches similarly and is therefore of minor relevance for the assessment of the appropriate approach. It might be of interest, however, to compare how sensitive the statistical models are to different methods of missing data imputation, such as multiple imputation. However, this question leads deeply into other statistical methodology and is therefore beyond the scope of our study.

GAMLSS and quantile regression have recently been compared, along with many other methods, in a WHO study to identify standard reference values for child growth [25]. Four out of the five construction methods taken under further examination were GAMLSS methods with different distribution functions: Box Cox t (as in this study), Box Cox power exponential [26], Box Cox normal [27], and Johnson's SU (sinh^-1 normal) [28]. The other considered method used the modulus-exponential-normal distribution [29]. The authors finally calculated reference values by GAMLSS with the Box Cox power exponential distribution, using AIC and GAIC(3) in parallel for model selection [30]. This indicates that GAMLSS is a very appropriate method for constructing reference curves, which are based on estimated percentile curves. In our study, a comparison of GAMLSS and quantile regression by estimated values of the 90th and 97th percentiles with respect to certain covariates (weight gain and meal frequency) showed similar results for both methods at the 90th percentile, while the estimated 97th percentile was slightly higher in the quantile regression model. Since percentile curves are implemented only for univariate models in the gamlss package, some computational effort was necessary to obtain the respective GAMLSS curves with fixed effects of the other covariates. Furthermore, it might be worthwhile to consider nonlinear quantile regression [20] in future studies.

The statistical model that should be used largely depends on the observed data and on the aim of the study. GAMLSS models provide exact modeling of continuous outcomes, e.g. for the calculation of standard reference values. While GLMs provide helpful information on mean response changes, GAMLSS additionally provides information on distribution parameters like scale or skewness. On the other hand, quantile regression can be used to model specific parts of the BMI distribution, such as the 90th or 97th percentile, and should be preferred to logistic regression if the original scale of the outcome variable was continuous and a GLM or GAMLSS cannot answer the research question.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

The authors' responsibilities were as follows: AB (guarantor) did the statistical analysis with help from LF and wrote the first draft of the manuscript. AMT, LF, and UM reviewed and critiqued the manuscript and made substantial intellectual contributions to subsequent drafts.
AB and AMT had the idea for the study and wrote the final draft together.

Acknowledgements
This study was supported by the innovative research priority project Munich Center of Health Sciences (sub-project II) of the Ludwig Maximilians University Munich and by grants of the Bundesministerium für Bildung und Forschung (Obesity network: LARGE). We thank Nora Fenske for her help in computing the comparison between GAMLSS and quantile regression.

References
• Ogden CL, Flegal KM, Carroll MD, Johnson CL. Prevalence and trends in overweight among US children and adolescents, 1999–2000. Journal of the American Medical Association. 2002;288:1728–1732. doi: 10.1001/jama.288.14.1728.
• Toschke AM, Lüdde R, Eisele R, von Kries R. The obesity epidemic in young men is not confined to low social classes – a time series of 18-year-old German men at medical examination for military service with different educational attainment. International Journal of Obesity. 2005;29:875–877. doi: 10.1038/sj.ijo.0802989.
• Flegal KM, Troiano RP. Changes in the distribution of body mass index of adults and children in the US population. International Journal of Obesity. 2000;24:807–818. doi: 10.1038/sj.ijo.0801232.
• Toschke AM, Beyerlein A, von Kries R. Children at high risk for overweight: a classification and regression trees analysis approach. Obesity Research. 2005;13:1270–1274. doi: 10.1038/oby.2005.151.
• Toschke AM, Küchenhoff H, Koletzko B, von Kries R. Meal frequency and childhood obesity. Obesity Research. 2005;13:1932–1938. doi: 10.1038/oby.2005.238.
• Toschke AM, von Kries R, Beyerlein A, Rückinger S. Risk factors for childhood obesity: shift of the entire BMI distribution vs. shift of the upper tail only in a cross sectional study. BMC Public Health. 2008;8:115. doi: 10.1186/1471-2458-8-115.
• Terry MB, Wei Y, Esserman D. Maternal, birth, and early life influences on adult body size in women. American Journal of Epidemiology. 2007;166:5–13. doi: 10.1093/aje/kwm094.
• Sturm R, Datar A. Body mass index in elementary school children, metropolitan area food prices and food outlet density. Public Health. 2005;119:1059–1068. doi: 10.1016/j.puhe.2005.05.007.
• Herpertz-Dahlmann B, Geller F, Böhle C, Khalil C, Trost-Brinkhues G, Ziegler A, Hebebrand J. Secular trends in body mass index measurements in preschool children from the City of Aachen, Germany. European Journal of Pediatrics. 2003;162:104–109.
• Akaike H. A new look at the statistical model identification. IEEE Transactions on Automatic Control. 1974;19:716–723. doi: 10.1109/TAC.1974.1100705.
• Toschke AM, Montgomery SM, Pfeiffer U, von Kries R. Early intrauterine exposure to tobacco-inhaled products and obesity. American Journal of Epidemiology. 2003;158:1068–1074. doi: 10.1093/aje/kwg258.
• Fahrmeir L, Tutz G. Multivariate Statistical Modelling Based on Generalized Linear Models. 2nd ed. Springer; 2001.
• Box GEP, Cox DR. An analysis of transformations. Journal of the Royal Statistical Society Series B (Methodological). 1964;26:211–252.
• Schwarz G. Estimating the dimension of a model. Annals of Statistics. 1978;6:461–464. doi: 10.1214/aos/1176344136.
• Akantziliotou K, Rigby RA, Stasinopoulos DM.
The R implementation of Generalized Additive Models for Location, Scale and Shape. In: Statistical Modelling in Society: Proceedings of the 17th International Workshop on Statistical Modelling. 2002. pp. 75–83.
• Rigby RA, Stasinopoulos DM. Using the Box-Cox t distribution in GAMLSS to model skewness and kurtosis. Statistical Modelling. 2006;6:209–226. doi: 10.1191/1471082X06st122oa.
• Hastie TJ, Tibshirani RJ. Generalized Additive Models. Chapman and Hall; 1990.
• Eilers PHC, Marx BD. Flexible smoothing with B-splines and penalties. Statistical Science. 1996;11:89–121. doi: 10.1214/ss/1038425655.
• Rigby RA, Stasinopoulos DM. Generalized additive models for location, scale and shape. Applied Statistics. 2005;54:507–554.
• Koenker R. Quantile Regression. Econometric Society Monographs; 2005.
• Kalies H, Heinrich J, Borte N, Schaaf B, von Berg A, von Kries R, Wichmann HE, Bolte G. The effect of breastfeeding on weight gain in infants: results of a birth cohort study. European Journal of Medical Research. 2005;10:36–42.
• Dorea JG. Maternal smoking and infant feeding: breastfeeding is better and safer. Maternal and Child Health Journal. 2007;11:287–291. doi: 10.1007/s10995-006-0172-1.
• Stasinopoulos DM, Rigby RA, Akantziliotou C. The GAMLSS Package. R help files. 2006.
• Toschke AM, Martin RM, von Kries R, Wells J, Smith GD, Ness AR. Infant feeding method and obesity: BMI and DXA measurements at 9–10 years from the Avon Longitudinal Study of Parents and Children (ALSPAC). American Journal of Clinical Nutrition. 2007;85:1578–1585.
• Borghi E, de Onis M, Garza C, Van den Broeck J, Frongillo EA, Grummer-Strawn L, Van Buuren S, Pan H, Molinari L, Martorell R, Onyango AW, Martines JC, the WHO Multicentre Growth Reference Study Group. Construction of the World Health Organization child growth standards: selection of methods for attained growth curves. Statistics in Medicine. 2006;25:247–265. doi: 10.1002/sim.2227.
• Rigby RA, Stasinopoulos DM. Smooth centile curves for skew and kurtotic data modelled using the Box-Cox power exponential distribution. Statistics in Medicine. 2004;23:3053–3076. doi: 10.1002/sim.1861.
• Cole TJ, Green PJ. Smooth reference centile curves: the LMS method and penalized likelihood. Statistics in Medicine. 1992;11:1305–1319. doi: 10.1002/sim.4780111005.
• Johnson NL. Systems of frequency curves generated by methods of translation. Biometrika. 1949;36:149–176.
• Royston P, Wright EM. A method for estimating age-specific reference intervals ('normal ranges') based on fractional polynomials and exponential transformation. Journal of the Royal Statistical Society Series A (Statistics in Society). 1998;161:79–101. doi: 10.1111/1467-985X.00091.
• WHO. WHO Child Growth Standards: Length/Height-for-Age, Weight-for-Age, Weight-for-Length, Weight-for-Height and Body Mass Index-for-Age: Methods and Development. World Health Organization; 2006.
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2543035/?tool=pubmed","timestamp":"2014-04-20T07:14:18Z","content_type":null,"content_length":"111418","record_id":"<urn:uuid:465e4b7f-c7d7-4d60-95c0-7484a7474072>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Finite population correction with clustering of SE at a different level than the strata

From    Stas Kolenikov <skolenik@gmail.com>
To      statalist@hsphsun2.harvard.edu
Subject Re: st: Finite population correction with clustering of SE at a different level than the strata
Date    Wed, 6 Jun 2012 07:42:43 -0500

On Mon, Jun 4, 2012 at 8:57 AM, Ole Dahl Rasmussen <odr@dca.dk> wrote:
> Dear Statalist,
> As part of a cluster randomized control trial, colleagues and I are doing stratified sampling and we're not sure if we're analyzing data correctly. Great if someone has suggestions.
> We have 46 villages. Before anything else, we went to all villages and asked them if they would be interested in participating in the project we were about to implement. We wrote down the names of the interested households on lists. We then stratified the population on village and interest: On household population lists we marked the interested households and randomly selected an absolute number, 24, of the interested and 14 of the non-interested in each village, 1750 households out of a total population of approximately 3000 households. In the end we have a total of 92 interested/village combinations, which we define as our stratas in the analysis. The sampling rate inside the stratas varies from 10% to 100%.
> Then we randomly selected 23 of the villages and implemented a project in these 23 villages.
> After two years, we surveyed everybody again.
> Finally, following Cameron/Trivedi p 817 in Microeconometrics and others, we estimate the following:
> svyset vid [pweight=weights], fpc(one) || _n, strata(strataID) fpc(f) singleunit(certainty)

This is a weird design specification. This is what it says:

1. your PSUs are identified by -vid-, but
2. they don't contribute any variance at the first stage, since the fpc of 1 kills all variability
3. Then, at the next stage, you have a stratified SRSWOR sample of observations, with strata given by -strataID- and fpc given by -f-. If there are any strata where only one observation is being used, disregard the contribution to variance from such strata.

In a sense, (2) indicates that this sample is not generalizable to any population; whether that is true or not depends on where the 46 initial villages came from. If they were sampled from a larger population, then you would need to account for that in the first stage. If you somehow got stuck with them based on what the national government gave you, then it is indeed impossible to say how your microfinance could work in the population as a whole beyond the sample that you have. If you do care about correlations of the units within villages (which is the advice you seem to be getting from the empirical economics literature: cluster as high as you can, then come up with a justification as to why you have done so), you should omit the -fpc()- option in the first stage and pretend you sampled these villages in the first place.

Note that "stratum" is singular and "strata" are plural, so "stratas" is a non-word.
---- Stas Kolenikov -- http://stas.kolenikov.name
---- Senior Survey Statistician, Abt SRBI
-- Opinions stated in this email are mine only, and do not reflect the position of my employer
{"url":"http://www.stata.com/statalist/archive/2012-06/msg00307.html","timestamp":"2014-04-20T21:34:43Z","content_type":null,"content_length":"10665","record_id":"<urn:uuid:7f68eea2-de6d-4df9-9974-eb432bf7f2c3>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Force of a hinge on a hinged beam

1. The problem statement, all variables and given/known data
A 26.6 kg beam is attached to a wall with a hinge and its far end is supported by a cable. The angle between the beam and the cable is 90°. If the beam is inclined at an angle of theta = 13.3° with respect to the horizontal, what is the horizontal component of the force exerted by the hinge on the beam? (Use 'to the right' as + for the horizontal direction.) Hint: The net torque and the net force on the beam must be zero since it is in equilibrium. What is the magnitude of the force that the beam exerts on the hinge? (Image attached)

3. The attempt at a solution
I already knew that the net force and net torque would be zero, so I set the clockwise and counter-clockwise torques equal to each other:
τcw = τccw
F d sinθ = F d sinθ
And this is where I ran into trouble. The length of the beam is never given. I'm not really sure what angles to use where, and while I know that the force the beam exerts on the wall/hinge is equal and opposite to what the wall/hinge exerts on the beam, I'm not sure how to find it. To find the magnitude of the force in the second half of the problem, you would just take magnitude = √(Fx^2 + Fy^2), correct? Any advice?
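A sketch of why the missing beam length is not a problem (assuming the beam points away from the wall at angle θ above the horizontal and the cable, perpendicular to the beam, pulls up and back toward the wall): taking torques about the hinge, every term is proportional to the beam length L, so L cancels. With T the cable tension and W = mg the weight,

$$\sum\tau_{\text{hinge}} = T\,L - W\,\frac{L}{2}\cos\theta = 0 \quad\Longrightarrow\quad T = \frac{mg\cos\theta}{2}.$$

Force balance then gives the hinge components $H_x = T\sin\theta$ and $H_y = mg - T\cos\theta$, and by Newton's third law the beam pushes on the hinge with the same magnitude $\sqrt{H_x^2 + H_y^2}$, so yes, magnitude = √(Fx^2 + Fy^2) as suggested. With m = 26.6 kg, θ = 13.3° and g ≈ 9.8 m/s², this sketch gives roughly H_x ≈ 29 N and a total magnitude of about 140 N, though the signs depend on the geometry in the attached image.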
{"url":"http://www.physicsforums.com/showthread.php?p=3651010","timestamp":"2014-04-19T02:18:36Z","content_type":null,"content_length":"31090","record_id":"<urn:uuid:9c629401-84d9-49b3-98fd-adc1e406c51a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
Exploratory Data Analysis: 2 Ways of Plotting Empirical Cumulative Distribution Functions in R

Continuing my recent series on exploratory data analysis (EDA), and following up on the last post on the conceptual foundations of empirical cumulative distribution functions (CDFs), this post shows how to plot them in R. (Previous posts in this series on EDA include descriptive statistics, box plots, kernel density estimation, and violin plots.)

I will plot empirical CDFs in 2 ways:
1. using the built-in ecdf() and plot() functions in R
2. calculating and plotting the cumulative probabilities against the ordered data

Continuing from the previous posts in this series on EDA, I will use the "Ozone" data from the built-in "airquality" data set in R. Recall that this data set has missing values, and, just as before, this problem needs to be addressed when constructing plots of the empirical CDFs.

Recall the plot of the empirical CDF of random standard normal numbers in my earlier post on the conceptual foundations of empirical CDFs. That plot will be compared to the plots of the empirical CDFs of the ozone data to check if they came from a normal distribution.

Method #1: Using the ecdf() and plot() functions

I know of 2 ways to plot the empirical CDF in R. The first way is to use the ecdf() function to generate the values of the empirical CDF and to use the plot() function to plot it. (The plot.ecdf() function combines these 2 steps and directly generates the plot.)

First, let's get the data and the sample size; note the need to count the number of non-missing values in the "ozone" data vector for the sample size.

### get data and calculate key summary statistics

# extract "Ozone" data vector for New York
ozone = airquality$Ozone

# calculate the number of non-missing values in "ozone"
n = sum(!is.na(ozone))

Now, let's use the ecdf() function to obtain the empirical CDF values. You can see what the output looks like below.

# obtain empirical CDF values
ozone.ecdf = ecdf(ozone)

> ozone.ecdf
Empirical CDF
Call: ecdf(ozone)
 x[1:67] = 1, 4, 6, ..., 135, 168

Finally, use the plot() function to plot the empirical CDF.
• Note that only one argument – the object created by ecdf() – is needed.
• Also note my use of the mtext() and the expression() functions to add the desired "F-hat-of-x" label. For some strange reason, the same expression used in the ylab option in the plot() function does not show the "hat". I'm very glad that mtext() shows the "hat"!
• The ylab option in plot() is set as ' ' to purposefully show nothing. If the ylab option is not specified, $F_n(x)$ will be shown, but this does not have the hat. (Yes, I am doing a lot of work just to add a "hat" to the "F", but now you get to learn some more R!)
• Notice that "[n]" is used to write "n" as a subscript.
### plotting the empirical cumulative distribution function using the ecdf() and plot() functions

# print a PNG image to a desired folder
png('INSERT YOUR DIRECTORY PATH HERE/ecdf1.png')

plot(ozone.ecdf, xlab = 'Sample Quantiles of Ozone', ylab = '', main = 'Empirical Cumulative Distribution\nOzone Pollution in New York')

# add label for y-axis
# the "line" option is used to set the position of the label
# the "side" option specifies the left side
mtext(text = expression(hat(F)[n](x)), side = 2, line = 2.5)

# close the PNG device so that the file is written
dev.off()

# you can create the plot directly with just the plot.ecdf() function, but this doesn't produce any empirical CDF values

Method #2: Plotting the Cumulative Probabilities Against the Ordered Data

There is another way of plotting the empirical CDF that mirrors its definition. It uses R functions to
• calculate the cumulative probabilities
• order the data
• plot the cumulative probabilities against the ordered data.

This method does not use any function specifically created for empirical CDFs; it combines several functions that are more rudimentary in R.
• It plots the empirical CDF as a series of "steps" using the option type = 's' in the plot() function.
• Notice that the vector (1:n)/n is the vector of the cumulative probabilities that are assigned to the data.
• I have also added some vertical and horizontal lines that mark the 3rd quartile; this gives the intuition that the CDF increases quickly and that most of the probabilities are already assigned to the small values of the data.
• In case you're wondering how I got the 3rd quartile, I used the summary() function on the output of the fivenum() function as applied to the ozone data.

> summary(fivenum(ozone))
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
    1.0    18.0    31.5    56.4    63.5   168.0

### empirical cumulative distribution function using sort() and plot()

# ordering the ozone data
ozone.ordered = sort(ozone)

png('INSERT YOUR DIRECTORY PATH HERE/ecdf2.png')

# plot the possible values of probability (0 to 1) against the ordered ozone data (sample quantiles of ozone)
# notice the option type = 's' for plotting the step functions
plot(ozone.ordered, (1:n)/n, type = 's', ylim = c(0, 1), xlab = 'Sample Quantiles of Ozone', ylab = '', main = 'Empirical Cumulative Distribution\nOzone Pollution in New York')

# mark the 3rd quartile
abline(v = 62.5, h = 0.75)

# add a legend
legend(65, 0.7, '3rd Quartile = 63.5', box.lwd = 0)

# add the label on the y-axis
mtext(text = expression(hat(F)[n](x)), side = 2, line = 2.5)

# close the PNG device
dev.off()

Did the Ozone Data Come from a Normal Distribution?

Recall the empirical CDF plot of the random standard normal numbers from my last post on the conceptual foundations of empirical CDFs. Comparing that plot to the plots of the empirical CDFs of the ozone data, it is clear that the latter do not have the "S" shape of the normal CDF. Thus, the ozone data likely did not come from a normal distribution.
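One quick way to make this visual comparison concrete is to overlay the CDF of a normal distribution fitted to the ozone data on the empirical CDF. This is only a rough sketch using the objects defined above; the estimates mu.hat and sigma.hat are simply the sample mean and standard deviation.

# overlay the CDF of a normal distribution fitted to the ozone data
mu.hat    <- mean(ozone, na.rm = TRUE)
sigma.hat <- sd(ozone, na.rm = TRUE)

plot(ozone.ecdf, xlab = 'Sample Quantiles of Ozone', ylab = '', main = 'Empirical CDF vs. Fitted Normal CDF')
curve(pnorm(x, mean = mu.hat, sd = sigma.hat), col = 'red', lwd = 2, add = TRUE)
mtext(text = expression(hat(F)[n](x)), side = 2, line = 2.5)

The fitted normal CDF rises symmetrically around its mean, while the empirical CDF of the ozone data shoots up at low values, which is another way of seeing the missing "S" shape.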
{"url":"http://chemicalstatistician.wordpress.com/2013/06/25/exploratory-data-analysis-2-ways-of-plotting-empirical-cumulative-distribution-functions-in-r/","timestamp":"2014-04-17T09:34:33Z","content_type":null,"content_length":"113360","record_id":"<urn:uuid:aef8ffad-513b-44e5-92e4-c8e0d5408ee5>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
fourier transform for differential equation
February 23rd 2010, 08:45 AM  #1

All "d" are partial derivatives:
du/dt = p*d^2u/dx^2 + m*du/dx
x from minus infinity to infinity, p > 0, t > 0
initial condition u(x,0) = f(x), assuming u(x,t) is bounded
any help would be great! thanks
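A sketch of the standard approach (assuming f is nice enough for its Fourier transform $\hat f$ to exist): transform in x with $\hat u(k,t)=\int u(x,t)\,e^{-ikx}\,dx$, so each $\partial_x$ becomes multiplication by $ik$ and the PDE turns into an ODE in t for each fixed k,
$$\hat u_t = \left(-p k^2 + i m k\right)\hat u \;\Longrightarrow\; \hat u(k,t) = \hat f(k)\, e^{(-p k^2 + i m k)\,t}.$$
Inverting the transform then gives the solution as a translated heat kernel applied to the initial data,
$$u(x,t) = \frac{1}{\sqrt{4\pi p t}} \int_{-\infty}^{\infty} f(y)\, e^{-\frac{(x + m t - y)^2}{4 p t}}\, dy,$$
which is consistent with writing $u(x,t) = w(x + m t,\, t)$, where $w$ satisfies the plain heat equation $w_t = p\,w_{\xi\xi}$ with $w(\xi,0)=f(\xi)$; the first-order term only advects the profile.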
{"url":"http://mathhelpforum.com/differential-equations/130335-fourier-transform-differential-equation.html","timestamp":"2014-04-21T00:51:59Z","content_type":null,"content_length":"29244","record_id":"<urn:uuid:40423b57-4262-4f72-a269-6a0a1bc9d2a6>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
MzScheme returns the unique void value -- printed as #<void> -- for expressions that have unspecified results in R5RS. The procedure void takes any number of arguments and returns void: Variables bound by letrec-values that are accessible but not yet initialized are bound to the unique undefined value, printed as #<undefined>. Unless otherwise specified, two instances of a particular MzScheme data type are equal? only when they are eq?. Two values are eqv? only when they are either eq?, = and have the same exactness, or both +nan.0. The andmap and ormap procedures apply a test procedure to the elements of a list, returning immediately when the result for testing the entire list is determined. The arguments to andmap and ormap are the same as for map, but a single boolean value is returned as the result, rather than a list: • (andmap proc list ···^1) applies proc to elements of the lists from the first elements to the last, returning #f as soon as any application returns #f. If no application of proc returns #f, then the result of the last application of proc is returned. If the lists are empty, then #t is returned. • (ormap proc list ···^1) applies proc to elements of the lists from the first elements to the last. If any application returns a value other than #f, that value is immediately returned as the result of the ormap application. If all applications of proc return #f, then the result is #f. If the lists are empty, then #f is returned. (andmap positive? '(1 2 3)) ; => #t (ormap eq? '(a b c) '(a b c)) ; => #t (andmap positive? '(1 2 a)) ; => raises exn:application:type (ormap positive? '(1 2 a)) ; => #t (andmap positive? '(1 -2 a)) ; => #f (andmap + '(1 2 3) '(4 5 6)) ; => 9 (ormap + '(1 2 3) '(4 5 6)) ; => 5 A number in MzScheme is one of the following: MzScheme extends the number syntax of R5RS in two ways: • All input radixes (#b, #o, #d, and #x) allow ``decimal'' numbers that contain a period or exponent marker. For example, #b1.1 is equivalent to 1.5. In hexadecimal numbers, e and d always stand for a hexadecimal digit, not an exponent marker. The special inexact numbers +inf.0, -inf.0, and +nan.0 have no exact form. Dividing by an inexact zero returns +inf.0 or -inf.0, depending on the sign of the dividend. The infinities are integers, and they answer #t for both even? and odd?. The +nan.0 value is not an integer and is not = to itself, but +nan.0 is eqv? to itself.^3 Similarly, (= 0.0 -0.0) is #t, but (eqv? 0.0 -0.0) is #f. All multi-argument arithmetic procedures operate pairwise on arguments from left to right. The string->number procedure works on all number representations and exact integer radix values in the range 2 to 16 (inclusive). The number->string procedure accepts all number types and the radix values 2, 8, 10, and 16; however, if an inexact number is provided with a radix other than 10, the exn:application:mismatch exception is raised. The add1 and sub1 procedures work on any number: The following procedures work on integers: • (integer-sqrt n) returns the integer square-root of n. For positive n, the result is the largest positive integer bounded by the (sqrt n). For negative n, the result is (* (integer-sqrt (- n)) The following procedures work on exact integers in their (semi-infinite) two's complement representation: The random procedure generates pseudo-random integers: • (random k) returns a random exact integer in the range 0 to k - 1 where k is an exact integer between 1 and 2^31 - 1, inclusive. 
The number is provided by the current pseudo-random number generator, which maintains an internal state for generating numbers.^4 • (random-seed k) seeds the current pseudo-random number generator with k, an exact integer between 0 and 2^31 - 1, inclusive. Seeding a generator sets its internal state deterministically; seeding a generator with a particular number forces it to produce a sequence of pseudo-random numbers that is the same across runs and across platforms. • (current-pseudo-random-generator) returns the current pseudo-random number generator, and (current-pseudo-random-generator generator) sets the current generator to generator. See also section • (make-pseudo-random-generator) returns a new pseudo-random number generator. The new generator is seeded with a number derived from (current-milliseconds). The following procedures convert between Scheme numbers and common machine byte representations: • (integer-byte-string->integer string signed? [big-endian?]) converts the machine-format number encoded in string to an exact integer. The string must contain either 2, 4, or 8 characters. If signed? is true, then the string is decoded as a two's-complement number, otherwise it is decoded as an unsigned integer. If big-endian? is true, then the first character's ASCII value provides the most significant eight bits of the number, otherwise the first character provides the least-significant eight bits, and so on. The default value of big-endian? is the result of • (integer->integer-byte-string n size-n signed? [big-endian? to-string]) converts the exact integer n to a machine-format number encoded in a string of length size-n, which must be 2, 4, or 8. If signed? is true, then the number is encoded with two's complement, otherwise it is encoded as an unsigned bit stream. If big-endian? is true, then the most significant eight bits of the number are encoded in the first character of the resulting string, otherwise the least-significant bits are encoded in the first character, and so on. The default value of big-endian? is the result of If to-string is provided, it must be a mutable string of length size-n; in that case, the encoding of n is written into to-string, and to-string is returned as the result. If to-string is not provided, the result is a newly allocated string. If n cannot be encoded in a string of the requested size and format, the exn:misc:application exception is raised. If to-string is provided and it is not of length size-n, the exn:misc:application exception is raised. • (floating-point-byte-string->real string [big-endian?]) converts the IEEE floating-point number encoded in string to an inexact real number. The string must contain either 4 or 8 characters. If big-endian? is true, then the first character's ASCII value provides the most significant eight bits of the IEEE representation, otherwise the first character provides the least-significant eight bits, and so on. The default value of big-endian? is the result of system-big-endian?. • (real->floating-point-byte-string x size-n [big-endian? to-string]) converts the real number x to its IEEE representation in a string of length size-n, which must be 4 or 8. If big-endian? is true, then the most significant eight bits of the number are encoded in the first character of the resulting string, otherwise the least-significant bits are encoded in the first character, and so on. The default value of big-endian? is the result of system-big-endian?. 
If to-string is provided, it must be a mutable string of length size-n; in that case, the encoding of n is written into to-string, and to-string is returned as the result. If to-string is not provided, the result is a newly allocated string. If to-string is provided and it is not of length size-n, the exn:misc:application exception is raised. MzScheme character values range over the characters for ``extended ASCII'' values 0 to 255 (where the ASCII extensions are platform-specific). The procedure char->integer returns the extended ASCII value of a character and integer->char takes an extended ASCII value and returns the corresponding character. If integer->char is given an integer that is not in 0 to 255 inclusive, the exn:application:type exception is raised. The procedures char->latin-1-integer and latin-1-integer->char support conversions between characters in the platform-specific character set and platform-independent Latin-1 (ISO 8859-1) values: • (char->latin-1-integer char) returns the integer in 0 to 255 inclusive corresponding to the Latin-1 value for char, or #f if char (in the platform-specific character set) has no corresponding character in Latin-1. For Unix and Mac OS, char->latin-1-integer and latin-1-integer->char are the same as char->integer and integer->char. For Windows, the platform-specific set and Latin-1 match except for the range # x80 to #x9F (which are unprintable control characters in Latin-1). The character comparison procedures -- char=?, char<?, char-ci=?, etc. -- take two or more character arguments and check the arguments pairwise (like the numerical comparison procedures). Two characters are eq? whenever they are char=?. The expression (char<? char1 char2) produces the same result as (< (char->integer char1) (char->integer char2)), etc. The procedures char-whitespace?, char-alphabetic?, char-numeric?, char-upper-case?, and char-upper-case?, char-upcase, and char-downcase are fully portable; their results do not depend on the platform or locales. In addition to the standard character procedures, MzScheme provides the following locale-sensitive procedures (see section 7.7.1.11): For example, since ASCII character 112 is a lowercase ``p'' and Latin-1 character 246 is a lowercase ``ddoto'' (with an umlaut), (char-locale<? (integer->char 112) (integer->char 246)) tends to produce #f, though it always produces #t if the current locale is disabled. A string can be mutable or immutable. When an immutable string is provided to a procedure like string-set!, the exn:application:type exception is raised. String constants generated by read are immutable. (string->immutable-string string) returns an immutable string with the same content as string, and it returns string itself if string is immutable. (See also immutable? in section 3.8.) (substring string start-k [end-k]) returns a mutable string, even if the string argument is immutable. The end-k argument defaults to (string-length string) (string-copy! dest-string dest-start-k src-string [src-start-k src-end-k]) changes the characters of dest-string from positions dest-start-k (inclusive) to dest-end-k (exclsuive) to match the characters in src-string from src-start-k (inclsuive). If src-start-k is not provided, it defaults to 0. If src-end-k is not provided, it defaults to (string-length src-string). 
The strings dest-string and src-string can be the same string, and in that case the destination region can overlap with the source region; the destination characters after the copy match the source characters from before the copy. If any of dest-start-k, src-start-k, or src-end-k are out of range (taking into acount the sizes of the strings and the source and destination regions), the exn:fail:contract exception is raised. When a string is created with make-string without a fill value, it is initialized with the null character (#\nul) in all positions. The string comparison procedures -- string=?, string<?, string-ci=?, etc. -- take two or more string arguments and check the arguments pairwise (like the numerical comparison procedures). String comparisons using the standard functions are fully portable; the results do not depend on the platform or locales. In addition to the string character procedures, MzScheme provides the following locale-sensitive procedures (see section 7.7.1.11): For information about symbol parsing and printing, see section 14.3 and section 14.4, respectively. MzScheme provides two ways of generating an uninterned symbol, i.e., a symbol that is not eq?, eqv?, or equal? to any other symbol, although it may print the same as another symbol: • (string->uninterned-symbol string) is like (string->symbol string), but the resulting symbol is a new uninterned symbol. Calling string->uninterned-symbol twice with the same string returns two distinct symbols. Regular (interned) symbols are only weakly held by the internal symbol table. This weakness can never affect the result of a eq?, eqv?, or equal? test, but a symbol placed into a weak box (see section 13.1) or used as the key in a weak hash table (see section 3.12) may disappear. When a vector is created with make-vector without a fill value, it is initialized with 0 in all positions. A vector can be immutable, such as a vector returned by syntax-e, but vectors generated by read are mutable. (See also immutable? in section 3.8.) (vector->immutable-vector vec) returns an immutable vector with the same content as vec, and it returns vec itself if vec is immutable. (See also immutable? in section 3.8.) (vector-immutable v ···^1) is like (vector v ···^1) except that the resulting vector is immutable. (See also immutable? in section 3.8.) A cons cell can be mutable or immutable. When an immutable cons cell is provided to a procedure like set-cdr!, the exn:application:type exception is raised. Cons cells generated by read are always The global variable null is bound to the empty list. (reverse! list) is the same as (reverse list), but list is destructively reversed using set-cdr! (i.e., each cons cell in list is mutated). (append! list ···^1) is like (append list), but it destructively appends the lists (i.e., except for the last list, the last cons cell of each list is mutated to append the lists; empty lists are essentially dropped). (list* v ···^1) is similar to (list v ···^1) but the last argument is used directly as the cdr of the last pair constructed for the list: (list* 1 2 3 4) ; => '(1 2 3 . 4) (cons-immutable v1 v2) returns an immutable pair whose car is v1 and cdr is v2. (list-immutable v ···^1) is like (list v ···^1), but using immutable pairs. (list*-immutable v ···^1) is like (list* v ···^1), but using immutable pairs. (immutable? v) returns #t if v is an immutable cons cell, string, vector, box, or hash table, #f otherwise. The list-ref and list-tail procedures accept an improper list as a first argument. 
If either procedure is applied to an improper list and an index that would require taking the car or cdr of a non-cons-cell, the exn:application:mismatch exception is raised. The member, memv, and memq procedures accept an improper list as a second argument. If the membership search reaches the improper tail, the exn:application:mismatch exception is raised. The assoc, assv, and assq procedures accept an improperly formed association list as a second argument. If the association search reaches an improper list tail or a list element that is not a pair, the exn:application:mismatch exception is raised. MzScheme provides boxes, which are records that have a single field: Two boxes are equal? if the contents of the boxes are equal?. A box returned by syntax-e (see section 12.2.2) is immutable; if set-box! is applied to such a box, the exn:application:type exception is raised. A box produced by read (via #&) is mutable. (See also immutable? in section 3.8.) See section 4.6 for information on defining new procedure types. MzScheme's procedure-arity procedure returns the input arity of a procedure: (procedure-arity cons) ; => 2 (procedure-arity list) ; => #<struct:arity-at-least> (arity-at-least? (procedure-arity list)) ; => #t (arity-at-least-value (procedure-arity list)) ; => 0 (arity-at-least-value (procedure-arity (lambda (x . y) x))) ; => 1 (procedure-arity (case-lambda [(x) 0] [(x y) 1])) ; => '(1 2) (procedure-arity-includes? cons 2) ; => #t (procedure-arity-includes? display 3) ; => #f When compiling a lambda or case-lambda expression, MzScheme looks for a 'method-arity-error property attached to the expression (see section 12.6.2). If it is present with a true value, and if no case of the procedure accepts zero arguments, then the procedure is marked so that an exn:application:arity exception involving the procedure will hide the first argument, if one was provided. (Hiding the first argument is useful when the procedure implements a method, where the first argument is implicit in the original source). The property affects only the format of exn:application:arity exceptions, not the result of procedure-arity. A primitive procedure is a built-in procedure that is implemented in low-level language. Not all built-in procedures are primitives, but almost all R5RS procedures are primitives, as are most of the procedures described in this manual. • (primitive-result-arity prim-proc) returns the arity of the result of the primitive procedure prim-proc (as opposed to the procedure's input arity as returned by arity; see section 3.10.1). For most primitives, this procedure returns 1, since most primitives return a single value when applied. For information about arity values, see section 3.10.1. See section 6.2.4 for information about the names of primitives, and the names inferred for lambda and case-lambda procedures. The force procedure can only be applied to values returned by delay, and promises are never implicitly forced. (promise? v) returns #t if v is a promise created by delay, #f otherwise. (make-hash-table [flag-symbol flag-symbol]) creates and returns a new hash table. If provided, each flag-symbol must one of the following: By default, key comparisons use eq?. If the second flag-symbol is redundant, the exn:application:mismatch exception is raised. Two hash tables are equal? if they are created with the same flags, and if they map the same keys to equal? values (where ``same key'' means either eq? or equal?, depending on the way the hash table compares keys). 
(make-immutable-hash-table assoc-list [flag-symbol]) creates an immutable hash table. (See also immutable? in section 3.8.) The assoc-list must be a list of pairs, where the car of each pair is a key, and the cdr is the corresponding value. The mappings are added to the table in the order that they appear in assoc-list, so later mappings can hide earlier mappings. If the optional flag-symbol argument is provided, it must be 'equal, and the created hash table compares keys with equal?; otherwise, the created table compares keys with eq?. (hash-table? v [flag-symbol flag-symbol]) returns #t if v was created by make-hash-table or make-immutable-hash-table with the given flag-symbols (or more), #f otherwise. Each provided flag-symbol must be a distinct flag supported by make-hash-table; if the second flag-symbol is redundant, the exn:application:mismatch exception is raised. (hash-table-put! hash-table key-v v) maps key-v to v in hash-table, overwriting any existing mapping for key-v. If hash-table is immutable, the exn:application:type exception is raised. (hash-table-get hash-table key-v [failure-thunk]) returns the value for key-v in hash-table. If no value is found for key-v, then the result of invoking failure-thunk (a procedure of no arguments) is returned. If failure-thunk is not provided, the exn:application:mismatch exception is raised when no value is found for key-v. (hash-table-remove! hash-table key-v) removes the value mapping for key-v if it exists in hash-table. If hash-table is immutable, the en:application:type exception is raised. (hash-table-map hash-table proc) applies the procedure proc to each element in hash-table, accumulating the results into a list. The procedure proc must take two arguments: a key and its value. See the caveat below about concurrent modification. (hash-table-for-each hash-table proc) applies the procedure proc to each element in hash-table (for the side-effects of proc) and returns void. The procedure proc must take two arguments: a key and its value. See the caveat below about concurrent modification. (hash-table-count hash-table) returns the number of keys mapped by hash-table. If hash-table is not created with 'weak, then the result is computed in constant time and atomically. If hash-table is created with 'weak, see the caveat below about concurrent modification. (eq-hash-code v) returns an exact integer; for any two eq? values, the returned integer is the same. Furthermore, for the result integer k and any other exact integer j, (= k j) implies (eq? k j). (equal-hash-code v) returns an exact integer; for any two equal? values, the returned integer is the same. Furthermore, for the result integer k and any other exact integer j, (= k j) implies (eq? k j). If v contains a cycle through pairs, vectors, boxes, and inspectable structure fields, then equal-hash-code applied to v will loop indefinitely. Caveat concerning concurrent modification: A hash table can be manipulated with hash-table-get, hash-table-put!, and hash-table-remove! concurrently by multiple threads, and the operations are protected by a table-specific semaphore as needed. A few caveats apply, however: • If a thread is terminated while applying hash-table-get, hash-table-put!, or hash-table-remove! to a hash table that uses equal? comparisons, all current and future operations on the hash table block indefinitely. • The hash-table-map, hash-table-for-each, and hash-table-count procedures do not use the table's semaphore. 
Consequently, if a hash table is extended with new keys by another thread while a map or for-each is in process, arbitrary key-value pairs can be dropped or duplicated in the map, for-each, or count. Similarly, if the map or for-each procedure itself extends the table, arbitrary key-value pairs can be dropped or duplicated. However, key mappings can be deleted or remapped by any thread with no adverse effects (i.e., the change does not affect a traversal if the key has been seen already, otherwise the traversal skips a deleted key or uses the remapped key's new value). Caveat concerning mutable keys: If a key into an equal?-based hash table is mutated (e.g., a key string is modified with string-set!), then the hash table's behavior for put and get operations becomes unpredictable.
{"url":"http://download.plt-scheme.org/doc/209/html/mzscheme/mzscheme-Z-H-3.html","timestamp":"2014-04-19T11:57:05Z","content_type":null,"content_length":"82124","record_id":"<urn:uuid:9c443094-319f-4e7e-b750-9d17b2064aad>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
East Boston Prealgebra Tutor
Find an East Boston Prealgebra Tutor

...I pride myself on having a high sense of empathy, which allows me to relate to and understand the student's perspective, identify any barriers to learning, and find ways to work around them. I hope to continue to teach and inspire others, as well as play a greater part in my local community along th...
22 Subjects: including prealgebra, calculus, geometry, algebra 1

...My social dance and wedding students report greater ease of social ability and happiness from a beautiful unforgettable first dance at their wedding. I teach most of my violin lessons on the weekends, dance lessons on evenings and weekends, and tutoring school subjects afternoons and early eveni...
13 Subjects: including prealgebra, English, writing, geometry

...Students with whom I have worked include students with Aspergers, ADHD, executive functioning difficulties, learning disabilities, and behavioral and emotional challenges. In addition, I have a lot of familiarity with IEPs and have experience in writing and revising goals and objectives, in inte...
33 Subjects: including prealgebra, English, reading, writing

...I use my intuitive sense and professional coaching skills to listen and to problem solve with the student to find ways that make sense to them. I also have over 15 years of communications and graphic design experience working in a corporate environment using multiple software applications to do ...
8 Subjects: including prealgebra, geometry, biology, elementary science

I am currently employed as a Development Engineer for a medical device company after having graduated from Boston University with a degree in Biomedical Engineering. As a student athlete throughout college, I learned the value of time management and how to best utilize my strong work ethic, and suc...
15 Subjects: including prealgebra, chemistry, English, writing
{"url":"http://www.purplemath.com/East_Boston_prealgebra_tutors.php","timestamp":"2014-04-16T04:21:43Z","content_type":null,"content_length":"24153","record_id":"<urn:uuid:819191eb-f826-4a85-95df-8ef35766af25>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Ordinary Differential Equations/Maximum domain of solution

Even if a differential equation satisfies the Picard–Lindelöf theorem, it may still not have a solution for all of $\mathbb{R}$ that is uniquely determined by a single initial condition. Here we study the maximum interval to which a solution may be extended.

Uniqueness of solution over intervals

Theorem (Local uniqueness implies global uniqueness over intervals). If
• y is a solution to an IVP,
• y is locally unique (by the Picard–Lindelöf theorem for example), and
• the domain of y is an interval (which contains $x_0$, otherwise the initial condition makes no sense),
then y is the only solution to that IVP which has that domain.

Maximal solution

Definition (Maximal solution and maximal domain of solution). The maximal solution $y_{max}$ of an IVP which is locally unique is the solution such that
• its domain is an interval (which contains $x_0$, otherwise the initial condition makes no sense), and
• it is not a restriction of any other solution whose domain is a larger interval.
$Dom(y_{max})$ is called the maximal domain of solution and is denoted $]x_-,x_+[$.

Theorem. Every IVP has a unique maximal solution.

Theorem. If there is a solution $y_\mathbb{R}$ with domain $\mathbb{R}$ then it is the maximal solution. We must verify that:
• $Dom(y_\mathbb{R})$ is an interval. Obviously true.
• $y_{\mathbb{R}}$ is not a restriction of any other solution. Obvious, because the domain of $y_{\mathbb{R}}$ is already all of $\mathbb{R}$.

Behavior at boundary

Theorem (Behavior at the boundary of a maximal solution). If the domain of a maximal solution is bounded ($x_\pm \neq \pm \infty$), then at $x_\pm$ one or both of the following happen:
• explosion in finite time: $\lim_{x \to x_\pm} ||y(x)|| = \infty$;
• y falls out of the domain of F: $\lim_{x \to x_\pm} y(x) \in \overline{Dom(F)} \setminus Dom(F)$.

Sufficient condition for $Dom(y)=\mathbb{R}$

Theorem (Growth at most linear implies $Dom(y)=\mathbb{R}$). If F grows at most linearly with y, $||F(x,y)|| \leq a||y|| + b$, then the domain of the maximal solution is all of $\mathbb{R}$.

$y'=y,\ y(0)=1$

Example. $y'=y,\, y(0)=1$ has infinitely many solutions, one for each admissible domain. Here F(x,y)=y, which is $C^1$ in both x and y, and therefore satisfies the Picard–Lindelöf theorem on all of its domain, so that solutions are locally unique. All of the following are solutions to this IVP:
\begin{align}y_1 : \, & ]-2,2[ \, \to \, ]e^{-2},e^{2}[ \\ & x \mapsto e^x \end{align}
\begin{align}y_2 : \, & ]-1,3[ \, \to \, ]e^{-1},e^{3}[ \\ & x \mapsto e^x \end{align}
\begin{align}y_\mathbb{R} : \, & \mathbb{R} \, \to \, \mathbb{R}_+ \\ & x \mapsto e^x \end{align}
\begin{align}y_3 : \, & Dom(y_3) = ]-1,1[ \, \cup \, ]2,4[ \\ & x \mapsto \begin{cases}e^x & x \in ]-1,1[ \\ y(3)e^{(x-3)} & x \in ]2,4[\end{cases}\end{align}

Different domains mean completely different solutions. A function is a set of ordered pairs {(x,f(x))}, with $x \in Dom(f)$. If the domains are different, the sets are completely different, and therefore the solutions are completely different too.

$y_1$, $y_2$ and $y_\mathbb{R}$ are the only solutions on their respective domains. This is a consequence of the theorem on uniqueness over intervals, because their domains are all intervals.

If the domain is not an interval there is generally no uniqueness. For the fixed domain of $y_3$, $]-1,1[ \, \cup \, ]2,4[$, which is clearly not an interval, any value of y(3) we choose gives a different solution. Therefore, even if the domain is fixed but not an interval and there is local uniqueness, there may not be global uniqueness.
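One way to see why the interval hypothesis in the first theorem is essential (a standard connectedness argument, sketched here): let $y$ and $z$ be two solutions of the IVP on the same interval $I \ni x_0$ and set $A = \{x \in I : y(x) = z(x)\}$. Then $A$ is non-empty (it contains $x_0$), closed in $I$ (both solutions are continuous), and open in $I$ (by local uniqueness around any point of agreement), so $A = I$ because $I$ is connected. On a disconnected domain such as $]-1,1[ \,\cup\, ]2,4[$, the argument only forces agreement on the component containing $x_0$, which is exactly what the example with $y_3$ exploits.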
This is the major reason why we restrict ourselves to a maximal domain that is an interval, even if there may be larger domains with non-unique solutions. To determine the solution uniquely, the initial value at 0 is not enough; we would have to set a value on $]2,4[$ such as y(3). This happens because the initial condition at 0 is separated from the interval $]2,4[$, so we need to fix a value inside $]2,4[$, such as y(3), in order to fully determine the solution.

$y_\mathbb{R}$ is the maximal solution $y_{max}$. Since its domain is all of $\mathbb{R}$, it must be the maximal solution by the proposition above. We could see immediately that $Dom(y_{max})=\mathbb{R}$ without solving the equation, because the growth of F is linear, as stated in the theorem on at most linear growth. We note also that, as stated in the definition, any solution whose domain is an interval, such as $y_1$ and $y_2$, is a restriction of $y_\mathbb{R}$.

Solutions which are not defined on intervals may not be restrictions of the maximal solution $y_{max}$. $y_3$ is not a restriction of $y_\mathbb{R}$ unless $y(3) = e^3$. However, $y_3$ is not unique, and is unacceptable in a physical situation since there is no causality between the two intervals of its domain. So it is not very serious if it is not a restriction of $y_\mathbb{R}$, which is generally the 'best' one.

$y' = -y^2$

Example. $y' = -y^2$ with $y(x_0)=y_0$. Here $F(x,y)=-y^2$, which is $C^1 \, \forall (x,y) \in \mathbb{R}^2$ and therefore Lipschitz continuous, satisfying the Picard–Lindelöf theorem. This example shows that:
• the maximal domain may depend on the initial condition;
• the maximal domain may be different from $\mathbb{R}$;
• at the borders of the maximal domain the function may go to infinity.
The maximal domains of solution are
$\begin{cases} \mathbb{R} & y_0 = 0 \\ ]x_0-\frac{1}{y_0},+\infty[ & y_0 > 0 \\ ]-\infty,x_0-\frac{1}{y_0}[ & y_0 < 0 \end{cases}$
which clearly depend on the initial condition, and are not all of $\mathbb{R}$ except for the trivial solution. At $x_0-\frac{1}{y_0}$ the maximal solution tends to infinity.

Extending the domain to the other side of the singularity leads to non-uniqueness. Fix $y(1)=1$. One might want to extend the domain of $y_{max}$ to $\mathbb{R}^*$. But then there is no more uniqueness, since for any $a>0$ the following is a solution:
\begin{align}y_a : \, & Dom(y_a) = \mathbb{R}^* \\ & x \mapsto \begin{cases}\frac{1}{x} & x>0 \\ \frac{1}{x-a} & x<0\end{cases} \end{align}
This is no surprise, since $\mathbb{R}^*$ is not an interval.

The $y_a$ above are not maximal solutions. Any solution whose domain is an interval is a restriction of one of these $y_a$, but $y_a$ itself is not a maximal solution since its domain is not an interval. We wouldn't want those to be maximal solutions, since they would not be unique.

$y' = \frac{-xy}{\ln(y)}$

Example. $y' = \frac{-xy}{\ln(y)},\, y(0) = e^2$. Here $F(x,y)=\frac{-xy}{\ln(y)}$, which is $C^1$ and therefore Lipschitz continuous for $y > 0$, $y \neq 1$, satisfying the Picard–Lindelöf theorem. F is undefined at $y = 1$ (where $\ln(y)=0$) and for $y \le 0$, so that $\Omega=\mathbb{R} \times (\,]0,1[\,\cup\,]1,+\infty[\,)$. The maximal solution is
\begin{align}y : \, & Dom(y) = \, ]-2,2[ \\ & x \mapsto \exp(\sqrt{4-x^2}) \end{align}

At the border the solution may fall out of the domain of F. The solution is only defined for $x \in ]-2,2[$, because if $x=\pm 2$ then $y=1$ and $\ln(y)=0$, so the point leaves the domain of F. The maximal solution ends at those points, which is one of the two possibilities for finite domains. The solution does not explode in this case.
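A quick sketch of how the explicit solutions in the last two examples can be obtained by separation of variables (using the initial conditions above):
$$y' = -y^2:\quad \frac{dy}{y^2} = -\,dx \;\Rightarrow\; -\frac{1}{y} = -x + C \;\Rightarrow\; y(x) = \frac{1}{x - x_0 + 1/y_0},$$
which blows up as $x$ approaches $x_0 - 1/y_0$ (from the right when $y_0>0$, from the left when $y_0<0$), matching the domains listed above.
$$y' = \frac{-xy}{\ln y}:\quad \frac{\ln y}{y}\,dy = -x\,dx \;\Rightarrow\; \frac{(\ln y)^2}{2} = -\frac{x^2}{2} + C,$$
and $y(0)=e^2$ gives $(\ln y)^2 = 4 - x^2$, hence $\ln y = \sqrt{4-x^2}$ (the positive root, since $\ln y(0) = 2 > 0$) and $y = \exp(\sqrt{4-x^2})$, which reaches $y=1$ (where $\ln y = 0$ and F is undefined) exactly at $x = \pm 2$.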
{"url":"http://en.m.wikibooks.org/wiki/Ordinary_Differential_Equations/Maximum_domain_of_solution","timestamp":"2014-04-17T22:21:12Z","content_type":null,"content_length":"30131","record_id":"<urn:uuid:d87db81c-0c60-42e5-8597-1eb6a2365ed5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Difference Methods for Initial-Value Problems. Interscience
Results 1 - 10 of 159

- ARTIFICIAL INTELLIGENCE, 1981. Cited by 1727 (7 self).
Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image.

- Artificial Intelligence, 1981. Cited by 191 (14 self).
An iterative method for computing shape from shading using occluding boundary information is proposed. Some applications of this method are shown. We employ the stereographic plane to express the orientations of surface patches, rather than the more commonly used gradient space. Use of the stereographic plane makes it possible to incorporate occluding boundary information, but forces us to employ a smoothness constraint different from the one previously proposed. The new constraint follows directly from a particular definition of surface smoothness. We solve the set of equations arising from the smoothness constraints and the image-irradiance equation iteratively, using occluding boundary information to supply boundary conditions. Good initial values are found at certain points to help reduce the number of iterations required to reach a reasonable solution. Numerical experiments show that the method is effective and robust. Finally, we analyze scanning electron microscope (SEM) pictures using this method. Other applications are also proposed.

- SIAM Rev, 1997. Cited by 113 (8 self).
Abstract. If a matrix or linear operator A is far from normal, its eigenvalues or, more generally, its spectrum may have little to do with its behavior as measured by quantities such as ‖A^n‖ or ‖exp(tA)‖. More may be learned by examining the sets in the complex plane known as the pseudospectra of A, defined by level curves of the norm of the resolvent, ‖(zI − A)^-1‖.
Five years ago, the author published a paper that presented computed pseudospectra of thirteen highly nonnormal matrices arising in various applications. Since that time, analogous computations have been carried out for differential and integral operators. This paper, a companion to the earlier one, presents ten examples, each chosen to illustrate one or more mathematical or physical principles.

- 1994. Cited by 41 (8 self).
This work is directed toward approximating the evolution of forecast error covariances for data assimilation. We study the performance of different algorithms based on simplification of the standard Kalman filter (KF). These are suboptimal schemes (SOS's) when compared to the KF, which is optimal for linear problems with known statistics. The SOS's considered here are several versions of optimal interpolation (OI), a scheme for height error variance advection, and a simplified KF in which the full height error covariance is advected. In order to employ a methodology for exact comparison among these schemes we maintain a linear environment, choosing a beta-plane shallow water model linearized about a constant zonal flow for the testbed dynamics. Our results show that constructing dynamically-balanced forecast error covariances, rather than using conventional geostrophically-balanced ones, is essential for successful performance of any SOS. A posteriori initialization of SOS's to comp...

- IN "GABOR ANALYSIS AND ALGORITHMS: THEORY AND APPLICATIONS", 1998. Cited by 38 (9 self).
We introduce the Banach space S_0 ⊆ L^2 which has a variety of properties making it a useful tool in Gabor analysis. S_0 can be characterized as the smallest time-frequency homogeneous Banach space of (continuous) functions. We also present other characterizations of S_0 turning it into a very flexible tool for Gabor analysis and allowing for simplifications of various proofs. A careful ...

- 1992. Cited by 34 (13 self).
Let {v^ε(x,t)}_{ε>0} be a family of approximate solutions for the nonlinear scalar conservation law u_t + f(u)_x = 0 with C^1_0 initial data. Assume that {v^ε(x,t)} are Lip^+-stable in the sense that they satisfy Oleinik's E-entropy condition. It is shown that if these approximate solutions are Lip'-consistent, i.e., if ‖v^ε(·,0) − u(·,0)‖_{Lip'(x)} + ‖v^ε_t + f(v^ε)_x‖_{Lip'(x,t)} = O(ε), then they converge to the entropy solution, and the convergence rate estimate ‖v^ε(·,t) − u(·,t)‖_{Lip'(x)} = O(ε) holds.
These convergence rate results are demonstrated in the context of entropy satisfying finite-difference and Glimm's schemes. Key Words. Conservation laws, entropy stability, weak consistency, error estimates,post-processing, finite-difference approximations, Glimm scheme AMS(MOS) subject classification. 35L65, 65M10,65M15. 1. Intro... - IMA Journal of Numerical Analysis , 2003 "... An implicit method is developed for the numerical solution of option pricing models where it is assumed that the underlying process is a jump diffusion. This method can be applied to a variety of contingent claim valuations, including American options, various kinds of exotic options, and models wit ..." Cited by 32 (13 self) Add to MetaCart An implicit method is developed for the numerical solution of option pricing models where it is assumed that the underlying process is a jump diffusion. This method can be applied to a variety of contingent claim valuations, including American options, various kinds of exotic options, and models with uncertain volatility or transaction costs. Proofs of timestepping stability and convergence of a fixed point iteration scheme are presented. For typical model parameters, it is shown that the fixed point iteration reduces the error by two orders of magnitude at each iteration. The correlation integral is computed using a fast Fourier transform (FFT) method. Techniques are developed for avoiding wrap-around effects. Numerical tests of convergence for a variety of options are "... wireless communications ..." - In Methods in Neuronal Modeling , 1989 "... Introduction In this chapter we will discuss some practical and technical aspects of numerical methods that can be used to solve the equations that neuronal modelers frequently encounter. We will consider numerical methods for ordinary differential equations (ODEs) and for partial differential equa ..." Cited by 24 (1 self) Add to MetaCart Introduction In this chapter we will discuss some practical and technical aspects of numerical methods that can be used to solve the equations that neuronal modelers frequently encounter. We will consider numerical methods for ordinary differential equations (ODEs) and for partial differential equations (PDEs) through examples. A typical case where ODEs arise in neuronal modeling is when one uses a single lumped-soma compartmental model to describe a neuron. Arguably the most famous PDE system in neuronal modeling is the phenomenological model of the squid giant axon due to Hodgkin and Huxley. The difference between ODEs and PDEs is that ODEs are equations in which the rate of change of an unknown function of a single variable is prescribed, usually the derivative with respect to time. In contrast, PDEs involve the rates of change of the solution with respect to two or more independent variables, such as time and space. The numerical methods we will discuss for both ODEs and , 2003 "... In most models of reacting gas dynamics, the characteristic time scales of chemical reactions are much shorter than the hydrodynamic and di#usive time scales, rendering the reaction part of the model equations sti#. Moreover, nonlinear forcings may introduce into the solutions sharp gradients or sho ..." Cited by 24 (12 self) Add to MetaCart In most models of reacting gas dynamics, the characteristic time scales of chemical reactions are much shorter than the hydrodynamic and di#usive time scales, rendering the reaction part of the model equations sti#. 
Moreover, nonlinear forcings may introduce into the solutions sharp gradients or shocks, the robust behavior and correct propagation of which require the use of specialized spatial discretization procedures. This study presents high-order conservative methods for the temporal integration of model equations of reacting flows. By means of a method of lines discretization on the flux di#erence form of the equations, these methods compute approximations to the cell-averaged or finite-volume solution. The temporal discretization is based on a multi-implicit generalization of spectral deferred correction methods. The advection term is integrated explicitly, and the di#usion and reaction terms are treated implicitly but independently, with the splitting errors reduced via the spectral deferred correction procedure. To reduce computational cost, di#erent time steps may be used to integrate processes with widely-di#ering time scales. Numerical results show that the conservative nature of the methods allows a robust representation of discontinuities and sharp gradients; the results also demonstrate the expected convergence rates for the methods of orders three, four, and five for smooth problems.
Any relationship between the Frobenius homomorphism and Frobenius categories?
Question: I do not understand number theory or characteristic-p algebraic geometry at all; I just know a little about the Frobenius homomorphism between two schemes. On the other hand, when I learned something about triangulated categories I found that there is also a definition of a "Frobenius morphism". The definition is as follows: there are two categories C and D, with functors f_*: D ---> C and f^*: C ---> D such that f^* is left adjoint to f_*; we call f_* a Frobenius morphism if there exists an auto-equivalence G of C such that the composition f^* G is right adjoint to f_*. First question: is there any relationship between these two notions of Frobenius morphism? Second question: do Frobenius categories play a role in algebraic geometry? All comments related to this are welcome. (Tags: ag.algebraic-geometry, noncommutative-geometry, triangulated-categories)
Answer (accepted): That notion of Frobenius morphism between categories is a generalization of Frobenius algebras (those which have a non-degenerate multiplicative bilinear form) to triangulated categories. This is quite unrelated to the Frobenius morphism on a scheme. There are a lot of things named after Frobenius! On the other hand, Frobenius categories show up all the time in geometric contexts. They provide a nice way to construct triangulated categories (and the triangulated categories so constructed are particularly nice: they are "algebraic"). For example, they are used to construct one of the categories equivalent to the derived category of coherent sheaves on projective space in the canonical example of why derived categories are relevant to geometry! This is explained in the book by Gelfand and Manin on homological algebra, if I recall.
Comment (Shizhuo Zhang, Jan 1 '10 at 16:56): Thank you. Yes, I know the stable category of a Frobenius category is a triangulated category, and I know why derived categories are related to algebraic geometry. Can you point out a reference which shows what you said about "they are used to construct one of the categories equivalent to the derived category of coherent sheaves on projective space"?
[glm] Use quaternions to move the camera direction according to mouse movement
09-23-2013, 01:22 PM: I am trying to use quaternions to move the camera direction vector in the following way. This code is perfectly working:
Code :
glm::quat temp1 = glm::normalize( glm::quat((GLfloat)( -Input1.MouseMove.x * mouse_sens * time_step), glm::vec3(0.0, 1.0, 0.0)) );
glm::quat temp2 = glm::normalize( glm::quat((GLfloat)( -Input1.MouseMove.y * mouse_sens * time_step), dir_norm) );
Camera1.SetCameraDirection(temp2 * (temp1 * Camera1.GetCameraDirection() * glm::inverse(temp1)) * glm::inverse(temp2));
This code is not:
Code :
glm::quat temp1 = glm::normalize( glm::quat((GLfloat)( -Input1.MouseMove.x * mouse_sens * time_step), glm::vec3(0.0, 1.0, 0.0)) );
glm::quat temp2 = glm::normalize( glm::quat((GLfloat)( -Input1.MouseMove.y * mouse_sens * time_step), dir_norm) );
glm::quat temp3 = temp2 * temp1;
Camera1.SetCameraDirection(temp3 * Camera1.GetCameraDirection() * glm::inverse(temp3));
The two pieces of code, from my understanding of glm, should produce the same result. However, they do not. The first piece of code produces the expected result. With the second piece of code, when I move the mouse I get extremely small movements in an apparently random direction. Why can't I multiply quaternions successfully? Am I using GLM in a wrong way?
09-25-2013, 10:36 AM: Try
Code :
glm::quat temp3 = glm::normalize(temp2 * temp1);
which is not equal to b^-1 a^-1 = b* a* / (||b||^2 ||a||^2) unless ||ab|| = ||a|| = ||b|| = 1.
09-25-2013, 10:57 AM: Actually, I think this is wrong, since ||ab||^2 = (ab)(ab)* = a b b* a* = ||b||^2 a a* = ||a||^2 ||b||^2 = 1 x 1 = 1.
09-25-2013, 11:05 AM: I would make sure that Camera1.SetCameraDirection takes a const reference to a quaternion and that Camera1.GetCameraDirection() returns a quaternion by value (weird things could happen if it returns a reference...). No more ideas....
10-02-2013, 01:59 PM: The issue was that mamo had the wrong expectation for the quaternion constructors: glm::quat doesn't take an axis and an angle, but there is a function for that called angleAxis.
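Following up on that last point, here is roughly what the non-working snippet would look like with glm::angleAxis, which is the angle-plus-axis constructor the poster seems to have expected. The angle is in radians in current GLM versions, the axis should be normalized, and all the surrounding names (Input1, mouse_sens, time_step, dir_norm, Camera1) are simply taken from the snippets above, so treat this as a sketch rather than a tested drop-in fix.
Code :
#include <glm/gtc/quaternion.hpp>   // glm::quat, glm::angleAxis, glm::inverse

// angleAxis(angleRadians, axis) builds the rotation the (angle, axis) constructor was assumed to build.
glm::quat temp1 = glm::angleAxis((GLfloat)(-Input1.MouseMove.x * mouse_sens * time_step),
                                 glm::vec3(0.0f, 1.0f, 0.0f));
glm::quat temp2 = glm::angleAxis((GLfloat)(-Input1.MouseMove.y * mouse_sens * time_step),
                                 dir_norm);                  // dir_norm assumed already normalized

glm::quat temp3 = temp2 * temp1;    // product of unit quaternions is already unit length
Camera1.SetCameraDirection(temp3 * Camera1.GetCameraDirection() * glm::inverse(temp3));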
Error Reduction for Extractors
Ran Raz, Omer Reingold, and Salil Vadhan.
We present a general method to reduce the error of any extractor. Our method works particularly well in the case that the original extractor extracts up to a constant fraction of the source min-entropy and achieves a polynomially small error. In that case, we are able to reduce the error to (almost) any epsilon, using only O(log(1/epsilon)) additional truly random bits (while keeping the other parameters of the original extractor more or less the same). In other cases (e.g., when the original extractor extracts all the min-entropy or achieves only a constant error) our method is not optimal but it is still quite efficient and leads to improved constructions of extractors. Using our method, we are able to improve almost all known extractors in the case where the error required is relatively small (e.g., less than polynomially small error). In particular, we apply our method to the new extractors of [Tre99,RRV99a] to get improved constructions in almost all cases. Specifically, we obtain extractors that work for sources of any min-entropy on strings of length n which: (a) extract any 1/n^{gamma} fraction of the min-entropy using O(log n+log(1/epsilon)) truly random bits (for any gamma>0), (b) extract any constant fraction of the min-entropy using O(log^2 n+log(1/epsilon)) truly random bits, and (c) extract all the min-entropy using O(log^3 n+(log n)*log(1/epsilon)) truly random bits.
BibTeX entry:
  author       = {Ran Raz and Omer Reingold and Salil Vadhan},
  title        = {Error Reduction for Extractors},
  booktitle    = {Proceedings of the 40th Annual Symposium on the Foundations of Computer Science},
  organization = {IEEE},
  address      = {New York, NY},
  month        = {October},
  year         = {1999},
  note         = {To appear}
Struggling with calc of interest compounding monthly
I have just read the original post again, and I can see a bit which I did not read the first time. This makes things a lot easier, and is not like a mortgage repayment, because that is based upon the assumption that repayments will be made each month which are taken off the loan (well it is in the case I was thinking of). If I am understanding things correctly, you are supposed to be doing something like this:
Let A = 200000 (the amount of the original loan).
Let i = 14/(12 * 100), so i = 0.01166666667 (to the accuracy of a calculator). The extra division by 100 is to convert a percentage to a decimal. The division by 12 is to convert into an amount per month. (Strictly speaking the 12th root should be taken, but apparently in USA conventions this is not how it is done. Instead it is divided by 12 for simplicity, and nobody worries about the fact that adding 1, raising to the power of 12, and subtracting 1 at the end does not give back the annual rate: compare 1.01166666667^12 to 1.14; they are not the same.)
So if I add 1 to the value of i to represent adding 100%: F = i + 1, so F = 1.01166666667. My variable F is supposed to be the factor of increase in the amount owed per month of compounding.
Using my interpretation, and this is the bit that I do not know whether it is correct, we should compute the amount owed as A * F^n, where n is the number of months in which interest is accumulated, assuming no money is paid back. I am getting 430032.30. This assumes that there are n = 66 months of accumulation. (Not sure whether this is correct.)
Last edited by SteveB (2013-06-03 22:16:04)
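A quick way to sanity-check the arithmetic above is to compute A * F^n directly. The sketch below just reproduces the numbers used in the post (a 14% nominal annual rate divided by 12, and 66 months), so the figures are the poster's assumptions rather than a statement about how the original loan was actually structured.
Code :
#include <cmath>
#include <cstdio>

int main() {
    const double A = 200000.0;            // original loan amount
    const double i = 14.0 / (12 * 100);   // monthly rate, US convention: nominal annual rate / 12
    const double F = 1.0 + i;             // monthly growth factor
    const int    n = 66;                  // months of accumulation, no repayments
    double owed = A * std::pow(F, n);     // amount owed after n months
    std::printf("Owed after %d months: %.2f\n", n, owed);   // prints roughly 430032.30
    return 0;
}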
Precise definition of a limit
That's not much of an improvement over what you had in the first post. Here is what I think the given problem is: f(x) = x^2. Find a value of delta so that when |x - 1| < delta, |x^2 - 1| < 1/2. In other words, how close to 1 must x be so that x^2 will be within 1/2 of 1?
Draw a graph of the function. On your graph, draw a horizontal line through the point (1, 1). Draw two more horizontal lines, one 1/2 unit above the first line and the other 1/2 unit below the first line. At the points where these two lines intersect the graph of y = x^2 in the first quadrant, draw vertical lines down to the x-axis. The two intervals to the left and right of (1, 0) can help you find what delta needs to be.
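For concreteness, the two intersection points in the first quadrant sit where x^2 = 1.5 and x^2 = 0.5, so any delta no larger than min(sqrt(1.5) - 1, 1 - sqrt(0.5)) works. The tiny check below is just that arithmetic, added here as an illustration rather than part of the original reply.
Code :
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    double right = std::sqrt(1.5) - 1.0;   // ~0.2247: gap from 1 to where x^2 hits 1.5
    double left  = 1.0 - std::sqrt(0.5);   // ~0.2929: gap from 1 to where x^2 hits 0.5
    double delta = std::min(left, right);  // any delta <= this satisfies the requirement
    std::printf("delta <= %.4f (right gap %.4f, left gap %.4f)\n", delta, right, left);
    return 0;
}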
Help: Related rates
March 18th 2007, 08:07 AM, #1 (Senior Member, Jul 2006, Shabu City): Hi people, this is my last problem, so sorry for asking too much, because our finals day is tomorrow...
1.) A conical tent with no floor is to have a capacity of 1000 cubic meters. Find the dimensions that minimize the amount of canvas required.
2.) A box with a square base has an open top. The area of the material in this box is 100 cm^2. What should the dimensions be in order to make the volume as large as possible?
Just give me the formula, I'll be differentiating it. Thanks SOOO much for all the big help! My next bunch of questions will be about integrals.
March 18th 2007, 08:54 AM, #2 (Grand Panjandrum, Nov 2005): The curved surface area of a cone is slant height * perimeter of base / 2, so if the height is h and the radius of the base is r, we have slant height s = sqrt(h^2 + r^2). So the area of canvas required is A = sqrt(h^2 + r^2) (2 pi r)/2. The volume is V = (1/3)(pi r^2) h = 1000, so h = 3000/(pi r^2), and the area of canvas is A = sqrt([3000/(pi r^2)]^2 + r^2) (2 pi r)/2. But if A is minimised then so is A^2, so you may as well minimise A^2 = ([3000/(pi r^2)]^2 + r^2) pi^2 r^2.
March 18th 2007, 08:59 AM, #3 (Grand Panjandrum, Nov 2005): The area of material is A = b^2 + 4*b*h = 100, where h is the height of the box and b the side of the base in cm, so h = [100 - b^2]/(4b) ... (1). The volume is V = b^2 h = b [100 - b^2]/4 ... (2). So to find the b that maximises the volume we solve dV/db = 0 (determining which solutions correspond to maxima/minima if necessary), then solve for h using (1).
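Carrying the differentiation through (a check on the formulas quoted above, not part of the original thread): for the box, dV/db = (100 - 3b^2)/4 = 0 gives b = sqrt(100/3) and h = b/2; for the cone, A^2 = 9,000,000/r^2 + pi^2 r^4, so d(A^2)/dr = 0 gives r^6 = 4,500,000/pi^2 and h = 3000/(pi r^2). A short numerical sketch of those values:
Code :
#include <cmath>
#include <cstdio>

int main() {
    const double pi = std::acos(-1.0);

    // Box: A = b^2 + 4bh = 100, V = b(100 - b^2)/4, dV/db = 0  =>  b = sqrt(100/3)
    double b = std::sqrt(100.0 / 3.0);
    double h_box = (100.0 - b * b) / (4.0 * b);
    std::printf("box: b = %.4f cm, h = %.4f cm (h = b/2), V = %.2f cm^3\n",
                b, h_box, b * b * h_box);

    // Cone: A^2 = 9e6/r^2 + pi^2 r^4, d(A^2)/dr = 0  =>  r^6 = 4.5e6/pi^2
    double r = std::pow(4.5e6 / (pi * pi), 1.0 / 6.0);
    double h_cone = 3000.0 / (pi * r * r);
    std::printf("cone: r = %.3f m, h = %.3f m (h = r*sqrt(2)), V = %.1f m^3\n",
                r, h_cone, pi * r * r * h_cone / 3.0);
    return 0;
}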
Having Trouble Factoring? [Archive] - Free Math Help Forum
05-09-2006, 10:52 PM: A plane travels 1200 miles against the jet stream, causing its airspeed to be decreased by 20 miles per hour. On the return flight, the plane travels with the jet stream so that its airspeed is increased by 20 miles per hour. If the total flight time of the round trip is 6 and 1/3 hours, what would be the plane's rate in still air?
1200/(r-20) + 1200/(r+20) = 19/3. The LCM is 3(r-20)(r+20), so 3*1200(r+20) + 3*1200(r-20) = 19(r-20)(r+20). I am stuck at this point.
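Continuing from where the post stops (one way to finish, not the original poster's work): expanding the last equation gives 7200r = 19(r^2 - 400), i.e. 19r^2 - 7200r - 7600 = 0, and the positive root of that quadratic is the speed in still air. A small numerical check:
Code :
#include <cmath>
#include <cstdio>

int main() {
    // 19r^2 - 7200r - 7600 = 0, from 3*1200(r+20) + 3*1200(r-20) = 19(r-20)(r+20)
    double a = 19.0, b = -7200.0, c = -7600.0;
    double r = (-b + std::sqrt(b * b - 4 * a * c)) / (2 * a);   // positive root
    std::printf("r = %.1f mph\n", r);                           // 380.0
    std::printf("check: %.4f hours vs 19/3 = %.4f\n",
                1200.0 / (r - 20) + 1200.0 / (r + 20), 19.0 / 3.0);
    return 0;
}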
WebDiarios de Motocicleta Various fields have various notions of "nice proofs," be they combinatorial, or elementary, or bijective. In TCS, perhaps the correct standard for lower bound proofs should be "encoding proofs." In these proofs, one starts with the assumption that some algorithm exists, and derives from that some impossible encoding algorithm, e.g. one that can always compress n bits into n-1 bits. A normal lower bound will have a lot of big-bad-ugly statements -- "there are at least A bad sets (cf Definition 12), each containing at least B elements, of which at most a fraction of C are ugly (cf Definition 6)". To deal with such things, one invokes concentrations left and right, and keeps throwing away rows, columns, elements, and any hope that the reader will not get lost in the There are 3 huge problems with this: 1. Most lower bounds cannot be taught in a regular class. But we can't go on saying how problems like P-vs-NP are so awesome, and keep training all our graduate students to round LPs better and squeeze randomness from stone. 2. The reader will often not understand and appreciate the simple and beautiful idea, as it is too hard to pull apart from its technical realization. Many people in TCS seem to think lower bounds are some form of dark magic, which involves years of experience and technical development. There is certainly lots of dark magic in the step where you find small-but-cool tricks that are the cornerstone of the lower bound; the rest can be done by anybody. 3. You start having lower-bounds researchers who are so passionate about the technical details that they actually think that's what was important! I often say "these two ideas are identical" only to get a blank stare. A lower bound idea never talks about entropy or rectangle width; such things are synonymous in the world of ideas. Proofs that are merely an algorithm to compress n bits have elegant linearity properties (entropy is an expectation, therefore linear), and you never need any ugly concentration. Anybody, starting with a mathematically-mature high school student, could follow them with some effort, and teaching them is feasible. Among researchers, such proofs are games of wits and creativity, not games involving heavy guns that one may or may not have in their toolkit. My paper on lower bounds for succinct rank/select data structures was submitted to SODA in extraordinary circumstances. I had been travelling constantly for a month, and the week before the deadline I was packing to move out of California and down with a flu. In the end, the only time I found for the paper was on an airplane ride to Romania, but of course I had no laptop since I had just quit IBM. So I ended up handwriting the paper on my notepad, and quickly typing it in on my sister's laptop just in time for the deadline. [ You would be right to wonder why anybody would go through such an ordeal. I hate submitting half-baked papers, and anyway I wanted to send the paper to STOC. But unfortunately I was literally forced to do this due to some seriously misguided (I apologize for the hypocritical choice of epithets) behavior by my now-coauthor on that paper. ] If you have 8 hours for a paper, you use all the guns you have, and make it work. But after the paper got in, I was haunted by a feeling that a simple encoding proof should exist. I've learnt long ago not to resist my obsessions, so I ended up spending 3 weeks on the paper -- several dozen times more than before submission :). 
I am happy to report that I found a nice encoding proof, just "in time" for the SODA camera-ready deadline. (As you know, in time for a camera-ready deadline means 2 weeks and 1 final warning later.) The paper is , if you are interested. 20 comments: What sort of behavior? I am curious to learn how many commenters are going to look at the techniques in the paper(s), and how many people are going to zoom in on the tiny politically incorrect statements in Mihai's blog post. :) So the camera-ready version bears little resemblance to the accepted version? Can it even be called a "peer-reviewed" publication in that case? It's nice to see someone putting effort into cleaning up a result even after it has been accepted. Too many authors do what you did (to meet the deadline) and then take the program committee's acceptance as a certificate that the paper is "good enough." I agree with what well said by Morin. More to the point, it would be very nice to write a pedagogical paper in which the usefulness of the "encoding technique" is illustrated on a variety of examples. You quit IBM? I am curious to learn how many commenters are going to look at the techniques in the paper(s), and how many people are going to zoom in on the tiny politically incorrect statements in Mihai's blog post. To each his own :) So the camera-ready version bears little resemblance to the accepted version? Can it even be called a "peer-reviewed" publication in that case? Well, the common opinion is that PCs rarely read technical in the paper. So it's as peer-reviewed as any other paper in the conference :) -- my introduction didn't change, of course. You quit IBM? Sure (with ample notice). I was going to the Central European Olympiad in Romania, and then starting at ATT. "Well, the common opinion is that PCs rarely read technical in the paper. So it's as peer-reviewed as any other paper in the conference :) " That sounds bad. I'm not a computer scientist, and to me this sounds like a recipe for lots and lots of incorrect results. Add to that the common practice in CS of not submitting to journals... Out of curiosity, what proportion of conference papers in CS do you think have fatal errors in them? You mean you were getting scooped or something on this result and so you were forced to do it to ensure at least a merge? I mean tell me the story... I'm not a computer scientist, and to me this sounds like a recipe for lots and lots of incorrect results. Add to that the common practice in CS of not submitting to journals... Spotting errors is very hard. The idea that journal reviewers do it is wishful thinking. But the good news is that 90% of what we publish is crap, so who cares if it's correct or not? If somebody actually cares about your result, it will get checked. You mean you were getting scooped or something on this result and so you were forced to do it to ensure at least a merge? I was getting "scooped" by a weaker result which was not an independent discovery. My choice was between submitting to SODA and accepting a merger of the author lists, or starting a public fight. Since I chose the former, it doesn't make sense to go into details now. Hi Mihai. Interestingly, as far as I know, the encoding technique for proving the lower bounds was first observed by Gennaro-Trevisan. They observed that if there is a small circuit for inverting a random permutation with non-trivial probability, then you can compact the description of the random permutation. Although quite basic, I totally agree and love this technique. 
More recently, several paper co-authored by Iftach Heitner (and others, of course) applied this technique to much more powerful situations, where a direct proof seem hard. One nice thing about the encoding technique is that the encoder/decoder are allowed to be inefficient, if you one proves the lower bound on efficiency of some algorithm or cryptographic assumption. Recently, I worked with Iftach (and a student) on a paper where we successfully used this technique to argue the impossibility of black-box reduction from one problem to another (more or less), and I truly appreciated its power. Very interesting that it is used in algorithms much less (and much more recently) than in crypto, and that it is not as known in data structures as much as it is known in crypto. Interestingly, as far as I know, the encoding technique for proving the lower bounds was first observed by Gennaro-Trevisan. Isn't the entire field of Kolmogorov complexity all about using encodings of random objects to prove lower bounds or are you talking about a more specialized application? Hi Yevgeniy, it turns out that this 1990 paper of Andy Yao Andrew Chi-Chih Yao: Coherent Functions and Program Checkers STOC 1990: 84-94 already had the main idea of the proof you mention from my paper with Rosario. He deals with the simpler question of the complexity of inverting a random permutation on all inputs and he shows that, roughly speaking, if you have an oracle inversion algorithm for a permutation that uses m bits of advice and has query complexity q, then a 1/q fraction of the outputs of the permutation are implied by the other outputs and by the advice. If you think of the permutation itself, plus the m bits of advice, as a data structure with m bits of redundancy, and of the q-query oracle inversion algorithm as a q-query algorithm in the cell-probe model, then Yao's result (and the one with Rosario, and the later ones etc.) are really redundancy/query trade-off for certain systematic data structures, so it's not surprising that one would end up using similar techniques. Hi Luca. I like your connection to data structures a lot, thanks! Regarding Kolmogorov complexity comparison, I think the difference is that the encoding arguments are expected to be used there, since this is more or less what definition states. But the encoding technique is somewhat surprising at the first glance (until one thinks about it, as Luca just did). Indeed, my first inclination to argue that no small circuit can invert a random permutation with forward oracle access is to perhaps to first fix the circuit, and argue that Pr (fixed circuit succeeds with probability > e) << 1/#circuits. And computing the latter probability is somewhat painful. Certainly doable for this relatively simple example, but IMHO much more complicated than the beautiful encoding technique. Namely, compress a random permutation as follows (simplified): give the small circuit as advice, describe the set S of inputs on which the circuit succeeds, describe pi^{-1}(S) as a set, and then explicitly describe pi^{-1}(y) for all remaining y's. The rest is just counting the size of this description in one line! Very natural, no probability calculations! Isn't also the constructive proof of Lovasz's local lemma an encoding proof? What is that you don't like in the first proof? The first proof argues Q_0, Q_delta can be encoded efficiently even under some arbitrary conditioning (with sufficiently high probability)? Is it this conditioning that you consider not too intuitive? 
Just asking since I found the first proof really cute. The encoding technique is certainly well-known in data structures. It appeared for instance in [Fredman '82], [Demaine Lopez-Ortiz '01], [Demaine Patrascu '04], [Golynski '09]. I don't think you can attribute it to any one person. Luca, now we have lower bounds for storing permutations in a non-systematic fashion [Golynski'09]. Can this be connected to crypto? Does non-systematic have any natural meaning there? What is that you don't like in the first proof? [...] Is it this conditioning that you consider not too intuitive? Just asking since I found the first proof really cute The high entropy given the conditioning is intuitive (for someone in the area), but rather technical to prove. For instance, I didn't even deal with independent variables, but with two dependent vectors, each having independent coordinates. If you can get the proof down to essentially high school math, you should always do it :) One crypto application, which can already be hinted from my paper with Erik is that if gives a lower bound on the time for decrypting a message on the bit/cell probe model. Although the lower bound is extremely weak, nothing higher than that has been proven as far as we were able to ascertain. Here's the scheme. You create a random permutation π of the integers 1 to n. Consider this permutation to be your encrypting key as follows. A message M composed of say nlog n bits is encoded by sending π applied to the first log n bits, then π to the next log n bits and so on. Now assume that Eve manages to get her hands on the private key, encoded in whatever form she prefers. Even so, by the lower bound it follows that this is not enough to decode the message in constant time per log n bits. Her choices are now (a) to spend time t decoding each word for a total decoding time of n*t, (b) to bite the bullet and build a data structure of size n/t to assist in fast lookups for a total time of n*t+n/t > 2n, or (c) to obtain more data from the channel which by the lower bound is at least an additional n log n bits. The generalization to the cell probe model gives n*t +n/(log n)^t > n+n/log n lower bound for breaking the key. The decrypting lower bound is extremely weak: breaking the message takes Eve an extra n/log n additive term. I don't see a natural interpretation of non-systematic representation. A one-way permutation should be efficiently computable, so a time-t adversary should be able to get about t evaluations of the permutation, which you can model by having the permutation itself be available for lookup. Then (if you are trying to prove a lower bound for non-uniform algorithms in an oracle setting) the algorithm can have some non-uniform advice, which is the redundant part of the data structure. At this point, from Hellman and Yao it follows that the optimal tradeoff for inverting is time t and redundancy m provided m*t > N where N is the size of the range of the Here is a question that has been open for 10+ years: suppose I want to determine the non-uniform complexity of inverting a random *function* rather than a permutation. Fiat and Naor show that a random function mapping [N] into [N] can be inverted in time t with redundancy m provided t* m^2 > N, up to polylogN terms. (Same if the function is not random, but it has the property that the preimage size is at most polylogN.) Amos Fiat, Moni Naor: Rigorous Time/Space Trade-offs for Inverting Functions. SIAM J. Comput. 
29(3): 790-803 (1999). And this paper shows that this trade-off (which has the interesting case t=m=N^{2/3}) is best possible under rather restrictive assumptions on what the redundant part of the data structure is allowed to contain, and how it is allowed to be used: Elad Barkan, Eli Biham, Adi Shamir: Rigorous Bounds on Cryptanalytic Time/Memory Tradeoffs. CRYPTO 2006: 1-21. Is t=m=N^{2/3} best possible for functions with small pre-images in the fully-general cell-probe / non-uniform-oracle-algorithm model? (For general functions, Fiat-Naor only get the trade-off tm^3>N^3, which has the interesting case t=m=N^{3/4}.) Even showing that t=m=N^{1/2+eps} is not achievable would be interesting because it would separate the complexity of functions from the complexity of permutations, for which t=m=N^{1/2} is achievable.
I love how your now co-author lists their [probable] initial submission on their papers page.
Linear programming relaxations and belief propagation - an empirical study. Results 1 - 10 of 53 citing papers:
- The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances — including the key problems of computing marginals and modes of probability distributions — are best studied in the general setting. Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms — among them sum-product, cluster variational methods, expectation-propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations — can all be understood in terms of exact or approximate forms of these variational representations. The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.
- We present a novel message passing algorithm for approximating the MAP problem in graphical models. The algorithm is similar in structure to max-product but unlike max-product it always converges, and can be proven to find the exact MAP solution in various settings. The algorithm is derived via block coordinate descent in a dual of the LP relaxation of MAP, but does not require any tunable parameters such as step size or tree weights. We also describe a generalization of the method to cluster based potentials. The new method is tested on synthetic and real-world problems, and compares favorably with previous approaches. Graphical models are an effective approach for modeling complex objects via local interactions. In such models, a distribution over a set of variables is assumed to factor according to cliques of a graph with potentials assigned to each clique. Finding the assignment with highest probability in these models is key to using them in practice, and is often referred to as the MAP (maximum a posteriori) assignment problem. In the general case the problem is NP hard, with complexity exponential in the tree-width of the underlying graph.
- Linear Programming (LP) relaxations have become powerful tools for finding the most probable (MAP) configuration in graphical models. These relaxations can be solved efficiently using message-passing algorithms such as belief propagation and, when the relaxation is tight, provably find the MAP configuration. The standard LP relaxation is not tight enough in many real-world problems, however, and this has led to the use of higher order cluster-based LP relaxations. The computational cost increases exponentially with the size of the clusters and limits the number and type of clusters we can use. We propose to solve the cluster selection problem monotonically in the dual LP, iteratively selecting clusters with guaranteed improvement, and quickly re-solving with the added clusters by reusing the existing solution. Our dual message-passing algorithm finds the MAP configuration in protein sidechain placement, protein design, and stereo problems, in cases where the standard LP relaxation fails.
- 2007: Finding the most probable assignment (MAP) in a general graphical model is known to be NP hard but good approximations have been attained with max-product belief propagation (BP) and its variants. In particular, it is known that using BP on a single-cycle graph or tree reweighted BP on an arbitrary graph will give the MAP solution if the beliefs have no ties. In this paper we extend the setting under which BP can be used to provably extract the MAP. We define Convex BP as BP algorithms based on a convex free energy approximation and show that this class includes ordinary BP with single-cycle, tree reweighted BP and many other BP variants. We show that when there are no ties, fixed-points of convex max-product BP will provably give the MAP solution. We also show that convex sum-product BP at sufficiently small temperatures can be used to solve linear programs that arise from relaxing the MAP problem. Finally, we derive a novel condition that allows us to derive the MAP solution even if some of the convex BP beliefs have ties. In experiments, we show that our theorems allow us to find the MAP in many real-world instances of graphical models where exact inference using junction-tree is impossible.
- 2008: The problem of computing a maximum a posteriori (MAP) configuration is a central computational challenge associated with Markov random fields. A line of work has focused on "tree-based" linear programming (LP) relaxations for the MAP problem. This paper develops a family of super-linearly convergent algorithms for solving these LPs, based on proximal minimization schemes using Bregman divergences. As with standard message-passing on graphs, the algorithms are distributed and exploit the underlying graphical structure, and so scale well to large problems. Our algorithms have a double-loop character, with the outer loop corresponding to the proximal sequence, and an inner loop of cyclic Bregman divergences used to compute each proximal update. Different choices of the Bregman divergence lead to conceptually related but distinct LP-solving algorithms. We establish convergence guarantees for our algorithms, and illustrate their performance via some simulations. We also develop two classes of graph-structured rounding schemes, randomized and deterministic, for obtaining integral configurations from the LP solutions. Our deterministic rounding schemes use a "re-parameterization" property of our algorithms so that when the LP solution is integral, the MAP solution can be obtained even before the LP-solver converges to the optimum. We also propose a graph-structured randomized rounding scheme that applies to iterative LP solving algorithms in general. We analyze the performance of our rounding schemes, giving bounds on the number of iterations required, when the LP is integral, for the rounding schemes to obtain the MAP solution. These bounds are expressed in terms of the strength of the potential functions, and the energy gap, which measures how well the integral MAP solution is separated from other integral configurations. We also report simulations comparing these rounding schemes.
- In RECOMB 2007: Side-chain prediction is an important subproblem of the general protein folding problem. Despite much progress in side-chain prediction, performance is far from satisfactory. As an example, the ROSETTA protocol that uses simulated annealing to select the minimum energy conformations, correctly predicts the first two side-chain angles for approximately 72% of the buried residues in a standard data set. Is further improvement more likely to come from better search methods, or from better energy functions? Given that exact minimization of the energy is NP hard, it is difficult to get a systematic answer to this question. In this paper, we present a novel search method and a novel method for learning energy functions from training data that are both based on Tree Reweighted Belief Propagation (TRBP). We find that TRBP can find the global optimum of the ROSETTA energy function in a few minutes of computation for approximately 85% of the proteins in a standard benchmark set. TRBP can also effectively bound the partition function which enables using the Conditional Random Fields (CRF) framework for learning. Interestingly, finding the global minimum does not significantly improve side-chain prediction for ...
- In: 45th Annual Allerton Conference on Communication, Control and Computing, 2007: We develop a general framework for MAP estimation in discrete and Gaussian graphical models using Lagrangian relaxation techniques. The key idea is to reformulate an intractable estimation problem as one defined on a more tractable graph, but subject to additional constraints. Relaxing these constraints gives a tractable dual problem, one defined by a thin graph, which is then optimized by an iterative procedure. When this iterative optimization leads to a consistent estimate, one which also satisfies the constraints, then it corresponds to an optimal MAP estimate of the original model. Otherwise there is a "duality gap", and we obtain a bound on the optimal solution. Thus, our approach combines convex optimization with dynamic programming techniques applicable for thin graphs. The popular tree-reweighted max-product (TRMP) method may be seen as solving a particular class of such relaxations, where the intractable graph is relaxed to a set of spanning trees. We also consider relaxations to a set of small induced subgraphs, thin subgraphs (e.g. loops), and a connected tree obtained by "unwinding" cycles. In addition, we propose a new class of multiscale relaxations that introduce "summary" variables. The potential benefits of such generalizations include: reducing or eliminating the "duality gap" in hard problems, reducing the number of Lagrange multipliers in the dual problem, and accelerating convergence of the iterative optimization procedure.
Analysis for Computer Scientists: Foundations, Methods, and Algorithms (Undergraduate Topics in Computer Science)
Publisher Comments: Mathematics and mathematical modelling are of central importance in computer science, and therefore it is vital that computer scientists are aware of the latest concepts and techniques. This concise and easy-to-read textbook/reference presents an algorithmic approach to mathematical analysis, with a focus on modelling and on the applications of analysis. Fully integrating mathematical software into the text as an important component of analysis, the book makes thorough use of examples and explanations using MATLAB, Maple, and Java applets. Mathematical theory is described alongside the basic concepts and methods of numerical analysis, supported by computer experiments and programming exercises, and an extensive use of figure illustrations. Topics and features: thoroughly describes the essential concepts of analysis, covering real and complex numbers, trigonometry, sequences and series, functions, derivatives and antiderivatives, definite integrals and double integrals, and curves; provides summaries and exercises in each chapter, as well as computer experiments; discusses important applications and advanced topics, such as fractals and L-systems, numerical integration, linear regression, and differential equations; presents tools from vector and matrix algebra in the appendices, together with further information on continuity; includes definitions, propositions and examples throughout the text, together with a list of relevant textbooks and references for further reading; supplementary software can be downloaded from the book's webpage at www.springer.com. This textbook is essential for undergraduate students in Computer Science. Written to specifically address the needs of computer scientists and researchers, it will also serve professionals looking to bolster their knowledge in such fundamentals extremely well.
About the Authors: Dr. Michael Oberguggenberger is a professor in the Department of Civil Engineering Sciences at the University of Innsbruck, Austria. Dr. Alexander Ostermann is a professor in the Department of Mathematics at the University of Innsbruck, Austria.
Table of Contents: Numbers; Real-Valued Functions; Trigonometry; Complex Numbers; Sequences and Series; Limits and Continuity of Functions; The Derivative of a Function; Applications of the Derivative; Fractals and L-Systems; Antiderivatives; Definite Integrals; Taylor Series; Numerical Integration; Curves; Scalar-Valued Functions of Two Variables; Vector-Valued Functions of Two Variables; Integration of Functions of Two Variables; Linear Regression; Differential Equations; Systems of Differential Equations; Numerical Solution of Differential Equations.
Homework Help - Physics. Posted by Priscilla on Monday, December 28, 2009 at 8:28pm.
A race car can be slowed with a constant acceleration of -11 m/s^2. a. If the car is going 55 m/s, how many meters will it travel before it stops? b. How many meters will it take to stop a car going twice as fast?
According to one of the teachers, this is what they stated: A. stopping distance = (stopping time) x (average speed) = (V/a)*(V/2) = V^2/(2a). B. If the speed doubles, with deceleration rate "a" staying the same, the stopping distance is four times farther.
Ok, to find the distance for the first one I understand. I'm pretty sure this is how you do it: d = Vi*t, d = 55 x 5, d = 275 m. But for b, would I have to double the speed for this, which is 55 x 2 = 110, and would I also have to increase the stopping distance, which is 275 m, by multiplying by 4?
• Physics - drwls, Monday, December 28, 2009 at 9:58pm: This question has been answered twice already. Yes, the stopping distance quadruples if you double the speed. What is there about stopping distance = V^2/(2a) that you don't understand?
• Physics - Priscilla, Tuesday, December 29, 2009 at 10:01pm: Ok, I just understand this a little. I know it has been answered twice by you and I thank you for that. Can you tell me if this is right below? a. stopping distance = v^2 / 2a, sd = 55^2 / 2(-11), sd = 3025 / -22, sd = -137.5 m. b. Therefore, if the stopping distance quadruples then I shouldn't put that in the equation of: sd = v^2 / 2a, sd = 110^2 / 2(-11), sd = 12100 / -22, sd = -550 m. Is this right? Thanks, I appreciate this a lot and happy new yr.
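For the record, the teacher's formula gives the distances directly if you use the magnitude of the deceleration (|a| = 11 m/s^2), which is why the values come out positive. The short check below is an illustration added here, not part of the original thread.
Code :
#include <cstdio>

int main() {
    const double a = 11.0;                // magnitude of the deceleration, m/s^2
    for (double v : {55.0, 110.0}) {
        double d = v * v / (2.0 * a);     // stopping distance = V^2 / (2a)
        std::printf("v = %5.1f m/s  ->  d = %.1f m\n", v, d);   // 137.5 m and 550.0 m
    }
    return 0;
}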
S. Vassiliadis, E. M. Schwarz, and B. M. Sung, "Hard-Wired Multipliers with Encoded Partial Products," IEEE Transactions on Computers, vol. 40, no. 11, pp. 1181-1197, November 1991, doi:10.1109/12.102823.

Abstract: A multibit overlapped scanning multiplication algorithm for sign-magnitude and two's complement hard-wired multipliers is presented. The theorems necessary to construct the multiplication matrix for sign-magnitude representations are emphasized. Consequently, the algorithm for sign-magnitude multiplication and its variation to include two's complement numbers are presented. The proposed algorithm is compared to previous algorithms that generate a sign extended partial product matrix, with an implementation and with a study of the number of elements in the partial product matrix. The proposed algorithm is shown to yield significant savings over well known algorithms for the generation and the reduction of the partial product matrix of a multiplier designed with multibit overlapped scanning.

Index Terms: hardwired multipliers; encoded partial products; multibit overlapped scanning multiplication algorithm; sign-magnitude; two's complement; digital arithmetic; encoding; multiplying circuits.
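Since only the abstract is reproduced here, the sketch below is an editor's illustration of the general family of techniques the paper builds on: generic radix-4 overlapped (Booth) scanning of the multiplier. It is not the authors' encoded-partial-product scheme, and the function names are made up for illustration.

```python
# Editor's sketch: generic radix-4 overlapped (Booth) scanning recoding.
def booth_radix4_digits(y, bits=16):
    """Recode a two's-complement multiplier into digits in {-2,-1,0,1,2}
    by scanning overlapping 3-bit groups (b[2i+1], b[2i], b[2i-1])."""
    assert bits % 2 == 0
    u = y & ((1 << bits) - 1)              # two's-complement bit pattern
    b = [(u >> k) & 1 for k in range(bits)]
    b_prev = 0                             # implicit bit to the right of the LSB
    digits = []
    for i in range(0, bits, 2):
        digits.append(-2 * b[i + 1] + b[i] + b_prev)
        b_prev = b[i + 1]
    return digits                          # y == sum(d * 4**k for k, d in enumerate(digits))

def booth_multiply(x, y, bits=16):
    """Sum the recoded partial products d_k * x * 4**k."""
    return sum(d * x * 4 ** k for k, d in enumerate(booth_radix4_digits(y, bits)))

assert booth_multiply(-5, 7) == -35
assert booth_multiply(123, -45) == -5535
```

The point of such recoding is that roughly half as many partial products must be generated and reduced, which is the kind of saving the abstract quantifies for the authors' own encoding.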
{"url":"http://www.computer.org/csdl/trans/tc/1991/11/t1181-abs.html","timestamp":"2014-04-18T19:00:12Z","content_type":null,"content_length":"54521","record_id":"<urn:uuid:56787ac6-6cc5-484b-beaf-ab3303ccf130>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework 4 (Fall 1998) MC48 Homework 4 (Fall 1998) Due: October 20, 1998 1. Design a PLA with four inputs, X[3], X[2], X[1], and X[0], and two outputs D[3] and D[5]. The four inputs should be considered as forming a four-bit unsigned number, X[3]X[2]X[1]X[0], with X[3] as the most significant bit. The D[3] output should be asserted if and only if that number is divisible by 3. Similarly, the D[5] output should be asserted if and only if the input number is divisible by 5. 2. Do exercise 5.6 on page 427. 3. Do exercise 5.10 on page 428. The exercise says "consider both datapaths." By "both" they mean the single-cycle datapath from section 5.3 and the multicycle one from section 5.4. For this homework, you should only deal with the single-cycle one from section 5.3. Instructor: Max Hailperin
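A quick way to get started on problem 1 is to enumerate the 16-row truth table before minimizing it into AND/OR planes. The snippet below is only an editor's illustration; whether X = 0 should assert D[3] and D[5] depends on the convention your instructor intends.

```python
# Enumerate the PLA truth table for problem 1 (illustrative helper).
for x in range(16):                  # X[3]X[2]X[1]X[0] read as an unsigned number
    d3 = int(x % 3 == 0)             # divisible by 3 (x = 0 counted as divisible here)
    d5 = int(x % 5 == 0)             # divisible by 5 (x = 0 counted as divisible here)
    print(format(x, "04b"), d3, d5)
```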
{"url":"https://gustavus.edu/+max/courses/F98/MC48/homeworks/hw4.html","timestamp":"2014-04-17T09:43:31Z","content_type":null,"content_length":"1549","record_id":"<urn:uuid:966ccc44-2dfc-483b-86d7-d9aa2adc05cf>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
I'm just working on my algebra and discrete geometry homework, and I'm having trouble with the following question: Triangle ABC is obtuse-angled at C. The bisectors of the exterior angles at A and B meet BC and AC extended at D and E, respectively. If AB = AD = BE, prove that angle ACB is 108 degrees. I'm totally lost as to where to start. If anyone could post a few pointers to get me started, I'd appreciate it.

There is a formula (I don't know the proper English name for it) that says: a^2 = b^2 + c^2 - 2*b*c*cos(A), where a, b and c are the lengths of the triangle's sides, and A is the angle opposite side a. Apply this in your figure (see below). You get:
X^2 = (2Y)^2 + (2Y)^2 - 2*(2Y)*(2Y)*cos(a)
X^2 = 8Y^2 - 8Y^2 * cos(a)
X^2 = 8Y^2 * (1 - cos(a))
(X^2)/(8Y^2) = 1 - cos(a)
cos(a) = 1 - (X^2)/(8Y^2)
a = arccos(1 - (X^2)/(8Y^2))
a = arccos(1 - (1/8)*(X/Y)^2)
Now the problem is to find some kind of relation between X and Y, but I leave that to you.

That's the cosine law.

"The bisectors of the exterior angles at A and B meet BC and AC extended at D and E, respectively." That's what caught me up and why I didn't even attempt to answer. What is a bisector of any angle?

A bisector divides the angle in half, hence the word bisect: to divide into two equal or congruent pieces.

Originally posted by XSquared: "The bisectors of the exterior angles at A and B meet BC and AC extended at D and E, respectively." The line that bisects the exterior angle should be the same one that bisects the interior angle. Either I'm missing something in the question or Magos is correct in his diagram. So AB == AD == BE; could that mean angle CAB == ABC? Then ACB + 2*CAB = 180? OK, just rambling...

This is how I interpreted the question: (diagram not reproduced)
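A quick numeric sanity check of the algebra above. It assumes Magos's figure, where the two known sides both have length 2Y (an assumption, since the diagram is not reproduced): feeding an angle into the Law of Cosines and then inverting with the rearranged formula recovers it.

```python
# Check that a = arccos(1 - (1/8)*(X/Y)^2) inverts the Law of Cosines step.
import math

Y = 1.0
a_in = math.radians(108.0)                               # any test angle works
X = math.sqrt((2*Y)**2 + (2*Y)**2 - 2*(2*Y)*(2*Y)*math.cos(a_in))
a_out = math.degrees(math.acos(1 - 0.125 * (X / Y)**2))
print(round(a_out, 6))                                   # -> 108.0
```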
{"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/44253-proofs-printable-thread.html","timestamp":"2014-04-17T22:08:00Z","content_type":null,"content_length":"10503","record_id":"<urn:uuid:3cd449f1-e452-440a-9ef2-25bcf40d60d9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
Alternating Series

March 25th 2009, 05:19 PM #1 Junior Member Feb 2009

These problems are in the alternating series section of my calc book but they are not necessarily alternating. How would you figure out if they converge, converge absolutely or diverge, and what test would you use?

Problem 1: sum from n=1 to infinity, with a_1 = 5 and a_(n+1) = ((nth root of n)/2) * a_n, i.e. a_(n+1) = (n^(1/n)/2) * a_n.

Problem 2: sum from n=1 to infinity of (-1)^(n+1) * (ln n)/n.

Yes, that is exactly what it means. Sorry, I don't know how to make it look like the actual problem like many people do on here.
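A rough numeric exploration (an editor's illustration, not a proof, and it assumes the recursion really is a_(n+1) = (n^(1/n)/2)*a_n): the ratio test and the alternating series test are the natural tools here.

```python
# Problem 1: the ratio a_(n+1)/a_n = n**(1/n)/2 tends to 1/2 < 1,
# so the ratio test gives absolute convergence.
# Problem 2: ln(n)/n decreases to 0 for n >= 3, so the alternating series
# test gives convergence, while sum(ln(n)/n) itself diverges (conditional).
import math

print([n ** (1.0 / n) / 2.0 for n in (10, 100, 10**4, 10**6)])   # approaches 0.5

partial = sum((-1) ** (n + 1) * math.log(n) / n for n in range(1, 200001))
print(partial)    # partial sums settle near a finite value
```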
{"url":"http://mathhelpforum.com/calculus/80688-alternating-series.html","timestamp":"2014-04-18T16:50:05Z","content_type":null,"content_length":"35150","record_id":"<urn:uuid:f7cbe1b4-7f31-4b0a-ace1-9e136c4e80c7>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Question: Ten years hence Rajesh's age would be 15 years more than twice his friend Sanjay's age. 10 years ago Rajesh's age was 35 years more than twice his friend's age. After how many years would Rajesh's age be exactly twice his friend's age?

Best response: After 25 years.

Reply: Could you please explain?

Best response: Let r be Rajesh's age now and s be Sanjay's. From the first clause (after 10 years): r + 10 = 15 + 2(s + 10), so r = 2s + 25. The second clause (10 years ago) gives the same equation: r - 10 = 35 + 2(s - 10), so again r = 2s + 25. We need a time t with r + t = 2(s + t); substituting r = 2s + 25 gives t = 25. For example, if s = 1 now, then r = 27, and after 25 years s = 26 and r = 52, which is exactly double.

Reply: OK, thanks.
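A quick check of the reasoning above (illustrative only): r = 2s + 25 now, and r + t = 2(s + t) forces t = r - 2s = 25, whatever s is.

```python
# Verify that t = 25 independently of Sanjay's current age s.
for s in range(1, 60):
    r = 2 * s + 25
    t = r - 2 * s
    assert r + t == 2 * (s + t) and t == 25
print("Rajesh is exactly twice Sanjay's age after 25 years")
```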
{"url":"http://openstudy.com/updates/4dfe35cf0b8bbe4f12e72af7","timestamp":"2014-04-18T08:16:37Z","content_type":null,"content_length":"34909","record_id":"<urn:uuid:42195894-f0aa-46a2-ada4-344591aeda90>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
Proportion and Perspective The ARTstor image group Proportion and Perspective is meant to supplement a lesson in a middle-school math class that deals with measurement and proportion — usually in the context of geometry. There are several purposes of the image group, specifically: The introduction of 3-dimensional space in geometry (moving from square feet to cubic feet, for instance) traditionally devolves into a recitation of mathematical formulae — to be memorized and drilled. There are, of course, many applications of three-dimensional geometry that should be introduced to supplement these formulae, but the history of art offers a unique approach. Simply put, how does an artist present a realistic three-dimensional view on a two-dimensional surface? While the use of linear perspective seems basic to realistic painting, it was not seen before the 1400s. Some of the images in the image group are selected to show this transition — using similar scenes or topics depicted prior to the use of perspective and after. These comparisons will reveal the power of the basic geometric "discovery." Additional images are provided to show how artists can use perspective in a more playful sense — creating illusions and "tricking" the observer with impossible A complementary discussion of proportion helps round out the application geometry in art. One cannot depict reality (in the days before photography) without understanding the intricate measurements and ratios of objects, people, etc. The image group includes several sketches and studies made by classical artists seeking to represent reality on their canvases. A few of these sketches seem very much like computer renderings — revealing the connections between an understanding of proportion and, for example, a realistic computer game or CAD program. Furthermore, since CAD is basic for design and engineering, the applications offer great breadth and depth. Students do not learn well through "drill and kill," and we see the results of this in our national mathematics competency scores. An approach that involves visual application is sorely needed across the curriculum — and this can be one piece of it. Image caption: Albrecht Dürer, 1471 - 1528 | Madonna with the Monkey, circa 1498 | The Illustrated Bartsch. Vol. 10, commentary, Sixteenth Century German Artists: Albrecht Dürer
{"url":"http://www.artstor.org/news/n-html/ta-100430-wills.shtml","timestamp":"2014-04-16T13:34:08Z","content_type":null,"content_length":"4542","record_id":"<urn:uuid:97cf24a8-9822-4058-b1e2-394bf24cee2c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Draft (July 10, 1995) The Foreign Exchange Committee ("the Committee") supports the effort of the Basle Committee to establish international guidelines for applying capital charges to the market risks incurred by banks. The Foreign Exchange Committee is encouraged that a number of its recommended changes to the April 1993 Basle Committee proposal on market risk ("BIS proposal") were incorporated in the current BIS proposal, most notably the use of banks' internal models for calculating market risk and the extension of Tier 3 capital to cover foreign exchange as well as other market exposures. However, the latest proposal, particularly the quantitative standards for using internal models, raises two major concerns: 1. The extremely conservative quantitative standards in the proposal (including the multiplication factor, the holding period, restricted correlations and the confidence interval) would require banks to hold capital against daily price movements of as much as 24 standard deviations. Such capital requirements are out of proportion to actual risks in the foreign exchange market. The simple aggregation of capital for credit and market risks also overestimates the capital necessary for a diversified firm because potential losses from these risks are unlikely to be realized simultaneously. The additional costs imposed by such capital standards may shift a significant volume of trading activity to less regulated organizations. 2. A regulatory model with minimum quantitative standards may actually impede progress toward developing more precise risk measurement systems. Virtually any comprehensive set of proposed quantitative standards will be in conflict with model parameters used by banks. Banks will continue to rely upon their own more precise internal models. The regulatory model, purely duplicative, will not be used in day-to-day risk management and may prove impossible to validate using the proposed risk parameters. The regulatory model may divert resources from improvements to a bank's day-to-day risk systems. The Committee strongly recommends that banks should be able to use their internal models as the basis for calculating regulatory capital requirements. Based upon their reviews of banks' internal models, regulators may adjust the model results, if necessary, using a multiplication factor greater than one. Members of the Foreign Exchange Committee understand the necessity of establishing conservative capital standards that capture a wide range of possible price movements. However, the BIS proposal assumes that each bank's portfolio is comprised entirely of the most illiquid and volatile traded instruments. In contrast, internal bank models are designed to more correctly reflect the actual composition of each bank's portfolio. The table below compares the current quantitative standards from the BIS proposal with parameters generally used by financial institutions. The cumulative effect of the BIS proposal standards is a total compounding factor ranging from 12.1 to 14.7, which is equivalent to a market move of approximately 24 standard deviations of daily price changes.1 Based on historical market volatilities in foreign exchange, Committee members believe that planning for at least 24 standard deviation price changes is unduly extreme. 
If banks are required to maintain capital against "worst case" price movements while competitors' capital requirements are significantly lower, then banks (and perhaps other highly-regulated organizations) will have to widen bid-offer spreads to remain profitable. As a result, foreign exchange market liquidity may diminish and a substantial portion of foreign exchange turnover would migrate to less regulated entities.

                                    Financial Inst           BIS Proposal             Compounding Effect
    Holding Period                  1 Day                    10 Days                  3.16
    Confidence Interval             1.65 - 2.0               2.33                     1.16 - 1.41
    Correlations                    Across Market Factors    Within Market Factors    1.12
    Multiplication Factor           1                        3 (Minimum)              3
    Cumulative Compounding Effect                                                     12.1 - 14.7

Chart 1 (attached) compares the capital that would be held under the latest BIS proposal against a long spot yen position of $100 million equivalent, showing profits and losses over rolling 10-day periods. As the chart demonstrates, the BIS proposal would require capital of almost 15 percent ($15 million) for this position.3 The largest 10-day price movement in the past ten years (June 12, 1985 - June 12, 1995) was a 12 percent gain in the yen shortly after the September 1985 Plaza Agreement. The proposed BIS capital requirement is therefore more onerous than would have been necessary for this "worst case" historical experience. The second largest 10-day price movement was 8 percent, equivalent to only half of the BIS proposed regulatory capital level. It should also be noted that a yen position of this size could be liquidated in 1 day rather than 10 days. The largest 1-day price change in the same 10-year period was 3.47%, implying a portfolio value change of $3.47 million. The $14.88 million of capital required under the BIS proposal is 4.3 times greater than the largest historical 1-day loss on this portfolio over the past 10 years.

A comparison between the BIS credit risk capital guidelines and the proposed capital requirements for market risk guidelines also leads to the conclusion that the BIS market risk proposal is unduly conservative. Committee members agree that foreign exchange trading positions (which can generally be liquidated in one day) impose less risk than long-term commercial loans (which cannot be offset until maturity). Yet the BIS market risk proposal would require capital of 14 percent or more against certain market risks while the credit risk guidelines require 8 percent against long-term commercial loans.

Multiplication Factor

The Committee understands that it may be appropriate to use a multiplication factor to transform value-at-risk figures into suitable capital levels. However, using a multiplication factor in addition to the highly conservative model assumptions in the BIS proposal generates capital requirements that are clearly excessive. As discussed more fully below, the Committee recommends that the BIS proposal include a multiplication factor but allow banks to use their own model parameters to calculate value at risk.

While the Committee is fully supportive of the qualitative standards outlined in the BIS proposals, there is concern that national supervisors, both within and between international jurisdictions, must apply a consistent approach in determining the level of each bank's compliance with the standards. This is particularly important as the results of these compliance assessments will be used to determine the multiplication factor assigned to each bank.
To reduce the level of arbitrariness in this exercise, the BIS should develop a set of detailed guidelines for use by the national supervisors to ensure consistent application and measurement of compliance with the qualitative standards. Holding Period A 10 business day holding period is unjustifiable and ignores the fact that, even if a particular instrument is not readily marketable, its risk can often be hedged in liquid markets. Given that the large majority of both trading4 and position-taking in foreign exchange occurs in major currency pairs, the Committee recommends that the common holding period for all currencies should be 1 day. A 1-day holding period also facilitates more accurate back testing of value-at-risk calculations against actual daily revenues. Sophisticated institutions use correlations across risk categories (e.g., interest rates and exchange rates) to measure portfolio risks more accurately and often employ diversification strategies to reduce risks. Disallowing the possibility of any cross-category correlations for market risk capital discourages risk reduction through diversification and reduces the reliability of potential loss Confidence Interval The 99th percentile, 1-tailed test (equivalent to 2.33 standard deviations in a normal distribution) is also conservative. Most financial institutions use confidence intervals ranging from 1.65 standard deviations (95th percentile) to 2 standard deviations (97.7th percentile). Combined with other highly conservative BIS proposed assumptions, a wide confidence interval can generate potential loss forecasts well in excess of actual risks. Observation Period The Committee strongly recommends the use of a single observation index weighted to capture the benefits of both long and short observation periods. A weighted methodology would respond to changing market environments while preserving the importance of earlier data. Committee members believe that the dual observation period under consideration by the BIS would be operationally burdensome. De Minimus Exemption Committee members believe that the de minimus exemption should be applied to all banks. Whether banks take positions for their own account or not is irrelevant given the exemption criteria of overall net open positions exceeding 2% of eligible capital. The de minimus exemption should also not include a requirement on the size of a bank's matched foreign exchange positions. Matched positions are already covered under BIS credit risk guidelines. Using Internal Bank Models The Foreign Exchange Committee strongly supports the use of internal models to calculate capital against possible losses from market price movements. However, as outlined above, the proposal as currently drafted includes minimum quantitative standards that are very different from most banks' internal models. In many instances, the proposed BIS model may be used solely for calculating regulatory capital rather than for day-to-day risk management purposes. The BIS proposal requires that banks "back-test" their past value-at-risk calculations against actual profits and losses. The Committee agrees that back-testing is a crucial element in the validation of any bank's value at risk model. Although the BIS proposal is not specific in this regard, we are assuming that the requirement is for banks to back-test their own internal models. The conservative assumptions in the BIS model would make it extremely difficult, if not impossible, for banks to back-test the BIS model. 
For example, to back-test a model using the proposed 10-day holding period would require that the model's calculations be compared with actual revenues over a 10-day period. Virtually all trading portfolios change significantly over any 10-day period, making it impractical to compare the proposed value-at-risk calculations with actual revenues. In a similar fashion, using a highly conservative 99% confidence interval will make it extremely difficult to judge whether the interpreted results from back-testing are statistically significant. Back testing the BIS model would be purely a regulatory burden which would provide little, if any, benefit to the bank's risk management capabilities. A simpler and more effective approach would allow each bank to use its own internal model to compute risk capital. Regulators could utilize the multiplication factor, if necessary, to adjust bank computed value at risk to appropriate capital levels. To evaluate the accuracy of internal models, regulators would review both the results of back testing as well as the methodologies employed in the back testing process. Internal models with poor predictive capabilities would be penalized with a multiplication factor greater than one. In this manner, regulators could encourage the development of more precise risk measurement models while maintaining consistent and conservative levels of risk capital. 1. Confidence interval of 2.33 standard deviations * 3.16 (square root of 10-day vs 1-day holding period) * 1.1 (excluded correlations) * 3 (minimum proposed multiplication factor) = 24.3 standard 2. This estimate is based upon the results of a comparison conducted by a major U.S. money center bank represented on the Committee in June 1995. This bank compared its daily value-at-risk (VAR) figures using correlations across interest and exchange rate movements with modified estimates allowing no correlations. The uncorrelated estimates were consistently 1.1 times this bank's correlated VAR calculations. 3. The BIS proposed capital guideline for this position would be as follows: • 2.13% (1 standard deviation of yen change for 10-day holding period) * • 2.33 standard deviations (99% confidence interval) * • 3 (Multiplication Factor) * • $100 million (notional position) = $14,888,700 4. According to the BIS Central Bank Survey of Foreign Exchange Market Activity in April 1992, 83.2 percent of all global spot foreign exchange transactions were in major currency pairs including the US dollar, Japanese yen, Deutsche Mark, other European currencies, the Canadian dollar and the Australian dollar (Table IIb on page 10).
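The footnote arithmetic above is straightforward to reproduce. The sketch below simply recomputes the two quoted figures from the numbers given in the text; it is an editor's illustration, not part of the Committee's submission.

```python
# Footnote 1: cumulative compounding in standard deviations of daily price changes.
conf_interval   = 2.33        # 99% one-tailed confidence interval
holding_scaling = 10 ** 0.5   # 10-day versus 1-day holding period
corr_penalty    = 1.1         # excluded cross-factor correlations
mult_factor     = 3           # minimum proposed multiplication factor
print(round(conf_interval * holding_scaling * corr_penalty * mult_factor, 1))  # ~24.3

# Footnote 3: BIS-proposed capital for the $100 million spot yen position.
sd_10day_yen = 0.0213         # one standard deviation of 10-day yen changes
notional     = 100e6
print(round(sd_10day_yen * conf_interval * mult_factor * notional))            # ~14,888,700
```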
{"url":"http://www.newyorkfed.org/fxc/annualreports/ar1995/fxar9514.html","timestamp":"2014-04-21T14:42:32Z","content_type":null,"content_length":"31544","record_id":"<urn:uuid:9c701fbf-07bc-474e-a00a-fb629bbf63d6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Books à la Carte are unbound, three-hole-punch versions of the textbook. This lower cost option is easy to transport and comes with same access code or media that would be packaged with the bound Michael Sullivan’s Fundamentals of Statistics, Third Edition, was written to address the everyday challenges Mike faces teaching statistics. Almost every aspect of the book was tested in his classroom to ensure that it truly helps students learn better. Mike skillfully connects statistical concepts to readers’ lives, helping them to think critically, become informed consumers, and make better decisions. If you are looking for a streamlined textbook, which will help you think statistically and become a more informed consumer through analyzing data, then Sullivan’s Fundamentals of Statistics, Third Edition, is the book for you. This Package Contains: Fundamentals of Statistics, Third Edition, (à la Carte edition) with MyMathLab/MyStatLab Student Access Kit Table of Contents Preface to the Instructor Technology Resources An Introduction to the Applets Applications Index 1. Data Collection 1.1 Introduction to the Practice of Statistics 1.2 Observational Studies and Experiments 1.3 Simple Random Sampling 1.4 Other Effective Sampling Methods 1.5 Bias in Sampling 1.6 The Design of Experiments Chapter Review Chapter Test Making an Informed Decision: What Movie Should I Go To? Case Study: Chrysalises for Cash (On CD) 2. Creating Tables and Drawing Pictures of Data 2.1 Organizing Qualitative Data 2.2 Organizing Quantitative Data 2.3 Graphical Misrepresentations of Data Chapter Review Chapter Test Making an Informed Decision: Tables or Graphs? Case Study: The Day the Sky Roared (On CD) 3. Numerically Summarizing Data 3.1 Measures of Central Tendency 3.2 Measures of Dispersion 3.3 Measures of Central Tendency and Dispersion from Grouped Data 3.4 Measures of Position and Outliers 3.5 The Five-Number Summary and Boxplots Chapter Review Chapter Test Making an Informed Decision: What Car Should I Buy? Case Study: Who Was "A Mourner"? (On CD) 4. Describing the Relation Between Two Variables 4.1 Scatter Diagrams and Correlation 4.2 Least-Squares Regression 4.3 The Coefficient of Determination Chapter Review Chapter Test Making an Informed Decision: What Car Should I Buy? Part II Case Study: Thomas Malthus, Population, and Subsistence (On CD) 5. Probability 5.1 Probability Rules 5.2 The Addition Rule and Complements 5.3 Independence and the Multiplication Rule 5.4 Conditional Probability and the General Multiplication Rule 5.5 Counting Techniques 5.6 Putting It Together: Probability Chapter Review Chapter Test Making an Informed Decision: Sports Probabilities Case Study: The Case of the Body in the Bag (On CD) 6. Discrete Probability Distributions 6.1 Discrete Random Variables 6.2 The Binomial Probability Distribution Chapter Review Chapter Test Making an Informed Decision: Should We Convict? Case Study: The Voyage of the St. Andrew (On CD) 7. The Normal Probability Distribution 7.1 Properties of the Normal Distribution 7.2 The Standard Normal Distribution 7.3 Applications of the Normal Distribution 7.4 Assessing Normality 7.5 The Normal Approximation to the Binomial Probability Distribution Chapter Review Chapter Test Making an Informed Decision: Join the Club Case Study: A Tale of Blood, Chemistry, and Health (on CD) 8. Sampling Distributions 8.1 Distribution of the Sample Mean 8.2 Distribution of the Sample Proportion Chapter Review Chapter Test Making an Informed Decision: How Much Time Do You Spend in a Day&? 
Case Study: Sampling Distribution of the Median (On CD) 9. Estimating the Value of a Parameter Using Confidence Intervals 9.1 The Logic in Constructing Confidence Intervals about a Population Mean Where the Population Standard Deviation is Known 9.2 Confidence Intervals about a Population Mean Where the Population Standard Deviation is Unknown 9.3 Confidence Intervals about a Population Proportion 9.4 Putting It Together: Which Method Do I Use? Chapter Review Chapter Test Making an Informed Decision: What's Your Major? Case Study: When Model Requirements Fail (On CD) 10. Hypothesis Tests Regarding a Parameter 10.1 The Language of Hypothesis Testing 10.2 Hypothesis Tests for a Population Mean—Population Standard Deviation is Known 10.3 Hypothesis Tests for a Population Mean—Population Standard Deviation is Unknown 10.4 Hypothesis Tests for a Population Proportion 10.5 Putting It Together: Which Method Do I Use? Chapter Review Chapter Test Making an Informed Decision: What Does It Really Weigh? Case Study: How Old Is Stonehenge? (On CD) 11. Inference on Two Samples 11.1 Inference about Two Means: Dependent Samples 11.2 Inference about Two Means: Independent Samples 11.3 Inference about Two Proportions 11.4 Putting It Together: Which Method Do I Use? Chapter Review Chapter Test Making an Informed Decision: Where Should I Invest? Case Study: Control in the Design of Experiment (On CD) 12. Additional Inferential Procedures 12.1 Goodness of Fit Test 12.2 Tests for Independence and the Homogeneity of Proportions 12.3 Testing the Significance of the Least-Squares Regression Model 12.4 Confidence and Prediction Intervals Chapter Review Chapter Test Making an Informed Decision: Benefits of College Case Study: Feeling Lucky? Well, Are You? (on CD) Additional Topics on CD-ROM C.1 Lines C.2 Confidence Intervals about a Population Standard Deviation C.3 Hypothesis Tests for a Population Standard Deviation C.4 Comparing Three or More Means (One-Way Analysis of Variance) Appendix A. Tables Purchase Info ISBN-10: 0-321-70588-2 ISBN-13: 978-0-321-70588-4 Format: Alternate Binding
{"url":"http://www.mypearsonstore.com/bookstore/fundamentals-of-statistics-a-la-carte-with-mml-msl-0321705882","timestamp":"2014-04-17T01:57:03Z","content_type":null,"content_length":"23847","record_id":"<urn:uuid:f7d81053-b0b8-466e-a190-bf57d690b849>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Question: If f(x) = x - 10 and g(x) = 4x + 3, how do I find (f o g)(-2)?
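The thread itself has no reply; for reference, (f o g)(-2) just means f(g(-2)), as the quick check below illustrates.

```python
# (f o g)(-2) = f(g(-2)); illustrative check.
f = lambda x: x - 10
g = lambda x: 4 * x + 3
print(g(-2))      # -5
print(f(g(-2)))   # -15
```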
{"url":"http://openstudy.com/updates/50cf865be4b06d78e86d4878","timestamp":"2014-04-18T10:39:42Z","content_type":null,"content_length":"34774","record_id":"<urn:uuid:aa482e78-695d-4147-ac87-5f40de5de76f>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
4 search hits

An optimal, stable continued fraction algorithm for arbitrary dimension (1996)
Carsten Rössner, Claus Peter Schnorr
We analyse a continued fraction algorithm (abbreviated CFA) for arbitrary dimension n, showing that it produces simultaneous diophantine approximations which are, up to the factor 2^((n+2)/4), best possible. Given a real vector x = (x_1, ..., x_{n-1}, 1) in R^n, this CFA generates a sequence of vectors (p_1^(k), ..., p_{n-1}^(k), q^(k)) in Z^n, k = 1, 2, ..., with increasing integers |q^(k)| satisfying, for i = 1, ..., n-1,
| x_i - p_i^(k)/q^(k) | <= 2^((n+2)/4) * sqrt(1 + x_i^2) / |q^(k)|^(1 + 1/(n-1)).
By a theorem of Dirichlet this bound is best possible in that the exponent 1 + 1/(n-1) can in general not be improved.

Diophantine approximation of a plane (1997)
Carsten Rössner, Claus Peter Schnorr

A Stable Integer Relation Algorithm (1994)
Claus Peter Schnorr, Carsten Rössner
We study the following problem: given x in R^n, either find a short integer relation m in Z^n, so that <x, m> = 0 holds for the inner product <., .>, or prove that no short integer relation exists for x. Hastad, Just, Lagarias and Schnorr (1989) give a polynomial time algorithm for the problem. We present a stable variation of the HJLS algorithm that preserves lower bounds on lambda(x) for infinitesimal changes of x. Given x in R^n and alpha in N, this algorithm finds a nearby point x' and a short integer relation m for x'. The nearby point x' is 'good' in the sense that no very short relation exists for points x̄ within half the x'-distance from x. On the other hand, if x' = x then m is, up to a factor 2^(n/2), a shortest integer relation for x. Our algorithm uses, for arbitrary real input x, at most O(n^4 (n + log alpha)) many arithmetical operations on real numbers. If x is rational, the algorithm operates on integers having at most O(n^5 + n^3 (log alpha)^2 + log(||q x||^2)) many bits, where q is the common denominator for x.

Computation of highly regular nearby points (1995)
Carsten Rössner, Claus Peter Schnorr
We call a vector x in R^n highly regular if it satisfies <x, m> = 0 for some short, non-zero integer vector m, where <., .> is the inner product. We present an algorithm which, given x in R^n and alpha in N, finds a highly regular nearby point x' and a short integer relation m for x'. The nearby point x' is 'good' in the sense that no short relation m̃ of length less than alpha/2 exists for points x̃ within half the x'-distance from x. The integer relation m for x' is, for random x, up to an average factor 2^(alpha/2) a shortest integer relation for x'. Our algorithm uses, for arbitrary real input x, at most O(n^4 (n + log alpha)) many arithmetical operations on real numbers. If x is rational, the algorithm operates on integers having at most O(n^5 + n^3 (log alpha)^2 + log(||q x||^2)) many bits, where q is the common denominator for x.
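For readers new to the area, the one-dimensional case gives useful intuition for what these algorithms generalize: ordinary continued fraction convergents p/q of a single real number already achieve |x - p/q| < 1/q^2. The sketch below is an editor's illustration of that classical fact, unrelated to the authors' n-dimensional CFA; it prints the convergents of sqrt(2) together with the 1/q^2 bound.

```python
from fractions import Fraction
import math

def convergents(x, k):
    """First k continued fraction convergents of x (floating-point partial quotients)."""
    a, t = [], x
    for _ in range(k):
        a.append(int(math.floor(t)))
        frac = t - a[-1]
        if frac == 0:
            break
        t = 1.0 / frac
    p_prev, q_prev, p, q = 1, 0, a[0], 1      # standard recurrence seeds
    out = [Fraction(p, q)]
    for ai in a[1:]:
        p, p_prev = ai * p + p_prev, p
        q, q_prev = ai * q + q_prev, q
        out.append(Fraction(p, q))
    return out

for c in convergents(math.sqrt(2), 8):
    err = abs(math.sqrt(2) - c.numerator / c.denominator)
    print(c, err, 1.0 / c.denominator ** 2)   # the error stays below 1/q^2
```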
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Carsten+R%C3%B6ssner%22/start/0/rows/10/author_facetfq/Claus+Peter+Schnorr","timestamp":"2014-04-21T07:28:51Z","content_type":null,"content_length":"30030","record_id":"<urn:uuid:0bb52654-aefb-41a9-b4b7-603eb5418bb3>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
Question: Which theorem or postulate proves the two triangles are similar?
{"url":"http://openstudy.com/updates/5112f5d0e4b07c1a5a649d57","timestamp":"2014-04-17T04:18:28Z","content_type":null,"content_length":"106169","record_id":"<urn:uuid:4bc36c86-f59a-435e-aff8-4d83ae640490>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US7349878 - Simulation method and system for the valuation of derivative financial instruments This application is a continuation-in-part of application Ser. No. 08/911,252 filed Aug. 15, 1997, now U.S. Pat. No. 6,061,662, which itself claimed priority from provisional application 60/024,100, “Simulation Method and System for the Valuation of Derivative Financial Instruments,” filed on Aug. 16, 1996. 1. Field of the Invention The present invention relates to methods and systems for simulating a financial parameter and, more particularly, to methods and systems for simulating securities valuations and for assisting a user in making an investment decision. 2. Description of Related Art The introduction of quantitative analysis methods into the financial services arena has become attractive to market participants. However, all but the largest users are forced to use simple techniques, because of prohibitively expensive software and hardware. In addition to cost, investors must learn how to install, use, and maintain proprietary software and frequently must purchase different software for different investment or risk management tasks. At present, theoretical pricing models are used as market benchmarks. Traders use the models to obtain implied volatilities from current market prices. They may combine implied parameters with historical analysis. Then they use the model with these parameters to construct hedges with respect to various parameters and to predict option price movements as the stock prices move in the short run. The models are also used to consistently price new options with respect to actively traded ones. This can be done quickly only using analytical methods, but the analytical methods are very restrictive and even outright wrong in their assumptions about how the markets work. Numerical methods are much more flexible, but are much slower, particularly when sensitivities and multiple scenarios must be evaluated. Monte Carlo simulation is a technique for estimating the solution of a numerical mathematical problem by means of an artificial sampling experiment. This is an established numerical method for the valuation of derivative securities. Its major strength is flexibility, and it may be applied to almost any problem, including history-dependent claims or empirically characterized security processes. The major disadvantage of Monte Carlo simulation is speed, as the accuracy of the method improves as the square root of the number of independent samples generated in a simulation. However, because of sample independence, the method is highly parallel and is thus ideally suited for implementation with scalable parallel-processing architectures. The use of MPP permits a speed increase by a factor up to the number of processors. The Monte Carlo approach introduced by Boyle (J. Fin. Econ. 4, 323-38, 1977) relies on direct stochastic integration of the underlying Langevin equation. Given a security price at a first time, a new price for a subsequent second time is generated at random according to the stochastic process of the security. Results are obtained by averaging over a large number of realizations of the process. High-performance computing and communications (HPCC), and, in particular, cooperative, distributed, and parallel computing, are expected to play an increasingly important role in trading, financial engineering, and all aspects of investment and commercial banking. 
The convergence of a number of factors is at present, and is anticipated to continue, causing significant changes in the way in which financial transactions are implemented. Such factors include:
□ Increased volatility due to globalization of financial markets;
□ Global distribution of data sources;
□ Increased complexity of derivatives and other risk management vehicles;
□ Increased demand for real-time investment and asset allocation decision support;
□ Increased volume of raw data and the need to process large databases;
□ Increased volume on the retail side of the spectrum, mainly due to on-line technologies (e.g., the Internet and the World Wide Web).

High-performance computing technologies are becoming indispensable in application domains such as:
□ Derivative valuation, particularly over-the-counter products;
□ Portfolio optimization and asset allocation;
□ Hedging of portfolios in real time;
□ Arbitrage trading;
□ Risk analysis simulations.

Traditionally, these applications supported the wholesale end of the financial services spectrum and the supply side. It is believed that an opportunity for HPCC is being created by the emergence of global computer networks as a new delivery channel and economic force. For example, the Internet is creating a shift among financial services providers towards the retail end of the spectrum. At the same time, there is increased demand on the buy side, particularly among corporate treasurers, for more structured and more complex financial instruments to manage risk with more flexibility. This demand is going to grow with the trend towards globalization, which will be reflected in increased short-term volatility and risk exposure for all market participants. Investors at all levels of endowment are becoming more self-reliant for investment decisions and more comfortable with technology. It is believed that this trend will be reinforced by the wealth of information offered to the public as well as value-added networks. Finally, there is increased pressure from regulators to enact sophisticated risk management strategies for institutional investors, given well-publicized recent events involving financial catastrophes.

It is believed that these factors will contribute to an increased demand for on-line services in two areas:
□ Resources for risk management support;
□ Resources for investment decision support.

It is believed that these trends will lead to pervasive deployment of scalable high-performance architectures to support market demands.

It is therefore an object of the invention to provide a method and system for performing online, network-based quantitative analysis and derivative valuation for market participants. It is another object to provide such a method and system that includes an online option pricing system. It is a further object to provide such a method and system capable of performing a plurality of statistical analyses of historical data. It is an additional object to provide such a method and system capable of computing an implied volatility. It is also an object to provide such a method and system that comprises a user-friendly interface for permitting a user with limited market sophistication to perform simulations based on historical data and a choice of option strategies. It is yet another object to provide such a method and system that permits the user to obtain a complete solution of a derivative security valuation problem in a single simulation.
It is yet a further object to provide such a method and system for computing sensitivities to model parameters in a single pass. It is yet an additional object to provide such a method and system to evaluate American options. It is also an object to provide such a method and system to evaluate with Monte Carlo methods derivative securities with interest-rate-structure-dependent underlying assets. Another object is to provide a user with simulation data useful in assisting the user in making an investment decision. These objects and others are achieved by the system and method of the present invention. The system is for deriving an option price and a sensitivity to an input parameter on an underlying asset and comprises software means installable on a computer that comprises means for accessing a database of underlying asset dynamics (e.g., for hedging purposes, historical asset data are used) and historical asset data and means for performing stochastic statistical sampling on the historical asset data for an option based upon the input parameter to derive an option price in a single simulation. The system further comprises means for outputting the derived option price and the sensitivity to the user. The derived option price and sensitivity are useful in assisting the user to make an investment decision. The software means is for applying the stochastic technique in a parallel computing environment for pricing the financial vehicle. The stochastic technique preferably comprises a Monte Carlo approach, and is in a particular embodiment applied to the problem of pricing derivative securities. The method is in an exemplary embodiment based on a probability function for the complete history of the underlying security, although this is not intended to be limiting. Also a path-integral approach is preferably utilized, which is believed to be particularly suitable for implementation in a parallel or distributed computing environment. In a particular embodiment, the database further comprises parameters of a plurality of asset probability distributions, and the sampling is performed thereon. A commercial product resulting from this invention comprises a full-service quantitative analysis resource center (QUARC). The center comprises sophisticated quantitative analysis capability, implemented on a massively parallel machine or a scalable parallel machine. In a particular embodiment the center is delivered to a user on a computer platform using network technology. An online option pricing service is a component of the resource center, at the core of which is a Monte Carlo simulation algorithm. This algorithm is among the most flexible and is capable of pricing any kind of option. The algorithm has a unique feature in that it can compute all the parameter sensitivities of an option price in a single simulation without resorting to numerical differentiation. An accurate determination of price sensitivities is desirable for a practical trading or hedging strategy, again to assist the user in making an investment decision. The algorithm can also accept any kind of stochastic process for the underlying assets. Furthermore, option values and sensitivities can be computed for multiple values of parameters in a single simulation, which is a feature not replicatable by any other known method. 
This is useful for risk management purposes, where multiple scenarios may be superimposed on top of option valuation to establish best/worst-case exposures and other risk assessment tasks mandated by internal policies or regulators. The algorithm is also a valuable tool for sell-side firms, where it can be used as a flexible engine for the valuation of exotic one-of-a-kind derivative instruments. From an implementation point of view, the algorithm scales efficiently even on massively parallel processors; so it can take full advantage of the processing power of the machine. An architecture for the system may in an exemplary embodiment comprise a user's workstation, which typically would include a processor and storage, input, and output means. The workstation would also preferably comprise means for accessing a network such as the Internet. All of these means are in electronic communication with the processor, and the storage means has resident thereon the software as described above. The network-accessing means are used to access a database of historical asset data, which are utilized to perform a stochastic statistical sampling simulation on an input parameter entered by the user via the input means. The simulation yields a value such as a derived option price to the user via the output means. The value is useful in assisting the user to make an investment decision. Another, preferable embodiment of the present invention comprises a computation center such as described above, having a processor, storage means, and network accessing means, both in electronic communication with the processor. In this embodiment the software means is resident on the storage means of the computation center, and a desired simulation is performed when a user accesses the center, such as via a network, to request the simulation and provide an input parameter. The center's processor initiates an accessing of the historical asset database, whether resident on its own storage means or accessible via a network, performs the desired simulation, and provides an output to the user, which is useful in assisting the user in making an investment decision. The features that characterize the invention, both as to organization and method of operation, together with further objects and advantages thereof, will be better understood from the following description used in conjunction with the accompanying drawing. It is to be expressly understood that the drawing is for the purpose of illustration and description and is not intended as a definition of the limits of the invention. These and other objects attained, and advantages offered, by the present invention will become more fully apparent as the description that now follows is read in conjunction with the accompanying drawing. FIG. 1 schematically illustrates an exemplary architecture of the parallel derivative valuation server. FIG. 2 represents exemplary system output showing the daily average price graph for a common stock and the user-specified variables used in the statistical software. FIG. 3 represents exemplary system output showing a histogram of price changes and a Gaussian fit thereto. FIG. 4 represents exemplary system output showing histogram and real data fit. FIG. 5 represents exemplary system output showing calculated prices for an option on the stock of FIG. 2. A detailed description of preferred embodiments of the invention will now be presented with reference to FIGS. 1-5. I. Theoretical Basis for the Method and System A. 
Path Integral Monte Carlo Method

Monte Carlo methods in financial calculations can be based on the risk-neutral valuation approach of Cox and Ross (J. Fin. Econ. 3, 145-66, 1976). Consider a derivative security that depends on an N-dimensional vector of state variables, Θ = (θ_1, θ_2, . . . , θ_i, . . . , θ_N). The state vector Θ is assumed to follow an arbitrary Markov process. Current time is set to t=0, and the expiration date of the contract is at t=T. It will be assumed first for the sake of simplicity that the contract is European. Implementation of American contracts will be discussed in the following. The riskless short-term interest rate at time t will be denoted by r(t). The risk-neutral probability density of the final state vectors at time T is given by the conditional probability distribution P(Θ(T)|Θ(0)). Let t_i, i=1, . . . , M, denote a set of intermediate time slices, such that 0 < t_1 < t_2 < . . . < t_M < T. To simplify notation, we will denote these time slices using their indices only, i.e., t_i = i. Application of the Chapman-Kolmogorov equation for Markov processes (A. Papoulis, Probability, Random Variables and Stochastic Processes, McGraw-Hill, 1984) leads to a recursive determination of P:

$$P(\Theta(T)|\Theta(0))=\int d\Theta(M)\cdots\int d\Theta(2)\int d\Theta(1)\,P(\Theta(T)|\Theta(M))\cdots P(\Theta(1)|\Theta(0))\tag{1}$$

We will call the collection of state vectors

$$\Omega=(\Theta(0),\Theta(1),\ldots,\Theta(T))\tag{2}$$

a "path" followed by the state variables. This is analogous to the path integrals as described by Richard Feynman (Rev. Mod. Phys. 20(2), 367, 1948) and Norbert Wiener (Proc. Natl. Acad. Sci. 7, 253, 1922; J. Math. Phys. 2, 131, 1923; Proc. Lond. Math. Soc. 22, 434, 1924; ibid., 55, 117, 1930). For any finite number of time slices, a path may be regarded as a point in a finite-dimensional space, which will be called the "path space." The payoff of the derivative security, F(Ω), is a real-valued function on the path space. We will use the following shorthand notation for the multiple integrals:

$$\int D\Omega \equiv \int d\Theta(T)\int d\Theta(M)\cdots\int d\Theta(2)\int d\Theta(1)\tag{3}$$

With these conventions, the valuation formula for the price Q of a European contract with payoff function F can be written in a "path integral" notation:

$$Q=\int D\Omega\,F(\Omega)\,P(\Omega)\,\exp\!\left(-\int_0^T r(t)\,dt\right)\tag{4}$$

where, by definition, the probability of a path is:

$$P(\Omega)=P(\Theta(T)|\Theta(M))\,P(\Theta(M)|\Theta(M-1))\cdots P(\Theta(2)|\Theta(1))\,P(\Theta(1)|\Theta(0))\tag{5}$$

Path probability is expressed in terms of a product of probabilities associated with "short-term" segments (time slices) of a path. By tuning the number of time slices, one can simulate arbitrary Markov processes with desired accuracy. Jump processes (see Merton, J. Fin. Econ. 3, 125-44, 1976), nonstationary stochastic processes, or empirical probability distributions of security returns can be readily incorporated into the path integral framework. The interpretation of Eq. (4) is straightforward: One is summing over payoffs F(Ω) of all possible paths of the state vector from the beginning of the contract until its expiration, weighted by the probability P(Ω) of an occurrence of a particular path. The basic idea of a Monte Carlo approach is that this summation can be performed stochastically by generating paths at random and accumulating their payoffs. Major contributions to the integral come from relatively small parts of the otherwise huge path space.
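As a concrete illustration of this direct path-sampling idea, the sketch below prices a European call by generating discretized paths and averaging discounted payoffs. It is an editor's sketch under simple assumptions (a single lognormal underlying with constant rate and volatility), not the patent's implementation, and all parameter values are illustrative.

```python
# Editor's sketch: plain Monte Carlo estimate of the path integral in Eq. (4)
# for a European call under risk-neutral geometric Brownian motion.
import math, random

def price_call_direct(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                      n_slices=10, n_paths=100_000, seed=1):
    rng = random.Random(seed)
    dt = T / n_slices
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    disc = math.exp(-r * T)
    total = total_sq = 0.0
    for _ in range(n_paths):
        x = math.log(S0)
        for _ in range(n_slices):                 # build one path, slice by slice
            x += drift + vol * rng.gauss(0.0, 1.0)
        payoff = disc * max(math.exp(x) - K, 0.0) # discounted payoff F(path)
        total += payoff
        total_sq += payoff * payoff
    price = total / n_paths
    stderr = math.sqrt(max(total_sq / n_paths - price ** 2, 0.0) / n_paths)
    return price, stderr                          # error shrinks like n_paths**-0.5

print(price_call_direct())   # roughly the Black-Scholes value 10.45, plus sampling error
```

Here the paths can be drawn directly from the known lognormal transition densities; the Metropolis construction described next provides a way to sample more general path probabilities and to exploit the freedom in choosing the sampling dynamics.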
Therefore, to evaluate the multidimensional integral (4) efficiently, it is essential to sample different regions of the path space according to their contribution to the integral, i.e., to perform importance sampling (Hammersley and Handscomb, Monte Carlo Methods, Methuen, London, 1964). If a discrete set of L paths Ω_v, v=1, . . . , L, is drawn according to its probability of occurrence, P(Ω), the integral (4) may be approximated by:

$$\langle Q\rangle_{MC}=\frac{1}{L}\sum_{v=1}^{L}Q(\Omega_v)\tag{6}$$

and the error can be controlled by increasing the size of the sample, since, for large sample sizes, the central limit theorem assures that

$$\langle Q\rangle_{MC}=Q+O(L^{-1/2})\tag{7}$$

B. Metropolis Algorithm

Before we describe the advantages of promoting complete paths to be the fundamental objects of a Monte Carlo simulation, we shall describe the Metropolis method for generating the probability distribution of the paths, to be able to take advantage of importance sampling. The Metropolis method constructs a Markov process in the path space, which asymptotically samples the path probability distribution. This process is not related to the Markov process that governs the evolution of the state variables. Being a formal device to obtain the desired distribution, there is a lot of freedom in constructing this process, which will prove advantageous for variance reduction techniques. The Markov process will be defined by the transition probability W(Ω_1→Ω_2), which denotes the probability of reaching point Ω_2 starting from Ω_1. There are two restrictions on the choice of the transition probability W. First, the stochastic dynamics defined by W must be ergodic; i.e., every point in the path space must be accessible. The second requirement is that the transition probability must satisfy the "detailed balance condition":

$$P(\Omega_1)\,W(\Omega_1\to\Omega_2)=P(\Omega_2)\,W(\Omega_2\to\Omega_1)\tag{8}$$

These two restrictions do not specify the stochastic dynamics. We shall use the transition probability proposed by Metropolis et al. (J. Chem. Phys. 21, 1087-91, 1953), which is known as the Metropolis algorithm:

$$W(\Omega_1\to\Omega_2)=\begin{cases}P(\Omega_2)/P(\Omega_1), & \text{if }P(\Omega_1)\ge P(\Omega_2)\\ 1, & \text{if }P(\Omega_1)<P(\Omega_2)\end{cases}\tag{9}$$

It has been proved (Negele and Orland, Quantum Many-Particle Systems, Addison Wesley, New York, 1988) that this Markov chain asymptotically samples the desired distribution P(Ω), which is the asymptotic probability distribution of the points generated by the random walk:

$$P(\Omega)=\lim_{n\to\infty}P_n(\Omega)\tag{10}$$

One can view the evolution of the original probability distribution along the Markov chain as a relaxation process towards the "equilibrium distribution," P(Ω). In practice, one assumes that the relaxation occurs within a Markov chain of finite length R. The actual number R is usually determined by experimenting, and depends on both probabilities P and W and the desired accuracy of the simulation. Given P and W, R has to be chosen large enough that the systematic error due to the deviation from the true distribution is smaller than the statistical error due to the finite size of the sample (see Eq. 7). In applications with a large number of strongly coupled degrees of freedom, the relaxation process is nontrivial. For our present purpose, the state vector is low dimensional, and relaxation occurs within just a few steps along the Markov chain. One of the major pitfalls of the Metropolis method is the possibility of being trapped in the vicinity of a metastable state in the phase space.
One of the major pitfalls of the Metropolis method is the possibility of being trapped in the vicinity of a metastable state in the phase space. Fortunately, it is not a serious concern in this case, again because of the low dimensionality of our problem. The prescription for a practical algorithm can now be summarized as follows:

□ 1. Pick an arbitrary initial path.
□ 2. Generate a new trial path.
□ 3. The new path is accepted with probability W. Specifically, if W≧1, the new path is accepted without further tests. If W<1, a random number between 0 and 1 is generated, and the new path is accepted if the random number is smaller than W. If the trial path is accepted, it becomes the current path Ω.
□ 4. If we have progressed far enough along the Markov chain that the relaxation is completed (i.e., v≧R), the current path is sampled from the desired distribution P(Ω). We compute the payoff function for the current path, F(Ω[v]), and accumulate the result (see Eqs. 4 and 6): A=A+F(Ω[v]).
□ 5. Perform an estimate of the statistical errors due to the Monte Carlo sampling procedure. If the error is above the desired level of accuracy, go to (2); otherwise, go to (6).
□ 6. Compute Monte Carlo estimates of the required integrals. If L denotes the last value of the step index v, and R is the number of relaxation steps, the total number of Monte Carlo measurements is M[v]=L−R. The Monte Carlo estimate of the option price <Q>[MC], given the payoff function F, is obtained as:

$\langle Q\rangle_{MC} = \frac{A}{M_v} = \frac{1}{M_v}\sum_{v=R+1}^{L} F(\Omega_v) \qquad (11)$

The error estimate requires that we also accumulate:

$\langle Q^2\rangle_{MC} = \frac{1}{M_v}\sum_{v=R+1}^{L} F^2(\Omega_v) \qquad (12)$

An estimate of the sampling error is obtained as the square root of the variance of the Monte Carlo run:

$\varepsilon = \left(\langle\sigma^2\rangle_{MC}\right)^{1/2}, \qquad \langle\sigma^2\rangle_{MC} = \frac{1}{M_v}\left(\langle Q^2\rangle_{MC} - \langle Q\rangle_{MC}^2\right) \qquad (13)$

□ 7. Stop.

D. What can be Computed in a Single Simulation?

A very important advantage of the path integral approach is that more information can be obtained in a single simulation than with the standard approach. The basic observation is that all relevant quantities can be expressed as integrals with respect to the path probability distribution. This is very important for the computation of partial derivatives of the contingent claim's price with respect to various parameters. The standard practice is to compute derivatives using numerical differentiation. This approach introduces discretization errors in addition to statistical sampling errors. Numerical differentiation is also computationally expensive, since it requires repeating Monte Carlo simulations for nearby values of the parameters. To illustrate the path integral approach, we start from Eq. (4) and denote explicitly the dependence of the price Q(X), the payoff function F(Ω,X), and the path probability P(Ω,X) on a parameter X:

$Q(X) = \int D\Omega\, F(\Omega,X)\, P(\Omega,X) \qquad (14)$

We have absorbed the present-value discount factor, $\exp\left[-\int_0^T r(t)\, dt\right]$, into the definition of the payoff function F(Ω,X). The desired partial derivative is given by:

$\frac{\partial Q(X)}{\partial X} = \int D\Omega \left(\frac{\partial F(\Omega,X)}{\partial X} + F(\Omega,X)\,\frac{\partial \ln P(\Omega,X)}{\partial X}\right) P(\Omega,X) \qquad (15)$

Therefore, a Monte Carlo estimate of a partial derivative ∂Q(X)/∂X of the price may be computed in the same Monte Carlo run as the price Q itself, by accumulating (see Eq. 11):

$\frac{\partial Q(X)}{\partial X} = \frac{1}{M_v}\sum_v \left(\frac{\partial F(\Omega_v,X)}{\partial X} + F(\Omega_v,X)\,\frac{\partial \ln P(\Omega_v,X)}{\partial X}\right) \qquad (16)$
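A minimal sketch of the measurement stage follows: the accumulators of Eqs. (11)-(13) together with the sensitivity accumulator of Eq. (16), here for a parameter that enters only the path probability (so that ∂F/∂X=0). The payoff and score routines are dummy stand-ins; in the full method they would be evaluated on the current Metropolis path, and all run lengths are placeholder values.

program mc_accumulators
  implicit none
  ! Placeholder run parameters (assumed for illustration only)
  integer, parameter :: r_relax = 100      ! relaxation steps R
  integer, parameter :: l_total = 100100   ! last step index L
  integer :: v, m_v
  real :: f, dlnp_dx, a, a2, asens, q_mc, q2_mc, sigma2_mc, eps, dq_dx

  a = 0.0; a2 = 0.0; asens = 0.0
  do v = 1, l_total
     ! ... the Metropolis update of the current path would go here ...
     if (v <= r_relax) cycle               ! discard the relaxation segment
     f = sample_payoff()                   ! F(Omega_v), stand-in for the real payoff
     dlnp_dx = sample_score()              ! d ln P(Omega_v, X)/dX, stand-in
     a = a + f                             ! accumulator for Eq. (11)
     a2 = a2 + f**2                        ! accumulator for Eq. (12)
     asens = asens + f*dlnp_dx             ! accumulator for Eq. (16), with dF/dX = 0
  end do

  m_v = l_total - r_relax                  ! number of measurements M_v
  q_mc = a/real(m_v)                                   ! Eq. (11)
  q2_mc = a2/real(m_v)                                 ! Eq. (12)
  sigma2_mc = (q2_mc - q_mc**2)/real(m_v)              ! Eq. (13)
  eps = sqrt(max(sigma2_mc, 0.0))
  dq_dx = asens/real(m_v)                              ! Eq. (16)
  print *, 'Q =', q_mc, '+/-', eps, '   dQ/dX =', dq_dx

contains
  real function sample_payoff()
    real :: u
    call random_number(u)
    sample_payoff = 10.0*u                 ! dummy payoff, for illustration
  end function sample_payoff
  real function sample_score()
    real :: u
    call random_number(u)
    sample_score = u - 0.5                 ! dummy score function, for illustration
  end function sample_score
end program mc_accumulators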
The derivative of the path probability, ln P(Ω,X), is a sum of contributions coming from the individual time slices (see Eq. 5):

$\frac{\partial \ln P(\Omega_v,X)}{\partial X} = \sum_{i=0}^{T-1}\frac{\partial \ln P(\Theta(i+1)|\Theta(i),X)}{\partial X} \qquad (17)$

If the parameter X is the initial stock price or the strike price, this expression simplifies considerably, since only the first or last time slice appears in the sum. Equation (17) implies that contributions from different time slices are independent. This is an important source of parallelism that can be exploited on a concurrent processor (see Section I.G). As a result, term structure contributions to the probability distribution are conveniently included as simply another input parameter. Following Ferrenberg and Swendsen (Phys. Rev. Lett. 61, 2635-38, 1988), knowledge of the path probability function may be used to obtain results for a different probability distribution. In practice, this means that within a single Monte Carlo simulation with a particular fixed set of parameters (e.g., initial stock price, volatility, exercise price), we can compute results corresponding to other sets of parameters that are close to the one used in the simulation. Let us denote the parameters used in the simulation by a vector X, and a different parameter vector by Y. Then:

$Q(Y) = \int D\Omega\, F(\Omega,Y)\, P(\Omega,Y) \qquad (18)$

which can be rewritten as:

$Q(Y) = \int D\Omega\, F(\Omega,Y)\,\frac{P(\Omega,Y)}{P(\Omega,X)}\, P(\Omega,X) \qquad (19)$

Thus the change of parameters amounts to a redefinition of the payoff function. This equation implies that by accumulating:

$\langle Q(Y)\rangle_{MC} = \frac{1}{M_v}\sum_v F(\Omega_v,Y)\,\frac{P(\Omega_v,Y)}{P(\Omega_v,X)} \qquad (20)$

for a number of different values of the parameter vector Y, while running a simulation for a fixed vector X, one can obtain results for a window of parameters without repeating the simulation for each new set of parameters. This feature of path integral Monte Carlo is extremely useful for the computation of American contracts, although this is not intended as a limitation. Practical limitations arise from the need to have efficient importance sampling. If the parameters change too much, so that the true path probability distribution for that choice of parameters is significantly different from the one used in the simulation, the benefits of importance sampling are lost. This limits the size of the parameter window that can be scanned with uniform accuracy in a single simulation. An alternate embodiment of the method comprises using Eq. (20) as the basis of a control variate technique.

E. Implementation of American Contracts

Monte Carlo computation proceeds by accumulating the payoffs obtained for each simulated path. This property complicates pricing for American contracts, where, at every instant in time, the contract owner has to decide whether the contract is worth more exercised or unexercised. Therefore, the payoff function at every time step is not known before the simulation. This is clear from the following recursive equations for American contracts:

$Q(t, S_t, X) = \max\left\{ f(S_t, X),\; Q^*(t, S_t, X) \right\} \qquad (21)$

$Q^*(t, S_t, X) = \int dS_{t+\Delta t}\, Q(t+\Delta t, S_{t+\Delta t}, X)\, P(S_{t+\Delta t}|S_t) \qquad (22)$

We shall begin with a simple call or put on one underlying. The value Q(t,S[t],X) of an American contract at time t, given a stock price S[t] and strike price X, is equal to the larger of the exercised contract payoff f(S[t],X) and the value of the contract if left alive for the next time slice, Q*(t,S[t],X). The recursion is closed by the boundary condition at the contract expiration date: Q(T,S[T],X)=f(S[T],X).
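The parameter-window idea of Eq. (20) can be sketched as follows for a window of volatilities, a case in which every time slice contributes to the reweighting factor. For concreteness the per-slice density is taken to be the Gaussian of the lognormal model introduced later in Section F, the paths are drawn directly rather than by Metropolis sampling, and all parameter values are placeholders; none of this is claimed to be the code of the disclosed system.

program volatility_window_sketch
  implicit none
  ! Reweighting (Eq. 20): paths are sampled at variance var_x, and call prices
  ! for nearby variances var_y(j) are accumulated from the same paths.
  integer, parameter :: m = 50, n_y = 3, n_meas = 50000
  real, parameter :: s0 = 100.0, strike = 100.0, rf = 0.004853
  real, parameter :: var_x = 0.0025
  real :: var_y(n_y), q_y(n_y), y(0:m), w, f
  integer :: v, j

  var_y = (/ 0.0020, 0.0025, 0.0030 /)
  q_y = 0.0
  do v = 1, n_meas
     call draw_path(y, var_x)                       ! sample a path at variance var_x
     f = max(exp(y(m)) - strike, 0.0)               ! payoff F(Omega)
     do j = 1, n_y
        w = exp(logp(y, var_y(j)) - logp(y, var_x)) ! P(Omega,Y)/P(Omega,X)
        q_y(j) = q_y(j) + f*w                       ! Eq. (20) accumulation
     end do
  end do
  print *, 'prices across the volatility window:', exp(-rf*m)*q_y/real(n_meas)

contains
  subroutine draw_path(yy, var)
    real, intent(out) :: yy(0:m)
    real, intent(in) :: var
    real :: u1, u2
    integer :: n
    yy(0) = log(s0)
    do n = 1, m
       call random_number(u1); call random_number(u2)
       yy(n) = yy(n-1) + (rf - 0.5*var) &
             + sqrt(-2.0*var*log(1.0 - u1))*cos(8.0*atan(1.0)*u2)
    end do
  end subroutine draw_path

  real function logp(yy, var)
    ! ln of the path probability at variance var (per-slice Gaussian, dt = 1)
    real, intent(in) :: yy(0:m), var
    integer :: n
    logp = 0.0
    do n = 1, m
       logp = logp - (yy(n) - yy(n-1) - (rf - 0.5*var))**2/(2.0*var)
    end do
    ! the Gaussian normalization depends on var and must be kept here, because
    ! var differs between the numerator and denominator of the weight
    logp = logp - 0.5*real(m)*log(var)
  end function logp
end program volatility_window_sketch

Note that when the parameter being varied changes the width of the transition density, the normalization no longer cancels between the two path probabilities and has to be included in the weight; for a window of initial stock prices only the first time slice contributes, as Eq. (32) below makes explicit.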
One consequence of Eq. (21) is that there exists a boundary S[B](t) on the security price versus time plane such that, for all paths within the boundary, the contract is left alive until expiration, while paths that cross the boundary are terminated, since the contract is exercised. Once this boundary is determined, the pricing of American options proceeds in a fashion similar to European options. Paths are generated in exactly the same fashion as for European options, but one has to check whether a path crosses the boundary. The payoff of a path within the boundary is f(S[T],X), while the payoff of a path that crosses the boundary at an earlier time, t<T, is f(S[t],X). Determination of the boundary is a computationally demanding task. In our present implementation, the boundary is determined recursively, starting from the contract expiration date T. If we specialize to an ordinary put or call, the boundary at T is given by the strike price X. If the time slice Δt is short enough, it is clear that the location of the boundary at T−1 will be close to X. To determine the location exactly, one has to compute the option price for a number of initial stock prices at T−1 that are close to X, under the assumption that the contract is kept alive for the single time step, and compare each of those prices with the payoffs of the same contracts if exercised. This is accomplished efficiently in a single simulation using Eq. (20), with the simulation centered at the initial stock price S[T−1]=X. Quadratic interpolation is then used to obtain all option prices in the search region. If we denote this quadratic function by Q*[2](S), the boundary S[B](T−1) is obtained by solving Q*[2](S)−f(S,X)=0. This procedure is repeated for each time slice t<T by centering the search region at S[B](t+1). Depending on the desired accuracy and the size of the time step, the procedure can be repeated starting from the boundary determined in the previous iteration. Once we determine the boundary, we proceed with the computation as for the European contract, while keeping track of boundary crossings for payoff accumulation. We should stress that once a path is generated, the European option price can easily be computed for all time periods to maturity, from t=1 to T. This is not as simple for the American option, since the boundary is different for different time periods to maturity. One must compute and store all the boundaries for all periods to maturity before using a full path generated in the Monte Carlo simulation. A path may be contained within some of the boundaries while crossing others. It is clear from these considerations that boundary determination is the most expensive part of the computation. The ability of path integral Monte Carlo to compute multiple option prices in a single run is extremely valuable in this context. From a practical viewpoint, it is important to set the width of the search region on the next time slice correctly once the boundary has been determined at the current time slice. In our present implementation this requires a lot of intervention from the user, although it should be adjusted automatically given the parameters of the simulation. An alternate embodiment of the method comprises a different approach. Given the falling cost of both primary and secondary storage, it can be advantageous to store all paths generated in a Monte Carlo simulation.
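This alternate embodiment needs, at its core, a routine that revalues the contract from the stored paths for a given trial boundary; an optimizer can then perturb the boundary and call the routine repeatedly. A sketch for an American put follows. The storage layout, the flat initial boundary, and the path-generation stand-in are assumptions made only so that the example is self-contained; they are not taken from the disclosure.

program boundary_value_sketch
  implicit none
  integer, parameter :: m = 50, npaths = 20000
  real, parameter :: strike = 100.0, rf = 0.004853
  real :: paths(0:m, npaths)      ! stored log-price paths y(n), one column per path
  real :: sb(0:m)                 ! trial exercise boundary S_B(t) in price units
  real :: pv

  call load_or_generate(paths)    ! placeholder for reading back the stored paths
  sb = 90.0                       ! a flat trial boundary, to be perturbed by the optimizer
  sb(m) = strike                  ! the boundary equals the strike at expiration

  pv = contract_value(paths, sb)
  print *, 'contract value for this boundary:', pv

contains
  real function contract_value(p, b)
    ! Discounted payoff per path: exercise at the first boundary crossing,
    ! otherwise exercise value at expiration (put convention: crossing = S <= S_B).
    real, intent(in) :: p(0:m, npaths), b(0:m)
    real :: s, acc
    integer :: ip, n
    acc = 0.0
    do ip = 1, npaths
       do n = 1, m
          s = exp(p(n, ip))
          if (s <= b(n) .or. n == m) then
             acc = acc + exp(-rf*n)*max(strike - s, 0.0)
             exit
          end if
       end do
    end do
    contract_value = acc/real(npaths)
  end function contract_value

  subroutine load_or_generate(p)
    ! Stand-in for the path store: draws paths from the lognormal model of
    ! Section F so that the example runs.
    real, intent(out) :: p(0:m, npaths)
    real, parameter :: s0 = 100.0, var = 0.0025
    real :: u1, u2
    integer :: ip, n
    do ip = 1, npaths
       p(0, ip) = log(s0)
       do n = 1, m
          call random_number(u1); call random_number(u2)
          p(n, ip) = p(n-1, ip) + (rf - 0.5*var) &
                   + sqrt(-2.0*var*log(1.0 - u1))*cos(8.0*atan(1.0)*u2)
       end do
    end do
  end subroutine load_or_generate
end program boundary_value_sketch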
A good first guess for the boundary can be made. Since the correct boundary maximizes the contract value, an optimization procedure can be established wherein the stored paths are used to compute the contract value given a boundary, which is then perturbed until the global maximum of the contract value is reached. This approach may be more suitable for parallel machines than the recursive method, which is necessarily sequential across time slices.

F. Sequential Implementation

We now consider the simplest possible valuation problem. We describe in detail a path-integral Monte Carlo evaluation of the price of a European call on a stock following the standard Ito price process with constant volatility. The interest rate is also assumed constant: r(t)=r[f]. The exact solution is given by the well-known Black-Scholes formula (J. Polit. Econ. 81, 637-59, 1973). The evolution of the stock price logarithm, y=log S, is given by the stochastic differential equation:

$d\log S = dy = \mu\, dt + \sigma\, d\xi \qquad (23)$

Standard notation is used: μ is the expected return, and σ is the stock price volatility. The stochastic component dξ is a Wiener process with variance dt. In a risk-neutral world, the expected return on the stock is equal to the risk-free interest rate, which requires that μ=r[f]−σ^2/2. If the stock price logarithm at time t is y[t], the risk-neutral probability distribution of the stock price logarithm y[t+Δt] at the instant t+Δt is Gaussian:

$P(y_{t+\Delta t}|y_t) \propto \exp\left(-\frac{(y_{t+\Delta t} - y_t - \mu\Delta t)^2}{2\sigma^2\Delta t}\right) \qquad (24)$

For this particular stochastic process, Eq. (24) is true for any time interval. For general continuous price processes, the Gaussian form is valid only in the limit Δt→0. The probability of any path Ω=(y(0), y(Δt), . . . , y(MΔt), y(T)), with M time slices, can be written (using the simplified notation n≡nΔt):

$P(\Omega) = \prod_{n=0}^{M}\exp\left(-\frac{(y(n+1) - y(n) - \mu\Delta t)^2}{2\sigma^2\Delta t}\right) \qquad (25)$

Note that this distribution is not normalized, but that is irrelevant for the Metropolis algorithm, since only ratios of probabilities matter. The payoff function for the call is given by:

$F(S(T),X) = (S(T)-X)\,\Theta(S(T)-X) = (e^{y(T)}-X)\,\Theta(e^{y(T)}-X) \qquad (26)$

where X denotes the strike price and Θ(x) is the step function. Given a path Ω=(y(0), y(1), . . . , y(n), . . . , y(T)), we can obtain any other path by a sequence of entirely local steps in which we update the value of the stock price at a single time slice n. The new path differs from the old one only by the stock price value at time nΔt: Ω′=(y(0), y(1), . . . , y′(n), . . . , y(M), y(T)). The new value of the stock price logarithm is obtained in the following way: (1) At the beginning of the simulation, we pick an interval of width Δ=λσ^2Δt. (2) For each update, we pick a random number p such that −1≦p≦1. The new stock price logarithm is y′=y+pΔ. The scale factor λ is chosen by experimentation, so that the acceptance rate of the updated configurations is roughly 0.5. Following the Metropolis algorithm, we have to accept or reject the new path according to the transition probability W(Ω→Ω′). Combining Eqs. (9) and (25), we obtain the following expression for the transition probability:

$W(\Omega\rightarrow\Omega') = \frac{\Lambda'(n-1,n,n+1)}{\Lambda(n-1,n,n+1)} \qquad (27)$
$\Lambda'(n-1,n,n+1) = \exp\left(-\frac{(y'(n) - y(n-1) - \mu\Delta t)^2}{2\sigma^2\Delta t} - \frac{(y(n+1) - y'(n) - \mu\Delta t)^2}{2\sigma^2\Delta t}\right) \qquad (28)$

$\Lambda(n-1,n,n+1) = \exp\left(-\frac{(y(n) - y(n-1) - \mu\Delta t)^2}{2\sigma^2\Delta t} - \frac{(y(n+1) - y(n) - \mu\Delta t)^2}{2\sigma^2\Delta t}\right) \qquad (29)$

This local algorithm is applicable to any Markov process. A non-Markov process induces couplings between nonadjacent time slices, and one would design an appropriate algorithm with global updates. Global updates are dependent upon the nature of the stochastic process, and consequently the symmetries of the resultant path probability function. It is desirable to include global updates even for Markov processes in order to reduce the variance of the Monte Carlo simulation. We use a very simple global update for the present problem, based upon the translational invariance of the path probability (i.e., its dependence upon differences of stock price logarithms). When y[n] is updated, then y[k], n<k≦T, are also shifted by the same amount. The transition probability for this move is still given by Eqs. (28) and (29), but without the second term in the exponent, since the probability of the path segment after time slice n is not changed by a rigid shift. This move significantly improves the statistical quality of results for time slices closer to the contract expiration date.

We usually start from a "deterministic" path, i.e., a path given by Eq. (23) with σ=0. We update paths looping over time slices sequentially using both global and local moves. When the whole path is updated, it is counted as one Monte Carlo step. After the relaxation period (which is less than 100 steps), we begin accumulating an option price estimate and its partial derivatives, as described in Sections B and D. The computation of the option's delta, which measures its price sensitivity, δ=∂Q/∂S(0), requires an accumulation of A=∂F/∂S(0)+F ∂ln P/∂S(0) during the Monte Carlo run. Since S(0) appears only on the initial time slice and the payoff function does not depend explicitly on S(0), the accumulator reduces to:

$A = F(S(T),X)\,\frac{y(1) - y(0) - \mu\Delta t}{S(0)\,\sigma^2\Delta t} \qquad (30)$

The volatility sensitivity κ is somewhat more complicated, since all the time slice probabilities depend upon σ. The payoff function does not depend upon σ; so A=F ∂ln P/∂σ. Using the explicit expression for path probability, Eq. (25), we obtain:

$A = F(S(T),X)\sum_{n=1}^{T}\frac{y(n) - y(n-1) - \mu\Delta t}{\sigma}\left(\frac{y(n) - y(n-1) - \mu\Delta t}{\sigma^2\Delta t} - 1\right) \qquad (31)$

In this sum, there are only two terms that depend upon the stock price on any given time slice. Therefore, the computation of κ has a constant complexity per Monte Carlo update. Furthermore, only adjacent time slices are coupled in the sum, which is important for an efficient parallel implementation. Similar expressions are easily derived for interest rate sensitivity ρ. We should note that these equations are not the most efficient way to compute derivatives for this particular problem. By a simple change of variables, one can rewrite the probability function to be independent of σ and S(0), and the complete dependence on these parameters is lumped into the payoff function. However, unlike Eqs. (31) and (32), the resulting expressions cannot be easily generalized to more complicated processes.
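Putting Eqs. (27)-(29) and the rigid-shift move together, one Monte Carlo sweep of the path sampler might look as follows. This is a hedged sketch rather than the code of the disclosed system; the number of slices, the scale factor λ, and the organization of the sweep are illustrative assumptions.

program local_global_updates
  implicit none
  integer, parameter :: m = 50                ! time slices; y(0) is fixed at log S(0)
  real, parameter :: s0 = 100.0, rf = 0.004853, var = 0.0025
  real, parameter :: lambda = 20.0            ! placeholder scale factor, tuned by experiment
  real :: y(0:m), mu, width, ynew, shift, dlogp, u
  integer :: n

  mu = rf - 0.5*var
  width = lambda*var                          ! Delta = lambda*sigma**2*dt, with dt = 1
  y(0) = log(s0)
  do n = 1, m                                 ! start from the deterministic (sigma = 0) path
     y(n) = y(n-1) + mu
  end do

  do n = 1, m                                 ! ---- local moves, Eqs. (27)-(29) ----
     call random_number(u)
     ynew = y(n) + (2.0*u - 1.0)*width
     dlogp = -((ynew - y(n-1) - mu)**2 - (y(n) - y(n-1) - mu)**2)/(2.0*var)
     if (n < m) dlogp = dlogp &
          - ((y(n+1) - ynew - mu)**2 - (y(n+1) - y(n) - mu)**2)/(2.0*var)
     call random_number(u)
     if (u < exp(min(dlogp, 0.0))) y(n) = ynew
  end do

  do n = 1, m                                 ! ---- global rigid-shift moves ----
     call random_number(u)
     shift = (2.0*u - 1.0)*width
     ! only the link (n-1, n) changes; the segment from n onward is shifted rigidly
     dlogp = -((y(n) + shift - y(n-1) - mu)**2 - (y(n) - y(n-1) - mu)**2)/(2.0*var)
     call random_number(u)
     if (u < exp(min(dlogp, 0.0))) y(n:m) = y(n:m) + shift
  end do

  print *, 'terminal stock price after one sweep:', exp(y(m))
end program local_global_updates

Note that the terminal slice has only one neighboring link, so only the first term of the exponent appears in its acceptance ratio, exactly as for the global shift.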
We now outline the procedure for computing option prices for multiple initial stock prices in a single simulation with an initial stock price S(0)=S[0]. The additional stock prices will be designated by S[j]. Since only the first time slice probability depends upon S(0), it follows from Eqs. (20) and (24) that for each additional option price j, we must accumulate:

$A_j = F(S(T),X)\,\exp\left(-\frac{(y(1) - y_j - \mu\Delta t)^2 - (y(1) - y_0 - \mu\Delta t)^2}{2\sigma^2\Delta t}\right) \qquad (32)$

where y[0]=log S[0] and y[j]=log S[j]. Note that F is independent of the initial price; so the same function F(S(T),X) appears for all indices j.

G. Parallel Implementation

An extrapolation of current trends in high-performance computing indicates that multiple-instruction multiple-data (MIMD) distributed-memory architectures are very cost-effective hardware platforms, although this is not intended as a limitation. Additional parallel configurations comprise single-instruction multiple-data architectures. Further, the system may comprise distributed or shared memory, as well as a network of, for example, personal computers. The architecture thus includes tightly coupled systems as well as networked workstations, which are conceptually similar. The only important difference is that tightly coupled systems have higher communication bandwidths, which means that they can better exploit finer-grain parallelism. We will discuss an implementation of Monte Carlo option pricing on MIMD platforms. Task parallelism and data parallelism are the two most important paradigms of parallel computation. A task-parallel computation can be broken into a number of different tasks running concurrently on different processors. Tasks coordinate their work by exchanging messages. A computational problem is data parallel if a single operation can be applied to a number of data items simultaneously. In this case, the most important parallelization strategy is to distribute the data items among the nodes of a parallel processor, which can then apply the operation simultaneously to the data they own. A Monte Carlo simulation can be viewed as either task-parallel or data-parallel, or both. Depending on the parameters of the pricing problem and the characteristics of the hardware, one may employ either parallelization strategy or both. The simplest and most effective parallelization strategy is based on task parallelism. It also requires minimal programming effort for porting a sequential code to a parallel code. A Monte Carlo simulation run can be broken into a number of short runs. These short runs may be viewed as tasks that are assigned to and executed on different processors independently. The only time the processors communicate is at the initialization stage and at the very end, when averages from different processors are combined into a single Monte Carlo estimate. If the time required to complete the long run on a single processor is T[MC](1) and the number of processors is N, then the time required for a short run is T[MC](1)/N. If the short runs are executed concurrently on a parallel machine with N processors, the time required for the simulation is T[MC](N)=T[MC](1)/N+T[comm], where T[comm] is the amount of time spent on communications among processors during initialization and the global exchange of partial results at the end of the simulation.
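A sketch of this task-parallel strategy using the MPI message-passing library is given below. The document mentions a native message-passing implementation on the SP-2; the code here is a generic illustration rather than that program, and the short-run routine is a placeholder workload. Each rank runs an independent short run with its own random stream, and a single reduction at the end combines the partial sums of Eqs. (11) and (12).

program task_parallel_short_runs
  use mpi
  implicit none
  integer, parameter :: n_total = 1000000      ! total Monte Carlo measurements
  integer :: ierr, rank, nprocs, nshort
  double precision :: local_sum, local_sq, global_sum, global_sq, q, err

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  nshort = n_total/nprocs                      ! length of each short run
  call short_run(rank, nshort, local_sum, local_sq)

  call MPI_Reduce(local_sum, global_sum, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0, MPI_COMM_WORLD, ierr)
  call MPI_Reduce(local_sq,  global_sq,  1, MPI_DOUBLE_PRECISION, MPI_SUM, 0, MPI_COMM_WORLD, ierr)

  if (rank == 0) then
     q = global_sum/dble(nshort*nprocs)                           ! Eq. (11)
     err = sqrt((global_sq/dble(nshort*nprocs) - q**2)/dble(nshort*nprocs))
     print *, 'combined estimate:', q, '+/-', err
  end if
  call MPI_Finalize(ierr)

contains
  subroutine short_run(seed_offset, nsteps, s, s2)
    ! Placeholder short run: accumulates payoffs of one-step lognormal draws.
    integer, intent(in) :: seed_offset, nsteps
    double precision, intent(out) :: s, s2
    double precision :: u1, u2, f
    integer :: i, nseed
    integer, allocatable :: seed(:)
    call random_seed(size=nseed)
    allocate(seed(nseed)); seed = 12345 + seed_offset   ! a distinct stream per task
    call random_seed(put=seed)
    s = 0.0d0; s2 = 0.0d0
    do i = 1, nsteps
       call random_number(u1); call random_number(u2)
       f = max(100.0d0*exp(0.0024d0 + 0.05d0*gauss(u1, u2)) - 100.0d0, 0.0d0)
       s = s + f; s2 = s2 + f*f
    end do
  end subroutine short_run

  double precision function gauss(u1, u2)
    double precision, intent(in) :: u1, u2
    gauss = sqrt(-2.0d0*log(1.0d0 - u1))*cos(8.0d0*atan(1.0d0)*u2)
  end function gauss
end program task_parallel_short_runs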
Parallel speed-up is defined as:

$S(N) = \frac{T_{MC}(1)}{T_{MC}(N)} = N\,\frac{1}{1 + N\,T_{comm}/T_{MC}(1)} \qquad (33)$

and parallel efficiency is defined as speed-up per processor:

$\varepsilon(N) = \frac{S(N)}{N} = \frac{1}{1 + N\,T_{comm}/T_{MC}(1)} \qquad (34)$

The theoretical upper bound for efficiency is 1. It can be achieved only in embarrassingly parallel applications that do not incur any communication or load imbalance overheads. It is clear from the equations that this implementation is efficient as long as T[comm] is much shorter than the sequential time of a short run. This is usually the case for large- and medium-grain parallel machines. In comparison, any other approach to derivative pricing has much higher communication costs. Explicit finite difference methods require local communication at every time step. This is a very serious limitation for coarse space-time grids on massively parallel machines: the granularity of local computation becomes too small for the communication overhead. The situation is even worse for implicit methods, where collective communication has to be executed at every time step. Binomial approximations are also bound by communication costs and may impose significant memory requirements due to exponential growth of the data set with the number of time slices. On the other hand, the only situation where task-parallel Monte Carlo fails is in the extreme limit of more processors than total Monte Carlo samples. However, on massively parallel processors one can always exploit the data-parallel nature of the Monte Carlo method.

A simple data-parallel domain-decomposition strategy is to divide a path into a number of smaller time intervals (segments), which are in turn assigned to different nodes. Since only adjacent time slices are required to update the stock price on a given time slice, a node will own the information it needs for most of the time slices, except those on the boundaries of the time interval assigned to the node. Each node will update sequentially the time slices it owns (following the same algorithm as described in the previous section), until it encounters one of the boundary slices. At that moment, it will communicate with a neighboring node to obtain the value of the neighbor's boundary time slice. After the communication is completed, it has all the information it needs to update the last time slice. Then the whole sequence is repeated for the next Monte Carlo step. Equations (30), (31), and (32) show that the same divide-and-conquer strategy can be employed for the accumulation of the option price and its parameter sensitivities. This sequence of communications and computations is executed by all nodes simultaneously, which brings a speed-up roughly proportional to the number of path segments. It follows from Eq. (27) that all even time slices can be updated simultaneously and then all odd time slices can be updated simultaneously. This implies that the highest degree of parallelism can be achieved by having only 2 time slices per node. In practice, the situation is complicated by the fact that exchange of messages between nodes is a much slower operation (sometimes by a factor of 10^2 or even 10^3) than a typical floating point operation within a node. This communication overhead varies significantly from one architecture to another and limits the granularity of parallelism on a particular machine. There is an optimal number of time slices per node, which balances local computation and communication costs.
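To make this trade-off quantitative, the small fragment below evaluates Eqs. (33) and (34) for an assumed sequential run time and a fixed communication overhead; both timings are invented purely for illustration.

program speedup_estimate
  implicit none
  ! Assumed timings: a 100-second sequential run and 0.2 s of communication.
  real, parameter :: t_seq = 100.0, t_comm = 0.2
  integer :: n
  real :: s, e
  do n = 1, 256, 51
     s = real(n)/(1.0 + real(n)*t_comm/t_seq)   ! Eq. (33)
     e = s/real(n)                              ! Eq. (34)
     print '(a,i4,a,f8.2,a,f6.3)', 'N=', n, '  speed-up=', s, '  efficiency=', e
  end do
end program speedup_estimate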
Communication overhead is very important for simulations with a small number of time slices running on a networked cluster of workstations, which has large communication latencies. Because of the communication overhead, it is always more efficient to exploit first task parallelism, and then domain decomposition, so that the granularity of local computation is maximized with respect to communication. This discussion illustrates that the Monte Carlo method is believed to have the best scalability properties among present known option pricing methods. A preferred embodiment comprises implementing the method on massively parallel computers or scalable parallel processors. This feature, combined with the fact that path-integral Monte Carlo simulation provides the most complete solution to the pricing problem, strongly suggests that the path-integral Monte Carlo method is the method of choice in a parallel computing environment. II. Implementation of the Valuation Method and System A. Implementation of the Algorithm The algorithm discussed above may be implemented by explicitly coding the exchange of messages between nodes or to use a high-level data-parallel language that has semantic support for parallelism and hides the complexities of explicit message passing constructs. The path-integral Monte Carlo code has been implemented in high-performance Fortran (HPF), which is among the most prominent of such languages. HPF is an emerging standard of parallel computation. HPF may be considered a superset of Fortran 90, with a small number of extensions to support effective use of MIMD machines. Data layout is achieved in the following stages: □ 1. Definition of a template using the TEMPLATE directive. Template is not a true array, just a description of an abstract index space; so there is no memory allocation for the template. □ 2. Definition of an abstract processor array using the PROCESSORS directive. □ 3. Alignment of data arrays with the template using the ALIGN directive. □ 4. Distribution of the template onto the processor array using the DISTRIBUTE directive. All array elements that are aligned with the elements of the template reside on the same processor. Among the possible partitioning patterns are: BLOCK, CYCLIC, and *. BLOCK distributes an array dimension in contiguous blocks, CYCLIC distributes in a round-robin fashion, while * specifies that the array dimension should not be distributed. □ 5. Mapping of the abstract processor array onto the hardware. This final stage is implementation dependent and is done by the run-time system transparently to the programmer. The fundamental advantage of this process is that code design is centered around the topology of the physical problem, while the mapping from the problem topology to the hardware topology is done in an optimal fashion by the run-time system. As an illustration, consider the following lines, which specify how to distribute an array of stock price paths among the processors. The option price is computed for a number of expiration times, volatilities, initial stock prices, and strike prices; so the paths are described by 3D arrays indexed by time step, volatility, and initial stock price. 
Path arrays are not indexed by strike price, since the computation for different strike prices is done sequentially:

INTEGER, PARAMETER:: time_steps=100 !Number of time slices
INTEGER, PARAMETER:: n_prices=20 !Number of initial stock prices
INTEGER, PARAMETER:: n_vols=20 !Number of volatilities
INTEGER, PARAMETER:: mc_steps=10000 !Number of Monte Carlo steps
REAL, DIMENSION(time_steps,n_prices,n_vols):: price !stock price path
INTEGER, PARAMETER:: nt=4, np=4, nv=4 !number of processors assigned to
                                      !dimensions of price array
!HPF$ PROCESSORS proc(nt,np,nv) !processor array
!HPF$ TEMPLATE, DIMENSION(time_steps,n_prices,n_vols):: temp !template
!HPF$ ALIGN price WITH temp !align price array with template
!HPF$ DISTRIBUTE temp ONTO proc !distribute template

The second dimension of the price array is used to index the different initial stock prices for the simulation. By initializing all of them to the same stock price and by changing a few directives:

INTEGER, PARAMETER:: time_steps=100 !Number of time slices
INTEGER, PARAMETER:: n_prices=20 !Number of initial stock prices
INTEGER, PARAMETER:: n_vols=20 !Number of volatilities
INTEGER, PARAMETER:: mc_steps=10000/(n_prices) !Number of Monte Carlo steps
REAL, DIMENSION(time_steps,n_prices,n_vols):: price !stock price path
REAL, DIMENSION(time_steps,n_prices,n_vols):: partial_avg !partial results
REAL, DIMENSION(time_steps,n_vols):: option_price !option price array
INTEGER, PARAMETER:: np=8, nv=8 !number of processors assigned to
                                !dimensions of the price array
!HPF$ PROCESSORS proc(np,nv) !processor array
!HPF$ TEMPLATE, DIMENSION(n_prices,n_vols):: temp !template
!HPF$ ALIGN price(*,:,:) WITH temp(:,:) !collapse onto template
!HPF$ ALIGN partial_avg(*,:,:) WITH temp(:,:) !collapse onto template
!HPF$ DISTRIBUTE temp ONTO proc !distribute template

one has effectively converted the computation into a task-parallel one. Partial averages are combined into the final result by a call to the global reduction routine SUM, which is a Fortran 90 intrinsic:

□ option_price(:,:)=SUM(partial_avg(:,:,:), DIM=2)/n_prices

Implementation in a high-level language like HPF has the added benefits of code modularity, clarity, and simplicity, as well as ease of debugging, maintenance, and portability. The slightly modified HPF code was tested and benchmarked on the following platforms: a 32-node Connection Machine CM5, a 16,000-processor MasPar, an 8-node DEC Alpha workstation farm with GIGAswitch, a 12-node IBM SP-2 with the High-Performance Switch, and a network of Sun workstations. We have also implemented the task-parallel algorithm using the native message-passing library on an IBM SP-2 parallel computer.

B. Functional Specification

The system of the present invention includes an ability to price:

□ Options on common stock;
□ Equity index options (e.g., S&P 500 and S&P 100);
□ Options on financial and commodity futures;
□ Options on currencies;
□ Certain exotic options with multiple underlying instruments.

This selection covers most classes of options that are not dependent upon the term structure of interest rates, although this is not intended as a limitation. Further, the system includes means for computing the following derivative price sensitivities:

□ Delta (sensitivity to changes in the underlying price);
□ Vega (sensitivity to changes in the underlying volatility);
□ Rho (sensitivity to changes in the money market rate);
□ Theta (sensitivity to the passage of time);
□ Phi (sensitivity to the dividend yield for stock options).
This selection, while not intended to be limiting, demonstrates that the system provides complete information about an option without resorting to numerical differencing, which entails errors and requires at least one extra simulation for each sensitivity if conventional algorithms are used. The system implements the following stochastic processes for the underlying asset:

□ Gaussian;
□ Cauchy;
□ Poisson jump;
□ Gaussian and Poisson jump combined;
□ Cauchy and Poisson jump combined;
□ Histogram of historical returns.

These choices are sufficient to show that the algorithm can accept virtually any model for the underlying dynamics. The system performs detailed statistical analysis of historical data:

□ Histograms;
□ Detrending (linear and quadratic trends);
□ Correlations;
□ Spectral analysis;
□ Calculation of moments;
□ Moment substitution estimation;
□ Weighted historical volatility estimation;
□ Robustness statistics;
□ Maximum likelihood estimation.

The system provides a range of services, from "quick and dirty" estimation methods, such as moment substitution, to spectral analysis and maximum likelihood estimation for sophisticated users. Statistical analysis tools are also provided for multiple correlated underlying processes, which is not standard. The system includes means for computing implied volatility and the sensitivity with respect to volatility, as discussed above, using the Black-Scholes model and current market prices. The user can mix historical and implied volatility to obtain estimates of future volatility. The user can also experiment with volatility in a more sophisticated fashion than with previously known systems, since, in addition to the historical, implied, or mixed volatility estimate, the system implements the following possibilities for volatility dynamics, assuming Gaussian and Cauchy processes:

□ Geometric random walk stochastic volatility;
□ Volatility defined by a generalized autoregressive conditional heteroskedasticity (GARCH) process;
□ Deterministic time-dependent volatility scenarios;
□ Deterministic volatility defined as a polynomial function of the underlying price.

These choices for volatility modeling are state-of-the-art and are also believed to be impossible to implement by any other known method in a scenario-type analysis. The system permits the user to specify standard option strategies such as spreads, butterflies, and straddles, using a simple menu-driven graphical user interface. For each strategy, the user obtains expected payoffs and their uncertainties. The system implements default choices whenever feasible, so that a user with limited knowledge of options can obtain sensible and useful answers with minimal effort. The system provides two distinct functionalities:

□ Historical calibration module;
□ Market module.

The historical calibration module enables experimentation with the data in the long-term database. Its purpose is to let the user develop estimates of parameters and to test different option strategies against historical market data. Therefore, the user is able to run and manage "What If?" sessions to develop intuition about the market before attempting to participate. The market module provides functions to examine and price currently traded contracts, using some of the information supplied by the historical calibration module. The results of the analysis can be displayed in graphical as well as tabular fashion and are stored in the database, if requested by the user, for future reference.
The front end (user interface) is designed to support both standalone mode and access through a Web browser. An exemplary language in which it is written, although this is not intended as a limitation, is Java, which provides portability, robustness, and security features. The system implements basic security features such as server and user authentication without data encryption. The system comprises in an alternate embodiment a more comprehensive security model using current standards. If the server is operating behind a firewall, then the basic security features suffice. The system also implements a logging and accounting scheme. Utilization of the system has shown that a flexible option pricing environment is feasible, which is enabled by algorithmic features, parallel computer resources, and portable network-friendly implementation. For the user, it is believed that the speed, “What If?” scenario generation, and testing capability are of particular importance and novelty. The present invention provides the most flexible numerical method known at present while at the same time providing real-time support for pricing, hedging, and strategy development. These capabilities stem from the parallel implementation and the ability of the algorithm to produce the complete information set about an option in a single simulation. C. Hardware and Software Resources A block diagram of the system 10 of the present invention (FIG. 1) indicates that the processing system 12 contains software 120 for performing a plurality of functions, including verifying and formatting input, initializing calculations and the computational configuration, manipulating the input data, calculating option prices and partial derivatives, performing statistical analyses, and verifying and formatting output. The user 20 inputs any of a plurality of requests and/or parameters, including an information request, input parameter set(s), partial derivatives or other output requirements, statistical quantities required from the analysis, and optional statistical procedures to be used. The user 20 may interface the processor 12 by any means known in the art; shown are through a personal computer 21 having input means such as keyboard 210 and mouse 211 and output devices such as a printer 212 and monitor 213. The processor 12 is interfaced via, for example, a modem 214 to a telecommunications network 22 (such as, but not limited to, the World Wide Web). Alternatively, the user 20 may interface by means of a workstation 25 through an internal network 26. Output from the processor 12 is fed back to the user 20 from an output module 30, such as through an interface to a printer or a display device. Current asset and other instrument data are obtained from a source, which may include, but is not limited to, a financial data vendor 40. These data may be requested by the processor 12 as needed. Historical data may be stored on the server 50, from which the processor 12 may request the desired information to perform calculations. Such data may be updated at specified intervals by adding current data to the server 50 by means known in the art. An exemplary core hardware resource 12 comprises the Maui High-Performance Computing Center (MHPCC) IBM SP-2. Parallel software is implemented in C and MPI to ensure portability across virtually any parallel platform. An exemplary database server 50 comprises Oracle. Calibration may be performed on a node of the SP-2 in a sequential fashion. 
Hardware and software are included to supply real-time or delayed data feeds from financial data vendors 40 such as Knight-Ridder, Telerate, Bloomberg, Dow, etc. Financial modeling software is also an element of the invention. In another embodiment, a version of the software 120′ may be made resident on a user's computer storage 215 for direct access by the user 20. Typically the user 20 will still need to access a network to access the historical asset database 40 in order to have up-to-date information. In either case, the output provided to the user 20 can be used to assist in making an investment decision. If the output is not satisfactory, the user 20 can continue performing different simulations as desired. III. Exemplary Results A valuation problem of a European call on a stock with constant volatility and no dividends has been undertaken. In this problem the Monte Carlo results can be easily compared with the analytic Black-Scholes solution. In Table 1 are shown the results for some realistic parameter choices and their accuracy as the number of Monte Carlo steps is varied. Exact results are always within estimated confidence limits of the Monte Carlo results. Statistical errors after 100,000 Monte Carlo steps are less than a half-percent for all maturities. The error is less than a tenth of a percent for 1.6×10^6 steps. These statistical uncertainties reflect improvements achieved by explicit use of all the symmetries of path probabilities, which enable one to accumulate more independent results per path. For example, if a stock price path is reflected with respect to the deterministic path, its probability is the same; so we can accumulate results for the reflected path as well, with negligible computation cost. This can be regarded as a rudimentary variance reduction technique. TABLE 1 Comparison of Monte Carlo estimates and exact results for European call values. This table shows the level of accuracy which can be achieved as the number of Monte Carlo steps ranges from 1 × 10^5 to 16 × 10^5. Risk-free interest rate per period is set to r[f ]= 0.004853. N[t ]is the number of periods to maturity, σ^2 is stock price variance per period, C(N) is European call value Monte Carlo estimate after N Monte Carlo steps, and ε(N) is the error estimate after N Monte Carlo steps. Exact results obtained using Black-Scholes formula are listed in the last column (C). Initial stock price is S = 100 and the strike price is X = 100 for all data sets in the table. 
N[t] σ^2 C(1 × 10^5) ε(1 × 10^5) C(4 × 10^5) ε(4 × 10^5) C(16 × 10^5) ε(16 × 10^5) C 1 0.001875 1.9792 0.0064 1.9769 0.0032 1.9750 0.0016 1.9761 2 0.001875 2.9508 0.0106 2.9456 0.0053 2.9430 0.0026 2.9443 3 0.001875 3.7522 0.0139 3.7492 0.0069 3.7469 0.0034 3.7482 4 0.001875 4.4710 0.0168 4.4678 0.0083 4.4673 0.0041 4.4675 5 0.001875 5.1446 0.0193 5.1324 0.0096 5.1326 0.0048 5.1327 6 0.001875 5.7822 0.0216 5.7610 0.0108 5.7608 0.0054 5.7597 7 0.001875 6.3823 0.0238 6.3599 0.0118 6.3595 0.0059 6.3576 8 0.001875 6.9456 0.0258 6.9309 0.0129 6.9337 0.0064 6.9324 9 0.001875 7.4969 0.0276 7.4881 0.0138 7.4875 0.0069 7.4883 10 0.001875 8.0335 0.0294 8.0259 0.0147 8.0235 0.0073 8.0285 1 0.002500 2.2467 0.0075 2.2436 0.0037 2.2415 0.0018 2.2411 2 0.002500 3.3269 0.0123 3.3198 0.0061 3.3168 0.0030 3.3161 3 0.002500 4.2089 0.0161 4.2024 0.0080 4.2004 0.0040 4.1999 4 0.002500 4.9935 0.0195 4.9873 0.0097 4.9864 0.0048 4.9848 5 0.002500 5.7262 0.0224 5.7086 0.0112 5.7085 0.0056 5.7065 6 0.002500 6.4168 0.0252 6.3873 0.0125 6.3855 0.0062 6.3831 7 0.002500 7.0602 0.0277 7.0302 0.0138 7.0290 0.0069 7.0255 8 0.002500 7.6629 0.0301 7.6415 0.0150 7.6434 0.0075 7.6406 9 0.002500 8.2483 0.0322 8.2351 0.0161 8.2348 0.0080 8.2335 10 0.002500 8.8169 0.0343 8.8066 0.0172 8.8041 0.0086 8.8075 A shortcoming of the Metropolis method is that configurations reached along the Markov chain are correlated, which decreases the effective number of independent measurements. In this simulation, correlations along the Markov chain are reduced by choosing with equal probabilities, at every Monte Carlo step, either the current path or any of its reflection-symmetry-related paths. In Table 2 are presented results for parameter sensitivities using the same parameter choices as Table 1. They are obtained concurrently with the option price itself, as described above. They show an even higher level of accuracy than the corresponding option price for a given number of Monte Carlo steps. If these values had been obtained with numerical differentiation, it would require at least three simulations besides the original one to compute the three partial derivatives. Additional simulations may also be required, depending upon the statistical accuracy of the Monte Carlo results. If the statistical errors are large, one would need simulations for a few nearby parameter values, combined with a least-squares fit to produce estimates of derivatives. This may lead to unacceptably large errors for higher-order derivatives (like γ, for example), unless statistical errors for option prices are very small. In the path integral approach there are no additional sources of errors. TABLE 2 Option price sensitivities to input parameters. This table lists some of the price sensitivities which can be computed along the option price in a path-integral simulation. Initial stock value is set to S = 100 and the strike price is X = 100. Number of Monte Carlo steps is 1 × 10^5. Each parameter sensitivity estimate (suffix MC) is followed by its error estimate ε and the exact value obtained by differentiation of the Black-Scholes formula. δ is the stock price sensitivity (δ = ∂C/∂S), κ is the volatility sensitivity (κ = ∂C/∂σ), and ρ is the interest rate sensitivity (ρ = ∂c/∂r[f]). 
N[t] r[f] σ^2 δ[MC] ε δ κ[MC] ε κ ρ[MC] ε ρ 1 0.004853 0.001875 0.5530 0.00039 0.5532 0.1143 0.0005 0.1141 0.04443 0.00005 0.04445 2 0.004853 0.001875 0.5745 0.00045 0.5750 0.1610 0.0006 0.1600 0.09083 0.00011 0.09092 3 0.004853 0.001875 0.5914 0.00049 0.5916 0.1949 0.0008 0.1942 0.13848 0.00019 0.13852 4 0.004853 0.001875 0.6060 0.00051 0.6054 0.2219 0.0010 0.2222 0.18710 0.00027 0.18692 5 0.004853 0.001875 0.6167 0.00053 0.6175 0.2469 0.0012 0.2463 0.23555 0.00035 0.23592 6 0.004853 0.001875 0.6276 0.00054 0.6283 0.2687 0.0013 0.2673 0.28485 0.00043 0.28539 7 0.004853 0.001875 0.6386 0.00055 0.6382 0.2867 0.0014 0.2862 0.33526 0.00052 0.33523 8 0.004853 0.001875 0.6474 0.00056 0.6473 0.3034 0.0016 0.3032 0.38522 0.00061 0.38536 9 0.004853 0.001875 0.6564 0.00057 0.6558 0.3179 0.0017 0.3188 0.43615 0.00071 0.43573 10 0.004853 0.001875 0.6635 0.00057 0.6638 0.3323 0.0018 0.3330 0.48614 0.00080 0.48627 11 0.004853 0.001875 0.6707 0.00057 0.6713 0.3451 0.0019 0.3461 0.53639 0.00089 0.53694 12 0.004853 0.001875 0.6772 0.00058 0.6784 0.3577 0.0021 0.3583 0.58630 0.00099 0.58771 1 0.004853 0.002500 0.5485 0.00035 0.5486 0.1146 0.0004 0.1143 0.043845 0.000045 0.043847 2 0.004853 0.002500 0.5682 0.00040 0.5685 0.1617 0.0006 0.1605 0.089164 0.000106 0.089227 3 0.004853 0.002500 0.5838 0.00044 0.5837 0.1960 0.0009 0.1950 0.135424 0.000174 0.135430 4 0.004853 0.002500 0.5966 0.00046 0.5964 0.2236 0.0010 0.2235 0.182223 0.000248 0.182195 5 0.004853 0.002500 0.6074 0.00048 0.6075 0.2490 0.0012 0.2481 0.229169 0.000324 0.229369 6 0.004853 0.002500 0.6167 0.00049 0.6175 0.2715 0.0013 0.2697 0.276238 0.000402 0.276847 7 0.004853 0.002500 0.6270 0.00050 0.6266 0.2901 0.0015 0.2892 0.324568 0.000488 0.324553 8 0.004853 0.002500 0.6354 0.00051 0.6350 0.3072 0.0016 0.3068 0.372574 0.000572 0.372425 9 0.004853 0.002500 0.6433 0.00052 0.6428 0.3226 0.0017 0.3230 0.420538 0.000658 0.420414 10 0.004853 0.002500 0.6506 0.00052 0.6502 0.3373 0.0019 0.3380 0.468612 0.000745 0.468478 11 0.004853 0.002500 0.6569 0.00053 0.6572 0.3508 0.0020 0.3519 0.516353 0.000833 0.516584 12 0.004853 0.002500 0.6629 0.00053 0.6637 0.3641 0.0021 0.3648 0.563794 0.000929 0.564700 The possibility of computing Monte Carlo results for different parameters in a single simulation was discussed above. This is illustrated in Table 3, where option values in a window of about 10% variation of initial stock price are computed in a single run. Within a few percent difference from the stock price used in the simulation, results are roughly of the same statistical quality as for the original price. This is a very inexpensive and efficient way to explore option price variations in a limited parameter range, particularly if there is uncertainty about input parameter estimates. It is clear from Table 3 that the further one goes from the original simulation parameters, the worse the statistics become (larger relative errors) due to inefficient importance sampling. TABLE 3 Computation of option prices for multiple parameters in a single simulation. This table shows the level of which can be obtained if multiple option prices are computed in a single simulation. Number of Monte Carlo steps 1 × 10^5, initial stock price is S[o ]= 100, strike price is X = 100, volatility per period is σ^2 = 0.0025, and interest race is r[f ]= 0.004853 per period. Each option price estimate C(S[i]) for initial stock price S[i ]is followed by its error estimate ε and the exact value from Black-Scholes formula C. 
N[t ]denotes the number of time periods to N[t] C(95) ε C C(99) ε C C(101) ε C C(105) ε C 1 0.4632 0.0006 0.4629 1.7364 0.0047 1.7325 2.8358 0.0117 2.8287 5.8448 0.0572 5.8540 2 1.1827 0.0051 1.1790 2.7907 0.0080 2.7757 3.9362 0.0154 3.9121 6.8103 0.0639 6.7915 3 1.8600 0.0098 1.8613 3.6494 0.0107 3.6390 4.8268 0.0183 4.8059 7.6560 0.0714 7.6377 4 2.5060 0.0145 2.5049 4.4087 0.0130 4.4080 5.6044 0.0207 5.6004 8.4129 0.0774 8.4160 5 3.1353 0.0192 3.1165 5.1283 0.0151 5.1164 6.3470 0.0230 6.3310 9.1655 0.0826 9.1444 6 3.7185 0.0232 3.7024 5.8019 0.0170 5.7813 7.0462 0.0251 7.0160 9.8878 0.0877 9.8342 7 4.2810 0.0269 4.2671 6.4267 0.0188 6.4133 7.6846 0.0271 7.6662 10.5269 0.0926 10.4933 8 4.8267 0.0312 4.8140 7.0248 0.0205 7.0191 8.2981 0.0288 8.2888 11.1462 0.0968 11.1270 9 5.3438 0.0345 5.3457 7.5966 0.0220 7.6032 8.8850 0.0305 8.8887 11.7508 0.1010 11.7393 10 5.8616 0.0380 5.8641 8.1616 0.0235 8.1691 9.4637 0.0321 9.4693 12.3372 0.1052 12.3334 11 6.3681 0.0415 6.3710 8.7055 0.0249 8.7194 0.0198 0.0336 10.0335 12.9117 0.1086 12.9115 12 6.8597 0.0450 6.8676 9.2435 0.0264 9.2561 0.5745 0.0352 10.5834 13.5007 0.1139 13.4755 The same trend is apparent for longer periods to maturity, because the differences between the simulation probability distribution and the true ones are amplified for longer time periods. For shorter periods to maturity, there is an apparent asymmetry between errors, which are much smaller for the initial prices below the simulation price S[i]<S[0 ]than for prices above the simulation price, S [i]>S[0]. The reason is that the stock price distribution is skewed towards higher stock prices, so that the overlap between simulation price distribution and actual price distributions is bigger for S[i]<S[0 ]than for S[i]>S[0]. This effect becomes less and less important for longer time periods to maturity. We applied the path-integral Monte Carlo approach to the simple Black-Scholes model, where exact results are easy to obtain, to show the accuracy and efficiency of the method. The results indicate that this approach shows significant potential, which must be tested on realistic problems if it is to become a useful simulation tool. It is anticipated that an application of the method to various nontrivial models of underlying asset dynamics will prove valuable as well. To this end we show results for a jump diffusion model, where jumps are superimposed upon the continuous Wiener process. We consider the following differential/difference equation corresponding to this process: d log S=dy=μdt+σdξ+dZ where dZ is the stochastic variable describing the jump process. It is assumed that the number of jumps is Poisson distributed, while jump size is uniformly distributed with average <dZ>=0. The finite average of the jump size will amount to a trivial shift in the drift coefficient μ. A series solution has been obtained for the option price only under the assumption that the jump size distribution is normal. This restriction can be lifted in a Monte Carlo simulation; so we have chosen a uniform distribution for experimentation purposes, since it is computationally inexpensive and there is no analytic solution; this choice, however, is not intended as a limitation. The results for a European call on an asset following this process are shown in Table 4. Prices and sensitivities are obtained concurrently, and the accuracy is comparable to that achieved on the Black-Scholes problem. Relative errors for 1×10^5 steps are below 1% for option price and below 0.2% for some δ and ρ sensitivities. 
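Generating the jump diffusion paths that underlie Table 4 requires only a small extension of the slice-by-slice construction: after the Gaussian increment, a Poisson-distributed number of uniform jumps is added. The sketch below uses the jump rate and jump-size bound quoted with Table 4; the sampling routines themselves, and the ten-slice demonstration path, are illustrative assumptions rather than the disclosed code.

program jump_diffusion_increment
  implicit none
  ! One path of d log S = mu dt + sigma dxi + dZ: a Poisson number of jumps
  ! per slice, each uniform on (-Delta, +Delta) with zero mean.
  real, parameter :: rf = 0.004853, var = 0.001875, kp = 0.1, djump = 0.02
  real :: y, u1, u2
  integer :: n, nj, k

  y = log(100.0)
  do n = 1, 10                                  ! a ten-slice sample path
     call random_number(u1); call random_number(u2)
     y = y + (rf - 0.5*var) + sqrt(-2.0*var*log(1.0 - u1))*cos(8.0*atan(1.0)*u2)
     nj = poisson(kp)                           ! number of jumps in this slice
     do k = 1, nj
        call random_number(u1)
        y = y + (2.0*u1 - 1.0)*djump            ! uniform jump size
     end do
  end do
  print *, 'terminal stock price of the sample path:', exp(y)

contains
  integer function poisson(rate)
    ! Knuth's multiplicative method, adequate for small rates
    real, intent(in) :: rate
    real :: limit, prod, u
    limit = exp(-rate)
    prod = 1.0
    poisson = -1
    do
       poisson = poisson + 1
       call random_number(u)
       prod = prod*u
       if (prod <= limit) exit
    end do
  end function poisson
end program jump_diffusion_increment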
As for the Black-Scholes model, price sensitivities are more accurately determined than the option price itself. The relative quality of estimators depends upon the form of the corresponding function (see Eq. 15), which is integrated with respect to the path probability measure. If one computed sensitivities using numerical differentiation, errors would be at least as large as the price error. TABLE 4 Call option price and sensitivities for a jump diffusion process. This table lists results for the option price and its input parameter sensitivities when a jump process is superimposed on the continuous process of the Black-Scholes model. Initial stock value is set to S = 100 and the strike price is X = 100. Number of Monte Carlo steps is 1 × 10^5. Each Monte Carlo result is immediately followed by its error estimate ε. Jump rate per period is set to k[P ]= 0.1. Riskless interest rate per period is r[f ]= 0.004853 and variance per period is σ^2 = 0.001875. Jump sizes are uniformly distributed in the interval (−Δ, +Δ). δ is the stock price sensitivity (δ = ∂C/∂S), κ is the volatility sensitivity (κ = ∂C/∂σ), and ρ is the interest rate sensitivity (ρ = ∂C/∂r[f]). N[t] Δ C ε δ ε κ ε ρ ε 1 0.02 2.12671 0.01505 0.55233 0.00121 0.12487 0.00108 0.04426 0.00010 2 0.02 3.16259 0.02458 0.57517 0.00141 0.17626 0.00159 0.09059 0.00024 3 0.02 3.97857 0.03243 0.59134 0.00151 0.21151 0.00202 0.13789 0.00040 4 0.02 4.77069 0.03896 0.60583 0.00159 0.24485 0.00244 0.18604 0.00057 5 0.02 5.47055 0.04516 0.61457 0.00161 0.27364 0.00281 0.23328 0.00072 6 0.02 6.19997 0.05057 0.62528 0.00165 0.30257 0.00316 0.28164 0.00090 7 0.02 6.85696 0.05560 0.63553 0.00168 0.32708 0.00349 0.33073 0.00108 8 0.02 7.50037 0.06024 0.64449 0.00170 0.34980 0.00382 0.37966 0.00126 9 0.02 8.10141 0.06485 0.65231 0.00172 0.36877 0.00412 0.42847 0.00145 10 0.02 8.68738 0.06891 0.66075 0.00173 0.39014 0.00443 0.47823 0.00164 1 0.05 3.05180 0.02526 0.55565 0.00110 0.19220 0.00192 0.04376 0.00009 2 0.05 4.60908 0.04117 0.57536 0.00120 0.28671 0.00285 0.08821 0.00021 3 0.05 5.81127 0.05421 0.59300 0.00131 0.35139 0.00363 0.13372 0.00035 4 0.05 6.94802 0.06491 0.60581 0.00136 0.41691 0.00432 0.17878 0.00049 5 0.05 7.97130 0.07494 0.61854 0.00141 0.47293 0.00502 0.22451 0.00064 6 0.05 9.01019 0.08419 0.62963 0.00144 0.53323 0.00569 0.26977 0.00080 7 0.05 9.97512 0.09320 0.63949 0.00146 0.58649 0.00627 0.31485 0.00096 8 0.05 10.87057 0.10119 0.64944 0.00149 0.63476 0.00693 0.36049 0.00113 9 0.05 11.75562 0.10962 0.65942 0.00153 0.67822 0.00754 0.40639 0.00132 10 0.05 12.61027 0.11694 0.66944 0.00156 0.73089 0.00826 0.45278 0.00150 An exemplary set of output graphs is provided in FIGS. 2-5. In FIG. 2 is presented an example of the system's output displayed on a screen illustrating information on the underlying asset (here a common stock). Below the graph are displayed user-controllable variables associated with calculating the statistical quantities to be used by a portion of the pricing software subsystem. In FIG. 3 is presented an example of the system's output displayed on a screen illustrating a histogram of price changes at particular time intervals together with a fitted curve depicting a best-fit Gaussian distribution to the given histogram. Below the graph is displayed information to be used in another portion of the pricing software subsystem. In FIG. 4 the system displays the same information as in FIG. 3, while preparing to use an actual distribution rather than a Gaussian or other approximation to the histogram. In FIG. 
5 is exemplary output depicting the calculated prices for an option on the common stock of FIG. 2, with the stated stock and strike prices and for an expiration date after the date of the calculation. The error bars represent uncertainties calculated by the Monte Carlo software based on the number of paths computed by the software. It may be appreciated by one skilled in the art that additional embodiments may be contemplated, including systems and methods for simulating other financial parameters. In the foregoing description, certain terms have been used for brevity, clarity, and understanding, but no unnecessary limitations are to be implied therefrom beyond the requirements of the prior art, because such words are used for description purposes herein and are intended to be broadly construed. Moreover, the embodiments of the system and method illustrated and described herein are by way of example, and the scope of the invention is not limited to the exact details provided. Having now described the invention, the operation and use of a preferred embodiment thereof, and the advantageous new and useful results obtained thereby, are set forth in the appended claims.
{"url":"http://www.google.com/patents/US7349878?dq=6859936","timestamp":"2014-04-17T15:51:21Z","content_type":null,"content_length":"275372","record_id":"<urn:uuid:42d8b7c8-da66-46f8-b6b2-3887b89311c7>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
Miami Gardens, FL Prealgebra Tutor Find a Miami Gardens, FL Prealgebra Tutor ...I have over three years experience as a GED teacher and I know what it takes to pass the test. I can teach math k-Algebra I I believe that I am qualified to teach study skills since that is one of the most important things that a teacher can train a student to do, study. I have over six years experience teaching and tutoring children, youth and adults. 14 Subjects: including prealgebra, English, reading, writing Hey guys!!!! My name is Carmen and I recently graduated from St. Joseph's University in Philadelphia. My major was chemical biology and a minor Spanish with a final grade point average of 3.2. 9 Subjects: including prealgebra, Spanish, algebra 2, algebra 1 ...I scored a 730 on the Math section of the SAT. I am also available to tutor for the SAT Math and any other related math courses. In high school and college, I have always received positive feedback from my tutees. 8 Subjects: including prealgebra, chemistry, writing, biology ...I ended up with a 3.44 GPA at the University. I know how to stay on top of assignments, how to take effective notes, how to study for exams and how to take them. I have always understood and related to the frustrations, fears and anxieties that students face. 14 Subjects: including prealgebra, reading, physics, geometry ...I have a solid background in computers and office productivity software based on my engineering background. I currently teach Mathematics at Broward College with 9 years teaching experience. Also, I work as a Lab Assistant with Broward College providing support and tutoring college students. 22 Subjects: including prealgebra, physics, calculus, trigonometry Related Miami Gardens, FL Tutors Miami Gardens, FL Accounting Tutors Miami Gardens, FL ACT Tutors Miami Gardens, FL Algebra Tutors Miami Gardens, FL Algebra 2 Tutors Miami Gardens, FL Calculus Tutors Miami Gardens, FL Geometry Tutors Miami Gardens, FL Math Tutors Miami Gardens, FL Prealgebra Tutors Miami Gardens, FL Precalculus Tutors Miami Gardens, FL SAT Tutors Miami Gardens, FL SAT Math Tutors Miami Gardens, FL Science Tutors Miami Gardens, FL Statistics Tutors Miami Gardens, FL Trigonometry Tutors Nearby Cities With prealgebra Tutor Aventura, FL prealgebra Tutors Doral, FL prealgebra Tutors Hallandale prealgebra Tutors Hialeah prealgebra Tutors Hialeah Gardens, FL prealgebra Tutors Hollywood, FL prealgebra Tutors Miami Lakes, FL prealgebra Tutors Miami Shores, FL prealgebra Tutors Miramar, FL prealgebra Tutors N Miami Beach, FL prealgebra Tutors North Miami Beach prealgebra Tutors North Miami, FL prealgebra Tutors Opa Locka prealgebra Tutors Pembroke Park, FL prealgebra Tutors Pembroke Pines prealgebra Tutors
{"url":"http://www.purplemath.com/miami_gardens_fl_prealgebra_tutors.php","timestamp":"2014-04-18T15:55:44Z","content_type":null,"content_length":"24402","record_id":"<urn:uuid:beb664e2-9ab5-4f0a-babd-c9ae0e3828d1>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help
Posted by Anonymous- boy on Friday, February 17, 2012 at 5:52pm.

In triangle ABC, the size of angle B is 5 times the size of angle A, and the size of angle C is 9 degrees less than 4 times the size of angle A. Find the size of angle A.

• algebra - MathMate, Friday, February 17, 2012 at 6:09pm
You can set up the equations as follows:
B = 5A
C = 4A - 9
A + B + C = 180 (sum of angles of a triangle)
and solve for A, B and C. However, you can easily solve it by substitution into the third equation:
A + 5A + 4A - 9 = 180
With A known, you can then find the remaining angles by substitution into the first two equations.

• algebra - MathGuru, Friday, February 17, 2012 at 6:12pm
The sum of the angles of a triangle is 180 degrees.
Let x = angle A, 5x = angle B, 4x - 9 = angle C.
x + 5x + 4x - 9 = 180
Solve for x.

Related Questions
algebra - In triangle ABC, the size of angle B is 5 times the size of angle A, ...
geometry - In a triangle ABC, angle B is 3 times angle A and angle C is 8 ...
algebra - In a triangle ABC , angle B is 4 times angle A and angle C is 17 ...
Algebra - In a triangle ABC, angle B is three times angle A and angle C is 5 ...
algebra 1 - In a triangle ABC, angle b is 4 times angle "A" and angle "C" IS 17 ...
help math - In triangle ABC, points D,E,F are on sides BC,CA,AB respectively ...
Trig - a. What is the length of the hypotenuse of triangle ABC? b. What is the ...
math - in triangle abc angle b is 5x angle a and c is 16 degree less than 4 ...
maths - ABC is an acute triangle with \angle BCA = 35 ^\circ. Denote the ...
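Carrying the suggested substitution through (a worked check, not part of the original thread):
$$A + 5A + (4A - 9) = 180 \;\Rightarrow\; 10A = 189 \;\Rightarrow\; A = 18.9^\circ,$$
so $B = 5A = 94.5^\circ$ and $C = 4A - 9 = 66.6^\circ$, and the three angles do sum to $180^\circ$.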
{"url":"http://www.jiskha.com/display.cgi?id=1329519164","timestamp":"2014-04-23T18:11:45Z","content_type":null,"content_length":"9012","record_id":"<urn:uuid:43205d8e-2714-4019-a29a-ec5f502ff522>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Cos addition identity but with a multiplier

April 12th 2009, 02:57 AM #1 (Apr 2009)
Cos addition identity but with a multiplier
I hope that someone can help. I have an equation that I want to simplify using the simple trig addition identity, but I don't think that I can because of the A multiplying term. Does anyone know of another way to do it, or whether it definitely can't be done?

April 12th 2009, 04:59 AM #2 MHF Contributor (Mar 2007)
Without knowing the exact expression in question, it's hard to say for sure. But you could, I suppose, do something like the following:
cos(s)cos(t) - A sin(s)sin(t) = [cos(s)cos(t) - sin(s)sin(t)] - (A - 1)sin(s)sin(t)
I don't know if that will help at all, though....

April 12th 2009, 01:59 PM #3 (Apr 2009)
Well, attached is the whole equation.
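One way to handle the multiplier directly (my own addition, built on the generic expression in reply #2) is the product-to-sum split
$$\cos s\,\cos t - A\,\sin s\,\sin t \;=\; \frac{1+A}{2}\,\cos(s+t) \;+\; \frac{1-A}{2}\,\cos(s-t),$$
which follows by adding and subtracting the two addition formulas $\cos(s\pm t)=\cos s\cos t\mp\sin s\sin t$; for $A=1$ it collapses to the plain identity for $\cos(s+t)$.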
{"url":"http://mathhelpforum.com/trigonometry/83318-cos-addition-identity-but-multiplyier.html","timestamp":"2014-04-20T01:11:32Z","content_type":null,"content_length":"34956","record_id":"<urn:uuid:034ab39b-d391-4880-864c-b55b7091930d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
Solve |x + 6| = 12.
Answer choices: 6, -18, {6, -18}, No Solution.
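A worked check (not part of the original page):
$$|x+6|=12 \;\Longleftrightarrow\; x+6=12 \ \text{or}\ x+6=-12 \;\Longleftrightarrow\; x=6 \ \text{or}\ x=-18,$$
so the correct choice is $\{6,\,-18\}$.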
{"url":"http://openstudy.com/updates/50c7e3e5e4b0a14e43687dc2","timestamp":"2014-04-19T07:07:13Z","content_type":null,"content_length":"205565","record_id":"<urn:uuid:19a4af20-3e3d-4b5c-b630-3c282b87727f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra II
Find the number a such that x - 1 is a factor of x^3 + ax^2 - x + 8.

Also, you could use Vieta's theorem. When you have $P(x)=x^3+px^2+qx+r$, you know that:
$x_1+x_2+x_3=-p$
$x_1x_2+x_1x_3+x_2x_3=q$
$x_1x_2x_3=-r$
... where $x_1,x_2,x_3$ are the roots of $P(x)$. So when you have $P(x)=x^3+ax^2-x+8$ and you know that 1 is a root ($x_1=1$), you can write three equations:
$1+x_2+x_3=-a$
$x_2+x_3+x_2x_3=-1$
$x_2x_3=-8$
... so you have three equations with three unknowns, and you can solve for $a$.

Well, if you really want to do it the hard way! Personally I would prefer to evaluate the polynomial at x = 1.
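For completeness, the direct evaluation suggested in the last reply (a one-line check, not in the original thread): since $x-1$ is a factor exactly when $P(1)=0$,
$$P(1) = 1 + a - 1 + 8 = a + 8 = 0 \;\Rightarrow\; a = -8.$$
The Vieta route gives the same value: $x_2x_3=-8$ and $x_2+x_3+x_2x_3=-1$ give $x_2+x_3=7$, so $a = -(1+x_2+x_3) = -8$.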
{"url":"http://mathhelpforum.com/algebra/206875-algebra-ii.html","timestamp":"2014-04-17T04:29:07Z","content_type":null,"content_length":"44478","record_id":"<urn:uuid:615abd7b-e30c-4348-b591-eda7dce5b458>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
Generalise Invertible Matrix Generalise Invertible Matrix PDF Sponsored High Speed Downloads 16. The characteristic polynomial We have already seen that given a 2 2 matrix A= a b c d ; there is a single number ad bcsuch that Ais invertible if and only In the present paper we extend and generalise the findings of [8] to an arbitrary ring of ternions, with finite cases handled in somewhat more detail. 2 Ternions Let F be a (commutative) field. ... by the invertible matrix ... we investigate generalise Ochs’ “deterministic” perturbations in the context of ... semi-invertible matrix cocycles under iid perturbations, was recently obtained in [11]. The main result of [11] implies stability of invariant measures for MCREs Powers of elements Aim lecture: We introduce the notion of group actions to generalise and formalise the notion of permuting objects such as the rows (or columns of a matrix). signature amongst the eigenvalues of the transformation matrix, and proves that it is su cient to do so. ... We can also generalise the loop condition to allow for a conjunction of multiple linear in- ... P2 can be transformed by an invertible transformation, preserving its termination properties. by transforming an existing ordering by the action of an invertible matrix (Theorem 4.1). The ... If we generalise this definition of the fundamental region to the case when > is partial we obtain an important new ordering B, which turns ... We will then generalise to C-vector spaces and consider Hermitian spaces and unitary morphisms - these are the complex analogues ... Bis an invertible matrix. Proof: Suppose that B is nondegenerate. We will show that A = [B] Bis invertible by showing that ker T Generalise: m points and values, and n functions Solve: Interpolation matrix. Golan [Gol99] Semiring = “ring without subtraction ... Invertible matrices = generalized permutation matrices permutation matrix permuting rows and/or columns of HFE. In this paper, we generalise the idea of Kipnis and Shamir to attack partially the HFE cryptosystem of degree 3. ... two invertible matrix S = {sij} and t = {tij} with entries ... The transformation in a matrix representation consists in for any choice of invertible matrix Aand any diagonal matrix Dfor which W+ Dis positive definite. N(x;m;) ... [25], which generalise the restricted Boltzmann machines. The specific Gaussian-Bernoulli harmonium is in common use, ... random matrix in which all the coefficients a ... • In particular, the matrix M is invertible or non-singular iff det(M) ... The trivial proof does not generalise well to the discrete case. Hence we will need to find a less trivial proof. Now in order to generalise the above idea we can investigate what happens if we do not insist on deriving the sensing matrix from a linear transformation of the problem. Instead ... and assuming that the matrix Ψ⋆ IΦI is invertible so that we De nition of adjoint Aim lecture: We generalise the adjoint of complex matrices to linear maps between n dim inner product spaces. In this lecture, we let F = R or C. Revisiting the Pascal Matrix Barry Lewis 1. INTRODUCTION. ... if we generalise this to f(z)zk k!, k ≥ 0, what array—if any—does it generate if we use the appropriate function (for a given ... • if f (0) = 0 then the array M is invertible; MA398 Matrix Analysis and Algorithms Sheet 2, Questions ... Assume that Ahas full rank so that ATAis invertible. ... We generalise the pseudo inverse to the rank-de cient case by de ning Aybto be the element of L(b) ... 
We generalise this construction to any length pLegendre sequence in ... it is necessary that S or ~S be invertible. This in turn implies that s or ~s, when viewed as polynomials, s(x) or ~s(x), ... circulant matrix, P, whose rst row is the negation of ~s for p= 8k 3, and Precision Matrix Modelling for Large Vocabulary Continuous Speech Recognition (LVCSR) ... ‘ Generalise using basis superposition framework: ... ‘ Transformation matrix, A, is square and invertible. generalise the notion of positive factor in so far as they are defined on the set of positive semidefinite ... (the set of invertible real d dmatrices), M2M d ... with the Matrix Cameron-Martin formula given by [5]. tematic method of finding, where it exists, an invertible t ×t matrix X over F with XA=BX? Should X exist then A and B =XAX−1 are called similar. The answer ... matrix A, as above, ... nevertheless I have adopted proofs which generalise without material change, that of This allows to generalise Zyskind (1967) most famous equivalent condition to Ander-son theorem in the following corollary, ... The proof, provided in the appendix, is based on the fact that the covariance matrix of the non-invertible MA(q) ... This allows to generalise Zyskind (1967) most famous equivalent condition to Anderson theorem in the following corollary, ... The proof, provided in the Appendix, is based on the fact that the covariance matrix of the non-invertible MA(q) process (10) ... The purpose of this paper is to generalise some of the above ideas to the case ... invertible for each t ∈ R and hence we get the identity ... Any matrix C ∈ R n ... These models generalise both generalised linear models and survival analysis. ... determines an invertible change of parameter. 2·4. ... Then the model matrix Mf used for the prediction is different from that used an invertible matrix. Jurek [19] also investigated the case where J is a bounded op-erator in a Banach space. It is a consequence of results found in [39], [21], [22] and ... Here we generalise operator self-decomposability by taking ... The zero-curvature representation We generalise the setting of the Lax representation from KdV to more general integrable PDEs. ... given a matrix-valued function G( ), to construct two matrix functions, G ... are invertible then, subject to the latter normalisation, the solution is unique. Thus are invertible, then give the inverse matrix, otherwise explain why they are not invertible. a.-.. / 90 30 4 54 57 17 27 69 40 0 1 1 2 ... We may generalise the notion of square roots to matrices by de ning the square root of a (square) ... •A matrix representation of the geometric algebra of 3D space. MIT2 2003 4 ... • Does all generalise to multiparticle setting. MIT2 2003 11 Magnetic Field ... • Spacetime vector derivative is invertible, can The resolvent ( I A) 1 of a matrix Ais naturally an analytic function of 2C, ... many results naturally generalise to operators on Banach spaces. ... ( I A) is invertible for close to 0, so we cannot start with ( I 1A) . In the spirit of (1.1) we write I 1A= ( (i.e. a map whose Jacobian matrix has maximal rank at each point of Cn) is an isomorphism. Several attempts have been made to generalise this conjecture by allowing one of the Cn’s to be replaced by an irreducible affine variety of ... invertible elements of its coordinate ring are ... Our method generalise the method introduced by Elliott for general hidden Markov models and avoid to use backward recursion. Key words: Hidden Markov models, Switching models. 
MSC: 62M09 ... i = suppose that the matrix is invertible. Generalise this. Exercise** 2.36. Let R= Z and let I = h3i:What is I2? What is In? Let J= h12i:What is IJ? ... What is a criterion for a matrix in M 2(Z 8) to be invertible? Exercise 6.43. Let R= M 2 (F 4) be the ring of 2 2 matrices with entries from the eld F tions that can be exploited to generalise the notion of scaling a sample or pixel, ... The equivalence between invertible functions in quaternary ... above matrix equation reveals that these matrices are inde- invertible We replace this by the following simpler ... matrix It is represented by an idempotent matrix F, ... (which does not generalise these theorems however) In 2004: simple constructive proofs of these results (that can be thought of The matrix of the linear map in Example 2.8 with respect to the standard basis ... linear endomorphisms. The subset (in fact, subgroup) of invertible endomorphisms is denoted by GL.V/. ... Both statements generalise more than two subspaces: The C-vector space V is said to be the direct In section 6 we will generalise the results from section 4 and show that the ... The operator ⊙ is any invertible operator. On the pixel ... are reasonable as H is a blurring matrix and hence should operate equally on Our main aim for this talk is to generalise these Lp spaces to the non-commutative situation. ... a matrix, A, is self-adjoint if A= A¯T. It is unitary if it is invertible and its inverse is its conjugate transpose. Now generalise the above example so that instead of Cthe fibres of E0 are simple matrix ... which all morphisms or arrows gare invertible and a group is a groupoid with just one object. ... matrix units, {eij: 1 ≤ i,j ... the class of affine processes and they generalise the notion of positive factor insofar as they are defined ... (the set of invertible real d dmatrices), M2M d ... matrix Riccati ODE to have a unique non-negative solution which makes the closed loop system matrix Matrix subordinators are a generalisation of the one- ... generalise and refine a result of [1] ... of invertible n× nmatrices by GLn(R), the linear subspace of symmetric matrices by Sn, the (closed) positive semidefinite cone by S+ We generalise this for the case of Gaussian covariance function, ... covariance matrix) along with Gaussian basis functions ... is invertible, then α = K−1u. Following this finite analogy, by k−1 we now intend a sloppy same solution, provided the matrix A is invertible. ... Nonetheless, we can generalise this idea in the following way. ... Moreover, sinceAis invertible, we also must have thatH# j is invertible. Ifx ∈ x 1+K j(r 1,A) As in length, one can generalise the concept of angles in Minkowski ... well and hence must be invertible i.e. O' ... In matrix notation, ηη= LLt. Note the invariance condition is transferred to invariance of Minkowski ... we propose a simple matrix formulation for parameter-izing the saturated model as in Glonek ... these results generalise those of Dardanoni & Forcina ... ato ais also invertible and di We generalise this by replacing constant parameters by smooth, invertible functions of the linear predictors from the real-line to the positive half-plane . ... is a matrix of M features (powers of LGD and its products with covariates): where we de ne the weight matrix WN M = (wnm) and the invertible diagonal matrix GM M = diag(gm) as wnm def= e 12k xn ym ... These ideas generalise directly to elastic nets of two or more dimensions, but the structure of D is more compli-cated, ... 
Therefore we generalise this model inasmuch ... For γ 6= β the matrix A can be diagonalised, i.e., there exists an invertible matrix ... For the situation γ = β the matrix A is a multiple of a 2× 2 Jordan block. Thus, matrix [6]. One way to retain ... Such operators generalise the vector product and the scalar product, respectively, ... product, which is a combination of outer and inner product and provides GA with a rich algebraic structure, as it is invertible [5]. In the GA projective space a point is ... On généralise ainsi des idées de Courant et Priedrichs [7], et de Sedov ... properties for every t in the time interval (tOy t\)\ the map a e M3 > </>(a, t) 6 M3 must be invertible, continuons ... Using block matrix notations, the matrix operator A is A = ^2 Akdk 1 k 1, 2,3, with /0 Ak = generalise { and include as examples ... are matrix inverses (matrix multiplication) and appropriately de ned inverse functions (function composition). ... The set of invertible n nmatrices forms a group with respect to matrix multi-plication. to generalise this approach, the concept of the model error model, ... as a fixed invertible minimum phase transfer matrix. Then it can be assumed without loss of generality that W ... Linear Matrix Inequalities in System and Control Theory. SIAM, ...
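As a quick numerical illustration of the $2\times 2$ determinant criterion quoted in the first snippet above (my own Python/NumPy sketch, not taken from any of the listed papers):

import numpy as np

# Criterion from the snippet: A = [[a, b], [c, d]] is invertible iff ad - bc != 0.
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]    # ad - bc = 1, so A is invertible
A_inv = np.linalg.inv(A)                       # exists precisely because det != 0
print(det, np.allclose(A @ A_inv, np.eye(2)))  # prints: 1.0 True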
{"url":"http://ebookily.org/pdf/generalise-invertible-matrix","timestamp":"2014-04-23T11:07:00Z","content_type":null,"content_length":"42306","record_id":"<urn:uuid:9f00a0ba-21f2-4bc8-a852-36d018bf7c92>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Mathematics education on the arXiv? Replies: 2 Last Post: Jul 29, 2013 10:57 AM Messages: [ Previous | Next ] Re: Mathematics education on the arXiv? Posted: Jul 29, 2013 1:13 AM One crucial arena of math-ed that sorely needs *mathematical* attention by competent mathematicians is perhaps best called, "Mathematical Knowledge for Teachers' Education." Perhaps it could be a sub-category. I am speaking about knowledge within mathematical theories ... rather than mathematicians thoughts about what/how mathematics should be taught [e.g. Wu's opinions about teaching For a *mathematical* example: the commonly taught non-sense about the real-domain, real range, quadratic 3(x-5)^2+7, having non-real solutions is self-contradictory (and thoughtful students are troubled by it). The mistakenly asserted "complex roots" are 5 [+/-]sqrt[-7/3], which makes no sense for real valued functions of real variables. BUT, when that parabola is vertically flipped over its vertex, its "image" - - 3(x-5)^2+7, whose real-number solutions are 5 [+/-]sqrt[7/3] ... as "image roots" for the original Its not a new mathematical discovery, but it is important for teachers of algebra (and of algebra-teachers) to know. There probably are dozens of such MKTE insights that should be made accessible to all teachers of core-curricular mathematics, ASAP. Indeed, there is a nationally crucial need for mathematicians to mathematically re-examine the mathematical foundations of core-curricular mathematics ... and to collectively assemble a mathematically solid body of MKTE [e.g. math majors who teach school mathematics should know the logic of converting between decimals and percents]. I am personally ready to publish several such *mathematical* papers about little known MKTE essentials, and to nurture collaborative development of a library of such items. Some exemplary papers are available if they are - -------------------------------------------------- From: "Alain Schremmer" <schremmer.alain@gmail.com> Sent: Sunday, July 28, 2013 7:21 PM To: <mathedcc@mathforum.org> Subject: Re: Mathematics education on the arXiv? > On Jul 27, 2013, at 5:21 PM, Dana Ernst wrote: >> Greetings! My name is Dana Ernst and I am an assistant professor at >> Northern Arizona University. I am a mathematician that dabbles in math >> ed. >> The virtues of the arXiv are well known. Yet, there is currently no >> dedicated category on the arXiv for mathematics education research. The >> math.HO ? History and Overview category lists mathematics education as >> one of the possible topics, but it doesn?t appear to be commonly used >> for this purpose. In contrast, there is an active physics education >> category (physics.ed-ph). Unfortunately, at this time, there is not a >> culture among math ed folks to utilize pre- print servers like the arXiv. >> However, if there is going to be a cultural shift, there needs to be a >> dedicated repository for math ed papers. Authors need to know where to >> submit papers and readers need to know where to look. A category called >> History and Overview doesn?t cut it. A precedent has been set by the >> physics education crew and we should follow in their footsteps. It is >> also worth mentioning that Mathematics Education is listed as one of the >> American Mathematical Society?s subject classification codes (n! 
>> umber 97). >> I've contacted the arXiv and they are open-minded to adding math.ED - >> Mathematics Education as a category. However, they will seriously >> consider it, they want to know that there is support from the community >> and that it will get used. As a result, I have created a petition on >> change.org. If you are in favor of the arXiv including math.ED ? >> Mathematics Education as a category, please sign the petition. If you >> would also utilize this category by uploading articles related to >> mathematic education, please leave a comment (on the petition) >> indicating that this is the case. You can find the petition here: >> http://www.change.org/petitions/arxiv-org-add-math-ed-mathematics-education-category-to-arxiv >> The arXiv mentioned support by at least 50 people, but I'm shooting for >> 100, so if you are in favor, please take a minute to sign the petition. >> If you are curious or want to know more, check out the short blog post >> that I recently wrote: >> http://danaernst.com/mathematics-education-on-the-arxiv/ >> I'd love to hear what y'all think about this. Feel free to comment on >> the blog or reply to this email. I'm especially interested in hearing >> from people that are willing to help out. >> People that have responded to me via email, Twitter, Google+, and my >> blog post have been pretty supportive, so I'm hoping to see this come to >> fruition. It is worth noting that 3 people so far have expressed their >> support but also their skepticism that math ed folks will go along with >> the idea. In short, all three people have said something like, "math ed >> researches have been shunned too many times by mathematicians, and as a >> result are protective of their territory." Maybe this is true, but I'm >> not okay with it. Let's close the divide. As a mathematician that >> dabbles in math ed, I feel pretty passionate about this. > (1) arXiv is indeed a very nice "container". > (2) Unfortunately, there is just about nothing worth placing therein: > "math ed" is no more a science than, say, "economy". There are just > "Educologists" who want you to believe that they have just discovered the > ultimate "sugarcoating" to make "math" palatable to those, let us tacitly > agree, mostly mindless students. > (3) The only work in mathematics education I respect is that of Z. P. > Dienes but it pertains only to children. > (4) The only work I respect in adult education is Atherton, J. S. (1999) > Resistance to Learning [...] in Journal of Vocational Education and > Training Vol 51, No. 1, 1999 That's hard to find but he wrote > <http://www.doceo.co.uk/original/learnloss_1.htm> > on the subject. > (5) "Mathematics is not necessarily simple" (Gödel Incompleteness > Theorem, algebraic statement by Halmos) but the only known way for adults > to learn mathematics is by gaining "mathematical maturity" by > experiencing the "compressibility of mathematics" by "reading pencil in > hand". Hence the importance of the text. However, textbooks have very > rarely been good at presenting even those parts that are simple. (One, > rare, exception being Fraleigh's A First Course in Abstract Algebra.) Of > course, books used to be written for the colleagues whether because they > may review the book and/or because they may let their students buy it. > (Now the books are written for---or increasingly by---the editor and are > at an all time low.) 
Yet > (6) The difficulty in learning mathematics resides only in: > -- the degree of abstraction of what I am considering (= how far removed > is it from the real world, e.g. when I am counting marbles, I ignore > their colors while when I am operating in a group of moves, I am ignoring > a lot more than the nature of the universe in which I am making these > moves.) > -- the degree to which the information is concentrated in the language or > even left as "going without saying". > Both can be dealt with in an honest text via a Model Theoretic setting. > The idea of level and the concomitant idea of prerequisite are artificial > constructs that can usually be easily dispensed with. > For a specific example of what I mean, consider my > <http://www.freemathtexts.org/Standalones/RAF/index.php> > While not particularly well written, it is essentially without > prerequisite and, In fact, it is really a text on differential calculus: > Given > f(x_0+h) = A(x_0) +B(x_0)h +C(x_0)h^2 +D(x_0)h^3 + [...] (an > extrapolation of decimal approximation) > we need only give a name to the functions > x ------> B(x), > x ------> C(x), > ... > to have the derivatives---up to the factorial needed to make things > recursive. > Thus, all that is needed to learn, say, the differential calculus beyond > a good text, is only a willingness to: > -- stop and consider, > -- insist on things making sense. > So, an efficient scenario is to let the students unable to jump to this > or that level go through a sequence such as described in > <http://www.ams.org/notices/201303/rnoti-p340.pdf> > (7) Last, but not least, while we are always ready to brag about the > latest little "educational" gimmick we are using, we are quite unwilling > to depart significantly from the "true and tried"---rather understandably > if you are not tenured or need a promotion. (How many people do you know > who are not lecturing?) > Regards > --schremmer > **************************************************************************** > * To post to the list: email mathedcc@mathforum.org * > * To unsubscribe, email the message "unsubscribe mathedcc" to > majordomo@mathforum.org * > * Archives at http://mathforum.org/kb/forum.jspa?forumID=184 * > **************************************************************************** * To post to the list: email mathedcc@mathforum.org * * To unsubscribe, email the message "unsubscribe mathedcc" to majordomo@mathforum.org * * Archives at http://mathforum.org/kb/forum.jspa?forumID=184 * Date Subject Author 7/29/13 Re: Mathematics education on the arXiv? Clyde Greeno @ MALEI 7/29/13 Re: Mathematics education on the arXiv? Alain Schremmer
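A quick check of the quadratic example in Clyde Greeno's post at the top of this thread (my verification, not part of the discussion): $3(x-5)^2+7=0$ gives $(x-5)^2=-\tfrac{7}{3}$, so $x=5\pm i\sqrt{7/3}$ and there are no real solutions, while the vertically flipped parabola $-3(x-5)^2+7=0$ gives $(x-5)^2=\tfrac{7}{3}$, i.e. the real values $x=5\pm\sqrt{7/3}$ that the post calls "image roots" of the original.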
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2584123&messageID=9184001","timestamp":"2014-04-19T02:32:05Z","content_type":null,"content_length":"28377","record_id":"<urn:uuid:3b7f7dba-079a-4936-b1e7-55026d398185>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
Bang patterns, ~ patterns, and lazy let John Hughes rjmh at cs.chalmers.se Wed Feb 8 00:56:05 EST 2006 From: Simon Peyton-Jones To: John Hughes ; haskell-prime at haskell.org Sent: Tuesday, February 07, 2006 11:37 PM Subject: RE: Bang patterns, ~ patterns, and lazy let Applying the rules on the wiki, the first step is to translate the first expression into a tuple binding, omitting the implicit ~: Not so! I changed it a few days ago after talking to Ben, to a simpler form that works nicely for recursive bindings too. Darn I forgot to change the rules at the bottom. Anyway, read the section “Let and where bindings”. Sorry about the rules at the end. The trouble with those parts is that NOWHERE do they discuss how to translate a let or where containing more than one binding. If they're not to be translated via tupling, then how are they to be translated? The only relevant thing I could find was in the "modifications to the report" section at the bottom, which just tells you to omit implicit ~ when applying the tuplling rules in the report. So I don't understand how the semantics of multiple bindings is supposed to be defined (and I admit my proposal isn't so nice either). But more and more complex translations make me very nervous! I have a feeling there could be a nice direct semantics, though, including both ! and ~ in a natural way. Let's see now. Let environments be (unlifted) functions from identifiers to values, mapping unbound identifiers to _|_ for simplicity. The semantics of patterns is given by P[[pat]] :: Value -> Maybe Env The result is Just env if matching succeeds, Nothing if matching fails, and _|_ if matching loops. Two important clauses: P[[! pat]] v = _|_ if v=_|_ P[[pat]]v otherwise P[[~ pat]] v = Just _|_ if P[[pat]]v <= Nothing P[[pat]]v otherwise In definitions, pattern matching failure is the same as looping, so we P'[[pat]]v = _|_ if P[[pat]]v = Nothing P[[pat]]v otherwise We do need to distinguish, though, between _|_ (match failure or looping), and Just _|_ (success, binding all variables to _|_). The semantics of a definition in an environment is D[[pat = exp]]env = P'[[pat]] (E[[exp]]env) (*) where E is the semantics of expressions. Note that this takes care of both ! and ~ on the top level of a pattern. Multiple definitions are interpreted by D[[d1 ; d2]]env = D[[d1]]env (+) D[[d2]]env where (+) is defined by _|_ (+) _ = _|_ Just env (+) _|_ = _|_ Just env (+) Just env' = Just (env |_| env') Note that (+) is associative and commutative. Let's introduce an explicit marker for recursive declarations: D[[rec defs]]env = fix menv'. D[[defs]](env |_| fromJust menv') This ignores the possibility of local variables shadowing variables from outer scopes. *Within defs* it makes no difference whether menv' is _|_ (matching fails or loops), or Just _|_ (succeeds with variables bound to _|_) If defs are not actually recursive, then D[[rec defs]]env = D[[defs]]env. Now let expressions are defined by E[[let defs in exp]]env = E[[exp]](env |_| D[[rec defs]]env) (this also ignores the possibility of local definitions shadowing variables from an outer scope). Too late at night to do it now, but I have the feeling that it should not be hard now to prove that E[[let defs1 in let defs2 in exp]]env = E[[let defs1; defs2 in exp]]env under suitable assumptions on free variable occurrences. 
That implies, together with commutativity and associativity of (+), that the division of declaration groups into strongly connected components does not affect I like this way of giving semantics--at least I know what it means! But it does demand, really, that matching in declarations is strict by default. Otherwise I suppose, if one doesn't care about compositionality, one could replace definition (*) above by D[[!pat = exp]]env = P'[[pat]](E[[exp]]env) D[[pat = exp]]env = P'[[~pat]](E[[exp]]env), otherwise But this really sucks big-time, doesn't it? More information about the Haskell-prime mailing list
{"url":"http://www.haskell.org/pipermail/haskell-prime/2006-February/000495.html","timestamp":"2014-04-18T06:56:42Z","content_type":null,"content_length":"6633","record_id":"<urn:uuid:334c9fad-24b9-48b0-93d2-276d48966424>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
log and exponential functions

May 3rd 2009, 02:31 PM #1 Senior Member (Nov 2008)
log and exponential functions
For the function y = 0.5^x:
a) identify intervals for which the function is positive and intervals for which it is negative, for both the function and its inverse;
b) identify intervals for which the function is increasing and intervals for which it is decreasing, for both the function and its inverse.
I don't get how to do these questions. How do I figure out if the function and its inverse are positive, negative, increasing or decreasing? If you could provide me the info that I need to know to figure out these questions, it would be really helpful! Thank you so much!

May 3rd 2009, 03:27 PM #2
$y = \left(\frac{1}{2}\right)^x = 2^{-x}$
You should know what the graph looks like. Using that graph, you should be able to sketch its inverse, then answer the questions.
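Spelling out what the suggested graph shows (a worked summary, not part of the original replies): $y=(1/2)^x=2^{-x}$ is positive for every real $x$, never negative, and strictly decreasing on $(-\infty,\infty)$. Its inverse, $y=\log_{1/2}x$, is defined only for $x>0$; it is positive on $(0,1)$, negative on $(1,\infty)$, and decreasing on all of $(0,\infty)$.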
{"url":"http://mathhelpforum.com/pre-calculus/87205-log-exponential-functions.html","timestamp":"2014-04-17T11:20:01Z","content_type":null,"content_length":"34729","record_id":"<urn:uuid:125935eb-58c3-460a-af11-af1541880f47>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
’t Hooft on Cellular Automata and String Theory Gerard ’t Hooft in recent years has been pursuing some idiosyncratic ideas about quantum mechanics; for various versions of these, see papers like this, this, this and this. His latest version is last month’s Discreteness and Determinism in Superstrings, which starts with cellular automata in 1+1 dimensions and somehow gets a quantized superstring out of it (there are also some comments about this on his web-site here). Personally I find it difficult to get at all interested in this (for reasons I’ll try and explain in a moment), but those who are interested might like to know that ’t Hooft has taken to explaining himself and discussing things with his critics at a couple places on-line, including Physics StackExchange, and Lubos Motl’s blog. If you want to discuss ’t Hooft’s ideas, best if you use one of these other venues, where you can interact with the man himself. One of ’t Hooft’s motivations is a very common one, discomfort with the non-determinism of the conventional interpretation of quantum mechanics. The world is full of crackpots with similar feelings who produce reams of utter nonsense. ’t Hooft is a scientist though of the highest caliber, and as with some other people who have tried to do this sort of thing, I don’t think what he is producing is nonsense. It is, however, extremely speculative, and, to my taste, starting with a very unpromising starting point. Looking at the results he has, there’s very little of modern physics there, including pretty much none of the standard model (which ’t Hooft himself had a crucial role in developing). If you’re going to claim to solve open problems in modern physics with some radical new ideas, you need to first show that these ideas reproduce the successes of the estabished older ones. From what I can tell, ‘t Hooft may be optimistic he can get there, but he’s a very long way from such a goal. Another reason for taking very speculative ideas seriously, even if they haven’t gotten far yet, is if they seem to involve a set of powerful and promising ideas. This is very much a matter of judgement: what to me are central and deep ideas about mathematics and physics are quite different than someone else’s list. In this case, the central mathematical structures of quantum mechanics fit so well with central, deep and powerful insights into modern mathematics (through symmetries and representation theory) that any claim these should be abandoned in favor of something very different has a big hurdle to overcome. Basing everything on cellular automata seems to me extremely unpromising: you’re throwing out deep and powerful structures for something very simple and easy to understand, but with little inherent explanatory power. That’s my take on this, those who see this differently and want to learn more about what ’t Hooft is up to should follow the links above, and try discussing these matters at the venues ’t Hooft is frequenting. 30 Responses to ’t Hooft on Cellular Automata and String Theory 1. I hope you’ll also comment on Alain Connes new paper. It does try to be consistent with the standard model, but apart from that it’s too hard for me. 2. Even though my work is here sketched as “not even wrong”, I will avoid any glimpse of hostility, as requested; I do think I have the right to say something here in my defense (One positive note: “Not even wrong” sounds a little bit better than “Wrong wrong wrong” on another blog …). 
First, I agree that cellular automata doesn’t sound very sexy; those who have seen Wolfram’s book will certainly be discouraged. But I want to stress as much as I can that I am striving at a sound and interesting mathematical basis to what I am doing; least of all I would be tempted to throw away any of the sound and elegant mathematics of quantum mechanics and string theory. Symmetries, representation theory, and more, will continue to be central themes. I am disappointed about the reception of my paper on string theory, as I was hoping that it would open some people’s eyes. Perhaps it will, if some of my friends would be prepared to put their deeply rooted scepsis against the notion of determinism on hold. I think the mathematics I am using is interesting and helpful. I encounter elliptic theta functions, and hit upon an elegant relation between sets of non-commuting operators p and q on the one hand, with integer, commuting variables P and Q on the other. All important features of Quantum Mechanics are kept intact as they should. I did not choose to side with Einstein on the issue of QM, it just came out that way, I can’t help that. It is also not an aversion of any kind that I would have against Quantum Mechanics as it stands, it is only the interpretation where I think I have non-trivial observations. If you like the many world interpretation, or Bohm’s pilot waves, fine, but I never thought those have anything to do with the real world; my interpretation I find far superior, but I just found out from other blogs as well as this one, that most people are not ready for my ideas. Since the mud thrown at me is slippery, it is hard to defend my ideas but I think I am making progress. They could well lead to new predictions, such as a calculable string coupling constant g_s, and (an older prediction) the limitations for quantum computers. They should help investigators to understand what they are doing when they discuss “quantum cosmology”, and eventually, they should be crucial for model building. G. ’t H 3. Prof. ‘t Hooft, Thanks for writing here with your reaction to and comments on the blog posting. I hope you’ll keep in mind that I often point out that “Not Even Wrong” is where pretty much all speculative ideas start life. Some of the ideas I’m most enthusiastic about are certainly now “Not Even Wrong”, in the sense of being far, far away from something testable. While my own enthusiasms are quite different than yours, and lead me to some skepticism about your starting point, the reason for this blog posting was not to launch a hostile attack, but to point others to what I thought was an interesting discussion, one which many of my readers might find valuable to know about. Good luck pursuing these ideas, may you show my skepticism and that of others to be mistaken… 4. Prof. ‘t Hooft, While I am not familiar with your particular work, I am familiar with previous explorations on the theme of interpretations on quantum mechanics and determinism, particularly with old things such as de Broglie-Bohm’s theory, Bell’s contextual ontological model, Kochen-Specker’s model, and newer things such as Harrigan & Spekkens classification of ontological models, Lewis et al. psi-epistemic model, Hardy’s excess baggage theorem, etc. But after studying them with interest for a while, I gradually developed the opinion that they have no good motivation, use uninteresting mathematics, and have been generally fruitless. Since then I have stopped paying attention to this area of research. 
Since you seem to be interested in defending your work and, furthermore, in publicizing it, I would be very interested in knowing what’s the difference from these previous explorations and, more importantly, what’s the motivation for you to begin work in this area (since you claim not to be motivated by an aversion to Quantum Mechanics). I do hope you can convince me to study your work, and perhaps an answer could be useful to other researchers as well. 5. I think it’s great that Prof. ‘T-Hooft (hope I got the capitalizations and apostrophes right) is commenting here. This site is in many ways much less hostile than other sites we could all name I started out (well, in terms of blog years, not my physics education several decades ago) as a string critic. I didn’t find Brian Greene’s first book very persuasive (nor a book from the 1980s, I forget who wrote it). I was receptive to Lee Smolin’s “Three Roads to Quantum Gravity” and to this blog, even before the book came out. However, partly as a result of reading this blog for about half a dozen years (I think), I’ve gradually warmed to a lot of the string stuff. Not the Kaku stuff, but the Arkani-Hamed types of The fact that we may be 10-15 orders of magnitude away from probing the energies needed to test some of these theories is, of course, daunting. And I think both Woit and Smolin were basically right to warn of the “We are only interested in hiring string theorists” situation a while back. However, things seem to be on a somewhat better keel today. Or so is my perception from this blog and others. And of course the stuff coming out of cosmology and observational astronomy is just plain exciting. How it links with math is also exciting, albeit probably decades or even centuries off in terms of real experimental links. Exciting times. And I think “Not Even Wrong” is useful for reminding its readers to doubt some popular theories. Sort of like the guy the Romans used to hire to ride behind the triumphant god-king warrior to remind him he is not really a god. As for CAs, I’d've thunk (a technical term –Tim May 6. Prof. ‘t Hooft, I think the key point in all this is in your last paragraph: “They could well lead to new predictions, such as a calculable string coupling constant g_s, and (an older prediction) the limitations for quantum computers. They should help investigators to understand what they are doing when they discuss “quantum cosmology”, and eventually, they should be crucial for model building.” Regardless of what people think about your ideas (most of the people probably just dismiss it without reading it after hearing about what you are claiming) the key point with new interpretations of Quantum Mechanics is suggesting an experiment where it can predict something new (OK, saying this to a Nobel prize professor sounds perhaps very pretentious, but I just want to express my deepest concern about your message). A new paper based on this one, but concentrated only on the “what’s measurably different” in all of what you are saying, would (I suppose) attract attention and perhaps put skepticism on hold for a while. I confess I find difficult to digest your paper but if I could understand better where could it measurably matter then this would be a different story. Otherwise it will probably stay as a curiosity. Let me finish saying I deeply admire your courage on working with fundamental problems in QM and as I semi-young physicist I even more deeply envy the freedom you have to pursue your own ideas. 7. 
Prof ‘t Hooft is in exactly the right situation to play with highly speculative ideas that a young postdoc. cannot afford to do. 8. If anyone has “earned the right” to engage the profession with speculative ideas, it is Professor ‘t Hooft with his track record of brilliant theoretical insights. I look forward to learning more about this. 9. ‘t Hooft, Could you explain why you think the Bohm theory is rubbish? It seems to me that it has already done exactly what you are trying to do — provide a perfectly consistent alternative to the standard approach in quantum mechanics. I also don’t think that you are siding with Einstein on quantum mechanics. Einstein made clear that he did NOT think something like Bell’s inequality could exist, but it DOES! He was very clear that his thinking would change if something like a Bell inequality was discovered, in particular it would lead to something along the lines of a Bohm interpretation. 10. At first, I did not believe it was GT – but doing the (’t) correctly (twice even) gave it away. Thanks for posting. 11. I posted this before and I will post it again! Ray Streater on Bohmian Mechanics: 12. Unless Prof. ’t Hooft himself wants to answer and discuss the topic, enough about Bohmian mechanics, please. 13. `t Hooft is one of the good guys, an incarnation of Dirk Foster and a physicist of the highest calibre. This deterministic programme seems Just Possibly True, and should probably serve as inspiration for the sort of paradigm-shifting research everyone seems to advocate (but no-one pursues). My humble opinion. 14. I think the biggest issue here is that ‘t Hooft (clearly a brilliant physicist) ignores Conway/Kochen, who clearly prove, given three axioms, that the universe *can’t* be deterministic. Actually, they prove it isn’t “random” in the usual sense either. And the axioms one needs to assume are essentially completely straight QM and relativity, together with the inability to influence past 15. Though I don’t necessarily accept ‘t Hooft’s scenario, I am a little surprised by some of the criticism. The no-go theorems mentioned above are not relevant. There is no contradiction for quantum evolution to occur in a deterministic system. A cellular automaton, for example, can be regarded as a transfer matrix with a special property. This property is that given any initial (basis) state, the matrix element is not zero for a unique choice of final (basis) state. Such transfer matrices can be unitary, hence identified with quantum evolution. I don’t know whether the scenario is useful or not. It is certainly not, however, ill-conceived. 16. There are two problems with Jeff’s dismissal of ‘t H.’s approach; Peter Orland has touched on one. For the other, see Wüthrich’s observation that even Conway and Kochen’s scenario depends on access to information about—in fact is a *function* of—prior events (which, NB, has nothing to do with influence over prior events) seems like a huge analytic challenge to C&W’s core thesis. Obviously, the game is still in play, and meanwhile, you can’t appeal to C&W as any kind of decisive refutation of what ‘t H. is proposing. 17. … and it is not the same as DeBroglie/Bohm mechanics. Or Nelson’s for that matter. 18. Peter O, I’m a bit confused about why you feel Conway/Kochen isn’t relevant. Prof. ‘t Hooft is more than a good enough mathematician to understand that his system somehow violates one of their 3 axioms, but I can’t from his writeup figure out which one it would be. 
Then again, if you read his paper, he seems to be ducking the issue “The philosophy used here is often attacked by using Bell’s inequalities[1]—[3] applied to some Gedanken experiment, or some similar “quantum” arguments. In this paper we will not attempt to counter those (see Ref. [16]), but we restrict ourselves to the mathematical expressions.” Since the discussion in [3] is a valid mathematical proof, I can’t understand ignoring it. 19. Jeff, ‘t Hooft’s model does not specify values of observables. It has deterministic evolution of basis states. That is why the theorems you quote do not apply. 20. Peter o Thanks, been too long since I thought like a physicist. :-). I’m reading as a mathematician, and assumed “deterministic” had the usual meaning. 21. Just another link, which may be helpful in this context: 22. “Though I don’t necessarily accept ‘t Hooft’s scenario, I am a little surprised by some of the criticism. The no-go theorems mentioned above are not relevant. There is no contradiction for quantum evolution to occur in a deterministic system. A cellular automaton, for example, can be regarded as a transfer matrix with a special property. This property is that given any initial (basis) state, the matrix element is not zero for a unique choice of final (basis) state.” This (cellular automaton as a transfer matrix) may or may not be true but it’s not interesting. The problem of interest here is not alternative computational models of QFT (ie alternative ways to solve PDEs or calculate path integrals). The problem of interest is WHY DOES SOMETHING LIKE COLLAPSE OF THE WAVE FUNCTION OCCUR? The deterministic answer, in all its forms, whether Bohm or many universes or whatever, is to claim that this “collapse” is a misunderstanding, that if you define the problem properly, then some deterministic combination of initial conditions plus minor perturbations along the way lead you inevitably to the specific state that you measured. The nice thing about this world view is that it’s compatible with relativity — it doesn’t bring up any (so far completely unresolved IMHO) questions about “when” does this collapse occur, bearing in mind that “when” is a relative concept and so does the collapse “propagate outward” at the speed of light from some initiating point (very problematic) or does it happen “simultaneously” (hmm, that’s not a very relativistic word) (perhaps simultaneously over some especially blessed world surface?). OK, so determinism is nice. Only problem is that it appears to be wrong wrong wrong. Bell’s inequality is one version of why it’s wrong, but the deeper reason it is wrong is that it uses a broken mental model of the relationship between probability and QM. Probability is based on the idea of underlying space \Omega, a sigma field of events, and a measure associated with the sigma field. On top of this we construct random variables which (and this is the important part) are all COMPATIBLE. That is, for any random variables X and Y, the concept of a joint distribution,say F(X, Y) is well defined. At root, this is because the sigma fields defined by X and Y are subfields of the underlying \Omega sigma field, and we can construct an intersection of them. Even more fundamentally, this is because the “building up” operator we’re working with is set union, and set intersection plays nicely with set union. Now we switch to the world of operators and vector spaces. Given one particular operator O, this has a set of eigenvalues. 
Associated with each set of eigenvalues (-\infinity, \lambda] is a vector space,call it V_\lambda. IF in addition we are given a vector \psi, we can now associate a real number with V_\lambda, namely the length of the projection of \psi into V_\lambda. This in other words gives us a monotonic increasing function (a measure) associated with increasing \lambda. This may look unfamiliar, but it’s really not scary, it’s the usual stuff you are familiar with — an eigenvalue and associated with a “probability” for that eigenvalue given by , only made a little more rigorous and described in the language of probability. This monotonic increasing function associated with increasing \lambda is just like the cumulative distribution function associated with a random variable,and because of that, people have for almost a hundred years being slipping informally between operator language (eigenvalues, eigenspaces, “probability associated with an eigenvalue”) to an implicit assumption that we are dealing with full-blown probability theory and random variables. We are NOT. Things fall apart if we consider now a second operator, call it Q, which does not commute with operator O. Whereas two random variables ARE always “compatible” in the sense that I stated earlier, specifically, that they have an associated joint distribution (and an associated underlying set of points \xi \in \Omega each of which represent “initial conditions” which might lead to a particular joint outcome of X=x and Y=Y; this sort of thing is no longer true for non-commuting operators O and Q. Q (and vector \psi) generate another spectrum of eigenvalues, each with an associated weight, and so can also, apparently, be thought of as random variable. But there is no “compatibility” between these two random variables. More specifically, there IS NO finer sigma algebra which contains both the sigma algebras generated by the O-”random variable” and the Q-”random variable”. At root, this is because the fundamental “points” we are dealing with when we treat O as having an associated random variable are not simple points, they are vector spaces, and the building-up operation as we aggregate these is not a union of sets of “simple points”, it is a cartesian product of vector spaces. But the cartesian product does NOT play nicely with intersection the way union does (we don’t have the full set of De Morgan laws). Or to put it slightly differently, given a set \Omega, the set which is actually relevant to QM is C^\Omega, and the measures defined by operators apply to this set C^\Omega “sliced by cartesian product” along different angles for different operators. This is different from standard measures which derive by slicing \Omega “by union”. The union slices, when intersected, still give useful sets. Cartesian product slices, when intersected, simply give the set {0}, not useful structure. I know this sounds like a whole lot of weirdness, but there’s nothing unorthodox here — it’s just standard probability theory, and standard Hilbert space theory interpreted as measure theory. But, IMHO, this conceptualization is, once you understand it, extremely powerful in revealing where the true weirdness of QM lies. In particular, again IMHO, it’s as powerful a mathematical argument as we’re ever going to get that the underlying idea of many-world theories. 
I’ve never seen a real mathematical formulation of a many-world theory, so I’ve no idea what the proponents actually mean; but as far as I can tell, what they mean is essentially a probabilistic model: the multiverse consists of some unfathomably large set \Omega of points \xi, each \xi corresponding to a universe, with some sort of measure tied to subsets of \Omega, and our universe is the deterministic unfolding of one of these \xi. As I’ve tried to explain, you just can’t get this to work, because the model only works when you “aggregate points by union”, and QM doesn’t do that.

23. Maynard, Bell’s inequality is not violated in ‘t Hooft’s model. It is just a way to formulate quantum mechanics. It is deterministic in the sense that one basis vector is sent to the next, during some discrete time interval. This is a linear unitary map (both vectors are normalized), hence just quantum mechanics. Nothing is any different concerning collapse or no-collapse of wave functions. A trivial example of this kind of model (much simpler than what ‘t Hooft considers) can be done with two spin states, s1 and s2. The transfer matrix sends s1 to s2 and s2 to s1. It therefore has the representation of the Pauli matrix sigma1. Well, that is a unitary evolution operator. A slightly less trivial example is a permutation of N objects, with N>2 (which can have complex eigenvalues, with unit norm). Any permutation can be represented as a unitary N x N matrix. Hence it can be written as the exponential of i (the square root of minus one) times the time interval times a Hamiltonian (also an N x N matrix). The Schroedinger equation is satisfied, in the sense that its solution is the state vector. The eigenvalues of the transfer matrix do not have to be real numbers (though its components are real). This kind of model is not a traditional hidden-variable theory in the sense of De Broglie, Bohm, Nelson or anyone else. Unlike those theories, you cannot specify all observables simultaneously. You can specify certain observables at discrete time intervals, but the uncertainty principle is intact. I am not saying you should accept the idea, just that you need to see it for what it is.

24. Anyway, I would prefer not spending time defending this. I wish I did not get incensed when I see people’s work criticized for the wrong reasons.

25. I meant to say Bell’s inequality is violated. It is just QM.

26. Thanks, Peter. I don’t want to thread-jack, but how does what you say fit in with Peter’s statement that “One of ’t Hooft’s motivations is a very common one, discomfort with the non-determinism of the conventional interpretation of quantum mechanics”? I will admit I have not yet read this particular ‘t Hooft paper (though I have pretty much always been pleased when I have read something by him), but I assumed, from that, that this WAS the point at issue. Hence my long attempt to summarize the issues in play and my views on them. If all we have is “deterministic evolution of the wave function”, then why is Peter saying what he is saying?

27. Maynard, if you read my posting, you’d notice that its point is to explain why I’m not trying to seriously understand exactly what ‘t Hooft is doing. So, I’m not in any position to participate in a discussion of this sort. Actually, neither are you, since you admit you haven’t read his paper. Enough about this.

28. Here’s a more concise rebuttal of Streater: http://www.ilja-schmelzer.de/realism/BMarguments.php

29. No more Bohmian mechanics. At all, ever.

30. Prof ‘t Hooft, don’t get discouraged by the opposition to your cellular automata approach to QM. Computer science is still in its infancy, and its impact on mathematics and physics has only just begun. I think your approach via minimalistic cellular automata is very promising, and the recent anti-determinism bias is just a fad. QM has equally expressive deterministic and non-deterministic formulations, and an exploration of the limits of deterministic formal systems given the current empirical evidence is essential. There seems little reason to posit non-determinism if it is not necessary to do so, as deterministic explanations invariably have more explanatory power.
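To make the two-state example from comment 23 concrete, here is a small numerical sketch (my own code, not taken from any of the papers discussed; it assumes NumPy and SciPy, and the time step dt = 1 is an arbitrary choice): the transfer matrix that swaps the two basis states is the Pauli matrix sigma_1, it is unitary, and writing it as exp(-i*dt*H) yields a Hermitian Hamiltonian, so the deterministic basis-state map is indeed ordinary unitary quantum evolution.

```python
import numpy as np
from scipy.linalg import expm, logm

# Transfer matrix of the two-state example: s1 -> s2, s2 -> s1 (Pauli sigma_1).
U = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# It is unitary: U^dagger U = I.
assert np.allclose(U.conj().T @ U, np.eye(2))

# Write U = exp(-i * dt * H) for an assumed discrete time step dt = 1,
# i.e. H = i * log(U) / dt, and check that H is Hermitian.
dt = 1.0
H = 1j * logm(U) / dt
assert np.allclose(H, H.conj().T)

# The Hamiltonian reproduces the transfer matrix, so the deterministic
# basis-state map satisfies an ordinary Schroedinger-type evolution.
assert np.allclose(expm(-1j * dt * H), U)
print(np.round(H, 6))
```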
{"url":"http://www.math.columbia.edu/~woit/wordpress/?p=5022","timestamp":"2014-04-17T18:47:16Z","content_type":null,"content_length":"78943","record_id":"<urn:uuid:7403b893-13c8-4711-93e3-817231205036>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
How to calculate type 2 error?

You can only calculate the probability of a type 1 error. A type 1 error is made when the null hypothesis is true but we reject it. You can calculate the probability of this error because the null hypothesis is (or should be) mathematically precise, stating what distribution and parameters are assumed to be involved.

A type 2 error is made when the null hypothesis is not true but you accept it anyway. Since it is not true, you don't know what the situation is. The distribution that you have assumed applies may not do so, and even if it does, you don't know what the parameter values are. This means that you cannot make any calculation of the probability of this type of error.

Normal practice is to set a maximum for the probability of a type 1 error (called the significance level of the test) and just hope that the probability of the other one is not too high. The only way to reduce the probability of both types of error at the same time is to have a larger sample size, but in ...
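As a numerical illustration of the point above (a rough sketch assuming SciPy; the test, sample size and the alternative value of the mean are all my own made-up choices, not from the question): the type 1 error probability is fixed by the null distribution alone, while a type 2 error probability can only be computed once a specific alternative is assumed.

```python
from scipy.stats import norm

# One-sided z-test of H0: mu = 0 against mu > 0, known sigma = 1, n = 25.
n, sigma, alpha = 25, 1.0, 0.05
se = sigma / n ** 0.5

# Critical value chosen so that P(reject | H0 true) = alpha.
crit = norm.ppf(1 - alpha, loc=0.0, scale=se)
print("type 1 error probability:", 1 - norm.cdf(crit, loc=0.0, scale=se))  # 0.05

# A type 2 error probability is only defined relative to an *assumed*
# alternative; mu = 0.5 below is a purely hypothetical choice.
mu_alt = 0.5
beta = norm.cdf(crit, loc=mu_alt, scale=se)  # P(accept H0 | mu = mu_alt)
print("type 2 error probability at mu = 0.5:", beta)
```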
{"url":"http://www.experts123.com/q/how-to-calculate-type-2-error.html","timestamp":"2014-04-16T13:06:17Z","content_type":null,"content_length":"43787","record_id":"<urn:uuid:1d4e701b-09cf-432c-9ef9-44a1a20663bd>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Travelling Trains

From: Anonymous
Date: Sun, 6 Nov 1994 21:06:23 -0700 (MST)
Subject: Dr. Math & trains

Dr. Math, glad to see that you are on-line. Where were you when I needed you? Is this for real? I received a note saying that you were looking for math questions from students who had math problems. True? I've always wondered about those two trains that left the East coast and the West coast. The question asked when they would meet or where they would meet if both left at the same time and were traveling at 60 to 70 mph. I know that isn't the exact question, but that has always stuck in the back of my mind. Care to try and figure out some kind of reply, even though the question is sort of hazy? Looking forward to hearing from you. I teach Spanish here at Madison Middle School in Albuquerque, New Mexico. Hasta la vista.

Brian E. Tafoya

Date: Mon, 7 Nov 1994 21:35:07 +0000
From: Elizabeth Anna Weber

Hi Brian! Yes, we are for real. Our main patients are K-12 students and their teachers, but thanks for writing to us anyway! If a train leaves the West Coast going at any speed, and another train leaves the East Coast at the same time, going at the same speed, the trains will meet in the middle. The exact meeting point depends on the exact starting points, but it will be somewhere on a line drawn from Texas to the Dakotas. The problem gets a little more complicated when the trains make a few stops, go at different speeds, or when the tracks don't go in a straight line from coast to coast.

Elizabeth, Math Doctor on duty.
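For readers who want the standard calculation behind the answer, here is a rough worked example; the distance and speeds are made-up illustrative numbers, not from the original question. Two trains approaching each other close the gap at the sum of their speeds.

```python
# Made-up numbers for illustration: a 2800-mile coast-to-coast route,
# one train at 60 mph and the other at 70 mph, both leaving at the same time.
distance = 2800          # miles (assumed)
v_west, v_east = 60, 70  # mph

t_meet = distance / (v_west + v_east)   # the gap closes at 130 mph
x_from_west = v_west * t_meet           # meeting point, measured from the West coast

print(f"They meet after about {t_meet:.1f} hours, "
      f"roughly {x_from_west:.0f} miles from the West coast.")
```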
{"url":"http://mathforum.org/library/drmath/view/58599.html","timestamp":"2014-04-18T21:01:46Z","content_type":null,"content_length":"6692","record_id":"<urn:uuid:152f80da-1d09-4883-bd76-a3cd29ae75e2>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding generalised Lyndon words

Let $\Sigma = \lbrace a_1, \ldots, a_n, A_1, \ldots A_n \rbrace$ (where $A_i = a_i^{-1}$) and $\prec$ be a total ordering on $\Sigma$. Let $\Sigma^*$ be the set of all words (generated by the alphabet $\Sigma$) and $\prec^*$ be the total ordering on $\Sigma^*$ induced by $\prec$ (dictionary / lexicographical ordering).

Let $G$ be a finitely presented group which acts on $\Sigma^*$. For $w \in \Sigma^*$, let $[w]$ denote the equivalence class of words under $G$ (i.e. $[w] = \text{Orb}_G(w)$). Let $\text{First}_G(w)$ be the first element of $[w]$ under the total ordering $\prec^*$ (i.e. $\text{First}_G(w)$ is the unique element of $[w]$ s.t. $\forall v \in [w] \backslash \lbrace \text{First}_G(w) \rbrace$, $\text{First}_G(w) \prec^* v$).

The naive way to determine $\text{First}_G(w)$ is to first generate $[w]$ and then determine the 'first' element of this set; however, in general $[w]$ may be an infinite set. In the case when $G = \langle \Sigma^* | \rangle$ and $g \in G$ acts on $\Sigma^*$ by $g: w \mapsto gwg^{-1}$, $[w]$ is the set of cyclic permutations of $w$ and $\text{First}_G(w)$ is the unique Lyndon word in $[w]$. In this particular case, Duval's algorithm will determine $\text{First}_G(w)$ without having to generate all of $[w]$.

Is there an algorithm for determining $\text{First}_G(w)$ without first determining all the elements of $[w]$ for a general group $G$ acting on $\Sigma^*$? Or alternatively:

Is there a $\Sigma$ and $G$ such that $\forall n$, $\exists w \in \Sigma^*$ such that there is no sequence $g_1, g_2, \ldots g_m \in G$ such that $(g_m \circ \cdots \circ g_1)(w) = \text{First}_G(w)$ and $\forall p < m$, $\text{length}((g_p \circ \cdots \circ g_1)(w)) - \text{length}(w) < n$? I.e. for any bound $n$ there is a word $w$ that must be made more than $n$ letters longer during any sequence of group actions that take it to its 'first' word.

Comment (Mark Sapir, Oct 2 '10): You consider only reduced words? The action of $G$ on $F^*$ should preserve what?
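For the classical special case mentioned in the question (cyclic permutations of a plain word, compared lexicographically), here is a sketch in Python. The function names are mine; `lyndon_factorization` is Duval's algorithm, and `least_rotation` is the usual two-pointer method for the smallest rotation, which computes $\text{First}$ in that case without enumerating the whole orbit. Neither addresses the general group action $G$ asked about.

```python
def lyndon_factorization(s):
    """Duval's algorithm: split s into a non-increasing sequence of Lyndon words, in O(len(s))."""
    n, i, factors = len(s), 0, []
    while i < n:
        j, k = i + 1, i
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors

def least_rotation(s):
    """Index at which the lexicographically smallest rotation of s starts (two-pointer duel)."""
    n, s2 = len(s), s + s
    i, j, k = 0, 1, 0
    while i < n and j < n and k < n:
        a, b = s2[i + k], s2[j + k]
        if a == b:
            k += 1
            continue
        if a > b:
            i = max(i + k + 1, j)   # candidate i cannot win; skip past the mismatch
        else:
            j = max(j + k + 1, i)
        if i == j:
            j += 1
        k = 0
    return min(i, j)

w = "bcab"
r = least_rotation(w)
print(lyndon_factorization(w))   # ['bc', 'ab']
print(w[r:] + w[:r])             # 'abbc' -- the least element of the rotation class
```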
{"url":"http://mathoverflow.net/questions/40866/finding-generalised-lyndon-words","timestamp":"2014-04-20T05:56:10Z","content_type":null,"content_length":"48712","record_id":"<urn:uuid:3b560e5f-6dc1-4ae3-a625-e3f2d0405464>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Feature Trees over Arbitrary Structures

Ralf Treinen*
Programming Systems Lab
German Research Center for Artificial Intelligence (DFKI)
Stuhlsatzenhausweg 3, D-66123 Saarbrücken, Germany

This paper presents a family of first order feature tree theories, indexed by the theory of the feature labels used to build the trees. A given feature label theory, which is required to carry an appropriate notion of sets, is conservatively extended to a theory of feature trees with the predicates x[t]y (feature t leads from the root of tree x to the tree y), where we have to require t to be a ground term, and xt# (feature t is defined at the root of tree x). In the latter case, t might be a variable. Together with the notion of sets provided by the feature label theory, this yields a first-class status of arities. We present a quantifier elimination procedure to reduce any sentence of the feature tree theory to an equivalent sentence of the feature label theory. Hence, if the feature label theory is decidable, the feature tree theory is too. If the feature label theory is the theory of infinitely many constants and finite sets over infinitely many constants, we obtain an extension of the feature theory CFT, giving first-class status to arities. As another application, we obtain decidability of the theory of feature trees, where the feature labels are words, and where the language includes the successor function on words, lexical comparison of words and first-class status of arities.

1 Introduction

Feature trees have been introduced as record-like data structures in constraint (logic) programming [4], [28], and as models of feature descriptions in computational linguistics [7], [6]. The use of record-like structures in logic programming languages, in the form of so-called ψ-terms [1], was pioneered by the languages LOGIN [2] and LIFE [3]. More recently, Oz [17, 26] uses a feature constraint system, the semantics of which is directly based on feature trees. In computational linguistics, feature structures have a long history in the field of unification grammars (as described in [25]).

* On leave to: L.R.I., Bât. 490, Université Paris Sud, F-91405 Orsay cedex, France, treinen@lri.fr
To appear in: Patrick Blackburn and Maarten de Rijke, eds., Logic, Structures and Syntax, Studies in Logic, Language and Information, 1995

In both areas, first order predicate logic has been recognized as a powerful description language for feature trees. For the first area, this is immediate by the role of constraints in constraint logic programming [19] and in concurrent constraint-based languages [26], while in the second area different approaches have been proposed. [24, 27, 20, 25] have advocated the use of predicate logic as feature description languages. [6] argues that predicate logic is the right language to express phenomena in both fields, and that feature trees constitute the canonical semantical model.

Feature trees [4] are possibly infinite, finitely branching trees, where the edges carry labels taken from some given set of feature symbols. Features are functional, i.e., all edges departing from the same node have different labels. In contrast to the usual definition from the literature, we will omit in this paper the labeling of nodes by so-called sort symbols.

[Figure 1: A Feature Tree]

Different first order languages have been studied. The most basic class of predicate symbols, which is contained in any first order feature language, consists of binary relation symbols x[f]y for every feature symbol f. In the standard model of feature trees, the denotation of this predicate is "y is the direct subtree of x under edge f". The feature theory FT [4] is an axiomatization of feature trees based exactly on this language (besides equality and sort predicates). The feature theory CFT [28] uses a much more expressive language which extends FT by so-called arity constraints x{f1, ..., fn}. The denotation of such a constraint in the standard model is "x has exactly the edges labeled by f1, ..., fn departing from its root". Furthermore, regular path expressions (which contain an implicit existential quantification over feature paths, [7]) and subsumption ordering constraints [15, 14] have been considered. Finally, the language F [31] contains a ternary feature predicate x[y]z. Using quantification over features, all other feature theories can be embedded in the theory F [31, 6].

With the establishment of first order logic as feature description language, concrete problems concerning logical theories of feature trees have been attacked. After fixing an appropriate predicate logic language, these problems can be phrased as decision problems of certain, syntactically characterized fragments of the theory of feature trees. Satisfiability of existentially quantified conjunctions of atomic constraints (so-called basic constraints) and entailment between basic constraints is efficiently decidable for the languages FT [4] and CFT [28], and satisfiability of regular path constraints [7] and weak subsumption constraints [14] is decidable, while it is undecidable for subsumption constraints [13]. These considerations lead to the more general question whether the full first order theories of these languages are decidable. An affirmative answer was given for the case of FT [9] and CFT [10, 8]. Not surprisingly, the full first order theory of feature trees over F is undecidable, although the existential fragment of the theory is NP-complete even with arity constraints as additional primitive notions [31].

The reason for the undecidability of F is the fact that it allows one to quantify over the direct subtrees of a tree. Taking x ◁ y ("x is a direct subtree of y") as an abbreviation of ∃f y[f]x, we can define for trees x with only finitely many different subtrees (rational trees) the predicate "x is a subtree of y" by

  y ◁ z ∧ ∀y1, y2 (y1 ◁ y2 ◁ z → y1 ◁ z) → x ◁ z

Here, the idea is to "abuse" feature trees as sets, taking the direct subtrees of a tree as the elements of a set. Note that z fulfills the hypothesis in the above formula exactly if the "set" z contains y and is transitive, and that hence the transitive closure of y is the smallest z which satisfies the hypothesis. Thus, we can easily show (e.g., with the method of [30]) the undecidability of the theory of feature trees in the language of F.

Consequently, in order to get a decidable sub-theory of F, we have to restrict the use of quantification over features. The first contribution of this paper is the formulation of a decidable theory of feature trees which lies between CFT and F. The idea is to allow quantification over features only in order to state which features are defined, but not to quantify over the direct subtrees of a tree. More precisely, we will define the restricted theory of feature trees as the set of formulae where t in x[t]y is always a ground term, but where still atomic constraints xf# ("f is defined on x"), where f may be a variable, are allowed.
This situation is similar to Process Logic, where unrestricted quantification over path and state variables lead immediately to an undecidable validity problem, while a syntactic restriction leads to decidable sub-logic [22]. This restricted theory still is an essential extension of the theory of CFT. It extends CFT, since we can encode an arity constraint xff1; : : : ; fng as 8f (xf# $ Wni=1 f ?= fi). Beyond the expressivity of CFT, we can make statements about the arities of trees, for instance we can say that the arity of x is contained in the arity of y by 8f (xf# ! yf#) As another example, the following formula expresses that x has exactly 3 features: 9f1; f2; f3 f1 ?6 = f2 ^ f2 ?6 = f3 ^ f1 ?6 = f3 ^ 8g (xg# $ [g ?= f1 _ g ?= f2 _ g ?= f3]) From these examples, one gets the idea that the theory of sets of feature symbols is hidden in our restricted theory of feature trees. This leads to the second contribution of our approach, which we now explain in three steps. The first step is to realize that, in order to decide the validity of first order sentences over feature trees, we can save some work if we employ an existing decision algorithm for the theory of finite sets over infinitely many constants. Since this theory is easily encoded in the theory WS1S, the weak second order monadic theory of one successor function, the existence of such an algorithm follows immediately from B uchis result on the decidability of WS1S [11]1. 1The reader shouldn't be confused by the fact that we are apparently mixing first and second order structures. A second order structure can always be considered as a two-sorted first order structure, with one sort for the elements, and another sort for the sets. Only in the context of classes of structures makes it really sense to distinguish first order from second order structures. The following examples give an idea why logical statements involving feature trees can be reduced to logical statements on sets of features. Let x; y; z denote variables ranging over feature trees, f; g; h range over features and F; G; H range over sets of features. First, the formula 9x; y 8f (xf# ! yf#) ^ :8g (yg# ! xg#) does not involve any tree construction. This formula is just about the sets of features defined at the roots of x and y, and hence can be translated to: 9F; G 8f (f ?2 F ! f ?2 G) ^ :8g (g ?2 G ! g ?2 F ) Formulae like the above subformula (8f (xf# : : :), where the feature tree x is only used as a set of features, will be called primitive formulae. The formula 9x (x[a]x ^ x[b]y ^ :xh#) (1) where a and b are two different constants, is clearly satisfiable if we can find a set which contains a and b, but not h. Hence, (1) can be reduced to 9F (a ?2 F ^ b ?2 F ^ :h ?2 F ) (2) In the setting we have defined so far, this is equivalent to a ?6 = h ^ b ?6 = h. The next step is to generalize this idea to the situation where we have some structure of feature symbols and finite sets of feature symbols given, and to build the feature trees with the feature labels we find in the given feature label structure. Hence, we now obtain a family of feature tree structures, indexed by the feature label structures. This is a wellknown situation, for instance in constraint domains for programming languages [26], where feature constraints are not isolated but come in combination with other constraint domains like numbers and words. Hence, our decision procedure now decides the validity of a sentence of the feature tree theory relative to the theory of the feature labels. 
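To make the primitive relations and the arity statements from this section concrete, here is a minimal sketch in a toy encoding of my own (finite feature trees only, written as nested Python dicts from feature labels to subtrees, with arities simply the key sets; this is not the paper's formal model, which also allows infinite trees and arities constrained by the parameter structure).

```python
# Toy encoding (assumed, not from the paper): a finite feature tree is a dict
# mapping feature labels to subtrees; the arity of a tree is its set of root keys.
x = {"a": {}, "b": {"c": {}}}
y = {"a": {}, "b": {}, "d": {}}

def subtree(t, f):            # the relation t[f]s: return s, or None if f is undefined at the root
    return t.get(f)

def feature_defined(t, f):    # the relation tf#
    return f in t

def arity(t):                 # the set F with arity(t, F)
    return set(t)

print(subtree(x, "b"))                       # {'c': {}}
print(feature_defined(x, "h"))               # False
print(arity(x) <= arity(y))                  # "the arity of x is contained in the arity of y"
print(len(arity(y)) == 3)                    # "y has exactly 3 features"
```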
As a consequence, our feature tree theory is decidable if the feature label theory is. There is only little to do in order to adopt the reduction procedure to this more general case. The only problem is now that two different ground terms, like the constants a and b in example (1) above, not necessarily denote semantically different elements. Hence we have to consider the two cases a ?= b and a ?6 = b. In the first case, a ?= b and the functionality of features yield x ?= y. Hence, we can eliminate x, and obtain for the first case a ?= b ^ y[a]y ^ y[b]y ^ :yh# In the second case, we get the same reduction as before: a ?6 = b ^ 9F (a ?2 F ^ b ?2 F ^ :h ?2 F ) The feature label structure may be equipped with operations and predicate symbols of their own, which of course can be used in the feature tree structure as well. We could for instance take as feature label structure WS2S, that is the structure of words over the alphabet fa; bg, finite sets of words, and successor functions for every symbol of the alphabet. Since the membership predicate in any regular language is definable in the theory of this structure, we can express in this feature tree theory for any regular language L that the arity of some x is contained in L. So far, feature trees have been finitely branching trees, that is we took as possible arities all finite sets of features. The third step is to generalize this to an arbitrary notion of arities. That is, we assume that the feature label structure comes with a notion of sets, where we only require that there are at least two different sets. From this, we construct the feature trees such that the arities of the trees are always sets of the given feature label structure. For instance, we get as before the finitely branching feature trees if the feature label structure contains all the finite sets of feature trees. If we take as feature label structure natural numbers and all the initial segments of the natural numbers, that is sets of the form f1; : : : ; ng, we get a structure of feature trees where at every node the edges are consecutively numbered. In example (1) above, this has the consequence that we cannot reduce (2) to a ?6 = h ^ b ?6 = h. Instead, the satisfiability of (3) depends on the theory of the feature label structure. As another example, consider 9x; y (x[f ]x ^ y[f ]y ^ x ?6 = y) (3) Here, we will make a case distinction: Either both x and y have the arity ffg, that is f is the only feature defined, or at least one of them has a greater arity. In the first case both variables are called tight, in the second case a variable with arity greater than ffg is called sloppy. Intuitively, a sloppy variable has features for which there are no constraints. For the case that both variables are tight, the formula can not be satisfied. This is a consequence of the fact that the formula x[f ]x ^ arity(x; ffg), a so-called determinant [28], has a unique solution. In the other case, the formula is clearly satisfied, since we can use the unconstrained features of x, resp. y, to make both values different. Hence, we can translate (3) to the formula which states that this other case is indeed possible: 9F; g (f ?2 F ^ :g ?2 F ) (4) Up to now, we have been talking about the feature tree structures defined upon some feature label structure. The quantifier elimination procedure we are going to present will be based on an axiomatization FX only, no other properties of the structures will be used for the justification of the procedure. 
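Before turning to the axiomatization, here is a brute-force illustration of the reduction of example (1) to formula (2) discussed above, for the finitely branching case where arities range over finite subsets of a small label universe. The helper names are hypothetical and not part of the paper's procedure; the point is only that satisfiability of (2) depends on how the label constants are identified.

```python
from itertools import chain, combinations

def finite_subsets(universe):
    """All finite subsets of a (small, assumed) universe of feature labels."""
    u = list(universe)
    return (set(c) for c in chain.from_iterable(combinations(u, r) for r in range(len(u) + 1)))

def formula_2_satisfiable(a, b, h, universe):
    """Is there an arity F with a in F, b in F and h not in F?  (Formula (2).)"""
    return any(a in F and b in F and h not in F for F in finite_subsets(universe))

# Case with a, b, h denoting pairwise distinct labels: satisfiable.
print(formula_2_satisfiable("a", "b", "h", {"a", "b", "h"}))   # True
# Case where h denotes the same label as a: unsatisfiable.
print(formula_2_satisfiable("a", "b", "a", {"a", "b"}))        # False
```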
The axiomatization is not subject to the syntactic restriction we imposed on the input formulae to the procedure, that is the axioms may contain subformulae x[t]y where t is non-ground. The quantifier elimination procedure proposed in this paper takes another road than the quantifier eliminations which have been given for the feature theories FT [9] and CFT [8]. We believe that, in the case of FT and CFT, our procedure is simpler than the existing ones for these theories. The difference lies in the way how the procedure deals with the fact that these feature theories themselves do not have the property of quantifier elimination. A theory T is defined to have the property of quantifier elimination [18], if for every variable x and atomic formulae OE1; : : : ; OEn there is a quantifier-free formula such that T j= 9x (OE1 ^ : : : ^ OEn) $ . An effective procedure to compute this yields immediately a decision procedure for T , provided does not contain new free variables, and provided True and False are the only quantifier-free formulae. A simple counterexample, showing that for instance FT does not have the property of quantifier elimination, is 9x (y[l]x ^ xk#) (5) We can not simply eliminate x, since we need it to express an important property of the free variable y, which we must not drop. The classical way to solve this problem is to extend the language, such that non-reducible formulae like (5) become atomic formulae in the extended language. In our example, this means that we have to add so-called path-constraints like y(lk)# to the language. This solution was chosen in [9] and [8]. We will use another idea: We exploit the functionality of features to trade in the above situation an existential quantifier for a universal quantifier, and transform (5) into: yl# ^ 8x (y[l]x ! xk#) We can benefit from this quantifier-switching if we consider the elimination of blocks of quantifiers of the same kind. This idea has already been used, for instance, in [21, 12]: We consider formulae in prenex normal form, for instance 9 ? ? ? 98 ? ? ? 89 ? ? ? 9OE where OE is quantifier-free. If we can transform 9 ? ? ? 9OE into a formula of the form 8 ? ? ? 8 for some quantifier-free , then we have reduced the number of quantifier alternations from 2 to 1, although the total number of quantifiers might have increased. The rest of the paper is organized as follows: Section 2 fixes the necessary notions from predicate logic. In Section 3, we define by an axiom the class of feature label structures, which will be called admissible parameter structures in the rest of the paper. In Section 4 we construct the standard model of feature trees over some arbitrary admissible parameter structure, present the axiomatization FX and show that the feature tree structure is a model of FX. Some basic properties of the axiomatization FX are stated in Section 5. The overall structure of the quantifier elimination procedure is presented in Section 6, the details are given in Section 7. 2 Preliminaries We consider many-sorted predicate logic with equality. We use the standard shortcuts from predicate logic: ~8 OE is the universal closure of OE. We write 9?x OE, where ?x = (x1; : : : ; xn) is a list of variables, as abbreviation for 9x1 : : : 9xn OE (8? x OE is defined accordingly.) We also use sometimes the notation 9X OE, where X is a finite set of variables, for 9?x OE where ?x is some linear arrangement of X . 
Instead of writing the sort with every quantified variable, as in 8x 2 S : : :", we will introduce naming conventions which allow us to directly read off the sort of a variable. As usual, variables may be decorated with sub-and superscripts. Lists of variables will be denoted with an overstrike as in ?x. The junctors ^; _ take precedence over (bind tighter than) $;!. Negation : and quantors bind tightest. It is understood that conjunction is commutative and associative. Consequently, we identify a conjunction of formulae with the multiset of its conjuncts. We use notions like 2 OE or ? OE, where OE is a conjunction, accordingly. We write the negation of x ?= y as x ?6 = y. We consider equality as symmetrical, that is we identify x ?= y with y ?= x (and hence, x ?6 = y with y ?6 = x). The reader should be aware, that x ?= y and x ?6 = y are formulae of our object logic, while x = y, resp. x <> y, is a mathematical statement, expressing that the two variables x, y are syntactically identical, resp. distinct. fr(OE) is the set of free variables of OE, OE[y=x] denotes the formula that is obtained from OE by replacing every occurrence of x by y, after possibly renaming bound variables to avoid capture. An assignment is a X-update of an assignment , where X is a set of variables, if (x) = (x) for all variables x 62 X . We write [x1 7! a1; : : : ; xn 7! an] for the fx1; : : : ; xng-update of which assigns ai to xi, respectively. 3 Admissible Parameter Structures In this section, we specify the class of parameter structures which we want to allow as a basis for the construction of feature trees. Definition 3.1 (Admissible parameter signature) The signature ? = hS?; F?; R?i is an admissible parameter signature, if S? contains at least the two sorts Feat and Set, and R? contains at least the relational symbol Feat ?2 Set, that is the binary infix relation symbol ?2 of profile Feat; Set. The sort Feat is intended to denote the features, and the sort Set is intended to denote the sets of features. In this sense, ?2 can be thought of as the usual elementship relation. Small letters from the middle of the alphabet f; g; h; : : : are variables of sort Feat, and capital letters from the middle of the alphabet F; G; H; : : : are variables of sort Set. The only requirement on the class of admissible parameter structures is, that they contain at least two (observationally) different sets: (S2) 9F; G; f (f ?2 F ^ :f ?2 G) Definition 3.2 (Admissible Parameter Structure) Let ? be an admissible parameter signature. We call a ?-structure B an admissible parameter structure, if B j= (S2). This is in two respects weaker than what is usually stated by axioms systems of second order logic [5]. First, we don't require extensionality, that is two different sets may have the same elements. Second, axiom (S2) is much weaker than the usual comprehension axiom of second order logic which states that every formula denotes a set. Note that, as a consequence of (S2), every admissible parameter structure contains at least one element of sort Feat. Examples of admissible parameter signatures and algebras are 1. The signature ?C consists of an infinite set C of Feat-constants and the ?2 predicate symbol. The algebra BC assigns C to Feat, every constant of C to itself, the powerset over C to Set, and the elementship relation to ?2. 2. ?F and BF are defined as above with the only difference that Set is interpreted as the class of finite sets over C. 3. 
The signature ?N contains the constant of sort Feat, the unary function symbol succ of profile Feat ! Feat, and ?2. The algebra BN assigns the set of natural numbers to Feat, the number to the constant and the successor function to succ. Set denotes the class of initial segments of natural numbers (that is, sets of the form f1; : : : ; ng), and ?2 denotes elementship. 4. The signature ?S contains the constant ffl of sort Feat, finitely many function symbols succi, 1 <= i <= n, of profile Feat ! Feat, two predicate symbols <=pre and <=lex, and ?2. The algebra BS assigns the set f1; : : : ; ng? to Feat, the empty word to ffl, the function >=x:xn to succn, the prefix (resp. lexical) ordering to <=pre, resp. <=lex, the powerset of f1; : : : ; ng? to Set, and elementship to ?2. 4 Feature Tree Structures In this section we give the definition of a standard model of features trees over some given admissible parameter structure. We also present a set of axioms for feature trees. We will prove, along the presentation of the axioms, that the standard model of feature trees is indeed a model of this axiomatization. No other properties of the feature tree model than the axiomatization will be used for the justification of the quantifier elimination procedure to be presented in the next sections. Definition 4.1 (Tree signature) For a given admissible parameter signature ?, we define the tree signature ?y = hS?y ; F?y ; R?yi by S?y = S? +[ fTreeg F?y = F? R?y = R? +[ fTree[Feat]Tree; Tree Feat#g In the standard model to be defined below, the sort symbol Tree denotes a set of trees. Small letters at the end of the alphabet (x; y; z : : :) denote Tree-variables. Note that the only Tree-terms are the Tree-variables, and that any ?y-formula without Tree-variables is in fact a ?-formula. We write the negation of xt# as xt". Definition 4.2 (Tree) For a set M , a set o ? M? of finite M-words is called a tree over M if it is prefix-closed, that is if vw 2 o implies v 2 o for all v; w 2 M?. T (M) denotes the set of trees over M . Note that every tree contains the empty word ffl and hence is non-empty, and that a tree may be infinite. This is of course the usual definition of trees|the tree in Figure 1, for instance, is fffl; a; b; ad; bc; ba; ada; adc; add; bac; badg. Definition 4.3 (Admissible Tree) For an admissible parameter structure B, an admissible tree over FeatB is a tree o 2 T (FeatB), such that for all v 2 o exists M 2 SetB with: ?2 BM , v 2 o AT (FeatB) denotes the set of admissible trees over FeatB. Intuitively, this means that the set of features defined at some node of an admissible tree must be licensed by the denotation of Set in the admissible parameter structure B. If we take, e.g., an admissible structure B where SetB is the class of finite subsets of FeatB, then AT (FeatB) contains exactly the finitely branching trees over FeatB. Definition 4.4 (Feature tree structure) For any admissible ?-structure B, we define the ?y-structure By by 1. By j? = B, 2. TreeBy = AT (FeatB), 3. o [ ]By oe iff oe = fv j v 2 og, 4. o #By iff 2 o . Hence, By is a conservative extension of B. The first axiom gives an explicit definition for the ? ? # predicate: (#) 8x; f (xf# $ 9y x[f ]y) The next axiom scheme expresses that every feature is functional: (F) 8x; y; z (x[t]y ^ x[t]z ! y ?= z) where t is ground. Syntactic Convention arity(x; F ) := 8f (xf# $ f ?2 F ) If ?x = (x1; : : : ; xn) and ?F = (F1; : : : ; Fn), we write arity(?x; ?F ) for Vni=1 arity(xi; Fi). 
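Definitions 4.2–4.4 above can be prototyped directly for finite trees. The sketch below uses my own function names and ignores two aspects of the paper's model (trees may be infinite, and admissible trees must have their node-wise feature sets licensed by the Set sort of the parameter structure): a tree is a prefix-closed set of feature words, the subtree under f is {v : fv in tau}, and tau f# holds when f labels an edge at the root.

```python
# Trees as prefix-closed sets of feature words; words are tuples of labels.
def is_tree(words):
    """Definition 4.2: every prefix of a member is a member (so () is always included)."""
    return all(w[:i] in words for w in words for i in range(len(w) + 1))

def subtree(tau, f):
    """The unique sigma with tau[f]sigma, i.e. sigma = { v : f v in tau }; None if f is undefined."""
    if (f,) not in tau:
        return None
    return frozenset(w[1:] for w in tau if w and w[0] == f)

def feature_defined(tau, f):       # the relation tau f#
    return (f,) in tau

t = {(), ("a",), ("b",), ("b", "c"), ("b", "c", "d")}   # an example tree (mine, not Figure 1)
assert is_tree(t)
print(feature_defined(t, "a"), feature_defined(t, "c"))  # True False
print(sorted(subtree(t, "b")))                           # [(), ('c',), ('c', 'd')]
```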
The next axiom states that every tree has an arity, and hence reflects the fact that we consider admissible trees only. (A) 8x 9F arity(x; F ) By construction, we get immediately: Proposition 4.5 For any admissible ?-structure B, we have By j= (#); (F ); (A). Next next axiom scheme expresses that certain formulae indeed have a solution in the domain of feature trees. Definition 4.6 (Graph, Constrained variable) A conjunction of formulae of the form x[t]y is a called a graph. For a graph , let co( ) := fx j x[t]y 2 for some t and yg be the set of variables constrained by . Syntactic Convention For a graph and variable x, we define Fx := ft j x[t]y 2 for some variable yg ? := ft ?6 = s j t <> s and t; s 2 Fx for some xg For instance, := x[a(f )]y ^ x[b(g; a(f ))]z ^ y[a(a(f ))]x ^ y[a(a(f ))]y is a graph with co( ) = fx; yg, Fx = fa(f); b(g; a(f))g, F y = fa(a(f))g, F z = ;, and ? = a(f) ?6 = b(g; a(f )). (E) ~8 B@? ^ a ?2 Fi CA ! 9x1; : : : ; xn ^ arity(xi; Fi) where is a graph with co( ) = fx1; : : : ; xng. An example of axiom scheme (E) is 8z; f1; f2; g; F; G (f1 ?6 = f2 ^ f1 ?2 F ^ f2 ?2 F ^ g ?2 G ! 9x1; x2 (x1[f1]x2 ^ x1[f2]z ^ x2[g]x1^ arity(x1; F ) ^ arity(x2; G))) Proposition 4.7 T (M) with the subset relation is a cpo. (see, e.g., [16] for definition and basic properties of cpos). Note, that in general AT (M) does not constitute a sub-cpo of T (M ). Obviously, the set of compact elements of T (M) are exactly the finite sets in T (M ), and T (M) is an algebraic cpo. Lemma 4.8 For any admissible ?-structure B, we have By j= (E). Proof: (Sketch) Let be a graph, co( ) = fx1; : : : ; xng, and By; j= ? ^Vni=1 V a2Fxi a ?2 Fi. We construct o1; : : : ; on 2 AT (FeatB), such that By; [x1 7! o1; : : : ; xn 7! on] j= ^ ^arity(xi; Fi) (7) We define the operator ?: (T (FeatB))n ! (T (FeatB))n by its n components pri ffi ?. For given i, let fxi[t1]z1; : : : ; xi[tm]zmg be the set of atoms in which constrain xi. pri ffi ?(?1; : : : ; ?n) = ffflg [ 1oe1 [ : : : [ moem [ f 2 FeatB j ?2 B (Fi)g where j is the evaluation of tj in B; , and where we define oej := ?k if zj = xk for some 1 <= k <= n, and otherwise oej := (zj). As usual j oej is an abbreviation for f jv j v 2 oej g. ? is obviously continuous, hence we can define (o1; : : : ; on) as the least fixed point of ?. By construction, oi 2 AT (FeatB) for all i. Since By; j= ? , all j for given i are different. Hence, (7) holds. 2 As an example of this construction, consider the formula (6). Let (z) = fffl; e; eeg, (f1) = a, (f2) = b, (g) = c, (F ) = fa; bg and (G) = fc; dg. In this case, we define ? by pr1 ffi ?(?1; ?2) = ffflg [ a?2 [ bfffl; e; eeg [ fa; bg = fffl; b; be; beeg [ a?2 pr2 ffi ?(?1; ?2) = ffflg [ c?1 [ fc; dg = fffl; dg [ c?1 The least fixed point of ? is (L1; L2), where L1 is the prefix-closure of (ac)?(bee [ ad), and L2 is the prefix-closure of (ca)?(d [ cbee). Syntactic Convention Let M be a finite set of Feat-terms. arity(x; M) := 8f (xf# $ _ f ?= a) As above, this notion generalizes to arity(?x; ?M ). Definition 4.9 A determinant ffi is a formula ^ ^ x2co( ) arity(x; Fx) where is a graph and has only free variables of sort Tree. In other words, for every constraint x[t]y in a determinant, the term t must be ground. For instance, from the following three formulae x[a(c)]y ^ x[b(d)]z ^ y[a(a(c))]x ^ arity(x; fa(c); b(d)g)^ arity(y; fa(a(c))g) x[a(f )]y ^ arity(x; fa(f)g) x[a(c)]y ^ x[b(d; a(c))]z ^ arity(x; fa(c)g) only the first one is a determinant (since f denotes a variable). 
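The least-fixed-point construction in the proof of Lemma 4.8 can be replayed numerically for the example following (6). The snippet below is my own companion code (the names psi and MAXLEN and the length cut-off are not in the paper): it iterates the operator from that proof on word sets truncated to a fixed length, and the result agrees, up to that length, with the prefix-closures of (ac)*(bee ∪ ad) and (ca)*(d ∪ cbee) described in the text.

```python
MAXLEN = 6
Z = {"", "e", "ee"}                       # the value assigned to the global variable z

def psi(t1, t2):
    """One application of the operator from the proof of Lemma 4.8, cut at MAXLEN."""
    cut = lambda s: {w for w in s if len(w) <= MAXLEN}
    new1 = {""} | {"a" + w for w in t2} | {"b" + w for w in Z} | {"a", "b"}
    new2 = {""} | {"c" + w for w in t1} | {"c", "d"}
    return cut(new1), cut(new2)

t1, t2 = set(), set()
for _ in range(MAXLEN + 2):               # enough iterations to saturate up to MAXLEN
    t1, t2 = psi(t1, t2)

print(sorted(t1, key=lambda w: (len(w), w)))   # prefixes of (ac)*(bee + ad), length <= 6
print(sorted(t2, key=lambda w: (len(w), w)))   # prefixes of (ca)*(d + cbee), length <= 6
```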
The last axiom scheme expresses that determinants have at most one solution in the constrained variables. Syntactic Convention 9<=1?x OE is an abbreviation for 8?x; ?y (OE(?x) ^ OE(?y) ! ?x ?= ?y) where ?y is some list of distinct variables as long as ?x, and disjoint to fr(OE). 9<=1 ?x OE reads here is at most one tuple ?x, such that OE". (U) ~8 (? ! 9<=1co(ffi) ffi ) where ffi is a determinant. An example of (U) is 8z (a1 ?6 = a2 ! 9<=1x; y (x[a1]y ^ x[a2]z ^ y[b]x ^ arity(x; fa1; a2g) ^ arity(y; fbg))) Note, that (U) does not state that a determinant always has a solution. In the above example, it might be the case that, e.g., the set" fbg does not exist, that is that 9F 8x(x ?2 F $ x ?= b) does not hold in the parameter structure. In this case, the determinant does not have a solution due to axiom (A). Lemma 4.10 For any admissible ?-structure B, we have By j= (U). Proof: (Sketch) We split the determinant ffi into ^ ae, where is a graph and ae is a conjunction of arities. As in the proof of Lemma 4.8, let By; j= ?ffi, and let ? be the operator defined by . By the construction given in the proof of Lemma 4.8, By; [x1 7! o1; : : : ; xn 7! on] j= ffi iff (o1; : : : ; on) is a fixed point of ?. We show, that ? has only one fixed point. Let (o1; : : : ; on); (oe1; : : : ; oen) be two fixed points of ?. Define o ji := fv 2 oi j length(v) = jg for any j >= 0, and analogously for oeji . One shows easily by induction on j that o ji = oeji for all i; j. Taking the limits of the two chains, the claim follows immediately. 2 Definition 4.11 The axiom system FX consists of the axioms (S2), (#), (F), (A), (E) and (U) Corollary 4.12 For every admissible parameter structure B, we have that By j= FX. 5 Some Properties of FX 5.1 Determinants As an immediate consequence of (U) and the definition of 9<=1 , we get Proposition 5.1 For every formula and determinant ffi , we have FX j= ~8 (?ffi ^ 9co(ffi) (ffi ^ ) ! 8co(ffi) (ffi ! )) This prominent role of determinants is the heart of the entailment check for the feature theory CFT [28]. 5.2 Primitive Formulae Definition 5.2 The set of primitive formulae is defined by the grammar p ::= oe j xt# j p ^ p j p _ p j :p j 8O p j 9O p where oe denotes an arbitrary ?-formula, and where O denotes a variable not of sort Tree. In other words, a primitive formula is a ?y-formula that does not contain a Tree-quantifier, and does not contain an atom of the form x ?= y or x[t]y. A primitive formula without free Tree-variables is in fact a ?-formula. Intuitively, in a primitive formula, the sort Tree is only used to express statements that could be as well expressed using sets. The following definition makes this intuition Definition 5.3 We define inductively OE[F ==x], the replacement of a Tree-variable x by a Set-variable F in a primitive formula OE. xt#[F==x] = t ?2 F a[F==x] = a if a is an atomic formula different from xt# for all t (:OE)[F ==x] = :(OE[F ==x]) (OE1 ^ OE2)[F ==x] = OE1[F ==x] ^ OE2[F ==x] (9O OE)[F ==x] = 9O (OE[F ==x]) if F <> O (9F OE)[F ==x] = 9G ((OE[G=F ])[F ==x]) if G 62 fr(OE) Intuitively, OE[F ==x] abstracts the feature tree x in OE to a set F . This operation is an abstraction since it drops"all the subtrees of a feature tree and just keeps the information about the features defined at the root. Again, this notation generalizes to simultaneous replacement [ ?F ==?x]. For instance, OE := xa(f)# ^ 8g (xa(g)# ! xb(g)#) is a primitive formula, and OE[F ==x] = a(f) ?2 F ^ 8g(a(g) ?2 F ! 
b(g) ?2 F ) The following lemma expresses that the definition of OE[F ==x] meets the intuition of replacing a Tree-variable x by a Set-variable F . Proposition 5.4 Let OE be a primitive formula. Then j= ~8 arity(?x; ?F ) ! (OE $ OE[ ?F ==?x]) It would be possible to extend the definition of a primitive formula and of OE[F ==x] to allow also for Tree-quantifiers. The definition given here is sufficient for the quantifier elimination as described below. 6 The Main Theorem We first define the class of restricted formulae, which is the class of input formulae for our quantifier elimination procedure. Definition 6.1 (Restricted formula) A ?y-formula is called a restricted formula, if in every subformula x[t]y the term t is ground. In the following, we will also speak of restricted sentences, the restricted theory of a ?ystructure, and so on. Theorem 6.2 (Main Theorem) There is an algorithm which computes for every restricted ?y-sentence oe a ?-sentence with FX j= oe $ . Before we can discuss the top-level structure of the proof, we need some additional concepts which describe the intermediate results we get during the quantifier elimination. Definition 6.3 (Molecule) The set of molecules is defined by the following grammar: m ::= x ?= y j x ?6 = y j x[t]y j :x[t]y j p where p is a primitive formula, and where t is a ground term. Hence, any molecule without free Tree-variables is in fact a primitive formula without free Tree-variables, and hence a ?-formula. Definition 6.4 (Basic Formula) A basic formula is a ?y-formula of the form 9?x (m1 ^ : : : ^ mn) where m1; : : : ; mn are molecules. A variable is local to a basic formula 9?x OE if it occurs in ?x, and global otherwise. Let 9?x OE be a basic formula, and let be the greatest graph contained in OE, that is is the set of all molecules of the form x[t]y contained in OE. Then we define FxOE = Fx. Theorem 6.2 follows from the following lemma: Lemma 6.5 (Main Lemma) There is an algorithm which computes for every basic formula OE an universally quantified Boolean combination of molecules, such that 1. FX j= ~8 (OE $ ) 2. fr( ) ? fr(OE) 3. if fr(OE) = ;, then is a boolean combination of molecules. We borrow the technique of proving Theorem 6.2 from Lemma 6.5 from [21], [12]. Proof of Theorem 6.2: It is sufficient to consider only sentences oe in a weak prenex normal form, where the matrix is just required to be boolean combination of molecules (instead of a boolean combination of atoms). We proceed by induction on the number n of quantifier blocks in the quantifier prefix. If n = 0, then since oe is a sentence, it does not contain any Tree-variables and hence is a ?-sentence. Let n >= 1 and oe = Q9?x OE, where Q is a (possibly empty) string of quantifiers, not ending with 9, and OE is a Boolean combination of molecules. We transform OE into disjunctive normal form and obtain an equivalent formula Q9?x (OE1 _ : : : _ OEn) where every OE is a conjunction of molecules. This is equivalent to Q(9?x OE1 _ : : : _ 9?x OEn) where every 9?x OEi is a basic formula. Using (1) of Lemma 6.5, we can transform this equivalently into Q(8?y1 1 _ : : : _ 8?yn n) where every i is a Boolean combination of molecules, and where all ?yi are empty if Q is the empty string (because of (3) in Lemma 6.5). After possibly renaming bound variables, this can be transformed into the sentence Q8?z , where is Boolean combination of molecules. By condition (2) of Lemma 6.5, Q8?z is again a sentence. Since the number of quantifier alternations in Q8?x is n ? 
1, we can now apply the induction hypothesis. If the innermost block of quantifiers consists of universal quantifiers, we consider the negation :oe of the sentence (which now has an existential innermost block of quantifiers) and transform it into a restricted sentence . Consequently, FX j= oe $ : . 2 Corollary 6.6 If B is an admissible ?-structure, then the restricted theory of By is decidable relative to the theory of B. Note that all four admissible parameter structures introduced at the end of Section 3 have a decidable first-order theory. 1. We can interpret the theory of BC in S1S, the monadic second order theory of natural numbers with successor. The decidability of the theory of BC follows from B uchis result [11] on the decidability of S1S. 2. Analogously, the decidability of the theory of BF follows from the decidability of WS1S, the weak monadic second order theory of the natural numbers with successor. The decidability of WS1S is an easy corollary of [11], since the finite sets are definable in S1S. 3. Decidability of the theory of BN follows again from [11], since the initial fragments of natural numbers are definable in S1S. 4. Definability of the theory of BS follows from Rabins celebrated result [23] on the decidability of S2S, the monadic second order theory of two successor functions. Note that the prefix relation and the lexical ordering can be defined in S2S [29]. Corollary 6.7 The restricted theory of By, where B is one of BC, BF , BN , BS, is decidable. 7 The Reduction We now prove Lemma 6.5. Our goal is to eliminate, by equivalence transformations w.r.t. FX , all the quantifiers of sort Tree, taking care of the fact that we don't introduce new variables. This will be achieved by transformation rules which transform basic formulae into combinations of basic formulae. To make this formal, we introduce the class of complex formulae (see Figure 7 for an overview of the different syntactic classes of formulae): ?-f. - ^; _; : 8; 9; xt# x ?= y, x ?6 = y, x[t]y, :x[t]y XXXXXXXz primitive ???? molecule - 9; ^ basic - 8; ^; _ complex Figure 2: Classes of formulae Definition 7.1 (Complex formula) The set of complex formulae is defined by the following grammar: F ::= 8x F j F ^ F j F _ F j hbasic formulai Note that this fragment, by closure of the set of molecules under negation, also contains constructions like molecule1 ^ : : : ^ moleculen ! basic formula The transformation rules always have a basic formula in the hypothesis. Such a rule can be applied to any maximal basic formula occurring in a complex formula. The maximality condition means here, that we have to use the complete existential quantifier prefix. If a complex formula does not contain any basic formula, it can be easily transformed into a universal quantified boolean combination of molecules by moving universal quantifiers outside. Definition 7.2 (Quasi-solved from) A basic formula 9?x is a quasi-solved form, if 1. does not contain a molecule x ?= y or :x[t]y, 2. if x ?6 = y 2 , then x <> y, and x 2 ?x or y 2 ?x. 3. if x[t]y 2 , then x 2 ?x, 4. if x[t]y ^ x[t]z ? , then y = z. 5. if the ground Feat-terms t, s occur in and t <> s, then t ?6 = s 2 . 6. if x 2 ?x, then arity(x; FxOE ) 2 or :arity(x; FxOE ) 2 . 7.1 Transformation into Quasi-solved Form The goal of the rules in Figure 3 is to have only basic formulae which are quasi-solved forms. Proposition 7.3 The rules described by (SC), (E1), (E2), (IE1), (FI) are equivalence transformations in every structure. 
(Sc) 9?x (m ^ OE) m ^ 9?x OE fr(m) ?x = ;; m is not a primitive formula (E1) 9?x; x (x ?= y ^ OE) 9?x OE[y=x] y <> x (E2) 9?x (x ?= x ^ OE) 9?x OE (IE1) 9?x (x ?6 = x ^ OE) (UD) 9?x (:x[t]y ^ OE) 9?x (xt" ^ OE) _ 9?x; z (x[t]z ^ z ?6 = y ^ OE) z new (FD) 9?x (x[t]y ^ x[t]z ^ OE) 9?x (x[t]y ^ y ?= z ^ OE) (FI) 9?x OE s ?= t ^ 9?x OE[t=s] _ 9?x (s ?6 = t ^ OE) the ground terms s, t occur in OE, s <> t (FQ) 9?x; x (y[t]x ^ OE) yt# ^ 8z (y[t]z ! 9?x OE[z=x]) y 62 ?x; z new Figure 3: The rule set (QSF) for quasi-solved forms. Lemma 7.4 (UD) describes an equivalence transformation in every model of the axioms schemes ("), (F). Proof: Axiom scheme (F) is equivalent to 8x; y (x[t]y $ 9z x[t]z ^ 8z (x[t]z ! z ?= y)) which can be transformed equivalently, using axiom (#), into 8x; y (:x[t]y $ xt" _ 9z (x[t]z ^ z ?6 = y)) As a consequence, we have for every formula OE with z 62 fr(OE): ("); (F ) j= ~8 (:x[t]y ^ OE $ (xt" ^ OE) _ 9z (x[t]z ^ z ?6 = y ^ OE)) and hence ("); (F ) j= ~8 (9?x (:x[t]y ^ OE) $ 9?x (xt" ^ OE) _ 9?x; z (x[t]z ^ z ?6 = y ^ OE)) 2 Lemma 7.5 (FD) describes an equivalence transformation in every model of the axiom scheme (F). Lemma 7.6 (FQ) describes an equivalence transformation in every model of the axiom scheme (F). Proof: We have for any formula with z 62 fr( ) (F ) j= ~8 9x (y[t]x ^ ) $ yt# ^ 8z (y[t]z ! [z=x]) Now we choose to be the formula 9?xOE. Since y 62 ?x the antecedent of the rule is equivalent to 9x (y[t]x ^ 9?x OE), and the claim follows immediately. 2 For this rule it is essential that t is ground. Lemma 7.7 The rule system (QSF) is terminating. Proof: We define a measure on basic formulae and show, that for every rule application the measure of every single basic formula generated is smaller than the measure of the basic formula being replaced. Termination then follows by a standard multiset argument. We assign a basic formula the tuple ( 1; 2; 3; 4), where 1. 1 is the number of :x[t]y molecules in , 2. 2 is the number of x[t]y molecules in , 3. 3 is the number of pairs (t; s) of Feat-ground terms, where both t and s occur in , but t ?6 = s does not occur in , 4. 4 is the total length of . It is now easily checked that the lexicographic ordering on these measures is strictly decreased by every application of a rule. The side condition of rule (Sc) guarantees that no formula of the form t ?6 = s, arity(x; FxOE ) or :arity(x; FxOE ) is moved out of a basic formula. 2 Corollary 7.8 There is an algorithm, which transforms any basic formula into an FX- equivalent complex formula, in which all basic formulae are quasi-solved forms. Proof: We compute a normal-from wrt. the ruleset (QSF), and from this compute a normal form wrt. the following rule: (ST) 9?x; x OE 9?x; x (OE ^ arity(x; FxOE )) _ 9?x; x (OE ^ :arity(x; FxOE )) arity(x; FxOE ); :arity(x; FxOE ) 62 OE 7.2 Eliminating quasi-solved forms with sloppy inequations In this section, we show how to eliminate quasi-solved forms with only benign inequations, in a sense to be explained soon. In the next subsection, we will explain how to get rid of nasty Definition 7.9 (Sloppy and Tight variables) Let 9?x be a basic formula. We call a local variable x 2 ?x tight (in 9?x ) if arity(x; Fx) 2 , and otherwise sloppy. By the definition of a quasi-solved form, :arity(x; FxOE ) 2 for every sloppy variable x. 
Definition 7.10 (Closure) For a graph , we define for every feature path ss of Featterms the relation ;ss as the smallest relation on fr( ) with x ;ffl x if x 2 fr( ) if x ;ss y and y[t]z 2 ; then x ;sst z We write x ; y if x ;ss y for some ss. For a graph and variables x; y, we define the closure of (x; y) hx; yi := f(u; v) 2 fr( )2 j x ;ss u and y ;ss v for some ssg In [8], the variable y with x ;ss y has been called the value j xss j of the rooted path xss in . Obviously, hx; yi can be computed in finitely many steps. Proposition 7.11 For every graph ,variables x; y and (u; v) 2 hx; yi we have (F ) j= ( ^ u ?6 = v ! x ?6 = y) Definition 7.12 (Sloppy and Tight inequations) Let 9?x be a basic formula. We call an inequation x ?6 = y sloppy (in 9?x ), if there is a (u; v) 2 hx; yi with x <> y, where at least one of u and v is sloppy. Otherwise, the inequation is called tight. The benign inequations handled in this section are the sloppy ones. The idea is that for sloppy variables, we have enough freedom to make them all different. In the following, we assume a partition of a quasi-solved form as 9?x ( ^ ? ^ ae), where denotes a graph, ? denotes a conjunction of inequations between Tree-variables, and ae denotes a primitive formula. Note that in this case, by the definition of quasi-solved forms, co( ) ? ?x, ? ? ae, and ? contains only non-trivial inequations which use at least one local variable. For a graph , we denote by ~ the formula obtained by replacing every atom x[t]y by xt#. Lemma 7.13 Let 9?x ( ^ ae) be a quasi solved form without inequations. Then FX j= ~8 9?x ( ^ ae) ! 9 ?F ((~ ^ ae)[ ?F ==?x]) where ?F is disjoint with fr(ae). Proof: Let A j= FX and be a valuation with A; j= 9?x ( ^ ae). Since j= ! ~, we get A; j= 9?x (~ ^ ae). Together with axiom (A), this means since ?F is disjoint with fr(ae), that A; j= 9?x; ?F (arity (?x; ?F ) ^ ~ ^ ae). With Proposition 5.4, we get A; j= 9 ?F ((~ ^ ae)[ ?F ==?x]) since ?x is disjoint with fr((~ ^ ae)[ ?F ==?x]). 2 Lemma 7.14 Let 9?x ( ^ ? ^ ae) be a quasi solved form, where ?F is disjoint with fr(ae) and ? consists of sloppy inequations only. Then FX j= ~8 9 ?F ((~ ^ ae)[ ?F ==?x]) ! 9?x ( ^ ? ^ ae) Proof: Let A j= FX and be a valuation with A; j= 9 ?F (~ ^ ae)[ ?F ==?x]. Let be an ?F - update of , such that A; j= (~ ^ ae)[ ?F ==?x]. Let Sl be the set of sloppy variables of ^ ae. Let f; F be new variables, and for every x 2 Sl, let nx >= 0, and fx; x0; : : : ; xnx be variables not occurring in ^ ae. Let Slf = ffx j x 2 Slg, and Slx = fxi j x 2 Sl; <= i <= nxg. We define an extension ? of by ? := ^ ^ (x[fx]x0 ^ x0[f ]x1 ^ : : : xnx?1[f ]xnx ^ arity(xnx ; F )) Hence, j= ? ! . By axiom (S2), there are a 2 FeatA and A; B 2 SetA with a ?2A A and a 6 ?2A B. We denote by ??x the extension of ?x by Slx, and by ??F an according extension of ?F . Hence, by definition of sloppyness, there is a Slf [ Slx [ ff; Fg-update of such that A; j= ?? ^ (~? ^ ae)[ ??F ==??x] Especially, (f) = a, (F i) = A if F corresponds to some xi with i <= nx, and (F i) = B if F corresponds to some xnx . Note, that ?? extends ? ? ae just by stating that fx is assigned a value different from all (ground) terms in Fx. By construction, A; j= a2Fxi? a 2 Fi. Hence, by axiom (E), there is an ??x-update 00 of , such that A; 00 j= ? ^ arity(??x; ??F ) Let (x) = 00(x) if x 2 ?x, and (x) = (x) otherwise. Hence, A; j= . By Proposition 5.4 and since ?F is disjoint with fr(ae), A; j= ae. 
Since there are infinitely many choices of $n_x$ for every $x \in Sl$, we can easily find values $n_x$ such that $\beta''(x) \neq \beta''(y)$ for every variable $y \in fr(\gamma \wedge \eta \wedge \rho)$ with $y \neq x$. Hence, by Proposition 7.11, $A,\alpha' \models \eta$. □

We are now ready to give the elimination rule for quasi-solved forms with benign inequations:

(IE2) $\exists\bar{x}\,(\gamma \wedge \eta \wedge \rho) \Longrightarrow \exists\bar{F}\,((\tilde\gamma \wedge \rho)[\bar{F}/\!/\bar{x}])$, if $\eta$ contains only sloppy inequations and $\bar{F} \cap fr(\rho) = \emptyset$

As an example of rule (IE2), consider
$\exists x,y,u\,(x[s]y \wedge u[s]v \wedge y[t]y \wedge x \not\doteq u \wedge arity(x,\{s\}) \wedge arity(u,\{s\}) \wedge \neg arity(y,\{t\}))$
$\Longrightarrow \exists F,G,H\,(s \dot\in F \wedge s \dot\in G \wedge t \dot\in H \wedge \forall f\,(f \dot\in F \leftrightarrow f \doteq s) \wedge \forall f\,(f \dot\in G \leftrightarrow f \doteq s) \wedge \neg\forall f\,(f \dot\in H \leftrightarrow f \doteq t))$
Here, $x \not\doteq u$ is a sloppy inequation since $y$ is a sloppy variable. From Lemma 7.14 and Lemma 7.13, we get immediately

Lemma 7.15 (IE2) describes an equivalence transformation in every model of FX.

Corollary 7.16 There is an algorithm which transforms any complex formula in which all basic formulae are quasi-solved forms containing only sloppy inequations into an FX-equivalent universally quantified boolean combination of molecules.

7.3 Eliminating tight inequations

In the closure of tight inequations, there are only inequations of type tight–tight or tight–global. We first show how to transform the quasi-solved form such that the only tight inequations are of type tight–global. Then, we show how to get rid of the tight–global inequations.

(IE3) $\exists\bar{x}\,(\gamma \wedge \eta \wedge x \not\doteq y \wedge \rho) \Longrightarrow \exists\bar{x}\,(\gamma \wedge \eta \wedge \rho)$, if there are tight variables $u, v$ with $(u,v) \in \langle x,y\rangle_\gamma$ and $F_u \neq F_v$

From Proposition 7.11, we get

Proposition 7.17 (IE3) describes an equivalence transformation on quasi-solved forms in every model of FX.

Proof: This is a consequence of condition (5) in the definition of a quasi-solved form. □

We say that a set $\eta$ of equations is closed under a graph $\gamma$ if, whenever $x \doteq y \in \eta$ and $(u,v) \in \langle x,y\rangle_\gamma$, then $u \doteq v \in \eta$.

Proposition 7.18 Let $\delta$ be a determinant and $\eta$ a set of equations which is closed under $\delta$. If $fr(\eta) \subseteq co(\delta)$ and $F_x^\delta = F_y^\delta$ for every equation $x \doteq y \in \eta$, then $FX \models \tilde\forall\,(\bar\delta \rightarrow (\delta \rightarrow \eta))$.

Proof: Let $A,\alpha \models \bar\delta$. By Proposition 5.1, we have to show that
$A,\alpha \models \exists co(\delta)\,(\delta \wedge \eta)$.   (8)
Let $\theta$ be an idempotent substitution equivalent to $\eta$. Then
(8) $\Leftrightarrow\ A,\alpha \models \exists co(\delta)\,(\delta \wedge \theta) \Leftrightarrow\ A,\alpha \models \exists co(\delta)\,(\theta\delta \wedge \theta) \Leftrightarrow\ A,\alpha \models \exists co(\delta)\,(\theta\delta)$, since $fr(\theta) \subseteq co(\delta)$.   (9)
By construction, $\theta\delta$ is again a determinant, with $co(\theta\delta) \subseteq co(\delta)$ and $\theta\theta\delta = \theta\delta$. Hence, (9) follows from axiom (E). □

A similar lemma, in the context of CFT, was presented in [28].

Proposition 7.19 Let $\delta$ be a determinant and $\eta, \eta'$ be sets of equations such that $\eta \wedge \eta'$ is closed under $\delta$. If $fr(\eta) \subseteq co(\delta)$ and $F_x^\delta = F_y^\delta$ for every equation $x \doteq y \in \eta$, then $FX \models \bar\delta \rightarrow \tilde\forall\,(\delta \rightarrow (\eta' \leftrightarrow \eta \wedge \eta'))$.

Proof: We have to show that
$FX \models \tilde\forall\,(\bar\delta \wedge \delta \wedge \eta' \rightarrow \eta)$.   (10)
Let $\theta'$ be an idempotent substitution equivalent to $\eta'$. Then (10) is equivalent to
$FX \models \tilde\forall\,(\bar\delta \wedge \theta'\delta \rightarrow \theta'\eta)$   (11)
since $\theta'\bar\delta = \bar\delta$. Observe that $\bar\delta = \overline{\theta'\delta}$, $fr(\theta'\eta) \subseteq co(\theta'\delta)$, $\theta'\eta$ is closed under $\theta'\delta$, and that $F_x^{\theta'\delta} = F_y^{\theta'\delta}$ for every equation $x \doteq y \in \theta'\eta$. Hence, (11) follows from Proposition 7.18. □

We can now give the rule which reduces the tight–tight inequations to tight–global inequations:

(IE4) $\exists\bar{x}\,(\gamma \wedge \eta \wedge x \not\doteq y \wedge \rho) \Longrightarrow \bigvee_{(u,v) \in I} \exists\bar{x}\,(\gamma \wedge \eta \wedge u \not\doteq v \wedge \rho)$, if $x \not\doteq y$ is tight, rule (IE3) does not apply, and $I = \{(u,v) \in \langle x,y\rangle_\gamma \mid \{u,v\} \not\subseteq \bar{x}\}$
As an example of rule (IE4), consider
$\exists x,y,v\,(x[s]v \wedge x[t]v' \wedge y[s]w \wedge y[t]w' \wedge s \not\doteq t \wedge arity(x,\{s,t\}) \wedge arity(y,\{s,t\}) \wedge arity(v,\emptyset) \wedge x \not\doteq y)$
$\Longrightarrow \exists x,y,v\,(x[s]v \wedge x[t]v' \wedge y[s]w \wedge y[t]w' \wedge s \not\doteq t \wedge arity(x,\{s,t\}) \wedge arity(y,\{s,t\}) \wedge arity(v,\emptyset) \wedge v \not\doteq w)$
$\vee\ \exists x,y,v\,(x[s]v \wedge x[t]v' \wedge y[s]w \wedge y[t]w' \wedge s \not\doteq t \wedge arity(x,\{s,t\}) \wedge arity(y,\{s,t\}) \wedge arity(v,\emptyset) \wedge v' \not\doteq w')$

Lemma 7.20 (IE4) describes an equivalence transformation in every model of FX.

Proof: This follows immediately from Proposition 7.19. □

Finally, we give the rule to eliminate tight–global inequations.

Definition 7.21 (Generated subformula) For a conjunction $\phi$ of molecules and a variable $x$, the subformula $\phi_x$ of $\phi$ generated by $x$ is defined as $\phi_x := \{u[t]v,\ arity(u,M) \in \phi \mid x \leadsto_\phi u\}$. Note that, if $x \not\doteq y$ is tight in the quasi-solved form $\exists\bar{x}\,\phi$, then $\phi_x$ is a determinant.

(IE5) $\exists\bar{x},x\,(\phi \wedge x \not\doteq y) \Longrightarrow \exists\bar{x},x\,\phi \wedge \forall co(\phi_x)\,(\phi_x \rightarrow x \not\doteq y)$, if $y \notin \bar{x}$, $y \neq x$, and $x$ is tight

As an example of rule (IE5), consider
$\exists x,x'\,(x[s]x \wedge x[t]y \wedge x'[t]x' \wedge s \not\doteq t \wedge arity(x,\{s,t\}) \wedge x \not\doteq y)$
$\Longrightarrow \exists x,x'\,(x[s]x \wedge x[t]y \wedge x'[t]x' \wedge s \not\doteq t \wedge arity(x,\{s,t\})) \wedge \forall x\,(x[s]x \wedge x[t]y \wedge arity(x,\{s,t\}) \rightarrow x \not\doteq y)$

Lemma 7.22 (IE5) describes an equivalence transformation in every model of FX.

Proof: First note that $co(\phi_x) \subseteq \bar{x} \cup \{x\}$. Since $\models \phi \rightarrow \phi_x$, the conclusion implies the hypothesis. The hypothesis obviously implies the first part of the conclusion. By Proposition 5.1, it also implies the second part (note that $\overline{\phi_x} \subseteq \phi$, since $\phi$ is a quasi-solved form). □

Corollary 7.23 There is an algorithm which transforms any complex formula in which all basic formulae are quasi-solved forms into an FX-equivalent complex formula in which all basic formulae are quasi-solved forms containing only sloppy inequations.

Hence, we obtain the proof of Lemma 6.5 by composing the Corollaries 7.8, 7.16 and 7.23.

Acknowledgments. David Israel pointed out the analogy to the situation in process logic. Rolf Backofen, Andreas Podelski and Gert Smolka provided helpful criticism and remarks. This work has been supported by the Bundesminister für Bildung, Wissenschaft, Forschung und Technologie (Hydra, ITW 9105), the Esprit Basic Research Project ACCLAIM (EP 7195) and the Esprit Working Group CCL (EP 6028).

[1] Hassan Aït-Kaci. An algebraic semantics approach to the effective resolution of type equations. Theoretical Computer Science, 45:293–351, 1986.
[2] Hassan Aït-Kaci and Roger Nasr. LOGIN: A logic programming language with built-in inheritance. Journal of Logic Programming, 3:185–215, 1986.
[3] Hassan Aït-Kaci and Andreas Podelski. Towards a meaning of LIFE. In Jan Małuszyński and Martin Wirsing, editors, 3rd International Symposium on Programming Language Implementation and Logic Programming, Lecture Notes in Computer Science, vol. 528, pages 255–274. Springer-Verlag, August 1991.
[4] Hassan Aït-Kaci, Andreas Podelski, and Gert Smolka. A feature-based constraint system for logic programming with entailment. Theoretical Computer Science, 122(1–2):263–283, January 1994.
[5] Peter B. Andrews. An Introduction to Mathematical Logic and Type Theory: To Truth through Proof. Computer Science and Applied Mathematics. Academic Press, 1986.
[6] Rolf Backofen. Expressivity and Decidability of First-Order Theories over Feature Trees. PhD thesis, Technische Fakultät der Universität des Saarlandes, Saarbrücken, Germany, 1994.
[7] Rolf Backofen. Regular path expressions in feature logic. Journal of Symbolic Computation, 17:421–455, 1994.
[8] Rolf Backofen. A complete axiomatization of a theory with feature and arity constraints. Journal of Logic Programming, 1995. To appear.
[9] Rolf Backofen and Gert Smolka. A complete and recursive feature theory. Theoretical Computer Science. To appear.
[10] Rolf Backofen and Ralf Treinen. How to win a game with features. In Jean-Pierre Jouannaud, editor, 1st International Conference on Constraints in Computational Logics, Lecture Notes in Computer Science, vol. 845, München, Germany, September 1994. Springer-Verlag.
[11] J. R. Büchi. On a decision method in restricted second order arithmetic. In E. Nagel et al., editor, International Congr. on Logic, Methodology and Philosophy of Science, pages 1–11. Stanford University Press, 1960.
[12] Hubert Comon and Pierre Lescanne. Equational problems and disunification. Journal of Symbolic Computation, 7(3,4):371–425, 1989.
[13] Jochen Dörre. Feature-Logik und Semiunifikation. PhD thesis, Philosophische Fakultät der Universität Stuttgart, July 1993. In German.
[14] Jochen Dörre. Feature-logic with weak subsumption constraints. In M. A. Rosner, C. J. Rupp, and R. L. Johnson, editors, Constraints, Language and Computation, chapter 7, pages 187–203. Academic Press, 1994.
[15] Jochen Dörre and William C. Rounds. On subsumption and semiunification in feature algebras. Journal of Symbolic Computation, 13(4):441–461, April 1992.
[16] C. A. Gunter and D. S. Scott. Semantic domains. In van Leeuwen [32], chapter 12, pages 633–674.
[17] Martin Henz, Gert Smolka, and Jörg Würtz. Object-oriented concurrent constraint programming in Oz. In V. Saraswat and P. Van Hentenryck, editors, Principles and Practice of Constraint Programming, chapter 2, pages 27–48. MIT Press, Cambridge, MA, 1995. To appear.
[18] Wilfrid Hodges. Model Theory. Encyclopedia of Mathematics and its Applications 42. Cambridge University Press, 1993.
[19] Joxan Jaffar and Jean-Louis Lassez. Constraint logic programming. In Proceedings of the 14th ACM Conference on Principles of Programming Languages, pages 111–119, Munich, Germany, January 1987.
[20] Mark Johnson. Attribute-Value Logic and the Theory of Grammar. CSLI Lecture Notes 16. Center for the Study of Language and Information, Stanford University, CA, 1988.
[21] Anatoliĭ Ivanovič Mal'cev. Axiomatizable classes of locally free algebras of various types. In Benjamin Franklin Wells III, editor, The Metamathematics of Algebraic Systems: Collected Papers 1936–1967, chapter 23, pages 262–281. North Holland, 1971.
[22] Rohit Parikh. A decidability result for a second order process logic. In 19th Annual Symposium on Foundations of Computer Science, pages 177–183, Ann Arbor, Michigan, October 1978. IEEE.
[23] Michael O. Rabin. Decidability of second-order theories and automata on infinite trees. Transactions of the American Mathematical Society, 141:1–35, 1969.
[24] William C. Rounds and Robert Kasper. A complete logical calculus for record structures representing linguistic information. In Proceedings of the First Symposium on Logic in Computer Science, pages 38–43, Cambridge, MA, June 1986. IEEE Computer Society.
[25] Gert Smolka. Feature constraint logics for unification grammars. Journal of Logic Programming, 12:51–87, 1992.
[26] Gert Smolka. The definition of Kernel Oz. In Andreas Podelski, editor, Constraints: Basics and Trends, Lecture Notes in Computer Science, vol. 910, pages 251–292. Springer-Verlag, March 1995.
[27] Gert Smolka and Hassan Aït-Kaci. Inheritance hierarchies: Semantics and unification.
Journal of Symbolic Computation, 7:343–370, 1989.
[28] Gert Smolka and Ralf Treinen. Records for logic programming. Journal of Logic Programming, 18(3):229–258, April 1994.
[29] Wolfgang Thomas. Automata on infinite objects. In van Leeuwen [32], chapter 4, pages 133–191.
[30] Ralf Treinen. A new method for undecidability proofs of first order theories. Journal of Symbolic Computation, 14(5):437–457, November 1992.
[31] Ralf Treinen. Feature constraints with first-class features. In Andrzej M. Borzyszkowski and Stefan Sokołowski, editors, Mathematical Foundations of Computer Science 1993, Lecture Notes in Computer Science, vol. 711, pages 734–743. Springer-Verlag, 30 August–3 September 1993.
[32] Jan van Leeuwen, editor. Handbook of Theoretical Computer Science, volume B – Formal Models and Semantics. Elsevier Science Publishers and The MIT Press, 1990.
{"url":"http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0cstr--00-0----0-10-0---0---0direct-10---4-------0-0l--11-en-50---20-preferences---00-0-1-00-0--4----0-0-11-10-0utfZz-8-00&cl=CL1.51&d=HASH12e104dca72182dbdb2cb7.1&gt=2","timestamp":"2014-04-19T17:10:26Z","content_type":null,"content_length":"69927","record_id":"<urn:uuid:a95049dd-eaef-44f7-9a14-0b555dba96e2>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
An Integrated Data Preparation Scheme for Neural Network Data Analysis
Lean Yu, Shouyang Wang, and K. K. Lai
IEEE Transactions on Knowledge and Data Engineering, vol. 18, no. 2, pp. 217-230, February 2006. doi:10.1109/TKDE.2006.22

Data preparation is a critical step in neural network modeling for complex data analysis, and it has a huge impact on the success of a wide variety of complex data analysis tasks, such as data mining and knowledge discovery. Although data preparation in neural network data analysis is important, the existing literature on the subject is scattered, and there has been no systematic study of data preparation for neural network data analysis. In this study, we first propose an integrated data preparation scheme as a systematic study for neural network data analysis. Within the integrated scheme, a survey of data preparation, focusing on problems with the data and the corresponding processing techniques, is provided. In addition, intelligent data preparation solutions to several important issues and dilemmas within the integrated scheme are discussed in detail. Subsequently, a cost-benefit analysis framework for this integrated scheme is presented to analyze the effect of data preparation on complex data analysis. Finally, a typical example of complex data analysis from the financial domain is provided in order to show the application of data preparation techniques and to demonstrate the impact of data preparation on complex data analysis.

Index Terms: data preparation, neural networks, complex data analysis, cost-benefit analysis.
Chan, “A Comparison of Data Preprocessing Strategies for Neural Network Modeling of Oil Production Prediction,” Proc. Third IEEE Int'l Conf. Cognitive Informatics, 2004. [32] J. Pickett, The American Heritage Dictionary, fourth ed. Boston: Houghton Mifflin, 2000. [33] P. Ingwersen, Information Retrieval Interaction. London: Taylor Graham, 1992. [34] U.Y. Nahm, “Text Mining with Information Extraction: Mining Prediction Rules from Unstructured Text,” PhD thesis, 2001. [35] F. Lemke and J.A. Muller, “Self-Organizing Data Mining,” Systems Analysis Modelling Simulation, vol. 43, pp. 231-240, 2003. [36] E. Tuv and G. Runger, “Preprocessing of High-Dimensional Categorical Predictors in Classification Setting,” Applied Artificial Intelligence, vol. 17, pp. 419-429, 2003. [37] C.W.J Granger, “Investigating Causal Relations by Econometric Models and Cross-Spectral Methods,” Econometrica, vol. 37, pp. 424-438, 1969. [38] K.I. Diamantaras and S.Y. Kung, Principal Component Neural Networks: Theory and Applications. John Wiley and Sons, Inc., 1996. [39] D.W. Ashley and A. Allegrucci, “A Spreadsheet Method for Interactive Stepwise Multiple Regression,” Proceedings, pp. 594-596, Western Decision Sciences Inst., 1999. [40] X. Yan, C. Zhang, and S. Zhang, “Toward Databases Mining: Preprocessing Collected Data,” Applied Artificial Intelligence, vol. 17, pp. 545-561, 2003. [41] S. Chaudhuri and U. Dayal, “A Overview of Data Warehousing and OLAP Technology,” SIGMOD Record, vol. 26, pp. 65-74, 1997. [42] S. Abiteboul, S. Cluet, T. Milo, P. Mogilevsky, J. Simeon, and S. Zohar, “Tools for Translation and Integration,” IEEE Data Eng. Bull., vol. 22, pp. 3-8, 1999. [43] A. Baumgarten, “Probabilistic Solution to the Selection and Fusion Problem in Distributed Information Retrieval,” Proc. SIGIR'99, pp. 246-253, 1999. [44] Y. Li, C. Zhang, and S. Zhang, “Cooperative Strategy for Web Data Mining and Cleaning,” Applied Artificial Intelligence, vol. 17, pp. 443-460, 2003. [45] J.H. Holland, “Genetic Algorithms,” Scientific Am., vol. 267, pp. 66-72, 1992. [46] D.E. Goldberg, Genetic Algorithm in Search, Optimization, and Machine Learning. Reading, Mass.: Addison-Wesley, 1989. [47] A.M. Kupinski and M.L. Giger, “Feature Selection with Limited Datasets,” Medical Physics, vol. 26, pp. 2176-2182, 1999. [48] Mani Bloedorn and E. Bloedorn, “Multidocument Summarization by Graph Search and Matching,” Proc. 15th Nat'l Conf. Artificial Intelligence, pp. 622-628, 1997. [49] M. Saravanan, P.C. Reghu Raj, and S. Raman, “Summarization and Categorization of Text Data in High-Level Data Cleaning for Information Retrieval,” Applied Artificial Intelligence, vol. 17, pp. 461-474, 2003. [50] W.A. Shewhart, Economic Control of Quality of Manufactured Product. New York: D. Van Nostrand, 1931. [51] D.A. Dickey and W.A. Fuller, “Distribution of the Estimators for Autoregressive Time Series with a Unit Root,” J. Am. Statistical Assoc., vol. 74, pp. 427-431, 1979. [52] J. Wang, C. Zhang, X. Wu, H. Qi, and J. Wang, “SVM-OD: A New SVM Algorithm for Outlier Detection,” Proc. ICDM'03 Workshop Foundations and New Directions of Data Mining, pp. 203-209, 2003. [53] J. Han and Y. Fu, “Dynamic Generation and Refinement of Concept Hierarchies for Knowledge Discovery in Database,” Proc. AAAI '94 Workshop Knowledge Discovery in Database, pp. 157-168, 1994. [54] U. Fayyad and K. Irani, “Multiinterval Discretization of Continuous-Valued Attributes for Classification Learning,” Proc. 13th Int'l Joint Conf. Artificial Intelligence, pp. 1022-1027, 1993. [55] A. 
{"url":"http://www.computer.org/csdl/trans/tk/2006/02/k0217-abs.html","timestamp":"2014-04-23T16:36:38Z","content_type":null,"content_length":"64194","record_id":"<urn:uuid:4a7644e2-d4a4-4262-b18d-943bfc6150de>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Critical Numbers

October 17th 2009, 09:12 PM  #1
Junior Member, Oct 2009

I need to find the critical numbers of the function f(x) = x(2-x)^(2/5). I found the derivative to be f'(x) = -(2/5)x(2-x)^(-3/5) + (2-x)^(2/5), but now I'm at a loss as to how to factor this to find the values of x. Could someone give me a push in the right direction? Any help would be appreciated. Thanks!

October 17th 2009, 09:19 PM  #2

Your derivative factorises as $(2 - x)^{-3/5} \left( - \frac{2}{5} x + (2 - x)\right)$ ....
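In case it helps to check the end of the calculation (assuming the function really is $f(x) = x(2-x)^{2/5}$, as the factorisation above takes it to be): the factored derivative simplifies to $f'(x) = (2-x)^{-3/5}\left(2 - \tfrac{7}{5}x\right)$, which is zero at $x = \tfrac{10}{7}$ and undefined at $x = 2$ (where $f$ itself is still defined), so the critical numbers are $x = \tfrac{10}{7}$ and $x = 2$.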
{"url":"http://mathhelpforum.com/calculus/108671-solved-critical-numbers.html","timestamp":"2014-04-16T04:26:13Z","content_type":null,"content_length":"33391","record_id":"<urn:uuid:c44676af-7d7c-4417-a4a0-b7eb483eae70>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: what is union?
From: paul c <toledobythesea_at_oohay.ac>
Date: Tue, 15 Sep 2009 02:14:22 GMT
Message-ID: <2MCrm.45042$PH1.13390_at_edtnps82>

That last post was a bit of a short circuit, sorry for that, I got carried away trying to keep it short. Let me go at it slightly differently, and sorry this is so long. Take two single-attribute relations, C and S; the predicate of C is "c is a customer" and the predicate of S is "s is a supplier". In theory we could "insert" to C <OR> S because its heading allows all possible combinations of c and s values. In practice, because machine recordings are finite, we don't use unrestricted <OR>, rather the restricted/"union compatible" UNION operator. In practice, just to express the union, let alone insert to it, we need to do some "renaming", e.g., change the headings of C and S to {x} and the predicates to "x is a customer" and "x is a supplier". No logical change by doing that, as long as all other expressions reflect the change. We could join the original, un-renamed C and S with the expression C <AND> S. All original tuples of C are in a projection {c} of that join, likewise for S and {s}, and the two projections have no tuples that aren't in C and S respectively, so there are two projections that are equal to C and S respectively. If we rename the single attributes c and s of the two projections to x and take their union, I think we have the predicate "x is a customer OR x is a supplier". Call the renamed projections CR and SR. So their union is CR UNION SR with attribute x. If we want to assert a matching proposition, say assuming an integer domain for x, that proposition might be "1 is a customer OR 1 is a supplier"; the usual argument is that there are three ways to assert that fact. If CR UNION SR were a base relvar, we could certainly insert that fact. Since we can't record infinite relations on a finite machine, the individual relvars CR and SR can never be base, they are always views of a join. Suppose further that C and S are empty before the join, rename, projections and union. Before asking what the resulting C and S relvar values should be, I think we need to ask whether the join allows the new proposition to be asserted. It doesn't allow us to assert "1 is not a customer AND 1 is a supplier", nor "1 is a customer AND 1 is not a supplier", only "1 is a customer AND 1 is a supplier". Perhaps people will object that a join can't express a union or that OR doesn't mean AND, but I see no problem at all in the above, because the "union" has been defined as a join without any loss of possible tuples or introduction of spurious ones, and if the tuple <c 1, s 1> is inserted to the join's relvar there is still no loss. It seems simple enough to define views such that we always insert to join and we always delete from union. Just as submarines don't swim and sun-dials don't tell time, computers don't actually assert propositions; they insert tuples. Predicates aren't recorded. When a union is defined, the dbms designer has had no previous knowledge and no way of ensuring the db now reflects a predicate that even involves logical OR; the same goes for logical AND (with suitable constraints, possibly the db designer has such a way). Assuming either of AND or OR is probably an example of mysticism, or possibly an attempt to duplicate pred/prop logic rather than merely apply it, e.g., the result of the D&D "<OR>" operator is defined in terms of tuple values, not logical OR.
While logical OR is present in the definition, it is not recorded in the result. Only what is recorded and what the user or db designer knows has any significance. Users and designers must choose the recording forms that suit their purposes. I think that without taking all of the above into account, defining INSERT I TO V as V := V UNION I = V <OR> I = V OR I amounts to nothing else but preventing insert to union, same as what happens when people start painting the floor at the door sill.

Received on Mon Sep 14 2009 - 21:14:22 CDT
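A toy illustration of the rename-then-union construction described above, as a minimal Python sketch (the relation values, the attribute name x and the helper names are all invented for the example; this is not Tutorial D or any particular DBMS):

    # Relations modelled as (heading, set-of-tuples) pairs.
    def rename(rel, old, new):
        heading, body = rel
        return ([new if a == old else a for a in heading], set(body))

    def union(r1, r2):
        (h1, b1), (h2, b2) = r1, r2
        assert h1 == h2, "UNION requires identical headings"
        return (h1, b1 | b2)

    C = (["c"], {(1,), (2,)})   # "c is a customer"
    S = (["s"], {(2,), (3,)})   # "s is a supplier"

    CR = rename(C, "c", "x")    # predicate becomes "x is a customer"
    SR = rename(S, "s", "x")    # predicate becomes "x is a supplier"
    print(union(CR, SR))        # heading ['x'], body {(1,), (2,), (3,)}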
{"url":"http://www.orafaq.com/usenet/comp.databases.theory/2009/09/14/0105.htm","timestamp":"2014-04-20T21:53:33Z","content_type":null,"content_length":"10182","record_id":"<urn:uuid:8d0d33ff-2220-4e1e-a1bf-5edada974652>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
Factoring Trinomials: 9x^2 - 42x + 49

Date: 10/15/2002 at 12:56:16
From: Tiffany
Subject: Factoring trinomials

Right now in my math course we are factoring trinomials, for instance: 9x^2 - 42x + 49. I am having a lot of trouble factoring this trinomial. I kind of, but not completely, understand x^2-21x-72. I am having the most difficulty factoring when there is a number in front of the x^2 (9x^2). Is there an easier way to do this than finding all of the common factors and going from there? I really don't understand this method. Please help me. Thank you for your time.

Date: 10/15/2002 at 15:17:29
From: Doctor Ian
Subject: Re: Factoring trinomials

Hi Tiffany,

Is there an easier way? It depends on what you think is 'easy'. Here are a few different ways to go about factoring a quadratic trinomial:

  Factoring Quadratics Without Guessing

And here is a gentle guide to using the 'standard' method:

  Factoring Polynomials

In the case of 9x^2 - 42x + 49 the nice thing is that there aren't a lot of possibilities, because there aren't a lot of prime factors to deal with. The only possibilities for the initial terms are:

  (x + __)(9x + __)
  (3x + __)(3x + __)

In fact, looking at the signs, we know that both of the numbers in the blanks will have to be negative:

  (x - __)(9x - __)
  (3x - __)(3x - __)

And the only possibilities for the final terms are

  (x - {1,7,49})(9x - {49,7,1})
  (3x - {1,7,49})(3x - {49,7,1})

So that's a total of only 6 possibilities to check. Now, what did I mean by 'looking at the signs'? Well, assuming that a and b are both positive, there are a few patterns that are worth memorizing. The first two are:

  (x + a)(x + b) = x^2 + (a+b)x + ab
  (x - a)(x - b) = x^2 - (a+b)x + ab

These are the only cases in which the final term (ab) will be positive. And you can tell by looking at the sign of the middle term what signs will appear in the factors. The other pattern is

  (x + a)(x - b) = x^2 + ax - bx - ab = x^2 + (a-b)x - ab

In this case, the sign of the final term will be negative, and the sign of the middle term can be anything at all. (But that's not a problem, since the sign of the final term is enough to tell you what's going on.) So back to our possibilities. To check them, we need to multiply the pairs by the initial coefficients, and add:

  1. (x - {1,7,49})(9x - {49,7,1})
     (1,49): 9*1 + 1*49 = 58        (No)
     (7,7):  9*7 + 1*7  = 70        (No)
     (49,1): 9*49 + 1*1 = [too big] (No)

  2. (3x - {1,7,49})(3x - {49,7,1})
     (1,49): 3*1 + 3*49 = [too big] (No)
     (7,7):  3*7 + 3*7  = 42        (Yes!)
     (49,1): 3*49 + 3*1 = [don't care]

Is this a pain? Yes, it is - and the more prime factors you have, the more it hurts. However, as the second URL above points out, if you're trying to sketch the graph of the function (which is usually what you're trying to do with a function), it's a _lot_ easier than picking points at random. And, believe it or not, as you get more practice at this, you'll get better at instinctively avoiding the possibilities that can't make sense (in very much the same way that someone who is really good at chess doesn't even consider the kinds of dumb moves that novices have to work through). For example, you're not going to multiply 49 by _anything_ and add the result to something else to get 42. So with practice, you'd go right to the following, smaller set of possibilities:

  (x - {7})(9x - {7})
  (3x - {7})(3x - {7})

So you'd have only two things to try, instead of 6. Try reading both of the answers from the Dr. Math archives above, and then try factoring some expressions.
If you get stuck, write back and show me how far you were able to get, and we'll go from there. Okay? - Doctor Ian, The Math Forum
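Since the reply above really describes a small search (enumerate the ways to split the leading coefficient and the constant term, then check the middle term), here is that check written out as a short Python sketch. It is only an illustration of the enumeration idea -- the variable names are made up, and in practice a computer algebra system such as sympy's factor() does this directly.

    from itertools import product

    a, b, c = 9, -42, 49                      # coefficients of ax^2 + bx + c
    hits = []
    for p, q in ((1, 9), (3, 3), (9, 1)):     # splits of the leading coefficient
        for r, s in product((-1, -7, -49), repeat=2):   # negative factors of 49
            if r * s == c and p * s + q * r == b:       # constant and middle term
                hits.append(((p, r), (q, s)))           # candidate (px + r)(qx + s)
    print(hits)                               # [((3, -7), (3, -7))] -> (3x - 7)^2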
{"url":"http://mathforum.org/library/drmath/view/61570.html","timestamp":"2014-04-16T04:53:14Z","content_type":null,"content_length":"9164","record_id":"<urn:uuid:7e2a1ba1-f126-4c47-92a0-2ad50d09a809>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
Method to determine weld puddle area and width from vision measurements - General Electric Company This invention relates to robotic welding and especially to a method of computing weld puddle geometry parameters from images in real time for use in controlling the weld process. Research on a vision-based tungsten inert gas (TIG) welding process control system has established that puddle control is more effective when the weld pool area and maximum width are measured and regulated; see application Ser. No. 677,786, filed Dec. 3, 1984, H. A. Nied and R. S. Basheti, "Arc Welding Adaptive Process Control System". However, an intense arc light obscures the puddle boundary near the electrode and the leading edge of the weld pool. The welding torch optical system has been modified to include laser floodlighting of the rear edge of the puddle to improve detectability of the puddle boundary near the trailing edge; this is disclosed in application Ser. No. 641,541, filed Aug. 16, 1984, now U.S. Pat. No. 4,578,561, N. R. Corby, Jr. and S. J. Godwin, "Method of Enhancing Weld Pool Boundary Definition". In practice, the molten weld pool may be constantly changing and has a highly reflective surface which makes reliable extraction of the weld pool boundary difficult. One prior art approach computes the weld puddle area, length and width with off-line processing of a single video image on a microcomputer system. It requires that most of the puddle boundary points, including points near the leading edge of the electrode, are measured. The Center for Welding Research, Ohio State University, technique is described by R. D. Richardson et al., "The Measurement of Two-Dimensional Arc Weld Pool Geometry by Image Analysis", Control of Manufacturing Processes and Robotic Systems, ASME WAM, Nov. 13-18, 1983, pp. 137-148. The filtering is done in the frequency domain on 64 data points and all terms of the Fourier transform of order three or greater are discarded. An inverse transform is used to return to the spatial domain with the filtered signal. This approach is very restrictive and is computationally inefficient for real-time implementation. An object of the invention is to provide an efficient method to compute in real-time the discriminants that can be measured from the weld puddle for controlling an arc welding process. Another object is the provision of an improved technique of determining weld puddle area, maximum width, and angular orientation from vision measurements of the trailing edge. This method is based on a physical model of the elliptical weld puddle represented in the polar coordinate system with a minimum number of unknown parameters, either two or three. A least-squares algorithm is used to estimate the model parameters than can be related to puddle area and maximum width, and to puddle orientation. The computational requirements are small and therefore the processing time can be significantly reduced and the bandwidth of the puddle control system can be improved. The puddle boundary representation in polar coordinates is a more efficient approach to extract the puddle boundary from noisy data. The steps of the method are as follows. Images of the weld puddle are acquired and image intensities are sampled along radial rays intersecting only the trailing edge of the weld puddle, using the torch electrode as the origin. Potential pool boundary points are determined by processing each radial image intensity waveform to identify patterns that indicate potential edge points. 
There may be one or more points per ray. This puddle boundary data is prefiltered, based on a parabola fit to the near edge, to remove extraneous points. A least-squares algorithm is employed to estimate the area and maximum width of the elliptical weld puddle. These values are presented to the closed-loop weld process control system. In addition the puddle orientation relative to the direction of torch travel is estimated and passed to the control system for use, for instance, as a safety factor to stop welding if the puddle orientation angle is large. FIG. 1 is a block diagram of a closed-loop robotic arc welding system. FIG. 2 shows schematically a TIG welding torch having integral puddle view optics and a filler wire feed. FIG. 3 shows a typical weld puddle geometry. FIG. 4 illustrates a predicted puddle boundary based on 25 measurements of puddle boundary as determined by the vision sub-system. FIG. 5 depicts potential pool edge points resulting from radial sampling and prefiltering the data to remove extraneous points based on a parabola fit to the trailing edge of the puddle. FIG. 6 is a flow diagram of the processing of vision data and calculation of pool area, maximum width and puddle orientation. FIG. 7 illustrates misalignment of the puddle tail relative to the direction of torch travel. FIG. 8 is a block diagram of an arc welding control system based on weld puddle area and width measurements. FIGS. 9 and 10 are plots of puddle area and weld current as a function of time and illustrate the results of a heat sink disturbance test. The components of the closed-loop adaptive gas tungsten arc welding system illustrated in FIG. 1 are, briefly, a multiaxis robot 10 and robot controller 11 interconnected by joint control lines 12, a commercially available arc welding power supply 13, a microprocessorbased adaptive welding control system 14, a video monitor 15 to observe the weld puddle and seam, and a user terminal 16. The robot by way of illustration is General Electric's P50 robot, an industrial manipulator having 5 degrees of freedom and a mechanical structure that resembles a parallelogram. This welding system employs a TIG welding torch 17 with an integrated through-torch vision sensor, and automatically tracks the seam 18 in a workpiece 19 and does high quality, full penetration welding of the plates. The vision sub-system includes a laser pattern generator to project a light pattern, such as two parallel stripes, on the metal surface in front of the torch electrode. The pattern is generated by low power laser beams from one or more lasers 20 and reaches the end of the torch after being transmitted through a coherent fiber optic bundle 21. The image of the weld puddle and seam tracking laser stripes passes along a second coherent fiber optic bundle 22 to a solid state video camera 23 operated by camera controls 24. The weld seam image is analyzed in real time by the welding control system 14 to guide the torch and control the weld process. Supply lines 25 conduct electrical power and welding current, inert cover gas, and cooling water to the welding torch. The welding torch with integrated optics, shown schematically in FIG. 2, is described in greater detail in several commonly assigned patents and pending patent applications. The torch barrel and gas nozzle are indicated at 26 and the tungsten electrode at 27. 
The demagnified image of the weld puddle and weld region provided by an optical lens system 28 built into the torch assembly is focussed onto the face of the fiber optic cable 29. The coherent bundle has ends of the individual fibers arranged in identical matrices at each bundle end, reproducing a two-dimensional image. Current for the welding process conducted by the electrode strikes an electric arc 30 between its end and the workpiece which supplies heat to create a molten weld puddle 31. This figure shows the filler wire 32 and wire feed mechanism 33 represented by two rollers to control its rate of feed. A commercially available wire feeder is attached to the welding torch 17 and to robot 10, controlled by commands from robot controller 11. In FIG. 2 the arrow indicates the direction of torch movement. The multivariable feedback control system (FIG. 8) is based on measuring the weld puddle area and maximum width for controlling the welding process in real time. FIG. 3 schematically shows a typical weld pool geometry as observed from the moving coordinate system using a through-torch optical viewing system. A resolidified area 34 is seen at the trailing or aft part of the roughly oval-shaped molten weld pool 31. The weld pool size and shape change when the welding process parameters, such as torch velocity, input power and filler wire velocity, are varied. The weld puddle maximum width (W) is located aft of the electrode position by the offset distance (0). Points (LE) and (TE) locate the leading and trailing edges of the liquid-solid interface when the torch velocity is directed along the positive (ξ) axis. The distance between the points (LE) and (TE) provide the weld pool length (L). The analysis and experiments have shown that the weld puddle width (W) in combination with the surface area (A) of the weld puddle are the best discriminants to use in a TIG welding adaptive process control system. A control strategy based on using both the maximum weld puddle width (W) and area (A) together with the appropriate weighting functions was devised to predict full penetration of the weld bead for thin sheet metal parts. Photographs of the weld puddle and seam tracking laser pattern observed on the video monitor 15, FIG. 1, show that the leading edge of the weld pool is obscured by the arc light. One of these is FIG. 1 of the technical paper "Operational Performance of Vision-Based Arc Welding Robot Control System", R. S. Baheti et al., Sensors and Controls for Automated Manufacturing and Robotics, eds. K. Stelson and L. W. Sweet, ASME, December 1984, pp. 93-105, the disclosure of which is incorporated herein by reference. Despite the use of laser floodlighting to enhance boundary detection, only the aft part of the puddle boundary can be measured by the vision sensor. Some of the disturbances that make a reliable extraction of weld puddle boundary difficult are enumerated. The surface of the molten puddle under normal welding conditions is convex, but may have depressions caused by circular movements in the molten material or by cover gas flow. The nature of the puddle surface and puddle boundary is significantly affected by torch velocity and tilt of the workpiece surface. Electrical and thermal imbalances can create quite asymmetrical and distorted boundaries. The average reflectivity of the puddle is very high, but the surface can have local areas of much lower reflectivity. 
For example, oxide rafts may be formed which can circulate randomly in the molten puddle due to electromagnetic or thermal gradients. The thermal distortion due to wire feed can also change the puddle boundary. An efficient technique has been developed to determine the weld puddle area and maximum width from noisy measurements of the puddle trailing edge. The method is based on a physical model of the weld puddle represented in the polar coordinate system with a minimum number of unknown parameters. A least-squares algorithm determines the model parameters which can be related to the puddle area and width. The computational requirements are small and therefore the processing time (currently at five images per second) can be significantly reduced and the bandwidth of the puddle control system can be improved. The puddle boundary representation in the polar coordinates is a more efficient approach to extract the puddle boundary from noisy data. FIG. 4 illustrates prediction of the puddle boundary from vision data. Pixel numbers are shown along the X and Y axes; the torch electrode center is used as the origin when the weld puddle image is processed. Vision data points 35 were obtained from processing a video image of the molten puddle during welding experiments. The predicted puddle boundary 36 is calculated by the least-squares algorithm based on 25 measurements of the puddle trailing edge. The boundary is assumed to be an ellipse having the electrode center at one of its focal points. FIG. 5 depicts prefiltering of the vision data to screen out clearly extraneous points to which the least-squares technique would be sensitive. The processing described is performed by the vision microprocessor in the adaptive welding control system 14. An image is acquired and image intensities are sampled in radial directions at a predetermined angular and radial resolution. Radial rays 37 are centered on electrode 27 and intersect only the trailing edge of weld puddle 31, covering a total angle of about 90° to 120°. The next step is to determine all potential pool boundary points 38. Along the rays, each one-dimensional radial image intensity waveform is processed syntactically, by a set of predetermined rules, to determine patterns that indicate potential edge points and a possible puddle boundary. One such pattern is a large intensity change, dark to light, in adjacent pixels. The outlying possible boundary points 38 at the right may be at the crests of ripples, indicated by wavy lines, in the molten metal. Often there are two or three potential boundary points 38 along each ray. Boundary points are selected that are inside of a parabolic screening zone 39 fitted roughly to the trailing edge of the weld puddle. As indicated by the dashed center line, a parabola approximates the shape of the rear edge of the pool. The "peak" of the fitted parabola occurs roughly where the tail of the pool occurs. The trailing edge point TE is normally along or close to the x axis of the moving coordinate system. The weld puddle area and maximum width are estimated in real time from the prefiltered puddle boundary data, by means of the least-squares algorithm whose derivation is now developed. The least-squares method has been defined as a technique of fitting a curve close to some given points which minimizes the sum of the squares of the deviations from the curve. In this case the curve is an ellipse and the deviations are radial distances. It has been shown by H. A. 
Nied, from mathematical models of the weld pool, that the puddle boundary, S, in a moving coordinate system can be represented by ##EQU1## where R is the radial distance of the puddle boundary at an angle θ measured from the center of the electrode (see FIG. 4). The constants C[1] and C[2] can be expressed as functions of power input and the torch velocity, respectively. Using the first two terms of the exponential series expansion, equation (1) can be rearranged as ##EQU2## where X denotes the X-coordinate of the puddle boundary (FIG. 4). Equation (2) can be rewritten as R=a[1] +a[2] X (3) where a[1] =C[1] /S and a[2] =-C[1] C[2] /S are the unknown parameters. Equation (3) is an equation of an ellipse with the electrode center at one of the focal points of the ellipse. It is a two degree of freedom ellipse. The least-squares technique is efficient because the unknown parameters a[1] and a[2] are linear in the model. The solution can be obtained in one calculation and iteration is not necessary. A least-squares algorithm to estimate the unknown parameters is summarized as the following. Refer to H. W. Sorenson, "Parameter Estimation", Marcel Dekker, 1980, for more information. 1. Compute mean values X and R given by ##EQU3## where M denotes the number of puddle boundary points. 2. Compute correlation functions ##EQU4## The first is the cross correlation function and the second the autocorrelation function. 3. The estimated parameters denoted by a[1] and a[2] are given by ##EQU5## 4. The puddle area and maximum width is given by ##EQU6## In FIG. 4, the puddle boundary was predicted based on 25 boundary measurements, the parameters a[1] and a[2] were estimated using equation (6) and the puddle area and width were computed from equation (7). For data shown in FIG. 4, a[1] =82 and a[2] =0.3. The puddle area is 23,210 square pixels and maximum width is 164 pixels. The steps used in determining weld puddle area and width in real time from weld region images provided by a vision system are summarized with reference to FIGS. 5 and 6. Images of the weld puddle are acquired, and each image is sampled along N radial rays centered on the electrode and intersecting only the trailing edge of the weld puddle. All potential edge points are extracted as shown at 40 and passed to the prefilter 41, where points inside a parabolic zone fitted roughly to the puddle trailing edge are selected. The prefiltered boundary data is sent to the least-squares estimator 42, for estimating the unknown parameters a[1] and a[2]. These estimated parameters are employed at 43 to explicitly calculate puddle area and maximum width, and these values are passed to the closed-loop weld process control system (FIG. 8) to control weld current. Instead of relying only on the estimated parameters derived from a single image, a modification is that the results of one or more previous frames can be used, weighting the sum of current and past estimated parameters. This exploits the frame-to-frame time correlation of the pool scene. During the welding process, the puddle tail may not be aligned with the direction of torch travel. The misalignment may be due to process disturbances or due to curved geometry of the weld path. The least-squares algorithm described above to estimate puddle width and area can be modified to include the effect of the puddle orientation. Referring to FIG. 7, let Δθ denote the puddle angular orientation with respect to the X-axis of the torch. It is assumed that the X-axis is in the direction of torch travel. 
Let R and θ[o] denote the coordinates of point A on the puddle boundary 44 measured from the center of the electrode. From equation (3), substituting X=R cos θ and letting θ=θ[o] +ΔΘ, the puddle boundary can be represented by ##EQU7## It is assumed that ΔΘ is small (in practice less than 10 degrees). Making the usual assumptions if an angle is small, then ##EQU8## Equation (9), letting R cos Θ[o] =X and R sin Θ[o] =Y, can be rearranged as R=a[1] +a[2] X+a[3] Y (10) where a[3] =ΔΘa[2]. This represents a three degree of freedom ellipse. In an analogous manner, the estimated parameters a[1], a[2] and a[3] are calculated, and from these the puddle area, width and angular orientation are determined as shown in FIG. 6. At the present time knowledge of the angular orientation of the weld pool relative to the direction of torch travel is used for safety purposes, to stop welding if ΔΘ is, say, greater than 10°. A large percentage of the time, the third ellipse shape parameter, a[3], is zero and determination of estimated area by the two parameter algorithm is satisfactory. Since the prefilter based on a parabola fit to the puddle tail provides relatively weak prefiltering, the orientation of the parabolic screening zone may remain fixed, symmetrical with the ξ axis. A block diagram of the adaptive feedback control system based on weld puddle area and maximum width measurements is shown in FIG. 8. The measurement subsystem along with the puddle geometry algorithm that has been described determines the puddle area and maximum width at discrete time intervals. The area and maximum width measurements are compared with the desired area and maximum width. An approximate measure of the puddle geometry error q is defined as q=a[1] '(A[d] -A[m])+a[2] '(W[d] -W[m]) A[d] =desired puddle area A[m] =measured puddle area W[d] =desired puddle width W[m] =measured puddle width The weighting constants a[1] ' and a[2] ' can be selected based on the control objective. For example, a[1] '=1, and a[2] '=0 will regulate the puddle area. On the other hand, a[1] '=0 and a[2] '=1 will provide a control of the puddle width. In FIG. 8, the vision system 45 provides images of the weld pool to the adaptive welding control system 14 (FIG. 1) and the puddle geometry algorithm 46 determines measured maximum width and measured area. The difference between desired and measured widths is computed at 47 and multiplied by the preassigned constant a[2] ' at 48. Similarly, the difference between desired and measured areas is taken at 49 and multiplied by the weighting constant a[1] ' at 50. The puddle area and maximum width error terms are added at 51 and the result, the puddle geometry error q, is the input to a dynamic comprensation algorithm. The function of the dynamic compensator 52, typically a proportional, integral and derivative (PID) compensator, is to minimize the effects of process disturbances on the puddle geometry, stabilize the closed-loop system, and provide a robust dynamic performance. The output of the dynamic compensator is a function of the puddle geometry error and determines, at 53, a correction to the nominal welding current. The corrected welding current command is presented to a limiter 54 where it is clamped between upper and lower limits. Up to this point, the calculation of puddle geometry error, dynamic compensation, and limiting of the corrected welding current value are performed by the microprocessor-based adaptive welding control system 14 in FIG. 1. 
The welding current command is passed to the current controller 55 which is in the arc welding power supply 13. Block 56 represents a dynamic model of the molten pool in terms of gain and thermal time constant. The process disturbances include, for instance, tack welds, heat sinks, filler wire feed rate changes, and undesirable puddle orientation. In the closed-loop control, the heat input to the molten weld pool is directly influenced by the welding current. The vision system computes the weld puddle boundary, and the puddle geometry algorithm determines the puddle area and maximum width, and the feedback loop is closed. The torch travel speed is preprogrammed and is not modified by the control system. The referenced Baheti et al. paper and application Ser. No. 677,786 contain a fuller explanation of the closed-loop weld process control. The robotic welding system is for non-autogenous welding with wire feed and autogenous welding without filler wire. The method presented here has been successfully tested with real vision data. A plot of the puddle area and weld current under closed-loop control with a heat sink disturbance in the weld fixture is shown in FIGS. 9 and 10. The control objective is to regulate the puddle area by changing the current to counteract the process disturbances. It is seen that the weld current increases from 50 to 70 amperes to maintain puddle area close to the reference. When the weld torch moves away from the heat sink, the puddle area increases rapidly, and therefore the current is reduced by the feedback control system. The current controller responds to external disturbances and maintains the puddle area near the reference value. The vision algorithm provides the estimates of the puddle area and width during the closed-loop control. While the invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made without departing from the spirit and scope of the invention.
[FOM] definition of N without quantifying over infinite sets
Thomas Forster T.Forster at dpmms.cam.ac.uk
Mon Aug 9 00:13:20 EDT 2004

The significance of FFF - Friedman's Finite Form of Kruskal's theorem - is of course that it is a fact about N provable only by reasoning about infinite sets. When explaining this to my students I of course have to anticipate that the inductive definition of N involves quantifying over infinite sets - after all, if you contain 0 and are closed under S then you are infinite - so I employ a definition that I learned from Quine's Set Theory and Its Logic: you are a natural number iff every set containing you and closed under predecessor contains 0. This doesn't involve quantification over infinite sets. Did Quine invent this? If not, who did?

Thomas Forster
www.dpmms.cam.ac.uk/~tf; 01223-337981 and 020-7882-3659
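For readers who want the quoted definition spelled out symbolically, one way to render it (an editorial paraphrase, not Quine's own notation) is:

    % One possible symbolization of the quoted definition (a paraphrase, not Quine's notation):
    \[
      n \in \mathbb{N}
      \;\Longleftrightarrow\;
      \forall X \,\Bigl[ \bigl( n \in X \;\wedge\; \forall y\,( y \in X \wedge y \neq 0 \rightarrow y - 1 \in X ) \bigr)
      \;\rightarrow\; 0 \in X \Bigr].
    \]

Unlike the usual inductive definition (where any set containing 0 and closed under successor is automatically infinite), a set containing n and closed under predecessor can be finite, for instance {0, 1, ..., n}, which is exactly the point of the post.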
Dick Canary Dick Canary Office: Room 5848 East Hall. Office Phone: (734) 763-5861 Title: Professor canary@umich.edu Four generations Research Interests Low-dimensional topology Curriculum Vita: Papers (since 1999): ``Quasiconformal homogeneity after Gehring and Palka,'' with Petra Bonfert-Taylor and Edward C. Taylor , Computational Methods and Function Theory (Gehring Memorial Volume), to appear, revised version: paper (in PDF) ``Dynamics on character varieties: a survey,'' preliminary version: paper (in PDF) ``The pressure metric for convex representations,'' with Martin Bridgeman , Francois Labourie , and Andres Sambarino , preliminary version: paper (in PDF) ``Convergence properties of end invariants,'' with Jeff Brock, Ken Bromberg , and Yair Minsky, Geometry and Topology, 17 (2013), 2877-2922, revised version: paper (in PDF) ``Dynamics on PSL(2,C)-character varieties: 3-manifolds with toroidal boundary components,'' with Aaron Magid , Groups, Geometry and Dynamics, to appear, revised version, paper (in PDF) ``Uniformly perfect domains and convex hulls: improved bounds in a generalization of a Theorem of Sullivan,'' with Martin Bridgeman , Pure and Applied Mathematics Quarterly 9 (2013), 49--71, revised version: paper (in PDF) ``Moduli spaces of hyperbolic 3-manifolds and dynamics on character varieties,'' with Peter Storm , Commentarii Mathematici Helvetici, 88 (2013), 221--251, preliminary version: paper (in PDF) ``The classification of Kleinian surface groups II: the ending lamination conjecture,'' with Jeff Brock and Yair Minsky, Annals of Mathematics, 176 (2012), 1--149. Preliminary version: available here ``The curious moduli space of unmarked Kleinian surface groups,'' with Peter Storm , American Journal of Mathematics, 134 (2012), 71--85. preliminary version: paper (in PDF) ``Local topology in deformation spaces of hyperbolic 3-manifolds,'' with Jeff Brock, Ken Bromberg , and Yair Minsky, Geometry and Topology, 15 (2011), 1169--1224. preliminary version: paper (in PDF) ``Exotic quasiconformally homogeneous surfaces,'' with Petra Bonfert-Taylor , Juan Souto , and Edward C. Taylor , Bulletin of the London Mathematical Society, 43 (2011), 57--62. preliminary version: paper (in PDF) ``The Thurston metric on hyperbolic domains and boundaries of convex hulls,'' with Martin Bridgeman , Geometric and Functional Analysis, 20 (2010), 1317--1353. revised version: paper (in PDF) ``Ambient quasiconformal homogeneity of planar domains,'' with Petra Bonfert-Taylor , Gaven Martin, , Edward C. Taylor , and Michael Wolf , Ann. Acad. Sci. Fenn., 35 (2010), 275--283. preliminary version: paper (in PDF) ``Introductory Bumponomics: the topology of deformation spaces of hyperbolic 3-manifolds,'' in Teichmuller Theory and Moduli Problem, ed. by I. Biswas, R. Kulkarni and S. Mitra, Ramanujan Mathematical Society, 2010, 131-150, preliminary version: paper (in PDF) ``Marden's Tameness Conjecture: history and applications,'' in Geometry, Analysis and Topology of Discrete groups, ed. by L. Ji, K. Liu, L. Yang and S.T. Yau, Higher Education Press, 2008, 137--162. preliminary version: paper (in PDF) ``Kleinian groups with discrete length spectrum,'' with Chris Leininger, Bulletin of the London Mathematical Society, 39 (2007), 189-193. preliminary version: here ``Quasiconformal homogeneity of hyperbolic surfaces with fixed-point full automorphisms,'' with Petra Bonfert-Taylor, Martin Bridgeman, and Edward C. Taylor, Mathematical Proceedings of the Cambridge Philosophical Society, 143 (2007), 71-84. 
preliminary version: paper (in PDF) ``A new foreword for `Notes on Notes of Thurston','' in Fundamentals of Hyperbolic Manifolds: Selected Expositions, London Mathematical Society Lecture Note Series 328, Cambridge University Press, 2006, preliminary version: paper (in PDF) ``Quasiconformal homogeneity of hyperbolic manifolds,'' with Petra Bonfert-Taylor , Gaven Martin, , and Edward C. Taylor , Mathematische Annalen, 331 (2005), 281-295. preliminary version: paper (in ``Bounding the bending of a hyperbolic 3-manifold'' with Martin Bridgeman , Pacific Journal of Mathematics, vol. 218(2005), pp. 299-314. revised version: paper (in postscript) ``Homotopy equivalences of 3-manifolds and deformation theory of Kleinian groups'' with Darryl McCullough , Memoirs of the American Mathematical Society, vol. 172(2004), no. 812, preliminary version: paper (in PDF) ``Pushing the boundary,'' In the Tradition of Ahlfors and Bers, III, Contemporary Mathematics vol. 355(2004), American Mathematical Society, 109-121. preliminary version: paper (in postscript) ``Ubiquity of geometric finiteness in boundaries of deformation spaces of hyperbolic 3-manifolds,'' with Sa'ar Hersonsky , American Journal of Mathematics, vol. 126(2004), pp. 1193-1220. preliminary version: paper (in postscript) ``Approximation by maximal cusps in boundaries of deformation spaces of Kleinian groups'' with Marc Culler , Sa'ar Hersonsky , and Peter Shalen , Journal of Differential Geometry, vol. 64(2003), pp. 57-109. preliminary version: paper (in postscript) ``From the boundary of the convex core to the conformal boundary'' with Martin Bridgeman , Geometriae Dedicata, vol. 96(2003), pp. 211-240. preliminary version: paper (in postscript) ``The visual core of a hyperbolic 3-manifold,'' with Jim Anderson , Mathematische Annalen, vol. 321(2001),pp. 989-1000. preliminary version: abstract (in html), paper (in postscript) ``The conformal boundary and the boundary of the convex core,'' Duke Mathematical Journal, vol. 106(2001), pp. 193-207. preliminary version: abstract (in html), paper (in postscript) ``On the topology of deformation spaces of Kleinian groups,'' with Jim Anderson and Darryl McCullough , Annals of Mathematics, vol. 152(2000), pp. 693-741. preliminary version: abstract (in html), paper (in postscript) ``Cores of hyperbolic $3$-manifolds and limits of Kleinian groups II,'' with Jim Anderson , Journal of the London Mathematical Society, vol. 61(2000), pp. 489-505. preliminary version: abstract (in html), paper (in postscript) ``Spectral theory, Hausdorff dimension and the topology of hyperbolic 3-manifolds'', with Edward C. Taylor and Yair N. Minsky , Journal of Geometric Analysis, vol. 9(1999), pp. 17-40. preliminary version: abstract (in html), paper (in postscript) ``Hausdorff dimension and limits of Kleinian groups,'' with Edward C. Taylor , Geometric and Functional Analysis, vol. 9 (1999), pp. 283-297. preliminary version: abstract (in html), paper (in Me and my co-author
How can I pool data (and perform Chow tests) in linear regression without constraining the residual variances to be equal? Title Pooling data and performing Chow tests in linear regression Author William Gould, StataCorp Date December 1999; minor revisions July 2013 1. Pooling data and constraining residual variance Consider the linear regression model, y = β[0] + β[1]x[1] + β[2]x[2] + u, u ~ N(0, σ^2 ) and let us pretend that we have two groups of data, group=1 and group=2. We could have more groups; everything said below generalizes to more than two groups. We could estimate the models separately by typing . regress y x1 x2 if group==1 . regress y x1 x2 if group==2 or we could pool the data and estimate a single model, one way being . gen g2 = (group==2) . gen g2x1 = g2*x1 . gen g2x2 = g2*x2 . regress y x1 x2 g2 g2x1 g2x2 The difference between these two approaches is that we are constraining the variance of the residual to be the same in the two groups when we pool the data. When we estimated separately, we estimated group 1: y = β[01] + β[11]x[1] + β[21]x[2] + u[1], u[1] ~ N(0, σ[1]^2) group 2: y = β[02] + β[12]x[1] + β[22]x[2] + u[2], u[2] ~ N(0, σ[2]^2) When we pooled the data, we estimated y = β[01] + β[11]x[1] + β[21]x[2] + (β[02]-β[01])g[2] + (β[12]-β[11])g[2]x[1] + (β[22]-β[21])g[2]x[2] + u, u ~ N(0, σ^2) If we evaluate this equation for the groups separately, we obtain y = β[01] + β[11]x[1] + β[21]x[2] + u, u ~ N(0,σ^2) for group=1 y = β[02] + β[12]x[1] + β[22]x[2] + u, u ~ N(0,σ^2) for group=2 The difference is that we have now constrained the variance of u for group=1 to be the same as the variance of u for group=2. If you perform this experiment with real data, you will observe the following: • You will obtain the same values for the coefficients either way. • You will obtain different standard errors and therefore different test statistics and confidence intervals. If u is known to have the same variance in the two groups, the standard errors obtained from the pooled regression are better—they are more efficient. If the variances really are different, however, then the standard errors obtained from the pooled regression are wrong. 2. Illustration (See the do-file and the log with the results in section 7) I have created a dataset (containing made-up data) on y, x1, and x2. The dataset has 74 observations for group=1 and another 71 observations for group=2. Using these data, I can run the regressions separately by typing [1] . regress y x1 x2 if group==1 [2] . regress y x1 x2 if group==2 or I can run the pooled model by typing . gen g2 = (group==2) . gen g2x1 = g2*x1 . gen g2x2 = g2*x2 [3] . regress y x1 x2 g2 g2x1 g2x2 I did that in Stata, and it let me summarize the results. When I typed command [1], I obtained the following results (standard errors in parentheses): y = -8.650993 + 1.21329*x1 + -.8809939*x2 + u, Var(u) = 15.891^2 (22.73703) (.5445941) (.4054011) and when I ran command [2], I obtained y = 4.646794 + .9307004*x1 + .8812369*x2 + u, Var(u) = 7.5685^2 (11.1593) (.236696) (.1997562) When I ran command [3], I obtained y = -8.650993 + 1.21329*x1 + -.8809939*x2 + (17.92853) (.4294217) + (.3196656) 13.29779*g2 + -.2825893*g2x1 + 1.762231*g2x2 + u, Var(u) = 12.531^2 (25.74446) (.6123452) (.4599583) The intercept and coefficients on x1 and x2 in [3] are the same as in [1], but the standard errors are different. 
Also, if I sum the appropriate coefficients in [3], I obtain the same results as [2]: Intercept: 13.29779 + -8.650993 = 4.646797 ([2] has 4.646794) x1: -.2825893 + 1.21329 = .9307007 ([2] has .9307004) x2: 1.762231 + -.8809939 = .8812371 ([2] has .8812369) The coefficients are the same, estimated either way. (The fact that the coefficients in [3] are a little off from those in [2] is just because I did not write down enough digits.) The standard errors for the coefficients are different. I also wrote down the estimated Var(u), what is reported as RMSE in Stata’s regression output. In standard deviation terms, u has s.d. 15.891 in group=1, 7.5685 in group=2, and if we constrain these two very different numbers to be the same, the pooled s.d. is 12.531. 3. Pooling data without constraining residual variance We can pool the data and estimate an equation without constraining the residual variances of the groups to be the same. Previously we typed . gen g2 = (group==2) . gen g2x1 = g2*x1 . gen g2x2 = g2*x2 . regress y x1 x2 g2 g2x1 g2x2 and we start exactly the same way. To that, we add . predict r, resid . sum r if group==1 . gen w = r(Var)*(r(N)-1)/(r(N)-3) if group==1 . sum r if group==2 . replace w = r(Var)*(r(N)-1)/(r(N)-3) if group==2 [4] . regress y x1 x2 g2 g2x1 g2x2 [aw=1/w] In the above, the constant 3 that appears twice is 3 because there were three coefficients being estimated in each group (an intercept, a coefficient for x1, and a coefficient for x2). If there were a different number of coefficients being estimated, that number would change. In any case, this will reproduce exactly the standard errors reported by estimating the two models separately. The advantage is that we can now test equality of coefficients between the two equations. For instance, we can now read right off the pooled regression results whether the effect of x1 is the same in groups 1 and 2 (answer: is _b[g2x1]==0?, because _b[x1] is the effect in group 1 and _b[x1]+_b[g2x1] is the effect in group 2, so the difference is _b[g2x1]). And, using test, we can test other constraints as well. For instance, if you wanted to prove to yourself that the results of [4] are the same as typing regress y x1 x2 if group==2, you could type . test x1 + g2x1 == 0 (reproduces test of x1 for group==2) . test x2 + g2x2 == 0 (reproduces test of x2 for group==2) 4. Illustration Using the made-up data, I did exactly that. To recap, first I estimated separate regressions: [1] . regress y x1 x2 if group==1 [2] . regress y x1 x2 if group==2 and then I ran the variance-constrained regression, . gen g2 = (group==2) . gen g2x1 = g2*x1 . gen g2x2 = g2*x2 [3] . regress y x1 x2 g2 g2x1 g2x2 and then I ran the variance-unconstrained regression, . predict r, resid . sum r if group==1 . gen w = r(Var)*(r(N)-1)/(r(N)-3) if group==1 . sum r if group==2 . replace w = r(Var)*(r(N)-1)/(r(N)-3) if group==2 [4] . regress y x1 x2 g2 g2x1 g2x2 [aw=1/w] Just to remind you, here is what commands [1] and [2] reported: y = -8.650993 + 1.21329*x1 + -.8809939*x2 + u, Var(u) = 15.891^2 (22.73703) (.5445941) (.4054011) y = 4.646794 + .9307004*x1 + .8812369*x2 + u, Var(u) = 7.5685^2 (11.1593) (.236696) (.1997562) Here is what command [4] reported: y = -8.650993 + 1.21329*x1 + -.8809939*x2 + (22.73703) (.5445941) (.4054011) 13.29779*g2 + -.2825893*g2x1 + 1.762231*g2x2 + u (25.3279) (.6050657) (.4519431) Those results are the same as [1] and [2]. 
(Pay no attention to the RMSE reported by regress at this last step; the reported RMSE is the standard deviation of neither of the two groups but is instead a weighted average; see the FAQ on this if you care. If you want to know the standard deviations of the respective residuals, look back at the output from the summarize statements typed when producing the weighting variable.)

Technical Note: In creating the weights, we typed

. sum r if group==1
. gen w = r(Var)*(r(N)-1)/(r(N)-3) if group==1

and similarly for group 2. The 3 that appears in the finite-sample normalization factor (r(N)-1)/(r(N)-3) appears because there are three coefficients per group being estimated. If our model had fewer or more coefficients, that number would change. In fact, the finite-sample normalization factor changes results very little. In real work, I would have ignored it and typed

. sum r if group==1
. gen w = r(Var) if group==1

unless the number of observations in one of the groups was very small. The normalization factor was included here so that [4] would produce the same results as [1] and [2].

5. The (lack of) importance of not constraining the variance

Does it matter whether we constrain the variance? Here, it does not matter much. For instance, if after [4]

. regress y x1 x2 g2 g2x1 g2x2 [aw=1/w]

we test whether group 2 is the same as group 1, we obtain

. test g2x1 g2x2 g2
 ( 1) g2x1 = 0.0
 ( 2) g2x2 = 0.0
 ( 3) g2 = 0.0
       F( 3, 139) = 307.50
            Prob > F = 0.0000

If instead we had constrained the variances to be the same, estimating the model using [3]

. regress y x1 x2 g2 g2x1 g2x2

and then repeated the test, the reported F-statistic would be 300.81. If there were more groups, and the variance differences were great among the groups, this could become more important.

6. Another way to fit the variance-unconstrained model

Stata's xtgls, panels(het) command (see xtgls) fits exactly the model we have been describing, the only difference being that it does not make all the finite-sample adjustments, so its standard errors are just a little different from those produced by the method just described. (To be clear, xtgls, panels(het) does not make the adjustment described in the technical note above, and it does not make the finite-sample adjustments regress itself makes, so variances are invariably normalized by N, the number of observations, rather than N-k, observations minus the number of estimated coefficients.) Anyway, to fit the model with xtgls, panels(het), you pool the data just as always,

. gen g2 = (group==2)
. gen g2x1 = g2*x1
. gen g2x2 = g2*x2

and then type

[5] . xtgls y x1 x2 g2 g2x1 g2x2, panels(het) i(group)

to estimate the model. The result of doing that with my fictional data is y = -8.650993 + 1.21329*x1 + -.8809939*x2 + (22.27137) (.5334409) (.3970985) 13.29779*g2 + -.2825893*g2x1 + 1.762231*g2x2 + u (24.80488) (.5925734) (.4426101) These are the same coefficients we have always seen. The standard errors produced by xtgls, panels(het) here are about 2% smaller than those produced by [4] and in general will be a little smaller because xtgls, panels(het) is an asymptotically based estimator. The two estimators are asymptotically equivalent, however, and in fact quickly become identical. The only caution I would advise is not to use xtgls, panels(het) if the number of degrees of freedom (observations minus number of coefficients) is below 25 in any of the groups. Then, the weighted OLS approach [4] is better (and you should make the finite-sample adjustment described in the above technical note).
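For readers working outside Stata, the variance-unconstrained approach [4] can be reproduced with any weighted-least-squares routine. The sketch below uses Python with statsmodels purely as an illustration of the same idea (group-specific residual variances turned into weights); the data are simulated stand-ins, not the FAQ's dataset, and the variable names simply mirror the FAQ.

    # Illustrative Python/statsmodels analogue of the weighted OLS approach [4];
    # the data below are simulated stand-ins, not the FAQ's dataset.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n1, n2 = 74, 71
    df = pd.DataFrame({
        "group": np.r_[np.ones(n1, int), 2 * np.ones(n2, int)],
        "x1": rng.normal(40, 5, n1 + n2),
        "x2": rng.normal(30, 5, n1 + n2),
    })
    sd = np.where(df["group"] == 1, 16.0, 7.5)           # unequal residual s.d., as in the FAQ
    df["y"] = -8 + 1.2 * df["x1"] - 0.9 * df["x2"] + 13 * (df["group"] == 2) + rng.normal(0, sd)
    df["g2"] = (df["group"] == 2).astype(int)

    # Variance-constrained pooled model [3], then group-specific residual variances as weights,
    # with the finite-sample factor (n-1)/(n-3), 3 coefficients per group.
    pooled = smf.ols("y ~ x1 + x2 + g2 + g2:x1 + g2:x2", data=df).fit()
    df["r"] = pooled.resid
    w = df.groupby("group")["r"].apply(lambda r: r.var(ddof=1) * (len(r) - 1) / (len(r) - 3))
    df["w"] = df["group"].map(w)

    # Variance-unconstrained model [4]: weighted least squares with weights 1/w.
    unconstrained = smf.wls("y ~ x1 + x2 + g2 + g2:x1 + g2:x2", data=df, weights=1.0 / df["w"]).fit()
    print(unconstrained.summary())

The weights here play the same role as Stata's aw=1/w; reported standard errors may differ slightly because of how each package normalizes the weights.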
7. Appendix: do-file and log providing results reported above

7.1 do-file

The following do-file, named uncv.do, was used. Up until the line reading “BEGINNING OF DEMONSTRATION”, the do-file is concerned with constructing the artificial dataset for the demonstration:

7.2 log

The do-file shown in 7.1 produced the following output: uncv.log
Code speedup tips
Sean Richards someone at invalid.com
Sun Mar 2 00:36:44 CET 2003

Just started looking at Python as a tool for investigating 2d Cellular Automata. I am a complete novice in both disciplines so be gentle (I don't pretend to be a skilled programmer either) ;) Found some code for a 1d CA which I have crudely modified to work with 2d CA. I would appreciate some advice on how to make this code more efficient, as it already takes about 1 minute with a 200x200 array on a 1.2 GHz processor. I can see that the nested loops are the bottleneck, but what better alternatives does Python have for iterating over a 2d array? I have only been playing with Python for a very short time and there is so much out there that I am getting myself a bit lost in all the information. Anyway, here is the code with a *very* simple rule as an example - read it and weep :)

    # Simple cellular automata - code 1022
    # Rule specifies that a cell should become black if any
    # of its neighbours were black on previous step
    # Original code by Frank Buss - http://www.frank-buss.de/automaton/rule30.html

    # import the Numeric module - used for the arrays
    from Numeric import *
    # import the Tk module - used to display the CA
    from Tkinter import *

    def CA_Function(Length):
        # Create the two lattices
        current = zeros((Length, Length), Int)
        next = zeros((Length, Length), Int)
        # Set the start cell to black
        current[len(current)/2, len(current)/2] = 1
        next[len(current)/2, len(current)/2] = 1
        # Apply the rule
        for step in xrange(1, (Length/2)-1):
            for i in xrange(1, len(current)-1):
                for j in xrange(1, len(current)-1):
                    if current[i-1,j] == 1 or current[i,j-1] == 1 or \
                       current[i,j+1] == 1 or current[i+1,j] == 1:
                        next[i,j] = 1
            # Swap the lattices at each step
            (current, next) = (next, current)

    # Draw the lattice
    def CA_Draw(Lattice):
        for x in range(1, len(Lattice)-1):
            for y in range(1, len(Lattice)-1):
                if Lattice[x,y]:
                    pass  # (the drawing call was lost in the archived post)

    # Initialise the Tk interface
    root = Tk()
    root.title('2d CA')
    # Create the empty image
    Length = 200
    # Apply the function
    # Display image

On the 200x200 array that gives me 40,000 elements, and I go over the entire array 100 times -> 4,000,000 iterations. Then filling the image is another 200x200 operations -> 40,000. Any tips on how to make this more efficient would be greatly appreciated.

| All spelling errors are intentional and are there to show new |
| and improved ways of spelling old words. |
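One common answer to this kind of question is to push the inner loops into the array library, so that each step becomes a handful of whole-array operations instead of roughly 40,000 Python-level iterations. The sketch below uses NumPy (the successor to the Numeric module imported above) and slice arithmetic; it is an illustration of the idea, not a drop-in replacement for the post's exact program, and the function names are made up.

    # Vectorized version of the "become black if any 4-neighbour was black" rule,
    # using NumPy slicing instead of nested Python loops (illustrative sketch).
    import numpy as np

    def ca_step(current):
        """Return the next lattice; borders are left fixed, as in the original loops."""
        nxt = current.copy()
        neighbours = (current[:-2, 1:-1] | current[2:, 1:-1] |
                      current[1:-1, :-2] | current[1:-1, 2:])
        nxt[1:-1, 1:-1] |= neighbours
        return nxt

    def run_ca(length=200, steps=None):
        lattice = np.zeros((length, length), dtype=np.uint8)
        lattice[length // 2, length // 2] = 1
        for _ in range(steps if steps is not None else length // 2 - 2):
            lattice = ca_step(lattice)
        return lattice

For a 200x200 lattice the roughly 100 steps run in a fraction of a second, because each step is a few C-level array operations rather than a double Python loop.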
High-Accuracy Fourier Transform Interferometry, Without Oversampling, with a 1-Bit Analog-to-Digital Converter

We demonstrate a new technique for performing accurate Fourier transform interferometry with a 1-bit analog-to-digital (AD) converter that does not require oversampling of the interferogram, unlike in other 1-bit coding schemes that rely on delta-sigma modulation. Sampling aims at locating the intersections {z[i]} of the modulation term s(z) of the interferogram and a reference sinusoid r(z) = A cos(2πf[r]z), where z is the optical path difference. A new autocorrelation-based procedure that includes the accurate recovery of the equally sampled amplitude representation {s(k)} of s(z) from {z[i]} is utilized to calculate the square of the emission spectrum of the light source (sample). The procedure is suitable for interferograms that are corrupted with additive noise. Sinusoid-crossing sampling satisfies the Nyquist sampling criterion, and a z[i] exists within each sampling interval Δ = 1/(2f[r]), if A ≥ ‖s(z)‖ for all z, and f[r] ≥ f[c], where f[c] is the highest frequency component of s(z). By locating a crossing at an accuracy of 1 part in 2^16, we determine the multimode spectrum of an argon-ion laser with a 1-bit AD converter that performs like a 13-bit amplitude-sampling AD converter.

© 2000 Optical Society of America

OCIS Codes
(070.4790) Fourier optics and signal processing : Spectrum analysis
(120.3180) Instrumentation, measurement, and metrology : Interferometry
(120.6200) Instrumentation, measurement, and metrology : Spectrometers and spectroscopic instrumentation

Vincent Ricardo Daria and Caesar Saloma, "High-Accuracy Fourier Transform Interferometry, Without Oversampling, with a 1-Bit Analog-to-Digital Converter," Appl. Opt. 39, 108-113 (2000)
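The crossing-based sampling idea can be illustrated numerically: a 1-bit comparator only reports the sign of s(z) − r(z), yet the crossing locations it implies carry amplitude information, because s(z[i]) = A cos(2πf[r]z[i]) at each crossing. The sketch below is a toy illustration of that principle in Python, not a reconstruction of the authors' autocorrelation-based procedure, and all signal parameters are invented.

    # Toy illustration of sinusoid-crossing sampling: recover samples of s(z)
    # from the crossings of s(z) - r(z), where r(z) = A*cos(2*pi*fr*z).
    # Parameters below are arbitrary; this is not the paper's algorithm.
    import numpy as np

    A, fr = 2.0, 8.0                       # reference amplitude and frequency (fr >= fc)
    z = np.linspace(0.0, 2.0, 200001)      # dense grid standing in for continuous z
    s = 1.2 * np.cos(2 * np.pi * 3.0 * z) + 0.5 * np.cos(2 * np.pi * 5.0 * z)   # |s| <= A
    r = A * np.cos(2 * np.pi * fr * z)

    d = s - r                              # the comparator sees only sign(d)
    idx = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0]
    # linear interpolation for the crossing position inside each bracketing interval
    zc = z[idx] - d[idx] * (z[idx + 1] - z[idx]) / (d[idx + 1] - d[idx])

    s_at_crossings = A * np.cos(2 * np.pi * fr * zc)   # s(zc) = r(zc) at a crossing
    print(len(zc), "crossings; max recovery error:",
          np.max(np.abs(s_at_crossings - np.interp(zc, z, s))))

With A ≥ |s(z)| and f[r] ≥ f[c], every reference half-period contains a crossing, so the recovered points are dense enough to resample s(z) on a uniform grid before the Fourier transform.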
Requirements for the Minor in Mathematics

In order to receive a minor in mathematics, students are required to complete 28 units: 16 units of lower division required courses and 12 units of upper division electives. All electives must be approved in advance by a math minor advisor. Students interested in earning a minor in mathematics should see a mathematics minor advisor for the evaluation of their transcripts and admission to the program. Please download a copy of the appropriate form from the Forms section of the Advisement webpage and present it along with an up-to-date copy of your DPR to a math minor advisor. You can find the list of all math advisors on the Advisement webpage.

The requirements for a minor in mathematics are as follows:

• Lower Division Required Courses (16 Units)
Math 150A Calculus I (5)
Math 150B Calculus II (5)
Math 250 Calculus III (3)
Math 262 Introduction to Linear Algebra (3)
Note: Phil 230, Symbolic Logic I, is recommended and satisfies the Critical Thinking section of General Education.

• Upper Division Electives (12 Units)
Selected upper division mathematics courses totaling at least 12 units, which must be approved in advance by a mathematics minor advisor. Depending on the student's area of interest, any one of the following sequences could be used as part or all of the required 12 units, or other choices if approved by the mathematics minor advisor.
Computer Mathematics: Math 326, 340, 481A, 482
Secondary Teaching: Math 320, 341 or 360, 370
Statistics: Math 340, 440A, 440B

• Application Form for a Minor in Mathematics
st: Re: ancova for repeated designs [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: Re: ancova for repeated designs From Joseph Coveney <jcoveney@bigplanet.com> To Statalist <statalist@hsphsun2.harvard.edu> Subject st: Re: ancova for repeated designs Date Mon, 16 Aug 2004 23:25:16 +0900 tmmanini wrote: . . . You are right that I have only 6 levels of convariate (a possible problem), but I took your advice on several fronts and I'm still not fully comprehending the solution. Here's what I did: (I used my data with 32 subjects, which is included at the end). First I ran the model positioning g after the id|g random error term, and the specifying if t>1. I got a sig. interaction, but according a recent addition to the listserv, I learned that this interaction may not be as important as I once thought. Therefore, I dropped the g*t term from the [excerpted ANOVA table with zero sum of squares and degrees of freedom for continuous covariate] These results seemed weird, based on the previous F value for x being much higher. So I dropped t>1 from the model [excerpted ANOVA table with different results] I'm not sure which model it correct? Based on recent addition by Joseph Coveney, the last model (without t>1) would be correct. Here is the data, sorry it is long, there are 32 subjects, 3 levels of g, 3 levels of t and 1 level of x (remeber x is the first level of t (time)) I'm trying to covary for the pre-test level (time==1). One more thing, I successfully implemented the adjust command by included id in the "by" statement. However, I only received adjustments for those subjects I specify (ie. id<=4 gives me subjects 1 through 3), which makes sense. However, I would like to report the adjusted mean for each group over each time period. I guess I can request all id's be shown on the output by using "adjust x, by(g t id)" and then taking the mean of the id's for each group, but that seems cumbersome. Is there a better way? . . . [dataset excerpted] David Airey seems to have been on the right track a couple of posts ago. The inconsistent estimates suggests that there is some kind of collinearity between the groups-by-subject interaction term and the covariate, which undermines estimation. You can see it happening using -anova , sequential- and stepwise shifting the position of x from first to after the id|g term. You cannot do this with Stata's repeated-measures ANOVA syntax for subjects-within-groups error term--you need to use alternative syntax and just call it what it is, an interaction term. This is illustrated below in a do-file: as the covariate enters the model going past the groups-by-subjects interaction term in a sequential sums-of-squares ANOVA (SAS Type I sums of squares), its sum of squares is zeroed. Your dataset is imbalanced, and this often induces this phenomenon to at least to some extent in factorial ANOVA, but I didn't think that it wreaks this much havoc, even with repeated-measures ANOVA. The imbalance might be compounding the effects of collinearity otherwise in the covariates and factors. What did SPSS give you, by the way? At a loss as to what else to suggest, depending upon your objectives you can try -xtreg, re-; transforming the covariate somehow (centering works for some situations, like polynomials); perhaps breaking the analysis into two (groups 1 and 2, groups 1 and 3, Bonferroni adjustment); or other avenues that others on the list might suggest. 
As far as dropping the term for the interaction of group and continuous covariate goes, the assumption of homogeneity of slope is actually important. When the slope is substantially different between the groups, it complicates interpretation and qualifies the conclusions. It's just difficult to test for interaction powerfully in the average situation, at least it is for interactions of categorical variables.

Don't drop the -if t > 1- from the command (model statement). You seem to have got confused by the Winer example--it didn't use the first time point's response values as the covariate, so it didn't need to exclude the first time point from analysis. You do. (In the do-file below, I pre-emptively dropped the first observations, so the model statement doesn't need the -if t > 1- anymore.)

You're right and I was mistaken as far as -adjust-: subjects will reflect their own intercepts and not the group average. You could use -predict-.

Joseph Coveney

"share the within-subjects error term with the between-subjects factor"--hope this clears up before next Monday.

set more off
input byte id byte g byte t byte y byte x
[dataset excerpted--given in earlier post in this thread]
assert x == y if t == 1
drop if t == 1
* Stepwise shifting of entry position of x
anova y x g id*g t g*t, continuous(x) sequential
anova y g x id*g t g*t, continuous(x) sequential
anova y g id*g x t g*t, continuous(x) sequential
quietly anova y g / id|g x g*x t g*t, continuous(x)
predict y_hat, xb
predict y_res, residual
graph7 y_res g, xlabel ylabel yline(0)
graph7 y_res y_hat, xlabel ylabel yline(0)
drop y_*
* Using -predict- for within-cell x-adjusted predicted means
rename x x_prime
summarize x_prime, meanonly
generate float x = r(mean)
predict y_hat
bysort g t: summarize y_hat
drop x y_hat
rename x_prime x
reshape wide y, i(id) j(t)
manova y2 y3 = g x, continuous(x)

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Generating Ontology: From Quantum Mechanics to Quantum Field Theory

MacKinnon, Edward (2005) Generating Ontology: From Quantum Mechanics to Quantum Field Theory. UNSPECIFIED. (Unpublished)

Philosophical interpretations of theories generally presuppose that a theory can be presented as a consistent mathematical formulation that is interpreted through models. Algebraic quantum field theory (AQFT) can fit this interpretative model. However, standard Lagrangian quantum field theory (LQFT), as well as quantum electrodynamics and nuclear physics, resists recasting along such formal lines. The difference has a distinct bearing on ontological issues. AQFT does not treat particle interactions or the standard model. This paper develops a framework and methodology for interpreting such informal theories as LQFT and the standard model. We begin by summarizing two minimal epistemological interpretations of non-relativistic quantum mechanics (NRQM): Bohrian semantics, which focuses on communicables; and quantum information theory, which focuses on the algebra of local observables. Schwinger's development of quantum field theory supplies a unique path from NRQM to QFT, where each step is conceptually anchored in local measurements. LQFT and the standard model rely on postulates that go beyond the limits set by AQFT and Schwinger's anabatic methodology. The particle ontology of the standard model is clarified by regarding the standard model as an informal modular theory with a limited range of validity.
Math Forum - Problems Library - Pre-Algebra, Operations with Numbers

Operations with Numbers

Problems in this category can be solved using basic operations (addition, subtraction, multiplication, division, exponentiation), applying estimation skills, applying the order of operations, or using the properties of numbers (identity, associative, multiplicative, distributive). They should be suitable for the beginning pre-algebra student.

Related Resources
Interactive resources from our Math Tools project: Math 7: Operations with Numbers
The closest match in our Ask Dr. Math archives: Middle School Arithmetic
NCTM Standards: Number and Operations Standard for Grades 6-8

Access to these problems requires a Membership.
Probability of words in a random string

Geoffrey Falk (Ranch Hand, joined Aug 17, 2001; Sun Certified Programmer for the Java 2 Platform):
Suppose I give you a 4-letter word with letters chosen from an alphabet of 26 letters. What is the probability that a random 13-letter string contains the word as a substring? Alternatively, write a Java program to calculate the probability.

Reply (Ranch Hand, joined Feb 18, 2005):
Geoffrey Falk wrote: Suppose I give you a 4-letter word with letters chosen from an alphabet of 26 letters. What is the probability that a random 13-letter string contains the word as a substring? Alternatively, write a Java program to calculate the probability.

Pn = prob that word occurs at position n of string.
For 0 <= n <= 9, Pn = (1/26)^4 = 1/456976.
For 9 < n, Pn = 0.
P(word does NOT appear in string) = [(1 - P1) * (1 - P2) * ... (1 - P12)] = (1 - (1/456976))^13 = (456975/456976)^13
P(word DOES exist in string) = 1 - P(word does NOT appear in string) = 1 - (456975/456976)^13 approx= .0000284475
...which is about 1 in 35,000.
According to this formula, you reach the 50-50 point with a string around 300,000 letters long. That seems long.

Geoffrey Falk:
Are you sure? I think the probability depends on the word that I give you. For instance, "AAAA" has a different probability than "ABAB". (Hint: take account of double counting.)

Reply (Ranch Hand, joined Feb 18, 2005):
Loophole: In your problem statement, you used the word "chosen". In a nit-picky sense, that implies that there are no repeated letters in the word. See the Wikipedia page on the Binomial Coefficient function, aka the Choose Function.

Follow-up:
Yeah, that's what I was thinking. What if the word were 2 letters long and the string 3 letters? Is "AA" more likely than "AB" to show up in a string of 3 random letters? "AA" will match if the string is either "AA." (using regular expressions) or ".AA". That matches only 51 possible different strings. "AB" would match "AB." or ".AB", which covers 52 strings (since there's no overlap). So it would seem that you're correct. ...assuming the "word" can have repeated letters.
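The later posts are right that the answer depends on the word. One way to make that concrete (a sketch in Python rather than the Java the first post asks for, added here editorially) is to compute the probability exactly with a small KMP-style automaton, which handles the double counting automatically:

    # Exact P(a uniform random string of `length` letters contains `word`),
    # via a KMP failure-function automaton and a count of avoiding strings.
    from fractions import Fraction

    def containment_probability(word, length, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
        m = len(word)
        # prefix (failure) function of the word
        fail = [0] * m
        k = 0
        for i in range(1, m):
            while k > 0 and word[i] != word[k]:
                k = fail[k - 1]
            if word[i] == word[k]:
                k += 1
            fail[i] = k

        def step(state, ch):
            # state = number of characters of `word` matched so far (0..m-1)
            while state > 0 and word[state] != ch:
                state = fail[state - 1]
            return state + 1 if word[state] == ch else 0

        # dp[s] = number of strings of the current length that avoid the word, ending in state s
        dp = [0] * m
        dp[0] = 1
        for _ in range(length):
            ndp = [0] * m
            for s, cnt in enumerate(dp):
                if cnt == 0:
                    continue
                for ch in alphabet:
                    t = step(s, ch)
                    if t < m:                  # strings that complete the word are dropped
                        ndp[t] += cnt
            dp = ndp
        return 1 - Fraction(sum(dp), len(alphabet) ** length)

    for w in ("AAAA", "ABAB", "ABCD"):
        print(w, float(containment_probability(w, 13)))

Running it for "AAAA", "ABAB" and "ABCD" shows the self-overlapping words coming out slightly less likely, in line with the "AA" versus "AB" comparison above.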
help me out

October 8th 2012, 01:41 AM #1 (Junior Member, joined Sep 2012)

a, b and c are 3 different digits of the number ABC, where a and c are not equal to zero. The number obtained by reversing the digits is added to ABC and the sum is a perfect square. Find all such 3-digit numbers.

Re: help me out

October 8th 2012, 04:58 AM #2 (Junior Member, joined Oct 2012)

ABC + CBA = (100*a + 10*b + c) + (100*c + 10*b + a) = 101*(a + c) + 20*b. Try a + c = 3, 4, 5, ... and b = 1, 2, 3, 4, ... and see...
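Since the question only asks for all such three-digit numbers, a brute-force check is a quick way to confirm whatever the hint above produces. The short sketch below is one way to do it; the reading that all three digits must be different follows the problem statement.

    # Brute-force search: three-digit numbers ABC (a, c nonzero, digits all different)
    # such that ABC + CBA is a perfect square.
    from math import isqrt

    for a in range(1, 10):
        for b in range(0, 10):
            for c in range(1, 10):
                if len({a, b, c}) != 3:
                    continue
                total = (100 * a + 10 * b + c) + (100 * c + 10 * b + a)  # = 101*(a+c) + 20*b
                if isqrt(total) ** 2 == total:
                    print(100 * a + 10 * b + c, "+", 100 * c + 10 * b + a, "=", total)

Running it lists every qualifying number together with its reversal and the square they sum to.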
Persistence and extinction of single population in a polluted environment. (English) Zbl 1084.92032

From the introduction: Today, the most threatening problem to society is the change in the environment caused by pollution, affecting the long-term survival of species, human life style and the biodiversity of the habitat. Therefore the study of the effects of a toxicant on a population and the assessment of the risk to populations is becoming more important. Recently, a first attempt to consider a spatial structure has been carried out [see B. Buonomo and A. di Liddo, Dyn. Syst. Appl. 8, 181–196 (1999; Zbl 0936.35087); Nonlinear Anal., Real World Appl. 5, No. 4, 749–762 (2004; Zbl 1074.92036)], where a reaction-diffusion model is proposed to describe the dynamics of a living population interacting with a toxicant present in the environment (external toxicant) through the amount of toxicant stored in the bodies of the living organisms (internal toxicant). However, as the authors pointed out, even if the resulting model presents many features which make its study stimulating, such a modelling approach is a rough approximation to the biological phenomena at hand. Buonomo et al. viewed the internal toxicant as drifted by the living population and then, by balance arguments, derived a PDE system consisting of two reaction-diffusion equations coupled with a first-order convection equation; the corresponding ODE system was obtained as well [see Math. Biosci. 157, 37–64 (1999)]. This model is the most realistic to date, but its analysis is so difficult that only some analytic and numerical approaches have been used. Clearly, more work remains to be done. We use some new methods to investigate the model of Buonomo et al., and conditions for survival and extinction are obtained.

MSC:
92D40 Ecology
34C60 Qualitative investigation and simulation of models (ODE)
MathGroup Archive: February 2006 [00373] [Date Index] [Thread Index] [Author Index] Re: Re: Re: Solve or Reduce? • To: mathgroup at smc.vnet.net • Subject: [mg64453] Re: [mg64412] Re: [mg64398] Re: Solve or Reduce? • From: Math Novice <math_novice_2 at yahoo.com> • Date: Fri, 17 Feb 2006 04:12:31 -0500 (EST) • Sender: owner-wri-mathgroup at wolfram.com Thank you all for your responses. I have a lot to learn about using Mathematica but it should be a very interesting process. What I am trying to do with this problem is to try and understand how an automobile?s double wishbone suspension determines the orientation of the wheel?s plane as the suspension moves. I need to be able to rotate three non collinear points, say, (13,0,0), (13,3.5,0) and (13, 3.5,3.5) an arbitrary number of degrees (corresponding to the movement of the suspension) and then use these three new points to determine the orientation of the wheel?s plane so that I can graph the movement. I?ve already done this for a semi-trailing arm suspension which basically just required rotating three points in the wheel?s plane about an arbitrary axis. I want to be able to see visually why the more modern double wishbone and multilink suspensions are superior to the older suspension designs. David Park <djmp at earthlink.net> wrote: The following table gives the solutions in degrees as a varies from 0 to 90 degrees. Each triplet is {a, first b solution, second b solution}. Because the RootSearch routine I was using gives the results in sorted order, the two roots switch columns at 60 degrees. {{0, 0., 277.628}, {5, 8.14157, 279.009}, {10, 16.3816, 281.034}, {15, 24.8218, 283.798}, {20, 33.5778, 287.445}, {25, 42.7897, 292.171}, {30, 52.6316, 298.233}, {35, 63.3125, 305.928}, {40, 75.0545, 315.54}, {45, 88.0132, 327.201}, {50, 102.111, 340.683}, {55, 116.839, 355.264}, {60, 9.92425, 131.266}, {65, 23.8027, 144.4}, {70, 36.4951, 155.625}, {75, 47.9915, 164.805}, {80, 58.473, 172.123}, {85, 68.1628, 177.873}, {90, 77.2632, 182.348}} There are always two solutions and they are perfectly well behaved. Those who have the Cardano3 complex graphics package, and Ted Ersek's RootSearch package and who are interested in the solution, and animations of the solution, may contact me and I will send them the solution notebook, David Park djmp at earthlink.net From: Math Novice [mailto:math_novice_2 at yahoo.com] To: mathgroup at smc.vnet.net I am trying to find the angle b corresponding to the points on the circumference of the circle (5+8 Cos[b], 7+8 Sin[b]) that are a distance of 7 units from a point on the circumference of the circle (13 cos[a], 13 Sin[a]) for angles a in the first quadrant. (5+8 Cos[b], 7+8 Sin[b]) is a circle of radius 8 and center (5,7) and (13 cos[a], 13Sin[a]) is a circle of radius 13 and center (0,0). (13,0) on the circle (13 cos[a], 13 Sin[a]) and (13,7) on (5+8 Cos[b], 7+8 Sin[b]) are the first set of points that are 7 units apart when the angle a (of the larger circle ) is equal to 0. As I increase the value of a and use my compass (set at 7 units) to measure on the printout of the diagram of the two circles it seems that there should always be an angle for b that corresponds to a and b should always increase as a increases but something happens at about 32 degrees. For some calculations b starts to decrease as a increases past 32 degrees or with some other calculations b becomes negative as a increases past 32 degrees. Any idea of what I'm going wrong?
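As a purely numerical cross-check of the two-solution behaviour reported in the table above (this is an editorial sketch in Python, not the Mathematica/RootSearch approach used in the thread), one can scan b for sign changes of the distance condition:

    # For each angle a (big circle, radius 13, centre at the origin), find angles b on the
    # small circle (centre (5,7), radius 8) whose point lies 7 units from the point at a.
    # Illustrative numeric check only.
    import numpy as np

    def solutions_b(a_deg, n=3600):
        a = np.radians(a_deg)
        px, py = 13 * np.cos(a), 13 * np.sin(a)
        b = np.linspace(0, 2 * np.pi, n, endpoint=False)
        qx, qy = 5 + 8 * np.cos(b), 7 + 8 * np.sin(b)
        f = np.hypot(qx - px, qy - py) - 7          # zero where the distance is exactly 7
        roots = []
        for i in range(n):
            j = (i + 1) % n
            if f[i] == 0 or f[i] * f[j] < 0:        # a sign change brackets a root
                t = f[i] / (f[i] - f[j])
                roots.append(np.degrees(b[i] + t * (b[j] - b[i])) % 360)
        return sorted(roots)

    for a_deg in (0, 15, 30, 45):
        print(a_deg, [round(r, 2) for r in solutions_b(a_deg)])

At a = 0 this reproduces the first row of the table (b near 0° and near 277.6°), and it returns two well-behaved roots for each a, with no special behaviour at 32 degrees; the apparent jump there comes from which of the two branches a given solver happens to follow.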
53. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e. V. (GMDS)

Modelling continuous variables with a spike at zero – on issues of a fractional polynomial based procedure

Meeting Abstract
Published: September 10, 2008

In clinical epidemiology, a frequently occurring problem is to model a dose/response function for a variable X which has value 0 for a proportion of individuals ("spike at zero"), and a quantitative value for the others, e.g. cigarette consumption or an occupational exposure. When the individuals with X = 0 are seen as a distinct sub-population, it may be necessary to model the outcome in the subpopulation explicitly with a dummy variable, and the rest of the distribution as a positive continuous variable using a dose-response relationship [Ref. 1].

The concept of fractional polynomials [Ref. 2] has been shown to be useful for estimating dose-response relationships for continuous variables. A multivariable procedure (MFP) is available to select variables and to determine the functional relationship in many types of regression models. A modification of the function selection component for variables with a spike at zero was proposed in chapter 4 of Royston & Sauerbrei [Ref. 3]. A binary variable indicating zero values of X is added to the model. The procedure considers in two stages whether X has any effect, whether individuals with X = 0 should be considered as a separate subgroup and whether an FP functional relationship for the positive values improves the model fit.

In three examples with substantial differences in the distributions of X, strength of the effects and correlations with other variables, we will discuss in a multivariable context issues concerning the modelling of a continuous variable with a spike at zero. The examples will illustrate that sometimes a binary component will be sufficient for a good model fit, whereas in other cases an FP function, with or without the binary component, is a better model. We propose a new procedure which will often improve modelling of continuous variables with a spike at zero. Adjustment for other important predictors can be done in the usual way.

References
1. Robertson C, Boyle P, Hsieh CC, Macfarlane GJ, Maisonneuve P. Some statistical considerations in the analysis of case-control studies when the exposure variables are continuous measurements. Epidemiology 1994; 5: 164-70.
2. Royston P, Altman DG. Regression using fractional polynomials of continuous covariates: parsimonious parametric modeling (with discussion). Applied Statistics 1994; 43 (3): 429-467.
3. Royston P, Sauerbrei W. Multivariable regression modelling. A pragmatic approach based on fractional polynomials for modelling continuous variables. Wiley; 2008.
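As a rough illustration of the kind of design this implies (not the authors' MFP implementation; the data, variable names and FP powers below are all made up for the example), the spike-at-zero coding amounts to a zero indicator plus FP-style transforms of the positive part:

    # Illustrative spike-at-zero coding (not the MFP software; everything here is assumed).
    # X = exposure with a spike at zero; y = outcome; powers 0 (log) and 0.5 are just one
    # candidate FP2-style pair, not a selected model.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    x = np.where(rng.random(n) < 0.4, 0.0, rng.gamma(2.0, 10.0, n))    # 40% exact zeros
    y = 1.0 + 0.8 * (x > 0) + 0.05 * np.sqrt(x) + rng.normal(0, 1, n)  # toy outcome

    z = (x > 0).astype(float)              # binary "positive exposure" indicator
    xpos = np.where(x > 0, x, 1.0)         # placeholder so the transforms are defined at 0
    design = sm.add_constant(np.column_stack([
        z,                                 # spike-at-zero dummy
        z * np.log(xpos),                  # FP term with power 0 (log); zero rows contribute 0
        z * np.sqrt(xpos),                 # FP term with power 0.5; zero rows contribute 0
    ]))

    print(sm.OLS(y, design).fit().summary())

The two-stage selection described in the abstract would then compare this full specification against the dummy-only and FP-only submodels.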
Mental FlossRetrobituaries: Edsger Dijkstra, Computer Scientist In our Retrobituaries series, we highlight interesting people who are no longer with us. Today let's explore the life of Edsger Dijkstra, who died at 72 in 2002. If you’ve used a computer or smart phone in the last few decades, you’ve come into contact with the work of Edsger Dijkstra. Since his death in 2002, his research in the field of computer science has in many ways only grown more important. Here are a few things you didn’t know about his life and his science. If you took his computer science class, you probably didn’t touch a computer. Professor Dijkstra once said, “Computer science is no more about computers than astronomy is about telescopes,” and he taught his courses accordingly. He was a proponent of elegance in mathematical proofs, whereby puzzles are solved with efficiency and aesthetic sensitivity. Grades were determined by the final exam, which was neither written on a piece of paper nor typed on a computer. Rather, students were given individual oral examinations in his office or at his home. The conversational exams lasted hours at a time, and students were asked how they might prove various mathematical propositions. They were then challenged to write out their proofs on a chalkboard. After the exam, students were offered a beer if they were of age, or a cup of tea, if they were not. He didn’t use email. Or a word processor. Dijkstra was famous for his general rejection of personal computers. Instead of typing papers out using a word processor, he printed everything in longhand. He wrote well over a thousand essays of significant length this way, and for most of his academic career, they proliferated by ditto machine and fax. Each essay was given a number and prefixed with his initials, EWD. Students who emailed Dijkstra were asked to include a physical mailing address in the letter. His secretary would print the message, and he would respond by hand. Computers weren’t the only technology he shunned. He refused to use overhead projectors, calling them “the poison of the educational process.” Use Google Maps? You can thank Dijkstra. Among his profound contributions to computer science is a solution to the “single source shortest-path problem.” The solution, generally referred to as Dijkstra’s algorithm, calculates the shortest distance between a source node and a destination node on a graph. (Here is a visual representation.) The upshot is that if you’ve ever used Google Maps, you’re using a derivation of Dijkstra’s algorithm. Similarly, the algorithm is used for communications networks and airline flight plans. He “owned” a nonexistent company. In many of his more humorous essays, he described a fictional company of which he served as chairman. The company was called Mathematics, Inc., and sold mathematical theorems and their maintenance. Among the company’s greatest triumphs was proving the Riemann hypothesis (which it renamed the Mathematics, Inc. Theorem), and then it unsuccessfully attempted to collect royalties on all uses of the mathematical conjecture in the real world. Evidence was never given of the proof, of course, because it was a trade secret. Mathematics Inc. claimed to have a global market share of 75 percent. He was the first programmer in the Netherlands. In the 1950s, his father suggested that he attend a Cambridge course on programming an Electronic Delay Storage Automatic Calculator, or EDSAC. 
Dijkstra did, believing that theoretical physics (which he was studying at the time at Leiden University) might one day rely upon computers. The following year, he was offered a job at Mathematisch Centrum in Amsterdam, making him the first person in the Netherlands to be employed as something called a “programmer.” (“A programmer?” he recalled of the moment he was offered the position. “But was that a respectable profession? For after all, what was programming? Where was the sound body of knowledge that could support it as an intellectually respectable discipline?” He was then challenged by his eventual employer to make it a respectable discipline.)

This would later cause problems. On his marriage application in 1957, he was required to list his profession. Officials rejected his answer—“Programmer”—stating that there was no such job.

Previously on Retrobituaries: Albert Ellis, Pioneering Psychologist.
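For readers who want to see what "the shortest distance between a source node and a destination node on a graph" looks like in code, here is a minimal, generic rendering of Dijkstra's algorithm (an editorial sketch, not code from the article or from any mapping product):

    # Minimal Dijkstra's algorithm on a weighted graph given as an adjacency dict.
    import heapq

    def dijkstra(graph, source):
        """graph: {node: [(neighbour, weight), ...]} with non-negative weights.
        Returns a dict of shortest distances from source."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale queue entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    roads = {"A": [("B", 4), ("C", 2)],
             "B": [("D", 5)],
             "C": [("B", 1), ("D", 8)],
             "D": []}
    print(dijkstra(roads, "A"))   # {'A': 0, 'B': 3, 'C': 2, 'D': 8}

Routing software builds on far more elaborate derivations of this idea, but the greedy "always extend the closest unsettled node" step is the same.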
Types of Angles

Types of angles are discussed here in this lesson. We will learn the different types of angles.

1. Acute Angle: An angle whose measure is less than 90° is called an acute angle. ∠MON shown in the adjoining figure is equal to 60°. So, ∠MON is an acute angle.

2. Right Angle: An angle whose measure is 90° is called a right angle. ∠AOB shown in the adjoining figure is 90°. So, ∠AOB is a right angle.

3. Obtuse Angle: An angle whose measure is greater than 90° but less than 180° is called an obtuse angle. ∠DOQ shown in the adjoining figure is an obtuse angle.

4. Straight Angle: An angle whose measure is 180° is called a straight angle. ∠XOY shown in the adjoining figure is a straight angle. A straight angle is equal to two right angles.

5. Reflex Angle: An angle whose measure is more than 180° but less than 360° is called a reflex angle. ∠AOB shown in the adjoining figure is 210°. So, ∠AOB is a reflex angle.

6. Zero Angle: An angle whose measure is 0° is called a zero angle. When the two arms of an angle lie on each other, a 0° angle is formed.
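The classification above is easy to turn into a small checking routine; the sketch below simply restates the six cases for a measure given in degrees (assumed to lie in the range 0° up to, but not including, 360°).

    # Classify an angle (in degrees, 0 <= d < 360) using the six types listed above.
    def angle_type(d):
        if d == 0:
            return "zero angle"
        if d < 90:
            return "acute angle"
        if d == 90:
            return "right angle"
        if d < 180:
            return "obtuse angle"
        if d == 180:
            return "straight angle"
        return "reflex angle"

    for measure in (0, 60, 90, 120, 180, 210):
        print(measure, "->", angle_type(measure))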
{"url":"http://www.math-only-math.com/types-of-angles.html","timestamp":"2014-04-16T16:25:57Z","content_type":null,"content_length":"13996","record_id":"<urn:uuid:ce0644a0-5cd4-4b7e-bc01-5d2618ab5637>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Cyclic subgroup

October 21st 2007, 04:02 PM, #1 (Super Member, joined Mar 2006):

Suppose that the order of some finite Abelian group is divisible by 10. Prove that the group has a cyclic subgroup of order 10.

My proof so far: Suppose G is a finite Abelian group with 10 | |G|. By a theorem, I know that G contains a subgroup with order 10. Now, a subgroup of an Abelian group is also Abelian (is that right? I recall that from an exercise I did), and since G can be written as $Z_{2} \oplus Z_{5} \oplus ...$ both of these groups are cyclic, so G contains a cyclic subgroup. Is that right?

October 21st 2007, 06:01 PM, #2 (Global Moderator, joined Nov 2005, New York City), quoting the question above:

Since you are allowed to use the fundamental theorem for finite abelian groups it means the converse of the theorem of Lagrange holds. Thus, there is a subgroup of order 10. Since 10 is square free it means by the fundamental theorem this group is cyclic. Q.E.D.

October 24th 2007, 05:08 AM, #3, quoting the question above:

Use Lagrange's theorem (or the corollary) Ü
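To make the argument in the replies fully explicit, here is one way the details can be written out (this is an added note, not part of the original thread); it only needs Cauchy's theorem rather than the full structure theorem.

Since $10 \mid |G|$, both $2$ and $5$ divide $|G|$, so $G$ contains elements $a$ and $b$ with
$$\operatorname{ord}(a)=2, \qquad \operatorname{ord}(b)=5 .$$
Because $G$ is abelian and $\gcd(2,5)=1$, the element $ab$ satisfies
$$\operatorname{ord}(ab)=\operatorname{ord}(a)\,\operatorname{ord}(b)=10 ,$$
so $\langle ab\rangle$ is a cyclic subgroup of order $10$.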
{"url":"http://mathhelpforum.com/advanced-algebra/21019-cyclic-subgroup.html","timestamp":"2014-04-20T14:32:03Z","content_type":null,"content_length":"37976","record_id":"<urn:uuid:57c4fd55-71f1-48e1-8f5b-c7e85a743a3d>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
How to find the total surface area S of a cylinder with radius r and height h

(Math Help Forum thread, May 3rd 2008; seven posts between 05:57 PM and 06:50 PM.)

Original post:

S = 2 pi r^2 + 2 pi r h, where pi is the symbol 3.141..., r is the radius and h is the height.

(a) Rewrite the formula to describe the total surface area for all cylinders with a height of 10 cm.

I dont get how u do it. what r u supposed to do? how do i rewrite the formula to describe the total surface area for all cylinders with a height of 10 cm. then how do i express that equation into factored form?

Reply:

You are given a formula: $S=2\pi r^2 + 2\pi rh$. With this formula, you can find the surface area $S$ of any cylinder, provided that you substitute the radius for $r$ and the height for $h$. So, for example, if I have a cylinder of radius 5 cm and height 6 cm, its surface area would be: $S=2\pi r^2 + 2\pi rh=2\pi (5)^2 + 2\pi (5)(6) = 110\pi\approx 345.58\text{ cm}^2$. Now, suppose you were only considering cylinders with a height of 10 cm. What sort of substitution should you make here? Your new formula should relate the surface area directly to the radius, since the height is fixed. Once you come up with your new formula, find the common factors in each term, and factor them out.

Original poster:

so what would the new formula be?

Reply:

You can't figure it out? The new formula should deal only with cylinders that have a height of 10 cm. In our given formula, $h$ represents height, so all we have to do is substitute 10 cm for $h$ to produce another formula that only works with cylinders of that height. Do you see? We have $S=2\pi r^2 + 2\pi rh$ with $h=10\text{ cm}$ so $S=2\pi r^2 + 2\pi r(10)$, and you can simplify from there. Now, try to do the factoring on your own: all you have to do is find the common factors of each term, and pull them out of the expression. For example, $3x^3 + 27x^2 - 9x$ factors as follows:

$3x^3 + 27x^2 - 9x$
$=3(x)(x^2) + (3)(9)(x)(x) - (3)(3)x$
$=(3x)(x^2) + (3x)(9x) - (3x)(3)$
$=(3x)(x^2 + 9x - 3)$

Each term has a factor of $3x$, so we can pull it out as a factor of the whole expression. Now, you try!
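For completeness (this is an added note, not one of the forum replies), the substitution the thread is driving at gives, for every cylinder of height 10 cm,

$$S = 2\pi r^2 + 2\pi r(10) = 2\pi r^2 + 20\pi r = 2\pi r\,(r + 10),$$

where the common factor $2\pi r$ has been pulled out of both terms; this is the factored form the original poster was asked for.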
{"url":"http://mathhelpforum.com/geometry/37042-how-find-total-surface-area-s-cylinder-radius-r-height-h-can.html","timestamp":"2014-04-19T12:15:24Z","content_type":null,"content_length":"52962","record_id":"<urn:uuid:b4edc50f-4e2e-49a1-b97b-f04382f26eab>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Volume 3, Issue 1, January 1960

• Starting from the Liouville equation, a chain of equations is obtained by integrating out the coordinates of all but one, two, etc., particles. One ``test'' particle is singled out initially. All other ``field'' particles are assumed to be initially in thermal equilibrium. In the absence of external fields, the chain of equations is solved by expanding in terms of the parameter g = 1/nL_D^3. For the time evolution of the distribution function of the test particle, an equation is obtained whose asymptotic form is of the usual Fokker-Planck type. It is characterized by a frictional-drag force that decelerates the particle, and a fluctuation tensor that produces acceleration and diffusion in velocity space. The expressions for these quantities contain contributions from Coulomb collisions and the emission and absorption of plasma waves. By consideration of a Maxwell distribution of test particles, the total plasma-wave emission is determined. It is related to Landau's damping by Kirchoff's law. When there is a constant external magnetic field, the problem is characterized by the parameter g, and also the parameter λ = ω_c/ω_p. The calculation is made by expanding in terms of g, but all orders of λ are retained. To the lowest order in g, the frictional drag and fluctuation tensor are slowly varying functions of λ. When λ ≪ 1, the modification of the collisional-drag force due to the magnetic field is negligible. There is a significant change in the properties of plasma waves of wavelength greater than the Larmor radius which modifies the force due to plasma-wave emission. When λ ≫ 1, the force due to plasma-wave emission disappears. The collisional force is altered to the extent that the maximum impact parameter is sometimes the Larmor radius instead of the Debye length, or something in between. In the case of a slow ion moving perpendicular to the field, the collisional force is of a qualitatively different form. In addition to the drag force antiparallel to the velocity of the particle, there is a collisional force antiparallel to the Lorentz force. The force arises because the particle and its shield cloud are spiralling about field lines. The force on the particle is equal and opposite to the centripetal force acting on the ``shield cloud.'' It is much smaller than the Lorentz force.

• Alfvén hydromagnetic waves are propagated through a cylindrical plasma. The wave velocity, attenuation, impedance, and energy transfer are studied. The theoretical equations predict correctly the functional dependence of the velocity and attenuation, and from these quantities accurate measurements of plasma density and temperature can be obtained. A qualitative agreement between theory and experiment is obtained for the hydromagnetic coaxial waveguide impedance, and the energy transferred from an oscillating circuit to the hydromagnetic wave is measured to be 43 ± 10%.

• The propagation of waves through a plasma, wherein the density and/or magnetic field strength are slowly varying functions of position, is discussed. If the local propagation constant, k_x, is a slowly varying function of x, the adiabatic approximation will be valid. However, k_x^2 may pass through zero as a function of x. Using the WKB linear turning point connection formulas, examination shows that an incoming plasma wave is totally reflected in the region where k_x^2 ≈ 0. A similar analysis for the case where k_x^2 is a singular function of x shows that absorption of an incoming wave occurs in the vicinity of the singularity. Such singular behavior in k_x^2 can occur for propagation along the magnetic field when the wave frequency is equal to the local ion or electron cyclotron frequency. For propagation transverse to the magnetic field, an apparent singularity occurs at a frequency somewhat below the ion cyclotron frequency, and at the two hybrid frequencies of Auer, Hurwitz, and Miller. A detailed examination, including higher order effects in electron mass ÷ ion mass, finite electron and ion temperatures, and ion-ion and ion-electron collisions shows that the absorption will take place at the apparent singularity only if the physical damping processes are strong enough to swamp the reactive effects of the higher order corrections. Otherwise the higher order reactive effects introduce a new propagation mode into the dispersion equation with a root which, in the vicinity of the apparent singularity, is conjugate to the root of the original mode. Partial or total reflection now occurs at the apparent singularity instead of absorption. It is, however, conjectured that some of the original mode energy may be reflected into the new mode. As the new mode recedes from the region of the apparent singularity, its wavelength can become comparable to the particle Larmor radius. Energy in this mode may then be absorbed by phase-mixing processes which are of high order in the quantity (Larmor radius ÷ wavelength). Wave reflection from the apparent singularities will then heat ions in the case of the transverse ion cyclotron mode, and electrons in the case of the upper hybrid frequency.

• An electric discharge which is compressed by its own magnetic field, and ``stabilized'' by means of an axial magnetic field, can have transverse wave motions which cause its periodic compression and expansion. This kind of motion can cause the heating of the ions in the discharge. The simplest of these wave modes are described and an estimate is given of the power available to the waves as a result of the interaction of the electrons in the discharge with an axial electric field. This interaction can cause the attenuation or spontaneous growth of the waves, depending upon the circumstances. It is likely that in high current gas discharge experiments there are examples of growing and decaying waves of this type.

• The problem of instabilities in colliding ionized hydrogen beams, which has been treated by Kahn and Parker in the special case of zero temperature, is solved for the nonzero temperature case by taking Maxwell distributions for the equilibrium density functions. At sufficiently high temperature it is found that the random thermal motion will prevent growing oscillations. The boundary between the stable and unstable regions is plotted as a function of energy and density parameters. Certain phenomena associated with solar particle streams are discussed in terms of these results.

• In their Geneva paper, Trubnikov and Kudryavtsev calculated the cyclotron radiation from a hot plasma. In doing this, the assumption was made that the individual particles radiated as though they were in a vacuum. We have investigated this approximation by calculating the absorption length directly from the Boltzmann equation, and we find that indeed this assumption is correct whenever (ω_p/ω_e)^2 ≪ m^2, where m is the harmonic number of the radiation in question, ω_p is the plasma frequency, and ω_e is the cyclotron frequency. For a contained plasma, the left hand side of this inequality is of the order of magnitude of one, and thus the inequality is well satisfied for the dominant radiation from a plasma at high temperature. The angular independence of the absorption coefficient has been calculated, and this together with a more careful examination of the mechanism of thermonuclear energy transfer to the electrons, leads to a modification of the results presented by Trubnikov and Kudryavtsev at Geneva. In addition, it is shown that by the use of reflectors the critical size can be reduced by two orders of magnitude.

• The general theory of irreversible processes, developed by Prigogine and Balescu, is applied to the case of long range interactions in ionized gases. A similar diagram technique permits the systematic selection of all the contributions to the evolution of the distribution function, to an order of approximation equivalent to Debye's equilibrium theory. The infinite series which appear in this way can be summed exactly. The resulting evolution equations have a clear physical significance: they describe interactions of ``quasi particles,'' which are electrons or ions ``dressed'' by their polarization clouds. These clouds are not a permanent feature, as in equilibrium theory, but have a nonequilibrium, changing shape, distorted by the motions of the particles. From the mathematical point of view, these equations exhibit a new type of nonlinearity, which is very directly related to the collective nature of the interactions.

• We calculate the asymptotic value of the pair probability density ρ_2(r_2, r_1) for finding a fluid particle at a point r_2 far in the interior of a fluid, when it is known that there is a particle at r_1 in contact with the walls (rigid) of the container. This value is different from the well-known expression for the asymptotic value of ρ_2(r_2, r_1) when both r_2 and r_1 are in the interior of the fluid. Our derivation is based on the virial theorem for total momentum fluctuations in an equilibrium system and makes use of the assumption that there are no long range correlations in a fluid. Application is made of our result to re-derive simply the expression for the second virial coefficient and the exact equation of state of a hard-sphere gas in one dimension. Quantum systems are also treated.

• Some results are given on the connection existing between the Lee-Huang-Yang theory for the interacting Bose systems, and the Bogoliubov theory.

• The one-dimensional equilibrium spectra in isotropic turbulence are given for the physical transfer theories of Heisenberg, Kovásznay, and Obukhoff. These results are then compared with the experimental measurements of the spectrum of ∂^3u_1/∂x_1^3 fluctuations. For two of the theories (Heisenberg's and Kovásznay's), reasonable agreement is obtained for kη < 0.04, but for larger values of kη there is considerable divergence between the theoretical and experimental results. The relationship between the equilibrium and similarity spectra is also discussed for these two theories.

• The governing equations of an incompressible boundary layer over a flat plate in the presence of a shear flow with finite vorticity are derived. For large vorticity, a similarity solution is obtained. For moderate vorticity, one of the governing equations is replaced by an approximate one for which similarity solutions exist.

• The character of heat transport by cellular convection, which arises beyond the marginal state of stability in a layer of fluid bound between two constant temperature surfaces, is examined. It is shown that a simple equation characterizes the heat transport in the neighborhood of the marginal state of stability when the convection is steady, and its cellular pattern of motion is represented by the solutions of the linear theory. The results of the study include all three cases of boundary conditions, namely, when the bounding surfaces of the layer are both free, are both rigid, or one is free and the other is rigid.

• The effect of an impressed magnetic field on heat transport by convection, which arises from instability in a layer of an electrically conducting fluid, bounded between two constant temperature surfaces, is examined. It is shown that such a field reduces the amount of heat transported by convection, and that when the strength of the magnetic field is increased, such reduction becomes proportional to (π^2 Q)^−1, where Q = σμ^2 cos^2ϑ H^2 d^2/ρν, d is the depth of the layer, ρ the density, H the strength of the magnetic field, ϑ the inclination of the direction of H to the vertical, and σ, μ and ν are the coefficients of electrical conductivity, magnetic permeability, and kinematic viscosity, respectively, for all types of the boundary conditions. It is also shown that in the neighborhood of the marginal state of stability, a simple formula characterizes the heat transport by convection.

• The problem investigated is that of the penetration of a fluid into a porous medium containing a more viscous liquid. In order to do this, the flow potentials for a displacement front which is just about to become unstable are calculated. For such a displacement front it is possible to linearize the differential equations, and to give a description in terms of Fourier analysis. The law of growth for each spectral component of the front is deduced, and it is shown how the time dependence of the whole front can be represented by a superposition of elemental solutions. Subsequently, the effect of the heterogeneities contained in the porous medium is accounted for by introducing a random velocity perturbation term into the differential equation for each spectral component. In this fashion one arrives at an equation describing the growth of each spectral component of the fingers with time. It is shown that, under given external conditions, fingering should be independent of the speed with which the displacement proceeds. This is, in fact, what has been observed experimentally.

• The performance of a hot-wire thermal diffusion column is discussed. The role of spacers used for centering the hot wire has been investigated experimentally for a glass column of 9-mm i.d. cold wall, and hot wall 20-mil tungsten wire; the results suggest that the maximum separation is obtained when the spacers (20-mil nickel wire) are installed every 70 cm along the hot wire. The separation falls when the spacers come either closer together or farther apart. An explanation of this optimum dependence of separation on spacer distance is advanced.

• It is found that the entropy per gram of mixture remains constant in a flame or a one-dimensional chemically reacting gaseous flow system if all the binary diffusion coefficients are equal to each other and , where C̄_p is the specific heat at constant pressure per gram of the mixture, m is the average molecular weight of the mixture, and n is the number of moles per gram. This value for each of the binary diffusion coefficients corresponds to setting each of the Lewis numbers equal to unity. For detonations, or systems having large kinetic energy, the enthalpy (per gram) including kinetic energy remains constant if, in addition to the diffusion coefficients having this special value, the Prandtl number is equal to ¾. It is clear that the assumption of constant enthalpy should not be applied to hydrogen-bromine or hydrogen-oxygen flames where some coefficients of diffusion are very large and others very small. The constant enthalpy assumption is applied to unimolecular decomposition flames supported by the reaction A → sB′. It is found that, to a rough approximation, the flame velocity varies as the 1/12th power of s.

• Study of velocity fluctuations observed by means of ionization probes during the development of detonation reveals its significance as an indicator of physical characteristics of the flame. The scatter in time of arrival (the reciprocal of the velocity) was found normally distributed at an 85% probability level. The means and standard deviation were determined within 5% and 20%, respectively, at a confidence level of 90%. The intensity of scatter is interpreted consequently as indicative of the combustion front fluctuation that can be considered to delineate the ``effective flame thickness.'' It is found then that, as the flame accelerates, its effective thickness first increases, reaching a maximum in the vicinity of velocity overshoot, and then decreases, attaining finally a minimum, constant value when the steady detonation wave is established. An interesting bimodal distribution of scatter for the 2H₂–O₂ mixture has been observed, indicating a possible existence of two alternative, independent modes for the development of the process.

• A simplified model for dePackh's version of an electron guide field accelerator is set up by substituting for the actual external quadrupole or solenoidal focusing magnetic field an azimuthally symmetric focusing field, and by replacing the actual toroidal geometry by a cylindrical geometry with periodic boundary conditions. The Boltzmann equation for an electron beam in this system is studied, and a set of solutions is obtained which contain just enough parameters to represent the quasi-stationary behavior of the beam realistically. The values of these parameters are related to the initial conditions of the beam by the adiabatic invariance of linear charge density and of the radial and azimuthal action integrals in the absence of collisions and radiation. Thus, the quasi-stationary development of the beam in time is determined without an explicit time dependence in the Boltzmann equation. While the electron energy is being increased by a betatron field, the beam passes from a condition in which its electrons are in almost neutral equilibrium with respect to displacement from the axis (low temperature or `` '' regime) to a condition in which each electron is hardly affected by the other electrons (high temperature or betatron or ``½'' regime), as predicted by dePackh. If the beam is initially isothermal, the temperature becomes a monotonic decreasing function of distance from the axis in the course of electron energy increase.

• The electron density distribution and diffusion length have been investigated for a steady-state, diffusion-controlled radiofrequency discharge acting over a finite portion of an infinite cylinder in which there is a uniform axial gas flow. This model simulates to some extent the flow in a plasma wind tunnel. A qualitative relationship is obtained for the influence of active cylinder length and gas velocity parameter on the diffusion length. The effect of these parameters on the electric field necessary to sustain the discharge is also discussed. It is shown that the peak of the electron density distribution shifts downstream with increasing gas velocity, but never leaves the region of production. A numerical example is calculated for the case of helium, indicating that while there is moderate effect on breakdown parameters, the ambipolar case may be changed considerably by the presence of flow.
{"url":"http://scitation.aip.org/content/aip/journal/pof1/3/1/","timestamp":"2014-04-16T22:15:07Z","content_type":null,"content_length":"136001","record_id":"<urn:uuid:5083a514-4aae-470c-8f71-b8db7ed02eee>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving system of equations by addition

April 21st 2011, 06:42 AM:

Solve the system of equations by the addition method: I know the answer is 4, -4, however, how do I figure out what number comes first when completing the answer, is it (4, -4) or (-4, 4)? Thanks.

April 21st 2011, 06:49 AM:

The '$x$' value comes before the $y$ value usually - just think of coordinates.

April 21st 2011, 12:25 PM:

Is x the input and y the output or vice versa?

April 24th 2011, 10:19 AM:

Multiply the top by 3 and the bottom by -4. So we now get 3(4x - 3y = -28) and -4(3x + 2y = -4), so we now have

12x - 9y = -84
-12x - 8y = 16

Now if we add the 2 equations together you'll notice that the x's cancel. So -17y = -68. Now solve for y: y = 4.

Since we now know that y = 4, plug it into either of the original given equations; I'm going to plug it into the first one for you. 4x - 3y = -28 where y = 4, so we now have 4x - 3(4) = -28. Solve for x and you'll get x = -4.

So if x = -4 and y = 4 we can write it as (-4, 4). Hope that helps boss. (:
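To double-check the arithmetic above, here is a small Python sketch of the same elimination. Note that the second equation, 3x + 2y = -4, is inferred from the multiples shown in the last reply; it is not quoted in the original problem statement.

# System (as reconstructed from the thread):
#   4x - 3y = -28
#   3x + 2y = -4
# Eliminate x: multiply the first by 3 and the second by -4, then add.
a1, b1, c1 = 4, -3, -28
a2, b2, c2 = 3, 2, -4

m1, m2 = 3, -4                      # chosen so that m1*a1 + m2*a2 = 0
y = (m1 * c1 + m2 * c2) / (m1 * b1 + m2 * b2)
x = (c1 - b1 * y) / a1              # back-substitute into the first equation
print(x, y)                         # -4.0 4.0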
{"url":"http://mathhelpforum.com/algebra/178248-solving-system-equations-addition-print.html","timestamp":"2014-04-20T16:31:34Z","content_type":null,"content_length":"5978","record_id":"<urn:uuid:de12d0bf-fed7-4337-a405-bb346cd6f6cd>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
About metamathematical statements in a recent post by Timothy Gowers

Posted in Other, tagged cambridge, comment, implies, metamathematics, Timothy Gowers, undergraduates on 2011/10/10

(This is a comment about this post by Professor Timothy Gowers, in a series for mathematical undergraduates he has started there.)

I was delighted to rediscover several distinctions about mathematical discourse in ordinary English I usually do not keep consciously in mind when writing. But you write in the first part:

"Here are a few metamathematical statements.

-1- "There are infinitely many prime numbers" is true.
-2- The continuum hypothesis cannot be proved using the standard axioms of set theory.
-3- "There are infinitely many prime numbers" implies "There are infinitely many odd numbers".
-4- The least upper bound axiom implies that every Cauchy sequence converges.

In each of these four sentences I didn't make mathematical statements. Rather, I referred to mathematical statements."

I beg to differ slightly. First, many metamathematical statements are mathematical statements in a larger theory and can often be treated as mathematical objects (for example in model theory). Second, I would have drawn important distinctions between those four (but that was not exactly the subject of your post, which is already very detailed).

Further, each of your four sentences implies a specific mathematical universe, with minimal logical and set-theoretic axioms, for it to be meaningful and unambiguous. I think this is important to point out to young mathematicians. In these sentences we have most of the time silent implications together with an explicit "implies" (see below). There is a topological analogy: most properties of, say, a knot depend on the space it is embedded in.

Note that these sentences do not have the same implied strength or the same immediate relevance for the mathematician, undergraduate or not. So I prefer to rephrase them with parameters and implicit context:

-1- Theorem A is true (in implied theory T).
-2- Axiom C is independent from Axiom-System S.
-3- Theorem A has Corollary B (in implied theory T common to A and B).
-4- Axiom L (added to implied Theory R) gives it the strength to prove Theorem V.

The first sentence is of the most common kind for a mathematician. The third sentence is very common as well and is a very small step from -1-. Both -1- and -3- are used so frequently that the distinction between mathematics and metamathematics is blurred, as in common metalinguistic sentences people use every day: "Please, can you finish your sentence?" or "Do not answer this question!"

The fourth one is of strong metamathematical character and of interest to most mathematicians, because Theorem V is useful and a common way to express continuity. It could be paraphrased/expanded: one of the solutions to create a mathematical universe where you can have a notion of continuity for your analysis theorems is to have a theory R consistent with axiom L, add this axiom L to R, creating theory R2, and go on with finding limits.

But the second one is the strongest of all, the most "meta" and the only one to be explicit about its metamathematical context. It is part of a family of statements about "relationships between logical contexts in which you can do mathematics". You can call that meta-trans-peri-mathematics or meta-meta-metamathematics. It would be very difficult to find an equivalent to -2- in a non-mathematical situation. It would be considered at best very subjective or dogmatic, such as "You cannot speak about the "Gestalt" philosophical concept in English without using the German word "Gestalt" or another philosophical German word of equivalent depth and power. You will always fail if you try."

The remarkable thing about mathematics is that we can reach such a strong level of implication in our discourse about it.
{"url":"http://ogerard.wordpress.com/tag/comment/","timestamp":"2014-04-17T19:12:33Z","content_type":null,"content_length":"26163","record_id":"<urn:uuid:b45cb4ff-b1b5-4e68-8198-f47b25b5473f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help

Posted by Please help on Monday, May 31, 2010 at 9:49pm.

Here is the problem: A two-digit number is eight times the sum of its digits. When the number is added to the number obtained by reversing the digits, the sum is 99. Find the original number. Please explain how to solve it, and it would also help a lot if you could explain other algebra word problems, like distance, rate and time problems, mixture problems, wind and water current problems, work problems, area problems, and cost and value problems, or give a link to something helpful.

• Algebra - Damon, Monday, May 31, 2010 at 9:56pm

Write the two-digit number "nm" as m + 10n.

m + 10n = 8(n + m)
m + 10n + n + 10m = 99

Then solve the two equations:

2n = 7m
11n + 11m = 99, or n + m = 9

n = (7/2)m
(7/2)m + m = 9
7m + 2m = 18
9m = 18
m = 2
n = 7

number = 72

• Algebra - MathMate, Monday, May 31, 2010 at 9:59pm

This is a question with two equations, with x, y each representing one of the two digits. The number is equal to 8 times the sum of its digits, therefore: 10x + y = 8(x + y), or 2x = 7y. That leaves only one solution: x = 7, y = 2. When the number is added to that with the digits reversed, it should add up to 99: 72 + 27 = 99. OK.
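A quick brute-force check (an added illustration, not part of the original thread) confirms that 72 is the only two-digit number satisfying both conditions:

solutions = []
for n in range(10, 100):
    tens, ones = divmod(n, 10)
    reversed_n = 10 * ones + tens
    if n == 8 * (tens + ones) and n + reversed_n == 99:
        solutions.append(n)
print(solutions)  # [72]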
{"url":"http://www.jiskha.com/display.cgi?id=1275356999","timestamp":"2014-04-20T06:25:50Z","content_type":null,"content_length":"9401","record_id":"<urn:uuid:7d3615b9-771b-4c51-a435-2c6b74177e6a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/cbrusoe/medals","timestamp":"2014-04-19T17:09:15Z","content_type":null,"content_length":"86598","record_id":"<urn:uuid:051698cb-5f36-4e72-944a-eb2a7510ab29>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://everything2.com/title/To+set+at+naught","timestamp":"2014-04-20T09:53:29Z","content_type":null,"content_length":"29249","record_id":"<urn:uuid:344b8205-b75e-4fd7-9ebe-b7c6e000cf52>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Please Help with Practice question!!! An electrically charged particle accelerates uniformly from rest to speed v while traveling a distance x. a) Show the acceleration of the particle is a = v^2/2x. b) If the particle starts from rest and reaches a speed of 1.8 x 10^7 m/s over a distance of 0.10 m, show that its acceleration is 1.6 x 10^15 m/s^2.
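For the record (the original page does not include an answer), the standard constant-acceleration kinematics identity gives the result directly:

$$v^2 = v_0^2 + 2ax \quad\Longrightarrow\quad a = \frac{v^2}{2x} \quad (\text{since } v_0 = 0),$$

$$a = \frac{(1.8\times 10^{7}\ \mathrm{m/s})^2}{2\,(0.10\ \mathrm{m})} = \frac{3.24\times 10^{14}}{0.20}\ \mathrm{m/s^2} \approx 1.6\times 10^{15}\ \mathrm{m/s^2}.$$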
{"url":"http://openstudy.com/updates/51542206e4b0b79aa94472a4","timestamp":"2014-04-17T10:03:56Z","content_type":null,"content_length":"65795","record_id":"<urn:uuid:7c89cc07-9374-419f-91c9-23fb163a78dc>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Code: Three primes programs

This post illustrates some of the programming styles possible in Disciple. We'll start with ye'olde primes program straight from the nofib benchmark suite. This is the exact code, except that for the sake of example I've hard wired it to show the 1500th prime instead of taking a command line argument.

suCC :: Int -> Int
suCC x = x + 1

isdivs :: Int -> Int -> Bool
isdivs n x = mod x n /= 0

the_filter :: [Int] -> [Int]
the_filter (n:ns) = filter (isdivs n) ns

primes :: [Int]
primes = map head (iterate the_filter (iterate suCC 2))

main = print $ primes !! 1500

This program builds an infinite list of primes, and then uses list indexing to take the 1500'th one. A couple of nice functional programming paradigms are used, namely higher order functions and laziness. Disciple is a default-strict dialect of Haskell, so we can use almost the same code, except we have to be explicit when we want laziness:

suCC :: Int -> Int
suCC x = x + 1

isdivs :: Int -> Int -> Bool
isdivs n x = mod x n /= 0

the_filter :: [Int] -> [Int]
the_filter (n:ns) = filterL (isdivs n) ns

primes :: [Int]
primes = mapL head (iterateL the_filter (iterateL suCC 2))

main () = println $ show $ primes !! 1500

That's it. I've changed map, filter and iterate to mapL, filterL and iterateL, and wibbled the main function slightly, but it's otherwise the same code. The difference between something like map and mapL is like the difference between foldl and foldl' in Haskell: they're implemented almost the same way, but have slightly different behaviours. Check the base libs for details.

Ok, so Disciple can express the dominant paradigms of Haskell, now it's time to subvert them. In the "Beyond Haskell" session at HIW 2010 we talked about how sometimes when optimising a program in a high level language, you reach a point where you just have to drop down to a "lower level" language for performance reasons. This is like when you're writing a Ruby program and it's running so slow there's simply nothing else to do than to "shell out" to a C routine. This also happens in Haskell, but perhaps not as much. Anyway, one of the goals of Disciple is to avoid this problem if at all possible. If the lazy, purely functional version of primes just isn't fast enough for you, then let's write the same program using destructive update of a mutable array:

-- | Check if an int is a multiple of any in a given array.
checkPrime :: Array Int -> Int -> Int -> Int -> Bool
checkPrime array high x n
 | n >= high            = True
 | mod x array.(n) == 0 = False
 | otherwise            = checkPrime array high x (n + 1)

-- | Fill an array with primes.
fillPrimes :: Array Int -> Int -> Int -> Int -> ()
fillPrimes primes max high i
 | high > max = ()
 | checkPrime primes high i 0
 = do   primes.(high) := i
        fillPrimes primes max (high + 1) (i + 1)

 | otherwise = fillPrimes primes max high (i + 1)

main ()
 = do   -- We want the 1500'th prime.
        max = 1500

        -- Start with an array containing the first prime as its first element.
        primes = generate&{Array Int} (max + 1) (\_ -> 0)
        primes.(0) := 2

        -- Fill the array with more primes.
        fillPrimes primes max 1 2

        -- Take the last prime found as the answer.
        println $ show primes.(max)

The syntax primes.(max) means "return the max'th element of the primes array". The syntax primes.(high) := i means "destructively update the high'th element of the primes array to the value i". Updating an array requires the array to be mutable. It's possible to express this mutability constraint in the type signature, but it's not required, so I usually just leave it to the inferencer. The compiler knows that you're using side effects, and will optimise around them, but you usually don't have to say anything about it in source level type sigs. I'll talk about the cases when you do in another post. Note that we don't have to pollute the type sigs with the IO constructor just to use destructive update. After all, checkPrime and fillPrimes aren't doing any IO...

The above code is good, but it still feels like a Haskell program. I'm particularly looking at the tail calls to checkPrime and fillPrimes. This is how you express primitive looping in many functional languages, but it can become tedious when there are lots of state variables to pass back to each iteration. Here is another version of primes written in a more imperative style.

-- | Check if an int is a multiple of any in a given array.
checkPrime :: Array Int -> Int -> Int -> Bool
checkPrime array high x
 = do   n       = 0
        isPrime = 1

        while (n < high)
         do     when (mod x array.(n) == 0)
                 do     isPrime := 0
                        break
                n := n + 1

        isPrime /= 0

main ()
 = do   -- We want the 1500'th prime.
        max = 1500

        -- Start with an array containing the first prime as its first element.
        primes = generate&{Array Int} (max + 1) (\_ -> 0)
        primes.(0) := 2

        -- Fill the array with primes.
        high = 1
        i    = 2
        while (high <= max)
         do     when (checkPrime primes high i)
                 do     primes.(high) := i
                        high := high + 1
                i := i + 1

        -- Take the last prime found as the answer.
        println $ show primes.(max)

Now we've got while loops, and break, even. Of course, the while syntax desugars to a simple higher order function that takes the loop body, and break just throws an exception to end the iteration. Such syntax should help the Python programmers feel at home, but Disciple is still lambdas all the way down.

Finally, note that we started with a lazy, purely functional program but arrived at an unapologetically imperative program all in the same language. In my dreams people talk about "functional vs imperative" programs instead of "functional vs imperative" languages. Write the lazy functional program because it's shorter, cleaner, and easier to understand... but when the clock is tickin' and the space is leakin', then wouldn't it be nicer to "shell out" to the same language?

PS: Code examples are part of the DDC test suite.

PPS: Of course the absolute performance of these programs compiled with DDC today is atrocious. This is a development blog after all, and we're still developing. They do work though. See the wiki to find out how you can help.
{"url":"http://disciple-devel.blogspot.com/2010/10/code-three-primes-programs.html","timestamp":"2014-04-16T20:37:11Z","content_type":null,"content_length":"37079","record_id":"<urn:uuid:a1882d94-82e0-4281-b678-e5ab438916e1>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Electric Motors And Generators
From Wikibooks, open books for an open world

Electrical motors and generators are machines which either convert electrical energy inputs into forces or applied kinetic energy inputs into electrical energy. In principle, any electrical generator can also be operated as a motor and vice-versa. In practice they will often be optimized for one application or the other. All electrical machines operate due to the same principles derived from the study of electromagnetics. Therefore, it is appropriate to first discuss these underlying electromagnetic concepts as this is crucial for understanding their operation. However, before discussing these concepts it will also be useful to revise the concepts of vector algebra and vector calculus which are used extensively in this subject.

Vectors and Fields

The use of vectors and vector fields greatly simplifies the analysis of many electromagnetic (and indeed other) systems. Due to their usefulness, these concepts will be used extensively in this book. For this reason it will be useful to begin with a general treatment of the subject.

A vector is a quantity which has magnitude, direction and sense. The magnitude of a vector represents its size as a physical quantity; the direction represents its position with respect to a reference axis; while its sense represents its orientation and is indicated by the arrow head. This is in contrast to a scalar, which has only magnitude. Examples of scalar quantities include temperature, resistivity, voltage and mass. In comparison, examples of vector quantities would include velocity, force, acceleration and position. The most familiar and intuitive use of vectors is in the two-dimensional or three-dimensional Cartesian coordinate system. This is the familiar system of x, y and z coordinates.

The term field has a general meaning in mathematics and physics, but here we will be referring only to the special cases of scalar and vector fields. Generally, a field is a region in space where the quantity in question exists and its influence is felt. A scalar field is a region of space in which each point is associated with a scalar value. A classic example of a scalar field is a temperature field in a heated block of material. If some heat source is applied to a cube of a conductive material, such as a metal, the temperature in the block will be highest where the heat source is applied, dropping off as we move away from the source in any direction. At every position inside the block a value could be assigned which is the temperature at this point. These temperature values make up the scalar temperature field in the block. It might be that it is possible to model accurately these values with some mathematical function, but the field itself is simply the variation in the scalar quantity in the space occupied by the block.

A vector field differs from a scalar field in that it has not only a magnitude at every position, but also direction. A good example of a vector field is the velocity of the fluid flow in a winding river of changing width. Clearly at every point in the river, the velocity of the fluid will have a magnitude (the speed) which will be lower where the river is wide, and higher where the river is narrow. However, the flow will also have a direction which changes as the water is forced around the river bends. If we noted the fluid speed and direction everywhere in the river, the result would be a vector field of the fluid flow.

In addition to varying in space, fields can also vary in time. In the first example, if we started with a cold block and then applied the heat source, mapping the temperature field at set time intervals, it would be seen that the values of the temperature at every point would change as the heat conducted throughout the block over time. The result therefore is a scalar field that varies in the three dimensions of space and one of time.

Magnetic Field Concepts

An electromagnetic field is a region of space in which electrical charges experience forces. The classical definition of the electromagnetic field is given by the following equation (the Lorentz force law):

F = q(E + v × B)

What this equation states is that an electrical charge of size q experiences a force when in the presence of an electric field denoted by E. Furthermore, if the charge is moving with velocity v, it will experience a further force if in the presence of a magnetic field denoted by B. Therefore the force on an electric charge defines what the electric and magnetic fields are in a given region of space. Without the presence of a charged particle it could never be known what their magnitudes or directions were.

Further Reading on Magnetism

Further Reading on Electric Current

AC Motors and Generators

AC generators

• A very simple AC generator consists of a permanent magnet that rotates inside a coil in such a way that the N-pole and S-pole alternate as seen from the coil. An analog voltmeter (or rather a millivoltmeter?) that has its zero at the middle of the scale is connected to the ends of the coil. As the magnet is rotated the voltmeter moves first one way, then the other way. The speed of rotation determines the number of "cycles per second", called Hertz (Hz). A rotation speed of 3000 revolutions per minute (RPM) produces 50 Hz, and 3600 RPM produce 60 Hz.
• The rotating permanent magnet can be replaced by another coil that is fed by DC and acts as an electromagnet. Doubling the number of coils will double the number of what is called "the poles", and then only half the rotation speed is required for a given output frequency.
• See also Wikipedia: Alternator

An AC generator works on the principle of Faraday's law of electromagnetic induction.

AC motors

AC motors are generally divided into two categories, induction and synchronous motors. The most common AC motor is the "squirrel cage motor", a type of induction motor. These have only one or more coils within which a special kind of mechanical rotor is free to rotate. There is no electrical connection to the rotor from the outside.

The general formula to determine the synchronous speed of an induction motor is

$Speed = \frac{120f}{P}$

(A small worked example in code is given at the end of this article.) For induction motors, this is a theoretical speed that will never actually be attained. The motor will always run slower than synchronous speed, with a slip of S. If a motor were to be operated at full synchronous speed, the relative speed of the rotor to the stator would be 0, making it impossible to induce a voltage (Faraday's law) in the rotor windings. This in turn would make the flow of current impossible. Without current no magnetic field can be generated.

Most AC motors require a starter, or method of limiting the inrush current to a reasonable level. Types of motor starting include reactive (capacitor start and inductive start), and electronic (frequency drives and soft start drives). The reactive start method is usually used on fractional horsepower motors, and the electronic method is usually reserved for larger motors (cost of the drives is the main reason for this). Connecting these motors to computers, PLCs (programmable logic controllers), and interfacing with automation systems is becoming more prevalent.

DC Generators

DC generators are basically AC generators whose output voltage is switched the other way round at the proper moment, so that the direction of the voltage is always in a single direction. But the magnitude of the voltage keeps changing, just as it does in an AC generator, and it can be said that the output of a DC generator is DC plus a "superimposed" AC voltage, called "ripple". Connecting a capacitor across the output terminals reduces that ripple.

See also Wikipedia: "Testatika" Electrostatic generators

DC Motors

Direct Current (DC) motors have a "commutator" that switches the part of the coil that is closest to the poles at the time, more or less similar to the legendary "donkey" that tries to catch the carrots, but never succeeds. See the very simplified commutator shown in blue. Usually a commutator has many "segments", as many as there are taps on the coil.

Starting a DC motor often requires an external resistor or rheostat to limit the current. The value, in Ohms, of that resistor is reduced in steps as the speed of the motor increases, until finally that resistor is removed from the circuit as the motor reaches close to its final speed.

See also Wikipedia: Car

Other electric motors

See Wikipedia: Universal motors

A stepper motor is a brushless, synchronous electric motor that can divide a full rotation into a large number of steps, for example, 200 steps. See Robotics: Stepper Motors and Wikipedia: Stepper motor
• And a tutorial: [http://www.geocities.com/nozomsite/stepper.html]

Exotic motors
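As flagged above, here is a tiny code sketch (an addition, not part of the Wikibooks page) of the synchronous-speed formula and the slip definition; the example numbers are made up for illustration.

def synchronous_speed_rpm(frequency_hz, poles):
    """Synchronous speed of an AC machine: 120 * f / P, in RPM."""
    return 120.0 * frequency_hz / poles

def slip(sync_rpm, rotor_rpm):
    """Per-unit slip S = (n_sync - n_rotor) / n_sync."""
    return (sync_rpm - rotor_rpm) / sync_rpm

n_sync = synchronous_speed_rpm(50, 4)     # 1500.0 RPM for a 4-pole, 50 Hz machine
print(n_sync, slip(n_sync, 1450))         # 1500.0 and a slip of about 0.033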
{"url":"http://en.wikibooks.org/wiki/Electric_Motors_And_Generators","timestamp":"2014-04-17T21:35:55Z","content_type":null,"content_length":"52009","record_id":"<urn:uuid:6be355b1-4870-4faf-94b7-2071167ec6c7>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
The Three Magical Boxes Q: You are playing a game wherein you are presented 3 magical boxes. Each box has a set probability of delivering a gold coin when you open it. On a single attempt, you can take the gold coin and close the box. In the next attempt you are free to either open the same box again or pick another box. You have a 100 attempts to open the boxes. You do not know what the win probability is for each of the boxes. What would be a strategy to maximize your returns? Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning series) A: Problems of this type fall into a category of algorithms called "multi armed bandits". The name has its origin in casino slot machines wherein a bandit is trying to maximize his returns by pulling different arms of a slot machine by using several "arms". The dilemma he faces is similar to the game described above. Notice, the problem is a bit different from a typical estimation exercise. You could simply split your 100 attempts into 3 blocks of 33,33 & 34 for each of the boxes. But this would not be optimal. Assume that one of the boxes had just a \(1\%\) probability of yielding a golden coin. Even as you probe and explore that box you know intuitively that you have spent a fair amount of attempts to simply reinforce something you already knew. You need a strategy that adjusts according to new information that you gain from each attempt. Something that gradually transitions away from a box that yields less to a box that yields more. Assume at the beginning of the game you do not know anything about the yield probabilities. Assign a prior set of values of \(\big[\frac{1}{2}, \frac{1}{2},\frac{1}{2}\big]\). Simultaneously maintain a set of likelihoods using which you will decide which box to sample next. Initially all three values are set to 1s \(\{p_1 = 1,p_2 = 1,p_3 = 1\}\). First open the boxes in succession and use up \(n \) attempts per box. If you denote the number of successes for each box as \(\{s_1,s_2,s_3\}\), then you could update the posterior distribution of your belief in what box yields as follows p_1 = \frac{1 + s_1}{2 + n} \\ p_2 = \frac{1 + s_2}{2 + n} \\ p_3 = \frac{1 + s_3}{2 + n} Think of this as your initializing phase. Once you initialize your estimates, subsequent choice of boxes should be based on a re-normalized probability vector derived from \(p_1,p_2,p_3\). What this means is that the probability you would pick a box is computed as follows P(\text{pick box 1}) = \frac{p_1}{p_1 + p_2 + p_3} \\ P(\text{pick box 2}) = \frac{p_2}{p_1 + p_2 + p_3} \\ P(\text{pick box 3}) = \frac{p_3}{p_1 + p_2 + p_3} What ends up happening here is that you will pick the box which has the highest probability of winning based on information gleaned up to a certain point. Another benefit of this approach is you are learning in real time. If a certain box isn't yielding as much as another you don't discard opening that box all together, instead you progressively sample it less often. If you are looking to buy some books in probability here are some of the best books to own Fifty Challenging Problems in Probability with Solutions (Dover Books on Mathematics) This book is a great compilation that covers quite a bit of puzzles. What I like about these puzzles are that they are all tractable and don't require too much advanced mathematics to solve. Introduction to Algorithms This is a book on algorithms, some of them are probabilistic. 
But the book is a must-have for students, job candidates and even full-time engineers and data scientists.

Introduction to Probability Theory: Overall an excellent book to learn probability, well recommended for undergrads and graduate students.

An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd Edition: This is a two-volume work and the first volume is what will likely interest a beginner because it covers discrete probability. The book tends to treat probability as a theory in its own right.

The Probability Tutoring Book: An Intuitive Course for Engineers and Scientists (and Everyone Else!): A good book for graduate-level classes: it has some practice problems in it, which is a good thing. But that doesn't make this book any less of a buy for the beginner.

Introduction to Probability, 2nd Edition: A good book to own. It does not require prior knowledge of other areas, but the book is a bit low on worked-out examples.

Bundle of Algorithms in Java, Third Edition, Parts 1-5: Fundamentals, Data Structures, Sorting, Searching, and Graph Algorithms (3rd Edition) (Pts. 1-5): An excellent resource (for students, engineers and even entrepreneurs) if you are looking for some code that you can take and implement directly on the job.

Understanding Probability: Chance Rules in Everyday Life: This is a great book to own. The second half of the book may require some knowledge of calculus. It appears to be the right mix for someone who wants to learn but doesn't want to be scared off by heavy mathematics.

Data Mining: Practical Machine Learning Tools and Techniques, Third Edition (The Morgan Kaufmann Series in Data Management Systems): This one is a must-have if you want to learn machine learning. The book is beautifully written and ideal for the engineer or student who doesn't want to get too deep into the details of a machine-learned approach but wants a working knowledge of it. There are some great examples and test data in the text book too.

Discovering Statistics Using R: This is a good book if you are new to statistics and probability while simultaneously getting started with a programming language. The book supports R and is written in a casual, humorous way, making it an easy read. Great for beginners. Some of the data on the companion website could be missing.

A Course in Probability Theory, Third Edition: Covered in this book are the central limit theorem and other graduate topics in probability. You will need to brush up on some mathematics before you dive in, but most of that can be done online.

Probability and Statistics (4th Edition): This book has been yellow-flagged with some issues, including the sequencing of content. But otherwise it's good.
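As promised above, here is a minimal Python sketch of the sampling strategy described in the answer: pseudo-count estimates of the form \((1+s)/(2+n)\) followed by probability-matched box selection. The box probabilities, the number of initialization pulls and the helper names are illustrative assumptions, not something specified in the original question.

```python
import random

def play_boxes(true_probs, total_attempts=100, init_pulls_per_box=5, seed=0):
    """Sketch of the strategy described above: initialize each box with a few
    pulls, keep estimates p_i = (1 + s_i) / (2 + n_i), and then pick boxes with
    probability proportional to these estimates."""
    rng = random.Random(seed)
    k = len(true_probs)
    successes = [0] * k   # s_i: gold coins seen from box i
    pulls = [0] * k       # n_i: times box i has been opened
    coins = 0

    def pull(i):
        nonlocal coins
        win = rng.random() < true_probs[i]
        pulls[i] += 1
        successes[i] += int(win)
        coins += int(win)

    # Initialization phase: a few pulls of every box.
    for i in range(k):
        for _ in range(init_pulls_per_box):
            pull(i)

    # Probability-matching phase: sample box i with probability p_i / sum(p).
    while sum(pulls) < total_attempts:
        p = [(1 + successes[i]) / (2 + pulls[i]) for i in range(k)]
        total = sum(p)
        box = rng.choices(range(k), weights=[x / total for x in p])[0]
        pull(box)

    return coins, [(1 + successes[i]) / (2 + pulls[i]) for i in range(k)]

if __name__ == "__main__":
    coins, estimates = play_boxes([0.2, 0.5, 0.7])
    print("coins won:", coins, "final estimates:", estimates)
```

This probability-matching rule is a simple relative of Thompson sampling; drawing a sample from each box's Beta posterior and picking the box with the largest draw is a common alternative.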
{"url":"http://bayesianthink.blogspot.com/2014/01/the-three-magical-boxes.html","timestamp":"2014-04-19T17:01:54Z","content_type":null,"content_length":"75263","record_id":"<urn:uuid:7b784caa-478b-4f42-97a3-6b902c9eb8d7>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about witt vectors on A Mind for Madness

Hopefully I'll start posting more now that last week is over. Today we'll look at a counterexample to see that the Witt cohomology we've been looking at is not always a finite type ${\Lambda}$-module. Just to recall a bit, we're working over a perfect field of characteristic ${p}$, and ${\Lambda=W_{p^\infty}(k)}$. Given a variety ${X}$ over ${k}$ we can use the structure sheaf ${\mathcal{O}_X}$ to form ${\mathcal{W}_n}$, which is the sheaf of length ${n}$ Witt vectors over ${\mathcal{O}_X}$. This is just ${\mathcal{O}_X^n}$ with a special ring structure that on stalks has the property of being a complete DVR with residue field ${k}$ and fraction field of characteristic ${0}$. The restriction map given by chopping off the last coordinate, ${R: \mathcal{W}_n\rightarrow \mathcal{W}_{n-1}}$, gives us a projective system of sheaves, and using standard abelian sheaf cohomology we can define ${H^q(X, \mathcal{W})=\lim H^q(X, \mathcal{W}_n)}$.

This brings us to the purpose of today. It is possible that in very nice (projective even) cases we have ${H^q(X, \mathcal{W}_n)}$ a finite type ${\Lambda}$-module, yet have that ${H^q(X, \mathcal{W})}$ is not. Let ${X}$ be a genus zero cuspidal curve with cusp ${P}$. Let ${X'\rightarrow X}$ be the normalization of ${X}$. We will shorthand ${\mathcal{O}}$ and ${\mathcal{O}'}$ as the structure sheaves of ${X}$ and ${X'}$ respectively. We have that ${\mathcal{O}_x=\mathcal{O}_x'}$ when ${x\neq P}$, and ${\mathcal{O}_P}$ is the subring of ${\mathcal{O}_P'}$ formed from functions ${f}$ where the differential ${df}$ vanishes at ${P}$.

Let's use the standard exact sequence we get from normalizing a curve: ${0\rightarrow \mathcal{O}\rightarrow \mathcal{O}'\rightarrow \mathcal{F}\rightarrow 0}$ where ${\mathcal{F}}$ is concentrated at ${P}$ with the property ${\mathcal{F}_P=k}$. If we take the long exact sequence in cohomology we see that ${H^0(X, \mathcal{F})\hookrightarrow H^1(X, \mathcal{O})\rightarrow H^1(X, \mathcal{O}')\rightarrow}$. Note that ${X'}$ is non-singular of genus ${0}$, so ${H^1(X', \mathcal{O}_{X'})=H^1(X, \mathcal{O}')=0}$. Also, ${H^0(X, \mathcal{F})=\mathcal{F}_P=k}$. So ${\mathrm{dim}_k H^1(X, \mathcal{O})=1}$.

Now we can use the standard sequence of restriction ${0\rightarrow \mathcal{O}\rightarrow \mathcal{W}_n\rightarrow \mathcal{W}_{n-1}\rightarrow 0}$ and induction to get that the length of the module ${H^1(X, \mathcal{W}_n)}$ is ${n}$.

Now let's use the normalization sequence above and take Witt sheaves associated to all of them. We'll denote this by ${0\rightarrow \mathcal{W}_n\rightarrow \mathcal{W}_n'\rightarrow \mathcal{F}_n\rightarrow 0}$. Note that we still have a bijection given by the coboundary map ${\delta: H^0(X, \mathcal{F}_n)\rightarrow H^1(X, \mathcal{W}_n)}$.

Let's now think about the Frobenius map ${F}$. Since our field is perfect, we get a bijection ${\mathcal{W}_n'\rightarrow \mathcal{W}_n'}$ and also ${\mathcal{W}_n\rightarrow \mathcal{W}_n}$. On ${\mathcal{O}_P'}$ we get that ${F(f)=f^p}$ and hence the differential is ${0}$, which means it is in ${\mathcal{O}_P}$. Applying Frobenius to our exact sequence we get the square

$\displaystyle \begin{matrix} H^0(X, \mathcal{F}_n) & \rightarrow & H^1(X, \mathcal{W}_n) \\ F \downarrow & & \downarrow F \\ H^0(X, \mathcal{F}_n) & \rightarrow & H^1(X, \mathcal{W}_n) \end{matrix}$

Here we see that ${F: H^1(X, \mathcal{W}_n)\rightarrow H^1(X, \mathcal{W}_n)}$ is identically ${0}$.
This means that ${p}$ annihilates ${H^1(X, \mathcal{W}_n)}$, which means that it is not only a length ${n}$ ${\Lambda}$-module, but is a vector space over ${k}$ of dimension ${n}$. Thus the projective limit ${H^1(X, \mathcal{W})}$ is an infinite dimensional vector space over ${k}$ and hence is not a finite type ${\Lambda}$-module.

Sheaf of Witt Vectors 2

Recall last time we talked about how we can form the sheaf of Witt vectors over a variety ${X}$ that is defined over an algebraically closed field ${k}$ of characteristic ${p}$. The sections of the structure sheaf form rings and we can take ${W_n}$ of those rings. The functoriality of ${W_n}$ gives us that this is a sheaf, which we denote ${\mathcal{W}_n}$. For today we'll define ${\Lambda}$ to be ${W(k)}$. Recall that we also noted that ${H^q(X, \mathcal{W}_n)}$ makes sense and is a ${\Lambda}$-module annihilated by ${p^n\Lambda}$ (recall that we noted that Frobenius followed by the shift operator is the same as multiplying by ${p}$, and since Frobenius is surjective, multiplying by ${p}$ is just replacing the first entry by ${0}$ and shifting, so multiplying by ${p^n}$ is the same as shifting over ${n}$ entries and putting ${0}$'s in; since the action is component-wise, ${p^n\Lambda}$ is just multiplying by ${0}$ everywhere and hence annihilates the module). In fact, all of our old operators ${F}$, ${V}$, and ${R}$ still act on ${H^q(X, \mathcal{W}_n)}$. They are easily seen to satisfy the formulas ${F(\lambda w)=F(\lambda)F(w)}$, ${V(\lambda w)=F^{-1}(\lambda)V(w)}$, and ${R(\lambda w)=\lambda R(w)}$ for ${\lambda\in \Lambda}$.

Just by using basic cohomological facts we can get a bunch of standard properties of ${H^q(X, \mathcal{W}_n)}$. We won't write them all down, but the two most interesting (of the very basic) ones are that if ${X}$ is projective then ${H^q(X, \mathcal{W}_n)}$ is a finite ${\Lambda}$-module, and from the short exact sequence we looked at last time, ${0\rightarrow \mathcal{O}_X\rightarrow \mathcal{W}_n \rightarrow \mathcal{W}_{n-1}\rightarrow 0}$, we can take the long exact sequence associated to it to get ${\cdots \rightarrow H^q(X, \mathcal{O}_X)\rightarrow H^q(X, \mathcal{W}_n)\rightarrow H^q(X, \mathcal{W}_{n-1})\rightarrow \cdots}$.

If you're like me, you might be interested in studying Calabi-Yau manifolds in positive characteristic. If you're not like me, then you might just be interested in positive characteristic K3 surfaces; either way these cohomology groups give some very good information as we'll see later, and for Calabi-Yaus (including K3's) we have ${H^i(X, \mathcal{O}_X)=0}$ for ${i=1, \ldots , n-1}$ where ${n}$ is the dimension of ${X}$. Using this long exact sequence, we can extrapolate that for Calabi-Yaus we get ${H^i(X, \mathcal{W}_m)=0}$ for all ${m>0}$ and ${i=1, \ldots, n-1}$. In particular, we get that ${H^1(X, \mathcal{W})=0}$ for ${X}$ a K3 surface, where we just define ${H^q(X, \mathcal{W})=\lim H^q(X, \mathcal{W}_n)}$ in the usual way.

Sheaf of Witt Vectors

I was going to go on to prove a bunch of purely algebraic properties of the Witt vectors, but honestly this is probably only interesting to you if you are a pure algebraist. From that point of view, this ring we've constructed should be really cool. We already have the ring of ${p}$-adic integers, and clearly ${W_{p^\infty}}$ directly generalizes it. They have some nice ring theoretic properties, especially ${W_{p^\infty}(k)}$ where ${k}$ is a perfect field of characteristic ${p}$.
Unfortunately it would take a while to go through and prove these things, and it would just be tedious algebra. Let's actually see why algebraic geometers and number theorists care about the Witt vectors. First, we'll need a few algebraic facts that we haven't talked about. For today, we're going to fix a prime ${p}$ and we have an ${\mathbf{important}}$ notational change: when I write ${W(A)}$ I mean ${W_{p^\infty}(A)}$, which means I'll also write ${(a_0, a_1, \ldots)}$ when I mean ${(a_{p^0}, a_{p^1}, \ldots)}$, and I'll write ${W_n(A)}$ when I mean ${W_{p^n}(A)}$. This shouldn't cause confusion as it is really just a different way of thinking about the same thing, and it is good to get used to since this is the typical way they appear in the literature (on the topics I'll be covering).

There is a cool application that comes from thinking about these functors as representable by group schemes or ring schemes, but we'll delay that for now in order to think about cohomology of varieties in characteristic ${p}$ and hopefully relate it back to de Rham stuff from a month or so ago. In addition to the fixed ${p}$, we will assume that ${A}$ is a commutative ring with ${1}$ and of characteristic ${p}$. We have a shift operator ${V: W_n(A)\rightarrow W_{n+1}(A)}$ that is given on elements by ${(a_0, \ldots, a_{n-1})\mapsto (0, a_0, \ldots, a_{n-1})}$. The V stands for Verschiebung, which is German for "shift". Note that this map is additive, but is not a ring map. We have the restriction map ${R: W_{n+1}(A)\rightarrow W_n(A)}$ given by ${(a_0, \ldots, a_n)\mapsto (a_0, \ldots, a_{n-1})}$. This one is a ring map, as was mentioned last time. Lastly, we have the Frobenius endomorphism ${F: W_n(A)\rightarrow W_n(A)}$ given by ${(a_0, \ldots , a_{n-1})\mapsto (a_0^p, \ldots, a_{n-1}^p)}$. This is also a ring map, but only because of our necessary assumption that ${A}$ is of characteristic ${p}$. Just by brute force checking on elements we see a few relations between these operations, namely that ${V(x)y=V(x F(R(y)))}$ and ${RVF=FRV=RFV=p}$, the multiplication by ${p}$ map.

Now on to the algebraic geometry part of all of this. Suppose ${X}$ is a variety defined over an algebraically closed field of characteristic ${p}$, say ${k}$. Then we can form the sheaf of Witt vectors on ${X}$ as follows. Notice that all the stalks of the structure sheaf ${\mathcal{O}_x}$ are local rings of characteristic ${p}$, so it makes sense to define the Witt rings ${W_n(\mathcal{O}_x)}$ for any positive ${n}$. Now just form the natural sheaf ${\mathcal{W}_n}$ that has as its stalks ${(\mathcal{W}_{n})_x=W_n(\mathcal{O}_x)}$. Note that, forgetting the ring structure and thinking of it only as a sheaf of sets, we have that ${\mathcal{W}_n}$ is just ${\mathcal{O}^n}$, and when ${n=1}$ it is actually isomorphic as a sheaf of rings. For larger ${n}$ the addition and multiplication are defined in that strange way, so we no longer get an isomorphism of rings. Using our earlier operations and the isomorphism for ${n=1}$, we can use the following sequences to extract information. When ${n\geq m}$ we have the exact sequence ${0\rightarrow \mathcal{W}_m\stackrel{V}{\rightarrow} \mathcal{W}_n\stackrel{R}{\rightarrow}\mathcal{W}_{n-m}\rightarrow 0}$. If we take ${m=1}$, then we get the sequence ${0\rightarrow \mathcal{O}_X\rightarrow \mathcal{W}_n\rightarrow \mathcal{W}_{n-1}\rightarrow 0}$. This will be useful later when trying to convert cohomological facts about ${\mathcal{O}_X}$ to ${\mathcal{W}}$.
We could also define ${H^q(X, \mathcal{W}_n)}$ as sheaf cohomology because we can think of ${\mathcal{W}_n}$ just as a sheaf of abelian groups. Let ${\Lambda=W(k)}$; then since the ${\mathcal{W}_n}$ are ${\Lambda}$-modules annihilated by ${p^n\Lambda}$, we get that the ${H^q(X, \mathcal{W}_n)}$ are also ${\Lambda}$-modules annihilated by ${p^n\Lambda}$. Next time we'll talk about some other fundamental properties of the cohomology of these sheaves.

Other forms of Witt vectors

Today we'll discuss two other flavors of the ring of Witt vectors, which have some pretty neat applications to computing Cartier duals of group schemes. The ring we've constructed, ${W(A)}$, is sometimes called the ring of generalized Witt vectors. You can construct a similar ring associated to a prime, ${p}$. Recall that the functor ${W}$ was the unique functor ${\mathrm{Ring}\rightarrow\mathrm{Ring}}$ that satisfies ${W(A)=\{(a_1, a_2, \ldots): a_j\in A\}}$ as a set, for ${\phi:A\rightarrow B}$ a ring map we get ${W(\phi)(a_1, a_2, \ldots )=(\phi(a_1), \phi(a_2), \ldots)}$, and the previously defined ${w_n: W(A)\rightarrow A}$ is a functorial homomorphism.

We can similarly define the Witt vectors over ${A}$ associated to a prime ${p}$ as follows. Define ${W_{p^\infty}}$ to be the unique functor ${\mathrm{Ring}\rightarrow \mathrm{Ring}}$ satisfying the following properties: ${W_{p^\infty}(A)=\{(a_0, a_1, \ldots): a_j\in A\}}$ and ${W_{p^\infty}(\phi)(a_0, a_1, \ldots )=(\phi(a_0), \phi(a_1), \ldots)}$ for any ring map ${\phi: A\rightarrow B}$. Now let ${w_{p^n}(a_0, a_1, \ldots)=a_0^{p^n}+pa_1^{p^{n-1}}+\cdots + p^na_n}$; then ${W_{p^\infty}}$ also has to satisfy the property that ${w_{p^n}: W(A)\rightarrow A}$ is a functorial homomorphism.

Basically we can think of ${W_{p^\infty}}$ as the generalized Witt vectors where we've relabelled so that our indexing is actually ${(a_{p^0}, a_{p^1}, a_{p^2}, \ldots)}$, in which case the ${w_n}$ are the ${w_{p^n}}$. There is a much more precise way to relate these using the Artin-Hasse map and the natural transformation ${\epsilon_p: W(-)\rightarrow W_{p^\infty}(-)}$ which maps ${\epsilon_p(a_1, a_2, \ldots)\mapsto (a_{p^0}, a_{p^1}, a_{p^2}, \ldots)}$.

Notice that when we defined ${W(A)}$ using those formulas (and hence also ${W_{p^\infty}(A)}$), the operations of adding, multiplying, and taking additive inverses were defined for the first ${t}$ components using only polynomials involving the first ${t}$ components. Define ${W_t(A)}$ to be the set of length ${t}$ "vectors" with elements in ${A}$, i.e. the set ${\{(a_1, a_2, \ldots, a_t): a_j\in A\}}$. The same definitions for multiplying and adding the generalized Witt vectors are well-defined and turn this set into a ring for the same exact reason. We also get for free that the truncation map ${W(A)\rightarrow W_t(A)}$ given by ${(a_1, a_2, \ldots)\mapsto (a_1, a_2, \ldots, a_t)}$ is a ring homomorphism. For instance, we just get that ${W_1(A)\simeq A}$. These form an obvious inverse system ${W_n(A)\rightarrow W_m(A)}$ by projection when ${m|n}$, and we get that ${W(A)\simeq \lim W_t(A)}$ and that ${W_{p^\infty}(A)\simeq \lim W_{p^t}(A)}$.

Today we'll end with a sketch of a proof that ${W_{p^\infty}(\mathbb{F}_p)\simeq \mathbb{Z}_p}$. Most of these steps are quite non-trivial, but after next time when we talk about valuations, we'll be able to prove much better results and this will fall out as a consequence of one of them.
Consider the one-dimensional formal group law over ${\mathbb{Z}}$ defined by ${F(x,y)=f^{-1}(f(x)+f(y))}$ where ${f(x)=x+p^{-1}x^p+p^{-2}x^{p^2}+\cdots}$. Then for ${\gamma(t)\in \mathcal{C}(F; \mathbb{Z}_p)}$ (the honest group of power series with no constant term defined from the group law considered on ${\mathbb{Z}_p}$), there is a special subcollection ${\mathcal{C}_p(F; \mathbb{Z}_p)}$ called the ${p}$-typical curves, which just means that ${\mathbf{f}_q\gamma(t)=0}$ for ${q\neq p}$, where ${\mathbf{f}_q}$ is the Frobenius operator.

Now one can define a bijection ${E:\mathbb{Z}_p^{\mathbb{N}\cup \{0\}}\rightarrow \mathcal{C}_p(F;\mathbb{Z}_p)}$. This can be written explicitly by ${(a_0, a_1, \ldots)\mapsto \sum a_it^{p^i}}$, and moreover we get ${w_{p^n}^F E=w_{p^n}}$, where ${w_{p^n}^F(\gamma(t))}$ is ${p^n}$ times the coefficient of ${t^{p^n}}$ in ${f(\gamma(t))}$.

Now we put a commutative ring structure on ${\mathcal{C}_p(F;\mathbb{Z}_p)}$ compatible with the already existing group structure and having unit element ${\gamma_0(t)=t}$. There is a ring map ${\Delta: \mathbb{Z}_p\rightarrow \mathcal{C}_p(F; \mathbb{Z}_p)}$ defined by ${\Delta(a)=f^{-1}(af(t))}$. Also, the canonical projection ${\mathbb{Z}_p\rightarrow \mathbb{F}_p}$ induces a map ${\rho: \mathcal{C}_p(F;\mathbb{Z}_p)\rightarrow \mathcal{C}_p(F; \mathbb{F}_p)}$. It turns out you can check that the composition ${\rho\circ \Delta}$ is an isomorphism, which in turn gives the isomorphism ${\mathbb{Z}_p\stackrel{\sim}{\rightarrow} W_{p^\infty}(\mathbb{F}_p)}$. Likewise, we can also show that ${W_{p^\infty}(\mathbb{F}_{p^n})}$ is the unique unramified degree ${n}$ extension of ${\mathbb{Z}_p}$.

Formal Witt Vectors

Last time we checked that our explicit construction of the ring of Witt vectors was a ring, but in the proof we noted that ${W}$ actually was a functor ${\mathrm{Ring}\rightarrow\mathrm{Ring}}$. In fact, since it exists and is the unique functor that has the three properties we listed, we could have just defined the ring of Witt vectors over ${A}$ to be ${W(A)}$. We also said that ${W}$ was representable, and this is just because ${W(A)=Hom(\mathbb{Z}[x_1, x_2, \ldots ], A)}$.

We can use our ${\Sigma_i}$ to define a (co)commutative Hopf algebra structure on ${\mathbb{Z}[x_1, x_2, \ldots]}$. For instance, define the comultiplication ${\mathbb{Z}[x_1, x_2, \ldots ]\rightarrow \mathbb{Z}[x_1,x_2,\ldots]\otimes \mathbb{Z}[x_1,x_2,\ldots]}$ by ${x_i\mapsto \Sigma_i(x_1\otimes 1, \ldots , x_i \otimes 1, 1\otimes x_1, \ldots 1\otimes x_i)}$. Since this is a Hopf algebra we get that ${W=\mathrm{Spec}(\mathbb{Z}[x_1,x_2,\ldots])}$ is an affine group scheme. The ${A}$-valued points of this group scheme are by construction the elements of ${W(A)}$. In some sense we have this "universal" group scheme keeping track of all of the rings of Witt vectors.

Another thing we could notice is that ${\Sigma_1(X,Y)}$, ${\Sigma_2(X,Y)}$, ${\ldots}$ are polynomials and hence power series. If we go through the tedious (yet straightforward, since it is just Witt addition) details of checking, we will find that they satisfy all the axioms of being an infinite-dimensional formal group law. We will write this formal group law as ${\widehat{W}(X,Y)}$ and ${\widehat{W}}$ as the associated formal group. Next time we'll start thinking about the length ${n}$ formal group law of Witt vectors (truncated Witt vectors).

Witt Vectors Form a Ring

Today we'll check that the ring of Witt vectors is actually a ring.
Let ${A}$ be a ring; then ${W(A)}$ as a set is the collection of infinite sequences of elements of ${A}$. Recall that our construction involves lots of various polynomials and a strange definition of addition and multiplication. I won't rewrite those, since it was the entirety of the last post. Now there is a nice trick to prove that ${W(A)}$ is a ring when ${A}$ is a ${\mathbb{Q}}$-algebra. Just define ${\psi: W(A)\rightarrow A^\mathbb{N}}$ by ${(a_1, a_2, \ldots) \mapsto (w_1(a), w_2(a), \ldots)}$. This is a bijection, and the addition and multiplication are taken to component-wise addition and multiplication, so since this is the standard ring structure we know ${W(A)}$ is a ring. Also, ${w(0,0,\ldots)=(0,0,\ldots)}$, so ${(0,0,\ldots)}$ is the additive identity, ${w(1,0,0,\ldots)=(1,1,1,\ldots)}$, which shows ${(1,0,0,\ldots)}$ is the multiplicative identity, and ${w(\iota_1(a), \iota_2(a), \ldots)=(-w_1(a), -w_2(a), \ldots)}$, so we see ${(\iota_1(a), \iota_2(a), \ldots)}$ is the additive inverse.

We can actually get this idea to work for any characteristic ${0}$ ring by considering the embedding ${A\rightarrow A\otimes\mathbb{Q}}$. We have an induced injective map ${W(A)\rightarrow W(A\otimes \mathbb{Q})}$. The addition and multiplication are defined by polynomials over ${\mathbb{Z}}$, so these operations are preserved upon tensoring with ${\mathbb{Q}}$. We just proved above that ${W(A\otimes\mathbb{Q})}$ is a ring, so since ${(0,0,\ldots)\mapsto (0,0,\ldots)}$ and ${(1,0,0,\ldots)\mapsto (1,0,0,\ldots)}$ and the map preserves inverses, we get that the image of the embedding ${W(A)\rightarrow W(A\otimes \mathbb{Q})}$ is a subring and hence ${W(A)}$ is a ring.

Lastly, we need to prove this for positive characteristic rings. Choose a characteristic ${0}$ ring that surjects onto ${A}$, say ${B\rightarrow A}$. Then since the induced map again preserves everything and ${W(B)\rightarrow W(A)}$ is surjective, the image is a ring and hence ${W(A)}$ is a ring.

So where does all this formal group stuff we started with come into play? Well, notice that what we were really implicitly using is that ${W:\mathbf{Ring}\rightarrow\mathbf{Ring}}$ is a functor. It takes a ring ${A}$ and gives a new ring ${W(A)}$. If ${\phi: A\rightarrow B}$ is a ring map, then ${W(\phi): W(A)\rightarrow W(B)}$ given by ${(a_1, a_2, \ldots)\mapsto (\phi(a_1), \phi(a_2), \ldots)}$ is still a ring map. We also have ${w_n:W(A)\rightarrow A}$ given by ${a\mapsto w_n(a)}$, which are ring maps for all ${n}$. Some people think it is cleaner to define the ring of Witt vectors as the unique functor ${W}$ that satisfies these three properties. From a functorial point of view it turns out that ${W}$ is representable. The representing ring via the ring axioms gives a Hopf algebra structure, and hence we get an affine group scheme out of it. Then as in the formal group discussion, we can complete this to get a formal group. This will be the discussion of next time.

Witt Vectors 2

Today we're going to accomplish the original goal we set out for ourselves. We will construct the ring of generalized Witt vectors. First, let ${x_1, x_2, \ldots}$ be a collection of indeterminates. We can define an infinite collection of polynomials in ${\mathbb{Z}[x_1, x_2, \ldots ]}$ using the following formulas: ${w_1(X)=x_1}$, ${w_2(X)=x_1^2+2x_2}$, ${w_3(X)=x_1^3+3x_3}$, ${w_4(X)=x_1^4+2x_2^2+4x_4}$, and in general ${\displaystyle w_n(X)=\sum_{d|n} dx_d^{n/d}}$.

Now let ${\phi(z_1, z_2)\in\mathbb{Z}[z_1, z_2]}$. This is just an arbitrary two variable polynomial with coefficients in ${\mathbb{Z}}$.
We can define new polynomials ${\Phi_i(x_1, \ldots x_i, y_1, \ldots y_i)}$ such that the following condition is met: ${\phi(w_n(x_1, \ldots ,x_n), w_n(y_1, \ldots , y_n))=w_n(\Phi_1(x_1, y_1), \ldots , \Phi_n(x_1, \ldots , y_1, \ldots ))}$. In short we'll notate this ${\phi(w_n(X),w_n(Y))=w_n(\Phi(X,Y))}$. The first thing we need to do is make sure that such polynomials exist. Now it isn't hard to check that the ${x_i}$ can be recovered from the ${w_n}$ with ${\mathbb{Q}}$-coefficients just by solving recursively: ${x_1=w_1}$, ${x_2=\frac{1}{2}w_2-\frac{1}{2}w_1^2}$, etc., so we can plug these in to get the existence of such polynomials with coefficients in ${\mathbb{Q}}$. It is a fairly tedious lemma to prove that the coefficients of the ${\Phi_i}$ are actually in ${\mathbb{Z}}$, so we won't detract from the construction right now to prove it.

Define yet another set of polynomials ${\Sigma_i}$, ${\Pi_i}$ and ${\iota_i}$ by the following properties: ${w_n(\Sigma)=w_n(X)+w_n(Y)}$, ${w_n(\Pi)=w_n(X)w_n(Y)}$ and ${w_n(\iota)=-w_n(X)}$. We now can construct ${W(A)}$, the ring of generalized Witt vectors over ${A}$. Define ${W(A)}$ to be the set of all infinite sequences ${(a_1, a_2, \ldots)}$ with entries in ${A}$. Then we define addition and multiplication by ${(a_1, a_2, \ldots)+(b_1, b_2, \ldots)=(\Sigma_1(a_1,b_1), \Sigma_2(a_1,a_2,b_1,b_2), \ldots)}$ and ${(a_1, a_2, \ldots )\cdot (b_1, b_2, \ldots )=(\Pi_1(a_1,b_1), \Pi_2(a_1,a_2,b_1,b_2), \ldots )}$. Next time we'll actually check that this is a ring for any ${A}$.

To show you that this isn't as horrifyingly strange and arbitrary as it looks, it turns out that all these rules boil down to just the ${p}$-adic integers when ${A}$ is the finite field of order ${p}$, i.e. ${W_{p^\infty}(\mathbb{F}_p)=\mathbb{Z}_p}$. It also turns out that there is a much cleaner construction of this than the element-wise one if all you care about are the existence and certain properties.

Formal Groups 3

Today we move on to higher dimensional formal group laws over a ring ${A}$. Side note: later on we'll care about the formal group attached to a Calabi-Yau variety in positive characteristic, which is always one-dimensional, but to talk about Witt vectors we'll need the higher dimensional ones. An ${n}$-dimensional formal group law over ${A}$ is just an ${n}$-tuple of power series, each in ${2n}$ variables and with no constant term, satisfying certain relations. We'll write ${F(X,Y)=(F_1(X,Y), F_2(X,Y), \ldots , F_n(X,Y))}$ where ${X=(x_1, \ldots , x_n)}$ and ${Y=(y_1, \ldots , y_n)}$ to simplify notation. There are a few natural guesses for the conditions, but the ones we actually use are that ${F_i(X,Y)=x_i+y_i+}$(higher degree) and, for all ${i}$, ${F_i(F(X,Y),Z)=F_i(X, F(Y,Z))}$. We call the group law commutative if ${F_i(X,Y)=F_i(Y,X)}$ for all ${i}$.

We still have our old examples that are fairly trivial, ${\widehat{\mathbb{G}}_a^n(X,Y)=X+Y}$, meaning the i-th one is just ${x_i+y_i}$. For a slightly less trivial example, let's explicitly write down a four-dimensional one. It would be beastly to fully check even one of those associativity conditions. I should probably bring this to your attention: the condition ${F_i(F(X,Y), Z)}$ has you input those four series as the first four variables, so in this case checking the first condition amounts to ${F_1(F_1(X,Y),F_2(X,Y),F_3(X,Y),F_4(X,Y),z_1,z_2,z_3,z_4)}$ ${=F_1(x_1,x_2,x_3,x_4, F_1(Y,Z),F_2(Y,Z),F_3(Y,Z),F_4(Y,Z))}$.
But on the other hand, the fact that ${\widehat{\mathbb{G}}_a^n}$ satisfies it is trivial, since you are only adding everywhere, which is associative. Now we can do basically everything we did in the one-dimensional case. For one thing we can form the set ${\mathcal{C}(F)}$ of ${n}$-tuples of power series in one indeterminate with no constant term. This set has an honest group structure on it given by ${\gamma_1(t)+_F \gamma_2(t)=F(\gamma_1(t), \gamma_2(t))}$. We can define a homomorphism between an ${n}$-dimensional group law ${F}$ and an ${m}$-dimensional group law ${G}$ to be an ${m}$-tuple of power series in ${n}$ indeterminates ${\alpha(X)}$ with no constant term satisfying ${\alpha(F(X,Y))=G(\alpha(X), \alpha(Y))}$. From here we still have the same inductively defined endomorphisms for any ${n}$ (not just the dimension) from the one-dimensional case: ${[0]_F(X)=0}$, ${[1]_F(X)=X}$ and ${[n]_F(X)=F(X, [n-1]_F(X))}$. That's a lot of information to absorb, so we'll end here for today.

Formal Groups 1

Today we'll start our long journey on the definition of the ring of Witt vectors. Our first step will be to think about formal group laws, since this will be useful to us for things I have planned later on (the formal groups attached to varieties in positive characteristic). Let ${A}$ be a commutative ring with ${1}$. Then a one-dimensional formal group law over ${A}$ is just a formal power series in two variables ${F(x,y)\in A[[x,y]]}$ of the form ${F(x,y)=x+y+\sum c_{i,j}x^iy^j}$ that satisfies "associativity". This means that ${F(x, F(y,z))=F(F(x,y),z)}$. A key thing is that the power series has no constant term. We call the ring ${A}$ equipped with this ${F}$ a (one-dimensional) formal group. This terminology makes some sense considering it is sort of giving a group operation on ${A}$, but since these are just formal series, we may not have convergence when plugging actual elements of ${A}$ in. The formal group is called commutative if ${F(x,y)=F(y,x)}$.

Here are two easy examples to see that this is really quite concrete. The additive formal group is given by ${F(x,y)=x+y}$ and the multiplicative formal group uses ${F(x,y)=x+y+xy}$. They are both commutative. We'll suggestively write ${\widehat{\mathbb{G}}_a(x,y)=x+y}$ and ${\widehat{\mathbb{G}}_m(x,y)=x+y+xy}$. Let's not shy away from positive characteristic, since that will be our main usage of formal groups in the future. We have a nice non-commutative formal group on ${A=k[\epsilon]/(\epsilon^2)}$ where ${\text{char}(k)=p>0}$ given by ${F(x,y)=x+y+\epsilon xy^p}$.

Now we can get honest groups out of our formal groups. Suppose ${F}$ is a formal group law on ${A}$; then we can consider power series in one variable with no constant term. We take this just as a set, so given ${\gamma_1(t), \gamma_2(t)\in tA[[t]]}$, it makes sense to form ${F(\gamma_1(t), \gamma_2(t))}$, so we'll define ${\gamma_1(t)+_F \gamma_2(t)=F(\gamma_1(t), \gamma_2(t))}$. This turns our set into an honest group, which we denote ${\mathcal{C}(F)}$. But why are there inverses? We need some sort of lemma that says: given any formal group law ${F}$ over ${A}$ there is a power series ${i(x)=-x+b_2x^2+\cdots}$ such that ${F(x,i(x))=0}$. This is just a special case of what is known as the "Formal Implicit Function Theorem". With an eye towards our goal of the Witt vectors ${W(A)}$, we'll just say here that ${\mathcal{C}(\widehat{\mathbb{G}}_m)}$ is the underlying additive group of the ring of Witt vectors over ${A}$.
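To see the inverse-series lemma in a concrete case, it can be worked out by hand for the multiplicative formal group law: solving ${\widehat{\mathbb{G}}_m(x,i(x))=x+i(x)+x\,i(x)=0}$ for ${i(x)}$ gives

$\displaystyle i(x)=\frac{-x}{1+x}=-x+x^2-x^3+x^4-\cdots,$

which has exactly the shape ${i(x)=-x+b_2x^2+\cdots}$ promised by the lemma (here ${b_2=1}$), and one checks directly that ${x+i(x)+x\,i(x)=0}$ term by term. For ${\widehat{\mathbb{G}}_a}$ the inverse is simply ${i(x)=-x}$.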
Witt Rings 1

Here is just a quick post on why one might want to know what the ring of Witt vectors is. I won't tell you what they are, but here are some interesting ways in which they are used. It is hard to find any resources on their construction, so we'll try to get some information out there. Given a ring, you can construct the Witt ring from it. For fields of positive characteristic, this ring ${W(k)}$ has some very nice properties. It is a DVR with residue field ${k}$ and fraction field of characteristic ${0}$.

I'm very interested in a class of problems in algebraic geometry known as "lifting problems". One wants to know if a particular variety defined over a positive characteristic field has a lift to characteristic ${0}$. What this means is that you have a deformation of the variety where the special fiber is the variety itself, but another fiber is of characteristic ${0}$. This probably hurts your brain if you are used to thinking of deformations as "continuously" changing a variety, but recall all it really means is that you have a flat family.

Here is where the Witt vectors shine. Suppose you are trying to lift a variety ${Y}$ to characteristic ${0}$. Then you might try to find an ${X}$ and a flat map ${X\rightarrow \text{Spec}(W(k))}$ with the property that the fiber over the closed point is ${Y}$. Then you've lifted it, since the generic fiber is a deformation of ${Y}$ and is defined over ${\mathrm{Frac}(W(k))}$, which is of characteristic ${0}$. Note that finding such an ${X}$ is usually very difficult and often requires constructing a formal scheme one step at a time and proving that this is algebraizable, but now we're getting ahead of ourselves.

Now I'll just list some other applications that we won't focus on, but hopefully something catches your interest so that you'll want to find out what they are. The last few posts were about de Rham cohomology in arbitrary characteristic, and we have our eye towards crystalline cohomology. Number theorists care a lot about crystalline cohomology since it is central in all of this Langlands stuff going on. The tie-in with Witt vectors is in the de Rham-Witt complex. Witt rings show up in K-theory in the form of ${K_0}$ of the category of endomorphisms of projective modules over a commutative ring. Lastly, any unipotent abelian connected algebraic group is isogenous to a product of truncated Witt group schemes. I'm sure there are lots of other examples where these things come up.

"If they're so important, why has no one really heard of them?" you may be asking. I have no idea. I wish there was more out there on them so that it was easier to learn what they are. I think it has to do with the fact that for the most part you have to write down a really gross formula for the multiplication.
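To make that last remark a little more concrete, the "gross formulas" can at least be written out in the smallest case. Using only ${w_1(X)=x_1}$ and ${w_2(X)=x_1^2+2x_2}$ from the "Witt Vectors 2" post above, the defining properties ${w_n(\Sigma)=w_n(X)+w_n(Y)}$ and ${w_n(\Pi)=w_n(X)w_n(Y)}$ force

$\displaystyle \Sigma_1=x_1+y_1,\qquad \Sigma_2=x_2+y_2-x_1y_1,\qquad \Pi_1=x_1y_1,\qquad \Pi_2=x_1^2y_2+x_2y_1^2+2x_2y_2.$

As a quick sanity check on the claim that the Witt vectors recover the ${p}$-adics, take ${p=2}$ and look at ${W_2(\mathbb{F}_2)}$ (in the ${W_t}$ notation of the "Other forms of Witt vectors" post). Addition becomes ${(a_1,a_2)+(b_1,b_2)=(a_1+b_1,\, a_2+b_2+a_1b_1)}$, so the unit ${(1,0)}$ has additive order ${4}$ (its successive multiples are ${(1,0), (0,1), (1,1), (0,0)}$) and ${W_2(\mathbb{F}_2)\simeq \mathbb{Z}/4\mathbb{Z}}$, exactly a length two truncation of ${\mathbb{Z}_2}$, consistent with ${W_{p^\infty}(\mathbb{F}_p)\simeq\mathbb{Z}_p}$.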
{"url":"http://hilbertthm90.wordpress.com/tag/witt-vectors/","timestamp":"2014-04-17T01:10:11Z","content_type":null,"content_length":"166118","record_id":"<urn:uuid:74f0b76a-79b2-488a-9a01-de325b952521>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
A comparison of methods for calculating population exposure estimates of daily weather for health research To explain the possible effects of exposure to weather conditions on population health outcomes, weather data need to be calculated at a level in space and time that is appropriate for the health data. There are various ways of estimating exposure values from raw data collected at weather stations but the rationale for using one technique rather than another; the significance of the difference in the values obtained; and the effect these have on a research question are factors often not explicitly considered. In this study we compare different techniques for allocating weather data observations to small geographical areas and different options for weighting averages of these observations when calculating estimates of daily precipitation and temperature for Australian Postal Areas. Options that weight observations based on distance from population centroids and population size are more computationally intensive but give estimates that conceptually are more closely related to the experience of the population. Options based on values derived from sites internal to postal areas, or from nearest neighbour sites – that is, using proximity polygons around weather stations intersected with postal areas – tended to include fewer stations' observations in their estimates, and missing values were common. Options based on observations from stations within 50 kilometres radius of centroids and weighting of data by distance from centroids gave more complete estimates. Using the geographic centroid of the postal area gave estimates that differed slightly from the population weighted centroids and the population weighted average of sub-unit estimates. To calculate daily weather exposure values for analysis of health outcome data for small areas, the use of data from weather stations internal to the area only, or from neighbouring weather stations (allocated by the use of proximity polygons), is too limited. The most appropriate method conceptually is the use of weather data from sites within 50 kilometres radius of the area weighted to population centres, but a simpler acceptable option is to weight to the geographic centroid. A study of the possible effect of temperature and precipitation on gastroenteritis inspired an assessment of different methods for small area population exposure estimation techniques. The health data were obtained from a survey with respondents from Australia conducted between September 2001 and August 2002 [1]. In area-level analysis such as this the health outcome data and population exposure variables would ideally be available at the finest resolution of aggregation in space and time. The daily health outcome data were available for individuals and the postcode of their residence was recorded. The level of aggregation of weather observations for this analysis was also at the postcode level. The focus of this study was to therefore find the best method for representing the exposure of populations to daily weather in small geographic areas. Within spatial units there are factors that may complicate the computation of exposure values. For instance there can potentially be large variation of temperature influenced by ground elevation. The design of the monitoring network can also have an impact. The density of weather recording stations is an important factor in computation of exposures, with more reliable estimates expected from areas that have many sites. 
The way the sites are spread throughout the area can also affect exposure estimates, such as whether they are evenly distributed or clustered together. An additional factor to consider when dealing with human health outcomes is the distribution of the population within the area, as our primary interest is in human exposure to environmental conditions. The Australian Bureau of Statistics (ABS) does not publish census populations for postcodes, but instead for approximations termed Postal Areas (POA) [2]. Although there are inconsistencies when matching postcodes to POA [3], these POA were used in order to utilise information on the population distribution in the computation of exposure. Population weighted exposure data are conceptually appealing as they more closely estimate the weather being experienced by the majority of the population. A complication is that some postcodes consist of multiple non-contiguous parts, or are large single-part postcodes with multiple population clusters. Therefore calculating an estimate for each sub-population separately gives better information from a population exposure perspective, although this is more computationally intensive.

Non-computationally intensive methods of calculating weather exposure estimates used by others have included taking the mean of all stations' observations within a geographic region, a method used by the Australian Bureau of Meteorology for precipitation since 1910 [4]. A similar method is to calculate the mean from the nearest neighbouring stations. This method has been used for a variety of purposes including rainfall [5,6]. A more sophisticated method is inverse distance weighted averages. This approach has been used in many area, point and gridding contexts [7,8]. An inverse distance weighted average is

\[ Q_j = \frac{\sum_i W_{ij} Z_i}{\sum_i W_{ij}} \]

where \(Q_j\) is the estimate of a day's weather for the \(j\)th spatial unit, \(Z_i\) is the data value measured at the \(i\)th station, and the \(W_{ij}\) are weights calculated as the reciprocal of the distance, or squared distance, between the \(j\)th spatial unit centroid and each of the stations in the neighbourhood. Stations outside the neighbourhood are given zero weight. The inverse of the squared distance is most commonly used as the weight; however, the inverse of the distance is also often used [9]. Using the inverse of the squared distance gives higher weight to closer observations. Note that the part of the spatial unit used to calculate distances from is a very important decision to be made. Some of the options available are: geographic centroid; population weighted centroid; the area boundary; sub-unit centroids; and sub-unit boundaries [10].

Other studies have compared the results from different methods for spatial interpolation of weather, focusing on comparing the cells of gridded surfaces [11-14] or imputed data for stations with gaps [15]. The area estimates derived from different methods have been compared less often [16], and rarely in a population health context. A recent study of health effects of air pollutants and weather in an Australian city estimated exposure at the aggregate level using the average of internal stations without assessing estimates weighted by distance or population [17]. This was noted as a possible limitation of the study design even though the authors considered that any measurement error would be "non-differential and produce conservative relative risks". There has been much work in the air pollution research community investigating different methods of combining exposure data.
Air pollution research that has addressed these issues includes studies that compare areal averages from modelled pollution (using dispersion models or geostatistical surface computation) with those gained from simple averages of monitors [18-20], and others that use the distance from addresses or area centroids to monitors [21,22]. However, in weather exposure studies the rationale for using one technique rather than another is often not explicitly considered, and the differences in the values obtained by the different methods are generally unknown. Comparison of the results of the different estimates is required to ascertain the size of the differences in particular contexts.

There has been a proliferation of approaches to the problems of spatial estimation of daily weather. Some of the methods are splining [23]; kriging and co-kriging [15]; gridded inverse distance weighting algorithms [4,11,24]; multiplicatively weighted proximity polygons [25]; artificial neural networks [26]; additive spatial regression models [27]; physically based numerical models of the three-dimensional atmospheric processes [28]; indirect methods such as radar [29]; and remote sensors mounted on satellites [30]. Some of these methods would enable the inclusion of relevant covariates such as elevation, wind speed and wind direction. However, there is no consensus about which is best to use; some methods are computationally intensive and some commercially available options are expensive [31-33]. In addition, even if one of these were identified as a gold standard to be used for creating gridded surfaces at each time point, it remains unclear whether it is worthwhile to undertake the extra computational burden needed to estimate population weighted exposure values. These could be based on fine resolution population distribution within spatial units, or on less computationally intensive approaches. It is not known which methods yield adequate weather estimates for health research. This paper addresses this important problem.

Five methods for population exposure estimation

Option 1: average of internal or nearest neighbouring stations (using intersecting proximity polygons)

The first option used to estimate daily temperature and precipitation for POA was to calculate the average of internal stations, or the nearest neighbours if no internal stations exist. The first step in Option 1 was to identify stations in the POA boundaries. If there were no stations then the nearest neighbours were used. The "nearest neighbours" were found by the overlay and intersection of proximity polygons (also known as Thiessen or Voronoi polygons) with the POA boundary. In this approach each monitoring station is the focal point used to calculate the boundaries of a proximity polygon, whose area is defined so that all points in it are nearer to the focal point than to any other focal point. The corresponding POA code is joined to each of the daily observations in a many-to-many relationship. Then the averages of each daily observation from the stations are calculated for each POA on each day.

Option 2: average of nearest neighbouring stations (using intersecting proximity polygons)

The second option was to calculate the average of "nearest neighbours" regardless of their location inside or outside each POA boundary. Proximity polygons were used to allocate nearest neighbours as described in Option 1.
Option 3: geographic centroid inverse distance weighted average (using stations ≤ 50 km distant from centroid)

In the third option the distance between the geographic centroid and each station was used to calculate an inverse distance weighted average. The geographic centroid (also known as the mean centre) is the geographic centre of the boundary. The inverse distances from this centroid are used to weight the average of the station observations. An arbitrary maximum distance of 50 km from the centroids of each spatial unit was used because it is likely that stations further away will not be representative of the area of interest [7]. The distance-weighting factor was also compared as the inverse of the distance (Option 3a) and the inverse of the squared distance (Option 3b).

Option 4: population weighted centroid inverse distance weighted average (using stations ≤ 50 km distant from centroid)

Option 4 used the distance between the POA population weighted centroid and the stations for an inverse distance weighted average. The population weighted centroid is calculated by subdividing the POA into its population census constituent sub-units (collector's districts) and calculating the centroids of these. The population-weighted centroid is found by weighting the average of the latitude and longitude coordinates of the sub-unit centroids by the populations of those sub-units. The choice of weights was also compared as the inverse of the distance (Option 4a) and the inverse of the squared distance (Option 4b).

Option 5: population weighted average of census collector's district distance weighted averages (using stations ≤ 50 km distant from centroid)

In the fifth option we calculated inverse distance weighted averages for each sub-unit geographic centroid (collector's district) and then averaged these within POA using sub-unit populations as weights. In this option each centroid had a weather estimate calculated for each day. Then the sizes of the populations are used to weight the contribution of these into each POA on each day. Option 5 differs from Option 4 in that it estimates the weather exposure for each sub-unit first and then gives a weighted summary of these for the POA. The choice of weights was also compared as the inverse of the distance (Option 5a) and the inverse of the squared distance (Option 5b).

The options are shown as schematic diagrams in figures 1, 2, 3 and 4. Figure 5 shows the legend for the symbols used in these figures. For Options 1 and 2 the images are the same so we have displayed these two options together in figure 1. In Option 1 the areas with internal stations are assessed first. This would give POA Y the value of its internal station 3. POA W would be given an average of the two internal stations 1 and 4. Then the areas with no internal stations are assessed using the proximity polygon network represented by the thick dashed lines. POA X has four neighbouring stations 1, 2, 3 and 5. POA Z only has one overlapping proximity polygon, indicating that the nearest neighbour is station 4.

Figure 1. Options 1 and 2 for calculating population exposure estimates of daily weather for areas. Options 1 and 2 use the internal station and nearest neighbour by proximity polygon methods.

Figure 2. Option 3 for calculating population exposure estimates of daily weather for areas. Option 3 uses the inverse distance to geographic centroid.

Figure 3. Option 4 for calculating population exposure estimates of daily weather for areas. Option 4 weights to the population weighted centroid.
Figure 4. Option 5 for calculating population exposure estimates of daily weather for areas. Option 5 applies population weights to CD distance weighted averages.

Figure 5. Legend for symbols used in figures 1-4.

In Option 2 the only differences are that now POA Y is given the average of the internal station 3 and also the nearest neighbouring stations 1 and 4. POA W now includes the neighbouring station 5 in the average of internal stations 1 and 4. In Options 3–5 the process is only described for POA Y to avoid excessive detail.

In figure 2, Option 3 is shown. The distances from the stations to the geographic centroid of POA Y (shown by the star) are calculated. The distances between the centroid and stations within the search radius are shown by the lines. The inverse distance weighted average will include stations 1, 3 and 4. Station 3 is so close to the centroid that the inverse distance weighted average will be dominated by this observation. This is especially the case using the weight calculated by the reciprocal of the squared distance.

Figure 3 shows that Option 4 uses a centroid weighted by the population of the sub-unit collector's districts (CD) to calculate the distances from the stations. For POA Y the centroid is pulled to the southeast because of the dominance of population in that direction. Distances are calculated from this centroid to the stations within the search radius, which now includes stations 3 and 4 together with one further station shown in the figure.

In figure 4, Option 5 is shown. Here the distance from each sub-unit centroid to each station within the search radius is used to calculate a daily estimate. These are then weighted by the population and aggregated to give a POA level estimate. We considered Option 5b the most conceptually appealing because it incorporates fine resolution population distribution patterns and is more sensitive to observations close to these sub-populations than the other options.

Meteorological data

We obtained average daily temperature (the average of daily maximum and minimum temperature) in degrees Celsius and the daily precipitation in the 24 hours before 9 am in millimetres from the National Climate Centre of the Bureau of Meteorology Research Centre [34]. Exposure estimates were calculated to correspond to the gastroenteritis survey respondent dates and postcode localities for the period August 2001 to December 2002 for 620 POA from NSW and the ACT. Weather data were obtained for 2,246 Bureau of Meteorology stations within 50 km of the NSW border (figure 6). Not all stations in the relevant POAs logged observations in the period, nor do they all observe every parameter. The Meteorology Bureau receives data either electronically or manually on paper forms. These data may undergo initial error checking or subsequent error checking, and each observation is given a quality rating. As a result of this error checking, and the incorporation of additional historical data, the data may be modified. It is unlikely that there would be significant modifications made more than a few months after the date of observation [15]. Overall, 86% of the precipitation observations and over 99% of the temperature observations were considered acceptable by the Bureau. These were the only data used in this study. There were 1,816 stations with at least one good quality precipitation observation and 220 stations with at least one good quality temperature measurement during the period.

Figure 6. Map of Bureau of Meteorology monitoring stations overlaid on Postal Areas.
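To illustrate how an Option 5b-style estimate could be computed, here is a minimal Python sketch. The data layouts (lists of CD centroids with populations, and station observations with coordinates), the haversine distance helper and the example values are illustrative assumptions rather than the authors' actual implementation, which used GIS software and an SQL server.

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in kilometres; an assumed helper."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def idw_estimate(lat, lon, stations, radius_km=50.0, power=2):
    """Inverse distance weighted average of one day's observations at one point.
    stations: list of (lat, lon, value) tuples; value is None when not observed."""
    num = den = 0.0
    for s_lat, s_lon, value in stations:
        if value is None:
            continue
        d = distance_km(lat, lon, s_lat, s_lon)
        if d > radius_km:
            continue                      # stations outside the neighbourhood get zero weight
        w = 1.0 / max(d, 0.001) ** power  # guard against a station sitting on the centroid
        num += w * value
        den += w
    return num / den if den > 0 else None  # None: no usable stations within the radius

def option_5b(cds, stations):
    """Option 5b-style estimate for one POA on one day: population weighted
    average of CD-level inverse squared distance weighted estimates.
    cds: list of (lat, lon, population) for the collector's districts in the POA."""
    num = den = 0.0
    for lat, lon, pop in cds:
        est = idw_estimate(lat, lon, stations)
        if est is None:
            continue
        num += pop * est
        den += pop
    return num / den if den > 0 else None

if __name__ == "__main__":
    # Hypothetical CD centroids/populations and daily rainfall observations (mm).
    cds = [(-33.90, 151.10, 4200), (-33.95, 151.20, 1800)]
    stations = [(-33.92, 151.12, 6.0), (-33.80, 151.00, 2.5), (-34.20, 151.30, None)]
    print(option_5b(cds, stations))
```

In this sketch, an Option 4b-style estimate would call idw_estimate once at the population weighted centroid instead of once per CD, and Option 3 would call it at the geographic centroid.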
Postcodes and postal areas Inconsistencies between Australian postcode areas (for which health data are available) and Australian Bureau of Statistics POA boundaries (for which population data are available) are sometimes considerable [3,35]. Despite this we used the POA boundaries to enable the incorporation of fine resolution population data from the Australian Census [36]. The population data are based on the smaller CDs, which are then combined, by the Australian Bureau of Statistics, into the larger POA units in such a way as to align them as closely as possible to postcodes. When a CD crosses more than one postcode, the decision rule for allocating it to a POA is the area that contains the majority of the population [37]. This is done in a subjective way using indicators such as how much of the area of the CD lies in each region and the distribution of land-use parcels [38]. A further complication is that some postcodes, and therefore some POA, comprise two or more separate land areas. In 2001 there were 72 such multipart POA in NSW and the ACT. The maximum distance between the geographic centroids of any two parts of the same POA was 350 km (POA 2831) and the mean of the split POAs centroid to centroid distance was 33 km. The maximum number of parts in any one POA was 16 (POA 2324). This is a common problem in coastal areas with many small islands allocated a single code, however these cases normally have small distances between parts. There are some inland POA with fewer numbers of parts but greater distances between these, due to the way Australia Post operates its delivery system. Summary of estimates from all five options The time taken to calculate exposure estimates using Options 1 and 2 (2–3 hours per weather parameter on a desktop PC) was appreciably less than that required for Options 3 and 4 (around 8 hours per parameter). Option 5 required much more processing than the other options because each CD needed an inverse distance weighted weather estimate on each day (approximately 9,500 CD). This method was completed using a Structured Query Language server. Even using this more powerful computer, the time taken was approximately 8 hours per parameter. The monitoring network is sparse in the west of the state and Options 1 and 2 suffer from a paucity of neighbourhood proximity polygon information. Of the 620 NSW and ACT POA, there were 375 that had internal precipitation stations and 130 with internal temperature stations. Some POA are allocated only one station and when those stations had days with no observations this resulted in many days with missing data. The percentage of complete (and 90% complete) daily POA estimates are shown for each option in table 1. There are more gaps from the temperature observation network, which is sparser than the precipitation observations. Table 1. Percentage of POAs with complete, and a majority, of weather estimates by option The problem stems from the fact that proximity polygon size is inversely related to the density of monitoring stations. In sparsely monitored regions the large size of polygons increases the probability that a POA will be allocated to only one monitoring station, causing gaps in the series on days when no weather information is available for that station. The example of POA Z in figure 1 shows that the estimate is determined by one station in Options 1 and 2. In contrast the distance-weighting scheme uses all stations within a given distance (50 km for this study) and thus incorporates more information. 
The inverse distance weighting methods overcame this problem because it is more likely that there will be another station observing which can be used, and the information from these will be incorporated even if the nearest neighbour is not observing on a particular day. However, this may cause some problems on days when there are only distant stations observing and these are given full weight because there are no close observations.

Difference between the options

The difference between Option 5b and the daily estimates of each of the options was calculated. Many of the precipitation estimates were zero due to the dry conditions in NSW and the ACT, and consequently many of the differences between the precipitation estimates of the options were also zero. In Option 1, 67% of estimates were zero; in Option 2, 62%; in Options 3a and 3b, 36%; in Options 4a and 4b, 36%; and in Options 5a and 5b, 33% of daily estimates were zero. To examine the rainfall differences between Option 5b and the comparative option, summary statistics of the precipitation differences were calculated for all estimated values where either Option 5b or the comparative option had a value greater than zero. The mean and median of the daily differences in Table 2 represent the bias of that option against Option 5b, after excluding those readings where both options estimated zero rainfall.

Table 2. Summary of daily differences between each option and Option 5b for temperature and precipitation

In Option 1 the mean of the temperature differences is negative, implying that this option estimates lower temperatures on average than Option 5b. On the other hand, the mean of the Option 2 differences is positive, implying that this option estimates higher temperatures on average. In the precipitation estimates for Options 1 and 2 the mean is positive, implying higher rainfall estimates than Option 5b. The range and standard deviation of the differences for these options are large, implying that the results are broadly inconsistent with the Option 5b estimates. For the inverse distance weighted options (3, 4 and 5a), in both precipitation and temperature the mean difference shows that there is a tendency for Option 5b to have higher values, with Option 4b the closest to 5b and 3a the most different. However, for precipitation the median differences are all positive or zero, suggesting that the mean is affected by some extreme values where rainfall estimates by Option 5b are considerably higher than the comparative option. For temperature both the median and the mean are negative or zero for all options apart from Option 2, which implies that Option 5b consistently estimates higher temperatures.

The scatter plots in figure 7 show the difference in precipitation between each of the options and Option 5b on the y-axis, and the magnitude of Option 5b on the x-axis. This shows that Options 1 and 2 give very different estimates from Option 5b. In addition, Option 5b gives markedly higher estimates than any of the non-squared weighted averages (3a, 4a and 5a) for precipitation of greater magnitude. This probably reflects the correlation between population density and rainfall, with more population clusters in areas where there is generally higher rainfall.

Figure 7. Scatter plots of the differences between precipitation estimates from each option and Option 5b.
The differences between Option 5b rainfall and the inverse-squared distance weighted options on the bottom row (3b and 4b) show that the difference generally does not vary with rainfall of greater magnitude (with a few extreme exceptions, such as precipitation differences greater than 100 mm). This implies that increasing the local weighting by squaring the distance brings the Option 3 and 4 estimates closer to Option 5b.

The temperature differences are displayed in Figure 8. The wider scatter in the first column shows that the nearest neighbour methods perform poorly. The scatter plots for Options 3a and 3b show that the geographic centroid gives quite different estimates, both greater and less than Option 5b, regardless of the weighting power. This is most evident in the mid-range of temperatures; the differences are reduced in the higher and lower temperature ranges. Options 4a and 4b give differences similar to those of Option 5a, and overall the differences for these options are not great.

Figure 8. Scatter plots of the differences between temperature estimates from each Option with Option 5b.

Regional differences

To see whether the differences between the options and Option 5b varied by region, we analysed the differences aggregated into 15 climatic zones. These zones were constructed by grouping POA based on the Bureau of Meteorology rainfall districts [39]. The climatic zones and their respective population densities are shown in Figure 9. Two multipart POA that span the border of climatic zones (2652 and 2642), shown by the crosshatched areas, were excluded because they cannot be considered part of a single climatic region.

Figure 9. Postal Areas grouped into 15 climatic regions with population densities.

The daily precipitation differences (calculated only where at least one option is non-zero) were grouped by region and displayed in the box and whisker plots in Figure 10. These plots have been duplicated: the set on the left shows the range of differences between the maximum and minimum (described by the top and bottom of the whiskers), while the set on the right shows in detail the range between -0.3 mm and 0.3 mm. The boxes contain all values between the first and third quartiles, and this shows that 50% of the differences are very small in all districts. It also appears that the POA in districts 6, 7, 8 and 15 have larger rainfall differences from Option 5b (positive and negative) than other districts. Districts 6, 7 and 8 are in the higher rainfall zones of the NSW north coast, where the spatial pattern of rainfall is usually highly localised. Region 15 is in the drier part of the state, where the POA are larger.

Figure 10. Box plots of the precipitation differences for the 5 Options by climatic zone. The left plot shows the range (whiskers) while the right hand plot shows the median, first quartile and the third quartile (box).

The influence of highly variable rainfall in a small area is demonstrated in Figure 11 by an exemplar POA from the northern NSW coastal region (POA 2441). This POA happens to be split, and on 31/3/2002 a storm passed between two parts of the same POA that are about 20 km apart. The geographic centroid is shown by the star in the southern part of this POA. The parts have equal-sized populations, so the population-weighted centroid (shown by the cross) falls equidistant between them (in multipart POA that have populations in each of the parts, the centroid will often fall outside the boundaries of the parts).
The CD centroids are represented by the black circles, whose sizes are proportional to their populations. Option 1 had an estimate of 81 mm; Option 2 was 110 mm; Option 3a was 105 mm; Option 3b was 133 mm; Option 4a was 150 mm; Option 4b was 202 mm; Option 5a was 91 mm and Option 5b was 99 mm. Option 4 (inverse distance weighting to the population-weighted centroid) was very different from Option 5 (the population-weighted average of inverse distance weighted estimates at the CD centroids).

Figure 11. Example of precipitation over a multipart Postal Area giving different results for each option.

The temperature differences were also grouped by region and are shown by box plots in Figure 12. These do not vary as much between regions, and in all regions 50% of the daily differences between Options 3b and 5b are within plus or minus 5 degrees Celsius. For Option 4b, 50% of the daily differences from Option 5b are within 0.3 degrees. Districts 7 and 8 stand out in this comparison as well, showing larger differences between Options 3b and 5b than the other districts.

Figure 12. Box plots of the temperature differences for the 5 Options by climatic zone. The left plot shows the range (whiskers) and the right hand plot shows the median, first quartile and the third quartile (box).

Temporal differences

To see whether the daily differences from Option 5b varied during the year, the differences were also grouped by month. There were greater precipitation differences for both Options 3b and 4b in February 2002, a month with high rainfall in some parts of NSW, which increased the likelihood of greater differences. The daily temperature differences grouped by month showed that the differences between Option 3b and Option 5b were more strongly negative in the winter months of June, July and August.

The primary focus of this work is a comparison of options for calculating weather exposure measures for health analyses of small-area populations. The exposures were average temperature and rainfall, although other measures could be similarly compared, including humidity, ultraviolet radiation, air pollution and other environmental exposures. The criteria used to assess the different options were: conceptual soundness; computer time required; low variation across methods; and completeness of values at the daily POA level. The population-weighted average of inverse distance weighted averages (Option 5) fulfilled these criteria best. The other geographic and population weighting methods performed similarly, and were quite close to the Option 5 estimates in most regions. Using data from weather stations internal to the area, or using neighbour allocation methods based on proximity polygons, performed poorly. This was because the density of the monitoring stations is very low, resulting in dependence on only a few observations to calculate values.

A possible limitation of Option 5 is that the population data used to describe the fine-resolution distributions were based on the August 2001 census enumeration counts. This single estimate may not be an accurate representation of the population at other times. If the data were available, the distribution of population at specific times could be taken into account in the calculations. As this study calculates weather estimates for a period shortly after the census, this issue will not greatly affect the application presented here.
As the census is based on residence, the population distribution does not take account of frequent movement of people, such as daily travel to other areas for work, which may differ between populations. In the absence of data describing such movement, the census currently represents the best available data on population distribution.

The nearest neighbour method (using proximity polygons) allocates fewer monitoring stations to each POA, and thus limits access to regional information and may give unrepresentative estimates. It also makes these methods susceptible to large gaps in the series. The problem of missing data in Options 1 and 2 could be resolved in a number of ways by imputation; however, the problems associated with having fewer monitoring-station observations per POA cannot easily be dealt with in this way.

The inverse distance weighting approaches incorporate information from many more stations, and for this reason they are less susceptible to the gaps found in Options 1 and 2. However, when no stations are close, distant stations are given full weight. We set the limit at 50 km, and as some POA estimates were derived from stations almost this far from their centroids, these values may be untrustworthy. A tighter search radius would reduce this but would increase the number of missing values, while a larger radius would incorporate more values but potentially more unreliable data. Sensitivity analyses could be done to study the effect of different cut-off levels.

Option 5b was based on localised population weighting. It gives higher estimates at greater rainfall magnitudes than any of the other methods using non-squared distance weighting. In some coastal areas of Australia there is highly localised intense rainfall, which is the probable cause of this effect. The inverse-squared-distance-weighting options (3b and 4b) decreased the influence of stations at a greater distance and gave results more similar to Option 5b.

Daily temperature and rainfall estimates calculated using data from internal stations or from nearest neighbour (proximity polygon) methods give poor representations of local-area weather patterns for health studies based on daily data. The weighting approaches using weather stations less than 50 kilometres from area centroids were considerably better in this regard, and the majority of daily differences across these options were small. The extent of the differences depended to some extent on the climatology of the spatial unit's location and the time of the year. For studies of human health in the Australian context, the distance to a regional geographic centroid is not as precise as a population-weighted centroid, as large areas of uninhabited land (and the weather of these areas) may not provide relevant information about weather exposures. The population-weighted average of sub-unit inverse distance weighted estimates is the most conceptually appealing method applied here; however, it is more computationally intensive than the simpler population-weighted centroid estimates, and there is little difference in the resulting daily average temperature and rainfall estimates.

Hardware and software

Options 1 to 4 were calculated on a desktop PC. Option 5 was performed on a Structured Query Language (SQL) server. GIS operations used ArcGIS 9.1 [40]. Microsoft Access was used to join the concordance table of POA-to-monitoring-station proximity polygons to the daily observations, and to average these while grouping by POA code and date for Options 1 and 2.
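A pandas sketch of this join-and-average step for Options 1 and 2 (a concordance of POA to nearby stations joined to the daily observations, then averaged by POA and date). The table layouts and column names are assumptions for illustration, since the study itself used Microsoft Access, STATA and SQL for these steps; Options 3 to 5 would replace the plain mean with the distance- or population-weighted averages sketched earlier.

import pandas as pd

# Hypothetical inputs:
#   concordance: one row per (poa, station) pair from the proximity-polygon overlay
#   daily_obs:   one row per (station, date) with the observed value
concordance = pd.DataFrame({"poa": ["2441", "2441", "2831"],
                            "station": ["A", "B", "C"]})
daily_obs = pd.DataFrame({"station": ["A", "B", "C"],
                          "date": ["2002-03-31"] * 3,
                          "rain_mm": [81.0, 110.0, 0.0]})

# Join the concordance to the observations, then average within POA and date.
poa_daily = (concordance.merge(daily_obs, on="station", how="left")
                        .groupby(["poa", "date"], as_index=False)["rain_mm"]
                        .mean())
print(poa_daily)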
Options 3 and 4 used the "joinby" and "collapse" commands in STATA 8 [41] to join the distance weights with the daily observations, and Option 5 used the SQL server.

Meteorological data

Individual station files of daily meteorological data for 1990–2005 were parsed for integration into MS Access databases using Visual Basic code written by Melissa Goodwin at the National Centre for Epidemiology and Population Health.

Postcode/postal area populations and concordance

The CD populations from the 2001 census were obtained from the ABS [36]. These data were enumeration counts rather than counts by area of usual residence, which cost more. Some postcodes do not exist as POA; for these, the locality names were found using the online postcode finder from the electronic telephone directory [42]. These locality names were georeferenced using the online Geoscience Australia Place Name Finder [43] or the ABS 'Urban Centres and Localities' spatial boundaries (also CD aggregates from the ABS). These locations were then overlaid and intersected with the POA boundaries and assigned that POA code instead of their real postcode. Multipart POA were assessed by first using the ArcGIS multipart-to-singlepart tool (features toolbox) and then counting the number of parts per feature (using the frequency tool).

Internal stations

Internal stations were found using the intersect tool in the ArcGIS Spatial Analyst extension. This information was joined to the meteorological data using Microsoft Access.

Nearest neighbour

Nearest neighbour concordances were calculated by first creating proximity polygons for the appropriate stations (using the coverage tools), then overlaying and intersecting these with the POA (using Spatial Analyst tools in ArcGIS). Centroids were calculated using the Visual Basic for Applications script from the ArcGIS help menu. Distances were then calculated using the coverage toolbox "point-distance" tool. The projection was set to the Albers South Asia Conic (metres) projection; this is necessary to avoid the distortion of length inherent in other cartographic projections [44].

Authors' contributions

IH carried out the GIS analysis and drafted the manuscript. GH conceived the study, conducted the health analysis and helped to draft the manuscript. KD provided theoretical and conceptual guidance and helped to draft the manuscript. All authors read and approved the final manuscript. IH was employed by the National Centre for Epidemiology and Population Health at the time this work was conducted. The authors would like to thank Melissa Goodwin and Aaron Petty for assistance with programming, Agus Salim and Rosalie Woodruff for editorial advice, and Graham de Hoedt, Neville Nicholls, Cathy Toby and Mike Manton from the Bureau of Meteorology for access to the meteorological data and general advice.

References

1. Hall G, and the OzFoodNet Working Group: Results from the national gastroenteritis survey 2001-2002. In National Centre for Epidemiology and Population Health Working Papers. Canberra, Australian National University; 2004: [No. 50].
2. Australian Bureau of Statistics: Statistical geography volume 2: census geographic areas (Cat. No. 2905.0). Canberra, Australia; 2001.
3. Jones SD, Eagleson S, Escobar FJ, Hunter GL: Lost in the mail: the inherent errors of mapping Australia Post postcodes to ABS derived postal areas. Australian Geographical Studies 2003, 41(2):171-179.
4. Jones D, Beard G: Verification of Australian monthly district rainfall totals using high resolution gridded analyses.
5. Aurenhammer F: Voronoi diagrams - a survey of a fundamental geometric data structure. Computing Surveys 1991, 23(3):345-405.
6. Mills GA, Weymouth G, Jones D, Ebert EE, Manton M, Lorkin J, Kelly J: A national objective daily rainfall analysis system. Melbourne, Australia, Bureau of Meteorology Research Centre; 1997.
7. Cressie N: Geostatistical methods for mapping environmental exposures. In Spatial Epidemiology: Methods and Applications. Edited by Elliott P, Wakefield JC, Best NG, Briggs DJ. Oxford, Oxford University Press; 2000.
8. Moore K: Resel filtering to aid visualisation within an exploratory data analysis system. Journal of Geographical Systems 2000, 2(4):375-398.
9. Bailey TC, Gatrell AC: Interactive Spatial Data Analysis. Essex, Longman Scientific and Technical; 1995.
10. Stillman ST, Wilson JP, Daly C, Hutchinson MF, Thornton P: Comparison of ANUSPLIN, MTCLIM-3D, and PRISM precipitation estimates. In Proceedings of the Third International Conference/Workshop on Integrating GIS and Environmental Modeling, January 21-25, 1996. Santa Fe; 1996.
11. Shine JA, Krause PF: Exploration and estimation of North American climatological data. In Proceedings of "Computing Science and Statistics: Modeling the Earth's Systems: Physical to Infrastructural": April 5-8, 2000. New Orleans; 2000.
12. Jolly W, Graham J, Michaelis A, Nemani R, Running S: A flexible, integrated system for generating meteorological surfaces derived from point sources across multiple geographic scales. Environmental Modelling & Software 2005, 20:873-882.
13. Naoum S, Tsanis IK: Ranking spatial interpolation techniques using a GIS-based DSS.
14. Jeffrey SJ, Carter JO, Moodie KB, Beswick AR: Using spatial interpolation to construct a comprehensive archive of Australian climate data. Environmental Modelling & Software 2001, 16(4):309-330.
15. Pardoiguzquiza E: Comparison of geostatistical methods for estimating the areal average climatological rainfall mean using data on precipitation and topography. International Journal of Climatology 1998, 18(9):1031-1047.
16. Jalaludin B, Morgan G, Lincoln D, Sheppeard V, Simpson R, Corbett S: Associations between ambient air pollution and daily emergency department attendances for cardiovascular disease in the elderly (65+ years), Sydney, Australia. Journal of Exposure Science & Environmental Epidemiology 2006, 16(3):225-237.
17. Bell ML: The use of ambient air quality modeling to estimate individual and population exposure for human health research: a case study of ozone in the Northern Georgia Region of the United States. Environ Int 2006, 32(5):586-593.
18. Jerrett M, Arain A, Kanaroglou P, Beckerman B, Potoglou D, Sahsuvaroglu T, Morrison J, Giovis C: A review and evaluation of intraurban air pollution exposure models. J Expo Anal Environ Epidemiol 2005, 15(2):185-204.
19. Jerrett M, Burnett RT, Ma R, Pope CA, Krewski D, Newbold KB, Thurston G, Shi Y, Finkelstein N, Calle EE, Thun MJ: Spatial analysis of air pollution and mortality in Los Angeles. Epidemiology 2005, 16(6):727-736.
20. Dominici F, Peng RD, Bell ML, Pham L, McDermott A, Zeger SL, Samet JM: Fine particulate air pollution and hospital admission for cardiovascular and respiratory diseases. JAMA 2006, 295(10):1127-1134.
21. Zandbergen PA, Chakraborty J: Improving environmental exposure analysis using cumulative distribution functions and individual geocoding. Int J Health Geogr 2006, 5:23.
22. Hutchinson MF: Interpolation of mean rainfall using thin plate smoothing splines. International Journal of Geographical Information Systems 1995, 9:385-403.
23. Thornton P, Running S, White M: Generating surfaces of daily meteorological variables over large regions of complex terrain. Journal of Hydrology 1997, 190:214-251.
24. Mu L: Polygon characterization with the multiplicatively weighted voronoi diagram.
25. Rigol JP, Jarvis CH, Stuart N: Artificial neural networks as a tool for spatial interpolation. International Journal of Geographical Information Science 2001, 15(4):323-343.
26. Zoppou C, Roberts S, Hegland M: Spatial and temporal rainfall approximation using additive models. Australian & New Zealand Industrial and Applied Mathematics Journal 2000, 42(E):C1599-C1611.
27. Hurley PJ, Physick WL, Luhar AK: TAPM: a practical approach to prognostic meteorological and air pollution modelling. Environmental Modelling and Software 2005, 20(6):737-752.
28. Curtis DC: Storm sizes and shapes in the arid southwest. In Proceedings of the Arizona Floodplain Management Association Fall 2001 Meeting, Nov 8-9, 2001. Parker; 2001.
29. Neteler M: Time series processing of MODIS satellite data for landscape epidemiological applications.
30. Houlder D, McMahon J, Hutchinson MF: ANUSPLIN and ANUCLIM. [http://cres.anu.edu.au/outputs/orderform-aust-print.php]
31. Queensland Government Department of Natural Resources and Mines: SILO Australian Daily Historical Climate Surfaces. [http://www.nrm.qld.gov.au/products/cat_services.php?category=534&description=Digital+climate+data]
32. University of Montana Numerical Terradynamic Simulation Group: DAYMET Daily Surface Weather Data and Climatological Summaries. [http://www.daymet.org/]
33. National Climate Centre of the Bureau of Meteorology Research Centre: Daily weather data for Bureau of Meteorology stations. 150 Lonsdale Street, Melbourne 3000, AUSTRALIA; 2005.
34. Jenner A, Blanchfield F: Population estimates for non-standard geographical areas - practices, processes, pitfalls and problems. In Proceedings of the Joint AURISA and Institution of Surveyors Conference: 25-30 November 2002. Adelaide; 2002.
35. Australian Bureau of Statistics: CDATA2001, census of population and housing data by age and sex for census collector's districts. Canberra; 2001.
36. Blanchfield F, Director of Australian Bureau of Statistics Geography: Written communication regarding criteria for allocating CD-derived postal areas. Canberra; 2004.
37. Bureau of Meteorology: Bureau of Meteorology Rainfall Districts. [http://www.bom.gov.au/climate/how/newproducts/images/raindist.pdf]
38. Environmental Systems Research Institute: ArcGIS 9.1. [http://www.esri.com] Redlands; 1999.
39. Stata Corporation: STATA statistical software versions 8. [http://www.stata.com] College Station; 2001.
40. Australian Whitepages: Australian Whitepages check a postcode. [http://www.whitepages.com.au/wp/search/tools.jhtml]
41. Australian Government Geoscience Australia: Geoscience Australia place name search. [http://www.ga.gov.au/map/names/]
simplifying expressions with exponents activity

zinbij (Posted: Wednesday 27th of Dec 19:39):
Hey guys, I was wondering if someone could help me with simplifying expressions with exponents activity? I have a major assignment to complete in a couple of months and for that I need a thorough understanding of problem solving in topics such as equation properties, sum of cubes and relations. I can't start my assignment until I have a clear understanding of simplifying expressions with exponents activity, since most of the calculations involved will be directly related to it in one way or the other. I have a problem set which, if someone can help me solve, would help me a lot.

Vofj Timidrov (Posted: Friday 29th of Dec 08:39):
You really shouldn't have wasted money on a math tutor. Had you posted this message before hiring a tutor, you could have saved yourself loads of money! Anyway, what's done is done. Now to make sure that you do well in your exams I would suggest using Algebrator. It's a user-friendly piece of software. It can solve the toughest problems for you, and what's even cooler is the fact that it can even explain how to go about solving it! There used to be a time when even I was having difficulty understanding angle-angle similarity, dividing fractions and side-side-side similarity. But thanks to Algebrator, it's all good now.

Ashe (Posted: Friday 29th of Dec 16:50):
It's true, even I've been using this software for some time now and it really helped me in solving my queries on simplifying expressions with exponents activity. I also used it to clear my doubts in topics such as rational inequalities and trinomials. If you don't have much time, then I would highly suggest this software, and well even if you have a lot of time in hand, I still would!

tonj44 (Posted: Sunday 31st of Dec 13:15):
I want it NOW! Somebody please tell me, how do I order it? Can I do so over the internet? Or is there any phone number through which we can place an order?

SjberAliem (Posted: Monday 01st of Jan 12:39):
I remember having often faced problems with y-intercept, radical inequalities and powers. A really great piece of math software is Algebrator. By simply typing in a problem from the workbook, a step by step solution would appear by a click on Solve. I have used it through many math classes – Pre Algebra, Remedial Algebra and Pre Algebra. I greatly recommend the software.

Koem (Posted: Tuesday 02nd of Jan 10:16):
The details are here: http://www.algebra-help.org/product-and-quotient-of-functions.html. They guarantee an unrestricted money back policy. So you have nothing to lose. Go ahead, and good luck!
Results 1 - 10 of 17

1. THEORETICAL COMPUTER SCIENCE, 1994. Cited by 87 (8 self).
We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to an unchanging number of "neurons". If allowed exponential time for computation, they turn out to have unbounded power. However, under polynomial-time constraints there are limits on their capabilities, though being more powerful than Turing Machines. (A similar but more restricted model was shown to be polynomial-time equivalent to classical digital computation in the previous work [20].) Moreover, there is a precise correspondence between nets and standard non-uniform circuits with equivalent resources, and as a consequence one has lower bound constraints on what they can compute. This relationship is perhaps surprising since our analog devices do not change in any manner with input size. We note that these networks are not likely to solve polynomially NP-hard problems, as the equality ...

2. 1993. Cited by 64 (5 self).
Synthetic ethology is proposed as a means of conducting controlled experiments investigating the mechanisms and evolution of communication. After a discussion of the goals and methods of synthetic ethology, two series of experiments are described based on at least 5000 breeding cycles. The first demonstrates the evolution of cooperative communication in a population of simple machines. The average fitness of the population and the organization of its use of signals are compared under three conditions: communication suppressed, communication permitted, and communication permitted in the presence of learning. Where communication is permitted the fitness increases about 26 times faster than when communication is suppressed; with communication and learning the rate of fitness increase is about 100 fold. The second series of experiments illustrates the evolution of a syntactically simple language, in which a pair of signals is required for effective communication. Keywords: artificial lif...

3. 1998. Cited by 31 (1 self).
We describe an emerging field, that of nonclassical computability and nonclassical computing machinery. According to the nonclassicist, the set of well-defined computations is not exhausted by the computations that can be carried out by a Turing machine. We provide an overview of the field and a philosophical defence of its foundations.

4. Theoretical Computer Science, 2004. Cited by 18 (9 self).
We propose certain non-Turing models of computation, but our intent is not to advocate models that surpass the power of Turing Machines (TMs), but to defend the need for models with orthogonal notions of power. We review the nature of models and argue that they are relative to a domain of application and are ill-suited to use outside that domain. Hence we review the presuppositions and context of the TM model and show that it is unsuited to natural computation (computation occurring in or inspired by nature). Therefore we must consider an expanded definition of computation that includes alternative (especially analog) models as well as the TM. Finally we present an alternative model, of continuous computation, more suited to natural computation. We conclude with remarks on the expressivity of formal mathematics. Key words: analog computation, analog computer, biocomputation, computability, computation on reals, continuous computation, formal system, hypercomputation, ...

5. Information Sciences, 1994. Cited by 17 (8 self).
Connectionism, the use of neural networks for knowledge representation and inference, has profound implications for the representation and processing of information because it provides a fundamentally new view of knowledge. However, its progress is impeded by the lack of a unifying theoretical construct corresponding to the idea of a calculus (or formal system) in traditional approaches to knowledge representation. Such a construct, called a simulacrum, is proposed here, and its basic properties are explored. We find that although exact classification is impossible, several other useful, robust kinds of classification are permitted. The representation of structured information and constituent structure are considered, and we find a basis for more flexible rule-like processing than that permitted by conventional methods. We discuss briefly logical issues such as decidability and computability and show that they require reformulation in this new context. Throughout we discuss the implications for artificial intelligence and cognitive science of this new theoretical framework.

6. In Proceedings of the IEEE Workshop on Architectures for Semiotic Modeling and Situation Analysis in Large Complex Systems, 1995.
... this paper we outline the general characteristics of continuous formal systems ...

7. 1992. Cited by 15 (3 self).
We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to an unchanging number of "neurons". If allowed exponential time for computation, they turn out to have unbounded power. However, under polynomial-time constraints there are limits on their capabilities, though being more powerful than Turing Machines. (A similar but more restricted model was shown to be polynomial-time equivalent to classical digital computation in the previous work [17].) Moreover, there is a precise correspondence between nets and standard non-uniform circuits with equivalent resources, and as a consequence one has lower bound constraints on what they can compute. This relationship is perhaps surprising since our analog devices do not change in any manner with input size. We note that these networks are not likely to solve polynomially NP-hard problems, as the equality "p...

8. 1993. Cited by 14 (6 self).
"Artificial neural networks" provide an appealing model of computation. Such networks consist of an interconnection of a number of parallel agents, or "neurons." Each of these receives certain signals as inputs, computes some simple function, and produces a signal as output, which is in turn broadcast to the successive neurons involved in a given computation. Some of the signals originate from outside the network, and act as inputs to the whole system, while some of the output signals are communicated back to the environment and are used to encode the end result of computation. In this dissertation we focus on the "recurrent network" model, in which the underlying graph is not subject to any constraints. We investigate the computational power of neural nets, taking a classical computer science point of view. We characterize the language re...

9. Minds and Machines, 2001. Cited by 12 (8 self).
It has been argued that neural networks and other forms of analog computation may transcend the limits of Turing computation; proofs have been offered on both sides, subject to differing assumptions. In this report I argue that the important comparisons between the two models of computation are not so much mathematical as epistemological. The Turing machine model makes assumptions about information representation and processing that are badly matched to the realities of natural computation (information representation and processing in or inspired by natural systems). This points to the need for new models of computation addressing issues orthogonal to those that have occupied the traditional theory of computation. Keywords: computability, Turing machine, hypercomputation, natural computation, biocomputation, analog computer, analog computation, continuous computation

10. American Behavioral Scientist, 1997. Cited by 11 (2 self).
A myth has arisen concerning Turing's paper of 1936, namely that Turing set forth a fundamental principle concerning the limits of what can be computed by machine - a myth that has passed into cognitive science and the philosophy of mind, to wide and pernicious effect. This supposed principle, sometimes incorrectly termed the 'Church-Turing thesis', is the claim that the class of functions that can be computed by machines is identical to the class of functions that can be computed by Turing machines. In point of fact Turing himself nowhere endorses, nor even states, this claim (nor does Church). I describe a number of notional machines, both analogue and digital, that can compute more than a universal Turing machine. These machines are exemplars of the class of nonclassical computing machines. Nothing known at present rules out the possibility that machines in this class will one day be built, nor that the brain itself is such a machine. These theoretical considerations undercut a numb...
Opportunities for Undergraduates in Mathematics

Study and Scholarships
Math Jobs and Careers

Most of the following programs are for juniors, such as REUs, PCMI, MTBI, etc.
Research Experiences for Undergraduates in Math - this includes both theoretical and applied mathematics
The Mathematical and Theoretical Biology Institute
Park City Math Institute has summer programs for math research and math education
Center for Discrete Math and Theoretical Computer Science - They have a US program and one in the Czech Republic this summer
GWU Summer Program for Women in Mathematics
VIGRE - Vertical Integration of Research and Education

Some are for sophomores:
The Carleton College Summer Mathematics Program for Women
pre-REU at U of Texas in Wavelets

Some are for graduating seniors intending to go to graduate school.
National Security Agency - has summer internships for undergrads in math and related areas
Nebraska Conference for Undergraduate Women in Mathematics (NCUWM)
Mathematics Advanced Study Semesters (MASS) at Penn State

Seaway Meeting – Upstate NY (October/April)
Saint Lawrence Valley Mathematics Symposium – Potsdam area (Fall)
Joint Math Meetings – varies (January)
NCUWM – Lincoln, NE (January/February)
GREAT Day – Geneseo (April)
Hudson River Undergraduate Conference – Albany area (April)
Applied Math Conference – Buffalo (April)
Math Fest – varies (August)

Math Organizations
MAA - Mathematical Association of America
AMS - American Mathematical Society
SIAM - Society for Industrial and Applied Mathematics
AWM - Association for Women in Mathematics
YMN - Young Mathematicians Network
INFORMS - Institute for Operations Research and the Management Sciences
The HAVEGE unpredictable random number generator
On-line testing
7 March 2002

The on-line testing we provide is derived (adapted) from the NIST statistical suite for random number generators. For a practical and theoretical description of the tests, the reader should consult the NIST report. The sequences we analyze range from a minimum of 1 Mbyte up to 256 Mbytes. These tests were not originally written for analyzing such huge sequences of random numbers, so some performance problems were encountered. Many tests were too CPU- and memory-hungry to be run in a reasonable time (a few minutes for 1 Mbyte). We adapted those tests to run faster and to use less memory when possible. When, despite our efforts, the response time was still too large, we ran the test on a few "randomly" chosen subsequences. We describe our usage of each of the tests.

Each of these tests is run on randomly selected subsequences of each of the sizes 128 bits, 256 bits, ..., up to the complete sequence.

We chose to do 8 runs on randomly chosen 750,000-bit sequences.

This test is very CPU hungry. We chose to run it on only 8 randomly selected 38,192-bit sequences.

and Test 4 (Longest runs tests)
This test is very CPU hungry. We chose to run it on only 8 randomly selected 1-Mbit sequences.

On a single randomly selected 1-Mbit sequence, we ran the 148 template matchings for templates of length 9.

m=14), Test 15 (Random Excursion), Test 16 (Random Excursion Variant)
We chose to run these tests on only 8 randomly selected 1,000,000-bit sequences.

This test is run for all possible parameter values higher than 6 on the beginning of the sequence.

This test is the most CPU hungry of the collection. We chose to run it on a single randomly selected 4731*341-bit sequence.

We run this test for parameter values from 3 to 16 on the whole sequence.
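A sketch (in Python) of the kind of adaptation described above: drawing a few randomly placed fixed-length subsequences from a long bit sequence and applying a test to each of them. The monobit frequency test is used here only as a stand-in, and the helper names and parameters are illustrative assumptions; the actual tests and their parameters are those of the NIST suite.

import math
import random

def monobit_p_value(bits):
    """NIST frequency (monobit) test: p-value for a sequence of 0/1 bits."""
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits))
    return math.erfc(s / math.sqrt(2.0 * n))

def test_random_subsequences(bits, length, runs=8, seed=0):
    """Apply the test to `runs` randomly placed subsequences of `length` bits."""
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        start = rng.randrange(0, len(bits) - length + 1)
        results.append(monobit_p_value(bits[start:start + length]))
    return results

# Example on a toy pseudo-random sequence (the real inputs are 1-256 Mbyte files).
bits = [random.getrandbits(1) for _ in range(1_000_000)]
print(test_random_subsequences(bits, length=750_000, runs=8))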
Why doesn't e^i equal 1?

Kaleb Burklow:
Everyone knows that e^[pi(i)] == -1. Using this we also know that e^[2pi(i)] == 1. What I believe I proved is that e^i == 1, using the following argument.

Start with e^i. Since raising a number to the power 1 just gives you the number back, let's raise it to an expression equivalent to 1, namely (2pi)/(2pi). When raising a power to a power you just multiply the exponents, and when multiplying two numbers you can regroup them, so i*[(2pi)/(2pi)] == [2pi(i)]*[1/(2pi)]. Returning to the original problem, we now get [e^(2pi(i))]^[1/(2pi)]. But since we already know that e^(2pi(i)) == 1, this becomes 1^[1/(2pi)], and 1 raised to any power equals 1. Therefore we have e^i equal to 1, which we know isn't correct. Where did I mess up?

Shenghui Yang:
Basically it is the same question as: what is (-1)^(2/3)? z^(1/r) is usually not defined when r is not an integer.

Ilian Gachevski:
"When raising a power to a power you just multiply them to get the new power" - as your example shows, this is not true for all complex powers. The statement is valid, however, if the inner power is a real number or the outer power is an integer:

In[2]:= Simplify[(E^a)^b == E^(a b), Assumptions -> Element[a, Reals]]
Out[2]= True

In[3]:= Simplify[(E^a)^b == E^(a b), Assumptions -> Element[b, Integers]]
Out[3]= True
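An editorial note (standard complex analysis, not part of the original thread) making the branch issue explicit. Writing $z^w=\exp(w\log z)$ with the multivalued logarithm $\log z=\operatorname{Log} z+2\pi i k$, $k\in\mathbb{Z}$, and taking $z=e^{2\pi i}=1$ gives
$$1^{1/(2\pi)}=\exp\!\left(\frac{1}{2\pi}\,\log 1\right)=\exp\!\left(\frac{1}{2\pi}\cdot 2\pi i k\right)=e^{ik},\qquad k\in\mathbb{Z}.$$
The principal branch ($k=0$) gives $1$, while the step $(e^{2\pi i})^{1/(2\pi)}=e^{i}$ corresponds to the $k=1$ branch; the two sides of $(e^a)^b=e^{ab}$ are evaluated on different branches, which is why the rule fails here.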