Miami Shores, FL Prealgebra Tutor
Find a Miami Shores, FL Prealgebra Tutor
I am fully bilingual, fluent in both Spanish and English, with a B.S. in Genetics and Microbiology from Rutgers University, and I have completed an MD degree. I have great mathematical skills,
specializing in elementary and high school math (basic math, algebra I & II, and geometry), SAT math, ASVAB and GED...
46 Subjects: including prealgebra, English, Spanish, chemistry
...What I find is that the classroom can be a little too distracting for some students and that tutoring can level the playing field for them. If you need help with math, I'm your guy. Together we
can tackle math concepts until they make complete sense to the student.
7 Subjects: including prealgebra, geometry, elementary math, basketball
...With Preparation, Organization, and Practice we will ensure our success on the exam. My expertise is in math but I am also proficient in the Verbal and Analytical Writing portions. Let's get
together and map out your plan for success.
30 Subjects: including prealgebra, calculus, ASVAB, GRE
I am a senior in college majoring in Biology with minors in Mathematics and Exercise Physiology. In the past I have tutored students ranging from elementary school to college in a variety of
topics including FCAT preparation, Biology, Anatomy, Math and Spanish. I enjoy teaching and helping others ...
30 Subjects: including prealgebra, reading, biology, algebra 1
...I love what I do and I love helping students succeed. I am very fun and love to laugh, so during sessions, don't be surprised if we laugh more than we study. However (for all the parents
reading this), there is a method to my madness.
46 Subjects: including prealgebra, chemistry, reading, English
Math Forum Discussions
Topic: Pitfalls with simple probability
Replies: 1 Last Post: Aug 26, 2012 2:13 AM
Pitfalls with simple probability
Posted: Aug 26, 2012 12:30 AM
I have historical data on horse trainers and their race results going back several years. I calculate the simple probability that if Trainer A
uses a gear change, then over the last 3 years his success rate was

Number of winners / Total runners

to give me a strike rate, or probability.

1. How do I test for the optimal historical period to use for my probabilities, i.e. 3 years or 10 years of data? I could go back a decade
for a larger N, but then things change over time. Or is it best to just run my own trials?

2. What are the pitfalls of using this simple approach to predict future outcomes?
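One rough way to compare window lengths is to attach a confidence interval to each strike-rate estimate: a longer window narrows the interval but risks using stale form. The sketch below is illustrative only — the data are simulated, and all function names are my own, not from this thread.

```python
import random

def strike_rate(winners, runners):
    """Simple probability estimate: winners / runners."""
    return winners / runners

def wilson_interval(winners, runners, z=1.96):
    """Wilson score interval for a binomial proportion -- better behaved
    than the naive +/- z*sqrt(p(1-p)/n) for the small counts typical of
    a single trainer's record."""
    p = winners / runners
    denom = 1 + z**2 / runners
    centre = (p + z**2 / (2 * runners)) / denom
    half = (z / denom) * (p * (1 - p) / runners + z**2 / (4 * runners**2)) ** 0.5
    return centre - half, centre + half

random.seed(0)
true_p = 0.15                      # assumed underlying win probability (made up)
for years, runners in [(3, 120), (10, 400)]:
    winners = sum(random.random() < true_p for _ in range(runners))
    lo, hi = wilson_interval(winners, runners)
    print(f"{years:2d} years: {winners}/{runners} "
          f"= {strike_rate(winners, runners):.3f}  (95% CI {lo:.3f}-{hi:.3f})")
```

If the intervals for the 3-year and 10-year windows overlap heavily, the extra decade mostly buys precision; if they don't, the trainer's form has probably drifted and the short window is the safer basis.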
Date Subject Author
8/26/12 Pitfalls with simple probability Tony Miller
8/26/12 Re: Pitfalls with simple probability Richard Ulrich
related rates problem
February 25th 2010, 01:29 PM #1
Junior Member
Jan 2010
A ladder 25ft long is leaning against the wall of a house. The base of the ladder is pulled away from the wall at a rate of 2 ft/sec.
(my picture isn't to scale for some reason)
r | \ 25ft
| \
a.) How fast is the top of the ladder moving down the wall when its base is 7ft from the wall?
I understand I can use the Pythagorean theorem to get side r, which is 24, but I'm unsure of how to find dr/dt. Do I still use the Pythagorean theorem to solve for dr/dt? If so, I'm unsure how to do it... any
explanation would be greatly appreciated!
Think of the side r as a function of x.
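Following that hint, here is a sketch of the computation (my own worked example, not part of the original thread): with x the distance of the base from the wall and r the height up the wall, x² + r² = 25² holds at all times, and differentiating with respect to t gives 2x·dx/dt + 2r·dr/dt = 0, so dr/dt = −(x/r)·dx/dt.

```python
# Related-rates sketch for the 25 ft ladder problem.
# x = distance of the base from the wall, r = height up the wall.
x = 7.0                            # ft, given
ladder = 25.0                      # ft
r = (ladder**2 - x**2) ** 0.5      # Pythagorean theorem -> 24 ft
dx_dt = 2.0                        # ft/s, base pulled away from the wall

# From 2x dx/dt + 2r dr/dt = 0:
dr_dt = -(x / r) * dx_dt
print(dr_dt)                       # negative: the top slides DOWN the wall
```

Numerically this gives dr/dt = −(7/24)·2 = −7/12 ft/s, i.e. the top is moving down the wall at 7/12 of a foot per second at that instant.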
February 25th 2010, 01:34 PM #2
Super Member
Jun 2009
United States
Bernoulli Identities and Combinatoric Convolution Sums with Odd Divisor Functions
Abstract and Applied Analysis
Volume 2014 (2014), Article ID 890973, 8 pages
Research Article
^1National Institute for Mathematical Sciences, Yuseong-Daero 1689-Gil, Yuseong-Gu, Daejeon 305-811, Republic of Korea
^2School of Mathematics, Korea Institute for Advanced Study, 85 Hoegiro, Dongdaemungu, Seoul 130-722, Republic of Korea
Received 18 October 2013; Accepted 2 December 2013; Published 12 January 2014
Academic Editor: Junesang Choi
Copyright © 2014 Daeyeoul Kim and Yoon Kyung Park. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
We study the combinatoric convolution sums involving odd divisor functions, their relations to Bernoulli numbers, and some interesting applications.
1. Introduction
The Bernoulli polynomials $B_n(x)$, which are usually defined by the exponential generating function $te^{xt}/(e^t-1)=\sum_{n=0}^{\infty}B_n(x)t^n/n!$, play an important role in different areas of mathematics, including number theory and the theory of finite
differences. It is well known that the $B_n = B_n(0)$ are rational numbers. It can be shown that $B_n = 0$ for odd $n \ge 3$ and that $B_n$ alternates in sign for even $n$. The $B_n$ are called Bernoulli numbers. Let $\mathbb{N}$ denote the set of
positive integers. Further, let , where . Throughout this paper, we define divisor functions as follows: We also make use of the following convention:
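Since Bernoulli numbers drive the coefficients in the results below, here is a minimal self-contained sketch of computing them exactly via the standard recurrence (the code and names are my own illustration, not taken from the paper):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return [B_0, ..., B_n] using the standard recurrence
    sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1, with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(Fraction(-s, m + 1))
    return B

B = bernoulli(8)
# B_1 = -1/2, B_2 = 1/6, and the odd-index numbers B_3, B_5, B_7 vanish,
# matching the facts recalled in the introduction.
print(B)
```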
Ramanujan [1] proved that using elementary arguments.
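The displayed equation here was lost in extraction. As a hedged illustration of the same family of results, the following sketch verifies numerically the classical convolution identity often attributed to Besgue and proved by Ramanujan-style elementary arguments, $\sum_{m=1}^{n-1}\sigma(m)\sigma(n-m)=\frac{5}{12}\sigma_3(n)+\left(\frac{1}{12}-\frac{n}{2}\right)\sigma(n)$; this is an assumed stand-in, not necessarily the exact formula meant at this point in the paper.

```python
from fractions import Fraction

def sigma(k, n):
    """Divisor power sum: sigma_k(n) = sum of d^k over the divisors d of n."""
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

# Check the classical identity for a range of n (exact rational arithmetic).
for n in range(2, 40):
    lhs = sum(sigma(1, m) * sigma(1, n - m) for m in range(1, n))
    rhs = Fraction(5, 12) * sigma(3, n) + (Fraction(1, 12) - Fraction(n, 2)) * sigma(1, n)
    assert lhs == rhs, n
print("identity verified for n = 2..39")
```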
Let be the complex upper half plane and let be for . Denote by the Dedekind function and by the th coefficient of . Alaca and Williams [2] proved that It turns out that we need not only divisor
functions but also the coefficients of certain modular functions. For other divisor functions, Hahn [3] showed that and Glaisher [4–6] extended Besgue’s formula by replacing in the convolution sum in
(4) by other sums ; for example, Recently, the combinatorial convolution sum is studied [7–10]. In [10] Williams proved the following.
Proposition 1. Let and . Then
Cho et al. found out the linear sum for combinatorial convolution sum of in [7].
Proposition 2. For and , one has where
Denote by . The generating function of is an even function and is zero for all odd positive integer . The aim of this paper is to study two combinatorial convolution sums of the analogous type of
Proposition 2. When we write the convolution sums as linear sum of divisor function, in the result by Williams the coefficients are and ours are . More precisely, we prove the following theorems.
Theorem 3. For and ,
Equation (7) is a special case when for the following theorem because and .
Theorem 4. For and ,
Remark 5. The product of two modular forms is another modular form of bigger weight. The dimension of space of modular forms on is approximately linear for and the space generated by generating
functions of divisor functions is clearly 2 as grows. More precisely speaking, for the Eisenstein series and which will be defined in Section 2 where is the space of cusp form of weight on and it is
orthogonal complement of in. Since = and = , for suitable constants. On the other hand, Theorems 3 and 4 show that the combinatorial convolution sums are written as only divisor functions;
that is, The disappearance of is observed in Examples 17 and 18.
All calculations in Lemmas 6 and 7 and Theorems 9 and 10 were obtained with the aid of a computer algebra system.
2. Modular Forms
In this section, we observe the convolution sums as a view point of generating functions of divisor functions.
The normalized Eisenstein series is defined by
For the generating function of we denote
Let be a finite index subgroup of. The modular form of weight on is a holomorphic function on such that for a positive integer . The vector space over of holomorphic modular forms of weight on is
finite dimensional and is denoted by .
Note that (if ) and (if ) for . Moreover, the product of two modular forms of weights is also modular form of weight .
The is the discriminant function with Ramanujan -function as its coefficient. It is modular of weight on .
Define the following two weight modular forms and by using the Dedekind -function defined in (5):
We get the lemma.
Lemma 6. Consider the following:(1)(2)(3)
Proof. The functions , , and are modular functions of weight on . Note that is a -dimensional vector space over generated by , and because . Their -expansions are as follows: By comparing the above
expansions with ones of for , we get our result.
Lemma 7. Consider the following:(1)(2)(3)
Proof. One can prove these by using a similar way to Lemma 6 and -dimensional vector space generated by , and over because their -expansions are as follows:
Remark 8. (1) is the normalized Hecke eigenform on the full modular group of weight :
(2) and are normalized newforms on of weight 14. The coefficients () satisfy that Moreover, for the Atkin-Lehner involution , if we define the action on the complex valued function as
By the help of Lemmas 6 and 7 we get the formulae for each convolution sum.
Theorem 9. Consider the following:(1)(2)(3)
Theorem 10. Consider the following:(1)(2)(3)
3. Proof of Theorems
In his series of eighteen papers published between 1858 and 1865, Joseph Liouville (1809–1882) stated without proof several elementary arithmetic formulae. One of these is the following formula.
Proposition 11 (see [10, page 112]). Let be an even function. Let . Then one obtains
Now we are ready to prove our theorems in Section 1.
Proof of Theorem 3. We apply () in Proposition 11. Then the left-hand side is On the other hand by using Bernoulli’s identity [10, page 42], the right-hand side is After dividing both sides by we get
Remark 12. When is a constant function in Proposition 11, it is a trivial formula such that
Corollary 13. For,
Proof. Applying in Theorem 3, our result is proved.
Corollary 14. Let. Then one has
Proof. Let in Theorem 3. Then the left-hand side is The right-hand side of Theorem 3 for is
Since , we are done.
Remark 15. The above result is also in [8, Theorem 3.4], but we do not use the combinatoric convolution sums of -functions but odd divisor function.
Corollary 16. For the odd prime case, one has
Proof. In Theorem 3, by Corollary 13.
Proof of Theorem 4. Note that We reconsider Proposition 1 as the last one in the previous line:
Thus, by Proposition 1, Theorem 3, and (53).
The following examples show us that the coefficients of cusp forms disappear in the combinatorial convolution sum for the weights and , explicitly.
Example 17. Consider the case of Theorem 3 for . The left-hand side is
When we put the formula in Theorem 9 in the above, the coefficients of and are zero.
Example 18. The left-hand side of Theorem 3 when is By applying Theorem 10 to the above, one can check that our theorem is true and the coefficients and of cusp forms disappear.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
1. S. Ramanujan, “On certain arithmetical functions,” Transactions of the Cambridge Philosophical Society, vol. 22, pp. 159–184, 1916.
2. S. Alaca and K. S. Williams, “Evaluation of the convolution sums $\sum_{l+6m=n}\sigma(l)\sigma(m)$ and $\sum_{2l+3m=n}\sigma(l)\sigma(m)$,” Journal of Number Theory, vol. 124, no. 2, pp. 491–510, 2007.
3. H. Hahn, “Convolution sums of some functions on divisors,” The Rocky Mountain Journal of Mathematics, vol. 37, no. 5, pp. 1593–1622, 2007.
4. J. W. L. Glaisher, “Expressions for the five powers of the series in which the coefficients are the sums of the divisors of the exponents,” Messenger of Mathematics, vol. 15, pp. 33–36, 1885.
5. J. W. L. Glaisher, “On certain sums of products of quantities depending upon the divisors of a number,” Messenger of Mathematics, vol. 15, pp. 1–20, 1885.
6. J. W. L. Glaisher, “On the square of the series in which the coefficients are the sums of the divisors of the exponents,” Messenger of Mathematics, vol. 14, pp. 156–163, 1884.
7. B. Cho, D. Kim, and H. Park, “Evaluation of a certain combinatorial convolution sum in higher level cases,” Journal of Mathematical Analysis and Applications, vol. 406, no. 1, pp. 203–210, 2013.
8. D. Kim and A. Bayad, “Convolution identities for twisted Eisenstein series and twisted divisor functions,” Fixed Point Theory and Applications, vol. 2013, article 81, 2013.
9. D. Kim, A. Kim, and A. Sankaranarayanan, “Bernoulli numbers, convolution sums and congruences of coefficients for certain generating functions,” Journal of Inequalities and Applications, vol. 2013, article 225, 2013.
10. K. S. Williams, Number Theory in the Spirit of Liouville, vol. 76 of London Mathematical Society Student Texts, Cambridge University Press, Cambridge, UK, 2011.
Maximum distance from a polygonal chain to another
"Rikard " <rikard_dot_laxhammar@his.se> wrote in message <ie7pu9$pi1$1@fred.mathworks.com>...
> "Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <ie7la5$7t9$1@fred.mathworks.com>...
> > "Rikard " <rikard_dot_laxhammar@his.se> wrote in message <ie7gi9$jqr$1@fred.mathworks.com>...
> > > "Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <ie5qbs$cbj$1@fred.mathworks.com>...
> > > > "Rikard " <rikard_dot_laxhammar@his.se> wrote in message <ie5lqf$gd5$1@fred.mathworks.com>...
> > > > > Hi,
> > > > > I'm looking for fast yet accurate Matlab code that calculates the Hausdorff distance from a polygonal chain A (aka polyline, piecewise linear curve etc) to another polygonal chain B, i.e.
the maximum distance from any point along A to the corresponding closest point along B (note that these points are not necessarily vertices of the chains).
> > > > > Any ideas how to implement such as distance measure?
> > > > > There is a function for calculating Hausdorff distance at the FEX but it operates on sets of points - it requires all points along each straight line of the chain as input which makes it
rather inaccurate/inefficient? Moreover, there is a function "Distance from a point to polygon", but obviously it takes a point instead of a polygon chain as input.
> > > >
> > > > Unless if I'm mistaken but the Hausdorff distance is always reached by a vertexes on either set (you can show this by contradiction trick: it cannot be reach by the middle of two segments,
using the line derivative).
> > > >
> > > > So all we need is a procedure to compute the distance of a vertex VA of either polygonal A to the other polygonal B. Then take the max on all vertexes VA. Do the opposite and take the max of
both results.
> > > >
> > > > Such function, e.g., this one on FEX seems to be the right building block:
> > > > http://www.mathworks.com/matlabcentral/fileexchange/28268
> > > >
> > > > Bruno
> > >
> > > Thank you for your response Bruno.
> > >
> > > Yes, I believe you are right that the Hausdorff distance between two polygonal chains A and B corresponds to the shortest distance from a vertex of A or B to a line segment of B resp. A.
> > > However, I want to calculate the *directed* Hausdorff distance from A to B, which is less than or equal to the "general" Hausdorff distance between the chains. In this case I'm not sure that
the Hausdorff distance must correspond to a distance between a vertex and a line segment, and I'm not sure how the function you proposed could be used for calculating the directed Hausdorff distance from
chain A to chain B?
> > >
> >
> > Don't you want to compute:
> >
> > d(A,B) := max over VA { min over EB { D(VA,EB) }}
> >
> > Just use the above function (note that I haven't use it), put the distances in matrix D, for example first dimension corresponds to VA, second dimension corresponds to EB
> > then call max(min(D,[],2),[],1)
> >
> > Bruno
> Please correct me if I'm wrong, but I believe that d(A,B) as you defined it above does not calculate the directed Hausdorff distance from polygonal chain A to B. To give a counter example, assume A
is a single line segment: {(0,1),(1,1)} and B two line segments: {(0,0),(0.5,-1),(1,0)}. The point from A having the largest distance to any edge from B is *not* a vertex of A; it is in fact the
point (0.5,1) in between the vertices of A?
Sorry, but you introduced the notion of "directed" Hausdorff in post #3, after I proposed my solution; I was initially reasoning about the symmetric Hausdorff distance (the vertex property applies to
the symmetric one). It might still be valid for the directed Hausdorff distance, but I haven't thought about it much right now, and your example does not convince me either.
I'll leave the thread as it is and let you think about it.
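A quick numerical check of the counterexample in the thread supports Rikard's point (a hedged sketch in Python rather than MATLAB; all names are my own): for A the segment (0,1)–(1,1) and B the chain (0,0)–(0.5,−1)–(1,0), the vertex-based max-min gives 1, but densely sampling A shows the directed distance is about 1.118, attained mid-segment at (0.5, 1).

```python
def point_seg_dist(p, a, b):
    """Euclidean distance from point p to the segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))                  # clamp to the segment
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

A = [(0.0, 1.0), (1.0, 1.0)]
B = [(0.0, 0.0), (0.5, -1.0), (1.0, 0.0)]
edges_B = list(zip(B, B[1:]))

def min_dist_to_B(p):
    return min(point_seg_dist(p, a, b) for a, b in edges_B)

# Vertex-based "directed Hausdorff" (uses only A's vertices):
d_vertex = max(min_dist_to_B(v) for v in A)

# Dense sampling along A approximates the true directed distance:
n = 1000
d_sampled = max(min_dist_to_B((i / n, 1.0)) for i in range(n + 1))

print(d_vertex, d_sampled)   # vertex version underestimates the true value
```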
Can someone help out with this problem, please? x^2 - 3x = 0, with explanation.
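A brief check of the factoring approach (an illustrative sketch, not part of the original page): x² − 3x factors as x(x − 3), and a product is zero exactly when one of its factors is, so x = 0 or x = 3.

```python
# x^2 - 3x = x * (x - 3) = 0  =>  x = 0 or x = 3
roots = [0, 3]
for x in roots:
    assert x**2 - 3 * x == 0   # both values satisfy the original equation
print(roots)   # [0, 3]
```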
Through the looking-glass
Issue 24
March 2003
When Alice stepped through the looking glass in Lewis Carroll's Through the Looking-Glass and What Alice Found There, she would have found that more than just the writing was back to front. The very
molecules that made up her body would have been the wrong way around in the looking-glass world, and their interaction with the looking-glass molecules would have led to a very confusing - and
possibly dangerous - adventure!
Symmetry has always played an important role in our world. We have an innate aesthetic appreciation of symmetry - we prefer symmetrical faces, and find naturally occurring symmetry in flowers and
reflections beautiful. And when we choose asymmetry it is usually because we recognise the effect that the absence of symmetry produces.
Our appreciation of symmetry in ourselves is not surprising, considering that it may be an indication of health and survival prospects (see Reflecting on extinction in Issue 20). In fact,
considerations of symmetry are important at the very smallest scale - symmetry or the lack of it in the molecules of which we and everything around us consist has a fundamental effect on their
behaviour and our existence. Mathematics has given chemists the tools they need to quantify symmetry, allowing them to classify molecules and understand their behaviour better.
Mirror, mirror...
To see an obvious example of asymmetry, just look at your hands. Your left and right hands are mirror images of each other, and the only way you can make your left hand look like your right is to
look at it in a mirror - moving or turning it just won't work. A molecule that, like your hands, cannot be made to look like its mirror-image by rotating it or displacing it, is called chiral.
As with hands, chiral molecules can occur in two different forms, called enantiomers, which are mirror-images of one another. One will be thought of as the left-handed one, and the other the right-handed one, though it is not always clear which should be which.
DNA, the blue-print for our whole body, is chiral.
Chiral molecules play an important role in chemistry in general, and in particular in the chemistry of life, as the majority of molecules involved in living organisms are chiral. The most famous is
DNA - the information carrier inside our cells that contains the blue-prints for the structure of our whole body. The DNA structure is a double helix looking something like a twisted ladder. The two
cross-linked helices are always right-handed, twisting in the same direction as a normal screw.
Interestingly, chiral molecules that occur naturally almost always occur only in the left-handed or the right-handed enantiomer, rather than in both forms. The reason for this may be to preserve
energy. Molecules are transformed in living organisms by enzymes - huge organic molecules with cavities that enclose other molecules like a glove, allowing a chemical reaction to take place. Chiral
molecules, just like our hands, need the right sort of glove. So they need to be transformed by chiral enzymes with the same handedness, just as you need a right-handed glove for your right hand. If
both enantiomers of the chiral molecule occurred naturally, organisms would have to produce both versions of the enzyme, which would require a great deal more metabolic energy than if only one sense
of chirality was necessary.
However, when chiral molecules are manufactured synthetically in a lab using ingredients that are not chiral, the resulting enantiomers will be present in equal numbers. For many applications,
including pharmacology, the two enantiomers need to be separated, which is not a simple process. As they have different physical properties and may react differently with the chiral molecules in the
human body, the presence of both in pharmaceuticals could have disastrous consequences. It is thought that Thalidomide - used in the 1950s to control morning sickness during pregnancy - may be an
example. The drug contained both enantiomers, and although one did indeed suppress morning sickness, its mirror image may have caused the many birth defects that resulted from its use.
Not just smoke and mirrors
With chiral molecules we have been thinking of symmetry in terms of whether an object is identical to its mirror image. However, we also have an intuitive feel for other kinds of symmetry. For
example, we know that a circle is somehow more symmetrical than an equilateral triangle. If you rotate both through 120° clockwise around their respective centres, they will appear unchanged, but if
you rotate them through 90°, the triangle will now look different. In fact, you can rotate a circle around its centre by any amount and it will still look the same - it possesses complete rotational
symmetry. We intuitively understand that the more things you can do to an object - the more planes in which you can reflect it, the more ways in you can rotate it - without changing its appearance,
the more symmetrical it must be.
Mathematically, we break down the symmetry of an object into those types of operations we can perform on the object without changing its appearance - called the symmetry operations. For example, for
a smiley-face a reflection through a vertical line down its centre is a symmetry operation. And each operation has a corresponding symmetry element, which is the imagined line, plane or point around
which a symmetry operation occurs - such as the vertical line down the middle of the smiley-face that the reflection occurs with respect to.
When examining symmetry in the three-dimensional world, there are four types of symmetry operations to consider: rotation, reflection, roto-reflection and inversion.
The rotation symmetry operation rotates an object about an axis. Some objects are symmetric under any rotation - for example, a circle can be rotated by any amount around its centre and still look
the same - whereas others are only symmetric under repeated rotations around some axis through (360/n)° for some number n. For example, rotating the cross-shaped handle of a tap by 90° about an axis
through its centre - as you would to turn the tap on - is a symmetry operation. Repeating the rotation gives another symmetry operation - if the original rotation was by (360/n)°, then there will be
n rotations before the object is returned to its original position.
Turning the tap through 90° is a symmetry operation
There can be more than one rotation axis for an object, and if so the one with the most rotations before it returns the object to its original position will be called the main axis of rotation. A
dumb-bell, for example, has infinitely many axes of rotation. The main one passes through the weights at each side, along the bar which an athlete grips - and every rotation around this axis is a
symmetry. Any line through the dumb-bell's centre of gravity, perpendicular to the main axis, is also an axis of rotation. There are only two symmetric rotations around each of these axes - through
180° and 360°.
The reflection symmetry operation, as you might expect, reflects an object with respect to a plane. If the plane contains the main rotation axis then it is usually called a vertical reflection plane; and if it is perpendicular to the main rotation axis it is known as a horizontal reflection plane.
For the dumb-bell, any plane containing the main rotation axis is a vertical reflection plane, and there is a horizontal reflection plane cutting the dumb-bell in half through its centre of gravity,
perpendicular to the main rotation axis.
This symmetry operation is not as immediately obvious as the more familiar rotation and reflection. The roto-reflection operation involves two steps - rotating an object by (360/n)° about an axis and then reflecting it with respect to a plane perpendicular to the axis (the axis is the symmetry element for this operation).
The rotoreflection symmetry operation combines rotation and reflection
Like rotations, roto-reflections can generate a number of symmetry operations. However, unlike normal rotation, the number of operations generated depends on whether n is even or odd. Also, as for
rotation, there can be more than one roto-reflection axis.
The inversion operation, i, takes every point in an object to an equidistant point on the other side of the centre of inversion (the symmetry element). It can be easier to understand if you think of the centre of inversion as the point (0,0,0) - then the inversion operation takes every point (x,y,z) to (-x,-y,-z). For example, a dumb-bell is symmetric under inversion in its centre of gravity.
Any object, indeed any molecule, will contain at least one of these symmetry elements - the operation C[1], known as the identity operation - a rotation of 360°, the equivalent of doing nothing.
Things start to get interesting when a molecule has more than C[1].
The mathematics of symmetry
The symmetry operations of PF[5]
The molecule PF[5] is an interesting example as it has many different symmetry elements. The main axis of rotation is a three-fold rotation axis (labelled C[3]) which contains the fourth and fifth F
atoms and the P atom. The molecule also has three two-fold rotation axes (labelled C[2](1) to C[2](3)) each containing the P atom and one of the other F atoms. PF[5] also has three vertical planes of
symmetry (labelled σ[v](1) to σ[v](3)) and a horizontal symmetry plane (σ[h]). Finally, there is also a three-fold rotoreflection axis coincident with the main axis of rotation.
We will write C[3]^1 and C[3]^2 for rotation through 120^o and 240^o around C[3] respectively, and use the notations C[2](1) to C[2](3), σ[v](1) to σ[v](3) and σ[h] for both the symmetry elements and
the associated symmetry operations.
Multiplication and identity
If we rotate the labelled PF[5] molecule about the C[3] rotation axis, and then reflect it in one of the symmetry planes, say σ[v](1), the molecule will end up in exactly the same position as if we had only reflected it in the σ[v](3) symmetry plane. So we can write that
σ[v](1) o C[3]^1 = σ[v](3),
where o is the symbol for composition of operations, and the operation applied first goes on the right hand side.
In any molecule, successive application of the symmetry operations it admits will always result in another symmetry operation on the molecule, allowing us to think of combining the operations as a
kind of multiplication. Since C[1] has no effect on the molecule, multiplying any symmetry operation by C[1] results in the original symmetry operation. So for multiplication of symmetry operations,
the identity operation C[1] acts just like the number 1 in normal multiplication.
Associativity and inverses
And just as for normal multiplication, where (a × b) × c = a × (b × c), it does not matter how successive symmetry operations are grouped. This property is called associativity, and it holds true for the symmetry operations of every molecule.

Sometimes, multiplying two symmetry operations gets you back to the identity operation C[1]; such operations are said to be inverse to one another. And again just like normal multiplication of non-zero numbers, where for every number x there is another, 1/x, that multiplies with it to equal 1, every symmetry operation of a molecule has an inverse among that molecule's symmetry operations.
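The group axioms just described - closure, identity, associativity and inverses - can be checked concretely on a small example. Here is a sketch (my own illustration, not from the article) representing the six symmetry operations of an equilateral triangle as permutations of its vertices 0, 1, 2:

```python
from itertools import product

# The six symmetry operations of an equilateral triangle as vertex
# permutations: the identity, two rotations, and three reflections.
e  = (0, 1, 2)
r1 = (1, 2, 0)          # rotation through 120 degrees
r2 = (2, 0, 1)          # rotation through 240 degrees
s0 = (0, 2, 1)          # reflection fixing vertex 0
s1 = (2, 1, 0)          # reflection fixing vertex 1
s2 = (1, 0, 2)          # reflection fixing vertex 2
G = [e, r1, r2, s0, s1, s2]

def compose(a, b):
    """Apply b first, then a (matching the sigma o C convention above)."""
    return tuple(a[b[i]] for i in range(3))

# Closure: composing any two operations gives another operation in G.
assert all(compose(a, b) in G for a, b in product(G, G))
# Inverses: every operation composes with some operation to the identity.
assert all(any(compose(a, b) == e for b in G) for a in G)
# A reflection composed with a rotation is another reflection, just as
# a sigma_v plane composed with C_3 gives a different sigma_v plane.
print(compose(s0, r1))   # one of the other two reflections
```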
It's a group thing
So there is an analogy between the collection of symmetry operations on an object, and normal multiplication on non-zero real numbers. And this is because both are examples of an important
mathematical concept - groups. A group is just a set of things - for example, the symmetry operations on a molecule, or the collection of integers - on which is defined some associative operation,
such that there is an identity element and every element has an inverse in the group. For example, the collection of integers under addition is a group (the identity element is 0), and groups occur
throughout mathematics from geometry to combinatorics to cryptography. The application of group theory to chemistry is just one of its uses outside of mathematics - another is in the error-correcting
codes used to create the no-scratch feature of CD players (see Coding theory: the first 50 years in Issue 3 for more information).
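These axioms can be checked concretely for a small point group. The sketch below (my own illustration, not from the article) represents the six operations of the point group C3v (the symmetries of an equilateral triangle, as in an ammonia-like molecule) as permutations of three vertex labels, and verifies closure, the identity, and inverses:

```python
# Sketch: the point group C3v as permutations of three vertex labels.
# Each operation maps vertex i to p[i]; composing a after b is a[b[i]].
from itertools import product

E    = (0, 1, 2)   # identity, C1
C3   = (1, 2, 0)   # rotation by 120 degrees
C3_2 = (2, 0, 1)   # rotation by 240 degrees
s1   = (0, 2, 1)   # reflection fixing vertex 0
s2   = (2, 1, 0)   # reflection fixing vertex 1
s3   = (1, 0, 2)   # reflection fixing vertex 2
G = [E, C3, C3_2, s1, s2, s3]

def compose(a, b):
    """Apply b first, then a (the operation applied first goes on the right)."""
    return tuple(a[b[i]] for i in range(3))

# Closure: the product of any two operations is again in G.
assert all(compose(a, b) in G for a, b in product(G, G))
# Identity: composing with E changes nothing.
assert all(compose(E, a) == a == compose(a, E) for a in G)
# Inverses: every operation has a partner that composes to E.
assert all(any(compose(a, b) == E for b in G) for a in G)
print("C3v satisfies the group axioms")
```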
The symmetry operations on a molecule constitute a mathematical group, called a point group by chemists because there is always one point in space - not necessarily on one of the atoms - left
unchanged by every symmetry operation in the point group. By determining the point group for a molecule, chemists are able to classify it and deduce important information about its structure and how
it will behave.
The three vibration modes of water under infra-red radiation
One application is in molecular spectroscopy, where compounds are bombarded with radiation such as infrared light, causing the molecules to vibrate as they absorb the energy. The point group of the
molecule being analysed helps chemists to interpret the spectrum they see as a result of these vibrations. For instance, when a water molecule is exposed to infra-red radiation the resulting spectrum
will show peaks at certain energies which correspond to particular motions of stretching and bending of the O-H bond (called vibration modes). Another molecule with the same point group will
experience the same number of vibration modes, though not at the same frequency due to a number of factors including the different weights of its constituent atoms.
How would you like to live in Looking-glass House, Kitty? I wonder if they'd give you milk, there? Perhaps Looking-glass milk isn't good to drink...
Whether we are talking about highly symmetrical inorganic molecules or the chiral molecules of biology, the mathematics of symmetry has given chemists a tool to help understand their structure and
behaviour. In fact chirality and symmetry are a driving force in all chemistry, and in defining our very nature and the world we live in. The looking-glass world Alice entered would indeed be a very
different place, with mirror-image DNA and enzymes and amino acids, meaning that - as Alice suggested to her kitten - looking-glass milk would almost certainly not be good to drink. Its mirror-image
molecules would all be the wrong way around for her enzymes to digest, as would the molecules of all the food in the looking-glass world.
So, were Alice to have stayed too long on the wrong side of the mirror, she would most likely have paid the ultimate price for her adventure, and starved to death!
Further reading
Reflections on Symmetry: in Chemistry and Elsewhere, by E. Heilbronner and J. Dunitz, is a beautifully illustrated book explaining the role symmetry plays in our lives, including a discussion of the impact of symmetry on Alice's adventures through the looking-glass.
The 2001 Nobel Prize in Chemistry was won for work with chiral molecules.
The Point Group Tutorial, from Emory University, has 3-D models of molecules and their symmetries.
Read more about the development of group theory from the University of St Andrews.
About the author
Rachel Thomas is news editor on Plus. Rachel would like to thank Dr Paul Donnelly and Dr Charlotte Williams from the University of Oxford
for their help with this article. | {"url":"http://plus.maths.org/content/through-looking-glass","timestamp":"2014-04-16T13:17:29Z","content_type":null,"content_length":"46788","record_id":"<urn:uuid:91e25394-68d2-48b7-8cfa-01a2b0051796>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00315-ip-10-147-4-33.ec2.internal.warc.gz"} |
Double log ratio of number of conjugacy classes to order of group in infinite simple group families (or their corresponding Schur covers, automorphism groups)
For a group G, denote by c(G) the number of conjugacy classes in G.
If $S_n$ denotes the symmetric group on n letters, then:
$$\lim_{n \to \infty} \frac{\log \log c(S_n)}{\log \log |S_n|} = \frac{1}{2}$$
[NOTE: Assume all logs are to the same base. It doesn't matter what the base is, as long as we use the same one in the numerator and denominator.]
[Proof that I know of: $c(S_n)$ equals the number of unordered integer partitions of n, and there exist asymptotic formulas for that which show it to be exponential in $\sqrt{n}$. $|S_n|$ is
exponential in $n \log n$. Taking double logs, we get roughly $(1/2)\log n$ and $\log n + \log(\log n)$ respectively, and the quotient goes to $1/2$.]
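For intuition, the symmetric-group ratio can be computed directly for small n (my own sketch, using an exact partition-count recursion, not part of the question):

```python
# Sketch: the ratio log log c(S_n) / log log |S_n|, with c(S_n) = p(n)
# (the partition function) and |S_n| = n!.
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, largest):
    """Number of partitions of n into parts of size at most `largest`."""
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, largest) + 1))

for n in (10, 50, 200):
    c = partitions(n, n)                     # c(S_n) = p(n)
    ratio = math.log(math.log(c)) / math.log(math.log(math.factorial(n)))
    print(n, round(ratio, 3))                # close to 1/2, as the limit predicts
```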
Obviously, the result also holds for alternating groups instead of symmetric groups.
Similarly, if $GL(n,q)$ denotes the general linear group of degree n over a field of q elements, then for fixed q:
$$\lim_{n \to \infty} \frac{\log \log c(GL(n,q))}{\log \log |GL(n,q)|} = \frac{1}{2}$$
[Proof that I know of: the number of conjugacy classes can be expressed as a polynomial in q of degree n, and the group order is a polynomial in $q$ of degree $n^2$, so we get the result.]
The result holds if we replace the general linear group by the special linear group, projective general linear group, or projective special linear group (which is the Chevalley group of type A).
My question:
1. Does the result also hold for the groups in the other three infinite Chevalley families B, C, D, and in the other infinite families of simple groups?
2. If the answer to 1 is yes, is there some deeper reason why the result holds both in the symmetric/alternating group case and in the linear groups case? I imagine that the linear groups cases can
be tied together using some general Lie group or algebraic group principles. If so, what are they? How do the alternating groups fit into the picture? May be something to do with Iwahori-Hecke
algebras or the field of one element? All explanations are welcome.
Certainly in terms of the number of elements, $|S_n| = |GL(n,1)|$, and this can be traced to the Bruhat decomposition. I don't know why the same should hold for the number of conjugacy classes, in
particular why it is (as you assert) a polynomial, but I presume it does. – Allen Knutson Mar 25 '12 at 2:49
Outside the classical series, the only infinite families of simple groups that have a rank parameter are Steinberg's unitary and orthogonal series. – S. Carnahan♦ Mar 25 '12 at 5:08
Allen Knutson, the GL(n,q) number of conjugacy classes is a polynomial in q which you can work out by explicitly computing the number of conjugacy classes of each type; here's the GL(3,q) case:
groupprops.subwiki.org/wiki/… For SL, PGL, PSL, we don't get polynomials but PORC functions; the statement for SL, for instance, is here: groupprops.subwiki.org/wiki/… – Vipul Naik Mar 25 '12 at
Allen: Regarding your original assertion that |S_n| = |GL(n,1)|, the naive interpretation of the polynomial formula for the order of GL(n,1) does not give the order of S_n (that naive formula would give an order of 0, because q - 1 divides the polynomial). So could you elaborate what you mean? Also, the term quasipolynomial is over-used; I think some people use it for linear combinations of positive power functions, others use it for PORC functions, others use it for functions that have polynomial-like growth. – Vipul Naik Mar 25 '12 at 20:09
@Vipul: the correct version replaces the cardinality of $\text{GL}_n$ with the cardinality of $\text{GL}_n/B$ for $B$ a Borel. I guess it should really be interpreted as a statement about flag varieties: over $\mathbb{F}_q$ one considers maximal chains in the poset of subspaces of $\mathbb{F}_q^n$ and over $\mathbb{F}_1$ one considers maximal chains in the poset of subsets of $[n]$. – Qiaochu Yuan Mar 27 '12 at 17:37
1 Answer
(1) is true, and follows from a paper of Liebeck and Shalev ("Character degrees and random walks in finite groups of Lie type", on Liebeck's webpage). They count representations according to their degrees rather than conjugacy classes, but it follows from their results that if $\Phi$ is a (reduced, irreducible) root system and G is a corresponding algebraic group (possibly twisted), then the number of conjugacy classes of $G(F_q)$ is roughly $q^{rk \Phi}$ (the lower bound is easy, just take conjugacy classes of elements in the torus). The limit of $$\frac{\log \log q^{rk \Phi}}{\log \log |G(\mathbb{F}_q)|}$$ as the rank of $\Phi$ tends to infinity is 1/2 also for types B, C, and D.
"The torus" usually means a split torus, but I don't think that has enough classes if the rank is large compared to $q$. But a maximal anisotropic torus seems to work. – Ben Wieland Apr 4
'12 at 2:32
Not the answer you're looking for? Browse other questions tagged gr.group-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/92120/double-log-ratio-of-number-of-conjugacy-classes-to-order-of-group-in-infinite-si","timestamp":"2014-04-18T08:18:14Z","content_type":null,"content_length":"61532","record_id":"<urn:uuid:ecfcef10-a478-42e2-9e06-95aa9dfd6e5e>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
Introduction to the PERT Formula Series
• For the math-phobic among us, any project management tool containing the word "Formula" in it can strike fear in even the most competent project manager. This series of articles aims to take
this fear away and demystify the PERT Formula. Hopefully, it will even convince you of some of the benefits of becoming comfortable using it.
Articles in this series include discussions of:
□ When to use the PERT Formula
□ The benefits and disadvantages of the PERT Formula
□ How the PERT Formula functions with Project Management Software including Microsoft Project 2007
• What is the PERT Formula?
The PERT Formula is most commonly used in project management for estimating project durations.
One common instantiation of the formula combines three numbers: an optimistic estimate (O), the most likely estimate (M), and a pessimistic estimate (P).
The most likely estimate is weighted by a factor of four, and the weighted sum is divided by six, giving an expected duration of (O + 4M + P) / 6. This calculation is performed for each action item that requires a duration estimate. Project Managers can also calculate the standard
deviation - the difference between the pessimistic estimate and the optimistic estimate divided by six, (P - O) / 6.
Once these two numbers have been determined for each task in the project (if using bottom-up estimation, which is recommended), the expected durations can be added up to find the estimated duration for the
entire project. (Standard deviations do not simply add; for independent tasks, the variances are summed and the square root taken.)
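As a sketch of the standard weighted calculation ((O + 4M + P) / 6 for the expected duration, (P - O) / 6 for the standard deviation), with task numbers invented for illustration:

```python
# Sketch of PERT three-point estimates for a small set of tasks.
# Each tuple is (optimistic, most likely, pessimistic), in days; the
# numbers are made up for illustration.
tasks = [(2, 4, 8), (1, 2, 3), (5, 7, 12)]

def pert(o, m, p):
    """Expected duration (O + 4M + P) / 6 and standard deviation (P - O) / 6."""
    return (o + 4 * m + p) / 6.0, (p - o) / 6.0

estimates = [pert(o, m, p) for o, m, p in tasks]
project_duration = sum(mean for mean, _ in estimates)
project_std_dev = sum(sd ** 2 for _, sd in estimates) ** 0.5  # variances add

print(round(project_duration, 2), round(project_std_dev, 2))
```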
• What does the PERT Formula do?
The PERT Formula gives the project manager a duration you can start with when planning the project. Experienced project managers will have a more accurate idea of how long particular tasks take
within a project. New project managers, however, may have a wider standard deviation when it comes to the estimated project duration due to inexperience.
The goal is not to pull numbers out of thin air, but rather, to come as close to reality as possible when determining the estimated duration for a project and for the various project phases.
PERT Formula
This series of articles introduces readers to the PERT Formula and discusses applications of PERT in project management. | {"url":"http://www.brighthubpm.com/methods-strategies/15682-introduction-to-the-pert-formula-series/","timestamp":"2014-04-20T13:18:55Z","content_type":null,"content_length":"40950","record_id":"<urn:uuid:4a64f2b3-29d0-430d-8e45-b698991d4b23>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00662-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/arcturas/answered","timestamp":"2014-04-20T21:16:18Z","content_type":null,"content_length":"64298","record_id":"<urn:uuid:7634179b-4553-43b1-8178-5612d260e742>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00518-ip-10-147-4-33.ec2.internal.warc.gz"} |
Instead Of Actually Computing The Number Of... | Chegg.com
Instead of actually computing the number of primes between 2 and N, we can get an estimate by using the Prime Number Theorem, which states that

prime(N) ≈ N / ln(N),

where prime(N) is the number of primes between 2 and N (inclusive). The function ln is the natural logarithm. Extend the program for Exercise 13 by printing the estimate along with the actual number.
You should notice the pattern that the estimate approaches the actual number as the value of N gets larger. | {"url":"http://www.chegg.com/homework-help/instead-actually-computing-number-primes-2-n-get-estimate-us-chapter-6-problem-14re-solution-9780073523309-exc","timestamp":"2014-04-17T15:55:15Z","content_type":null,"content_length":"28373","record_id":"<urn:uuid:e834db5c-ac11-44c2-bba3-a3618194d40c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
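A sketch of the comparison the exercise asks for (in Python here, though the original exercise may target another language):

```python
# Sketch: compare the actual prime count between 2 and N with the
# Prime Number Theorem estimate N / ln(N).
import math

def prime_count(n):
    """Count primes p with 2 <= p <= n by trial division (fine for small n)."""
    count = 0
    for candidate in range(2, n + 1):
        if all(candidate % d != 0 for d in range(2, math.isqrt(candidate) + 1)):
            count += 1
    return count

for n in (100, 1000, 10000):
    actual = prime_count(n)
    estimate = n / math.log(n)
    print(n, actual, round(estimate, 1))   # estimate approaches the count
```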
Comments on The Geomblog: Trolled from the arXiv... volume estimation (comment feed, Suresh Venkatasubramanian's blog)

John Moeller (2009-03-31): Benoit: I think that one of the keys is that you can't distinguish between a uniform distribution over a convex shape and a spherical distribution using only polynomially many points. What you mention about concentration of mass may have something to do with that.

Benoit (2009-03-30): I'm having trouble comparing this to the usual concentration results, which show that all the mass is near the surface. Of course, this is all in asymptotic dimensions. In d=2 or d=3, this is easy, right?

Anonymous (2009-03-23, 15:06): @Suresh: My confusion was whether or not you meant that the vertices have real or rational coordinates. For rational coordinates, checking whether $x \in conv(V)$ or not just amounts to checking that the LP $x = V^T \Lambda$, $\Lambda \geq 0$, $\sum_i \lambda_i = 1$ is feasible or not. If the entries of V are real then we don't know how to check the feasibility efficiently, but for rational V it does not matter that you don't know the halfspaces (and you don't need to compute them). It is just a matter of checking feasibility of a rational LP.

Suresh (2009-03-23, 10:17): That's what I did mean: you're given the vertices, rather than the inequalities defining the halfspaces.

Anonymous (2009-03-23, 07:11): "On the other hand, I can imagine a membership oracle being hard if the object is described by its vertices only. Computing the convex hull is expensive." Complexity of computing the convex hull is not really relevant here, since you can check membership in this case by solving an LP. Of course, you could say that the vertex coordinates are given by real numbers and so solving an LP is not an option. I don't know if you meant it that way. In this case one could still "almost" sample points from this polytope by using a slight perturbation (the new polytope has almost the same volume and almost the same interior available for sampling, but the membership answers for some points differ for the two polytopes).

Suresh (2009-03-21, 22:24): I'm not sure I can answer that. I think the point of the result was that you can't just sample from the convex body if you want an efficient estimator: it's not that it's cheaper but limited. On the other hand, I can imagine a membership oracle being hard if the object is described by its vertices only. Computing the convex hull is expensive.

Arvind Narayanan (2009-03-21, 22:19): Can you give an example of a convex body for
which an efficient random point oracle exists but not efficient membership oracle? Do such objects ever come up naturally?Arvind Narayananhttp://www.blogger.com/profile/ | {"url":"http://geomblog.blogspot.com/feeds/4059733631539307562/comments/default","timestamp":"2014-04-18T18:11:54Z","content_type":null,"content_length":"14917","record_id":"<urn:uuid:56482b54-6537-4bd1-8488-8097196b789d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00379-ip-10-147-4-33.ec2.internal.warc.gz"} |
Next: Structure and Functionality Up: Essentials Previous: Essentials
ScaLAPACK is a library of high-performance linear algebra routines for distributed-memory message-passing MIMD computers and networks of workstations supporting PVM [68] and/or MPI [64, 110]. It is a
continuation of the LAPACK [3] project, which designed and produced analogous software for workstations, vector supercomputers, and shared-memory parallel computers. Both libraries contain routines
for solving systems of linear equations, least squares problems, and eigenvalue problems. The goals of both projects are efficiency (to run as fast as possible), scalability (as the problem size and
number of processors grow), reliability (including error bounds), portability (across all important parallel machines), flexibility (so users can construct new routines from well-designed parts), and
ease of use (by making the interface to LAPACK and ScaLAPACK look as similar as possible). Many of these goals, particularly portability, are aided by developing and promoting standards , especially
for low-level communication and computation routines. We have been successful in attaining these goals, limiting most machine dependencies to two standard libraries called the BLAS, or Basic Linear
Algebra Subprograms [57, 59, 74, 93], and BLACS, or Basic Linear Algebra Communication Subprograms [50, 54]. LAPACK will run on any machine where the BLAS are available, and ScaLAPACK will run on any
machine where both the BLAS and the BLACS are available.
The library is currently written in Fortran 77 (with the exception of a few symmetric eigenproblem auxiliary routines written in C to exploit IEEE arithmetic) in a Single Program Multiple Data (SPMD)
style using explicit message passing for interprocessor communication. The name ScaLAPACK is an acronym for Scalable Linear Algebra PACKage, or Scalable LAPACK.
Susan Blackford
Tue May 13 09:21:01 EDT 1997 | {"url":"http://www.netlib.org/scalapack/slug/node9.html","timestamp":"2014-04-17T00:50:54Z","content_type":null,"content_length":"4335","record_id":"<urn:uuid:08628eba-854b-4a32-af9f-e559b106c969>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00597-ip-10-147-4-33.ec2.internal.warc.gz"} |
Adaptive Methods for the Poisson Equation in Complex Geometry
We are working on fast, adaptive methods for the solution of the Poisson equation
Δ U = f
in complex geometry, subject to linear boundary conditions in interior or exterior domains. In simple geometries (circular or rectangular domains) with regular grids, there are well-known fast direct
solvers based on the fast Fourier transform (FFT) that are well-suited to the task. In practical problems, however, involving complex geometries, highly inhomogeneous source distributions, or both, there has been a lot of effort directed at developing alternative approaches. Most currently available solvers rely on iterative techniques using multigrid, domain decomposition, or some
other preconditioning strategy. Although there has been significant progress in this direction, the available solvers compare unfavorably with fast direct solvers in terms of work per gridpoint. We
are developing methods which are direct, high-order accurate, insensitive to the degree of adaptive mesh refinement, and accelerated by the FMM. Our goal is to produce solvers that are competitive (or nearly competitive) with standard fast solvers in terms of work per gridpoint.

Technical references:
• A. McKenney, L. Greengard and A. Mayo, A Fast Poisson Solver for Complex Geometries, J. Comput. Phys. 118, 348 (1995).
• L. Greengard and J.-Y. Lee, A Direct Adaptive Poisson Solver of Arbitrary Order Accuracy, J. Comput. Phys. 125, 415 (1996).
• F. Ethridge and L. Greengard, A New Fast-Multipole Accelerated Poisson Solver in Two Dimensions, SIAM J. Sci. Comput. 23, 741 (2001).
• H. Langston, L. Greengard, and D. Zorin A Free-Space Adaptive FMM-Based PDE Solver in Three Dimensions , Comm. Appl. Math. and Comp. Sci. 6, 79 (2011) | {"url":"http://www.cims.nyu.edu/cmcl/pdelib/poisson.html","timestamp":"2014-04-21T14:47:28Z","content_type":null,"content_length":"3067","record_id":"<urn:uuid:cd1a905b-63f8-4812-88ea-51e3b7fc3ce6>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
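To make the contrast with iterative methods concrete, here is a minimal sketch (my own illustration, not code from the papers above) of a direct solver for the simplest one-dimensional model problem, u'' = f with homogeneous Dirichlet boundary conditions, using second-order finite differences and the Thomas tridiagonal algorithm:

```python
# Sketch (illustration only): direct solver for u'' = f on (0, 1) with
# u(0) = u(1) = 0, via second-order finite differences + Thomas algorithm.
import math

n = 200                          # interior grid points
h = 1.0 / (n + 1)
x = [(i + 1) * h for i in range(n)]

# Manufactured solution u = sin(pi x), so f = u'' = -pi^2 sin(pi x).
f = [-math.pi ** 2 * math.sin(math.pi * xi) for xi in x]

# Tridiagonal system u[i-1] - 2 u[i] + u[i+1] = h^2 f[i].
a, b, c = 1.0, -2.0, 1.0         # constant sub-, main-, super-diagonal
rhs = [h * h * fi for fi in f]

# Forward elimination.
cp, dp = [0.0] * n, [0.0] * n
cp[0], dp[0] = c / b, rhs[0] / b
for i in range(1, n):
    denom = b - a * cp[i - 1]
    cp[i] = c / denom
    dp[i] = (rhs[i] - a * dp[i - 1]) / denom

# Back substitution.
u = [0.0] * n
u[-1] = dp[-1]
for i in range(n - 2, -1, -1):
    u[i] = dp[i] - cp[i] * u[i + 1]

err = max(abs(ui - math.sin(math.pi * xi)) for ui, xi in zip(u, x))
print(err)                       # small O(h^2) discretization error
```

This is the direct, O(n)-work analogue of what FFT-based solvers achieve in regular multi-dimensional geometries.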
Binary Numbers | BInary digiT | Conversion of Binary Numbers to Decimal Numbers
Binary Numbers
In the binary number system topic, we will first discuss data and information.
Data are a collection of facts and figures which act as the raw material for information. The processed data which help to make decisions or further manipulations are called information.
The total number of symbols used in a particular number system is called the base or radix, and each symbol is called a digit. The decimal number system has base 10 and the digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
The binary number system has base 2 and the binary digits 0 and 1. BIT is a contraction of the words BInary digiT.
Electronic and electrical components of a computer are bistable in nature. The binary digit 0 and 1 are most suitable and are conveniently used to express the two possible status. Internal circuit
designs of a computer become simplified as the circuits have to handle only two bits instead of ten digits of the decimal system. Also all the operations that can be done in decimal system can also
is done in binary system.
Conversion of number from one system to another is necessary to understand the logic and process of operations. Binary numbers can be easily converted to decimal numbers and vice versa. Conversion of
binary numbers to decimal numbers can be conveniently accomplished by either actual expansion method or by value box method. Similarly decimal numbers may be converted to binary numbers either by
value box method or by multiplication and division method.
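The two conversion directions described above can be sketched in code (Python chosen for illustration):

```python
# Sketch of the two conversions described above.
def binary_to_decimal(bits):
    """Actual-expansion method: sum of bit * 2**position."""
    return sum(int(b) << i for i, b in enumerate(reversed(bits)))

def decimal_to_binary(n):
    """Repeated-division-by-2 method; remainders read in reverse order."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, 2)
        digits.append(str(r))
    return "".join(reversed(digits))

print(binary_to_decimal("1011"))   # 11
print(decimal_to_binary(11))       # "1011"
```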
Octal and hexadecimal number systems are also used in digital computers. Octal number system has a base 8 and the symbols used are 0, 1, 2, 3, 4, 5, 6, 7. Hexadecimal number system has a base 16 and
the symbols used are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F. Conversion of binary numbers to octal or hexadecimal numbers and vice versa can also be easily accomplished.
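Because 8 = 2^3 and 16 = 2^4, conversion from binary to octal or hexadecimal amounts to grouping the bits in threes or fours; a sketch:

```python
# Sketch: binary -> octal and hexadecimal by grouping bits in threes / fours.
def binary_to_base(bits, group):
    """Pad to a multiple of `group` bits, then convert each bit group."""
    bits = bits.zfill(-(-len(bits) // group) * group)
    digits = "0123456789ABCDEF"
    return "".join(digits[int(bits[i:i + group], 2)]
                   for i in range(0, len(bits), group))

print(binary_to_base("11010", 3))  # octal "32"
print(binary_to_base("11010", 4))  # hexadecimal "1A"
```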
Addition, subtraction, multiplication and division of binary number system can be made by following the usual rules of arithmetic.
• Why Binary Numbers are Used
• Binary to Decimal Conversion
• Hexa-decimal Number System
• Conversion of Binary Numbers to Octal or Hexa-decimal Numbers
• Octal and Hexa-Decimal Numbers
• Signed-magnitude Representation
• Diminished Radix Complement
• Arithmetic Operations of Binary Numbers
From Binary Numbers to HOME PAGE
Have your say about what you just read! Leave me a comment in the box below. Ask a Question or Answer a Question. | {"url":"http://www.math-only-math.com/binary-numbers.html","timestamp":"2014-04-16T10:32:40Z","content_type":null,"content_length":"29193","record_id":"<urn:uuid:623dbd90-b482-4620-a71f-ec83a8d26128>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quadrants and Angles (page 1 of 3)
Sections: Introduction, Worked Examples (and Sign Chart), More Examples
You've worked with trig ratios in a geometrical context: the context of right triangles. Now we'll move those ratios into an algebraic context, and then we'll dispense with the triangles.
Let's start with an equilateral triangle with sides of length 2:
Each angle of this triangle measures 60°.
Then let's drop the perpendicular, splitting the triangle into two 30-60-90 triangles:
We'll work with that 60° angle, as labelled.
Now for the algebra: We'll place the triangle on the x,y-plane, with the "adjacent" side on the x-axis, the "opposite" side parallel to the y-axis, and the angle we're working with, that 60° angle,
at the origin:
Now let's throw out the triangle, keeping just the base, the height, and the angle itself, along with the x- and y-values from the point on the terminal side of the angle which marks where the
hypotenuse of the original 30-60-90 triangle had ended.
Since the length of the base is the same as the value of "adjacent", we can call this just "x". Similarly, the "opposite" is now just "y".
The Pythagorean Theorem gives us the value of the hypotenuse, which we will now call "r". Why? Because we can also look at this distance along the terminal side of the angle as being a radius line
of a circle.
Any right triangle with the same-length hypotenuse will have its far end lying on this circle.
We already know that cos(60°) = 1/2. Now you can read this value from the drawing in the quadrants. The cosine ratio had been "adjacent over hypotenuse"; now it is "x over r". But the value is
still 1/2.
So we've set up the angle in the first quadrant, thrown away the triangle, and still been able to find the trig ratio. Can we build on this? Yes!
Let's swing over to the second quadrant, drawing a similar line for 120°, being 60° above the negative x-axis. Since we're working in the same circle, we get the same value for r, but since we're in QII, we get a negative x-value: x = –1 instead of +1.
Now grab your calculator, make sure it's in "degree" mode, and plug in "cos(120°)". Did you get "–0.5"? That's the same as –1/2, right?
Now look again at the graph: x/r = (–1)/2 = –1/2
= –0.5, just like your calculator said!
Does this continue to work in the other quadrants? You bet!
Draw the corresponding line in QIII, marking 240°, being 60° below the negative x-axis. You can see from the picture that cos(240°) ought to be –1/2.
What does your calculator say?
Now move to QIV and draw the line for 300°, being 60° below the positive x-axis. In QIV, x is again positive, so cos(300°) = x/r = 1/2 = 0.5.
What does your calculator say?
Note that in each case, all that really mattered was the sixty-degree triangle in the quadrant where the angle terminated, and the x- and y-values in that quadrant. The fact that you can't draw a
right triangle with a base angle of 300° was irrelevant. To find the trig ratios, you needed only to draw the terminal side of the angle in whatever quadrant it happened to land, construct a right
triangle by "dropping a perpendicular" to the x-axis, and then work from there. This works for all angles....
Original URL: http://www.purplemath.com/modules/quadangs.htm
Copyright 2009 Elizabeth Stapel; All Rights Reserved.
Terms of Use: http://www.purplemath.com/terms.htm | {"url":"http://www.purplemath.com/modules/printtrig.cgi?path=/modules/quadangs.htm","timestamp":"2014-04-17T04:21:12Z","content_type":null,"content_length":"15329","record_id":"<urn:uuid:53697cc4-74cb-4ba8-bf8f-d65af152a7a7>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00025-ip-10-147-4-33.ec2.internal.warc.gz"} |
Poincare's theorem about subgroups of finite index
September 26th 2009, 03:23 AM
Poincare's theorem about subgroups of finite index
I'm given this problem: let G be a group and A, B subgroups of finite index in G. Let $D=A\cap B$. Prove that $u,v \in At\cap Bs \Rightarrow Du=Dv$. (1) Using this, prove that $[G:D]\leq [G:A][G:B]$ (Poincare's theorem).
I can prove the theorem by defining $f: G_D \rightarrow G_A \times G_B$, $f(Dd)=(Ad, Bd)$, where $G_D$ is the set of right cosets of D in G, and similarly for $G_A$ and $G_B$. f is one-to-one and therefore $G_D$ is a finite set. Furthermore, $|G_D|\leq |G_A||G_B|$.
My question is: can (1) make the proof simpler or more straightforward than that given by me? (assuming it is correct). Because I've tried to use (1) to prove the theorem but I've failed. Any
suggestion would be welcome. Thanks for reading.
September 26th 2009, 11:19 AM
Poincare's Theorem states that the intersection of finitely many subgroups of finite index is a subgroup of finite index. So it is sufficient to prove the result for two subgroups, which is what
we are doing here.
Let $D = A \cap B$ and $g \in Dh$, where $Dh$ is some right coset of $D$ and $g, h \in G$.
Then $gh^{-1} \in D$, so $gh^{-1} \in A$ and $gh^{-1} \in B$, giving $g \in Ah \cap Bh$; hence $Dh \subseteq Ah \cap Bh$.
Conversely, if $g \in Ah \cap Bh$ then $gh^{-1} \in A \cap B = D$, so $Ah \cap Bh \subseteq Dh$, and therefore $Ah \cap Bh = Dh$.
Thus every right coset of $D$ has the form $Ah \cap Bh$, and so is determined by a pair consisting of a coset of $A$ and a coset of $B$. The number of cosets of $D$ is therefore at most $[G:A]\cdot[G:B]$, and we are done.
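The inequality can be sanity-checked on a small concrete group (my own sketch, not part of the original thread):

```python
# Quick check of [G:D] <= [G:A][G:B] in a small concrete case: G = S3 as
# permutation tuples, A = <(0 1)>, B = <(1 2)>, D = A ∩ B = {identity}.
from itertools import permutations

G = list(permutations(range(3)))

def compose(p, q):
    """(p * q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

A = [(0, 1, 2), (1, 0, 2)]      # identity and the swap (0 1)
B = [(0, 1, 2), (0, 2, 1)]      # identity and the swap (1 2)
D = [g for g in A if g in B]    # the intersection: just the identity

def index(H):
    """Number of distinct right cosets Hg of H in G."""
    return len({frozenset(compose(h, g) for h in H) for g in G})

print(index(A), index(B), index(D))          # here: 3 3 6
assert index(D) <= index(A) * index(B)       # Poincare's inequality
```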
Also, you can put text into your LaTeX by writing "\text{put text here}". For example, $g \text{ is an element of } G$. | {"url":"http://mathhelpforum.com/advanced-algebra/104360-poincares-theorem-about-subgroups-finite-index-print.html","timestamp":"2014-04-18T17:15:04Z","content_type":null,"content_length":"10435","record_id":"<urn:uuid:56e1e7d9-ea8a-4e84-ad1a-bd63b794f5a4>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00524-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ridgewood, NY Statistics Tutor
Find a Ridgewood, NY Statistics Tutor
...However, all you need is a willingness to make sense of the problem using basic math ideas. My common-sense approach works, and teaches students how to be successful, not just in one class of
problems, but in dealing with new types of problems as well. I've been teaching in New York City public...
17 Subjects: including statistics, reading, writing, geometry
...I enjoy working with people and teaching them to do better professionally and financially. I am a native Chinese speaker. I write and speak Mandarin fluently.
9 Subjects: including statistics, geometry, Chinese, algebra 1
...I will come to your home or meet you at a mutually convenient location (such as the library). I am happy to work with individuals or groups. Group rates can be negotiated. I am available on
weekends and some evenings.
10 Subjects: including statistics, calculus, geometry, algebra 1
Hello,My goal in tutoring is to develop your skills and provide tools to achieve your goals. My teaching experience includes varied levels of students (high school, undergraduate and graduate
students).For students whose goal is to achieve high scores on standardized tests, I focus mostly on tips a...
15 Subjects: including statistics, chemistry, calculus, algebra 2
...I am currently enrolled in a master's program at Adelphi University studying special education. I have been tutoring students for about five years. I enjoy being able to work with students one
on one which really allows me to get to know the student and their learning style preferences.
18 Subjects: including statistics, reading, writing, algebra 1
Related Ridgewood, NY Tutors
Ridgewood, NY Accounting Tutors
Ridgewood, NY ACT Tutors
Ridgewood, NY Algebra Tutors
Ridgewood, NY Algebra 2 Tutors
Ridgewood, NY Calculus Tutors
Ridgewood, NY Geometry Tutors
Ridgewood, NY Math Tutors
Ridgewood, NY Prealgebra Tutors
Ridgewood, NY Precalculus Tutors
Ridgewood, NY SAT Tutors
Ridgewood, NY SAT Math Tutors
Ridgewood, NY Science Tutors
Ridgewood, NY Statistics Tutors
Ridgewood, NY Trigonometry Tutors | {"url":"http://www.purplemath.com/ridgewood_ny_statistics_tutors.php","timestamp":"2014-04-18T05:43:46Z","content_type":null,"content_length":"24041","record_id":"<urn:uuid:71b46d45-b463-4f0c-8a81-9d2f1049f21b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Volumes Using the Disc Method
The Riemann sums used for calculating the area under a curve use approximating rectangles. To calculate the volume of solids of revolution, cylinders are the approximating elements. If the area of a cross section near the point $x_i$ is $A(x_i)$ and the thickness of the cylinder is $\Delta x$, its volume is $A(x_i)\,\Delta x$. The radius of the solid of revolution of the function $f$ at $x_i$ is $f(x_i)$, so $A(x_i) = \pi f(x_i)^2$. In terms of Riemann sums and integrals the volume is
$V = \lim_{n \to \infty} \sum_{i=1}^{n} \pi f(x_i)^2\, \Delta x = \int_a^b \pi f(x)^2\, dx$, where $\Delta x = \frac{b-a}{n}$. | {"url":"http://demonstrations.wolfram.com/VolumesUsingTheDiscMethod/","timestamp":"2014-04-20T03:31:26Z","content_type":null,"content_length":"44356","record_id":"<urn:uuid:9aa291f1-59d3-4927-905f-6a41febcf15d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
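The disc method just described is easy to check numerically (my own sketch, not part of the Demonstration): summing disc volumes $\pi f(x)^2\,\Delta x$ recovers the integral. Revolving $f(x) = x$ on $[0, 1]$ gives a cone of volume $\pi/3$.

```python
import math

def disc_volume(f, a, b, n=100_000):
    """Approximate the volume of the solid of revolution of f about the
    x-axis by summing disc volumes pi*f(x)^2*dx (midpoint Riemann sum)."""
    dx = (b - a) / n
    return sum(math.pi * f(a + (i + 0.5) * dx) ** 2 * dx for i in range(n))

# Revolving f(x) = x on [0, 1] gives a cone: V = pi/3
print(disc_volume(lambda x: x, 0.0, 1.0))  # ~1.0471975
```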
Cube entropy
I have a general impression that a 25-30 move scramble (3x3x3) is not quite enough to separate all pieces from each other. When I scramble randomly myself with 40-50 moves, I find it more difficult
to find a good start.
As a result, I started thinking whether it is possible to find some kind of metric that calculates how well a cube is scrambled.
A possible metric would be to calculate the entropy of the cube. Entropy is a well-defined quantity within mathematics and chemistry and should be a suitable way to calculate how well a cube is
scrambled. A solved cube has entropy 0, and the entropy grows the more disordered the cube is.
Has anyone looked into this?
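For concreteness, here is one crude sketch of such a metric (my addition; Shannon entropy of the sticker-colour distribution is only a rough proxy for "scrambledness", and a proper measure would work with the cube group itself): score each face by the entropy of its colour distribution and sum over the six faces.

```python
import math
from collections import Counter

def face_entropy(stickers):
    """Shannon entropy (in bits) of the colour distribution of one face.
    A solid (solved) face scores 0; a well-mixed face scores higher."""
    counts = Counter(stickers)
    n = len(stickers)
    # the "+ 0.0" just normalises a possible -0.0 result to 0.0
    return -sum((c / n) * math.log2(c / n) for c in counts.values()) + 0.0

print(face_entropy(['W'] * 9))  # solved face: 0.0 bits
print(face_entropy(['W', 'R', 'G', 'B', 'O', 'Y', 'W', 'R', 'G']))  # mixed: ~2.5 bits
```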
Re: Cube entropy
I never bother with scrambles. I just mess it up without looking for about 20 seconds.
Please take off-topic discussion off this forum. Thanks.
Re: Cube entropy
right now it's just the computer showing 20 or 21 move scrambles (WCA regulations)
now of course hand scrambling would require more moves because it's not as random
you can display the distance that it would take to solve the scramble, but most scrambles should have distance 18
lol, here to help ^_~
Re: Cube entropy
I think how well a cube is scrambled depends on how it's being looked at. Someone may look at a certain scramble from a speedsolving perspective and begin to see one solution. However, someone else
may look at the same thing from a fewest moves perspective and begin to see a different solution.
It would probably be difficult to calculate a cube's entropy. If a computer program determined it, then it would have to consider lots of different methods. It would do this so that the scrambled
cube wouldn't be biased toward one approach.
Re: Cube entropy
Zeotor wrote:I think how well a cube is scrambled depends on how it's being looked at. Someone may look at a certain scramble from a speedsolving perspective and begin to see one solution.
However, someone else may look at the same thing from a fewest moves perspective and begin to see a different solution.
It would probably be difficult to calculate a cube's entropy. If a computer program determined it, then it would have to consider lots of different methods. It would do this so that the scrambled
cube wouldn't be biased toward one approach.
Good point. I've gotten scrambles that would be great for 3OP but average for speedsolving.
lol, here to help ^_~ | {"url":"http://us.rubiks.com/forum/viewtopic.php?f=14&t=5101&p=26393","timestamp":"2014-04-23T11:08:43Z","content_type":null,"content_length":"25864","record_id":"<urn:uuid:ed30bd43-7724-4e81-abea-7915eb9166d8>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00556-ip-10-147-4-33.ec2.internal.warc.gz"} |
Converging measurable sequence
March 28th 2008, 11:31 AM #1
Let (fn) be a sequence of measurable functions that converges pointwise to a function f. Show that f is measurable.
It seems to me that this should be really easy - but I can't find the proof anywhere...
This is best done in terms of limsup or liminf. For example, $f(x) = \sup_{n \geqslant 1}\Bigl(\,\mathop{\inf}_{k \geqslant n}f_k(x)\Bigr)$. If you know that measurability is preserved by
sup's and inf's of sequences then the result follows immediately.
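To spell out the measurability step being relied on (my addition; this is the standard argument from measure theory): for measurable functions $g_n$ and any real $a$,

```latex
\{x : \sup_{n} g_n(x) > a\} \;=\; \bigcup_{n \geqslant 1} \{x : g_n(x) > a\},
\qquad
\{x : \inf_{n} g_n(x) < a\} \;=\; \bigcup_{n \geqslant 1} \{x : g_n(x) < a\},
```

so both are countable unions of measurable sets. Applying this twice shows $f = \sup_{n}\inf_{k \geqslant n} f_k = \liminf_n f_n$ is measurable, and pointwise convergence gives $\liminf_n f_n = \lim_n f_n = f$.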
March 28th 2008, 12:56 PM #2 | {"url":"http://mathhelpforum.com/calculus/32350-converging-measurable-sequence.html","timestamp":"2014-04-19T05:00:59Z","content_type":null,"content_length":"33730","record_id":"<urn:uuid:b2ae226e-dc37-4fe2-8ef5-38abea440937>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00007-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Homoclinic Bifurcation
Figure 1: Ascending and Descending
I have been presenting
some results
from an analysis of formalizations of Kaldor's 1940 model of the business cycle. This post illustrates some possible behaviors qualitatively similar to those already reported in the literature.
Figures 2, 3, and 4 display some orbits in the (normalized) state space of the Kaldor model, with variations in one parameter determining variations in the topology of these particular phase
portraits. In each figure, a movement to the right along the x axis corresponds to an increase in the value of the economy's stock of capital. A movement upward along the y axis corresponds to an
increase in the national income. The propensity to save is higher for each figure in the series, but the propensity to save is always small enough that three fixed points exist for the model. In all
cases, the middle fixed point has the (in)stability of a saddle point.
Figure 2: Kaldor's Model without a Business Cycle
Figure 3: A Homoclinic Bifurcation in Kaldor's Model
Figure 4: A Business Cycle in Kaldor's Model
A saddle point is such that a ball starting in the direction of the horse's head or tail rolls downward to the center. The bright yellow orbit in each of the three figures represents such a
trajectory. The yellow line is known as the stable set of the corresponding fixed point. A ball would have to be balanced just so to achieve such a trajectory on an actual saddle. A ball perturbed
from the center of the saddle would tend to roll downward to either side of the horse. The light blue (cyan) orbit in Figures 2 and 4 represent such a trajectory, called the unstable set of the
corresponding fixed point.
A bifurcation analysis identifies qualitative changes in the phase portraits for a dynamical system with variations in the system parameters. Several bifurcations exist between Figures 2 and 3, and,
I think, two bifurcations arise between Figures 3 and 4. The stable and the unstable sets of the fixed point at the origin, in some sense, have switched roles in the illustrated bifurcations. In
Figure 2, the unstable set shown flows from the origin to the other two fixed points. In Figure 4, the stable set flows backwards in time from the origin to the other two fixed points. Of course,
some other global behavior is an important difference among these figures. For example, a business cycle does not exist in Figure 2, while Figures 3 and 4 both display a stable business cycle. In the
language of dynamical systems, this business cycle is known as a (stable) limit cycle.
The stable and the unstable sets of the origin correspond in Figure 3. Such correspondence of these sets for a given fixed point (or, say, limit cycle) is known as a homoclinic bifurcation.
Homoclinic bifurcations are global phenomena and cannot be identified by a merely local stability analysis of the given fixed point. Can you see why one might draw an analogy between a homoclinic
bifurcation and the M. C. Escher etching I choose to head this post with?
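To give a sense of how such trajectories are computed, here is a minimal sketch of integrating a Kaldor-type system, dY/dt = α(I(Y,K) − S(Y,K)), dK/dt = I(Y,K) − δK, with a hypothetical tanh investment function. The functional forms and parameter values are illustrative choices of mine, not those of the post or the cited papers, so the resulting phase portrait need not show the bifurcations discussed above.

```python
import math

def simulate(y0, k0, alpha=3.0, s=0.2, delta=0.2, dt=0.01, steps=20_000):
    """Forward-Euler integration of a Kaldor-type system with an S-shaped
    (tanh) investment function. Returns the (K, Y) trajectory."""
    y, k = y0, k0
    path = []
    for _ in range(steps):
        inv = math.tanh(y) - 0.25 * k   # investment: sigmoid in Y, decreasing in K
        sav = s * y                     # savings proportional to income
        y, k = y + dt * alpha * (inv - sav), k + dt * (inv - delta * k)
        path.append((k, y))
    return path

path = simulate(0.5, 0.5)
print(path[-1])  # trajectory stays bounded for these parameters
```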
• Agliari, A.; R. Dieci; and L. Gardini (2007). "Homoclinic Tangles in a Kaldor-Like Business Cycle Mode", Journal of Economic Behavior & Organization. V. 62: 324-347.
• Bischi, G. I.; R. Dieci; G. Rodano; and E. Saltari (2001). "Multiple Attractors and Global Bifurcations in a Kaldor-type Business Cycle Model", Journal of Evolutionary Economics. V. 11: 257-554.
No comments: | {"url":"http://robertvienneau.blogspot.com/2012/05/homoclinic-bifurcation.html","timestamp":"2014-04-18T21:03:26Z","content_type":null,"content_length":"117863","record_id":"<urn:uuid:52d08890-cff1-423f-b6d5-f7d625311c4a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Understanding Credit Card Interest Rates - DailyFinance
If you borrow money, you pay an interest rate on your loan. The interest rate is stated as a yearly percentage rate, which is the interest cost of borrowing for one year. For example, if you borrow $10,000 at 10%, your interest cost for one year is $1,000 ($10,000*.10). This interest rate is called the simple interest rate.

A periodic interest rate is calculated in a similar way. This is the interest rate that you pay for a period shorter than one year. For example, if you borrow the same $10,000 for one month, your interest cost is one-twelfth (1/12) of 10%.
To calculate your periodic interest rate, divide the annual interest rate by the number of periods. If the period is months, divide 10% by 12 to get 0.83%. The monthly interest cost of the $10,000
loan is $83.33 [($10,000)*(.10)*(1/12)]. If you are saving instead of borrowing, the same interest rate calculations apply. Instead of paying interest, however, you earn interest income.
A compounded interest rate assumes you "earn interest on interest." As a result of compounding, you pay (as a borrower) an interest rate that is higher than the simple interest rate. Alternatively, you earn a higher rate (as an investor) than the simple interest rate. The following table shows monthly compounded interest on a $10,000 loan at a 10% annual interest rate. (Compounded monthly, the periodic interest rate is 0.83%):
│ Periodic │ Number │ Compounded │ Effective │
│ interest │ of months │ interest │ interest rate │
│ rate │ │ │ │
│ 0.83% │ 1 │ $83.33 │ 10.47% │
│ 0.83% │ 6 │ $510.53 │ 10.47% │
│ 0.83% │ 12 │ $1,047.13 │ 10.47% │
│ 0.83% │ 24 │ $2,203.91 │ 10.47% │
│ 0.83% │ 60 │ $6,453.09 │ 10.47% │
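The table can be reproduced in a few lines (my own check, using the article's numbers):

```python
# $10,000 at a 10% annual rate, compounded monthly (periodic rate 10%/12).
principal = 10_000.0
monthly_rate = 0.10 / 12

def compounded_interest(months):
    return principal * ((1 + monthly_rate) ** months - 1)

for months in (1, 6, 12, 24, 60):
    print(months, round(compounded_interest(months), 2))
# 1 83.33 / 6 510.53 / 12 1047.13 / 24 2203.91 / 60 6453.09

# Effective annual rate (APY):
print(round(((1 + monthly_rate) ** 12 - 1) * 100, 2))  # 10.47
```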
As the table shows, compounded interest grows faster than the simple interest rate for all periods. However, the compounded interest rate remains at 10.47%. This is because you are compounding on a monthly basis. If you increase the compounding frequency to a daily basis (assuming a year is 365 days), your compounded interest rate would increase to 10.52%.

The compounded interest rate is the real rate of interest you earn (as an investor) or pay (as a borrower). It is also called the effective interest rate. If you are a borrower, the effective interest rate is also called the annual percentage rate (APR). If you are an investor, the effective interest rate is also called the annual percentage yield (APY).

The APR includes any closing costs you pay on a loan. For example, if you borrow $10,000 and pay $500 in closing costs, you effectively borrow $9,500. (This assumes you pay your closing costs at closing instead of financing them by adding them to the loan amount.) Closing costs increase your loan interest rate. The following table shows the APR for a range of closing costs. Loan terms are a one-year loan at 10% simple interest:
│ Simple │ Number │ Closing │ │
│ interest │ of months │ costs │ APR │
│ rate │ │ │ │
│ 10% │ 12 │ $100 │ 11.11% │
│ 10% │ 12 │ $250 │ 12.82% │
│ 10% │ 12 │ $500 │ 15.79% │
│ 10% │ 12 │ $1,000 │ 22.22% │
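The APR figures follow the same logic (again my own check; this treats the costs as paid up front on a one-year loan, per the article's assumption):

```python
def apr(principal, rate, closing_costs):
    """Effective one-year rate when closing costs are paid at closing:
    you repay principal*rate in interest plus the costs, but only
    received principal - closing_costs."""
    effective_loan = principal - closing_costs
    total_cost = principal * rate + closing_costs
    return total_cost / effective_loan

for costs in (100, 250, 500, 1000):
    print(costs, round(apr(10_000, 0.10, costs) * 100, 2))
# 100 11.11 / 250 12.82 / 500 15.79 / 1000 22.22
```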
As the table shows, APR rises sharply as closing costs increase. The rate more than doubles to 22.2% when closing costs are $1,000. This should make some sense: since $1,000 is 10% of $10,000, and you are paying an interest rate of 10%, the two rates added together equal about 20%.

You should always ask a lender to show you the APR. After all, it is your true cost of borrowing. In fact, consumer protection laws require the lender to show you the APR. If the lender is unwilling, you should apply elsewhere for a loan.

Deciding when to borrow, invest, or refinance a loan depends on the level of interest rates. Over the years, interest rates rise and fall in the pattern of a cycle that is consistent with the rise and fall of the overall economy. When the economy is expanding, demand for loans is high. High loan demand leads to higher interest rates. When the economy slows down or contracts, the demand for loans is low.

A slowdown in the economy usually leads to lower interest rates, encouraging borrowers to refinance existing debt. Presently, in late 2003, mortgage rates are at 40-year lows as a result of the substantial slowdown in the economy.

The Federal Reserve plays a central role in determining future interest rates. The Federal Reserve, or Fed, is the U.S. central bank. It is responsible for setting monetary policy in the U.S.

The Fed's main policy-making committee, the Federal Open Market Committee (FOMC), meets about every six weeks to evaluate key indicators of the economy's health. If it anticipates a slowdown, it may cut the fed funds rate, the discount rate, or both. If the Fed anticipates inflation or too much growth, it may raise rates. Through December 2003, the Fed has cut the fed funds rate 14 times since January 2001, to 1.00%, in order to stimulate the economy.
Keeping an eye on interest rates helps you to decide when to borrow and when to invest. When interest rates are heading lower, it may be a good time to borrow. (However, rates can continue to decline, making borrowing even cheaper.) On the other hand, if rates are rising, you can earn a higher rate of return by investing in money market accounts, money market mutual funds, and certificates of deposit (CDs). All of these savings instruments earn a higher rate of return when interest rates are rising.

Savvy investors and borrowers aim to keep the effective interest rate on their borrowings lower than the effective interest rate earned on their investments. Given a chance to repay debt, a savvy borrower chooses to pay down or pay off the high-interest debt first.

Finally, you can take advantage of fixed-rate loans and investments to hedge against adverse movement in interest rates. For example, if you think rates may rise, a fixed-rate loan locks in your interest rate. On the other hand, if you think rates will fall, a variable-rate loan such as an adjustable-rate mortgage (ARM) will lower your loan payments as interest rates fall. For all of these reasons, managing interest rates will help you to improve your personal cash flow.
The above information is educational and should not be interpreted as financial advice. For advice that is specific to your circumstances, you should consult a financial or tax adviser.
| {"url":"http://www.dailyfinance.com/2010/12/31/understanding-credit-card-interest-rates-92238/","timestamp":"2014-04-21T16:58:47Z","content_type":null,"content_length":"72214","record_id":"<urn:uuid:14d26c2c-1438-4e18-8e4d-b6a2394c2720>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00178-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post a reply
hi SlowlyFading,
9. Divide the following numbers using scientific notation: (4 x 10^2)/(1 x 10^3)
You can split this into two divisions:

$\frac{4 \times 10^2}{1 \times 10^3} = \frac{4}{1} \times \frac{10^2}{10^3} = 4 \times 10^{-1} = 0.4$

Click on the LaTeX so you can begin to learn how to display exponents properly.
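As a quick check (my addition, not part of the original reply), the same division in Python:

```python
# (4 x 10^2) / (1 x 10^3): divide the coefficients and the powers of ten separately
print((4 / 1) * (10**2 / 10**3))  # 0.4, i.e. 4 x 10^-1
```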
19. Estimate the amount of juice in a typical orange juice carton in liters.
(That's a strange question, huh?)
Yes, a bit strange. I think this is just to get you to have an idea about how big a liter and a gram are.
You're not expected to get an exact answer, just get a result that is about the right size.
So have a look in the cupboard or search on-line for fruit juice and see what sizes are sold.
My morning fruit juice comes in a one liter pack.
20. Estimate the weight of a small can of soup in grams.
Again you could look in your cupboard or on-line.
I've helped someone with this problem before. But they had a choice of answers, two of which were close and we picked the 'wrong' one. I felt this was a very unfair question because of the possible
answers on offer. Multi choice is OK for a question like this if five out of six answers are obviously wrong.
I wonder if you are doing the same course?
http://www.mathisfunforum.com/viewtopic … 02&p=1
The first question is on that sheet is | {"url":"http://www.mathisfunforum.com/post.php?tid=18046&qid=228676","timestamp":"2014-04-18T05:56:08Z","content_type":null,"content_length":"20746","record_id":"<urn:uuid:490e501c-2c61-498b-b73e-5d68e9a3e191>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00508-ip-10-147-4-33.ec2.internal.warc.gz"} |
Timing a Pythonista endeavor
November 24th, 2012 at 3:58 pm by Dr. Drang
I’ve known about John D. Cook’s blog, The Endeavor, for quite a while, but for some reason I didn’t subscribe to its RSS feed until recently. Now that I’ve corrected that omission, it takes every bit
of strength I have to keep from responding to each of his posts. They’re all interesting and well-written.
In this post yesterday morning, he asked this question,
Does the base 10 expansion of 2^n always contain the digit 7 if n is large enough?
and wrote a little Python program that suggested the answer may be “yes” for $n > 72$. The program isn’t a proof, of course, it just tests up to $n = 10,000$.
The program was, I thought, a little odd in that he created a set of all the digits in $2^n$ rather than just doing a string comparison. I thought it worth looking into the relative speeds of doing
it each way.^1 Because my MacBook Air was all the way on the other side of the room, I decided this was a good chance to give Pythonista a try.
Here’s my first crack at the comparison. I copied the code from John’s post, pasted it into a new Pythonista script, and added some new functionality.
1: #!/usr/bin/python
3: def digits(n):
4: s = set()
5: while n > 0:
6: s.add(n%10)
7: n /= 10
8: return s
10: def numerals(n):
11: return '%d' % n
13: for i in range(71, 2000):
14: p = 2**i
15: if 7 not in digits(p):
16: print i, p
18: for i in range(71, 2000):
19: p = 2**i
20: if '7' not in numerals(p):
21: print i, p
John’s original code had only the digits function and the loop in Lines 13-16. Also, his range in Line 13 had an upper limit of 10,000, not 2000; I lowered it to get reasonable run times on my iPhone
As I said, it seemed more natural to me to turn the number $2^n$ into a string and look for instances of the character 7. My function numerals and the loop in Lines 18-21 solved the problem that way.
I should mention at this point that although Pythonista can run code that uses either spaces or tabs for indentation, code that’s written in Pythonista uses tabs. There is, at present, no way to tell
the Pythonista editor to insert spaces when the Tab key is tapped. If you’re writing code from scratch, this is no problem, but because my script was a mix of pasted code (that used spaces) and
original code (that used tabs), it threw indentation errors until I went in and replaced all my tabs with spaces.
I had a brief Twitter discussion about this with Pythonista’s developer, Ole Zorn, and he acknowledged the problem. I hope he adds the ability to auto-expand tabs, not just because I prefer spaces,
but also because pasting and editing is probably a pretty common use case.
Once I got my script running, it was immediately clear that the string method was much faster than the set method. But how much faster? I decided to give the standard timeit library a whirl. After
fighting a bit with timeit’s so-called convenience functions (which I’ll discuss in a bit), I ended up with this script:
1: #!/usr/bin/python
3: from timeit import timeit
5: def digits(n):
6: s = set()
7: while n > 0:
8: s.add(n%10)
9: n /= 10
10: return s
12: def numerals(n):
13: return '%d' % n
15: def time_digits(n):
16: for i in range(72, n):
17: p = 2**i
18: if 7 not in digits(p):
19: print i, p
21: def time_numerals(n):
22: for i in range(72, n):
23: p = 2**i
24: if '7' not in numerals(p):
25: print i, p
27: for i in range(1, 5):
28: print i*1000
29: t = timeit('time_digits(i*1000)',
30: 'from __main__ import time_digits, i',
31: number=1)
32: print 'Digits: %f' % t
33: t = timeit('time_numerals(i*1000)',
34: 'from __main__ import time_numerals, i',
35: number=1)
36: print 'Numerals: %f' % t
37: print
The results were
Digits: 1.845313
Numerals: 0.054590
Digits: 8.365773
Numerals: 0.238395
Digits: 21.571197
Numerals: 0.653563
Digits: 43.310401
Numerals: 1.419421
which shows that the string method is more than an order of magnitude faster than the set method. This didn’t surprise me, there’s a lot of work being done in the digits function.
Just for fun, I later ran the same script on my 2010 MacBook Air. The results were
Digits: 0.233390
Numerals: 0.008770
Digits: 1.266341
Numerals: 0.031867
Digits: 3.823505
Numerals: 0.078629
Digits: 8.306719
Numerals: 0.160169
which is less than an order of magnitude faster than the iPhone 5. I choose to see this as evidence that I have a fast phone, not that I have a slow laptop.
What I disliked about the timeit function was that I needed to include the from __main__ import code chunks as the second argument. When I first tried this script, I didn’t include those arguments
and got errors like
NameError: global name 'time_digits' is not defined
I found the answer in this Stack Overflow discussion, but I wasn’t happy with it. The documentation refers to timeit as a “convenience function,” but there’s nothing convenient about having to import
the names of all your global variables and functions whenever you want to time them. It’s completely non-intuitive to have to import from __main__ when I’m already in __main__.
It’s not that I don’t understand the reasons behind it—I just think the library should take care of the importing for me when I’m using a convenience function in what must be the most common case:
timing a function at the top level. This, like the clumsy way the subprocess module works, is a case of Python being too Pythonic for its own good.
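For what it's worth, later versions of Python address exactly this complaint: since Python 3.5, timeit accepts a globals argument, so the import-from-__main__ setup string can be dropped entirely (not an option, of course, for Pythonista's Python 2 in 2012):

```python
import timeit

def time_numerals_demo(n):
    # count powers of two below 2**n whose decimal expansion contains a 7
    return sum('7' in str(2 ** i) for i in range(72, n))

# Python 3.5+: pass globals() instead of a 'from __main__ import ...' string
t = timeit.timeit('time_numerals_demo(500)', globals=globals(), number=1)
print(t >= 0.0)  # True
```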
In summary:
1. Looking at the post’s comments today, I see that I wasn’t the only one to consider this. ↩
One Response to “Timing a Pythonista endeavor”
1. Dan says:
Just out of curiosity I wrote this in C, it was about 6X faster than the python string implementation on my computer. I’m actually pretty impressed with python being that fast (not to mention
having builtin arbitrary length integers). | {"url":"http://www.leancrew.com/all-this/2012/11/timing-a-pythonista-endeavor/","timestamp":"2014-04-16T15:59:38Z","content_type":null,"content_length":"35810","record_id":"<urn:uuid:70f9e825-1a2f-4b5f-bd9c-79f51ea2110d>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
y Numbers
Quick and Easy Numerical Differentation
Numerical differentation has a cornucopia of uses in engineering. However, when you're first learning it may appear daunting - especially the form of the equations. For instance, these are the
forward difference equations:
f'(x_i) \approx \Delta^1 = \frac{f(x_{i+1})-f(x_i)}{h} + O(h)
f''(x_i) \approx \Delta^2 = \frac{f(x_{i+2}) - 2f(x_{i+1}) + f(x_i)}{h^2} + O(h)
f'''(x_i) \approx \Delta^3 = \frac{f(x_{i+3}) - 3f(x_{i+2}) + 3f(x_{i+1}) - f(x_i)}{h^3} + O(h)
f''''(x_i) \approx \Delta^4 = \frac{f(x_{i+4}) - 4f(x_{i+3}) + 6f(x_{i+2}) - 4f(x_{i+1}) + f(x_i)}{h^4} + O(h)
There are a few patterns here that make memorizing these formulas pretty easy. The first, and most obvious, pattern is that you always start with a positive term, and alternate the sign of the
remaining terms. Another pattern is that order of the h term in denominator increases by 1 every time.
The most important pattern is that the coefficients (if we disregard the sign) are the binomial coefficients. We can pull out the coefficients of the function evaluations and the denominator, D, and
put them in a table to make this a little more obvious.
|            | f(x_{i+4}) | f(x_{i+3}) | f(x_{i+2}) | f(x_{i+1}) | f(x_{i}) | D    |
|------------|------------|------------|------------|------------|----------|------|
| \Delta^1   |            |            |            | +1         | -1       | h    |
| \Delta^2   |            |            | +1         | -2         | +1       | h^2  |
| \Delta^3   |            | +1         | -3         | +3         | -1       | h^3  |
| \Delta^4   | +1         | -4         | +6         | -4         | +1       | h^4  |
This is great, but we have to remember that this pattern is only for the forward difference formulas. What about the backward difference formulas, denoted by \nabla^n? We're in luck! The coefficients
follow a very similar pattern, as might be expected. These methods are also all O(h).
|            | f(x_{i}) | f(x_{i-1}) | f(x_{i-2}) | f(x_{i-3}) | f(x_{i-4}) | D    |
|------------|----------|------------|------------|------------|------------|------|
| \nabla^1   | +1       | -1         |            |            |            | h    |
| \nabla^2   | +1       | -2         | +1         |            |            | h^2  |
| \nabla^3   | +1       | -3         | +3         | -1         |            | h^3  |
| \nabla^4   | +1       | -4         | +6         | -4         | +1         | h^4  |
Unfortunately, the centered difference formulas, denoted by \delta^n, don't follow exactly the same patterns. The terms still alternate in sign, but the coefficients aren't always binomial. Also, all of these
methods are O(h^2).
|            | f(x_{i+2}) | f(x_{i+1}) | f(x_{i}) | f(x_{i-1}) | f(x_{i-2}) | D     |
|------------|------------|------------|----------|------------|------------|-------|
| \delta^1   |            | +1         |          | -1         |            | 2h    |
| \delta^2   |            | +1         | -2       | +1         |            | h^2   |
| \delta^3   | +1         | -2         |          | +2         | -1         | 2h^3  |
| \delta^4   | +1         | -4         | +6       | -4         | +1         | h^4   |
Hopefully these patterns will help you as you begin to work with numerical differentiation!
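A quick numerical check of the orders quoted above (my own illustration): for f = sin at x = 1, the forward difference has error O(h) while the centered difference has error O(h^2).

```python
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h            # Delta^1: error O(h)

def centered_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # delta^1: error O(h^2)

h = 1e-4
exact = math.cos(1.0)
err_fwd = abs(forward_diff(math.sin, 1.0, h) - exact)
err_ctr = abs(centered_diff(math.sin, 1.0, h) - exact)
print(err_fwd, err_ctr)
print(err_ctr < err_fwd)  # True: one extra order of accuracy
```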
by Chris McComb | {"url":"http://www.cmccomb.com/blog/2013/09/07/","timestamp":"2014-04-16T13:02:17Z","content_type":null,"content_length":"18157","record_id":"<urn:uuid:5beeb722-fa32-4800-a581-d0b9f661972c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bowdon, GA Prealgebra Tutor
Find a Bowdon, GA Prealgebra Tutor
...I do play for the Universities Men's Basketball Team, so I do know what it takes to become a college athlete. Also I have been playing basketball for 7 years starting in 8th grade. Also I am
good at fitness techniques that will get anyone into shape.
2 Subjects: including prealgebra, algebra 1
...I have a masters degree in education and have taught students how to study and prepare for all types of exams, in all subjects including for state and national exams. In adddition I have
tutored hundreds of students preparing for the ACT and SAT entrance exams. I have taught these study skills for more than 20 years as an educator.
47 Subjects: including prealgebra, chemistry, English, physics
...Fraser, a professor of mine in my freshman year of college. I have since passed every writing assignment ever assigned to me. I have studied art history at The University of West Georgia.
40 Subjects: including prealgebra, reading, English, geometry
I love math and I love helping others learn math! I am a certified teacher with 8 years of experience teaching math in public high schools. I have taught Pre-Algebra, Algebra 1, Algebra 2,
Geometry, and Discrete Math and am willing to tutor students in other math classes as well.
10 Subjects: including prealgebra, geometry, ASVAB, algebra 1
...I am a former math teacher, and a former teacher educator. I have a bachelor's degree in Applied Math from Brown University, and a Masters and PhD in Cognitive Psychology, also from Brown
University. And now, in semi-retirement, I find immense joy tutoring students who need an extra boost.
8 Subjects: including prealgebra, statistics, algebra 1, algebra 2
Related Bowdon, GA Tutors
Bowdon, GA Accounting Tutors
Bowdon, GA ACT Tutors
Bowdon, GA Algebra Tutors
Bowdon, GA Algebra 2 Tutors
Bowdon, GA Calculus Tutors
Bowdon, GA Geometry Tutors
Bowdon, GA Math Tutors
Bowdon, GA Prealgebra Tutors
Bowdon, GA Precalculus Tutors
Bowdon, GA SAT Tutors
Bowdon, GA SAT Math Tutors
Bowdon, GA Science Tutors
Bowdon, GA Statistics Tutors
Bowdon, GA Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Bowdon Junction prealgebra Tutors
Bremen, GA prealgebra Tutors
Buchanan, GA prealgebra Tutors
Cragford prealgebra Tutors
Delta, AL prealgebra Tutors
Edwardsville, AL prealgebra Tutors
Ephesus, GA prealgebra Tutors
Franklin, GA prealgebra Tutors
Fruithurst prealgebra Tutors
Mount Zion, GA prealgebra Tutors
Muscadine prealgebra Tutors
Ranburne prealgebra Tutors
Sargent, GA prealgebra Tutors
Winston, GA prealgebra Tutors
Woodland, AL prealgebra Tutors | {"url":"http://www.purplemath.com/bowdon_ga_prealgebra_tutors.php","timestamp":"2014-04-19T14:52:45Z","content_type":null,"content_length":"23820","record_id":"<urn:uuid:fdc233e0-a258-4bae-b3ae-646d1b3f5192>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
Regarding Dice
June 13th 2012, 01:33 AM #1
This might be simple but for some reason I can not stop pondering this question....
Is the probability of rolling 6 dice and getting exactly one 1 equal to the probability of rolling 12 dice and getting exactly two 1s? Please explain thanks.
Re: Regarding Dice
P(one 1 out of 6 throws) = 6(1/6)(5/6)^5 (the 6 comes in because the single 1 could occur on the 1st, 2nd, ..., or 6th throw)
P(two 1s out of 12 throws) = 12C2 (1/6)^2 (5/6)^10 (12C2 is the number of different ways two 1s can occur in 12 throws; it equals 66)
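The two numbers can be checked directly (my addition; math.comb needs Python 3.8+):

```python
from math import comb

p_one_in_six = 6 * (1 / 6) * (5 / 6) ** 5                     # exactly one 1 in 6 throws
p_two_in_twelve = comb(12, 2) * (1 / 6) ** 2 * (5 / 6) ** 10  # exactly two 1s in 12 throws

print(round(p_one_in_six, 4))     # 0.4019
print(round(p_two_in_twelve, 4))  # 0.2961
```

So the two probabilities are not equal.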
Re: Regarding Dice
How would this change with, say, 6 six-sided dice vs 6 twelve-sided dice? In the 6 twelve-sided dice scenario I would have a 1/6 chance to roll a one or a two, right? Therefore the probability of rolling exactly one 1 or 2 would be the same?
Last edited by tobeelijah1; June 13th 2012 at 10:34 AM.
| {"url":"http://mathhelpforum.com/statistics/199966-regarding-dice.html","timestamp":"2014-04-16T11:06:15Z","content_type":null,"content_length":"35577","record_id":"<urn:uuid:9e043f0e-cd74-4b66-8d90-ebd00be4c106>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00018-ip-10-147-4-33.ec2.internal.warc.gz"} |
Grant Proposal
The Math Forum
15 August 1996
Grant Proposal
Back to Table of Contents || On to Traditional References
1. For more information on the accomplishments of the Geometry Forum see: Renninger, K. A., Weimar, S. A., & Klotz, E. A. (in press). Teachers and students investigating and communicating about
geometry: The Math Forum. In R. Lehrer and D. Chazen (Eds.) Designing Learning Environments for Developing Understanding of Geometry and Space. Mahwah, NJ: Lawrence Erlbaum Associates. Other
Publications: Klotz, Gene & Weimar, Steve, Math on the World Wide Web, MAA Horizons, April, 1995, pp. 6, 7; Klotz, Eugene, "Visualization in Geometry: A Case Study of a Multimedia Mathematics
Education Project," in Visualization in Teaching and Learning Mathematics, Zimmermann, Walter, & Cunningham, Steve, editors, Mathematical Association of America, 1991, pp. 95-104; Klotz, Gene,
The Geometry Forum, in Notices of the American Mathematical Society, October, 1993.
2. Forum Awards: /forum.awards.html
3. Forum Workshops: /workshops
4. These Web pages can all be found at /mathed/mathed.confs.html
5. National School Network (BBN): /workshops/nsn96/ and http://nsn.bbn.com/
6. Forum Teacher Associates: /workshops/sum96/
7. DIMACS: http://www.cs.princeton.edu/courses/drei/
8. Park City Mathematics Institute: http://www.admin.ias.edu/ma/proghst.htm and /pcmi/hstp96.html
9. Woodrow Wilson: http://www.woodrow.org/news/1996/web/
10. For some time the Math Forum has hosted the mailing list NCTM-L, whose avowed purpose is to discuss the Standards, and which sometimes remains sufficiently focused to have good discussions. One
element of improving public forums is to seed them with prepared materials as Ron Ward has done from time to time on NCTM-L. We plan to collect these discussions and archive them for easy access
on a Standards Discussions page along with other substantive articles and discussions, and to use them to encourage readers to return focused contributions to the mailing list. This will nicely
complement the pages we have already constructed on the NCTM Standards.
11. Suzanne Alejandre's Magic Squares: /alejandre/magic.square.html
12. Forum Web lessons and projects: /web.units.html; Forum Summer Institute activities and projects: /workshops/sum96/activities.projects.html
13. Web Profits Unlikely Till 2000. Most companies banking on selling their content over the Web won't see a profit until the year 2000, predicts Forrester Research, which says the typical site, such
as an electronic newsletter or magazine, will lose $3.9 million beyond the initial investment before they start making money. "Content providers who joined the Web gold rush find themselves
tumbling down a long, dark mine shaft. It will be at least four years before they see a return on their investments," says the report's author. (Investor's Business Daily 3 Jul 96 A5).
14. A link to the Forum's Suggestion Box is found on our home page and in the menubar at the bottom of all other pages: /web.comments/web.comments.form.html
15. In the early 1990s a series of papers by Margaret Riel and James Levin identified some conditions that appear to be necessary for the development of sustainable communities (see electronic
references, below), but this work pre-dated the World Wide Web and Internet access was a more difficult issue. Levin will be working with us on this project in another capacity, and has offered
to counsel us on community building.
See also Kollock, Peter and Marc Smith, Managing the Virtual Commons: Cooperation and Conflict in Computer Communities: http://www.sscnet.ucla.edu/soc/csoc/vcommons.htm
and Smith, M. A. , "Voices from the WELL: The logic of the virtual commons": http://www.sscnet.ucla.edu/soc/csoc/virtcomm.htm
A recent discussion of what is known about the building of electronic communities can be found at http://www-leland.stanford.edu/~keating/Keating.Qual.html
16. Learning & Mathematics Discussion Series: /learning.math.html and /~sarah/Discussion.Sessions/Contents.html
17. "Applications and Misapplications of Cognitive Psychology to Mathematics Education": http://act.psy.cmu.edu/personal/ja/misapplied.html
18. Internet math providers: /mathsites/
19. Introduction to Java Script: http://www.webconn.com/java/javascript/intro/javascr.htm
Introduction to Java programming (DIMACS): http://www.cs.princeton.edu/courses/drei/modules/programming/java/intro/
Traffic Jam Activity (Forum 1996 Summer Institute) (for Java applet see bottom of page; requires Netscape v.3): /workshops/sum96/traffic.jam.html
20. Projects from Forum Summer Institutes:
Sum95: /sum95/projects.html
Sum96: /workshops/sum96/activities.projects.html
21. Online participants, summer 1996: /workshops/sum96/online.participants.html
22. Record of online Institute activity--Agenda, summer 1996: /workshops/sum96/sum96agenda.html
23. Introduction to Vectors: /~klotz/Vectors/vectors.html
24. Several variable calculus:
Limits: /~ethan/klotz/Limits/Limits.html
Tangent Planes: /~ethan/klotz/TangentPl.html
25. Sammamish High School: http://oasis.bellevue.k12.wa.us/sammamish/
26. Eisenhower National Clearinghouse (ENC): http://enc.org/
27. ENC Online documents: http://enc.org/reform/
28. ENC Searchable catalog of curriculum resources: http://enc.org/rf.htm
29. Elementary Problem of the Week: /elempow/
© 1994-2014 Drexel University. All rights reserved.
The Math Forum is a research and educational enterprise of the Drexel University School of Education.
27 October 1998 | {"url":"http://mathforum.org/build/nsf.references1.html","timestamp":"2014-04-17T10:32:47Z","content_type":null,"content_length":"10212","record_id":"<urn:uuid:a12ac9de-2fc7-43e1-a4fc-b30c5bcb8b6e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homotopy Type Theory
Homotopy Type Theory and Univalent Foundations
This site serves to collect and disseminate research, resources, and tools for the investigation of homotopy type theory, and hosts a blog for those involved in its study.
Homotopy Type Theory refers to a new interpretation of Martin-Löf’s system of intensional, constructive type theory into abstract homotopy theory. Propositional equality is interpreted as homotopy
and type isomorphism as homotopy equivalence. Logical constructions in type theory then correspond to homotopy-invariant constructions on spaces, while theorems and even proofs in the logical system
inherit a homotopical meaning. As the natural logic of homotopy, constructive type theory is also related to higher category theory as it is used e.g. in the notion of a higher topos.
Univalent Foundations of Mathematics is Vladimir Voevodsky’s new program for a comprehensive, computational foundation for mathematics based on the homotopical interpretation of type theory. The type
theoretic univalence axiom relates propositional equality on the universe with homotopy equivalence of small types. The program is currently being implemented with the help of the automated proof
assistant Coq. The Univalent Foundations program is closely tied to homotopy type theory and is being pursued in parallel by many of the same researchers.
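As a compact summary of the axiom described above (a standard formulation from the HoTT literature, included here only as a gloss): for types $A, B$ in a universe $\mathcal{U}$ there is a canonical map sending identities to equivalences, and univalence asserts that this map is itself an equivalence.

```latex
\mathsf{idtoeqv} : (A =_{\mathcal{U}} B) \longrightarrow (A \simeq B),
\qquad
\textbf{Univalence:}\quad (A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B).
```

In particular, equivalent (isomorphic) types are identified in the universe, which is what allows homotopy-invariant constructions to transfer along equivalences.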
NEWS: The HoTT Book is now available! Get it here.
4 Responses to Homotopy Type Theory and Univalent Foundations
1. Here is a page that explains how to use LaTeX in a WordPress posting. Basically, instead of writing $formula$ you write “latex formula” enclosed in $-signs. It even works in comments: $\Sigma x:A
\, \Pi y:A.\, \mathsf{Id}_A(x,y)$.
2. Is there an official forum / mailing list / subreddit / irc channel where one can ask questions and discuss the book and especially the exercises?
□ You might want to try the recently-created “HOTT Amateurs” Google Group (https://groups.google.com/forum/#!forum/hott-amateurs). There may be other options as well.
□ Hi,
There is in fact a recently created Homotopy Type Theory channel on the freenode IRC network. The channel is ##hott (note the double hash!).
You can connect to it through the chat.freenode.net server. More information, including a web-based chat client, at https://freenode.net | {"url":"http://homotopytypetheory.org/","timestamp":"2014-04-16T19:03:22Z","content_type":null,"content_length":"49965","record_id":"<urn:uuid:a6d10a7c-0c5c-4e60-8f68-b100ab10c603>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00340-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Interpolation FP1 Formula
Re: Linear Interpolation FP1 Formula
It is tough to tell with her. I don't think she is that shy.
Re: Linear Interpolation FP1 Formula
Sorry for the delays. My phone is very busy today. If she is not shy then you have a good chance with her.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
You have the opposite problem to me. My phone is rarely ever called or texted, so whenever I get an important call/text I tend to miss it.
She hangs around with girls, but not a lot of boys. But on the other hand, I hang around with boys, but not a lot of girls.
Re: Linear Interpolation FP1 Formula
That does not mean much. People tend to hang with their friends which despite what hollywood says are usually the same gender.
Re: Linear Interpolation FP1 Formula
So it is not a great indicator of whether or not someone is shy?
Re: Linear Interpolation FP1 Formula
You have spoken to her in person. Was she shy? That is the best indicator.
Re: Linear Interpolation FP1 Formula
She didn't appear to be.
Re: Linear Interpolation FP1 Formula
Now if you could only get her to know that mathematicians are superior to zoologists then you are in heaven.
Re: Linear Interpolation FP1 Formula
Why would I be in heaven?
Re: Linear Interpolation FP1 Formula
Just a joke. Everytime I have met up with one of those bio people they always hated mathematicians.
Re: Linear Interpolation FP1 Formula
I don't understand why they hate us. We don't hate them, do we?
Re: Linear Interpolation FP1 Formula
I do not like them very much. Hate is too draining.
Re: Linear Interpolation FP1 Formula
I wonder if I can find a way to make her like maths in any way.
Re: Linear Interpolation FP1 Formula
That may be difficult. Perhaps you can tell her about the history of math.
Re: Linear Interpolation FP1 Formula
I'd have to do some research for myself on that. I am not able to give the complete history. I think telling her the history of maths would bore her.
Re: Linear Interpolation FP1 Formula
How about a 150 page dissertation on the accumulation of errors in floating point computations? That will thrill her silly.
Re: Linear Interpolation FP1 Formula
I think if I sent her that she would want me to read a 150-page zoology paper. Torture for the both of us...
Re: Linear Interpolation FP1 Formula
You could counter with Andy's original 200 page proof.
Re: Linear Interpolation FP1 Formula
Andy as in Andrew Wiles? She may not be interested in FLT. I would have to think about what would appeal to her. Typically I have found that the three things my friends are impressed by are calculus
'tricks' (like fractional calculus), complex numbers, and modular arithmetic.
Re: Linear Interpolation FP1 Formula
I was joking about Andy's proof. It even put him to sleep while reading it.
Much better is number theory. Or why computers cannot really do arithmetic.
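The quip about computers not really doing arithmetic refers to binary floating point: most decimal fractions (like 0.1) have no exact binary representation, so tiny representation errors appear and can accumulate over many operations. A quick sketch:

```python
from decimal import Decimal

print(0.1 + 0.2 == 0.3)              # False: neither side is stored exactly
print(Decimal(0.1 + 0.2))            # the exact stored value, slightly above 0.3
print(sum(0.1 for _ in range(10)))   # not exactly 1.0 -- the error accumulates
```

This is why a long dissertation on the accumulation of errors in floating-point computation is a perfectly real, if unromantic, topic.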
Re: Linear Interpolation FP1 Formula
Number theory is great. Maybe an appropriate choice for PJ, although being a biologist she may dislike it if she cannot see an immediate application.
Re: Linear Interpolation FP1 Formula
Wait, what do you mean by computers can't do arithmetic?
Re: Linear Interpolation FP1 Formula
You can always explain how the factoring of numbers relates to cryptography.
Re: Linear Interpolation FP1 Formula
I don't know. I get the feeling maths is vomitworthy to her. Maybe she will like the statistics (the most relevant field to biologists I guess).
Re: Linear Interpolation FP1 Formula
Yes, they like statistics as long as it agrees with their pet theories.
Isaac Newton
Early Life
Isaac Newton was one of the greatest and most influential contributors to fields such as physics, mathematics, astronomy, philosophy and theology. Born on 4 January 1643 (25 December 1642 by the Old Style calendar then used in England) in Lincolnshire, England, Newton began a life that would inspire the scientific world for centuries to come.
Born prematurely, he was a physically frail child, and his mother's remarriage left him deeply resentful of both her and his stepfather. After his early education at The King's School in Grantham, Newton applied to Trinity College, Cambridge, and was admitted in June 1661. There he studied the modern philosophers and astronomers, including Descartes, Galileo, Kepler and Copernicus. He showed immense aptitude in mathematics and physics, and was soon working on the binomial theorem and developing the mathematical ideas that would later grow into the branch of mathematics called infinitesimal calculus.
When the university closed during the Great Plague in 1665, Newton seized the chance to work on his theories of calculus. He also worked on optics and on his law of gravitation, which, according to the famous (and likely embellished) story, was inspired by watching an apple fall. He became a Fellow of Trinity College in 1667 and the Lucasian Professor of Mathematics in 1669.
Later Years and Work
Newton worked in nearly every branch of mathematics. He particularly advanced calculus, solving problems of analytic geometry using differentiation and integration. Newton's method, Newton's identities, the binomial theorem, improvements to the theory of finite differences, and solutions to Diophantine equations are all credited to Newton.
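To illustrate one item on that list: Newton's method finds a root of a function f by repeatedly replacing a guess x with x - f(x)/f'(x). A minimal modern sketch (in Python, purely for illustration):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of f(x) = x^2 - 2, i.e. the square root of 2:
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # 1.41421356...
```

Starting from x0 = 1.0 the iteration converges to sqrt(2) in only a handful of steps, which is typical of the method's quadratic convergence near a simple root.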
His work in the field of optics also produced major advances that clarified previously vague theories. Through a series of experiments, most famously the 'Experimentum Crucis', Newton investigated the refraction of light, showing that white light passing through a glass prism separates into a spectrum of colors, each refracting at a different angle. He experimented with various media, such as oil, water and even soap bubbles, to develop his color theory. His theories of optics were collected in 'Opticks', published in 1704.
Newton conceived his law of gravitation in 1665. From that moment on, Newton's work in mechanics and physics began to change the world's perspective with ground-breaking theories. He wrote the 'Principia', a work in three books, each dealing with different concepts. Book one treats basic mechanics and explains how gravity governs the motions of celestial bodies; the second book covers the theory of fluids, their motion and density; and the third book applies the law of gravitation to the universe.
Newton ranged across all the sciences, including a considerable amount of work in chemistry and alchemy. He also pursued research outside science, in theology, prophecy and history. Though he rejected the sacrament and shunned many other religious beliefs as superstitious and illogical, Newton can be called a Unitarian for his passion to combine knowledge and
Bellmawr Precalculus Tutor
...I frequently work with students far below grade level and close education gaps. I have also worked with accelerated groups in Camden with students that have gone on to receive scholarships and
success at highly accredited local high schools. My strength in tutoring is using vocabulary and phras...
8 Subjects: including precalculus, geometry, algebra 1, algebra 2
...I keep up with the latest research in the scientific journals and have read many books. I would like to use my enthusiasm for this subject to help you master astronomy. I can help you master
SAT Math by guided practice.
10 Subjects: including precalculus, calculus, physics, geometry
I am a graduate student working in engineering and I want to tutor students in SAT Math, Algebra, and Calculus. I think I could do a good job. I studied Chemical Engineering for undergrad, and I
received a good score on the SAT Math, SAT II Math IIC, GRE Math, and general math classes in school.
8 Subjects: including precalculus, calculus, geometry, algebra 1
...Conclusion: I hope I have piqued your interest with my years of experience and unique perspective. Please do not hesitate to contact me through Wyzant where we can discuss your writing needs
and availability. I look forward to working with you.I graduated from Villanova University in May 2013 with my B.S. in Mechanical Engineering.
37 Subjects: including precalculus, reading, writing, English
I’ve been involved in education in Philadelphia for the six years that I have been in the city. I’ve had the pleasure of working at Germantown Friends, Vaux, Strawberry Mansion, and Roxborough, in
addition to the many individuals that I have tutored as an independent contractor. I started my profe...
23 Subjects: including precalculus, reading, finance, statistics | {"url":"http://www.purplemath.com/bellmawr_precalculus_tutors.php","timestamp":"2014-04-18T11:23:08Z","content_type":null,"content_length":"24146","record_id":"<urn:uuid:c85be14d-d15e-4251-99fc-bb668f4c81d2>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00158-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interest - Holiday Fund
August 21st 2007, 05:15 AM
Interest - Holiday Fund
A man invests $60,000 in an account which earns interest at a compound rate of 7% per annum. He wants to make a withdrawal of $W at the end of each year for his annual holiday expenses. The
withdrawal is made immediately after that year's interest has been paid.
He intends to do this for 20 years, so that at ethe end of this time his account will have zero balance. He is trying to work out the amount of $W that he will have towards his holiday each year.
(a) Firstly, he writes the expression A1 in terms of W for the amount of money in his account immediately after he has made the first withdrawal.
(b) Then he writes the expressions for A2 and A3, the amounts of money in his account immediately after his second and third withdrawals and simplifies them.
(c) From this he deduces an expression for A20, and then finds how much he has towards his holiday each year for 20 years.
Find the value of W, showing full justification.
Okay, so for (a) I have A1 = 60,000*(1.07^1) - W, but I can't really decide what A2, A3, or A20 should be, and therefore I cannot find $W. Could anyone walk me through how to do this? There is a
similar problem to this one which I'd like to try unassisted, but I sort of need a template first. I would hugely appreciate some help.
August 21st 2007, 08:32 AM
Hello, Sparta!
Are you looking for a derivation for the required formula?
A man invests $60,000 in an account which earns interest at a compound rate of 7% per annum.
He wants to make a withdrawal of $W at the end of each year for his annual holiday expenses.
The withdrawal is made immediately after that year's interest has been paid.
He intends to do this for 20 years, so that at the end of this time his account will be $0.
He is trying to work out the amount of $W that he will have towards his holiday each year.
Your reasoning is correct. .Let $P$ = amount invested.
At the end of one year, the account has: $P(1.07)^1$ dollars.
He withdraws $W and the account has: $P(1.07) - W$ dollars.
At the end of year 2, the account has: $1.07[P(1.07) - W]$ dollars.
He withdraws $W and the account has: $P(1.07)^2 - W(1.07) - W$ dollars.
At the end of year 3, the account has: $1.07[P(1.07)^2 - W(1.07) - W]$ dollars.
He withdraws $W and the account has: $P(1.07)^3 - W(1.07)^2 - W(1.07) - W$ dollars.
. . . and so on . . .
At the end of year 20, the account has:
. . $(1.07)\left[P(1.07)^{19} - W(1.07)^{18} - W(1.07)^{17} - \:\cdots \:- W(1.07) - W\right]$ dollars.
He withdraws $W and the account has:
. . $P(1.07)^{20} - W(1.07)^{19} - W(1.07)^{18} - \:\cdots \:- W(1.07)^2 - W(1.07) - W$ dollars.
But this final balance will be zero dollars.
We have: . $P(1.07)^{20} \;=\;W\left[(1.07)^{19} + (1.07)^{18} + \cdots + (1.07)^2 + (1.07) + 1\right]$
The expression at the far right is a geometric series
. . with first term $a = 1$, common ratio $r = 1.07$, and 20 terms.
So we have: . $P(1.07)^{20} \;=\;W\cdot\frac{(1.07)^{20} - 1}{(1.07) - 1}$
. . Therefore: . $W \;=\;P\cdot\frac{(0.07)(1.07)^{20}}{(1.07)^{20}-1}$
For $P = \$60,000\!:\;\;W \;\approx\;\$5663.58$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
In general, the formula is: . $W \;=\;P\cdot\frac{i(1+i)^n}{(1 + i)^n - 1}$
. . where: . $\begin{Bmatrix}P & = & \text{principal invested} \\ i & = & \text{periodic interest rate} \\ n & = & \text{number of periods} \\ W & = & \text{periodic withdrawal}\end{Bmatrix}$
This formula is identical to the Amortization Formula
. . in which we are to pay off a debt. | {"url":"http://mathhelpforum.com/business-math/17929-interest-holiday-fund-print.html","timestamp":"2014-04-17T17:01:34Z","content_type":null,"content_length":"11218","record_id":"<urn:uuid:ef8aeaea-3cc8-4290-a9bd-c060b886b9b8>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
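A quick numerical check of the result (a minimal sketch; the function name is mine): compute W from the formula, then simulate the twenty years of interest-then-withdrawal to confirm the account really ends at zero.

```python
def annual_withdrawal(P, i, n):
    """W = P * i * (1+i)^n / ((1+i)^n - 1), the amortization formula derived above."""
    growth = (1 + i) ** n
    return P * i * growth / (growth - 1)

P, i, n = 60_000, 0.07, 20
W = annual_withdrawal(P, i, n)
print(round(W, 2))  # 5663.58

# Simulate: each year interest is credited first, then W is withdrawn.
balance = P
for _ in range(n):
    balance = balance * (1 + i) - W
print(abs(balance) < 1e-6)  # True: the account ends at zero (up to float round-off)
```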
search results
Results 1 - 4 of 4
1. CJM 2011 (vol 64 pp. 123)
Gosset Polytopes in Picard Groups of del Pezzo Surfaces
In this article, we study the correspondence between the geometry of del Pezzo surfaces $S_{r}$ and the geometry of the $r$-dimensional Gosset polytopes $(r-4)_{21}$. We construct Gosset polytopes
$(r-4)_{21}$ in $\operatorname{Pic} S_{r}\otimes\mathbb{Q}$ whose vertices are lines, and we identify divisor classes in $\operatorname{Pic} S_{r}$ corresponding to $(a-1)$-simplexes ($a\leq r$), $
(r-1)$-simplexes and $(r-1)$-crosspolytopes of the polytope $(r-4)_{21}$. Then we explain how these classes correspond to skew $a$-lines($a\leq r$), exceptional systems, and rulings, respectively.
As an application, we work on the monoidal transform for lines to study the local geometry of the polytope $(r-4)_{21}$. And we show that the Gieser transformation and the Bertini transformation
induce a symmetry of polytopes $3_{21}$ and $4_{21}$, respectively.
Categories:51M20, 14J26, 22E99
2. CJM 2008 (vol 60 pp. 64)
Classification of Linear Weighted Graphs Up to Blowing-Up and Blowing-Down
We classify linear weighted graphs up to the blowing-up and blowing-down operations which are relevant for the study of algebraic surfaces.
Keywords:weighted graph, dual graph, blowing-up, algebraic surface
Categories:14J26, 14E07, 14R05, 05C99
3. CJM 2007 (vol 59 pp. 1098)
Ruled Exceptional Surfaces and the Poles of Motivic Zeta Functions
In this paper we study ruled surfaces which appear as an exceptional surface in a succession of blowing-ups. In particular we prove that the $e$-invariant of such a ruled exceptional surface $E$ is
strictly positive whenever its intersection with the other exceptional surfaces does not contain a fiber (of $E$). This fact immediately enables us to resolve an open problem concerning an
intersection configuration on such a ruled exceptional surface consisting of three nonintersecting sections. In the second part of the paper we apply the non-vanishing of $e$ to the study of the
poles of the well-known topological, Hodge and motivic zeta functions.
Categories:14E15, 14J26, 14B05, 14J17, 32S45
4. CJM 2004 (vol 56 pp. 1145)
On Log $\mathbb Q$-Homology Planes and Weighted Projective Planes
We classify normal affine surfaces with trivial Makar-Limanov invariant and finite Picard group of the smooth locus, realizing them as open subsets of weighted projective planes. We also show that
such a surface admits, up to conjugacy, one or two $G_a$-actions.
Categories:14R05, 14J26, 14R20 | {"url":"http://cms.math.ca/cjm/msc/14J26?fromjnl=cjm&jnl=CJM","timestamp":"2014-04-21T14:41:02Z","content_type":null,"content_length":"30698","record_id":"<urn:uuid:da33bb4f-a61b-4662-b67d-836fce5d59cb>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00554-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Permutations and Rearrangements
Ivan Antonowitz binarychem at botsnet.bw
Fri May 22 05:13:38 EDT 2009
As far as the Maths Department is concerned, a Permutation IS a
Rearrangement. As the late Prof Niven exclaimed in exasperation, "Why can't
you bloody chemists stick to one convention!"
And yes, I quite understand that you cannot perform the one operation
without implicitly influencing the other isomorph. The problem is that
there are four ways of defining this bijection. It is these defined
non_commutative relationships that are important in the laboratory.
Let me give a History. I am a chemical technician that interfaces between
people and machines, and I carry around a Propositional_Calculus_Toolbox
suitably modified to especially handle non_commutative operations when
solving a variety of problems.
Philosophically this means I am biased in favour of combinatorial and
constructive mathematics. It is simply not good enough to tell me that a
Tool is available to do a Job, but that nobody can supply it. My Universals
are defined in the context of Clausal Form Logic. This limited vision could
be classed as using a microscope in logic.
Of course the question could be raised, "What has this to do with the
Foundations of Mathematics?". The problem arises in the difference between
Ontological_meaning [Chemists and Physicists] and Epistemological_semantics
[Logicians and Mathematicians] when they try to communicate with one
another. In a sense I have rephrased the question with which I began.
Ivan Antonowitz
Ambiguous Chance Constrained Problems and Robust Optimization
Results 1 - 10 of 31
- FORTHCOMING IN OPERATIONS RESEARCH , 2004
"... We describe a two-stage robust optimization approach for solving network flow and design problems with uncertain demand. In two-stage network optimization one defers a subset of the flow
decisions until after the realization of the uncertain demand. Availability of such a recourse action allows one ..."
Cited by 34 (3 self)
We describe a two-stage robust optimization approach for solving network flow and design problems with uncertain demand. In two-stage network optimization one defers a subset of the flow decisions
until after the realization of the uncertain demand. Availability of such a recourse action allows one to come up with less conservative solutions compared to single-stage optimization. However, this
advantage often comes at a price: two-stage optimization is, in general, significantly harder than singe-stage optimization. For network flow and design under demand uncertainty we give a
characterization of the first-stage robust decisions with an exponential number of constraints and prove that the corresponding separation problem is N P-hard even for a network flow problem on a
bipartite graph. We show, however, that if the second-stage network topology is totally ordered or an arborescence, then the separation problem is tractable. Unlike single-stage robust optimization
under demand uncertainty, two-stage robust optimization allows one to control conservatism of the solutions by means of an allowed “budget for demand uncertainty.” Using a budget of uncertainty we
provide an upper
, 2005
"... In this paper, we introduce an approach for constructing uncertainty sets for robust optimization using new deviation measures for bounded random variables known as the forward and backward
deviations. These deviation measures capture distributional asymmetry and lead to better approximations of c ..."
Cited by 26 (9 self)
In this paper, we introduce an approach for constructing uncertainty sets for robust optimization using new deviation measures for bounded random variables known as the forward and backward
deviations. These deviation measures capture distributional asymmetry and lead to better approximations of chance constraints. We also propose a tractable robust optimization approach for obtaining
robust solutions to a class of stochastic linear optimization problems where the risk of infeasibility can be tolerated as a tradeoff to improve upon the objective value. An attractive feature of the
framework is the computational scalability to multiperiod models. We show an application of the framework for solving a project management problem with uncertain activity completion time.
, 2007
"... In this paper we survey the primary research, both theoretical and applied, in the field of Robust Optimization (RO). Our focus will be on the computational attractiveness of RO approaches, as
well as the modeling power and broad applicability of the methodology. In addition to surveying the most pr ..."
Cited by 23 (5 self)
Add to MetaCart
In this paper we survey the primary research, both theoretical and applied, in the field of Robust Optimization (RO). Our focus will be on the computational attractiveness of RO approaches, as well
as the modeling power and broad applicability of the methodology. In addition to surveying the most prominent theoretical results of RO over the past decade, we will also present some recent results
linking RO to adaptable models for multi-stage decision-making problems. Finally, we will highlight successful applications of RO across a wide spectrum of domains, including, but not limited to,
finance, statistics, learning, and engineering.
- Math. Prog. B, this issue, 2007
"... Abstract Robust Optimization is a rapidly developing methodology for handling optimization problems affected by non-stochastic “uncertain-butbounded” data perturbations. In this paper, we
overview several selected topics in this popular area, specifically, (1) recent extensions of the basic concept ..."
Cited by 14 (2 self)
Add to MetaCart
Abstract Robust Optimization is a rapidly developing methodology for handling optimization problems affected by non-stochastic “uncertain-butbounded” data perturbations. In this paper, we overview
several selected topics in this popular area, specifically, (1) recent extensions of the basic concept of robust counterpart of an optimization problem with uncertain data, (2) tractability of robust
counterparts, (3) links between RO and traditional chance constrained settings of problems with stochastic data, and (4) a novel generic application of the RO methodology in Robust Linear Control.
Keywords optimization under uncertainty · robust optimization · convex programming · chance constraints · robust linear control
, 2005
"... In this paper we develop approximation algorithms for two-stage convex chance constrained problems. Nemirovski and Shapiro [18] formulated this class of problems and proposed an ellipsoid-like
iterative algorithm for the special case where the impact function f(x,h) is bi-affine. We show that this a ..."
Cited by 10 (0 self)
Add to MetaCart
In this paper we develop approximation algorithms for two-stage convex chance constrained problems. Nemirovski and Shapiro [18] formulated this class of problems and proposed an ellipsoid-like
iterative algorithm for the special case where the impact function f(x,h) is bi-affine. We show that this algorithm extends to bi-convex f(x,h) in a fairly straightforward fashion. The complexity of
the solution algorithm as well as the quality of its output are functions of the radius r of the largest Euclidean ball that can be inscribed in the polytope defined by a random set of linear
inequalities generated by the algorithm [18]. Since the polytope determining r is random, computing r is difficult. Yet, the solution algorithm requires r as an input. In this paper we provide some
guidance for selecting r. We show that the largest value of r is determined by the degree of robust feasibility of the two-stage chance constrained problem – the more robust the problem, the higher
one can set the parameter r. Next, we formulate ambiguous two-stage chance constrained problems. In this formulation, the random variables defining the chance constraint are known to have a fixed
distribution; however, the decision maker is only able to estimate this distribution to within some error. We construct an algorithm that solves the ambiguous two-stage chance constrained problem
when the impact function f(x,h) is bi-affine and the extreme points of a certain “dual ” polytope are known explicitly. 1
, 2005
"... We propose a unified theory that links uncertainty sets in robust optimization to risk measures in portfolio optimization. We illustrate the correspondence between uncertainty sets and some
popular risk measures in finance, and show how robust optimization can be used to generalize the concepts of t ..."
Cited by 7 (1 self)
Add to MetaCart
We propose a unified theory that links uncertainty sets in robust optimization to risk measures in portfolio optimization. We illustrate the correspondence between uncertainty sets and some popular
risk measures in finance, and show how robust optimization can be used to generalize the concepts of these measures. We also show that by using properly defined uncertainty sets in robust
optimization models, one can in fact construct coherent risk measures. Our approach to creating coherent risk measures is easy to apply in practice, and computational experiments suggest that it may
lead to superior portfolio performance. Our results have implications for efficient portfolio optimization under different measures of risk.
- Mathematical Finance
"... Expected utility models in portfolio optimization are based on the assumption of complete knowledge of the distribution of random returns. In this paper, we relax this assumption to the knowledge of only the mean, covariance and support information. No additional assumption on the type of distributio ..."
Cited by 7 (4 self)
Add to MetaCart
Expected utility models in portfolio optimization are based on the assumption of complete knowledge of the distribution of random returns. In this paper, we relax this assumption to the knowledge of
only the mean, covariance and support information. No additional assumption on the type of distribution such as normality is made. The investor’s utility is modeled as a piecewise-linear concave
function. We derive exact and approximate optimal trading strategies for a robust or maximin expected utility model, where the investor maximizes his worst case expected utility over a set of
ambiguous distributions. The optimal portfolios are identified using a tractable conic programming approach. Using the optimized certainty equivalent (OCE) framework of Ben-Tal and Teboulle [6], we
provide connections of our results with robust or ambiguous convex risk measures, in which the investor minimizes his worst case risk under distributional ambiguity. New closed form expressions for
the OCE risk measures and optimal portfolios are provided for two and three piece utility functions. Computational experiments indicate that such robust approaches can provide good trading strategies
in financial markets. 1
, 2008
"... In this paper, we propose a framework for robust optimization that relaxes the standard notion of robustness by allowing the decision-maker to vary the protection level in a smooth way across
the uncertainty set. We apply our approach to the problem of maximizing the expected value of a payoff funct ..."
Cited by 6 (3 self)
Add to MetaCart
In this paper, we propose a framework for robust optimization that relaxes the standard notion of robustness by allowing the decision-maker to vary the protection level in a smooth way across the
uncertainty set. We apply our approach to the problem of maximizing the expected value of a payoff function when the underlying distribution is ambiguous and therefore robustness is relevant. Our
primary objective is to develop this framework and relate it to the standard notion of robustness, which deals with only a single guarantee across one uncertainty set. First, we show that our
approach connects closely to the theory of convex risk measures. We show that the complexity of this approach is equivalent to that of solving a small number of standard robust problems. We then
investigate the conservatism benefits and downside probability guarantees implied by this approach and compare to the standard robust approach. Finally, we illustrate the methodology on an asset
allocation example consisting of historical market data over a 25-year investment horizon and find in every case we explore that relaxing standard robustness with soft robustness yields a seemingly
favorable risk-return tradeoff: each case results in a higher out-of-sample expected return for a relatively minor degradation of out-of-sample downside performance.
- Oper. Res
"... Stochastic optimization, especially multistage models, is well known to be computationally excruciating. Moreover, such models require exact specifications of the probability distributions of
the underlying uncertainties, which are often unavailable. In this paper, we propose tractable methods of ad ..."
Cited by 5 (1 self)
Add to MetaCart
Stochastic optimization, especially multistage models, is well known to be computationally excruciating. Moreover, such models require exact specifications of the probability distributions of the
underlying uncertainties, which are often unavailable. In this paper, we propose tractable methods of addressing a general class of multistage stochastic optimization problems, which assume only
limited information of the distributions of the underlying uncertainties, such as known mean, support and covariance. One basic idea of our methods is to approximate the recourse decisions via
decision rules. We first examine linear decision rules in detail and show that even for problems with complete recourse, linear decision rules can be inadequate and even lead to infeasible instances.
Hence, we propose several new decision rules that improve upon linear decision rules, while keeping the approximate models computationally tractable. Specifically, our approximate models are in the
forms of the so-called second order cone (SOC) programs, which could be solved efficiently both in theory and in practice. We also present computational evidence indicating that our approach is a
viable alternative, and possibly advantageous, to existing stochastic optimization solution techniques in solving a two-stage stochastic optimization problem with complete recourse.
[FOM] Axioms of reducibility and infinity
Michael J. DeLaurentis michaeljad at comcast.net
Sun Aug 7 22:25:10 EDT 2011
In a more general context -- and one central to the Tractatus --
Wittgenstein, in the passages you cite and throughout, thought a proper
symbolism would dispense with these matters transparently, so that, e.g., as
he says in 5.5301 et seq., "pseudo-propositions" ["p is a proposition"; "E
infinite x's"; reducibility and type theory, and, more generally, statements
about "prototypes" (symbol categories)] would be unnecessary and impossible
and their content displayed in the symbolism ["infinitely many names with
different meanings" ("Bedeutung") in place of the axiom of infinity, e.g.].
He also was keen to keep pure logic entirely separate from empirical claims,
which he thought reducibility and infinity and much of set theory were or
involved. He was, of course, an extreme anti-Platonist constructivist, an
attitude which survived the transition from the Tractatus to the
Philosophical Investigations.
-----Original Message-----
From: fom-bounces at cs.nyu.edu [mailto:fom-bounces at cs.nyu.edu] On Behalf Of
mlink at math.bu.edu
Sent: Sunday, August 07, 2011 4:00 PM
To: fom at cs.nyu.edu
Subject: Re: [FOM] Axioms of reducibility and infinity
I'm working on the Tractatus; Wittgenstein rejects
the axiom of reducibility (see
the axiom of infinity (5.535), and even
set theory (6.031). First, I'd like to know
more about those axioms. Second, I'd like to know
why/how Wittgenstein rejects all of them?
Francisco, I am not skilled with these topics, but let me try to
start things off. From what I've heard (from Burton Dreben,
Warren Goldfarb, and Juliet Floyd), Wittgenstein's presentation
of the axioms of reducibility and infinity in the Tractatus were
part of his larger philosophical project rejecting the logicism
of Russell and Frege. The logicism of Frege and Russell was the
unfulfilled hope that all of mathematics, indeed, all of rational
discourse, could be reduced to pure logic.
What Wittgenstein says at 6.031 is that set theory (the "theory
of classes") is "superfluous in mathematics." This has been
interpreted in a number of different ways, but at the minimum it
might simply mean that there is not only one approach to the
foundations of mathematics. Russell himself had tried to avoid
appealing to classes as unwelcome entities. Frege wanted to
base arithmetic and mathematical induction on concepts of logic.
But the ramified theory of types, which Russell developed in
order to avoid a paradox (viz. the class of all classes that are
not members of themselves, which results from Frege's Basic Laws
of Arithmetic), entailed in the Principia Mathematica the appeal
to the axiom of reducibility. This axiom reduces or flattens the
typed hierarchy all to one level. Ramsey in 1925 showed that
once the axiom of reducibility is called on, extensionality is
fully in play, making Russell's intensional proclivities
extraneous. Ramsey in making this argument appealed to
formulations he saw in the Tractatus.
The axiom of infinity was another piece of the puzzle that
Russell had hoped to avoid but could not find a way around.
Whitehead and he used it in the Principia to develop the real
numbers, for example. Wittgenstein seems to have had a different
understanding of logic than did Russell and Frege. Perhaps he
did not see logic as representing (4.0312) a distinct or separate
realm which could explain mathematics and rationality. Instead,
it might be that in the Tractatus Wittgenstein understood logic
as the form common to any thoughtful presentation. But that is a guess.
You might check Wikipedia or the Stanford Encyclopedia of
Philosophy for more substantive information. Also, I can provide
you with references to some of the main primary and secondary
sources, if you contact me directly. Finally, there may be a
number of mistakes above, but fortunately there are real experts
on these matters who will be forthcoming with corrections where needed.
FOM mailing list
FOM at cs.nyu.edu
More information about the FOM mailing list
Capitola Math Tutor
Find a Capitola Math Tutor
...In addition to my mathematics skills, I am also fluent in Portuguese, English, and Spanish, as well as conversational French and beginning German. I can offer my tutoring skills for language students as well. I have a keen sense for teaching young adults, and I am able to explain procedures easily so that they can be understood by the student.
15 Subjects: including calculus, chemistry, statistics, Portuguese
...Elementary math is like a game; once you learn the basic rules you can have fun learning. I operate either with work from school, helping with homework and reviewing what was covered in class
(with more care taken to answer the student's questions than is possible in the classroom), or I have a ...
32 Subjects: including calculus, physics, statistics, ADD/ADHD
...When I tutor econometrics students I tell them up front that I am not familiar with all the terminology and charge a lower rate as a result. I also tutor slower and spend time showing them how
they can use their textbook as a resource. :) I tutor Linear Algebra on a regular basis at UCSC. I've also taught the class.
14 Subjects: including statistics, algebra 1, algebra 2, calculus
...I have a passion for helping others achieve, not only higher academics, but confidence within themselves to reach their learning goals. Some of my greatest academic strengths are in essay
writing and editing, biological sciences (anatomy and physiology), math (particularly word problem solving),...
13 Subjects: including algebra 1, prealgebra, reading, English
...I am patient, thorough, and can work with students of all ages! In addition, I have taken six AP classes. These classes were AP European History, AP US History, AP Psychology, AP Biology, AP
Government and Politics, and AP Microeconomics.
16 Subjects: including prealgebra, physics, algebra 1, algebra 2
Cauchy's Derivative Formula
February 25th 2013, 11:33 AM #1
Feb 2013
united kingdom
Cauchy's Derivative Formula
Evaluate the integral over C, where C = { z : |z+2| = 3 }: integrate cos(pi*z) / ((z+4)^2 * (z-2)) dz
I understand that from the denominator the value -4 lies inside C but the point 2 does not.
Am I right in thinking then f(z) = cos(pi*z) / (z-2) is analytic on R and C is a simple closed contour in R. Hence by Cauchy's First Derivative Formula......
....and this is where I'm unsure.
Any suggestions?
Re: Cauchy's Derivative Formula
If there is a singularity in your region, then no, it is not a simple closed contour. You will need to evaluate residues.
Re: Cauchy's Derivative Formula
What does that mean exactly to "evaluate the residues?" Please could you show me using the example?
Thanks Prove It!
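For what it's worth, the final answer can be sanity-checked numerically. Extracting the double pole at z = -4 and writing f(z) = cos(pi*z)/(z-2) (the factor analytic inside C), Cauchy's derivative formula predicts the integral is 2*pi*i*f'(-4) = -pi*i/18, since f'(-4) = -1/36. Here is a short Python sketch (a check of my own, not from the thread) that integrates around the circle directly:

```python
import cmath

def g(z):
    """The integrand cos(pi*z) / ((z + 4)**2 * (z - 2))."""
    return cmath.cos(cmath.pi * z) / ((z + 4)**2 * (z - 2))

# Parametrise C = { z : |z + 2| = 3 } as z(t) = -2 + 3*exp(i*t), t in [0, 2*pi),
# and approximate the contour integral by a Riemann sum.
N = 4000
total = 0j
for k in range(N):
    t = 2 * cmath.pi * k / N
    z = -2 + 3 * cmath.exp(1j * t)
    dz = 3j * cmath.exp(1j * t) * (2 * cmath.pi / N)  # z'(t) dt
    total += g(z) * dz

# Cauchy's derivative formula with f(z) = cos(pi*z)/(z - 2):
# integral = 2*pi*i * f'(-4) = 2*pi*i * (-1/36) = -pi*i/18
predicted = -1j * cmath.pi / 18
print(total, predicted)
```

The two values agree to many decimal places, which supports extracting f(z) = cos(pi*z)/(z-2) rather than including the (z+4)^2 factor in f.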
February 25th 2013, 05:58 PM #2
February 28th 2013, 05:50 AM #3
Provided by:
inv_mass -- inverse of the L2 scalar product
form(const space& V, const space& V, "inv_mass");
Assembles the inverse of the matrix associated with the L2 scalar product
of the finite element space V:
         /
m(u,v) = | u v dx
         / Omega
The space V may be either a P0 or P1d discontinuous finite element
space; see form(3).
The following piece of code builds the inverse of the mass matrix
associated to the P1d approximation:
geo omega_h ("square");
space Vh (omega_h , "P1d");
form im (Vh, Vh, "inv_mass");
Extremely difficult IMO problem
April 6th 2006, 11:10 AM
Extremely difficult IMO problem
This problem is from the 2003 IMO (the International Mathematical Olympiad). I do not even know how to approach this.
"Determine all pairs (a,b) such that $\frac{a^2}{2ab^2-b^3+1}$ is a positive integer."
The only assertion I can make is that $2ab^2-b^3+1>0$.
April 6th 2006, 01:43 PM
Originally Posted by Jameson
This problem is from the 2003 IMO (the International Mathematical Olympiad). I do not even know how to approach this.
"Determine all pairs (a,b) such that $\frac{a^2}{2ab^2-b^3+1}$ is a positive integer."
The only assertion I can make is that $2ab^2-b^3+1>0$.
Are a, b integers as well? If so, then another criterion comes from the rational roots test:
Let n be a positive integer, then let
$\frac{a^2}{2ab^2-b^3+1} = n$
So $nb^3-2nab^2+(a^2-n)=0$
Thus we know that the only rational b values that satisfy this equation are of the form:
$b = \frac{factor \, of \, a^2-n}{factor \, of \, n}$
Since b must be an integer, the only possible values for b are those for which factors of n are factors of a^2-n.
Just a thought.
April 6th 2006, 02:02 PM
I just found a non-integer pair $\left ( \frac{1 \pm \sqrt{15}}{4}, \, \frac{1}{2} \right )$ that seems to work.
I guess they DON'T need to be integers...
April 6th 2006, 02:08 PM
(Shrugs) I guess by letting n be a positive integer and otherwise letting n and b be arbitrary we can always find two "a" values for each (b, n) pair:
$a = nb^2 \pm \sqrt{n^2b^4-nb^3+n}$
as long as the discriminant is real.
April 6th 2006, 03:54 PM
Could you show your work for your last post? I appreciate your insight. The question asks for all pairs, and I'm getting from your work that there could be infinite. How would I go about first
finding the number of pairs?
April 6th 2006, 04:07 PM
Originally Posted by Jameson
Could you show your work for your last post? I appreciate your insight. The question asks for all pairs, and I'm getting from your work that there could be infinite. How would I go about first
finding the number of pairs?
topsquark (the physicist) had,
$nb^3-2nab^2+(a^2-n)=0$
Then, he just used the quadratic formula.
April 7th 2006, 06:25 AM
Originally Posted by ThePerfectHacker
topsquark (the physicist)
Do I detect a note of disdain here? (I think he's just jealous. :D ) And hey, even the workers at Friendly's have enough respect to call me "The Professor!" ;)
April 8th 2006, 02:05 AM
by the way the actual question is:
Determine all pairs of positive integers $(a,b)$ such that
$\frac{a^2}{2ab^2-b^3+1}$
is a positive integer.
so a and b must be positive integers.
here are a few examples of positive integers which work:
a = 126, b = 4
a = 7, b = 2
a = 116149892142279364320, b = 123456
June 6th 2006, 11:51 PM
Are there any more pairs except these?
Originally Posted by topsquark
(Shrugs) I guess by letting n be a positive integer and otherwise letting n and b be arbitrary we can always find two "a" values for each (b, n) pair:
$a = nb^2 \pm \sqrt{n^2b^4-nb^3+n}$
as long as the discriminant is real.
I visited this thread yesterday and the question sounded interesting to me.
Continuing the quoted work, I tried to find the integer values (of b and n) for which the discriminant is real and a perfect square (so that a is an integer), and hence the answer to the question.
I found that a = 8*(k^4) - k, b = 2k,
where k is any positive integer, gives pairs satisfying the conditions in the question.
k=1, b=2, a=7
k=2, b=4, a=126
k=3, b=6, a=645
... and so on, where k is any positive integer.
Are there any more pairs?
Please verify the above answers.
If you want to know how I arrived at the generalization, tell me.
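One quick way to verify these pairs (and to hunt for strays) is a brute-force computer search. Here is a Python sketch (my own, not from the thread) that tests every pair with a, b < 200 and checks each hit against the three families that come up in this thread — (2l, 1), (l, 2l), and (8l^4 - l, 2l) — which together are the published answer to the IMO problem:

```python
def quotient(a, b):
    """Return a**2 / (2*a*b**2 - b**3 + 1) when it is a positive integer, else None."""
    d = 2 * a * b * b - b**3 + 1
    if d > 0 and (a * a) % d == 0:
        return (a * a) // d
    return None

def in_known_family(a, b):
    """The three families discussed in the thread."""
    if b == 1:
        return a % 2 == 0                   # (2l, 1)
    if b % 2 == 0:
        l = b // 2
        return a == l or a == 8 * l**4 - l  # (l, 2l) or (8*l^4 - l, 2l)
    return False

LIMIT = 200
pairs = [(a, b) for a in range(1, LIMIT) for b in range(1, LIMIT)
         if quotient(a, b) is not None]

# Every pair found in this range lies in one of the three families,
# and no pair with odd b > 1 turns up.
print(len(pairs), all(in_known_family(a, b) for a, b in pairs))
```

Within this range the search finds, among others, (7, 2) and (126, 4) from the k = 1 and k = 2 cases above.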
June 7th 2006, 11:30 AM
The more trivial ones.
June 7th 2006, 05:42 PM
Originally Posted by ThePerfectHacker
The more trivial ones.
The method I used shows that there should be two values of a for each b,
i.e. if b = 2k, either a = 8*(k^4) - k or a = k (a = k is not trivial).
The trivial one is (2k,1)
Can b be an odd number other than 1? I think not. What do you think?
June 7th 2006, 06:30 PM
We have,
$\frac{a^2}{2ab^2-b^3+1}=k\geq 1$
Let us find under which conditions this quadratic has a unique solution, that is, when the discriminant vanishes:
$k^2b^4-kb^3+k=0$, i.e. $k=\frac{b^3-1}{b^4}$
This is not an integer because the numerator factors into $(b-1)(b^2+b+1)$ and $\gcd(b^4,b^3-1)=1$, therefore the only case is $b=1$, which leads to our pairs $(2k,1)$.
Otherwise, what I have showed is that if this equation has two solutions and one is a positive integer, then the other must be as well, by Viete's theorem.
User Lilach Leibovich
bio website
visits member for 1 year, 6 months
seen Oct 14 '12 at 23:26
stats profile views 39
Oct Efficient algorithm for computing the integral closure of a computable domain
13 comment @Igor: See if I care... @Gerhard: 10x, but how did you get the impression that I am looking for such? anyhow, you may fix my keyboard, if we are already talking ;-)
12 awarded Editor
Oct Efficient algorithm for computing the integral closure of a computable domain
12 revised edited title
Oct Efficient algorithm for computing the integral closure of a computable domain
12 comment I believe you should ask for help in communicating with people. Maybe a colleague could assist?
8 asked Efficient algorithm for computing the integral closure of a computable domain
6 awarded Teacher
6 awarded Scholar
6 accepted Decidability of the generated order
6 awarded Student
Oct Decidability of the generated order
6 comment hmm, yes, I forgot to mention (actually, any field which is dense in its real closure). Are there known relatively efficient algorithms for it (i.e. is it at least primitive recursive),
maybe using SOS stuff?
6 asked Decidability of the generated order
Oct answered Applications of the Ax Kochen Ershov (AKE) principle
Quadratic Equation Tutors
Fort Lauderdale, FL 33331
Just In Time Math
...Prerequisites include factoring; solving 1 and 2 step linear equations; adding, subtracting, multiplying, and dividing integers; combining like terms; the order of operations. Algebra 2 success
requires the mastery of solving quadratics by 3 methods: factoring,...
Offering 10+ subjects including algebra 1
Math Mama Writes...
I've changed
the Google doc for calculus planning
to be editable by anyone without permission, and made another one for my personal copy. If you'd like to play with it, please do.
A few people suggested resources including John Golden, who introduced me to his colleague, Matt Boelkins. Matt is putting together a free calculus text currently titled
Active Calculus
Chapter One
is online (pdf link at bottom of page), and the rest of Calc I is available by emailing him. His text does what I want to do, weaving the limits work into the derivatives, as needed. I plan to have
my students use at least some of his work. Thank you, Matt! (I've listed about 8 free or inexpensive texts in the google doc, toward the bottom.)
I went through all my calculus bookmarks and added them to the google doc. I've also included many of the links in Sam's
Virtual Filing Cabinet
. (What a fabulous resource! Thanks, Sam!)
Course Organization
My course outline was broken into 9 units at first, which seemed like too many.
Active Calculus
(AC) has 4 chapters, and I liked how the ideas are organized. So I streamlined my units, partly reflecting AC's organization. Now I have 5:
1. Slopes, Rates of Change, Tangents (Understanding the Derivative in AC), in which we'll consider limits in an informal way.
2. Exploring the Derivative (Computing Derivatives in AC), which includes power rule, product and quotient rule, etc. I'm saving much of the limit study for the 2nd unit, so the first focuses on the
more important big idea. AC has limits in chapter 1.
3. Exploring the World with Derivatives (Using Derivatives in AC). I want to weave the 'derivative rules' and use of derivatives together some, so will veer from AC's cleanly defined chapters.
4. Area & Anti-derivatives (The Definite Integral in AC). I don't want my language to give away the deep connection of the fundamental theorem. I want the students to uncover that as much as
5. Volume, which I love as a grand finale.
Students need to get stronger with their algebra skills. Sam has some great
algebra bootcamp
ideas. For my college students, I'll move some of it from classtime to homework, and will put much of it in 'problem sets', I think. I'm hoping that challenging problems will get them to think deeper
about things they've learned superficially before.
First Unit, Approaching the Derivative
This is a rough draft, mostly listing activities. I've included a practice test below, which gives a clearer idea of the topics.
• Graphing the tangent
• Axes Exercise (math activity for group introductions)
• "What is the meaning of slope?"
• Describing graphs game (graph on board, one partner looks and describes to other partner, who draws, helps students see the need for precision in language, for limits)
• What is pi? (Circumference and area; although Fawn's students are in 6th grade and mine are college students, I think these activities will work for us)
• Limits: 60/x
• more coming...
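(An aside for anyone who likes to see the numbers: the "slopes of curvy lines" idea can be previewed with a few lines of Python — my own sketch here, not one of the course handouts. Secant slopes through nearby points settle toward the tangent slope, which is the limit idea in action before any formal definition shows up.)

```python
def secant_slope(f, x, h):
    """Slope of the secant line through (x, f(x)) and (x + h, f(x + h))."""
    return (f(x + h) - f(x)) / h

f = lambda x: x**2   # a familiar curvy line

# As h shrinks, the secant slopes at x = 3 approach the tangent slope, 6.
for h in [1.0, 0.1, 0.01, 0.0001]:
    print(h, secant_slope(f, 3, h))
```

No limit machinery needed yet — the table of numbers makes "approaching" concrete.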
I'm not exactly sure how things will play out, since this is so different from how I've done it before. I still need to find sources of good homework problems. I'll be going over the first chapter in
Active Calculus
over the next week to see what to use from that.
I'm thinking the test will have 4 sections. (If students need to re-take, they can do that by section, in my office.)
I make a Unit Sheet for the students for each unit, giving a tentative schedule, homework, and practice test. Our classes begin on August 20, so my unit sheet will be complete by then. Let me know if
you'd like a copy. I'm going to make a shared dropbox folder for all my course handouts. I'll post again when that's set up.
Awaiting your ideas, advice, and questions...
The last time I taught Calc I was before blogging changed my life. (It was spring of 2007, over 5 years ago.) I'm trying to plan out the semester, and I have questions for any other calculus
I don't like starting with all the details about limits - way too technical. (Nor do I want to start with weeks of review. Sam has some great ideas about quick reviews when needed.) I want to start
with the beauty of the derivative idea. But I'll need materials to do that. (My blogging comrades will come to my aid there, all their lovely work is recorded in my google bookmarks.)
How do I approach the limits then? I've started thinking about how to bring them in at different levels, as we need them.
I'm gathering all my thoughts on a google doc. You can see my work, or you can email me (mathanthologyeditor on gmail) for edit access. I'd love to work together with others who are trying to move in
the same direction.
Right now I'm working on getting all of my 50-some google bookmarks for calculus-related posts into my outline of the course. When I'm done, I'll make a version for others that focuses on the links.
My personal system is a modified SBG, I guess. I give tests on whole units, that are broken into sections for the major topics. Students can re-test on one section / topic in my office. Until now
I've used percent grades, but I may put those in my gradebook and not on students' tests. Just let them know what went wrong and whether they've shown their mastery yet or not.
Right now I'm thinking 9 units. I'm not sure whether I'd test after every one. Maybe only after the 3rd, 5th, and 8th? Hmm...
Part I. Slopes and rates of change
• Exploring Slopes of curvy lines / Rate of change
• Playing with Limits (lighter treatment than most of the textbook stuff at this point)
• Exploring Derivatives (the 'rules')
• A more precise study of Limits
• Exploring Further with Derivatives (more 'rules', applications, graphing - this one's huge)
Part II. Areas
• Limits and Area
• Anti-derivatives (or "What's the big idea?")
• Exploring Integration
• Volume
I would love to collaborate. Please join me if you'll be teaching calculus.
Two years ago, I linked to Gary Davis' post which quotes in full Skemp's paper, Relational Understanding and Instrumental Understanding (first published in Mathematics Teaching). Last summer and
fall, a number of bloggers referenced the article. I wrote my own post about teaching for 'understanding' in June, and decided at that time to buy Skemp's book, The Psychology of Learning Mathematics.
I've started reading it, and it's great. If anyone would like to buy it and read it together, I'd love that. His discussion of concept formation (chapter 2) and schema (a mental structure, organizing
concepts, chapter 3) helped me think about why things seem so easy once we've learned them, and how hard it is to really get back to the student's perspective, in which the concept hasn't formed yet,
and understanding is a struggle. Chapter 5 talks about ten different functions of symbols. From that chapter:
Thinking is hard work. Once we have understood a mathematical process, it is a great advantage if we can run through it on subsequent occasions without having to repeat every time (even with
greater fluency) the conceptual activities involved. If we are to make progress in mathematics it is, indeed, essential that the elementary processes become automatic, thus freeing our attention
to concentrate on the new ideas which are being learnt - which, in their turn, must also become automatic. ... In mathematics, this is done by detaching the symbols from their concepts, and
manipulating them according to well-formed habits without attention to their meaning. (page 88)
I hope to post a more complete review of this fascinating book once I've finished. I wanted to post now in case anyone would like to read it before they start back to teaching. Let me know if you'd
like to discuss it.
Joint because it's the MAA (Mathematical Association of America) and the AMS (American Mathematical Society). Their annual meeting is huge. I went a few years back when it was right next door in SF.
I went to lots of pedagogical talks, and to all the math circle events. I went with friends and had a blast.
It's hard to travel in January, but I'm going to do it this time. I lived in San Diego for a year, and I can attest that it's a beautiful city. The meetings are Wednesday, January 9 to Saturday,
January 12.
I've been invited to help host the poetry session that will happen (most likely) on Friday evening. I hope we can make that a delightful experience. I'll also be attending every math circle event I
possibly can. And if all goes well I may even be trying to sell a certain book.
I would love to meet some of my online friends there. I just spoke with a blogger I've admired for a long time. He wasn't planning to come, but said I'd gotten him to think twice. So I'm writing this
post to get more of you to think twice. Let's meet up in San Diego in January. Who's coming?
I've set up Playing With Math as a page on Facebook, to report little bits of progress on the book. Today I posted that:
This morning I worked on a chapter I hope to include from Rodi Steinig, who blogs at Talking Stick Learning Center. I'm taking bits from a number of her posts, and highlighting her mindfulness
practice. I love what she does!
Of course Facebook includes a nice little picture of her last post, which I can't figure out how to easily reproduce here.
I'm going to try to post updates every few days there. Check it out if you'd like to follow the progress of Playing With Math: Stories from Math Circles, Homeschoolers, and Passionate Teachers.
While I was on vacation I kept seeing fabulous stuff online. No time to write about it all, but I want to both share and store my many finds here.
Shecky reminded me of this set of 4 videos that James Tanton did. I'm not sure what #2 means (watched the video a while back, forgot the details), but the rest sounds great to me.
This might be a good place to direct my students in the first week. Maybe I'll give them some choices, and ask them to tell me what they think of at least one website.
Mary O'Keeffe wrote earlier this summer about doing Guerrilla Math Circles at a local park. She did a Pascal's Triangle question, making a big triangle with sidewalk chalk. I like the idea of
grabbing the attention of random people in public. This video on Maths Busking* looks like something similar.
Malke wrote a great post about her daughter's take on The Cat in Numberland, pointing to a fabulous review of it [pdf] I hadn't seen before. It goes into great depth on the pedagogical issues.
While I was at the Math Circle Teacher Training Institute (just last week? is that possible?!), I ran a math circle for the top level kids, mostly 12-16. I had them analyze Spot It. (Posts here, here
, and here; can you see any progression?) There were 9 kids, in 3 groups of 3. Each group approached it differently, and one young man (call him Trevor) became absolutely obsessed with the problem.
It was a delight to get to work with this group. Our institute participants were there to observe me, and I seduced them into thinking about the problem too. A few of them were able to multi-task
well enough to give me some good feedback. One noticed that Trevor and I got into a conversation that excluded the other kids (oops!). But the other kids watched us for a few moments and then decided
to get back to work on their own ideas! (Invisibility achieved, though indirectly.)
The adults, who were working on the floor in the back of the room, made a bigger dent into the problem, and spurred me to think more about whether cards with 5 or 7 pictures per card can be made with
the process I came up with in January. (5's, yes. 7's, I'm not sure yet. This question has connections to something called finite field planes, which I intend to explore soon.) Bowman Dickson was in
that group and wrote a great post about their thinking. He also wrote about the origami he learned to make. (I meant to make some origami, but never found the time - dang it!)
[Note added on 11/30/12: Perhaps 5's cannot be made. The procedure Bowman's group used, which I had also used, does not work for 5. A student apparently made a deck using trial and error. I wonder if
I can replicate that.]
Dan McKinnon seems to be thinking about number theory on his blog, mathrecreation:
[For] these kinds of sequences (generated by polynomials with Integer coefficients) - their terms are either always even, always odd, or alternate between even and odd values (e.g. you won't get
a sequence that goes "even, even, odd, .." or some combination other than the three possibilities mentioned). Can you see how you can show that this is true using similar arguments to the ones
used here for last-digits?
* Busking is a British term meaning performing for donations.
Nonantum Science Tutor
Find a Nonantum Science Tutor
...I have spent 6 years studying other cultures and religions on a college level. In 2004, I graduated from Louisiana State University with a BA in anthropology and a minor in religion. My overall
GPA was a 3.73 and my GPA in my religion courses was a 4.0.
3 Subjects: including anthropology, writing, religion
...I am a recent graduate of the University of Oregon with a BS in Biology and Psychology, with a minor in Chemistry. I'm dedicated to the sciences and studied pre-medicine at UO. I've also
applied my knowledge to research and gained practical experience in a neuroscience lab.
17 Subjects: including anatomy, psychology, biology, chemistry
...I have also been successfully tutoring high school and college students in chemistry, physics, computer programming, and topics in mathematics from pre-algebra to statistics and advanced
calculus, as well as SSAT, SAT, and ACT test preparation for over ten years. I can help you, too. References...
33 Subjects: including physics, biochemistry, physical science, chemistry
...I like to try and bring the science to the student, and make it relevant to their daily life. I believe that by engaging the personal aspect and making each topic relevant in that way, students
are more motivated to learn and have more fun doing it!Aside from taking many related courses during m...
5 Subjects: including anatomy, biology, ecology, elementary science
...My science background helps here, and I can often find examples of techniques I have used in my career despite not knowing the application when I was in high school. Although my educational
background is in chemistry, my second major in college was applied mathematics. I recently passed the Mas...
12 Subjects: including chemistry, physics, physical science, calculus
physics question #326
Shane C., a 25 year old male from the Internet asks on January 30, 1998,
Could you explain the idea that gravity can be thought of as a depression or well, much like when a heavy object is placed on a blanket which is held taut? If so, could the motion of planets
in our solar system be explained by drawing a gravitational map showing the lines of gravitational flux? Could this idea be applied to the entire universe, or to anything that has objects orbiting around
a center?
the answer
William George Unruh
answered on January 30, 1998,
Unfortunately, this depiction contains some truth but also much that misleads. In the case of the "well", the picture seems to work only because the gravitational force of the earth pulls
things down the well. Of course, if you are describing gravity around the earth, this does not work. But there is a germ of an idea there, because space around the earth actually is deformed into
something very roughly like the geometry of those wells. The reason why is that the circumference of circles is less than 2 pi times the radius of the circle - as it is in those well models. In other
words, the distance through the earth is larger by about one millionth of a percent (about a centimeter) than the circumference of the earth divided by 2 pi. However, except for things moving near
the speed of light, the effect of this spatial curvature is negligible in determining the path that a particle follows. For light it is this spatial curvature which makes the deflection of light past
say the sun be twice what you would naively calculate, and which Einstein calculated in 1908 before he had developed his theory of General Relativity. But for normally moving particles, the effect of
this spatial curvature is very very small. The main effect of gravity as described in Einstein's theory has to do with time, and the relation of paths through time. One way of encoding this is to say
that gravity is the unequal flow of time from place to place. However, this feature of GR is very difficult to capture in a simple model, so those who simplify things overmuch tend to "cheat" by
pretending that gravity has something to do with spatial curvature (à la the well).
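A quick numerical check of the factor-of-two claim for light deflection past the sun (a sketch; the physical constants below are standard approximate values, not taken from the answer itself):

```python
import math

# Deflection of light grazing the Sun.
# Naive (Newtonian) value: 2GM/(c^2 R); general relativity doubles it to 4GM/(c^2 R).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m
c = 2.998e8        # speed of light, m/s

naive = 2 * G * M_sun / (c**2 * R_sun)      # radians
einstein = 4 * G * M_sun / (c**2 * R_sun)   # radians

def arcsec(rad):
    return rad * 180 / math.pi * 3600

print(f"naive deflection:    {arcsec(naive):.2f} arcsec")
print(f"Einstein deflection: {arcsec(einstein):.2f} arcsec")  # about 1.75 arcsec
```

The GR value of roughly 1.75 arcseconds is the one Eddington's 1919 eclipse expedition set out to measure.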
Cashion, AZ Trigonometry Tutor
Find a Cashion, AZ Trigonometry Tutor
I attended Northern Arizona University where I received my Bachelor's degree in Mathematics, Physics, and Chemistry. While at NAU, I began developing my skills as a tutor of the introductory
physics and mathematics classes. In addition to the undergraduate tutoring, I was also involved with scientific outreach programs for elementary schools in the Flagstaff area.
20 Subjects: including trigonometry, chemistry, calculus, physics
...European History is a subject that many of my students take and thus I'm constantly helping students with it. As a high school Special Education teacher I am constantly helping my students
with all of the classes that they take. I constantly keep up on all subjects to best help my students.
40 Subjects: including trigonometry, English, reading, writing
...I have also written two books and numerous supplemental materials for Houghton Mifflin, Prentice Hall and others. I do all my tutoring at the library in Surprise on Bullard, across the street
from the baseball stadium. Below you will find a summation of my teaching experiences: 2011 - Present...
10 Subjects: including trigonometry, geometry, algebra 2, ASVAB
...In most cases, however, there are rules for correct spelling; in others there are none, and correct spelling simply requires rote memorization. The most interesting words are those we call homonyms and
heteronyms. Finally, puns are the most fun to learn and to use.
30 Subjects: including trigonometry, reading, chemistry, English
...I have extensive knowledge in Adobe Flash, its usage in Windows and its application in web-based headings in windows. I have this knowledge due to many computer classes in high school as well
as using Adobe flash myself in programming as well as creation of Flash-based visuals. I have been studying Aiki-Jujitsu for about 16 years now.
40 Subjects: including trigonometry, reading, English, chemistry
rate of change problem
March 27th 2013, 07:29 AM #1
Hi, I'm new here and I would be very grateful if you guys can help me with these 2 exercises :-)
1. A container in the shape of a right circular cone of height 20 cm and radius 5 cm is held vertex downward and filled with water, which then drips out from the vertex at the rate of 5 cm^3/s. Find
the rate of change of the height of water in the cone when it is half empty (measured by volume) <= not quite sure here why the change of height is measured by volume, but this one I take from
'The core course for A-level', so it's rather confusing.
2. A 5 m ladder is held against a wall. The top of the ladder is sliding down the wall at the rate of 1 m/s. Find the rate of change of the angle between the ladder and the ground when the
vertical distance between the top of the ladder and the foot of the wall is 3 m.
I know how this type of exercise is normally solved, but I just can't manage to apply it to these two.
Re: rate of change problem
Hi koplan,
For the first question, you will need to use similar triangles to help you find the answer.
Let's say r is the radius and h is the height. Then use similar triangles to write r in terms of h (like comparing two fractions).
Then differentiate the volume equation written in terms of h.
Then you will need to plug in the h you found for the half-volume point.
Question 2:
You will need to use the sine function and differentiate.
Since sin t = opp/hyp
t is your angle
opp is the height
hyp is the ladder, which is constant
Hope this helps
Re: rate of change problem
For the first one, draw the cone and label it; mark the height and the corresponding radius. By similar triangles we get h/r = 20/5 = 4, which gives r = (1/4)h.
Volume of cone = (1/3) pi r^2 h = (1/3) pi (h/4)^2 h = (1/48) pi h^3
Now differentiate: dV/dt = d[(1/48) pi h^3]/dt = (1/48) pi 3h^2 dh/dt = (pi/16) h^2 dh/dt. It is given that the volume is decreasing by 5 cm^3/s, therefore we have
-5 = (pi/16) h^2 dh/dt. Now I am sure you can proceed further.
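The derivation above can be double-checked numerically; a sketch (the height at the half-empty point comes from solving (1/48)·pi·h^3 = V_full/2):

```python
import math

# Cone: height 20 cm, radius 5 cm, so r = h/4 and V(h) = (1/48)*pi*h^3.
V_full = (1/3) * math.pi * 5**2 * 20             # = 500*pi/3 cm^3
h_half = (48 * (V_full / 2) / math.pi) ** (1/3)  # solve (1/48)*pi*h^3 = V_full/2

# dV/dt = (pi/16)*h^2 * dh/dt  =>  dh/dt = (dV/dt) / ((pi/16)*h^2)
dV_dt = -5.0                                     # cm^3/s (water dripping out)
dh_dt = dV_dt / ((math.pi / 16) * h_half**2)

print(f"height at half volume: {h_half:.3f} cm")  # 10 * 4**(1/3), about 15.874
print(f"dh/dt: {dh_dt:.4f} cm/s")                 # about -0.1011 cm/s
```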
Re: rate of change problem
Thanks guys, the first one was solved correctly :-)
But I'm still stuck with the second one. With the sine function, I have z = sin t = h/5, so dz/dh = 1/5, which means (dz/dh).(dh/dt) = (1/5).(-1) = -1/5 (constant). So regardless of the
height h, the rate of change of sin t is always -1/5 per second? And by the way, how can I deduce the rate of change of the angle from the rate of change of its sine?
Re: rate of change problem
if the distance from wall is x,
x^2 + h^2 = 25
x = √(25-h^2)
when h=3, x = √(25-9) = √16
sinθ = h/5
2x dx/dt + 2h dh/dt = 0
So, dh/dθ
h/5 = sinθ
h = 5sinθ
dh/dθ = 5cosθ = 5(x/5) = 5(√16/5)= 4
Let me know if that was helpful
edit: No, it wasn't - I did a mistake
Last edited by dokrbb; March 28th 2013 at 06:26 AM.
Re: rate of change problem
I think your use of "t" is confusing you! On the left side of the equation, sin(t), you mean t to be the angle but when you differentiate, you are using t as the time.
Instead write the angle as " $\theta$" so that $sin(\theta)= h/5$ so that $cos(\theta)\frac{d\theta}{dt}= -1/5$.
Re: rate of change problem
With cos(θ).(dθ/dt) = -1/5, then dθ/dt = (-1/5).(1/cosθ). When the vertical distance between the top of the ladder and the foot of the wall is 3 m, then cosθ = 4/5, so dθ/dt = (-1/5).(5/4) = -1/4. Am
I right? If I am, how can I put this result (dθ/dt = -1/4) into words?
Re: rate of change problem
Do you mean putting "(dθ/dt = -1/4) into words"? It's the rate of change of the angle with respect to time: -1/4 radians per second. It's negative because the angle decreases as the ladder slides down.
hope this helps
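The -1/4 rad/s answer can be verified numerically (a sketch):

```python
import math

# Ladder of length 5 m; top at height h sliding down at dh/dt = -1 m/s.
# From sin(theta) = h/5:  cos(theta) * dtheta/dt = (1/5) * dh/dt
h = 3.0
dh_dt = -1.0
cos_theta = math.sqrt(25 - h**2) / 5      # = 4/5 when h = 3
dtheta_dt = dh_dt / (5 * cos_theta)

print(dtheta_dt)  # -0.25 rad/s
```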
basic shop math test
• Home > basic shop math test - Free Download
□ Software Search For basic shop math test
□ Math Basic Operations Practice is a program for learning and ... and practicing basic addition, subtraction, multiplication and division operations. You can ... take a test of your level. You
solve problems as you ... questions for tests and choose which operations to include and how ... in the test . You can enter your questions. In the unregistered ... ...
□ Learn and Practice Basic Maths in quick bursts with this Windows based application. Then compare your best times with people all around the world at www.kingofmath.net. Suitable for Primary
School children or anyone wanting to test their Basic Maths skills .
□ Solve common machine shop and other trades trigonometry and math problems at a price every trades person can ... the Trades Math Calculator fast, easy and accurate and affordable.Get your ...
□ Download | Screenshot | Review | Demo | | size:5,592K
Tags :
speeds and feeds
bolt circle
cutting speed
milling speed
machinist calculator
taper calculations
screw thread measure
ball nose cutter
machinist helper
machining math
drill charts
□ Highly interactive tutorials and self-test system for individual e-learning, home schooling, college and ... reviews of basic concepts, interactive examples, and standard problems with
randomly ... parameters. The self- test system allows selecting topics and length for a ... test, saving test results, and getting the ...
□ - is an open source shopping cart solution that allows businesses to tailor their ... web sites shopping cart to their needs. The free version of ... all the basic components that a vendor
can use to accept ... online, including: basic credit card validation, an unlimited product catelog, template-based ... ...
□ Speed Math 3.0 is a very easy and interesting Kids ... Kids elementary math games for learning Arithmetic. The fun math games software has been designed for ease of ... learning the basics of
Arithmetic an enjoyable learning experience for children. ... Kids elemntary math games software uses a fun Racing Game concept ... ...
□ Test Preparation Secrets is your complete guide to increasing ... type of test. Strategies for every type of test - including, Multiple choice tests, Short Answer tests, Math tests, Essay
tests, Open Book tests, Number Problem Tests, Short Answer/Fill in the Blank tests, True False tests, Verbal Analogies ...
□ The Trades Math Calculator has been developed to quickly solve common ... common machine shop and other trades trigonometry and math problems at a price every trades person can ... the Trades
Math Calculator fast, easy and accurate and affordable .
□ Divmath kids math online game. Were here to help you with your Math Homework! If youre having difficulties with a math problem, "Ask Us a Question". FREE MATH ON-LINE TUTORING SERVICES.
Online math tutoring service designed primarily for preparation for the ... games Free math lessons and math homework help from ...
□ Math Logic 4.0 is a Math Lesson Plans software for children that is an ... component of math lesson plans. Math Logic is a fun and straightforward computerized method ... and solving math
problems for school students and is used for math lesson plans in nursery and schools. The Math Lesson Plans software has been designed for ease ... ...
03. The scalar product (Iloczyn skalarny)
The scalar product
We already know a little about vectors: what they are and what they can be used for. We also know that we can add and subtract vectors, and multiply a vector by a number, i.e. by a scalar. Now we will
see that a vector can also be multiplied by another vector, and that this can be done in two ways. Here we will study the multiplication whose result is a number (a scalar), not a vector. This
product is called the scalar product.
The scalar product of vectors $\vec{a}$ and $\vec{b}$ is the number (scalar) defined by the formula
${\vec{a} \circ \vec{b} = |\vec{a}| \cdot |\vec{b}| \cdot \cos\varphi,}$
where $\varphi$ is the angle between the vectors $\vec{a}$ and $\vec{b}$ attached at a common initial point.
Figure 1. Illustration of the scalar product
The scalar product of two vectors is therefore equal to the product of their lengths multiplied by the cosine of the angle between them. At the same time
$\vec{a} \circ \vec{b} = |a| \cdot |b| \cdot \cos\varphi = |b| \cdot |a| \cdot \cos\varphi = \vec{b} \circ \vec{a}.$
PROPERTIES of the scalar product:
Commutativity (W1)
${\vec{a} \circ \vec{b} = \vec{b} \circ \vec{a}.}$
Compatibility with multiplication by a scalar (W2)
${(k\vec{a}) \circ \vec{b} = k(\vec{a} \circ \vec{b}).}$
Distributivity over vector addition (W3)
${\vec{a} \circ (\vec{b} + \vec{c}) = \vec{a} \circ \vec{b} + \vec{a} \circ \vec{c}.}$
Calculate the scalar product of vectors with given lengths $|\vec{a}|=\sqrt{10}$ and $|\vec{b}|=\sqrt{40}$, attached at the same point, knowing that the angle between them is $60^{\circ}$.
$\vec{a} \circ \vec{b} = \sqrt{10}\cdot\sqrt{40}\cdot\cos 60^{\circ} =\sqrt{400}\cdot\frac{1}{2}=10.$
Calculate the scalar product $\vec{a} \circ \vec{a}$.
$\vec{a}^2 = \vec{a} \circ \vec{a} = |\vec{a}|\cdot |\vec{a}|\cdot\cos 0^{\circ} = |\vec{a}|^2.$
• The scalar product $\vec{a}\circ\vec{a}$ is denoted $\vec{a}^2$ and called the square of the vector.
• The square of a vector is equal to the square of its length, that is $\vec{a}^2=|\vec{a}|^2$.
To calculate the value of the scalar product this way, we need to know the lengths of the two vectors and the angle between them. There is a second way to calculate the scalar product, using the coordinates of the vectors.
The scalar product of vectors in the plane, $\vec{a}$ and $\vec{b}$ with coordinates $[x_a,y_a]$ and $[x_b,y_b]$ respectively, is the number (scalar) defined by the formula
$\vec{a} \circ \vec{b} = x_a \cdot x_b + y_a \cdot y_b.$
Figure 2. The unit vectors (versors) of the coordinate axes: a) in the plane, b) in space
In a similar way, we can also calculate the scalar product of vectors in three-dimensional space.
The scalar product of vectors in three-dimensional space, $\vec{a}=[x_a,y_a,z_a]$ and $\vec{b}=[x_b,y_b,z_b]$, is the number (scalar) defined by the formula
$\vec{a} \circ \vec{b} = x_a \cdot x_b + y_a \cdot y_b + z_a \cdot z_b.$
Since the square of the length of the vector $\vec{a}=[x_a,y_a]$ is equal to the scalar product of the vector with itself, i.e.
$|\vec{a}|^2 = \vec{a} \circ \vec{a} = x_a \cdot x_a + y_a \cdot y_a = x_a^2+y_a^2$
we obtain the already familiar formula for the length of the vector: $|\vec{a}|=\sqrt{x_a^2+y_a^2}.$
Similar relations appear for vectors in three-dimensional space
$|\vec{a}|^2 = \vec{a} \circ \vec{a} = x_a \cdot x_a + y_a \cdot y_a + z_a \cdot z_a = x_a^2+y_a^2+z_a^2$
so the length of the vector $\vec{a}$ can be calculated from $|\vec{a}|=\sqrt{x_a^2+y_a^2+z_a^2}.$
Calculate the scalar product of vectors $\vec{a}=[1,-3]$ and $\vec{b}=[-6,2]$ acting at the same point.
This time we will use the second method of calculating the scalar product.
$\vec{a} \circ \vec{b} = 1 \cdot (-6) + (-3) \cdot 2 = (-6) + (-6) = -12.$
Calculate the scalar product of vectors $\vec{a}=[1,-3,3]$ and $\vec{b}=[-6,2,4]$ acting at the same point.
This time we will also use the second method of calculating the scalar product.
$\vec{a} \circ \vec{b} = 1 \cdot (-6) + (-3) \cdot 2 + 3 \cdot 4 = (-6) + (-6) + 12 = 0.$
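Both coordinate examples above can be checked with a few lines of code (a sketch; the helper function `dot` is mine, not part of the lesson):

```python
# Coordinate form of the scalar product, in the plane or in space.
def dot(a, b):
    assert len(a) == len(b)
    return sum(x * y for x, y in zip(a, b))

print(dot([1, -3], [-6, 2]))        # 1*(-6) + (-3)*2 = -12
print(dot([1, -3, 3], [-6, 2, 4]))  # -6 - 6 + 12 = 0 (the vectors are perpendicular)
```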
It should be noted that if the vectors are perpendicular, the angle between them is $90^{\circ}$, and since $\cos 90^{\circ}=0$ the scalar product is 0. The same conclusion can be drawn in the
opposite direction: if the scalar product of two (nonzero) vectors is zero, these vectors are perpendicular.
• The orthogonality condition for vectors: $\vec{a} \circ \vec{b} = 0$ if and only if $\vec{a} \perp \vec{b}$.
Let's take our discussion one step further. If the angle between the vectors is acute, i.e. $\varphi\in (0^{\circ}, 90^{\circ})$, then $\cos\varphi > 0$ (the cosine of an angle in the first quadrant
is positive), and therefore the whole scalar product $\vec{a} \circ \vec{b} = |a| \cdot |b| \cdot \cos\varphi > 0$, since the lengths of vectors are always positive.
Reasoning similarly, if the angle between the vectors is obtuse, i.e. $\varphi\in (90^{\circ}, 180^{\circ})$, then $\cos\varphi < 0$ (the cosine of an angle in the second quadrant is
negative), and therefore the whole scalar product $\vec{a} \circ \vec{b} = |a| \cdot |b| \cdot \cos\varphi < 0$.
Combining the two ways of computing the scalar product allows us to calculate the angle (more precisely, the cosine of the angle) between vectors with known coordinates.
The angle $\varphi$ between vectors $\vec{a}=[x_a,y_a]$ and $\vec{b}=[x_b,y_b]$ can be calculated as follows
$\cos\varphi=\frac{\vec{a}\circ\vec{b}}{|\vec{a}|\cdot|\vec{b}|}=\frac{x_a \cdot x_b + y_a \cdot y_b}{|\vec{a}|\cdot|\vec{b}|}.$
From now on, to simplify the notation and improve readability, instead of $\vec{a}\circ\vec{b}$ we will write simply $\vec{a}\vec{b}$.
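The angle formula can be packaged into a small routine (a sketch; the function names are illustrative):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def length(a):
    return math.sqrt(dot(a, a))

def angle_deg(a, b):
    """Angle between vectors a and b, in degrees, via cos(phi) = (a.b)/(|a||b|)."""
    return math.degrees(math.acos(dot(a, b) / (length(a) * length(b))))

print(angle_deg([1, 0], [0, 1]))  # 90 degrees: perpendicular, dot product is 0
print(angle_deg([1, 0], [1, 1]))  # 45 degrees
```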
Calculate $(2\vec{a}-3\vec{b})5\vec{c}$.
Scalar multiplication can be performed like ordinary multiplication of linear polynomials, so using properties W2 and W3 we have:
$(2\vec{a}-3\vec{b})\circ 5\vec{c} = 10\,\vec{a}\vec{c} - 15\,\vec{b}\vec{c}.$
Calculate $(2\vec{a}-3\vec{b})(5\vec{c}+6\vec{d})$.
Proceeding in the same way as in the previous example, we obtain:
$(2\vec{a}-3\vec{b})(5\vec{c}+6\vec{d}) = 10\,\vec{a}\vec{c} + 12\,\vec{a}\vec{d} - 15\,\vec{b}\vec{c} - 18\,\vec{b}\vec{d}.$
Calculate $(\vec{a}-\vec{b})^2$.
Figure 3. The difference of vectors
As before we can calculate:
$(\vec{a}-\vec{b})^2 = \vec{a}^2 - 2\,\vec{a}\vec{b} + \vec{b}^2.$
Bearing in mind that the square of a vector is equal to the square of its length, and how we calculate the scalar product, we can write:
$|\vec{a}-\vec{b}|^2 = |\vec{a}|^2 + |\vec{b}|^2 - 2\,|\vec{a}|\,|\vec{b}|\cos\varphi,$
where $\varphi$ is the angle between vectors $\vec{a}$ and $\vec{b}$.
Let's denote the difference of vectors $\vec{a}-\vec{b}$ from the previous example by $\vec{c}$; then the last result can be written as:
$|\vec{c}|^2 = |\vec{a}|^2 + |\vec{b}|^2 - 2\,|\vec{a}|\,|\vec{b}|\cos\varphi.$
Now we can view this arrangement of vectors as a triangle whose sides are equal in length to the lengths of the individual vectors, i.e. $|\vec{a}|=a$, $|\vec{b}|=b$ and $|\vec{c}|=c$; then we obtain
$c^2 = a^2 + b^2 - 2ab\cos\varphi,$
where $\varphi$ is the angle between the sides of the triangle with lengths $a$ and $b$.
Figure 4. The illustration of the cosine rule
This relation is commonly known as the law of cosines (also called Carnot's theorem). It allows us to calculate the length of the third side of a triangle knowing the lengths of two sides and
the angle between them, and to calculate the angle between any two sides knowing the lengths of all three sides. In addition, it is worth noting that if the angle between the sides
(vectors) is $90^{\circ}$, this rule takes the form of the well-known Pythagorean theorem. In other words, the Pythagorean theorem is a special case of the more general law of cosines.
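A quick numerical sanity check of the law of cosines (a sketch; the 3-4-5 triangle is chosen to show the Pythagorean special case):

```python
import math

# Law of cosines: c^2 = a^2 + b^2 - 2ab*cos(phi).
# With phi = 90 degrees it reduces to the Pythagorean theorem.
a, b, phi = 3.0, 4.0, math.radians(90)
c = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(phi))
print(round(c, 9))  # 5.0 -- the familiar 3-4-5 right triangle
```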
Let's look at another concept discussed earlier. A vector lying on an axis, whose length equals the unit of length and whose direction agrees with the direction of the axis, is called the
unit vector (versor) of that axis. The unit vectors of the axes $OX$, $OY$, $OZ$ of the coordinate system are denoted by $\vec{i}$, $\vec{j}$ and $\vec{k}$. So, for example, a vector $\vec{a}$ of length $|\vec{a}|=a$
lying on the axis $OX$ can be written as $a\vec{i}$, that is $\vec{a}=a\vec{i}$.
Note also that
$\vec{i}^2=\vec{i}\circ\vec{i}=1\cdot 1\cdot\cos 0^{\circ}=1, \;\;\;\;\;\vec{j}^2=\vec{j}\circ\vec{j}=1\cdot 1\cdot\cos 0^{\circ}=1$
$\vec{i}\circ\vec{j}=\vec{j}\circ\vec{i}=1\cdot1\cdot\cos 90^{\circ}=0.$
However, in the case of three-dimensional space
$\vec{i}\circ\vec{i}=1 \;\;\;\;\; \vec{j}\circ\vec{j}=1 \;\;\;\;\; \vec{k}\circ\vec{k}=1$
$\vec{i}\circ\vec{j}=\vec{j}\circ\vec{i}=0 \;\;\;\;\; \vec{i}\circ\vec{k}=\vec{k}\circ\vec{i}=0 \;\;\;\;\; \vec{j}\circ\vec{k}=\vec{k}\circ\vec{j}=0.$
Figure 5. Vector’s coordinates
Denoting by $\vec{i}$, $\vec{j}$, $\vec{k}$ the unit vectors of the axes, we obtain
$\vec{a_x}=a_x\vec{i} \;\;\;\;\; \vec{a_y}=a_y\vec{j} \;\;\;\;\; \vec{a_z}=a_z\vec{k}$
and since $\vec{a}=\vec{a_x}+\vec{a_y}+\vec{a_z}$, we have
$\vec{a}=a_x\vec{i}+a_y\vec{j}+a_z\vec{k},$
which, as we know, we can write briefly as $\vec{a}=[a_x,a_y,a_z]$.
Now we calculate the scalar product of vectors $\vec{a}=a_x\vec{i}+a_y\vec{j}+a_z\vec{k}$ and $\vec{b}=b_x\vec{i}+b_y\vec{j}+b_z\vec{k}$. Expanding the product and using the relations above (the mixed products of different unit vectors vanish), we get
$\vec{a} \circ \vec{b} = a_x b_x + a_y b_y + a_z b_z.$
Interesting information and notes
The scalar product is widely used in economics. Take two vectors: let the first vector $\bf x$ represent the quantities $x_1$, $x_2$, $x_3$ of goods in a basket, and let the second vector
$\bf c$ represent the prices $c_1$, $c_2$, $c_3$ of those goods. Then the value of the basket can be computed with the scalar product.
${\bf c}\circ{\bf x}=c_1x_1+c_2x_2+c_3x_3.$
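A minimal sketch of the basket-value computation (the prices and quantities below are made up for illustration):

```python
# Value of a basket as a scalar product of the price vector and the quantity vector.
prices     = [2.50, 1.20, 4.00]   # c1, c2, c3 (illustrative)
quantities = [3, 10, 2]           # x1, x2, x3 (illustrative)

value = sum(c * x for c, x in zip(prices, quantities))
print(value)  # 2.50*3 + 1.20*10 + 4.00*2 = 27.5
```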
The scalar product also occurs commonly in physics. We can use it to describe, for example, work $W$, defined as the product of the magnitudes of the force $\vec{F}$ and the displacement $\vec{L}$ and the cosine of the angle $\varphi$ between the force vector and the displacement vector
$W=|\vec{F}| \cdot |\vec{L}| \cdot \cos\varphi$
or, more briefly, using vector notation
$W=\vec{F} \circ \vec{L}.$
Another example is the description of the flux of the electrostatic field (which appears in Gauss's law, one of the basic laws of electromagnetism). The flux $\Phi$ of the electrostatic field intensity is the scalar product of the field intensity vector $\vec{E}$ and the surface vector $\vec{S}$
$\Phi=\vec{E} \circ \vec{S},$
or, written out at greater length,
$\Phi=|\vec{E}| \cdot |\vec{S}| \cdot \cos\varphi$
where $\varphi$ is the angle between the field intensity vector and the surface vector. Of course, examples from physics could be multiplied indefinitely, but let's stop at these few. It is hard to imagine classical mechanics or quantum mechanics without the use of this kind of notation.
CHECK what you KNOW
Underline the correct answer.
1. Scalar product is marked by the symbol:
• $\cdot$
• $\times$
• $\circ$.
2. The scalar product is:
3. The angle between the vectors $\vec{a}=[1,-3,4]$ and $\vec{b}=[-1,0,-2]$ is
Student’s note:
1. Until now we may have denoted a product in various ways, for example using the symbols "$\cdot$", "$\circ$" or "$\times$". Now we have to be more precise because, in fact, each of these three symbols denotes a different operation: "$\cdot$" is the product of numbers, "$\circ$" is the scalar product of vectors, and "$\times$" is the vector product of vectors, which we will talk about in the next section.
2. $\vec{a}\circ\vec{b}=0 \Leftrightarrow \varphi=90^{\circ}$ (right angle); $\vec{a}\circ\vec{b}>0 \Leftrightarrow \varphi<90^{\circ}$ (acute angle); $\vec{a}\circ\vec{b}<0 \Leftrightarrow \varphi>90^{\circ}$ (obtuse angle).
3. In order to establish, knowing the coordinates of the vertices of a triangle, what type of triangle we are dealing with, we have to calculate the coordinates of the individual vectors and then calculate the scalar products between each pair of vectors (note that the tested vectors should start at the same vertex of the triangle). If the first tested angle is right or obtuse, this means that the triangle is right-angled or obtuse-angled respectively. Otherwise, it is necessary to examine the next angle. If that angle is right or obtuse, it decides the case. If it is also acute, then we need to examine the third angle so that we can ultimately determine what type of triangle we are dealing with.
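The procedure from this note can be sketched in Python (the helper names are our own): at each vertex, form the two edge vectors starting there and check the sign of their scalar product, using the rules from note 2.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def classify_triangle(A, B, C):
    # For each vertex, take the two edge vectors starting at that vertex;
    # the sign of their scalar product gives the angle type.
    for p, q, r in ((A, B, C), (B, A, C), (C, A, B)):
        u = tuple(qi - pi for qi, pi in zip(q, p))
        v = tuple(ri - pi for ri, pi in zip(r, p))
        d = dot(u, v)
        if d == 0:
            return "right"
        if d < 0:
            return "obtuse"
    return "acute"  # all three scalar products were positive

print(classify_triangle((0, 0), (1, 0), (0, 1)))  # right
print(classify_triangle((0, 0), (2, 0), (1, 3)))  # acute
```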
4. Sometimes vectors are denoted in bold, for example $\bf a$ and $\bf b$; then the scalar product is written as ${\bf a}\circ{\bf b}$.
5. Examine the applet: http://www.geogebra.org/en/upload/files/Polish/yuri1969/aaaa/iloczyn_wektorowy.html
6. Try to find as many other uses of the scalar product as you can in describing known laws, theorems and relationships.
Teacher's notes:
1. Scalar product, for example in the economy can be determined in other ways such as $({\bf c,x})$ or $({\bf c|x})$.
2. The basket does not have to consist of three goods; it is worth generalizing to $n$ goods. Then, with the vector $\bf x$ of goods $x_1$, $x_2$, $\dots$, $x_n$ and the vector $\bf c$ of the prices $c_1$, $c_2$, $\dots$, $c_n$ of the goods in this basket, the value of the basket is calculated as
$({\bf c|x})=c_1x_1+c_2x_2+ \cdots +c_nx_n.$
3. Explain the concepts of such terms as goods and a basket of goods.
4. In economics, $\bf x$ is called a vector of goods, while $\bf c$ is called a covector of the prices of the basket.
5. It is worth introducing the symbol of a sum, then
$({\bf c|x})=\sum^n_{i=1}c_ix_i.$
6. Discuss how to identify the type of a triangle given its vertices.
7. Give other examples of the use of scalar products (especially in physics or economics).
8. Encourage students to explore the applet.
9. The scalar product should be mentioned in mathematics, physics and economics lessons.
Quasi-Non-Destructive Evaluation of Yield Strength Using Neural Networks
Advances in Artificial Neural Systems
Volume 2011 (2011), Article ID 607374, 8 pages
Research Article
Department of Applied Mechanics, Indian Institute of Technology Delhi, Hauz Khas, New Delhi 110016, India
Received 26 January 2011; Accepted 17 April 2011
Academic Editor: Ping Feng Pai
Copyright © 2011 G. Partheepan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The objective of this paper is to delineate a method for determining the yield strength of a material in a virtually nondestructive manner. Conventional test methods for predicting the yield strength
require the removal of large material samples from the in-service component, which is impractical. In this paper, the power of neural networks in predicting the yield strength from the data obtained
by conducting a tension test on a newly developed dumb-bell-shaped miniature specimen is demonstrated using the self-organizing capabilities of the ANN. The input to the neural network is the breakaway load obtained from the miniature test, and the output obtained from the model is the yield strength value. The yield strength estimated by the neural network is found to be in good agreement (<5% error) with the actual value from the standard test. Neural network models are convenient and powerful tools for practical applications in solving various problems in engineering.
1. Introduction
In the past, designers/engineers avoided material failure by designing the structures for stresses well below the yield strength of the material. However, when the same approach was used with high
strength materials under extreme conditions, there were many catastrophic failures [1]. The material behavior of in-service structural components is changing due to in service loading, aging,
irradiation, and other adverse conditions leading to embrittlement, which requires in situ monitoring of the material's state. It is not feasible to assess the degraded properties of this equipment and these components by performing standard destructive mechanical tests, because this would mean damaging the equipment and rendering it nonfunctional, in addition to a long shutdown [2]. In
order to determine material parameters at various locations, for example, in weldments or gradient materials, the size of the material taken out for a test specimen should be very small but
representative. Over the years, the subsize specimen test and miniature specimen test techniques have been evolved to estimate the various mechanical properties without seriously affecting the
functionality of the components. In this respect, the small punch test, ball indentation test, disk bend test, shear punch test, and many other tests are reported for evaluation of degraded material
properties of in-service components [3].
A variety of approaches have been employed to determine mechanical properties from small disks and coupons. The earliest use of the 3mm diameter disk specimen was an attempt by Huang et al. [4] to
assess the tensile ductility of a set of irradiated steels. They used simply supported TEM disks of 3mm diameter, which were displaced by a spherical tipped indenter. Mao and Takahashi [5] and Mao
et al. [6] investigated the deformation behaviour using small punch test. Kullen et al. [7] employed a shear punch test technique to determine the mechanical properties of neutron irradiated 9Cr-1Mo
and 12Cr-1Mo steels. Pandey and Bhowmick [8] further used the disk bend test specimen to predict tensile properties and fracture toughness for a number of steels. Recently, Husain [9] employed small
punch test on different materials having varieties of strength to establish a general relationship between the data obtained from small punch test and the yield strength.
Developments in artificial intelligence have led researchers to look into the solution of nonlinear problems concerning the physical and mechanical properties of metal alloys [10]. The field of ANN was born in an attempt to solve complex problems that are difficult for conventional computers or otherwise. The capacity of neural networks to identify the underlying functional relationship in the data, while neglecting noisy and less significant input data, strongly supports the applicability of such methods for the prediction of material properties and other problems in engineering.
Recently, in material science and engineering fields, the researchers have used neural network models to predict the mechanical properties of materials. Ozerdem and Kolukisa [11] used Cu-Sn-Pb-Zn-Ni
contents as input to the neural network to get the yield strength value of cast alloys. Similar exercise was performed using C%, Si%, and Mn% contents as inputs to predict the yield strength of
carbon steel bars [12].
Forouzan and Akbarzadeh [13] predicted the effect of thermomechanical parameters such as preheating time and temperature, finish rolling temperature and the final annealing temperature on mechanical
properties such as yield strength, ultimate tensile stress, and elongation using neural network. ANN was used to predict the mechanical properties of forged TC11 titanium alloy by using the
deformation temperature and the true strain as the inputs to the network [14]. In the same manner, Guo and Sha [15] modelled the correlation between processing parameters and properties of maraging
steels using artificial neural network. The input parameters of their neural network model consist of alloy composition, processing parameters vis-à-vis cold deformation degree, ageing temperature
and ageing time, and working temperature. The outputs obtained from the ANN model include ultimate tensile strength, yield strength, elongation, and reduction in area. A similar exercise was performed for titanium alloys [16]. The effect of alloy composition, microstructure, and working temperature on the tensile properties of gamma-based titanium aluminides was also studied using artificial neural
networks [17]. On the other hand, Huang et al. [18] used ANN to predict the flexural strength and fracture toughness of ceramic tool materials. Zhang et al. [19] employed ANN model for the prediction
of properties of composite materials. Zeng et al. [20] developed an expert system using neural network to identify cheaper sintering process to efficiently achieve the desired mechanical properties
with respect to cost and time. ANN was also used to predict ductile properties of cast iron [21] and mechanical properties of annealed thin strip [22]. Huber and Tsakmakis [23] performed spherical
indentation test to obtain constitutive properties from the resultant test data using neural network. They obtained a data base for the training and validation of the neural network by carrying out
numerous finite element simulations using commercially available ABAQUS code, for various sets of material parameters. Abendroth and Kuna [24] used feed-forward neural network model to find the
material properties and tested this on three steels using small punch test.
From [25], it was noted that different test techniques have their own advantages and limitations. In the present investigation, an ANN model is developed using the miniature test load-elongation diagram to obtain the yield strength of materials. In the present research, a dumb-bell-shaped miniature specimen has been designed and used to conduct the miniature test through the experimental setup described elsewhere [26]. The miniature specimen is fabricated from the material taken out of the structure for which the properties are to be determined. Then the miniature specimen test is conducted, and the load-elongation diagram along with the yield load is obtained. Finally, using the trained ANN model and the yield load, the yield strength of the structure can be obtained.
2. Neural Network Model
Neural Network models belong to the class of data driven approaches instead of model-driven approaches [27]. Their ability to learn by example makes artificial neural networks very flexible and
powerful. Therefore, neural networks have been intensively used for solving regression and classification problems in many fields. Recently, neural networks have been applied in areas that require computational techniques, such as pattern recognition, optical character recognition, outcome prediction, problem classification, and prediction models for the mechanical properties of materials [28].
3. Background of ANN
The basic unit of a neural network is the artificial neuron, as shown in Figure 1. Neural networks analyze data by passing it through several simulated processors that are interconnected and highly distributed. A neuron processes information by accepting inputs $x_i$, which are multiplied by a set of weights $w_i$. The neuron then nonlinearly transforms the sum of the weighted inputs by means of a transfer function $f$ into an output value $y$, as shown in (1). The output of a neuron thus depends on the neuron's input and on its transfer function. Sometimes a bias $b$ is also added to the network; the bias is then regarded as a weight with a constant input of 1 [29]:
$y=f\left(\sum_i w_i x_i + b\right). \;\; (1)$
In general, a neural network model is trained to map a particular input to a desired target output until the network output matches the target. Hence, the neural network can learn the system. This mode of learning is known as supervised learning. The learning ability of a neural network depends on its architecture and on the algorithmic method applied during training. In addition, the training procedure can be stopped once the network output is close enough to the desired/actual output. Thereafter, the network is ready to produce outputs for new input parameters that were not used during the learning procedure.
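A single artificial neuron of this kind can be sketched in a few lines of Python (an illustration only, not the toolbox implementation used later in the paper); the logistic function plays the role of the "logsig" transfer function:

```python
import math

def logsig(z):
    # logistic sigmoid transfer function, maps any z into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x, w, b):
    # y = f(sum_i w_i * x_i + b), cf. equation (1)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return logsig(z)

# With zero weights and bias the weighted sum is 0, so the output is logsig(0) = 0.5
print(neuron([1.0, 2.0], [0.0, 0.0], 0.0))  # 0.5
```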
Multilayer feed-forward networks are one of the cornerstones of research in ANN. In such networks, the neurons are ordered in layers, with an input layer, hidden layers, and an output layer of neurons. The information (data) contained in the input layer is mapped to the output layer through the hidden layers. Each unit can send its output only to units in the next higher layer and receives its input from the layer below.
The neural network used belongs to the class of multilayer perceptrons, or feed-forward neural networks, as shown in Figure 2. The two-layered neural network model has "logsig" as the activation function in its first layer, that is, the hidden layer, and "purelin" as the activation function in its second layer. The number of neurons in the hidden layer was varied from 50 to 100 to achieve the optimal
network model, whereas the number of neurons in the output layer was 1. In the present study, training of neural network is carried out using MATLAB version 6.5.
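The forward pass of such a two-layer model (logsig hidden layer, purelin output) can be sketched as follows; this is illustrative Python, not the MATLAB implementation used in the study, and the weight values are made up:

```python
import math

def logsig(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, b1, w2, b2):
    # hidden layer: logsig of weighted sums; output layer: purelin (identity)
    hidden = [logsig(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# two inputs, three hidden neurons, one linear output
W1 = [[0.1, -0.2], [0.3, 0.0], [-0.1, 0.4]]
b1 = [0.0, 0.0, 0.0]
w2 = [0.5, -0.5, 1.0]
b2 = 0.1
print(forward([1.0, 2.0], W1, b1, w2, b2))
```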
The prediction of yield strength using neural network is made using the load-elongation curve obtained from the miniature test on dumb-bell specimen as shown in Figure 3. In the present
investigation, the study has been conducted on various types of steel, that is, chromium steel (H11), low carbon steel (LC), and medium carbon steel (MC), as well as an aluminum alloy (AR66). The
compositions of above materials are presented in Table 1. The Al-alloy was in optimum aged condition whereas the various steels were in normalized condition.
4. Prediction of Yield Strength
The yield strength prediction scheme used in the present study is shown in Figure 4. The procedure for the evaluation of yield strength using a miniature specimen test along with the neural network
model is delineated in a stepwise manner as follows.
4.1. Input to the Neural Network
The neural network model for the prediction of yield stress is trained by providing the yield load (breakaway load) as the input and yield stress as target output for different materials. The yield
load is obtained from the load-elongation curve of miniature test on dumb-bell shaped specimens. A typical load-elongation diagram for the specimen from the Al-alloy (AR66) is shown in Figure 5.
Different data points on the load-elongation diagram are identified as pairs $(P_i,\delta_i)$, where $\delta_i$ is the elongation in the specimen due to the corresponding tensile force $P_i$ at the data point. These data points are obtained directly from the miniature test. Similarly, the load-elongation diagrams were obtained for all the other materials considered in the present study. Whenever a distinct yield point does not appear in the load-elongation curve, the yield load is obtained as the yield offset corresponding to 0.2% of the elongation at the maximum load, as shown in Figure 6. The breakaway load obtained from the miniature test load-elongation curve for each of the materials considered in the present study is shown in Table 2.
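The offset construction described above can be sketched numerically. The routine below is our own illustrative Python, not the authors' code; it estimates the elastic slope from the first data point and returns the load where the curve meets the offset line:

```python
def offset_yield_load(elong, load, offset):
    # Elastic slope estimated from the first non-zero data point.
    k = load[1] / elong[1]
    for i in range(1, len(elong)):
        # Offset line L = k * (e - offset), parallel to the elastic part.
        if load[i] < k * (elong[i] - offset):
            # Intersect the offset line with the segment (i-1, i).
            e0, e1 = elong[i - 1], elong[i]
            l0, l1 = load[i - 1], load[i]
            t = (k * (e0 - offset) - l0) / ((l1 - l0) - k * (e1 - e0))
            return l0 + t * (l1 - l0)
    return None  # curve never crosses the offset line

# Idealized elastic-perfectly-plastic record: elastic slope 100, plateau at 100
elong = [0.0, 1.0, 2.0, 3.0]
load = [0.0, 100.0, 100.0, 100.0]
print(offset_yield_load(elong, load, 0.006))  # 100.0 (the plateau load)
```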
4.2. Preparation of Database
(a) The database required for the training of the ANN was prepared by performing experimental miniature tests on materials for which the values of yield strength are already known. The experimental load-elongation curves from the miniature test were obtained for all the materials. From each miniature test load-elongation curve, the value of the breakaway load was obtained; these loads are denoted $P_i$, and the corresponding yield strength values of the materials are denoted $\sigma_{yi}$, as shown in Figure 4. They are stored in the database, where the subscript $i$ varies from 1 to $p$ ($p$ = number of different materials on which the miniature test is performed).
(b) Apart from the experimental data, data were also collected for various materials from the available literature. The collected data comprise tensile properties (such as Young's modulus, yield strength, etc.) and the true tensile stress-true strain diagrams of the materials. Finite element simulation was performed using ABAQUS [30] to obtain FE-simulated miniature test load-elongation curves for these materials. The finite element simulation of the miniature test is described elsewhere [31]. From the FE-simulated miniature test load-elongation curves, the values of the breakaway load were obtained; these are denoted $P_j$, and the corresponding yield strength values of the materials are denoted $\sigma_{yj}$ (see Figure 4), where $j$ varies from 1 to $q$ ($q$ = number of different materials on which FE simulation of the miniature test is performed). The database thus contains $p+q$ breakaway loads and the corresponding yield strength values of different materials. Of the available data, 75% were used for training the neural network and the remainder were used for testing the network.
4.3. Training of Neural Network
Neural network was trained by using the data stored in the database as described above. The inputs to the neural network are the breakaway load point from the database and the corresponding yield
strength values are the target output to the neural network. The network is said to be trained when the output from the ANN matches closely with the target output within the tolerance. The training
includes selection of suitable training algorithm for a particular problem. This is a very crucial step. It depends on the complexity of the problem, the number of data points in the training set,
the number of weights and biases in the network, and the performance goal as well. The training is done by changing the weights between the layers with an appropriate learning function. In order to find the best-suited training algorithm, a series of training algorithms, namely Traingdx, Traingda, Trainoss, Trainscg, Traincgp, Traincgb, Traincgf, and Trains, based on various transfer functions, have been tested for achieving the best performance goal, that is, the mean square error and the computational time should both be minimal. The following parameters were adjusted: transfer function, optimization technique, and the number of neurons in the hidden layer of the network. Efficiency was improved by changing the number of hidden layers and the corresponding transfer function based on the training algorithm. It is noted from [25] that the Trainscg algorithm gave the best result, with the lowest mean square error in the minimum computational time. The trained network model is now ready for predicting the yield strength value from the miniature test load-elongation curve of the desired material.
4.4. Yield Strength Prediction Scheme
With the trained neural network model available, the value of yield strength can be obtained for any material by obtaining the miniature test load-elongation curve, either through experiment or by FE simulation if the tensile properties such as Young's modulus, yield strength, and the true stress-true strain diagram of the material are known. The complete procedure is pictorially presented in Figure 4.
4.5. Performance of the Neural Network
The performance of the neural network was checked for the prediction of yield strength over the whole data set and is shown in Figure 7. The network response was analyzed by plotting a linear regression between the network outputs (predictions) and the corresponding targets (experimental values) for the whole dataset. The neural network predicted the values to an appreciable degree of accuracy. The correlation coefficient is also shown in Figure 7.
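The regression check amounts to computing a correlation coefficient between predictions and targets; a minimal Python sketch with made-up data:

```python
import math

def correlation(preds, targets):
    # Pearson correlation coefficient R between predictions and targets
    n = len(preds)
    mp = sum(preds) / n
    mt = sum(targets) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(preds, targets))
    sp = math.sqrt(sum((p - mp) ** 2 for p in preds))
    st = math.sqrt(sum((t - mt) ** 2 for t in targets))
    return cov / (sp * st)

# Perfectly linear agreement gives R = 1
print(correlation([1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # 1.0
```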
In order to test the effectiveness of the developed neural network model, miniature tests were performed on different alloys chosen for the present study whose chemical composition is given in Table
1. In order to compare with the result obtained from the neural network, the standard uniaxial tensile test was also performed on these materials according to the standard ASTM E8-03 [32]. The standard uniaxial
tensile test specimens were prepared according to ASTM E8 and tests were carried out using Zwick/250 universal testing machine. A calibrated extensometer was used to measure the elastic and plastic
strains. The least count of the extensometer used was 1μm. Material properties such as Young’s modulus, yield strength, tensile strength, true stress, and true strain diagrams were obtained for all
the materials from the uniaxial tensile test. The values of different parameters obtained from the uniaxial tensile test are shown in Table 3.
4.6. Miniature Test Load-Elongation Curve
With the trained neural network model now available, yield strength predictions may be made quickly for new sets of materials without resorting to detailed finite element simulations or the standard
testing methods. Such techniques require a lot of time and large volume of the materials for preparing the standard test specimen, which is impractical in in-service structures. In the present case,
the miniature test load-elongation diagrams are obtained for each material by performing the miniature tensile test on specimens prepared from the respective materials. From the resulting load-elongation diagrams, the breakaway load is obtained for each material. This load vector is given as input to the neural network. The yield strength obtained for the materials used in the
present investigation using the trained neural network are shown in Figure 8. The figure also shows the experimental yield strength from standard tensile test for the respective materials.
Further analysis of the prediction accuracy was carried out using statistical analysis of the error of the neural network predictions. The neural network predictions are compared to the corresponding experimental values. The relative errors are calculated using the relationship
$E\,(\%)=\left|\frac{t-o}{t}\right|\times 100,$
where $t$ is the experimental (measured) output value and $o$ is the value predicted by the neural network. Table 4 shows the error in the neural-network-predicted yield strength values.
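The relative-error calculation can be reproduced directly; a minimal Python sketch using the AR66 values quoted in the next paragraph (experimental 506MPa, predicted 484.7MPa), which gives about 4.2%:

```python
def relative_error_percent(measured, predicted):
    # |measured - predicted| / measured * 100
    return abs(measured - predicted) / measured * 100.0

print(round(relative_error_percent(506.0, 484.7), 2))  # 4.21
```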
It is observed from Table 4 that the ANN predicted yield strength values are in good agreement with those of the experimental values. For AR66 Al-alloy material, the yield strength value predicted by
ANN is 484.7MPa, whereas the experimental value is 506MPa. The percentage error in this case is maximum, that is, 4.20% when compared to other materials. For all other materials, the percentage
error is within 4%. Hence, the model can be used to predict the yield strength of any unknown material using the output of the miniature test on the dumb-bell-shaped specimen.
5. Validation of Neural Network Model
After testing the proposed neural network model with the data obtained from experimental miniature tests of different materials, the trained network was also validated with data obtained from the literature [9, 33]. The required neural network input (i.e., the load at the breakaway point) for each material was obtained by performing a 2D finite element simulation of the dumb-bell-shaped miniature specimen using the tensile properties given in the literature. The simulations were performed using ABAQUS.
With the trained neural network model readily available, the breakaway load was fed as input to the network, and the neural network predicted the yield strength value of the material. The results are presented in Table
5. It is observed that the yield strength predicted by ANN for the die steel is 482.4MPa in comparison with the experimental evaluated yield strength value of 483MPa, that is, with the error of
0.12%. Similarly, the ANN predicted the yield strength of medium carbon steel as 317.2MPa with the error of 1.49% when compared to its experimental value of 322MPa. Thus, the developed neural
network model, along with the miniature test result, helped in obtaining the yield strength value of the materials in an efficient way.
6. Conclusions
The yield strength value predicted by the neural network model with the Trainscg algorithm is found to corroborate well with the standard test results, as shown in Figure 8 and Table 5. The yield strength of the AR66 Al-alloy predicted by the ANN is 484.7MPa, whereas the experimental value is 506MPa; for the H11 steel the neural network predicted the yield strength as 475.9MPa, an error of 0.40% compared to its experimental value of 474MPa. Similarly, the yield strength values predicted by the ANN for the low carbon steel and the medium carbon steel are 354.6 and 305.1MPa, with errors of 3.90% and 1.70%, respectively, when compared to their respective experimental yield strength values. The present neural network model also predicted the yield strength of the materials taken from the available literature to an appreciable degree, with prediction errors varying from 0.12% to 1.50% for the various materials. The approach seems to have
potential to predict the other mechanical properties of the material, which could be used in remaining life estimation of the costly energy producing plants and other structures.
In addition, the proposed novel dumb-bell-shaped miniature specimen can be prepared from the material taken out noninvasively from the location of interest where adverse conditions are present in the
in-service component. The properties of the material at this location of interest can be obtained nondestructively through the approach employed in the present study.
References
1. M. E. Haque and K. V. Sudhakar, "ANN back-propagation prediction model for fracture toughness in microalloy steel," International Journal of Fatigue, vol. 24, no. 9, pp. 1003–1010, 2002.
2. J. R. Foulds and R. Viswanathan, "Determination of the toughness of in-service steam turbine disks using small punch testing," Journal of Materials Engineering and Performance, vol. 10, no. 5, pp. 614–619, 2001.
3. G. E. Lucas, "Review of small specimen test techniques for irradiation testing," Metallurgical and Materials Transactions A, vol. 21, pp. 1105–1119, 1990.
4. F. M. Huang, M. L. Hamilton, and G. L. Wire, "Bend testing for miniature disks," Nuclear Technology, vol. 57, pp. 234–242, 1982.
5. X. Mao and H. Takahashi, "Development of a further-miniaturized specimen of 3mm diameter for TEM disk small punch tests," Journal of Nuclear Materials, vol. 150, no. 1, pp. 42–52, 1987.
6. X. Mao, M. Saito, and H. Takahashi, "Small punch test to predict ductile fracture toughness ${\text{J}}_{\text{IC}}$ and brittle fracture toughness ${\text{K}}_{\text{IC}}$," Scripta Metallurgica et Materialia, vol. 25, no. 11, pp. 2481–2485, 1991.
7. P. S. Kullen, H. H. Smith, and D. J. Michel, "The shear punch measurement of the mechanical properties of selected unirradiated and irradiated alloys," Journal of Nuclear Materials, vol. 158, pp. 57–63, 1988.
8. R. K. Pandey and S. Bhowmick, "Studies on miniature test techniques for the assessment of residual life," in Proceedings of Trends in Mechanical Engineering Education and Research, pp. 542–547, New Delhi, India, 1999.
9. A. Husain, Determination of Mechanical Behaviour of Materials Using Miniature Specimen Test Technique and Finite Element Method, Ph.D. thesis, Department of Applied Mechanics, Indian Institute of Technology Delhi, New Delhi, India, 2003.
10. C. C. Tsao, "Prediction of flank wear of different coated drills for JIS SUS 304 stainless steel using neural network," Journal of Materials Processing Technology, vol. 123, no. 3, pp. 354–360, 2002.
11. M. S. Ozerdem and S. Kolukisa, "Artificial neural network approach to predict the mechanical properties of Cu–Sn–Pb–Zn–Ni cast alloys," Materials and Design, vol. 30, no. 3, pp. 764–769, 2009.
12. M. S. Ozerdem and S. Kolukisa, "Artificial neural network approach to predict mechanical properties of hot rolled, nonresulfurized, AISI 10xx series carbon steel bars," Journal of Materials Processing Technology, vol. 199, no. 1–3, pp. 437–439, 2008.
13. S. Forouzan and A. Akbarzadeh, "Prediction of effect of thermo-mechanical parameters on mechanical properties and anisotropy of aluminum alloy AA3004 using artificial neural network," Materials and Design, vol. 28, no. 5, pp. 1678–1684, 2007.
14. M. Li, X. Liu, and A. Xiong, "Prediction of the mechanical properties of forged TC11 titanium alloy by ANN," Journal of Materials Processing Technology, vol. 121, no. 1, pp. 1–4, 2002.
15. Z. Guo and W. Sha, "Modelling the correlation between processing parameters and properties of maraging steels using artificial neural network," Computational Materials Science, vol. 29, no. 1, pp. 12–28, 2004.
16. S. Malinov, W. Sha, and J. J. McKeown, "Modelling the correlation between processing parameters and properties in titanium alloys using artificial neural network," Computational Materials Science, vol. 21, no. 3, pp. 375–394, 2001.
17. J. McBride, S. Malinov, and W. Sha, "Modelling tensile properties of gamma-based titanium aluminides using artificial neural network," Materials Science and Engineering A, vol. 384, no. 1-2, pp. 129–137, 2004.
18. C. Z. Huang, L. Zhang, L. He et al., "A study on the prediction of the mechanical properties of a ceramic tool based on an artificial neural network," Journal of Materials Processing Technology, vol. 129, no. 1–3, pp. 399–402, 2002.
19. Z. Zhang, P. Klein, and K. Friedrich, "Dynamic mechanical properties of PTFE based short carbon fibre reinforced composites: experiment and artificial neural network prediction," Composites Science and Technology, vol. 62, no. 7-8, pp. 1001–1009, 2002.
20. Q. Zeng, J. Zu, L. Zhang, and G. Dai, "Designing expert system with artificial neural networks for in situ toughened Si$_3$N$_4$," Materials and Design, vol. 23, no. 3, pp. 287–290, 2002.
21. M. Perzyk and A. W. Kochański, "Prediction of ductile cast iron quality by artificial neural networks," Journal of Materials Processing Technology, vol. 109, no. 3, pp. 305–307, 2001.
22. P. Myllykoski, J. Larkiola, and J. Nylander, "Development of prediction model for mechanical properties of batch annealed thin steel strip by using artificial neural network modeling," Journal of Materials Processing Technology, vol. 60, no. 1–4, pp. 399–404, 1996.
23. N. Huber and C. Tsakmakis, "Determination of constitutive properties from spherical indentation data using neural networks. Part II: the case of pure kinematic hardening in plasticity laws," Journal of the Mechanics and Physics of Solids, vol. 47, no. 7, pp. 1589–1607, 1999.
24. M. Abendroth and M. Kuna, "Determination of deformation and failure properties of ductile materials by means of the small punch test and neural networks," Computational Materials Science, vol. 28, no. 3-4, pp. 633–644, 2003.
25. G. Partheepan, Determination of mechanical properties of materials using newly developed dumb-bell shaped miniature specimen and numerical techniques, Ph.D. thesis, Department of Applied
Mechanics, Indian Institute of Technology Delhi, New Delhi, India, 2006.
26. G. Partheepan, D. K. Sehgal, and R. K. Pandey, “Design and usage of a simple miniature specimen test setup for the evaluation of mechanical properties,” International Journal of Microstructure
and Materials Properties, vol. 1, no. 1, pp. 38–50, 2005.
27. K. Chakraborty, K. Mehrotra, C. K. Mohan, and S. Ranka, “Forecasting the behavior of multivariate time series using neural networks,” Neural Networks, vol. 5, no. 6, pp. 961–970, 1992. View at
28. M. Çöl, H. M. Ertunç, and M. Yilmaz, “An artificial neural network model for toughness properties in microalloyed steel in consideration of industrial production conditions,” Materials and Design
, vol. 28, no. 2, pp. 488–495, 2007. View at Publisher · View at Google Scholar · View at Scopus
29. L. Fausett, Fundamentals of Neural Networks: Architectures, Algorithms, and Applications, Prentice-Hall, New Jersey, NJ, USA, 1994.
30. K. Hibbit and Sorensen Inc., ABAQUS Users Manual I–II version 6.3, USA, 2002.
31. G. Partheepan, D. K. Sehgal, and R. K. Pandey, “Finite element application to estimate in-service material properties using dumb-bell miniature specimen,” in Proceedings of the First
International Conference on Advances and Trends of Engineering Materials and Their Applications (AES-ATEMA '07), pp. 369–377, Montreal, Canada, August 2007.
32. ASTM Standard Test Method for Tension Testing of Metallic Materials, E8-03, Annual book of ASTM standard, 2003.
33. A. Husain, D. K. Sehgal, and R. K. Pandey, “An inverse finite element procedure for the determination of constitutive tensile behavior of materials using miniature specimen,” Computational
Materials Science, vol. 31, no. 1-2, pp. 84–92, 2004. View at Publisher · View at Google Scholar · View at Scopus | {"url":"http://www.hindawi.com/journals/aans/2011/607374/","timestamp":"2014-04-21T16:49:32Z","content_type":null,"content_length":"96097","record_id":"<urn:uuid:ad7a960a-c6d4-48df-a2ee-4deb4069d547>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00186-ip-10-147-4-33.ec2.internal.warc.gz"} |
The propagation of a liquid bolus along a liquid-lined flexible tube
Howell, P. D. and Waters, S. L. and Grotberg, J. B. (2000) The propagation of a liquid bolus along a liquid-lined flexible tube. Journal of Fluid Mechanics, 406 . pp. 309-335.
Official URL: http://journals.cambridge.org/bin/bladerunner?REQU...
We use lubrication theory and matched asymptotic expansions to model the quasi-steady propagation of a liquid plug or bolus through an elastic tube. In the limit of small capillary number, asymptotic
expressions are found for the pressure drop across the bolus and the thickness of the liquid film left behind, as functions of the capillary number, the thickness of the liquid lining ahead of the
bolus and the elastic characteristics of the tube wall. These results generalise the well-known theory for the low-capillary-number motion of a bubble through a rigid tube (Bretherton 1961). As in
that theory, both the pressure drop across the bolus and the thickness of the film it leaves behind vary like the two-thirds power of the capillary number. In our generalised theory, the coefficients
in the power laws depend on the elastic properties of the tube.
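For orientation, the rigid-tube scaling that these results generalise can be stated explicitly. In Bretherton's analysis (restated here for context; the symbols are the conventional ones, not notation quoted from the paper), the deposited film thickness h satisfies, at leading order,

```latex
\frac{h}{R} \approx 1.34\,\mathrm{Ca}^{2/3},
\qquad
\mathrm{Ca} = \frac{\mu U}{\sigma} \ll 1,
```

where R is the tube radius, μ the liquid viscosity, σ the surface tension and U the bubble speed; the pressure drop obeys a Ca^{2/3} law as well. As the abstract notes, wall elasticity alters the coefficients in such power laws while the two-thirds exponent is unchanged.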
For a given thickness of the liquid lining ahead of the bolus, we identify a critical imposed pressure drop above which the bolus will eventually rupture, and hence the tube will reopen. We find that
generically a tube with smaller hoop tension or smaller longitudinal tension is easier to reopen. This flow regime is fundamental to reopening of pulmonary airways, which may become plugged through
disease or by instilled/aspirated fluids.
Repository Staff Only: item control page | {"url":"http://eprints.maths.ox.ac.uk/110/","timestamp":"2014-04-21T07:05:21Z","content_type":null,"content_length":"16120","record_id":"<urn:uuid:f9e67de1-07c9-4d89-8c91-e696aee3adf2>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00506-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Theorem on Convex sets
Jeremy Clark jeremy.clark at wanadoo.fr
Thu Oct 6 09:18:50 EDT 2005
Here is a proof for N dimensions:
We can assume that the surface of A is made up of "small" (hyper-)
polygons A_i. Project each of these outwards onto the surface of B
(which needn't be convex) along lines parallel to a normal to A_i.
You get shadows B_i on B which do not intersect (convexity of A) and
must be of area not smaller than that of A_i (because we chose
normals to A_i), so the surface area of A must be smaller than that
of B.
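The inequality is easy to sanity-check numerically in two dimensions. The Python sketch below (written for this thread, not taken from it; `convex_hull` and `perimeter` are my own helper names) builds an outer convex hull B from random points and an inner hull A from the subset lying in a smaller disc, then compares perimeters:

```python
import math
import random

def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns the hull vertices in order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def perimeter(poly):
    n = len(poly)
    return sum(math.hypot(poly[(i + 1) % n][0] - poly[i][0],
                          poly[(i + 1) % n][1] - poly[i][1]) for i in range(n))

random.seed(0)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
B = convex_hull(pts)                                            # outer convex set
A = convex_hull([p for p in pts if p[0]**2 + p[1]**2 <= 0.25])  # inner subset
print(perimeter(A) <= perimeter(B))  # → True
```

This is only a spot check of the planar case, not a proof; the N-dimensional statement follows, for instance, from the fact that the nearest-point projection onto a convex body is 1-Lipschitz.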
Jeremy Clark
On Oct 5, 2005, at 9:40 pm, joeshipman at aol.com wrote:
> If A contained in B are convex sets in the plane, the boundary of A is
> no larger than the boundary of B.
> Is this true in N dimensions, and if so, who proved it?
> In 2 dimensions, I have trouble proving the theorem without rather
> advanced tools. Does anyone know a simple proof?
> -- Joe Shipman
> _______________________________________________
> FOM mailing list
> FOM at cs.nyu.edu
> http://www.cs.nyu.edu/mailman/listinfo/fom
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2005-October/009136.html","timestamp":"2014-04-17T04:19:48Z","content_type":null,"content_length":"3641","record_id":"<urn:uuid:b224706e-e0b9-4447-b785-1b07c39b9788>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00135-ip-10-147-4-33.ec2.internal.warc.gz"} |
Supplements to Articles in Mathematics Magazine
Supplements to "What Do You Study?" by Brendan W. Sullivan
Supplement to "Perplexities Related to Fourier's 17 Line Problem," by Gerald L. Alexanderson and John E. Wetzel:
Perplexed Computations (pdf) by Daniel Lichtblau and Wacharin Wiciramala.
Supplement to "The Graph Menagerie: Abstract Algebra and the Mad Veterinarian," by Gene Abrams and Jessica K. Sklar, in the June issue:
Supplements to "Modeling a Diving Board" by Michael A. Karls and Brenda M. Skoczelas: Appendix showing the solution to the model, Mathematica notebook, and the same notebook in PDF format.
Supplement to "Casting Light on Cube Dissections" by Greg Frederickson in the December 2009 issue: Casting Light on Cube Dissections.
Supplement to "Keeping Dry: The Mathematics of Running in the Rain" by Dan Kalman and Bruce Torrence in the October 2009 issue. These demonstrations allow you to dynamically control both weather
conditions and the size and shape of a 3-dimensional traveler as he runs through the rain. There are two versions. The file Torrence-RunningInTheRain-Source.nb requires Mathematica (version 6 or
higher). The file Torrence-RunningInTheRain.nbp requires the free Mathematica Player application, which is available here.
Supplement to "Dirichlet: His Life, His Principle, and His Problem", by Pamela Gorkin and Joshua Smith in the October 2005 issue: A Mathematica package file, which can be used to show solutions to
the Dirichlet problem on the unit disk with rational boundary data.
Supplement to "How to Maximize Your Chances of Getting a Color Match" by Ramin Naimi and Roberto Carlos Pelayo in the April 2005 issue: A PDF file containing proofs of some of the more technical
Supplement to "Simpson Symmetrized and Surpassed," by Daniel J. Velleman in the February 2004 issue: A Mathematica notebook that implements all the algorithms from the article.
Supplement to "Means Appearing in Geometric Figures" by Howard Eves in the October 2003 issue: An animated demonstration (in Geometer's Sketchpad) of the means in a trapezoid, by Shannon Umberger
Supplement to "Symmetric Polynomials in the Work of Newton and Lagrange" by Greg St. George in the December 2003 issue: The author's translation of sections 30 and 31 of Lagrange's Reflexions sur la
Resolution Algebrique des Equations.
Supplements to "Plotting the Escape—An Animation of Parabolic Bifurcations in the Mandelbrot Set" by Anne Burns in the April 2002 issue: Plotting orbits and Viewing the animation.
Supplement to "Two Reflected Analyses of Lights Out" by Óscar Martín-Sánchez and Cristóbal Pareja-Flores, in the October 2001 issue.
Supplement to "The Track of a Bicycle Back Tire" by Steven R. Dunbar, Reinier J. C. Bosman, and Sander E. M. Nooij, in the October 2001 issue.
Envelopes of Zags, a supplement to "ZigZags" by Peter Giblin, in the October 2001 issue (Requires a Java-enabled web browser.).
Supplement to the "Proof Without Words: The Pythagorean Theorem," by Jose Gomez in the April 2001 issue: an interactive display. (Requires a Java-enabled web browser.)
Supplements to the article "A Postmodern View of Fractions and the Reciprocals of Fermat Primes" by Rafe Jones and Jan Pearce, in the April 2000 issue:
Supplements to the article "Apollonian Cubics: An Application of Group Theory to a Problem in Euclidean Geometry," by Paris Pamfilos and Apostolos Thoma, in the December 1999 issue: computer programs
for further study of the Apollonian cubics.
Supplement to the Note "Proof Without Words: Geometry of Subtraction Formulas" by Leonard M. Smiley in the December 1999 issue: Geometry of Addition Formulas.
Supplements to the article "Woven Rope Friezes" by Frank Farris, Santa Clara University, and Dr. Nils Kr. Rossing, in the February 1999 issue: | {"url":"http://www.maa.org/publications/periodicals/mathematics-magazine/mathematics-magazine-supplements?device=desktop","timestamp":"2014-04-17T22:40:39Z","content_type":null,"content_length":"100844","record_id":"<urn:uuid:7fc4f67e-69e4-4135-8bf4-23703585955f>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00558-ip-10-147-4-33.ec2.internal.warc.gz"} |
Le Monde puzzle [48]
This week(end), the Le Monde puzzle can be (re)written as follows (even though it is presented as a graph problem): Given a square 327×327 symmetric matrix A, where each non-diagonal entry is in {1,2,3,4,5} and …, does there exist a triplet (i,j,k) such that …? Solving this problem in R is very easy. We can create
Le Monde puzzle [43]
Here is the puzzle in Le Monde I missed last week: Given a country with 6 airports and a local company with three destinations from each of the six airports, is it possible to find a circular trip
with three intermediate stops from one of the airports? From all of the airports? One more airport
Le Monde puzzle [42]
An interesting sudoku-like puzzle for this week's Le Monde: A 10×10 grid is filled by a random permutation of {0,…,99}. The 4 largest figures in each row are coloured in yellow and the 4 largest values in each column are coloured in red. What is the range of the number of yellow-and-red
Le Monde puzzle [41]
The current puzzle in Le Monde this week is again about prime numbers: The control key on a credit card is an integer η(a) associated with the card number a such that, if the card number is c=ab, its
key η(c) satisfies η(c)=η(a)+η(b)-1. There is only one number with a key equal to 1 and
Le Monde puzzle [40]
The puzzle in Le Monde this week is called the “square five” (sic!): Two players each have twenty-five cards with five times each of the digits 1,2,3,4,5. They alternate putting one card on top of
the pile, except that they can instead take an arbitrary number of consecutive cards from the top of the pile
Le Monde puzzle [34]
Since the puzzle in this week (-end) edition of Le Monde is not (easily) solvable via an R program, I chose to go back to an older puzzle that my students can solve. Eleven tokens are distributed
around a 200 meter perimeter-long ring. They all start moving at the same speed, 18km/h, in
Le Monde puzzle [38]
Since I have resumed my R class, I will restart my resolution of Le Monde mathematical puzzles…as they make good exercises for the class. The puzzle this week is not that exciting: Find the four
non-zero different digits a,b,c,d such that abcd is equal to the sum of all two digit numbers made by picking
Candy branching process
The mathematical puzzle in the latest weekend edition of Le Monde is as follows: Two kids are given three boxes of chocolates with a total of 32 pieces. Rather than sharing evenly, they play the
following game: Each in turn, they pick one of the three boxes, empty its contents in a jar and pick
Solving the rectangle puzzle
Given the wrong solution provided in Le Monde and comments from readers, I went to look a bit further on the Web for generic solutions to the rectangle problem. The most satisfactory version I have
found so far is Mendelsohn’s in Mathematics Magazine, which gives as the maximal number for a grid. His theorem is
Wrong puzzle of the week [w10]?!
In the weekend supplement to Le Monde, the solution of the rectangle puzzle is given as 32 black squares. I am thus… puzzled!, since my R program there provides a 34 square solution. Am I missing a
hidden rectangle in the above?! Given that the solution in Le Monde is not based on a precise | {"url":"http://www.r-bloggers.com/tag/mathematical-puzzle/page/4/","timestamp":"2014-04-20T18:47:19Z","content_type":null,"content_length":"39050","record_id":"<urn:uuid:1c2bdbe8-ea22-4053-b0e4-da3f94561e76>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00194-ip-10-147-4-33.ec2.internal.warc.gz"} |
User David White
bio website dwhite03.web.wesleyan.edu
location Middletown, CT
visits member for 3 years, 4 months
seen 1 hour ago
stats profile views 5,377
I am a graduate student at Wesleyan University, currently in my final year. I study algebraic topology with Mark Hovey, specifically questions involving localization and model categories. My work has
applications to the theory of algebras over operads, equivariant homotopy theory, and motivic homotopy theory. I am also interested in creating a stronger bridge between infinity categories and model
categories, and in particular in determining whether methods from the former can be applied to the latter.
I also hold a masters degree in computer science from Wesleyan, which I received under Danny Krizanc. My thesis problem applied methods of discrete mathematics and probability theory to settle a
conjecture involving autonomous agents moving on a graph. I've put aside computer science for the time being but may return to it to resolve out-standing questions or to advise a project.
My email address is dwhite03 at wesleyan dot edu
22h comment Chain homotopy of non-abelian category
This mathoverflow thread might be useful: mathoverflow.net/questions/430/…
2d awarded Nice Answer
14 reviewed Approve suggested edit on How to flow submanifolds?
11 answered Truncations of E_infinity algebras
Apr Algebraic topology vs. category theory
10 revised edited body
12 awarded Good Answer
Mar Adjoining adjoints in a 2-category
11 revised added 1 characters in body
Mar In what generality does Eilenberg-Watts hold?
9 comment Yes, that's exactly right. The paper which puts a model structure on the module category is Algebras and Modules in Monoidal Model Categories, by Schwede-Shipley. Also, Mark Hovey's
preprint Monoidal Model Categories if you want to see what happens when the Schwede-Shipley hypotheses are weakened.
Mar Non-examples of model structures, that fail for subtle/surprising reasons?
9 revised deleted 140 characters in body
8 awarded Good Answer
Mar Non-examples of model structures, that fail for subtle/surprising reasons?
8 revised Corrected an erroneous statement
Mar In what generality does Eilenberg-Watts hold?
8 revised added 21 characters in body
8 answered In what generality does Eilenberg-Watts hold?
28 answered Is a 'join' of two cofibrations a cofibration?
27 reviewed Leave Closed Replacing the Lie commutator with something else
27 reviewed No Action Needed Is there a simple closed form solution for the joint density distribution of an exponential distribution with a rate given by a Gamma distribution?
Feb Dimension of a ring after localization
27 revised deleted 1 characters in body
26 reviewed No Action Needed What can representations of affine Weyl groups do?
26 reviewed No Action Needed On the relation between the set of extreme points of the unit ball of $M(X)$ and $M(X)^{**}$
Feb reviewed No Action Needed Strong morita equivalence and morphims between $C^*-$algebras | {"url":"http://mathoverflow.net/users/11540/david-white?tab=activity","timestamp":"2014-04-19T18:11:06Z","content_type":null,"content_length":"46747","record_id":"<urn:uuid:a3eee08c-03ef-421c-b04a-ab53dc92edf8>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00304-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: August 2010 [00114]
Re: Issuing Function Calls within a Plot command
• To: mathgroup at smc.vnet.net
• Subject: [mg111518] Re: Issuing Function Calls within a Plot command
• From: "David Park" <djmpark at comcast.net>
• Date: Wed, 4 Aug 2010 05:48:03 -0400 (EDT)
The reason you obtain nothing is that Plot has the Attribute HoldAll,
which means that the plot expression is not immediately evaluated. Instead,
Mathematica substitutes x values into it and only then evaluates. So say
that Mathematica evaluates at x = 0.1. Then the plot expression becomes
D[myFunction[0.1],0.1], which makes no sense and will not return a value.
The solution is to tell Mathematica to evaluate the derivative first.
myFunction[x_] := x^2;
Plot[D[myFunction[x], x] // Evaluate, {x, 0, 5}]
In this case, there is an even simpler construction.
Plot[myFunction'[x], {x, 0, 5}]
If you had a function with parameters, defined as f[a_,b_][x_]:= expression
in a, b and x, then you could use f[2,3]`[x] say as a plot function of the
derivative with specific parameters.
If you have a plot function that involves complicated processing, such as
Integral, then it is worthwhile to define the plot function outside of any
Plot statement and make certain you know what you are dealing with - by
looking at the resulting expression, or evaluating at some points, or
looking at it with Table. This untangles the plot function definition from
the plotting algorithm and simplifies any debugging.
David Park
djmpark at comcast.net
From: Andrew DeYoung [mailto:adeyoung at andrew.cmu.edu]
It seems like Mathematica has difficulty plotting calls to other
functions. Is this true? For example, can you help me understand why
the following does not plot?
If I write
myFunction[x_] := x^2;
Plot[D[myFunction[x], x], {x, 0, 5}]
nothing plots.
If I write
myFunction[x_] := x^2;
Plot[D[myFunction[x][x], x], {x, 0, 5}]
again, nothing plots.
However, if I compute the derivative of the function outside of the
Plot command,
myFunction[x_] := x^2;
myDerivative = D[myFunction[x], x]
Plot[myDerivative, {x, 0, 5}]
the derivative of x^2 (i.e., 2x) plots correctly.
Can anyone please help me understand why my first two tries do not
work, but my third try does?
Many thanks in advance.
Andrew DeYoung
Carnegie Mellon University
adeyoung at andrew.cmu.edu | {"url":"http://forums.wolfram.com/mathgroup/archive/2010/Aug/msg00114.html","timestamp":"2014-04-21T04:47:26Z","content_type":null,"content_length":"27206","record_id":"<urn:uuid:d9ab4456-3ca6-4905-a319-8a1cc3f764c2>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00424-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computation - Operations Research Models and Methods
The programs described in this site must be installed as add-ins for Microsoft Excel. This article explains how to install add-ins and what to do in case of
how to get access to the source code.
This add-in places the item Add ORMM on the OR_MM menu. Selecting this item presents a dialog box that allows easy installation and removal of the other add-ins in the collection.
A second dialog easily loads demonstration workbooks.
The Mathematical Programming Add-in constructs optimization models of several kinds:
The models can be solved using the Solver Add-in or one of the solution add-ins provided in this collection. The Solver Add-in comes with Excel, and it can solve linear
programming, integer programming and nonlinear programming models. Additional add-ins are available in this package to solve linear programming, integer programming, network
programming and transportation models.
Mathematical Programming Model Builder: This add-in is an alternative to the Mathematical Programming Add-in for linear and integer programming models. Its chief contribution is an alternative format for models. The formats are: tableau, dual tableau, column list and row list. The latter two store constraint coefficients in lists rather than in a matrix.
Stochastic programming explicitly recognizes uncertainty by using random variables for some aspects of the problem. With probability distributions assigned to the random variables,
an expression can be written for the expected value of the objective to be optimized. The expected value is to be maximized or minimized.
For stochastic programming, some variables are to be set by a decision maker, these are the decision variables, while some model parameters are determined by chance, and these are
the random variables. To model a stochastic programming problem, one must answer: When must the decision maker make decisions relative to the time when the random variables are
realized? The several different answers to this question lead to different computational methods.
Linear/Integer Programming Solver: This add-in provides an algorithm that solves linear programming or integer programming problems. It can be used instead of the Excel Solver for linear models created by the Mathematical Programming add-in. When the LP/IP Solver add-in is installed, a new item appears in the OR_MM menu, LP_Solver. Clicking on this item presents a form allowing the selection of three options: show a sensitivity analysis, show detailed information about the steps of the primal simplex procedure, and start the solution using the current solution value.
Network Flow Programming Solver: A network solution algorithm is provided by this add-in. The Excel Solver actually solves network problems by solving the underlying linear programming problem. Network algorithms are generally faster than linear programming algorithms for solving problems that can be modeled entirely as networks. The add-in places a Network Solver item on the OR_MM menu. In addition to pure network models, the add-in can solve generalized networks when arc gains are different from 1. It can also solve models that require flows to be integer. Clicking this item presents a dialog with which a number of solution algorithm options are controlled.
The Dynamic Programming Collection is a series of add-ins associated with processes that involve states, actions and events. Many situations can be described by a collection of
mutually exclusive states that are visited sequentially. From each state the decision maker must choose an action. Given the state and action, the next state is determined by an event.
When the situation has only states and events, the model is a Markov Chain. When the situation has only states and actions, the model is a Deterministic Dynamic Program. When the situation has states, actions and events, the model is a Stochastic Dynamic Program, or Markov Decision Process. The collection models and solves all of these problems.
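The add-ins themselves live in Excel/VBA, but the state-action-event structure they solve can be illustrated language-neutrally. Below is a minimal value-iteration sketch in Python for a stochastic dynamic program (Markov decision process); the two-state maintenance example and all of its numbers are invented for illustration and are not part of the collection:

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Bellman iteration: V(s) = max_a [ R(s,a) + gamma * sum_t P(t|s,a) V(t) ]."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(R[s][a] + gamma * sum(P[s][a][t] * V[t] for t in states)
                        for a in actions)
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

states = ["good", "bad"]          # the system's condition
actions = ["maintain", "ignore"]  # what the decision maker may do
P = {  # P[s][a][t]: probability the event moves the system from s to t under a
    "good": {"maintain": {"good": 0.9, "bad": 0.1},
             "ignore":   {"good": 0.6, "bad": 0.4}},
    "bad":  {"maintain": {"good": 0.7, "bad": 0.3},
             "ignore":   {"good": 0.1, "bad": 0.9}},
}
R = {  # immediate reward for taking action a in state s
    "good": {"maintain": 8.0, "ignore": 10.0},
    "bad":  {"maintain": 1.0, "ignore": 2.0},
}
V = value_iteration(states, actions, P, R)
print(V)  # discounted value of starting in each state
```

This is a generic sketch of one solution strategy (value iteration), not a description of the add-ins' internal algorithms.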
The DP Examples section holds example problems, some from the literature, that illustrate the capabilities of the Dynamic Programming Collection.
The DP Data add-in provides the data structure for a selected set of problems used to illustrate the remaining add-ins. This add-in has some interesting problem classes of
operations research and can be revised to include new classes. The DP Data add-in will call the DP Models add-in and fill the forms created by that add-in.
Dynamic Programming Models: The DP Models add-in constructs a form that describes the states, actions and events characterizing a given problem. It is an algebraic model generator similar to GAMS used for Mathematical Programming models. The form constructed by the DP Models add-in holds the definitions of the states, actions and events for the problem, formulas for computing the objective function, and formulas for computing the transitions from one state to another. The forms are filled by the DP Data add-in for certain problem classes. The DP Models add-in constructs lists of states, actions, events, decisions and transitions that are used by the DP Solver add-in. The DP Models add-in can also be used directly for problems not modeled by the DP Data add-in.
Dynamic Programming Solver: The DP Solver add-in creates a form holding lists of states, actions, events, decisions and transitions. The add-in uses these lists with iterative algorithms to find optimum actions for the states. The add-in handles deterministic DP models, stochastic DP models, and discrete-time Markov chain (DTMC) models. Several solution strategies are provided.
Many Excel worksheet models depend on a few design variables. Through formulas, the worksheet contents vary with the values of the design variables and the modeler uses the
worksheet as a "what if" tool. It is often the goal to find the best values. This add-in provides algorithms that search for the best among a perhaps large set of discrete
alternatives. In addition to stand-alone forms that can be used for general optimization on a worksheet, the add-in provides solutions to combinatorial problems encountered in
operations research studies. The problems include mixed integer programming, the traveling salesman problem, the assignment problem, spanning tree problems, optimum path problems
in networks and flow-tree problems.
This add-in creates combinatorial models that use the search methods of the Optimize add-in. The add-in provides a model for the Quadratic Assignment problem, the Minimal Spanning
Tree problem and the Shortest Path Tree problem. Other models may be added in the future. The Optimize add-in must be installed for the Combinatorics add-in to work.
This program models and solves the vehicle routing problem for several vehicles visiting several delivery sites. This uses the methods of the Combinatorics add-in, but the Routing
add-in is a stand-alone program. A full description of the Routing add-in is in the OM/IE section of this site.
This add-in performs operations on a function of continuous variables. The function may depend on many variables and be constructed of arbitrary combinations of Excel functions.
The function will usually be continuous and differentiable. The add-in uses numerical methods to compute the gradient and Hessian matrices (matrices of first and second partial
derivatives). It also computes integrals and moments. The add-in uses a gradient search method to find values of the variables that maximize or minimize the function.
The Random Variables Add-in performs computations associated with probability distributions. Random variables with any of 16 different named distributions may be defined. Functions
compute probabilities of events, inverse probabilities and moments. Up to three distributions may be plotted. Complex probability problems may be solved through Monte-Carlo
The Queuing Add-in performs calculations associated with Poisson queuing models, Non Poisson models and networks of queues. It also performs simulations of multi-channel queuing
This collection has been replaced by the more general dynamic programming collection.
This add-in performs computations for discrete time or continuous time Markov stochastic processes, DTMC or CTMC respectively. The data defining a DTMC model are the probabilities
in the transition matrix, and the data defining a CTMC model are the activity rates in the rate matrix. The add-in performs the following analyses: steady state probabilities,
n-step probabilities, first passage probabilities and expected values, transient probabilities, simulation and absorbing state probabilities. Economic data allows the performance
of economic analyses.
A Markov Decision Process (MDP) adds decisions to Markov analysis. Here states may have several actions available to the decision maker. Actions modify transition probabilities and
have different costs or rewards. The goal is to find an optimum policy that minimizes expected discounted costs or maximizes discounted rewards. This add-in has been replaced by
the more general DP Solver add-in.
Consider a situation in which a series of decisions are to made sequentially. The problem is complicated however in that the results of some of the decisions are not deterministic,
rather they are affected by risk. The goal is to make a series of decisions in order to maximize the expected return. Models and solution methods for this type of problem are the
subjects of decision analysis. The add-in provides extensive programs to build, solve and display decision trees.
This add-in creates multiline simulations useful for analyzing a variety of systems that don't fit the model types handled by the other add-ins. The add-in builds and maintains
worksheets on which simulations are easily built. Packaged models for time series simulation and inventory simulation are included.
This workbook tracks and forecasts hurricanes in the Gulf of Mexico region. The download is an Excel workbook rather than an add-in, and it contains both sample data and the
macros necessary to add new hurricanes, add position data as provided by the National Hurricane Center, make forecasts of future movement, plot data and forecasts on a map and
construct an error analysis after the storm is over.
To train their production managers in the strategy developed, the Proctor and Gamble company developed a simulation model to illustrate the effects of the varying demand and how
the P&G strategy worked. At that time, the simulation was done manually. The manual simulation was computerized in the 1970’s in interactive BASIC and with the advent of
microcomputers became a viable tool in teaching Production Scheduling and Inventory Control. More recently, it has been converted by Paul Jensen to Visual Basic for Applications
and implemented for Microsoft Excel. We call the simulation the P&G Game.
Excel add-ins for several topics related to Operations Management and Industrial Engineering are described in the OM/IE section of this site. | {"url":"http://www.me.utexas.edu/~jensen/ORMM/computation/index.html","timestamp":"2014-04-16T22:21:23Z","content_type":null,"content_length":"51932","record_id":"<urn:uuid:25f90307-bc0e-432b-819b-97c1088ef7a5>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dear sir,
We have an air flow with an FRP fan fitted in it; the fan rotates at approximately 1000 rpm. Can I use this rotation to generate a single-phase 230 volt, 50 Hz supply? If yes, then
please suggest a suitable alternator capacity, and how much load can be run on that alternator?
Thank you.
Generator… following is a procedure you can follow:
1) Determine the area, A, swept out by the fan blades
A = pi x (D^2)/4, where D = blade diameter
2) Determine the air velocity, V, of the air flowing through Area A!
I'll leave that up to you!
3) Determine the mass flow rate, M, of the air flowing through Area A:
M = rho x A x V, where rho = air density.
4) Determine the energy, E, of the air flowing through Area A:
E = (1/2) x M x V^2 = (1/2) x rho x A x V x V^2 = (1/2) x rho x A x V^3
5) Determine the power, P, produced by the air flow.
P equals the change in energy with respect to time; the derivation requires a little calculus (available on request), but the end result is P = (1/2) x rho x A x V^3
6) THE CAVEAT
The conclusion normally reached is, "There is a lot of energy/power in wind!" Unfortunately, in 1919, Albert Betz, a German physicist, determined the theoretical value is unreachable! He proved the
maximum available power is only 59% of that derived above!
7) The CONCLUSION
Thus the practical wind-mill will produce only:
P = (1/2) x Nb x Nm x Ne x rho x A x V^3, where Nb is the Betz factor, and Nm and Ne are the efficiencies of the turbine and generator, respectively!
Regards, Phil Corso
I forgot to mention that virtually all wind-power applications have a Betz factor well below 0.59!
There's also the minor matter of speed of the Fiber Reinforced Plastic fan vs frequency. A 6-pole machine would be required for 1000 RPM @ 50 Hz.
But there are a LOT of other factors to be considered as well.
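Plugging numbers into the procedure above shows how little power is actually available. All input values below are assumed for the sake of example (they are not from the original question):

```python
import math

D = 1.0      # blade diameter, m (assumed)
V = 8.0      # air velocity through the fan, m/s (assumed)
rho = 1.225  # air density at sea level, kg/m^3
Nb = 0.40    # practical Betz-limited factor, below the 0.59 maximum
Nm = 0.90    # turbine mechanical efficiency (assumed)
Ne = 0.85    # generator efficiency (assumed)

A = math.pi * D**2 / 4           # swept area, m^2
P_wind = 0.5 * rho * A * V**3    # raw power in the air stream, W
P_out = P_wind * Nb * Nm * Ne    # deliverable electrical power, W
print(P_wind, P_out)
```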
OpenMx - Advanced Structural Equation Modeling
Hi folks, I am just pasting in here some email discussion between Greg Carey and myself concerning the language interface. In the thread we drifted off to talk about models of assortative mating.
Greg Carey's email:
just looked at openMx and am worried that things might get unnecessarily complicated when dealing with large multivariate problems.
imagine data on twins and families on the WAIS and a factor model. the proposed GUI is like Amos--you will spend so much time drawing paths from 13 variables to X latent factors for father (including
13 specifics), X latent factors for mother (13 more specifics), for son 1, son 2 ... to daughter k that the actual canvas cannot accommodate the number of necessary symbols. in addition, the 13 x 13
covariance matrix between husband's WAIS and wife's WAIS should be predicted by the model of marital assortment. try to express this in jack's RAM notation--it is possible but the only practical way
(that i can see) is to define blocks of the path-coefficient matrix and the exogenous-residual covariance matrix in terms of other, more specific matrices.
anyway, what everyone is doing is perfectly fine--for simple models. for complicated models, i suspect that a different interface is required (one that may, in fact, be accommodated by expansion of
the python script, mxParser.py). in terms of a positive contribution, i suggest that the old logic of Mx be followed but in the following way:
(1) define the parameter matrices containing free and/or fixed parameters [e.g., what is typically done in Group 1]
begin matrices;
covA sy 3 fr
D di 3 fi
StdE di 3
WhatEver fu 3 6
end matrices;
(2) parse a series of statements that fix, free, equate, etc. elements of the above matrices. these would include the FI, FR, EQ, PA, MA, SP statements of Mx
begin Whatever_You_Want_to_Call_This_Section;
PA StdE
MA StdE
23.6 4 9.3
MA WhatEver
.3 .8 .7 0 0 0
0 .7 .3 .2 0 0
0 0 0 0 .8 .5
FI Whatever 1 1 2 1 3 2
end Whatever_You_Want_to_Call_This_Section;
(3) give the algebra (as in the first group in usual Mx Code) but in R syntax
begin algebra
Example = StdE %*% WhatEver %*% transpose(WhatEver)
end algebra
(4) assign the predicted matrices from (3) to the groups in the model
begin PredictGroups
1 CPre1
2 rbind(cbind(Vp,covDZ),cbind(covDZ,Vp))
end PredictGroups
(5) from the above information, it is possible to construct two different numeric vectors:
xfandf = vector of all free and fixed parameters
xf = compact vector of free parameters,
so do the following in the minimization function passed into NAG E04???
(5.a) take the vector xf passed from E04??? and put the appropriate elements into vector xfandf
(5.b) from vector xfandf, construct the parameter matrices specified in BEGIN MATRICES and BEGIN WHATEVER
(5.c) perform the algebra specified in step (3) above
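The vector-to-matrix shuttling in step (5) can be sketched with a boolean free-parameter mask (a numpy illustration of the idea, not Mx or OpenMx syntax; matrix shapes and values are invented):

```python
import numpy as np

# A 3x3 parameter matrix: True marks a free element, False a fixed one.
free = np.array([[True,  False, False],
                 [True,  True,  False],
                 [False, False, True]])
fixed_values = np.zeros((3, 3))  # values used where free is False

def unpack(xf, free, fixed_values):
    """Scatter the optimizer's compact vector xf into the full matrix."""
    M = fixed_values.copy()
    M[free] = xf
    return M

def pack(M, free):
    """Gather the free elements back into a compact vector."""
    return M[free]

xf = np.array([0.5, 0.2, 0.8, 1.3])  # one entry per free parameter
M = unpack(xf, free, fixed_values)
print(M)
```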
----- CHECK THIS CODE OUT ----------------------------------------------
npheno <- 5
begin matrices
Va sy npheno
sigE di npheno
Re st npheno
eta sy npheno
end matrices
begin matrixStuff
PA eta
ST 2.5 DIAG(Va)
ST 0.4 OFFDIAG(Va)
ST 3.0 sigE
end matrixStuff
begin algebra
Vp <- Va + sigE %*% Re %*% sigE
covE <- sigE %*% eta %*% sigE
end algebra
begin predictGroups
1 rbind (cbind(Vp, Va + covE), cbind(Va + covE, Vp))
2 rbind (cbind(Vp, .5*Va + covE), cbind(.5*Va + covE, Vp))
end predictGroups
----- END OF CODE --------------------------------------------------------
bottom line = if you implement this in the existing openMx framework, it will take pages of code. a suitable preprocessor in python, perl, or (if really interested in a potential gui with web
interfaces) java can have a user enter the code and have it translated into Mx equivalent using the above rules.
anyway, just trying to help out.
Editorial Note from Mike Neale: the Pattern command, which is rarely used in Mx, simply identifies which elements should be free parameters without actually assigning parameter numbers (labels in
OpenMx) to them. It is equivalent to free=c(T,F,T) syntax in OpenMx.
Sun, 11/01/2009 - 16:47
Indeed: many complex models
Indeed: many complex models have lots of paths, and the pathic-interface can't handle some of the kinds of syntax which would express these compactly.
It does have some powerful capabilities already, of course: one line can easily build, for instance 13 paths from a latent to WAIS manifests.
It would be good to have some additional syntax to say "and do this for the same-named variables in each of the groups you find in the supermodel" and some other commands that you would give to a
clever automaton with a large drawing board but not much clue about what your scientific question is.
Matrices are a great way of hiding this complexity: nothing like a 5*5 matrix and a couple of 1*1s to make a 100-path monster seem tractable.
Saying that, it would be handy to be able to add and drop paths from matrix models using path syntax, and it seems that this won't be too hard to code (and then share) in R for each of the common
matrix metaphors we use.
Solving this for a two-group Cholesky might be a nice place to start: it just hides the matrix-setting functions (as do the pathic functions), but would make scripts more readable.
Fri, 10/30/2009 - 12:46
Greg Yes, I have been trying
Yes, I have been trying to maintain and retain more compact coding. Note first that OpenMx has retained full matrix algebra specification of models. Second, the path language permits 'drawing' all
the paths from one set of variables to another (by default element 1 of list 1 pairs with element 1 of list 2, element 2 of list 1 with element 2 of list 2, and so on, i.e. diagonally, but this can be changed with all=TRUE for a full Cartesian set). Third, the path language can be wrapped inside regular R loops and so forth in order to set up more complex patterning of paths within a matrix.
I suspect, however, that you really want a version of George Vogler's multivariate path analysis. I have been trying to effect this with constraints, such that a subset of matrix elements can be the
result of a formula. I'm not sure exactly where we are with that. For a while at least, only scalar elements could be formulated.
The other approach I have been wanting to implement, drawing-wise, was to have an "arrows=0" option which would effectively draw co-paths. The subsequent algebra is not that bad - Pearson Aitken as
usual. However, the situation gets more problematic if there are multiple copaths and their order is significant; spouses of twins is an example. With a single application of the formula I think the spouses remain uncorrelated. If one applies first one copath and then the second, then the spouses end up correlated. This latter approach seems more reasonable because in the case of high
correlations between spouses, and highly correlated twins, it would not be positive definite to have no spouse-spouse correlation.
Personally, I would rather have an R function than a Python script (see thread http://openmx.psyc.virginia.edu/thread/214) to convert Mx formulae to OpenMx formulae. But then beggars can't be choosers!
Fri, 10/30/2009 - 14:04
> Personally, I would rather
> Personally, I would rather have an R function than a Python script
> (see thread http://openmx.psyc.virginia.edu/thread/214) to convert
> Mx formulae to OpenMx formulae. But then beggars can't be choosers!
I agree. You'll have to ask Michael, but I think doing it using R (or with a C back-end) would be difficult at best. If someone wanted, however, they could wrap up the python with: | {"url":"http://openmx.psyc.virginia.edu/thread/253","timestamp":"2014-04-21T13:02:37Z","content_type":null,"content_length":"36067","record_id":"<urn:uuid:8cd64465-b24f-4854-9ac8-52e977143fd3>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00060-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: November 1998 [00048]
[Date Index] [Thread Index] [Author Index]
Re: Can I get ComplexExpand to really work?
• To: mathgroup at smc.vnet.net
• Subject: [mg14635] Re: Can I get ComplexExpand to really work?
• From: "Allan Hayes" <hay at haystack.demon.co.uk>
• Date: Wed, 4 Nov 1998 13:47:04 -0500
• References: <719f5p$lc6@smc.vnet.net> <71bkvu$pul@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Two observations: (1) on assumptions in Integrate; (2) on your sequence.
(1) The Integral
As has already been pointed out by several people
Integrate[E^(I a b x^2), {x, -Infinity, Infinity}, Assumptions -> {a>0, b>0}]
If[Im[a*b] == 0, (Sqrt[Pi/2]*(1 + I*Sign[a*b]))/
(Sqrt[a]*Sqrt[b]), Integrate[E^(I*a*b*x^2),
{x, -Infinity, Infinity}]]
gives nearly what you want
If we help Mathematica a bit we get
Integrate[E^(I a b x^2), {x, -Infinity, Infinity}, Assumptions -> {a b>0, a>0, b>0}]
((1 + I)*Sqrt[Pi/2])/(Sqrt[a]*Sqrt[b])
Which, I guess, points up the need to internally extend the assumptions
in the manner of the package Declare.
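The closed form can be cross-checked against the analytic continuation of the Gaussian integral, Integrate[E^(-s x^2)] = Sqrt[Pi/s] with s = -I a b (a quick numerical sanity check, not a proof; test values chosen arbitrarily):

```python
import cmath
import math

a, b = 2.0, 3.0  # arbitrary positive test values

# Mathematica's answer: (1 + I) Sqrt[Pi/2] / (Sqrt[a] Sqrt[b])
mma = (1 + 1j) * math.sqrt(math.pi / 2) / (math.sqrt(a) * math.sqrt(b))

# Gaussian integral continued to s = -I a b: Sqrt[Pi / (-I a b)]
gauss = cmath.sqrt(math.pi / (-1j * a * b))

print(abs(mma - gauss))  # approximately zero
```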
(2) The sequence
The sequence you gave was
data = {{1, 0}, {2, 1/4}, {3, 1}, {4, 5/2}, {5, 5}, {6, 35/4}, {7, 14},
{8, 21}, {9, 30}, {10, 165/4}, {11, 55}};

and your answer, given later, was that this is given by the function

f(N) = (N-1) N (N+1) / 24
Here is a Mathematica deduction of this
p[n_] := InterpolatingPolynomial[Take[data, n], x]
Look for where p[n] repeats
fp = (n=1;FixedPoint[(++n; p[n]) &, 1])
(1/4 + (1/4 + 1/24*(-3 + x))*(-2 + x))*(-1 + x)
f[x_] = Simplify[fp]
1/24*(-1 + x)*x*(1 + x)
Last/@data == Table[f[n], {n, Length[data]}]
In a symbolic case, having used Mathematica to guess the formula, we
might use induction to prove the result.
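The same check can be run outside Mathematica, using exact rational arithmetic so no rounding is involved:

```python
from fractions import Fraction as F

data = [(1, F(0)), (2, F(1, 4)), (3, F(1)), (4, F(5, 2)), (5, F(5)),
        (6, F(35, 4)), (7, F(14)), (8, F(21)), (9, F(30)),
        (10, F(165, 4)), (11, F(55))]

def f(n):
    return F((n - 1) * n * (n + 1), 24)

print(all(f(n) == v for n, v in data))  # True
```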
Allan Hayes
Mathematica Training and Consulting
hay at haystack.demon.co.uk
Voice: +44 (0)116 271 4198
Fax: +44 (0)870 164 0565
Topher Cawlfield wrote in message <71bkvu$pul at smc.vnet.net>...
>Many thanks for the help!
>(I'm not quite sure how to best respond since this newsgroup seems to be
>acting funny)
>Daniel Lichtblau, P.J. Hinton, and Lars Hohmuth suggested that I use the
>"Assumptions" option with Integrate. Thanks, I didn't know about that!
>This works well in the most simple case, but doesn't work quite right
>in more general cases.
>Integrate[E^(I a b x^2), {x, -Infinity, Infinity}, Assumptions->{a>0, b>0}]
>Essentially returns Sqrt[Pi/(2 a b)] (1 + I Sign[a b])
>This is progress, but I could make the argument that if a>0 and b>0,
>then Sign[a b] = 1.
>If I use "Assumptions->{a b > 0}" instead, I get the more simple result.
>M. R. Schaferkotter suggested that I try the Declare.m package from
>mathsource. I haven't tried this yet, but I will soon.
>For the specific problem I was working on, I was finally able to use
>Mathematica very successfully! I thought I would post it here since it
>worked so well, and must have saved me at least a month of grinding
>through algebra and making mistakes. Well, it didn't save me a month
>really, it just made one technique practical whereas it wouldn't have
>been otherwise.
>I want to use a Feynman path integral approach to find the transition
>amplitude for an electron in a constant electric field. This boils
>down to doing a lot of integrals, noticing a pattern, and deducing the
>limiting case of performing an infinite number of integrals.
>I won't go into all the details, but I would end up needing to perform
>integrals like:
>Integrate[E^(I(a x^2 + b x y + c y^2 + d y z + e z^2 + f)),
> {x, -Infinity, Infinity}, {y, -Infinity, Infinity}, {z, -Infinity, Infinity}]
>where a, b, c, d, e, and f are simple ratios of physical constants and
>parameters in the problem.
>The best way to evaluate these by hand is to first "complete the square"
>in x:
>Integrate[E^(I(a (x+junk)^2 + more junk)),...]
>The value of "junk" doesn't matter since you can rescale x. The limits
>of integration, being infinite, don't change. You can use the
>integration rule I was getting at above. Then, repeat the process by
>completing the square in y of "more junk", applying the integration
>rule again, and moving on to z.
>I couldn't find any functions in mathematica that would complete the
>square for me, but the CoefficientList function helped a lot. I made
>my own function:
>CompleteTheSquare[p_, x_] := Module[{coeffs, fp},
> coeffs = CoefficientList[p, x];
> fp = coeffs[[3]] (x + coeffs[[2]] / (2 coeffs[[3]]) )^2;
> parts = {coeffs[[3]], Simplify[p - fp], fp};
> fp + parts[[2]]]
>This function returns the modified polynomial, and also sets the symbol
>'parts' to a list containing the various pieces of the result.
>I then defined a function that "did the integration": MyInt[a_] :=
>Sqrt[- Pi / a]
>So that the integral of E^(a (x-b)^2 + c) is MyInt[a] E^c.
>And finally I made a function that did a Do loop over the integrals.
>This was a bit of work, but the results were really quite nice. By
>hand I could have only done 1 or 2 such integrals before tiring, and
>that would have taken me hours. With this I could try any number. I
>went up to 10, which just took a Mathematica a couple minutes.
>The morals of the story (IMO): (1) Use Mathematica for what it's good
>at. (2) One still needs to be somewhat knowledgable of and practiced
>with mathematical techniques. Even though Mathematica can do integrals
>fairly well, it's still important to know how to do them "by hand".
>This is a good thing.
>- Topher Cawlfield
>p.s. The fun part of the problem, besides the Mathematica programming,
>was figuring out a strange number series that came up:
>What's f(N)?
>N | f(N)
>1 | 0
>2 | 1/4
>3 | 1
>4 | 5/2
>5 | 5
>6 | 35/4
>7 | 14
>8 | 21
>9 | 30
>10 | 165/4
>11 | 55
>I'll post the answer tomorrow. These were coefficients in the answer.
>To get one of them requires doing N-1 integrals symbolicly.
[SciPy-Dev] scipy.sparse spdiags/dia_matrix additional calling mode
Pauli Virtanen pav@iki...
Thu Oct 6 04:33:52 CDT 2011
06.10.2011 10:59, Tony Stillfjord wrote:
> Given that Python is not MATLAB, would it be possible to add an
> additional, in some cases more intuitive, way of calling the spdiags
> function? This function creates a sparse matrix given its non-zero
> Currently, the first input has to be an array 'data' where every row
> corresponds to a diagonal - which one is specified in the next input, 'diags'. However,
> as the diagonals have different length, the actual values have to be zero-padded on
> either the left or right end to fit the array.
I was just thinking about the same thing: the way it works now is a bit
of a pain.
Currently, I'm using this:
import numpy as np
from scipy import sparse

def spdiags2(data, diags, M, N, **kw):
    n = min(M, N)
    data_mat = np.zeros((len(data), n))
    for j, q in enumerate(data):
        m = n - abs(diags[j])
        if diags[j] >= 0:
            data_mat[j, -m:] = q   # superdiagonal: pad on the left
        else:
            data_mat[j, :m] = q    # subdiagonal: pad on the right
    return sparse.spdiags(data_mat, diags, M, N, **kw)
But I agree the `spdiags` function itself should be fixed.
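The padding convention being complained about can be seen in a small example (scipy is assumed available; spdiags follows the MATLAB rule that superdiagonals read their values from the end of the data row, so the leading entries are ignored):

```python
import numpy as np
from scipy import sparse

# Main diagonal and first superdiagonal of a 3x3 matrix.
data = np.array([[1, 2, 3],
                 [4, 5, 6]])
A = sparse.spdiags(data, [0, 1], 3, 3).toarray()
print(A)
# The superdiagonal only uses 5 and 6; the leading 4 is padding:
# [[1 5 0]
#  [0 2 6]
#  [0 0 3]]
```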
Pauli Virtanen
Solve the logarithmic equation: log 4x = log 3 + log(x + 3)
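Assuming the equation reads log(4x) = log 3 + log(x + 3) with a common base throughout, the right side combines to log(3(x + 3)), so 4x = 3x + 9 and x = 9. A quick check:

```python
from math import log10

x = 9
# Both sides of log(4x) = log 3 + log(x + 3) agree at x = 9:
print(log10(4 * x), log10(3) + log10(x + 3))
```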
February 13, 2008, at 05:43 PM by -
Changed lines 5-6 from:
These operators return the sum, difference, product, or quotient (respectively) of the two operands. The operation is conducted using the data type of the operands, so, for example, @@9 / 4@@ gives
@@2@@ since 9 and 4 are ints. This also means that the operation can overflow if the result is larger than that which can be stored in the data type. If the operands are of different types, the
"larger" type is used for the calculation.
These operators return the sum, difference, product, or quotient (respectively) of the two operands. The operation is conducted using the data type of the operands, so, for example, @@9 / 4@@ gives
@@2@@ since 9 and 4 are ints. This also means that the operation can overflow if the result is larger than that which can be stored in the data type (e.g. adding 1 to an [[int]] with the value 32,767
gives -32,768). If the operands are of different types, the "larger" type is used for the calculation.
Added lines 35-36:
* Know that [[integer constants]] default to [[int]], so some constant calculations may overflow (e.g. 60 * 1000 will yield a negative result).
May 29, 2007, at 03:24 AM by -
May 29, 2007, at 03:23 AM by -
Changed line 7 from:
If one of the numbers (operands) are of the type '''float''' or of type '''double''', floating point math will be used for the operation.
If one of the numbers (operands) are of the type '''float''' or of type '''double''', floating point math will be used for the calculation.
May 29, 2007, at 03:21 AM by -
Changed lines 5-6 from:
These operators return the sum, difference, product, or quotient (respectively) of the two operands. The operation is conducted using the data type of the operands, so, for example, @@9 / 4@@ gives
@@2@@ since 9 and 4 are ints. This also means that the operation can overflow if the result is larger than that which can be stored in the data type. If the operands are of different types, the
"larger" type is used for the calculation.
These operators return the sum, difference, product, or quotient (respectively) of the two operands. The operation is conducted using the data type of the operands, so, for example, @@9 / 4@@ gives
@@2@@ since 9 and 4 are ints. This also means that the operation can overflow if the result is larger than that which can be stored in the data type. If the operands are of different types, the
"larger" type is used for the calculation.
If one of the numbers (operands) are of the type '''float''' or of type '''double''', floating point math will be used for the operation.
Changed lines 33-34 from:
Changed lines 41-47 from:
* Use the cast operator e.g. (int)myfloat to convert one variable type to another on the fly.
* Use the cast operator e.g. (int)myFloat to convert one variable type to another on the fly.
April 16, 2007, at 05:49 PM by -
Changed lines 42-45 from:
[[HomePage | Reference Home]]
April 16, 2007, at 05:18 AM by -
Changed lines 38-41 from:
* Use the cast operator eg (int)myfloat to convert one variable type to another on the fly.
* Use the cast operator e.g. (int)myfloat to convert one variable type to another on the fly.
April 16, 2007, at 05:11 AM by -
Changed lines 5-6 from:
The arithmetic operators work exactly as one expects with the result returned being the result of the two values and the operator
These operators return the sum, difference, product, or quotient (respectively) of the two operands. The operation is conducted using the data type of the operands, so, for example, @@9 / 4@@ gives
@@2@@ since 9 and 4 are ints. This also means that the operation can overflow if the result is larger than that which can be stored in the data type. If the operands are of different types, the
"larger" type is used for the calculation.
Changed lines 17-18 from:
result = value1 [+-*/] value2
result = value1 + value2;
result = value1 - value2;
result = value1 * value2;
result = value1 / value2;
Changed lines 26-28 from:
value1: any variable type\\\
value2: any variable type
value1: any variable or constant
value2: any variable or constant
April 15, 2007, at 11:13 PM by -
Added lines 35-39:
[[HomePage | Reference Home]]
April 15, 2007, at 11:11 PM by -
Changed lines 26-29 from:
A longer tutorial on computer math can eventually go in this space but for now, to benefit beginning programmers some general guidelines will be presented. These will hopefully get you started toward
getting the same answer out of your Arduino that you do on your calculator.
* Choose variable sizes that you are sure are large enough to hold the largest results from your calculations
* Choose variable sizes that are large enough to hold the largest results from your calculations
Changed lines 32-34 from:
* Use the cast operator eg (int)myfloat to convert one variable type to another on the fly.
April 15, 2007, at 11:08 PM by -
Changed lines 28-30 from:
* Choose variable sizes that you are sure are large enough to hold the largest results from your calculations
* Know at what point your variable will "roll over" and also what happens in the other direction e.g. (0 - 1) OR (0 - - 32768)
* For math that requires fractions, use float variables, but be aware of their drawbacks: large size, slow computation speeds
April 15, 2007, at 11:03 PM by -
Changed line 21 from:
value1: any variable type
value1: any variable type\\\
Changed lines 26-32 from:
For beginning programmers there are several details of doing math on the computer to which one must pay attention. One is that math on computers, as opposed to algebra class, must exist in physical
space. This means that the variable (which occupies a physical space on your Atmega chip) must be large enough to hold the results of your calculations.
Hence if you try something like this
[@ byte x;
x = 255;
x = x + 1; @]
A longer tutorial on computer math can eventually go in this space but for now, to benefit beginning programmers some general guidelines will be presented. These will hopefully get you started toward
getting the same answer out of your Arduino that you do on your calculator.
April 15, 2007, at 10:59 PM by -
Added lines 14-18:
result = value1 [+-*/] value2
April 15, 2007, at 10:27 PM by -
Changed line 8 from:
Changed line 13 from:
Changed lines 26-27 from:
April 15, 2007, at 10:23 PM by -
Added lines 1-27:
!! Addition, Subtraction, Multiplication, & Division
!!!! Description
The arithmetic operators work exactly as one expects with the result returned being the result of the two values and the operator
y = y + 3;
x = x - 7;
i = j * 6;
r = r / 5;
value1: any variable type
value2: any variable type
For beginning programmers there are several details of doing math on the computer to which one must pay attention. One is that math on computers, as opposed to algebra class, must exist in physical
space. This means that the variable (which occupies a physical space on your Atmega chip) must be large enough to hold the results of your calculations.
Hence if you try something like this
[@ byte x;
x = 255;
x = x + 1; @]
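The rollover and integer-division behavior described in these revisions can be modeled off-target in Python (a sketch; to_int16 mimics the two's-complement wrap of the ATmega's 16-bit int):

```python
def to_int16(v):
    """Wrap an integer into signed 16-bit range, mimicking how the
    Arduino's 16-bit int 'rolls over' on overflow."""
    return (v + 32768) % 65536 - 32768

# Adding 1 to an int holding 32,767 gives -32,768:
print(to_int16(32767 + 1))   # -32768

# Integer constants default to int, so 60 * 1000 overflows:
print(to_int16(60 * 1000))   # -5536

# 9 / 4 gives 2 in integer math (Python's // behaves the same way
# for positive operands):
print(9 // 4)                # 2
```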
fsolve for competing equilibria
Subject: fsolve for competing equilibria
From: Jesper
Date: 13 Mar, 2012 16:46:13
Message: 1 of 2
I'm trying to solve an equilibrium problem for the following competing chemical reaction:
AP + P -><- APa@P (K1 = APa@P/(AP*P) = 1e5)
AP + P -><- APp@P (K2 = APp@P/(AP*P) = 1e4)
That is; AP can bind in two ways to P and does so with two different association constants (K1 and K2). I will therefore have 4 species in solution: unbound AP, unbound P, APa@P and APp@P. Knowing
the starting concentrations AP0 and P0, fsolve should easily be able to solve all the concentrations by solving the following four non-linear equations:
P0 - P - APa@P - APp@P = 0
AP0 - AP - APa@P - APp@P = 0
APa@P/(P*AP) - K1 = 0
APp@P/(P*AP) - K2 = 0
My function file becomes:
function [Concentrations] = svdConcfunc(D)
global AP0 P0 K1 K2
Concentrations = [P0 - D(2) - D(3) - D(4);
AP0 - D(1) - D(3) - D(4);
D(3)./(D(1).*D(2)) - K1;
D(4)./(D(1).*D(2)) - K2];
My script, calling the above function looks like this:
clear all
close all
format long
global conc AP0 P0 K1 K2
options = optimset ('Tolfun', 1e-30, 'TolX', 1e-20, 'MaxFunEvals', 1e20, 'MaxIter', 1e20);
P0 = 2e-6;
K1 = 1e5;
K2 = 1e4;
for i = 1:length(conc);
AP0 = conc(i);
%format = [AP P APa@P APp@P ];
Guessconc = [AP0 P0 AP0 AP0 ];
CalcConc(i,:) = fsolve(@svdConcfunc, Guessconc, options);
test_should_be(i) = P0 + AP0;
test = sum(CalcConc, 2)
test_should_be = test_should_be'
the loaded "conc" becomes:
conc =
1.0e-004 *
0
0.000800000000000
0.001600000000000
0.006380000000000
0.014300000000000
0.030000000000000
0.053000000000000
0.082800000000000
0.126000000000000
0.180000000000000
0.244000000000000
0.315000000000000
0.396000000000000
0.493000000000000
The output I get is the following:
test =
1.0e-004 *
0.020000000000000
0.020656586717874
0.021314855866978
0.025281807267153
0.031972557259257
0.045604844303509
0.066255477286183
0.093836312858125
0.134837016836453
0.187048378261830
0.249671527576348
0.319655268142224
0.399862134556760
0.496206019001146
test_should_be =
1.0e-004 *
0.020000000000000
0.020800000000000
0.021600000000000
0.026380000000000
0.034300000000000
0.050000000000000
0.073000000000000
0.102800000000000
0.146000000000000
0.200000000000000
0.264000000000000
0.335000000000000
0.416000000000000
0.513000000000000
So the sum of all calculated concentrations in solution is not the same (there is a difference between "test_should_be" and "test") as the sum of what I put in, which is physically impossible and I
cannot for the life of me understand why it won't work. I have been able to solve far more complicated equilibrium problems (more equations, species and binding constants) before so this should be a
walk in the park.
I have tried to play around with starting guesses and tolerance values but I can't seem to get a better solution.
Any help on this would be much appreciated!
Subject: fsolve for competing equilibria
From: Roger Stafford
Date: 13 Mar, 2012 21:06:19
Message: 2 of 2
"Jesper" wrote in message <jjntkl$8p1$1@newscl01ah.mathworks.com>...
> .......
> test_should_be(i) = P0 + AP0;
> .......
> test = sum(CalcConc, 2)
> ........
> So the sum of all calculated concentrations in solution is not the same (there is a difference between "test_should_be" and "test") as the sum of what I put in, which is physically impossible and I
> cannot for the life of me understand why it won't work. ....
- - - - - - - - -
Though I am not a chemist there seems to be something rather questionable with your test procedure. For every one mole of AP and one mole of P which is converted to APa@P or APp@P you have only one
mole of result, but you appear to be requiring in "test=sum(CalcConc,2)" that the total number of moles should remain constant. In other words you may be obtaining perfectly valid solutions but your
test looks invalid to me.
A better test would be to check whether your original four equations are satisfied or not for each of the AP0 concentrations.
There is a way you can avoid using 'fsolve' and I don't understand why you are not doing things this way. Your four equations can be reduced to the solution of simple quadratic equations in D1 and D2
(AP and P) in which one of the two roots gives positive values for the concentrations. D3 and D4 (APa@P and APp@P) are then readily found from D1 and D2. Do as follows:
T1 = (-1+sqrt(1+2*(K1+K2)*(P0+AP0)+((K1+K2)*(P0-AP0)).^2))/2/(K1+K2);
T2 = (P0-AP0)/2;
D1 = T1-T2;
D2 = T1+T2;
D3 = K1*D1.*D2;
D4 = K2*D1.*D2;
(These should be valid as vectorized solutions using a vector for AP0.)
I am assuming I have the correct equations to be solved:
D1 + D3 + D4 = AP0
D2 + D3 + D4 = P0
D3/(D1*D2) = K1
D4/(D1*D2) = K2
(You first wrote: "AP0-APa@P-APp@P=0" for that first equation but later added the obviously needed D1 (AP) term.)
Roger Stafford
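Roger's closed-form result can be sanity-checked numerically. Below is a small Python sketch (not from the thread); it uses the K1, K2 and P0 values Jesper posted, takes one AP0 value from his titration vector, and substitutes the quadratic-root solution back into the four equations:

```python
import math

# Constants as posted in the thread; AP0 is one entry of the titration vector.
K1, K2 = 1e5, 1e4
P0 = 2e-6
AP0 = 3.15e-5

K = K1 + K2
T1 = (-1 + math.sqrt(1 + 2 * K * (P0 + AP0) + (K * (P0 - AP0)) ** 2)) / (2 * K)
T2 = (P0 - AP0) / 2
D1 = T1 - T2            # free AP
D2 = T1 + T2            # free P
D3 = K1 * D1 * D2       # APa@P
D4 = K2 * D1 * D2       # APp@P

# Both mass balances and both equilibrium constants are recovered.
assert abs(D1 + D3 + D4 - AP0) < 1e-12
assert abs(D2 + D3 + D4 - P0) < 1e-12
assert abs(D3 / (D1 * D2) - K1) < 1e-6
assert abs(D4 / (D1 * D2) - K2) < 1e-6
```

All four residuals vanish to machine precision, which supports Roger's point that the fsolve answers were probably fine and only the mole-conservation test was invalid.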
Physics Using Geometric Algebra/Mathematical Introduction
Systems of Numbers
Geometric algebra is an example of a system of numbers. In particular, it's an example of what mathematicians call an algebra over a field. But since this is not intended to be a book solely for
mathematicians, this page is here as an attempt to explain what exactly this means, give a general impression of what systems of numbers are and how they can be built up, and show where geometric
algebra fits in. You probably already know most of what's in this section, and it is presented primarily for comparison's sake, but seeing it presented in this way may raise some interesting
Perhaps the most basic system of numbers is the natural, or counting, numbers. This is the familiar set of numbers 0, 1, 2, 3, etc. One of the fundamental properties of the natural numbers is that
for every natural number n, there is a natural number n + 1 (which is never equal to n) that comes after it. Since this is true no matter how big n is, there are infinitely many natural numbers.
We'll call the first number that comes after n the successor of n. One reason this property is fundamental to the natural numbers is that any natural number can be represented by the successor
function applied to 0 some number of times. In fact if you have another set of things in which each one has a successor and there is only one that is not the successor of any other, your set of
things can be identified with the natural numbers -- just call the thing that isn't the successor of anything else 0, and go from there. Another reason the successor operation is fundamental is that
the basic operations of addition and multiplication can be reduced to it. The sum of two natural numbers a and b is the bth successor of a. Likewise, the product of a and b is the sum of a and itself
repeated b times. a to the power of b is defined similarly, as the product of a with itself repeated b times.
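The reduction of addition and multiplication to the successor operation can be sketched in Python (an illustration added here, not part of the original page):

```python
def succ(n):
    return n + 1  # stand-in for "the number after n"

def add(a, b):
    # the sum of a and b is the b-th successor of a
    for _ in range(b):
        a = succ(a)
    return a

def mul(a, b):
    # the product of a and b is a added to itself b times
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

assert add(2, 3) == 5 and mul(4, 3) == 12
```

Exponentiation could be built on top of `mul` in exactly the same way.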
The operations of addition and multiplication that we've introduced have a few important properties in common. If you add three numbers, it doesn't matter whether you start by adding the first and
second numbers, and then add the third one to your answer, or if you start by adding the second and third numbers, and then add your answer to the first number. In equation form, this is (a + b) + c
= a + (b + c). The corresponding statement is true for multiplication. That is, a(bc) = (ab)c. This is called the associative property, since it is a statement of the fact that it doesn't matter in
which order the terms (or factors) are associated. Both versions of the associative property are true for all natural numbers a, b, and c. A statement that is true for all values of the variables in
it is called an identity, and mathematicians sometimes say that it is identically true or holds identically. This can be confusing because it conflicts with the everyday definition of identical, but
the meaning is usually clear in context.
Another important identity that's true for both addition and multiplication is the commutative property, which is the statement that the order in which you add (or multiply) two numbers doesn't
matter. That is, for all natural numbers a and b, a + b = b + a and ab = ba. Also, for all natural numbers a, 0 + a = a + 0 = a and 1a = a1 = a. Because of this, 0 is called the identity element for
addition and 1 is called the identity element for multiplication -- yet another sense of the word identity. An important identity that relates multiplication to addition is the distributive property,
the statement that, for all natural numbers a, b, and c, a(b + c) = ab + ac.
In the fairly recent past, mathematicians have begun to study sets of "numbers" that share some or all of these properties (including ones we'll discuss later on), and have come up with names for
certain combinations of properties. Any set of elements with an operation that satisfies the associative property and has an identity element is called a monoid. In particular, the natural numbers
with addition form what is called (for rather obvious reasons) a commutative monoid. The natural numbers with multiplication also form a commutative monoid, but of a rather different sort. To see the
difference, notice that any natural number can be formed by adding 1 and 0 in various combinations, but there is no (finite) set of numbers that you can build all natural numbers out of using
multiplication in a similar way. There are, however, infinite sets with this property -- take, for instance, the set containing 0, 1, and all prime numbers.
Inverses and Groups
One of the most basic questions we can ask about natural numbers is this: given two natural numbers a and b, what is a number that, when added to a, gives b? Let's call this the subtraction problem
for the pair (a, b). There is never more than one natural number that answers the subtraction problem for a given pair (a, b); we call that number, if it exists, b - a. For example, the answer to the
question "What number, when added to 6, gives 13?" is 13 - 6 = 7. But suppose we ask "What number, when added to 12, gives 9?" There is no answer. None of the numbers n in the set we have just
constructed has the property that 12 + n = 9. But suppose there were such a number. If it obeyed all the same rules as the natural numbers, we would be able to say that it also satisfies 20 + n = 17,
since 9 = 9, and therefore 8 + 9 = 8 + 9 -- but 12 + n = 9, so 8 + (12 + n) = 8 + 9. Using the associative property, we can show that this means (8 + 12) + n = 8 + 9, and so, since we know 8 + 12 is
20 and 8 + 9 is 17, 20 + n = 17. By running the same kind of reasoning in reverse, we can show that our number n must also satisfy 3 + n = 0. Symbolically, then, we can say that our number is 0 - 3,
since it solves our original problem for 3 and 0 just as 13 - 6 = 7 solves it for 6 and 13. Usually, we drop the 0 and write this number as -3. By adding the numbers -n for all natural numbers n
greater than 0 to our set, we end up with a solution for the subtraction problem, not only for the pairs (n, 0), but for all pairs of numbers (a, b) in our new set. (Notice that we don't have to add
-0 because the subtraction problem already has a solution for the pair (0, 0) - namely 0.)
We need to be careful here to make sure that what we're talking about -- the set of integers -- actually exists, and that we can define addition and multiplication in a meaningful way on this new set
of numbers. (Of course, the integers are a well-known system, and you almost certainly already know how to add and multiply them, but this is intended as a demonstration and a clarification.) One way
to do this is to represent a number by the set of pairs for which it satisfies the subtraction problem. We do this by putting together a set of rules that tell us whether two pairs represent the same
number -- what mathematicians call an equivalence relation. In this case, we'll say that the pair (a, b) is equivalent to the pair (c, d) if and only if (a + d = b + c). Then, writing b - a for the
set of pairs related to (a, b), we define addition and multiplication by the rules we want them to follow: (b - a) + (d - c) = (b + d) - (a + c) and (b - a)(d - c) = (ac + bd) - (ad + bc). Now all we
have to do is name the sets n and -n for each natural number n, and this is easy - just let n be the set of pairs we've been calling n - 0 and -n be the set of pairs 0 - n. (Can you see how this all
fits together? Hint: Try drawing a picture.)
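The pair construction can be sketched directly in Python (an illustration added here, not from the original page; the pair (a, b) stands for the difference b - a):

```python
def equivalent(p, q):
    (a, b), (c, d) = p, q
    return a + d == b + c  # (a, b) ~ (c, d) iff a + d = b + c

def add_pairs(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)  # (b - a) + (d - c) = (b + d) - (a + c)

def mul_pairs(p, q):
    (a, b), (c, d) = p, q
    return (a * d + b * c, a * c + b * d)  # (b - a)(d - c) = (ac + bd) - (ad + bc)

three = (0, 3)       # stands for 3 - 0 = 3
minus_two = (2, 0)   # stands for 0 - 2 = -2
assert equivalent(add_pairs(three, minus_two), (0, 1))   # 3 + (-2) = 1
assert equivalent(mul_pairs(three, minus_two), (7, 1))   # 3 * (-2) = -6
```

Different pairs can represent the same integer, which is exactly why the equivalence relation is needed.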
Our new set of numbers, with addition, has a new, important property -- every integer has an inverse element, or a number that it can be added to to get the identity (that is, 0). A monoid with this
property is called a group -- a name which can be confusing, especially since the everyday meaning of group, unlike the mathematical definition, has no notion of structure associated with it. A
commutative group, like the integers, is also called an Abelian group, after the mathematician Niels Henrik Abel.
One question we may want to ask about integers is the equivalent of the subtraction problem for multiplication: the division problem. Given a pair of integers (a, b), it asks what number, when
multiplied by a, gives b. We call the answer to the division problem for a pair (a, b), if it exists, b/a. If we treat the division problem just as we treated the subtraction problem for natural
numbers, we end up with a new set of numbers, the rational numbers, in which every number other than 0 has a multiplicative inverse element 1/a. A set, like the rational numbers, with an addition
operation with which it is an Abelian group and a multiplication operation with which its set of nonzero elements is a group is called a field.
Magnitudes and the Continuum
The process by which we extended the natural numbers to the integers and then the rational numbers was one of abstraction and generalization: If a solution to an equation existed and obeyed the rules
we had already learned about numbers, it had to have certain properties. Once we knew all the properties it would have to have, we could build a new set of numbers in which it did exist. We can carry
this process even further. Some equations cannot be solved by rational numbers, for example $x^2=2$. In introducing irrational numbers into our system, let's go a bit further than we strictly have to
by allowing numbers with any decimal expansion as part of our set, which we'll call the real numbers. The real numbers have some interesting properties: every positive real number has a real root (of
every order) and a real logarithm, and they are complete in the sense that every nonempty set of real numbers with an upper bound has a least upper bound: a number that is at least as large as every number in the set and no larger than any other number with this property. You can't do this for all sets of rational numbers -- for example, it fails for the set 3, 3.1, 3.14, 3.141, 3.1415, ..., which contains infinitely many numbers that approach $\pi$ but has no least upper bound among the rationals. One of their most interesting properties is that you can assign exactly one real number to every point on a line and cover the entire line. Somehow along the way in this
process of abstraction and generalization, numbers have turned from something we use only for counting to something we can use to describe shapes, and this is the concept geometric algebra is based on.
Complex Numbers
As you may know, the real numbers aren't the end of the story. There are still equations that can't be solved by any real numbers, for example $x^2=-1$. In order to solve equations like this, we will
introduce a number $i=\sqrt{-1}$. The new set of numbers we build using $i$, which includes all numbers of the form $a+bi$, where $a$ and $b$ are real numbers, is called the complex numbers. $i$ is
quite a remarkable invention. For one thing, it's the last hoop we have to jump through -- every algebraic equation has a solution in the complex numbers. For this reason, we say that the complex
numbers are algebraically closed.
Since complex numbers aren't as familiar as the other sets of numbers we've discussed, I'll talk briefly about how to work with them. To find the sum of two complex numbers, we use the distributive
property and the associative and commutative properties of addition:

$(a+bi)+(c+di)=(a+c)+(b+d)i$

To find their product, we use the distributive property and the fact that $i^2=-1$:

$(a+bi)(c+di)=ac+adi+bci+bdi^2=(ac-bd)+(ad+bc)i$
Finding the difference of two complex numbers is simple as long as you remember to distribute the negative sign to both parts of the number, but division is a little more complicated. You have to use
a little trick to turn the bottom of the fraction into a real number:

$\frac{a+bi}{c+di}=\frac{(a+bi)(c-di)}{(c+di)(c-di)}=\frac{(ac+bd)+(bc-ad)i}{c^2+d^2}$

We call the complex number $c-di$, which you had to multiply by on the top and the bottom as part of that trick, the complex conjugate of $c+di$.
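The conjugate trick for division can be checked in Python (an illustration added here, not part of the original page):

```python
# Compute (a + bi)/(c + di) by multiplying top and bottom by the
# conjugate c - di, so the denominator becomes the real number c^2 + d^2.
def divide(a, b, c, d):
    denom = c * c + d * d
    return ((a * c + b * d) / denom, (b * c - a * d) / denom)

re, im = divide(1, 2, 3, 4)  # (1 + 2i)/(3 + 4i)
assert abs(complex(re, im) - (1 + 2j) / (3 + 4j)) < 1e-12
```

The result agrees with Python's built-in complex division up to rounding.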
Like the real numbers, the complex numbers have a natural geometric interpretation -- they can be thought of as points in a plane. The standard representation of the complex plane is with the real
line running horizontally and the line of real multiples of $i$ running vertically, $i$ above the real line and $-i$ below it. Geometrically, the sum of two complex numbers can be constructed in the
same way as the sum of two vectors -- draw a parallelogram with two of its sides running from the origin (that is, 0) to the numbers you are adding. The point of the parallelogram opposite the origin
is the sum of the two complex numbers. The rule for multiplication is a little more complicated: first, construct a right triangle for each of the numbers being multiplied, with one of its vertices
at the origin, one at the endpoint of the complex number, and one along the real axis (at the point representing the real part of the complex number). Rotate one of these triangles until the leg that
was on the real axis is parallel to the hypotenuse of the other triangle, and then scale it by a factor equal to the length of the hypotenuse of the other triangle. The tip of this triangle
represents the product of the two complex numbers.
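The rotate-and-scale description of multiplication amounts to saying that moduli multiply and angles add. A numerical sketch in Python (an illustration added here, not from the original page):

```python
import cmath
import math

z, w = complex(1, 2), complex(3, -1)
p = z * w  # (1 + 2i)(3 - i) = 5 + 5i

# The modulus of the product is the product of the moduli...
assert math.isclose(abs(p), abs(z) * abs(w))

# ...and the angle of the product is the sum of the angles (mod 2*pi).
angle_sum = (cmath.phase(z) + cmath.phase(w)) % (2 * math.pi)
assert math.isclose(cmath.phase(p) % (2 * math.pi), angle_sum)
```

This is the same geometric fact as the triangle construction: scaling by the hypotenuse length and rotating by its angle.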
When we interpret complex numbers as points (or vectors) in a plane, it makes sense to introduce the concept of absolute value, which we will define exactly as we did for real numbers: the distance
between a number and 0. The absolute value of a complex number is also called its modulus. Notice that there's a complication: more often than not, a complex number is situated along a diagonal, and
so we have to use the Pythagorean theorem to find this distance. For example,

$|3+4i|=\sqrt{3^2+4^2}=5.$
Complex conjugates come in handy here. As we saw in the division rule, the product of a complex number and its conjugate is always real. What's more, it's equal to the square of the complex number's
absolute value. So we can simply multiply a complex number by its conjugate and then take the square root to find its absolute value. Here's a little demonstration of why this works algebraically:

$(a+bi)(a-bi)=a^2-abi+abi-b^2i^2=a^2+b^2=|a+bi|^2$

See if you can work out the geometric proof for yourself. Note that taking the complex conjugate of a number amounts to reflecting it across the real axis.
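The conjugate-times-number identity is easy to confirm in Python (an illustration added here, not part of the original page):

```python
import math

z = complex(3, 4)
zbar = z.conjugate()   # reflection across the real axis: 3 - 4i
product = z * zbar     # (a + bi)(a - bi) = a^2 + b^2, always real

assert product.imag == 0.0
assert math.isclose(math.sqrt(product.real), abs(z))
assert abs(z) == 5.0   # sqrt(3^2 + 4^2), by the Pythagorean theorem
```
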
Building the Geometric Algebra
Last modified on 27 October 2012, at 18:59
GMAT Gurus Speak Out: Skipping the Right Questions
I decided to take the test again because I didn’t feel that my quant score reflected my true ability. And the second time, I approached things differently. These are the tips I used to raise my score
almost 100 points the second time I took it – tips I give my Veritas Prep students frequently.
Choose your target score
The first time I took the test, because of the questions I left blank at the end, I was 70% but I’d been scoring around 80%. I should have known to leave the 90% problems alone. I had a student who
was scoring anywhere from 50%-70% on practice tests but got some of the 90% problems right, it just took him 6 minutes. I told him to aim for 70%, maybe 80%, but he DEFINITELY shouldn’t be answering
the 90% problems – which steal time away from the end of the test.
Learn the material to be able to get the right questions right
For that student, I recommended he absolutely master the 70%-80% questions. Basic statistics and probability, slightly advanced geometry, moderate word problems. And focus on verbal, which was
stronger for him – a great verbal score can help a lot!
Guess on the questions you know are above your target
I’m pretty sure I got another arc angle question the second time I took the GMAT and although I had studied a lot in between, I still didn’t understand it. I took a guess and moved on within a minute
– no more 6-minute questions for me. The next problem was probably easier – maybe even 70 or 80% level – but I’m sure from there I did fine and worked my way up.
Skip any question you’re having trouble with after 1 minute
Watch the clock – if it’s been a minute and you have no idea what you’re doing wrong, make a guess and move on. Similarly, if you have four or five lines of algebra and you don’t have the right
answer and/or you don’t see how to get there, move on. If you do a bunch of math, make a mistake, and can’t see the mistake clearly, move on!
In closing, if you’re taking the GMAT, you’re probably applying to business school. If you’re applying to business school, you’re probably a competitive, over-achiever. It’s hard for us
over-achievers to admit when we’ve reached our threshold, when we’re faced with a problem we can’t solve. The trick for the GMAT is to let individual problems go so that your whole test doesn’t
Plan on taking the GMAT soon? We have GMAT prep courses starting all the time. And, be sure to find us on Facebook and Google+, and follow us on Twitter!
Julia Kastner is a Veritas Prep GMAT instructor based in New York. She runs her own socially responsible, fair trade denim company called Eva & Paul and before starting her business she worked on
nonprofit outreach projects of all kinds.
Let G = (V, E) be a graph, that is, a pair of a set V of vertices and a set E of edges. Let D be a subset of V. If each vertex of V \ D is adjacent to at least one vertex of D, then D is called a dominating set in G. The domination number of a graph G, denoted γ(G), is the minimum cardinality of a dominating set in G. A set of vertices in a graph is said to be an independent set if no two vertices in the set are adjacent. The number of vertices in the largest independent set of a graph G is called the independence number, denoted β(G). In this final project, we consider the relation between independent sets and dominating sets of finite simple graphs. In particular, we discuss them for some cubic bipartite graphs and find that the domination number is less than of the number of vertices and the independence number is half of the number of vertices.
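As an illustration of these definitions (an example chosen here, not taken from the paper), the following Python sketch brute-forces the domination and independence numbers of the 3-cube Q3, a cubic bipartite graph on 8 vertices:

```python
from itertools import combinations

# Vertices of Q3 are the integers 0..7 read as 3-bit strings;
# edges join strings that differ in exactly one bit, so Q3 is 3-regular
# and bipartite (split by parity of the bit count).
n = 8
vertices = list(range(n))
adj = {v: {v ^ (1 << i) for i in range(3)} for v in vertices}

def is_dominating(d):
    dset = set(d)
    return all(v in dset or adj[v] & dset for v in vertices)

def is_independent(s):
    return all(u not in adj[v] for u, v in combinations(s, 2))

gamma = min(k for k in range(1, n + 1)
            if any(is_dominating(c) for c in combinations(vertices, k)))
beta = max(k for k in range(1, n + 1)
           if any(is_independent(c) for c in combinations(vertices, k)))

assert gamma == 2       # far below half the 8 vertices
assert beta == n // 2   # the independence number is half the vertex count
```

For this graph γ(Q3) = 2, well below half the vertex count, and β(Q3) = 4, exactly half, matching the pattern described in the abstract.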
Jurnal Matematika, Fakultas Sains dan Matematika, Universitas Diponegoro.
Jl. Prof. Sudarto Tembalang. telp/fax (024)76480922 Semarang 50275
Re: Optimality of using probability
From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Feb 05 2007 - 22:31:15 MST
Mitchell Porter wrote:
> If you the programmer ('you' being an AI, I assume) already have the
> concept of probability, and you can prove that a possible program will
> estimate probabilities more accurately than you do, you should be able
> to prove that it would provide an increase in utility, to a degree
> depending on the superiority of its estimates and the structure of
> your utility function. (A trivial observation, but that's usually where
> you have to start.)
Mitch, I haven't found that problem to be trivial if one seeks a precise
demonstration. I say "precise demonstration", rather than "formal
proof", because formal proof often carries the connotation of
first-order logic, which is not necessarily what I'm looking for. But a
line of reasoning that an AI itself carries out will have some exact
particular representation and this is what I mean by "precise". What
exactly does it mean for an AI to believe that a program, a collection
of ones and zeroes, "estimates probabilities" "more accurately" than
does the AI? And how does the AI use this belief to choose that the
expected utility of running its program is ordinally greater than the
expected utility of the AI exerting direct control? For simple cases -
where the statistical structure of the environment is known, so that you
could calculate the probabilities yourself given the same sensory
observations as the program - this can be argued precisely by summing
over all probable observations. What if you can't do the exact sum?
How would you make the demonstration precise enough for an AI to walk
through it, let alone independently discover it?
*Intuitively* the argument is clear enough, I agree.
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT
List the elements of S produced by the first five applications of the recursive definition
December 4th 2013, 04:10 PM
List the elements of S produced by the first five applications of the recursive definition
Let S be the subset of the set of ordered pairs of integers
defined recursively by
Basis step: (0, 0) ∈ S.
Recursive step: If (a, b) ∈ S, then (a + 2, b + 3) ∈ S
and (a + 3, b + 2) ∈ S.
a) List the elements of S produced by the first five applications
of the recursive definition.
How do I know which (a,b) pairs to use? I know that it's the first five but I don't know how to get the first five.
December 4th 2013, 04:40 PM
Re: List the elements of S produced by the first five applications of the recursive definition
Taken from this thread.
the recursion step forms two new pairs. So, at each step you could also form (a + 3, b + 2) from (a, b), not necessarily (a + 2, b + 3). In fact, the process of generating new pairs branches at
each step and the order in which pairs are generated is not determined.
December 4th 2013, 04:44 PM
Re: List the elements of S produced by the first five applications of the recursive definition
Let S be the subset of the set of ordered pairs of integers
defined recursively by Basis step: (0, 0) ∈ S.
Recursive step: If (a, b) ∈ S, then (a + 2, b + 3) ∈ S
and (a + 3, b + 2) ∈ S.
a) List the elements of S produced by the first five applications of the recursive definition.
I am somewhat confused as to what exactly the phrase "the first five applications of the recursive definition" actually means.
You may need to either drop the (0,0) or make 4 a 5 in the above.
BTW LaTeX seems to be down: \{(0,0)\}\cup\{(3k,2k):k=1,\cdots~4\}
December 4th 2013, 05:10 PM
Re: List the elements of S produced by the first five applications of the recursive definition
I am somewhat confused as to what exactly the phrase "the first five applications of the recursive definition" actually means.
You may need to either drop the (0,0) or make 4 a 5 in the above.
BTW LaTeX seems to be down: \{(0,0)\}\cup\{(3k,2k):k=1,\cdots~4\}
first five applications are the first five definitions.
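One way to make the branching concrete is to generate S level by level. This Python sketch (not from the thread) assumes "the first five applications" means the first five breadth-first levels of the recursion:

```python
# Level n holds the pairs first produced by the n-th application of the
# recursive step, starting from the basis element (0, 0).
levels = [{(0, 0)}]
seen = {(0, 0)}
for _ in range(5):
    nxt = set()
    for (a, b) in levels[-1]:
        nxt.update({(a + 2, b + 3), (a + 3, b + 2)})
    nxt -= seen  # keep only newly generated pairs
    seen |= nxt
    levels.append(nxt)

print(sorted(levels[1]))  # [(2, 3), (3, 2)]

# After n applications the new pairs are exactly (3n - i, 2n + i) for
# 0 <= i <= n, so level n has only n + 1 elements: the branches overlap.
for n, level in enumerate(levels):
    assert level == {(3 * n - i, 2 * n + i) for i in range(n + 1)}
```

Under this reading, the single-branch formula quoted above gives only the i = 0 pairs of each level; the full levels also contain the mixed applications of the two rules.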
Question 10
December 17th 2006, 09:00 PM #16
Let $t$ be time in hours since it started to snow.
Let the rate of snowfall be $H$ metres/hr
Let the plough swept width be $W$ metres.
Let the distance of the plough from its starting point be $x(t)$ km.
Then as the plough clears a fixed volume of snow per unit time we have:

$W\,(H\,t)\,\frac{dx}{dt}=K,$

where $K$ is the constant volume swept per hour, in units of thousands of cubic metres/hr (but the units are unimportant). (This is the swept width times the snow depth times the rate of progress of the plough.)
This is an ordinary differential equation of variables separable type and has general solution:

$x(t)=\frac{K}{W\,H}\,\ln(t)+C$

for some constant $C$, and for times after $t=N$, which is how long Noon is after $t=0$.
So from the given conditions we have:

$0=\frac{K}{W\,H}\,\ln(N)+C,\qquad 2=\frac{K}{W\,H}\,\ln(N+1)+C,\qquad 3=\frac{K}{W\,H}\,\ln(N+2)+C.$

The first of these equations gives:

$C=-\frac{K}{W\,H}\,\ln(N),$

so the second and third become $2=\frac{K}{W\,H}\,\ln\left(\frac{N+1}{N}\right)$ and $3=\frac{K}{W\,H}\,\ln\left(\frac{N+2}{N}\right)$.

Dividing the second by the third equation eliminates $\frac{K}{W\,H}$, and can be rearranged to give:

$3\,\ln\left(\frac{N+1}{N}\right)=2\,\ln\left(\frac{N+2}{N}\right)$

or equivalently:

$\left(\frac{N+1}{N}\right)^3=\left(\frac{N+2}{N}\right)^2$.
Simplifying this last equation gives us: $N^2+N-1=0$, which has solutions $-\frac{1+\sqrt{5}}{2}$ and $\frac{-1+\sqrt{5}}{2}$. The first of these is negative and so a spurious solution, so:
$N=\frac{-1+\sqrt{5}}{2}\approx 0.618$ hours $\approx 37$ minutes.
So the snow began falling $\sim 37$ minutes before noon or about $11:23$.
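The arithmetic can be checked numerically. This Python sketch (not part of the original post) verifies that the positive root satisfies the derived relation and converts it to minutes:

```python
import math

# The positive root of N^2 + N - 1 = 0 is N = (sqrt(5) - 1)/2 hours.
N = (math.sqrt(5) - 1) / 2
assert abs(N**2 + N - 1) < 1e-12

# It satisfies the log-ratio condition ((N+1)/N)^3 == ((N+2)/N)^2.
assert math.isclose(((N + 1) / N) ** 3, ((N + 2) / N) ** 2)

minutes = 60 * N
assert round(minutes) == 37  # about 37 minutes before noon
```
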
Phantoms and mutants
Language hacking
Let's create a very small fragment of a programming language:
data Expr = Num Int -- atom
| Str String -- atom
| Op BinOp Expr Expr -- compound
deriving (Show)
data BinOp = Add | Concat
deriving (Show)
And an interpreter for it:
interp x@(Num _) = x
interp x@(Str _) = x
interp (Op Add a b) = Num (i a + i b)
where i x = case interp x of Num a -> a
interp (Op Concat (Str a) (Str b)) = Str (a ++ b)
Does it work?
Our very quick round of prototyping gave us a tiny interpreter that actually seems to work:
>> interp (Op Add (Num 2) (Num 3))
Num 5
Please help me to spot some problems with my interpreter!
Two sides of the same problem
1. We can construct ill-formed expressions ("add a Num to a Str").
2. Our interpreter crashes on these expressions, because we (quite reasonably) didn't take their possible existence into account.
Watch your language!
Here's a slightly modified version of our language:
data Expr a = Num Int
| Str String
| Op BinOp (Expr a) (Expr a)
deriving (Show)
-- This is unchanged.
data BinOp = Add | Concat
deriving (Show)
We've introduced a type parameter here...
...But we never actually use it to represent a value of whatever type a is.
Let's see where that takes us.
Some modifications to our interpreter
Here is our modified interpreter.
interp x@(Num _) = x
interp x@(Str _) = x
interp (Op Add a b) = Num (i a + i b)
where i x = case interp x of Num a -> a
interp (Op Concat a b) = Str (i a ++ i b)
where i x = case interp x of Str y -> y
Our only change is to apply interp recursively if we're asked to perform a Concat.
We could have done this in our original interpreter, so that can't be the real fix. But what is?
What's the type of the rewritten interp?
Our new type
The interpreter function now has this type:
interp :: Expr a -> Expr a
But we know from the definitions of Expr and BinOp that we never use a value of type a. Then what purpose does this type parameter serve?
Recall the type of Expr:
data Expr a = ...
| Op BinOp (Expr a) (Expr a)
Some context
Let's think of that a parameter as expressing our intent that:
• The operands of an Op expression should have the same types.
• The resulting Expr value should also have this type.
data Expr a = ...
| Op BinOp (Expr a) (Expr a)
In fact, the type system will enforce these constraints for us.
Building blocks
The first step in making all of this machinery work is to define some functions with the right types.
These two functions will construct atoms (values that can't be reduced any further) in our language:
num :: Int -> Expr Int
num = Num
str :: String -> Expr String
str = Str
Applying operators safely
These two functions construct compound expressions:
add :: Expr Int -> Expr Int -> Expr Int
add = Op Add
cat :: Expr String -> Expr String -> Expr String
cat = Op Concat
Notice that each one enforces the restriction that its parameters must be compatible.
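A quick sketch of what clients see, assuming the definitions above. The well-typed expressions build fine; the mixed one is now a compile-time error instead of a runtime crash.

```haskell
data Expr a = Num Int
            | Str String
            | Op BinOp (Expr a) (Expr a)
              deriving (Show)

data BinOp = Add | Concat
             deriving (Show)

num :: Int -> Expr Int
num = Num

str :: String -> Expr String
str = Str

add :: Expr Int -> Expr Int -> Expr Int
add = Op Add

cat :: Expr String -> Expr String -> Expr String
cat = Op Concat

okSum :: Expr Int
okSum = add (num 2) (num 3)

okCat :: Expr String
okCat = cat (str "foo") (str "bar")

-- The ill-formed expression from before no longer type-checks:
-- broken = add (num 2) (str "three")
--   (a type error: Expr String is not Expr Int)
```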
A trusted computing base
Once we have our functions defined, the last step is to lock our world down.
Here's what the beginning of my module looks like:
module Interp (
Expr, -- type constructor
interp, -- interpreter
num, str, -- atom constructors
add, cat, -- expression constructors
) where
Notice that we've exercised careful control over what we're exporting.
• We export the Expr type constructor, but none of its value constructors.
• Users of our module don't need BinOp, so we don't export that at all.
More about our type and export choices
Consequences of exporting only the type constructor for Expr:
• Clients cannot use the value constructors to create new values.
• The only way for a client to construct expressions is using our handwritten "smart constructor" functions with their carefully chosen types.
• Clients cannot pattern-match on an Expr value. Our internals are opaque; we could change our implementation without clients being able to tell.
These are in fact the completely standard techniques for creating abstract data types in Haskell. So where does the type parameter come in?
Consequences of that type parameter
Due to our judicious use of both abstraction and that type parameter:
• Clients cannot construct ill-formed expressions. Any attempts will be rejected by the type checker.
This additional safety comes "for free":
• We don't need runtime checks for ill-formed expressions, because they cannot occur.
• Our added type parameter never represents data at runtime, so it has zero cost when the program runs.
Phantom types
When we refer to a type parameter on the left of a type definition, without ever using values of that type on the right, we call it a phantom type.
We're essentially encoding compile-time data using types, and the compiler computes with this data before our program is ever run.
Mutable variables
We've already seen the very handy MVar type, which represents a "blocking mutable box": we can put a value in or take one out, but we'll block if we put when full or take when empty.
Even though MVars are the fastest blocking concurrent structure in the industry (they made the Kessel Run in less than twelve parsecs!), we don't always want blocking semantics.
For cases where we want non-blocking updates, there's the IORef type, which gives us mutable references.
import Data.IORef
newIORef :: a -> IO (IORef a)
readIORef :: IORef a -> IO a
writeIORef :: IORef a -> a -> IO ()
modifyIORef :: IORef a -> (a -> a) -> IO ()
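A minimal sketch of this API in action (the counter is made up for illustration):

```haskell
import Data.IORef

-- Bump a counter and return its new value. No blocking involved:
-- an IORef is a plain, non-blocking mutable cell.
tick :: IORef Int -> IO Int
tick ref = do
  modifyIORef ref (+1)
  readIORef ref
```

In ghci: create one with r <- newIORef 0, and each subsequent tick r returns 1, 2, 3, and so on.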
Managing mutation
Application writers are often faced with a question like this:
• I have a big app, and parts of it need their behaviour tweaked by an administrator at runtime.
There are of course many ways to address this sort of problem.
Let's consider one where we use a reference to a piece of config data.
Any code that's executing in the IO monad can, if it knows the name of the config reference, retrieve the current config:
curCfg <- readIORef cfgRef
The trouble is, ill-behaved code could clearly also modify the current configuration, and leave us with a debugging nightmare.
Phantom types to the rescue!
Let's create a new type of mutable reference.
We use a phantom type t to statically track whether a piece of code is allowed to modify the reference or not.
import Data.IORef
newtype Ref t a = Ref (IORef a)
Remember, our use of newtype here means that the Ref type only exists at compile time: it imposes no runtime cost.
Since we are using a phantom type, we don't even need values of our access control types:
data ReadOnly
data ReadWrite
We're already in a good spot! Not only are we creating compiler-enforced access control, but it will have zero runtime cost.
Creating a mutable reference
To create a new reference, we just have to ensure that it has the right type.
newRef :: a -> IO (Ref ReadWrite a)
newRef a = Ref `fmap` newIORef a
Reading and writing a mutable reference
Since we want to be able to read both read-only and read-write references, we don't need to mention the access mode when writing a type signature for readRef.
readRef :: Ref t a -> IO a
readRef (Ref ref) = readIORef ref
Of course, code can only write to a reference if the compiler can statically prove (via the type system) that it has write access.
writeRef :: Ref ReadWrite a -> a -> IO ()
writeRef (Ref ref) v = writeIORef ref v
Converting a reference to read-only
This function allows us to convert any kind of reference into a read-only reference:
readOnly :: Ref t a -> Ref ReadOnly a
readOnly (Ref ref) = Ref ref
In order to prevent clients from promoting a reference from read-only to read-write, we do not provide a function that goes in the opposite direction.
We also use the familiar technique of constructor hiding at the top of our source file:
module Ref (
Ref, -- export type ctor, but not value ctor
newRef, readOnly,
readRef, writeRef
) where
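Putting the whole module together with a small client (a sketch; in a real program the client would live in a separate module, and so could not see the Ref value constructor):

```haskell
import Data.IORef

data ReadOnly
data ReadWrite

newtype Ref t a = Ref (IORef a)

newRef :: a -> IO (Ref ReadWrite a)
newRef a = Ref `fmap` newIORef a

readRef :: Ref t a -> IO a
readRef (Ref ref) = readIORef ref

writeRef :: Ref ReadWrite a -> a -> IO ()
writeRef (Ref ref) v = writeIORef ref v

readOnly :: Ref t a -> Ref ReadOnly a
readOnly (Ref ref) = Ref ref

-- A client: write through the read-write handle, then hand out a
-- read-only view.
demo :: IO Int
demo = do
  rw <- newRef (0 :: Int)
  writeRef rw 42
  let ro = readOnly rw
  -- writeRef ro 7   -- rejected: ReadOnly does not match ReadWrite
  readRef ro
```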
Meaning: that slippery thing
What does this type signature mean?
something :: a -> a
What are all of the possible behaviours of code with this type?
What about this signature?
another :: [a]
Being more explicit
What does this type signature mean?
something :: a -> a
We know that for all possible types a, this function accepts a value of that type, and returns a value of that type.
We clearly cannot enumerate all possible types, so we equally clearly cannot create all (or indeed any) values of these types.
Therefore, if we exclude crashes and infinite loops, the only possible behaviour for this function is to return its input.
Being even more explicit: quantifiers
In fact, Haskell provides a keyword, forall, to make this quantification over type parameters more explicit:
something :: forall a. a -> a
The same "universal quantification" syntax works with typeclass constraints:
something :: forall a. (Show a) => a -> String
Here, our quantifier is "for all types a, where the only thing we know about a is what the Show typeclass tells us we can do".
These forall keywords are implied if they're not explicitly written.
Building blocks
Love 'em or hate 'em, everybody has to deal with databases.
Here are some typical functions that a low-level database library will provide, for clients that have to modify data concurrently:
begin :: Connection -> IO Transaction
commit :: Transaction -> IO ()
rollback :: Transaction -> IO ()
We can create a new transaction with begin, finish an existing one with commit, or cancel one with rollback.
Typically, once a transaction has been committed or rolled back, accessing it afterwards will result in an exception.
Shaky foundations build a shaky house
Clearly, these constructs make it easy to inadvertently write bad code.
oops conn = do
txn <- begin conn
throwIO (AssertionFailed "forgot to roll back!")
-- also forgot to commit!
We can avoid rollback and commit forgetfulness with a suitable combinator:
withTxn :: Connection -> IO a -> IO a
withTxn conn act = do
txn <- begin conn
r <- act `onException` rollback txn
commit txn
return r
All right! The code running in act never sees a Transaction value, so it can't leak a committed or rolled back transaction.
But still...
We're not out of the woods yet!
High-performance web apps typically use a dynamically managed pool of database connections.
getConn :: Pool -> IO Connection
returnConn :: Pool -> Connection -> IO ()
It's a major bug if a database connection is not returned to the pool at the end of a handler.
So we write a combinator to handle this for us:
withConn :: Pool -> (Connection -> IO a) -> IO a
withConn pool act =
bracket (getConn pool) (returnConn pool) act
Nice and elegant. But correct? Read on!
Connections vs transactions
In a typical database API, once we enter a transaction, we don't need to refer to the handle we got until we either commit or roll back the transaction.
So it was fine for us to write a transaction wrapper like this:
withTxn :: Connection -> IO a -> IO a
On the other hand, if we're talking to a database, we definitely need a connection handle.
query :: Connection -> String -> IO [String]
So we have to pass that handle into our combinator:
withConn :: Pool -> (Connection -> IO a) -> IO a
"Ouch, sorry about that!"
Unfortunately, since withConn gives us a connection handle, we can defeat the intention of the combinator (sometimes accidentally).
What is the type of this function?
evil pool = withConn pool return
Phantom types! They'll save us again!
Here, we are using the newtype keyword to associate a phantom type with the IO monad.
newtype DB c a = DB {
    fromDB :: IO a
  }
We're going to run some code in the IO monad, and pass around a little extra bit of type information at compile time.
Let's create a phantom-typed wrapper for our earlier Connection type:
newtype SafeConn c = Safe Connection
Where are these phantom types taking us?
Safe querying
The easiest place to start understanding is with a small use of our new code, in the form of a function we'll export to clients.
This is just a wrapper around the query function we saw earlier, making sure that our newtype machinery is in the right places to keep the type checker happy.
safeQuery :: SafeConn c -> String -> DB c [String]
safeQuery (Safe conn) str = DB (query conn str)
Notice that our phantom type c is mentioned in both our uses of SafeConn c and DB c: we're treating it as a token that we have to pass around.
Our library will not be exporting the value constructors for SafeConn or DB to clients. Once again, this newtype machinery is internal to us!
Giving a client a connection from a pool
Here, we'll use our earlier exception-safe withConn combinator. Recall its type:
withConn :: Pool -> (Connection -> IO a) -> IO a
To make it useful in our new setting, we have to wrap the Connection, and unwrap the DB c that is our act to get an action in the IO monad.
withConnection pool act =
    withConn pool $ \conn ->
      fromDB (act (Safe conn))
It's not at all obvious what this is doing for us until we see the type of withConnection.
Here's a burly type for you:
{-# LANGUAGE Rank2Types #-}
withConnection :: Pool
-> (forall c. SafeConn c -> DB c a)
-> IO a
We've introduced a universal quantifier (that forall) into our type signature. And we've added a LANGUAGE pragma! Whoa. Duuude.
Relax! Let's not worry about those details just yet. What does our signature seem to want to tell us?
• We accept a Pool.
• And an "I have a connection, so I can talk to the database now" action that accepts a SafeConn c, returning a value a embedded in the type DB c.
Not so scary after all. Well, except for the details we're ignoring.
Universal quantification to the rescue!
Let's start with the obviously bothersome part of the type signature.
(forall c. SafeConn c -> DB c a)
This is the same universal quantification we've seen before, meaning:
• Our "I can haz connection" action must work over all types c.
• The scope of c extends only to the rightmost parenthesis here.
Putting it back into context:
withConnection :: Pool
-> (forall c. SafeConn c -> DB c a)
-> IO a
The type variable c can't escape from its scope, so a cannot be related to c.
Wait, wait. What, exactly, got rescued?
withConnection :: Pool
-> (forall c. SafeConn c -> DB c a)
-> IO a
Because SafeConn c shares the same phantom type as DB c, and the quantified c type cannot escape to the outer IO, there is no way for a SafeConn c value to escape, either!
In other words, we have ensured that a user of withConnection cannot either accidentally allow or force a connection to escape from the place where we've deemed them legal to use.
Rank-2 types
Standard Haskell types and functions have just one scope for universal quantification.
foo :: forall a b. a -> b -> a
When an extra level of scoping for universal quantification is introduced, this is called a rank-2 type.
fnord :: forall b. (forall a. a -> a) -> b
(Normal types are thus called rank-1 types.)
Although widely used, rank-2 types are not yet a part of the Haskell standard, hence our use of a pragma earlier:
{-# LANGUAGE Rank2Types #-}
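A tiny example of a rank-2 type in action (made up for illustration). Because the argument must work at every type a, the caller can really only pass id, or something that behaves like it:

```haskell
{-# LANGUAGE Rank2Types #-}

-- The polymorphic argument is used at two different types in the body.
apply2 :: (forall a. a -> a) -> (Int, String)
apply2 f = (f 1, f "one")

-- >> apply2 id
-- (1,"one")
```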
Bonus question 1
What expressions can we write that have this type?
[forall a. a]
What about this one?
[forall a. (Enum a) => a]
Or this?
[forall a. (Num a) => a]
Bonus question 2
Do we have time to talk about how to write a Monad instance for DB?
Purity in the face of change
We've now seen several cases where phantom types and rank-2 types let us use the compiler to automatically prevent ourselves from writing bad code.
We can also use them to introduce safe, controlled mutation into our programs.
Sad face
A typical lament of a functional programmer:
• "Alas! Woe is me! Etc., etc.! There is no known purely functional algorithm for my problem that performs as well as this seductive imperative code!"
Of course, in the worst case, we can emulate a flat, mutable memory with a purely functional map, thus incurring only O(log n) of additional overhead.
Cake: having and eating
Enter the ST monad!
import Control.Monad.ST
This defines for us a function with a glorious rank-2 type:
>> :t runST
runST :: (forall s. ST s a) -> a
Since we've only just been introduced to rank-2 types, we know exactly what this implies:
• What happens in the ST monad stays in the ST monad.
• Nevertheless, we can obtain a pure result when we run an action in this monad. That's an exciting prospect!
Mutable references, ST style
The STRef type gives us the same mutable references as IORef, but in the ST monad.
import Control.Monad.ST
import Data.STRef
whee :: Int -> ST s Int
whee z = do
r <- newSTRef z
modifySTRef r (+1)
readSTRef r
Let's try this in ghci:
>> runST (whee 1)
2
Thanks to chaining of the universally quantified s, there is no way for an STRef to escape from the ST monad, save by the approved route of reading its current value with readSTRef.
newSTRef :: a -> ST s (STRef s a)
readSTRef :: STRef s a -> ST s a
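Putting runST and STRef together, here is a sketch of a function that is pure from the outside but imperative inside; the universally quantified s is what keeps the reference from escaping:

```haskell
import Control.Monad (forM_)
import Control.Monad.ST
import Data.STRef

-- An imperative sum, invisible to callers: sumST is an ordinary
-- pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0          -- mutable accumulator, private to this action
  forM_ xs $ \x ->
    modifySTRef acc (+ x)    -- imperative update
  readSTRef acc              -- the only value that gets out

-- >> sumST [1..10]
-- 55
```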
Arrays and vectors
For working with large collections of uniform data, the usual representation in most languages is an array.
The longtime standard for working with arrays in Haskell is the Array type, from the array package, but I don't like it: it has an API that is simultaneously bizarre, too general, and puny.
I much prefer its modern cousin, the vector package:
• vector provides a vastly richer API than array.
• A Vector is one-dimensional and indexed by Ints counting from zero, so it's easy to reason about.
• An Array is indexed by an instance of the Ix class, can have arbitrary bounds, and makes my brain hurt.
Families and flavours of vectors
The vector package provides two "flavours" of vector type:
• Vector types are immutable.
• MVector types can be modified in either the ST or IO monad, and cannot be read by purely functional code.
Within these flavours, there are two "families" of vector type:
• Unboxed vectors are tightly packed in contiguous memory. They are very fast, but it is only possible to create unboxed vectors of certain types, and an unboxed vector can't store thunks.
• Normal vectors are boxed, just like ordinary Haskell values. Any value can be stored in a plain old vector, at the cost of an additional level of indirection.
We can thus have an immutable unboxed vector, a mutable boxed vector, and so on.
Mutable vectors in action
The classic Haskell implementation of a "quicksort":
import Data.List (partition)
qsort (p:xs) = qsort lt ++ [p] ++ qsort ge
where (lt,ge) = partition (<p) xs
qsort _ = []
This isn't really a quicksort, because it doesn't operate in-place.
We can apply our newfound knowledge to this problem:
import qualified Data.Vector.Unboxed.Mutable as V
import Control.Monad.ST (ST)
quicksort :: V.MVector s Int -> ST s ()
quicksort vec = recur 0 (V.length vec - 1)
{- ... -}
The recursive step
recur left right
| left >= right = return ()
| otherwise = do
idx <- partition left right (left + (right-left) `div` 2)
recur left (idx-1)
recur (idx+1) right
Partitioning the vector
(Remember, vec is in scope here.)
partition left right pivotIdx = do
pivot <- V.read vec pivotIdx
V.swap vec pivotIdx right
let loop i k
| i == right = V.swap vec k right >>
return k
| otherwise = do
v <- V.read vec i
if v < pivot
then V.swap vec i k >>
loop (i+1) (k+1)
else loop (i+1) k
loop left left
From immutable to mutable, and back
We can even use this in-place sort to efficiently perform an in-place sort of an immutable array!
Our building blocks:
thaw :: Vector a -> ST s (MVector s a)
create :: (forall s. ST s (MVector s a)) -> Vector a
• thaw creates a new mutable vector, and copies the contents of the immutable vector into it.
• create runs an ST action that returns a mutable vector, then "freezes" its result to be immutable and hence usable in pure code.
import qualified Data.Vector.Unboxed as U
vsort :: U.Vector Int -> U.Vector Int
vsort v = U.create $ do
vec <- U.thaw v
quicksort vec
return vec
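For reference, here are the pieces assembled into one compilable module (a sketch; it assumes the vector package is installed):

```haskell
import qualified Data.Vector.Unboxed as U
import qualified Data.Vector.Unboxed.Mutable as V
import Control.Monad.ST (ST)

-- In-place quicksort over a mutable unboxed vector.
quicksort :: V.MVector s Int -> ST s ()
quicksort vec = recur 0 (V.length vec - 1)
  where
    recur left right
      | left >= right = return ()
      | otherwise = do
          idx <- partition left right (left + (right - left) `div` 2)
          recur left (idx - 1)
          recur (idx + 1) right
    partition left right pivotIdx = do
      pivot <- V.read vec pivotIdx
      V.swap vec pivotIdx right      -- park the pivot at the right end
      let loop i k
            | i == right = V.swap vec k right >> return k
            | otherwise = do
                v <- V.read vec i
                if v < pivot
                  then V.swap vec i k >> loop (i + 1) (k + 1)
                  else loop (i + 1) k
      loop left left

-- Pure interface: thaw a copy, sort it in place, freeze the result.
vsort :: U.Vector Int -> U.Vector Int
vsort v = U.create $ do
  vec <- U.thaw v
  quicksort vec
  return vec
```

For example, vsort (U.fromList [3,1,4,1,5]) sorts a private mutable copy and returns an immutable sorted vector.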
Mutability, purity, and determinism
The big advantage of the ST monad is that it gives us the ability to efficiently run computations that require mutability, while both the inputs to and results of our computations remain pure.
In order to achieve this, we sacrifice some power:
• We can't run arbitrary IO actions. No database accesses, no filesystem, etc.
• Other potential sources of nondeterminism (e.g. threads) are thus also off limits.
Originally, this lecture was supposed to be all about the joys of lazy evaluation, but we hijacked much of our time to serve other purposes.
I'm going to talk a little bit about it anyway.
In a minute.
A digression
How can we use random numbers to approximate the value of π?
• Take two random numbers, x and y, on the interval [0,1]
• Add their squares: r=x^2+y^2
• We have a π/4 probability of r≤1
What can we do with this knowledge?
Purely functional random numbers
Haskell supplies a random package that we can use in a purely functional setting.
class Random a where
random :: RandomGen g => g -> (a, g)
class RandomGen g where
next :: g -> (Int, g)
split :: g -> (g, g)
The RandomGen class is a building block: it specifies an interface for a generator that can generate uniformly distributed pseudo-random Ints.
There is one default instance of this class:
data StdGen {- opaque -}
instance RandomGen StdGen
The Random class specifies how to generate a pseudo-random value of some type, given the random numbers generated by a RandomGen instance.
Quite a few common types have Random instances.
• For Int, the instance will generate any representable value.
• For Double, the instance will generate a value in the range [0,1].
Generators are pure
Since we want to use a PRNG in pure code, we obviously can't modify the state of a PRNG when we generate a new value.
This is why next and random return a new state for the PRNG every time we generate a new pseudo-random value.
Throwing darts at the board
Here's how we can generate a guess at x^2+y^2:
guess :: (RandomGen g) => (Double,g) -> (Double,g)
guess (_,g) = (z, g'')
where z = x^2 + y^2
(x, g') = random g
(y, g'') = random g'
Note that we have to hand back the final state of the PRNG along with our result!
If we handed back g or g' instead, our numbers would either be all identical or disastrously correlated (every x would just be a repeat of the previous y).
Global state
We can use the getStdGen function to get a handy global PRNG state:
getStdGen :: IO StdGen
This does not modify the state, though. If we use getStdGen twice in succession, we'll get the same result each time.
To be safe, we should update the global PRNG state with the final PRNG state returned by our pure code:
setStdGen :: StdGen -> IO ()
Ugh - let's split!
Calling getStdGen and setStdGen from ghci is a pain, so let's write a combinator to help us.
Remember that split method from earlier?
class RandomGen g where
split :: g -> (g, g)
This "forks" the PRNG, creating two children with different states.
The hope is that the states will be different enough that pseudo-random values generated from each will not be obviously correlated.
withGen :: (StdGen -> a) -> IO a
withGen f = do
g <- getStdGen
let (g',g'') = split g
setStdGen g'
return (f g'')
Living in ghci
Now we can use our guess function reasonably easily.
>> let f = fst `fmap` withGen (guess . ((,) 0))
>> f
>> f
Let's iterate
Here's a useful function from the Prelude:
iterate :: (a -> a) -> a -> [a]
iterate f x = x : iterate f (f x)
Obviously that list is infinite.
Let's use iterate and guess, and as much other Prelude machinery as we can think of, to write a function that can approximate π.
By the way, in case you don't recognize this technique, it's a famous example of the family of Monte Carlo methods.
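Here is one way the finished estimator might look (a sketch assuming the random package; approxPi and its sample count are my names, not part of the lecture):

```haskell
import System.Random

-- As before: each step produces x*x + y*y for fresh x, y in [0,1),
-- threading the generator state through.
guess :: RandomGen g => (Double, g) -> (Double, g)
guess (_, g) = (x * x + y * y, g'')
  where (x, g')  = random g
        (y, g'') = random g'

-- Fraction of darts landing inside the quarter circle, times 4.
approxPi :: RandomGen g => Int -> g -> Double
approxPi n g = 4 * fromIntegral hits / fromIntegral n
  where zs   = take n (tail (map fst (iterate guess (0, g))))
        hits = length (filter (<= 1) zs)

-- >> approxPi 100000 (mkStdGen 42)
-- (something close to 3.14)
```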
Where's the connection to laziness?
What aspects of laziness were important in developing our solution?
[FOM] Intermediate Turing Degrees
Stephen G Simpson simpson at math.psu.edu
Tue Aug 31 11:30:56 EDT 2010
Merlin Carl writes:
it is well known that there's a rich structure of Turing degrees
between 0 and 0`. However, [...] I have yet neither seen
intermediate degrees occurring in other disciplines of mathematics
nor corresponding to an application. [...]
Some years ago I was a principal participant in an extensive FOM
discussion of the role of intermediate r. e. Turing degrees and
priority arguments in what was then being called "applied
computability theory". See for instance:
Unfortunately, although the discussion was interesting and promising
from a scientific viewpoint, it became impossible to continue on the
FOM list. Instead I have continued the discussion in another way, by
publishing several relevant research articles:
Mass problems and randomness (BSL, 2005)
An extension of the recursively enumerable Turing degrees (JLMS, 2007)
Mass problems and almost everywhere domination (MLQ, 2007)
Some fundamental issues concerning degrees of unsolvability (CPOI II, 2008)
Mass problems and hyperarithmeticity (JML, 2008)
Mass problems and intuitionism (NDJFL, 2008)
Mass problems and measure-theoretic regularity (BSL, 2009)
All of these articles are available on my web page.
Name: Stephen G. Simpson
Affiliation: Pennsylvania State University
Research interests: foundations of mathematics, mathematical logic
Web page: http://www.math.psu.edu/simpson/
More information about the FOM mailing list
Darby, PA Prealgebra Tutor
Find a Darby, PA Prealgebra Tutor
...I have always spent my time tinkering with the settings in order to maximize the efficiency with which a computer can operate. So, whether you need help setting up your wireless network,
setting your desktop background, or configuring a new printer, I can do pretty much anything! I worked as a civil engineer, and needed to use my knowledge of geometry nearly every day.
21 Subjects: including prealgebra, reading, calculus, physics
...I enjoy working with teens, and I generally get along pretty well with them. Although, adults are okay too (they tend to ask better questions). So if you are someone in need of tutoring in
math and/or physics (and there is no shame in asking for extra help), then please feel free to contact me. ...
16 Subjects: including prealgebra, English, calculus, physics
...Students may also come to my home if easier. I look forward to hearing from you! I am a certified teacher in Elementary Education, grades Kindergarten through 6th grade. I have taught several
grades throughout my 15 years of teaching.
17 Subjects: including prealgebra, reading, writing, algebra 1
...As a professional, I have taught lessons and strategies to other teachers, and have held workshops on the college process and college essay help. I have taught at the middle school and high
school level (currently at a magnet high school teaching advanced, mentally gifted and AP classes) as well...
17 Subjects: including prealgebra, reading, writing, English
...I am very patient, compassionate, and dedicated to helping each student achieve their goals in their respective subjects. I am comfortable tutoring a wide range of ages from middle school
through college undergraduate students. I am confident that I can help you to excel in whichever area you feel that you need help in.
11 Subjects: including prealgebra, biology, algebra 1, grammar
Related Darby, PA Tutors
Darby, PA Accounting Tutors
Darby, PA ACT Tutors
Darby, PA Algebra Tutors
Darby, PA Algebra 2 Tutors
Darby, PA Calculus Tutors
Darby, PA Geometry Tutors
Darby, PA Math Tutors
Darby, PA Prealgebra Tutors
Darby, PA Precalculus Tutors
Darby, PA SAT Tutors
Darby, PA SAT Math Tutors
Darby, PA Science Tutors
Darby, PA Statistics Tutors
Darby, PA Trigonometry Tutors
[SciPy-user] First few eigenvectors of a numarray array
Stuart Murdock s.e.murdock at soton.ac.uk
Thu Sep 2 14:50:57 CDT 2004
Hi Dave
David Grant wrote:
> When you say "top 10" do you mean the eigenvalues with the largest or
> smallest values?
Primarily I would be concerned with the eigenvectors associated with the
largest eigenvalues as presently
I am interested in Principal Component Analysis of biomolecular
simulation trajectories. There are ways to approximate
the top eigenvector / eigenvalue, then get the next and so on but I was
wondering if there were any packages with those
types of algorithms already implemented.
> I would also be interested in knowing if there is any mathematical way
> of doing this. Sometimes for example in molecular simulations you
> only want to calculate the ground state energy, for example, and you
> don't care about the rest...
> Dave
> Stuart Murdock wrote:
>> Hi
>> I have a 10000 by 10000 square numarray array which I have obtained
>> using numarray. I need to obtain the first 10 eigenvalues and
>> eigenvectors of this so I dont want to have to calculate all
>> eigenvectors of the matrix. Is anyone aware of any pythonic packages
>> which
>> can calculate, lets say, the 10 eigenvectors corresponding to the top
>> 10 eigenvalues of a numarray array.
>> There are a few functions which calculate all of the eigenvectors e.g.
>> eig
>> eigenvectors
>> but I only want to calculate the top few.
>> Thanks
>> Stuart
>SciPy-user mailing list
>SciPy-user at scipy.net
Stuart Murdock Ph.D,
Research Fellow,
Dept. of Chemistry / E-Science,
University of Southampton,
Highfield, Southampton,
SO17 1BJ, United Kingdom
More information about the SciPy-user mailing list
NAG Library
C09CDF computes the inverse one-dimensional multi-level discrete wavelet transform (DWT). This routine reconstructs data from (possibly filtered or otherwise manipulated) wavelet transform
coefficients calculated by
from an original set of data. The initialization routine
must be called first to set up the DWT options.
C09CDF performs the inverse operation of
. That is, given a set of wavelet coefficients, computed by
using a DWT as set up by the initialization routine
, on a real data array of length
, C09CDF will reconstruct the data array
, for
$\mathit{i}=1,2,\dots ,n$
, from which the coefficients were derived. If the original input dataset is level
, then it is possible to terminate reconstruction at a higher level by specifying fewer than the number of levels used in the call to
. This results in a partial reconstruction.
If on entry
, explanatory error messages are output on the current error message unit (as defined by
The accuracy of the wavelet transform depends only on the floating point operations used in the convolution and downsampling and should thus be close to machine precision. | {"url":"http://www.nag.com/numeric/FL/nagdoc_fl24/html/C09/c09cdf.html","timestamp":"2014-04-17T07:09:34Z","content_type":null,"content_length":"19798","record_id":"<urn:uuid:bcc8be17-34e6-4412-8545-a95acb028b6c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00564-ip-10-147-4-33.ec2.internal.warc.gz"} |
Floating Point/Floating Point Formats
Floating-Point Formats
There are 4 different formats of floating point number representation in the IEEE 754 standard:
Single
Single, Extended-Precision
Double
Double, Extended-Precision
Single precision floating point numbers are 32 bits wide. The first bit (bit 31, the MSB) is a sign bit, the next 8 bits (bits 30-23) are the exponent, and the remaining 23 bits are for the
significand. Note that even though 23 bits are stored for the significand, the precision($p$) is actually 24 bits. This is a trick made possible by a normalized floating point system with $b = 2$.
The exponent is biased by 127, so that negative exponents can be expressed.
Double-precision numbers are 64 bits wide. The MSB (bit 63) is the sign bit. The next 11 bits (bits 62-52) are the exponent, and the rest of the bits (bits 51-0) are for the significand. Again, the
precision is actually 53 bits (not 52) because of the same normalization trick.
Format Width Precision Exponent Significand
Single 32 bits 23 bits bits 30-23 bits 22-0
Double 64 bits 52 bits bits 62-52 bits 51-0
Last modified on 8 October 2006, at 04:06 | {"url":"http://en.m.wikibooks.org/wiki/Floating_Point/Floating_Point_Formats","timestamp":"2014-04-16T16:19:13Z","content_type":null,"content_length":"15529","record_id":"<urn:uuid:47f45e05-ee35-4d22-b99b-0c6fee529ade>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Recent Homework Questions About Statistics
Post a New Question | Current Questions
3. The formula for finding sample standard deviation is ________________. a. s = Σx … x̄² b. σ² = Σ(X−μ)²/N c. s² = (Σ(X−...
Friday, December 20, 2013 at 7:14pm
A student union wants to obtain a 90% confidence interval for the proportion of full time students who hold a part-time job for 20 hours or more per week. For this purpose, they want to interview a
sample of full-time students. They want a margin of error to be at ...
Friday, December 20, 2013 at 12:33pm
Friday, December 20, 2013 at 11:51am
Friday, December 20, 2013 at 11:50am
By definition, a Type I error is ____. a. rejecting a false H1 b. rejecting a false H0 c. rejecting a true H0 d. rejecting a true H1
Friday, December 20, 2013 at 8:35am
Which of the following is an accurate definition of a Type II error? a. rejecting a false null hypothesis b. rejecting a true null hypothesis c. failing to reject a false null hypothesis d. failing
to reject a true null hypothesis
Friday, December 20, 2013 at 8:33am
What is the consequence of a Type II error? a. concluding that a treatment has an effect when it really does b. concluding that a treatment has no effect when it really has no effect c. concluding
that a treatment has no effect when it really does d. concluding that a ...
Friday, December 20, 2013 at 8:32am
MATH Statistics
A student union wants to obtain a 90% confidence interval for the proportion of full time students who hold a part-time job for 20 hours or more per week. For this purpose, they want to interview a
sample of full-time students. They want a margin of error to be at ...
Friday, December 20, 2013 at 1:07am
Margin of error : 0.372 95 % confidence interval. (15.228, 15.972)
Thursday, December 19, 2013 at 9:01pm
For these 90 times the mean was 15.60 minutes and the standard deviation s = 1.80 minutes. Find the margin of error (round to 2 decimal places) and the 95% confidence interval for Julia's average running time.
Thursday, December 19, 2013 at 8:54pm
Choose d. None of the other choices are correct. Choice a: should read change alpha from .01 to .05. Choice b: should read change from a two-tailed test to a one-tailed test. Choice c: should read
change sample size n = 25 to n = 100.
Thursday, December 19, 2013 at 5:33pm
Thank you for you held once again your awesome.
Thursday, December 19, 2013 at 4:57pm
When is there a risk of a Type II error? a. whenever H0 is rejected b. whenever H1 is rejected c. whenever the decision is "fail to reject H0" d. The risk of a Type II error is independent of the
decision from a hypothesis test.
Thursday, December 19, 2013 at 4:01pm
Which of the following will increase the power of a statistical test? a. Change α from .05 to .01. b. Change from a one-tailed test to a two-tailed test. c. Change the sample size from n = 100 to n = 25. d. None of the other options will increase power.
Thursday, December 19, 2013 at 4:01pm
Find table in the back of your statistics text labeled something like "areas under normal distribution" to find the proportion/probability (.25) and its Z score. Insert Z and other data into equation
below and calculate. Z = (score-mean)/SD
Thursday, December 19, 2013 at 1:27pm
probability & statistics
m = (12-5)/(11-(-9)) = 7/20 y = mx + b 5 = -9(7/20) + b 5 = -63/20 + b 163/20 = b y-intercept (0, 163/20)
Wednesday, December 18, 2013 at 5:57pm
Solve the problem. Round to the nearest tenth unless indicated otherwise. The amount of rainfall in January in a certain city is normally distributed with a mean of 4.3 inches and a standard
deviation of 0.3 inches. Find the value of the quartile .
Wednesday, December 18, 2013 at 5:44pm
probability & statistics
Find the slope and the y-intercept for the line that passes through (-9,5) (11,12)
Wednesday, December 18, 2013 at 4:33pm
A national opinion poll found that 44% of all American adults agree that parents should be given vouchers good for education at any public or private school of their choice. Suppose that in fact the
population proportion who feel this way is p=0.44. A) Many opinion polls have ...
Tuesday, December 17, 2013 at 7:24pm
How would you check the Z table for the first problem?
Tuesday, December 17, 2013 at 7:09pm
Statistics 3
See your latest post.
Tuesday, December 17, 2013 at 7:08pm
Statistics 4
See your later post.
Tuesday, December 17, 2013 at 7:07pm
statistics 4.
See your later post.
Tuesday, December 17, 2013 at 7:00pm
statistics 3.
See your later post.
Tuesday, December 17, 2013 at 7:00pm
statistics 4.
Here are a few ideas to get you started. First problem: Try a one-sample z-test. Formula: z = (sample mean - population mean)/(standard deviation divided by the square root of sample size) Your data:
sample mean = 2.6 population mean = 2.5 standard deviation = 0.5 sample size...
Tuesday, December 17, 2013 at 6:42pm
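The suggested one-sample z-test can be sketched in a few lines of Python; the numbers come from the question, and reading the claim as Ha: mu < 2.5 is one interpretation (the z rather than t statistic follows the posted hint):

```python
from math import sqrt
from statistics import NormalDist

# Figures from the question: claimed mu < 2.5 s, sample mean 2.6 s,
# s = 0.5, n = 40.  Treating Ha as "mu < 2.5":
mean, mu0, sd, n = 2.6, 2.5, 0.5, 40
z = (mean - mu0) / (sd / sqrt(n))
p = NormalDist().cdf(z)   # left-tail p-value for Ha: mu < 2.5
print(round(z, 2), round(p, 3))
```

Since z is positive (the sample mean lies above the claimed bound), the p-value is large and the data give no support for the "less than 2.5 seconds" claim.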
statistics 3.
Here are a few ideas to get you started. First problem: Use a formula to find sample size. Here is one for proportions: n = [(z-value)^2 * p * q]/E^2 ... where n = sample size, z-value is 1.645 (90%
confidence interval), p = .5 (when no value is stated), q = 1 - p, ^2 means ...
Tuesday, December 17, 2013 at 6:26pm
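The sample-size formula above evaluates directly; a minimal sketch (the 0.05 margin of error is an illustrative value, since the question's figure is cut off in the post):

```python
from math import ceil

# n = z**2 * p * q / E**2  (z = 1.645 for 90% confidence; p = 0.5 when no
# prior estimate is available; E is the margin of error, assumed 0.05 here).
def sample_size(z, E, p=0.5):
    return ceil(z**2 * p * (1 - p) / E**2)

print(sample_size(1.645, 0.05))   # 271
```

Rounding up with `ceil` guarantees the margin-of-error target is met; 271 is the standard textbook answer for a 90% interval with E = 0.05.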
statistics 3.
A student union wants to obtain a 90% confidence interval for the proportion of full time students who hold a part-time job for 20 hours or more per week. For this purpose, they want to interview a
sample of full-time students. They want a margin of error to be at ...
Tuesday, December 17, 2013 at 5:23pm
statistics 4.
The manufacturer of Porshe 918 Spyder car claims that their car can go from 0 mph to 60 mph speed in less than 2.5 seconds. To test this claim, a sample of 40 new cars of this type were driven and
their average 0-60 mph time was 2.6 seconds with a standard deviation of 0.5 ...
Tuesday, December 17, 2013 at 5:23pm
statistics-PLEASE HELP quick
Please only post your questions once. Repeating posts will not get a quicker response. In addition, it wastes our time looking over reposts that have already been answered in another post. Thank you.
Tuesday, December 17, 2013 at 2:48pm
This is probably too late, but does "etc." include the mean and standard deviation?
Tuesday, December 17, 2013 at 2:47pm
a) Ho: mean heights = Ha: mean heights ≠ b) SEm = SD/√n c) Z = (score-mean)/SEm Find table in the back of your statistics text labeled something like "areas under normal distribution" to find the
proportion/probability related to the Z score. d) Once you ...
Tuesday, December 17, 2013 at 2:39pm
statistics 3.
A student union wants to obtain a 90% confidence interval for the proportion of full time students who hold a part-time job for 20 hours or more per week. For this purpose, they want to interview a
sample of full-time students. They want a margin of error to be at ...
Tuesday, December 17, 2013 at 2:20pm
statistics 4.
The manufacturer of Porshe 918 Spyder car claims that their car can go from 0 mph to 60 mph speed in less than 2.5 seconds. To test this claim, a sample of 40 new cars of this type were driven and
their average 0-60 mph time was 2.6 seconds with a standard deviation of 0.5 ...
Tuesday, December 17, 2013 at 2:19pm
proportion = 112/250 = .448 z = (.448 - .47) / sqrt(.47*.53/250) = -0.70 p-value = .2420 Since the p-value is greater than .05, do not reject the null hypothesis. The claim is not supported.
Monday, December 16, 2013 at 11:17pm
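The posted one-proportion test can be reproduced with the standard normal CDF from Python's statistics module:

```python
from math import sqrt
from statistics import NormalDist

n, x, p0 = 250, 112, 0.47
p_hat = x / n                                  # 0.448
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
p_value = NormalDist().cdf(z)                  # left tail: claim is "less than 47%"
print(round(z, 2), round(p_value, 3))
```

This matches the answer above (z rounds to -0.70, p-value about 0.24 > .05), so the null hypothesis is not rejected.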
Use a 0.05 level of significance to test a claim that a true proportion is less than 47% when n = 250 and x = 112
Monday, December 16, 2013 at 11:07pm
statistics-PLEASE HELP quick
A family has four children and we assume that births occur with equal frequency for both sexes. Construct a sample space for all possible family arrangements allowing for order of birth to count. a)
How many different arrangements are possible? b)find P (three are boys)= c)...
Monday, December 16, 2013 at 11:05pm
Z = (score - mean)/sd z = (126.2 - 135)/4 z = -8.8/4 = -2.2 z-table
Monday, December 16, 2013 at 11:04pm
4. Are women getting taller? In one state, the average height of a woman aged 20 years or older is 63.7 inches in 1990. A random sample of 100 women is taken to test if women s mean height today is
different from 1990. The mean height of the 100 surveyed women is 63.9 ...
Monday, December 16, 2013 at 10:13pm
Statistics 4
The manufacturer of Porshe 918 Spyder car claims that their car can go from 0 mph to 60 mph speed in less than 2.5 seconds. To test this claim, a sample of 40 new cars of this type were driven and
their average 0-60 mph time was 2.6 seconds with a standard deviation of 0.5 ...
Monday, December 16, 2013 at 9:42pm
The breaking strength (in pounds) of a certain new synthetic is normally distributed, with a mean of 135 and a variance of 16. The material is considered defective if the breaking strength is less
than 126.2 pounds. What is the probability that a single, randomly selected ...
Monday, December 16, 2013 at 9:39pm
Statistics 3
A student union wants to obtain a 90% confidence interval for the proportion of full time students who hold a part-time job for 20 hours or more per week. For this purpose, they want to interview a
sample of full-time students. They want a margin of error to be at ...
Monday, December 16, 2013 at 9:38pm
I have done an entire project based on football scores from a certain team over the last 3 years and I'm not sure what information I can get from my answers. Ive done a frequency distribution table,
Ogive, Histogram,cumulative frequency table, frequency polygon, etc. and I...
Monday, December 16, 2013 at 7:54pm
statistics-PLEASE HELP quick
I understand most of the formulas however Im not sure what the answers tell me. I did a frequency distribution table for all points scored by a football team over 4 years, but I don't understand what
my answers mean and I don't want to look stupid in front of my class...
Monday, December 16, 2013 at 7:47pm
Note: For the second problem, we use the normal approximation to the binomial distribution when finding the mean and standard deviation.
Monday, December 16, 2013 at 6:48pm
Use z-scores. First problem: z = (x - mean)/(sd/√n) Find two z-scores: z = (2.4 - 2.5)/(0.2/√50) = ? z = (2.8 - 2.5)/(0.2/√50) = ? Finish the calculations. Next, check a z-table to find the
probability between the two scores. Find mean and standard deviation...
Monday, December 16, 2013 at 6:46pm
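A sketch of the first calculation (the iron-rod problem: population mean 2.5, sd 0.2, n = 50), using the standard normal CDF in place of a printed z-table:

```python
from math import sqrt
from statistics import NormalDist

mu, sd, n = 2.5, 0.2, 50
se = sd / sqrt(n)                 # standard error of the sample mean
z_lo = (2.4 - mu) / se
z_hi = (2.8 - mu) / se
prob = NormalDist().cdf(z_hi) - NormalDist().cdf(z_lo)
print(round(z_lo, 2), round(z_hi, 2), round(prob, 4))
```

The upper bound is so far above the mean that essentially all of the probability comes from the lower tail: the chance the sample mean falls between 2.4 and 2.8 is close to 1.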
Statistics 2
Formulas: CI90 = mean ± 1.645 (sd/√n) CI95 = mean ± 1.96 (sd/√n) For the first part, substitute the mean, standard deviation, and sample size into the appropriate formulas to determine the confidence
intervals. This will help you answer the questions ...
Monday, December 16, 2013 at 6:32pm
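The two interval formulas translate directly into code. The question as posted omits the sample standard deviation, so the 0.7 used below is only a placeholder value, not data from the problem:

```python
from math import sqrt

def z_interval(mean, sd, n, z):
    half = z * sd / sqrt(n)
    return (mean - half, mean + half)

# Illustrative only: the posted question omits the sample SD, so the
# 0.7 here is a placeholder value, not data from the problem.
lo90, hi90 = z_interval(98.249, 0.7, 130, 1.645)
lo95, hi95 = z_interval(98.249, 0.7, 130, 1.96)
print(round(lo90, 3), round(hi90, 3), round(lo95, 3), round(hi95, 3))
```

As the formulas predict, the 95% interval is wider than the 90% interval, since 1.96 > 1.645 while the standard error is unchanged.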
statistics in psychology
Using an online calculator, we have 4 data pairs (x,y): (1,10) (1,8) (2,4) (4,-2) Regression equation: y = a + bx y = 12.3 - 3.67x This is the line of best fit. You can plot the scatter graph using
the points above (x,y) to see how close the points are to the regression line...
Monday, December 16, 2013 at 6:23pm
statistics in psychology
12. Four research participants take a test of manual dexterity (high scores mean better dexterity) and an anxiety test (high scores mean more anxiety). The scores are as follows. Person Dexterity
Anxiety 1 1 10 2 1 8 3 2 4 4 4 2 (a) Scatter Graph: Scatter and Regression
Monday, December 16, 2013 at 3:40pm
Statistics 2
The body temperature of healthy human beings can be assumed to have a normal distribution with mean and standard deviation . Body temperature measurements taken from a sample of 130 healthy
individuals shows the sample mean temperature to be 98.249O F and ...
Monday, December 16, 2013 at 1:21pm
A manufacturing facility produces iron rods whose mean diameter is 2.5 inches with a standard deviation of 0.2 inch. If a sample of 50 metal rods produced by the facility were randomly selected, what
is the probability that the mean of these rods will lie between 2.4 inches ...
Monday, December 16, 2013 at 12:53pm
I will start it for you. Most numbers can be obtained in more than one way. 2 and 12, the only no pay numbers, can only be obtained one way. 3 = 2,1 or 1,2 = 2 ways 4 = 2,2 or 3,1 or 1,3 = 3 ways 5 =
3,2 or 2,3 or 1,4 or 4,1 = 4 ways etc.
Monday, December 16, 2013 at 12:31pm
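The counting started above can be finished by brute force, enumerating all 36 equally likely rolls:

```python
from collections import Counter
from itertools import product

# Tally how many of the 36 equally likely (die1, die2) outcomes give each sum.
ways = Counter(a + b for a, b in product(range(1, 7), repeat=2))
for total in range(2, 13):
    print(total, ways[total])
```

This confirms the pattern in the answer: 2 and 12 each arise one way, 3 arises two ways, 4 three ways, and so on up to six ways for a sum of 7, then back down symmetrically.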
No, unless you have more data that was not posted.
Monday, December 16, 2013 at 12:03pm
You are so welcome
Sunday, December 15, 2013 at 6:21pm
thank you!
Sunday, December 15, 2013 at 6:19pm
Sunday, December 15, 2013 at 6:18pm
so would it be true?
Sunday, December 15, 2013 at 6:15pm
The null hypothesis will be that all population means are equal, the alternative hypothesis is that at least one mean is different.
Sunday, December 15, 2013 at 6:10pm
True or False: For ANOVA you will reject the null hypothesis is the variability within a sample is greater than the variability among samples?
Sunday, December 15, 2013 at 6:02pm
3. Assuming α = 0.05 and that the population is normally distributed, is there enough evidence to support a claim that the mean amount of servings teenage females drink is less than or equal to 1.5
servings per day?
Saturday, December 14, 2013 at 7:57pm
In a game you roll two fair dice. If the sum of the two numbers obtained is 3,4,9,10 or 11 you win $20. If the sum is 5,6,7 or 8 you pay $20. However if the scores on the dice are the same no one is
required to pay. (a) Construct a probability distribution table for the event...
Saturday, December 14, 2013 at 4:30pm
I assume this is for a distribution of means. Z = (score-mean)/SEm SEm = SD/√n Plug in data and solve for Z.
Saturday, December 14, 2013 at 12:11pm
Sometimes we can answer specific questions if a math or engineering or economics or science teacher is around who does statistics.
Friday, December 13, 2013 at 9:44pm
9. Explain the transformation needed to convert the following data to a linear data set. {(1, 0.98), (2, 1.39), (3, 1.71), (4, 1.98), (5, 2.22), (6, 2.43)} (6 points)
Friday, December 13, 2013 at 7:15pm
For a population with a mean of µ = 60 and standard deviation of σ = 24, find the z-score corresponding to each of the following samples a. M = 63 for a sample of n = 16 scores b. M = 63 for a sample of n = 36 scores c. M = 63 for a sample of n = 64 scores
Friday, December 13, 2013 at 7:12pm
is this statement true or false.
Friday, December 13, 2013 at 6:42pm
Variance =38.2
Friday, December 13, 2013 at 6:26pm
what is the variance of 18, 16, 12, 2, 11
Friday, December 13, 2013 at 6:12pm
Statistics (?)
What is your question?
Friday, December 13, 2013 at 1:29pm
SEm = SD/√n From this equation, you can see why it is false.
Friday, December 13, 2013 at 1:29pm
Standard deviation (SD) = square root of variance Z = (score-mean)/SD Find table in the back of your statistics text labeled something like "areas under normal distribution" to find the proportion/
probability (.05) related to the Z score. Insert the values into ...
Friday, December 13, 2013 at 12:44pm
Z = (score-mean)/SEm SEm = SD/√n Assuming a normal distribution, find table in the back of your statistics text labeled something like "areas under normal distribution" to find the proportion/
probability (.05) and its Z score. Insert Z score and other data into...
Friday, December 13, 2013 at 12:29pm
Thank you for all your help.
Friday, December 13, 2013 at 10:22am
86.2? :D
Friday, December 13, 2013 at 4:06am
Scores on a certain exam are normally distributed with a mean of 78 and a variance of 25. What is the score obtained by the top 5% of the students? I do not understand the 5% part. What does the tail
look like on the bell-shaped graph?
Friday, December 13, 2013 at 4:03am
n1 = 25, n2 = 36, s1² = 50, s2² = 72, m1 = 100, m2 = 105
Thursday, December 12, 2013 at 11:35pm
As sample size increases, the value of the standard error also increases. Ans false
Thursday, December 12, 2013 at 10:28pm
math statistics
Mean = 0(.05) + 1(.12) + 2(.23) + 3(.3) + 4(.16) + 5(.1) + 6(.04) = ? E[X²] = 0²(.05) + 1²(.12) + 2²(.23) + 3²(.3) + 4²(.16) + 5²(.1) + 6²(.04) = ? Variance = E[X²] − mean²; standard deviation = sqrt(variance)
Thursday, December 12, 2013 at 10:21pm
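The arithmetic above can be carried out in a short script. Note an assumption: the probabilities as posted (0.5, .12, ...) sum to more than 1, so the values below use 0.05 in place of 0.5, with X running 0 through 6, which is the only nearby reading that sums to exactly 1:

```python
xs = [0, 1, 2, 3, 4, 5, 6]
# Assumed correction: the posted probabilities (0.5, .12, ...) sum to more
# than 1; 0.05 in place of 0.5 (and X running 0..6) makes them sum to 1.
ps = [0.05, 0.12, 0.23, 0.30, 0.16, 0.10, 0.04]

mean = sum(x * p for x, p in zip(xs, ps))
ex2 = sum(x * x * p for x, p in zip(xs, ps))
variance = ex2 - mean ** 2
sd = variance ** 0.5
print(round(mean, 2), round(variance, 4), round(sd, 4))  # 2.86 2.0604 1.4354
```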
True or False The mean for a sample of n = 4 scores has a standard error of 5 points. This sample was selected from a population with a standard deviation of σ = 20.
Thursday, December 12, 2013 at 10:19pm
True or False As sample size increases, the value of the standard error also increases.
Thursday, December 12, 2013 at 10:18pm
math statistics
the following table gives the number of camcorders sold on a given day in an electronics store X: 0, 1, 2, 3, 4, 5, 6 P(x): 0.05, .12, .23, .3, .16, .1, .04 find the mean, variance, and standard deviation of the daily sales of camcorders in the store.
Thursday, December 12, 2013 at 6:52pm
The sample size reduces variability of the distribution of means as it gets larger, but does not affect the expected value for the sample mean.
Thursday, December 12, 2013 at 3:16pm
Using P ≤ .05, if the probability the results would occur by chance is P > .05, you would accept the null hypothesis. By using any level of significance, if it equals or is less than that level, you
would be assuming that the results indicate significant differences. ...
Thursday, December 12, 2013 at 3:11pm
a) Use ratio and proportion... 500 tagged birds/n total birds = 20/215 by cross products: 20n = 500 * 215 20n = 107,500 n = 107,500 / 20 = 5,375 b) ? c) In early spring the recently hatched baby
birds are probably not available for capture, as they are still in the nest. ...
Thursday, December 12, 2013 at 2:35pm
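Part (a) above is the classic Lincoln-Petersen capture-recapture estimate; the cross-product solution in code:

```python
# Lincoln-Petersen estimate: tagged/N = tagged_recaptured/recaptured,
# so N = tagged * recaptured / tagged_recaptured.
tagged, recaptured, tagged_recaptured = 500, 215, 20
estimate = tagged * recaptured / tagged_recaptured
print(estimate)   # 5375.0
```

This assumes the proportion of tagged birds in the recapture sample matches the proportion in the whole population, which is why the timing concerns in part (c) matter.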
How is the rejection region defined and how is that related to the z-score and the p value? When do you reject or fail to reject the null hypothesis? Why do you think statisticians are asked to
complete hypothesis testing? Can you think of examples in courts, in medicine, or ...
Thursday, December 12, 2013 at 12:16am
Can someone help me do this problem? A group of environmentalist are trying to estimate the population of the state rid in clear view park. They capture a sample of 500 birds in early spring and tag
them. in august they capture 215 birds, of which 20 are tagged. a) use the ...
Thursday, December 12, 2013 at 12:12am
True or False According to the central limit theorem, the expected value for a sample mean becomes smaller, approaching zero, as the sample size approaches infinity.
Wednesday, December 11, 2013 at 9:39pm
True or False In order for the distribution of sample means to be normal, it must be based on samples of at least n = 30 scores.
Wednesday, December 11, 2013 at 9:39pm
Thank you for the help.
Wednesday, December 11, 2013 at 9:36pm
Victor or Junior, did you multiply my response to find the answer?
Tuesday, December 10, 2013 at 4:25pm
Formula: z = (x - mean)/(sd/√n) With your data: z = (76 - 70)/(18/√9) = 6/6 = 1 Choice: c
Tuesday, December 10, 2013 at 3:49pm
False. Should read "...the sampling distribution of the mean will be equal to the population mean." (Central Limit Theorem)
Tuesday, December 10, 2013 at 3:45pm
How about a? Check this.
Tuesday, December 10, 2013 at 2:59pm
Math :)
first, check the differences: 8.6 2.0 1.6 2.8 1.3 Doesn't look good, since there's no obvious slope. Is this a class in statistics, where you cover linear regression? The 62.5 looks like an outlier.
Tuesday, December 10, 2013 at 12:10pm
well, 3/4 of the scores were below 69. That leaves 1/4 above. What's 1/4 of 135?
Tuesday, December 10, 2013 at 11:54am
Multiple choice question that's hard to figure out and understand.(information provided) Ten members of a fraternity take a statistics course. Here are their scores on the first exam in the course:
61,74,47,60,62,63,65,79,55,85. The question states: In all, 135 students ...
Tuesday, December 10, 2013 at 11:52am
Multiple choice question that's hard to figure out and understand.(information provided) Ten members of a fraternity take a statistics course. Here are their scores on the first exam in the course:
61,74,47,60,62,63,65,79,55,85. The question states: In all, 135 students ...
Tuesday, December 10, 2013 at 11:51am
You have a lot of unnecessary data given to answer your question. .25 * 135 = ?
Tuesday, December 10, 2013 at 10:03am
Monday, December 9, 2013 at 10:05pm
Business Math and Statistics
I'll be concise: 60% of people in the survey had income greater than 25k, 40% under 25k. Further, in the same survey, 70% have exactly 2 cars, while 30% have a different number than 2. The probability that the residents own 2 cars IF income is over 25k is 80%. What is the probability of a ...
Monday, December 9, 2013 at 9:46pm
Ten members of a fraternity take a statistics course. Here are their scores on the first exam in the course. 61,74,47,60,62,63,65,79,55,85. In all 135 students took the exam. The third quartile for
all 135 scores was 69. How many students had scores higher than 69?
Monday, December 9, 2013 at 9:17pm
Post a New Question | Current Questions | {"url":"http://www.jiskha.com/math/statistics/?page=8","timestamp":"2014-04-18T18:40:18Z","content_type":null,"content_length":"36016","record_id":"<urn:uuid:117703b7-6da8-4841-b452-4e590414d218>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00624-ip-10-147-4-33.ec2.internal.warc.gz"} |
June 2008
The Plus teacher packages are designed to give teachers (and students) easy access to Plus content on a particular subject area. Most Plus articles go far beyond the explicit maths taught at school,
while still being accessible to someone doing A level maths. They put classroom maths in context by explaining the bigger picture — they explore applications in the real world, find maths in unusual
places, and delve into mathematical history and philosophy. We therefore hope that our teacher packages provide an ideal resource for students working on projects and teachers wanting to offer their
students a deeper insight into the world of maths.
Vectors and matrices
This teacher package brings together all Plus articles on vectors and matrices.
We've grouped the articles into three categories:
• Vectors and matrices in geometry — whether it's camouflage, computer movies, or simply getting from A to B, vectors are a great way of describing the space around us and even within us;
• Vectors and matrices in physics and biology — forces have a magnitude and a direction...
• Matrices in game theory and computer science — Need to control a virtual world, find the best strategy for world domination, or create artificial intelligence? Matrices are the best way to store
those vital bits of information.
We also present a list of relevant problems from our sister site NRICH.
Vectors and matrices in geometry
It's all in the detail — The computer animation used in movies and games is now so lifelike, it is very hard to believe that you are actually watching a surface built from simple shapes of triangles.
This article explains how mathematics helps bring these models to life.
Finding your way home without knowing where you are — Foraging ants have a hard life, embarking on long and arduous trips several times a day, until they drop dead from exhaustion. The trips are not
just long, they also follow complex zig-zag paths. So how do ants manage to find their way back home? And how do they manage to do so along a straight line? Their secret lies in a little vector maths.
Maths goes to the movies — How do you make a virtual being as life-like as Gollum in the Lord of the Rings? This article explores the maths behind computer-generated movies and games. Vector geometry
has a big part to play.
They never saw it coming — Most of us have heard of "stealth" — a technology used by the military to disguise craft from enemy radar. But nature's stealth fighters are not so well known — creatures
that use motion camouflaging to approach their prey undetected. This article looks at the vector mathematics behind the phenomenon.
Time and motion — Whatever is so wonderful about point B that makes all the people at point A want to get there? This article uses vectors to see how to get to places.
Getting into the picture — Using the mathematics of perspective, researchers are now able to produce three-dimensional reconstructions of the scenes depicted in famous paintings. With all those
sightlines and projections, vectors and matrices are an important tool.
Relating relativity — How do writers manage to commit the multi-dimensional world of their imagination to a two-dimensional page? This article explores how the vector geometry of projection can be
used to understand language.
Solving Symmetry — Mathematicians pin down symmetry, using matrices to help them.
Secrets from a bathroom floor — Tilings have adorned buildings from ancient Rome to the Islamic world, from Victorian England to colonial Mexico. But while it sometimes seems free from worldly
limitations, tiling is a very precise art, where not much can be left to chance. We can push and turn and wiggle, but if the maths is not right, it isn't going to tile.
Face to face — How would it feel to look in a mirror and see not your own reflection but instead how you would look as the opposite sex? Scientists use mathematical wizardry involving
high-dimensional vectors to produce gender reversed images of faces.
Vectors and matrices in physics and biology
Finding your way home without knowing where you are — Ants have tiny brains and appalling vision, yet they always manage to find their way home. Their secret lies in vector maths.
Understanding turbulence — The study of turbulence is used to understand a range of phenomena from the simple squirting of a jet of water to the activity of the Sun. The forces at work are described
by vectors.
Reconstructing the tree of life — At the heart of Darwin's theory of evolution lies a beautifully simple mathematical object: the evolutionary tree. In this article we look at how maths is used to
reconstruct and understand it.
The maths of climate change: the melting Arctic — The Arctic ice cap is melting fast and the consequences are grim. Mathematical modelling is key to predicting how much longer the ice will be around
and assessing the impact of an ice-free Arctic on the rest of the planet.
If you can't bend it, model it! — This article explores the aerodynamics of footballs and helps you perfect your free kick.
How to make a perfect plane — Two lines in a plane always intersect in a single point ... unless the lines are parallel. This annoying exception is constantly inserting itself into otherwise simple
mathematical statements. Burkard Polster and Marty Ross explain how to get around the problem.
Matrices in game theory and computer science
The amazing librarian— Google's Page Rank algorithm relies heavily on linear algebra - it's as impressive as it is elegant.
Matrix: Simulating the world part I and Matrix: Simulating the world part II— These two articles give hands-on guides to creating simple computer models of anything from flocks of birds to forest
fires. At the heart of it all sits one big omniscient matrix.
Matrix: Simulating the world Part II: cellular automata — Lewis Dartnell turns the universe into a matrix to model traffic, forest fires and sprawling cities.
Blast it like Beckham — What tactics should a soccer player use when taking a penalty kick? And what can the goalkeeper do to foil his plans? This article uses game theory to find the answers.
What computers can't do — This article looks at the life and work of wartime code-breaker Alan Turing. Find out what types of numbers we can't count and why there are limits on what can be achieved
with Turing machines.
Game theory and the Cuban missile crisis — This article uses the Cuban missile crisis to illustrate the theory of moves, which is not just an abstract mathematical model but one that mirrors the
real-life choices, and underlying thinking, of flesh-and-blood decision makers.
Try it yourself with our sister site NRICH
Here is a selection of relevant problems and articles from our sister site NRICH.
Vectors - what are they? — This article provides a summary of the elementary ideas about vectors usually met in school mathematics, describes what vectors are and how to add, subtract and multiply
them by scalars.
Multiplication of vectors — This article explores scalar and vector products.
Earth routes — Find the distance of the shortest air route at an altitude of 6000 metres between London and Cape Town given the latitudes and longitudes. A simple application of scalar products of vectors.
The use of maths in computer games — An account of how mathematics is used in computer games including geometry, vectors, transformations, 3D graphics, graph theory and simulations.
Quaternions and rotations — Find out how quaternions give a way of working with rotations in three-dimensional space.
Quaternions and reflections — See how four-dimensional quaternions involve vectors in three-dimensional space and how they give a simple algebraic method of working with reflections in planes in
three dimensions.
Thasan's flight — An aircraft flies on a bearing of 070 degrees at 350 km/hour with wind blowing at 40 km/hour from 340 degrees. Find the actual speed and bearing of the aircraft.
The matrix — Investigate the transformations of the plane given by some 2 by 2 matrices.
Reflect again — Investigate the matrix which gives a reflection of the plane in a given line.
Rots and refs — Use a little coordinate geometry, plane geometry and trig to see how matrices are used to work on transformations of the plane. | {"url":"http://plus.maths.org/content/comment/reply/4341","timestamp":"2014-04-16T19:20:12Z","content_type":null,"content_length":"40599","record_id":"<urn:uuid:ee3b37e9-ef69-4062-a4ce-ea58de733067>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00096-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Circumcenter of a Triangle
The circumcircle of a triangle is the circle that passes through each vertex of the triangle. The circumcenter of a triangle is the center of the circumcircle. The circumcenter is the intersection
of the three perpendicular bisectors of the sides of the triangle. If the vertices are only allowed to move around the circumcircle, then the circumcenter never changes position! Drag the locators
to move the vertices. | {"url":"http://demonstrations.wolfram.com/CircumcenterOfATriangle/","timestamp":"2014-04-18T03:01:20Z","content_type":null,"content_length":"42403","record_id":"<urn:uuid:0611b77d-9d31-45bc-94a4-4f3f0354cd2c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00298-ip-10-147-4-33.ec2.internal.warc.gz"} |
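The perpendicular-bisector construction described above has a closed form. A small Python sketch (this uses the standard determinant formula, which is equivalent to intersecting two bisectors; it is not part of the Demonstration itself):

```python
def circumcenter(ax, ay, bx, by, cx, cy):
    """Circumcenter via the standard determinant formula (equivalent to
    intersecting two perpendicular bisectors)."""
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# Right triangle: the circumcenter is the midpoint of the hypotenuse.
print(circumcenter(0, 0, 4, 0, 0, 3))   # (2.0, 1.5)
```

For a right triangle the result lands on the midpoint of the hypotenuse, outside the usual intuition that centers sit "inside" the shape; for obtuse triangles it falls outside the triangle entirely.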
Rules of Differentiation: Derivative of Trigonometric
Rules of Differentiation: Derivative of Trigonometric Functions
Erin Horst
Now that we have explored derivatives, we can progress to the rules of differentiation. These rules are simply formulas that instruct the learner how to compute derivatives depending on a given function.
When differentiating trigonometric functions, special differentiation rules apply. They are: d/dx[sin(x)] = cos(x), d/dx[cos(x)] = −sin(x), d/dx[tan(x)] = sec²(x), d/dx[cot(x)] = −csc²(x), d/dx[sec(x)] = sec(x)tan(x), d/dx[csc(x)] = −csc(x)cot(x).
To illustrate derivatives of trigonometric functions, consider the following functions and their respective derivatives. It is suggested that you compute the derivative to verify the function of the
The following graph illustrates the function y=sin(3x) and its derivative y'=3cos(3x).
The following graph illustrates the function | {"url":"http://jwilson.coe.uga.edu/EMAT6680/Horst/derivativetrig/derivativetrig.html","timestamp":"2014-04-20T03:10:35Z","content_type":null,"content_length":"2411","record_id":"<urn:uuid:f87ccfdc-e434-4773-922c-a8d4c7a99b84>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00415-ip-10-147-4-33.ec2.internal.warc.gz"} |
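The claimed derivative y' = 3cos(3x) can be checked numerically with a central difference, using only the standard library (the tolerance is loose enough to absorb rounding error):

```python
from math import sin, cos

def numderiv(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: sin(3 * x)          # chain rule: f'(x) = 3*cos(3*x)
x0 = 0.7
print(abs(numderiv(f, x0) - 3 * cos(3 * x0)) < 1e-6)   # True
```

The factor of 3 comes from the chain rule: the inner function 3x contributes its own derivative as a multiplier.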
Math Forum Discussions
Topic: Proofs that numbers are rational, algebraic, or transcendental.
Replies: 6 Last Post: Apr 3, 2013 2:07 PM
Re: Proofs that numbers are rational, algebraic, or transcendental.
Posted: Apr 3, 2013 1:48 PM
On Apr 3, 5:50 pm, Paul <pepste...@gmail.com> wrote:
> All real numbers are obviously either rational, (algebraic and irrational) or transcendental. So these descriptors partition the reals into three disjoint sets.
> From what I've seen, all the difficult results placing a real number into one of these classes have been of the form "x is transcendental".
> Does anyone know of any non-trivial results which show that a specific number is rational or algebraic? In other words, does anyone know any non-trivial results (advanced undergraduate or higher)
which define a specific x and then prove a statement of the type "x is rational" or "x is algebraic"?
Let S(n) = sum (1 / (pi*k)^n) over 1 <= k < infinity.
For some n, S(n) is proven rational, and for others it isn't. (It is
related to Bernoulli numbers when n is even.)
For example, the infinite sum of 1/k^2 is pi^2 / 6, so S(2) = 1/6.
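As a quick numerical sanity check of S(2) = 1/6 (my own sketch, not part of the thread):

```python
import math

# Partial sum of S(2) = sum over k >= 1 of 1 / (pi*k)^2.
# Since zeta(2) = pi^2 / 6, the full sum is exactly 1/6; the tail
# beyond N terms is about 1 / (pi^2 * N), so N = 10^6 gives ~1e-7 accuracy.
s = sum(1.0 / (math.pi * k) ** 2 for k in range(1, 10 ** 6 + 1))
print(abs(s - 1 / 6))   # ~1e-7
```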
Date Subject Author
4/3/13 Proofs that numbers are rational, algebraic, or transcendental. Paul
4/3/13 Re: Proofs that numbers are rational, algebraic, or transcendental. David Bernier
4/3/13 Re: Proofs that numbers are rational, algebraic, or transcendental. Frederick Williams
4/3/13 Re: Proofs that numbers are rational, algebraic, or transcendental. Frederick Williams
4/3/13 Re: Proofs that numbers are rational, algebraic, or transcendental. gnasher729
4/3/13 Re: Proofs that numbers are rational, algebraic, or transcendental. Frederick Williams
4/3/13 Re: Proofs that numbers are rational, algebraic, or transcendental. gnasher729 | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2444585&messageID=8816940","timestamp":"2014-04-17T19:22:39Z","content_type":null,"content_length":"24129","record_id":"<urn:uuid:00096ec8-f914-46ee-bb4d-218f15557ae1>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
Welcome to the Princeton University Mathematics Department!
The Princeton University Mathematics Department is located in Fine Hall on Washington Road. Our administrative offices are located on the third floor, as is our Common Room, where Tea is held each
afternoon at 3:30 while classes are in session. For the 2013-14 academic year, our department consists of 60 faculty members, 10 visitors and researchers, 8 emeriti faculty members in residence, 60
graduate students, and 73 undergraduate majors.
Seminars & Events
Thursday, April 17
Friday, April 18
Monday, April 21 | {"url":"http://www.math.princeton.edu/","timestamp":"2014-04-17T06:52:57Z","content_type":null,"content_length":"32281","record_id":"<urn:uuid:a7e101d8-4846-4eef-a4d9-c188cd4ca43c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
Contributions to the theory of logic programming
Results 1 - 10 of 139
- Journal of the ACM , 1991
- JOURNAL OF LOGIC PROGRAMMING , 1996
Cited by 786 (13 self)
This paper presents for the first time the semantic foundations of CLP in a self-contained and complete package. The main contributions are threefold. First, we extend the original conference paper
by presenting definitions and basic semantic constructs from first principles, giving new and complete proofs for the main lemmas. Importantly, we clarify which theorems depend on conditions such as
solution compactness, satisfaction completeness and independence of constraints. Second, we generalize the original results to allow for incompleteness of the constraint solver. This is important
since almost all CLP systems use an incomplete solver. Third, we give conditions on the (possibly incomplete) solver which ensure that the operational semantics is confluent, that is, has
independence of literal scheduling.
- ANNALS OF PURE AND APPLIED LOGIC , 1991
Cited by 374 (108 self)
A proof-theoretic characterization of logical languages that form suitable bases for Prolog-like programming languages is provided. This characterization is based on the principle that the
declarative meaning of a logic program, provided by provability in a logical system, should coincide with its operational meaning, provided by interpreting logical connectives as simple and fixed
search instructions. The operational semantics is formalized by the identification of a class of cut-free sequent proofs called uniform proofs. A uniform proof is one that can be found by a
goal-directed search that respects the interpretation of the logical connectives as search instructions. The concept of a uniform proof is used to define the notion of an abstract logic programming
language, and it is shown that first-order and higher-order Horn clauses with classical provability are examples of such a language. Horn clauses are then generalized to hereditary Harrop formulas
and it is shown that first-order and higher-order versions of this new class of formulas are also abstract logic programming languages if the inference rules are those of either intuitionistic or
minimal logic. The programming language significance of the various generalizations to first-order Horn clauses is briefly discussed.
Cited by 306 (40 self)
When logic programming is based on the proof theory of intuitionistic logic, it is natural to allow implications in goals and in the bodies of clauses. Attempting to prove a goal of the form D ⊃ G
from the context (set of formulas) Γ leads to an attempt to prove the goal G in the extended context Γ ∪ {D}. Thus during the bottom-up search for a cut-free proof contexts, represented as the
left-hand side of intuitionistic sequents, grow as stacks. While such an intuitionistic notion of context provides for elegant specifications of many computations, contexts can be made more
expressive and flexible if they are based on linear logic. After presenting two equivalent formulations of a fragment of linear logic, we show that the fragment has a goal-directed interpretation,
thereby partially justifying calling it a logic programming language. Logic programs based on the intuitionistic theory of hereditary Harrop formulas can be modularly embedded into this linear logic
setting. Programming examples taken from theorem proving, natural language parsing, and data base programming are presented: each example requires a linear, rather than intuitionistic, notion of
context to be modeled adequately. An interpreter for this logic programming language must address the problem of splitting contexts; that is, when attempting to prove a multiplicative conjunction
(tensor), say G1 ⊗ G2, from the context ∆, the latter must be split into disjoint contexts ∆1 and ∆2 for which G1 follows from ∆1 and G2 follows from ∆2. Since there is an exponential number of such
splits, it is important to delay the choice of a split as much as possible. A mechanism for the lazy splitting of contexts is presented based on viewing proof search as a process that takes a
context, consumes part of it, and returns the rest (to be consumed elsewhere). In addition, we use collections of Kripke interpretations indexed by a commutative monoid to provide models for this
logic programming language and show that logic programs admit a canonical model.
, 1997
Cited by 281 (57 self)
This paper surveys various complexity results on different forms of logic programming. The main focus is on decidable forms of logic programming, in particular, propositional logic programming and
datalog, but we also mention general logic programming with function symbols. Next to classical results on plain logic programming (pure Horn clause programs), more recent results on various
important extensions of logic programming are surveyed. These include logic programming with different forms of negation, disjunctive logic programming, logic programming with equality, and
constraint logic programming. The complexity of the unification problem is also addressed.
, 1996
Cited by 260 (27 self)
SLD resolution with negation as finite failure (SLDNF) reflects the procedural interpretation of predicate calculus as a programming language and forms the computational basis for Prolog systems.
Despite its advantages for stack-based memory management, SLDNF is often not appropriate for query evaluation for three reasons: a) it may not terminate due to infinite positive recursion; b) it may
not terminate due to infinite recursion through negation; c) it may repeatedly evaluate the same literal in a rule body, leading to unacceptable performance. We address these three problems for a
goal-oriented query evaluation of general logic programs by presenting tabled evaluation with delaying (SLG resolution).
- In The Logic Programming Paradigm: a 25-Year Perspective , 1999
Cited by 250 (18 self)
In this paper we reexamine the place and role of stable model semantics in logic programming and contrast it with a least Herbrand model approach to Horn programs. We demonstrate that inherent
features of stable model semantics naturally lead to a logic programming system that offers an interesting alternative to more traditional logic programming styles of Horn logic programming,
stratified logic programming and logic programming with well-founded semantics. The proposed approach is based on the interpretation of program clauses as constraints. In this setting programs do not
describe a single intended model, but a family of stable models. These stable models encode solutions to the constraint satisfaction problem described by the program. Our approach imposes
restrictions on the syntax of logic programs. In particular, function symbols are eliminated from the language. We argue that the resulting logic programming system is well-attuned to problems in the
class NP, has a well-defined domain of applications, and an emerging methodology of programming. We point out that what makes the whole approach viable is recent progress in implementations of
algorithms to compute stable models of propositional logic programs. 1
- JOURNAL OF LOGIC PROGRAMMING , 1994
Cited by 245 (8 self)
We survey here various approaches which were proposed to incorporate negation in logic programs. We concentrate on the proof-theoretic and model-theoretic issues and the relationships between them.
, 1995
Cited by 208 (2 self)
The alternating fixpoint of a logic program with negation is defined constructively. The underlying idea is monotonically to build up a set of negative conclusions until the least fixpoint is
reached, using a transformation related to the one that defines stable models. From a fixed set of negative conclusions, the positive conclusions follow (without deriving any further negative ones),
by traditional Horn clause semantics. The union of positive and negative conclusions is called the alternating fixpoint partial model. The name "alternating" was chosen because the transformation runs
in two passes; the first pass transforms an underestimate of the set of negative conclusions into an (intermediate) overestimate; the second pass transforms the overestimate into a new underestimate;
the composition of the two passes is monotonic. The principal contributions of this work are (1) that the alternating fixpoint partial model is identical to the well-founded partial model, and (2)
that alternating fixpoint logic is at least as expressive as fixpoint logic on all structures. Also, on finite structures, fixpoint logic is as expressive as alternating fixpoint logic.
- HANDBOOK OF LOGIC IN AI AND LOGIC PROGRAMMING, VOLUME 5: LOGIC PROGRAMMING. OXFORD (1998
"... ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=21347","timestamp":"2014-04-21T12:51:30Z","content_type":null,"content_length":"37601","record_id":"<urn:uuid:1df7ea57-fe12-4ce1-821c-df831c0648bc>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00136-ip-10-147-4-33.ec2.internal.warc.gz"} |
NYJM Abstract - 11-29 - Guyan Robertson
Guyan Robertson
Tiling systems and homology of lattices in tree products
Published: December 14, 2005
Keywords: tree products, lattices, homology, K-theory, operator algebra
Subject: 22E40, 22D25
Let Γ be a torsion-free cocompact lattice in Aut(T₁) × Aut(T₂), where T₁, T₂ are trees whose vertices all have degree at least three. The group H₂(Γ, Z) is determined explicitly in terms of an associated 2-dimensional tiling system. It follows that under appropriate conditions the crossed product C*-algebra A associated with the action of Γ on the boundary of T₁ × T₂ satisfies rank K₀(A) = 2⋅rank H₂(Γ, Z).
Author information
School of Mathematics and Statistics, University of Newcastle, NE1 7RU, U.K. | {"url":"http://www.emis.de/journals/NYJM/j/2005/11-29.html","timestamp":"2014-04-17T19:00:06Z","content_type":null,"content_length":"8841","record_id":"<urn:uuid:446a4d22-2c41-4c0f-b4e9-193626efc952>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00620-ip-10-147-4-33.ec2.internal.warc.gz"} |
Estimate which random variable has highest expectation
You are given a sample of size $m$ from $n$ independent normally distributed random variables. Expectations and standard deviations of the random variables are unknown. Estimate, which random
variable has the largest expectation.
The interesting case is when $n > 2$ and the expectations of the random variables are close to each other compared to the standard deviations divided by $\sqrt{m}$. In this case just comparing the
sample means may not give the best results. For example, with 3 random variables $X_1, X_2, X_3$, with $X_i \sim N(\mu_i, \sigma_i^2)$, with $$ \mu_1 = 0, \; \mu_2, \mu_3 \in [-1, 1] $$ $$ \sigma_1 =
\sqrt{m}, \; \sigma_2, \sigma_3 = 10 \sqrt{m} $$ there is only a roughly $1/4$ chance for $X_1$ to have the largest sample mean, whether any of $\mu_2, \mu_3$ are positive or not. For a good test I
would expect this probability to be around $1/3$. Hence the question.
1 Answer
For the case of $3$ random variables, choose two of them (at random) and compare the sample means of the first half of the samples. Then compare the sample means of the second half of
the samples for the winner of the first round and the third random variable. You'll have probability greater than $1/3$ of the result being correct. The random choice of the initial
pair to compare is, of course, necessary for this.
Similarly, with $n$ random variables, conduct a "knockout tournament" such that at each level a different set of samples is used (so that you maintain independence of the pairwise comparisons).
It seems to me that this method uses less information than what's available, so I'd expect to do better with some method that uses all of it. – Michael Hardy Feb 27 '13 at 0:57
Well, I would probably feel more comfortable with a more deterministic algorithm, but your answer does answer my question. Thanks! – Anton Feb 27 '13 at 18:51
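The knockout-tournament procedure from the answer can be sketched and checked by Monte Carlo; all parameter values below are my own, purely illustrative:

```python
import random

def tournament_pick(samples):
    """Guess which of three variables has the largest mean: compare a
    random pair on the first half of the data, then the winner against
    the third variable on the second half (keeping the rounds independent)."""
    i, j = random.sample(range(3), 2)
    (k,) = {0, 1, 2} - {i, j}
    half = len(samples[0]) // 2
    mean = lambda xs: sum(xs) / len(xs)
    w = i if mean(samples[i][:half]) >= mean(samples[j][:half]) else j
    return w if mean(samples[w][half:]) >= mean(samples[k][half:]) else k

# Estimate the success probability when variable 0 truly has the
# largest mean (hypothetical parameters).
random.seed(0)
mus, m, trials = [1.0, 0.0, 0.0], 20, 2000
hits = sum(
    tournament_pick([[random.gauss(mu, 1.0) for _ in range(m)] for mu in mus]) == 0
    for _ in range(trials)
)
print(hits / trials)   # well above the 1/3 baseline of a blind guess
```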
Not the answer you're looking for? Browse other questions tagged st.statistics or ask your own question. | {"url":"http://mathoverflow.net/questions/123027/estimate-which-random-variable-has-highest-expectation","timestamp":"2014-04-17T07:38:54Z","content_type":null,"content_length":"52076","record_id":"<urn:uuid:c09537f0-5f1d-4047-9935-fac804221267>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00193-ip-10-147-4-33.ec2.internal.warc.gz"} |
Grow Room Tools
Grow area
width in feet (optional)
depth in feet (optional)
height in feet (optional)
Volume cubic feet
Volume in cubic meters m3
Number of lights (required)
Total lumens
Total watts
Type of light (required)
Light coverage
Watts per square foot
Lumens per square foot
(lumens depend on plant distance from light, value here is for guidance only)
Running costs
Pence per kwh (optional) (you can change if you know what you pay)
Cost per month at 12hours a day
Cost per month at 18hours a day
Cost per month at 24hours a day
Minimum inline fan capacity to exchange air every 3 minutes
In cubic meters an hour = m3/ h
m3/ h (with carbon filter and ducting)
Falloff in light intensity with distance
Distance in feet from light to plant (optional)
Lumens at plant
(calculation assumes single point light source of 1 light to 1 plant)
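The calculator's arithmetic can be reproduced in a few lines; the constants below (0.0283168 m³ per ft³, a 30-day month, 20 air exchanges per hour for "every 3 minutes", and simple inverse-square falloff for a point source) are my reading of the form above, not code from the site:

```python
FT3_TO_M3 = 0.0283168

def grow_room_stats(width_ft, depth_ft, height_ft,
                    total_watts, total_lumens,
                    pence_per_kwh, hours_per_day):
    area_ft2 = width_ft * depth_ft
    volume_m3 = area_ft2 * height_ft * FT3_TO_M3
    return {
        "watts_per_sq_ft": total_watts / area_ft2,
        "lumens_per_sq_ft": total_lumens / area_ft2,
        # kWh used in a 30-day month, converted from pence to pounds
        "cost_per_month_gbp": total_watts / 1000 * hours_per_day * 30
                              * pence_per_kwh / 100,
        # exchanging the air every 3 minutes = 20 exchanges per hour
        "min_fan_m3_per_h": volume_m3 * 20,
    }

def lumens_at_distance(total_lumens, distance_ft):
    # inverse-square falloff from a single point light source
    return total_lumens / distance_ft ** 2

# Hypothetical 4 ft x 4 ft x 7 ft room, one 400 W / 53,000-lumen lamp,
# 15 p/kWh, 12 hours a day.
stats = grow_room_stats(4, 4, 7, 400, 53000, 15, 12)
print(stats["cost_per_month_gbp"], stats["min_fan_m3_per_h"])
```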
Wed Aug 25 2004. | {"url":"http://www.uk420.com/growroomtools2.php","timestamp":"2014-04-20T00:59:30Z","content_type":null,"content_length":"31198","record_id":"<urn:uuid:96da58be-ef23-48df-8d41-ac02a679104e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00407-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-User] [OT] Transform (i.e., Fourier, Laplace, etc.) methods in Prob. & Stats.
josef.pktd@gmai...
Thu Nov 26 00:19:38 CST 2009
On Wed, Nov 25, 2009 at 11:45 PM, David Cournapeau
<david@ar.media.kyoto-u.ac.jp> wrote:
> josef.pktd@gmail.com wrote:
>> Maybe the last statement is wrong, it's too long ago that I
>> struggled with this. Maybe I'm mixing up Lebesgue-integral,
>> Lebesgue-measurable, and measures that are absolutely continuous
>> with respect to Lebesgue-measure.
> I am by no mean an expert on this, but I believe you are right. AFAIK,
> contour integrals require to have a piecewise-continuous parametrization
> of your path, and for me, the whole point of Lebesgue integrals is to
> handle cases where the set over which you integrate the function is not
> a (finite) union of intervals.
> I don't know if it makes sense to define something "like" contour
> integrals for lebesgue integrals. The fundamental reason why Lebesgue
> integrals work the way they do is because for a function f: E ->F, only
> the properties of F (and how the inversion function maps elements of the
> sigma algebra F) matter. And complex analysis is 'special' because of
> the special structure of E, not F.
I think on the theoretical level I'm right, but from what I read the last few
hours, contour integrals seem to provide a method to actually calculate
the integral, while I haven't seen many practical applications of
Lebesgue integration.
For the simple examples that I tried so far for the inversion of the
characteristic function, I didn't need contour nor Lebesque integrals.
And I hope it stays this way when I get back to this, especially
since I never had to learn anything about complex analysis and
the special structure of complex numbers.
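For reference, the kind of "simple example" mentioned here can indeed be done with a plain Riemann sum on the real line; a sketch (my own, not from the thread) inverting the standard normal characteristic function:

```python
import math, cmath

def pdf_from_cf(cf, x, t_max=10.0, n=4000):
    """Fourier-invert a characteristic function:
    f(x) = (1 / (2*pi)) * integral over R of exp(-i*t*x) * cf(t) dt,
    approximated with a midpoint Riemann sum on [-t_max, t_max]."""
    dt = 2 * t_max / n
    total = 0.0
    for k in range(n):
        t = -t_max + (k + 0.5) * dt
        total += (cmath.exp(-1j * t * x) * cf(t)).real
    return total * dt / (2 * math.pi)

# Standard normal: cf(t) = exp(-t^2/2); its density at 0 is 1/sqrt(2*pi).
cf = lambda t: math.exp(-t * t / 2)
print(pdf_from_cf(cf, 0.0))   # ~0.39894
```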
> David
More information about the SciPy-User mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2009-November/023471.html","timestamp":"2014-04-16T14:00:49Z","content_type":null,"content_length":"5301","record_id":"<urn:uuid:102dae77-b3d7-44dd-b3e1-f65b7c4eaa42>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
Downingtown Prealgebra Tutor
...I am willing to tutor individuals or small groups. I am most helpful to students when the tutoring occurs over a longer period of time. This allows me to identify the topics that are the root
causes of the student's problems.
18 Subjects: including prealgebra, calculus, statistics, GRE
...Math problems can be likened to little puzzles, for which you only need to uncover the correct answer. I hold Bachelor of Science and Master of Science degrees. Also, I have experience
instructing elementary-age children in a home-schooling environment.
20 Subjects: including prealgebra, reading, statistics, biology
...In June 2014, I will be receiving certification from the Commonwealth of Pennsylvania to teach grades preK-8. I have taken education courses in all of the required fields, as well as
student taught in a first through third grade classroom and assistant taught in a first grade bilingual classroom. I have over 800 hours of experience working with elementary age children.
32 Subjects: including prealgebra, reading, English, algebra 1
Hello, I have 5 years experience in the teaching profession. I have taught kindergarten and middle school math. My approach is that every person learns differently and needs to find their own way
in order to be successful. I use audio and visual techniques, as well as a hands-on approach.
19 Subjects: including prealgebra, English, reading, writing
...I can help study the types of answers generally given and the types of word problems to understand what the problem is asking. I can also help with general studies of the science, social
studies, and language arts. I have taken a practice Praxis exam to learn more about the type of questions included.
26 Subjects: including prealgebra, reading, calculus, writing | {"url":"http://www.purplemath.com/Downingtown_prealgebra_tutors.php","timestamp":"2014-04-17T07:39:26Z","content_type":null,"content_length":"24146","record_id":"<urn:uuid:4c5d9f38-6cf6-47d3-9a64-0d3599276964>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Symbolic Math: Combining Equations
Replies: 2 Last Post: Feb 15, 2013 2:29 AM
Symbolic Math: Combining Equations
Posted: Dec 14, 2012 3:53 PM by cypher (Posts: 1, Registered: 12/14/12)

How do I combine equations symbolically? For example:
f1 = sym('ax +b')
f2 = sym('cx+d')
f3 = f1*f2
I'm looking for an output like
f3 = 'acx^2+bcx+adx+bd'
I am working on an extremely large set of equations and unfortunately they don't fit on paper.
Thanks for the help!
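(Side note on the expected result: (ax+b)(cx+d) expands to ac·x² + (ad+bc)·x + bd. In MATLAB's Symbolic Math Toolbox one would declare the symbols with syms and call expand(f1*f2); I believe sym('ax+b') treats 'ax' as a single variable. The expansion itself can be sanity-checked with a small coefficient-convolution sketch in Python, with made-up numeric values standing in for the symbols:)

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists
    (constant term first), i.e. the expansion a convolution performs."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (b + a*x)*(d + c*x) with a=2, b=3, c=5, d=7:
# expect bd + (ad + bc)*x + ac*x^2 = 21 + 29x + 10x^2
print(poly_mul([3, 2], [7, 5]))   # [21, 29, 10]
```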
Date Subject Author
12/14/12 Symbolic Math: Combining Equations cypher
12/14/12 Re: Symbolic Math: Combining Equations bartekltg
2/15/13 Re: Symbolic Math: Combining Equations Christopher Creutzig | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2420587","timestamp":"2014-04-20T00:46:16Z","content_type":null,"content_length":"18402","record_id":"<urn:uuid:bc4707a3-55f0-4eed-ab1c-7574204e983a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00244-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help me understand linear separability in a binary SVM
I'm cross-posting this from math.stackexchange.com because I'm not getting any feedback and it's a time-sensitive question for me.
My question pertains to linear separability with hyperplanes in a support vector machine.
According to Wikipedia:
...formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high or infinite dimensional space, which can be used for classification, regression or other tasks.
Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the
larger the margin the lower the generalization error of the classifier.
The linear separation of classes by hyperplanes intuitively makes sense to me. And I think I understand linear separability for two-dimensional geometry. However, I'm implementing an SVM using a
popular SVM library (libSVM) and when messing around with the numbers, I fail to understand how an SVM can create a curve between classes, or enclose central points in category 1 within a circular
curve when surrounded by points in category 2 if a hyperplane in an n-dimensional space V is a "flat" subset of dimension n − 1, or for two-dimensional space - a 1D line.
Here is what I mean:
That's not a hyperplane. That's circular. How does this work? Or are there more dimensions inside the SVM than the two-dimensional input features?
This example application can be downloaded here.
Thanks for your comprehensive answers. So the SVM can separate weird data well by using a kernel function. Would it help to linearize the data before sending it to the SVM? For example, one of my
input features (a numeric value) has a turning point (eg. 0) where it neatly fits into category 1, but above and below zero it fits into category 2. Now, because I know this, would it help
classification to send the absolute value of this feature for the SVM?
1 Linearizing the data is an option, but one of the nice things about a kernel function is that you don't actually need to construct such a space. The kernel is effectively a measure of
dis-similarity of the data points. Actually finding a space that realizes some arbitrary inner product may involve introducing an infinite number of dimensions. But all the SVM algorithm needs is
an inner product. In fact, IIRC, all it really needs is something sufficiently "like" an inner product. – mokus Oct 22 '10 at 15:11
This question is on-topic here: stats.stackexchange.com – Shane Oct 25 '10 at 14:41
@Shane: Yeah, but SO's bigger audience gets quicker answers. – pate Oct 26 '10 at 10:57
7 Answers
As mokus explained, support vector machines use a kernel function to implicitly map data into a feature space where they are linearly separable:
Different kernel functions are used for various kinds of data. Note that an extra dimension (feature) is added by the transformation in the picture, although this feature is
never materialized in memory.
(Illustration from Chris Thornton, U. Sussex.)
How does the SVM know which kernel function will separate the data sensibly? Does it iterate though a bunch of equations and calculate which produces the largest margin? See
also edits to my question. – pate Oct 22 '10 at 15:12
The kernel is typically set by the user/developer as a parameter. LibSVM, for instance, has linear, polynomial, RBF and sigmoid kernel types. Its authors recommend the RBF
kernel for beginners. – larsmans Oct 22 '10 at 15:19
Check out this YouTube video that illustrates an example of linearly inseparable points that become separable by a plane when mapped to a higher dimension.
1 +1 Very intuitive. – pate Oct 23 '10 at 8:51
great video I wish I could find more things like this! – Diego Aug 24 '12 at 0:21
I am not intimately familiar with SVMs, but from what I recall from my studies they are often used with a "kernel function" - essentially, a replacement for the standard inner product that
effectively non-linearizes the space. It's loosely equivalent to applying a nonlinear transformation from your space into some "working space" where the linear classifier is applied, and
then pulling the results back into your original space, where the linear subspaces the classifier works with are no longer linear.
The wikipedia article does mention this in the subsection "Non-linear classification", with a link to http://en.wikipedia.org/wiki/Kernel_trick which explains the technique more generally.
As I recall, Numerical Recipes had a pretty good section on implementing SVMs with kernel functions to solve nonlinear classification problems. – mokus Oct 22 '10 at 14:01
Actually, the "pulling the results back" step is never needed, even if you'd do the feature space expansion manually. You'd just do dot products in the expanded feature space, giving a
single number per SV/test sample pair. – larsmans Sep 10 '12 at 11:33
SVM clustering
HTH
My answer to a previous question might shed some light on what is happening in this case. The example I give is very contrived and not really what happens in an SVM, but it should give you some intuition.
For the SVM example in the question, given in 2-D space, let x1, x2 be the two axes. You can have a transformation function F = x1^2 + x2^2 and transform this problem into a 1-D space problem. If you notice carefully you can see that in the transformed space, you can easily linearly separate the points (thresholds on the F axis). Here the transformed space was [F] (one-dimensional). In most cases, you would be increasing the dimensionality to get linearly separable hyperplanes.
This is done by applying what is known as the kernel trick (http://en.wikipedia.org/wiki/Kernel_trick). What basically is done is that if something is not linearly separable in the existing input space (2-D in your case), it is projected to a higher dimension where this would be separable. A kernel function (which can be non-linear) is applied to modify your feature space. All
computations are then performed in this feature space (which can be possibly of infinite dimensions too).
Each point in your input is transformed using this kernel function, and all further computations are performed as if this was your original input space. Thus, your points may be separable
in a higher dimension (possibly infinite) and thus the linear hyperplane in higher dimensions might not be linear in the original dimensions.
For a simple example, consider the example of XOR. If you plot Input1 on X-Axis, and Input2 on Y-Axis, then the output classes will be:
1. Class 0: (0,0), (1,1)
2. Class 1: (0,1), (1,0)
As you can observe, it's not linearly separable in 2-D. But if I take these ordered pairs into 3-D (by just moving one point), say:
1. Class 0: (0,0,1), (1,1,0)
2. Class 1: (0,1,0), (1,0,0)
Now you can easily observe that there is a plane in 3-D to separate these two classes linearly.
Thus if you project your inputs to a sufficiently large dimension (possibly infinite), then you'll be able to separate your classes linearly in that dimension.
One important point to notice here (and maybe I'll answer your other question too) is that you don't have to make a kernel function yourself (like I made one above). The good thing is that
the kernel function automatically takes care of your input and figures out how to "linearize" it.
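The lifted XOR points can be checked in a few lines of Python. The separating plane below is one concrete choice and is not from the answer itself:

```python
# the XOR points and the 3-D lift from the answer: only (0,0) is moved up to z = 1
lift = {
    (0, 0): (0, 0, 1),  # class 0
    (1, 1): (1, 1, 0),  # class 0
    (0, 1): (0, 1, 0),  # class 1
    (1, 0): (1, 0, 0),  # class 1
}
labels = {(0, 0): 0, (1, 1): 0, (0, 1): 1, (1, 0): 1}

def side(p3):
    # one concrete separating plane: x + y + 2z = 1.5
    # class 0 points land above it, class 1 points below
    x, y, z = p3
    return x + y + 2 * z - 1.5

for p2, p3 in lift.items():
    predicted = 0 if side(p3) > 0 else 1
    assert predicted == labels[p2]
print("XOR is linearly separable after the 3-D lift")
```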
Edexcel Physics Help
Vertical Circular Motion Problems
1. A ball with a mass of 130 g is swung at the end of a string 93.0 cm in length. The ball is whirled in a vertical circle at 4.00 revolutions per second.
a. What is the tension on the string at the bottom of the loop?
b. What is the tension on the string at the top of the loop?
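As a quick numerical check of problem 1 (not part of the original page; it assumes g = 9.81 m/s^2 and that the ball moves at constant speed v = 2πrf):

```python
import math

m = 0.130        # kg
r = 0.93         # m
f = 4.0          # rev/s
g = 9.81         # m/s^2 (assumed)

v = 2 * math.pi * r * f   # speed from 4.00 rev/s on a 93.0 cm string
a_c = v**2 / r            # centripetal acceleration

# at the bottom the string must supply the centripetal force AND support
# the weight; at the top gravity helps, so the tension is smaller
T_bottom = m * (a_c + g)
T_top = m * (a_c - g)
print(f"v = {v:.1f} m/s, T_bottom = {T_bottom:.1f} N, T_top = {T_top:.1f} N")
```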
2. A jet fighter pilot knows he is able to withstand an acceleration of 9g before blacking out. The pilot points his plane vertically down while traveling at Mach 3 speed and intends to pull up in a
circular maneuver before crashing to the ground.
a) Where does the maximum acceleration occur in the maneuver?
b) What is the minimum radius the pilot can take?
3. A roller coaster at an amusement park has a dip that bottoms out in a vertical circle of radius r. A passenger feels the seat of the car pushing upward on her with a force equal to 3.0 times her
weight as she goes through the dip. If r = 25.0 m, how fast is the roller coaster traveling at the bottom of the dip?
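A sketch for problem 3, assuming g = 9.81 m/s^2: the seat pushes with N = 3mg, so the net upward force N - mg = 2mg supplies mv^2/r, and the mass cancels.

```python
import math

g = 9.81   # m/s^2 (assumed)
r = 25.0   # m

# N = 3mg  =>  N - mg = m v^2 / r  =>  v = sqrt(2 g r)
v = math.sqrt(2 * g * r)
print(f"v = {v:.1f} m/s")   # about 22.1 m/s
```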
4. What is the apparent weight of a 75-kg person driving a car with a constant speed of 12 m/s over a bump with a circular cross-section and radius of curvature of of 35 m?
5. What is the minimum speed of a roller coaster at the top of a 39.0 m vertical loop if the passengers are "weightless" at that point.
6. A ball of a mass 4.0 kg is attached to the end of a 1.2 cm long string and whirled around in a circle that describes a vertical plane.
a. What is the minimum speed that the ball can be moving at and still maintain a circular path? Provide a free body diagram.
b. At this speed, what is the maximum tension in the string? Provide a free body diagram.
c. If the ball was swung in a horizontal circle at this speed, what angle would the string make with the vertical?
7. How do you find the tension in the string of a ball traveling in a vertical circle at the 45 degree angle?
8. A hill is in the shape of an arch having a radius of curvature of 41.0 m. What is the maximum speed that a car can travel across the hill without 'getting some air'?
Banked Curves
1. Determine the minimum angle at which a road should be banked so that a car traveling at 20.0 m/s can safely negotiate the curve if the radius of the curve is 200.0 m.
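A numerical sketch for banked-curve problem 1, assuming g = 9.81 m/s^2 and no friction, using the standard relation tan(theta) = v^2 / (r g):

```python
import math

v = 20.0    # m/s
r = 200.0   # m
g = 9.81    # m/s^2 (assumed)

# frictionless banked curve: tan(theta) = v^2 / (r g)
theta = math.degrees(math.atan(v**2 / (r * g)))
print(f"bank angle = {theta:.1f} degrees")   # about 11.5 degrees
```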
2. If a curve with a radius of 65 m is properly banked for a car traveling 75 km/h, what must be the coefficient of static friction for a car not to skid when traveling at 90 km/h?
3. A Car is driven around a circle with a radius of 200m, bank angle 10 degrees. The static frictional coefficient is 0.60. Calculate the maximum velocity the car can travel.
4. An airplane is flying in a horizontal circle at a speed of 460 km/h. Its wings are tilted 40° to the horizontal, and the turning force is provided by lift that is perpendicular to the wing surface. What is the radius of the circle?
On a Balanced Property of Derangements
We prove an interesting fact describing the location of the roots of the generating polynomials of the numbers of derangements of length $n$, counted by their number of cycles. We then use this
result to prove that if $k$ is the number of cycles of a randomly selected derangement of length $n$, then the probability that $k$ is congruent to a given $r$ modulo a given $q$ converges to $1/q$.
Finally, we generalize our results to $a$-derangements, which are permutations in which each cycle is longer than $a$.
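Not part of the abstract, but the claimed balance can be checked empirically for a small case (q = 2, n = 8) by brute-force enumeration of all derangements:

```python
from itertools import permutations

def cycle_count(perm):
    # number of cycles in a permutation given in one-line notation
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def cycle_parity_split(n):
    # count derangements of length n with an even / odd number of cycles
    even = odd = 0
    for p in permutations(range(n)):
        if all(p[i] != i for i in range(n)):
            if cycle_count(p) % 2 == 0:
                even += 1
            else:
                odd += 1
    return even, odd

even, odd = cycle_parity_split(8)
total = even + odd
print(total, even / total)   # 14833 derangements; the ratio is very close to 1/2
```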
EUCLID'S GEOMETRY: History and Practice
This series of interdisciplinary lessons on Euclid's Elements was researched and written by Alex Pearson, a Classicist at The Episcopal Academy in Merion, Pennsylvania.
The material is organized into class work, short historical articles, assignments, essay questions, and a quiz. For the Greek text and a full translation of The Elements, see the Perseus Project
at Tufts University.
Introduction "Why do we have to learn this?" A discussion of how geometry has seemed indispensable to some people for over two millennia.
Unit 1 Definitions, axioms and Theorem One.
On a given finite straight line construct an equilateral triangle.
Unit 2 Theorem Two and an introduction to history.
Upon a given point place a straight line equal to a given straight line. Historical articles; essay questions.
Unit 3 Group discussions on the Elements; history and propositions; preparation for the Unit 4 Quiz.
Unit 4 Quiz: Complete Euclid's Fifth Theorem and identify the definitions, common notions, postulates and prior theorems by number. Prove two of the historical propositions using at least
two different pages from my history of The Elements as building blocks for each.
Historical Articles
A series of seventeen short historical articles arranged in chronological order and ranging from Alexander the Great to nineteenth-century discussions of Euclid's Elements.
Essay Questions
Sixteen essay questions requiring a reading of the historical articles.
Theorems One and Two, with important Definitions and Postulates
Translated by Alex Pearson
Please e-mail comments to Alex Pearson. | {"url":"http://mathforum.org/geometry/wwweuclid/index.html","timestamp":"2014-04-17T07:41:10Z","content_type":null,"content_length":"5771","record_id":"<urn:uuid:64531d6f-46fd-4662-ad7c-cf6a2705e39c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with arrays and matrices!
02-06-2013, 11:46 PM
I am currently stuck with this problem...
Write a program for the next problem. A store sells the following products: milk, soap, meat, beans and tomatoes. The cost of each product is $12.5, $18.2, $69.9, $30.5 and $15.9 respectively. The price for the customers is 30% more than the cost of the product.
Randomly create numbers from 0 to 10 which will represent the quantity sold of each product each day for a whole month.
The owner wants to know the total sales for each product in a day and the whole month, the total sales of each day and which product gives the most profit for the month.
So far I have created an array that holds the string names, the cost of each product and the price for each customer. I got stuck on the matrix part where I need to place a vector inside each
element... After that I need to start doing the operations inside the matrix such as, accumulators and counters... I started this without guidance, so I don't know if I have the right approach.
So far this is what I have.
Thanks in advance!
// needs: import java.util.Random;
public static void main(String[] args) {
    String[] productos = {"leche", "jabon", "carne", "frijol", "tomate"};
    double[] costo = {12.5, 18.2, 69.9, 30.5, 15.9};  // soap costs 18.2, not 12.2
    double[] precio = new double[5];
    Random num = new Random();
    int[][] mmes = new int[5][30];  // 5 products x 30 days of quantities sold

    // customer price is cost plus 30%
    for (int i = 0; i < precio.length; i++) {
        precio[i] = costo[i] + costo[i] * 0.30;
    }

    // fill the matrix with random quantities from 0 to 10
    // (the old loops reused x, ran past the array bounds, and the inner
    //  counter loop's condition counter>=10 meant it never executed)
    for (int i = 0; i < mmes.length; i++) {
        for (int j = 0; j < mmes[i].length; j++) {
            mmes[i][j] = num.nextInt(11);
            System.out.print(mmes[i][j] + " ");
        }
        System.out.println();
    }
}
02-11-2013, 01:59 PM
Re: Help with arrays and matrices!
Hi chwex,
I'm assuming you want something basic for this problem.
You could use a two-dimensional array which holds the number of each product sold, where each row represents a different item whilst each column represents a day.
In effect you would use a 5 by 30/31 array to hold the values. The months can get a bit messy though as the actual number of columns required is dependent on the number of days in the month.
02-13-2013, 05:54 AM
Re: Help with arrays and matrices!
Thanks for the response Ronin! I appreciate it! I will try your approach since it seems a bit easier than the one I had in mind!
02-13-2013, 11:46 AM
Re: Help with arrays and matrices!
You're welcome. | {"url":"http://www.java-forums.org/new-java/68664-help-arrays-matrices-print.html","timestamp":"2014-04-23T21:03:41Z","content_type":null,"content_length":"8001","record_id":"<urn:uuid:f9e23ef3-0d76-4437-b0d8-fe83eaaaf8d7>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
Family of Brownian Motions
I am trying to show the following statement
Let $D\subset \mathbb{R}^2$ be an open and bounded subset and $\Pi=(P^x : x \in D)$ a family of standard Brownian motions started at $x \in D$. Then $\Pi$ is tight (or relatively compact, by Prokhorov's theorem).
I need that result to prove the existence of a certain type of process. Unfortunately I do not have any experience working with the concept of "tightness".
pr.probability stochastic-processes brownian-motion measure-theory
So you see each $P^x$ as the probability measure associated with the random variable which makes correspond to each $\omega$ a continuous function on $\{t\in\Bbb R,\ t\geq 0\}$. You can endow this space with the topology of uniform convergence over compact subsets, and try to find a characterization of compact subsets for this topology. – Davide Giraudo Jun 23 '12 at 12:04
Karatzas's book about Brownian motion contains good information about that, since Donsker's invariance principle is shown for the half-line. You will find criteria for tightness. – Davide Giraudo Jun 23 '12 at 12:32
2 Answers
Using the fact that $S:=C[0,+\infty):=(f\colon [0,+\infty)\to \Bbb R, f\mbox{ continuous })$ endowed with the metric $$d(f,g):=\sum_{j=1}^{+\infty}2^{-j}\min(1,\sup_{0\leq t\leq j}|f(x)-g(x)|)$$ is Polish, it's enough to show that each subsequence is tight. Using Theorem 4.10 in Karatzas and Shreve's book, we know that a sequence of probability measures $(\mathbb P_n)_{n\geq 1}$ over the Borel $\sigma$-algebra is tight if and only if the two following conditions are satisfied:
1. $\lim_{\lambda\to+\infty}\sup_{n\geq 1}\mathbb P_n(\omega,|\omega(0)|\geq \lambda)=0$ and
2. $\lim_{\delta\to 0}\sup_{n\geq 1}\mathbb P_n(\omega,m^T(\omega,\delta)\geq \varepsilon)=0$ for each $T>0$ and $\varepsilon>0$, where $m^T(\omega,\delta)=\sup_{|t_1-t_2|\leq\delta,0
\leq t_1,t_2\leq T}|\omega(t_1)-\omega(t_2)|$.
We apply this to $\mathbb P_n=\mathbb P_{x_n}$, where $(x_n)_{n\geq 1}$ is a sequence in $D$. By boundedness of $D$, the first condition is satisfied. We can control the modulus of
continuity using the fact that $$\mathbb P_n(\omega,m^T(\omega,\delta)\geq \varepsilon) \leq \frac 1{\varepsilon^2}\delta^2,$$ which is a consequence of the fact that the increments of
the Brownian motion $W_{t_2}-W_{t_1}$ are normally distributed, of normal distribution of mean $0$ and variance $t_2-t_1$.
In fact, with condition 1., we can see that $(\mathcal P_x,x\in D)$ is tight (where $\mathcal P_x$ is associated to a Brownian motion started at $x$) if and only if $D$ is bounded.
this is precisely what I was looking for :D thank you very much. The first condition was somehow obvious to me too but I had trouble seeing why the second one of 4.10 is satisfied. My first guess was to use the continuity of sample paths of Brownian motion. – Boldwing Jun 23 '12 at 13:20
Here is another argument: The map $x \mapsto \mathcal{P}_x$ is a continuous map from $\mathbb{R}^2$ (the same in higher dimensions) to the probability measures on the path space with the topology of convergence in law. Therefore the family $(\mathcal{P}_x, x \in \overline{D})$ is compact, being the continuous image of the compact set $\overline{D}$. The path space is Polish and hence the family is uniformly tight (the converse of Prokhorov's theorem).
Void Return Type Methods and Mathematical Functions
While preparing for an upcoming Coursera course (Introduction to Algorithms) I came across this statement in the text:

For example, the main() static method in our programs has a void return type because its purpose is to produce output. Technically, void methods do not implement mathematical functions (and neither does Math.random(), which takes no arguments but does produce a return value).
-- p. 24 Algorithms (4e), Sedgewick & Wayne

Bold mine.

The part in bold stumped me. Can someone walk me through that statement?
The strict interpretation of a function is that basically, you give it some input (like 90 degrees), and you get some new value out - like 1. For any given input, there is exactly one possible output.

Since a method of type void does not give you back a return value, it does not fit the definition of a mathematical function.
There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
And since Math.random() takes no arguments, and since it gives you different results each time it's called, it's not a function either.
Ah, yes. Thank you, both. I was missing the computer-science-specific definition of the word "implement." I thought we were about to get into a deep discussion about design theory.
No, more a deep discussion of maths.
Need help understanding how this was simplified. Fraction with multiple square roots
December 11th 2012, 08:27 PM #1
I was able to clean this up:
I don't know how to proceed any further. I know the simplified answer is:
But I really don't know how to get there.
December 11th 2012, 08:58 PM #2
Re: Need help understanding how this was simplified. Fraction with multiple square roots
Hello, jamesrb!

I was able to clean this up: . $\frac{\sqrt{x^2-9}-\frac{x^2}{\sqrt{x^2-9}}}{{\color{blue}x^2-9}}$
I don't know how to proceed any further.
I know the simplified answer is: . $\frac{\text{-}9}{(x^2-9)^\frac{3}{2}}$

You have: . $\frac{\sqrt{x^2-9}-\dfrac{x^2}{\sqrt{x^2-9}}}{x^2-9}$

Multiply top and bottom by $\sqrt{x^2-9}$

. . $\frac{\sqrt{x^2-9}\cdot\left(\sqrt{x^2-9}-\dfrac{x^2}{\sqrt{x^2-9}}\right)}{\sqrt{x^2-9}\cdot(x^2-9)} \;=\;\frac{x^2-9-x^2}{(x^2-9)^{\frac{3}{2}}} \;=\;\frac{\text{-}9}{(x^2-9)^{\frac{3}{2}}}$
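As a sanity check (not part of the original thread), the starting expression and the simplified form can be compared numerically for a few values with x > 3:

```python
import math

def original(x):
    # (sqrt(x^2-9) - x^2/sqrt(x^2-9)) / (x^2-9)
    s = math.sqrt(x**2 - 9)
    return (s - x**2 / s) / (x**2 - 9)

def simplified(x):
    # -9 / (x^2-9)^(3/2)
    return -9 / (x**2 - 9) ** 1.5

for x in (3.5, 4.0, 5.0, 10.0):
    assert abs(original(x) - simplified(x)) < 1e-12
print("the two forms agree for x > 3")
```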
EGCE 491 Advance Math for CE
D. Hanselman, B. Littlefield, “Mastering MATLAB: A comprehensive tutorial and reference”, Prentice-Hall, 1995.
R.D. Strum, D.E. Kirk, “Contemporary Linear Systems using MATLAB”, Brooks/Cole Thomson Learning, 2000.
S.J. Chapman, “MATLAB Programming for Engineers”, 2nd edition, Thomson Engineering, 2001.
E. Kreyszig, “Advanced Engineering Mathematics”, 8th edition, John Wiley.
S. T. Karris, “Numerical analysis using MATLAB and spreadsheets”, 4th Edition, Orchard Publications, 2004.
C.F. Gerald, P.O. Wheatley, “Applied Numerical Analysis”, 7th edition, Pearson Education, Inc., 2004.
L.V. Fausett, “Applied Numerical Analysis using MATLAB”, Pearson Education, Inc., 2008. | {"url":"http://www.egmu.net/civil/wonsiri/EGCE491%20Advanced%20math.htm","timestamp":"2014-04-20T05:43:00Z","content_type":null,"content_length":"12121","record_id":"<urn:uuid:9b4cfb7d-6cf9-4d85-bc02-ee5b306cff2c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |