Application of a Sum Formula
January 6th 2012, 06:03 AM #1 (May 2011)
"Write cos(arctan 1 + arccos x) as an algebraic expression." So, I understand that I am to use the formula for cos(u + v) because it fits the expression, where u = arctan 1 and v = arccos x. I also know that I must create two right triangles, one for each of the angles u and v, the reason being that the angles are likely to be different. What I do not understand is why, when the formula is used, we don't evaluate cos(arctan 1) as cos(pi/4), which then equals 0. In the book, cos(arctan 1) is evaluated as 1/sqrt(2). Perhaps I do not understand inverse trigonometric functions as well as I thought I did.
Re: Application of a Sum Formula
January 6th 2012, 06:13 AM #2
pi/4 is on the unit circle and its cosine is not 0: $\cos \left(\frac{\pi}{4}\right) = \dfrac{\sqrt2}{2}$.
Re: Application of a Sum Formula
January 6th 2012, 06:14 AM #3
This is standard fare: $\arctan (1) = \frac{\pi }{4}\;\& \,\cos \left( {\frac{\pi }{4}} \right) = \frac{{\sqrt 2 }}{2}$
Re: Application of a Sum Formula
January 6th 2012, 06:43 AM #4 (May 2011)
You two are certainly right--and I am grateful for that. Sorry for the imprudent observation on my part.
Re: Application of a Sum Formula
January 6th 2012, 05:01 PM #5
An alternative is to note that \displaystyle \begin{align*} \sin^2{X} + \cos^2{X} &\equiv 1 \\ \frac{\sin^2{X}}{\cos^2{X}} + \frac{\cos^2{X}}{\cos^2{X}} &\equiv \frac{1}{\cos^2{X}} \\ \tan^2{X} + 1 &\equiv \frac{1}{\cos^2{X}} \\ \frac{1}{\tan^2{X} + 1} &\equiv \cos^2{X} \\ \cos{X} &\equiv \sqrt{\frac{1}{\tan^2{X} + 1}} \end{align*} So that means \displaystyle \begin{align*} \cos{\left(\arctan{1}\right)} &= \sqrt{\frac{1}{\left[\tan{\left(\arctan{1}\right)}\right]^2 + 1}} \\ &= \sqrt{\frac{1}{1^2 + 1}} \\ &= \sqrt{\frac{1}{1 + 1}} \\ &= \sqrt{\frac{1}{2}} \\ &= \frac{\sqrt{1}}{\sqrt{2}} \\ &= \frac{1}{\sqrt{2}} \end{align*}
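The replies above can be checked numerically, and the compound-angle expansion carried all the way through for the original question. This short Python sketch (my own check, not from the thread) confirms that cos(arctan 1) is 1/sqrt(2) rather than 0, and that the sum formula yields the algebraic expression (x - sqrt(1 - x^2)) / sqrt(2):

```python
import math

# cos(arctan 1) really is 1/sqrt(2), not 0
assert abs(math.cos(math.atan(1)) - 1 / math.sqrt(2)) < 1e-12

# cos(u + v) = cos u cos v - sin u sin v with u = arctan 1, v = arccos x,
# where cos u = sin u = 1/sqrt(2), cos v = x, sin v = sqrt(1 - x^2)
def algebraic(x):
    return (x - math.sqrt(1 - x * x)) / math.sqrt(2)

for x in (-1.0, -0.5, 0.0, 0.5, 0.9, 1.0):
    direct = math.cos(math.atan(1) + math.acos(x))
    assert abs(direct - algebraic(x)) < 1e-12
```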
{"url":"http://mathhelpforum.com/trigonometry/194968-application-sum-formula.html","timestamp":"2014-04-19T08:38:57Z","content_type":null,"content_length":"48729","record_id":"<urn:uuid:06bad674-defb-4d6a-ae9c-64a599cf26b7>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
East Boston Math Tutor Find an East Boston Math Tutor ...Then you solve those equations. You will get what you are looking for. I can help you understand concepts well and improve problem solving skills. 5 Subjects: including algebra 1, algebra 2, precalculus, physics I have worked over 20 years as an engineer. I understand how science and math are used in industry. I like to help students understand the importance of trying to determine if answers make sense. 10 Subjects: including linear algebra, mechanical engineering, algebra 1, algebra 2 ...These are also subjects that I tutor, so I am very able to help students through this aspect of their astronomy course. Mastery of basic statistics is a common need for work in both the natural and social sciences. In the last year, I have tutored three undergrads and two graduate students needing help with this core requirement. 44 Subjects: including econometrics, algebra 1, algebra 2, calculus ...In addition, I have written and edited computer training manuals, and have edited and proofread other types of materials including essays, reports, a dissertation, magazine articles, and a book. I have a degree in Linguistics from the University of California at Berkeley and have completed addit... 33 Subjects: including algebra 2, SAT math, English, algebra 1 ...My strength is my ability to tutor Spanish. I speak Spanish fluently as a result of bilingual school and a semester abroad in Madrid. I am a very personable and patient person and am able to meet the demands of any academic situation. 
30 Subjects: including algebra 1, biology, calculus, elementary (k-6th)
{"url":"http://www.purplemath.com/east_boston_math_tutors.php","timestamp":"2014-04-17T21:47:12Z","content_type":null,"content_length":"23584","record_id":"<urn:uuid:4d35e3e8-310b-45b5-9e92-fccb214e11bc>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
User Chris Eagle bio website age 28 visits member for 3 years, 3 months seen Feb 19 at 15:09 stats profile views 577 22 awarded Yearling 25 awarded Excavator 22 awarded Yearling 23 awarded Yearling Sep Independence of being an integer 9 comment If it's not an integer, then ZFC can certainly prove that, just by approximating it closely enough. 15 answered Most 'unintuitive' application of the Axiom of Choice? Jun Bijection of proper classes 18 comment How does this prove $\kappa^2=\kappa$ for every $\kappa \ge \aleph_0$? It seems to only work for alephs. Jun Counterexamples in Algebra? 18 comment Every integral domain is a subring of a UFD (for example, its field of fractions). So all you need here is an integral domain which is not a UFD. May Ordinal set $\Omega$ : von Neumann definition and modern definition 31 comment With what you call the modern definition, the ordinals themselves are not sets (except $0$), so it doesn't even make sense to ask if the class of ordinals is a set. May Why the triangle inequality? 20 comment Perhaps Michael means that you need the triangle inequality to show that open balls are open, and so the metric topology is generated by the open balls. May incompleteness in real analysis 17 comment Once you've got quantification over sets, it's easy to define the integers. "$x$ is an integer" means "$x$ is in every subset of $\mathbb{R}$ containing $0$, $1$ and $-1$ and closed under addition". May incompleteness in real analysis 16 comment The first-order theory of the real ordered field is not categorical in any uncountable cardinal. Such theories are $\omega$-stable, and hence can never define an infinite total order. May Mathematical ideas named after places 11 comment and the "British Rail metric" May Characterization of Tychonoff spaces in terms of open sets 8 comment Here's one $\mathbb{R}$-free characterization: a space is Tychonoff iff it has a Hausdorff compactification. Is that the sort of thing you want? 
May How to decompose an infinite set into two isomorphic ones without choice? 1 comment @André: Why are you presuming that? The usual definition of "infinite" is "not equinumerous with any natural number". Your property is called "Dedekind infinite". Apr Zariski's Main theorem 24 comment Crossposted at math.stackexchange.com/questions/34836/zariskis-main-theorem Apr solvability of an elementary functional equation 5 comment $f(x,0)=g(0)$, not $0$. 31 awarded Fanatic Mar What are some reasonable-sounding statements that are independent of ZFC? 25 revised you can do accents just by using the unicode characters Mar revised Choice function on the countable subsets of the reals 23 answer expanded question
{"url":"http://mathoverflow.net/users/11771/chris-eagle?tab=activity","timestamp":"2014-04-17T01:28:18Z","content_type":null,"content_length":"45427","record_id":"<urn:uuid:bfa2ac28-3ca8-4d50-9a72-0970cd9aa944>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Big O notation
Ranch Hand, Joined: Sep 05, 2007
Can somebody point me in the right direction to understand Big O notation? I am reading "Data Structures & Algorithms in Java" by Robert Lafore, and I'm having trouble understanding it. Thanks.
Posts: 86, Joined: Jan 08, 2008
The concept of Big O notation can certainly be confusing (at least it was when I was trying to learn it for the first time). Here are two good references to get you started:
Posts: 5
1) The classic on algorithm analysis is "Introduction to Algorithms" by Cormen, Leiserson, Rivest and Stein. Chapter 3 of this book, entitled "Growth of Functions", is all about asymptotic notation (i.e., Big O and others) and contains a very good discussion.
2) Another good discussion can be found in a free, on-line book called "Algorithms and Complexity" by Herbert Wilf, found here: ftp://ftp.cis.upenn.edu/pub/wilf/AlgComp.html. Chapter 1 is where you will find the discussion of Big O.
Hope that helps! Best Regards,
Joined: Oct 14, Posts: 8764
intermediate forum for this one
Spot false dilemmas now, ask me how! (If you're not on the edge, you're taking up too much room.)
subject: Big O notation
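To make the "growth" idea concrete, here is a small experiment (mine, not from the thread, and in Python rather than the Java of Lafore's book): counting worst-case comparisons shows linear search growing like O(n) while binary search grows like O(log n).

```python
import math

def linear_search_steps(arr, target):
    """Comparisons used by a straight scan: O(n) in the worst case."""
    steps = 0
    for v in arr:
        steps += 1
        if v == target:
            break
    return steps

def binary_search_steps(arr, target):
    """Comparisons used by repeated halving: O(log n) in the worst case."""
    steps, lo, hi = 0, 0, len(arr) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            break
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

for n in (16, 1024, 65536):
    arr = list(range(n))
    assert linear_search_steps(arr, n - 1) == n              # grows with n
    assert binary_search_steps(arr, 0) <= math.log2(n) + 1   # grows with log n
```

Doubling n doubles the linear count but adds only one comparison to the binary count, which is exactly the distinction Big O notation is designed to capture.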
{"url":"http://www.coderanch.com/t/384448/java/java/Big-notation","timestamp":"2014-04-18T21:36:37Z","content_type":null,"content_length":"22311","record_id":"<urn:uuid:4af5ddaf-2b6d-4df3-92e9-c0049f6f2c2b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving Trig Identities. Prove that cos^-1(1/5) + cos^-1(5/7) + cos^-1(19/35) = pi without the use of a calculator? How would I do this? Several approaches are possible. One approach might be the following: Let $\cos \alpha = \frac{1}{5}$, $\cos \beta = \frac{5}{7}$ and $\cos \gamma = \frac{19}{35}$. Now evaluate $\cos (\alpha + \beta + \gamma)$ by applying the compound angle formula several times. Note: Find $\sin \alpha$, $\sin \beta$ and $\sin \gamma$ by using the known values for cos and applying the Pythagorean Identity.
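The suggested approach works out exactly: the Pythagorean identity gives sin alpha = 2*sqrt(6)/5 and sin beta = 2*sqrt(6)/7, so cos(alpha + beta) = (1/5)(5/7) - (2*sqrt(6)/5)(2*sqrt(6)/7) = 1/7 - 24/35 = -19/35, which means alpha + beta = pi - gamma. A quick check (my own, not part of the original answer) in both floating-point and exact rational arithmetic:

```python
import math
from fractions import Fraction

a = math.acos(1 / 5)
b = math.acos(5 / 7)
g = math.acos(19 / 35)
assert abs(a + b + g - math.pi) < 1e-12   # the three angles sum to pi

# exact check of cos(a + b) = cos a cos b - sin a sin b = -19/35,
# using sin^2 = 1 - cos^2 (both sines are positive here)
cos_part = Fraction(1, 5) * Fraction(5, 7)
sin_prod_sq = (1 - Fraction(1, 5) ** 2) * (1 - Fraction(5, 7) ** 2)
assert sin_prod_sq == Fraction(576, 1225)               # = (24/35)^2 exactly
assert cos_part - Fraction(24, 35) == Fraction(-19, 35)
```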
{"url":"http://mathhelpforum.com/pre-calculus/87209-proving-trig-identities-print.html","timestamp":"2014-04-19T16:06:03Z","content_type":null,"content_length":"5559","record_id":"<urn:uuid:e13c50a0-a34a-4764-b22e-ddcedcb04602>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: MATH B - Street Question Replies: 9 Last Post: Jun 24, 2008 1:03 PM Messages: [ Previous | Next ] RE: MATH B - Street Question Posted: Jun 20, 2008 5:48 AM For question #33, if they crossed out, I think they still need to state that they understood the domain of the problem x>-2 Jennifer Griffin Math Teacher Stissing Mountain High School Pine Plains Central School District (518) 398-7181 ext. 221 -----Original Message----- From: owner-nysmathab@mathforum.org on behalf of mathematics226@aol.com Sent: Thu 6/19/2008 7:31 PM To: nysmathab@mathforum.org Subject: MATH B - Street Question Good day to everyone - I do not remember what question number it was, but the question about the streets with sides 12 and 10 - it was a similar right triangles question - how does everyone plan on grading it if they decided to do law of sines somehow and get 8 and change as the answer. Would this qualify as a conceptual error for 2 out of the four points? Also, question 33 - if the students factored out the difference of squares after applying the Quotient Rule, and cancelled out the binomial resulting in eliminating the extraneous root, will you accept that as a complete answer? Joe Perlman Date Subject Author 6/19/08 Mathematics226@aol.com 6/20/08 RE: MATH B - Street Question Jennifer Griffin 6/20/08 RE: MATH B - Street Question Eleanor Pupko 6/24/08 Eleanor Pupko 6/20/08 Re: MATH B - Street Question Kim Bourgeois 6/20/08 RE: MATH B - Street Question Ryley David 6/20/08 Re: MATH B - Street Question WKR918@aol.com 6/20/08 Re: MATH B - Street Question Fred Salvo 6/20/08 JACQUELINE GENESONI 6/20/08 Re: MATH B - Street Question WKR918@aol.com
{"url":"http://mathforum.org/kb/thread.jspa?threadID=1760386&messageID=6261865","timestamp":"2014-04-17T19:33:47Z","content_type":null,"content_length":"28630","record_id":"<urn:uuid:ae83bcc4-209c-491a-9314-b18917ed2501>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Interactive Math Programs These programs are designed to be used with Multivariable Mathematics by R.E. Williamson and H.F. Trotter, and Introduction to Differential Equations by Richard E. Williamson, but are quite generally useful for illustrating concepts in the areas covered by the texts. So have fun, experiment with different values, and let us know if you have any trouble. These programs were originally written in True Basic for the Macintosh by Richard Williamson. The translation into Java and the writing of a recursive descent equation parser was done by Scott Rankin and Susan Schwarz. The programs are java applets tested on Macintosh computers running OS 10 using Netscape v7 and Internet Explorer v5.2, and on computers running Windows 2000 and XP using Netscape v7 and Internet Explorer v6. The applets will run on Macintosh computers running OS 9 using Netscape 7 but not Internet Explorer. To run one of the programs in the list below, just click on its name. Each program displays a brief explanation of how to use it. If you run into difficulties, here are more detailed instructions on running the applets. The first time you try any of the programs you may want to look at the link anyway. If you do not see the buttons that are used to run the program, you may need to scroll down in the browser window until the buttons are visible. Multivariable Calculus First Order Equations Second Order Equations Higher Dimensional Systems Partial Differential Equations List of the features of the equation parser For more information, please contact Professor Richard Williamson, or for problems with the applets, Susan Schwarz Last modified: Mon Jun 26 09:02:39 EST 2000
{"url":"http://www.dartmouth.edu/~rewn/","timestamp":"2014-04-16T11:11:06Z","content_type":null,"content_length":"7655","record_id":"<urn:uuid:22fa66bb-7802-4e5b-95fc-298cce16f47f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
Vertices of a simple graph May 24th 2007, 03:07 PM #1 Super Member Mar 2006 Vertices of a simple graph Prove that a simple graph with at least two vertices has two or more vertices of the same degree. Realize that if a simple graph has n vertices then the maximum degree of any vertex is n-1. Thus suppose that we have a simple graph G with a degree sequence of <0,1,2,…,n-1>, that is, all n vertices are of different degree. Now remove the vertex of degree 0. We now have a new graph G’ with (n-1) vertices and the same number of edges. But it has a vertex of degree (n-1). That is impossible; so the original graph is impossible. Thus the graph G has to have at least two vertices of the same degree. Last edited by Plato; May 24th 2007 at 04:23 PM. Discrete maths solutions - option? Thanks heaps for your explanations. By the way, what is the option of using the "pigeonhole" principle (2nd form) to answer the question? Looking forward to your comments. Sorry for getting the name wrong, my apology Plato. By the way, what is the option of answering the question using the pigeonhole principle - 2nd form? Look forward to your comments. Actually the graph theory proof above does use the pigeonhole principle implicitly. Think of 15 pigeonholes numbered 0,1,2,…,14. Assign each person to the hole that matches the number of acquaintances at the meeting that he/she has. If hole #14 is empty then the fifteen will be assigned to 14 holes, so there is a hole 0-13 with at least two. If hole #14 is not empty then someone there knows each other person there. Hence hole #0 must be empty (WHY?). Thus the fifteen will be assigned to 14 holes, so there is a hole 1-14 with at least two. Last edited by Plato; June 8th 2007 at 02:48 PM. Question 2 - Thanks Your explanation makes good sense. Thanks heaps.
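The theorem in the proof can also be confirmed by brute force. The sketch below (my own check, not from the thread) enumerates every simple graph on 2 to 5 vertices and verifies that no such graph has all-distinct degrees, which is the pigeonhole fact the argument rests on:

```python
from itertools import combinations

def degree_sequences(n):
    """Yield the degree sequence of every simple graph on n labeled vertices."""
    pairs = list(combinations(range(n), 2))
    for mask in range(1 << len(pairs)):      # each bitmask picks an edge set
        deg = [0] * n
        for i, (u, v) in enumerate(pairs):
            if mask >> i & 1:
                deg[u] += 1
                deg[v] += 1
        yield deg

for n in range(2, 6):
    for deg in degree_sequences(n):
        # pigeonhole: n vertices, degrees lie in 0..n-1, but degree 0 and
        # degree n-1 cannot both occur, so two vertices share a degree
        assert len(set(deg)) < n
```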
{"url":"http://mathhelpforum.com/discrete-math/15332-vertices-simple-graph.html","timestamp":"2014-04-18T12:05:38Z","content_type":null,"content_length":"49705","record_id":"<urn:uuid:f778e3fb-a3f7-4b0b-9c6c-2e3c53dc792f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
6.892 Lecture #8
Minimizing Misclassification
An Example
The Perceptron Picture
A Learning Procedure
Perceptron Learning Rule
Random Gradient Descent
How Random is this Descent?
Very Random Descent
Stochastic Approximation
Perceptron: Solution to the World’s Problems
Perceptrons: Unfortunate Dead End?
The Derivative Trick
Derivative of Sigmoid
Layout of Eye and Brain
Picture of Brain
Neuron Tutorial
Neuron Figure
Pictures of Spikes
Neurons Communicate With Spikes
Real vs. Model Neurons
What do Spikes Mean?
The Structure of the Eye
The Retina
The Receptive Field
Retinal Receptive Fields
The Structure of LGN
{"url":"http://www.ai.mit.edu/courses/6.892/lecture8-html/index.htm","timestamp":"2014-04-17T18:24:35Z","content_type":null,"content_length":"2080","record_id":"<urn:uuid:dc9179a3-7222-4206-a8f6-817430d9d487>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
PEMDAS Calculator November 10th, 2013, 08:22 PM PEMDAS Calculator I'm writing a calculator, and to start, I'm writing basic arithmetic functionality. I've figured out how to tackle the "EMDAS" portion of the Order of Ops, however, I don't have a clue on how to simplify parentheses. The code I've written doesn't work in that regard, though everything else so far works great. How would you guys do the parentheses? Code : pseudo-code which needs revising/reworking (I've written actual code, but it's fairly confusing...) solveParentheses(String[] equation): loop until no more parentheses: find first parenthesis store its location in a variable find close parenthesis store its location in a variable If found one but not the other, give error message save the content of the parentheses into an array that array = solveParentheses(that array) simplify order of ops inside parentheses set the location of end parenthesis to result of simplification set indices between the first parenthesis and result to "null" :end loop count number of null values new array of length equation.length-nullcounter fill array with non-null values return array :end function November 10th, 2013, 09:17 PM Re: PEMDAS Calculator 7 + 3 infix 7 3 + postfix The easiest way to evaluate infix equations is to convert them to post fix first. November 10th, 2013, 10:09 PM Re: PEMDAS Calculator That's not how my algorithm works... Care to elaborate? November 10th, 2013, 10:24 PM Re: PEMDAS Calculator Maybe its not how YOUR algorithm works but I suggested a different algorithm you can use. Have you bothered to Google "infix to postfix"? November 10th, 2013, 10:53 PM Re: PEMDAS Calculator Yes, I still don't know what it does.
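Following the "infix to postfix" suggestion: the usual tool is Dijkstra's shunting-yard algorithm, which handles parentheses naturally by pushing '(' onto the operator stack and popping back to it when ')' arrives. Here is a sketch in Python rather than the poster's Java (token names and the right-associative '^' are my choices, not from the thread):

```python
import operator

PREC = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul,
       '/': operator.truediv, '^': operator.pow}

def to_postfix(tokens):
    """Shunting-yard: infix token list -> postfix list of numbers and operators."""
    out, stack = [], []
    for t in tokens:
        if t == '(':
            stack.append(t)
        elif t == ')':
            while stack and stack[-1] != '(':
                out.append(stack.pop())
            if not stack:
                raise ValueError("unmatched ')'")
            stack.pop()  # discard the matching '('
        elif t in PREC:
            # pop higher-precedence operators (and equal, for left-assoc ops)
            while (stack and stack[-1] != '(' and
                   (PREC[stack[-1]] > PREC[t] or
                    (PREC[stack[-1]] == PREC[t] and t != '^'))):
                out.append(stack.pop())
            stack.append(t)
        else:
            out.append(float(t))
    while stack:
        if stack[-1] == '(':
            raise ValueError("unmatched '('")
        out.append(stack.pop())
    return out

def eval_postfix(postfix):
    """Evaluate a postfix token list with a value stack."""
    st = []
    for t in postfix:
        if isinstance(t, float):
            st.append(t)
        else:
            b, a = st.pop(), st.pop()
            st.append(OPS[t](a, b))
    return st[0]
```

For example, `eval_postfix(to_postfix("2 * ( 3 + 4 )".split()))` evaluates to 14.0, and a stray parenthesis raises ValueError instead of silently producing a wrong answer, which addresses the "found one but not the other" case in the pseudo-code.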
{"url":"http://www.javaprogrammingforums.com/%20java-theory-questions/33669-pemdas-calculator-printingthethread.html","timestamp":"2014-04-18T08:41:16Z","content_type":null,"content_length":"6479","record_id":"<urn:uuid:4ac3611c-cafe-4e99-8240-f7e7420c426f>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Leibnitz Rule April 22nd 2007, 10:05 AM #1 Leibnitz Rule I'm having a difficult time solving this problem. The problem statement reads as: x = y(x) - (1/2)*integral( (1-|x-u|) * y(u) du ) from -1 to 1. From direct differentiation, we are supposed to use the Leibnitz rule to prove that y''(x) + y(x) = 0, subject to y(1) + y(-1) = 0 and y'(1) + y'(-1) = 2. Any help would be appreciated. Differentiating the absolute value of (x-u) with respect to x is giving me issues. Thank you. Follow Math Help Forum on Facebook and Google+
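One way to handle |x - u| is to split the integral at u = x, where the absolute value changes branch. Leibniz's rule then shows the boundary terms at u = x cancel, giving F'(x) = -int_{-1}^{x} y du + int_{x}^{1} y du and hence F''(x) = -2 y(x); with x = y(x) - F(x)/2, two differentiations yield y'' + y = 0. The numerical sketch below (my own, using a throwaway test function for y) checks the key fact F'' = -2y:

```python
def trap(f, a, b, n=4000):
    # composite trapezoid rule on [a, b]
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def y(u):
    return u * u  # any smooth throwaway test function works here

def F(x):
    # F(x) = integral_{-1}^{1} (1 - |x - u|) y(u) du, split at the kink u = x
    left = trap(lambda u: (1.0 - (x - u)) * y(u), -1.0, x)
    right = trap(lambda u: (1.0 - (u - x)) * y(u), x, 1.0)
    return left + right

# second central difference should reproduce F''(x) = -2 y(x)
h = 1e-3
for x in (-0.5, 0.0, 0.3, 0.7):
    F2 = (F(x + h) - 2.0 * F(x) + F(x - h)) / (h * h)
    assert abs(F2 + 2.0 * y(x)) < 1e-3
```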
{"url":"http://mathhelpforum.com/calculus/14028-leibnitz-rule.html","timestamp":"2014-04-18T18:03:33Z","content_type":null,"content_length":"28053","record_id":"<urn:uuid:0c37d263-0af2-46f4-99fd-39ac034efe9e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
Absolute value equation hi Sabbir, Welcome to the forum. For any 'linear' expression The critical value for x is So, for your equation you need to consider these cases: (i) x < -1/2 expression becomes Although, strictly, that value is outside the range that I'm checking, a numeric check shows it is a solution. (ii) x = -1/2 Just done that. (iii) -1/2 < x < 1/2 expression becomes This is true for all values of x in the range. (iv) x = +1/2 expression becomes So this is a solution. (v) x > 1/2 expression becomes Putting (i) to (v) together, the solution set is {x : -1/2 ≤ x ≤ 1/2} or [-1/2, 1/2], as noelevans has already explained. Hope that helps, You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
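The equation images did not survive the page extraction, but the critical values at x = ±1/2 and the answer [-1/2, 1/2] are consistent with an equation of the form |x - 1/2| + |x + 1/2| = 1. That reading is an assumption on my part, not stated in the post; under it, the case analysis can be checked numerically:

```python
def lhs(x):
    # assumed form of the equation (hypothetical): |x - 1/2| + |x + 1/2| = 1
    return abs(x - 0.5) + abs(x + 0.5)

# sweep a grid: x solves the equation exactly when -1/2 <= x <= 1/2
for i in range(-200, 201):
    x = i / 100
    is_solution = abs(lhs(x) - 1.0) < 1e-9
    assert is_solution == (-0.5 <= x <= 0.5)
```

Between the two critical points the sum of distances to 1/2 and -1/2 is identically 1, which matches case (iii) holding "for all values of x in the range".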
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=237388","timestamp":"2014-04-18T06:14:48Z","content_type":null,"content_length":"22328","record_id":"<urn:uuid:61e2d346-beb2-49ee-a60a-77d19a65a76b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
Function Notation with Logs and Exponentials - Problem 3 Exponential functions are often used for modeling population growth, as we see in these examples. The function notation shows that you're modeling a population in terms of the input, time. Pay attention to the units, because these problems are often presented "in thousands of individuals" or perhaps "per decade." Most errors come from typing these into your calculator incorrectly: use parentheses, or do it in parts according to the order of operations if necessary. Transcript Coming Soon! population growth function notation exponential form
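For instance, a model of the kind described might report population in thousands as a function of time in years, with a growth factor quoted per decade (all numbers below are hypothetical, not from the video); the unit conversion t/10, kept inside parentheses, is exactly where calculator errors creep in:

```python
def population(t_years, p0_thousands=5.0, rate_per_decade=1.8):
    # hypothetical model P(t) = P0 * r^(t / 10); mind the units --
    # P is in thousands of individuals, r is growth per *decade*
    return p0_thousands * rate_per_decade ** (t_years / 10.0)

assert population(0) == 5.0                       # starting population
assert abs(population(10) - 9.0) < 1e-9           # one decade: 5 * 1.8
assert abs(population(20) - 16.2) < 1e-9          # two decades: 5 * 1.8^2
```

Typing `1.8 ** 20 / 10` instead of `1.8 ** (20 / 10)` gives a wildly different number, which is the order-of-operations trap the lesson warns about.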
{"url":"https://www.brightstorm.com/math/precalculus/exponential-and-logarithmic-functions/function-notation-with-logs-and-exponentials-problem-3/","timestamp":"2014-04-18T18:27:32Z","content_type":null,"content_length":"72845","record_id":"<urn:uuid:826fc453-bec0-4db1-b476-981107d3cd51>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help March 10th 2010, 06:29 AM #1 Find all solutions Are you supposed to let this equal 0? But I thought that's only for quadratics, and it would not be a quadratic? It's a degree-5 equation, but no need to fear: The middle one can't be zero, so you'll get one family of solutions from the first one and two families of solutions from the second. They'll be families of solutions because if $\theta$ is a solution, then $\theta+2n\pi$ is a solution for all integers n. Post again if you're still having trouble. March 10th 2010, 08:28 AM #2 Super Member Mar 2010
{"url":"http://mathhelpforum.com/trigonometry/133086-solutions.html","timestamp":"2014-04-16T11:59:40Z","content_type":null,"content_length":"32435","record_id":"<urn:uuid:62753055-b9ae-45a4-9f5c-effbad35ba6a>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Limits! Hi bobbym, What is the limit of a_n as n approaches infinity, with a being any real number. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Limits! Very good! You found the link, I was hiding it! In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Limits! Do you want me to erase the link? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Limits! No, that is okay. I was just teasing him. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Limits! "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Limits! Hi gAr; Did you know to do that one before you found the link? Hi anonimnystefy; See post #450. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Limits! Hi bobbym, I tried for sometime, but didn't get the solution. Then I searched for the link. 
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Limits! I was unable to get it either. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Limits! Summations with flooring and ceiling are a bit difficult for me. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Limits! I agree, it is very hard to eliminate them. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Limits! hi bobbym The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Limits! No problem is too easy! In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Limits! hi bobbym The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Limits! 
Very good!

In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians, are contemptuous about proof.

Re: Limits!
hi bobbym
next one? and if you can, please check my attempt at the series #17 problem. tell me if it's good at all. if not, i want to change my direction so that i don't waste my time on wrong attempts.

The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment

Re: Limits!
hi bobbym
my new friend Maxima says it's

Re: Limits!
What do you say it is?

Re: Limits!
hi bobbym

Re: Limits!
How do you go about getting such an answer?

Re: Limits!
hi bobbym
i'm not sure. maybe because e/x tends to zero when x goes to infinity, and 0^'infinity' tends to zero? i'm just guessing, because i tried to do l'Hopital's rule but i got some pretty nasty

Re: Limits!
Do you at least have some empirical evidence?

Re: Limits!
hi bobbym
wolfram won't work it out for x=100 or x=1000. i think there's no sense in going larger.

Re: Limits!
Did you know that Maxima can do arithmetic that dwarfs Mathematica's puny efforts.

Re: Limits!
Hi bobbym
I haven't done this on Maxima yet, but I'm sure that the answer is 0. maybe another way could be using a substitution y=e^x.
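For what it's worth, the empirical question in the thread is easy to probe in a few lines of code. A sketch (assuming, as the posts suggest, that the limit being discussed is (e/x)^x as x goes to infinity; the thread never states it explicitly): working with the logarithm sidesteps the underflow that makes numerical tools give up for large x.

```python
import math

def log_of_expr(x):
    # log((e/x)**x) = x * (1 - log(x)); as x grows this heads to
    # -infinity, so (e/x)**x itself heads to 0.
    return x * (1 - math.log(x))

for x in (10, 100, 1000):
    print(x, log_of_expr(x))
```

At x = 1000 the log is already about -5908, i.e. the expression itself is far below the smallest positive float, which is why plugging large x directly into a calculator returns nothing useful.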
Re: [TF-ENT] redundant blank nodes From: Birte Glimm <birte.glimm@comlab.ox.ac.uk> Date: Fri, 9 Oct 2009 19:14:01 +0100 Message-ID: <492f2b0b0910091114j2df67cf8o8ed30d4dd61cfdde@mail.gmail.com> To: "Seaborne, Andy" < andy.seaborne@hp.com> Cc: SPARQL Working Group <public-rdf-dawg@w3.org> 2009/10/9 Seaborne, Andy <andy.seaborne@hp.com>: > Birte, > Could you explain further? This could be a way forward but I can't see how the existing text works for this. > The only entailment relationship in the matching step is between G and P(BGP) and does not consider SG. > SG is related to G but after that it is just a graph. The way it is related isn't mentioned and does not appear at the point of entailment between G and P(BGP). In the matching step, only the vocabulary of SG is considered, not the way it is e-equivalence to G. Maybe we can add that in some way. > Andy Ok, here is my take on that: It could be that I misinterpreted whether the pattern instance mapping maps to the scoping graph or to the active graph. But I think even if I got that wrong, we can make the definition work. I believed that entailment is decided w.r.t. scoping graph and in my mind I imagined that you could, for example, take the active graph, rename the blank nodes to get SG, then built a partial RDFS closure of SG with an algorithm like what ter Horst proposed and find solutions with simple entailment (systems could use different ways of course). I came to this because of condition 3 of extensuions for BGP matching: For any scoping graph SG and answer set {P1 ... Pn} for a basic graph pattern BGP, and where {BGP1 .... BGPn} is a set of basic graph patterns all equivalent to BGP, none of which share any blank nodes with any other or with SG SG E-entails (SG union P1(BGP1) union ... union Pn(BGPn)) but since the only difference between the active and the scoping graph are blank node names and they are disjoint it does not really matter whether I decide entailment w.r.t. G or SG. 
Also, in the description of data sets it says:

If DS is the dataset of a query, pattern solutions are therefore understood to be not from the active graph of DS itself, but from an RDF graph, called the scoping graph, which is graph-equivalent to the active graph of DS but shares no blank nodes with DS or with BGP.

"understood" is, however, a bit vague. All in all, this scoping graph business is a bit confusing to me since, on the other hand, the scoping graph is said to be a purely theoretical construct and you could also map the variables to the active graph and use the scoping graph only to rename blank nodes in the matches that you got from the active graph. Is that how I have to read it?

I'll try my example again, this time assuming I decide entailment w.r.t. G, and show how C1 and C2 need to be rephrased then.

We assumed the active graph G is:
_:b1 :p :z.
_:b2 :q :y.
We assumed the scoping graph SG is:
_:sg1 :p :z.
_:sg2 :q :y.
and the BGP of the query is
?x :p _:n .
(I named [] _:n just to have some ID for it.)

If I do not derive the solutions from SG, but from G, it would work as follows. G entails
_:b1_1 :p :z.    for _:b1_1 allocated to _:b1
_:b1_2 :p :z.    for _:b1_2 allocated to _:b1_1
_:b1 :p _:z_1.   for _:z_1 allocated to :z
_:b1_1 :p _:z_1.
_:b2_1 :q :y.    for _:b2_1 allocated to _:b2
same as when using SG, but starting with blank nodes _:b1 and _:b2 instead of _:sg1 and _:sg2.

We get the following pattern instance mappings for the BGP:
P0: mu: x->_:b1,     sigma: _:n->:z
P1: mu: x->_:b1_1,   sigma: _:n->:z
P2: mu: x->_:b1_2,   sigma: _:n->:z
...
Pn: mu: x->_:b1,     sigma: _:n->_:z_1
Pn+1: mu: x->_:b1_1, sigma: _:n->_:z_1
but not
mu: x->_:b2,   sigma: _:n->:y    because the predicate is :q
mu: x->_:b2_1, sigma: _:n->:y
etc.

Since blank nodes in SG are different from the ones in G, we have as required that SG RDFS-entails
SG union P0(BGP0) union P1(BGP1) union ...
although strictly speaking the renaming of blank nodes from BGP to BGP0, BGP1, ...
makes the pattern instance mappings possibly invalid, but we don't really need sigma any more anyway. We have the possible solutions mu0:x->_:b1, mu1:x->_:b1_1, mu2:x->_:b1_2, ... for ?x. Now what we can say is that since G and SG are graph equivalent (only differing in their blank nodes), there is a bijection M (one-to-one mapping) from G to SG witnessing the equivalence with M(_:b1)=:_sg1 and M(_:b2)=:_sg2 and we can rephrase C1 and C2 as follows A possible solution μ is a solution for BGP from G under RDF entailment if (C1) for each subject s of a triple ( s, p, o ) in P(BGP), M(s) is defined and M(s) occurs in the scoping graph and (C2) if μ(v) is a blank node, then M(μ(v)) is defined and M(μ(v)) occurs in the scoping graph SG. Now we would get only mu0:x->_:b1 as an answer since M(_:b1)=_:sg1 occurs in the scoping graph, but mu1:x->_:b1_1 is not an answer since mu1(x) is a blank node, but M is not defined for mu1(x)=_:b1_1as required by C2. Similarly for all other solution mappings mu2, ... Now, however, the blank nodes in the answers are not really from SG and that is why I thought we do the whole mapping on SG (at least theoretically), but is that how it is intended? I can of course rename them with M(blank node), but I cannot read that from the spec. >> -----Original Message----- >> From: public-rdf-dawg-request@w3.org [mailto:public-rdf-dawg-request@w3.org] >> On Behalf Of Birte Glimm >> Sent: 08 October 2009 12:22 >> To: SPARQL Working Group >> Subject: Re: [TF-ENT] redundant blank nodes >> Actually, I don't think any more we have a problem here because (x, >> _:sg2) is not an entailed triple. I should have seen that earlier and >> apologise for the confusion. I explain below: >> We assumed the scoping graph SG is: >> _:sg1 :p :z. >> _:sg2 :q :y. >> and the BGP of the query is >> ?x :p [] . >> We assumed that (unfortunately) (x, _:sg2) would be an answer, but it >> is not. 
>> Clearly _:sg2 :p [] is a well-formed RDF triple, but it is NOT
>> RDF(S) entailed by SG or G because you cannot just exchange blank
>> nodes. You can introduce fresh blank nodes, but you have to keep track
>> of which of your fresh blank nodes is allocated to which existing
>> node. You can validly derive:
>> _:sg1_1 :p :z. for _:sg1_1 allocated to _:sg1
>> _:sg1_2 :p :z. for _:sg1_2 allocated to _:sg1_1
>> _:sg1 :p _:z_1. for _:z_1 allocated to :z
>> _:sg1_1 :p _:z_1.
>> etc
>> There is no legal way to get
>> _:sg2 :p :z.
>> This would say that :z has both a :p and a :q predecessor and that
>> there exists a node (_:sg2) that has both a :p and a :q successor. This
>> does not follow from the given graph. The given graph just says there
>> exists a node that has :z as :p successor (_:sg1) and there exists a
>> node that has :y as :q successor (_:sg2). They could be the same, but
>> they do not have to be the same, and entailment considers only what is
>> true in all models.
>> Thus, you cannot arbitrarily use blank nodes from the scoping graph.
>> You can only use them in consequences if they stand for the same node.
>> You can introduce (arbitrarily many) fresh blank nodes, but you still
>> have to keep track of which fresh node represents which original node
>> (called "allocated to" in the RDF spec). We will not return these fresh
>> blank nodes in the query answers (could be infinitely many), but they
>> can of course be used in the derivation process.
>> Birte
>> 2009/10/7 Birte Glimm <birte.glimm@comlab.ox.ac.uk>:
>> > Hi all,
>> > I am working my way through the open questions/issues, so the next on
>> > my list is something that Andy mentioned. At the moment, we have the 2
>> > conditions C1 and C2 that restrict the set of possible answers to a
>> > finite set of answers for RDF(S) entailment regimes. What these
>> > conditions don't cover is redundant answers that use different blank
>> > nodes. E.g.,
>> > suppose G is
>> > _:b1 :p :z.
>> > _:b2 :q :y.
>> > SG is: >> > _:sg1 :p :z. >> > _:sg2 :q :y. >> > and the BGP of the query is >> > ?x :p [] . >> > We would get the two solutions >> > (x, _:sg1) >> > (x, _:sg2) >> > because both _:sg1 :p [] and _:sg2 :p [] are well-formed RDF triples >> > that are RDF(S) entailed by G (C1) each subject is in the set of terms >> > used by the scoping graph and (C2) μ(?x) is a blank node occurring in >> > SG. This always results in a finite answer sequence, but the more >> > blank nodes we have origianlly, the more redundant answers we get. >> > >> > Now what I would rather have only (x, _:sg1) as an answer. This could >> > be defined by a notion of derivability I think. E.g., >> > Let R be a set of entailment rules for the entailment regime E, then, >> > for each triple (s, p, o) in P(BGP), there must be a derivation of (s, >> > p, o) from SG by means of R. >> > Now I could use, for example, the RDFS entailment rules as suggested >> > by ter Horst, and I get what I want because _:sg2 :p [] has no >> > derivation. >> > >> > What I am quite unhappy about is the use of "a set of entailment >> > rules" because different systems might want to use different ways of >> > deriving consequences and this might be too specific. Any opinions on >> > that? Any better suggestion? >> > >> > Birte >> > >> > >> > >> > >> > -- >> > Dr. Birte Glimm, Room 306 >> > Computing Laboratory >> > Parks Road >> > Oxford >> > OX1 3QD >> > United Kingdom >> > +44 (0)1865 283529 >> > >> -- >> Dr. Birte Glimm, Room 306 >> Computing Laboratory >> Parks Road >> Oxford >> OX1 3QD >> United Kingdom >> +44 (0)1865 283529 Dr. Birte Glimm, Room 306 Computing Laboratory Parks Road OX1 3QD United Kingdom +44 (0)1865 283529 Received on Friday, 9 October 2009 18:14:34 GMT
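The point in the thread above, that a triple like _:sg2 :p :z is not simply entailed just because it is well-formed, can be checked mechanically. A toy sketch (not from the email, and not any real RDF library): a graph G simply entails a graph H exactly when some mapping of H's blank nodes into G's terms sends every triple of H into G, so a brute-force search over mappings decides it for small graphs.

```python
from itertools import product

def is_blank(term):
    # Blank nodes are written with the usual "_:" prefix.
    return term.startswith("_:")

def simply_entails(g, h):
    """Brute-force simple entailment: look for an instance mapping of
    h's blank nodes into g's terms that maps h into a subset of g."""
    h_blanks = sorted({t for triple in h for t in triple if is_blank(t)})
    g_terms = sorted({t for triple in g for t in triple})
    for values in product(g_terms, repeat=len(h_blanks)):
        mapping = dict(zip(h_blanks, values))
        image = {tuple(mapping.get(t, t) for t in triple) for triple in h}
        if image <= set(g):
            return True
    return False

# The graph G from the email:
G = {("_:b1", ":p", ":z"), ("_:b2", ":q", ":y")}

# "some node has a :p successor" (the ?x :p [] pattern): entailed
print(simply_entails(G, {("_:x", ":p", "_:n")}))
# "one node has both a :p and a :q successor": NOT entailed,
# which is exactly why _:sg2 :p :z cannot be derived
print(simply_entails(G, {("_:x", ":p", ":z"), ("_:x", ":q", ":y")}))
```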
Wolfram Demonstrations Project
Complex Newton Map
Newton's method uses an initial value z_0 and the recursion z_{n+1} = z_n - f(z_n)/f'(z_n) to find roots of the equation f(z) = 0. In the complex plane, equations with multiple roots show fractal behavior on grids of initial conditions. The Newton depth determines the number of recursions. In this Demonstration, the function f is the polynomial based on the complex numbers defined by the locators (there are four to start). Under them is a grid of initial conditions whose size is based on the resolution setting. The calculation at the highest resolution is 36 times slower than at the lowest resolution.
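A minimal sketch of the idea in plain Python (not the Demonstration's actual Mathematica source): the Newton iteration on complex seeds, applied to z^3 - 1, whose basins of attraction give the familiar three-lobed fractal when swept over a grid of initial conditions.

```python
import cmath

def newton(f, df, z, depth=30):
    # Iterate z -> z - f(z)/df(z); "depth" plays the role of the
    # Demonstration's Newton depth.
    for _ in range(depth):
        d = df(z)
        if d == 0:
            break
        z = z - f(z) / d
    return z

f = lambda z: z**3 - 1           # roots: the three cube roots of unity
df = lambda z: 3 * z**2
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

# Sweep a coarse grid of seeds and record which root each converges to;
# at higher resolutions the boundaries between basins are fractal.
grid = [complex(x, y) for x in (-1, -0.5, 0.5, 1) for y in (-1, -0.5, 0.5, 1)]
basins = [min(range(3), key=lambda k: abs(newton(f, df, z) - roots[k]))
          for z in grid]
print(basins)
```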
Ostrich-inspired robot learns some fancy footwork
March 23, 2013

Meet FastRunner, a bioinspired robot that thinks it’s an ostrich, being built at the Florida Institute for Human and Machine Cognition. It’s expected to be the world’s fastest robotic biped, at 22 mph.

Impressive, but no Boston Dynamics Cheetah, which runs at 28.3 mph (on a treadmill) — beating out Usain Bolt’s 27.79 mph. But FastRunner may soon negotiate more complex environments — ones that Cheetah may fear to tread, thanks to research at MIT.

How to trip a robot

When an ambulatory robot, such as FastRunner, is moving one of its limbs through free space, its behavior is well-described by a few simple equations. But as soon as it strikes something solid — when a walking robot’s foot hits the ground, or a grasping robot’s hand touches an object — those equations break down. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory are hoping to change all that, with a new mathematical framework that could lead to more efficient controllers for a wide range of robotic tasks.

Ordinarily, says Russ Tedrake, MIT associate professor of computer science and engineering, a roboticist trying to develop a controller for a bipedal robot would assume that the robot’s foot makes contact with the ground in some prescribed way: say, the heel strikes first; then the forefoot strikes; then the heel lifts. “That doesn’t work for FastRunner, because there’s a compliant foot that could hit at any number of points, there’s joint limits in the leg, there’s all kinds of complexity,” Tedrake says. “If you look at all the possible contact configurations the robot could be in, there’s 4 million of them.
And you can’t possibly analyze them all independently.” The mathematics of agility To prove the stability of a control system for a robot that’s colliding with the world, then, it’s necessary to evaluate not only every possible configuration of the point of the contact, but also every possible solution of the resulting equations. That’s exactly what Tedrake and his associates did. The key to their approach is to describe opposed possibilities for the state of a robotic system using simple algebraic expressions. For instance, as the foot of a bipedal robot approaches the ground, either the force exerted by the ground — call it F — or the distance to the ground — call it d — is equal to zero. So the equation Fd = 0 holds whether the robot’s foot is moving through free space or touching the ground. Just a few such equations give the researchers enough mathematical purchase on the problem of collision that they can draw boundaries around the whole space of solutions. The result is not a precise description of how a robot will behave in any given instance, but enough to offer guarantees of stability. And maybe enough to help FastRunner negotiate even trickier landscapes than the ones shown in the video. UPDATE 3/26/2013: First paragraph revised. FastRunner is being built at the Florida Institute for Human and Machine Cognition. It’s expected to be the world’s fastest robotic biped, at 22 mph. There have been no tests yet. Comments (10) 1. March 26, 2013 by Ralph Dratman If, as it appears, this robot has not yet been constructed in the real world, what is the source of the 22 mph claim? □ March 26, 2013 by Editor Thanks for that catch. First paragraph revised to read: “Meet FastRunner, a bioinspired robot that thinks it’s an ostrich, being built at the Florida Institute for Human and Machine Cognition. It’s expected to be the world’s fastest robotic biped, at 22 mph.” ☆ March 26, 2013 by Ralph Dratman Thank you, Editor! 
I hope we get an update when the Runner actually meets the Road. (Hmm. Runner, Road, Runner… kind of catchy.) ○ March 27, 2013 by Editor Uh, OK, I’ll run with that… 2. March 24, 2013 by Starheart And here’s a proof that MIT is being run by gnomes. Google “Mechanostrider” 3. March 24, 2013 by Bri At twenty two miles an hour, what happens when it trips in real world situations? After it’s rolled around from falling at that speed, do they have any algorithms for assessing it’s self condition? The functionality of it’s limbs? How to stand it’s self up? It’s great research but they have quite a ways to go before they have a practical field model. It will probably work very well in extremely varied terrain, such as a debris field or boulder field. □ March 24, 2013 by Pete Once the robot is complete and is mass-manufactured, DARPA should buy these ostrich-oid biped robots. 4. March 23, 2013 by pazuzu great, it’s going to be called “Deady Chicken” by future human anti-Skynet resistance □ March 24, 2013 by John Nice ;) 5. March 23, 2013 by Brian Nice. I can’t wait to see a working model.
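The algebraic trick described in the article (the condition Fd = 0) is easy to illustrate in one dimension. A hypothetical sketch, not MIT's actual controller code: for a block above flat ground, either the gap d or the contact force F is zero, so the single equation F·d = 0 covers both the airborne and the contact regime.

```python
def contact_force(mass, gravity, gap):
    # Complementarity: the ground pushes only when the gap is closed.
    if gap > 0:
        return 0.0            # airborne: no ground reaction force
    return mass * gravity     # in contact: force balances gravity

for gap in (0.0, 0.01, 1.0):
    F = contact_force(80.0, 9.81, gap)
    print(gap, F, F * gap)    # F * gap is 0 in every regime
```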
[SciPy-user] shared memory machines
Gael Varoquaux gael.varoquaux@normalesup....
Sun Feb 1 09:29:40 CST 2009

On Sun, Feb 01, 2009 at 10:03:30AM -0500, Gideon Simpson wrote:
> Yes, but I'm talking about when you have a multiprocessor/multicore
> system, not a commodity cluster. In these shared memory
> configurations, were I using compiled code, I'd be able to use OpenMP
> to take advantage of the additional cores/processors. I'm wondering
> if anyone has looked at ways to take advantage of such configurations
> with scipy.

I use the multiprocessing module: I also have some code to share arrays
between processes. I'd love to submit it for integration with numpy, but
first I'd like it to get more exposure so that the eventual flaws in the
APIs are found. I am attaching it.

Actually, I wrote this code a few months ago, and now that I am looking at
it, I realise that the SharedMemArray should probably be a subclass of
numpy.ndarray, and implement the full array signature. I am not sure if
this is possible or not (i.e. if it will still be easy to have
multiprocessing share the data between processes or not). I don't really
have time for polishing this right now; anybody wants to have a go?

> On Feb 1, 2009, at 4:57 AM, Gael Varoquaux wrote:
> > On Sun, Feb 01, 2009 at 12:37:48AM -0500, Gideon Simpson wrote:
> >> Has anyone been able to take advantage of shared memory machines with
> >> scipy? How did you do it?
> > I am not sure I understand your question. You want to do parallel
> > computing and share the arrays between processes, is that it?

-------------- next part --------------
"""
Small helper module to share arrays between processes without copying.

Numpy arrays can be converted to shared memory arrays, which implement the
array protocol, but are allocated in memory that can be shared
transparently by the multiprocessing module.
"""
# Author: Gael Varoquaux <gael dot varoquaux at normalesup dot org>
# Copyright: Gael Varoquaux
# License: BSD

import numpy as np
import multiprocessing
import ctypes

_ctypes_to_numpy = {
    ctypes.c_char:   np.int8,
    ctypes.c_wchar:  np.int16,
    ctypes.c_byte:   np.int8,
    ctypes.c_ubyte:  np.uint8,
    ctypes.c_short:  np.int16,
    ctypes.c_ushort: np.uint16,
    ctypes.c_int:    np.int32,
    ctypes.c_uint:   np.int32,
    ctypes.c_long:   np.int32,
    ctypes.c_ulong:  np.int32,
    ctypes.c_float:  np.float32,
    ctypes.c_double: np.float64,
}

_numpy_to_ctypes = dict((value, key) for key, value in
                        _ctypes_to_numpy.items())


def shmem_as_ndarray(data, dtype=float):
    """ Given a multiprocessing.Array object, as created by
        ndarray_to_shmem, returns an ndarray view on the data.
    """
    dtype = np.dtype(dtype)
    size = data._wrapper.get_size() / dtype.itemsize
    arr = np.frombuffer(buffer=data, dtype=dtype, count=size)
    return arr


def ndarray_to_shmem(arr):
    """ Converts a numpy.ndarray to a multiprocessing.Array object.
        The memory is copied, and the array is flattened.
    """
    arr = arr.reshape((-1, ))
    data = multiprocessing.RawArray(_numpy_to_ctypes[arr.dtype.type],
                                    arr.size)
    ctypes.memmove(data, arr.data[:], len(arr.data))
    return data


def test_ndarray_conversion():
    """ Check that the conversion to multiprocessing.Array and back works.
    """
    a = np.random.random((100, ))
    a_sh = ndarray_to_shmem(a)
    b = shmem_as_ndarray(a_sh)
    np.testing.assert_almost_equal(a, b)


def test_conversion_non_flat():
    """ Check that the conversion also works with non-flat arrays.
    """
    a = np.random.random((100, 2))
    a_flat = a.flatten()
    a_sh = ndarray_to_shmem(a)
    b = shmem_as_ndarray(a_sh)
    np.testing.assert_almost_equal(a_flat, b)


def test_conversion_non_contiguous():
    """ Check that the conversion also works with non-contiguous arrays.
    """
    a = np.indices((3, 3, 3))
    a = a.T
    a_flat = a.flatten()
    a_sh = ndarray_to_shmem(a)
    b = shmem_as_ndarray(a_sh, dtype=a.dtype)
    np.testing.assert_almost_equal(a_flat, b)


def test_no_copy():
    """ Check that the data is not copied from the multiprocessing.Array.
    """
    a = np.random.random((100, ))
    a_sh = ndarray_to_shmem(a)
    a = shmem_as_ndarray(a_sh)
    b = shmem_as_ndarray(a_sh)
    a[0] = 1
    np.testing.assert_equal(a[0], b[0])
    a[0] = 0
    np.testing.assert_equal(a[0], b[0])


# A class to carry around the relevant information
class SharedMemArray(object):
    """ Wrapper around multiprocessing.Array to share an array across
        processes.
    """

    def __init__(self, arr):
        """ Initialize a shared array from a numpy array. The data is
            copied.
        """
        self.data = ndarray_to_shmem(arr)
        self.dtype = arr.dtype
        self.shape = arr.shape

    def __array__(self):
        """ Implement the array protocol.
        """
        arr = shmem_as_ndarray(self.data, dtype=self.dtype)
        arr.shape = self.shape
        return arr

    def asarray(self):
        return self.__array__()


def test_sharing_array():
    """ Check that a SharedMemArray shared between processes is indeed
        modified in place.
    """
    # Our worker function
    def f(arr):
        a = arr.asarray()
        a *= -1

    a = np.random.random((10, 3, 1))
    arr = SharedMemArray(a)
    # b is a view on the shared data
    b = arr.asarray()
    np.testing.assert_array_equal(a, b)
    multiprocessing.Process(target=f, args=(arr,)).run()
    np.testing.assert_equal(-b, a)


if __name__ == '__main__':
    import nose

More information about the SciPy-user mailing list
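A quick way to see the no-copy property the module above relies on, without numpy in the picture (a toy sketch, not part of the attached module): a RawArray's buffer lives in shared memory, and any second view built over that buffer aliases the same bytes.

```python
import ctypes
import multiprocessing

# RawArray allocates its buffer in shared memory; a child process
# created with multiprocessing would see these very bytes.
shared = multiprocessing.RawArray(ctypes.c_double, 3)

# Build a second ctypes view over the same buffer: no copy is made.
alias = (ctypes.c_double * 3).from_buffer(shared)

shared[0] = 1.5
alias[1] = 2.5
print(alias[0], shared[1])   # writes through either name are visible in both
```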
[racket-dev] Math library initial commit almost ready; comments on issues welcome
From: Neil Toronto (neil.toronto at gmail.com)
Date: Mon Oct 1 18:49:03 EDT 2012

On 10/01/2012 04:20 PM, Sam Tobin-Hochstadt wrote:
> On Mon, Oct 1, 2012 at 6:08 PM, Neil Toronto <neil.toronto at gmail.com> wrote:
>> On 10/01/2012 02:06 PM, Sam Tobin-Hochstadt wrote:
>>> On Mon, Oct 1, 2012 at 2:26 PM, Neil Toronto <neil.toronto at gmail.com>
>>> wrote:
>> My timing tests also show that typechecking is apparently quadratic in the
>> depth of expressions, regardless of how simple the types are. Is that
>> something you already knew?
> This surprises me in general, but I wouldn't be surprised if it were
> the case for some examples. If you have particular examples, that
> would be helpful for trying to fix them. However, some algorithms in
> TR are inherently super-linear.

This is similar to the testing code I wrote, and it also exhibits
quadratic behavior. The `apply*' macro generates the simplest deep
expression possible. It's used to repeatedly apply a function with the
simplest one-argument floating-point type possible.

#lang typed/racket/base #:no-optimize

(require racket/flonum
         (for-syntax racket/base))

(define-syntax (apply* stx)
  (syntax-case stx ()
    [(_ f x 0)  #'(f x)]
    [(_ f x n)  #`(f (apply* f x #,(- (syntax->datum #'n) 1)))]))

(: simple-flabs (Flonum -> Flonum))
(define simple-flabs flabs)

(: flabs* (Flonum -> Flonum))
(define (flabs* x)
  (apply* simple-flabs x 1000))

Typechecking is quadratic in the number of nested applications done in
`flabs*'. Changing `simple-flabs' to `flabs' doubles the time; changing
it to `abs' apparently changes typechecking to O(n^3).

>> (The latter is a medium-ish deal for the math library. Most special
>> functions have domains in which they're computed by evaluating a 25- to
>> 50-degree polynomial. Using Horner's method, that's an expression 50 to 100
>> deep; if they're Chebyshev polynomials, it's more like 200-400.)
> Is there a reason to generate these expressions statically? Is it
> genuinely faster?

Yes. I assume it's because the CPU can pipeline the entire computation.
I might try splitting them up. There's probably a dividey-conquery way
to minimize the depth of the expression.

>> I've managed to make some headway here in another part of the library, by
>> defining macros only in #lang racket. If their expansions contain typed
>> identifiers, it seems TR is smart enough to not check their contracts when
>> the macros are used in typed code.
> Yes, TR should be able to make this work in general -- the contract
> wrapping doesn't happen until the real reference to the identifier is
> expanded.

Related question: How do I make this work?

#lang typed/racket

(: plus (Flonum Flonum -> Flonum))
(define (plus a b) (+ a b))

(module provider racket
  (require (submod ".."))
  (provide inline-plus)
  (define-syntax-rule (inline-plus a b)
    (plus a b)))

(require 'provider)
(inline-plus 1.2 2.3)

I've tried the various module-declaration forms, but can't find the
right incantation. Am I trying to do something that I can't? (FWIW, I
can't make it work in untyped Racket, either.)

Neil ⊥

Posted on the dev mailing list.
December 2012, Volume 10, Issue 6, pp 1243-1267

Open Access: this content is freely available online to anyone, anywhere, at any time.

The concept of function is a central but difficult topic in secondary school mathematics curricula, which encompasses a transition from an operational to a structural view. The question in this paper is how the use of computer tools may foster this transition. With domain-specific pedagogical knowledge on the learning of function as a point of departure and the notions of emergent modeling and instrumentation as design heuristics, a potentially rich technology-intensive learning arrangement for grade 8 students was designed and field-tested. The results suggest that the relationship between tool use and conceptual development benefits from preliminary activities, from tools offering representations that allow for progressively increasing levels of reasoning, and from intertwinement with paper-and-pencil work.

Publisher: Springer Netherlands
Keywords: emergent modeling; function concept; instrumentation; mathematics education; technology
Author affiliation: Freudenthal Institute for Science and Mathematics Education, Utrecht University, Princetonplein 5, 3584 CC, Utrecht, The Netherlands
{"url":"http://link.springer.com/article/10.1007%2Fs10763-012-9329-0","timestamp":"2014-04-23T22:46:04Z","content_type":null,"content_length":"59498","record_id":"<urn:uuid:29608124-f4d2-4b8f-9552-7fdd4641c14d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Walking Randomly

A colleague recently sent me the following code snippet in R:

> a=c(1,2,3,40)
> b=a[1:10]
> b
 [1]  1  2  3 40 NA NA NA NA NA NA

The fact that R didn't issue a warning upset him, since exceeding array bounds, as we did when we created b, is usually a programming error. I'm less concerned and simply file the above away in an area of my memory entitled 'Odd things to remember about R' — I find that most programming languages have things that look odd when you encounter them for the first time. With that said, I am curious as to why the designers of R thought that the above behaviour was a good idea. Does anyone have any insights here?
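For comparison, here is a quick aside of my own (not from the original post) showing how Python handles the same situation: an out-of-range slice silently truncates rather than padding with missing values, while a plain out-of-range index fails loudly.

```python
# A contrast in Python: slicing past the end of a list silently
# truncates, while direct indexing raises an error.
a = [1, 2, 3, 40]

b = a[0:10]          # out-of-range slice: no warning, no NA-style padding
print(b)             # [1, 2, 3, 40]

try:
    a[9]             # out-of-range index: this DOES fail loudly
except IndexError:
    print("IndexError raised")
```

So R is not alone in tolerating out-of-range slices; what is unusual is that it pads the result with NA rather than truncating.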
Scientific Calculator Code

Search results for "scientific calculator code":

- DreamCalc Scientific Calculator 4.1.0 - Edition is the smarter ... to a hand-held Scientific Calculator for your PC or ... to reach for a calculator. DreamCalc also offers you a full range of functions, statistics, ...
- ESBCalc Pro - Scientific Calculator 8.1.4 - an Enhanced Windows Scientific Calculator with Infix Processing, Exponential Notation, Brackets, Functions, Memory, Optional Paper ... the User Interface. The calculator can have Prefix ...
- Scientific Calculator - ScienCalc 1.3 - a convenient and powerful scientific calculator. ScienCalc calculates mathematical expression ... your equations in seconds: Scientific calculator gives students, teachers, scientists ...
- Java Scientific Calculator 0.4 - a general-purpose scientific calculator that you can use as a computer-desktop or in a web ... and hexadecimal calculations, logic, notation, memory, ...
- Scientific Calculator Precision 72 1.0.1.0 - programmed ... proprietary data type. The calculator handles mathematical formulas of ... Uncertainty, and Infinity. The calculator follows classical approach when ...
- Scientific Calculator Precision 63 1.0.1.3 - programmed ... proprietary data type. The calculator handles mathematical formulas of ... Uncertainty, and Infinity. The calculator follows classical approach when ...
- College Scientific Calculator 45 1.0.1.4 - Scientific Calculator Precision 45 is programmed ... proprietary data type. The calculator handles mathematical formulas of ... Uncertainty, and Infinity. The calculator follows classical approach when ...
- Scientific Calculator Precision 90 1.0.0.6 - programmed ... proprietary data type. The calculator handles mathematical formulas of ... Uncertainty, and Infinity. The calculator follows classical approach when ...
- DreamCalc Graphing Edition 4.8.2000 - the leading Graphing Scientific Calculator software that's so realistic — it ... for a hand-held calculator altogether. It Feels Real! ... productivity of a real calculator. Graph Functions and Plot ...
- Java Scientific Calculator For Windows 0.4 - A desktop calculator/scientific calculator. Use as desktop/graphical calculator, web applet, or to show operations: arithmetic, trigonometry, logarithms ...
- Scientific Calculator Precision 54 1.0.1.5 - programmed ... proprietary data type. The calculator handles mathematical formulas of ... Uncertainty, and Infinity. The calculator follows classical approach when ...
- DreamCalc DCS Scientific Calculator 4.8.0 - a fully featured Scientific Calculator for Windows. Unlike the standard Windows calculator, DreamCalc gives you an ... a full range of functions, statistics, complex numbers ... you reach ...
- ECalc Scientific Calculator 1.5.2 - an easy to use scientific calculator with many advanced features ... RPN operating modes. The calculator has an attractive user ... a full featured handheld calculator). The user can scroll ...
- College Scientific Calculator 27 1.0.1.4 - Scientific Calculator Precision 27 is programmed ... proprietary data type. The calculator handles mathematical formulas of ... Uncertainty, and Infinity. The calculator follows classical approach when ...
- Scientific Calculator Decimal 1.0.1.3 - for scientists, engineers ... perform complex mathematical calculations. Scientific Calculator Decimal is programmed in ... decimal data type. Each calculator has its own ...
- College Scientific Calculator 36 1.0.1.4 - Scientific Calculator Precision 36 is programmed ... proprietary data type. The calculator handles mathematical formulas of ... Uncertainty, and Infinity. The calculator follows classical approach when ...
- Desktop Calculator - DesktopCalc 2.1 - easy-to-use and powerful scientific calculator with an expression editor ... and integrated help. DesktopCalc gives students, teachers, scientists ... for both elementary and scientific calculations ...
- PG Calculator (Second Edition) 2.2 - a powerful scientific calculator and an excellent replacement for the standard calculator. It offers full customizable ... and looks like a real calculator on the user desktop. PG Calculator works ...
- Sicyon Pro Calculator 4.60 - all-in-one scientific calculator for every student and ... is an expression calculator (VBScript/JScript) with features as: estimate ...
- Sicyon Lite Calculator 4.20 - all-in-one scientific calculator for every student and ... is an expression calculator (VBScript/JScript) with features as: estimate ...
- HEXelon MAX 6.7.0.8 - MAX is a powerful scientific calculator designed to help us ... own functions. Furthermore, this calculator can also be used ... that of a handheld calculator. This program also lets ...
- Complex Number Calculator Precision 45 1.0.0.4 - The Complex Number Calculator Precision 45 application was ... be a complex number and works with complex ... as a real number calculator, that is a scientific calculator. Button Mod stands ...
- NumberMate 2.0.0 - an arithmetic and scientific calculator, adding machine, and time calculator that records and documents ... CalcuNote to NumberMate. Added functions; many other functions ...
- EzCalc Pro 2.1 - the most feature-rich scientific calculator you have ever seen! ... for Windows include Brackets, Functions (Trigonometric, Hyperbolic, Logarithmic ... results directly from the calculator ...
- ESBTools For RAVE 2.0 - supplies the following: Property Editor - When you ... bring up the ESBPCS calculator - a full featured Scientific Calculator with Memory. Coloured Number ...
- Coco Calculator 3.0 - an easy to use Scientific Calculator with many more features than the standard Windows calculator. It handles Trigonometric, Logarithmic ... Constants Library, a Statistical calculator, and a Financial calculator ...
- Coco Calculator Short 3.0 - an easy to use Scientific Calculator with many more features than the standard Windows calculator. It handles Trigonometric, Logarithmic ... Constants Library, a Statistical calculator, and a Financial calculator ...
- Smart Math Calculator 2 2 - Smart Math Calculator is a very comprehensive and easy-to-use scientific calculator that has everything you ... 20 math functions and constants, and provides a ... The only thing this ...
- DPLS Science Calculator 3.2 - DPLS Science Calculator is a scientific calculator that performs the normal functions of a scientific calculator, and is also able ...
- ESB Calculator 7.3 - a free scientific calculator for Windows. It allows ... the memory of the calculator to put values in ... ten results that the calculator obtained. You can set the calculator to use ...
- ECalc Scientific 1.5 - a very advanced scientific calculator that offers you all ... an equation solver. eCalc Calculator's interface is ... feel of a real scientific calculator. Its buttons ...
- Pc Calculator 1.0 - a clever note ... an advanced and strong scientific calculator. Being an editor it ...
rocket launch

September 4th 2008, 05:21 PM  #1
Junior Member, Mar 2008

When watching a rocket launch, Nema is 0.8 km closer to the launching pad than Joel is. When the rocket disappears from view, its angle of elevation for Nema is 36.5° and for Joel is 31.9°. How high is the rocket at this point? Thank you in advance for any help; a reply with a diagram will be even more appreciated.

September 4th 2008, 05:44 PM  #2

Quote: (original problem restated)

How about you draw the diagram and think about it. Call the distance between Nema and Joel x, and call the height of the rocket at the said point y. Now think trig ratios, in particular trig ratios that relate x and y. What can you come up with?

September 10th 2008, 03:59 AM  #3
Junior Member, Mar 2008

I just want to check if the answer in the back is correct. And finally, my teacher confirmed it.
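For anyone wanting to check the arithmetic, here is a quick worked solution in Python (my own sketch, not from the thread). With d as Nema's distance to the pad, both sight lines give the same height h, which lets us eliminate h and solve for d.

```python
import math

# Nema is 0.8 km closer to the launching pad than Joel.
# Let d = Nema's distance to the pad and h = the rocket's height, so
#   h = d * tan(36.5 deg)            (from Nema's position)
#   h = (d + 0.8) * tan(31.9 deg)    (from Joel's position)
tan_n = math.tan(math.radians(36.5))
tan_j = math.tan(math.radians(31.9))

d = 0.8 * tan_j / (tan_n - tan_j)  # eliminate h and solve for d
h = d * tan_n

print(round(d, 1), round(h, 1))  # d is about 4.2 km, h is about 3.1 km
```

So the rocket is roughly 3.1 km up when it disappears from view.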
ES302 Quantitative Methods in Earth Science

Class Syllabus (Adobe Acrobat Format)
- ES302 Spring 2013 Class Syllabus
- 2012-2013 Finals Schedule

Bulletin Board **Check Here for Class News and Events**
ES302 Final Grades are Posted - Good work everyone... great improvement across the board throughout the term... have a good summer break! Click here for final grade roster (updated June 16, 2013)

Class Notes
The class notes are organized according to their order of occurrence throughout the term. They are available in Adobe Acrobat Reader (PDF) format. Acrobat Reader is invoked as a plug-in within the web browser environment. It is available on most machines at WOU. Acrobat Reader can be downloaded for free by returning to Taylor's Home Page. Exam study guides will be included as the term progresses. Please check future postings as exam time approaches.
- Introductory Math and Algebra Review
- NOAA National Geophysical Data Center - Magnetic Declination Calculator
- Review of Maps / Topographic Maps
- Overview of Analog Map Measurements Using an Engineer's Scale
- Application of Trigonometric Analysis to Geologic Problems
- Text Chapter 5 - Trigonometric Problems
- Overview of Map Digitizing Software and Image Processing (Tutorial)
- Overview of Graphing Techniques and Regression
- Overview of Grapher Software Package
- Introduction to Contouring and Digital Elevation Models
- Overview of Surfer Mapping Software Package
- Reading: Introduction to Geostatistics and Data Analysis

Microsoft Office Training Resources
- Introduction to MS Excel 1997-2003 (from Dr. Fay-Wolfe, University of Rhode Island)
- Excel 1997-2003 Getting Started
- Introduction to Office 2007
- Excel 2007 Tutorial
- Polar Plot-Rose Graph Paper
- Unit Algebra Exercise
- Intro to Map Scales
- Intro to Topographic Maps
- Map Scale Problem (part 2)
- Large-Scale Format Photo Exercise (Vintage WOU Campus Photo)
- Map Measurement and Location Exercise Using Engineer's Scale and Protractor
- Map Projections - UTM Map Location Exercise
- Measuring Scaled Map Areas Using Planimeter
- Quick Guide - Map Area Measurement with Planimeter
- Student Worksheet for Determining Pace and Ocular Height
- Introduction to Scaling and Map Drawing
- Classroom Mapping Exercise
- Relationships Between Geologic Variables
- Introduction to Brunton Compass Lab Exercise (NEW Spring 2013)
- Tape and Compass Exercise - Surveying Campus Locations
- WOU Campus Base Map - UTM (meters)
- Brunton Navigator / GPS User Guide
- Watershed Delineation and Drainage Area Measurement
- Base Map for Triangulation Problem
- Intro Problem Set: Solving Geologic Problems
- Exercise: Applying Algebra to Practical Hydrology Problems
- Intro to Spreadsheets, MS Excel, and Geologic Data Analysis
- Application of Excel to a Climate Problem
- Application of Excel to an Alluvial Fan Problem
- In-Class Trig Map / Strike-Dip Exercise
- Introduction to the Three Point Problem
- Three Point Problem (Pittsburgh Coal)
- Application of Ternary Diagrams to Geologic Problems
- Introduction to Rose Diagrams
- Introduction to Contour Drawing / Interpolation (Spring 2011)
- Using Surfer to Build Digital Elevation Models
- Dallas 7.5-min. 10-m DEM (*.zip compressed file format)
- Using Surfer to Build Lidar-Based Digital Elevation Models
- Instructions for Converting ESRI ASCII Grid Format to Surfer Grid Format (Header Edits)
- Intro to Geostatistics and Data Analysis
- Introduction to Data Analysis via Hillslope Morphometry
- Introduction to Rockworks Tutorial
- Final Integrated Lab Project

Digital Lab Report Submission Instructions
Upload Instructions:
(1) Clicking on the link at the end of this section will take you to a web-based course management software called "Moodle".
(2) Once at the site, choose the "Login" option at the upper right of the page.
(3) Enter your WOU student server username (the one you use for WOU email or to get to your "H:\" drive).
(4) Enter the last four digits of your student ID V-no. for the password (unless you have already changed it).
(5) Once you are logged in, look for the "Change Password" icon... but DON'T FORGET IT.
(6) Scroll down your list of online classes to ES473 Environmental Geology; you are automatically enrolled in the Moodle course with your class registration.
(7) Scroll to the topic/assignment section, and begin the submission process.
(9) Make sure you print hard copies and make back-up files of your lab report materials; save back-up files!
Note: Pay attention to the availability and due dates.
Click Here to Go to Moodle and the ES473 Class Page
- Instructions for Making Acrobat PDF Files and Compiling a Digital Lab Report
- Moodle User Notes (Scott Carter, 2010-11)
- Instructions for WOU Terminal Server Access from Home

Lab Data
- Link to Online Custom / Printable Graph Paper
- Blank Ternary Graph Paper
- Intro to Excel Data Set (Excel Spreadsheet)
- Oregon Weather Station Locations (Excel Spreadsheet)
- Oregon Climate Zone 1 (Coastal) Precipitation Data (Excel Spreadsheet)
- Oregon Climate Zone 2 (Willamette Valley) Precipitation Data (Excel Spreadsheet)
- Oregon Climate Zone 4 (High Cascades) Precipitation Data (Excel Spreadsheet)
- Oregon Climate Zone 7 (High Lava Plains) Precipitation Data (Excel Spreadsheet)
- Alluvial Fan Exercise Data (Excel Spreadsheet)
- Ternary Diagram Exercise - Swauk Sandstone Pointcount Data (Excel Spreadsheet)
- Data Analysis Tutorial (MS Excel Spreadsheet)
- Locations of US Cities (MS Excel *.xls)
- Click Here for Excel Lat-Lon Decimal Degree to Deg-Min-Sec Conversion Formula (MS Excel *.xls)
- Appalachian Slope Data (MS Excel Spreadsheet)

Final Project Data
- Download Mt. Bachelor USGS 10-m Digital Elevation Model (DEM - *.zip compressed file)
- Download Mt. Bachelor USGS Digital Raster Graphic (DRG - *.zip compressed file)
- Appalachian Morphometry Data (Excel Workbook)
- Fracture Orientation Data (Excel Workbook)
- Mt. Bachelor X-Y-Z Elevation Data (Text File)
- Newberry Cinder Cone Data (Excel Workbook)

Lab Answer Keys
- Homework 1 Answer Key - Waltham Ch. 1 Intro to Geologic Problems
- Homework 2 Answer Key - Waltham Ch. 2 Geological Variables
- Homework 3 Answer Key - Waltham Ch. 3 Equation Manipulation
- Homework 4 Answer Key - Waltham Ch. 5 Trigonometric Applications
- Answer Key: Tape and Compass Exercise / Surveying Campus Locations
- Three-Point Problem (Pittsburgh Coal) Answer Key
- Final Project Answer Key

Lab Portfolio and Final Project Instructions
- Updated Lab Checklist - May 8, 2013 - Moodle Upload / Midterm Digital Lab Reports Due Friday May 10, 2013
San Marino Calculus Tutor Find a San Marino Calculus Tutor I recently graduated from USC with a BS in biomedical engineering. In college, I taught 4th graders chemistry and focused on keeping my lessons simple and interactive. Currently, I work with college students that are preparing for the MCAT (admissions test for medical school). My engineering coll... 43 Subjects: including calculus, reading, chemistry, English ...Above all other things, I love to learn how other people learn and to teach people new things in ways so that they will find the material interesting and accessible.I took Spanish I-IV in high school, and I took the AP Spanish exam. I received my high school's Spanish award for excellence in bot... 28 Subjects: including calculus, Spanish, French, chemistry ...Even though in the real world we have to deal with imperfect shapes, the base to be able to deal with them is perfect shapes. I passed more than 20 units of mathematics including trigonometry, algebra, and calculus which all are very helpful in better understanding the geometry. I am very experienced in tutoring geometry since I had couple of students during this school year. 11 Subjects: including calculus, statistics, algebra 2, geometry ...It is my goal to not only teach my students the material, but to give them the tools needed to succeed in all their classes. With the right tools and encouragement from myself, teachers and parents, students are able to achieve great things.I earned my MBA from UCLA Anderson School of Management... 30 Subjects: including calculus, chemistry, physics, geometry ...During my years there, I graded math tests in the engineering school ranging from pre-calculus to Calculus 1-3 as well as the higher courses of Differential Equations and Numerical Methods. I also provided tutoring in those classes. I was a teaching assistant in an introductory engineering class. 8 Subjects: including calculus, chemistry, geometry, algebra 1
CU-Boulder undergraduate graduation & freshman retention highlights, Fall 2008

For first-time full-time new freshmen entering summer or fall terms (full time = 12+ hrs, counted at end of the fall term). Students graduating from institutions other than CU-Boulder are NOT counted in the graduation rates. Rates are updated each October with fall census enrollment information and degrees posted through the prior summer term.

Main graduation/retention rate page

Fall 2008 Highlights

• Overall
□ The overall 6-year graduation rate was 67% for the freshman class entering in 2002, the most recent entering class to have had a full 6 years to graduate. This was the same as the rate for the previous entering class, one percentage point higher than the three classes before that, and almost equal to the peak of 68% for the fall 1997 entering class. The 6-year graduation rate is the standard used in federal and comparative reporting.
☆ Freshmen who entered CU-Boulder as Colorado residents in the class entering in 2002 had a 6-year graduation rate of 71%, one percentage point lower than the all-time (since tracking began in 1980) high of 72% set by the previous year's class.
☆ The non-resident graduation rate was 61%, one percentage point lower than the previous class. The rate has been between 61% and 65% for each entering class since 1986, with the high figure achieved by the entering classes of 1996 and 1997. Non-residents are further from home and pay substantially higher tuition than residents; both factors contribute to their lower graduation rate.
□ The 4-year graduation rate was 41% for the freshman class entering in 2004, remaining at the highest rate on record for the fourth consecutive year. The 37% rate for non-residents was two percentage points below the all-time high, most recently set last year, while the 43% rate for residents equaled the all-time high.
□ The one-year retention rate for the freshman class entering in fall 2007 was 84%. It has been 83 or 84% for 12 of the last 13 entering classes.
☆ The resident retention rate, which has held fairly steady for years, was 86%. The non-resident rate was 80%; it has fluctuated slightly more than the resident rate, but has been between 79% and 83% for the last 11 years.
□ We are now able to track students who leave CU-Boulder for other institutions through the National Student Clearinghouse, and to calculate an "enhanced graduation rate" including students in an entering freshman cohort who graduate either from CU-Boulder or from another four-year institution. For the entering class of 2001, the latest available, 99% were matched by the Clearinghouse. Of these, 68% graduated from CU-Boulder within 6 years and 10% from another 4-year institution, making the enhanced graduation rate 78%. An additional 10% were still enrolled either at CU-Boulder or other institutions, meaning that only 12% of the original entry cohort had neither graduated nor were still working on degrees.
□ Of transfer students who entered CU-Boulder in 2001, 67% graduated from Boulder by 2007 and 9% from other institutions, for an enhanced graduation rate of 77%. (Total does not match the sum of the individual elements due to rounding.) An additional 7% were still enrolled at Boulder or elsewhere.

• Gender
□ Six-year graduation rates for women are consistently higher than those for men by 3-6 percentage points. This has been true for all classes entering since 1986, although rates for men and women were about equal before that. Women also graduate faster: their four-year graduation rate consistently exceeds men's by 10-15 percentage points.

• Students of color
□ Graduation rates for students of color are lower than those for whites. However, six-year graduation rates for more recent freshmen students of color, while showing some year-to-year fluctuations, are clearly higher than those for earlier classes, for each of Asian Americans, African Americans, and Hispanic/Chicanos.
□ The 6-year graduation rate for students of color in the freshman class entering in 2001 was 61%, an all-time high. The 6-year graduation rates of Asian-Americans, Hispanics, and African-Americans have all shown long-term gains. Asian-Americans set an all-time high for the second consecutive year, reaching 66%.
□ The six-year rate for African-Americans reached 53%, two percentage points higher than last year. Although still short of the all-time high of 59%, achieved three years ago, this is in keeping with an overall long-term upward trend, allowing for quite a bit of year-to-year fluctuation, probably owing partly to relatively small numbers.
□ The 4-year graduation rate of 35% for students of color in the freshman class entering in 2004 was an all-time high, and was the fourth consecutive increase.
□ The 1-year retention rate for students of color in the class entering in 2007 was 80%, a slight (1 percentage point) decline from the previous year; the long-term trend in retention among students of color continues to be steady, and generally 1 to 3 percentage points below white students.

• Predicted grade-point average (PGPA)
□ PGPA is a measure of academic preparation that projects the student's first-year UCB grade-point average based on high school grades and standardized test (SAT, ACT) scores, using formulas empirically derived from data on earlier freshman classes. As designed, PGPA is related to academic performance and thus ultimately to graduation rate, particularly for Colorado residents. Among residents, 81% of the top PGPA quartile graduate in 6 years or less, compared to 58% in the bottom quartile. For non-residents, the comparable rates are 67% and 56%. (Non-residents, more often than residents, have reasons other than academic performance for not staying through graduation.)

• Pell Grant recipients and first-generation college students
□ Graduation rates for Pell Grant recipients (who have relatively low financial resources) and first-generation college students are generally lower than the overall rate, by several percentage points. This is true for both residents and non-residents. All these factors – being a Colorado resident, a Pell-eligible student, and a first-generation student – are positively correlated with each other, so the independent relationship of each factor to graduation rate can be hard to interpret from simply comparing graduation rates for the various categories. In an attempt to untangle these factors, we did a logistic regression analysis, looking at the relationship between graduation rate and each factor, while controlling for each of the others (and also controlling for predicted GPA). The results indicate that being a non-resident, a Pell recipient, and a first-generation college student each is related to a lower graduation rate, by anywhere from 6 to 10 percentage points.

• Time to degree
□ Graduation rates are typically reported using 4-year, 5-year and 6-year rates. The rates represent the percentage of students who entered in a given fall (including prior summer entry) as new full-time freshmen and who graduated in four, five or six years. Graduation rates are used for comparisons among institutions, among groups of students (e.g., resident versus non-resident or by ethnicity), and for comparisons over time. For example, compared to Colorado residents, non-residents (at entry) have lower overall graduation rates (from 5-9 percentage points lower on the 6-year rate).
□ Graduation rates, however, do not answer the question of how long it takes, on average, for students to graduate. At CU-Boulder, graduation in four years is still the norm. If you look at the bachelor's degree recipients in a fiscal year who originally entered as first-time, full-time summer/fall freshmen, over half of them took four years or less to graduate. This percentage climbed steadily from 52% for FY 2002 bachelor's recipients to 58% for 2007, before falling back to 55% for 2008. The average over the last 6 years has been 55%.
□ The average has been 57% for students who entered as non-residents, 54% for residents; 63% for females, 48% for males.
□ The median time to degree for bachelor's recipients in 2002-08 was 4.0 years, which equates to the 4th summer after fall entry. It was 3.7 years for females (equivalent to the 4th spring) and 4.3 years for males. It was slightly longer for students who changed majors, changed colleges, and/or started as undeclared majors, but for none of these groups was the median over 4.3 years, or one term beyond 4 years. All these medians have been very stable over time.
□ Mean or average time to degree is slightly longer than the median, because while there is a real lower limit (very few can graduate in, say, 3 years or less), there is no upper limit, and a few students might take 6 years, 7 years, or even longer, and we follow them for as long as it takes. Even so, the mean time to degree from 2002 through 2008 has been 4.3 years, with no yearly cohort longer than 4.4 years.

Prior-year highlights, from fall: 2007 | 2006 | 2005

Questions? E-mail IR@colorado.edu
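The logistic regression analysis mentioned above can be illustrated with a toy version. Everything below is wholly synthetic: the flag frequencies, effect sizes, and sample size are invented for illustration and have no connection to the actual CU-Boulder data.

```python
import math
import random

# Synthetic sketch of the kind of analysis described: model graduation (0/1)
# on non-resident, Pell, and first-generation flags, each given an invented
# negative true effect, then recover the coefficients by logistic regression.
random.seed(7)

def simulate(n=5000):
    rows = []
    for _ in range(n):
        nonres = float(random.random() < 0.4)
        pell = float(random.random() < 0.25)
        firstgen = float(random.random() < 0.2)
        # invented true log-odds of graduating: each factor lowers the odds
        z = 1.0 - 0.4 * nonres - 0.3 * pell - 0.35 * firstgen
        grad = float(random.random() < 1 / (1 + math.exp(-z)))
        rows.append(([1.0, nonres, pell, firstgen], grad))
    return rows

def fit_logit(rows, lr=1.0, epochs=200):
    """Full-batch gradient descent on the logistic log-likelihood."""
    w = [0.0] * 4
    n = len(rows)
    for _ in range(epochs):
        g = [0.0] * 4
        for x, y in rows:
            p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            for j in range(4):
                g[j] += (p - y) * x[j]
        w = [wi - lr * gj / n for wi, gj in zip(w, g)]
    return w

w = fit_logit(simulate())
print([round(v, 2) for v in w])  # each factor coefficient should come out negative
```

Because the three flags are fit jointly, each coefficient estimates a factor's association with graduation while holding the other factors fixed, which is exactly the "controlling for each of the others" idea in the report.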
Derivative pricing and optimal execution of portfolio transactions in finitely liquid markets Mitton, M. D. (2007) Derivative pricing and optimal execution of portfolio transactions in finitely liquid markets. PhD thesis, University of Oxford. In real markets, to some degree, every trade will incur a non-zero cost and will influence the price of the asset traded. In situations where a dynamic trading strategy is implemented these liquidity effects can play a significant role. In this thesis we examine two situations in which such trading strategies are inherent to the problem; that of pricing a derivative contingent on the asset and that of executing a large portfolio transaction in the asset. The asset's finite liquidity has been incorporated explicitly into its price dynamics using the Bakstein-Howison model [4]. Using this model we have derived the no-arbitrage price of a derivative on the asset and have found a true continuous-time equation when the bid-ask spread in the asset is neglected. Focussing on this pure liquidity case we then employ an asymptotic analysis to examine the price of a European call option near strike and expiry where the liquidity effects are shown to be most significant and closed-form expressions for the price are derived in this region. The asset price model is then extended to incorporate the empirical fact that an asset's liquidity mean reverts stochastically. In this situation the pricing equation is analyzed using the multiscale asymptotic technique developed by Fouque, Papanicolaou, and Sircar [22] and a simplified pricing and calibration framework is developed for an asset possessing liquidity risk. Finally, the derivative pricing framework (both with and without liquidity risk) is applied to a new contract termed the American forward which we present as a possible hedge against an asset's liquidity risk. In the second part of the thesis we investigate how to optimally execute a large transaction of a finitely liquid asset. 
Using stochastic dynamic programming and attempting only to minimize the transaction's cost, we first find that the optimal strategy is static and contains the naive strategy found in previous studies, but with an extra term to account for interest rates neglected by those studies. Including time risk into the optimization procedure we find expressions for the optimal strategy in the extreme cases when the trader's aversion to this risk is very small and very large. In the former case the optimal strategy is simply the cost-minimization strategy perturbed by a small correction proportional to the trader's level of risk aversion. In the latter case the problem is shown to be much more difficult; we analyze and derive implicit closed-form solutions to the much-simplified perfect liquidity case and show numerical results to demonstrate the agreement of the solution with our intuition.
Calibrating & Simulating Natural Gas Spot Prices

This example demonstrates calibrating an Ornstein-Uhlenbeck mean-reverting stochastic model from historical data of natural gas prices. The model is then used to simulate the spot prices into the future using the Stochastic Differential Equation simulation engine in Econometrics Toolbox.

Import Historical Data

The data can either be imported from a MAT-file or from the database using the auto-generated fetch function. The data set contains spot prices for natural gas at Henry Hub from 2000 to 2008.

data = fetchNGData
S = data.NaturalGas;

data =
          Date: [2676x1 double]
    NaturalGas: [2676x1 double]
      CrudeOil: [2676x1 double]
       FuelOil: [2676x1 double]

The Model

The model used for simulating the natural gas prices is an Ornstein-Uhlenbeck Brownian motion with mean-reverting drift. This model is fit to the log of natural gas prices. The discrete-time equation of this model can be written as

    x(t+1) - x(t) = alpha*(mu - x(t))*dt + sigma*sqrt(dt)*epsilon(t),   epsilon ~ N(0,1)

This model can be calibrated to historical data by performing a linear regression between log prices and their first difference. Specifically, the equation can be rewritten as

    (x(t+1) - x(t))/dt = -alpha*x(t) + alpha*mu + sigma*epsilon(t)/sqrt(dt)

Calibrate Parameters

The reversion rate and mean level can be calculated from the coefficients of a linear fit between the log prices and their first difference scaled by the time interval parameter. All quantities are specified on an annual scale.

x = log(S);
dx = diff(x);
dt = 1/261; % Time in years (261 observations per year)
dxdt = dx/dt;
x(end) = []; % To ensure the number of elements in x and dxdt match

% Fit a linear trend to estimate mean reversion parameters
coeff = polyfit(x, dxdt, 1);
res = dxdt - polyval(coeff, x);

revRate = -coeff(1)
meanLevel = coeff(2)/revRate
vol = std(res) * sqrt(dt)

Create an Ornstein-Uhlenbeck mean reverting drift model

An Ornstein-Uhlenbeck model is a special case of a Hull-White/Vasicek model with constant volatility.
The HWV constructor is used to set up an SDE model with the parameters estimated above. The start state of the model is set to the last observed log spot price. This model can be easily extended to accommodate the forward curve and option prices by setting the meanLevel and volatility parameters to be functions of time.

OUmodel = hwv(revRate, meanLevel, vol, 'StartState', x(end))

% Alternatively, one could equivalently use the SDEMRD object as follows
% OUmodel = sdemrd(revRate, meanLevel, 0, vol, 'StartState', x(end))

OUmodel =

   Class HWV: Hull-White/Vasicek
     Dimensions: State = 1, Brownian = 1
      StartTime: 0
     StartState: 1.31507
    Correlation: 1
          Drift: drift rate function F(t,X(t))
      Diffusion: diffusion rate function G(t,X(t))
     Simulation: simulation method/function simByEuler
          Sigma: 0.744514
          Level: 1.70426
          Speed: 1.76964

Monte-Carlo Simulation

The model defined above can be simulated with the simulate method of the SDE object to generate multiple log price paths. These are exponentiated to compute the simulated natural gas prices. The plot below shows 100 paths simulated 80 days into the future.

NTrials = 1000;
NSteps = 2000;
Xsim = simulate(OUmodel, NSteps, 'NTrials', NTrials, 'DeltaTime', dt);
Xsim = squeeze(Xsim); % Remove redundant dimension
Ssim = exp(Xsim);

% Visualize first 80 prices of 100 paths
plot(data.Date(end-20:end), S(end-20:end), data.Date(end)+(0:79), Ssim(1:80,1:100));
datetick; xlabel('Date'); ylabel('NG Spot Price');

Save Model

The calibrated model is saved in a MAT-file for later use.

save SavedModels\NGPriceModel OUmodel dt

Visual Analysis of Simulated Price Paths

Instead of plotting a number of paths at once, we can plot longer single paths against the observed historical data to visually validate the simulated paths. This can serve as a final sanity check.
path = 14;
plot(data.Date, data.NaturalGas, 'b', data.Date(end)+(0:NSteps), Ssim(:,path), 'r');
title(['Historical & Simulated Prices, Path ' int2str(path)]);

Automated Visualization for Calibration Report

This section creates plots of different simulations in an automated fashion to include in the calibration report.

NTrials = 12;
NSteps = 2000;
Xsim = simulate(OUmodel, NSteps, 'NTrials', NTrials, 'DeltaTime', dt);
Ssim = exp(Xsim);

for path = 1:NTrials
    plot(data.Date, data.NaturalGas, 'b', data.Date(end)+(0:NSteps), Ssim(:,path), 'r');
    title(['Historical & Simulated Prices, Path ' int2str(path)]);
end
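For readers outside MATLAB, the same regression-based calibration can be sketched in Python with only the standard library. This is an illustrative translation written for this note, not part of the original example; it exercises the estimator on a synthetic path whose "true" parameters (rev_rate = 1.8, mean_level = 1.7, vol = 0.74) are made up for the test.

```python
import math
import random

random.seed(7)

# Made-up "true" annualized OU parameters for a synthetic log-price path:
# x(t+1) = x(t) + rev_rate*(mean_level - x(t))*dt + vol*sqrt(dt)*N(0,1)
rev_rate, mean_level, vol = 1.8, 1.7, 0.74
dt = 1.0 / 261                  # daily steps, 261 observations per year
n = 20 * 261                    # twenty years of synthetic data

x = [mean_level]
for _ in range(n):
    x.append(x[-1] + rev_rate * (mean_level - x[-1]) * dt
             + vol * math.sqrt(dt) * random.gauss(0.0, 1.0))

# Regress dx/dt on x, exactly what polyfit(x, dxdt, 1) does above.
xs = x[:-1]
dxdt = [(b - a) / dt for a, b in zip(x, x[1:])]
mx, my = sum(xs) / n, sum(dxdt) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(xs, dxdt))
         / sum((a - mx) ** 2 for a in xs))
intercept = my - slope * mx
res = [b - (slope * a + intercept) for a, b in zip(xs, dxdt)]

rev_hat = -slope                  # estimated reversion rate
mean_hat = intercept / rev_hat    # estimated mean level
mr = sum(res) / n
vol_hat = math.sqrt(sum((r - mr) ** 2 for r in res) / (n - 1)) * math.sqrt(dt)
```

The scaled residual standard deviation recovers vol very accurately, and the intercept-to-slope ratio recovers the mean level; the reversion rate itself is by far the noisiest of the three estimates, which is worth keeping in mind when calibrating to only a few years of prices.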
Los Altos Statistics Tutor

Find a Los Altos Statistics Tutor

...I'm flexible in my teaching style, and will work with parents, schools and students to determine the format of tutoring most likely to bring them success. In all cases, however, I emphasise risk taking, self-sufficiency and critical thinking when approaching problems. My aim is to help students to become independent learners who eventually won't need my help.
11 Subjects: including statistics, chemistry, calculus, physics

...My emphasis was and still is on understanding the fundamental concepts, as problem solving naturally follows. Teaching and drilling the “how” of problem-solving mechanics are important, but really understanding the “why” that those problems are based on generates the greatest returns, especially ...
24 Subjects: including statistics, reading, chemistry, calculus

...I have also taught C to students in a data structures class at San Jose State University. I have both an MS degree in Computer Engineering from Case Western Reserve University and an MA in Mathematics from SUNY, Stony Brook. I have programmed in C, C++, and Java on Unix and Linux based systems for 20 years.
23 Subjects: including statistics, calculus, physics, algebra 2

...I've used the concepts during my years as a programmer and have tutored many students in the subject. I have a strong background in linear algebra and differential equations. I have a Masters in mathematics and a PhD in economics, which requires a good understanding of both topics.
49 Subjects: including statistics, calculus, physics, geometry

...I focus on making the basics strong through customized worksheets. At this stage, a child learns well in a pleasant atmosphere. I make sure that the child is active throughout the class.
17 Subjects: including statistics, calculus, precalculus, geometry
type 1 error and type 2 error

November 18th 2010, 02:54 AM  #1
Sep 2009

type 1 error and type 2 error

A bowl contains seven marbles, of which θ are red and 7 − θ are blue. To test the null hypothesis H0: θ = 2 against the alternative H1: θ = 4, two marbles are randomly drawn without replacement, and the null hypothesis is rejected if and only if both are red.

(a) What is the probability of committing a Type I error?
(b) What is the probability of committing a Type II error?

a) P(2 red | θ = 2) = (2/7)(1/6) = 1/21
b) P(0 red or 1 red | θ = 4) = (3/7)(2/6) + (4/7)(3/6) + (3/7)(4/6) = 5/7

Can someone explain this to me? I'm a bit lost. I know a Type I error is rejecting H0 when H0 is actually true. So for (a), I get 2 red marbles, so 2/7, and multiply it by 1/6 — but why a 1/6 chance? And for (b), a Type II error is failing to reject H0 when we actually should. I just can't get my head around it; some explanation would be great.

This is what I got so far:

Type I error = "false positive" = rejecting H_0 when H_0 is true
Type II error = "false negative" = accepting H_0 when H_0 is false = accepting H_0 when H_1 is true

Now you go back and plug in what is meant by reject/accept H_0, and the hypotheses H_0 and H_1 in this problem, and then it is just a matter of calculating them, as in the solution.

Then, are my numbers correct? I'm just a bit lost in how to interpret the numbers within the question to get the answer.

alpha is (2/7)(1/6) = 1/21. You can also do this by the hypergeometric formula, but here we have to pick 2 reds from a bowl with 2 reds and 5 blues. You reach in and select a red with probability 2/7 and you DO NOT replace it. Then there is 1 red left and still 5 blues, so the second draw has a 1/6 chance of being a red marble.

As for beta, you don't want 2 reds, which leads to rejection, so I would use the complement $1-P(RR)=1-(4/7)(3/6)=5/7$, since we have 4 reds in our bowl now.
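Both answers can be verified by direct counting with the hypergeometric formula mentioned above. A small sketch (my own illustration, not from the thread):

```python
from fractions import Fraction
from math import comb

def p_all_red(reds, total=7, draws=2):
    """Probability that all `draws` marbles are red, drawing without replacement."""
    return Fraction(comb(reds, draws), comb(total, draws))

# Reject H0 iff both drawn marbles are red.
alpha = p_all_red(2)       # Type I error:  P(reject H0 | theta = 2)
beta = 1 - p_all_red(4)    # Type II error: P(fail to reject | theta = 4)
```

This reproduces α = (2/7)(1/6) = 1/21 and β = 1 − (4/7)(3/6) = 5/7 exactly.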
Radical Equations Practice Radical Equations What if you had a radical equation like $\sqrt{2x + 5} - 3 = 0$ Watch This CK-12 Foundation: Radical Equations When the variable in an equation appears inside a radical sign, the equation is called a radical equation. To solve a radical equation, we need to eliminate the radical and change the equation into a polynomial equation. A common method for solving radical equations is to isolate the most complicated radical on one side of the equation and raise both sides of the equation to the power that will eliminate the radical sign. If there are any radicals left in the equation after simplifying, we can repeat this procedure until all radical signs are gone. Once the equation is changed into a polynomial equation, we can solve it with the methods we already know. We must be careful when we use this method, because whenever we raise an equation to a power, we could introduce false solutions that are not in fact solutions to the original problem. These are called extraneous solutions. In order to make sure we get the correct solutions, we must always check all solutions in the original radical equation. Solve a Radical Equation Let’s consider a few simple examples of radical equations where only one radical appears in the equation. Example A Find the real solutions of the equation $\sqrt{2x-1}=5$ Since the radical expression is already isolated, we can just square both sides of the equation in order to eliminate the radical sign: $\text{Remember that} \ \sqrt{a^2}=a \ \text{so the equation simplifies to:} && 2x-1& =25\\\text{Add one to both sides:} && 2x& =26\\\text{Divide both sides by 2:} &&& \underline{\underline{x=13}}$ Finally we need to plug the solution in the original equation to see if it is a valid solution. $\sqrt{2x-1}=\sqrt{2(13)-1}=\sqrt{26-1}=\sqrt{25}=5$ The solution checks out. 
Example B

Find the real solutions of $\sqrt[3]{3-7x}-3=0$

$\text{We isolate the radical on one side of the equation:} && \sqrt[3]{3-7x}& =3\\\text{Raise each side of the equation to the third power:} && \left(\sqrt[3]{3-7x}\right)^3& =3^3\\\text{Simplify:} && 3-7x& =27\\\text{Subtract 3 from each side:} && -7x& =24\\\text{Divide both sides by -7:} &&& \underline{\underline{x=-\frac{24}{7}}}$

Check: $\sqrt[3]{3-7x}-3=\sqrt[3]{3-7 \left(-\frac{24}{7}\right)}-3=\sqrt[3]{3+24}-3=\sqrt[3]{27}-3=3-3=0$

Example C

Find the real solutions of $\sqrt{10-x^2}-x=2$

$\text{We isolate the radical on one side of the equation:} && \sqrt{10-x^2}& =2+x\\\text{Square each side of the equation:} && \left(\sqrt{10-x^2}\right)^2& =(2+x)^2\\\text{Simplify:} && 10-x^2& = 4+4x+x^2\\\text{Move all terms to one side of the equation:} && 0& =2x^2+4x-6\\\text{Solve using the quadratic formula:} && x& =\frac{-4 \pm \sqrt{4^2-4(2)(-6)}}{4}\\\text{Simplify the discriminant:} && x& =\frac{-4 \pm \sqrt{64}}{4}\\\text{Evaluate} \ \sqrt{64}=8: && x& =\frac{-4 \pm 8}{4}\\\text{Simplify each fraction:} && x& =1 \ \text{or} \ x=-3$

Check: $\sqrt{10-1^2}-1=\sqrt{9}-1=3-1=2$, but $\sqrt{10-(-3)^2}-(-3)=\sqrt{1}+3=4 \neq 2$. The equation has only one solution, $\underline{\underline{x=1}}$; the candidate $x=-3$ is an extraneous solution introduced by squaring.

Applications using Radical Equations

Example D

A sphere has a volume of $456 \ cm^3$. If the radius of the sphere is increased by 2 cm, what is the new volume of the sphere?
Make a sketch:

Define variables: Let $R =$ the radius of the sphere.

Find an equation: The volume of a sphere is given by the formula $V=\frac{4}{3}\pi R^3$

Solve the equation:

$\text{Plug in the value of the volume:} && 456& =\frac{4}{3} \pi R^3\\\text{Multiply by 3:} && 1368& =4 \pi R^3\\\text{Divide by} \ 4 \pi: && 108.92& =R^3\\\text{Take the cube root of each side:} && R& =\sqrt[3]{108.92} \Rightarrow R=4.776 \ cm\\\text{The new radius is 2 centimeters more:} && R& =6.776 \ cm\\\text{The new volume is:} && V & =\frac{4}{3} \pi (6.776)^3=\underline{\underline {1302.5}} \ cm^3$

Check: Let’s plug in the value of the radius into the volume formula: $V=\frac{4}{3} \pi R^3=\frac{4}{3} \pi (4.776)^3=456 \ cm^3$ The solution checks out.

Example E

The kinetic energy of an object of mass $m$ moving with velocity $v$ is given by the formula $KE=\frac{1}{2} mv^2$. A 145 kg object has a kinetic energy of $654 \ kg \cdot m^2/s^2$; find its velocity.

$\text{Start with the formula:} && KE& =\frac{1}{2} mv^2\\\text{Plug in the values for the mass and the kinetic energy:} && 654 \frac{kg \cdot m^2}{s^2}& =\frac{1}{2}(145\ kg)v^2\\\text{Multiply both sides by 2:} && 1308 \frac{kg \cdot m^2}{s^2}& =145 \ kg \cdot v^2\\\text{Divide both sides by 145} \ kg: && 9.02 \frac{m^2}{s^2}& =v^2\\\text{Take the square root of both sides:} && v& =\sqrt{9.02} \sqrt{\frac{m^2}{s^2}}=3.003 \ m/s$

Check: Plug the values for the mass and the velocity into the energy formula: $KE=\frac{1}{2}mv^2=\frac{1}{2}(145 \ kg)(3.003 \ m/s)^2=654 \ kg \cdot m^2/s^2$

(To learn more about kinetic energy, watch the video at

Watch this video for help with the Examples above.
CK-12 Foundation: Radical Equations (https://vimeo.com/46325703)

• For a quadratic equation in standard form, $ax^2 + bx + c = 0$, the quadratic formula looks like this: $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$

Guided Practice

Find the real solutions of $\sqrt{3x-9}-1=5$

$\text{We isolate the radical on one side of the equation:} && \sqrt{3x-9}&=6\\\text{Square each side of the equation:} && \left(\sqrt{3x-9}\right)^2&=6^2\\\text{Simplify:} && 3x-9&=36\\\text{Add 9 to each side:} && 3x&=45\\\text{Divide both sides by 3:} &&& \underline{\underline{x=\frac{45}{3}=15}}$

Check: $\sqrt{3x-9}-1=\sqrt{3(15)-9}-1=\sqrt{45-9}-1=\sqrt{36}-1=6-1=5.$

Find the solution to each of the following radical equations.

1. $\sqrt{x+2}-2=0$
2. $\sqrt{3x-1}=5$
3. $2 \sqrt{4-3x}+3=0$
4. $\sqrt[3]{x-3}=1$
5. $\sqrt[4]{x^2-9}=2$
6. $\sqrt[3]{-2-5x}+3=0$
7. $\sqrt{x^2-3}=x-1$
8. $\sqrt{x}=x-6$
9. $\sqrt{x^2-5x}-6=0$
10. $\sqrt{(x+1)(x-3)}=x$
11. $\sqrt{x+6}=x+4$
12. $\sqrt{3x+4}=-6$
13. The area of a triangle is $24 \ in^2$
14. The length of a rectangle is 7 meters less than twice its width, and its area is $660 \ m^2$
15. The area of a circular disk is $124 \ in^2$. What is the circumference of the disk? $(\text{Area} = \pi R^2, \text{Circumference} =2 \pi R)$
16. The volume of a cylinder is $245 \ cm^3$. $(\text{Volume} =\pi R^2 \cdot h)$
17. The height of a golf ball as it travels through the air is given by the equation $h=-16t^2+256$
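The squaring step in Example C is exactly where extraneous roots appear, so the final check can be automated: solve the squared (polynomial) equation, then keep only candidates that satisfy the original radical equation. A short sketch (illustrative, not part of the lesson):

```python
import math

def solve_sqrt_10_minus_x2():
    """Solve sqrt(10 - x^2) - x = 2 and discard extraneous roots."""
    # Squaring both sides of sqrt(10 - x^2) = 2 + x gives 2x^2 + 4x - 6 = 0.
    a, b, c = 2.0, 4.0, -6.0
    disc = b * b - 4 * a * c                       # 4^2 - 4(2)(-6) = 64
    candidates = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
    # Keep only candidates satisfying the ORIGINAL equation.
    return [x for x in candidates
            if abs(math.sqrt(10 - x * x) - x - 2) < 1e-9]

roots = solve_sqrt_10_minus_x2()
```

The candidate x = −3 fails the substitution check (it gives 4, not 2) and is filtered out, leaving only x = 1, in agreement with Example C.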
Braingle: 'Code Breaker' Brain Teaser

Code Breaker

Logic puzzles require you to think. You will have to be logical in your reasoning.

Puzzle ID: #47775
Category: Logic
Submitted By: jtolley

You are a spy trying to gain access to a high profile building that contains very classified information. The only thing standing between you and those secrets is a keypunch security system containing the digits 1-9 that will open the door to the building. Your spy agency has provided you details about the order of the keys you need to enter:

1. Each digit is pressed once and only once.
2. No two consecutive digits are pressed consecutively.
3. The digits 2 and 4 are pressed 2nd and 4th in some order; the digits 6 and 8 are pressed 6th and 8th in some order.
4. The sum of the first 5 digits pressed is 19; the sum of the last 5 digits pressed is 27.

As you wonder why your agency couldn't just write down the sequence for you, you crack the code and enter the building. What sequence of keys did you press on the security system to get into the building?

5, 2, 7, 4, 1, 8, 3, 6, 9

From clue 4, the sum of the first 4 digits to be pressed, the last 4 digits to be pressed, and two times the 5th digit to be pressed is 19 + 27 = 46. The sum of the first and last 4 digits to be pressed and the 5th digit must be the sum of the digits 1-9, from clue 1. This sum is 45, meaning the 5th digit is 46 - 45 = 1. Since the 5th digit is 1, clues 2 and 3 force 2 to be the 2nd digit and 4 to be the 4th digit (if 2 were pressed 4th, it would be immediately followed by the 1). From clues 2 and 3, 7 must be the 1st or 3rd digit pressed, since it cannot be pressed immediately before or after the 6 or 8. From clue 4, this means 5 must be pressed 1st or 3rd. But, from clue 2, it cannot be the 3rd digit (it would sit next to the 4), so it must be the first, meaning 7 is the 3rd digit pressed. That leaves 3, 6, 8, and 9 as the last 4 digits to be pressed. If 8 were pressed 8th, then 9 could be pressed neither 7th nor 9th (clue 2), so it would have to be pressed 6th, contradicting clue 3. So 8 must be pressed 6th, meaning 9 is to be pressed last.
This leaves 6 to be entered 8th and 3 to be entered 7th.
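The uniqueness claimed by the deduction can be double-checked by brute force over all 9! = 362,880 orderings; this sketch is my own verification, not part of the puzzle page:

```python
from itertools import permutations

def valid(seq):
    # Clue 2: no two consecutive digits are pressed consecutively.
    if any(abs(a - b) == 1 for a, b in zip(seq, seq[1:])):
        return False
    # Clue 3: positions 2 and 4 hold {2, 4}; positions 6 and 8 hold {6, 8}.
    if {seq[1], seq[3]} != {2, 4} or {seq[5], seq[7]} != {6, 8}:
        return False
    # Clue 4: the first five presses sum to 19, the last five to 27.
    return sum(seq[:5]) == 19 and sum(seq[4:]) == 27

# Clue 1 (each digit once) is built into permutations of 1..9.
solutions = [p for p in permutations(range(1, 10)) if valid(p)]
```

The search finds exactly one sequence, matching the answer above.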
Topic: Testing normality by Skewness and Kurtosis: a new focusing
Replies: 1   Last Post: Oct 28, 2013 7:46 AM

Luis A. Afonso

Re: Testing normality by Skewness and Kurtosis: a new focusing
Posted: Oct 28, 2013 7:46 AM
From: Lisbon

Skewness and Excess Kurtosis statistics tests

Setting the confidence level at 95%, we found the following confidence intervals (skewness; excess kurtosis) against sample sizes n, regarding normal (Gaussian) random samples:

n = 20:  +/- 0.742;  [-1.080, 1.567]
n = 30:  +/- 0.599;  [-0.903, 1.223]
n = 40:  +/- 0.516;  [-0.802, 1.053]
n = 50:  +/- 0.453;  [-0.723, 0.921]
n = 60:  +/- 0.412;  [-0.669, 0.830]
n = 70:  +/- 0.381;  [-0.626, 0.768]
n = 100: +/- 0.313;  [-0.532, 0.631]

We aim to compute how often these intervals are filled by normal data:

both: skewness and excess kurtosis both captured
whatever: at least one parameter inside
any: no value captured by the intervals

W = both/(1 - any) ______ 0.811
__________________ ______ 0.427

W = both/(1 - any):  Normal   0.708
                     Uniform  0.000 *

* both = 0.000, whatever = 0.9666, any = 0.0334

Conclusion: in the context we are dealing with, the test statistic W = both/(1 - any) is very effective at discriminating uniform from normal data, even for samples as short as 20.

Note: one has 0 <= W <= 1; these values are attained only when any = 0, i.e. all samples have at least 1 success (fall in one of the two intervals).

Luis A. Afonso

Date Subject Author
10/24/13 Testing normality by Skewness and Kurtosis: a new focusing — Luis A. Afonso
10/28/13 Re: Testing normality by Skewness and Kurtosis: a new focusing — Luis A. Afonso
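The capture rates discussed above can be estimated by straightforward Monte Carlo. The sketch below is my own rough re-implementation, not the poster's code; pairing the symmetric interval with skewness and the asymmetric one with excess kurtosis is my reading of the table, and only the n = 20 row is used.

```python
import random

random.seed(0)

def skew_exkurt(xs):
    """Sample skewness and excess kurtosis from central moments."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

TRIALS, N = 4000, 20
both = 0
for _ in range(TRIALS):
    sample = [random.gauss(0.0, 1.0) for _ in range(N)]
    g1, g2 = skew_exkurt(sample)
    if abs(g1) <= 0.742 and -1.080 <= g2 <= 1.567:
        both += 1

frac_both = both / TRIALS    # fraction of samples with BOTH statistics captured
```

Replacing `random.gauss` with `random.uniform(-1, 1)` lets one repeat the same experiment for uniform data and compare capture rates, which is the discrimination the post is after.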
Two meshing methods are better than one

The most widely used methods are h- and p-adaptive refinement. Each has its trade-offs, but combining the two into an advanced form improves accuracy and processing time while eliminating many of the trade-offs. "In an ideal world, the accuracy of FEA results is determined by the user knowing the exact solution for a particular model," says Tomi Mossessian, CEO, PlassoTech Inc., Encino, Calif. "But the solution is not available in most cases, so a process has to be established for reaching a specified accuracy. The automated procedure for improving results until they reach the required level of accuracy is called an adaptive process," he says. The most common way to check the effectiveness of an adaptive approach is the rate of convergence. "It shows how fast the solver is approaching an exact solution as the user increases the discretization degrees of freedom," says Mossessian. The compute efficiency of the method is also an important factor because users want to expend the least amount of time and memory to get the most accurate results. Mossessian has experimented with several methods to show the pros and cons of each. For example, the uniform mesh refinement method is one of the simplest for improving FEA results. "It creates a finer mesh by uniformly decreasing the element size throughout the model. This increases the number of elements until the results stop substantially changing. That is, the solution has converged within a certain percentage," he says. Although the simplest method, it is also the least efficient because a uniformly finer mesh of a 3D model substantially increases the number of elements to process. "For example, a 1-m cube containing cubic elements 10 cm on a side would have 1,000 elements; reducing the element size to 5 cm increases the number of elements to 8,000. The growth is cubic," says Mossessian. Even in thin solids the growth is quadratic.
"The uniform mesh refinement method is too slow and resource intensive to be practical in most real-life design situations," he says. An h-adaptive method improves on uniform mesh refinement. H-adaptive refines the mesh only in areas containing a high number of errors. The increase in the number of elements that must be processed is substantially less than when uniformly refining the mesh. Technical papers often refer to these methods as h-adaptive. The technique typically uses an error estimate over the elements at a given solution step, refines the mesh by a practical factor only in the high-error regions, and then re-solves the problem. The adaptive mesh refinement and re-solving continue until the specified accuracy is reached. For structural analysis, typically this would be the highest stress in the model. "Although the h-adaptive method is more efficient than uniform refinement, for many cases it does not have an optimum convergence rate, and it requires complete or partial remeshing of the model in addition to re-solving at each cycle, which is a time-consuming process," says Mossessian. The p-adaptive method does not change the mesh or number of elements. Instead, it increases the order of the polynomial approximation used within each element. This eliminates the need for remeshing and only requires re-solving the problem at each p-cycle until the required accuracy is reached. "Similar to the h-adaptive method, error estimates are used at different points of the mesh. Polynomial order is increased within regions of high error until results reach the user-defined accuracy. In terms of convergence rate, this approach is superior to the traditional h-adaptive method. However, when sharp stress concentrations are present, this advantage deteriorates. In addition, substantially increasing the p-order becomes computationally expensive," he says. The best method for efficiently reaching convergence is a combination of both methods, the hp-adaptive method.
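A toy one-dimensional version of the h-adaptive loop just described (estimate error, refine only the high-error regions, re-solve) can be sketched as follows. The target function, tolerance, and midpoint error indicator are all made up for illustration; this is a sketch of the general idea, not PlassoTech's implementation:

```python
import math

def f(x):
    return math.atan(50.0 * (x - 0.5))   # sharp gradient near x = 0.5

def indicator(a, b):
    """Midpoint error of linear interpolation over the element [a, b]."""
    return abs(f(0.5 * (a + b)) - 0.5 * (f(a) + f(b)))

nodes = [i / 4 for i in range(5)]        # coarse initial mesh on [0, 1]
tol = 1e-3
for cycle in range(30):                  # adaptive refine/re-solve cycles
    errs = [indicator(a, b) for a, b in zip(nodes, nodes[1:])]
    if max(errs) <= tol:
        break
    refined = []
    for (a, b), e in zip(zip(nodes, nodes[1:]), errs):
        refined.append(a)
        if e > tol:                      # split only the high-error elements
            refined.append(0.5 * (a + b))
    refined.append(nodes[-1])
    nodes = refined

widths = [b - a for a, b in zip(nodes, nodes[1:])]
```

Elements end up concentrated around x = 0.5, where the gradient is steep, while smooth regions keep the coarse spacing — the same behavior h-adaptivity aims for in 2D and 3D FEA meshes.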
It provides the most-efficient technique for controlling the FEA approximation errors. Rather than increasing the p order indefinitely or just relying on pure mesh refinement through the h-adaptive method, the hpmethod uses the p-adaptive within each h-refinement step. This is the most difficult method to implement and is offered by few FEA packages. The implementation of FEA solvers based on the hp-adaptive method in 3G.author, an FEA design-analysis program, allows calculating accurate results in the most-efficient manner. This allows a versatile definition of convergence criteria for both global and local parameters and provides a number of advanced attributes with respect to accuracy, efficiency, and robustness. A key attribute is an efficient p-based solver using a state of the PCCG ( preconditioned conjugate gradient) technique tied to adaptive mesh refinement. In addition, accuracy can be controlled so it is easily tailored for global or local result parameters, including stresses, displacement and temperature. Localized results can include or exclude any geometric entity, such as faces, edges, and In addition, advanced functions allow controlling accuracy and efficiency when solving assembly-type models. For users needing complete control over meshing, Mossessian says, it is available on top of the hp-adaptive method. "Users more familiar with their model behavior can accelerate the adaptive process by choosing a finer initial mesh over any area of interest, such as parts, faces, edges and vertices, using 3G.author's global and local mesh control capabilities. PlassoTech Inc.,
Brazilian Journal of Physics
Print version ISSN 0103-9733
Braz. J. Phys. vol.34 no.2a São Paulo June 2004

Microphase separation of diblock copolymer with moving walls

Antonio Daud Júnior^I; Flavio Moretti Morais^I; Sílvia Martins^I; Débora Coimbra^II; Welles A. M. Morgado^III

^IFaculdade de Física, Universidade Federal de Uberlândia, Uberlândia, MG, 38400-902, Brazil
^IIDepartamento de Física, Universidade Federal do Amazonas, Manaus, AM, 69077-000, Brazil
^IIIPontifícia Universidade Católica do Rio de Janeiro, Departamento de Física, Rio de Janeiro, RJ, 22452-970, Brazil

Diblock copolymers are linear chain molecules consisting of two subchains A and B grafted covalently to each other. Below some critical temperature T_c these two blocks tend to segregate, but due to the covalent bond they can segregate at best locally, to form periodic structures (microdomains). For molecules whose subchains have the same length, the equilibrium pattern is lamellar. In the bulk regime, these microdomains are ordered at random. To obtain an oriented lamellar pattern it is necessary to consider some asymmetry. In the presence of an external field, the lamellae will align to it. Directional quenching can also lead to the growth of oriented microphase separation. The effect of boundary conditions (confinement between parallel walls) also generates well-aligned lamellae, parallel to the walls. If the distance between the walls is comparable to the molecular sizes, another constraint is imposed on the system, since the domains are forced to accommodate between the walls and, for certain conditions, we will see a frustration phenomenon. If we allow the walls to move with a certain velocity during phase segregation, the accommodation of the lamellae can be changed. We use a cell dynamical system, which is a very efficient computational method, in order to investigate the effect of moving walls on lamellae formation.
1 Introduction

Diblock copolymers are linear-chain molecules consisting of two subchains of different monomers, A and B, bonded covalently to each other [1]. Below some critical temperature T_C, the two polymer species tend to separate, but due to the covalent bond they are unable to form only two domains, growing instead into a periodic structure which is locally segregated on the scale of nanometers; this is referred to as microphase separation. If the two blocks have equal length, they segregate into a lamellar structure parallel to the walls of confinement when this interaction is dominant. Microphase separation has been widely studied during the last twenty years, for technological and theoretical reasons. As shown by Bates and Fredrickson [2], control over molecular-scale morphology allows the development of new kinds of materials of great utility for industry. From a fundamental viewpoint, films of polymers present advantageous model systems for investigations of phase equilibria in reduced geometry. Likewise, the influence of physical boundaries on the kinetics of phase separation and the resulting equilibrium morphology can be studied in quite some detail [3]. From the technological viewpoint, they are present in commercial products such as coatings, paints, and photoresists, and even in exciting applications in lithography, because of the self-organization of amphiphilic block copolymers [3]. Diblock copolymer films have been studied in a variety of settings: interacting walls, either symmetric or asymmetric; neutral walls with matched densities [4, 5, 6, 7, 8, 9]; neutral walls confining molecules with mismatched densities [10]; and gravity effects [11], to investigate pattern formation in a film confined between two hard surfaces.
As for pattern formation, we can summarize all the above situations: when the walls are neutral, in the absence of mismatch, lamellae will form in the direction normal to the substrate; if interaction with the walls is added, the lamellae will form parallel to the walls; for molecules with density mismatch in the presence of a gravitational field normal to the confining walls, lamellae form parallel to the walls. In this article we assume that the walls are not rigid, and we investigate the influence of moving walls on pattern formation. For this we simulate the diblock copolymer using the cell dynamical system (CDS) model [12]. Oono and Shiwa [13] first studied pattern formation in this model, which is widely used nowadays. The spirit of their modeling was purely phenomenological [14]; however, the motivation was to have the simplest model that gives a spatially nonuniform equilibrium pattern. CDS consists of a map that describes the order-parameter evolution of each little portion (cell) of the system on a mesoscopic scale. It is a discrete formulation, not derived from a continuum functional, and it is computationally efficient for the numerical study of phase ordering [15].

This work is organized as follows: in the next section we present the basic formulation of the CDS model and our considerations to include the effect of moving walls. Then, in the third section, we discuss the results obtained. Our conclusions are presented in the final section.

2 CDS Model

The CDS model is a map that sends an instantaneous discrete frame of a spatial pattern to another near it. We assign a scalar variable ψ(n, t) to each lattice site, corresponding to the coarse-grained order parameter in the nth cell at time t (time here is defined as the number of iterations). This order parameter represents the difference ψ_A − ψ_B, where ψ_A (ψ_B) is the local number density of the A (B) species. First, we consider the local dynamics (each cell), analyzing whether the order parameter increases or not.
Second, we include the nearest-neighbor interaction. Next, we must impose the conditions of conservation of the order parameter and connectivity of the distinct species. Taking this into account, the update can be written in terms of differences I(n, t) − I(j, t) between neighboring cells, where I(n, t) plays the role of the chemical potential, ⟨⟨ · ⟩⟩ represents the isotropic space average, and e, D and E are positive phenomenological constants. The parameter e appears in this model to stabilize the solution y = 0 in the bulk. Also, ±y_c denote the zeroes of A tanh[y(n, t)] − y(n, t). The CDS formulation is equivalent to a discrete formulation of the Cahn-Hilliard equation, usually employed to study polymer phase separation. The former has the advantage that the function tanh[y(n, t)] is more stable than the cubic polynomial form of the Cahn-Hilliard order parameter [1]. The movement of the walls and the interaction of the subchains with the surface are considered in the boundary conditions. The intensity of the interaction with the surface is given by the parameter s, and we assume that each surface attracts a different subchain.

3 Results and Discussion

For the simulations we choose a 25 × 256 initial lattice, A = 1.2, D = 0.5, e = 0.01, and a uniformly distributed initial condition. At each time step we impose the movement of the walls with a stochastic velocity, where Δx_i = 0 or 1 with probability P, and t is the computational time (number of iterations). A schematic view can be seen in Fig. 1. In Fig. 2 we can see the effect of the surface interaction on the pattern formation after 2000 iterations. For larger values of s, the tendency of the pattern is to form lamellae parallel to the surface. When s is small, the movement of the walls leads to the formation of perpendicular lamellae. In Fig. 3 the time evolution of the pattern is shown for parameter values P = 0.01 and s = 0.01. We now see the competition between the two effects: the pattern exhibits both parallel and perpendicular lamellae.
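The CDS map described above can be made concrete in code. The following is a minimal sketch in the spirit of the conserved cell-dynamics updates of refs. [12, 13, 14], with an extra -e*y term for the diblock long-range interaction; the neighbor weighting, the placement of the e term, and the boundary handling (periodic, plain 4-neighbor average) are simplifying assumptions of mine, not necessarily this article's exact implementation:

```python
import math

A, D, EPS = 1.2, 0.5, 0.01  # values quoted in the Results section

def nn_average(field, i, j):
    # crude isotropic space average <<.>>: plain 4-neighbour mean on a
    # periodic square lattice (CDS papers also weight next-nearest neighbours)
    n, m = len(field), len(field[0])
    return 0.25 * (field[(i + 1) % n][j] + field[(i - 1) % n][j]
                   + field[i][(j + 1) % m] + field[i][(j - 1) % m])

def cds_step(psi):
    """One conserved CDS iteration for the order parameter y(n, t)."""
    n, m = len(psi), len(psi[0])
    # local map f(y) = A*tanh(y) plus diffusive coupling D*(<<y>> - y)
    F = [[A * math.tanh(psi[i][j]) + D * (nn_average(psi, i, j) - psi[i][j])
          for j in range(m)] for i in range(n)]
    G = [[F[i][j] - psi[i][j] for j in range(m)] for i in range(n)]
    # subtracting <<G>> enforces conservation of the order parameter;
    # the -EPS*psi term suppresses long-wavelength segregation
    # (the role the article assigns to the parameter e)
    return [[F[i][j] - nn_average(G, i, j) - EPS * psi[i][j]
             for j in range(m)] for i in range(n)]
```

With periodic boundaries this update conserves the total order parameter exactly, up to the -EPS*psi contribution.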
The size of the lamellae, w, changes slowly, as can be seen in Fig. 4. To accommodate the pattern between the walls, the number of lamellae changes, growing with time.

4 Final Remarks

In this work, we applied a cell dynamical system computational method to investigate the effect of moving walls on the lamellae formation of diblock copolymers. The effect of confinement between parallel walls, simulated in the boundary conditions, generated well-aligned lamellae, parallel to the walls, for larger values of the surface interaction s. For small s, the movement of the walls provokes the formation of perpendicular lamellae. When the distance between the walls is comparable to the molecular sizes, the domains are forced to accommodate between the walls and, under certain conditions, we observe frustration. If we allow the walls to move during phase segregation, the accommodation of the lamellae changes. Our results also demonstrate a slow time evolution of the size of the lamellae due to competition between the stochastic velocity of the movement and the surface interaction. In this case the pattern exhibits both parallel and perpendicular lamellae.

References

[1] W. A. M. Morgado, S. Martins, M. Bahiana, M. S. O. Massunaga, Comp. Phys. Comm. 121/122, 327 (1999).
[2] F. S. Bates and G. H. Fredrickson, Phys. Today 52 (1999).
[3] H. Elbes, K. Fukunaga, R. Stadler et al., Macromolecules 32, 1204 (1999).
[4] G. T. Pickett, A. C. Balazs, Macromolecules 30, 3097 (1997).
[5] G. Brown, A. Chakrabarti, J. Chem. Phys. 102, 3310 (1995).
[6] M. S. Turner, Phys. Rev. Lett. 69, 1788 (1992).
[7] A. Menelle, T. P. Russel, S. Anastasiadis, Phys. Rev. Lett. 68, 67 (1992).
[8] P. Lambooy, T. P. Russel, G. J. Kellogg et al., Phys. Rev. Lett. 72, 2899 (1994).
[9] G. J. Kellogg, D. G. Walton, A. M. Mayes et al., Phys. Rev. Lett. 76, 2503 (1996).
[10] W. A. M. Morgado, S. Martins, M. Bahiana, M. S. O. Massunaga, Phys. Rev. E 61, 4118 (2000).
[11] W. A. M. Morgado, S. Martins, M. Bahiana, M. S. O. Massunaga, Physica A 283, 208 (2000).
[12] K. Kitahara, Y. Oono, D. Jasnow, Mod. Phys. Lett. B 2, 765 (1988).
[13] Y. Oono, Y. Shiwa, Mod. Phys. Lett. B 1, 49 (1987).
[14] Y. Oono, M. Bahiana, Phys. Rev. Lett. 61, 1109 (1988).
[15] I. W. Hamley, Macromolecules 36, 9 (2000).

Received on 4 February, 2004
numerical value of Hebrew letters

Each letter in the Hebrew alphabet has a numerical value. These values can be used to write numbers, just as the Romans used some of their letters (I, V, X, L, C, M) to represent numbers. Alef through Yod have the values 1 through 10. Yod through Qof have the values 10 through 100, counting by 10s. Qof through Tav have the values 100 through 400, counting by 100s. Final letters have the same value as their non-final counterparts. The number 11 would be rendered Yod-Alef, the number 12 would be Yod-Bet, the number 21 would be Kaf-Alef, the word Torah (Tav-Vav-Resh-He) has the numerical value 611, etc. The only significant oddity in this pattern is the number 15, which if rendered as 10+5 would be a name of God, so it is normally written Tet-Vav (9+6). Because of this system of assigning numerical values to letters, every word has a numerical value. There is an entire discipline of Jewish mysticism, known as gematria, that is devoted to finding hidden meanings in the numerical values of words. For example, the number 18 is very significant, because it is the numerical value of the word Chai, meaning "life". It is interesting to note that the numerical value of Vav (often transliterated as W) is 6, and therefore WWW has the numerical value of 666. It's an amusing notion, but Hebrew numbers just don't work that way. In Hebrew numerals, the position of the letter/digit is irrelevant; the letters are simply added up to determine the value. To say that Vav-Vav-Vav is six hundred and sixty-six would be like saying that the Roman numeral III is one hundred and eleven. The numerical value of Vav-Vav-Vav in Hebrew would be 6+6+6=18, so WWW is equivalent to life! (It is also worth noting that the significance of the number 666 is a part of Christian numerology, and has no basis that I know of in Jewish thought.)
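The additive scheme described above is easy to mechanize. A small sketch (the transliterated letter names are my own spelling choices, not from the article):

```python
# Letter values keyed by transliterated names; final letters share their
# non-final value, as described above, so they are not listed separately.
HEBREW_VALUES = {
    "alef": 1, "bet": 2, "gimel": 3, "dalet": 4, "he": 5,
    "vav": 6, "zayin": 7, "het": 8, "tet": 9, "yod": 10,
    "kaf": 20, "lamed": 30, "mem": 40, "nun": 50, "samekh": 60,
    "ayin": 70, "pe": 80, "tsadi": 90, "qof": 100,
    "resh": 200, "shin": 300, "tav": 400,
}

def gematria(letters):
    """Sum the values of a sequence of letter names; position is irrelevant."""
    return sum(HEBREW_VALUES[name.lower()] for name in letters)
```

For instance, gematria(["tav", "vav", "resh", "he"]) gives 611, the Torah example above, and three Vavs sum to 18, not 666.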
In the So-called Feedback Configuration of the ... | Chegg.com

Image text transcribed for accessibility: In the so-called feedback configuration of the figure below, E(s) is the Laplace transform of the error between the reference signal xref(t) and some function or modification of the response y(t). G1(s) = ng(s)/dg(s) is called the plant of the system and represents a physical process or special circuit, while F(s) = nf(s)/df(s) is a feedback controller to be designed. You will need to use the initial and final value theorems here.

- As a first step, determine E(s) in terms of Xref(s), Y(s), and F(s), but not G1(s).
- Find the transfer function H(s) = E(s)/Xref(s) in terms of ng(s), nf(s), dg(s), and df(s).
- Under what conditions does e(t) → 0 as t → ∞ when xref(t) = K0 u(t)?
- Under what conditions does e(t) → Ke ≠ 0 (a nonzero constant) as t → ∞ when xref(t) = K0 u(t)?
- Under what conditions, and how, could you determine e(0+) from E(s) = H(s) Xref(s)?
- Suppose dg(s) had a pair of poles on the imaginary axis. How could you design an F(s) to cancel these poles?

... in Vout(s) and Iin(s). Determine the Thevenin equivalent impedance seen by the independent current source. Finally, determine the response vout(t) to the input iin(t) = 3e^(-t) cos(2t) A, assuming the inductor has zero initial current.
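For the first parts above: setting E = Xref − F·Y with Y = G1·E gives E(s) = Xref(s)/(1 + F(s)G1(s)), so H(s) = dg(s)df(s) / (dg(s)df(s) + ng(s)nf(s)). A small numeric sketch of the final-value-theorem consequence for a step reference xref(t) = K0·u(t) (the function names and highest-power-first coefficient convention are mine; the result only applies when the closed loop is stable, so that s·E(s) has all poles in the open left half-plane):

```python
def polyval(coeffs, s):
    # Horner evaluation; coeffs highest power first, e.g. [1, 3, 2] -> s^2 + 3s + 2
    acc = 0.0
    for c in coeffs:
        acc = acc * s + c
    return acc

def steady_state_error(K0, ng, dg, nf, df):
    """e(inf) = lim_{s->0} s*E(s) with Xref(s) = K0/s and
    E(s) = Xref(s) / (1 + F(s)G1(s)).  Assumes closed-loop stability."""
    num = polyval(dg, 0.0) * polyval(df, 0.0)
    den = num + polyval(ng, 0.0) * polyval(nf, 0.0)
    return K0 * num / den
```

For example, with G1(s) = 1/(s+1) and F(s) = 1, the step error settles at K0/2 (a nonzero constant, the Ke case); a plant pole at s = 0, i.e. dg(0) = 0, drives the step error to zero, which is one answer to the e(t) → 0 part.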
Galois theory for polynomials in several variables

I feel a bit ashamed to ask the following question here. What is (actually, is there) Galois theory for polynomials in $n$ variables, for $n\geq2$?

I am preparing a large-audience talk on Lie theory, and decided to start by talking about symmetries, taking Galois theory as a "baby" example. I know that Lie groups are somehow to differential equations what discrete groups are to algebraic equations. But I would nevertheless expect Lie (or algebraic) groups to appear naturally as higher-dimensional analogs of Galois groups. Namely, the Galois group $G_P$ of a polynomial $P(x)$ in one variable can be defined as the symmetry group of the equation $P(x)=0$ (very shortly, the subgroup of permutations of the solutions/roots that preserves any algebraic equation satisfied by them). Then one of the great results of Galois theory is that $P(x)=0$ is solvable by radicals if and only if the group $G_P$ is solvable (meaning that its derived series reaches $\{1\}$).

I was wondering what the analog of the story is in higher dimension (i.e. for equations of the form $P(x_1,\dots,x_n)=0$). I would naively expect algebraic groups to show up... I googled the main keywords and found a presentation: on the last slide it is written that the task at hand is to develop a Galois theory of polynomials in two variables. This convinced me to ask the question anyway.

EDIT: the first "idea" I had. I first thought about the following strategy. Consider $P(x,y)=0$ as a polynomial equation in one variable $x$ with coefficients in the field $k(y)$ of rational functions in $y$, and consider its Galois group. But then we could do the opposite... what would happen?

galois-theory soft-question

Take the étale fundamental group of the corresponding scheme? – Qiaochu Yuan Nov 17 '11 at 23:49

That doesn't seem like the correct generalization to me.
The Galois group of $f(T)$ is the quotient of the étale fundamental group of Spec $k$ by the étale fundamental group of Spec $k[T]/(f(T))$. In particular, we would like to define it as the automorphism group of something. The Galois group of a polynomial, though, is not the automorphism group of $k[T]/f(T)$ but of $k[T_1,T_2,\dots,T_n]/(f(T_1), f(T_2), \dots)$. One could take the limit of the automorphism groups of $X$, some subset of $X \times X$, some subset of $X\times X\times X$, et cetera... – Will Sawin Nov 18 '11 at 2:07

I don't understand why I see on this question a vote to close. I like this question, and vote NOT to close, and whoever has cast a vote is being very rude by not saying why. – Theo Johnson-Freyd Nov 18 '11 at 4:08

@Theo I voted to close as not a real question. I have nothing further to add. – Felipe Voloch Nov 18 '11 at 10:20

I should also mention that the étale fundamental group can indeed be seen as a generalization of the (absolute) Galois group for objects of dimension >0 (although this may not be the generalization you're looking for). – François Brunault Nov 18 '11 at 14:14

2 Answers

(This should really be a comment I think, but I'm not highly rated enough to leave one, so please bear with me.) A Galois-theoretic condition for a polynomial in two variables to be solvable by radicals is found in the following paper: http://arxiv.org/abs/math/0305226. It seems to indicate that something similar can be done for more variables. Perhaps I'll ask Jochen next time I see him about this.

Thank you for the link. I'll have a look as soon as possible. – DamienC Dec 9 '11 at 8:58

This will not answer the question but is more than a comment; in addition it may be very naive! (This is a hard question, not a soft question!!!)
I wonder: given that the Galois group <-> étale fundamental group link works for dimension 1, should there not be a link '2-Galois thingie' <-> étale 2-type, and hence a link with Grothendieck's Pursuing Stacks and his letters to Breen in 1975? The sought-after model might be a profinite (?) crossed module. These can be seen as automorphism 2-groups of groupoids, so although they are automorphism things, there is a gap to bridge before the link would work well. I have also met a similar idea when working with orbifolds and related ideas, but I do not have any definite reply to the particular question, rather more an addition to the question! (I hope this helps... or inspires someone to think 'outside the box'.) There would then be a similar idea for polynomials in n variables and models for n-types??? (This may be all rubbish but it is nice to dream sometimes!)

I don't think that increasing the number of variables means that we have to use higher category theory. – Martin Brandenburg Nov 18 '11 at 8:40

@Martin Maybe not, but the possibility is there. Again it is sometimes not a question of 'have to' but maybe 'might'. NB. In fact I did not mention higher category theory, as the models for 2-types are quite standard simplicial things and those are 'classical' (due to Whitehead 1950 or Reidemeister 1930s, and Peiffer 1940s). :-) The automorphism 2-group of a group is very simply the inner automorphism morphism from G to Aut(G), so is not per se higher category theory. – Tim Porter Nov 18 '11 at 10:02

David Corfield kindly reminded me of this n-cat café posting: golem.ph.utexas.edu/category/2009/12/… Kim has a lot more to say on this area, but it seems that it is a very deep and hard area. In his talk at the INI that David pointed out to me, he mentioned Pursuing Stacks and the anabelian theory mentioned therein, so perhaps (more by chance than by good knowledge) I was nearer the mark than I thought!
– Tim Porter Nov 20 '11 at 16:35
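A small worked example of the asymmetry raised in the question's EDIT (added for illustration; it is not part of the original thread): take $P(x,y) = x^2 - y$ over a field $k$ with $\mathrm{char}\,k \neq 2$. Viewed as a polynomial in $x$ over $k(y)$, it is irreducible, the splitting field is $k(y)(\sqrt{y})$, and the Galois group is $\mathbb{Z}/2$. Viewed as a polynomial in $y$ over $k(x)$, it is linear, with the single root $y = x^2$, so the extension is trivial and the Galois group is $\{1\}$. The two choices in the EDIT can therefore give genuinely different groups.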
Six Billion Factorial Date: 05/15/2003 at 23:45:14 From: DJ Subject: Permutations 6 billion p 100 I know how to do permutations but there must be a quick way to do a problem like this. Even my calculator lists error when I try to do it. Is there a formula that I don't know about? Date: 05/16/2003 at 10:53:46 From: Doctor Tom Subject: Re: Permutations Hi DJ, The problem is that your calculator can only work with numbers up to some given size, and 6B p 100 is too big for it. It is approximately (6,000,000,000)^100, since the other numbers in the product are pretty close to 6 billion. This would be a number with about 977 digits - in fact, it's approximately 6.53x10^977. - Doctor Tom, The Math Forum Date: 05/16/2003 at 12:04:47 From: Doctor Stephen Subject: Re: Permutations Hi DJ, People will tell you that there is no number that describes infinity but I believe 6000000000! is getting somewhere close. If you try to calculate the answer to the above question by finding the answer to the top and bottom of the fraction separately, you will have problems. The actual solution arises from manipulating the entire fraction to make it more calculator-friendly. We know that six billion factorial is the same as multiplying all the numbers up to six billion together. The bottom line of the fraction is then the same as multiplying all the numbers up to six billion minus one hundred together. 6 billion p 100 then becomes 6000000000!/5999999900! Lets imagine we have a simpler case for a moment, say 6 p 3. The number of permutations in this case is: (6 * 5 * 4 * 3 * 2 * 1) / (3 * 2 * 1) From this we can cancel numbers that appear on both the top and bottom lines of the fraction, leaving the answer to be 6 * 5 * 4. The same method can be applied to the more difficult case involving large numbers. The first 5999999900 terms can be removed from the top and the bottom of the fraction leaving just 100 terms, that is: 6000000000 * 5999999999 * 5999999998 * ... 
* 5999999901 I tried this on my small Casio calculator and got an error because the answer is still very large. The difference in doing the calculation this way is that computer software such as Microsoft Excel and most graphical calculators can cope with 100 terms of this magnitude but cannot calculate 6000000000! on its own. Giving the calculation a try in a computer mathematical program, I got an answer around 10 to the power of 1000. - Doctor Stephen, The Math Forum

Date: 05/16/2003 at 12:25:46 From: Doctor Ian Subject: Re: Permutations

Hi DJ, To add to what Drs. Tom and Stephen have already said, this is one of those situations where scientific notation and the properties of exponents can be useful. If, as Dr. Stephen suggests, we've reduced the problem to computing 6,000,000,000 * 5,999,999,999 * ... * 5,999,999,901 we can note that this is about equal to (6 x 10^9) * (5.999999999 x 10^9) * ... * (5.999999901 x 10^9) which can be rewritten (6 * 5.999999999 * ... * 5.999999901) * (10^9)^100 which, as Dr. Tom points out, is about equal to 6^100 * (10^9)^100 So, how do you find 6^100? If you have a calculator, you just use the y^x button to get 6^100 ~ 6.5 x 10^77 [where '~' means 'approximately equals'] and then you can combine exponents: (6.5 x 10^77) * (10^900) = 6.5 x 10^977 But what if you don't have a calculator handy (or the batteries have died)? One way is to start computing powers of 6 until you get close to a power of 10. It turns out that 6^9 = 10,077,696 So we can approximate 6^9 as 10^7. Then 6^100 = 6^(99+1) = 6^99 * 6^1 = (6^9)^11 * 6 ~ (10^7)^11 * 6 = 10^77 * 6 which might be good enough for government work, as they say. By the way, this kind of problem is a classic example of why it's important to learn to use calculators as an amplifier, but not as a crutch. Are Calculators Smart? I hope this helps. - Doctor Ian, The Math Forum
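Arbitrary-precision integer arithmetic makes the exact value easy to check nowadays. A quick sketch (Python's integers are unbounded, so no overflow):

```python
import math

n, r = 6_000_000_000, 100

# exact product of the 100 surviving terms: n * (n-1) * ... * (n-99)
perm = 1
for k in range(r):
    perm *= n - k

assert perm == math.perm(n, r)  # stdlib cross-check (Python 3.8+)

# size estimate via logarithms, as in the answers above
log10 = sum(math.log10(n - k) for k in range(r))
mantissa = 10 ** (log10 - int(log10))
print(f"about {mantissa:.2f} x 10^{int(log10)}, exactly {len(str(perm))} digits")
```

This reproduces the estimate above, roughly 6.53 x 10^977; note that a number of that size has 978 digits.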
Atanasoff-Berry Computer

The Atanasoff-Berry Computer, later named the ABC, was built at Iowa State University from 1939-1942 by physics professor Dr. John Vincent Atanasoff and his graduate student, Clifford Berry.

Why did Atanasoff invent the ABC? What did he intend for it to do? Atanasoff was a professor of Mathematics and Physics, and the 1920s and 30s were a time of active discoveries and new theories for the scientific disciplines, but especially for physics. Atanasoff's Ph.D. work, The Dielectric Constant of Helium, was a study in theoretical physics, published in the Physical Review Vol. 36(7) in 1930. Atanasoff's work required a great deal of mathematical calculation, which he performed on a Monroe calculator, at the time an advanced calculating machine, but one which still required hours and hours of calculations. Later, as a professor at Iowa State College, Atanasoff sought to increase the speed and accuracy of scientific calculations through the development of an electronic digital computer. This effort resulted in the ABC. In his proposal to Iowa State College for funding the construction of the machine, titled Computing Machine for the Solution of Large Systems of Linear Algebraic Equations, Atanasoff wrote:

Since an expert computer^1 requires about eight hours to solve a full set of eight equations in eight unknowns, k is about 1/64. To solve twenty equations in twenty unknowns should thus require 125 hours. But this calculation does not take into effect the increased labor due to the greater chances of error in the larger systems ... The solution of general systems of linear equations with a number of unknowns greater than ten is not often attempted. But this is precisely what is needed to make approximate methods more effective in the solution of practical problems.

Atanasoff's original thought was to improve upon existing calculating machines, notably the IBM tabulator. He and A.E.
Brandt, an Iowa State College professor of Statistics, made modifications to the IBM tabulator so that it could solve problems in the analysis of complex spectra. This work was published in the Journal of The Optical Society of America in 1936. Unbeknown to them at the time, the authors' work was not highly regarded by IBM, whose corporate officials saw their machine being used for purposes for which it was not meant.

His next attempt came through the construction of an analog calculator called the "laplaciometer." The Atanasoff-Hannum-Murphy Laplaciometer, a small analog calculating machine, was a success in terms of accurate calculations; but Atanasoff was still not satisfied with its reliability, as its components had to be in perfect working mechanical order to guarantee accuracy. As an electrical engineer, mathematician, and physicist, Atanasoff turned his thoughts to using electronics as a possible solution to problems of accuracy and speed in performing scientific calculations. Existing textbooks and research were not helpful, and his frustration increased as he felt closer and closer to making a major discovery, yet somehow seemed unable to pull all of his ideas together. During the winter of 1937, Atanasoff made his now famous drive across the Iowa border, to a little roadhouse in Illinois, where he stopped for a drink, was finally able to relax, and let the ideas flow. The four ideas that came together were:

1. He would use electricity and electronics as the medium for the computer
2. In spite of custom, he would use base-two numbers for his computer
3. He would use condensers for memory and would use a regenerative or "jogging" process to avoid lapses that might be caused by leakage of power
4. He would compute by direct logical action and not by enumeration (counting) as used in existing analog calculating devices.
He spent the next year making plans for his computer, and in March 1939 made a formal application to the college for funding a graduate assistant and for materials. Iowa State College approved a grant for $650. Atanasoff hired Clifford Berry, and they began to construct the prototype for the world’s first electronic digital computer. Computing Machine for the Solution of Large Systems of Linear Algebraic Equations, by John V. Atanasoff (application for funding to Iowa State College, with photos of early ABC) The Computer In December 1939, Atanasoff and Berry demonstrated the machine to college officials, and were awarded additional funding to build the full-scale machine, which became known as the ABC. By late spring 1940, the machine was well on its way to completion, and they submitted a manuscript describing the details of the computer, both for obtaining a patent (which would never be filed by Iowa State College) and to apply for additional funding for refinement and perfection of the construction and operation features. As building of the machine continued, Clifford Berry wrote a manual for the ABC. In the summer of 1941, John Mauchly visited Atanasoff in Ames to see the ABC. ^1In this context, a “computer” is a human person making calculations by hand.
The watershed transform: definitions, algorithms, and parallelization strategies. Fundamenta Informaticae. Results 1 - 10 of 106.

- IEEE TPAMI, 2004. Cited by 59 (21 self). The image foresting transform (IFT) is a graph-based approach to the design of image processing operators based on connectivity. It naturally leads to correct and efficient implementations and to a better understanding of how different operators relate to each other. We give here a precise definition of the IFT, and a procedure to compute it—a generalization of Dijkstra's algorithm—with a proof of correctness. We also discuss implementation issues and illustrate the use of the IFT in a few applications.

- In: IEEE Comp. Soc. Conf. Comp. Vision Pattern Recog., 2005.
Finally, we show that this formulation leads to a deep connection with the popular graph cuts method of [8, 24]. 1 "... ..." "... Persistent homology is an algebraic tool for measuring topological features of shapes and functions. It casts the multi-scale organization we frequently observe in nature into a mathematical formalism. Here we give a record of the short history of persistent homology and present its basic concepts. ..." Cited by 36 (1 self) Add to MetaCart Persistent homology is an algebraic tool for measuring topological features of shapes and functions. It casts the multi-scale organization we frequently observe in nature into a mathematical formalism. Here we give a record of the short history of persistent homology and present its basic concepts. Besides the mathematics we focus on algorithms and mention the various connections to applications, including to biomolecules, biological networks, data analysis, and geometric modeling. - JOURNAL OF MATHEMATICAL IMAGING AND VISION , 2005 "... In this paper, we investigate topological watersheds [1]. One of our main results is a necessary and sufficient condition for a map G to be a watershed of a map F, this condition is based on a notion of extension. A consequence of the theorem is that there exists a (greedy) polynomial time algorit ..." Cited by 35 (10 self) Add to MetaCart In this paper, we investigate topological watersheds [1]. One of our main results is a necessary and sufficient condition for a map G to be a watershed of a map F, this condition is based on a notion of extension. A consequence of the theorem is that there exists a (greedy) polynomial time algorithm to decide whether a map G is a watershed of a map F or not. We introduce a notion of “separation between two points ” of an image which leads to a second necessary and sufficient condition. 
We also show that, given an arbitrary total order on the minima of a map, it is possible to define a notion of "degree of separation of a minimum" relative to this order. This leads to a third necessary and sufficient condition for a map G to be a watershed of a map F. At last we derive, from our framework, a new definition for the dynamics of a minimum.

- JOURNAL OF MATHEMATICAL IMAGING AND VISION, 2005.

- In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, II, 2003. Cited by 32 (2 self). In this paper, we describe an algorithm called Fast Marching Watersheds that segments a triangle mesh into visual parts. This computer vision algorithm leverages a human vision theory known as the minima rule. Our implementation computes the principal curvatures and principal directions at each vertex of a mesh, and then our hillclimbing watershed algorithm identifies regions bounded by contours of negative curvature minima. These regions fit the definition of visual parts according to the minima rule. We present evaluation analysis and experimental results for the proposed algorithm.

- 2007.
Cited by 23 (11 self). The Morse-Smale complex is an efficient representation of the gradient behavior of a scalar function, and critical points paired by the complex identify topological features and their importance. We present an algorithm that constructs the Morse-Smale complex in a series of sweeps through the data, identifying various components of the complex in a consistent manner. All components of the complex, both geometric and topological, are computed, providing a complete decomposition of the domain. Efficiency is maintained by representing the geometry of the complex in terms of point sets.

- DISCRETE GEOMETRY FOR COMPUTER IMAGERY, FRANCE, 2003. Cited by 17 (4 self). This paper is devoted to the study of watershed algorithms behavior. Through the introduction of a concept of pass value, we show that most classical watershed algorithms do not allow the retrieval of some important topological features of the image (in particular, saddle points are not correctly computed). An important consequence of this result is that it is not possible to compute sound measures such as depth, area or volume of basins using most classical watershed algorithms. Only one watershed principle, called topological watershed, produces correct watershed contours.
Cited by 16 (3 self) Add to MetaCart Abstract—The Morse-Smale (MS) complex has proven to be a useful tool in extracting and visualizing features from scalar-valued data. However, efficient computation of the MS complex for large scale data remains a challenging problem. We describe a new algorithm and easily extensible framework for computing MS complexes for large scale data of any dimension where scalar values are given at the vertices of a closure-finite and weak topology (CW) complex, therefore enabling computation on a wide variety of meshes such as regular grids, simplicial meshes, and adaptive multiresolution (AMR) meshes. A new divide-and-conquer strategy allows for memory-efficient computation of the MS complex and simplification on-the-fly to control the size of the output. In addition to being able to handle various data formats, the framework supports implementation-specific optimizations, for example, for regular data. We present the complete characterization of critical point cancellations in all dimensions. This technique enables the topology based analysis of large data on off-the-shelf computers. In particular we demonstrate the first full computation of the MS complex for a 1 billion/ 1024 3 node grid on a laptop computer with 2Gb memory. Index Terms—Topology-based analysis, Morse-Smale complex, large scale data. 1
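The steepest-descent ("hill-climbing") watershed idea that recurs in the abstracts above can be illustrated with a minimal one-dimensional sketch. This is an illustrative toy only, not an implementation of any of the cited algorithms (which work on triangle meshes and higher-dimensional grids):

```python
def watershed_1d(heights):
    """Label each index with the local minimum reached by steepest descent.

    Toy illustration of hill-climbing watershed basins; ties descend left.
    """
    labels = []
    for i in range(len(heights)):
        j = i
        while True:
            best = j
            # look at the in-bounds neighbors and move strictly downhill
            for k in (j - 1, j + 1):
                if 0 <= k < len(heights) and heights[k] < heights[best]:
                    best = k
            if best == j:        # local minimum reached
                break
            j = best
        labels.append(j)
    return labels
```

On the profile `[3, 2, 1, 2, 3, 2, 1, 2, 3]` this yields two basins, one per local minimum; a full watershed transform would additionally mark the ridge between them as a divide line rather than absorbing it into a basin.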
ALEX Lesson Plans

Subject: Mathematics (9-12) or Technology Education (9-12)
Title: Lines of Best Fit
Description: This lesson includes a teacher-led activity on gathering data and lines of best fit. Vocabulary is stressed (positive, negative, or non-existent correlations). In groups, students will demonstrate knowledge through podcasting and/or demonstrations. Options are available for schools with varying degrees of technology. This lesson plan was created as a result of the Girls Engaged in Math and Science (GEMS) Project funded by the Malone Family Foundation.

Thinkfinity Lesson Plans

Subject: Mathematics
Title: Automobile Mileage: Age vs. Mileage
Description: In this lesson, one of a multi-part unit from Illuminations, students plot data about automobile mileage and interpret the meaning of the slope and y-intercept of the least squares regression line. By examining the graphical representation of the data, students analyze the meaning of the slope and y-intercept of the line and put those meanings in the context of the real-life application. This lesson incorporates an interactive regression line applet.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12

Subject: Mathematics
Title: The Centroid and the Regression Line
Description: This lesson, one of a multi-part unit from Illuminations, provides students with the opportunity to investigate the relationship between a set of data points and a curve used to fit the data points, using a computer-based interactive tool. Using the Regression Line Applet, students investigate the centroid of a data set and its significance for the line fitted to the data.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12

Subject: Mathematics
Title: Least Squares Regression
Description: In this nine-lesson unit, from Illuminations, students interpret the slope and y-intercept of least squares regression lines in the context of real-life data. Students use an interactive applet to plot the data and calculate the correlation coefficient and equation of the least squares regression line. These lessons develop skills in connecting, communicating, reasoning, and problem solving as well as representing fundamental ideas about data.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12

Subject: Mathematics
Title: Correlation and the Regression Line
Description: This lesson, one of a multi-part unit from Illuminations, provides students with the opportunity to investigate the relationship between a set of data points and a curve used to fit the data points, using a computer-based interactive tool. Using the Regression Line Applet, students learn about Pearson's correlation coefficient, the measure of the linear association between the horizontal variable and the vertical variable.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12

Subject: Mathematics
Title: Bathtub Water Levels
Description: In this lesson, one of a multi-part unit from Illuminations, students examine real-life data that illustrates a negative slope. Students interpret the meaning of the negative slope and y-intercept of the graph of the real-life data. By examining the graphical representation of the data, students relate the slope and y-intercept of the least squares regression line to the real-life data. They also interpret the correlation coefficient of the least squares regression line. This lesson incorporates an interactive regression line applet.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12

Subject: Mathematics
Title: Teaching Math Anxiety
Description: In this Science Update, from Science NetLinks, you'll hear about how female teachers may pass their own math anxiety to the girls they teach. Science Updates are audio interviews with scientists and are accompanied by a set of questions as well as links to related Science NetLinks lessons and other related resources.
Thinkfinity Partner: Science NetLinks
Grade Span: 6,7,8,9,10,11,12

Subject: Mathematics
Title: The Effects of Outliers
Description: This lesson, one of a multi-part unit from Illuminations, provides students with the opportunity to investigate the relationship between a set of data points and a curve used to fit the data points, using a computer-based interactive tool. Using the Regression Line Applet, students investigate the effect of outliers on a regression line and easily see their significance.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12

Subject: Mathematics
Title: Exploring Linear Data
Description: In this lesson, from Illuminations, students model linear data in a variety of settings. Students can work alone or in small groups to construct scatterplots, interpret data points and trends, and investigate the notion of line of best fit.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8,9,10,11,12

Subject: Mathematics
Title: Traveling Distances
Description: In this lesson, one of a multi-part unit from Illuminations, students interpret the meaning of the slope and y-intercept of a graph of real-life data. By examining the graphical representation of the data, students relate the slope and y-intercept of the least squares regression line to the real-life data. They also interpret the correlation coefficient of the resulting least squares regression line. This lesson incorporates an interactive regression line applet.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12

Subject: Mathematics
Title: Automobile Mileage: Comparing and Contrasting
Description: In this lesson, one of a multi-part unit from Illuminations, students compare and contrast their findings from previous lessons of the unit. This lesson allows students the time they need to think about and discuss what they have done in the previous lessons. This lesson provides the teacher with another opportunity to listen to student discourse and assess student understanding.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12

Subject: Mathematics
Title: The Regression Line and Correlation
Description: This four-lesson unit, from Illuminations, provides students with the opportunity to investigate the relationship between a set of data points and a curve used to fit the data points, using a computer-based interactive tool. Using the Regression Line Applet, students investigate the properties of regression lines and correlation.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12

Thinkfinity Podcasts

Subject: Mathematics
Title: Teaching Math Anxiety
Description: In this Science Update, from Science NetLinks, you'll hear about how female teachers may pass their own math anxiety to the girls they teach. Science Updates are audio interviews with scientists and are accompanied by a set of questions as well as links to related Science NetLinks lessons and other related resources.
Thinkfinity Partner: Science NetLinks
Grade Span: 6,7,8,9,10,11,12
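The quantities these regression lessons revolve around — slope, y-intercept, and Pearson's correlation coefficient — can be computed directly from their textbook formulas. A small sketch (not tied to the Illuminations applet, just the standard least squares definitions):

```python
def least_squares(xs, ys):
    """Least squares regression line y = slope*x + intercept, plus Pearson's r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)                      # spread of x
    syy = sum((y - my) ** 2 for y in ys)                      # spread of y
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))    # co-variation
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5                              # Pearson correlation
    return slope, intercept, r
```

For perfectly linear data such as `xs = [1, 2, 3, 4]`, `ys = [2, 4, 6, 8]` the line is `y = 2x` and `r = 1`; for decreasing data the slope and `r` come out negative, which is the situation the "Bathtub Water Levels" lesson explores.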
Posts about Mathematics on Dj Konservo

Now that no one cares about the contents of Joe the Plumber’s trash, it seems more resources can be devoted to boring old Albert Einstein’s “E=mc²” formula. It turns out he was right:

PARIS (AFP) — It’s taken more than a century, but Einstein’s celebrated formula e=mc2 has finally been corroborated, thanks to a heroic computational effort by French, German and Hungarian physicists.

A brainpower consortium led by Laurent Lellouch of France’s Centre for Theoretical Physics, using some of the world’s mightiest supercomputers, have set down the calculations for estimating the mass of protons and neutrons, the particles at the nucleus of atoms. According to the conventional model of particle physics, protons and neutrons comprise smaller particles known as quarks, which in turn are bound by gluons. The odd thing is this: the mass of gluons is zero and the mass of quarks is only five percent. Where, therefore, is the missing 95 percent? The answer, according to the study published in the US journal Science on Thursday, comes from the energy from the movements and interactions of quarks and gluons. In other words, energy and mass are equivalent, as Einstein proposed in his Special Theory of Relativity in 1905. The e=mc2 formula shows that mass can be converted into energy, and energy can be converted into mass. By showing how much energy would be released if a certain amount of mass were to be converted into energy, the equation has been used many times, most famously as the inspirational basis for building atomic weapons.

That’s right folks, Albert Einstein, genius and, lest we forget, war-monger. Thanks, AFP, for adding the line about Einstein’s formula being “the inspirational basis for building atomic weapons” without any further contextual information as to Einstein’s political views.
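The mass-energy equivalence the article describes is a one-line computation. As a quick illustration (the only input here is the SI value of the speed of light, nothing from the AFP study itself):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, exact SI value

def mass_to_energy(mass_kg):
    """Energy in joules released if the given mass were fully converted (E = m c^2)."""
    return mass_kg * SPEED_OF_LIGHT ** 2

# converting just one gram of mass yields roughly 9e13 joules
energy_of_one_gram = mass_to_energy(0.001)
```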
There is some exciting news in the world of math this week as geeks and “about 100,000 computers” determine the largest prime number yet proven; it’s about 13 million digits:

The new number is little more than that. Prime numbers are useful “building blocks” to many equations, but using existing algorithms to find new, large primes won’t likely affect ongoing research, said Cameron Stewart, a University of Waterloo professor and the Canada Research Chair in Number Theory. There are some practical implications (computer and security encryptions are based on prime numbers), but the find is more sport. “They’re a good tool. They’re also mysterious; they’re subtle objects …” Prof. Stewart said of prime numbers.

Okay, so maybe it’s not really a big deal. And yes, that is a hint of snark in my words above (including the title). In fact, if you ask me, with the aid of computers these geeks have nothing to brag about.

I knew after I posted on the History of Math! that others would jump on the bandwagon and come out with their own posts about ancient semi-legendary mathematicians. Yes, I could sense at the time that something like a “Leave Brittany Alone” phenomenon was in our midst. Now, roughly 9 months after my post on the subject, scientificblogging.com picks up where I left off: the urban legend surrounding the death of Hippasus.

I’ve scanned photocopies of photocopies of 3 pages from an arithmetic “textbook” that was hand-written by my great-great-great-grandfather in the 1810s (he dates the first page “July 2nd 1813”). There are 136 pages in all and the writing is kinda sloppy, imho. I’m going to read through it and look for mistakes.

My grandfather turned 90 on the 25th of July, so I’ve been on the road lately. Luckily Melo came along for the ride to keep me company. Also here are pics of a road sign which can be seen not far from my grandparents’ home. I know this topic may seem out of place for this blog, but frankly, I don’t care.
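Record primes of the kind mentioned in the prime-number post above are Mersenne numbers 2^p − 1, and the standard primality test behind such distributed hunts is the Lucas-Lehmer test. A sketch for small exponents (the 13-million-digit record obviously requires far more optimized arithmetic than plain Python integers):

```python
def is_mersenne_prime(p):
    """Lucas-Lehmer test: for an odd prime exponent p, the Mersenne number
    M = 2**p - 1 is prime iff s_{p-2} == 0, where s_0 = 4 and
    s_{k+1} = s_k**2 - 2 (mod M)."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0
```

For example, p = 7 gives 127 (prime), while p = 11 gives 2047 = 23 × 89 (composite), so a prime exponent alone is not enough.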
While reading about a completely unrelated subject I was reminded of the old 6th century B.C.E. Pythagoreans. You know… the followers of Pythagoras aka Hyperborean Apollo. In particular, I was reminded of the schism that took place amongst the Pythagoreans after their leader had died. Well, I’ll let Iamblichos explain:
Zentralblatt MATH
Publications of (and about) Paul Erdös

Zbl.No: 793.05081
Autor: Erdös, Paul; Ordman, Edward T.; Zalcstein, Yechezkel
Title: Clique partitions of chordal graphs. (In English)
Source: Comb. Probab. Comput. 2, No.4, 409-415 (1993).
Review: To partition the edges of a chordal graph on n vertices into cliques may require as many as n^2/6 cliques; there is an example requiring this many, which is also a threshold graph and a split graph. It is unknown whether this many cliques will always suffice. We are able to show that (1-c)n^2/4 cliques will suffice for some c > 0.
Classif.: 05C35 Extremal problems (graph theory); 05C70 Factorization, etc.
Keywords: clique covering; clique partition; partition; chordal graph; cliques; threshold graph; split graph
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
adventures and commotions in automated theorem proving hi all! am pleased to write up a topic that Ive studied for decades and have recently collected many interesting & fascinating links from wideranging cybersurfing expedition(s): automated computer theorem proving (ATP) [c2] and the closely associated computer assisted proofs.[c1] the recent trigger/occasion for finally writing up this roaming reverie is a blog by fortnow [a2] mentioning the NYT article/obituary [a1] on Appel of Appel and Haken 1976 4-color [automated] theorem fame.[c3] in my opinion [as a computer scientist] this is one of the great theorems of the 20th century, and I can vaguely recall reading inspiring science and even newspaper articles about the breakthrough when I was young. although alas, as Fortnow alludes in his questioning subtitle “still controversial?”, not all mathematicians are so impressed. some ambivalent and/or skeptical [hardcore?] naysayers still cite that it seems to be isolated and does not seem to be tied to other deep theorems as with important bridge theorems, or that with all the special cases and computer code is not humanly comprehensible or verifiable, [etc]. the proof has been worked on by others to simplify and streamline it eg Gonthier,[a3][a9] who, now working at Microsoft research, is one of the most prominent authorities in the field with other recent breakthroughs.[f6] imho more than half of the great breakthrough of the proof was to reduce an infinite number of cases to a finite number of cases. this is related to a concept that the visionary Hilbert had of “finitistic” or “finitary” mathematics [c5] and which imho is still central yet mysterious and unexplored. computers can help with theorem proving but only after the problem is reduced to a finite number of cases (or, reducing a statement/property about infinite number of objects to a finite proof). this basic operation or conceptual leap (“finitizing”?) 
must be happening in all nontrivial proofs but the way it happens in every different proof seems irregular; there does not seem to be an underlying pattern, which Hilbert was grasping for over a century ago—and of course both Turing’s & Gödel’s famous proofs of undecidability prove that such a hunt is inherently quixotic and somewhat chimerical; although there could be a sly “loophole”.[i4][a5] there are some outstanding online resources/refs on the subject of ATP, eg the special issue in Notices AMS [a4] and a relatively recent glowing Simons article/profile.[b4] another closely related area is “proving program termination”, recently profiled in ACM.[a5] the results in the field of ATP are occasionally dramatic and the overall field is attractive, trendy, even at times quite cutting-edge and exciting, inspiring amusing/clever parody.[b5] this is a deep connection between software and mathematics that is not totally appreciated by everybody to date. it was 1st observed long ago with the Curry-Howard correspondence.[c4] the basic idea is that proofs and programs have a remarkable correspondence. its very technical, but briefly, a lemma is like a subroutine, and a theorem is like a program that runs verifying some particular property. also, there are many proofs that are like a program that runs infinitely long and verifies some property for every case, and halts if the verification fails, showing a link to the area of program termination. there are some old breakthroughs in ATP, eg the robbins conjecture proof for boolean algebras, profiled in NYT.[b1] also just saw a big milestone, a formalization of the Myhill-Nerode theorem for FSMs [b2], which has strong connections to FSM optimization/minimization.
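the proofs-as-programs idea above can be made concrete in a proof assistant. a tiny Lean 4 sketch (purely illustrative, not taken from any of the cited systems): the proof term for "A and B implies B and A" literally is the program that swaps the components of a pair.

```lean
-- Under Curry-Howard, a proof of A ∧ B → B ∧ A is a function
-- that takes a pair of evidence and returns the swapped pair.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.2, h.1⟩
```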
another neat reference is the formalization of Gödel’s theorem in ATP software by Shankar.[b3] another eminent/pioneering/forerunning authority and near-contrarian, Zeilberger, has been advocating automated proofs for many years and just got major recent recognition.[b4] another huge breakthrough was the Kepler conjecture ATP by Hales in 1997 [c6], a conjecture actually more than a ¼-century older than Fermat’s Last Theorem [itself 3½ centuries old!] and arguably far more important than FLT or the 4-color theorem!

combinatorics — while delving deeply into this subject, the topic’s deep connection to combinatorics and another old controversy [amazingly, still active!] started to pop out. combinatorics proofs, working with discrete objects, are apparently more naturally amenable to computer proofs. discrete mathematics and combinatorics are definitely highly intimate to the point of being almost married with computer science. but, combinatorics surprisingly is a relatively recent mathematical invention! [a6] is a nice recent survey. that did not occur to me when I started studying the subject when it was already decades old, but the pioneers of the field are still alive and remember vividly a time when it was groundbreaking and yet not so popular; see eg preeminent forerunner/pioneer/authority Lovasz [a8] describing this in a video interview with Wigderson.[a7] mathematicians sometimes have very long views of history and we computer scientists should realize in the long view scheme of things we are mere upstarts! this culminates in a hilarious off-the-record, yet apparently genuine quote [a6] that is about the closest an elite mathematician—who normally inhabits a rarified, genteel, discreet realm—can come to outright “trash talk”:

Combinatorics is the slums of topology.
—Whitehead

* * *

recently I have been very fascinated & near excited over WT Gowers’ blog directions in this area.[d1-5] I found the blog recently and in it a huge raging controversy in his latest comment threads over the win by Szemeredi of the Abel prize (which indeed can be viewed as a dramatic combinatorics vs analysis “upset”). maybe the intense controversy is due to the very high award of $1M, which is a big deal in math, where for many years the most prestigious prize, the Fields medal, only awarded ~$15K! but most of the commotion is singlehandedly caused by a particular troll commenter named “sowa” and everyone’s reaction to his near-rantings. “sowa” is quite a colorful character! clearly intelligent and articulate, and yet at times bordering on a cartoonish 1-d caricature, a cranky curmudgeon, he created his own blog entitled “stop timothy gowers” over a year ago,[e1-5] mainly it seems in reactionary complaint to Gowers’s purported/supposed [hidden?] influence on the selection of Szemeredi, whose chief mathematical contributions are in combinatorics, which [in misc vernacular, stackexchange or cyber parlance] sowa “disses” and feels should be “ghettoized”. what is surprising is how many mathematicians might [covertly!] have some sympathy for his views. the jury is out on who sowa is but he seems to be an accomplished Russian mathematician with a lot of strong opinions, many of them downplaying, devaluing, or rejecting the importance of combinatorics.

Gowers pulled quite a remarkable scientific near-prank right around april fools day, somewhat reminiscent of Milgram’s famous old boundary-pushing sociological experiments! he put together some proofs on his web page and then asked respondents if they were “clear or unclear”. what he didn’t mention was that some were [really!] computer-generated via some of his new software and a collaboration with a computer scientist!
the remarkable experiment is captured in several recent blog posts,[d1-d3] and there was high polling participation due to the experiment being profiled on HackerNews. (HackerNews also led to my highest daily spike in hits due to publicity over Fukuyama’s proof.) the computer-generated proofs did fool some experts, who were sure they were human-generated when they were computer-generated (or vice versa). so, a sophisticated 21st-century mathematical variation on the Turing test.

the overall program fits in with some of Gowers’s older published ideas. Gowers has two remarkable and influential essays on mathematics. in [d5] he talks about the “two cultures of mathematics”, mainly [roughly boiled down to] the problem solvers vs. the problem conceptualizers (or problem solvers vs theory-builders, or roughly bottom-up vs top-down, respectively). the conceptualizers tend to correspond more with topological and continuous mathematics and the problem solvers more with discrete math and combinatorics. in [d4] (unfortunately only online in .ps format) he imagines a futuristic sci-fi scenario [another normally fringe/unconservative area for a mathematician!] where a human works in conjunction with a computer in an expert collaboration/dialog.

so the semi-debate between Gowers and his apparent archnemesis “shadow” sowa (Gowers is mostly refusing to take the bait) is entertaining, quasi-epic, and ongoing in cyberspace. last I checked, sowa was apparently stunned into total silence (a rare feat for hard-core cyber trolls) and had absolutely zero response to Gowers’s bold, groundbreaking, maybe seminal new experiment and directions. Fields medalist Gowers’s collaboration with a computer scientist in the field of automated theorem proving is a striking new advance, sure to make waves and provoke further reactions.
in later sections Ive collected the many stackexchange [g] and mathoverflow [h] questions related to the area that Ive found, showing a very strong and widespread degree of public interest in the subject, including from computer science.[f,i] have much more to say on the subject but for now will just “plug” my other recent effort, a brief writeup of an empirical attack on the Collatz conjecture based on FSM transducers.

a. proof
b. milestones
c. wikipedia
d. gowers
e. sowa
f. cstheory
g. mathematics
h. mathoverflow
i. cs
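for reference, the Collatz iteration that the writeup mentioned above attacks empirically is only a few lines (the FSM-transducer machinery of that writeup is not reproduced here):

```python
def collatz_steps(n):
    """Number of 3n+1 steps needed to reach 1; the conjecture is
    that this terminates for every positive integer n."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# an empirical check over a small range, in the spirit of the writeup:
# the loop terminates for every n tried, with no counterexample found
for n in range(1, 1000):
    collatz_steps(n)
```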
Message from AMSI regarding proposed cuts at Victoria University
Posted on 9 October, 2009 by Terence Tao

The new director of the Australian Mathematical Sciences Institute, Geoff Prince, has written an open letter to the Vice Chancellor and President of Victoria University, Elizabeth Harman, regarding the proposed severe cuts in the mathematics and statistics departments (from 8.5 FTE to 4.5 FTE) through targeted redundancies. (We posted about these cuts in a previous post.) Concerted pressure of this type by the mathematical community can make a difference; strong protests over similar actions by the University of Southern Queensland resulted in a significant reduction in the staff cuts, and USQ afterwards hired several maths and stats faculty (including one whom they had made redundant!) after they realised that the cuts that they did enact left them unable to fulfill their mathematical teaching obligations. The letter is provided in full below the fold. (Reproduced with permission.)

Dear Professor Harman,

I am writing to you to express the deep concern of the Australian Mathematical Sciences Institute at the planned reduction in your university’s commitment to the mathematical sciences. While strategic resourcing is a university matter, on this occasion the proposed cuts in continuing mathematics positions appear to be unwarranted in their severity, and the plans for mathematics and statistics teaching that accompany them threaten the provision of tertiary mathematics education and the training of secondary mathematics teachers in the western suburbs of Melbourne. These changes affect the broader community, and AMSI feels compelled to respond.

Students at Victoria University deserve the dedication and encouragement of continuing academic staff committed to improving mathematical outcomes for students in the west. Undue reliance on casual staff will weaken the quality of the university’s mathematical program. Academics also need the opportunity for research.
Without it your university will not be able to participate in any of the exciting new developments that are occurring in the modern mathematical sciences and its multidisciplinary applications, or to attract the best staff in future. The decision to abandon support for the university’s Research Group in Mathematical Inequalities and Applications particularly undermines opportunities for research in these areas. Of course your students, including those entering the teaching profession, need exposure to these developments, and that won’t happen unless Victoria University commits to adequate continuing positions and research opportunities.

Victoria University is an inaugural member of AMSI and has benefited from this membership. AMSI’s involvement was key to the recent upward revision of the cluster funding rate for mathematics and statistics by the Commonwealth Government and to the significant reduction in HECS fees for mathematics and science students and for those pursuing teacher training in these areas. AMSI provided Victoria University with $70,000 for the installation of its Access Grid Room, a facility which hardwires your university into the national mathematical sciences teaching and research scene. AMSI has recently received funding from DEEWR for a major schools project which has the potential to deliver more students interested in mathematics to Victoria University.

The mathematical sciences in Australia are facing unprecedented challenges which are widely recognized as being of national strategic importance. I have attached a copy of “A National Strategy for Mathematical Sciences in Australia” prepared by Prof. Hyam Rubinstein, Chair of the Australian Academy of Science’s Committee for the Mathematical Sciences, in consultation with the Australian Council of Heads of Mathematical Sciences. I hope that this document will help convince you that Victoria University has a leadership role to play in the provision of mathematical sciences courses to students in the western suburbs.
I urge you to re-engage with the university’s mathematicians to guarantee the integrity of this fundamental discipline at Victoria University.

Yours sincerely,

Geoff Prince
Director, Australian Mathematical Sciences Institute

cc: Commissioner Dianne Foggo; Associate Professor Michelle Towstoless; Professor Richard Thorn; The Hon Nicola Roxon MP; The Hon Marsha Thomson MP; The Hon Julia Gillard MP

Filed under: AMSI, VU
Patent application title: VELOCITY CALCULATING DEVICE, VELOCITY CALCULATION METHOD, AND NAVIGATION DEVICE

Provided is a velocity calculating device including a vertical acceleration detector that detects a vertical acceleration generated due to an undulation of a contact surface; a horizontal angular velocity detector that detects a horizontal angular velocity generated due to the undulation; a correlation coefficient calculator that calculates a correlation coefficient that represents a degree to which an acceleration in the direction of travel is mixed into the vertical acceleration in accordance with an attachment angle; a true vertical acceleration detector that calculates a true vertical acceleration by subtracting the acceleration in the direction of travel mixed into the vertical acceleration from the vertical acceleration, the acceleration in the direction of travel mixed into the vertical acceleration being calculated using the correlation coefficient; and a velocity calculator that calculates a velocity of a moving body on the basis of the true vertical acceleration and the horizontal angular velocity.
1. A velocity calculating device comprising: a vertical acceleration detector mounted on a moving body, the vertical acceleration detector detecting an acceleration in a vertical direction generated due to an undulation of a contact surface that is in contact with the moving body; a horizontal angular velocity detector mounted on the moving body, the horizontal angular velocity detector detecting an angular velocity around a horizontal axis that is perpendicular to a direction of travel of the moving body, the angular velocity being generated due to the undulation of the contact surface; a correlation coefficient calculator that calculates a correlation coefficient that represents a degree to which an acceleration in the direction of travel of the moving body is mixed into the acceleration in the vertical direction in accordance with an attachment angle with which a body including the vertical acceleration detector and the horizontal angular velocity detector is attached to the moving body; a true vertical acceleration detector that calculates a true acceleration in the vertical direction by subtracting the acceleration in the direction of travel mixed into the acceleration in the vertical direction from the acceleration in the vertical direction, the acceleration in the direction of travel mixed into the acceleration in the vertical direction being calculated on the basis of the correlation coefficient; and a velocity calculator that calculates a velocity of the moving body in the direction of travel of the moving body on the basis of the true acceleration in the vertical direction and the angular velocity around the horizontal axis.

2. The velocity calculating device according to claim 1, wherein the correlation coefficient calculator calculates the correlation coefficient only when the acceleration in the direction of travel is larger than a predetermined threshold.

3. The velocity calculating device according to claim 2, wherein the correlation coefficient calculator calculates a final value of the correlation coefficient by using, as an input, a new correlation coefficient that is obtained for each sampling and by performing processing using a low pass filter.

4. The velocity calculating device according to claim 3, wherein the correlation coefficient calculator changes a gain of the low pass filter in accordance with a magnitude of the acceleration in the direction of travel.

5. The velocity calculating device according to claim 1, further comprising: a second correlation coefficient calculator that calculates a second correlation coefficient that represents a degree to which a yaw angular velocity of the moving body around the Z axis is mixed into the angular velocity around the horizontal axis in accordance with the attachment angle with which the body is attached to the moving body; and a true horizontal angular velocity detector that calculates a true angular velocity around the horizontal axis by subtracting the yaw angular velocity mixed into the angular velocity around the horizontal axis from the angular velocity around the horizontal axis, the yaw angular velocity mixed into the angular velocity around the horizontal axis being calculated on the basis of the second correlation coefficient, wherein the velocity calculator calculates the velocity of the moving body in the direction of travel of the moving body on the basis of the true acceleration in the vertical direction and the true angular velocity around the horizontal axis.

6. The velocity calculating device according to claim 5, wherein the second correlation coefficient calculator calculates the second correlation coefficient only when the yaw angular velocity around the Z axis is higher than a predetermined threshold.
7. A method of calculating a velocity, the method comprising the steps of: detecting an acceleration in a vertical direction generated due to an undulation of a contact surface that is in contact with a moving body by using a vertical acceleration detector mounted on the moving body; detecting an angular velocity around a horizontal axis that is perpendicular to a direction of travel of the moving body by using a horizontal angular velocity detector mounted on the moving body, the angular velocity being generated due to the undulation of the contact surface; calculating a correlation coefficient that represents a degree to which an acceleration in the direction of travel of the moving body is mixed into the acceleration in the vertical direction in accordance with an attachment angle with which a body including the vertical acceleration detector and the horizontal angular velocity detector is attached to the moving body by using a correlation coefficient calculator; calculating a true acceleration in the vertical direction by subtracting the acceleration in the direction of travel mixed into the acceleration in the vertical direction from the acceleration in the vertical direction by using a predetermined true vertical acceleration detector, the acceleration in the direction of travel mixed into the acceleration in the vertical direction being calculated on the basis of the correlation coefficient; and calculating a velocity of the moving body in the direction of travel of the moving body on the basis of the true acceleration in the vertical direction and the angular velocity around the horizontal axis by using a velocity calculator.
8. A navigation device comprising: a vertical acceleration detector mounted on a moving body, the vertical acceleration detector detecting an acceleration in a vertical direction generated due to an undulation of a contact surface that is in contact with the moving body; a horizontal angular velocity detector mounted on the moving body, the horizontal angular velocity detector detecting an angular velocity around a horizontal axis that is perpendicular to a direction of travel of the moving body, the angular velocity being generated due to the undulation of the contact surface; a correlation coefficient calculator that calculates a correlation coefficient that represents a degree to which an acceleration in the direction of travel of the moving body is mixed into the acceleration in the vertical direction in accordance with an attachment angle with which a body including the vertical acceleration detector and the horizontal angular velocity detector is attached to the moving body; a true vertical acceleration detector that calculates a true acceleration in the vertical direction by subtracting the acceleration in the direction of travel mixed into the acceleration in the vertical direction from the acceleration in the vertical direction, the acceleration in the direction of travel mixed into the acceleration in the vertical direction being calculated on the basis of the correlation coefficient; a velocity calculator that calculates a velocity of the moving body in the direction of travel of the moving body on the basis of the true acceleration in the vertical direction and the angular velocity around the horizontal axis; a vertical angular velocity detector that calculates an angular velocity around the vertical axis perpendicular to the direction of travel; an angle calculator that calculates an angle by which the moving body has rotated on the basis of the angular velocity around the vertical axis; and a position calculator that calculates a position of the moving body on the basis of the velocity in the direction of travel that is calculated by the velocity calculator and the angle that is calculated by the angle calculator.

BACKGROUND OF THE INVENTION

[0001] 1. Field of the Invention

The present invention relates to a velocity calculating device, a velocity calculating method, and a navigation device, which are suitable for, for example, a portable navigation device.

2. Description of the Related Art

Existing navigation devices receive position signals (hereinafter referred to as GPS signals) from a plurality of global positioning system (GPS) satellites and calculate the present position of a vehicle on the basis of the GPS signals. However, when a vehicle in which the navigation device is placed is in a tunnel or an underground parking garage, it is difficult for the navigation device to receive GPS signals from GPS satellites and to calculate the present position on the basis of the GPS signals. Even when it is difficult to receive GPS signals, some navigation devices calculate the velocity in the direction of travel of the vehicle on the basis of the acceleration in a horizontal direction perpendicular to the direction of travel and the angular velocity around the vertical axis perpendicular to the direction of travel when the vehicle is cornering, and thereby calculate the present position of the vehicle on the basis of the velocity in the direction of travel (see, for example, Japanese Unexamined Patent Application Publication No. 2008-76389).

SUMMARY OF THE INVENTION

[0007] Such navigation devices can calculate the velocity in the direction of travel when the vehicle is cornering, but it is difficult to calculate the velocity in the direction of travel when the vehicle is moving linearly. Therefore, it is difficult for such navigation devices to calculate the velocity in the direction of travel under all road conditions.
The present invention provides a velocity calculating device, a velocity calculating method, and a navigation device that are capable of precisely calculating the velocity of a vehicle under all road conditions.

According to an embodiment of the present invention, there is provided a velocity calculating device including a vertical acceleration detector mounted on a moving body, the vertical acceleration detector detecting an acceleration in a vertical direction generated due to an undulation of a contact surface that is in contact with the moving body; a horizontal angular velocity detector mounted on the moving body, the horizontal angular velocity detector detecting an angular velocity around a horizontal axis that is perpendicular to a direction of travel of the moving body, the angular velocity being generated due to the undulation of the contact surface; a correlation coefficient calculator that calculates a correlation coefficient that represents a degree to which an acceleration in the direction of travel of the moving body is mixed into the acceleration in the vertical direction in accordance with an attachment angle with which a body including the vertical acceleration detector and the horizontal angular velocity detector is attached to the moving body; a true vertical acceleration detector that calculates a true acceleration in the vertical direction by subtracting the acceleration in the direction of travel mixed into the acceleration in the vertical direction from the acceleration in the vertical direction, the acceleration in the direction of travel mixed into the acceleration in the vertical direction being calculated on the basis of the correlation coefficient; and a velocity calculator that calculates a velocity of the moving body in the direction of travel of the moving body on the basis of the true acceleration in the vertical direction and the angular velocity around the horizontal axis.
According to an embodiment of the present invention, there is provided a method of calculating a velocity, the method including the steps of detecting an acceleration in a vertical direction generated due to an undulation of a contact surface that is in contact with a moving body by using a vertical acceleration detector mounted on the moving body; detecting an angular velocity around a horizontal axis that is perpendicular to a direction of travel of the moving body by using a horizontal angular velocity detector mounted on the moving body, the angular velocity being generated due to the undulation of the contact surface; calculating a correlation coefficient that represents a degree to which an acceleration in the direction of travel of the moving body is mixed into the acceleration in the vertical direction in accordance with an attachment angle with which a body including the vertical acceleration detector and the horizontal angular velocity detector is attached to the moving body by using a correlation coefficient calculator; calculating a true acceleration in the vertical direction by subtracting the acceleration in the direction of travel mixed into the acceleration in the vertical direction from the acceleration in the vertical direction by using a predetermined true vertical acceleration detector, the acceleration in the direction of travel mixed into the acceleration in the vertical direction being calculated on the basis of the correlation coefficient; and calculating a velocity of the moving body in the direction of travel of the moving body on the basis of the true acceleration in the vertical direction and the angular velocity around the horizontal axis by using a velocity calculator.
Thus, the degree to which the acceleration of the moving body in the direction of travel is mixed into the acceleration in the vertical direction in accordance with the angle with which the body is mounted on the moving body is calculated as the correlation coefficient; the acceleration in the direction of travel that is mixed into the acceleration in the vertical direction is calculated on the basis of the correlation coefficient; the true acceleration in the vertical direction is calculated by subtracting that acceleration from the acceleration in the vertical direction; and the velocity of the moving body in the direction of travel is calculated on the basis of the true acceleration in the vertical direction and the angular velocity around the horizontal axis. The velocity of the moving body can thereby be precisely calculated irrespective of whether or not the body of the velocity calculating device is attached to the moving body at an attachment angle.
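The correction summarized above can be illustrated with a short sketch. All function names are hypothetical, and estimating the correlation coefficient as a least-squares slope between the X-axis and Z-axis readings is an assumption made here for illustration; the text describes the coefficient but does not prescribe this estimator.

```python
# Illustrative sketch of the correction described above: when the body is
# mounted at an angle, a fraction of the forward (X axis) acceleration leaks
# into the Z axis reading. Estimating that fraction as a coefficient and
# subtracting the leaked component recovers the true vertical acceleration.
# The least-squares slope used here is an assumed, not prescribed, estimator.

def correlation_coefficient(ax_samples, az_samples):
    """Slope of az versus ax: the fraction of forward accel mixed into az."""
    n = len(ax_samples)
    mean_x = sum(ax_samples) / n
    mean_z = sum(az_samples) / n
    num = sum((a - mean_x) * (z - mean_z)
              for a, z in zip(ax_samples, az_samples))
    den = sum((a - mean_x) ** 2 for a in ax_samples)
    return num / den

def true_vertical_accel(az, ax, coeff):
    """Subtract the leaked forward component from the Z axis reading."""
    return az - coeff * ax

# With a 10% leak the slope recovers 0.1, and the correction removes it.
ax = [0.0, 1.0, 2.0, 3.0]
az = [0.5 + 0.1 * a for a in ax]   # true vertical accel 0.5 plus the leak
c = correlation_coefficient(ax, az)
print(round(c, 6), round(true_vertical_accel(az[3], ax[3], c), 6))  # 0.1 0.5
```

In the claimed device the coefficient would be refined per sample through a low pass filter rather than computed in one batch; the batch form above is only the simplest way to show the subtraction.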
According to an embodiment of the present invention, there is provided a navigation device including a vertical acceleration detector mounted on a moving body, the vertical acceleration detector detecting an acceleration in a vertical direction generated due to an undulation of a contact surface that is in contact with the moving body; a horizontal angular velocity detector mounted on the moving body, the horizontal angular velocity detector detecting an angular velocity around a horizontal axis that is perpendicular to a direction of travel of the moving body, the angular velocity being generated due to the undulation of the contact surface; a correlation coefficient calculator that calculates a correlation coefficient that represents a degree to which an acceleration in the direction of travel of the moving body is mixed into the acceleration in the vertical direction in accordance with an attachment angle with which a body including the vertical acceleration detector and the horizontal angular velocity detector is attached to the moving body; a true vertical acceleration detector that calculates a true acceleration in the vertical direction by subtracting the acceleration in the direction of travel mixed into the acceleration in the vertical direction from the acceleration in the vertical direction, the acceleration in the direction of travel mixed into the acceleration in the vertical direction being calculated on the basis of the correlation coefficient; a velocity calculator that calculates a velocity of the moving body in the direction of travel of the moving body on the basis of the true acceleration in the vertical direction and the angular velocity around the horizontal axis; a vertical angular velocity detector that calculates an angular velocity around the vertical axis perpendicular to the direction of travel; an angle calculator that calculates an angle by which the moving body has rotated on the basis of the angular velocity around the vertical axis; and a
position calculator that calculates a position of the moving body on the basis of the velocity in the direction of travel that is calculated by the velocity calculator and the angle that is calculated by the angle calculator. Thus, the degree to which the acceleration of the moving body in the direction of travel is mixed into the acceleration in the vertical direction in accordance with the angle with which the body is mounted on the moving body is calculated as the correlation coefficient; the acceleration in the direction of travel that is mixed into the acceleration in the vertical direction is calculated on the basis of the correlation coefficient; the true acceleration in the vertical direction is calculated by subtracting that acceleration from the acceleration in the vertical direction; the velocity of the moving body in the direction of travel is calculated on the basis of the true acceleration in the vertical direction and the angular velocity around the horizontal axis; and the position of the moving body is calculated on the basis of the velocity in the direction of travel and the angle that is calculated by the angle calculator. The velocity of the moving body can thereby be precisely calculated irrespective of whether or not the body of the velocity calculating device is attached to the moving body at an attachment angle.
The embodiments of the present invention realize a velocity calculating device and a method of calculating a velocity with which the degree to which the acceleration of the moving body in the direction of travel is mixed into the acceleration in the vertical direction in accordance with the angle with which the body is mounted on the moving body is calculated as the correlation coefficient; the acceleration in the direction of travel that is mixed into the acceleration in the vertical direction is calculated on the basis of the correlation coefficient; the true acceleration in the vertical direction is calculated by subtracting that acceleration from the acceleration in the vertical direction; and the velocity of the moving body in the direction of travel is calculated on the basis of the true acceleration in the vertical direction and the angular velocity around the horizontal axis, whereby the velocity of the moving body can be precisely calculated irrespective of whether or not the body of the velocity calculating device is attached to the moving body at an attachment angle.
The embodiment of the present invention realizes a navigation device with which the degree to which the acceleration of the moving body in the direction of travel is mixed into the acceleration in the vertical direction in accordance with the angle with which the body is mounted on the moving body is calculated as the correlation coefficient; the acceleration in the direction of travel that is mixed into the acceleration in the vertical direction is calculated on the basis of the correlation coefficient; the true acceleration in the vertical direction is calculated by subtracting that acceleration from the acceleration in the vertical direction; the velocity of the moving body in the direction of travel is calculated on the basis of the true acceleration in the vertical direction and the angular velocity around the horizontal axis; and the position of the moving body is calculated on the basis of the velocity in the direction of travel and the angle that is calculated by the angle calculator, whereby the velocity of the moving body can be precisely calculated irrespective of whether or not the body of the velocity calculating device is attached to the moving body at an attachment angle.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] FIG. 1A is a diagram illustrating a vehicle traveling on a concave road surface, and FIG. 1B is a diagram illustrating the vehicle traveling on a convex road surface;
FIG. 2 is a diagram illustrating a vehicle traveling along a curve;
FIG. 3 is a diagram illustrating a method of calculating the present position using a velocity and an angle;
FIG. 4 is a diagram illustrating the overall structure of a PND;
FIG. 5 is a diagram illustrating the definition of the coordinate system associated with the PND;
FIG. 6 is a diagram illustrating sensors included in the PND;
FIG. 7 is a block diagram illustrating the circuit structure of the PND;
FIG. 8 is a block diagram illustrating the structure of a velocity calculator;
FIG. 9 is a graph illustrating the relationship between a height and an angle;
FIGS. 10A and 10B are graphs illustrating the angle of a road surface when a vehicle is traveling at a low velocity;
FIGS. 11A and 11B are graphs illustrating the angle of a road surface when a vehicle is traveling at a high velocity;
FIG. 12 is a graph illustrating the angle of a road surface when a vehicle is traveling at a very low velocity;
FIG. 13 is a diagram illustrating a vibration due to a cradle;
FIG. 14 is a graph illustrating a total acceleration and a total angular velocity after being high pass filtered;
FIGS. 15A to 15H are graphs illustrating the total angular velocity that has been Fourier transformed for every 4096 data points;
FIGS. 16A to 16H are graphs illustrating the total acceleration that has been Fourier transformed for every 4096 data points;
FIGS. 17A to 17D are graphs illustrating a comparison of low pass filtering performed on the total acceleration;
FIGS. 18A to 18D are graphs illustrating a comparison of low pass filtering performed on the total angular velocity;
FIG. 19 is a graph illustrating the relationship between a front acceleration and a rear acceleration when the vehicle is traveling at a low velocity;
FIGS. 20A and 20B are graphs illustrating the relationship between the front acceleration and the rear acceleration when the vehicle is traveling at a medium velocity and at a high velocity;
FIGS. 21A to 21F are graphs illustrating a simulation result of the acceleration, the pitch rate, and the velocity when the PND is placed at three different positions;
FIG. 22 is a graph illustrating the relationship between the maximum value and the minimum value;
FIG. 23 is a graph illustrating the relationship between the velocity and the number of data points;
FIGS. 24A and 24B are diagrams illustrating accelerations and pitch rates for arcs having different lengths;
FIG. 25 is a flowchart illustrating a process of calculating the present position using velocity calculation;
FIGS. 26A and 26B are graphs illustrating examples of measurement results of the acceleration, the angular velocity, and the velocity;
FIGS. 27A and 27B are graphs illustrating a first comparison between the measurement results and references;
FIGS. 28A and 28B are graphs illustrating a second comparison between the measurement results and the references;
FIGS. 29A and 29B are graphs illustrating a third comparison between the measurement results and the references;
FIGS. 30A and 30B are graphs illustrating a fourth comparison between the measurement results and the references;
FIGS. 31A and 31B are graphs illustrating a fifth comparison between the measurement results and the references;
FIGS. 32A to 32C are graphs illustrating a first comparison between the measurement results and the references when the vehicle is traveling along a curve;
FIGS. 33A to 33C are graphs illustrating a second comparison between the measurement results and the references when the vehicle is traveling along a curve;
FIGS. 34A to 34C are graphs illustrating a third comparison between the measurement results and the references when the vehicle is traveling along a curve;
FIGS. 35A and 35B are graphs illustrating a comparison between a route on a map and a travel path of the vehicle;
FIG. 36 is a graph illustrating a comparison between the velocity and the distance measured with a PND placed on a light car and the velocity and the distance calculated on the basis of GPS signals;
FIG. 37 is a graph illustrating a comparison between the velocity and the distance measured with a PND placed on a minivan and the velocity and the distance calculated on the basis of GPS signals;
FIG. 38 is a diagram illustrating a PND according to the second embodiment in an upward tilt position;
FIG. 39 is a graph illustrating the ratio of the GPS velocity to the autonomous velocity when the PND is in the upward tilt position;
FIG. 40 is a diagram illustrating a position at which an abnormal value EV1 is output;
FIG. 41 is a diagram illustrating a position at which an abnormal value EV2 is output;
FIG. 42 is a diagram illustrating a position at which an abnormal value EV3 is output;
FIG. 43 is a graph illustrating a comparison between the autonomous velocity calculated by performing velocity calculation and the GPS velocity;
FIG. 44 is a diagram illustrating a correlation between the X axis acceleration and the Z axis acceleration;
FIG. 45 is a block diagram illustrating a velocity calculator according to a second embodiment;
FIG. 46 is a block diagram illustrating a velocity calculator according to another embodiment; and
FIG. 47 is a diagram illustrating an example of the way the PND is used according to another embodiment.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0063] Hereinafter, embodiments for carrying out the present invention (hereinafter referred to as embodiments) will be described in the following order with reference to the drawings.

1. First Embodiment (an example in which a navigation device is not tilted)
2. Second Embodiment (an example removing the influence of an unwanted acceleration component due to a tilt of the navigation device)
3. Other Embodiments

1. First Embodiment

1-1. Fundamental Principle

In the following description, a personal navigation device (hereinafter referred to as a PND) is used as an example of a first embodiment of the present invention, and the fundamental principle of calculating the velocity and the present position of a vehicle using the PND will be described.

1-1-1. Principle of Velocity Calculation

In practice, a road on which a vehicle travels is seldom flat, and is generally concave as illustrated in FIG. 1A or generally convex as illustrated in FIG. 1B.
In the coordinate system associated with the vehicle, the X axis extends in the front-back direction, the Y axis extends in a horizontal direction perpendicular to the X axis, and the Z axis extends in the vertical direction. The PND is placed, for example, on the dashboard of the vehicle. When the vehicle travels on the concave road (FIG. 1A), a three-axis acceleration sensor of the PND detects a downward acceleration α_z along the Z axis with a sampling frequency of, for example, 50 Hz. A Y axis gyro sensor of the PND detects an angular velocity ω_y around the Y axis (hereinafter referred to as a pitch rate) perpendicular to the direction of travel of the vehicle with a sampling frequency of, for example, 50 Hz. For the PND, the sign of the downward acceleration α_z along the Z axis is defined as positive. The sign of the pitch rate ω_y upwardly rotating, with respect to the direction of travel, along an imaginary circle that is formed along a concave road surface illustrated in FIG. 1A is defined as positive. The PND calculates the velocity V of the vehicle in the direction of travel 50 times per second using the acceleration α_z detected by the three-axis acceleration sensor and the pitch rate ω_y detected by the Y axis gyro sensor in accordance with the following equation (1).

V = α_z / ω_y   (1)

When the vehicle travels on a convex road (FIG. 1B), the three-axis acceleration sensor of the PND detects an upward acceleration α_z' along the Z axis with a sampling frequency of, for example, 50 Hz, and the Y axis gyro sensor of the PND detects a pitch rate ω_y' around the Y axis with a sampling frequency of, for example, 50 Hz. The PND calculates the velocity V' of the vehicle in the direction of travel 50 times per second using the acceleration α_z' detected by the three-axis acceleration sensor and the pitch rate ω_y' detected by the Y axis gyro sensor in accordance with the following equation (2).
V' = α_z' / ω_y'   (2)

For convenience of description here, a negative acceleration is described as the acceleration α_z'. In practice, the three-axis acceleration sensor detects the acceleration α_z' as a negative value of the acceleration α_z. Likewise, a negative pitch rate is described as the pitch rate ω_y'. In practice, the Y axis gyro sensor detects the pitch rate ω_y' as a negative value of the pitch rate ω_y. Therefore, in practice, the velocity V' is also calculated as the velocity V.

1-1-2. Principle of Calculating Present Position

Next, the principle of calculating the present position on the basis of the velocity V, which has been calculated by using the above principle of velocity calculation, and the angular velocity around the Z axis will be described. Referring to FIG. 2, when the vehicle is, for example, turning to the left, a Z axis gyro sensor of the PND detects an angular velocity around the Z axis (hereinafter referred to as a yaw rate) ω_z with a sampling frequency of, for example, 50 Hz. Referring to FIG. 3, the PND calculates the displacement from a previous position P0 to a present position P1 on the basis of the velocity V at the previous position P0 and an angle θ that is calculated by multiplying the yaw rate ω_z detected by the Z axis gyro sensor by a sampling period (in this case, 0.02 s). The PND calculates the present position P1 by adding the displacement to the previous position P0.

1-2. Structure of PND

The specific structure of the PND, which calculates the velocity of a vehicle using the fundamental principle described above, will be described.

1-2-1. External Structure of PND

Referring to FIG. 4, a PND 1 has a display 2 on a front surface thereof. The display 2 can display a map image corresponding to map data stored in, for example, a nonvolatile memory (not shown) of the PND 1.
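Returning to the velocity principle of equations (1) and (2) above, the calculation amounts to dividing the vertical acceleration by the pitch rate. The following is a minimal sketch with hypothetical names; the guard against a near-zero pitch rate (on an essentially flat road the ratio is unreliable) is an assumption, not something the text specifies.

```python
# Sketch of equations (1) and (2): V = alpha_z / omega_y.
# A vehicle following the road's vertical curvature at speed V experiences a
# vertical (centripetal) acceleration alpha_z = V * omega_y, where omega_y is
# the pitch rate, so V can be recovered as their ratio. On a convex road both
# quantities flip sign and the ratio is unchanged, matching equation (2).

def velocity_from_pitch(alpha_z, omega_y, min_omega=1e-3):
    """Return the speed alpha_z / omega_y in m/s, or None when the pitch
    rate is too small for the ratio to be reliable (nearly flat road)."""
    if abs(omega_y) < min_omega:
        return None
    return alpha_z / omega_y

print(velocity_from_pitch(0.5, 0.025))    # 20.0 on a concave road
print(velocity_from_pitch(-0.5, -0.025))  # 20.0 on a convex road: signs cancel
print(velocity_from_pitch(0.3, 0.0))      # None: flat road, ratio undefined
```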
The PND 1 is supported by and is mechanically and electrically connected to a cradle 3 that is attached to a dashboard of a vehicle with a suction cup 3A. Thus, the PND 1 operates using electric power supplied by a battery of the vehicle through the cradle 3. When the PND 1 is detached from the cradle 3, the PND 1 operates using electric power supplied by an internal battery. The PND 1 is disposed so that the display 2 extends perpendicular to the direction of travel of the vehicle. FIG. 5 illustrates the coordinate system associated with the PND 1. The X axis extends in the front-back direction of the vehicle, the Y axis extends in a horizontal direction perpendicular to the X axis, and the Z axis extends in the vertical direction. In the coordinate system, the direction of travel of the vehicle is defined as the positive direction along the X axis, the direction to the right is defined as the positive direction along the Y axis, and the direction downward is defined as the positive direction along the Z axis.

1-2-2. Sensor Structure of PND

Referring to FIG. 6, the PND 1 includes a three-axis acceleration sensor 4, a Y axis gyro sensor 5, a Z axis gyro sensor 6, and a barometric pressure sensor 7. The three-axis acceleration sensor 4 detects an acceleration α_x along the X-axis, an acceleration α_y along the Y-axis, and an acceleration α_z along the Z-axis, respectively, as voltages. The Y axis gyro sensor 5, the Z axis gyro sensor 6, and the barometric pressure sensor 7 respectively detect the pitch rate ω_y around the Y axis, the yaw rate ω_z around the Z axis, and an ambient pressure PR as voltages.

1-2-3. Circuit Structure of PND

Referring to FIG. 7, a controller 11 of the PND 1, which is a central processing unit (CPU), controls the PND 1 in accordance with an operating system that is read from a memory 12 that includes a nonvolatile memory.
In the PND 1, the controller 11 performs velocity calculation and other processes described below in accordance with various application programs that are read from the memory 12. In order to perform the velocity calculation and other processes, the controller 11 includes, as functional blocks, a GPS processor 21, a velocity calculator 22, an angle calculator 23, a height calculator 24, a position calculator 25, and a navigator 26. A GPS antenna ANT of the PND 1 receives GPS signals from GPS satellites, and the GPS signals are sent to the GPS processor 21 of the controller 11. The GPS processor 21 obtains present position data NPD1 by accurately measuring the present position of the vehicle on the basis of orbit data obtained by demodulating the GPS signals and data on the distances between the GPS satellites and the vehicle, and sends the present position data NPD1 to the navigator 26. The navigator 26 reads map data of a region including the present position of the vehicle from the memory 12 on the basis of the present position data NPD1, generates a map image including the present position, outputs the map image to the display 2, and thereby displays the map image. The three-axis acceleration sensor 4 detects the accelerations α_x, α_y, and α_z with a sampling frequency of, for example, 50 Hz, and sends acceleration data AD that represents the acceleration α_z to the velocity calculator 22 of the controller 11. The Y axis gyro sensor 5 detects the pitch rate ω_y with a sampling frequency of, for example, 50 Hz, and sends pitch rate data PD that represents the pitch rate ω_y to the velocity calculator 22 of the controller 11.
The velocity calculator 22 calculates the velocity V 50 times per second in accordance with equation (1) using the acceleration α_z, which corresponds to the acceleration data AD supplied by the three-axis acceleration sensor 4, and the pitch rate ω_y, which corresponds to the pitch rate data PD supplied by the Y axis gyro sensor 5, and sends velocity data VD that represents the velocity V to the position calculator 25. The Z axis gyro sensor 6 detects the yaw rate ω_z at a sampling frequency of, for example, 50 Hz, and sends yaw rate data YD that represents the yaw rate ω_z to the angle calculator 23 of the controller 11. The angle calculator 23 calculates the angle θ with which the vehicle turns to the right or to the left by multiplying the yaw rate ω_z, which corresponds to the yaw rate data YD supplied by the Z axis gyro sensor 6, by a sampling period (in this case, 0.02 s), and sends angle data DD that represents the angle θ to the position calculator 25. The position calculator 25 calculates the displacement from the previous position P0 to the present position P1 illustrated in FIG. 3 on the basis of the velocity V, which corresponds to the velocity data VD supplied by the velocity calculator 22, and the angle θ, which corresponds to the angle data DD supplied by the angle calculator 23. The position calculator 25 calculates the present position P1 by adding the displacement to the previous position P0, and sends present position data NPD2, which represents the present position P1, to the navigator 26. The barometric pressure sensor 7 detects the ambient pressure PR with a sampling frequency of, for example, 50 Hz, and sends barometric pressure data PRD that represents the barometric pressure PR to the height calculator 24.
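The per-sample work of the angle calculator 23 and the position calculator 25 described above amounts to a dead-reckoning update. The following is a sketch under assumed names; the flat two-dimensional coordinate state and the explicit heading integration are illustrative choices, not details given in the text.

```python
import math

# Dead-reckoning sketch: the angle calculator multiplies the yaw rate by the
# sampling period (0.02 s) to obtain the turn angle, and the position
# calculator adds the resulting displacement to the previous position P0 to
# obtain the present position P1.

def update_position(x0, y0, heading, v, omega_z, dt=0.02):
    """Advance one sample: rotate the heading by omega_z * dt, then move
    v * dt along the new heading."""
    heading += omega_z * dt
    x1 = x0 + v * dt * math.cos(heading)
    y1 = y0 + v * dt * math.sin(heading)
    return x1, y1, heading

# Traveling straight at 20 m/s for one 0.02 s sample moves 0.4 m forward.
x, y, h = update_position(0.0, 0.0, 0.0, 20.0, 0.0)
print(x, y, h)  # 0.4 0.0 0.0
```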
The height calculator 24 calculates the height of the vehicle on the basis of the barometric pressure PR, which corresponds to the barometric pressure data PRD supplied by the barometric pressure sensor 7, and sends height data HD that represents the height of the vehicle to the navigator 26. The navigator 26 reads map data of a region including the present position of the vehicle from the memory 12 on the basis of the present position data NPD2 supplied by the position calculator 25 and the height data HD supplied by the height calculator 24, generates a map image including the present position, outputs the map image to the display 2, and thereby displays the map image.

1-3. Velocity Calculation Process

Next, a velocity calculation process performed by the velocity calculator 22 will be described in detail. In this process, the velocity calculator 22 calculates the velocity V on the basis of the acceleration αz, which corresponds to the acceleration data AD supplied by the three-axis acceleration sensor 4, and the pitch rate ωy, which corresponds to the pitch rate data PD supplied by the Y axis gyro sensor 5. Referring to FIG. 8, in order to perform the velocity calculation, the velocity calculator 22 includes, as functional blocks, a data acquirer 31, a high pass filter 32, a low pass filter 33, a velocity calculating section 34, a smoother/noise filter 35, and a velocity output section 36. The data acquirer 31 of the velocity calculator 22 acquires the acceleration data AD supplied by the three-axis acceleration sensor 4 and the pitch rate data PD supplied by the Y axis gyro sensor 5, and sends the acceleration data AD and the pitch rate data PD to the high pass filter 32.
The high pass filter 32 removes direct-current components from the acceleration data AD and the pitch rate data PD, which are supplied by the data acquirer 31, to generate acceleration data AD1 and pitch rate data PD1, and sends the acceleration data AD1 and the pitch rate data PD1 to the low pass filter 33. The low pass filter 33 performs low pass filtering (described below) on the acceleration data AD1 and the pitch rate data PD1, which are supplied by the high pass filter 32, to generate acceleration data AD2 and pitch rate data PD2, and sends the acceleration data AD2 and the pitch rate data PD2 to the velocity calculating section 34. The velocity calculating section 34 performs velocity calculation (described below) using the acceleration data AD2 and the pitch rate data PD2, which are supplied by the low pass filter 33, to generate velocity data VD1, and sends the velocity data VD1 to the smoother/noise filter 35. The smoother/noise filter 35 performs smoothing and noise filtering (described below) on the velocity data VD1, which is supplied by the velocity calculating section 34, to generate velocity data VD, and sends the velocity data VD to the velocity output section 36. The velocity output section 36 sends the velocity data VD, which is supplied by the smoother/noise filter 35 and represents the velocity V of the vehicle, to the position calculator 25. Thus, the velocity calculator 22 calculates the velocity V of the vehicle on the basis of the acceleration data AD supplied by the three-axis acceleration sensor 4 and the pitch rate data PD supplied by the Y axis gyro sensor 5.

1-3-1. Low Pass Filtering

Next, low pass filtering, which is performed by the low pass filter 33 on the acceleration data AD1 and the pitch rate data PD1 supplied by the high pass filter 32, will be described in detail. FIG.
9 illustrates the relationship between a height H, which is based on the barometric pressure PR corresponding to the barometric pressure data PRD obtained by the barometric pressure sensor 7, and an angle φ around the Y axis with respect to a horizontal direction, which is based on the pitch rate ωy corresponding to the pitch rate data PD obtained by the Y axis gyro sensor 5. Regarding the angle φ, the upward direction with respect to the direction of travel (the X axis) is defined as positive. Referring to FIG. 9, there is a correlation between the height H and the angle φ, as can be seen from the fact that when the height H sharply decreases from about the 12001st data point (240 s), i.e., when the vehicle travels downhill, the angle φ sharply decreases from about 0.5 deg to about -2.5 deg. When the height H changes, the angle φ changes in accordance with the change in the height H. Thus, the PND 1 can detect the undulation of a road surface in the direction of travel of the vehicle using the Y axis gyro sensor 5. FIG. 10A illustrates the angle φ of FIG. 9. FIG. 10B illustrates the angle φ of FIG. 10A from the 5001st data point to the 6001st data point. During this time, the vehicle travels at a low velocity lower than 20 km/h. As can be seen from FIG. 10B, the angle φ oscillates once or twice per second. Thus, when a vehicle is traveling at a low velocity lower than 20 km/h, the PND 1 mounted on the vehicle detects the angle φ, which is based on the pitch rate ωy corresponding to the pitch rate data PD obtained by the Y axis gyro sensor 5, as an oscillation having a frequency in the range of 1 to 2 Hz. As with FIG. 10A, FIG. 11A illustrates the angle φ of FIG. 9. FIG. 11B illustrates the angle φ of FIG. 11A from the 22001st data point to the 23001st data point. During this time, the vehicle travels at a high velocity that is higher than 60 km/h. As can be seen from FIG.
11B, when the vehicle is traveling at a high velocity higher than 60 km/h, the PND 1 also detects the angle φ, which is based on the pitch rate ωy corresponding to the pitch rate data PD obtained by the Y axis gyro sensor 5, as an oscillation having a frequency in the range of 1 to 2 Hz. Moreover, as illustrated in FIG. 12, when the vehicle is traveling at a very low velocity that is lower than 10 km/h, the PND 1 also detects the angle φ, which is based on the pitch rate ωy corresponding to the pitch rate data PD obtained by the Y axis gyro sensor 5, as an oscillation having a frequency in the range of 1 to 2 Hz. Therefore, using the Y axis gyro sensor 5, the PND 1 detects the pitch rate ωy as an oscillation having a frequency in the range of 1 to 2 Hz irrespective of the velocity of the vehicle. The PND 1 is supported by the cradle 3, which is attached to the dashboard of the vehicle with the suction cup 3A. Referring to FIG. 13, the cradle 3 includes a body 3B, which is disposed on the suction cup 3A, and a PND supporter 3D. One end of the PND supporter 3D is supported by the body 3B at a support point 3C that is located at a predetermined height, and the PND 1 is supported by the PND supporter 3D at the other end of the PND supporter 3D. Therefore, when the vehicle vibrates due to the undulation of a road surface, the PND 1 vibrates up and down around the support point 3C of the PND supporter 3D with, for example, an acceleration α and an angular velocity ω. Therefore, in practice, the three-axis acceleration sensor 4 detects an acceleration (hereinafter referred to as a total acceleration) α that is the sum of the acceleration αz (FIG. 1) along the Z axis, which is generated by the vibration of the vehicle due to the undulation of the road surface, and the acceleration α, which is generated by the vibration of the PND 1 around the support point 3C of the PND supporter 3D.
The Y axis gyro sensor 5 detects an angular velocity (hereinafter referred to as a total angular velocity) ω that is the sum of the pitch rate ωy (FIG. 1) around the Y axis, which is generated by the vibration of the vehicle due to the undulation of the road surface, and the angular velocity ω, which is generated by the vibration of the PND 1 around the support point 3C of the PND supporter 3D. Therefore, the low pass filter 33 acquires the acceleration data AD1, which represents the total acceleration α, and the pitch rate data PD1, which represents the total angular velocity ω, through the data acquirer 31 and the high pass filter 32. FIG. 14 illustrates the total acceleration α and the total angular velocity ω, which respectively correspond to the acceleration data AD1 and the pitch rate data PD1 that have been high pass filtered by the high pass filter 32. FIGS. 15A to 15H are graphs illustrating the total angular velocity ω of FIG. 14, which has been Fourier transformed for every 4096 data points. In particular, FIG. 15A is a graph of the total angular velocity ω of FIG. 14 from the 1st to the 4096th data point, which has been Fourier transformed. Likewise, FIGS. 15B, 15C, and 15D are graphs of the total angular velocity ω of FIG. 14 from the 4097th data point to the 8192nd data point, the 8193rd data point to the 12288th data point, and the 12289th data point to the 16384th data point, respectively, each of which has been Fourier transformed. FIGS. 15E, 15F, 15G, and 15H are graphs of the total angular velocity ω of FIG. 14 from the 16385th data point to the 20480th data point, the 20481st data point to the 24576th data point, the 24577th data point to the 28672nd data point, and the 28673rd data point to the 32768th data point, respectively, each of which has been Fourier transformed. As can be clearly seen from FIGS. 15C to 15H, a frequency component in the range of 1 to 2 Hz and a frequency component of about 15 Hz have large values.
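The windowed Fourier analysis described above (transforming the 50 Hz record 4096 data points at a time) can be sketched with NumPy as follows. The window length and sampling frequency come from the text; the synthetic signal, with a 1.5 Hz road component and a weaker 15 Hz cradle component, and the function name are illustrative.

```python
import numpy as np

def dominant_frequencies(signal, fs=50.0, window=4096):
    """Return the strongest non-DC frequency (Hz) in each 4096-point window."""
    peaks = []
    for start in range(0, len(signal) - window + 1, window):
        seg = signal[start:start + window]
        spectrum = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(window, d=1.0 / fs)
        spectrum[0] = 0.0                      # ignore the DC bin
        peaks.append(freqs[np.argmax(spectrum)])
    return peaks

# Synthetic pitch-rate record: 1.5 Hz road undulation plus a 15 Hz cradle wobble.
t = np.arange(8192) / 50.0
omega = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 15.0 * t)
peaks = dominant_frequencies(omega)
```

With the road component dominant, the peak of each window falls near 1.5 Hz, mirroring the 1 to 2 Hz component visible in FIGS. 15C to 15H.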
That is, the Y axis gyro sensor 5 of the PND 1 detects the total angular velocity ω that is the sum of the pitch rate ωy, which oscillates with a frequency in the range of 1 to 2 Hz due to the aforementioned undulation of the road surface, and the angular velocity ω, which oscillates with a frequency of about 15 Hz due to the cradle 3 that supports the PND 1. FIGS. 16A to 16H are graphs illustrating the total acceleration α of FIG. 14, which has been Fourier transformed for every 4096 data points. In particular, FIG. 16A is a graph of the total acceleration α of FIG. 14 from the 1st data point to the 4096th data point, which has been Fourier transformed. Likewise, FIGS. 16B, 16C, and 16D are graphs of the total acceleration α of FIG. 14 from the 4097th data point to the 8192nd data point, the 8193rd data point to the 12288th data point, and the 12289th data point to the 16384th data point, respectively, each of which has been Fourier transformed. FIGS. 16E, 16F, 16G, and 16H are graphs of the total acceleration α of FIG. 14 from the 16385th data point to the 20480th data point, the 20481st data point to the 24576th data point, the 24577th data point to the 28672nd data point, and the 28673rd data point to the 32768th data point, respectively, each of which has been Fourier transformed. Considering the fact that the total angular velocity ω (FIGS. 15C to 15H) has the frequency component in the range of 1 to 2 Hz and the frequency component of about 15 Hz, it is estimated that the total acceleration α also has a frequency component in the range of 1 to 2 Hz and a frequency component of about 15 Hz. That is, the three-axis acceleration sensor 4 of the PND 1 detects the total acceleration α, which is the sum of the acceleration αz, which oscillates with a frequency in the range of 1 to 2 Hz due to the aforementioned undulation of the road surface, and the acceleration α, which oscillates with a frequency of about 15 Hz due to the cradle 3 that supports the PND 1.
Therefore, the low pass filter 33 performs low pass filtering on the acceleration data AD1 and the pitch rate data PD1, which are supplied by the high pass filter 32, so as to remove the frequency component of about 15 Hz, i.e., the acceleration α and the angular velocity ω that are generated due to the cradle 3 that supports the PND 1. FIG. 17A is a graph of data that is the same as that of FIG. 16H, which is plotted with a logarithmic vertical axis. FIGS. 17B, 17C and 17D are graphs of the total acceleration α from the 28673rd data point to the 32768th data point, on which infinite impulse response (IIR) filtering with a cutoff frequency of 2 Hz has been performed twice, four times, and six times, respectively, and on which Fourier transformation has been performed. FIG. 18A is a graph of data that is the same as that of FIG. 15H, which is plotted with a logarithmic vertical axis. FIGS. 18B, 18C and 18D are graphs of the total angular velocity ω from the 28673rd data point to the 32768th data point, on which infinite impulse response (IIR) filtering with a cutoff frequency of 2 Hz is performed twice, four times, and six times, respectively, and on which Fourier transformation is performed. As can be seen from FIGS. 17B to 17D and FIGS. 18B to 18D, the PND 1 can remove the frequency component of about 15 Hz from the acceleration data AD1 and the pitch rate data PD1, which are supplied by the high pass filter 32, by performing the IIR filtering with a cutoff frequency of 2 Hz four times or more on the acceleration data AD1 and the pitch rate data PD1. 
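The repeated low pass filtering can be sketched as a first-order IIR stage applied four times. The 2 Hz cutoff, the 50 Hz sampling rate, and the four passes come from the text; the single-pole filter form and its coefficient are assumptions, since the embodiment does not give the filter coefficients.

```python
import math

def iir_lowpass(samples, fc=2.0, fs=50.0):
    """One pass of a single-pole IIR low pass filter (assumed form)."""
    a = 1.0 - math.exp(-2.0 * math.pi * fc / fs)   # smoothing coefficient
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def lowpass_four_times(samples, fc=2.0, fs=50.0):
    """Apply the IIR stage four times, as the low pass filter 33 does."""
    for _ in range(4):
        samples = iir_lowpass(samples, fc, fs)
    return samples

# A 15 Hz oscillation (the cradle component) is strongly attenuated.
cradle = [math.sin(2 * math.pi * 15.0 * n / 50.0) for n in range(500)]
filtered = lowpass_four_times(cradle)
```

After the start-up transient the 15 Hz component is reduced by roughly three orders of magnitude, while a 1 to 2 Hz road component would pass with only moderate attenuation.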
Therefore, the low pass filter 33 according to the embodiment performs the IIR filtering with a cutoff frequency of 2 Hz four times on the acceleration data AD1 and the pitch rate data PD1, which are supplied by the high pass filter 32, to generate acceleration data AD2 and pitch rate data PD2, and sends the acceleration data AD2 and the pitch rate data PD2 to the velocity calculating section 34. Thus, the low pass filter 33 removes the acceleration α, which is generated due to the vibration of the PND supporter 3D around the support point 3C of the cradle 3, from the total acceleration α, and thereby extracts only the acceleration αz, which is generated due to the undulation of the road surface. Moreover, the low pass filter 33 removes the angular velocity ω, which is generated due to the vibration of the PND supporter 3D around the support point 3C of the cradle 3, from the total angular velocity ω, and thereby extracts only the pitch rate ωy, which is generated due to the undulation of the road surface.

1-3-2. Velocity Calculation

Next, velocity calculation performed by the velocity calculating section 34 will be described in detail. The velocity calculating section 34 calculates the velocity V on the basis of the acceleration data AD2 and the pitch rate data PD2 supplied by the low pass filter 33. FIGS. 19, 20A, and 20B respectively illustrate the acceleration αz corresponding to the acceleration data AD2, which is generated when the vehicle is traveling at a low velocity lower than 20 km/h, at a medium velocity equal to or higher than 20 km/h and lower than 60 km/h, and at a high velocity equal to or higher than 60 km/h. For each of the velocity ranges, a case in which the PND 1 is placed on the dashboard in a front part of the vehicle and a case in which the PND 1 is placed near the rear window in a rear part of the vehicle are illustrated. In FIGS.
19, 20A, and 20B, the acceleration αz that is detected by the PND 1 placed in the front part of the vehicle is referred to as the front acceleration, and the acceleration αz that is detected by the PND 1 placed in the rear part of the vehicle is referred to as the rear acceleration. As can be seen from FIGS. 19, 20A, and 20B, the phase of the rear acceleration is delayed with respect to the phase of the front acceleration irrespective of the velocity of the vehicle. This phase delay is approximately equal to the wheelbase divided by the velocity of the vehicle, the wheelbase being the distance between the front wheel axis and the rear wheel axis of the vehicle. FIGS. 21A to 21C respectively illustrate an example of a simulation result representing the relationship between the acceleration αz corresponding to the acceleration data AD2 and the pitch rate ωy corresponding to the pitch rate data PD2 when the PND 1 is placed on the dashboard (at a position away from the front wheel axis by 30% of the wheelbase), at the center, and at a position above the rear wheel axis of the vehicle. FIGS. 21D to 21F illustrate the velocity V calculated using equation (1) on the basis of the acceleration αz and the pitch rate ωy obtained from the simulation result illustrated in FIGS. 21A to 21C. In this simulation, it is assumed that a vehicle having a wheelbase of 2.5 m travels at a velocity of 5 m/s on a road surface having a sinusoidal undulation with an amplitude of 0.1 m and a wavelength of 20 m. As can be seen from FIGS. 21A to 21C, the phase of the acceleration αz is delayed when the position of the PND 1 is moved toward the back of the vehicle. In contrast, the phase of the pitch rate ωy is not delayed irrespective of the position of the PND 1 on the vehicle. Therefore, as illustrated in FIG. 21B, the phase difference between the acceleration αz and the pitch rate ωy is negligible when the PND 1 is placed at the center of the vehicle. Thus, as illustrated in FIG.
21E, the velocity V, which is calculated using equation (1), is substantially constant. However, as illustrated in FIGS. 21A and 21C, when the position of the PND 1 is moved forward or backward from the center of the vehicle, the phase difference between the acceleration αz and the pitch rate ωy increases. Therefore, as illustrated in FIGS. 21D and 21F, due to the phase difference between the acceleration αz and the pitch rate ωy, the velocity V calculated using equation (1) has a larger error than the velocity V calculated when the PND 1 is placed at the center of the vehicle (FIG. 21E). In particular, when the autonomous velocity V of the vehicle is lower than 20 km/h, the phase difference between the acceleration αz and the pitch rate ωy is large, so that the calculation error of the autonomous velocity V increases. Therefore, referring to FIG. 22, the velocity calculating section 34 extracts the maximum value and the minimum value of the acceleration αz, which corresponds to the acceleration data AD2 supplied by the low pass filter 33, from a range of 25 or 75 data points centered around a data point Pm that corresponds to the previous position P0 (FIG. 3). The maximum and minimum values will be referred to as the maximum acceleration αz,max and the minimum acceleration αz,min, respectively. Moreover, the velocity calculating section 34 extracts the maximum value and the minimum value of the pitch rate ωy, which corresponds to the pitch rate data PD2 supplied by the low pass filter 33, from a range of 25 or 75 data points centered around the data point Pm. The maximum and minimum values will be referred to as the maximum pitch rate ωy,max and the minimum pitch rate ωy,min, respectively.
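The phase delay discussed above is approximately the wheelbase divided by the velocity; with the simulation's wheelbase of 2.5 m and velocity of 5 m/s this comes to 0.5 s, or 25 samples at 50 Hz, which matches the narrower extraction range. A minimal sketch (the function name is illustrative):

```python
def phase_delay_s(wheelbase_m, velocity_mps):
    """Approximate front-to-rear phase delay of the acceleration signal."""
    return wheelbase_m / velocity_mps

# The simulation in the text: wheelbase 2.5 m, velocity 5 m/s.
delay = phase_delay_s(2.5, 5.0)
samples = round(delay * 50)      # samples at the 50 Hz sampling frequency
```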
That is, the velocity calculating section 34 extracts the maximum and minimum accelerations αz,max and αz,min and the maximum and minimum pitch rates ωy,max and ωy,min from a range that is larger than the largest possible phase difference that may be generated between the acceleration αz and the pitch rate ωy. The velocity calculating section 34 calculates the velocity V in the direction of travel at the previous position P0 (FIG. 3) in accordance with the following equation (3), which is rewritten from equation (1), using the maximum and minimum accelerations αz,max and αz,min, which are extracted from the acceleration data AD2, and the maximum and minimum pitch rates ωy,max and ωy,min, which are extracted from the pitch rate data PD2, to generate velocity data VD1, and sends the velocity data VD1 to the smoother/noise filter 35.

V = (αz,max - αz,min)/(ωy,max - ωy,min) (3)

Thus, even when there is a phase difference between the acceleration αz and the pitch rate ωy, the velocity calculating section 34 can calculate, by using equation (3), the velocity V from which the influence of the phase delay is removed. Referring to FIG. 23, when calculating the velocity V in the direction of travel at the previous position P0 while the vehicle is accelerating, the velocity calculating section 34 uses a range of 25 data points if the velocity V at the second previous position (not shown) (hereinafter referred to as a former velocity) is in the range of 0 km/h to 35 km/h, and the velocity calculating section 34 uses a range of 75 data points if the former velocity V is higher than 35 km/h. When calculating the velocity V in the direction of travel at the previous position P0 while the vehicle is decelerating, the velocity calculating section 34 uses a range of 75 data points if the former velocity V is equal to or higher than 25 km/h, and the velocity calculating section 34 uses a range of 25 data points if the former velocity V is lower than 25 km/h.
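Equation (3) amounts to extracting the peak-to-peak acceleration and the peak-to-peak pitch rate over the window centered on the data point Pm and taking their ratio. A sketch under that reading (the function and variable names are illustrative; half-widths of 12 and 37 correspond to the 25- and 75-point ranges):

```python
import math

def velocity_from_window(acc_z, pitch_rate_y, m, half_width):
    """Velocity V per equation (3), from a window centred on index m.

    acc_z:        low pass filtered Z-axis acceleration samples
    pitch_rate_y: low pass filtered pitch rate samples
    half_width:   12 for the 25-point range, 37 for the 75-point range
    """
    lo, hi = max(0, m - half_width), m + half_width + 1
    a_win = acc_z[lo:hi]
    w_win = pitch_rate_y[lo:hi]
    return (max(a_win) - min(a_win)) / (max(w_win) - min(w_win))

# In-phase synthetic signals whose peak-to-peak ratio is 5: V comes out as 5.
acc = [2.0 * math.sin(0.1 * n) for n in range(100)]
pitch = [0.4 * math.sin(0.1 * n) for n in range(100)]
v = velocity_from_window(acc, pitch, 50, 37)
```

Because maxima and minima are taken over the whole window, a phase shift between the two signals does not change the peak-to-peak values, which is how the phase delay drops out.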
Thus, the velocity calculating section 34 switches the data range between 25 data points and 75 data points in accordance with the velocity V when extracting the maximum and minimum accelerations αz,max and αz,min and the maximum and minimum pitch rates ωy,max and ωy,min. When the velocity V of the vehicle is equal to or lower than, for example, 25 km/h, the acceleration αz and the pitch rate ωy change sharply in response to a slight change in the road surface. Therefore, the velocity calculating section 34 uses a narrow data range in order to deal with a sharp change. When the velocity of the vehicle is equal to or higher than 35 km/h, the influence of a suspension of the vehicle is large and the acceleration αz and the pitch rate ωy change slowly. Therefore, the velocity calculating section 34 sets a wide data range in order to deal with a slow change. Thus, the velocity calculating section 34 changes the data range, from which the maximum and minimum accelerations αz,max and αz,min and the maximum and minimum pitch rates ωy,max and ωy,min are extracted, in accordance with the velocity V of the vehicle, so that the conditions of the road surface and the vehicle that change in accordance with the velocity V can be taken into account, whereby the velocity V can be calculated more precisely. Moreover, when calculating the maximum and minimum accelerations αz,max and αz,min and the maximum and minimum pitch rates ωy,max and ωy,min, the velocity calculating section 34 changes the data range with a hysteresis between the case when the vehicle is accelerating and the case when the vehicle is decelerating. Thus, the frequency of changes of the data range around a switching velocity is reduced as compared to a case in which the velocity calculating section 34 calculates the velocity V by changing the data range without a hysteresis.
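The range-switching rule with hysteresis can be sketched as follows; the 25- and 75-point ranges and the 35 km/h (accelerating) and 25 km/h (decelerating) thresholds are taken from the text, and the function name is illustrative.

```python
def data_range(former_v_kmh, accelerating):
    """Select the extraction window (in data points) with hysteresis.

    Accelerating: switch 25 -> 75 points once above 35 km/h.
    Decelerating: switch 75 -> 25 points once below 25 km/h.
    """
    if accelerating:
        return 25 if former_v_kmh <= 35.0 else 75
    return 75 if former_v_kmh >= 25.0 else 25
```

Between 25 km/h and 35 km/h the returned range depends on the direction of the velocity change, which is exactly the hysteresis band that keeps the range from toggling at a single switching velocity.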
As a result, the velocity calculating section 34 can reduce the calculation error of the velocity V that may occur due to frequent switching of the data range, whereby the velocity V can be calculated more precisely.

1-3-3. Smoothing and Noise Filtering

Next, smoothing and noise filtering performed by the smoother/noise filter 35 on the velocity data VD1, which has been calculated by the velocity calculating section 34, will be described in detail. The smoother/noise filter 35 performs low pass filtering, which is first-order IIR with a variable cutoff frequency, on the velocity data VD1 supplied by the velocity calculating section 34. To be specific, when calculating the velocity V in the direction of travel at the previous position P0, the smoother/noise filter 35 determines the cutoff frequency on the basis of the former velocity V. When the velocity of the vehicle is equal to or higher than, for example, 60 km/h, the velocity V calculated by the velocity calculating section 34 of the PND 1 includes a large amount of noise, so that the velocity V deviates considerably. Therefore, the smoother/noise filter 35 uses a low pass filter having a low cutoff frequency when the former velocity V is equal to or higher than 60 km/h. In contrast, the smoother/noise filter 35 uses a low pass filter having a high cutoff frequency when the former velocity V is lower than 60 km/h. When the velocity V calculated by the velocity calculating section 34 is lower than, for example, 10 km/h, the pitch rate ωy, which is the denominator of equation (1) or (3), may be small, so that the velocity V calculated using equation (1) or (3) may become considerably higher than the real value. Therefore, the smoother/noise filter 35 acquires the acceleration data AD2 and the pitch rate data PD2, which have been low pass filtered, from the low pass filter 33.
If the pitch rate ω corresponding to the pitch rate data PD2 is lower than a predetermined threshold, the smoother/noise filter 35 determines that the velocity V is excessively high and sets the value of the velocity V, after being low pass filtered, at 0. If an arc B1 of the undulation of a road surface is larger than the wheelbase W of the vehicle as illustrated in FIG. 24A, the PND 1 can accurately calculate the velocity V using the aforementioned fundamental principle. However, if an arc B2 of the undulation of a road surface is smaller than the wheelbase W of the vehicle as illustrated in FIG. 24B, an acceleration α in a vertical direction of the vehicle and an angular velocity ω around the Y axis centered around the rear wheel of the vehicle are generated when the front wheel of the vehicle rolls over the undulation. At this time, the three-axis acceleration sensor 4 and the Y axis gyro sensor 5 of the PND 1 detect the acceleration α and the angular velocity ω (FIG. 24B), instead of detecting the acceleration α and the pitch rate ω (FIG. 24A), which are generated due to a vibration having a frequency in the range of 1 to 2 Hz due to the undulation of the road surface. The acceleration α is larger than the acceleration α , which is generated when the arc B1 of the undulation of the road surface is larger than the wheelbase W of the vehicle. The angular velocity ω is higher than the pitch rate ω , which is generated when the arc B1 of the undulation of the road surface is larger than the wheelbase W of the vehicle. A velocity V (hereinafter also referred to as a small-arc velocity) is calculated using equation (1) or (3) on the basis of the acceleration α and the angular velocity ω , which are generated when the arc B2 of the undulation of the road surface is smaller than the wheelbase W of the vehicle. 
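The decision logic of the smoother/noise filter 35, including the former-velocity fallback for the small-arc case discussed below, can be sketched as follows. Only the structure of the checks follows the text; all three threshold values are illustrative placeholders, since the embodiment says only "predetermined thresholds".

```python
def postprocess_velocity(v_filtered, former_v, pitch_rate, acc_z,
                         pitch_floor=0.001, acc_ceiling=3.0, pitch_ceiling=0.1):
    """Replace an implausible low pass filtered velocity.

    - A very small pitch rate (the denominator of equation (3)) makes V
      unreliable at very low vehicle velocity, so V is set at 0.
    - Acceleration and pitch rate both above their thresholds indicate the
      small-arc undulation case, so the former velocity is used instead.
    All three thresholds are illustrative placeholders.
    """
    if abs(pitch_rate) < pitch_floor:
        return 0.0
    if abs(acc_z) > acc_ceiling and abs(pitch_rate) > pitch_ceiling:
        return former_v
    return v_filtered
```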
Because the acceleration α changes more than the angular velocity ω, the small-arc velocity V is considerably higher than the velocity V, which is calculated using equation (1) or (3) on the basis of the acceleration α and the angular velocity ω generated when the arc B1 of the undulation of the road surface is larger than the wheelbase W of the vehicle. Therefore, when the arc B2 of the undulation of the road surface is smaller than the wheelbase W of the vehicle, the velocity calculator 22 of the PND 1 calculates the small-arc velocity V on the basis of the acceleration α and the angular velocity ω, which leads to calculating the velocity V as an excessively high value. The smoother/noise filter 35 acquires, from the low pass filter 33, the acceleration data AD2 and the pitch rate data PD2, which have been low pass filtered, and determines whether or not the acceleration αz corresponding to the acceleration data AD2 and the pitch rate ωy corresponding to the pitch rate data PD2 are higher than predetermined thresholds. If the acceleration αz corresponding to the acceleration data AD2 and the pitch rate ωy corresponding to the pitch rate data PD2 are higher than the predetermined thresholds, the smoother/noise filter 35 determines that the velocity V is excessively high and uses the former velocity V instead of the velocity V that has been low pass filtered. That is, the smoother/noise filter 35 uses the former velocity V if the velocity V is excessively high when the velocity of the vehicle is not very low, because it is likely that the velocity V is not accurate in such a case. Thus, if the velocity V that has been low pass filtered is excessively high, the smoother/noise filter 35 sets the velocity V at 0 when the velocity of the vehicle is very low and sets the velocity V at the former velocity V when the velocity of the vehicle is not very low, whereby the velocity V can be calculated more accurately.

1-4. Process of Position Calculation using Velocity Calculation

Referring to the flowchart of FIG. 25, a process of position calculation using the aforementioned velocity calculation, which is performed by the controller 11 of the PND 1, will be described. The controller 11 starts the process from a start step of a routine RT1. In step SP1, the data acquirer 31 of the velocity calculator 22 acquires the acceleration data AD detected by the three-axis acceleration sensor 4 and the pitch rate data PD detected by the Y axis gyro sensor 5, and the controller 11 proceeds to step SP2. In step SP2, the high pass filter 32 of the velocity calculator 22 of the controller 11 performs high pass filtering on the acceleration data AD and the pitch rate data PD, and the controller 11 proceeds to step SP3. In step SP3, the low pass filter 33 of the velocity calculator 22 of the controller 11 performs low pass filtering, which is fourth-order IIR filtering with a cutoff frequency of, for example, 1 Hz, on the acceleration data AD1 and the pitch rate data PD1, which have been high pass filtered, and the controller 11 proceeds to step SP4. In step SP4, the velocity calculating section 34 of the velocity calculator 22 of the controller 11 calculates the velocity V using equation (3) on the basis of the acceleration αz corresponding to the acceleration data AD2 and the pitch rate ωy corresponding to the pitch rate data PD2, which have been low pass filtered, and the controller 11 proceeds to step SP5. In step SP5, the controller 11 performs smoothing and noise filtering on the velocity data VD1 representing the velocity V, which has been calculated in step SP4. To be specific, the controller 11 performs low pass filtering having a variable cutoff frequency on the velocity data VD1 representing the velocity V, which has been calculated in step SP4.
If the controller 11 determines that the velocity V that has been low pass filtered is excessively high, the controller 11 sets the velocity V at 0 when the velocity of the vehicle is lower than, for example, 10 km/h and sets the velocity V at the former velocity V when the velocity of the vehicle is equal to or higher than 10 km/h, and the controller 11 proceeds to step SP6. In step SP6, the angle calculator 23 of the controller 11 acquires the yaw rate data YD detected by the Z axis gyro sensor 6, and the controller 11 proceeds to step SP7. In step SP7, the angle calculator 23 of the controller 11 calculates the angle data DD representing the angle θ by multiplying the yaw rate ωz corresponding to the yaw rate data YD by the sampling period 0.02 s, and the controller 11 proceeds to step SP8. In step SP8, the controller 11 calculates the present position data NPD2 on the basis of the velocity data VD, on which smoothing and noise filtering have been performed in step SP5, and the angle data DD, which has been calculated in step SP7, and the controller 11 proceeds to step SP9. In step SP9, the controller 11 reads from the memory 12 map data of a region including the present position of the vehicle on the basis of the present position data NPD2 supplied by the position calculator 25, generates a map image including the present position, and outputs the map image to the display 2, and the controller 11 proceeds to step SP10 where the process finishes.

1-5. Measurement Results

FIGS. 26 to 37 illustrate measurement results obtained by the aforementioned velocity calculation. FIGS. 26 to 35 illustrate measurement results when the PND 1 is placed on a sedan. FIGS. 36 and 37 illustrate measurement results when the PND 1 is placed on a light car and a minivan, respectively. FIG.
26A illustrates the acceleration αz corresponding to the acceleration data AD detected by the three-axis acceleration sensor 4 and the pitch rate ωy corresponding to the pitch rate data PD detected by the Y axis gyro sensor 5. FIG. 26B illustrates the velocity V calculated from the acceleration αz and the pitch rate ωy using equation (3). As can be seen from FIGS. 26A and 26B, the acceleration αz measured by the PND 1 increases when the velocity V of the vehicle increases, while the pitch rate ωy measured by the PND 1 remains substantially constant. FIGS. 27A, 28A, 29A, 30A, and 31A are graphs illustrating the velocity V, which is calculated by the PND 1 by performing velocity calculation, and the distance D, which is calculated using the velocity V. FIGS. 27B, 28B, 29B, 30B, and 31B are graphs illustrating a reference velocity V that is calculated from the speed pulse of the vehicle on which the PND 1 is mounted and a reference distance D calculated using the reference velocity V. FIGS. 27A to 31B illustrate cases when the vehicle on which the PND 1 is placed travels on different roads. The velocity calculated from the speed pulse of the vehicle will be referred to as a reference velocity, and the distance calculated using the reference velocity will be referred to as a reference distance. FIG. 27A illustrates the velocity V, which is calculated using the velocity calculation according to the embodiment, and the distance D calculated using the velocity V. FIG. 27B illustrates the reference velocity V and the reference distance D, which are to be compared with the velocity V and the distance D illustrated in FIG. 27A. As illustrated in FIGS. 27A and 27B, the graph of the velocity V is substantially similar to that of the reference velocity V. The error between the distance D, which is calculated on the basis of the velocity V, and the reference distance D is smaller than 10%. FIG.
28A illustrates the velocity V, which is calculated using the velocity calculation according to the embodiment, and the distance D calculated using the velocity V. FIG. 28B illustrates the reference velocity and the reference distance, which are to be compared with the velocity V and the distance D illustrated in FIG. 28A. FIG. 29A illustrates the velocity V, which is calculated using the velocity calculation according to the embodiment, and the distance D calculated using the velocity V. FIG. 29B illustrates the reference velocity and the reference distance, which are to be compared with the velocity V and the distance D illustrated in FIG. 29A. FIG. 30A illustrates the velocity V, which is calculated using the velocity calculation according to the embodiment, and the distance D calculated using the velocity V. FIG. 30B illustrates the reference velocity and the reference distance, which are to be compared with the velocity V and the distance D illustrated in FIG. 30A. FIG. 31A illustrates the velocity V, which is calculated using the velocity calculation according to the embodiment, and the distance D calculated using the velocity V. FIG. 31B illustrates the reference velocity and the reference distance, which are to be compared with the velocity V and the distance D illustrated in FIG. 31A. As in the case of FIG. 26A, the velocities V illustrated in FIGS. 27A, 28A, 29A, 30A, and 31A, which illustrate the cases when the vehicle travels on different roads, are substantially similar to the reference velocities illustrated in FIGS. 27B, 28B, 29B, 30B, and 31B, respectively. The error between the distance D, which is calculated on the basis of the velocity V, and the reference distance is smaller than 10%. FIG. 32A is a graph of the velocity V and the distance D that are calculated by the PND 1 using the velocity calculation. FIG.
32B is a graph of the reference velocity V and the reference distance D , which is calculated from the reference velocity V . FIG. 32C is a graph of the yaw rate ω , which is detected by the Z axis gyro sensor 6 of the PND 1. Referring to FIG. 32C, the yaw rate ω that is higher than 20 deg/s indicates a right turn of the vehicle, and the yaw rate ω that is smaller than -20 deg/s indicates a left turn of the vehicle. As can be seen from FIG. 32C, even when the vehicle repeats right turns and left turns, the velocity V calculated by the PND 1 (FIG. 32A) is substantially similar to the reference velocity V (FIG. 32B). The error between the distance D, which is calculated on the basis of the velocity V, and the reference distance D is smaller than 10%. FIG. 33A is a graph of the velocity V and the distance D that are calculated by the PND 1 using the velocity calculation when the vehicle travels on a road that is different from that of FIG. 32A. FIG. 33B is a graph of the reference velocity V and the reference distance D that is calculated from the reference velocity V . FIG. 33C is a graph of the yaw rate ω , which is detected by the Z axis gyro sensor 6. FIG. 34A is a graph of the velocity V and the distance D that are calculated by the PND 1 using the velocity calculation when the vehicle travels on a road that is different from those of FIGS. 32A and 33A. FIG. 34B is a graph of the reference velocity V and the reference distance D that is calculated from the reference velocity V . FIG. 34C is a graph of the yaw rate ω , which is detected by the Z axis gyro sensor 6. As can be seen from these results, when the vehicle travels along a large number of curves, the velocity V calculated by the PND 1 is substantially similar to the reference velocity V , and the error between the distance D, which is calculated on the basis of the velocity V, and the reference distance D is smaller than 10%. FIG. 
35A illustrates a map including a route K of a vehicle from a start S to a goal G. FIG. 35B illustrates a travel path T of the vehicle, which is a plot of the present positions of the vehicle calculated by the PND 1 mounted on the vehicle. The travel path T (FIG. 35B) is substantially isometric and similar to the route K (FIG. 35A) along which the vehicle has traveled. As can be seen from this fact, the PND 1 can substantially accurately calculate the present position. FIG. 36 illustrates the velocity V and the distance D, which are calculated by the PND 1 placed on a light car. For comparison with the velocity V and the distance D, FIG. 36 also illustrates the velocity that is calculated on the basis of GPS signals received with the GPS antenna ANT and the distance that is calculated from that velocity. Hereinafter, the velocity that is calculated on the basis of GPS signals received by the GPS antenna ANT will be referred to as the GPS velocity, and the distance that is calculated from the GPS velocity will be referred to as the GPS distance. FIG. 37 illustrates the velocity V and the distance D, which are calculated by the PND 1 placed on a minivan. For comparison with the velocity V and the distance D, FIG. 37 also illustrates the GPS velocity, which is calculated on the basis of GPS signals received by the GPS antenna ANT, and the GPS distance, which is calculated from the GPS velocity. As can be seen from FIGS. 36 and 37, for vehicles having different sizes, i.e., wheelbases, the velocity V calculated by the PND 1 according to the embodiment of the present invention is substantially similar to the GPS velocity, and the error between the distance D, which is calculated on the basis of the velocity V, and the GPS distance is smaller than 10%. In FIGS. 36 and 37, when the vehicle is in a tunnel and the like and does not receive GPS signals, the GPS velocity is set at 0. 1-6.
Operation and Effect In the PND 1 having the structure described above, the three-axis acceleration sensor 4 detects the acceleration α_z along the Z axis perpendicular to the direction of travel of the vehicle, which is generated due to the undulation of a road surface, and the Y axis gyro sensor 5 detects the pitch rate ω_y around the Y axis perpendicular to the direction of travel of the vehicle, which is likewise generated due to the undulation of a road surface. The PND 1 calculates the velocity V using equation (1) or (3) on the basis of the acceleration α_z detected by the three-axis acceleration sensor 4 and the pitch rate ω_y detected by the Y axis gyro sensor 5. Thus, the PND 1, which has a simple structure including the three-axis acceleration sensor 4 and the Y axis gyro sensor 5, can accurately calculate the velocity V of the vehicle under all road conditions even when it is difficult for the PND 1 to receive GPS signals. The PND 1 also offers good usability because the PND 1 is detachable from the vehicle and it is not necessary for a user to carry out the cumbersome task of connecting a cable to receive speed pulse signals from the vehicle. The Z axis gyro sensor 6 of the PND 1 detects the yaw rate ω_z around the Z axis perpendicular to the direction of travel of the vehicle, and the PND 1 calculates the present position on the basis of the velocity V and the yaw rate ω_z. Thus, the PND 1, which has a simple structure including the three-axis acceleration sensor 4, the Y axis gyro sensor 5, and the Z axis gyro sensor 6, can accurately calculate the present position of the vehicle under all road conditions even when it is difficult for the PND 1 to receive GPS signals. When calculating the velocity V, the PND 1 performs low pass filtering on the acceleration data AD1 and the pitch rate data PD1.
Thus, the PND 1 can remove from the acceleration α_z and the pitch rate ω_y components that are generated due to the cradle 3 and oscillate at a frequency of, for example, about 15 Hz, which is substantially higher than that of the acceleration α_z and the pitch rate ω_y that are generated due to undulation of a road surface and oscillate at a frequency of, for example, 1 to 2 Hz. Thus, the PND 1 can more accurately calculate the velocity V using the acceleration α_z and the pitch rate ω_y from which the vibration component generated due to the cradle 3 has been removed. The PND 1 extracts the maximum acceleration α_z,max and the minimum acceleration α_z,min from a range of 25 to 75 data points of the acceleration α_z around the data point P, and extracts the maximum pitch rate ω_y,max and the minimum pitch rate ω_y,min from a range of 25 to 75 data points of the pitch rate ω_y around the data point P. The PND 1 calculates the velocity V using equation (3) from the maximum and minimum accelerations α_z,max and α_z,min and the maximum and minimum pitch rates ω_y,max and ω_y,min. Thus, the PND 1 uses data points in a range that is wider than the phase difference between the acceleration α_z and the pitch rate ω_y, the phase difference being changeable in accordance with the position at which the PND 1 is placed in the vehicle, thereby removing the influence of the phase difference between the acceleration α_z and the pitch rate ω_y. When the velocity V calculated using equation (3) on the basis of the acceleration α_z and the pitch rate ω_y is excessively high, the PND 1 sets the velocity V at 0 when the vehicle is traveling at a very low velocity, and otherwise the PND 1 sets the velocity at the former velocity V, thereby calculating the velocity V more accurately.
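The peak-to-peak extraction described above can be sketched as a small helper. This is a minimal illustration, assuming equation (3) takes the ratio of the peak-to-peak acceleration to the peak-to-peak pitch rate over the window; the function and variable names are hypothetical:

```python
def velocity_from_window(accel_z, pitch_rate, center, half_width):
    """Speed estimate from one window of samples around index `center`.

    Extracts max/min of the Z-axis acceleration and of the Y-axis pitch
    rate over the window and returns their peak-to-peak ratio, which is
    insensitive to a phase lag between the two signals as long as the
    window is wider than that lag.
    """
    lo = max(0, center - half_width)
    hi = min(len(accel_z), center + half_width + 1)
    a_pp = max(accel_z[lo:hi]) - min(accel_z[lo:hi])  # peak-to-peak a_z
    w_pp = max(pitch_rate[lo:hi]) - min(pitch_rate[lo:hi])  # peak-to-peak w_y
    if w_pp == 0.0:
        return 0.0  # flat pitch-rate window: no usable undulation signal
    return a_pp / w_pp
```

A window of 25 to 75 data points around the data point P, as in the text, corresponds to a `half_width` of roughly 12 to 37 samples at 50 Hz.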
With the above structure, the PND 1 according to the first embodiment detects the acceleration α along the Z axis, which is generated due to the undulation of a road surface, and the pitch rate ω around the Y axis, which is generated due to the undulation of the road surface, and calculates the velocity V using the acceleration α and the pitch rate ω , thereby accurately calculating the velocity V under all road conditions. 2. Second Embodiment A PND 50 according to a second embodiment (FIGS. 4 to 8) differs from the PND 1 according to the first embodiment in that the PND 50 can calculate the velocity V more precisely with consideration of the tilt of the PND 50. 2-1. Negative Influence when PND is used in Upward Tilt Position The PND 50 according to the second embodiment removes a negative influence that is produced in the velocity calculation when the PND 50 is used in a position that is tilted in the direction of travel along the X axis and around the Y axis (FIG. 5) by Q degrees (for example, 120 degrees) as illustrated in FIG. 38 (hereinafter referred to as an upward tilt position). The upward tilt position may occur when the PND 50 is initially attached to the cradle 3 in the upward tilt position or when the PND 50 is attached to the cradle 3 and then tilted to be in the upward tilt position. The following discussion holds true also when the PND 50 is used in a downward tilt position, which is opposite to the upward tilt position and in which the PND 50 is tilted in the direction of travel along the X axis and around the Y axis (FIG. 5) by -Q degrees. Referring to FIG. 39, for example, when the PND 50 is used, for example, in a position in which the body of the PND 50 is perpendicular to the Z axis (FIGS. 
5 and 13) and is not tilted along the X axis, the velocity ratio VP (GPS velocity/autonomous velocity V), which is the ratio of the GPS velocity to the velocity V calculated using the velocity calculation process according to the embodiment of the present invention (hereinafter referred to as an autonomous velocity V), is ideally a constant value "1". However, in practice, when the PND 50 is used in the upward tilt position in which the PND 50 is tilted in the direction of travel along the X axis and around the Y axis (FIG. 5) by Q degrees (for example, 120 degrees), the velocity ratio VP exhibits three abnormal values EV1 to EV3, which are excessively high. It has been found that the abnormal value EV1 corresponds to a timing (elapsed time) when the vehicle is accelerating from a parking lot to a traffic lane of an expressway as illustrated in FIG. 40. It has been found that the abnormal values EV2 and EV3 each correspond to a timing (elapsed time) when the vehicle is accelerating in a traffic lane as illustrated in FIGS. 41 and 42. That is, as illustrated in FIG. 43, the PND 50 generally outputs an autonomous velocity V that is considerably larger than the GPS velocity when the vehicle is accelerating or decelerating. When the PND 50 is used in a position in which the body of the PND 50 is perpendicular to the Z axis, the acceleration α_x along the X axis and the acceleration α_z along the Z axis are uncorrelated. However, when the PND 50 is used in the upward tilt position, the acceleration α_x along the X axis and the acceleration α_z along the Z axis are correlated (have a slope indicated by a line) as illustrated in FIG. 44, in that the acceleration α_z along the Z axis changes in accordance with the acceleration α_x along the X axis.
Accordingly, when the PND 50 is used in the upward tilt position, the acceleration α_x along the X axis is mixed into the acceleration α_z along the Z axis, which is used to calculate the autonomous velocity V, when the vehicle is accelerating or decelerating. That is, with the PND 50, the acceleration α_x along the X axis is mixed into the acceleration α_z along the Z axis, whereby the acceleration α_z along the Z axis is overestimated and an error is generated in the calculation result of the autonomous velocity V. As described above using FIG. 22, the autonomous velocity V is calculated by using the difference between the maximum acceleration α_z,max and the minimum acceleration α_z,min, in which the values of the acceleration α_z along the Z axis include the acceleration α_x along the X axis. Therefore, it is necessary for the PND 50 to learn a correlation coefficient K (described below) that represents the degree to which the acceleration α_x along the X axis is mixed into the acceleration α_z along the Z axis and to calculate beforehand a true acceleration α_z' along the Z axis from which the acceleration α_x along the X axis is removed, in accordance with the following equation (4). α_z' = α_z − K × α_x (4) Thus, the PND 50 can reduce an error of the autonomous velocity V with respect to the GPS velocity, and can make the velocity ratio VP (GPS velocity/autonomous velocity V) illustrated in FIG. 39 become closer to the ideal value "1" irrespective of the elapsed time. When the acceleration α_x along the X axis is near "0", slight noise may considerably influence the value of the acceleration α_x and the noise may become dominant. Therefore, when calculating the autonomous velocity V, the PND 50 calculates the correlation coefficient K only when the acceleration |α_x| is larger than a predetermined threshold TH (for example, 0.075 m/s²). 2-2.
Principle of Velocity Calculation A velocity calculator 52 calculates the correlation coefficient K between the acceleration α_z along the Z axis and the acceleration α_x along the X axis by learning, and calculates the true acceleration α_z' along the Z axis by removing, by using the correlation coefficient K, the acceleration α_x along the X axis that is mixed into the acceleration α_z along the Z axis. The velocity calculator 52 calculates the autonomous velocity V using the acceleration α_z' along the Z axis and the pitch rate ω_y around the Y axis in accordance with the following equation (5). V = α_z' / ω_y (5) When calculating the correlation coefficient K beforehand by learning, the velocity calculator 52 calculates a correlation coefficient K_n on the basis of, for example, the result of the n-th sampling in accordance with the following equation (6). K_n = α_z / α_x (6) However, the calculation results of the correlation coefficient K_n have a large deviation (error) between samples. Therefore, it is difficult to use the correlation coefficient K_n to correct the acceleration α_z along the Z axis (to remove the acceleration α_x along the X axis that is mixed into the acceleration α_z along the Z axis) for the next sampling. Therefore, it is necessary for the velocity calculator 52 to use a large number of correlation coefficients K_n, each of which includes an error, to obtain the final value of the correlation coefficient K that has a small error. A simple method, for example, is to obtain the final value of the correlation coefficient K by averaging all the correlation coefficients K_n obtained by sampling in a certain past period. However, this method has two demerits. One is that it is necessary to prepare a buffer for storing the correlation coefficients K_n over the past period.
The other is that the method is inefficient because the method does not consider the fact that the error of the correlation coefficient K_n is small when the acceleration |α_x| along the X axis is considerably larger than "0" (larger than the threshold TH). Therefore, when calculating the final value of the correlation coefficient K, the velocity calculator 52 uses a correlation coefficient learning section (described below) including an infinite impulse response (IIR) filter, so that it is not necessary for the velocity calculator 52 to have a buffer for storing all the correlation coefficients K_n in a past period. The correlation coefficient learning section calculates the final value of the correlation coefficient K using the following equation (7). K = (K_n − K) × Gain + K (7) Thus, the correlation coefficient learning section stores only the correlation coefficient K that is the result of the previous learning. In equation (7), Gain is a predetermined constant. Instead of using the Gain as a constant, however, the correlation coefficient learning section changes the value of the Gain in accordance with the acceleration |α_x| along the X axis, so that a correlation coefficient K_n having a smaller error is weighted more heavily and thereby the correlation coefficient K converges faster on an appropriate final value. To be specific, the correlation coefficient learning section calculates the Gain in equation (7) using the following equation (8). Gain = Reference Gain × |α_x| / Reference α_x (8) The reference α_x is set at 0.15 m/s² (for example, corresponding to 200 digits (200 data points) in the X axis acceleration direction of FIG. 44). The reference Gain is set at 1/10000 (for example, corresponding to a time constant of 200 seconds (10000/50 = 200) for the sampling frequency of 50 Hz according to the embodiment).
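The learning step of equations (6) to (8) can be sketched as a single update function. The constants mirror the example values given in the text; the function and variable names are illustrative:

```python
REF_ALPHA_X = 0.15      # m/s^2, the reference |a_x| from the text
REF_GAIN = 1.0 / 10000  # reference Gain (time constant of about 200 s at 50 Hz)
THRESHOLD = 0.075       # m/s^2; learn only when |a_x| exceeds this


def update_correlation(K, accel_z, accel_x):
    """One IIR learning step for the correlation coefficient K.

    K_n = a_z / a_x (equation (6)) is blended into the running value K
    via K <- (K_n - K) * Gain + K (equation (7)), where the Gain grows
    with |a_x| (equation (8)) so that low-noise samples are weighted
    more heavily. Samples with small |a_x| are skipped entirely.
    """
    if abs(accel_x) <= THRESHOLD:
        return K  # noise-dominated sample: keep the previous estimate
    k_n = accel_z / accel_x                       # equation (6)
    gain = REF_GAIN * abs(accel_x) / REF_ALPHA_X  # equation (8)
    return (k_n - K) * gain + K                   # equation (7)
```

Because only the previous K is carried between calls, no buffer of past K_n values is needed, which is exactly the advantage of the IIR form described above.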
That is, the correlation coefficient learning section sets, for example, the reference α_x at 0.15 m/s² and the reference Gain at 1/10000, and calculates the Gain using the values of the acceleration |α_x| along the X axis sampled by the three-axis acceleration sensor 4 in the past 200 seconds. That is, when calculating the Gain, the correlation coefficient learning section in effect weights the output of the three-axis acceleration sensor 4 over a window of about 200 seconds. Thus, with consideration of the case in which the upward tilt position (the tilt angle) of the PND 50 is changed, the correlation coefficient learning section does not use old data, in order to prevent miscalculating the Gain by using data from before the tilt position (the tilt angle) of the PND 50 was changed. 2-3. Velocity Calculation FIG. 45 is a block diagram of the velocity calculator 52, in which parts corresponding to those of FIG. 8 are represented by the same numerals. As illustrated in FIG. 45, the velocity calculator 52 is similar to the velocity calculator 22 except that the velocity calculator 52 further includes a Z axis direction acceleration corrector 70 and a correlation coefficient learning section 71. The velocity calculator 52 calculates the final value of the correlation coefficient K when the PND 50 is mounted on the cradle 3, the vehicle is in a GPS measurement area, and the GPS velocity is equal to or higher than 1.0 m/s. Thus, the velocity calculator 52 calculates the correlation coefficient K only when the vehicle is traveling, and does not calculate the correlation coefficient K when the user is holding the PND 50 or adjusting the tilt angle of the PND 50. The data acquirer 31 of the velocity calculator 52 sends the acceleration data AD representing the acceleration α_z, the acceleration data AX representing the acceleration α_x, and the pitch rate data PD supplied by the Y axis gyro sensor 5 to the high pass filter 32.
The high pass filter 32 removes direct-current components (and thereby removes offset components) from the acceleration data AD, the acceleration data AX, and the pitch rate data PD to generate acceleration data AD1, acceleration data AX1, and pitch rate data PD1, and sends the acceleration data AD1, the acceleration data AX1, and the pitch rate data PD1 to the low pass filter 33. The low pass filter 33 performs the aforementioned low pass filtering on the acceleration data AD1, the acceleration data AX1, and the pitch rate data PD1 to generate acceleration data AD2, acceleration data AX2, and pitch rate data PD2. The low pass filter 33 sends the acceleration data AD2 and the acceleration data AX2 to the Z axis direction acceleration corrector 70 and the correlation coefficient learning section 71, and sends the pitch rate data PD2 to the velocity calculating section 34. The correlation coefficient learning section 71 calculates the final value of the correlation coefficient K by using the acceleration α_z along the Z axis represented by the acceleration data AD2 and the acceleration α_x along the X axis represented by the acceleration data AX2 in accordance with equations (6) to (8), and outputs the final value of the correlation coefficient K to the Z axis direction acceleration corrector 70. The Z axis direction acceleration corrector 70 calculates the true acceleration α_z' along the Z axis by correcting, in accordance with equation (4), the acceleration α_z represented by the acceleration data AD2 by using the acceleration α_x represented by the acceleration data AX2 and the final value of the correlation coefficient K supplied by the correlation coefficient learning section 71, and sends the acceleration data AD3 representing the true acceleration α_z' along the Z axis to the velocity calculating section 34.
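The corrector's step and the subsequent velocity computation can be sketched together. This is a minimal sketch, assuming equation (4) has the form α_z' = α_z − K·α_x described in the surrounding text and equation (5) divides by the pitch rate; the names are hypothetical:

```python
def corrected_velocity(accel_z, accel_x, pitch_rate, K):
    """Remove the X-axis leakage from the Z-axis acceleration, then
    form the autonomous velocity.

    a_z' = a_z - K * a_x   (equation (4), assumed form)
    V    = a_z' / w_y      (equation (5))
    """
    accel_z_true = accel_z - K * accel_x  # Z axis direction acceleration corrector
    return accel_z_true / pitch_rate      # velocity calculating section
```

With K = 0 (no tilt, hence no leakage) this reduces to the first embodiment's ratio of acceleration to pitch rate.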
The velocity calculating section 34 calculates, in accordance with equation (5), the autonomous velocity V including a small error by using the true acceleration α_z' along the Z axis represented by the acceleration data AD3 supplied by the Z axis direction acceleration corrector 70 and the pitch rate ω_y around the Y axis represented by the pitch rate data PD2, and sends the velocity data VD1 representing the autonomous velocity V to the smoother/noise filter 35. The smoother/noise filter 35 performs smoothing and noise filtering (described above) on the velocity data VD1, which is supplied by the velocity calculating section 34, to generate the velocity data VD, and sends the velocity data VD to the velocity output section 36. The velocity output section 36 sends the velocity data VD, which is supplied by the smoother/noise filter 35 and represents the velocity V of the vehicle, to the position calculator 25. Thus, even when the PND 50 is in the upward tilt position, the velocity calculator 52 according to the second embodiment can more precisely calculate the autonomous velocity V by using the true acceleration α_z' along the Z axis from which the acceleration α_x that is mixed into the acceleration α_z has been removed. 2-4. Operation and Effect As with the first embodiment, in the PND 50 according to the second embodiment and having the structure described above, the three-axis acceleration sensor 4 detects the acceleration α_z along the Z axis perpendicular to the direction of travel of the vehicle, which is generated due to the undulation of a road surface, and the Y axis gyro sensor 5 detects the pitch rate ω_y around the Y axis perpendicular to the direction of travel of the vehicle, which is likewise generated due to the undulation of a road surface.
When the PND 50 is used in the upward tilt position, the PND 50 considers that the acceleration α along the X axis is mixed into the acceleration α detected by the three-axis acceleration sensor 4, and learns the degree of mixture as the final value of the correlation coefficient K. The PND 50 calculates the true acceleration α ' along the Z axis, from which the acceleration α along the X axis has been removed, in accordance with equation (4), and calculates the autonomous velocity V that is more precise on the basis of the acceleration α ' along the Z axis and the pitch rate ω around the Y axis in accordance with equation (5). Therefore, as compared with the first embodiment, the PND 50 can calculate the autonomous velocity V that is more precise in that an error generated when the PND 50 is attached to the vehicle in the upward tilt position has been removed. When the value of the acceleration α along the X axis is about "0", noise may become dominant. Therefore, when calculating the autonomous velocity V, the velocity calculator 52 calculates the correlation coefficient K only when the acceleration |α | is larger than the predetermined threshold TH. Thus, the PND 50 can eliminate the influence of noise by removing the acceleration α along the X axis from the acceleration α along the Z axis when calculating the true acceleration α ' along the Z axis, whereby the PND 50 can more precisely calculate the autonomous velocity V. When calculating the Gain by using equation (8), the velocity calculator 52 adjusts the timing for updating the output value of the three-axis acceleration sensor 4, which is used for calculating the autonomous velocity V, in accordance with the reference Gain. Thus, even when the upward tilt position of the PND 50 is slightly changed (tilt angle is changed), the PND 50 can accurately calculate the autonomous velocity V by using only new data and without being influenced by old data. 
With the structure described above, even when the PND 50 according to the second embodiment is in the upward tilt position, the PND 50 can precisely calculate the autonomous velocity V in consideration of an error generated due to the tilt angle of the PND 50. 3. Other Embodiments In the first embodiment described above, the velocity V is calculated using equation (3) on the basis of the maximum and minimum accelerations α_z,max and α_z,min, which are extracted from the acceleration α_z corresponding to the acceleration data AD2, and the maximum and minimum pitch rates ω_y,max and ω_y,min, which are extracted from the pitch rate ω_y corresponding to the pitch rate data PD2. However, the present invention is not limited thereto. The velocity calculating section 34 may calculate the variances of the acceleration α_z corresponding to the acceleration data AD2 and the pitch rate ω_y corresponding to the pitch rate data PD2, which are supplied by the low pass filter 33, for, for example, a range of 25 data points or 75 data points around the data point P corresponding to the previous position P0. Then, the velocity calculating section 34 may calculate the velocity V by dividing the variance of the acceleration α_z by the variance of the pitch rate ω_y. Alternatively, the velocity calculating section 34 may calculate the deviations of the acceleration α_z corresponding to the acceleration data AD2 and the pitch rate ω_y corresponding to the pitch rate data PD2, which are supplied by the low pass filter 33, for, for example, a range of 25 data points or 75 data points around the data point P corresponding to the previous position P0.
Then, the velocity calculating section 34 may calculate the velocity V by dividing the deviation of the acceleration α_z by the deviation of the pitch rate ω_y. In the first and second embodiments described above, the three-axis acceleration sensor 4, the Y axis gyro sensor 5, and the Z axis gyro sensor 6 respectively measure the accelerations α_x, α_y, and α_z, the pitch rate ω_y, and the yaw rate ω_z with a sampling frequency of 50 Hz. However, the present invention is not limited thereto. The three-axis acceleration sensor 4, the Y axis gyro sensor 5, and the Z axis gyro sensor 6 may respectively measure the accelerations α_x, α_y, and α_z, the pitch rate ω_y, and the yaw rate ω_z with a sampling frequency of, for example, 10 Hz instead of 50 Hz. In the first and second embodiments described above, the velocity V is calculated using the acceleration α_z and the pitch rate ω_y that are detected with a sampling frequency of 50 Hz. However, the present invention is not limited thereto. The velocity calculators 22 and 52 of the PNDs 1 and 50 may calculate the averages of the acceleration α_z and the pitch rate ω_y, which are detected with a sampling frequency of 50 Hz, for, for example, every 25 data points, and may calculate the velocity V using the averages of the acceleration α_z and the pitch rate ω_y. In this case, the velocity calculators 22 and 52 of the PNDs 1 and 50 calculate the averages of the acceleration α_z and the pitch rate ω_y for every 25 data points, thereby calculating the velocity V twice per second. Thus, the processing load on the controllers 11 of the PNDs 1 and 50 due to the velocity calculation can be reduced. In the first and second embodiments described above, the high pass filter 32 performs high pass filtering on the acceleration data AD and the pitch rate data PD, which have been detected by the three-axis acceleration sensor 4 and the Y axis gyro sensor 5. However, the present invention is not limited thereto.
The PNDs 1 and 50 may omit the high pass filtering on the acceleration data AD and the pitch rate data PD, which have been detected by the three-axis acceleration sensor 4 and the Y axis gyro sensor 5. In the first and second embodiments, the high pass filter 32 and the low pass filter 33 perform high pass filtering and low pass filtering on the acceleration data AD and the pitch rate data PD, which have been detected by the three-axis acceleration sensor 4 and the Y axis gyro sensor 5. However, the present invention is not limited thereto. The PNDs 1 and 50 may perform, in addition to the high pass filtering and low pass filtering, moving average filtering on the acceleration data AD and the pitch rate data PD. The PNDs 1 and 50 may perform filtering that is an appropriate combination of high pass filtering, low pass filtering, and moving average filtering on the acceleration data AD and the pitch rate data PD. In the first and second embodiments described above, when calculating, for example, the velocity V at the previous position P0 using the acceleration α_z and the pitch rate ω_y, if it is determined that the velocity V at the previous position P0 is excessively high, the velocity V at the previous position P0 is set at the former velocity V. However, the present invention is not limited thereto. If the velocity V at the previous position P0 is higher than the former velocity V by more than a predetermined threshold, the velocity calculators 22 and 52 of the PNDs 1 and 50 may set the velocity V at the previous position P0 at a value that equals the former velocity V plus a velocity increase that would be produced by acceleration of the vehicle. If the velocity V at the previous position P0 is lower than the former velocity V by more than a predetermined threshold, the velocity calculator 22 of the PND 1 may set the velocity at the previous position P0 at a value that equals the former velocity V minus a velocity decrease that would be produced by deceleration of the vehicle.
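The threshold-based limiting described above amounts to clamping the per-sample change of the velocity estimate. A minimal sketch, in which the acceleration and deceleration bounds are hypothetical parameters standing in for the achievable velocity change of the vehicle:

```python
def limit_velocity(v_new, v_prev, dt, max_accel, max_decel):
    """Clamp an implausible jump in the velocity estimate.

    If v_new exceeds the former value by more than the vehicle could
    accelerate in dt seconds, the former value plus the achievable
    increase is used instead; symmetrically for deceleration.
    """
    upper = v_prev + max_accel * dt  # former velocity plus achievable increase
    lower = v_prev - max_decel * dt  # former velocity minus achievable decrease
    return min(max(v_new, lower), upper)
```

For example, with a 3 m/s² acceleration bound and the 0.02 s sampling period, a spurious jump from 10 m/s to 30 m/s would be limited to 10.06 m/s.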
In the first embodiment described above, the velocity V is calculated on the basis of the acceleration α_z and the pitch rate ω_y using equation (3). However, the present invention is not limited thereto. The controller 11 of the PND 1 may compare the velocity V, which is calculated on the basis of the acceleration α_z and the pitch rate ω_y using equation (3), with the GPS velocity, which is calculated on the basis of GPS signals. When the velocity V has an error with respect to the GPS velocity, the controller 11 of the PND 1 may calculate, for example, a correction factor for correcting the velocity V by using a linear function or a polynomial function of second or higher degree so as to minimize the error, and may store the correction factor in the memory 12. The velocity calculator 22 of the PND 1 may then calculate the velocity V on the basis of the acceleration α_z and the pitch rate ω_y respectively detected by the three-axis acceleration sensor 4 and the Y axis gyro sensor 5 using equation (3), read the correction factor from the memory 12, and correct the velocity V using the correction factor and a linear function or a polynomial function of second or higher degree. In this case, the PND 1 can more precisely calculate the velocity V by learning beforehand the correction factor for correcting the velocity V on the basis of the GPS velocity calculated on the basis of GPS signals. When calculating the correction factor used to correct the velocity V with respect to the GPS velocity, the controller 11 of the PND 1 may divide the range of the velocity V into a plurality of velocity regions, such as a super low velocity region, a low velocity region, a medium velocity region, and a high velocity region, and may calculate a correction factor for each of the velocity regions.
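The correction-factor learning could look like the following first-degree least-squares fit against the GPS velocity. The text also allows polynomials of higher degree and per-region factors; the function names here are illustrative:

```python
def learn_correction(v_calc, v_gps):
    """Least-squares fit of V_gps = a * V + b over paired samples.

    Returns the slope a and intercept b that minimize the squared error
    between the corrected velocity and the GPS velocity.
    """
    n = len(v_calc)
    mx = sum(v_calc) / n
    my = sum(v_gps) / n
    sxx = sum((x - mx) ** 2 for x in v_calc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(v_calc, v_gps))
    a = sxy / sxx
    b = my - a * mx
    return a, b  # stored in the memory 12 in the text's scheme


def apply_correction(v, a, b):
    """Correct a freshly calculated velocity with the learned factors."""
    return a * v + b
```

A per-region variant would simply maintain one (a, b) pair for each velocity region and select it by the uncorrected velocity.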
When calculating the correction factor used to correct the velocity V with respect to the GPS velocity V , the controller 11 of the PND 1 may calculate the correction factor only when the vehicle is traveling at a high velocity that is equal to or higher than a predetermined value, such as 60 km/h. In the second embodiment described above, the PND 50 calculates the velocity V by removing beforehand an influence that is produced when the PND 50 is used in the upward tilt position in which the PND 50 is tilted in the direction of travel along the X axis and around the Y axis (FIG. 5) by Q degrees (for example, 120 degrees). However, the present invention is not limited thereto. The PND 50 may calculate the autonomous velocity V by removing beforehand an influence that is produced when the PND 50 is used in a position tilted with respect to the roll direction (a rightward tilt position or a leftward tilt position). In this case, with the PND 50, the Z axis yaw angular velocity (yaw rate) ω , which is detected by the Z-axis gyro sensor 6, is mixed into the Y axis pitch rate ω , which is detected by the Y-axis gyro sensor 5. Thus, with the PND 50, the pitch rate ω may become overestimated, and an error may be generated in the calculation result of the velocity V (the autonomous velocity may be underestimated). Therefore, the PND 50 learns a correlation coefficient K that represents the degree to which the yaw rate ω around the Z axis is mixed into the pitch rate ω around the Y axis, and calculates a true pitch rate ω' around the Y axis, from which the yaw rate ω around the Z axis has been removed beforehand, in accordance with the following equation (9). When the yaw rate ω around the Z axis is near "0", slight noise may considerably influence the value of the yaw rate ω and the noise may become dominant.
Therefore, when calculating the autonomous velocity V, the PND 50 calculates the correlation coefficient K only when the yaw rate |ω_z| is larger than a predetermined threshold TH (for example, 1 deg/s). A velocity calculator (described below) of the PND 50 calculates the correlation coefficient K between the pitch rate ω around the Y axis and the yaw rate ω around the Z axis by learning, and calculates the true pitch rate ω' around the Y axis by removing, using the correlation coefficient K, the yaw rate ω around the Z axis that is mixed into the pitch rate ω around the Y axis. The velocity calculator calculates the autonomous velocity V using the acceleration α along the Z axis and the true pitch rate ω' around the Y axis in accordance with the following equation (10):

V_y = α_z / ω_y'   (10)

When calculating the correlation coefficient K beforehand by learning, the velocity calculator calculates a correlation coefficient K_n on the basis of, for example, the result of the n-th sampling in accordance with the following equation (11):

K_n = ω_y / ω_z   (11)

However, the calculation results of the correlation coefficient K_n have a large deviation (error) between samples. Therefore, it is difficult to use a single correlation coefficient K_n to correct the pitch rate ω around the Y axis (that is, to remove the yaw rate ω around the Z axis that is mixed into it) for the next sampling. Therefore, it is necessary for the velocity calculator to use a large number of correlation coefficients K_n, each of which includes an error, to obtain a final value of the correlation coefficient K that has a small error. A simple method, for example, is to obtain the final value of the correlation coefficient K by averaging all the correlation coefficients K_n obtained by sampling in a past period. However, this method has two demerits.
One is that it is necessary to prepare a buffer for storing the correlation coefficients K_n from the past period. The other is that the method is inefficient because it does not consider the fact that the error of the correlation coefficient K_n is small when the yaw rate |ω_z| around the Z axis is considerably larger than "0" (larger than the threshold TH). Therefore, when calculating the final value of the correlation coefficient K, the velocity calculator uses a correlation coefficient learning section (described below) including an infinite impulse response (IIR) filter, so that it is not necessary for the velocity calculator to have a buffer for storing all the correlation coefficients K_n from a past period. The correlation coefficient learning section calculates the final value of the correlation coefficient K using the following equation (12). Thus, the correlation coefficient learning section stores only the correlation coefficient K that is the result of the previous learning. In equation (12), Gain is a predetermined constant. Instead of using the Gain as a constant, however, the correlation coefficient learning section changes the value of the Gain in accordance with the yaw rate |ω_z| around the Z axis, so that a correlation coefficient K_n having a smaller error is weighted more heavily and thereby the correlation coefficient K converges faster on an appropriate final value. To be specific, the correlation coefficient learning section calculates the Gain in equation (12) using the following equation (13):

Gain = Reference Gain × |ω_z| / Reference ω_z   (13)

Description of the meaning of equation (13) is omitted because it is the same as that of equation (8). FIG. 46 is a block diagram of a velocity calculator 92, in which parts corresponding to those of FIG. 45 are represented by the same numerals. As illustrated in FIG.
46, the velocity calculator 92 is similar to the velocity calculator 52 except that the velocity calculator 92 includes a pitch rate corrector 100 instead of the Z axis direction acceleration corrector 70 and includes a correlation coefficient learning section 101 instead of the correlation coefficient learning section 71. The data acquirer 31 of the velocity calculator 92 sends the acceleration data AD representing the acceleration α , the yaw rate data AZ representing the yaw rate ω around the Z axis, and the pitch rate data PD representing the pitch rate ω around the Y axis, which is supplied by the Y-axis gyro sensor 5, to the high pass filter 32. The high pass filter 32 removes direct-current components (and thereby removes offset components) from the acceleration data AD, the yaw rate data AZ, and the pitch rate data PD to generate acceleration data AD1, yaw rate data AZ1, and pitch rate data PD1, and sends the acceleration data AD1, the yaw rate data AZ1, and the pitch rate data PD1 to the low pass filter 33. The low pass filter 33 performs the aforementioned low pass filtering on the acceleration data AD1, the yaw rate data AZ1, and the pitch rate data PD1 to generate acceleration data AD2, yaw rate data AZ2, and pitch rate data PD2. The low pass filter 33 sends the acceleration data AD2 to the velocity calculating section 34, and sends the pitch rate data PD2 and the yaw rate data AZ2 to the correlation coefficient learning section 101 and the pitch rate corrector 100. The correlation coefficient learning section 101 calculates the final value of the correlation coefficient K by using the pitch rate ω around the Y axis represented by the pitch rate data PD2 and the yaw rate ω around the Z axis represented by the yaw rate data AZ2 in accordance with equations (11) and (13), and outputs the final value of the correlation coefficient K to the pitch rate corrector 100.
The pitch rate corrector 100 calculates the true pitch rate ω' around the Y axis by correcting, in accordance with equation (9), the pitch rate ω around the Y axis represented by the pitch rate data PD2 by using the yaw rate ω represented by the yaw rate data AZ2 and the final value of the correlation coefficient K supplied by the correlation coefficient learning section 101, and sends pitch rate data PD3 representing the true pitch rate ω' to the velocity calculating section 34. The velocity calculating section 34 calculates, in accordance with equation (10), the autonomous velocity V including a small error by using the true pitch rate ω' around the Y axis represented by the pitch rate data PD3 supplied by the pitch rate corrector 100 and the acceleration α along the Z axis represented by the acceleration data AD2, and sends the velocity data VD2 representing the autonomous velocity V to the smoother/noise filter 35. The smoother/noise filter 35 performs smoothing and noise filtering (described above) on the velocity data VD2, which is supplied by the velocity calculating section 34, to generate velocity data VD, and sends the velocity data VD to the velocity output section 36. The velocity output section 36 sends the velocity data VD, which is supplied by the smoother/noise filter 35 and represents the autonomous velocity V of the vehicle, to the position calculator 25. Thus, even when the PND 50 is in a position tilted in a roll direction, the velocity calculator 92 can more precisely calculate the autonomous velocity V by using the true pitch rate ω' around the Y axis, from which the yaw rate ω that is mixed into the pitch rate ω around the Y axis has been removed. The PND 50 may remove beforehand both the influence produced when the PND 50 is used in an upward tilt position and the influence produced when the PND 50 is used in a position tilted with respect to a roll direction (a rightward tilt position or a leftward tilt position), and calculate the autonomous velocity V.
In the first and second embodiments described above, the PNDs 1 and 50 perform navigation while the PNDs 1 and 50 are supplied with electric power. However, the present invention is not limited thereto. When the power button (not shown) is pressed and the PNDs 1 and 50 are powered off, the PNDs 1 and 50 may store, in the memory 12, the present position, the height, and the like at the moment when the power button is pressed. When the power button is pressed again and the PNDs 1 and 50 are powered on, the PNDs 1 and 50 may read the present position, the height, and the like from the memory 12, and may perform navigation on the basis of the present position, the height, and the like in accordance with the process of calculating the present position. In the first and second embodiments described above, the PNDs 1 and 50 calculate the velocity V while the PNDs 1 and 50 are supported on the cradle 3 placed on the dashboard of the vehicle. However, the present invention is not limited thereto. When it is detected that the PNDs 1 and 50 are mechanically or electrically disconnected from the cradle 3, the velocity V may be set at 0 or maintained at the former velocity V. In the first and second embodiments described above, the PNDs 1 and 50 are used in a landscape position. However, the present invention is not limited thereto. As illustrated in FIG. 47, the PNDs 1 and 50 may be used in a portrait position. In this position, the PNDs 1 and 50 may detect the yaw rate ω around the Z axis with the Y axis gyro sensor 5 and the pitch rate ω around the Y axis with the Z axis gyro sensor 6. In the first and second embodiments described above, the three-axis acceleration sensor 4, the Y axis gyro sensor 5, the Z axis gyro sensor 6, and the barometric pressure sensor 7 are disposed inside the PNDs 1 and 50. However, the present invention is not limited thereto.
The three-axis acceleration sensor 4, the Y axis gyro sensor 5, the Z axis gyro sensor 6, and the barometric pressure sensor 7 may be disposed outside the PNDs 1 and 50. The PNDs 1 and 50 may include an adjustment mechanism disposed on a side thereof so that a user can adjust the attachment angles of the three-axis acceleration sensor 4, the Y axis gyro sensor 5, the Z axis gyro sensor 6, and the barometric pressure sensor 7. In this case, the PNDs 1 and 50 allow a user to adjust the adjustment mechanism so that, for example, the rotation axis of the Y axis gyro sensor 5 is aligned in the vertical direction with respect to the vehicle even when the display 2 is not substantially perpendicular to the direction of travel of the vehicle. In the first and second embodiments described above, the velocity V is determined as excessively high if the pitch rate ω corresponding to the pitch rate data PD2 is lower than a predetermined threshold and if the acceleration α corresponding to the acceleration data AD2 and the pitch rate ω corresponding to the pitch rate data PD2 are higher than predetermined thresholds. However, the present invention is not limited thereto. The controllers 11 may determine that the velocity V is excessively high if the velocity V calculated by the velocity calculating section 34 is higher than the former velocity V by a predetermined value. In this case, the smoother/noise filter 35 may set the velocity V at 0 when the velocity V calculated by the velocity calculating section 34 is higher than the former velocity V by a predetermined value and when the former velocity is at a low velocity lower than, for example, 10 km/h. The smoother/noise filter 35 may set the velocity V at the former velocity V when the velocity V calculated by the velocity calculating section 34 is higher than the former velocity V by a predetermined value and the former velocity is equal to or higher than, for example, 10 km/h. 
In the first and second embodiments described above, the controllers 11 of the PNDs 1 and 50 perform the process of calculating the present position of the routine RT1 and the like in accordance with application programs stored in the memory 12. However, the present invention is not limited thereto. The controllers 11 of the PNDs 1 and 50 may perform the process of calculating the present position in accordance with application programs that are installed from storage media, downloaded from the Internet, or installed by using other methods. In the embodiments described above, the PNDs 1 and 50, each of which corresponds to a velocity calculating device according to the present invention, includes the three-axis acceleration sensor 4 corresponding to a vertical acceleration detector, the Y-axis gyro sensor 5 corresponding to a horizontal angular velocity detector, the correlation coefficient learning section 71 corresponding to a correlation coefficient learning section, the Z axis direction acceleration corrector 70 corresponding to a true vertical acceleration detector, and the velocity calculating section 34 corresponding to a velocity calculator. However, the present invention is not limited thereto. A velocity calculating device according to the present invention may include a vertical acceleration detector, a horizontal angular velocity detector, a correlation coefficient learning section, a true vertical acceleration detector, and a velocity calculator, which have different structures. The present application contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2009-213447 filed in the Japan Patent Office on Sep. 15, 2009, the entire content of which is hereby incorporated by reference.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof. Patent applications by Tomohisa Takaoka, Kanagawa JP; Sony Corporation.
Lake City, GA Geometry Tutor Find a Lake City, GA Geometry Tutor ...Since 2008 I have been volunteering at two different elementary schools tutoring ESL students (K-5) in math, English/reading/writing, and biology. I am very patient and enjoy working with kids. I understand that not every student is the same, and that we all learn at a different pace; therefore, I do my best to accommodate lessons to suit the student's needs. I enjoy tutoring Math. 29 Subjects: including geometry, chemistry, reading, Spanish ...During my time in high school and college, I did well in my Math (Calculus I-II), Chemistry, and Physics courses and have tutored in all of these subjects. Currently, I co-teach Math 1 and GPS Algebra 1. I also run an After-School Math Tutorial program where I tutor students who are in Math 1-3. 13 Subjects: including geometry, chemistry, physics, calculus ...With regards to history, I was a history major in college before switching but still have a passion for learning about it. Math has always come easy to me and I have just had to relearn a lot while I was studying for the GRE. I have just graduated college and am currently trying to get back into grad school. 7 Subjects: including geometry, algebra 1, prealgebra, American history ...I have passed the Elementary Math qualifying test. I am a huge fan of the game and I relate basketball to mathematics in my teaching. I have done research into the history and the fundamentals of basketball. 11 Subjects: including geometry, algebra 1, algebra 2, SAT math ...I have worked in after school programs helping middle school students and some high school students with math homework. I chose to become a tutor because I love math and I love to help others understand math and its importance. I can help almost anyone with algebra 1 and algebra 2, Geometry, pre-algebra, discrete math, and anything in between.
9 Subjects: including geometry, algebra 1, grammar, algebra 2
4aSAb2. Scattering calculations by a hybrid GTD-boundary integral equation method.
Session: Thursday Morning, June 19
Author: Paul E. Barbone
Location: Dept. of Aerosp. and Mech. Eng., Boston Univ., Boston, MA 02215
Author: Ofer M. Michael
Location: Dept. of Aerosp. and Mech. Eng., Boston Univ., Boston, MA 02215

We consider the application of a hybrid asymptotic/boundary integral equation (BIE) method to the problem of scattering from prismatically shaped objects. The hybrid method is based on patching a short wavelength asymptotic expansion of the scattered field to a BIE evaluation of the near field. In patching, the diffracted field shape functions with unknown amplitude are forced to agree smoothly with the solution in the near field along a curve at a prescribed distance from the diffraction points. This allows us to replace the original boundary value problem with an asymptotically equivalent boundary value problem, the domain of which is small and efficiently discretized. Since the domain of the numerical problem is small and may be chosen at will, we completely circumvent non-uniqueness problems associated with ``forbidden frequencies.'' Thus very high-frequency calculations can be performed using single layer potential equations with no problems of ill conditioning. The hybrid scattering solution shall be compared to a complete analytic field representation found using matched asymptotic expansions. [Work supported by ONR.]
ASA 133rd meeting - Penn State, June 1997
{"url":"http://www.auditory.org/asamtgs/asa97pen/4aSAb/4aSAb2.html","timestamp":"2014-04-17T06:43:25Z","content_type":null,"content_length":"2214","record_id":"<urn:uuid:78f7fa6e-dd11-4907-9ad4-084a8ae4afc2>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Different kinds of Tricki article Quick description This page is a very general navigation page with links to other pages that list Tricki articles of similar types and give further links. For a different route into the Tricki, there is also a navigation page that is more concerned with the kinds of problems that the articles are helping to solve than with the nature of the articles themselves. Categories of article Expositions of mathematical techniques The heart of the Tricki is a large collection of articles about specific mathematical techniques that are useful for various classes of problems. To find these articles, use the various search tools, or try following links from navigation pages. General problem-solving tips Many Tricki articles are about methods for solving particular classes of problems. But when one is doing mathematical research, many of the most useful tips are not methods of this kind but are more like general research strategies that can in principle be applied to virtually any mathematical problem. This page contains links to many articles about such strategies. Front pages for different areas of mathematics While the Tricki is mainly about methods rather than subject matter, it may nevertheless be helpful to narrow down a search by concentrating on the area to which your problem belongs (though you then risk missing a method that applies to your problem but also applies more generally). This page contains links to general navigation pages for different areas of mathematics. What kind of problem am I trying to solve? Some major classes of problems, such as solving equations, cut across subject areas. It is an important part of the Tricki philosophy that traditional subject classifications should be used only when they are useful in narrowing down a problem type, so there are also navigation pages for general classes of problem.
How to use X, where X is a mathematical concept or statement Anybody who has taken an undergraduate mathematics course, and certainly anybody who has taught undergraduate mathematics, will know that there is a huge difference between being familiar with a theorem and knowing how to use it. However, it is a widely adopted convention in textbooks and lectures to give a theorem and its proof and then to hope that the audience will somehow work out how it is applied. One way this is done is through the setting of exercises, and often the main difficulty in solving an exercise is spotting the appropriate theorem to use. Something similar can be true at the research level too: a problem that seems hard to one mathematician may well be easy to another who recognises that it is a consequence of a theorem that is designed to deal with exactly that difficulty. This is a navigation page with a list of Tricki articles, each of which is entitled "How to use X" for some X. Personal success stories in mathematical research If you read an average mathematical paper, it will present you with some results, but it is most unlikely that it will tell you how those results were discovered. Part of the reason for this is that journals would tend to regard a detailed account of that kind as a bit self-indulgent and not what its pages are for. And yet such accounts can be hugely informative for other people, so it seemed a good idea to have a space for them on the Tricki. If you wish to write such an article, then it will be even better if you can write accompanying articles explaining the techniques you used (or else include links to existing Tricki articles that explain them). Inline comments The following comments were made inline in the article. You can click on 'view commented text' to see precisely where they were made.
Links in headings
Mon, 06/04/2009 - 10:40 — Sune Kristian Jakobsen (not verified)
The first time I read this article, I didn't notice that this was a link: it looks exactly like the other headings, and most other headings in the Tricki are not links. The above "Expositions of mathematical techniques" is not a link, so I didn't expect this to be. Perhaps links in headings should be underlined, or have a different color? Or we should try to avoid links in headings (like
I agree, and this dates from
Tue, 07/04/2009 - 17:49 — gowers
I agree, and this dates from when I used to write articles differently. I'll try to do something about it soon.
Not too much is lost,
Tue, 07/04/2009 - 18:05 — olof
Not too much is lost, hopefully, since the link to each page also occurs in the paragraph immediately following the heading.
I've now turned the headings
Wed, 15/04/2009 - 20:23 — gowers
I've now turned the headings into ordinary text-style links and removed the artificial links in the succeeding paragraphs, for stylistic consistency. And I think it's clearer too, though the headings do look a little small.
Mathematical Cell Biology; May 2012 Homework assignments: • Homework and activities for Week 3: Set 1 will be based on the HW problems on the site for Jun Allard's Material • NEW! Here is Set 2 Lectures and Course Material You can find the topics covered, lecture notes, and links to material in the table below. Here is some reference material with further details on topics covered in week 1 and a few topics from week 2. Course Notes Week 3 Guest lecturers for Week 3: Dr. Raibatak (Dodo) Das and Dr. William (Bill) R. Holmes (Dept of Mathematics, UBC). Link to Bill's Material Return to Course Home Page
Pleasant Hill, CA Trigonometry Tutor Find a Pleasant Hill, CA Trigonometry Tutor ...I could explain the main points precisely and concisely (Algebra is my PhD area). All have improved in their understanding and, to varying degrees, their grades. I provide extra practice and drill problems. I have substantial experience tutoring Calculus (including AP, AB/BC) and Multivariate Calculus. 15 Subjects: including trigonometry, calculus, GRE, algebra 1 ...In addition, I have significant experience tutoring students in lower division college mathematics courses such as calculus, multivariable calculus, linear algebra and differential equations, as well as lower division physics. Teaching math and physics is exciting for me because I am passionate ... 25 Subjects: including trigonometry, physics, calculus, statistics ...Learning science and math can be difficult at times but with a little help anyone can master the principles and discover a vast, exciting, and ever expanding body of knowledge! I would be honored to help you in your quest for this knowledge. I have a bachelor's degree in Physics from U.C. 12 Subjects: including trigonometry, chemistry, physics, calculus ...Microbiology was my major in college at the University of California, Davis. I received top grades in all of my courses and I was awarded the Department Citation of Excellence for Microbiology. I also worked in a Medical Microbiology and Immunology laboratory for 3 years while in college. 29 Subjects: including trigonometry, chemistry, reading, calculus ...I believe that the best way to learn and develop new skills is through practice with positive reinforcement. As a student is introduced to a new concept, I focus on making sure the student has fully comprehended each step before moving on to the next topic. Practice is key!
53 Subjects: including trigonometry, English, reading, writing
Quick math question in Binary
Hey, guys! Thank you to everyone who commented on my previous thread about today's uses of binary. Getting more in-depth, I came across this one math problem that has become very irritating, because it just doesn't make sense to me... It's pretty simple. My problem is found while converting a binary number into its decimal form. Here it is: The binary number we're converting is "11101.01". The formula which represents the relationship between a digit, base, and position is: We know binary is a base-2 number system. So, we put the base number "2" above all the numbers. Then we go to the step where my problem resides... We now add their subscripts, starting with "-2" at the far right, then -1, 0, 1, 2, 3, 4 continuing towards the far left. Now we multiply the base and subscript, and once we find the answer, we then multiply it with the binary number below. After I do all the steps, I get everything correct EXCEPT the base number 2 with the subscript 0. Maybe I just don't fully understand the purpose of the subscript, but isn't base 2 multiplied by subscript 0 equal to 0? And when you multiply it by 1, it still equals 0? But all the tutorials I have read and covered say base 2 with the subscript 0 equals 1??? Can anyone explain to me how this can defy the laws of math as I know them, haha. But in all seriousness, I just need to understand the formula used to get 2 times 0, times the binary number 1, equaling 1?
It isn't 2 times 0, it's 2 to the power of 0, and any non-zero number to the power of 0 is 1. (0 to the power of 0 is undefined.)
Let ** mean "to the power of" and consider 2**2 * 2**3 = 2**5. Notice that to multiply the two numbers 2**2 and 2**3 you simply have to add their exponents. But if you can multiply just by adding exponents, then 2**2 * 2**0 = 2**2. And to make that true, 2**0 must be equal to 1.
Bit 0 is the first bit, going right to left, in a non-decimal base. You can think of its position as the exponent.
1001 = 1 * 2^3 + 0 * 2^2 + 0 * 2^1 + 1 * 2^0 = 8 + 0 + 0 + 1 (any base to the power of 0 is always 1) = 9.

exponent:    4     3     2     1     0    -1     -2
base:        2     2     2     2     2     2      2
value:      16     8     4     2     1    0.5   0.25

Originally Posted by cplusplusnoob: I get everything correct EXCEPT the base number 2 with the subscript 0. I just need to understand the formula used to get 2 times 0, times the binary number 1, equaling 1?

His (somewhat long-winded) question is "Why does 2 to the power of 0 equal 1?" which I answered in post #2. I believe he understands the other aspects of binary powers.

Thank you to everyone who commented! I really do appreciate the help! But none of the comments really answered my question as to WHY the rule, that any number to the power of 0 is 1, exists. oogabooga (I couldn't help chuckling while spelling your name haha), your first comment helped explain the properties of the base and exponent, but it didn't really reveal the logic and mathematics BEHIND the rule, justifying its purpose. It still appeared to me to be just "announced" mathematics without proof; because we could encounter a problem such as (-3)**0, and just using the facts you guys gave me I wouldn't be able to explain how (-3)**0 could be 1, except by saying it's a rule. But now I have learned a more in-depth explanation for my problem, and I understand! I apologize if my thread was in the wrong place, or if it still is. I know you guys prefer questions about programming, or math that relates to an actual programming issue. Not something I should have learned in 6th grade math haha.

I explained exactly why (-3)**0 is equal to 1. (-3)**2 TIMES (-3)**3 EQUALS (-3)**5, simply by adding the exponents. That part is obvious if you think about it. (-3)**2 is (-3)(-3), (-3)**3 is (-3)(-3)(-3), and multiplying them gives (-3)(-3)(-3)(-3)(-3), or (-3)**5. Therefore, since adding exponents multiplies numbers with the same base, (-3)**2 TIMES (-3)**0 EQUALS (-3)**2, by adding the exponents.
For that to be true, (-3)**0 must be equal to 1. It's that simple. There is no deeper explanation.

But now I was able to learn a more in depth explanation for my problem, and now I understand!

What do you mean by this? Moved to General Discussions.

2^2 = 2*2
2^1 = 2
2^0 = ???

Well, we obviously can't have '???' because that's not helpful, so we have to PICK something that 2^0 is equal to. Note that everything remains consistent if we multiply the RHS of this progression by 1:

2^2 = 2*2*1
2^1 = 2*1
2^0 = 1

Another way to think about it is that you are taking the value and repeatedly multiplying it into a running product. This product naturally has to be started at 1. If it started at 0, then it would never be anything BUT 0. And if it started at something other than 1, its particular value would influence the result for no reason.

Yet another way of seeing why is: a = b^c = b^(x+y) = b^x*b^y. Now let x=c, y=0 (this is consistent with b^c = b^(x+y)): then b^c = b^c * b^0. Divide both sides by b^c: 1 = b^0.

Check this out: Proof that a number to the zero power is one - math lesson from Homeschool Math

I'm sorry if I miswrote my intended meaning in my previous comment. I didn't mean to state that your comment was of no help towards my question. You gave the exact answer to my question. But as you can see, I am like a caveman when it comes to math... My ability to understand math concepts through purely written text is limited. Poor understanding of basic math is not the best quality in the field of programming, hehe. I was confused, because I was thinking: what if we had two DIFFERENT bases, such as 5**2 * 1**0? Simply adding their exponents couldn't give the right answer. But now I understand the concept, thanks to the help of you, my fellow programmers who also commented, and the video I mentioned above. Thank you also for your help! Your second method was similar to an example I found in a video explaining the logic behind this rule.
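The digit-times-base-to-the-position procedure worked through in this thread can be sketched in Python; the function name and structure are mine, not from the thread:

```python
def binary_to_decimal(bits: str) -> float:
    """Convert a binary string like '11101.01' to its decimal value
    by summing digit * 2**position for each digit."""
    if "." in bits:
        whole, frac = bits.split(".")
    else:
        whole, frac = bits, ""
    value = 0.0
    # Whole part: rightmost digit has position 0, increasing leftward.
    # Note 2**0 == 1, the point the thread settles.
    for position, digit in enumerate(reversed(whole)):
        value += int(digit) * 2 ** position
    # Fractional part: positions -1, -2, ... moving right of the point.
    for position, digit in enumerate(frac, start=1):
        value += int(digit) * 2 ** -position
    return value

print(binary_to_decimal("11101.01"))  # 29.25 = 16+8+4+0+1 + 0*0.5 + 1*0.25
print(binary_to_decimal("1001"))      # 9.0
```

The "11101.01" value matches the worked example in the thread: the units digit contributes 1 * 2**0 = 1 precisely because 2**0 is 1, not 0.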
{"url":"http://cboard.cprogramming.com/general-discussions/146802-quick-math-question-binary-printable-thread.html","timestamp":"2014-04-18T00:26:57Z","content_type":null,"content_length":"16153","record_id":"<urn:uuid:a81b8f65-03ac-4540-a084-be9e06143b2d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Card Game Analogous to Monty Hall?

Date: 12/12/2000 at 08:10:55
From: Gabriel Lozano
Subject: New aspect of Monty Hall. 50% proof?

I hope you can help me on this new aspect of the good old Monty Hall problem. Here is the new problem: There are three cards in front of you, two that are kings, and one that's an ace. You are trying to correctly guess which one is the ace. Say you pick the far-left card, and next, the middle card is revealed to be a king. Before having the far-left card revealed, you are given the choice of switching to the far-right card. Are you better off switching or staying, and how much better off are you?

I have come up with a way that makes it seem as if the answer is 1/2 instead of 2/3. Please tell me what is wrong with the way I am reasoning. Assume that there are 2 people. At the beginning, you have 3 cards. You and a friend both pick a card - he picks the left card, you pick the right card. The middle card is shown to be a king. According to the classic answer that says that the chances improve to 2/3, there is a 2/3 chance that the other card I didn't pick is the ace, and a 1/3 chance that the card I picked is the ace. To my friend, he has a 2/3 chance of my card being the ace, and a 1/3 chance of his own being the ace. Our chances of switching and getting the correct card are both equal, so shouldn't our chances be equal?

Think of it this way: To me, my card has a 1/3 chance of being correct. To my friend, it has a 2/3 chance of being correct. To me, my friend's card has a 2/3 chance of being correct. To my friend, his card has a 1/3 chance of being correct. My chance of being correct is thus 1/3 + 2/3 = 3/3. His chance of being correct is 3/3. Thus, our chances are equal, at 50%.

Someone offered the counter-argument that (here accepting the fact that the answer is 2/3), "If both people switch, 2/3 of the time, they're right." I say, however, that if one person switches 3/3 of the time, he is right only 2/3 of the time.
Which means that if the second person switches whenever the other person does, he is only right 1/3 of the time. Wouldn't this be a counter-example, that the odds dropped to 1/3 instead of rising to 2/3? Thanks for your time.

Date: 12/12/2000 at 11:14:04
From: Doctor TWE
Subject: Re: New aspect of Monty Hall. 50% proof?

Hi Gabriel - thanks for writing to Dr. Math. You are correct in your assessment. In the situation you describe, each of you has a probability of 1/2 of having the ace. The difference between your scenario and the Monty Hall scenario is that Monty Hall knows beforehand where the prize (the ace) is, and deliberately reveals the remaining non-prize (a king). In your scenario, there was a 1/3 chance that you would turn over the middle card and reveal the ace - and neither of you wins. This won't happen in the Monty Hall problem because Monty NEVER reveals the prize. Here's how the two scenarios look on tree diagrams. (For simplicity, I'll assume that you pick the right door/card in both scenarios.)

Case I: You stick with your first choice

The prize is equally likely to be behind L, C, or R (you pick R):

  Prize (Ace):    L     C      R
                 1/3   1/3    1/3

If the prize is behind R, Monty is equally likely to reveal doors L or C, since they're both losers. (It really doesn't matter, since the outcome is the same in either case.) If the prize is behind L, Monty will always reveal C, since he knows that that's the other loser. Likewise, if the prize is behind C, Monty will reveal L. So our tree now looks like this:

  Prize:          L     C       R
                 1/3   1/3     1/3
                  |     |      /  \
                  1     1   1/2    1/2
  Monty shows:    C     L    L      C

Since our strategy in this case is to stick with our original choice, the first two branches are losers (in which the prize was behind L or C), but the last two branches are winners (in which the prize was behind R).
Completing the tree:

  Prize:          L     C       R
                 1/3   1/3     1/3
                  |     |      /  \
                  1     1   1/2    1/2
  Monty shows:    C     L    L      C
  Outcome:       (L)   (L)  (W)    (W)
  Probability:   1/3   1/3  1/6    1/6

So our probability of winning with this strategy is: P(W) = 1/6 + 1/6 = 1/3.

Case II: You switch

Again, the prize is equally likely to be behind L, C, or R (you pick R). We'll make another tree diagram:

  Prize (Ace):    L     C      R
                 1/3   1/3    1/3

If the prize is behind R, Monty is equally likely to reveal doors L or C, since they're both losers. If the prize is behind L, Monty will always reveal C, since he knows that that's the other loser. Likewise, if the prize is behind C, Monty will reveal L. So our tree now looks like this:

  Prize:          L     C       R
                 1/3   1/3     1/3
                  |     |      /  \
                  1     1   1/2    1/2
  Monty shows:    C     L    L      C

This time, our strategy is to switch doors, so the first two branches are winners (in which we originally picked L or C, but switched to R), but the last two branches are losers (in which we originally picked R, but switched to either L or C). Completing the tree:

  Prize:          L     C       R
                 1/3   1/3     1/3
                  |     |      /  \
                  1     1   1/2    1/2
  Monty shows:    C     L    L      C
  Outcome:       (W)   (W)  (L)    (L)
  Probability:   1/3   1/3  1/6    1/6

So our probability of winning with this strategy is: P(W) = 1/3 + 1/3 = 2/3, twice as good as our chances in case I.

Now for your scenario. The ace is equally likely to be L, C, or R (you pick R, your friend picks L):

  Prize (Ace):    L     C      R
                 1/3   1/3    1/3

Now you reveal the card not chosen (C) and discover that it is not the ace. What you have done is to eliminate the middle branch from the sample space, so our tree now looks like this:

  Prize (Ace):    L     C      R
                 1/3   1/3    1/3
  Outcome:       (W)   (X)    (L)

Since the sample space is now 2/3 (because we've eliminated the middle branch), your probability of winning (without switching) is:

P(W) = (1/3) / (2/3) = 1/2

This is a conditional probability. We can state it as P(W|C<>ace). This is read as "the probability of winning GIVEN THAT the center card (C) is not the ace."
In this scenario, you can easily see that your friend's probability of winning (without switching) is also (1/3)/(2/3) = 1/2, and indeed, switching gains you nothing. I hope this helps. If you have any more questions, write back. - Doctor TWE, The Math Forum
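The distinction Dr. TWE draws (an informed host versus a blind reveal, conditioned on the ace not being shown) can be checked by simulation. This sketch is my own illustration, not part of the original exchange:

```python
import random

def monty_hall(trials, rng):
    """Host knows the prize and always reveals a losing door; player switches."""
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        pick = rng.randrange(3)
        # Host opens a door that is neither the pick nor the prize.
        shown = next(d for d in range(3) if d != pick and d != prize)
        switched = next(d for d in range(3) if d != pick and d != shown)
        wins += (switched == prize)
    return wins / trials

def blind_reveal(trials, rng):
    """You and a friend each pick a card; the third is flipped blindly.
    Condition on the flipped card NOT being the ace, then stay put."""
    wins = valid = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        pick = rng.randrange(3)
        friend = rng.choice([d for d in range(3) if d != pick])
        flipped = next(d for d in range(3) if d != pick and d != friend)
        if flipped == prize:
            continue             # ace revealed: nobody wins, discard the round
        valid += 1
        wins += (pick == prize)  # staying with the original pick
    return wins / valid

rng = random.Random(1)
print(monty_hall(100_000, rng))    # close to 2/3: switching pays off
print(blind_reveal(100_000, rng))  # close to 1/2: switching gains nothing
```

The first estimate hovers near 2/3 and the second near 1/2, matching the two tree diagrams above.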
{"url":"http://mathforum.org/library/drmath/view/56668.html","timestamp":"2014-04-16T14:04:37Z","content_type":null,"content_length":"12796","record_id":"<urn:uuid:e616b53d-6187-45c0-985d-293c2c9798d3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Mean reverting strategies and volatility
September 27, 2010, by kafka

Mean reverting strategies are betting on mean reversion of prices. There are various flavors of mean reverting strategies, but as a proxy I chose RSI(2). You can find many entries in the blogosphere about this strategy, but nowadays its popularity has dried up. What got me wondering was the idea of a correlation between the return of such a strategy and market volatility: that during high-volatility periods this strategy produces a higher return, and during low volatility a lower one. To test this idea, I built two strategies. RSI Simple goes long if the RSI(2) indicator is below 10, or goes short if RSI(2) is above 90. It closes the open position when the indicator is above/below 70/30 respectively. RSI Garch follows the same rules as RSI Simple, except for one additional rule. Using a GARCH model, I forecast volatility for the next day. If the volatility value is greater than 65% of the values during the past year (252 days), then the order is executed. By adding this filter I can catch the most volatile days of the last year.

                 RSI Simple   RSI Garch
  Profit factor      2.14        2.22
  Winning %          0.69        0.70
  Profit avg.         197         250
  Loss avg.          -207        -272
  Sharpe             0.93      1.0278

The results with the GARCH filter are slightly better, but the question remains: is it worth adding? Next, I tried to look at the correlation between market volatility and return. Actually, it was not so trivial to implement. The problem is the duration of the investment, which is not fixed in days (it can take between 1 and 19 days to close a position). So, I had two approaches: either measure volatility at the beginning of the investment, or fix a maximum investment time (for example 5 days) and then measure volatility on the last day of the investment. Despite the differences the results are similar, and I will present the former. As you can see from the graph below, there's a correlation between the return of the RSI(2) strategy and the volatility of the last 20 days.
However, I would attribute this correlation to the fact that the return (profit or loss) tends to be larger when volatility is higher. This thought is supported by the R^2, which was only 0.1. You can see return ranges against volatility ranges on the next graph. As in the first example above, I can't see hard evidence of the correlation. A volatility filter can slightly improve the return of an RSI(2) strategy, but the improvement is not significant. In the future I will run the same test on a pairs trading strategy (another flavor of mean reverting strategies).
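The RSI(2) entry and exit rules described in the post can be sketched as follows. This is my illustration of the standard RSI formula in Python; the author's own implementation (presumably in R, given the venue) is not shown in the post:

```python
def rsi(closes, period=2):
    """Simple (non-smoothed) RSI over the last `period` price changes:
    RSI = 100 - 100 / (1 + avg_gain / avg_loss)."""
    changes = [b - a for a, b in zip(closes, closes[1:])]
    recent = changes[-period:]
    gains = sum(c for c in recent if c > 0)
    losses = -sum(c for c in recent if c < 0)
    if losses == 0:
        return 100.0        # all-up window: maximal RSI by convention
    return 100.0 - 100.0 / (1.0 + gains / losses)

def entry_signal(closes):
    """The post's entry rules: long below 10, short above 90."""
    r = rsi(closes)
    if r < 10:
        return "long"
    if r > 90:
        return "short"
    return "flat"

print(entry_signal([100, 99, 98]))    # two down days -> RSI 0   -> long
print(entry_signal([100, 101, 102]))  # two up days   -> RSI 100 -> short
```

Exits (indicator crossing back above 70 or below 30) and the GARCH volatility filter would sit on top of this in the same style.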
{"url":"http://www.r-bloggers.com/mean-reverting-strategies-and-volatility/","timestamp":"2014-04-19T22:41:05Z","content_type":null,"content_length":"38271","record_id":"<urn:uuid:34e3ddc8-4713-4ca9-bc6b-5ecfbf9e7ae7>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Need some clarification on a formula October 18th 2011, 05:58 AM #1 Senior Member Apr 2010 Need some clarification on a formula In the discrete case, is the summing notation used here interpreted such that I have to sum with regard to the sample space of Y, and then sum the result of that with respect to the sample space of X? Or do I consider both sample spaces and just sum once? The former sounds like a lot of work. Re: Need some clarification on a formula The sum is ultimately over the set of pairs $\{(x,y)\mid x\in S_X,y\in S_Y\}$. You can imagine this set as a matrix. It does not matter whether you sum it rows first or columns first; you have to compute $g(x,y)p_{XY}(x,y)$ for each pair $(x,y)$ and then add them all together. Re: Need some clarification on a formula I see, thanks. October 18th 2011, 06:25 AM #2 MHF Contributor Oct 2009 October 18th 2011, 06:31 AM #3 Senior Member Apr 2010
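The answer above (one term g(x,y)·p(x,y) per cell of the matrix, added in any order) can be made concrete with a small made-up joint pmf; the numbers here are illustrative only:

```python
# Hypothetical sample spaces and joint pmf p_XY (sums to 1).
S_X = [0, 1]
S_Y = [0, 1, 2]
p = {(0, 0): 0.1, (0, 1): 0.2, (0, 2): 0.1,
     (1, 0): 0.2, (1, 1): 0.1, (1, 2): 0.3}

def expectation(g):
    """E[g(X,Y)] = sum over all pairs (x, y) of g(x, y) * p(x, y).
    Summing rows first or columns first makes no difference."""
    return sum(g(x, y) * p[(x, y)] for x in S_X for y in S_Y)

print(expectation(lambda x, y: x * y))  # E[XY] = 1*1*0.1 + 1*2*0.3 = 0.7
```

The nested generator is the "sum over the matrix" from the reply: it is a single pass over all pairs, not two separate sums to keep track of by hand.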
{"url":"http://mathhelpforum.com/advanced-statistics/190698-need-some-clarification-formula.html","timestamp":"2014-04-21T06:31:10Z","content_type":null,"content_length":"35329","record_id":"<urn:uuid:42596d31-fe23-426e-bdac-436192663df2>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
Granny shot

Addendum: In the face of Professor Pi's extraordinary mathematics, I have been forced to reconsider my write-up. I think Ben Franklin said it best when he said that the greatest tragedy in the world is when a beautiful theory is murdered by a gang of brutal facts. This node has earned me a nice chunk of XP though, and reworked it may yet prove feasible (there is a little bit of historical data that wasn't made completely impotent). So for posterity's sake I'll leave my unedited version here. Besides, what good is a counterpoint without its point?

The genealogy of the granny shot is lost to me, and hours of tedious research have done little to alleviate this situation. It was likely developed by a rather athletic individual before the evolution of basketball into a national pastime. The granny shot is an underhand throw of a basketball at the hoop. The name was derived from the resemblance of the shooter to a hunched-over old hag. It is often considered that it should only be used by children and special people. However, contrary to popular belief, the granny shot is scientifically the superior way to shoot a free throw. There are several reasons for this:

• Trajectory: A standard NBA-approved basketball has a circumference between 29 and 30 inches (for the non-prodigy or calculator-deprived, that is a diameter of about 9.40 inches). The hoop itself has a diameter of 18 inches. In other words, one circle is quite a bit bigger than the other. Now, imagine the trajectory of the standard free throw. It starts at the player's hand and goes in a low-curve arc towards the basket. Imagine the hoop from the ball's perspective. It is entering at a very slight angle. Instead of aiming for a circle with an 8.60-inch margin for error, it is aiming for an ellipse with a much narrower margin. A granny shot, however, goes up and down at an arc with a much higher curve. It goes more down than forward, making our basketball-cam see more circle than ellipse.
The greater margin consistently grants greater accuracy on free throws. Note: The highest margin of error would come from an extremely high arc. However, with more power applied, accuracy would suffer, undermining the advantage.

• Spin: Much like the English on a pool ball, the spin on a basketball affects how it will enter the basket. The common overhead free throw puts a backspin on the ball; the granny shot puts frontspin on it. Imagine the ball hitting the front of the hoop. With backspin, it hits, bounces back, and misses. With frontspin, however, it goes forward--into the basket! If it hits the backboard, the arc of descent will cause it to drop directly into the basket after impact. The arc and English of the free throw will bounce the ball back to you. No point. You've let down the team.

• Muscle: For anyone who has ever lived a non-sedentary lifestyle, it is common knowledge that the biceps (which flex the arm) are stronger than the triceps (which straighten it). With the normal free throw, the arms are straightened while shooting. It requires more force, which results in less detail and subtlety in those ever-sensitive motor neurons. With the granny shot, you are using your strongest arm muscle. Actually getting the ball to the hoop is not a concern of those pythons. That leaves you to concentrate on more important things, like actually scoring a point.

• Control: Almost every professional basketball player shoots in the classic style: one hand behind the ball, one hand steadying it, over the head. To represent why this is inefficient, try to balance a golf ball on your index finger, while using your other index finger to steady it. Then try to balance it while holding it with both fingers. It is much easier to control the ball when both hands are on it. Control means points.

• Sight: A lot of things can go wrong while trying to make a basket. Having the ball in your line of sight can really help your shot. You can see the ball from start to finish.
Any minor adjustments can be made, resulting in that precious free throw.

The granny shot should not normally be used in regular play, because people will have their arms up, attempting to block your shot. The jump shot gets over these obstacles; the granny shot is a poor idea in this case. Now, some people may offer the repartee of "Well, if it's so cool, why isn't EVERYONE doing it?" Well, basketball is a spectator sport, particularly in the United States. The sportsmen are showmen as well. The granny shot is associated with children and old people. It's a blow to their pride to use it. However, there have been actual NBA players who used the granny shot. As a matter of fact, former record holder Rick Barry retired with a .900 free throw percentage using what NBA.com described as his "odd, outdated underhand style." There was even a time when he missed only 9 free throws in an entire season. He is ranked the 15th top scorer of the NBA and 6th of the ABA. Those are some impressive numbers. It's all about the physics.

Rist, Curtis. DISCOVER MAGAZINE, "The Physics of Foul Shots: Underhanded Achievement"
NBA.com, NBA Legends: http://www.nba.com/history/barry_bio.html?_requestid=7052
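The trajectory argument can be made quantitative: seen along the ball's line of approach, the 18-inch rim foreshortens to an ellipse whose minor axis is 18·sin(θ), where θ is the entry angle measured from horizontal. This sketch uses the article's 9.40-inch ball and 18-inch rim; the function names are mine:

```python
import math

RIM = 18.0    # rim diameter, inches
BALL = 9.40   # ball diameter, inches

def effective_opening(theta_deg):
    """Minor axis of the rim as seen along a descent at angle theta
    from horizontal: steeper entry means more circle, less ellipse."""
    return RIM * math.sin(math.radians(theta_deg))

def margin(theta_deg):
    """Clearance left over for the ball at a given entry angle."""
    return effective_opening(theta_deg) - BALL

# Minimum entry angle at which the ball fits through at all:
min_angle = math.degrees(math.asin(BALL / RIM))
print(round(min_angle, 1))    # about 31.5 degrees
print(round(margin(45), 2))   # flattish arc: roughly 3.3 inches to spare
print(round(margin(70), 2))   # high, granny-style arc: roughly 7.5 inches
```

A straight-down drop (θ = 90°) recovers the full 8.60-inch margin the article quotes, which is the sense in which a higher arc "sees more circle than ellipse."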
{"url":"http://everything2.com/title/Granny+shot","timestamp":"2014-04-19T19:58:08Z","content_type":null,"content_length":"40322","record_id":"<urn:uuid:a53b0a88-e1d5-4503-8a3e-aa9fcc364335>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
H.S. Lahman on OO and MDA

A state machine is an implementation of the mathematical notion of a finite state automaton. In a finite state automaton an arbitrary sequence of input symbols is read sequentially. The automaton consists of individual states and the state changes each time a symbol is processed. The number of possible states is finite and the change of states is known as a transition. The new state is determined by evaluating a state transition function:

new state = F(current state, current symbol)

Note that the new state is solely determined by the identity of the current state and the input symbol. Since the input symbols are arbitrary, the new state is also arbitrary once one is in a given state. Similarly, there is no way to deduce what the previous state was based solely on being in the current state. (BTW, there is nothing to prevent the new state from being the same as the current state in a transition.)

While this formalism provides a sound basis, it is not directly very useful for software development unless something else happens besides transitioning from one state to another. So for software we elaborate a bit on the notion of a finite state automaton by introducing the notion of a state action that does something when one transitions into a new state. So in a software context a state machine is a finite state automaton with the following elements.

State. A state is a condition where some set of rules and policies apply. In an OO context those rules and policies will be extracted from the problem space and they represent intrinsic behavior responsibilities of the underlying problem space entity.

Transition. A transition is a valid change from one state to another. In general one cannot transition from the current state to any arbitrary state; typically only one or a few state changes are valid.
In an OO context the valid transitions represent constraints on the sequence in which various state rules and policies may be executed during the life of the object.

Event. An event is equivalent to an input symbol in a finite state automaton. In an OO context an event is a message with a specific identity and, optionally, a packet of data that will be processed by the state action. Only the event identity is relevant to evaluating the state transition function.

Action. The state action executes the rules and policies that are appropriate for the state whenever there is a transition into the state. That is true even if the previous state happened to be the same. (Transitions back to the same state are known as reflexive transitions.) In the OO context a state action is a method.

In an OO context we map an entity's behavior responsibilities into a single object state machine during the process of abstraction. That object state machine is known as the object life cycle. Each state action represents a method that encapsulates a single, logically indivisible behavior responsibility at the level of abstraction of the object's class. In an OO context the events that trigger transitions are almost always generated outside the receiving state machine. Those events represent collaborations with other objects in the subsystem or external stimuli.
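The four elements listed above (states, events, a transition function, and actions executed on entry) can be sketched as a small class. The turnstile example is my own illustration, not the author's:

```python
class StateMachine:
    """Minimal FSM: new_state = F(current_state, event), with an action
    executed on every transition INTO a state, including reflexive
    transitions back into the same state."""

    def __init__(self, initial, transitions, actions):
        self.state = initial
        self.transitions = transitions  # (state, event) -> new state
        self.actions = actions          # state -> entry action

    def dispatch(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            # Only one or a few state changes are valid from any state.
            raise ValueError(f"invalid event {event!r} in state {self.state!r}")
        self.state = self.transitions[key]
        self.actions[self.state]()      # run the entry action

log = []
turnstile = StateMachine(
    initial="locked",
    transitions={("locked", "coin"): "unlocked",
                 ("unlocked", "push"): "locked",
                 ("locked", "push"): "locked"},   # a reflexive transition
    actions={"locked": lambda: log.append("lock"),
             "unlocked": lambda: log.append("unlock")},
)
for event in ["push", "coin", "push"]:
    turnstile.dispatch(event)
print(turnstile.state, log)  # locked ['lock', 'unlock', 'lock']
```

Note that the first "push" is a reflexive transition: the state does not change, yet the entry action still runs, which is exactly the point made about reflexive transitions above.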
{"url":"http://pathfinderpeople.blogs.com/hslahman/object_state_machines/","timestamp":"2014-04-19T22:52:55Z","content_type":null,"content_length":"37078","record_id":"<urn:uuid:08153dbf-c195-4d9a-a8f0-390029eab7a3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
Palatine Science Tutor

Find a Palatine Science Tutor

...My economics background includes completing advanced 400-level coursework across macroeconomics and microeconomics and such graduate-level coursework as Time-Series Econometrics and Cross-Section Econometrics. While earning my BA in Economics with Honors from the College of William and Mary, I t...
57 Subjects: including biostatistics, ESL/ESOL, philosophy, GED

...I hope that I can be of service to anyone who requires aid with these disciplines. They offer life-long skills by offering problem-solving techniques and numerical tools. I have a background in peer tutoring from when I was in school, helping in both Physics and Math. This was my major in college.
16 Subjects: including physics, algebra 1, algebra 2, calculus

...I am currently teaching math and science in a 6th grade setting. I work with a wide range of abilities. With over twenty years of experience I will be able to foster a curriculum that not only ties into the Common Core but can close any gaps your child might have had in the past.
19 Subjects: including zoology, elementary math, geology, astronomy
{"url":"http://www.purplemath.com/Palatine_Science_tutors.php","timestamp":"2014-04-18T21:18:15Z","content_type":null,"content_length":"23985","record_id":"<urn:uuid:78e42dd2-a668-4780-b568-ec3ecafc2f46>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Parts of a square
A lesson plan for grades 3–4 Mathematics

Students investigate the ways shapes can be divided into equal pieces with one or two cuts. It provides a review of the following vocabulary terms: square, triangle, and rectangle; congruent, one-half, and one-fourth. The other lessons in this unit build on this introductory lesson. Illuminations provides helpful activity sheets and resources and detailed directions for completing the activities.

Students will:
• Explore ways to divide a square into two equal parts with a single cut.
• Explore ways to divide a square into four equal parts.
• Use geometric and fractional terms to name the parts.

NCTM Standards and Expectations:
• Identify, compare, and analyze attributes of two- and three-dimensional shapes and develop vocabulary to describe the attributes.
• Develop understanding of fractions as parts of unit wholes, as parts of a collection, as locations on number lines, and as divisions of whole numbers.
• Recognize and generate equivalent forms of commonly used fractions, decimals, and percents.
• Common Core State Standards
□ Mathematics (2010)
☆ Grade 3
○ Number & Operations—Fractions
■ 3.NOF.1 Understand a fraction 1/b as the quantity formed by 1 part when a whole is partitioned into b equal parts; understand a fraction a/b as the quantity formed by a parts of size 1/b.
■ 3.NOF.3 Explain equivalence of fractions in special cases, and compare fractions by reasoning about their size. Understand two fractions as equivalent (equal) if they are the same size, or the same point on a number line. Recognize and generate simple equivalent...
☆ Grade 4
○ Number & Operations—Fractions
■ 4.NOF.1 Explain why a fraction a/b is equivalent to a fraction (n × a)/(n × b) by using visual fraction models, with attention to how the number and size of the parts differ even though the two fractions themselves are the same size. Use this principle to...
North Carolina Curriculum Alignment Mathematics (2004) Grade 3 • Goal 1: Number and Operations - The learner will model, identify, and compute with whole numbers through 9,999. □ Objective 1.05: Use area or region models and set models of fractions to explore part-whole relationships. ☆ Represent fractions concretely and symbolically (halves, fourths, thirds, sixths, eighths). ☆ Compare and order fractions (halves, fourths, thirds, sixths, eighths) using models and benchmark numbers (zero, one-half, one); describe comparisons. ☆ Model and describe common equivalents, especially relationships among halves, fourths, and eighths, and thirds and sixths. ☆ Understand that the fractional relationships that occur between zero and one also occur between every two consecutive whole numbers. ☆ Understand and use mixed numbers and their equivalent fraction forms. Grade 4 • Goal 1: Number and Operations - The learner will read, write, model, and compute with non-negative rational numbers. □ Objective 1.03: Solve problems using models, diagrams, and reasoning about fractions and relationships among fractions involving halves, fourths, eighths, thirds, sixths, twelfths, fifths, tenths, hundredths, and mixed numbers.
{"url":"http://www.learnnc.org/lp/external/5532","timestamp":"2014-04-21T15:00:11Z","content_type":null,"content_length":"13391","record_id":"<urn:uuid:0991d51b-6f8c-482c-9c53-748c22fda03c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
The Internal Monoidal Product

As we’re talking about enriched categories, we’re always coming back to the monoidal category $\mathcal{V}$. This has an underlying category $\mathcal{V}_0$, which is then equipped with a monoidal product — an ordinary functor $\otimes:\mathcal{V}_0\times\mathcal{V}_0\rightarrow\mathcal{V}_0$. But as usual we don’t want to work with ordinary categories and functors unless we have to. Luckily, we can turn this monoidal product into a $\mathcal{V}$-functor between $\mathcal{V}$-categories: $\mathrm{Ten}:\mathcal{V}\otimes\mathcal{V}\rightarrow\mathcal{V}$. Here, $\mathrm{Ten}$ refers to “tensor product”. On objects we do the same thing as before — $\mathrm{Ten}(X,Y)=X\otimes Y$ — because the objects of the $\mathcal{V}$-category $\mathcal{V}$ are the same as those of the ordinary category $\mathcal{V}_0$. But now we have to consider how this functor should act on the hom-objects. So we recall that we define the internal hom-functor as $\hom_\mathcal{V}(X,Y)=Y^X$, using the closed structure on $\mathcal{V}$. So to have a $\mathcal{V}$-functor we need morphisms $\mathrm{Ten}:\hom_{\mathcal{V}\otimes\mathcal{V}}((X_1,Y_1),(X_2,Y_2))\rightarrow\hom_\mathcal{V}(X_1\otimes Y_1,X_2\otimes Y_2)$. On the left we defined the hom-object for the product $\mathcal{V}$-category as $\hom_\mathcal{V}(X_1,X_2)\otimes\hom_\mathcal{V}(Y_1,Y_2)$, which is then defined as $X_2^{X_1}\otimes Y_2^{Y_1}$. On the right we have the exponential $(X_2\otimes Y_2)^{X_1\otimes Y_1}$. But by the closure adjunction an arrow $X_2^{X_1}\otimes Y_2^{Y_1}\rightarrow(X_2\otimes Y_2)^{X_1\otimes Y_1}$ is equivalent to an arrow $(X_2^{X_1}\otimes Y_2^{Y_1})\otimes(X_1\otimes Y_1)\rightarrow(X_2\otimes Y_2)$. Now we can just swap around the factors on the left to get $(X_2^{X_1}\otimes X_1)\otimes(Y_2^{Y_1}\otimes Y_1)$, and then inside each set of parentheses we can use the evaluation morphism we get from the closure adjunction, leaving us with $X_2\otimes Y_2$.
Putting together the swap and the evaluations we get the arrow we want. And then the closure adjunction flips this to the morphism we needed to define the monoidal product on hom-objects. The underlying ordinary functor $\mathrm{Ten}_0$ of the $\mathcal{V}$-functor $\mathrm{Ten}$ is the old monoidal product $\otimes$ again. On objects we already have the same action, so we need to check that the underlying function of the morphism $\mathrm{Ten}:X_2^{X_1}\otimes Y_2^{Y_1}\rightarrow(X_2\otimes Y_2)^{X_1\otimes Y_1}$ is the same as the function $\otimes:\hom_{\mathcal{V}_0}(X_1,X_2)\times\hom_{\mathcal{V}_0}(Y_1,Y_2)\rightarrow\hom_{\mathcal{V}_0}(X_1\otimes Y_1,X_2\otimes Y_2)$. We already know that the underlying set of $B^A$ is $\hom_{\mathcal{V}_0}(A,B)$ and the cartesian product of hom-sets underlies the monoidal product of hom-objects, so we at least know that the underlying source and target objects are correct. So what’s the underlying function? We have the arrow $\mathrm{Ten}:X_2^{X_1}\otimes Y_2^{Y_1}\rightarrow(X_2\otimes Y_2)^{X_1\otimes Y_1}$ and we need to produce a function $\mathrm{Ten}_0:\hom_{\mathcal{V}_0}(\mathbf{1},X_2^{X_1})\times\hom_{\mathcal{V}_0}(\mathbf{1},Y_2^{Y_1})\rightarrow\hom_{\mathcal{V}_0}(\mathbf{1},(X_2\otimes Y_2)^{X_1\otimes Y_1})$. In each of these hom-sets we can use the closure adjunction to get a function $\hom_{\mathcal{V}_0}(X_1,X_2)\times\hom_{\mathcal{V}_0}(Y_1,Y_2)\rightarrow\hom_{\mathcal{V}_0}(X_1\otimes Y_1,X_2\otimes Y_2)$. But this is clearly the function $(f,g)\mapsto f\otimes g$ for the ordinary tensor product. In light of this tight relationship between $\mathrm{Ten}$ and $\otimes$, I’ll usually just write $\otimes$ for each. Again, when I don’t specify whether I’m talking about the ordinary or the enriched functor I’ll default to the enriched version.
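In the cartesian-closed case $\mathcal{V}=\mathbf{Set}$, the closure adjunction used throughout the post is ordinary currying: an arrow $A\otimes B\rightarrow C$ is a function from pairs, and it corresponds to a function $A\rightarrow C^B$. A Python sketch of flipping back and forth, plus the underlying function $(f,g)\mapsto f\otimes g$ of $\mathrm{Ten}_0$ (the function names are my own):

```python
def curry(f):
    """Hom(A x B, C) -> Hom(A, C^B): the closure adjunction in Set."""
    return lambda a: lambda b: f((a, b))

def uncurry(g):
    """Hom(A, C^B) -> Hom(A x B, C): the inverse direction."""
    return lambda ab: g(ab[0])(ab[1])

def tensor(f, g):
    """Underlying function of Ten on morphisms, acting componentwise
    on pairs: (f, g) maps to the function (x, y) -> (f(x), g(y))."""
    return lambda xy: (f(xy[0]), g(xy[1]))

add = lambda ab: ab[0] + ab[1]
print(curry(add)(2)(3))                              # 5
print(uncurry(curry(add))((2, 3)))                   # 5
print(tensor(lambda x: x + 1, lambda y: 2 * y)((4, 5)))  # (5, 10)
```

The evaluation morphism of the adjunction is `uncurry(lambda g: g)`, applying a function-value to an argument, which is the "evaluation" step used twice in the construction of $\mathrm{Ten}$ above.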
Formal Groups and Applications
AMS Chelsea Publishing

This book is a comprehensive treatment of the theory of formal groups and its numerous applications in several areas of mathematics. The seven chapters of the book present basics and main results of the theory, as well as very important applications in algebraic topology, number theory, and algebraic geometry. Each chapter ends with several pages of historical and bibliographic summary. One prerequisite for reading the book is an introductory graduate algebra course, including certain familiarity with category theory.

Readership: Graduate students and research mathematicians interested in formal groups and their applications in other areas of mathematics.

2012; 573 pp; hardcover
Volume: 375
List Price: US$72
Member Price:
Order Code: CHEL/
Re: st: Endogenous count variable in a binary model

From: "Leda Inga" <ledainga@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Endogenous count variable in a binary model
Date: Tue, 27 May 2008 18:46:20 -0500

Thanks Viktor. I've read the qvf help and understand that it allows for different family distributions and that it has support for instrumental variables. Nevertheless, I still have some doubts:

1. In previous emails (http://www.stata.com/statalist/archive/2005-01/msg00464.html), it has been said that qvf cannot be used for endogeneity when running probit/logit models (which is my case). Is this also true for endogeneity caused by omitted variables instead of simultaneity?

2. I've read the qvf help and the paper written about it (http://www.stata-journal.com/sjpdf.html?articlenum=st0049) and haven't found what estimation method is used for the instrumentation. I think that it is OLS, since the endogenous variables in the examples provided in both mentioned sources are continuous. If this is the case, I wouldn't be using an appropriate model for my endogenous variable, since it is count data. Is there a way of handling an endogenous (by omitted variables) count variable?

3. What would be the procedure if the endogenous count variable also enters the model with a squared term?

2008/5/27 Viktor Slavtchev <viktor.slavtchev@uni-jena.de>:
> perhaps you can use -qvf-
> search qvf
> findit qvf
> hth
> viktor
> Leda Inga wrote:
>> Hi,
>> I'm running a binary regression with survey data and I'm interested in
>> the effect of a count explanatory variable (called CP) which might
>> have a diminishing impact. Theoretically both variables, the dependent
>> and the explanatory variable of my interest, could be determined by
>> some other factors which can't be controlled and so go into the error
>> terms.
>> My objectives are: 1) Test if CP has a significant effect on the
>> probability of occurrence of the event I'm studying, 2) If so, test if
>> the effect is diminishing, based on the significance of a quadratic
>> term (CP^2), and finally 3) Know at which point the effect of CP
>> reaches its peak.
>> Since ivprobit doesn't allow for a quadratic term and is for
>> continuous data, I didn't use it. Instead I ran a count data model,
>> saved the predicted values and generated a new variable equal to the
>> square of these. Then I ran a binary regression including both the
>> predicted values of CP and the square of them. But I'm not sure if
>> this estimation procedure is correct and if I'm really getting
>> consistent betas and standard errors.
>> Just to give some details, CP takes values from 0 to 20, has a mean of
>> 7.48 and a variance of 12.8.
>> I would really appreciate any help,
>> Leda
>> *
>> * For searches and help try:
>> * http://www.stata.com/support/faqs/res/findit.html
>> * http://www.stata.com/support/statalist/faq
>> * http://www.ats.ucla.edu/stat/stata/
August 30th 2009, 08:58 PM

(a) If the maximum acceleration that is tolerable for passengers in a subway train is 1.74 m/s^2 and subway stations are located 846 m apart, what is the maximum speed a subway train can attain between stations? (b) What is the travel time between stations? (c) If a subway train stops for 22.0 s at each station, what is the maximum average speed of the train, from one start-up to the next?

I solved for the max speed, but then I noticed the 22 s part of the problem and I am now very confused and unsure of how it works into the rest of the problem.

August 30th 2009, 09:30 PM

We assume that there is a mad subway driver at work ... (Rofl)

1. The subway is accelerated over the first 423 m and must be decelerated over the following 423 m. According to $d = \dfrac12 \cdot a \cdot t^2$ you'll get

$423\ m = \dfrac12 \cdot 1.74\ \frac m{s^2} \cdot t^2~\implies~t = 22.05\ s$

According to $v = a \cdot t$ the maximum speed of the train is

$v_{max} = 38.367\ \frac ms = 138.1\ \frac{km}h$

(Did I mention that the driver is mad?)

2. According to the results at 1. the train needs 44.1 s from one station to the next.

3. With a 22 s stop at the station, the elapsed time from one stop to the next is 66 s, and the total distance is 846 m, thus the average speed is:

$v_{average} = \dfrac{846\ m}{66\ s} = 12.82\ \frac ms = 46.1\ \frac{km}h$

Obviously $v_{max} \approx 3 \cdot v_{average}$
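The reply's arithmetic can be reproduced in a few lines of Python (423 m is half of the 846 m station spacing; the 22 s dwell time comes from part (c)):

```python
import math

a = 1.74        # maximum tolerable acceleration, m/s^2
d = 846.0       # station spacing, m
dwell = 22.0    # stop time at each station, s

# (a) accelerate over the first 423 m, decelerate over the last 423 m
t_half = math.sqrt(2 * (d / 2) / a)   # time for each half, from d = (1/2) a t^2
v_max = a * t_half                    # peak speed reached at the midpoint

# (b) travel time between stations
t_travel = 2 * t_half

# (c) average speed from one start-up to the next
v_avg = d / (t_travel + dwell)

print(round(v_max, 2), round(t_travel, 1), round(v_avg, 2))   # 38.37 44.1 12.8
```

The thread's 12.82 m/s comes from first rounding the elapsed time to 66 s; with the unrounded 66.1 s the average is 12.80 m/s, and the peak speed is then very nearly (though not exactly) three times the average.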
GMAT Math: Can you divide by a variable?

Question #1: In the equation, can you divide both sides by x?

Question #2: In the equation (x – 3)(x – 5) = (2x + 1)(x – 5), can you divide both sides by (x – 5)?

Question #3:

Question #4:

Dividing by a variable or by an algebraic expression

The short answer is: NO. You see, it's mathematically illegal to divide by zero, and if you don't know the value of the variable, then you could be breaking the law without knowing it. Ask any judge — not knowing that you're breaking the law generally is not an excuse that holds up very well in court. In much the same way, not knowing whether you are dividing by zero, because you are dividing by an unknown, is just as bad as dividing by zero directly.

What do you do instead? Well, there are two alternatives. One method is: instead of dividing by the variable, factor it out. For example, with Question #1:

If the product of two or three or more factors equals zero, this means one of the factors must equal zero. Here, either x = 0 or (x + 3) = 0, which leads to solutions of x = 0 and x = –3.

The second method is to break the problem into two cases, one in which the variable or expression does equal zero, and one in which it doesn't. Treat the two cases separately. For example, in Question #2:

Case I: let's consider the case in which (x – 5) = 0. Well, if this equaled zero, the equation would be true, so this is a solution. One solution is x = 5.

Case II: let's consider the case in which (x – 5) ≠ 0, that is, the case in which x ≠ 5. Well, now we are guaranteed that (x – 5) is not equal to zero, so dividing both sides by this expression is now perfectly legal, and this leads to the simple equation x – 3 = 2x + 1, which has a solution of x = –4.

Thus, the overall solutions to this problem are x = 5 and x = –4.

Canceling a variable or expression

Similarly, the blanket answer to the cancelling question is also, NO!, for the same reason.
If there is any possibility that your variable or expression equals zero, then cancelling would be a 100% illegal activity. For Question #3 — for all values of x other than x = 0, for the entire continuous infinity of numbers on the number line excluding that solitary value, yes, the fraction 2x/5x would equal 2/5. BUT, when x = 0, that statement is no longer true — it is not even false — it is profoundly meaningless. It would be like asking whether the number 163 has a flavor — even posing the question implies a profound misconstruing of essential nature of what a number is. For this one, we would have to say — whatever the question is asking, whatever the question is doing, we have to recognize that x = 0 is not at all a possible value; having eliminating that value, we can proceed with whatever the rest of the problem may be. Question #4 is a particularly interesting one. First of all, as with the previous example, we run into major difficulties when the factor-to-be-cancelled equals zero. As with the other questions, we can’t just do a blanket cancelling with impunity. As with the previous two questions, we have to consider cases. If (x + 2) = 0, then the expression on the left becomes 0/0, profoundly meaningless, and any statement setting this equal to anything else would be sheer nonsense. If (x + 2) = 0, then nothing equals anything else in this problem, so x = –2 is definitely not a legitimate answer. Now, what happens in the case in which (x + 2) ≠ 0? Well, in this case, this factor does not equal zero, so it can be cancelled, which leads to: Now, we have the same expression on both sides of the equation. This means, these two sides would be equal for all values of x, as long as the expression is defined. This means the whole continuous infinity of the number line is legal, barring a couple isolated exceptions. 
One is x = –4, which makes the denominator equal zero — something divided by zero cannot equal anything, because something divided by zero has already departed from the realm in which any mathematically meaningful statement is possible. And, of course, as we discovered above, x = –2 cannot be a solution either. Therefore, the solution consists of all real numbers, the entire continuous infinity of the real number line, except for the values x = –4 and x = –2. Don’t divide by variables or by algebraic expressions. Don’t cancel by variables or by algebraic expressions. Always consider whether the factor by which you would want to divide could equal zero, and either factor it out or consider the process in separate cases.
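Both cautions can be checked numerically. The equation of Question #1 is not reproduced above, so the snippet assumes it was x^2 + 3x = 0 (consistent with the factorization x(x + 3) = 0 in the discussion); for Question #2 it uses the factor (x - 5) that the worked solution divides by:

```python
# Question #1 (assumed form): x**2 + 3*x = 0, i.e. x*(x + 3) = 0
q1_factored = [x for x in range(-10, 11) if x**2 + 3*x == 0]
q1_divided = [x for x in range(-10, 11) if x + 3 == 0]   # after dividing by x
print(q1_factored, q1_divided)   # [-3, 0] [-3]  (dividing by x loses x = 0)

# Question #2: (x - 3)(x - 5) = (2x + 1)(x - 5)
both_sides_equal = [x for x in range(-10, 11)
                    if (x - 3) * (x - 5) == (2 * x + 1) * (x - 5)]
after_dividing = [x for x in range(-10, 11) if x - 3 == 2 * x + 1]
print(both_sides_equal, after_dividing)   # [-4, 5] [-4]  (x = 5 is lost)
```

Factoring or case-splitting keeps every solution; dividing both sides silently throws one away.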
Latitude by Meridian Altitude

A. Latitude from Meridian Altitude Observations of the Sun

A meridian observation of the sun is among the simplest celestial observations to take and to calculate. It is somewhat surprising, therefore, that Clark doesn't mention taking an observation for the latitude at that camp until October 5, and when he does, he writes: "Latitude of this place from the mean of two observations is 46°34'56.3" North." That's it. No mention of the sun's altitude they observed nor the date they took the observations, though it might be supposed that the last observation was taken on the 5th. The average latitude the captains obtained lies about 4'55" (5 ½ miles) north from the south side of the current river junction.

It is almost certain that the captains used the sextant to take the two meridian observations while at Clearwater Canoe Camp. Although the captains did not give the sun's altitude for these two observations nor did they show their calculations, they had, since the mouth of the Ohio River, recorded several observations they made with the sextant. For these observations they gave both the observed altitude^1 and their calculated latitude.^2 Furthermore, while at Camp Dubois, Clark left many examples of his methodology and gave his calculated latitude. As it is likely that the captains used essentially the same methodology throughout the course of the expedition, it is possible to approximate what they might have observed at Clearwater Canoe Camp, even if it is just from the average.

The captains persistently made a mistake in calculating a latitude from an observation with the sextant while using the artificial horizon. This mistake was that they first divided the observed double altitude by two, then subtracted the full index error of the sextant (+8'45"), making all their calculated latitudes too high by 4'22 ½".
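The reduction sequence (spelled out in note 2 below) and the index-error slip just described can be sketched in code. The observation values here are placeholders chosen only to land near 46.5°N, not the captains' actual data; the point is purely the order of operations:

```python
IE = 8.75 / 60.0   # sextant index error, +8'45", in degrees

def corrected_altitude(double_alt, refraction, parallax, semidiameter,
                       correct_order=True):
    """Reduce an artificial-horizon double altitude of the sun's upper limb."""
    if correct_order:
        alt = (double_alt - IE) / 2     # remove the index error, then halve
    else:
        alt = double_alt / 2 - IE       # the captains' mistake: halve first
    return alt - refraction + parallax - semidiameter   # upper limb: subtract SD

def latitude(alt, declination):
    # algebraically subtract the declination, then subtract the result from 90
    return 90.0 - (alt - declination)

# placeholder observation: double altitude 78.51 deg, declination -4.6 deg
good = latitude(corrected_altitude(78.51, 0.018, 0.002, 0.266), -4.6)
bad = latitude(corrected_altitude(78.51, 0.018, 0.002, 0.266, False), -4.6)
print(round(good, 2), round((bad - good) * 60, 3))   # 46.5 4.375
```

Whatever numbers go in, the wrong order inflates the latitude by exactly half the index error, the 4'22.5" the article identifies.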
By correcting just that one error, the average latitude of Clearwater Canoe Camp recalculated from the average of their observations turns out to be 46°30'34" N — a difference of just 29" too far north (about ½ mile), not 4'55" too far north. The captains likely made other, lesser mistakes in their calculating methodology,^3 so it is not enough just to subtract 4'22 ½" from their latitude to have the latitude they should have obtained from the observation. Without the altitudes they observed, however, one can only suppose that their observations might have provided latitudes somewhat more accurate than what their calculated latitudes appear to show.

--Robert N. Bergantino, 11/04

1. Observed altitude: This is the angle in degrees above the horizon that an observer instrumentally measures for a celestial body. With the sextant the captains usually measured the sun's upper limb; with the octant, its lower limb.

2. Calculated latitude: The observed altitude is but a starting place to determine the latitude. Many "corrections" are needed before the observed altitude is turned into a latitude. Usually the observed altitude is corrected first for the instrument's index error. When an artificial horizon is used the resultant angle must be divided by two. Next, the effect of refraction is subtracted and the effect of parallax is added. Then the semidiameter of the sun or moon is added if the lower limb was observed, or subtracted if the upper limb was observed. The declination of the celestial body at the time of the observation must be determined and algebraically subtracted. When all the above have been done, the result has to be subtracted from 90°, which, finally, is the calculated latitude. (And this is one of the simpler mathematical operations involving celestial observations.)

3. Among these, the most important might be errors in determining the sun's semidiameter and the sun's declination from estimated longitude.

Comment 1.
In the calculation shown in the left-hand column, the double altitude of the sun's upper limb is what the captains should have observed to fit the index error, refraction, semidiameter, parallax, and declination given in the following steps.

Double Altitude: When taking a meridian altitude of the sun with a sextant or octant and using an artificial horizon, a ray from the sun that has been reflected from the index mirror to the horizon mirror and to the eye is matched with a ray that has been reflected from the artificial horizon through the horizon glass into the eye. This procedure gives the observer a true horizon on land, but doubles the true altitude of the object observed. This angle, therefore, must be divided by two to give the true observed altitude.

Comment 2. The calculation in the column on the right shows how the captains might have arrived at a latitude of 46°34'56" from the same "observed" double altitude. Using their method of calculating the latitude, and to fit the index error, refraction, semidiameter, parallax and declination, this would be the average of the two double altitudes actually observed.

Comment 3. In the third step in both calculations, note the phrase "refraction at 990 feet." Refraction is affected mainly by the density of the air above the observer. As one's altitude increases the air becomes less dense; thus refraction typically decreases with altitude. But temperature also controls the density of the air, and that must be factored in. A common formula for refraction is: ((983 x barometric pressure in inches) / (460 + temperature in °F)) x cotangent of the sun's altitude. By this formula, at an altitude of 990 feet above sea level and a temperature of 60°F, the refraction would be 1'05", whereas at sea level it would be 1'07.5". For most observations using a sextant or octant it isn't worthwhile to recalculate refraction for every observation. The tabulated refraction at sea level found in the Tables Requisite is adequate.
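The refraction formula quoted in Comment 3 is easy to put into code. Treating its output as seconds of arc, and assuming a barometric pressure of roughly 28.9 inches of mercury at 990 feet (with about 29.92 in at sea level) and a sun altitude near 40°, it reproduces the article's two figures:

```python
import math

def refraction_arcsec(pressure_inhg, temp_f, altitude_deg):
    # ((983 x pressure) / (460 + temperature F)) x cot(sun's altitude)
    cot = 1.0 / math.tan(math.radians(altitude_deg))
    return 983.0 * pressure_inhg / (460.0 + temp_f) * cot

r_990ft = refraction_arcsec(28.9, 60, 40)    # about 65" = 1'05"
r_sea = refraction_arcsec(29.92, 60, 40)     # about 67.4" = 1'07.5"
print(round(r_990ft, 1), round(r_sea, 1))
```

The pressure values and the 40° altitude are assumptions made to match the article's worked numbers; only the formula itself is taken from the text.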
Besides, Lewis and Clark had no way of measuring their altitude above sea level at any point, and would necessarily have used the Tables Requisite, which they carried in their portable library.

Funded in part by the Idaho Governor's Lewis and Clark Trail Committee.
petsc-3.4.4 2014-03-13

MATIS = "is" - A matrix type to be used for using the Neumann-Neumann type preconditioners. This stores the matrices in globally unassembled form. Each processor assembles only its local Neumann problem and the parallel matrix-vector product is handled "implicitly".

Operations Provided:
MatMult(), MatMultAdd(), MatMultTranspose(), MatMultTransposeAdd(), MatZeroEntries(), MatSetOption(), MatZeroRows(), MatZeroRowsLocal(), MatSetValues(), MatSetValuesLocal(), MatScale(), MatGetDiagonal(), MatSetLocalToGlobalMapping()

Options Database Keys:
-mat_type is - sets the matrix type to "is" during a call to MatSetFromOptions()

Notes:
Options prefix for the inner matrix are given by -is_mat_xxx
You must call MatSetLocalToGlobalMapping() before using this matrix type.
You can do matrix preallocation on the local matrix after you obtain it with MatISGetLocalMat()

See Also: PC, MatISGetLocalMat(), MatSetLocalToGlobalMapping()
Villa Rica ACT Tutor

Find a Villa Rica ACT Tutor

...I used both QuickBooks Premier and Enterprise versions for a major non-profit corporation, using Premier for single-office organizations and Enterprise for multiple-office organizations. I have used the export to Excel feature to design specialized Excel workbooks. I produced the Field Financia...
27 Subjects: including ACT Math, English, accounting, French

...These days I work via desktop or laptop with Windows-based OS predominantly, but increasingly find myself working across other platforms such as tablets and smart phones and with other OS including Droid and Mac OS and on cross-platform compatibility issues. After all these years from browser wa...
126 Subjects: including ACT Math, chemistry, English, biology

...I have tutored individuals from elementary school up to college level. I have tutored the child that is in elementary school in all subjects. I have tutored middle and high school students in
18 Subjects: including ACT Math, reading, geometry, algebra 1

...I am currently a graduate student in a joint program between Emory and Georgia Tech pursuing a PhD in biomedical engineering. I got my bachelor's from Vanderbilt in Nashville, TN, but I went to high school in Gwinnett County here in Atlanta. I was valedictorian of my high school, got a 35 on th...
17 Subjects: including ACT Math, chemistry, writing, physics

I am Georgia certified educator with 12+ years in teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
13 Subjects: including ACT Math, geometry, statistics, probability
[R-sig-ME] Conditional AIC with lmer

Kyle Edwards kedwards at ucdavis.edu
Wed Feb 13 19:28:11 CET 2008

Hi all,

I posted on this topic last week, but someone pointed out to me that my post was not in plain text. Apologies for that error.

With a colleague, I have been trying to implement the Conditional AIC described by Vaida and Blanchard 2005 Biometrika, "Conditional Akaike information for mixed-effects models". This quantity is derived in a way analogous to the AIC, but is appropriate for scenarios where one is interested in the particular coefficient estimates for individual random effects. The formula for the asymptotic CAIC is given as

-2*log(likelihood of observed values, conditional on ML estimates of fixed
effects and empirical Bayes estimates of random effects) + 2*K

where K = rho + 1, and rho = "effective degrees of freedom" = trace of the hat matrix mapping predicted values onto observed values.

After some thinking and some off-list advice, we have decided that appropriate code for CAIC is

CAIC <- function(model) {
    sigma <- attr(VarCorr(model), 'sc')
    observed <- attr(model, 'y')
    predicted <- fitted(model)
    cond.loglik <- sum(dnorm(observed, predicted, sigma, log=TRUE))
    rho <- hatTrace(model)
    p <- length(fixef(model))
    N <- nrow(attr(model, 'X'))
    K.corr <- N*(N-p-1)*(rho+1)/((N-p)*(N-p-2)) + N*(p+1)/((N-p)*(N-p-2))
    CAIC <- -2*cond.loglik + 2*K.corr
    CAIC
}

where K.corr is the finite-sample correction for K, for ML model fits.

I am posting this so that 1) this code can be of use to any other souls in the statistical wilderness trying to do model selection with mixed models, and 2) wiser minds can point out any errors in our approach.
Euclid's Elements Book I, Proposition 5: (Pons Asinorum)

In isosceles triangles the angles at the base are equal to one another, and, if the equal straight lines are produced further, then the angles under the base will be equal to one another.

Click the figure below to see the illustration. Read more at: Euclid's Elements Book I, Proposition 5

1 comment:

Neha Prabhune, December 26, 2010 at 4:32 AM

Let alpha = a and beta = b. As ABC is an isosceles triangle, base angles are congruent.
AB = BC implies angle BAC = angle BCA, hence a = a' ...(1)
angle DAC + angle CAB = 180 (angles in a linear pair)
angle ECA + angle BCA = 180 (angles in a linear pair)
angle DAC + angle CAB = angle ECA + angle BCA
b + a = a' + b'
b = a - a' + b'
b = b' ...(by (1))
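The comment's angle-chase can be checked on explicit coordinates. The triangle below is an arbitrary isosceles example with AB = BC (as in the comment), with the equal sides produced through the base to D and E:

```python
import math

def angle_at(q, p, r):
    """Angle at vertex q between rays q->p and q->r, in degrees."""
    v1 = (p[0] - q[0], p[1] - q[1])
    v2 = (r[0] - q[0], r[1] - q[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

B, A, C = (0, 2), (-1, 0), (1, 0)         # AB = BC = sqrt(5)
D = (2 * A[0] - B[0], 2 * A[1] - B[1])    # BA produced beyond A
E = (2 * C[0] - B[0], 2 * C[1] - B[1])    # BC produced beyond C

base_angles = angle_at(A, B, C), angle_at(C, B, A)    # a and a'
under_angles = angle_at(A, D, C), angle_at(C, E, A)   # b and b'
print(base_angles, under_angles)
```

Both pairs come out equal, and each base angle plus the corresponding angle under the base sums to 180°, matching the linear-pair step in the comment.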
MATHEMATICA BOHEMICA, Vol. 128, No. 2, pp. 137-146 (2003)

On the $\sigma$-finiteness of a variational measure

Diana Caponetti, Dipartimento di Matematica, Universita della Calabria, I-87036 Arcavacata di Rende (CS), Italy, e-mail: caponetti@unical.it

Abstract: The $\sigma$-finiteness of a variational measure, generated by a real valued function, is proved whenever it is $\sigma$-finite on all Borel sets that are negligible with respect to a $\sigma$-finite variational measure generated by a continuous function.

Keywords: variational measure, $H$-differentiable, $H$-density

Classification (MSC2000): 26A39, 26A24
Understanding General Relativity

A Troubled Man: How is it that gravity is the result of the difference in time dilation between one's hand and head?

That's an important part that your friend did not explain! It comes down to what constitutes an "inertial path", i.e., the path in spacetime taken by an object with some initial motion that has no forces on it (gravity is not a force here). The key point about inertial motion is that the motion in the "spatial direction" is perpendicular to the motion in the "time direction" (you move in both space and time, so both count in inertial motion). That's the part that wasn't explained to you -- if time is going faster at your head than at your feet, it forces your motion in x,t space (x is the vertical coordinate of your height above the ground) to curve, so you change x coordinate, not just t coordinate as you might normally imagine for an object that starts out stationary and has no forces on it. That's because the direction that is "perpendicular to time" is curving downward if time is going faster (in a relative way) at your head than at your feet. This is also called "curvature of spacetime", because one can give a geometric description of it. There are actually many different ways to think about what is happening there; they are all just "stories" being told from the perspective of various different choices of coordinate system (so I would object to your friend's language that this is what gravity is "really doing" -- if you don't like that picture, there are many others that sound completely different on the surface).

A Troubled Man: How can a body in free fall be exerting a force? Isn't the surface of the earth an accelerated frame, hence it is the surface accelerating upwards towards the body in free fall?

Your friend is not talking about a body in free fall, but rather the situation where you are standing on the ground.
The surface of the Earth is indeed an accelerated frame, and so is the person standing on it (they can feel the force on their feet). That force is needed to make you follow a noninertial path-- the inertial path is the one where you are in free fall, and if you are in free fall, you exert no forces on the Earth at all (though you do slightly curve the spacetime the Earth is moving through).
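To put a number on "time goes faster at your head than at your feet": in the weak-field approximation the fractional rate difference between two clocks separated vertically by h is gh/c^2. For a person roughly 1.8 m tall (both g and h are assumed round values here, not figures from the thread):

```python
g = 9.81       # m/s^2, surface gravity
h = 1.8        # m, assumed head-to-foot separation
c = 2.998e8    # m/s, speed of light

fractional_rate_difference = g * h / c**2
print(fractional_rate_difference)   # roughly 2e-16
```

A head clock gains about 2 parts in 10^16 on a foot clock; tiny, but it is exactly this gradient that bends an initially stationary worldline downward.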
What is the potential difference across

May 9th 2008, 08:53 AM #1

What is the potential difference across (i) AB (ii) BC?

(i) $\frac{200}{400} \times 6 = 3V$ Is this right?

No, it's wrong: the two $100\Omega$ resistors are in parallel, not in series. Hence $R_{BC}$ is given by $\frac{1}{R_{BC}}=\frac{1}{100}+\frac{1}{100}$.

nope i am afraid you are wrong there. first you need to find the effective resistance of the resistors in parallel (the 100 ohm resistors). Let's say we call them $R_1$ and $R_2$:

$\frac{1}{R_{eff}} = \frac{1}{R_1} + \frac{1}{R_2} = \frac{1}{100} + \frac{1}{100} = \frac{1}{50}$

$R_{eff} = 50\ \Omega$

Then you can use the voltage divider rule to obtain the voltages. For example, the voltage across AB:

$V_{AB} = \frac{R_{AB}}{R_{Total}} V_{IN} = \frac{200}{200+50} \cdot 6 = 4.8V$

Try doing the same for V across BC.

Here is an extra long explanation on how to do it. When the resistors are connected in parallel, the current at the splitting node divides. The amount of division depends on how much each branch resists. Since both are 100 ohms, the current will split equally. Now let's say current 'I' is flowing through the loop; then at the branch each hundred-ohm resistor carries 'I/2' current. Since the potential difference is IR, a current 'I/2' through 'R' produces the same potential difference as a current 'I' flowing through a resistance 'R/2'. Since R here is 100 ohm, R/2 is 50 ohm. This means the potential drop across AB is 200I and BC is 50I. The overall drop is therefore 250I. But the overall drop is equal to the applied voltage, which is 6. So 250I = 6, and this means 200I = 4.8V and 50I = 1.2V. This means the potential drop across AB is 4.8V and BC is 1.2V.
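The corrected arithmetic from the replies, checked in Python:

```python
def parallel(*resistors):
    # equivalent resistance of resistors connected in parallel
    return 1.0 / sum(1.0 / r for r in resistors)

V_in = 6.0
R_ab = 200.0
R_bc = parallel(100.0, 100.0)       # two 100-ohm resistors -> 50 ohm
R_total = R_ab + R_bc

V_ab = R_ab * V_in / R_total        # voltage divider rule
V_bc = R_bc * V_in / R_total
print(R_bc, V_ab, V_bc)             # 50.0 4.8 1.2
```

The original attempt of 200/400 x 6 = 3 V fails because it treats the two 100-ohm resistors as a 200-ohm series pair instead of a 50-ohm parallel pair.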
May 9th 2008, 09:14 AM #4
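The arithmetic in the replies above is easy to check mechanically. A minimal sketch in Python, using the resistor values and the 6 V supply from the thread (the helper names are mine, not from the thread):

```python
def parallel(*rs):
    """Equivalent resistance of resistors in parallel: 1/R = sum(1/R_i)."""
    return 1.0 / sum(1.0 / r for r in rs)

def divider(r_target, r_total, v_in):
    """Voltage divider rule: V = (R_target / R_total) * V_in."""
    return r_target / r_total * v_in

r_ab = 200.0                   # series resistor between A and B (ohms)
r_bc = parallel(100.0, 100.0)  # two 100-ohm resistors in parallel -> 50 ohms
v_in = 6.0

v_ab = divider(r_ab, r_ab + r_bc, v_in)  # -> 4.8 V
v_bc = divider(r_bc, r_ab + r_bc, v_in)  # -> 1.2 V
```

The two drops add back up to the 6 V supply, which is a quick sanity check on any divider calculation.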
Hairer, Martin - Department of Mathematics, Courant Institute of Mathematical Sciences, New York University
• Exponential Mixing for a Stochastic PDE Driven by Degenerate Noise
• Periodic homogenization with an interface
• A Theory of Hypoellipticity and Unique Ergodicity for Semilinear Stochastic PDEs
• Ergodicity of hypoelliptic SDEs driven by fractional Brownian motion
• Modulation Equations: Stochastic Bifurcation in Large Domains
• An Introduction to Stochastic PDEs (July 24, 2009)
• Stationary solutions for a model of amorphous thin-film growth
• The Annals of Probability 2011, Vol. 39, No. 2, 648-682
• Convergence of Markov Processes (August 2, 2010)
• Spectral Properties of Hypoelliptic Operators (J.-P. Eckmann)
• GniCodes - Matlab Programs for Geometric Numerical Integration
• Uniqueness of the Invariant Measure for a Stochastic PDE Driven by Degenerate Noise
• Exponential Mixing Properties of Stochastic PDEs Through Asymptotic Coupling
• Coupling Stochastic PDEs
• Analysis of SPDEs Arising in Path Sampling Part I: The Gaussian Case
• Analysis of SPDEs Arising in Path Sampling Part II: The Nonlinear Case
• A Bayesian Approach to Data Assimilation (A. M. Stuart)
• Hypoellipticity in Infinite Dimensions
• Singular perturbations to semilinear stochastic heat equations
• Rough Stochastic PDEs (April 29, 2011)
• A Spatial Version of the Itô-Stratonovich Correction (Martin Hairer and Jan Maas)
• Stationary distributions for diffusions with inert drift (Richard F. Bass, Krzysztof Burdzy, Zhen-Qing Chen and Martin Hairer)
• Ergodic properties of a class of non-Markovian ... (June 26, 2007)
• Regularity of Laws and Ergodicity of Hypoelliptic SDEs Driven by Rough Paths
• Periodic Homogenization for Hypoelliptic Diffusions (March 29, 2004)
• Yet another look at Harris' ergodic theorem for Markov chains
• Signal Processing Problems on Function Space: Bayesian Formulation, Stochastic PDEs and ...
• Université de Genève, Faculté des Sciences, Section de Physique, Professeur J.-P. Eckmann
• Ergodic Properties of Markov Processes (March 9, 2006)
• Ergodic theory for Stochastic PDEs (July 10, 2008)
• Communications in Mathematical Physics manuscript No. (will be inserted by the editor)
• Introduction to Hypoelliptic Schrödinger type operators
• Multiscale expansion of invariant measures for SPDEs (September 30, 2003)
• Invariant measures for stochastic PDE's in unbounded ... (J.-P. Eckmann)
• The Annals of Probability 2007, Vol. 35, No. 5, 1950-1977
• On Malliavin's proof of Hörmander's theorem (March 10, 2011)
• Malliavin calculus for highly degenerate 2D stochastic Navier-Stokes equations
• Multiscale Analysis for SPDEs with Quadratic Nonlinearities
• Ergodicity of Stochastic Differential Equations Driven by Fractional Brownian Motion
• Slow energy dissipation in chains of anharmonic oscillators
• Asymptotic coupling and a general form of Harris' theorem with applications to stochastic delay equations
• From ballistic to diffusive behavior in periodic ... (July 16, 2007)
• Ergodicity of the 2D Navier-Stokes Equations with Degenerate Stochastic Forcing
• How hot can a heat bath get? (October 30, 2008)
• Spectral Gaps in Wasserstein Distances and the 2D ... (submitted to the Annals of Applied Probability)
• Sampling Conditioned Diffusions (Martin Hairer, Andrew Stuart, Jochen Voss)
• On the controllability of conservative systems (Martin Hairer)
• Non-asymptotic mixing of the MALA algorithm (August 20, 2010)
• A simple framework to justify linear response theory (February 17, 2010)
• A version of Hörmander's theorem for the fractional Brownian motion
• Ergodic theory for infinite-dimensional stochastic processes (Martin Hairer)
• Ergodic properties of highly degenerate 2D stochastic Navier-Stokes equations
• Solving the KPZ equation (September 30, 2011)
• Triviality of the 2D stochastic Allen-Cahn equation (February 14, 2012)
• Stochastic PDEs with multiscale structure (February 14, 2012)
• Spectral Gaps for a Metropolis-Hastings Algorithm in Infinite Dimensions
Finding Critical regions
June 8th 2010, 01:14 PM #1
May 2009
7. It is known from past records that 1 in 5 bowls produced in a pottery have minor defects. To monitor production a random sample of 25 bowls was taken and the number of such bowls with defects was recorded.
(a) Using a 5% level of significance, find critical regions for a two-tailed test of the hypothesis that 1 in 5 bowls have defects. The probability of rejecting, in either tail, should be as close to 2.5% as possible.
The given critical regions are "$X \leq 1$" or "$X \geq 10$".
Basically, I do not understand how "$X \leq 1$" can be one of the critical regions. With $X \sim B(25, 0.2)$, I am thinking that $P(X \leq 1) = 0.0274$, which goes over 2.5%; hence it shouldn't be the critical region? By the way, the probability for "$X \geq 10$" is 0.0173. Could you please explain why one of the critical regions is not $X = 0$, which has probability 0.0038, instead of $X \leq 1$? Thank you.
Because the question specifies "as close to 2.5% as possible": $P(X = 0)$ is 0.38%, whereas $P(X \leq 1)$ is 2.74%, and this value is closer to 2.5%. If the question had not said "as close to 2.5% as possible", then you would look for a value in the tables that is less than or equal to 0.025, which in this case would be $P(X = 0)$.
June 9th 2010, 12:14 PM #2
Super Member Sep 2008
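The tail probabilities quoted in the thread are easy to verify exactly. A small sketch in Python (the helper names are mine; `math.comb` needs Python 3.8+) that scans B(25, 0.2) for the cut-offs whose tail probability is closest to 2.5%:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ B(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    """P(X <= k)."""
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

n, p, target = 25, 0.2, 0.025

# Lower critical region X <= a: pick a whose tail probability is closest to 2.5%
a = min(range(n + 1), key=lambda k: abs(binom_cdf(k, n, p) - target))
# Upper critical region X >= b: tail probability is 1 - P(X <= b - 1)
b = min(range(1, n + 1), key=lambda k: abs(1 - binom_cdf(k - 1, n, p) - target))

print(a, round(binom_cdf(a, n, p), 4))          # -> 1 0.0274
print(b, round(1 - binom_cdf(b - 1, n, p), 4))  # -> 10 0.0173
```

This reproduces both figures from the thread: P(X <= 1) = 0.0274 and P(X >= 10) = 0.0173, confirming the "as close to 2.5% as possible" reading.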
MathGroup Archive: October 2003 [00053] [Date Index] [Thread Index] [Author Index] Non-negative Least Squares (NNLS) algorithm • To: mathgroup at smc.vnet.net • Subject: [mg43770] Non-negative Least Squares (NNLS) algorithm • From: mdw at ccu1.auckland.ac.nz (Michael Woodhams) • Date: Fri, 3 Oct 2003 02:28:54 -0400 (EDT) • Sender: owner-wri-mathgroup at wolfram.com A little while ago, I looked at this group and the web for an implementation of the NNLS algorithm, but all I found was a post in this group from 1997 from somebody also looking for an implementation, so I had to do it myself. The problem: Given a matrix A (usually more rows than columns) and vector f, find vector x such that: 1) All elements of x are non-negative 2) subject to contraint (1), the Euclidian norm (2-norm) of vector (A.x-f) is minimized. (Note that the unconstrained problem - find x to minimize (A.x-f) - is a simple application of QR decomposition.) The NNLS algorithm is published in chapter 23 of Lawson and Hanson, "Solving Least Squares Problems" (Prentice-Hall, 1974, republished SIAM, 1995) Some preliminary comments on the code: 1) It hasn't been thoroughly tested. 2) I only have a few months experience programming Mathematica (but lots of programming experience in general) so there are probably style issues. I am happy to receive constructive criticism. 3) There is some ugliness where a chunk of code is repeated twice to be able to have a simple 'while' loop, rather than one that breaks out in the middle. Arguably breaking the middle of the loop would be better than repeating the code. 4) I've left in a bunch of debugging statements. 5) I've cut-and-pasted this from my notebook, which is a bit ugly with the "\[LeftDoubleBracket]" instead of "[[" etc. 6) I haven't paid much attention to efficiency - e.g. I multiply by "Inverse[R]" rather than trying to do a backsubstitution (R is a triangular matrix.) 
(* Coded by Michael Woodhams, from algorithm by Lawson and Hanson, *)
(* "Solving Least Squares Problems", 1974 and 1995. *)

bitsToIndices[v_] := Select[Table[i, {i, Length[v]}], v[[#]] == 1 &];

debug[args___] := Null; (* stub; redefine as Print to trace the algorithm *)

NNLS[A_, f_] := Module[
   {x, zeroed, w, t, Ap, z, q, \[Alpha], i, zeroedSet, positiveSet,
    toBeZeroed, compressedZ, Q, R},
   (* Use delayed evaluation so that these are recalculated on the fly as needed: *)
   zeroedSet := bitsToIndices[zeroed];
   positiveSet := bitsToIndices[1 - zeroed];
   (* Init x to vector of zeros, same length as a row of A *)
   debug["A=", MatrixForm[A]];
   x = 0 A[[1]];
   debug["x=", x];
   (* Init zeroed to vector of ones, same length as x *)
   zeroed = 1 - x;
   debug["zeroed=", zeroed];
   w = Transpose[A].(f - A.x);
   debug["w=", w];
   While[zeroedSet != {} && Max[w[[zeroedSet]]] > 0,
    debug["Outer loop starts."];
    (* t = the index of the largest element of w, *)
    (* subject to the constraint that t is zeroed *)
    t = Position[w zeroed, Max[w zeroed], 1, 1][[1, 1]];
    debug["t=", t];
    zeroed[[t]] = 0;
    debug["zeroed=", zeroed];
    (* Ap = the columns of A indexed by positiveSet *)
    Ap = Transpose[Transpose[A][[positiveSet]]];
    debug["Ap=", MatrixForm[Ap]];
    (* Minimize (Ap . compressedZ - f) by QR decomp *)
    {Q, R} = QRDecomposition[Ap];
    compressedZ = Inverse[R].Q.f;
    (* Create vector z with 0 in zeroed indices and compressedZ entries elsewhere *)
    z = 0 x;
    z[[positiveSet]] = compressedZ;
    debug["z=", z];
    While[Min[z] < 0,
     (* There is a wart here: x can have zeros, giving infinities or
        indeterminates. They don't matter, as we ignore those elements
        (not in positiveSet), but it will produce warnings. *)
     debug["Inner loop start"];
     (* Find the smallest x[[q]]/(x[[q]] - z[[q]]) such that: *)
     (* q is not zeroed, z[[q]] < 0 *)
     \[Alpha] = Infinity;
     For[q = 1, q <= Length[x], q++,
      If[zeroed[[q]] == 0 && z[[q]] < 0,
       \[Alpha] = Min[\[Alpha], x[[q]]/(x[[q]] - z[[q]])];
       debug["After trying index q=", q, " \[Alpha]=", \[Alpha]];
       ]; (* if *)
      ]; (* for *)
     debug["\[Alpha]=", \[Alpha]];
     x = x + \[Alpha](z - x);
     debug["x=", x];
     toBeZeroed = Select[positiveSet, Abs[x[[#]]] < 10^-13 &];
     debug["toBeZeroed=", toBeZeroed];
     zeroed[[toBeZeroed]] = 1; x[[toBeZeroed]] = 0;
     (* Duplicated from above *)
     (* Ap = the columns of A indexed by positiveSet *)
     Ap = Transpose[Transpose[A][[positiveSet]]];
     debug["Ap=", MatrixForm[Ap]];
     (* Minimize (Ap . compressedZ - f) by QR decomp *)
     {Q, R} = QRDecomposition[Ap];
     compressedZ = Inverse[R].Q.f;
     (* Create vector z with 0 in zeroed indices and compressedZ entries elsewhere *)
     z = 0 x;
     z[[positiveSet]] = compressedZ;
     debug["z=", z];
     ]; (* end inner while loop *)
    x = z;
    debug["x=", x];
    w = Transpose[A].(f - A.x);
    debug["w=", w];
    ]; (* end outer while loop *)
   x (* return the solution *)
   ]; (* end module *)
And some final comments: Don't 'reply' to the e-mail address in the header of this post - it is old and defunct, and not updated because of spammers. I am at massey.ac.nz, and my account name is m.d.woodhams. (3 year post-doc appointment, so by the time you read this, that address might also be out of date.) I put this code in the public domain - but I'd appreciate it if you acknowledge my authorship if you use it. I will be writing a bioinformatics paper about a "modified closest tree" algorithm that uses this, and I might (not decided about this) give a link to Mathematica code, which will include the above.
If this happens, you could cite that paper. This software comes with no warranty, etc.; your warranty is that you have every opportunity to test the code yourself before using it. If you are reading this long after it was posted, I may have an improved version by then. Feel free to enquire by e-mail.
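For readers who want the same algorithm outside Mathematica, here is a hedged pure-Python sketch of the Lawson-Hanson active-set idea (the function names `solve_ls` and `nnls` are mine, not from the post, and production code would normally call a library routine such as SciPy's `nnls` instead):

```python
def solve_ls(A, b):
    """Unconstrained least squares via the normal equations (A^T A) x = A^T b,
    solved by Gaussian elimination with partial pivoting. Fine for tiny demos."""
    m, n = len(A), len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
    v = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= fac * M[col][c]
            v[r] -= fac * v[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (v[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def nnls(A, f, tol=1e-12):
    """Lawson-Hanson NNLS: minimize ||A.x - f|| subject to x >= 0."""
    m, n = len(A), len(A[0])
    P = set()          # "passive" set: indices allowed to be positive
    x = [0.0] * n
    while True:
        resid = [f[k] - sum(A[k][j] * x[j] for j in range(n)) for k in range(m)]
        w = [sum(A[k][j] * resid[k] for k in range(m)) for j in range(n)]  # gradient
        cand = [j for j in range(n) if j not in P and w[j] > tol]
        if not cand:                    # optimality (KKT) conditions met
            return x
        P.add(max(cand, key=lambda j: w[j]))
        while True:
            cols = sorted(P)
            Ap = [[A[k][j] for j in cols] for k in range(m)]
            z = [0.0] * n
            for j, val in zip(cols, solve_ls(Ap, f)):
                z[j] = val
            if min(z[j] for j in P) > tol:  # trial solution feasible: accept it
                x = z
                break
            # step from x toward z until the first passive variable hits zero
            alpha = min(x[j] / (x[j] - z[j]) for j in P if z[j] <= tol)
            x = [x[j] + alpha * (z[j] - x[j]) for j in range(n)]
            for j in list(P):
                if x[j] <= tol:
                    P.discard(j)
                    x[j] = 0.0
            if not P:
                break
```

On the small problem A = [[1,0],[0,1],[1,1]], f = [2,-1,1] the unconstrained minimizer is (2, -1); the non-negativity constraint moves the answer to (1.5, 0), which the sketch reproduces.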
Norm on the Lp space May 20th 2010, 04:01 PM #1 Mar 2010 Norm on the Lp space In showing that $\Vert .\Vert{p}$ defines a norm on $L_{p} (p\geq1)$, I'm not so sure about checking the axiom $\Vert f \Vert_{p}=0$ iff $f=0$. A textbook mentions defining an equivalence relation on $L_{p}$ space but I don't really get what that's all about. Don't take what I am about to say too seriously, wait for another member to verify. It defines a seminorm on the space. The common thing to do in this case is to define an equivalence relation on your space by modding out by the kernel of $\|\cdot\|_p$ Drexel is right except that $L^p$ is already defined as the equivalence class of functions under the relation f ~ g iff $f(x)=g(x)$ almost everywhere. The main reason for this is that if you are doing measure theory (e.g. probability) you essentially don't care what happens on null sets. You will notice that people will abuse the notation a lot, by for example saying let $f \in L^p$ and consider f(x), but there are ways around this. May 20th 2010, 04:10 PM #2 May 20th 2010, 06:15 PM #3
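For reference, the standard resolution of the asker's question (textbook material, not from the thread itself) can be stated in two lines. On the measurable functions with $\int |f|^p \, d\mu < \infty$, the map $\Vert \cdot \Vert_p$ is only a seminorm, because

$$\Vert f \Vert_p = \Big( \int |f|^p \, d\mu \Big)^{1/p} = 0 \iff f = 0 \ \text{almost everywhere},$$

and a nonzero function can vanish almost everywhere. So one defines the equivalence relation $f \sim g$ iff $f = g$ almost everywhere, and takes $L^p$ to be the set of equivalence classes. On classes, $\Vert [f] \Vert_p := \Vert f \Vert_p$ is well defined, and now $\Vert [f] \Vert_p = 0$ iff $[f] = [0]$, so the axiom the asker was checking holds and $\Vert \cdot \Vert_p$ is a genuine norm.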
Global solutions to the compressible Euler equations with geometrical structure Results 1 - 10 of 17 - in R^{3+1}, Comm. Math. Phys. "... : A simple two-fluid model to describe the dynamics of a plasma is the Euler-Poisson system, where the compressible electron fluid interacts with its own electric field against a constant charged ion background. The plasma frequency produced by the electric field plays the role of `mass' term to the ..." Cited by 25 (2 self) Add to MetaCart : A simple two-fluid model to describe the dynamics of a plasma is the Euler-Poisson system, where the compressible electron fluid interacts with its own electric field against a constant charged ion background. The plasma frequency produced by the electric field plays the role of `mass' term to the linearized system. Based on this `Klein-Gordon' effect, we construct global smooth irrotational flows with small velocity for the electron fluid. 1 Introduction A plasma is a collection of moving electrons and ions. At high frequencies, a simple-fluid model for a plasma breaks down. The electrons and ions tend to move independently, and charge separations occur. The greater inertia of the ions implies that they will be unable to follow the rapid fluctuation of the fluid; only electrons partake in the motion. The ions merely provide a uniform background of positive charge. One of the simplest two-fluid models for a plasma is the Euler-Poisson system $\partial_t n + \nabla \cdot (nu) = 0$ (1), $\partial_t u + u \ldots$ - In: Handbook of Mathematical Fluid Dynamics, 2002 "... Abstract. Some recent developments in the study of the Cauchy problem for the Euler equations for compressible fluids are reviewed. The local and global well-posedness for smooth solutions is presented, and the formation of singularity is exhibited; then the local and global well-posedness for disco ..." Cited by 14 (4 self) Add to MetaCart Abstract.
Some recent developments in the study of the Cauchy problem for the Euler equations for compressible fluids are reviewed. The local and global well-posedness for smooth solutions is presented, and the formation of singularity is exhibited; then the local and global well-posedness for discontinuous solutions, including the BV theory and the L ∞ theory, is extensively discussed. Some recent developments in the study of the Euler equations with source terms are also reviewed. - Comm. Math. Sci "... Abstract. The system of balance laws describing a compressible fluid flow in a nozzle forms a non-strictly hyperbolic systems of partial differential equations which, also, is not fully conservative due to the effect of the geometry. First, we investigate the general properties of the system and det ..." Cited by 8 (3 self) Add to MetaCart Abstract. The system of balance laws describing a compressible fluid flow in a nozzle forms a non-strictly hyperbolic systems of partial differential equations which, also, is not fully conservative due to the effect of the geometry. First, we investigate the general properties of the system and determine all possible wave combinations. Second, we construct analytically the solutions of the Riemann problem for any values of the left- and right-hand states. For certain values we obtain up to three solutions whose structure is carefully described here. In some range of Riemann data, no solution exists. When three solutions are available, then exactly one of them contains two stationary waves which are super-imposed in the physical space. We include also numerical plots of these solutions. 1. - SIAM J. APPLIED MATH , 2004 "... We introduce a new potential interaction functional and use it to define a new Glimm-type functional that bounds the total variation of the conserved quantities at time t>0by the total variation at time t = 0+ in Glimm approximate solutions of a general resonant nonlinear balance law. ..." 
Cited by 5 (1 self) Add to MetaCart We introduce a new potential interaction functional and use it to define a new Glimm-type functional that bounds the total variation of the conserved quantities at time t>0by the total variation at time t = 0+ in Glimm approximate solutions of a general resonant nonlinear balance law. - J. Hyperbolic Differ. Equ "... Abstract. We propose and prove convergence of a front tracking method for scalar conservation laws with source term. The method is based on writing the single conservation law as a 2 × 2 quasilinear system without a source term, and employ the solution of the Riemann problem for this system in the f ..." Cited by 3 (2 self) Add to MetaCart Abstract. We propose and prove convergence of a front tracking method for scalar conservation laws with source term. The method is based on writing the single conservation law as a 2 × 2 quasilinear system without a source term, and employ the solution of the Riemann problem for this system in the front tracking procedure. In this way the source term is processed in the Riemann solver, and one avoids using operator splitting. Since we want to treat the resonant regime, classical arguments for bounding the total variation of numerical solutions do not apply here. Instead compactness of a sequence of front tracking solutions is achieved using a variant of the singular mapping technique invented by Temple [69]. The front tracking method has no CFL–condition associated with it, and it does not discriminate between stiff and non-stiff source terms. This makes it an attractive approach for stiff problems, as is demonstrated in numerical examples. In addition, the numerical examples show that the front tracking method is able to preserve steady–state solutions (or achieving them in the long time limit) with good accuracy. 1. "... We study the Riemann problem for isothermal flow of a gas in a thin pipe with a kink in it. 
This is modeled by a 2 × 2 system of conservation laws with a Dirac measure sink term concentrated at the location of the bends in the pipe. We show that the Riemann problem for this system of equations al ..." Cited by 2 (0 self) Add to MetaCart We study the Riemann problem for isothermal flow of a gas in a thin pipe with a kink in it. This is modeled by a 2 × 2 system of conservation laws with a Dirac measure sink term concentrated at the location of the bends in the pipe. We show that the Riemann problem for this system of equations always has a unique solution, given an extra condition relating the speeds on both sides of the kink. Furthermore, we study the related problem where the flow is perturbed by a continuous addition of momentum at distinct points. Under certain conditions we show that also this Riemann problem has a unique solution. August 19, 1998 0. , 1997 "... The purpose of this paper is to investigate the wave behavior of hyperbolic conservation laws with a moving source. When the speed of the source is close to one of the characteristic speeds of the system, nonlinear resonance occurs and instability may result. We will study solutions with a single tr ..." Cited by 1 (0 self) Add to MetaCart The purpose of this paper is to investigate the wave behavior of hyperbolic conservation laws with a moving source. When the speed of the source is close to one of the characteristic speeds of the system, nonlinear resonance occurs and instability may result. We will study solutions with a single transonic shock wave for a general system $u_t + f(u)_x = g(x, u)$. Suppose that the i-th characteristic speed is close to zero. We propose the following stability criterion: $l_i \frac{\partial g}{\partial u} r_i < 0$ for nonlinear stability, $l_i \frac{\partial g}{\partial u} r_i > 0$ for nonlinear instability. Here $l_i$ and $r_i$ are the i-th normalized left and right eigenvectors of $\frac{df}{du}$ respectively.
By using a variation of the Glimm scheme and studying the evolution of the single transonic shock wave, we prove the existence of solutions and verify the asymptotic stability (or instability). 1 Introduction In this paper, we study the time-asymptotic stability and instability of solutions to systems of conservation laws with a moving sourc... , 2008 "... We construct stationary solutions to the barotropic, compressible Euler and Navier-Stokes equations in several space dimensions with spherical or cylindrical symmetry. For given Dirichlet data on a sphere or a cylinder we first construct smooth and radially symmetric solutions to the Euler equations ..." Cited by 1 (1 self) Add to MetaCart We construct stationary solutions to the barotropic, compressible Euler and Navier-Stokes equations in several space dimensions with spherical or cylindrical symmetry. For given Dirichlet data on a sphere or a cylinder we first construct smooth and radially symmetric solutions to the Euler equations in the exterior domain. On the other hand, stationary smooth solutions in the interior domain necessarily become sonic and can not be continued beyond a critical inner radius. We then use these solutions to construct entropy-satisfying shocks for the Euler equations in the region between two concentric spheres or cylinders. Next we construct smooth Navier-Stokes solutions converging to the previously constructed Euler shocks in the small viscosity limit. In the process we introduce a new technique for constructing smooth solutions, which exhibit a fast , 2003 "... We describe the generic solution of the Riemann problem near a point of resonance in a general 2x2 system of balance laws coupled to a stationary source. The source is treated as a conserved quantity in an augmented 3x3 system, and Resonance is between a nonlinear wave family and the stationary sou ..." 
Cited by 1 (0 self) Add to MetaCart We describe the generic solution of the Riemann problem near a point of resonance in a general 2x2 system of balance laws coupled to a stationary source. The source is treated as a conserved quantity in an augmented 3x3 system, and Resonance is between a nonlinear wave family and the stationary source. Transonic compressible Euler flow in a variable area duct, as well as spherically symmetric flow, are shown to be special cases of the general class of equations studied here.
Doylestown, PA SAT Math Tutor Find a Doylestown, PA SAT Math Tutor ...Making up for a tedious PowerPoint presentation by being an exceptional speaker is no longer required. PowerPoint is one of Microsoft's best programs. You will be amazed at how easy it will be to familiarize yourself with the various aspects of this program. 27 Subjects: including SAT math, calculus, statistics, geometry ...Just as many superior students with excellent grades are shocked and dismayed by disappointing scores, so too are talented writers frequently underwhelmed by their essay evaluations. These surprises, both common and predictable, result from a bizarre truth: the right approach for other academic ... 23 Subjects: including SAT math, reading, English, writing ...I stress literature, vocabulary and critical reading skills. I also work with students who have trouble putting their thoughts into essays. I have a unique perspective because I also tutor SAT 43 Subjects: including SAT math, English, reading, writing I have tutored high school, college students, and working adults for more than 1800 hours over 9 years (this includes more than 300 hours with WyzAnt). Subjects include all high school MATH courses and most college MATH courses, plus SAT Prep--Critical Reading, Writing, and Math, ACT Prep--Math, the... 17 Subjects: including SAT math, calculus, geometry, statistics ...I am proficient at teaching children at all levels of math. I will focus on enhancing a student's learning by showing them various approaches to different math problems and formulas. I will not only help students who may be struggling with math, but also help high-achievers who might be more advanced than their classmates. 
40 Subjects: including SAT math, English, writing, geometry
What is Zero Forcing (ZF) in Wireless Communications?
1. 20th March 2007, 03:12 #1
what is zero forcing
I am working on the MIMO BC; can anybody help me with the following: what is Zero Forcing (ZF) in wireless communications?
Arif Khan
2. 20th March 2007, 03:14 #2
Junior Member level 3 Join Date Mar 2006 9 / 9
zero forcing
Hi Arif, ZF means that when you have multiple users, your transmission is intended for one particular user and (ideally) zero transmission for the others. But in practice there is still interference for the other users when you consider the BC. I hope this will help you.
1 members found this post helpful.
3. 29th March 2007, 05:19 #3
Full Member level 1 Join Date Dec 2005 90 / 90
Re: What is Zero Forcing (ZF) in Wireless Communications?
ZF is just a detector which has the property that it inverts the channel (which introduces cross-talk); consequently, from each user's perspective there is no interference from the others, i.e. the interference is forced to zero. Turning to the MIMO BC, A. Goldsmith from Stanford has a series of papers on this topic.
Join Date Sep 2006 92 / 92
Re: What is Zero Forcing (ZF) in Wireless Communications?
The fundamental idea behind zero forcing lies in the zero-forcing filters used in wireless communication. Zero-forcing filters are used to minimize inter-symbol interference (ISI) and avoid cross-talk between users. When the input pulses are not properly pulse-shaped, they may overlap with each other, and this can result in ISI. Popular pulse-shaping techniques are raised-cosine pulse shaping, Gaussian pulse shaping, duobinary pulse shaping, etc. In raised-cosine pulse shaping the pulse attains zero at every multiple of 1/T (T = bit duration). So if another adjacent pulse is made to sit with its maximum at this point, then the two pulses are sent without much ISI. Normally some amount of noise is added at the zero point by the channel, so the value at the zero point is essentially no longer zero. Zero-forcing filters are used to null the value at the zero point, thereby restoring the possibility of sending another pulse with its maximum placed at this point and increasing the transmission rate.
1 members found this post helpful.
Newbie level 4 Join Date Apr 2007 1 / 1
Re: What is Zero Forcing (ZF) in Wireless Communications?
The term "zero forcing" in communications, particularly digital communication, means that you have an equalizer that takes an input signal coming from a distant place, contaminated with noise and distortion; if you process this signal without passing it through this equalizer, your system will not work, one of the many reasons being that the signal strength has deteriorated. So when we pass a digital signal through a zero-forcing equalizer, it detects the zero crossings of the incoming signal and in this way gets information about the phase and frequency of the incoming signal. And if there is also a regenerative repeater afterwards, as there is in digital communication, it uses the information provided by the zero-forcing equalizer to generate new pulses.
1 members found this post helpful.
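As a concrete illustration of the channel-inversion view described in post #3, here is a minimal noise-free sketch with toy numbers of my own (not from the thread): a 2x2 channel with cross-talk between two streams is inverted, so each stream sees no interference from the other.

```python
def invert_2x2(H):
    """Inverse of a 2x2 matrix; the zero-forcing detector is W = H^-1."""
    (a, b), (c, d) = H
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    """Matrix-vector product for plain nested lists."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Toy channel: off-diagonal terms are the cross-talk between the two streams
H = [[1.0, 0.4],
     [0.3, 1.0]]
x = [1.0, -1.0]      # transmitted symbols
y = matvec(H, x)     # received: each entry mixes both symbols

W = invert_2x2(H)    # zero-forcing detector
x_hat = matvec(W, y) # recovers [1.0, -1.0] exactly in the noise-free case
```

In a noisy channel the same inversion also amplifies noise in near-singular directions of H, which is the classic drawback of ZF relative to MMSE detection.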
Multi-level Pie Charts By: Jeff Clark Date: Thu, 27 Jul 2006 I promised a few months ago to describe something I call a Multi-level Pie Chart or Radial Treemap. I spent some time developing the idea as an alternative to the standard Treemap a few years ago before discovering that it had been done before. Despite the fact the idea has been around since 2000 it seems to be little known. I will illustrate the concept using survival data for the people on board the RMS Titanic which sank in 1912. The information I am starting from includes a record for each person on board, their gender (Male, Female), class (First, Second, Third, Crew), age (Adult, Child), and whether they survived (Yes,No). We can use simple pie charts to show the proportion of people in the various passenger classes or the relative proportion of Males and Females. A Multi-level Pie Chart lets us see the proportion for second-order breakdowns. How many First Class passengers were Female ? How many crew were Male ? See the figure below. I think it is fairly intuitive. The inner circle shows exactly the same pie chart we started with giving the proportions for Passenger Class. The second ring out from the center gives the data representing gender. The angular region representing crew is in yellow and is subdivided into Crew-Male and Crew-Female regions based on the gender proportion for all crew members. Similarly for the other passenger classes. The idea can be extended to as many levels as desired. The figure below illustrates this and also shows one potential problem for this type of layout - many of the 'slices' will be thin and difficult to clearly label. These images were created with an interactive tool that has a number of capabilities designed to alleviate this. One of these methods is to hide various segments to give more space to the remaining ones. This image hides everything but the Third class passengers and also uses a tooltip to show details on one particular segment. 
Another useful capability is to have colour carry important information. The graph below shows the survival rate using a green-red colour gradient. Bright green represents 100% survival and bright red is 0%. The nodes are sorted by survival rate as well. A number of things can be seen at a glance:
• Female survival rates were higher than male survival rates within every passenger type
• Survival rates by passenger class were in the order First, Second, Third, and finally Crew
• First class, female, adult passenger survival rate was close to 100% (actually 97.22%)
• Male children in third class berths had a survival rate comparable to crew (around 25%)
• Within almost all Class/Gender combinations, children had higher survival rates than adults
• The worst survival rate was for second class adult males (around 8%)
Thanks for the Link, Boing Boing!
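The wedge geometry the post describes can be computed directly from per-person records. Here is a minimal sketch (the toy records and the function name `radial_layout` are illustrative assumptions, not from the original tool):

```python
from collections import Counter

# Toy records of (class, gender), standing in for the Titanic data.
records = [("First", "Female"), ("First", "Male"), ("Crew", "Male"),
           ("Crew", "Male"), ("Third", "Male"), ("Third", "Female")]

def radial_layout(records, total_angle=360.0):
    """Return (start, extent) angles for a two-level pie chart:
    an inner ring split by class, and an outer ring that subdivides
    each class wedge by gender, so child wedges nest inside parents."""
    n = len(records)
    by_class = Counter(cls for cls, _ in records)
    by_pair = Counter(records)
    inner, outer = {}, {}
    start = 0.0
    for cls in sorted(by_class):
        extent = total_angle * by_class[cls] / n
        inner[cls] = (start, extent)
        sub_start = start
        for (c, g), count in sorted(by_pair.items()):
            if c == cls:
                sub_extent = total_angle * count / n
                outer[(c, g)] = (sub_start, sub_extent)
                sub_start += sub_extent
        start += extent
    return inner, outer

inner, outer = radial_layout(records)
print(inner["Crew"])  # the crew wedge: 2 of 6 records span 120 degrees
```

Any plotting library can then draw each `(start, extent)` pair as a wedge at the appropriate ring radius.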
{"url":"http://www.neoformix.com/2006/MultiLevelPieChart.html","timestamp":"2014-04-21T09:36:50Z","content_type":null,"content_length":"10581","record_id":"<urn:uuid:a98d7b33-3013-4748-a280-1497c3a91483>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Matheology § 203 Date: Feb 2, 2013 5:17 AM Author: fom Subject: Re: Matheology § 203 On 2/2/2013 4:02 AM, WM wrote: > On 2 Feb., 09:57, fom <fomJ...@nyms.net> wrote: >> On 2/1/2013 9:48 AM, WM wrote: >>> On 1 Feb., 16:35, William Hughes <wpihug...@gmail.com> wrote: >>>> Let P(n) be >>>> 0.111... is not the nth line >>>> of >>>> 0.1000... >>>> 0.11000... >>>> 0.111000... >>>> ... >>>> Clearly for every natural number n >>>> P(n) is true. >>>> This means there is no natural >>>> number m for which P(m) is true. >>>> It is not simply that we cannot find m, >>>> we know that m does not exist. >>> More. We know that P(n) = 0.111... = 1/0 does not exist as an >>> actually infinite sequence of 1's. >> Hmm.... >> As I watch you make these arguments, it occurs to me... >> What proof do you have that some sequence is not infinitely >> long? > Even with no regard to Tristram Shandy who disproves actual infinite, > we can say: The sequence for 1/9 = 0.111... cannot have indices that > differ from all indices of its finite approximations. So you cannot > distinguish 0.111... by looking at digits from its finite > approximations. And you cannot use it in any discourse because every > message is finite and needs an endoffile signal to be meaningful. > Concluding: The property that a sequence of digits does not end cannot > be obtained from its digits. (Remember, 0.111... is not a sequence of > digits but is only a rule to construct a sequence of digits. The rule > yields the sequence, but the sequence does not yield the rule.) Ok. So, you are introducing the kind of arguments used by Wittgenstein. Of course, Wittgenstein never gave a coherent explanation for classical mathematics. His criticisms, however, are easily seen to be forebears of much of the discrete mathematics that has become so important with the advent of computing technology. > That means in mathematics, understood as discourse, there is no > decimal representation of 1/9. 
What God may use for his mathematics is > not my problem. And, this points to modern pragmatics. The tradition of Russell, Carnap, Tarski, Quine, ... is classified as "ideal language theory" in contrast to "natural language theory." This is one of the disservices done by mathematics departments that have classes on foundational topics without a coherent curriculum surrounding those topics.
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8231079","timestamp":"2014-04-18T10:46:26Z","content_type":null,"content_length":"4351","record_id":"<urn:uuid:869b1117-43f0-4e91-90cd-9ab171f2bbd4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
100,000 digits of pi 11-22-2004 #46 Registered User Join Date Mar 2003

Zach L.
Not read the book, what does he say about it?
I guess you weren't replying to me, but the number Phi shows up in tons of places. I haven't read the Da Vinci Code, but the book I mentioned is all about where the number phi surfaces. Some examples:
- the construction of a perfect pentagram
- the relationship between adjacent chambers in a sea mollusk
- sunflower florets
- crystals of materials
- shapes of galaxies
- the pyramids and the Parthenon
- the Mona Lisa
Those are the instances that are listed on the back cover, so I haven't given much away. It's truly fascinating if you enjoy math, as it not only discusses these things but the book comes with an ample amount of proofs.

> - the Mona Lisa
How does that work?

"Da Vinci Code" makes an excellent read... But not sure if true Christians would like it... The ratio 1.618 (Phi):1 can be found on Mona Lisa's face (ratio of the height to the width of the face)...

A lot of classical art (Greek) had people/things with that proportion. There is a reason it got the name "Golden Ratio". It also shows up in the closed-form solution to the Fibonacci sequence. Don't know if the Da Vinci Code would hold up mathematically to a lot of the books I enjoy reading, though.
The word rap as it applies to music is the result of a peculiar phonological rule which has stripped the word of its initial voiceless velar stop.

In addition to Darkness's example, here are some practical examples (that Da Vinci himself discovered by digging up and measuring corpses): any person's height from feet to head divided by the measure from feet to navel, shoulder to fingertip divided by elbow to fingertip, leg to foot over knee to foot. Basically Dan Brown calls it the "Divine Proportion" (or something comparable) and details how it shows up practically everywhere in nature.
The number of female to male bees in any hive on earth yields a ratio of Phi, the measure of one rotation to the next on a snail shell is Phi, each spiral on a sea shell compared to a neighboring spiral is Phi, and perhaps the most interesting: as values grow larger in the Fibonacci sequence, the quotient of any two neighboring terms approaches Phi. Quite interesting stuff really. Tom Hanks may star in the film next year. That'll be great; it's really a fantastic book.

Zach L.
Don't know if DaVinci Code would hold up mathematically to a lot of the books I enjoy reading, though.
It's not a maths-related book... But an excellent thriller which makes reference to Da Vinci's work...

"Da Vinci Code" makes an excellent read... But not sure if true Christians would like it...
I dunno if that's true. I'm a hardcore Christian, and I loved the book. It's fiction, so it's not a big deal.

In addition to Darkness's example, here are some practical examples (that Da Vinci himself discovered by digging up and measuring corpses): any person's height from feet to head divided by the measure from feet to navel, shoulder to fingertip divided by elbow to fingertip, leg to foot over knee to foot.
Did you get this from the book, or is it actually fact? Because that's not possible.

While lucky's example isn't necessarily right for ALL people, it's actually the most basic idea of where the golden ratio comes from. This is what it stems from: take a line, and make a partition in it such that you have three measurements: the total length of the line, the length from the right side of the line to the partition (call this section A), then the length from the partition to the left side of the line (section B). Now you have two ratios. Ratio1 = the ratio of the length of the line to the larger partition. Ratio2 = the ratio of the larger partition to the smaller partition. When Ratio1 == Ratio2, it equals the golden ratio.
I believe this is similar to what lucky meant, more specifically: any person's height from feet to head divided by the measure from feet to navel. Last edited by Darkness; 11-22-2004 at 04:27 PM.

Did you get this from the book, or is it actually fact? Because that's not possible.

That's from the book. You should read the book.

I'd rather not if it presents BS like that as fact.

>> That'll be great; it's really a fantastic book.
you gotta be kidding me. Dan Brown is such a mediocre author. The idea was pretty cool, but Brown can't write worth sh!t.

>> Did you get this from the book, or is it actually fact? Because that's not possible.
this is indeed a fact. Pythagoras was the first one to see the ratio in the human body. There are many resources about this topic. edit:: golden ratio in the human body: http://goldennumber.net/body.htm Last edited by axon; 11-22-2004 at 05:58 PM. some entropy with that sink? entropysink.com there are two cardinal sins from which all others spring: Impatience and Laziness. - franz kafka

I've already laughed at this link on AIM, now I'm going to do it here.

Ummm... instead of laughing why do you disagree with it so much? After all it's just a generalization/approximation. Or better yet, do the measurements on your body and let us know the results. [edit] I do agree that the bottom part of that webpage takes it a little too far though with the whole 5 appendages/5 indexes/5 senses thing. [/edit] [edit2] Ok, I looked at the rest of that website and a lot of it is pretty sketchy/laughable, BUT the human body example has been well documented for hundreds of years. [/edit2] Last edited by PJYelton; 11-22-2004 at 07:19 PM.
And axon and I did do the measuring thing for me - I came out with a 1.68 ratio of my height to head -> fingertip and feet -> navel. That doesn't prove anything, though. Like I was telling axon, my ex-girlfriend had short legs, so she'd come up with some weird results. Also, my ratio isn't the golden ratio everyone seems to think everyone will come up with. You'd think with such precision in their numbers they could actually be accurate. What I'm saying is it's obvious that there are certain proportions the human body usually adheres to. There is no single number that works for every person, or even for one person. It's nonsense. 11-22-2004 #47 11-22-2004 #48 11-22-2004 #49 11-22-2004 #50 11-22-2004 #51 11-22-2004 #52 11-22-2004 #53 11-22-2004 #54 Registered User Join Date Mar 2003 11-22-2004 #55 11-22-2004 #56 11-22-2004 #57 11-22-2004 #58 11-22-2004 #59 11-22-2004 #60
{"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/58833-100-000-digits-pi-4.html","timestamp":"2014-04-20T00:13:16Z","content_type":null,"content_length":"110989","record_id":"<urn:uuid:23b2f09c-8a26-4e00-83db-6d1e0383384e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Narrow Search Earth and space science Now showing results 1-10 of 40

In this activity, students will examine line plots of NASA data and see that the sun heats up land, air, and water. Students will practice drawing conclusions based on graphed data of cloudy vs. clear sky observations. The lesson provides detailed procedures, related links and sample graphs, follow-up questions, extensions, and teacher notes. Designed for student use, MY NASA DATA LAS samples micro datasets from large scientific data archives, and provides structured investigations engaging students in exploration of real data to answer real world questions.

In this activity, students will read a color plot of Earth's absorption of the sun's radiation, and see that solar energy is unevenly distributed across the Earth's surface. The lesson provides detailed procedures, related links and sample graphs, follow-up questions, extensions, and teacher notes. Designed for student use, MY NASA DATA LAS samples micro datasets from large scientific data archives, and provides structured investigations engaging students in exploration of real data to answer real world questions.

In this data analysis activity, students interpret basic line plots of wind speed using authentic NASA data. The lesson provides detailed procedures, related links and sample graphs, follow-up questions, extensions, and teacher notes. Designed for student use, MY NASA DATA LAS samples micro datasets from large scientific data archives, and provides structured investigations engaging students in exploration of real data to answer real world questions.

This activity engages students in reading a bar graph using authentic NASA data. Students will identify major parts of bar graphs and make a generalization based on their interpretation of the graphed data. The lesson provides detailed procedures, related links and sample graphs, follow-up questions, extensions, and teacher notes. Designed for student use, MY NASA DATA LAS samples micro datasets from large scientific data archives, and provides structured investigations engaging students in exploration of real data to answer real world questions.

This collection of 103 individual sets of math problems derives from images and data generated by NASA remote sensing technology. Whether used as a challenge activity, enrichment activity and/or a formative assessment, the problems allow students to engage in authentic applications of math. Each set consists of one page of math problems (one to six problems per page) and an accompanying answer key. Based on complexity, the problem sets are designated for two grade level groups: 6-8 and 9-12. Also included is an introduction to remote sensing, a matrix aligning the problem sets to specific math topics, and four problems for beginners (grades 3-5).

This is a collection of mathematical problems about transits in the solar system. Learners can work problems created to be authentic glimpses of modern science and engineering issues, often involving actual research data.

This activity provides a visual example of convection in fluids. Students will record their predictions and observations on diagrams of the experimental set-up showing convection currents. Materials required include hot and cold colored water, thermometers, stopwatch, and index cards. This activity is part of the MY NASA DATA Scientist Tracking Network unit, designed to provide practice in accessing and using authentic satellite data.

This self-paced, interactive tutorial enables learners to identify and measure iceberg size from remotely-sensed satellite images. Two techniques are explored: the geometric shape method, which provides a rapid rough estimate of area; and the pixel count method, which employs special software to measure the size more accurately. This resource is part of the tutorial series, Satellite Observations in Science Education, and is the second of three modules in the tutorial, Hunting Icebergs. (Note: requires Java plug-in)

This self-paced, interactive tutorial guides learners through the decision-making process in locating data that will enable the identification of tabular icebergs, including: selecting the appropriate satellite orbit, and identifying the optimal solar and infrared wavelength values to discriminate between water and ice in remotely-sensed images. This resource is part of the tutorial series, Satellite Observations in Science Education, and is the first of three modules in the tutorial, Hunting Icebergs. (Note: requires Java plug-in)

This is an activity about measurement. Learners will label key points and features on a rectangular equal-area map and measure the distance between pairs of points in order to calculate the actual physical distance on the Sun that the point pairs represent. This is Activity 5 of the Space Weather Forecast curriculum.
{"url":"http://nasawavelength.org/resource-search?facetSort=1&topicsSubjects=Mathematics&materialsCost=Free&learningTime=30+to+45+minutes","timestamp":"2014-04-18T17:21:31Z","content_type":null,"content_length":"70705","record_id":"<urn:uuid:5e5ca6c2-ce25-42ee-a493-28aaa389f91f>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimization and L'Hospital's rule October 25th 2009, 05:14 PM #1 Oct 2009

1) $y = \lim_{x\to\infty} x^{\frac{\ln(7)}{1+\ln(x)}}$

$\ln(y) = \lim_{x\to\infty} \frac{\ln(7)\ln(x)}{1+\ln x}$

Applying L'Hospital's rule to the right side gives me:

$\ln(y) = \lim_{x\to\infty} \frac{\frac{\ln(x)}{7}+\frac{\ln(7)}{x}}{\frac{1}{x}} => \frac{\infty + 0}{0}$

Since the top goes towards a positive number (infinity) and the bottom goes to 0, $\ln(y) = \infty => y = e^{\infty}$. Is my logic wrong?

2) A cone-shaped paper drinking cup is to be made to hold 33 cm^3 of water. Find the height and radius of the cup that will use the smallest amount of paper. (Give your answers correct to two decimal places.)

$V = 33 = \frac{1}{3} \pi r^2 h$

$SA = \pi r s + \pi r^2$ where $s = \sqrt{h^2+r^2}$

$SA = \pi r\sqrt{h^2+r^2}+ \pi r^2$

$h = \frac{99}{\pi r^2}$

$=> SA = \pi r \sqrt{\left(\frac{99}{\pi r^2}\right)^2 + r^2}+ \pi r^2$

Am I starting this off the right way? Because if I am, this looks disgusting. I was trying to write the surface area formula just in terms of r, and then, after finding the minimizing r, I could plug it back into the volume formula to find the minimum h. Am I wrong in my approach?

October 25th 2009, 05:56 PM #2

You don't need L.H:

$\ln(y) = \lim_{x\to\infty} \frac{\ln(7) \ln(x)}{1+\ln(x)}= \lim_{x\to\infty} \frac{\ln(7)}{1+\frac{1}{\ln(x)}}= \ln(7)$

Your start seems fine on the cone, except I don't think you need the $\pi r^2$ term; this would put a top on the cone cup.
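For anyone wanting to sanity-check the cone answer numerically, here is a small sketch. It drops the $\pi r^2$ term as the reply suggests, eliminates h via the volume constraint, and minimizes the lateral area by ternary search (the search routine is my own, not from the thread; lateral area is unimodal in r, so ternary search applies):

```python
import math

V = 33.0  # required volume in cm^3

def lateral_area(r):
    """Lateral surface area pi*r*sqrt(h^2 + r^2) of an open cone of
    radius r, with h chosen so the cone holds volume V."""
    h = 3.0 * V / (math.pi * r * r)
    return math.pi * r * math.sqrt(h * h + r * r)

# Ternary search: the area decreases, then increases, as r grows.
lo, hi = 0.1, 10.0
for _ in range(200):
    r1 = lo + (hi - lo) / 3.0
    r2 = hi - (hi - lo) / 3.0
    if lateral_area(r1) < lateral_area(r2):
        hi = r2
    else:
        lo = r1
r = (lo + hi) / 2.0
h = 3.0 * V / (math.pi * r * r)
print(round(r, 2), round(h, 2))  # ~2.81 and ~3.98
```

The numbers agree with the calculus: setting the derivative of the squared area $9V^2/r^2 + \pi^2 r^4$ to zero gives $r^6 = 9V^2/(2\pi^2)$, and at the optimum $h = r\sqrt{2}$.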
{"url":"http://mathhelpforum.com/calculus/110460-optimization-l-hospital-s-rule.html","timestamp":"2014-04-19T15:43:49Z","content_type":null,"content_length":"39857","record_id":"<urn:uuid:158e0bcf-2167-45a7-b03c-35207712915e>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
A measure of the change in aggregate production caused by changes in government taxes. The tax multiplier is the negative marginal propensity to consume divided by one minus the slope of the aggregate expenditures line. The simple tax multiplier includes ONLY induced consumption. More complex tax multipliers include other induced components. Two related multipliers are the expenditures multiplier, which measures the change in aggregate production caused by changes in an autonomous aggregate expenditure, and the balanced-budget multiplier, which measures the change in aggregate production from equal changes in both taxes and government purchases. The tax multiplier measures the change in aggregate production triggered by an autonomous change in government taxes. This multiplier is useful in the analysis of fiscal policy changes in taxes. The tax multiplier differs from the expenditures multiplier based on how the autonomous change affects aggregate expenditures. The tax multiplier reflects the fact that a given autonomous change in taxes does NOT result in an equal change in aggregate expenditures. Taxes change disposable income, which causes changes in both consumption expenditures and saving. And only consumption expenditures affect aggregate expenditures. The expenditures multiplier, however, reflects the fact that a given autonomous change in an expenditure results in an equal change in aggregate expenditures. The tax multiplier is actually a family of multipliers that differ based on which components of the Keynesian model are assumed to be induced by aggregate production and income. The simple tax multiplier, as the name suggests, is the simplest variation and includes only induced consumption. Every other component -- investment expenditures, government purchases, taxes, exports, and imports -- is assumed to be autonomous.
More complex tax multipliers include different combinations of induced components, ranging all of the way up to the "complete" tax multiplier that realistically includes all induced components. Induced consumption, investment, and government purchases all increase the value of the expenditures multiplier. Induced taxes and imports both decrease the value of the expenditures multiplier. The Simple Tax Multiplier The simple tax multiplier is the ratio of the change in aggregate production to an autonomous change in government taxes when consumption is the only induced expenditure. Autonomous tax changes trigger the multiplier process and induced consumption provides the cumulatively reinforcing interaction between consumption, aggregate production, factor payments, and income. The formula for this simple tax multiplier, (m[tax]), is: m[tax] = - MPC x 1/MPS = - MPC/MPS Where MPC is the marginal propensity to consume and MPS is the marginal propensity to save. This formula is almost identical to that for the simple expenditures multiplier. The only difference is the inclusion of the negative marginal propensity to consume (- MPC). If, for example, the MPC is 0.75 (and the MPS is 0.25), then an autonomous $1 trillion change in taxes results in an opposite change in aggregate production of $3 trillion. Two Differences The key feature of the simple tax multiplier that differentiates it from the simple expenditures multiplier is how taxes affect aggregate expenditures. In particular, taxes do not affect aggregate expenditures directly (as do government purchases or investment expenditures). They affect aggregate expenditures indirectly through disposable income and consumption. This gives rise to two important differences compared to the simple expenditures multiplier. • First, a change in taxes causes an opposite change in the disposable income of the household sector. An increase in taxes decreases disposable income and a decrease in taxes increases disposable income.
This is why the simple tax multiplier has a negative value. • Second, the household sector reacts to the change in disposable income caused by the change in taxes by changing both consumption and saving. How much consumption changes is based on the MPC. The MPC means that for each one dollar change in taxes, consumption and thus aggregate expenditures change by only a fraction. The fraction is equal to the MPC. The reason, of course, is that taxes affect income and income is divided between consumption and saving. Suppose, for example, that the government sector reduces taxes by $1 trillion with the goal of stimulating aggregate production and warding off a business-cycle contraction. This tax reduction increases disposable income by $1 trillion. The household sector spends part and saves part of this income. The division between consumption and saving is based on the marginal propensities to consume and save. If the marginal propensity to consume is 0.75, then consumption increases by $750 billion. This $750 billion change in consumption then triggers the multiplier process much like that for an autonomous change in investment expenditures. The difference, however, is that the full $1 trillion change in investment triggers the multiplier process, but only 75 percent of the change in taxes works its way into the multiplier. More Complex Tax Multipliers The simple tax multiplier assumes that consumption is the only induced component. In the real world, however, consumption is not the only induced expenditure. Investment, government purchases, taxes, and net exports (through imports) are also induced. A more complete, more realistic, and more complex multiplier includes these induced components. Here is the formula for just such a multiplier, which can be labeled m[tax-all]: m[tax-all] = - MPC / {1 - [MPC + MPI + MPG - (MPC x MPT) - MPM]} This particular multiplier has a number of abbreviations containing the letters "MP."
These are the assorted induced components, with "MP" standing for marginal propensity. In fact, the batch of abbreviations within the brackets "[]" is actually the slope of the aggregate expenditures line. Let's run through the cast of characters in this formula. • MPC is the marginal propensity to consume. • MPI is the marginal propensity to invest. • MPG is the marginal propensity for government purchases. • MPT is the marginal propensity to tax. • MPM is the marginal propensity to import. This complex tax multiplier can be used to determine the change in aggregate production resulting from a change in taxes. This particular tax multiplier formulation includes all induced components. However, several other tax multipliers that include different combinations of these induced components can be identified. One multiplier can include only induced consumption and induced investment. Another can include induced consumption, induced government purchases, and induced taxes. The possibilities are almost endless. Other Multipliers The tax multiplier is one of several Keynesian multipliers. Two other related multipliers are the expenditures multiplier and the balanced-budget multiplier. • Expenditures Multiplier: The expenditures multiplier measures changes in aggregate production caused by changes in an autonomous expenditure. Like the tax multiplier this comes in several varieties, simple and complex, depending on which expenditures and other components are induced by aggregate production and income. It differs from the tax multiplier in that aggregate expenditures change by the full amount of the autonomous change. • Balanced-Budget Multiplier: The balanced-budget multiplier measures the combined impact on aggregate production of equal changes in government purchases and taxes. The simple balanced-budget multiplier has a value equal to one. Two other multipliers arise from the financial, or money, side of the economy. They are the deposit expansion multiplier and the money multiplier.
The deposit expansion multiplier measures the change in bank deposits caused by a change in bank reserves. The money multiplier measures the change in money caused by a change in bank reserves. Both are useful in the analysis of monetary policy. Recommended Citation: TAX MULTIPLIER, AmosWEB Encyclonomic WEB*pedia, http://www.AmosWEB.com, AmosWEB LLC, 2000-2014. [Accessed: April 16, 2014].
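The two formulas in the entry translate directly into code; a minimal sketch (the function names are mine, not from the article):

```python
def simple_tax_multiplier(mpc):
    """Simple tax multiplier: -MPC/MPS, where MPS = 1 - MPC."""
    mps = 1.0 - mpc
    return -mpc / mps

def complete_tax_multiplier(mpc, mpi=0.0, mpg=0.0, mpt=0.0, mpm=0.0):
    """'Complete' tax multiplier: -MPC divided by one minus the slope
    of the aggregate expenditures line, where the slope is
    MPC + MPI + MPG - (MPC x MPT) - MPM."""
    slope = mpc + mpi + mpg - (mpc * mpt) - mpm
    return -mpc / (1.0 - slope)

# The article's example: MPC = 0.75 gives a multiplier of -3, so a
# $1 trillion tax cut raises aggregate production by $3 trillion.
print(simple_tax_multiplier(0.75))  # -3.0
```

With all the extra marginal propensities left at zero, the complete multiplier collapses to the simple one, which is a quick consistency check on the formula.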
{"url":"http://www.amosweb.com/cgi-bin/awb_nav.pl?s=wpd&c=dsp&k=tax+multiplier","timestamp":"2014-04-17T06:46:20Z","content_type":null,"content_length":"47146","record_id":"<urn:uuid:20be9c9d-58ae-4f6b-bcf0-5784cda9c9da>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Let's Join O.R. Forces to Crack the 17x17 Challenge Let’s Join O.R. Forces to Crack the 17×17 Challenge I recently came across this post by bit-player, which refers to this post by William Gasarch, on a very interesting feasibility problem: given an $m \times n$ grid, assign one of $c$ colors to each position on the grid so that no rectangle ends up with the same color in all of its vertices (see the original post for applications of this problem). Many results are known for different values of $m$, $n$, and $c$, but the challenge (which comes with a reward of US$289) is to decide whether a feasible solution exists when $m=n=17$ and $c = 4$. Apparently, some attempts have been made to use Integer Programming (IP) to solve this problem: SAT-solvers and IP-programs have been used but have not worked— however, I don’t think they were tried that seriously. By writing this post, I hope to get enough O.R. people excited to brainstorm together in search of a solution. Given this is a feasibility problem, another method of choice here would be Constraint Programming (more on that later). I decided to begin with the first IP formulation that came to mind. I didn’t expect it to close the deal, but I wanted to see how far it would take me. Here it is: let the binary variable $x_{ijt}$ equal 1 if the cell in row $i$ and column $j$ receives color $t$. There are two groups of constraints. Every cell must have a color: $\sum_{t=1}^c x_{ijt} = 1, \enspace \forall \; i,j$ Every rectangle can have at most 3 vertices with the same color: $x_{ijt} + x_{i+a,jt} + x_{i,j+b,t} + x_{i+a,j+b,t} \leq 3, \enspace \forall \; i,j,a,b,t$ $1 \leq i \leq m-1$, $1 \leq j \leq n-1$, $1 \leq a \leq m-i$, and $1 \leq b \leq n-j$. The objective function does not matter (in theory), but my limited experiments indicate that it’s better to minimize the sum of all $x_{ijt}$ than it is to maximize it. Gasarch also provides this diagram with a rectangle-free subset of size 74 for a 17 x 17 grid. 
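The feasibility condition being modeled, that no axis-aligned rectangle has all four corners the same color, is cheap to verify for any candidate grid, which is handy for checking heuristic or solver output. A minimal sketch (not from the post):

```python
from itertools import combinations

def is_rectangle_free(grid):
    """Return True if no axis-aligned rectangle in the grid has all
    four corner cells the same color. None marks an uncolored cell.
    Runs in O(m^2 * n^2), trivial even for a 17x17 grid."""
    rows, cols = len(grid), len(grid[0])
    for i, j in combinations(range(rows), 2):      # pick two rows
        for p, q in combinations(range(cols), 2):  # pick two columns
            c = grid[i][p]
            if c is not None and c == grid[i][q] == grid[j][p] == grid[j][q]:
                return False
    return True

print(is_rectangle_free([[1, 2], [2, 1]]))  # True
print(is_rectangle_free([[1, 1], [1, 1]]))  # False: the grid itself is one
```

The same loop structure is exactly how the rectangle inequalities of the IP are enumerated, one constraint per (row pair, column pair, color).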
I then added constraints of the form $x_{ij1}=1$ for every cell $(i,j)$ with an “R” in the diagram. This may be too restrictive, since it’s not clear to me whether a feasible solution to the 17 x 17 grid must contain that diagram as a subset. If that’s true, the latter constraints help with symmetry breaking and also substantially reduce the problem size. To reduce some of the color symmetry in the problem, I also arbitrarily chose cell (1,1) to contain color 2, i.e. $x_{112}=1$. Note that this still allows colors 3 and 4 to be swapped. I could have set $x_{123}=1$, but that would mean taking a risk. Finally, for the 17 x 17 grid, it is suspected that each line and each row will have three colors used four times and one color used five times, hence, I also included the following constraints: $4 \leq \sum_{j=1}^n x_{ijt} \leq 5, \enspace \forall \; i,t$ $4 \leq \sum_{i=1}^m x_{ijt} \leq 5, \enspace \forall \; j,t$ Once again, if my understanding is correct, there’s no formal proof that the above constraints are valid. Here’s a ZIMPL model that can be used to generate the .LP files and here’s the 17 x 17 LP. I set CPLEX’s MIP emphasis to 4 (look for hidden feasible solutions) and first ran a 14 x 14 as a warm-up. CPLEX 12.1 finds a feasible solution in under 5 seconds (I substituted 1 for 4 in the color usage constraints above). I stopped the 15 x 15 problem after 12 fruitless hours of search, so 14 x 14 is apparently the largest of the easy instances. I started the 17 x 17 run last Thursday. The initial IP has 1156 variables and 74,620 constraints. Pre-processing reduces that to 642 variables and 15,775 constraints. After approximately 122 hours and 77,000 nodes on a dual-core machine (3.79 GHz), no solution was found. Now it’s time for smarter ideas. Here are a few candidates: 1) Try a column-generation approach. It’s easy to find solutions for 8×8 and 9×9 grids, which will appear as sub-grids of a 17×17 solution. 
So it may be possible to write a set partitioning formulation (with side constraints) that has the 8×8 and 9×9 solutions as variables. 2) Try a meta-heuristic approach (e.g. simulated annealing). This problem is probably one of those for which an almost-feasible solution (like this one) can be very far from a feasible solution. 3) Try constraint programming. I think the main difficulty here will be finding a good variable and value selection heuristic. I started building a Comet model for the problem using the global constraint cardinality; here it is. It still does not include the constraints that fix the colors of the rectangle-free subset provided by Gasarch, but those are very easy to add (see the code that is commented out). Let me know what your thoughts and hunches are!

6 responses to "Let's Join O.R. Forces to Crack the 17×17 Challenge"

1. What about a non-naive distributed GA? Greetings from RS/Brasil =)
2. Sure. Care to elaborate? :-)
3. That's the problem: with my time, by the time I finally finish the code, someone will have solved the problem, hehe! But I'll try to think of a faster way to do that ;)
4. There's quite a lot of symmetry to be broken in this problem. I know that the latest versions of CPLEX have some automatic symmetry breaking, but I wonder if more specialized symmetry breaking could be helpful here.
5. In your formulation, you can add more, stronger inequalities, as follows: Let i < j < k <= m, and p < q < r <= n. If you now add up the inequalities for the 5 squares (i,p-j,q), (i,q-j,r), (j,p-k,q), (j,q-k,r) and (i,p-k,r), all coefficients are multiples of 2, so you can divide by 2 and round down the rhs from 5*3/2 to 7. Of course there are a lot of them, and adding all at once will slow down the LP solving, but they should be helpful in a cutting-plane approach.
6. Brian, doesn't fixing the colors shown in the rectangle-free subset remove all the symmetries from row and column interchange? We would then be left with color symmetries (i.e.
given a feasible solution, we can obtain other feasible solutions by permuting colors). Which symmetries were you referring to? Filed under Challenge, Constraint Programming, Integer Programming, Modeling
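Whichever approach is tried, candidate grids are cheap to verify; a minimal Python feasibility checker (a hypothetical helper of mine, with a grid given as a list of rows of color indices):

```python
from itertools import combinations

def has_monochromatic_rectangle(grid):
    """Return True if some axis-aligned rectangle has all four corners
    the same color, i.e. the coloring violates the challenge condition."""
    n_cols = len(grid[0])
    for r1, r2 in combinations(range(len(grid)), 2):
        for c1, c2 in combinations(range(n_cols), 2):
            if grid[r1][c1] == grid[r1][c2] == grid[r2][c1] == grid[r2][c2]:
                return True
    return False
```

A meta-heuristic, for instance, can use the number of violating rectangles (rather than this boolean) as its objective.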
{"url":"https://orbythebeach.wordpress.com/2009/12/15/lets-join-o-r-forces-to-crack-the-17x17-challenge/","timestamp":"2014-04-19T01:48:27Z","content_type":null,"content_length":"79775","record_id":"<urn:uuid:17934be2-7cdb-45ef-843f-a303d7c6012b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Weekly Challenge 13: Comet Catastrophe
Copyright © University of Cambridge. All rights reserved. Printed from http://nrich.maths.org/

In July 1994 the Comet Shoemaker-Levy 9 struck the planet Jupiter at a speed of around $60\textrm{ km s}^{-1}$. Although it broke up before impact, the core of the original comet was around $5\textrm{ km}$ in diameter. Its estimated density was $0.3 - 0.7\textrm{ g cm}^{-3}$. Imagine that such a comet had struck the earth. How much would it have changed the earth's velocity relative to the sun? (Note that the mass of the earth is about $6\times 10^{24}\textrm{ kg}$.)

Did you know ... ? The mathematics of the orbits of comets and planets and cricket balls is identical and forms a large part of university applied mathematics courses.
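The intended estimate is a straightforward momentum-conservation calculation; a sketch in Python (the mid-range density of 0.5 g cm⁻³ and a head-on, perfectly inelastic impact are my assumptions):

```python
import math

radius = 2500.0        # m, from the 5 km core diameter
density = 500.0        # kg/m^3, mid-range of the 0.3-0.7 g/cm^3 estimate
v_impact = 60e3        # m/s
m_earth = 6e24         # kg

m_comet = density * (4.0 / 3.0) * math.pi * radius ** 3

# Momentum conservation for a perfectly inelastic, head-on impact:
delta_v = m_comet * v_impact / m_earth
# The change is of order 3e-7 m/s: completely negligible next to
# Earth's roughly 30 km/s orbital speed.
```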
{"url":"http://nrich.maths.org/7055/index?nomenu=1","timestamp":"2014-04-17T18:26:20Z","content_type":null,"content_length":"4100","record_id":"<urn:uuid:134b5cd5-d7c4-4f24-be1a-0dcc5b61b649>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
AeroRocket simulation software for rockets and airplanes

AeroRocket Software Products: AeroCFD® | AeroWindTunnel | AeroSpike | Nozzle | AeroIsp | AeroDRAG & Flight Simulation™ | AeroCP | HyperCFD | AeroFinSim | AeroEuler | StarTravel | Wind Tunnel Course | Buy

Latest Publications:
"Potential Vortex Transient Analysis and Experiment", viXra e-print archive, (2014)
"Hydrodynamic Analogue for Curved Space-Time and General Relativity", viXra e-print archive, (2014)

Experimental Rocket Launches: Sprint Experimental Rocket | Sprint Model Rocket

AeroRocket specializes in subsonic, supersonic and hypersonic aerodynamics, Computational Fluid Dynamics (CFD), warp drive physics and aerospace-related software development for rockets, airplanes and gliders. Other services include wind tunnel testing using the AeroRocket-designed and -fabricated subsonic wind tunnel and supersonic blow-down wind tunnels. John Cipolla, Chief Aerodynamicist, AeroRocket and WarpMetrics

Nozzle 3.7 Example: External shock pattern for an overexpanded SSME rocket nozzle. Nozzle 3.7 is a one-dimensional, isentropic, compressible-flow computer program with cross-sectional area variation for the analysis of converging-diverging nozzles. Nozzle internal flow may be entirely subsonic, entirely supersonic, or a combination of subsonic and supersonic including shock waves in the diverging part of the nozzle. Shock waves are clearly identified as vertical red lines on all plots. The cross-sectional shape in the axial direction of the nozzle is specified by selecting from five standard nozzle types or by defining nozzle geometry using the Free-Form nozzle geometry method. Nozzle plots color contours of pressure ratio, temperature ratio, density ratio and Mach number, and has a slider bar that displays real-time values of all nozzle flow properties.
New in this version is the ability to determine shock-angle, jet-angle (plume-angle) and Mach number for axisymmetric and two-dimensional nozzles in the region near the lip for underexpanded and overexpanded flow. The converging-diverging nozzle featured in the new AeroRocket supersonic blow-down wind tunnel was designed using Nozzle 3.7 applying the concept of a normal shock diffuser. Finally, use AeroRocketCAD to generate Nozzle 3.7 and AeroCFD shapes from AutoCAD DXF geometry. More ... AeroCFD 5.2 Example: The Masten Space Systems XA-1.0 vertical takeoff rocket has been modeled using AeroCFD 5.2. AeroCFD is a "true" three-dimensional axisymmetric and two-dimensional CFD program that solves the inviscid Euler equations for subsonic, transonic and supersonic flow using automatic mesh generation and graphical results visualization. AeroCFD provides a maximum of 100 cells in the axial direction, 50 cells in the transverse direction and 10 cells in the circumferential (3-D) or thickness (2-D) direction. The latest version of AeroCFD has increased the number of finite-volumes available for analysis from 18,000 cells to 50,000 cells without increasing run time. Due to its "true" 3-dimensional formulation, AeroCFD provides non-zero lift and non-zero pitching moment for axisymmetric shapes at angle of attack. Model geometry is specified by selecting from a library of standard shapes. Nose sections are defined using one of five basic shapes that include Conical, Ogive, Elliptical, Parabolic and Sears-Haack with power series coefficient. The user has the option for adding up to two constant diameter sections, one variable diameter transition section and one variable diameter boat tail section to complete the library of user-defined shapes. For added flexibility AeroCFD can import up to 1,000 X-R data points for generating axisymmetric and two-dimensional designs that require grid clustering in regions where shock waves dominate the flow. 
Flow fields are displayed using fill-contour plots, line-contour plots and surface distribution plots for pressure coefficient, pressure ratio, temperature ratio, density ratio and Mach number. See how to easily perform high power rocket CFDs and generate multiple fin sets using AeroCFD. See the new AeroCFD demo, which illustrates how simple it is to plot flow fields and determine Cd and Xcp using AeroCFD.

Customer Comment: I am a user of AeroCFD (formerly known as VisualCFD) for our successful Carmack Prize winning 100k' amateur rocket flight. We used your tool to derive a Cd curve for our airframe that was clearly superior to other amateur rocketry tools. Ken Biba, Team AeroPac

SUPERSONIC BLOW-DOWN WIND TUNNEL (1" DIAMETER)

A new supersonic blow-down wind tunnel is available for testing aerodynamic shapes. The new 1" inside-diameter supersonic blow-down wind tunnel, having a test section blockage factor of less than 3%, now joins the successful 1/2" supersonic wind tunnel. The AeroRocket 1" diameter supersonic blow-down wind tunnel performs drag measurements up to Mach 3. In addition, AeroRocket's expertise in the fabrication of miniature wind tunnel models makes possible the measurement of supersonic drag coefficient for designs ranging from simple high power rockets to the very complex HTV-3X and X-30 NASP. Mach number versus time is measured during the blow-down process using a pitot-static pressure probe for measuring the total pressure (Po) and static pressure (Ps) of a compressible fluid, in this case air. Click here to view a QuickTime movie of a 2.5 second segment of a typical blow-down wind tunnel test using the 1" diameter supersonic wind tunnel, where nearly constant Mach 1.6 flow is maintained for approximately 2.5 seconds - an eternity compared to typical shock tube performance.

StarShip VASIMR Analysis: StarTravel performs two-body astrodynamics analyses of spacecraft and satellites knowing burnout velocity and flight-path angle at burnout.
For this purpose StarTravel uses two-body astrodynamics for determining sub-orbital, orbital and interplanetary motion around the Earth and Sun. In addition, StarTravel performs general heliocentric and Hohmann Transfer orbital analyses. New in the latest version of StarTravel is the ability to determine the ballistic trajectory of rockets and missiles launched vertically, horizontally and everything in between. Also, perform a Variable Specific Impulse Magnetoplasma Rocket (VASIMR) analysis or a standard constant specific impulse analysis (nuclear propulsion) by specifying starting exhaust velocity and ending exhaust velocity or constant exhaust velocity for heliocentric flight to planets and stars.
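Returning to the pitot-static measurement mentioned above: for subsonic flow, the Mach number follows from the measured ratio Po/Ps via the textbook isentropic relation p0/p = (1 + (γ-1)/2 · M²)^(γ/(γ-1)). A generic Python sketch (this is standard gas dynamics, not AeroRocket's code; above Mach 1 a bow shock stands ahead of the probe and the Rayleigh pitot formula is needed instead, which this sketch omits):

```python
def mach_from_pitot(p_total, p_static, gamma=1.4):
    """Invert the subsonic isentropic total/static pressure ratio for Mach
    number. Valid only for p_total/p_static below the sonic value."""
    ratio = p_total / p_static
    return (2.0 / (gamma - 1.0) *
            (ratio ** ((gamma - 1.0) / gamma) - 1.0)) ** 0.5
```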
{"url":"http://aerorocket.com/","timestamp":"2014-04-18T20:43:36Z","content_type":null,"content_length":"25786","record_id":"<urn:uuid:ea306453-aa17-4441-9118-abf4b65d8805>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
Applying generalizability theory with R

One of the things I like about my current job is getting to apply generalizability theory. I have to confess that I didn't appreciate G theory at first, but now that I'm analyzing data from performance assessments (i.e., ratings of instructional effectiveness) I think it's great for a number of reasons. I like the concept of universe scores, treating items as a random facet, estimating multiple sources of error, the ability to focus on different objects of measurement (e.g., ratees and raters), and G theory's strong emphasis on finding ways to increase generalizability (i.e., decision studies). And as someone who has far more experience with item response theory (IRT), where conditional standard errors of measurement (CSEMs) are the norm, I'm especially intrigued by the prospect of using G theory to estimate CSEMs. It's pretty obvious from this blog that I like statistical programming with R because it helps me learn methods far better than simply reading about them or using a pre-existing software package. In that vein, I decided to write some functions to conduct G and D studies with R. I've been using synthetic data from Brennan's book and Shavelson and Webb's book to ensure that my results match theirs. The nice thing about using R for G theory is that you can choose functions from many different packages to fully apply G theory. For example, by using lme4 to estimate variance components, arm to extract standard errors of random intercepts, and ggplot2 to create faceted plots, I am able to create "vertical ruler" plots, which are prevalent among practitioners of many-facets Rasch. Such plots make it easy to identify the (statistically significantly) highest and lowest performers as well as the most lenient and stringent raters. The plots below illustrate how I've been implementing G theory with R. The plots were produced from Brennan's synthetic data set number 4, and the dependability coefficients match those from GENOVA.
I've never developed an R package, but given that no one else has so far, I'd be interested in doing so with the code I'm developing.

1 Comment

I use R also for doing G theory analysis. I have the same routine set up using the lmer models and then the arm package. You've added to what I have done by producing some informative plots. Nice job on these!
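The lme4/arm route above estimates the variance components by (RE)ML; for intuition, the classical ANOVA estimators for a one-facet crossed p × i design can be sketched in a few lines (Python rather than R here, with hypothetical data; the formulas are the standard expected-mean-square identities from Brennan):

```python
def g_study(scores):
    """Variance components for a one-facet crossed p x i design.
    scores[p][i] is person p's score on item i.
    Returns (var_p, var_i, var_pi) via the ANOVA estimators:
    var_p = (MS_p - MS_pi)/n_i, var_i = (MS_i - MS_pi)/n_p, var_pi = MS_pi."""
    n_p, n_i = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n_p * n_i)
    p_means = [sum(row) / n_i for row in scores]
    i_means = [sum(scores[p][i] for p in range(n_p)) / n_p for i in range(n_i)]

    ms_p = n_i * sum((m - grand) ** 2 for m in p_means) / (n_p - 1)
    ms_i = n_p * sum((m - grand) ** 2 for m in i_means) / (n_i - 1)
    ss_res = sum((scores[p][i] - p_means[p] - i_means[i] + grand) ** 2
                 for p in range(n_p) for i in range(n_i))
    ms_pi = ss_res / ((n_p - 1) * (n_i - 1))

    return ((ms_p - ms_pi) / n_i, (ms_i - ms_pi) / n_p, ms_pi)
```

A decision study then plugs the components into Eρ² = σ²_p / (σ²_p + σ²_pi / n'_i) for a proposed number of items n'_i, which is how the D-study "what if" questions get answered.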
{"url":"http://blog.lib.umn.edu/moor0554/canoemoore/2012/10/g_theory_r.html","timestamp":"2014-04-19T07:07:34Z","content_type":null,"content_length":"25718","record_id":"<urn:uuid:b8b13b49-a681-4479-bdfe-2a1f92af4c64>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical methods Q - Find Domain where analytic
February 19th 2011, 03:08 PM #1 Jul 2010

Hi, I'm really stuck on this question. Thank you for any help in advance.

i) Find a domain on which the function g1(z) = ln[(z − 4)/(z^2 − 4)] is single-valued and analytic. Provide two alternative constructions: a) take the principal branch cut for ln(z), and b) take the branch cut for ln(z) to be R+.

ii) Find a domain on which the function g2(z) = arcsinh z is single-valued and analytic. Use the principal branch cut for ln(z).
{"url":"http://mathhelpforum.com/differential-geometry/171866-mathematical-methods-q-find-domain-where-analytic.html","timestamp":"2014-04-17T08:14:40Z","content_type":null,"content_length":"29962","record_id":"<urn:uuid:1acbe423-1e83-49f6-af3d-b44ed20eb6e0>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Laurent Series Expansion Re: Laurent Series Expansion Hi clumsy shark; I am unfamiliar with the terminology. To expand in a Laurent series we need the expansion around the poles. These are the zeros of the denominator. I get for them 2 and i and -i. What happens with that ring stuff? That is where I am stuck. Do you want to expand it around 2? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=19478","timestamp":"2014-04-21T12:36:56Z","content_type":null,"content_length":"11028","record_id":"<urn:uuid:604b18f6-1fb7-41dd-84e8-2b776f25ff41>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
First integrals of a 3D incompressible flow

Let $\Omega$ be an unbounded periodic smooth domain of $\mathbb{R}^3$. We are given an incompressible vector field $q:\Omega\subset\mathbb{R}^3\rightarrow \mathbb{R}^3$ (i.e. $\nabla\cdot q\equiv 0$ over $\Omega$) which satisfies $q\cdot \nu =0$ over $\partial \Omega$. By first integrals of $q$, we mean any function $w\in H^1(\Omega)$ such that $w$ is periodic and $q\cdot\nabla w=0$. Obviously, a function $w$ which is a first integral is going to be constant over the streamlines of $q$. These first integrals turn out to be important in studying some qualitative properties of the KPP minimal speed within large drift, etc. Existence of nontrivial first integrals (other than the constant functions) is the question in the 3-dimensional case. One easy situation is the case where we have a shear flow (the streamlines of $q$ are unidirectional). In this particular case, we know that nontrivial first integrals exist. While looking at this kind of problem, one can think of another particular situation which is also interesting: the case where the vector field $q$ vanishes outside a cylindrical component, $V$, and inside, this vector field exhibits an ergodic character. That is, there is a streamline of $q$ which traverses every point in $V$. In such a case, it can be proved that any first integral $w$ must be constant over the ergodic component $V$ (the proof is not straightforward, though). Now that we have introduced all these things:

Q. Do we know, or can we give, an explicit example of an incompressible field which vanishes outside a cylindrical component $V$, satisfies $q\cdot\nu=0$ on $\partial V$, while $V$ is an ergodic component for $q$? Construction of such flows was done in papers of Prof. Pesin, but it involved a lot of complicated steps -- the result led to more than ergodicity (in fact, Pesin constructed Bernoulli flows over compact manifolds of dimension $\geq 3$).
As ergodicity over a component for a vector field is not as sophisticated as the Bernoulli property, one still hopes for an explicit example of an incompressible field which admits a cylindrical ergodic component $V$ and vanishes outside $V$ as well as on $\partial V$.
{"url":"http://mathoverflow.net/questions/133134/first-integrals-of-a-3d-incompressible-flow","timestamp":"2014-04-17T18:36:15Z","content_type":null,"content_length":"48785","record_id":"<urn:uuid:0b7b098b-15ce-4959-b389-b6d36093f871>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
Island Lake Precalculus Tutor

Find an Island Lake Precalculus Tutor

...I am also doing one-on-one tutoring for high school students, and have been for the past three years. After my tutoring, all of my students see a hike in their grades. I am able to help in pre-algebra, algebra, geometry, college algebra, trigonometry and calculus. 12 Subjects: including precalculus, calculus, trigonometry, statistics

...I have a background in peer-tutoring from when I was in school, helping in both Physics and Math. This was my major in college. This is a field I am passionate about and have much background in on a personal, out-of-classroom basis. I received a 26 on this section when I took the exam. 16 Subjects: including precalculus, chemistry, algebra 2, calculus

...When I teach, I usually go to the basics, so that when a student forgets a law or a formula, he/she can navigate through the problem without it. When I teach a person, I usually go through many problems, because I believe the only way to learn mathematics is through practice. I believe solving more problems allows students to learn how to approach different questions. 6 Subjects: including precalculus, geometry, algebra 1, algebra 2

I have helped hundreds of students improve their mathematical thinking skills over the past eight years. I have taught the entire mathematics curriculum at my high school in Chicago, including AP Statistics and AP Calculus, and I have been providing highly personalized mathematics tutoring along th... 11 Subjects: including precalculus, calculus, statistics, geometry

Hi! Thank you for considering my tutoring services. I have a diverse background that makes me well suited to help you with your middle school through college level math classes, as well as physics, mechanical engineering, intro computer science and Microsoft Office products. 17 Subjects: including precalculus, calculus, physics, geometry
{"url":"http://www.purplemath.com/island_lake_precalculus_tutors.php","timestamp":"2014-04-20T15:56:53Z","content_type":null,"content_length":"24365","record_id":"<urn:uuid:c6fed303-6541-4a24-b3e2-96fc695dd4f7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Numoracle Recipes

My plan was to finish a full series on Oracle tools for Basic Statistics before moving on to typical "blog-like" postings - but I realize I'll need at least another month (or two) to get to topics like ANOVA and Multiple Regression. I feel compelled to get a basic feel for a technique/concept before writing up examples of the functions here (which anybody can quickly get from the Oracle SQL Reference Manual). So I am (reluctantly) breaking my rule to sneak in this note about a talk I gave in Nov '07 at the Worcester Polytechnic '07 Colloquium Series - before it loses its relevance. The talk provides the motivation for in-database mining, and the presentation slides offer a good intro to Oracle Data Mining in the 11gR1 Database release. Oracle has touted the benefits of in-database mining for over 4 years now, with steady improvements to the product and an expanding customer base. The recent announcement from market leader SAS on integration with Teradata is a nice vindication of this approach. The details of the integration, whether this engineering effort pans out, and how the market receives the integrated product remain to be seen - but this is good for the industry. SAS still owns the bully pulpit in Analytics, and this move can hopefully lead to greater awareness of the benefits of this approach, and to consolidation in this space. Marcos has promised to write about these developments from his perspective as a scientist/developer - so I will hold off my thoughts in anticipation of that post. Impressions of WPI: a good school with active AI and Database research groups. I had the pleasure of meeting Prof. Ruiz and Prof. Heffernan from the AI research group - they have some interesting projects there - and Prof. Rundentstein and Prof. Mani from the database research group. Check these programs out if you are an aspiring CS student.

In the previous post, we showed how to compute the confidence interval when σ - i.e. the population standard deviation - is known.

Computing the Confidence Interval for μ when σ is Unknown: The solution is to use the sample standard deviation in a variant of the standardized normal statistic z = (X_bar - μ)/(σ/√n).

Student's T Distribution: For a normally distributed population, the Student's t distribution is given by this standardized statistic: t = (X_bar - μ)/(S/√n), with (n - 1) degrees of freedom (df) for the deviations, where S is the sample standard deviation and n is the sample size. Key points:
• The t distribution resembles the bell-shaped z (normal) distribution, but with wider tails than z, with mean zero (the mean of z), and with variance tending to 1 (the variance of z) as df increases. For df > 2, σ² = df/(df - 2).
• The z (normal) distribution deals with one unknown, μ - estimated by the random variable X_bar - while the t distribution deals with two unknowns, μ and σ - estimated by the random variables X_bar and S respectively. So it tacitly handles greater uncertainty in the data.
• As a consequence, the t distribution has wider confidence intervals than z.
• There is a t distribution for each df = 1, 2, …, n.
• For a small sample (n < 30) taken from a normally distributed population, a (1 - α)100% confidence interval for μ when σ is unknown is X_bar ± t[α/2] S/√n. This is the better distribution to use for small samples - with (n - 1) df, and unknown μ and σ.
• But for larger samples (n ≥ 30), and/or with larger df, the t distribution can be approximated by the z distribution, and the (1 - α)100% confidence interval for μ is X_bar ± z[α/2] s/√n (note: using the sample sd itself).

We will pause to understand df in this context (based on Aczel's book). We noted earlier that df acts as a compensating factor - here is how. Assume a population of five numbers: 1, 2, 3, 4, 5. The (known) population mean is μ = (1+2+3+4+5)/5 = 3. Assume we are asked to sample 5 numbers and find the sum of squared deviations (SSD) based on μ:

x | x_bar | deviation | deviation_squared
3 | 3 | 0.0 | 0
2 | 3 | -1.0 | 1
4 | 3 | 1.0 | 1
1 | 3 | -2.0 | 4
2 | 3 | -1.0 | 1
Sum of Squared Deviations = 7

Given the mean, the deviation computation for the 5 random samples effectively retains 5 degrees of freedom. Next, assume we don't know the population mean, and instead are asked to compute the deviations from one chosen number. Our goal is to choose a number that will minimize the deviations. A readily available number is the sample mean, (3+2+4+1+2)/5 = 2.4 - so we will use it:

x | x_bar | deviation | deviation_squared
3 | 2.4 | 0.6 | 0.36
2 | 2.4 | -0.4 | 0.16
4 | 2.4 | 1.6 | 2.56
1 | 2.4 | -1.4 | 1.96
2 | 2.4 | -0.4 | 0.16
Sum of Squared Deviations = 5.2

The use of the sample mean biases the SSD downward from 7 (actual) to 5.2. But given the choice of a mean, the deviations for the same 5 random samples retain df = (5 - 1) = 4 degrees of freedom. Subsequent choices of 2 means - (3+2)/2, (4+1+2)/3 - or 3 means would reduce the SSD further, at the same time reducing the degrees of freedom for the deviations of the 5 random samples: df = (5 - 2), df = (5 - 3), and so on. As an extreme case, if we take the "sample mean" of each sampled number to be the number itself, then we have:

x | x_bar | deviation | deviation_squared
3 | 3 | 0 | 0
2 | 2 | 0 | 0
4 | 4 | 0 | 0
1 | 1 | 0 | 0
2 | 2 | 0 | 0
Sum of Squared Deviations = 0

which reduces the SSD to 0, and the deviation df to (5 - 5) = 0. So in general:
• deviations (and hence the SSD) for a sample of size n taken from a known population mean μ will have df = n
• deviations for a sample of size n taken from the sample mean X_bar will have df = (n - 1)
• deviations for a sample of size n taken from k ≤ n different numbers (typically means of sample points) will have df = n - k.
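The two SSD computations in the worked example are easy to reproduce; a quick Python sketch:

```python
def ssd(sample, center):
    """Sum of squared deviations of the sample about a given center."""
    return sum((x - center) ** 2 for x in sample)

sample = [3, 2, 4, 1, 2]
mu = 3                                 # known mean of the population {1,...,5}
x_bar = sum(sample) / len(sample)      # sample mean, 2.4

ssd_known_mean = ssd(sample, mu)       # 7, with the full n = 5 df
ssd_sample_mean = ssd(sample, x_bar)   # 5.2, biased low, df = n - 1 = 4
```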
Confidence intervals based on the t distribution apply to the narrow case of n < 30 with a normal population; the z distribution covers larger samples. The practical use of this distribution lies more in comparing two populations using the Student's t-test - which requires understanding the concepts of Hypothesis Testing. So I will defer the code for an equivalent confidence_interval() routine based on the t distribution until later.

The key goal of inferential statistics is to make predictions/observations about the population (the whole) as a generalization of observations made on a random sample (the part). In the previous post, we discussed common techniques to derive samples from a population. In this post, we will discuss sampling distributions - a key building block for the practice of statistical inference. These tools help answer questions such as: "What should the sample size be to make a particular inference about this population?" or "100 random water samples along this river show an average of 50 ppm (parts per million) of this toxin, with a standard deviation of 4.5 - how contaminated is the river on average, with a 95% confidence interval?", and so on. The objective in the next few posts is to discuss the use of Oracle SQL statistical functions for various sampling distributions. But if you are a typical DB developer with novice/intermediate knowledge of statistics (like me), spending some time on these foundational concepts may be worthwhile. I am currently using Complete Business Statistics and the Idiot's Guide to Statistics.

• The various numerical measures - such as mean, variance, etc. - when applied to a sample, are called sample statistics or simply statistics.
• When these numerical measures are applied to a population, they are called population parameters or simply parameters.
• An estimator of a population parameter is the sample statistic used to estimate the parameter.
The sample statistic - mean, X_bar - estimates the population mean μ; the sample statistic - variance, S² - estimates the population variance σ².
• When a single numerical value is the estimate, it is called a point estimate. For example, when we sample a population and obtain a value for X_bar - the statistic - we get a specific sample mean, denoted by x_bar (lower-case), which is the estimate for the population mean μ. When the estimate covers a range or an interval, it is called an interval estimate - the unknown population parameter is likely to be found in this interval.
• A sample statistic, such as X_bar, is a random variable; the values of this random variable depend on the values in the random sample from which the statistic is computed; the sample itself depends on the population from which it is drawn. This random variable has a probability distribution, which is called the sampling distribution.
• The principal use of sampling distributions and their related concepts is to help predict how close the estimate is to the population parameter, and with what probability.

Central Limit Theorem: The sample mean X_bar exhibits a unique behavior - regardless of the population distribution (uniform, exponential, or other), in the limit n → ∞ ("n tends to infinity", where n is the sample size), the sampling distribution of X_bar tends to a normal distribution. The rate at which the sampling distribution approaches the normal distribution depends on the population distribution. Now, if the population itself is normally distributed, then X_bar is normally distributed for any sample size. This is the essence of what is called the Central Limit Theorem. Formally, when a population with mean μ and standard deviation σ is sampled, the sampling distribution of the sample mean X_bar will tend to a normal distribution with (the same) mean μ and standard deviation σ[x_bar] = σ/√n, as the sample size n becomes large. "Large" is empirically defined as n ≥ 30. The value σ[x_bar] is called the standard error of the mean.

"Okay... so what is the big deal?" The big deal is that we can now estimate the population mean (regardless of the population's distribution) using the familiar technique (that we saw in an earlier post) for the standard normal distribution. Now, it is not common for the population parameters (such as the standard deviation) to be known. The computations have to be modified to accommodate these unknowns - which brings us to two more concepts associated with sampling distributions.

Degrees of Freedom (df): If we are asked to choose three random numbers a, b and c, we are free to choose any three numbers without any restrictions - in other words, we have 3 degrees of freedom. But if the three numbers are put together in a model a + b + c = 10, then we have just 2 degrees of freedom - the choice of a and b can be arbitrary, but c is constrained to take the specific value that satisfies the model. The use of df appears to be a compensatory mechanism in the computations, specific to the context/situation in which it is applied - so we'll discuss it in the context of the technique presented next.
A typical question will be "Give me the 95% confidence interval for the population mean". Given the confidence level, and the knowledge that the area under the standard normal curve is 1, we can obtain the value of Z from the standard normal table. For example, a 95% confidence level translates to an area of 0.95 symmetrically distributed around the mean, leaving 0.025 as areas on the left and right tails. From the table, Z = -1.96 for P=0.025, and Z = 1.96 for P=(0.025+0.95). So the 95% confidence interval for the population mean, when the population standard deviation is known, is given by μ ± 1.96 σ/√n We'll wrap this section reinforcing some concepts for use later: • In probability-speak, the statement "95% confidence interval for the population mean" implies that "there is a 95% probability that a given confidence interval from a given random sample from the same population will contain the population mean". It does NOT imply a "95% probability that the population mean is a value in the range of the interval". In the figure, sample mean x for a specific sample falls within the interval - based on this, the confidence interval is considered to contain the population mean μ. If x for another sample falls in the tail region, then that confidence interval cannot assert that it contains μ. • The quantity Z σ/√n is called sampling error or margin of error. • The combined area under the curve in the tails (i.e. 1 - 0.95 = 0.05 in the above example) is called level of significance α, and/or error probability. • The area under the curve excluding the tails under the curve in the tails (1 - α) is called confidence coefficient. • The confidence coefficient x 100, expressed as a percentage, is the confidence level. • The Z value that cuts off the area under the right tail (i.e. the area α/2 on the right of the curve, 1.96 in our example) is denoted as z[α/2]. 
• For a large sample (n >= 30), or a sample taken from a normally distributed population, the (1 - α) 100% confidence interval for μ with known σ is X_bar ± z[α/2] σ/√n

Confidence Interval for Population Mean with Known σ: Excel has a CONFIDENCE() function to compute the confidence interval. See a simple equivalent for Oracle SQL below. The function takes in a table name, the column name representing the sampled quantity, the sampling percentage, and a level of significance of 0.05, 0.01, or 0.1 (corresponding to the three popular confidence levels - 95%, 99%, and 90% - respectively), and returns an object that contains the sample mean, the sampling error, and the lower and upper bounds of the interval.

CREATE OR REPLACE TYPE conf_interval_t AS OBJECT (
  pop_mean   NUMBER,
  sample_err NUMBER,
  lower      NUMBER,
  upper      NUMBER);
/

CREATE OR REPLACE FUNCTION confidence_interval (
  table_name     IN VARCHAR2,
  column_name    IN VARCHAR2,
  sample_percent IN NUMBER,
  alpha          IN NUMBER DEFAULT 0.05,
  seed           IN NUMBER DEFAULT NULL)
RETURN conf_interval_t IS
  pop_mean   NUMBER;
  pop_stddev NUMBER;
  sample_sz  NUMBER;
  z          NUMBER;
  err        NUMBER;
  v_stmt     VARCHAR2(32767);
BEGIN
  v_stmt := 'SELECT AVG(' || column_name || '), count(*) ' ||
            'FROM (SELECT * ' ||
            'FROM ' || table_name ||
            ' SAMPLE(' || sample_percent || ')';
  IF (seed IS NOT NULL) THEN
    v_stmt := v_stmt || ' SEED(' || seed || ')';
  END IF;
  v_stmt := v_stmt || ')';
  EXECUTE IMMEDIATE v_stmt INTO pop_mean, sample_sz;
  v_stmt := 'SELECT STDDEV(' || column_name || ') ' ||
            'FROM ' || table_name;
  EXECUTE IMMEDIATE v_stmt INTO pop_stddev;
  IF (alpha = 0.05) THEN
    z := 1.96;
  ELSIF (alpha = 0.01) THEN
    z := 2.57;   -- more precisely, 2.576
  ELSIF (alpha = 0.1) THEN
    z := 1.64;   -- more precisely, 1.645
  END IF;
  err := z * pop_stddev / SQRT(sample_sz);
  RETURN (conf_interval_t(pop_mean, err, (pop_mean - err), (pop_mean + err)));
END confidence_interval;
/

I used this function to find the 90%, 95%, and 99% confidence intervals for the population mean of ORDER_TOTAL in the ORDERS table, with a sampling percentage of 15, and with a seed to enable repeatable runs from the
SQL sampler. Notice how the interval widens and becomes less precise as the confidence level increases. The true population mean is also shown to be contained in each interval.

SQL> select confidence_interval('ORDERS', 'ORDER_TOTAL', 15, 0.1, 3) from dual;
CONFIDENCE_INTERVAL('ORDERS','ORDER_TOTAL',15,0.1,3)(POP_MEAN, SAMPLE_ERR, LOWER, UPPER)
CONF_INTERVAL_T(24310.9188, 21444.6451, 2866.27368, 45755.5638)

SQL> select confidence_interval('ORDERS', 'ORDER_TOTAL', 15, 0.05, 3) from dual;
CONFIDENCE_INTERVAL('ORDERS','ORDER_TOTAL',15,0.05,3)(POP_MEAN, SAMPLE_ERR, LOWER, UPPER)
CONF_INTERVAL_T(24310.9188, 25628.9661, -1318.0473, 49939.8848)

SQL> select confidence_interval('ORDERS', 'ORDER_TOTAL', 15, 0.01, 3) from dual;
CONFIDENCE_INTERVAL('ORDERS','ORDER_TOTAL',15,0.01,3)(POP_MEAN, SAMPLE_ERR, LOWER, UPPER)
CONF_INTERVAL_T(24310.9188, 33605.3279, -9294.4092, 57916.2467)

SQL> select avg(order_total) from orders;

Fine - but how do we find the confidence interval when σ is unknown (which is the norm in practice)? Enter the T (or Student's) Distribution - we will look at this in the next post.

The key goal of inferential statistics is to make predictions/observations about the population (the whole) as a generalization of observations made on a random sample (the part). In this post, we will present tools/techniques in Oracle for sampling data. To ensure high accuracy in the results of a statistical inference (technique), the sample dataset should minimally have these properties:
• the sample must be drawn randomly from the population
• every segment of the population must be adequately and proportionately represented
• the sample should not be biased - a classic example is the non-response bias seen in survey/poll data - where the respondents ignore/refuse to answer a particular question ("Have you smoked marijuana?" in a health questionnaire).
We will use this succinct introduction to sampling techniques (or alternatively, this resource) as a basis to discuss probabilistic (a.k.a. random) sampling with and without replacement, clustered sampling, and stratified sampling in Oracle. The SAMPLE clause in the SQL SELECT statement supports simple random sampling and clustered sampling. Note that the sampling frame here is simply the set of rows returned by the statement - so you can control how many, and which, of the rows are sampled using filters in the WHERE clause. Here are some examples based on the OE.ORDERS table provided with the Sample Schema (connect oe/oe in SQL*Plus to see this table).

Example: Random sampling without replacement: Get random samples from the ORDERS table, with each row having a 15% chance of being in the sample.

SQL> select order_id,order_status,order_total from orders sample (15) seed (3);

  ORDER_ID ORDER_STATUS ORDER_TOTAL
---------- ------------ -----------
      2360            4       990.4
      2361            8    120131.3
      2384            3     29249.1
      2400            2     69286.4
      2401            3       969.2
      2417            5      1926.6
      2423            3     10367.7
16 rows selected.

The SEED clause in the above statement helps with repeatability of results from one run to the next. Next, clustered sampling requires the BLOCK qualifier in the SAMPLE clause, as in

select order_id,order_status,order_total from orders sample block (15) seed (3);

Example: Random sampling with replacement: This can be accomplished with some minimal coding. In the example below, we use a hashing function (ORA_HASH) on the sequence of enumerated (using the ROWNUM pseudo-column) rows to randomize the selection from the table to be sampled. Let's try this: "Get me random samples with replacement with sample size 10 rows from the ORDERS table".
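For intuition before the SQL version, the two sampling modes differ in a single call outside the database. A sketch with a hypothetical stand-in for the 105 ORDER_IDs (the ID range and seed are made up):

```python
import random

random.seed(3)
order_ids = list(range(2354, 2459))   # hypothetical stand-in for 105 ORDER_IDs

# WITH replacement: duplicates are possible, each draw is independent
with_repl = random.choices(order_ids, k=10)

# WITHOUT replacement (what SAMPLE(15) approximates): all draws distinct
without_repl = random.sample(order_ids, k=16)
```

`random.choices` may return the same ID twice; `random.sample` never does - the exact distinction the ORA_HASH trick below has to engineer by hand.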
-- stage a view with row numbers tacked on to the original table
create view orders_view as select rownum rnum, o.* from orders o;

-- create a sequence
create sequence orders_seq10;

-- create a mapping table
create table orders_map(rnum number, rhsh number);

-- the requested sample size is 10
begin
  for i in 1..10 loop
    insert into orders_map (rnum) values (orders_seq10.nextval);
  end loop;
end;
/

-- the complete orders table is the sampling frame
-- mark random sampled entries in the mapping table
update orders_map
   set rhsh = ora_hash(rnum, (select count(*) from orders));

-- use the mapping table and orders_view to create a view with the sampled rows
-- (to drive the selection by the randomized hash values rather than by
--  sequence position, join on m.rhsh instead of m.rnum)
create view orders_sample10_view as
  select o.* from orders_view o, orders_map m where o.rnum = m.rnum;

This is the result of the above code snippet, when run in the OE schema using SQL*Plus:

SQL> select * from orders_sample10_view;

RNUM ORDER_ID ORDER_MO CUSTOMER_ID ORDER_STATUS ORDER_TOTAL SALES_REP_ID
------ ---------- -------- ----------- ------------ ----------- ------------
11 2455 direct 145 7 14087.5 160
12 2379 direct 146 8 17848.2 161
13 2396 direct 147 8 34930 161
14 2406 direct 148 8 2854.2 161
15 2434 direct 149 8 268651.8 161
16 2436 direct 116 8 6394.8 161
17 2446 direct 117 8 103679.3 161
18 2447 direct 101 8 33893.6 161
19 2432 direct 102 10 10523 163
20 2433 direct 103 10 78 163
10 rows selected.

For repeatability of sampled results, change all "create view" statements above to "create table". It also helps if the data to be sampled is persisted in a table, rather than presented to the sampling queries as a view.

Stratified Sampling can also be coded using SQL. But rather than provide an example query that is specific to a given table, we will provide a query generator function here. This is adapted from the SQL/PLSQL code that Oracle Data Miner - the GUI interface for the Oracle Data Mining platform, which provides sampling features - generates for stratified sampling. The function accepts a source and result table name, the name of the stratifying column (a.k.a.
"variable" in statistics, "attribute" in data mining), the sampling size as a percentage of the table size, a scalar comparison operator ('<', '=', '>'), and an indicator to specify if each strata should roughly have the same number of elements. An important note - Oracle Data Miner - the GUI interface for the Oracle Data Mining platform provides sampling features, and generated SQL/PLSQL code that corresponds to the sampling input. This example has been adapted from this generated code. CREATE OR REPLACE TYPE TARGET_VALUES_LIST IS TABLE OF VARCHAR2(32); result_table_name IN VARCHAR2, source_table_name IN VARCHAR2, strat_attr_name IN VARCHAR2, percentage IN NUMBER, op IN VARCHAR2, equal_sized_strata IN BOOLEAN DEFAULT FALSE) v_stmt VARCHAR2(32767); tmp_str VARCHAR2(4000); sample_count PLS_INTEGER; attr_names TARGET_VALUES_LIST; attr_values TARGET_VALUES_LIST; counts VALUE_COUNT_LIST; counts_sampled VALUE_COUNT_LIST; v_minvalue NUMBER; v_stmt := 'SELECT column_name ' || 'FROM user_tab_columns ' || 'WHERE table_name = ' || '''' || UPPER(source_table_name) || ''''; EXECUTE IMMEDIATE v_stmt BULK COLLECT INTO attr_names; v_stmt := 'SELECT /*+ noparallel(t)*/ ' || strat_attr_name || ', count(*), ' || 'ROUND ((count(*) * ' || percentage || ')/100.0) ' || 'FROM ' || source_table_name || ' WHERE ' || strat_attr_name || ' IS NOT NULL ' || 'GROUP BY ' || strat_attr_name; EXECUTE IMMEDIATE v_stmt BULK COLLECT INTO attr_values, counts, counts_sampled; IF (equal_sized_strata = TRUE) THEN FOR i IN counts.FIRST..counts.LAST IF (i = counts.FIRST) THEN v_minvalue := counts(i); ELSIF (counts(i) > 0 AND v_minvalue > counts(i)) THEN v_minvalue := counts(i); END IF; END LOOP; FOR i IN counts.FIRST..counts.LAST counts(i) := v_minvalue; END LOOP; END IF; v_stmt := 'CREATE TABLE ' || result_table_name || ' AS ' || 'SELECT '; FOR i IN attr_names.FIRST..attr_names.LAST IF (i != attr_names.FIRST) THEN v_stmt := v_stmt || ','; END IF; v_stmt := v_stmt || attr_names(i); END LOOP; v_stmt := 
v_stmt || ' FROM (SELECT /*+ no_merge */ t.*, ' ||
            'ROW_NUMBER() OVER ' ||
            '(PARTITION BY ' || strat_attr_name || ' ORDER BY ORA_HASH(ROWNUM)) RNUM ' ||
            'FROM ' || source_table_name || ' t) ' ||
            'WHERE RNUM = 1 OR ';
  FOR i IN attr_values.FIRST..attr_values.LAST LOOP
    IF (i != attr_values.FIRST) THEN
      tmp_str := ' OR ';
    ELSE
      tmp_str := '';
    END IF;
    IF (counts(i) <= 2) THEN
      sample_count := counts(i);
    ELSE
      sample_count := counts_sampled(i);
    END IF;
    tmp_str := tmp_str || '(' || strat_attr_name || ' = ''' || attr_values(i) || '''' ||
               ' AND ORA_HASH(RNUM, (' || counts(i) || ' -1), 12345) ' ||
               op || sample_count || ') ';
    v_stmt := v_stmt || tmp_str;
  END LOOP;
  RETURN v_stmt;
END generate_stratified_sql;
/

Now, cut and paste the above code into a SQL*Plus session (for this example, the Sample Schema OE/OE session), and then invoke the function using the wrapper shown next. The inputs are pretty self-explanatory - we are asking to sample roughly 20% of the ORDERS table, stratified based on the values in the ORDER_STATUS column, and to place the sampled output in the table SAMPLED_ORDERS.
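The scheme the generator implements - partition by the stratifying column, then take a proportional share of each stratum - can be sketched compactly outside the database. The toy rows, the 11 status values and the 20% rate below are hypothetical stand-ins for the OE.ORDERS data, not taken from it:

```python
import random
from collections import defaultdict

random.seed(12345)

# Toy rows: (order_id, order_status) pairs; statuses are the strata
rows = [(oid, str(s % 11)) for s, oid in enumerate(range(2354, 2459))]

def stratified_sample(rows, key_idx, pct):
    # Partition the rows by the stratifying column...
    strata = defaultdict(list)
    for r in rows:
        strata[r[key_idx]].append(r)
    # ...then sample pct% (at least one row) within each stratum
    out = []
    for members in strata.values():
        k = max(1, round(len(members) * pct / 100))
        out.extend(random.sample(members, k))
    return out

sampled = stratified_sample(rows, 1, 20)
```

Every stratum is represented, and each contributes in proportion to its size - the same guarantee the generated ORA_HASH predicates provide row-by-row.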
set serveroutput on

BEGIN
  dbms_output.put_line(generate_stratified_sql(
    result_table_name => 'sampled_orders',
    source_table_name => 'orders',
    strat_attr_name   => 'order_status',
    percentage        => 20,
    op                => ' < '));
END;
/

This returns the following sampling query (output formatted for easy readability):

CREATE TABLE sampled_orders AS
SELECT order_id, order_date, order_mode, customer_id, order_status,
       order_total, sales_rep_id, promotion_id
FROM (SELECT /*+ no_merge */ t.*,
             ROW_NUMBER() OVER
               (PARTITION BY order_status ORDER BY ORA_HASH(ROWNUM)) RNUM
      FROM orders t)
WHERE RNUM = 1
   OR (order_status = '1' AND ORA_HASH(RNUM, (7 -1), 12345) < 1)
   OR (order_status = '6' AND ORA_HASH(RNUM, (9 -1), 12345) < 2)
   OR (order_status = '2' AND ORA_HASH(RNUM, (7 -1), 12345) < 1)
   OR (order_status = '5' AND ORA_HASH(RNUM, (15 -1), 12345) < 3)
   OR (order_status = '4' AND ORA_HASH(RNUM, (12 -1), 12345) < 2)
   OR (order_status = '8' AND ORA_HASH(RNUM, (17 -1), 12345) < 3)
   OR (order_status = '3' AND ORA_HASH(RNUM, (9 -1), 12345) < 2)
   OR (order_status = '7' AND ORA_HASH(RNUM, (3 -1), 12345) < 1)
   OR (order_status = '0' AND ORA_HASH(RNUM, (11 -1), 12345) < 2)
   OR (order_status = '10' AND ORA_HASH(RNUM, (5 -1), 12345) < 1)
   OR (order_status = '9' AND ORA_HASH(RNUM, (10 -1), 12345) < 2);

If you execute this SQL in the same OE session, the resulting table is:

SQL> select * from sampled_orders;

ORDER_ID ORDER_MO CUSTOMER_ID ORDER_STATUS ORDER_TOTAL SALES_REP_ID
---------- -------- ----------- ------------ ----------- ------------
2363 online 144 0 10082.3
2369 online 116 0 11097.4
2403 direct 162 0 220 154
2439 direct 105 1 22150.1 159
2408 direct 166 1 309 158
2444 direct 109 1 77727.2 155
2358 direct 105 2 7826 155
2400 direct 159 2 69286.4 161
2375 online 122 2 103834.4
2450 direct 147 3 1636 159
2423 direct 145 3 10367.7 160
ORDER_ID ORDER_MO CUSTOMER_ID ORDER_STATUS ORDER_TOTAL SALES_REP_ID
---------- -------- ----------- ------------ ----------- ------------
2385 online 147 4 295892
2437 direct 103 4 13550 163
2389 online 151 4 17620
2364 online 145 4 9500
2377 online 141 5 38017.8
2425 direct 147 5 1500.8 163
2394 direct 109 5 21863 158
2457 direct 118 5
21586.2 159
2426 direct 148 6 7200
2410 direct 168 6 45175 156
2427 direct 149 7 9055 163
ORDER_ID ORDER_MO CUSTOMER_ID ORDER_STATUS ORDER_TOTAL SALES_REP_ID
---------- -------- ----------- ------------ ----------- ------------
2446 direct 117 8 103679.3 161
2402 direct 161 8 600 154
2434 direct 149 8 268651.8 161
2447 direct 101 8 33893.6 161
2365 online 146 9 27455.3
2372 online 119 9 16447.2
2359 online 106 9 5543.1
2368 online 149 10 60065
30 rows selected.

SQL> select count(*) from orders;

That is, 30 rows out of 105 (roughly 29%, a bit more than the requested 20%) are sampled, using ORDER_STATUS as the stratifying column. Try this example on your own schemas and tables, and send me some feedback. The maximum length of the string returned from GENERATE_STRATIFIED_SQL() is 32767 characters. Based on interest, I can post another function that can handle SQL strings up to 64K in size. Given the explosive growth in data, sampling continues to remain an active, relevant research area in statistics, data mining, and computer science.

In the previous post, we looked at continuous probability distributions. You can determine whether the data in a given column of a given table follows a particular distribution using routines in the DBMS_STAT_FUNCS package. The example below shows a table with four columns - each pre-loaded with data synthesized to fit a particular continuous distribution. You can substitute your own table name and column name for your experiment. A minor annoyance in the design of this API - most other Oracle packages default to the current session schema, but this one explicitly expects the schema name as input - which, in my case, is DMUSER - please change this to your schema name accordingly.
set echo on;
set serveroutput on;

create table disttab (
  num1 number,
  num2 number,
  num3 number,
  num4 number,
  cat1 varchar2(2));

rem num1: Numbers generated based on normal distribution
rem num2: Numbers generated based on exponential distribution
rem num3: Numbers generated based on weibull distribution
rem num4: Numbers generated based on uniform distribution

insert into disttab values (9.604955, 1.05536, 4.126087, 22.950835, 'AA');
insert into disttab values (13.022139, 1.714142, 4.999804, 32.598089, 'AA');
insert into disttab values (11.572116, 3.697564, 2.81854, 24.552021, 'AA');
insert into disttab values (9.817124, 1.530935, 2.131106, 6.359504, 'AA');
insert into disttab values (10.146569, 3.295829, 1.510639, 25.218639, 'AA');
insert into disttab values (11.280488, 0.721109, 3.145515, 23.672146, 'BB');
insert into disttab values (9.26679, 1.390282, 4.074397, 11.262112, 'BB');
insert into disttab values (14.303472, 1.327971, 2.51907, 22.675373, 'BB');
insert into disttab values (11.686556, 0.225337, 2.941825, 23.582254, 'BB');
insert into disttab values (13.124479, 7.265271, 0.945059, 29.18001, 'BB');
insert into disttab values (8.601027, 7.060104, 6.078573, 14.878128, 'BB');
insert into disttab values (12.241662, 0.257739, 3.395142, 31.148244, 'CC');
insert into disttab values (13.781857, 4.281371, 1.349627, 23.862069, 'CC');
insert into disttab values (7.827007, 1.347487, 5.836949, 10.76229, 'CC');
insert into disttab values (9.106408, 1.253113, 5.116857, 6.594224, 'CC');
insert into disttab values (11.066785, 4.56512, 3.393899, 22.435955, 'CC');
insert into disttab values (10.71079, 2.700015, 1.922642, 7.635145, 'DD');
insert into disttab values (9.13019, 5.199126, 3.763481, 32.061213, 'DD');
insert into disttab values (7.873859, 0.978657, 2.268487, 1.030052, 'DD');
insert into disttab values (7.731724, 2.382977, 2.639425, 5.676622, 'DD');
insert into disttab values (12.828234, 1.867099, 3.99808, 26.000458, 'DD');
insert into disttab values (12.125892,
1.01285, 3.345311, 8.026281, 'DD');
insert into disttab values (9.800528, 5.869301, 3.840932, 29.928523, 'EE');
insert into disttab values (10.605782, 3.145211, 2.13718, 27.398604, 'EE');
insert into disttab values (14.054569, 4.089033, 2.436408, 4.483585, 'EE');
insert into disttab values (8.120606, 2.155303, 1.787835, 19.513588, 'EE');
insert into disttab values (13.093059, 0.220456, 3.456848, 24.855135, 'EE');
insert into disttab values (8.421441, 2.4819, 2.817669, 21.137668, 'FF');
insert into disttab values (11.899697, 2.507618, 3.770983, 4.016285, 'FF');
insert into disttab values (9.601342, 1.12639, 3.21053, 28.643809, 'FF');
insert into disttab values (9.32297, 10.003288, 6.890515, 33.67171, 'FF');
insert into disttab values (6.896019, 10.76641, 3.123496, 29.077463, 'FF');
insert into disttab values (12.542443, 0.228756, 4.081015, 33.542652, 'FF');
insert into disttab values (14.038144, 7.326175, 3.53459, 11.731359, 'FF');

Let us first start with the Normal Distribution. The mean (same as E(x)) and standard deviation are computed from the column num1 and provided as input.

declare
  mean  number;
  stdev number;
  sig   number;
begin
  select avg(num1) into mean from disttab;
  select stddev(num1) into stdev from disttab;
  dbms_output.put_line('NORMAL DISTRIBUTION - SHAPIRO_WILKS');
  dbms_stat_funcs.normal_dist_fit(
    'dmuser', 'disttab', 'num1', 'SHAPIRO_WILKS', mean, stdev, sig);
  dbms_output.put_line('Mean   : ' || round(mean, 4));
  dbms_output.put_line('Stddev : ' || round(stdev, 4));
  dbms_output.put_line('Sig    : ' || to_char(sig,'9.9999'));
end;
/

The output of this procedure looks like this:

W value : .9601472834631918774434537597940068448938
Mean   : 10.7426
Stddev : 2.1094
Sig    : .2456

The high W value returned from the Shapiro-Wilks test (explained here) indicates agreement with the null hypothesis that the data follows a normal distribution. The significance (0.25) is also relatively high for the Shapiro-Wilks test (0.05 being the typical threshold). Next, the Exponential distribution. For the above dataset, lambda - i.e.
the rate of arrival - is provided. (Note that the lambda used here is not simply 1/mean(num3); it is 1/E(x), and we don't know how the expected value was computed.)

declare
  lambda number;
  mu     number;
  sig    number;
begin
  lambda := 0.3194;  -- select 1/avg(num2) into lambda from disttab;
  mu  := NULL;
  sig := NULL;
  dbms_output.put_line('EXPONENTIAL DISTRIBUTION - KOLMOGOROV_SMIRNOV');
  dbms_stat_funcs.exponential_dist_fit(
    'dmuser', 'disttab', 'num3', 'KOLMOGOROV_SMIRNOV', lambda, mu, sig);
  dbms_output.put_line('Lambda : ' || lambda);
  dbms_output.put_line('Mu     : ' || mu);
  dbms_output.put_line('Sig    : ' || to_char(sig,'9.9999'));
end;
/

The output shows:

D value : .0919745379005003387429254774811052604723
Lambda : .3194
Mu : 0
Sig : .9237466160

The low D value (0.09) and high significance (0.92) suggest that the data fits an exponential distribution well. Next, the Weibull Distribution. The alpha input parameter to the procedure corresponds to η - the scale - and mu to γ - the location - in the Weibull probability density function discussed earlier.

declare
  alpha number;
  mu    number;
  beta  number;
  sig   number;
begin
  alpha := 3;
  mu    := 0;
  beta  := 4;
  sig   := NULL;
  dbms_output.put_line('WEIBULL DISTRIBUTION - KOLMOGOROV_SMIRNOV');
  dbms_stat_funcs.weibull_dist_fit(
    'dmuser', 'disttab', 'num3', 'KOLMOGOROV_SMIRNOV', alpha, mu, beta, sig);
  dbms_output.put_line('Alpha : ' || alpha);
  dbms_output.put_line('Mu    : ' || mu);
  dbms_output.put_line('Beta  : ' || beta);
  dbms_output.put_line('Sig   : ' || to_char(sig,'9.9999'));
end;
/

The output shows:

D value : .2575286246007637604103723582952313687414
Alpha : 3
Mu : 0
Beta : 4
Sig : .0177026134

The Kolmogorov-Smirnov result here is not emphatic; we will try other tests later. Next, Uniform Distribution.
declare
  A   number;
  B   number;
  sig number;
begin
  A := 1;
  B := 34;
  sig := NULL;
  dbms_output.put_line('UNIFORM DISTRIBUTION - KOLMOGOROV_SMIRNOV');
  dbms_stat_funcs.uniform_dist_fit(
    'dmuser', 'disttab', 'num4', 'CONTINUOUS', 'KOLMOGOROV_SMIRNOV', A, B, sig);
  dbms_output.put_line('A   : ' || A);
  dbms_output.put_line('B   : ' || B);
  dbms_output.put_line('Sig : ' || to_char(sig,'9.9999'));
end;
/

The output shows:

D value : .2083979233511586452762923351158645276292
A : 1
B : 34
Sig : .0904912415

Again, the Kolmogorov-Smirnov result is not emphatic; we will try other tests later. Finally, let us try fitting a small dataset to the Poisson Distribution.

create table pdisttab (num1 number);
insert into pdisttab values(1);
insert into pdisttab values(2);
insert into pdisttab values(3);
insert into pdisttab values(4);
insert into pdisttab values(5);

declare
  mean  number;
  stdev number;
  sig   number;
begin
  mean  := 3.0;
  stdev := 1.58114;
  dbms_output.put_line('POISSON DISTRIBUTION - KOLMOGOROV_SMIRNOV');
  dbms_stat_funcs.poisson_dist_fit(
    'dmuser', 'pdisttab', 'num1', 'KOLMOGOROV_SMIRNOV', mean, sig);
  dbms_output.put_line('Mean   : ' || mean);
  dbms_output.put_line('Stddev : ' || stdev);
  dbms_output.put_line('Sig    : ' || to_char(sig,'9.9999'));
end;
/

The output:

D value : .08391793845975204
Mean : 3
Stddev : 1.58114
Sig : .9999999999

is emphatically in agreement with the null hypothesis that the data fits a Poisson distribution. That was fun, wasn't it? Now, we have introduced several new terms and concepts here (especially if you reviewed the DBMS_STAT_FUNCS docs) - viz. goodness-of-fit testing of whether a given data sample fits a particular distribution based on a null hypothesis, the various test types, the test metrics, the significance output of a test, and various other parameters. In the upcoming posts, we will try to understand these new concepts. I also presume that the various references in the previous post and this one may have been useful to experienced/intermediate statisticians as well.
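The D value these tests print is the one-sample Kolmogorov-Smirnov statistic: the largest gap between the empirical CDF of the data and the hypothesized CDF. The helper below is a plain-Python sketch of that statistic (not Oracle's implementation), checked against a tiny hand-computable case:

```python
def ks_statistic(data, cdf):
    """One-sample Kolmogorov-Smirnov D: the maximum distance between
    the empirical CDF of `data` and the hypothesised CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = cdf(x)
        # compare against the ECDF step both just after and just before x
        d = max(d, i / n - f, f - (i - 1) / n)
    return d

# Evenly spaced points vs. Uniform(0, 1): the only gap is the staircase
# offset below each point, so D = 1/n = 0.25
d = ks_statistic([0.25, 0.5, 0.75, 1.0], lambda x: x)
```

A small D (relative to the critical value for n) means the data is consistent with the hypothesized distribution, which is exactly how the Sig outputs above are read.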
Continuous Probability Distributions

A continuous probability distribution is an (infinitely large) table that lists the continuous variables (outcomes) of an experiment with the relative frequency (a.k.a. probability) of each outcome. Consider a histogram that plots the probability (y axis) that a particular job will get done within a time interval (x axis). As you keep making the interval shorter and more fine-grained, the step-like top of the histogram eventually melds into a curve - called the continuous probability distribution. The total area under this probability curve is 1; the probability that the value of x is between two values a and b is the area under f(x) between a and b; and f(x) >= 0 for all x. For continuous distributions, the probability at any single point in the distribution is 0 - you can compute a non-zero probability only for an interval between two values of the continuous variable x. Oracle SQL provides statistical functions to determine if the values in a given column fit a particular distribution. Before we proceed to the examples, let us look at some of the popularly known distributions.

Normal (Gaussian) Probability Distribution

The most common continuous probability distribution - to the point of being synonymous with the concept - is the Normal Probability Distribution, represented graphically by a bell curve, plotted with the continuous value on the x axis and the probability along the y axis, symmetric about the mean x value, with the two ends tapering off to infinity. The curve has these properties:
• The mean, median and mode are the same
• the distribution is bell-shaped and symmetrical about the mean
• the area under this curve is always = 1
The generic normal distribution can have any mean value and standard deviation. For example, weather info may indicate an annual average rainfall in Boston of 38 inches with standard deviation 3 inches.
The smaller the standard deviation (say, 2 inches instead of 3), the steeper the bell curve about the mean. Now if the mean were to shift to, say, 40, the symmetric bell curve would shift two places to the right too. The probability density function for the normal distribution is given by:

f(x) = (1/(σ√(2π))) e^(-0.5((x - μ)/σ)²)

The Standard Normal Distribution is a special case of the normal distribution, with μ = 0 and σ = 1, as shown below (graph not to scale). The standard z-score is a derivative of the standard normal distribution. It is given by z = (x - μ)/σ. The value of z is then cross-checked against a standard normal table or grid to arrive at the probability of the required interval. Unlike discrete random variables in a discrete probability distribution, continuous variables can have infinite values for a given event - so the probability can be computed only for an interval or range of values. Continuing with the rainfall example, suppose we query the probability that the annual rainfall next year in Boston will be <= 40.5 inches - we compute z = (40.5 - 38)/3 = 0.8333. Stated another way, this means that a rainfall of 40.5 inches is 0.8333 standard deviations away from the mean. From the standard normal table, the probability is 0.7967 - that is, roughly 80%. Microsoft Excel's NORMDIST() function provides this functionality, but I was surprised to find no function in Oracle SQL with equivalent simplicity - I'll file a feature request after some more research. The Oracle OLAP Option provides a NORMAL() function as part of its OLAP DML interface. This Calc-like interface is different from SQL - so we will defer this for later.

Application to Data Mining

A key use of the z-score is as a "normalizing" data transformation for mining applications. Note that this concept is completely unrelated to database normalization.
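Both uses of the z-score - the table lookup and the mining normalization - fit in a few lines of stdlib Python. The rainfall numbers are from the example above; the salary list is a made-up illustration:

```python
from statistics import NormalDist, fmean, pstdev

# Rainfall example: mean 38 in, std dev 3 in -- P(X <= 40.5)
rain = NormalDist(mu=38, sigma=3)
z = (40.5 - 38) / 3        # 0.8333 standard deviations above the mean
p = rain.cdf(40.5)         # ~0.80, matching the table lookup

# z-transform as a mining normalization:
# widely scaled salaries are mapped to mean 0, std dev 1
salaries = [30_000, 55_000, 90_000, 250_000, 2_000_000]
mu, sd = fmean(salaries), pstdev(salaries)
normalized = [(s - mu) / sd for s in salaries]
```

After the transform, a salary column and an age column live on the same scale, so neither dominates a distance-based mining algorithm.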
The stolen car example in a previous post was a simple example of prediction - we used a few categorical attributes, like a car's color and type, to predict whether a car will be stolen. In the business world, the applications are more grown-up and mission-critical. One example is churn prediction - i.e. finding out whether a (say, wireless) customer will stay loyal to the current provider, or move on ("churn") to a competitor (in which case, the current provider could try to entice him/her to stay with appropriate promos). The customer data used for such churn prediction applications contains categorical (e.g. gender, education, occupation) and numerical (e.g. age, salary, fico score, distance of residence from a metro) attributes/columns in a table. The data in these numeric columns will be widely dispersed, across different scales. For example, values within salary can range from tens of thousands to several millions, and two numerical attributes will be on different scales - salary (30K - 2 mil) vs age (1-100). Such disparity in scales, if left untreated, can throw most mining algorithms out of whack - the attributes with the higher range of values will start outweighing those in the lower range during the computation of the prediction. For such algorithms, the numerical data is normalized to a smaller, comparable range, such as [-1, 1] or [0, 1], using the z-transform, to enable uniform handling of numerical data by the algorithm. Min-max and decimal scaling are other data normalization techniques. Here is one primer on mining data transformations. We will discuss mining transformations using the DBMS_DATA_MINING_TRANSFORM package and Oracle SQL in a separate post.

Uniform Distribution

Ever waited outside an airport terminal under a sign that says "Rental-Cars/Long-term Parking - Pickup Every 10 minutes"? Your wait time is an example of uniform distribution - assuming a well-run airport, you arrive at the stop and expect to wait between 5 and at most 15 minutes for your shuttle.
This simplest of continuous distributions has the probability function

f(x) = 1/(b - a), a <= x <= b; f(x) = 0 for all other values of x

and is graphically represented as shown. The probability that a uniformly distributed random variable X will have values in the range x[1] to x[2] is:

P(x[1] <= X <= x[2]) = (x[2] - x[1])/(b - a), a <= x[1] < x[2] <= b.

The mean E(X) = (a + b)/2 and the variance V(X) = (b - a)²/12. To use the shuttle bus example, the probability that the wait time will be 8 to 11 minutes is P(8 <= X <= 11) = (11 - 8)/(15 - 5) = 3/10 = 0.3

Exponential Distribution

Consider that an event occurs with an average frequency (a.k.a. rate) of λ, and that this average frequency is constant. Consider that, from a given point in time, you wait for the event to occur. This waiting time follows an exponential distribution - depicted in the adjoining figure. The probability density function is given by:

f(x) = λ e^(-λx)

where λ is the frequency with which the event occurs - expressed as a particular number of times per time unit. The mean - or, more appropriately, the expected value - of the distribution is E(X) = μ = 1/λ, and the variance is σ² = (1/λ)². Take the same shuttle bus example. If the bus does not stick to any schedule and randomly goes about its business of picking up passengers, then the wait time is exponentially distributed. This sounds a bit weird, but there are several temporal phenomena in nature that exhibit such behavior:
• The time between failures of the ordinary light bulb (which typically just blows out suddenly), or of some electronic component, follows an exponential distribution. The mean time between failures, μ, is an important metric in the parts failure/warranty claims domain
• The time between arrivals, i.e. the inter-arrival time of customers at any check-in counter, is exponentially distributed.
A key property of an exponentially distributed phenomenon is that it is memory-less - best explained with an example.
Suppose you buy a 4-pack of GE lights at Walmart with an MTBF of 7000 hours (~ 10 months). Assume a bulb blows out and you plug in a new one; the time to the next failure of this bulb remains exponentially distributed. Say this second bulb fails, and you change the bulb a month (24x30 = 720 hours) later; the time to the next failure still remains exponentially distributed. The time between failures is independent of when the bulb failed and of the passage of time before it was replaced. The probability functions are best stated in terms of failure (survival). The probability that an item will survive until x units of time, given an MTBF of μ units, can be stated as:

P(X >= x) = e^(-λx), x >= 0

and conversely, the probability of failing before x units of time is given by

P(X <= x) = 1 - e^(-λx), x >= 0

where λ = 1/μ. For example - if Dell claims your laptop fails following an exponential distribution with an MTBF of 60 months, and the warranty period is 90 days (3 months), what percentage of laptops does Dell expect to fail within the warranty period? P(X <= 3 months) = 1 - e^(-(1/60)*3) = 0.049, i.e. about 5% of laptops.

Weibull Distribution

The Weibull distribution is a versatile distribution that can emulate other distributions based on its parameter settings, and is a widely used, important tool for reliability engineering/survival analysis. Extensive coverage of reliability analysis is provided here. I am mentioning this distribution mainly to set up the next post. There are several other popular probability distributions - we will revisit them in the future on a needs basis. For now, let us break off and look at some Oracle code in the next post.

A Probability Distribution is a table/graph that depicts the assignment of probabilities to the assumption of specific values by a given random variable. The following concepts are useful for understanding probability distributions:
• If Event A can occur in p possible ways and Event B can occur in q possible ways, then both A and B can occur in p x q ways.
• The number of different ways that a set of objects can be selected, without regard to order, is called a Combination. The number of combinations of n objects taken r at a time is given by nCr = n! / ((n - r)! r!)
• The number of different ways that a set of objects can be arranged in order is called a Permutation. The number of permutations of n objects taken r at a time is given by nPr = n! / (n - r)!

Here is a PL/SQL code snippet to compute the factorial:

CREATE OR REPLACE FUNCTION factorial(p_n IN NUMBER) RETURN NUMBER IS
BEGIN
  IF p_n IS NULL OR p_n < 0 THEN
    RAISE_APPLICATION_ERROR(-20000, 'Invalid Input Value');
  ELSIF p_n <= 1 THEN
    RETURN 1;
  ELSE
    RETURN factorial(p_n - 1) * p_n;
  END IF;
END factorial;
/

EX> Compute 9!
select factorial(9) from dual;

I was curious to see how far I could push this function - the maximum value of n was 83.7 with NUMBER types, and 84.7 when I changed the input parameter and return type to BINARY_DOUBLE (factorial2 below is that variant):

SQL> select factorial(83.7) from dual;
SQL> select factorial(83.71) from dual;
SQL> select factorial2(84.7) from dual;
SQL> select factorial2(84.71) from dual;

EX> Compute the number of combinations of 9 objects taken 3 at a time.
select factorial(9)/(factorial(9-3) * factorial(3)) from dual;

EX> Compute the number of different ways of arranging 9 objects taken 3 at a time.
select factorial(9)/factorial(9-3) from dual;

Discrete Probability Distributions

• A discrete probability distribution is a table that lists the discrete variables (outcomes) of an experiment with the relative frequency (a.k.a. probability) of each outcome. Example: Tossing a coin two times gives you the combinations (H,H), (H,T), (T,H), (T,T) and hence the following tuples of (#Heads, Frequency, Relative_Frequency): (0, 1, 1/4=0.25), (1, 2, 2/4=0.5), (2, 1, 1/4=0.25). This is the probability distribution for the number of heads after flipping a coin twice.
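The counting examples above are easy to cross-check; Python's `math` module ships the same three operations:

```python
import math

fact_9 = math.factorial(9)   # 9! = 362880
comb_9_3 = math.comb(9, 3)   # 9C3 = 84 unordered selections
perm_9_3 = math.perm(9, 3)   # 9P3 = 504 ordered arrangements
```

The r! relationship between the two counting rules (nPr = nCr * r!) falls out directly, which is why the combination formula divides the permutation count by r!.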
• Mean or Expected value of the discrete probability distribution: μ = ∑[i=1..n] x[i] * P(x[i]). For the coin example, μ = 0 * 0.25 + 1 * 0.5 + 2 * 0.25 = 1
• Variance of the discrete probability distribution: σ² = ∑[i=1..n] (x[i] - μ)² * P(x[i])
• Standard deviation is the square root of the variance

Binomial Probability Distribution

A binomial or Bernoulli experiment is one which consists of a fixed number of trials, each independent of the others, each with only two possible outcomes, and with a fixed probability of success (and hence of failure) in each trial. The Bernoulli process counts the number of successes over a given number of attempts - in other words, the random variable for a Binomial distribution is the number of successes over a given number of attempts.

• The probability of r successes in n trials with probability of success p and probability of failure q = 1 - p is given by P(r, n) = (n! / ((n - r)! r!)) p^r q^(n - r)
• The binomial probability distribution is a table of (r, P(r, n)) which can be subsequently graphed, as discussed in this example

EX> Over the next 7 days, assume a 40% chance of rain and a 60% chance of no rain. The probability that it will rain exactly 2 days over the next 7 days is P(2, 7) = (7! / ((7 - 2)! 2!)) 0.4^2 0.6^(7 - 2), which can be computed using

select factorial(7) * power(0.4,2) * power(0.6,(7-2))/
       (factorial(7-2) * factorial(2)) p_2_7
from dual;

The probability that it will rain at least 6 days over the next 7 days is P(r >= 6) = P(6,7) + P(7,7), computed using

select (factorial(7) * power(0.4,6) * power(0.6,(7-6))/
        (factorial(7-6) * factorial(6))) +
       (factorial(7) * power(0.4,7) * power(0.6,(7-7))/
        (factorial(7-7) * factorial(7))) p_r_ge_6
from dual;

Finally, the probability that it will rain no more than 2 days over the next 7 days is P(r <= 2) = P(0,7) + P(1,7) + P(2,7).

• The mean of a binomial distribution is μ = np
• The variance is σ² = npq, so the standard deviation is σ = sqrt(npq)

Excel has a function BINOMDIST(r, n, p, cumulative).
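The three rain probabilities above can be cross-checked with a few lines of Python (again just a scratchpad; the SQL remains the method of record in this post):

```python
from math import comb

def binom_pmf(r: int, n: int, p: float) -> float:
    """P(exactly r successes in n independent trials)."""
    return comb(n, r) * p**r * (1 - p) ** (n - r)

p = 0.4                                            # chance of rain on any day
print(binom_pmf(2, 7, p))                          # exactly 2 of 7  ≈ 0.2612736
print(binom_pmf(6, 7, p) + binom_pmf(7, 7, p))     # at least 6     ≈ 0.0188416
print(sum(binom_pmf(r, 7, p) for r in range(3)))   # no more than 2 ≈ 0.419904
print(7 * p)                                       # mean np = 2.8
```

As a quick consistency check, summing binom_pmf over r = 0..7 gives 1, as a probability distribution must.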
In BINOMDIST, p is the probability of success; set cumulative=TRUE if you want the probability of r or fewer successes, and cumulative=FALSE if you want exactly r successes. Here is the PL/SQL version:

FUNCTION binomdist(r NUMBER, n NUMBER, p NUMBER,
                   cumulative BOOLEAN DEFAULT FALSE) RETURN NUMBER IS
  ret NUMBER;
  fn  NUMBER;
BEGIN
  ret := 0;
  fn  := factorial(n);
  FOR ri IN REVERSE 0..r LOOP
    ret := ret + (fn * power(p, ri) * power((1-p),(n - ri)))/
                 (factorial(n - ri) * factorial(ri));
    IF NOT cumulative THEN
      EXIT;  -- only the term for exactly r successes
    END IF;
  END LOOP;
  RETURN ret;
END binomdist;

Poisson Probability Distribution

The random variable for a Poisson distribution is the number of occurrences of an event over a measurable metric (time, space). In a Poisson process, the (measured) mean number of occurrences of the event is the same for each interval of measurement, and the number of occurrences in a particular interval is independent of the number of occurrences in other intervals.

• The probability of exactly r occurrences over a given interval is given by P(r) = μ^r * e^(-μ) / r!
• The variance of the Poisson distribution is the same as the (observed) mean.
• A goodness of fit test helps verify if a given dataset fits the Poisson distribution

A simple example of a Poisson process is customer arrival at your favorite coffee shop. Assume you know that an average of 25 customers walk into a Dunkin Donuts every hour; then the likelihood of exactly 31 customers walking in during the next hour is

select power(25,31) * exp(-25)/factorial(31) p_31 from dual;

Just as we saw with the Binomial distribution, the probability that no more than 31 customers will walk into the coffee shop is P(r <= 31) = P(0) + P(1) + .. + P(31). Conversely, the probability that at least 31 customers will walk into the coffee shop is P(r >= 31) = 1 - P(r < 31).
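The coffee-shop numbers can be cross-checked the same way. The Python sketch below works in log space via lgamma, which sidesteps the factorial-overflow limit noted earlier for large r:

```python
from math import exp, lgamma, log

def poisson_pmf(r: int, mu: float) -> float:
    """P(exactly r occurrences); log space avoids huge factorials."""
    return exp(r * log(mu) - mu - lgamma(r + 1))

mu = 25.0                                                 # customers per hour
p_31 = poisson_pmf(31, mu)                                # exactly 31, ~0.037
p_le_31 = sum(poisson_pmf(r, mu) for r in range(32))      # no more than 31
p_ge_31 = 1 - sum(poisson_pmf(r, mu) for r in range(31))  # at least 31
print(p_31, p_le_31, p_ge_31)
```

Note that "no more than 31" and "at least 31" overlap at r = 31, so p_le_31 + p_ge_31 exceeds 1 by exactly p_31.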
Obviously, this leads up to the need for a function similar to POISSON(r, μ, cumulative) in Excel - where cumulative = FALSE indicates computation of exactly r occurrences, and cumulative = TRUE indicates r or fewer.

FUNCTION poissondist(r NUMBER, mu NUMBER,
                     cumulative BOOLEAN DEFAULT FALSE) RETURN NUMBER IS
  ret NUMBER;
BEGIN
  ret := 0;
  FOR ri IN REVERSE 0..r LOOP
    ret := ret + (power(mu, ri) * exp(-mu)/factorial(ri));
    IF NOT cumulative THEN
      EXIT;  -- only the term for exactly r occurrences
    END IF;
  END LOOP;
  RETURN ret;
END poissondist;

Poisson approximation - a Poisson distribution can be used to approximate a Binomial distribution if the number of trials (in the binomial experiment) is >= 20 and the probability of success p is sufficiently small.

In contrast to descriptive statistics, Inferential statistics describes the population based on information gleaned from a sample taken from the population. Fundamental to understanding statistical inference is the concept of probability. An experiment is the process of measuring/observing an activity. An outcome is a particular result of the experiment - outcomes are also called Random Variables. Random variables can be Discrete - when they can assume a countable number of values (e.g. one of six outcomes from rolling a die) - or Continuous - when the variable can assume uncountably infinite values in a given range (time, a person's height). The Sample space is all possible outcomes of the experiment. An event is an outcome of interest. The Probability of Event A occurring is: P(A) = # of possible outcomes in which Event A occurs / Total # of outcomes in the sample space.

Basic properties of probability:
• P(A) = 1 implies Event A will occur with certainty
• P(A) = 0 implies Event A will not occur with certainty
• 0 <= P(A) <= 1
• The sum of all probabilities for events in the sample space must be 1
• All outcomes in the sample space that are not part of Event A form the complement of Event A (named A'). P(A') = 1 - P(A)
• Given two events A and B, P(A) or P(B) - i.e.
the probabilities of each of the events occurring - without knowledge of the other event's occurrence - are called the prior probabilities.
• Given two events A and B, the probability of event A occurring given that event B has occurred - denoted by P(A / B) - is called the conditional probability or posterior probability of Event A given that Event B has occurred. On the flip side, if P(A / B) = P(A), then events A and B are termed independent.
• Given two events A and B, the probability of both A and B occurring at the same time is called the joint probability of A and B. For independent events, P(A and B) = P(A) * P(B); in general, P(A and B) = P(A) * P(B / A).
• Given two events A and B, the probability of either A or B occurring is called the union of events A and B. If events A and B cannot occur at the same time (i.e. are mutually exclusive), then P(A or B) = P(A) + P(B). If events A and B can occur at the same time, i.e. are not mutually exclusive, then P(A or B) = P(A) + P(B) - P(A and B)
• Law of Total Probability: P(A) = P(A / B)P(B) + P(A / B')P(B')
• The Bayes Theorem for probabilities provides the ability to reverse the conditionality of events and compute the outcome: P(A / B) = P(A) * P(B / A) / (P(A) * P(B / A) + P(A') * P(B / A'))

Note that the act of finding the probability for a given event is tantamount to predicting that the event will occur with a given level of certainty or chance - quantified by the probability. This is a good segue to look at a real business problem and its solution based on Bayes theorem. A modern, predictive loan-processing application builds analytical models from millions of historical loan applicant records (training data), and uses these models to predict the credit-worthiness (a k a risk of loan default) of an applicant by classifying the applicant into Low, Medium, High or similar risk categories. In data mining lingo, a new applicant record is now scored based on the model.
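The Bayes theorem formula above deserves a tiny worked example. The numbers below are hypothetical (a 1%-prevalence condition and a test with 99% sensitivity and a 5% false-positive rate - none of this comes from the post), but the mechanics follow the formula exactly:

```python
# Hypothetical illustration (numbers are mine, not from the post):
# A = person has a condition (1% prevalence), B = test comes back positive.
p_a = 0.01              # prior P(A)
p_b_given_a = 0.99      # sensitivity, P(B / A)
p_b_given_not_a = 0.05  # false-positive rate, P(B / A')

# Law of Total Probability: P(B) = P(B / A)P(A) + P(B / A')P(A')
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes Theorem reverses the conditionality:
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))   # 0.1667 - a positive test is far from conclusive
```

Even with a very accurate test, the low prior drags the posterior down to about 1 in 6 - which is exactly the kind of conditional reasoning the classifier below automates at scale.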
At the time of this writing (Aug-Sep 2007), the sub-prime lending woes and their effect on US and world markets are the main story. The trillions lost in this mess are fodder for Quant skeptics/detractors, but as a BusinessWeek cover story ("Not So Smart" - Sep 3 2007) explains, the problem was not analytics per se - the problems were with how various managements (mis)used analytics or (mis)understood their data.

Returning to probability concepts, instead of A and B, the events become A and B[i], i=1..n. The event A (or, more appropriately for this example, the target variable Risk) is a dependent variable that assumes one of a set of discrete values (called classes - low, medium, high) based on predictor variables B[1] through B[n] (age, salary, gender, occupation, and so on). The probability model for this classifier is P(A / B[1],..,B[n]). We just shifted the language from statistics into the realm of data mining/predictive analytics. The full Bayes formulation places no conditional-independence restriction on B[1] through B[n]. Now if n is large, or if each B[i] takes on a large number of values, computing this model becomes intractable. The Naive Bayes probabilistic model greatly simplifies this by making a naive/strong assumption that B[1] through B[n] are conditionally independent - the details are provided here. You can build a Naive Bayes model using the Oracle Data Mining Option, and predict the value for a target variable in new records using SQL Prediction Functions. The following example illustrates the process.

EX> Given a small, synthetic dataset about the attributes of stolen cars, predict if a particular car will be stolen - based on its attributes.

create table stolen_cars(
  id      varchar2(2),
  color   varchar2(10),
  ctype   varchar2(10),
  corigin varchar2(10),
  stolen  varchar2(3));

Table created.
insert into stolen_cars values ('1', 'Red','Sports','Domestic','yes');
insert into stolen_cars values ('2', 'Red','Sports','Domestic','no');
insert into stolen_cars values ('3', 'Red','Sports','Domestic','yes');
insert into stolen_cars values ('4', 'Yellow','Sports','Domestic','no');
insert into stolen_cars values ('5', 'Yellow','Sports','Imported','yes');
insert into stolen_cars values ('6', 'Yellow','SUV','Imported','no');
insert into stolen_cars values ('7', 'Yellow','SUV','Imported','yes');
insert into stolen_cars values ('8', 'Yellow','SUV','Domestic','no');
insert into stolen_cars values ('9', 'Red','SUV','Imported','no');
insert into stolen_cars values ('10', 'Red','Sports','Imported','yes');
commit;

Commit complete.

begin
  dbms_data_mining.create_model(
    model_name          => 'cars',
    mining_function     => dbms_data_mining.classification,
    data_table_name     => 'stolen_cars',
    case_id_column_name => 'id',
    target_column_name  => 'stolen');
end;
/

PL/SQL procedure successfully completed.

create table new_stolen_cars (
  id      varchar2(2),
  color   varchar2(10),
  ctype   varchar2(10),
  corigin varchar2(10));

Table created.

insert into new_stolen_cars values ('1', 'Red','SUV','Domestic');
insert into new_stolen_cars values ('2', 'Yellow','SUV','Domestic');
insert into new_stolen_cars values ('3', 'Yellow','SUV','Imported');
insert into new_stolen_cars values ('4', 'Yellow','Sports','Domestic');
insert into new_stolen_cars values ('5', 'Red','Sports','Domestic');
commit;

Commit complete.

select prediction(cars using *) pred,
       prediction_probability(cars using *) prob
from new_stolen_cars;

-- Results
PRE PROB
--- ----------
no  .75
no  .870967746
no  .75
no  .529411793
yes .666666687

The query scores each row in the new_stolen_cars table, returning the prediction and the certainty of this prediction. This dataset is very small, but a cursory glance at the results indicates that the predictions are consistent with the training data. For example, the model predicts 'No' for a domestic yellow sports car - the one such combination in the training data was not stolen.
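The scores above can be reproduced by hand. The sketch below re-implements the plain frequency-count form of Naive Bayes in Python over the ten training rows; the no-smoothing form is my assumption, but it matches the query output:

```python
# The ten training rows from stolen_cars: (color, ctype, corigin, stolen)
rows = [
    ("Red", "Sports", "Domestic", "yes"), ("Red", "Sports", "Domestic", "no"),
    ("Red", "Sports", "Domestic", "yes"), ("Yellow", "Sports", "Domestic", "no"),
    ("Yellow", "Sports", "Imported", "yes"), ("Yellow", "SUV", "Imported", "no"),
    ("Yellow", "SUV", "Imported", "yes"), ("Yellow", "SUV", "Domestic", "no"),
    ("Red", "SUV", "Imported", "no"), ("Red", "Sports", "Imported", "yes"),
]

def predict(color: str, ctype: str, corigin: str):
    """Naive Bayes with plain frequency counts (no smoothing assumed)."""
    score = {}
    for cls in ("yes", "no"):
        cls_rows = [r for r in rows if r[3] == cls]
        s = len(cls_rows) / len(rows)          # prior P(class)
        for i, val in enumerate((color, ctype, corigin)):
            s *= sum(r[i] == val for r in cls_rows) / len(cls_rows)
        score[cls] = s                         # prior * product of conditionals
    best = max(score, key=score.get)
    return best, score[best] / sum(score.values())  # normalized probability

print(predict("Red", "SUV", "Domestic"))     # predicts 'no' (p ≈ 0.75)
print(predict("Red", "Sports", "Domestic"))  # predicts 'yes' (p ≈ 0.6667)
```

For ('Red','Sports','Domestic'): the 'yes' score is 0.5 x 0.6 x 0.8 x 0.4 = 0.096 and the 'no' score is 0.5 x 0.4 x 0.4 x 0.6 = 0.048, so P(yes) = 0.096/0.144 = 2/3 ≈ .666666687, matching the last row of the result set.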
The model predicts 'Yes' for a domestic red sports car, with > 50% certainty - the training data does support this prediction. You can obtain the details of this model using:

select * from table(dbms_data_mining.get_model_details_nb('cars'));

The SQL output is a collection of objects and may not look pretty at first glance. But once you understand the schema of the output type, you can decipher the output into the following simple format:

-- STOLEN = 'no' (prior probability 0.5)
DM_CONDITIONAL('COLOR', NULL, 'Red', NULL, .4),
DM_CONDITIONAL('COLOR', NULL, 'Yellow', NULL, .6),
DM_CONDITIONAL('CTYPE', NULL, 'SUV', NULL, .6),
DM_CONDITIONAL('CORIGIN', NULL, 'Domestic', NULL, .6),
DM_CONDITIONAL('CORIGIN', NULL, 'Imported', NULL, .4),
DM_CONDITIONAL('CTYPE', NULL, 'Sports', NULL, .4))

-- STOLEN = 'yes' (prior probability 0.5)
DM_CONDITIONAL('COLOR', NULL, 'Red', NULL, .6),
DM_CONDITIONAL('CORIGIN', NULL, 'Imported', NULL, .6),
DM_CONDITIONAL('CORIGIN', NULL, 'Domestic', NULL, .4),
DM_CONDITIONAL('CTYPE', NULL, 'Sports', NULL, .8),
DM_CONDITIONAL('CTYPE', NULL, 'SUV', NULL, .2),
DM_CONDITIONAL('COLOR', NULL, 'Yellow', NULL, .4))

This shows the target variable (STOLEN), its value ('yes', 'no'), the prior probability (0.5), and the conditional probability contributed by each predictor/predictor-value pair towards each target/class value. Such ability to score transactional customer data directly from the database (in other words, to deploy the model right at the source of customer data) with such simplicity is a key Oracle differentiator and competitive advantage over standalone data mining tools. For more on ODM, consult the references provided in this blog.

Descriptive statistics summarizes and displays information about just the observations at hand - i.e. the sample and the population are the same. Using data in the sample schema, we will quickly run through descriptive statistics concepts.

Displaying descriptive statistics

Frequency distribution is a table that organizes a number of values into intervals (classes).
A histogram is the visual equivalent of a frequency distribution - a bar graph that represents the number of observations in each class as the height of each bar.

EX> Compute the frequency distribution of all employee salaries - across 10 classes.

select intvl, count(*) freq
from (select width_bucket(salary,
               (select min(salary) from employees),
               (select max(salary)+1 from employees), 10) intvl
      from HR.employees)
group by intvl
order by intvl;

-- Result from this query
INTVL  FREQ
----- -----
    1    46
    2    10
    3    23
    4    15
    5     8
    6     2
    7     2
   10     1

8 rows selected.

Relative frequency distribution provides the number of observations in each class as a percentage of the total number of observations.

EX> Compute the relative frequency distribution of all employee salaries - across 10 classes.

with Q_freq_dist as
  (select intvl, count(*) freq
   from (select width_bucket(salary,
                  (select min(salary) from employees),
                  (select max(salary)+1 from employees), 10) intvl
         from HR.employees)
   group by intvl
   order by intvl)
select intvl, freq/(select count(*) from employees) rel_freq
from Q_freq_dist;

-- Result from this query
INTVL   REL_FREQ
----- ----------
    1 .429906542
    2 .093457944
    3 .214953271
    4 .140186916
    5 .074766355
    6 .018691589
    7 .018691589
   10 .009345794

8 rows selected.

Cumulative frequency distribution provides the percentage of observations that are less than or equal to the class of interest.

EX> Compute the cumulative frequency distribution of all employee salaries - across 10 classes.
with Q_freq_dist as
  (select intvl, count(*) freq
   from (select width_bucket(salary,
                  (select min(salary) from employees),
                  (select max(salary)+1 from employees), 10) intvl
         from HR.employees)
   group by intvl
   order by intvl),
Q_rel_freq_dist as
  (select intvl, freq/(select count(*) from employees) rel_freq
   from Q_freq_dist)
select intvl,
       sum(rel_freq) over (order by intvl
         rows between unbounded preceding and current row) cum_freq
from Q_rel_freq_dist;

-- Result from this query
INTVL   CUM_FREQ
----- ----------
    1 .429906542
    2 .523364486
    3 .738317757
    4 .878504673
    5 .953271028
    6 .971962617
    7 .990654206
   10 1

8 rows selected.

The width_bucket function creates equi-width histograms. A complementary function, NTILE, helps you bin values into intervals of equal height - i.e. with the same count of values in each bin. This is useful for computing quartiles, quintiles etc.

EX> The following query bins the 107 records/observations into approx 10 equi-height bins.

select htbin, count(*) ht
from (select ntile(10) over (order by salary) htbin from hr.employees)
group by htbin
order by htbin;

10 rows selected.

Anyone who has taken a competitive exam (SAT through GRE/GMAT) should be familiar with the term percentile. Given n data points, the Pth percentile represents the value which resides above P% of the values, and its position is given by percentile position = (n+1) * P/100.

EX> Assume a student took the GRE test in 2004 and received an overall GRE score of 2150 (out of a possible 2400), ranking at the 89th percentile. What does this mean? Based on the information provided here, it means that, of the worldwide total of 408,948 examinees, the student ranked above (408948+1) * 89/100 ~= 363965 examinees; conversely, roughly 408948 - 363965 = 44983 examinees ranked above this student. You can compute percentiles in Oracle SQL using CUME_DIST() as follows.

EX> Rank all sales reps and their managers by their salary percentile.
select job_id, last_name, salary,
       round(cume_dist() over (partition by job_id order by salary) * 100) as pctile
from employees
where job_id like '%SA_%'
order by job_id, pctile desc, salary;

JOB_ID LAST_NAME      SALARY PCTILE
------ -------------- ------ ------
SA_MAN Russell         14000    100
SA_MAN Partners        13500     80
SA_MAN Errazuriz       12000     60
SA_MAN Cambrault       11000     40
SA_MAN Zlotkey         10500     20
SA_REP Ozer            11500    100
SA_REP Abel            11000     97
SA_REP Vishney         10500     93
SA_REP Tucker          10000     90
SA_REP Bloom           10000     90
SA_REP King            10000     90

The partition by job_id enables ranking within each job_id. You can infer things like: Ms. Bloom has 90% of her fellow sales reps earning as much as or less than her (the order by salary clause), Mr. Zlotkey is in the last quintile for managers, and so on.

EX> Suppose a new job candidate names his salary requirement - say 7.5K. You can use the same function - in what Oracle calls its aggregate usage - to find out what percentile this salary would fall in, as follows:

select round(cume_dist(7500) within group (order by salary) * 100, 0) offer_pctile
from employees;

If the new candidate joins the sales force (a.k.a. when his record is entered into this table), his salary will be in the 64th percentile. The PERCENT_RANK() function is similar to the CUME_DIST() function - see the SQL Reference Manual in the Oracle docs for details.

Measures of Central Tendency - Mean, Median, Mode

Mean or Average = sum(all observations) / count of observations. Note the difference between sample mean and population mean for use later.

EX> Compute the average salary of employees in the company.

select min(salary) min_s, max(salary) max_s, round(avg(salary),2) avg_s
from HR.employees;

EX> Compute the average quantity sold of products from all sale transactions.
select prod_id, count(*), sum(quantity_sold) sum_qty, avg(quantity_sold) avg_qty
from SH.sales
group by prod_id
order by prod_id;

We can also compute the mean of grouped data from a frequency distribution using x_bar = (Σ[i=1..m] freq[i] * x[i]) / Σ[i=1..m] freq[i], where m is the number of classes.

EX> Compute the mean of the frequency distribution discussed above.

with Q_freq_dist as (
  select intvl, count(*) freq
  from (select width_bucket(salary,
                 (select min(salary) from employees),
                 (select max(salary)+1 from employees), 10) intvl
        from HR.employees)
  group by intvl
  order by intvl)
select sum(intvl * freq)/sum(freq) from Q_freq_dist;

Median is the value in a dataset for which half the observations have a higher value and the other half a lower value.

EX> Compute the median salary per department.

select department_id, median(salary) from HR.employees group by department_id;

Mode is the observation in the dataset that occurs most frequently.

EX> Compute the mode per department.

select department_id, stats_mode(salary) from HR.employees group by department_id;

Measures of Dispersion

These metrics measure how a set of data values is dispersed around (a k a deviates from) the measure of central tendency (typically the mean). Variance represents the relative distance between the data points and the mean of a dataset; the sample variance is the sum of squared deviations of each data point from the mean, divided by n - 1: s² = (Σ[i=1..n] (x[i] - x_bar)²)/(n - 1). (The population variance σ² divides by n instead.)

EX> Compute the variance, sample variance, and population variance of employee salaries.

select variance(salary), var_samp(salary), var_pop(salary) from HR.employees;

Standard deviation is the square root of variance, and is generally more useful than variance because it is in the units of the original data - rather than their square.

EX> Compute the standard deviation, sample standard deviation, and population standard deviation of employee salaries.
select stddev(salary), stddev_samp(salary), stddev_pop(salary) from HR.employees;

For many commonly occurring datasets, the values tend to cluster around the mean or median, so that the distribution takes on the shape of a symmetrical bell curve. Chebyshev's Theorem states that, for any dataset, at least (1 - 1/k²) x 100% of the values, k > 1, will fall within k standard deviations of the mean. If the distribution is more or less symmetrical about the mean (i.e. closer to a normal distribution), then the sharper empirical rule states that approximately 68% of the data will fall within 1 standard deviation of the mean, 95% of the data within 2 standard deviations of the mean, and 99.7% of the data within 3 standard deviations of the mean.

Skewness is a measure of the unevenness or asymmetry of the distribution. For a perfectly symmetric distribution, the mean is the same as the mode is the same as the median. Generally, for a skewed distribution, the mean lies to one side of the median, which lies to the same side of the mode - the side being right (+ve skew) or left (-ve skew) depending on the skew. Skewness of a given population is computed by Skew = (Σ[i=1..N] ((x[i] - μ)/σ)³)/N, where μ is the population mean and σ is the standard deviation. Kurtosis is the measure of peakedness of the distribution of the population: Kurtosis = (Σ[i=1..N] ((x[i] - μ)/σ)⁴)/N.

EX> Compute the skewness and kurtosis for the employee salary.

-- Skewness
select avg(val3)
from (select power((salary - avg(salary) over ())/
             stddev(salary) over (), 3) val3
      from hr.employees);

-- Kurtosis
select avg(val4)
from (select power((salary - avg(salary) over ())/
             stddev(salary) over (), 4) val4
      from hr.employees);

This brevity is made possible thanks to Oracle SQL analytic functions (avg(salary) over () implies: compute the average over all rows in employees).
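To make the formulas concrete, here is the same computation on a small hypothetical salary list (Python as a scratchpad). One caveat: the formulas above are population versions (divide by N, population σ), whereas Oracle's stddev() is the sample version, so the SQL results will differ slightly on small datasets:

```python
from math import sqrt

def skew_kurtosis(xs):
    """Population skewness and kurtosis, per the formulas above."""
    n = len(xs)
    mu = sum(xs) / n
    sigma = sqrt(sum((x - mu) ** 2 for x in xs) / n)   # population std dev
    z = [(x - mu) / sigma for x in xs]
    skew = sum(v ** 3 for v in z) / n
    kurt = sum(v ** 4 for v in z) / n
    return skew, kurt

# A hypothetical, perfectly symmetric salary list: skew 0, kurtosis 1.7
print(skew_kurtosis([1000, 2000, 3000, 4000, 5000]))
```

For a perfectly symmetric list the skew comes out to 0, exactly as the text predicts for a symmetric distribution.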
Note that you can use the partition and order by clauses inside the () to group data across different employee classes and scope the computation to each group if required. From a performance standpoint, note that the computation intrinsically demands multiple scans (2 at the least) over the employees table (or the tables behind the employees view) - so for a potentially large employees table, the computations may involve the use of sort (i.e. on-disk) memory rather than just the PGA (the database's equivalent of a PC's RAM). For more on such performance implications of SQL, begin your education with Oracle Concepts or the SQL Tuning Guide, or look up the various references provided here.
CONTEST # 1

Welcome to the first-ever Tin's Fins NFL Draft Competition. First Prize is a $75 gift certificate to NFLshop.com. Second Prize is a $25 gift certificate to NFLshop.com. Third Prize is a $15 gift certificate to NFLshop.com.

OK, here's how this will work. All entries MUST be e-mailed to tinshaker@gmail.com with the subject "CONTEST". All entries must be in my inbox by midnight Eastern time on Friday night.

OK, now the actual contest. Every site has got a mock draft contest, so I thought I'd do something a little different. Below are ten questions/challenges. In parentheses are the point values for each question. The fan with the most points wins, the fan with the second most points comes in second, and so on. In the case of a tie for first place, tie-breaker question #11 will be used to break the tie, and the loser of the tie-breaker then becomes the second place finisher. This would push the second placer down to third, etc. The same tie-break formula would be used for a 2nd or 3rd place tie. If there is a three-way tie (or more) and the first tie-break results in two or more people still being tied, then the order in which I received the e-mail entries will be used to number the entrants, and I will then use a random number generator to choose a winner. The only other rule is that Tinshaker can't play.

Here are the questions:

1. Predict the first five picks in correct order. It doesn't matter if a team trades the pick; the number of the pick is what's important. (1 point for each name, 1 point for being the correct number/name combo, and a bonus 3 points for getting all five picks perfect). Note: you may get a couple of free points here.

2. Predict the teams that will draft a QB in the first round. (1 point for each correct team)

3. Predict how many WRs will be drafted in the first round, w/ bonus if you can name them.
(1 point for getting the number right, 1 point for each name you list; however, you will lose a point if you list someone who is not chosen in the first round, ie if you get 5 right and 1 wrong, you will end up with 4 (correct names minus wrong name) + 5 (for getting the number chosen).

4. Predict the first cornerback chosen. (1 point for the name, 1 bonus point if you name the team)

5. Predict the first safety taken in the draft. (1 point for the name, 1 bonus point for the team).

6. Predict the longest clock time between picks 15-20. Which team in that range (including 15 and 20) will take the longest time to announce their choice? (4 points for naming the team. If there are trades in that range, then we will stick to the number rather than the team).

7. Predict all 9 Miami Dolphin draftees. (Since there may be trades, I have to grade this by each pick. 2 points for each player correctly identified, in any order. No matter how many picks we end up with, a bonus of 10 points will be given to each entrant who correctly predicts at least 50% of the players).

8. Predict the top 8 offensive linemen drafted. (1 point for each name, 8 bonus points if you get them in the right order).

9. Will the Dolphins trade up, down, or stick with the #25 pick? (3 points)

10. Predict the position of the first Miami pick, ie DT, WR, CB, etc. All pass-rushers under 265lbs will be considered an OLB for this question. (10 points)

11. Guess how many combined college-career touchdowns were scored by all of the 2009 Miami draftees. (The closest wins; TD passes count, whether by a QB or a halfback option, as do special-teams TDs and defensive TDs). Yes, I know this is hard, but that's why I said 'GUESS'. The best way to do this would be to take your answers from question #7 and add up the TDs for those players. As you can see, it's very unlikely there will be any ties, but #11 is the tie-breaker, so think about it for 5 seconds and make a guess.
If two entrants are tied, and one fails to answer #11, but the other does answer it, the latter would automatically win. Even if he/she put 3000. Any questions should be addressed in the comments section so everyone can see the question/answer. Once again, entrants must e-mail their entries to me by midnight Friday. Good luck! Below is a printable version in pdf format: Contest PDF
MATHEMATICA BOHEMICA, Vol. 132, No. 1, pp. 59-74 (2007)

Properties of a hypothetical exotic complex structure on $\Bbb C P^3$

J. R. Brown

J. Ryan Brown, Georgia College & State University, Department of Mathematics, CBX 017, Milledgeville, GA 31061, USA, e-mail: ryan.brown@gcsu.edu

Abstract: We consider almost-complex structures on $\mathbb {C}\text P^3$ whose total Chern classes differ from that of the standard (integrable) almost-complex structure. E. Thomas established the existence of many such structures. We show that if there exists an "exotic" integrable almost-complex structure, then the resulting complex manifold would have specific Hodge numbers which do not vanish. We also give a necessary condition for the nondegeneration of the Frölicher spectral sequence at the second level.

Keywords: complex structure, projective space, Frölicher spectral sequence, Hodge numbers

Classification (MSC2000): 53C56, 53C15, 58J20, 55T99

© 2007–2010 FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition
Circus Physics: Conservation of Angular Momentum

Website Detail Page, published by the Public Broadcasting Service; technical implementer: the Show of Force Productions.

This video-based resource examines conservation of angular momentum through the motion of an acrobat doing aerial flips. It explores how a tucked position decreases the acrobat's moment of inertia, resulting in increased rotational velocity. To slow his rotation, he extends his legs. Through the acrobat's motion, the video illustrates how angular momentum is conserved for a body in flight. Supplementary materials help students integrate concepts to perform calculations relating to angular momentum and moment of inertia. This resource was developed in conjunction with the PBS series Circus. See Related Materials for a link to the full set of 8 Circus Physics video-based lessons.

Subjects: Classical Mechanics (Rotational Dynamics - Conservation of Angular Momentum); Education Practices (Audio/Visual - Movie/Animation, Multimedia; Technology)
Levels: High School
Resource Types: Instructional Material (Activity, Instructor Guide/Manual)
Appropriate Courses: Conceptual Physics, Algebra-based Physics, AP Physics
Categories: Activity, Assessment, New teachers
Access Rights: Free access
© 2010 Public Broadcasting System
Keywords: angular momentum, angular momentum video, moment of inertia, physics videos, video-based learning
Record Cloner: Metadata instance created November 19, 2013 by Caroline Hall
Record Updated: November 28, 2013 by Bruce Mason

Other Collections:

AAAS Benchmark Alignments (2008 Version)

2. The Nature of Mathematics
2B. Mathematics, Science, and Technology
• 9-12: 2B/H3. Mathematics provides a precise language to describe objects and events and the relationships among them. In addition, mathematics provides tools for solving problems, analyzing data, and making logical arguments.

4. The Physical Setting
4F. Motion
• 9-12: 4F/H8.
Any object maintains a constant speed and direction of motion unless an unbalanced outside force acts on it.

Next Generation Science Standards

Disciplinary Core Ideas (K-12)
Forces and Motion (PS2.A)
• The motion of an object is determined by the sum of the forces acting on it; if the total force on the object is not zero, its motion will change. The greater the mass of the object, the greater the force needed to achieve the same change in motion. For any given object, a larger force causes a larger change in motion. (6-8)
• Momentum is defined for a particular frame of reference; it is the mass times the velocity of the object. (9-12)

Science and Engineering Practices (K-12)
Using Mathematics and Computational Thinking (5-12)
• Mathematical and computational thinking at the 9-12 level builds on K-8 and progresses to using algebraic thinking and analysis, a range of linear and nonlinear functions including trigonometric functions, exponentials and logarithms, and computational tools for statistical analysis to analyze, represent, and model data. Simple computational simulations are created and used based on mathematical models of basic assumptions. (9-12)
□ Use mathematical representations of phenomena or design solutions to support and revise explanations. (9-12)

Common Core State Standards for Mathematics Alignments

Standards for Mathematical Practice (K-12)
MP.2 Reason abstractly and quantitatively.

High School - Algebra (9-12)
Seeing Structure in Expressions (9-12)
• A-SSE.1.a Interpret parts of an expression, such as terms, factors, and coefficients.
Creating Equations (9-12)
• A-CED.4 Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations.

High School - Functions (9-12)
Linear, Quadratic, and Exponential Models (9-12)
• F-LE.1.b Recognize situations in which one quantity changes at a constant rate per unit interval relative to another.
• F-LE.5 Interpret the parameters in a linear or exponential function in terms of a context.

ComPADRE is beta testing Citation Styles!
AIP: Show of Force Productions, Circus Physics: Conservation of Angular Momentum (Public Broadcasting Service, Arlington, 2010), WWW Document, <http://www.pbs.org/opb/circus/classroom/circus-physics/angular-momentum/>.
APA: Circus Physics: Conservation of Angular Momentum. (2010). Retrieved April 16, 2014, from Public Broadcasting Service: http://www.pbs.org/opb/circus/classroom/circus-physics/angular-momentum/
Chicago: Show of Force Productions. Circus Physics: Conservation of Angular Momentum. Arlington: Public Broadcasting Service, 2010. http://www.pbs.org/opb/circus/classroom/circus-physics/angular-momentum/ (accessed 16 April 2014).
MLA: Circus Physics: Conservation of Angular Momentum. Arlington: Public Broadcasting Service, 2010. Show of Force Productions. 16 Apr. 2014 <http://www.pbs.org/opb/circus/classroom/circus-physics/angular-momentum/>.
BibTeX: @misc{ Title = {Circus Physics: Conservation of Angular Momentum}, Publisher = {Public Broadcasting Service}, Volume = {2014}, Number = {16 April 2014}, Year = {2010} }
Refer: %T Circus Physics: Conservation of Angular Momentum %D 2010 %I Public Broadcasting Service %C Arlington %U http://www.pbs.org/opb/circus/classroom/circus-physics/angular-momentum/ %O text/html
EndNote: %0 Electronic Source %D 2010 %T Circus Physics: Conservation of Angular Momentum %I Public Broadcasting Service %V 2014 %N 16 April 2014 %9 text/html %U http://www.pbs.org/opb/circus/classroom/circus-physics/angular-momentum/
Note: ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure.
Please refer to the style manuals in the Citation Source Information area for clarifications.

Citation Source Information
The AIP Style presented is based on information from the AIP Style Manual.
The APA Style presented is based on information from APA Style.org: Electronic References.
The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
The MLA Style presented is based on information from the MLA FAQ.

Circus Physics: Conservation of Angular Momentum — Is Part Of: Circus Physics, a link to the full collection of Circus Physics video resources developed for high school physics (relation by Caroline Hall).