Help with Algebra??

Re: Help with Algebra??

Hey Lovechild and everyone. I can't even do simple algebra. When the time comes, what should I do for negative exponents?? And square roots raised to a power??

Re: Help with Algebra??

Okay, step one. We extend both divisions by one (formed as the divisor of the other division - 5x / 5x is one, as is x / x):

     2 * 5x     3(x-1) * x
    -------- + ------------ = 1
     x * 5x      5x * x

Now we can collect these into one division:

     2 * 5x + 3(x-1) * x
    --------------------- = 1
           5x * x

And clean it up a bit:

     10x + 3x^2 - 3x
    ----------------- = 1
          5x^2

Now multiply both sides by 5x^2:

    10x + 3x^2 - 3x = 5x^2

And divide both sides by x:

    10 + 3x - 3 = 5x

Clean up a bit:

    3x + 7 = 5x

Subtract 3x from both sides:

    7 = 2x

Switch the sides:

    2x = 7

Divide by 2:

         7
    x = --- = 3.5
         2

Re: Help with Algebra??

[quote author=dsantamassino link=board=14;threadid=7355;start=0#msg67853 date=1057954992]
Hey Lovechild and everyone. I can't even do simple algebra. When the time comes, what should I do for negative exponents?? And square roots raised to a power??
[/quote]

One step at a time; let's get the algebra down first, then we can work on the more advanced stuff later.. I would really look around for a tutor if I were you, a few hours of in-person training can do wonders, especially for brushing up on basic stuff like algebra and other common concepts. I'm happy to help, but doing this over the internet is really not that easy.

Re: Help with Algebra??

If you ask me, I'm kind of cheating by writing down what you told me just now. I don't have a clue. Can we try a different and easier method that is less confusing?? Please reply back.

Re: Help with Algebra??

[quote author=dsantamassino link=board=14;threadid=7355;start=0#msg67857 date=1057955748]
If you ask me, I'm kind of cheating by writing down what you told me just now. I don't have a clue. Can we try a different and easier method that is less confusing?? Please reply back.
[/quote]

I'm not much for using another method - I would rather explain to you why this one is good. It's good because it works every time, and it's absolutely straightforward once you get the hang of it; don't worry, it took me a while too. What specifically don't you understand?

Re: Help with Algebra??

Lovechild, would it be easier if you made a long distance call and we did it over the phone??

Re: Help with Algebra??

[quote author=dsantamassino link=board=14;threadid=7355;start=0#msg67859 date=1057955863]
Lovechild, would it be easier if you made a long distance call and we did it over the phone??
[/quote]

You want ME to call you overseas, and fork over the phone bill as well - now you are pushing it, buddy.

Re: Help with Algebra??

Yeah, that's it. What I don't understand is everything. I told you what I did, and I guess I did it wrong.

Re: Help with Algebra??

[quote author=dsantamassino link=board=14;threadid=7355;start=0#msg67861 date=1057956035]
Yeah, that's it. What I don't understand is everything. I told you what I did, and I guess I did it wrong.
[/quote]

Okay, here are the basics. You can extend a division by 1 without changing the value of the division. Here we use that to turn two divisions into one. 1 can be written as, for example:

     1        5x
    ---  or  ----
     1        5x

Now, multiplication of a fraction with another fraction is simple - just multiply directly, like this:

     2     4     2 * 4     8
    --- * --- = ------- = ---
     3     2     3 * 2     6

Now we use this to extend all divisions in the problem by one - since that doesn't alter the value of the total equation, it's perfectly legal.
Now we want to get one division, so we extend by one. Starting from

     1      1
    ---- + ---- = 1
     2x     3x

we extend one division by the divisor of the other - following the prior rule:

     1      1 * 2x
    ---- + ---------
     2x     3x * 2x

Now we do the same to the other division, but here we of course use the other division to pick our constant:

     1 * 3x     1 * 2x
    --------- + ---------
     2x * 3x     3x * 2x

The point here is getting the same thing in this section:

     2x * 3x   <=== HERE

because then they have a common divisor, and thus we are allowed to do this:

     1 * 3x + 1 * 2x
    -----------------
        2x * 3x

This has the same value still. Now we can clean up a bit:

     3x + 2x
    ---------
       6x^2

And now we have a single division with which to continue the calculations.
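(Editorial note, not part of the original thread.) The equation being worked out above is 2/x + 3(x-1)/(5x) = 1. As a quick machine check of the final answer, the following sketch assumes Python with the sympy library installed:

    import sympy as sp

    x = sp.symbols('x')
    equation = sp.Eq(2/x + 3*(x - 1)/(5*x), 1)   # the equation solved step by step above

    print(sp.solve(equation, x))   # [7/2], i.e. x = 3.5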
{"url":"http://www.linuxhomenetworking.com/forums/showthread.php/8992-Help-with-Algebra?p=77669&viewfull=1","timestamp":"2014-04-18T13:39:33Z","content_type":null,"content_length":"61527","record_id":"<urn:uuid:eb1d010a-2434-4317-ae31-85427e16639a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2005 [00655] [Date Index] [Thread Index] [Author Index] Re: Questions regarding MatrixExp, and its usage • To: mathgroup at smc.vnet.net • Subject: [mg63355] Re: [mg63335] Questions regarding MatrixExp, and its usage • From: "Michael Chang" <michael_chang86 at hotmail.com> • Date: Sun, 25 Dec 2005 02:19:33 -0500 (EST) • Sender: owner-wri-mathgroup at wolfram.com Hi Pratik, >From: Pratik Desai <pdesai1 at umbc.edu> To: mathgroup at smc.vnet.net >To: Michael Chang <michael_chang86 at hotmail.com>, >"mathgroup at smc.vnet.net" <mathgroup at smc.vnet.net> >Subject: [mg63355] Re: [mg63335] Questions regarding MatrixExp, and its usage >Date: Sat, 24 Dec 2005 11:49:02 -0500 >Michael Chang wrote: >>Hi Pratik, >>Many thanks for your response and help! >>My only concern is about the usage of MatrixPower -- all of the >>Mathematica online documentation and examples using this function seem to >>indicate that it is only valid for an *integer* power p. >>Since MatrixExp[aMatrix,p] exists (and is unique) for all square "aMatrix" >>values and any *complex* value of "p", I guess that I began wondering >>under what conditions this might be equal to >> MatrixPower[MatrixExp[aMatrix],p] >>? Perhaps mathematically this only holds for *integer* values of p? I >>don't know ... >>Anyways, many thanks again, and Happy Holidays! >>>From: Pratik Desai <pdesai1 at umbc.edu> To: mathgroup at smc.vnet.net >>>To: "michael_chang86 at hotmail.com" <michael_chang86 at hotmail.com> >>>Subject: [mg63355] Re: [mg63335] Questions regarding MatrixExp, and its usage >>>Date: Sat, 24 Dec 2005 09:30:52 -0500 >>>michael_chang86 at hotmail.com wrote: >>>>For any arbitrary (possibly complex-valued) square matrix A, >>>>Mathematica enables the computation of the matrix exponential of A via >>>>In[1]: A={{ some square matrix}}; >>>>In[2]: expA=MatrixExp[A]; >>>>I was therefore wondering if >>>>MatrixExp[A p]==(MatrixExp[A]^p) >>>>where 'p' is an arbitrary complex number, and the '^' operator is my >>>>attempt to denote the matrix power, and *not* an element-by-element >>>>power for each individual matrix entry. Or does such an expression >>>>only hold for real-valued square A matrices? Or am I completely lost >>>>here ...? >>>>As usual, any and all help would be greatly appreciated! >>>How about MatrixPower >>>matx[A_?MatrixQ, p_]=MatrixPower[MatrixExp[A], p] >>>Hope this helps >You will never know unless you try :-) >0.982433\[InvisibleSpace]+3.14159 \[ImaginaryI] >{{-2.670947256395083, 0, 0}, {0, -2.670947256395083, 0}, {0, 0, >{{-2.6709472563950825, 0, 0}, {0, -2.6709472563950825, 0}, {0, 0, >I think in my experience with mathematica if there are some limitation with >a particular function, the documentation always seems to highlight it >somewhere, and I did not see any explicit disclaimers regarding the >limitation for MatrixPower only working with integers. To be perfectly >honest, I don't know why In[33] works perhaps someone else on the forum can >Happy Holidays to you! >PS: I hope you don't mind my posting your reply on the forum >Pratik Desai Many thanks for your help again! :) Here's an example that has me concerned: In[1]: params={theta->Pi^Pi,p->Sqrt[2]}; In[2]: aa=theta {{Cot[theta],Csc[theta]},{-Csc[theta],-Cot[theta]}}; In[3]: test1=Simplify[MatrixExp[aa p]/.params]; In[4]: test2=Simplify[MatrixPower[MatrixExp[aa],p]/.params]; In[5]: N[test1-test2] Out[5]: {{-0.230217 + 0. \[ImaginaryI], -2.06142 + 0. \[ImaginaryI]}, { 2.06142\[InvisibleSpace] + 0. \[ImaginaryI], 1.12075\[InvisibleSpace] + 0. 
\[ImaginaryI]}} So ... assuming that all intermediate calculations are done properly, and that I haven't done anything 'improper', it appears that, in general: MatrixExp[aMatrix p] != MatrixPower[MatrixExp[aMatrix],p] for 'p' an arbitrary real number; it only seems to hold for p an integer ... Does this seem reasonable? I'm somewhat mathematically 'challenged', although perhaps this is 'intuitive' to others ... Happy holidays, and joyeuses fêtes!
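(Editorial addition, not part of the original thread.) The same conclusion can be cross-checked numerically outside Mathematica. The sketch below assumes Python with NumPy and SciPy available and reuses the matrix 'aa' from the post; the mismatch is expected because, for non-integer p, the matrix power is computed from the principal matrix logarithm of MatrixExp[aa], which does not recover aa itself when aa has eigenvalues whose imaginary parts lie outside (-pi, pi]:

    import numpy as np
    from scipy.linalg import expm, fractional_matrix_power

    theta = np.pi ** np.pi
    p = np.sqrt(2.0)

    # the matrix 'aa' from the post: theta {{Cot[theta], Csc[theta]}, {-Csc[theta], -Cot[theta]}}
    aa = theta * np.array([[ 1.0 / np.tan(theta),  1.0 / np.sin(theta)],
                           [-1.0 / np.sin(theta), -1.0 / np.tan(theta)]])

    lhs = expm(aa * p)                           # MatrixExp[aa p]
    rhs = fractional_matrix_power(expm(aa), p)   # MatrixPower[MatrixExp[aa], p]
    print(np.allclose(lhs, rhs))                 # False: the identity fails for this non-integer p

    print(np.allclose(expm(aa * 3), np.linalg.matrix_power(expm(aa), 3)))   # True for integer powers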
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Dec/msg00655.html","timestamp":"2014-04-18T03:10:03Z","content_type":null,"content_length":"39631","record_id":"<urn:uuid:316a5fe8-0f51-4e9f-b343-cc49445da8f4>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
matlab regexprep replace nth occurrence

Matlab documentation states that it is possible to replace the Nth occurrence of the pattern in regexprep. I am failing to see how to implement it, and google is not returning anything useful. Basically, the string I have is :,:,1 and I want to replace the second occurrence of : with an arbitrary number. Based on the documentation, I do not understand how the N option should be used. I have tried 'N',2 or just '2'. Note that the position of the : could be anywhere. I realize there are other ways of doing this other than regexprep, but I don't like having a problem linger. Thanks for the help!

Tags: regex, matlab

Which version do you use? Check help regexprep, maybe it's version dependent?! I use Octave and can't use this special option – Tobas Jul 3 '12 at 19:24

1 Answer (accepted)

The above works.

If you know the format of the string to be fixed, you could for example do: s=':,:,4'; s(3)='9'; no regular expressions involved – Amro Jul 3 '12 at 19:58

Like I said, there are plenty of ways of doing the above task. Your method is one of them. – nicky Jul 3 '12 at 20:03
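For reference (an editorial addition, not part of the original question or answers): the "replace only the N-th match" behaviour is easy to emulate in any language that supports a callback-style regex replacement. A minimal sketch in Python (not MATLAB), using only the standard library:

    import re

    def replace_nth(text, pattern, repl, n):
        """Replace only the n-th (1-based) match of pattern in text."""
        state = {'seen': 0}

        def _maybe_replace(match):
            state['seen'] += 1
            return repl if state['seen'] == n else match.group(0)

        return re.sub(pattern, _maybe_replace, text)

    print(replace_nth(':,:,1', ':', '5', 2))   # -> ':,5,1'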
{"url":"http://stackoverflow.com/questions/11317525/matlab-regexprep-replace-nth-occurence?answertab=active","timestamp":"2014-04-24T14:54:29Z","content_type":null,"content_length":"63981","record_id":"<urn:uuid:0852928a-9b96-4e27-9bed-bea03b37f50f>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Somerdale, NJ ACT Tutor

Find a Somerdale, NJ ACT Tutor

...My degree is from RPI in Theoretical Mathematics, and thus included taking Discrete Mathematics as well as a variety of other proof-writing courses (Number Theory, Graph Theory, and Markov Chains to name a few). I absolutely love writing proofs and thinking problems out abstractly. I favor the S...
58 Subjects: including ACT Math, chemistry, calculus, reading

...If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don't despair! Learning new disciplines keeps me very aware of the struggles all students face.
14 Subjects: including ACT Math, physics, ASVAB, calculus

...I have a bachelor's degree in mathematics from Rutgers University. This included taking classes such as Calculus 1, 2, 3, 4, and Advanced Calculus (where we prove the theorems used in Calculus 1). Also, while at Rutgers, I worked as a math tutor, tutoring students in subjects that included Calculus 1 and 2.
16 Subjects: including ACT Math, English, calculus, physics

...I bring these specialized skills to your desk to solve your writing problems from the most logical angles of attack. You will notice that I ask a lot of questions and think about what I say before I respond to your answers. I will ask you to read what you have written out loud while I read along, so I can absorb that information and discover what needs improvement.
37 Subjects: including ACT Math, reading, writing, English

...My goal is to provide a simple explanation to complex ideas at the student's level to help them master math and science skills and concepts. As a teaching assistant for four years in graduate school and a tutor as an undergraduate, I have tutored various levels of math as well as chemistry. Con...
9 Subjects: including ACT Math, chemistry, algebra 2, geometry
{"url":"http://www.purplemath.com/Somerdale_NJ_ACT_tutors.php","timestamp":"2014-04-17T07:37:25Z","content_type":null,"content_length":"23909","record_id":"<urn:uuid:42dd0cdc-75db-44c2-92e4-3f630c37507e>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Gauss' Lemma and algebraic integers

Gauss' Lemma and algebraic integers

Let a be an algebraic integer and suppose that f(a)=0 for a polynomial f, with f in Q[x] (a polynomial with rational coefficients). Suppose f(x) is irreducible and monic. Show that f(x) is in Z[x].

I feel kind of stupid... I know that Gauss' lemma reduces problems in Q[x] to the integers, but I don't see exactly how to draw this conclusion.

Quote:
Let a be an algebraic integer and suppose that f(a)=0 for a polynomial f, with f in Q[x] (a polynomial with rational coefficients). Suppose f(x) is irreducible and monic. Show that f(x) is in Z[x]. I feel kind of stupid... I know that Gauss' lemma reduces problems in Q[x] to the integers, but I don't see exactly how to draw this conclusion.

Since $f(x)$ is irreducible, every root $a_j$ of $f(x)$ (the conjugates of $a$) is also an algebraic integer. Now recall that we can write $f(x)=x^n - s_1x^{n-1} + s_2x^{n-2} - \cdots + (-1)^n s_n,$ where $s_1=\sum_{j=1}^n a_j, \ s_2 = \sum_{1 \leq i < j \leq n} a_ia_j , \ \cdots , \ s_n=a_1a_2 \cdots a_n.$ Thus each $s_j$ is an algebraic integer. We also have $s_j \in \mathbb{Q}$ and so $s_j \in \mathbb{Z},$ because a rational algebraic integer is an integer. One thing: you should've posted this question in the number theory subforum.

Last edited by NonCommAlg; May 24th 2010 at 04:37 PM.

Quote:
Let a be an algebraic integer and suppose that f(a)=0 for a polynomial f, with f in Q[x] (a polynomial with rational coefficients). Suppose f(x) is irreducible and monic. Show that f(x) is in Z[x]. I feel kind of stupid... I know that Gauss' lemma reduces problems in Q[x] to the integers, but I don't see exactly how to draw this conclusion.

This is another approach, directly from the definition of an algebraic integer. By definition, an element $\alpha \in K$ is called an algebraic integer if $\alpha$ is the root of some monic polynomial with coefficients in $\mathbb{Z}$, where K is an extension field of $\mathbb{Q}$. Let $g(x) \in \mathbb{Z}[x]$ be such a monic polynomial with $g(\alpha)=0$. If $f(\alpha)=0$, then $f(x)=g(x)h(x)$, where $f(x), h(x) \in \mathbb{Q}[x]$ and $g(x) \in \mathbb{Z}[x]$. If f(x) is irreducible in $\mathbb{Q}[x]$, then $g(x) \in \mathbb{Z}[x]$ should also be irreducible in $\mathbb{Q}[x]$ and h(x) should be a unit in $\mathbb{Q}[x]$, i.e., a unit in $\mathbb{Q}$. Further, if f(x) is monic, then f(x)=g(x). Thus $f(x) \in \mathbb{Z}[x]$.

----------(Some additional remarks)-----------

A corollary from the above definition is that the algebraic integers in $\mathbb{Q}$ are the integers $\mathbb{Z}$. For instance, if $\beta$ is an algebraic integer in $\mathbb{Q}$, then the minimal polynomial of $\beta=a/b \in \mathbb{Q}$, where a and b are integers and $b \neq 0$, is $bx - a$. Since $\beta$ is an algebraic integer by hypothesis, $\beta$ should be the root of a monic polynomial with coefficients in $\mathbb{Z}$. Thus b is 1 and $\beta \in \mathbb{Z}$.
It's true that $\beta$ should be the root of some monic polynomial with integer coefficients, but why should $bx-a$ have to be that polynomial? Saying "there exists a monic polynomial with integer coefficients having $\beta$ as a root" is far from saying "all polynomials with integer coefficients having $\beta$ as a root are monic". The second statement is not true!

Quote:
It's true that $\beta$ should be the root of some monic polynomial with integer coefficients, but why should $bx-a$ have to be that polynomial? Saying "there exists a monic polynomial with integer coefficients having $\beta$ as a root" is far from saying "all polynomials with integer coefficients having $\beta$ as a root are monic". The second statement is not true!

I can't for the life of me remember why $\frac ab$ is not an algebraic integer for $b\geq2$. Can anyone say why?

Because of Gauss's lemma: we know that if $f(x) = g(x)h(x) \in \mathbb{Z}[x]$ is monic, and $g,h \in \mathbb{Q}[x]$, then in fact $g,h \in \mathbb{Z}[x]$. So if $f$ has the rational root $\alpha$, we can write $f(x)=(x-\alpha)q(x)$; we have $x-\alpha, q(x) \in \mathbb{Q}[x]$ and therefore $x-\alpha \in \mathbb{Z}[x] \Rightarrow \alpha \in \mathbb{Z}$. (It might not be immediately obvious that $q(x)$ has rational coefficients, but think about it!)

You can see it directly: suppose $r = \frac{a}{b}, \ a \in \mathbb{Z}, \ b \in \mathbb{N}, \ \gcd(a,b)=1,$ is an algebraic integer. Then, by definition, there exists $p(x)=x^n + c_1x^{n-1} + \cdots + c_n \in \mathbb{Z}[x]$ such that $p(r)=0.$ Thus $a^n + bc_1a^{n-1} + \cdots + b^nc_n=b^n p(r)=0$ and so $b \mid a^n,$ which is impossible unless $b=1$, because $\gcd(a,b)=1.$

No, this is true. Irreducibility of f forces the choice of g to be irreducible in Q[x], and g(x) in Z[x] is a monic polynomial satisfying g(a)=0, where a is an algebraic integer. g should always exist for sure in this case, otherwise a is not an algebraic integer.

Quote:
It's true that $\beta$ should be the root of some monic polynomial with integer coefficients, but why should $bx-a$ have to be that polynomial? Saying "there exists a monic polynomial with integer coefficients having $\beta$ as a root" is far from saying "all polynomials with integer coefficients having $\beta$ as a root are monic". The second statement is not true!

Because bx-a is the minimal polynomial of $\beta$, and the minimal polynomial of an algebraic integer has integer coefficients and is monic for sure.

It's true that the minimal polynomial over $\mathbb{Q}$ of an algebraic integer has integer coefficients - but this is a consequence of Gauss's lemma. It's not trivial!
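(Editorial addition, not from the thread.) The corollary discussed above - an algebraic number is an algebraic integer exactly when its monic minimal polynomial over $\mathbb{Q}$ has integer coefficients - is easy to experiment with in a computer algebra system. A small sketch, assuming Python with sympy installed:

    from sympy import Symbol, sqrt, minimal_polynomial

    x = Symbol('x')

    # 1 + sqrt(2) is an algebraic integer: its minimal polynomial is monic with integer coefficients
    print(minimal_polynomial(1 + sqrt(2), x))   # x**2 - 2*x - 1

    # sqrt(2)/2 is algebraic but not an algebraic integer:
    # the primitive integer form of its minimal polynomial is not monic
    print(minimal_polynomial(sqrt(2)/2, x))     # 2*x**2 - 1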
{"url":"http://mathhelpforum.com/advanced-algebra/146273-gauss-lemma-algebraic-integers.html","timestamp":"2014-04-21T14:55:01Z","content_type":null,"content_length":"84869","record_id":"<urn:uuid:b9fea1b9-4ae9-42cf-b3a5-e331c8beccc6>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Ring Theory Question

Hello experts,

Here is the question, and what I did:

Q: Let D be a division ring with char(D) != 2, and let F be the center of D (so F is a field). Given that x in D is not in F but x^2 is in F, prove that there exists y in D such that y*x*y^(-1) = -x, and also that y^2 is in C_D({x}), where C_D({x}) is the centralizer of the set {x} in D.

What I did: I know that x is not in F, so there exists s in D such that sx != xs. Let's call sx - xs = y. There is a y^(-1) because every nonzero element in D is invertible. Then I just tried to plug it into the equation: (sx-xs)*x*(sx-xs)^(-1) => (sx-xs)^(-1) should be 1/(sx-xs), but it gives nothing.

Please tell me how to solve it... I know that I am missing something; please guide me step by step.

Firstly, I can't even tell what this is saying. What is the division of the ring?
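(Editorial addition - the thread ends here without a worked solution.) One standard argument, using only the hypotheses stated in the question, runs as follows. Since $x^2 \in F = Z(D)$, the inner automorphism $\sigma(d) = x d x^{-1}$ satisfies $\sigma^2 = \mathrm{id}$, and $\sigma \neq \mathrm{id}$ because $x \notin F$. Pick $s$ with $sx \neq xs$ and set $y = \frac{1}{2}\left(s - x s x^{-1}\right)$ (this uses $\mathrm{char}(D) \neq 2$). Then $y \neq 0$ and $\sigma(y) = -y$, i.e. $xy = -yx$, so $y x y^{-1} = -x$. Finally, $x y^2 = (xy)y = (-yx)y = -y(xy) = -y(-yx) = y^2 x$, so $y^2$ commutes with $x$, i.e. $y^2 \in C_D(\{x\})$. This is close in spirit to the original attempt: $y$ is built from an element $s$ that fails to commute with $x$.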
{"url":"http://mathhelpforum.com/advanced-algebra/174390-ring-theory-question.html","timestamp":"2014-04-16T11:40:30Z","content_type":null,"content_length":"34853","record_id":"<urn:uuid:b3ce054d-6fbb-44cd-ba06-76513ff345c4>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Select the verbal phrase that corresponds with the given expression. 5(x + y)

Five times the quantity x plus y
The sum of x and y to the fifth power
Five times x plus y
x plus five times y
{"url":"http://openstudy.com/updates/5074961ae4b0b4f7c79c2385","timestamp":"2014-04-18T00:42:53Z","content_type":null,"content_length":"39996","record_id":"<urn:uuid:9c0ae129-4c42-4806-821d-613d4f108631>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
Gauge theories From Scholarpedia Gauge theories refers to a quite general class of quantum field theories used for the description of elementary particles and their interactions. The theories are characterized by the presence of vector fields, and as such are a generalization of the older theory of Quantum Electrodynamics (QED) that is used to describe the electromagnetic interactions of charged elementary particles with spin 1/2. Local gauge invariance is a very central issue. An important feature is that these theories are often renormalizable when used in 3 space- and 1 time dimension. 1. Maxwell's equations and gauge invariance The simplest example of a gauge theory is electrodynamics, as described by the Maxwell equations. The electric field strength \(\vec E(\vec x,t)\) and the magnetic field strength \(\vec B(\vec x,t)\) obey the homogeneous Maxwell equations (in SI units): \[\tag{1} \vec\nabla\times\vec E+{\partial \vec B\over\partial t}=0\] \[\tag{2} \vec\nabla\cdot\vec B=0\ .\] According to Poincaré's Lemma, Eq. (2) implies that there exists another vector field \(\vec A(\vec x,t)\) such that \[\tag{3} \vec B=\vec\nabla\times \vec A \ .\] Since Eq. (1) now reads \[\tag{4} \vec\nabla\times(\vec E+{\partial\vec A\over\partial t})=0\ ,\] we can also conclude that there is a potential field \(\Phi(\vec x,t)\) such that \[\tag{5} \vec E=-\vec\nabla\Phi-{\partial \vec A\over\partial t} \ .\] The field \(\Phi\) is the electric potential field; the vector field \(\vec A\) is called the vector potential field. The strengths of these potential fields are determined by the inhomogeneous Maxwell equations, which are the equations that relate the strengths of the electromagnetic fields to the electric charges and currents that generate these fields. The use of potential fields often simplifies the problem of solving Maxwell's equations. What turns this theory into a gauge theory is the fact that the values of these potential fields are not completely determined by Maxwell's equations. Consider an electromagnetic field configuration \((\vec E(\vec x,t),\,\vec B(\vec x,t))\ ,\) and suppose that it is described by the potential fields \((\Phi(\vec x,t),\,\vec A(\vec x,t))\ .\) Then, using any arbitrary scalar function \(\Lambda(\ vec x,t)\ ,\) one can find a different set of potential fields describing the same electric and magnetic fields, by writing \[\tag{6} \Phi'=\Phi+{\partial\Lambda\over\partial t}\ ,\quad\vec A'=\vec A-\vec\nabla\Lambda \ .\] Inspecting Equations (3) and (5), one easily observes that \(\vec E=\vec E'\) and \(\vec B=\vec B'\ .\) Thus, the set (\(\Phi',\,\vec A'\)) and (\(\Phi,\,\vec A\)) describe the same physical situation. Because of this, we call the transformation (6) a gauge transformation. Since \(\Lambda\) may be chosen to be an arbitrary function of the points \((\vec x,t)\) in space-time, we speak of a local gauge transformation. The fact that the electromagnetic fields are invariant under these local gauge transformations turns Maxwell's theory into a gauge theory. 
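(Illustrative addition, not part of the original article.) The statement that the transformation (6) leaves \(\vec E\) and \(\vec B\) unchanged can be checked symbolically. A minimal sketch, assuming Python with the sympy library:

    import sympy as sp

    x, y, z, t = sp.symbols('x y z t')
    X = (x, y, z)

    Phi = sp.Function('Phi')(*X, t)                           # scalar potential
    Lam = sp.Function('Lambda')(*X, t)                        # arbitrary gauge function
    A   = [sp.Function(f'A{i}')(*X, t) for i in range(3)]     # vector potential components

    grad = lambda f: [sp.diff(f, xi) for xi in X]
    curl = lambda v: [sp.diff(v[(i + 2) % 3], X[(i + 1) % 3])
                      - sp.diff(v[(i + 1) % 3], X[(i + 2) % 3]) for i in range(3)]

    def fields(phi, a):
        E = [-sp.diff(phi, xi) - sp.diff(ai, t) for xi, ai in zip(X, a)]   # Eq. (5)
        B = curl(a)                                                        # Eq. (3)
        return E, B

    # gauge-transformed potentials, Eq. (6)
    Phi_prime = Phi + sp.diff(Lam, t)
    A_prime = [ai - gi for ai, gi in zip(A, grad(Lam))]

    E1, B1 = fields(Phi, A)
    E2, B2 = fields(Phi_prime, A_prime)
    print([sp.simplify(e1 - e2) for e1, e2 in zip(E1, E2)])   # [0, 0, 0]
    print([sp.simplify(b1 - b2) for b1, b2 in zip(B1, B2)])   # [0, 0, 0]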
In relativistic quantum field theory, the field \(\psi(\vec x,t)\) of a non-interacting spinless particle would typically obey the equation \[\tag{7} (\vec\nabla^2-{\partial^2\over\partial t^2})\psi=m^2\psi \ ,\] where units where used such that the velocity of light \(c=1\ ,\) and Planck's constant \(\hbar=1\ .\) This gives the dispersion relation between energy and momentum as dictated by Special Relativity \[\tag{8} E=\sqrt{{\vec p}^{\,2}+m^2} \ ,\] Suppose now that the particle in question carries an electric charge \(q\ .\) How is its equation then affected by the presence of electro-magnetic fields? It turns out that one cannot write the correct equations using the fields \(\vec E\) and \(\vec B\) directly. Here, one can only choose to add terms depending on the (vector) potential fields instead: \[\tag{9} (\vec\nabla-iq\vec A)^2\psi-({\partial\over\partial t}+iq\Phi)^2\psi=m^2\psi\ .\] It can be verified that this equation correctly produces waves that are deflected by the electro-magnetic forces in the way one expects. For instance, the energy \(E\) is easily seen to be enhanced by an amount \(q\,\Phi(\vec x,t)\ ,\) which is the potential energy of a charged particle in an electric potential field. However, what happens to this equation when performing a gauge transformation? It appears as if the equation changes, so that the solution for the field \(\psi\) should change as well. Indeed, \(\psi \) changes in the following way: \[\tag{10} \psi'=e^{-iq\Lambda}\psi \ ,\quad {\partial\psi'\over\partial t}=e^{-iq\Lambda}({\partial\psi\over\partial t}-iq\psi {\partial\Lambda\over\partial t} ) \ .\] Thus, the field \(\psi\) makes a rotation in the complex plane. This is closely related to a 'scale transformation', which would result if one were to remove the 'i' from Eq. (10). It was Hermann Weyl who noted that this symmetry transformation simply redefines the scale of the field \(\psi\ ,\) and introduced the word 'gauge' to describe this feature. The combinations \[\tag{11} \vec D\psi=(\vec\nabla-iq\vec A)\psi \quad,\quad D_t\psi=({\partial\over\partial t}+iq\Phi)\psi \ ,\] are called covariant derivatives, because they are chosen in such a way that the derivatives of the function \(\Lambda(\vec x,t)\) cancel out in a gauge transformation: \[\tag{12} (\vec D\psi)'=e^{-iq\Lambda}(\vec\nabla-iq(\vec\nabla\Lambda)-iq(\vec A-\vec\nabla\Lambda)) \psi = e^{-iq\Lambda} (\vec D\psi)\ ,\] \[\tag{13} ( D_t\psi)'=e^{-iq\Lambda}({\partial\over\partial t}-iq{\partial\Lambda\over\partial t}+iq( \Phi+{\partial\Lambda\over\partial t})) \psi = e^{-iq\Lambda} (D_t\psi)\ ,\] and this makes it easy to see that Equation (10) correctly describes the way \(\psi\) transforms under a local gauge transformation, obeying the same field equation (9) both before and after the transformation (all terms in the equation are multiplied by the same exponential \(e^{-iq\Lambda}\ ,\) so that that factor is immaterial). The absolute value, \(|\psi(\vec x,t)|^2\) does not change at all under a gauge transformation, and indeed this is the quantity that corresponds to something that is physically observable: it is the probability that a particle can be found at \((\vec x,t)\ .\) A rule of thumb is that local gauge invariance requires all derivatives in our equations to be replaced by covariant derivatives. 2. 
Yang-Mills theory In the 1950s, it was known that the field equations for the field of a proton, \(P(\vec x,t)\ ,\) and the field of a neutron, \(N(\vec x,t)\ ,\) are such that one can rotate these fields in a complex two-dimensional space: \[\tag{14} \left({P'(\vec x,t)\atop N'(\vec x,t)}\right)=\left({a\quad b\atop c\quad d}\right)\left({P(\vec x,t)\atop N(\vec x,t)}\right) \ ,\] where the matrix \( U=\left({a\quad b\atop c\quad d}\right)\) may contain four arbitrary complex numbers, as long as it is unitary (\(U\,U^\dagger=I\)), and usually, the determinant of \(U\) is restricted to be 1. Since these equations resemble the rotations one can perform in ordinary space, to describe spin of a particle, the symmetry in question here was called isospin. In 1954, C.N. Yang and R.L. Mills published a very important idea. Could one modify the equations in such a way that these isospin rotations could be regarded as local gauge rotations? this would mean that, unlike the case that was known, the matrices \(U\) should be allowed to depend on space and time, just like the gauge generator \(\Lambda(\vec x,t)\) in electromagnetism. Yang and Mills were also inspired by the observation that Einstein's theory of gravity, General Relativity, also allows for transformations very similar to local gauge transformations: the replacement of the coordinate frame by other coordinates in an arbitrary, space-time dependent way. To write down field equations for protons and neutrons, one needs the derivatives of these fields. The way these derivatives transform under a local gauge transformation implies that there will be terms containing the gradients \(\vec\nabla U\) of the matrices \(U\ .\) To make the theory gauge-invariant, these gradients would have to be cancelled out, and in order to do that, Yang and Mills replaced the derivatives \(\vec\nabla\) by covariant derivatives \(\vec D=\vec\nabla -ig\vec A(\vec x,t)\ ,\) as was done in electromagnetism, see Equation (11). Here, however, the fields \(\vec A\) had to be matrix-valued, just as the isospin \(U\) matrices: \[\tag{15} \vec A=\left({\vec a_{11}\quad \vec a_{12}\atop \vec a_{21}\quad \vec a_{22}}\right)\ ,\] \[ \hbox{Tr}\,\vec A=\vec a_{11}+\vec a_{22}=0\ ,\quad \vec a_{11}=\vec a_{11}^{\,*}\,,\quad \vec a_{21} =\vec a_{12}^{\,*}\ . \] Since the \(U\) matrices contain four coefficients with one constraint (the determinant has to be 1), one ends up with a set of three new vector fields (there are 3 independent real vectors in the matrix (15)). At first sight, they appear to be the fields of a vector particle with isospin one. In practice, this should correspond to particles with one unit of spin (i.e., the particle rotates about its axis), and its electric charge could be neutral or one or minus one unit. Yang-Mills theory therefore predicts and describes a new type of particles with spin one that transmit a force not unlike the electro-magnetic force. The fields that are equivalent to Maxwell's electric and magnetic fields are obtained by considering the commutator of two covariant derivatives: \[\tag{16} [D_\mu,\,D_\nu]=D_\mu D_\nu-D_\nu D_\mu= -ig(\partial_\mu A_\nu-\partial_\nu A_\mu-ig[A_\mu,\,A_\nu]) = -ig F_{\mu\nu}\ ,\] where the indices take the values \(\mu,\ \nu=0,1,2,3\ ,\) with 0 referring to the time-component. Since \( F_{\mu\nu}=-F_{\nu\mu}\ ,\) this tensor has 6 independent components, three forming an electric vector field, and three a magnetic field. Each of these components is also a matrix. 
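(Illustrative addition, not part of the original article.) The non-Abelian term in Eq. (16) can be made concrete with a small numerical sketch; it assumes Python with NumPy, builds two constant, traceless Hermitian potentials from the Pauli matrices, and shows that the commutator term alone already produces a nonzero field strength - unlike in Maxwell's theory, where constant potentials give zero field strength:

    import numpy as np

    # Pauli matrices: a basis for the traceless Hermitian 2x2 matrices of Eq. (15)
    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)

    g = 0.5                  # an arbitrary illustrative value for the coupling constant
    A_mu = 0.3 * s1          # two constant (space-time independent) gauge potentials
    A_nu = 0.7 * s2

    # Eq. (16): the derivative terms vanish for constant potentials,
    # leaving only the commutator contribution
    F_mu_nu = -1j * g * (A_mu @ A_nu - A_nu @ A_mu)
    print(F_mu_nu)           # proportional to s3: nonzero, purely from the self-interaction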
The commutator, \([A_\mu,\,A_\nu]\) is a new, non-linear term, which makes the Yang-Mills equations a lot more complicated than the Maxwell system. In other respects, the Yang-Mills particles, being the energy quanta of the Yang-Mills fields, are similar to photons, the quanta of light. Yang-Mills particles also carry no intrinsic mass, and travel with the speed of light. Indeed, these features were at first reasons to dismiss this theory, because massless particles of this sort should have been detected long ago, whereas they were conspicuously absent. 3. The Brout-Englert-Higgs mechanism The theory was revived when it was combined with spontaneous breakdown of local gauge symmetry, also known as the Brout-Englert-Higgs mechanism. Consider a scalar (spinless) particle described by a field \(\phi(\vec x,t)\ .\) This field is assumed to be a vector field, in the sense that it undergoes some rotation when a gauge transformation is performed. In practice this means that the particle carries one or several kinds of charges that make it sensitive to the Yang-Mills force, and often it has several components, which means there are various species of this particle. Such particles must obey Bose-Einstein statistics, which implies that it can undergo Bose-Einstein condensation. In terms of its field \(\phi\) this means the following: In the vacuum the field \(\phi\) takes a non-vanishing value \(F\ .\) This is usually written as \[\tag{17} \langle\phi(\vec x,t)\rangle=F \ .\] After a local gauge transformation, this would look like \[\tag{18} \langle\phi'(\vec x,t)\rangle=U(\vec x,t)\,F \ ,\] where \( U(\vec x,t) \) is a matrix field representing the local gauge transformation. It is often said that, therefore, the vacuum is not gauge-invariant, but, strictly speaking, this is not correct. The situation described by Equation (18) is the same vacuum as (17); it is only described differently. However, this property of the vacuum does have important consequences. Due to the fact that the rotated field now describes the same situation as the previous value, there is no different physical particle associated to the rotated field. Only the length of the vector \(\phi\) has physical significance. This length is gauge-invariant. therefore, only the length of the vector \( \phi \) is associated to one type of particle, which must be neutral for the Yang-Mills forces. This particle is now called the Higgs particle. As the Higgs field is a constant source for the Yang-Mills field strength, the Yang-Mills field equations are modified by it. Due to the Higgs field, the Yang-Mills "photons" described by the Yang-Mills field \(A_\mu(\vec x,t)\) get a mass. This can also be explained as follows. Massless photons can only have two helicity states, that is, they can spin only in two directions. This is related to the fact that light can be polarized in exactly two directions. Massive photons (particles with non-vanishing mass and with one unit of spin), can always spin in three directions. This third rotation mode is now provided by the Higgs field, which itself loses several of its physical components. The total number of physical field components stays the same before and after the Brout-Englert-Higgs mechanism. A further consequence of this effect on the Yang-Mills field is that the force transmitted by the massive photons is a short-range one (the range of the force being inversely proportional to the mass of the photon). The weak interactions could now be successfully described by a Yang-Mills theory. 
The set of local gauge transformations forms the mathematical group \(SU(2)\times U(1)\ .\) This group generates 4 species of photons (3 for \(SU(2)\) and 1 for \( U(1)\)). The Brout-Englert-Higgs mechanism breaks this group down in such a way that a subgroup of the form \(U(1)\) remains. This is the electromagnetic theory, with just one photon. The other three photons become massive; they are responsible for the weak interactions, which in practice appear to be weak just because these forces have a very short range. With respect to electromagnetism, two of these intermediate vector bosons, \(W^\pm\ ,\) are electrically charged, and a third, \( Z^0\ ,\) is electrically neutral. When the latter's existence was derived from group theoretical arguments, this gave rise to the prediction of a hitherto unnoticed form of the weak interaction: the neutral current interaction. This theory, that combines electromagnetism and the weak force into one, is called the electro-weak theory, and it was the first fully renormalizable theory for the weak force (see Chapter 5). 4. Quantum Chromodynamics When it was understood that the weak interactions, together with the electromagnetic ones, can be ascribed to a Yang-Mills gauge theory, the question was asked how to address the strong force, a very strong force with relatively short range of action, which controls the behavior of the hadronic particles such as the nucleons and the pions. It was understood since 1964 that these particles behave as if built from subunits, called quarks. Three varieties of quarks were known (up, down, and strange), and three more would be discovered later (charm, top, and bottom). These quarks have the peculiar property that they permanently stick together either in triplets, or one quark sticks together with one anti-quark. Yet when they approach one another very closely, they begin to behave more freely as individuals. These features we now understand as, again, being due to a Yang-Mills gauge theory. Here, we have the mathematical group \(SU(3)\) as local gauge group, while now the symmetry is not affected by any Brout-Englert-Higgs mechanism. Due to the non-linear nature of the Yang-Mills field, it self-interacts, which forces the fields to come in patterns quite different from the electromagnetic case: vortex lines are formed, which form unbreakable bonds between quarks. At close distances, the Yang-Mills force becomes weak, and this is a feature that can be derived in an elementary way using perturbation expansions, but it is a property of the quantized Yang-Mills system that hitherto had been thought to be impossible for any quantum field theory, called asymptotic freedom. The discovery of this feature has a complicated history. \(SU(3)\) implies that every species of quark comes in three types, referred to as color: they are "red", "green" or "blue". The field of a quark is therefore a 3-component vector in an internal 'color' space. Yang-Mills gauge transformations rotate this vector in color space. The Yang-Mills fields themselves form 3 by 3 matrices, with one constraint (since the determinant of the Yang-Mills gauge rotation matrices must be kept equal to one). Therefore, the Yang-Mills field has 8 colored photon-like particles, called gluons. Anti-quarks carry the conjugate colors ("cyan", "magenta" or "yellow"). The theory is now called Quantum chromodynamics (QCD). It is also a renormalizable theory. 
The gluons effectively keep the quarks together in such a way that their colors add up to a total that is color-neutral ("white" or a "shade of gray"). This is why either three quarks or one quark and one anti-quark can sit together to form a physically observable particle (a hadron). This property of the theory is called permanent quark confinement. Because of the strongly non-linear nature of the fields, quark confinement is in fact quite difficult to prove, whereas the property of asymptotic freedom can be demonstrated exactly. Indeed, a mathematically air-tight demonstration of confinement, with the associated phenomenon of a mass gap in the theory (the absence of strictly massless hadronic objects) has not yet been given, and is the subject of a $1,000,000,- prize, issued by the Clay Mathematics Institute of Cambridge, Massachusetts. One cannot choose all field equations at will. They must obey conditions such as energy conservation. This implies that there is an action principle (action = reaction), and this principle is most conveniently expressed by writing the Lagrangian for the theory. The Lagrangian (more precisely, Lagrange density) \( \mathcal{L}(\vec x,t)\) is an expression in terms of the fields of the system. For a real scalar field \(\Phi\) it is \[\tag{19} \mathcal{L}=-{1\over 2}\Big((\vec D\Phi)^2-(D_t\Phi)^2+m^2\Phi^2\Big)\ , \] and for the Maxwell fields it is \[\tag{20} \mathcal{L}={1\over 2}(\vec E^2-\vec B^2)=-{1\over 4}\sum_{\mu,\nu}F_{\mu\nu}F_{\mu\nu}\ , \] where the summation is the Lorentz covariant summation over the Lorentz indices \(\mu,\ \nu\ .\) The field equations can all be derived from this expression by demanding that the action integral, \[\tag{21} S=\int\mathrm{d}^3\vec x\mathrm{d}t\,\mathcal{L}(\vec x,t)\ , \] where \(\mathcal{L}\) is the sum of the Lagrangians of all fields in the system, be stationary under all infinitesimal variations of these fields. This is called the Euler-Lagrange principle, and the equations are the Euler-Lagrange equations. For gauge theories this generalizes directly: one writes \[\tag{22} \mathcal{L}=-{1\over 4}\hbox{Tr}\sum_{\mu,\nu}F_{\mu\nu}F_{\mu\nu}+ ...\ ,\] using the expression (16) for the gauge fields \(F_{\mu\nu}\ ,\) and adds all terms associated to the other fields that are introduced. All symmetries of the theory are the symmetries of the Lagrangian, and the dimensionality of all coupling strengths can easily be read off from the Lagrangian as well, which is of importance for the renormalization procedure (see next chapter). 6. Renormalization and Anomalies According to the laws of quantum mechanics, the energy in a field consists of energy packets, and these energy packets are in fact the particles associated to the field. Quantum mechanics gives extremely precise prescriptions on how these particles interact, as soon as the field equations are known and can be given in the form of a Lagrangian. The theory is then called quantum field theory (QFT), and it explains not only how forces are transmitted by the exchange of particles, but it also states that multiple exchanges should occur. In many older theories, these multiple exchange gave rise to difficulties: their effects seem to be unbounded, or infinite. In a gauge theory, however, the small distance structure is very precisely prescribed by the requirement of gauge-invariance. In such a theory one can combine the infinite effects of the multiple exchanges with redefinitions of masses and charges of the particles involved. 
This procedure is called renormalization. In 3 space and 1 time dimension, most gauge theories are renormalizable. This allows us to compute the effects of multiple particle exchanges to high accuracy, thus allowing for detailed comparison with experimental data. Renormalization requires that masses and coupling strengths of particles be defined very carefully. If all coupling parameters of a theory are given a mass-dimensionality that is zero or positive, the number of divergent expressions stays under control. Usually, requiring the theory to remain gauge invariant throughout the renormalization procedure leaves no ambiguity for the definitions. However, it is not obvious that unambiguous, gauge invariant definitions exist at all, since gauge invariance has to hold for all interactions, whereas only a few infinite expressions can be replaced by finite ones. The proof that showed how and why unambiguous renormalized expressions can be obtained, could be most elegantly obtained by realizing that gauge theories can be formulated in any number of space-time dimensions. It was even possible to define all Feynman diagrams unambiguously for theories in spaces where the dimensions are \(3-\epsilon\ ,\) where \(\epsilon\) is an infinitesimal quantity. Taking the limit \(\epsilon\rightarrow 0\) requires the subtraction of poles of the form \(C_n/\epsilon^n\) from the original, "bare" mass and coupling parameters. The result is a set of unique, finite and gauge invariant expressions. In practice, it was found that this procedure, called dimensional regularization and renormalization is also convenient for carrying out technically complicated calculations of loop diagrams. However, there is a special case where extension to dimensions different from the canonical one is impossible. This is when fermionic particles exhibit chiral symmetry. Chiral symmetry is a symmetry that distinguishes left-rotating from right rotating particles, and indeed it plays a crucial role in the Standard Model. Chiral symmetry is only possible if space is 3 dimensional, and so does not allow for dimensional renormalization. Indeed, sometimes chiral symmetry cannot be preserved when renormalizing the theory. An anomaly occurs, called chiral anomaly. It was first discovered when a calculation of the \(\pi_0\rightarrow\gamma\gamma\) decay amplitude gave answers that did not follow the expected symmetry pattern. Since the gauge symmetries of the Standard Model do distinguish left rotating from right rotating particles (in particular, only left-rotating neutrinos are produced in a weak interaction), anomalies were a big concern. It so happens, however, that all anomalous amplitudes that would jeopardize gauge invariance and hence the self consistency of our equations, all cancel out. This is related to the fact that certain "grand unified" extensions of the Standard Model are based on anomaly free gauge groups (see Chapter 7). The anomaly has a direct physical implication. A topologically twisted field configuration called the instanton (because it represents an event at a given instant in time), represents exactly the gauge field configuration where the anomaly is maximal. It causes a violation of the conservation of some of the gauge charges. When there is an anomaly, at least one of the charges involved cannot be a gauge charge, but must be a charge to which no gauge field is coupled, like baryonic charge. Indeed, in the electroweak theory, instantons trigger the violation of the conservation laws of baryons. 
It is now believed that this might explain the imbalance between matter and antimatter that must have arisen during early phases of the Universe. 7. Standard Model Apart from the weak force, the electromagnetic force and the strong force, there is the gravitational force acting upon elementary particles. No other elementary forces are known. At the level of individual particles, gravity is so weak that it can be ignored in most cases. Suppose now that we take the \( SU(2)\times U(1)\) Yang-Mills system, together with the Higgs field, to describe electromagnetism and the weak force, and add to this the \(SU(3)\) Yang-Mills theory for the strong force, and we include all known elementary matter fields, being the quarks and the leptons, with their appropriate transformation rules under a gauge transformation; suppose we add to this all possible ways these fields can mix, a feature observed experimentally, which can be accounted for as a basic type of self-interaction of the fields. Then we obtain what is called the Standard Model. It is one great gauge theory that literally represents all our present understanding of the subatomic particles and their interactions. The Standard Model owes its strength to the fact that it is renormalizable. It has been subject of numerous experimental experiments and observations. It has withstood all these tests remarkably well. One important modification became inevitable around the early 1990's: in the leptonic sector, also the neutrinos carry a small amount of mass, and their fields mix. This was not totally unexpected, but highly successful neutrino experiments (in particular the Japanese Kamiokande experiment) now had made it clear that these effects are really there. They actually implied a further reinforcement of the Standard Model. One ingredient has not yet been confirmed: the Higgs particle. Observation of this object is expected in the near future, notably by the Large Hadron Collider at CERN, Geneva. The simplest versions of the Standard model only require one single, electrically neutral Higgs particle, but the 'Higgs sector' could be more complicated: the Higgs could be much heavier than presently expected, or there could exist more than one variety, in which case also electrically charged scalar particles would be found. The Standard Model is not perfect from a mathematical point of view. At extremely high energies (energies much higher than what can be attained today in the particle accelerators), the theory becomes unnatural. In practice, this means that we do not believe anymore that everything will happen exactly as prescribed in the theory; new phenomena are to be expected. The most popular scenario is the emergence of a new symmetry called supersymmetry, a symmetry relating bosons with fermions (particles such as electrons and quarks, which require Dirac fields for their description). 8. Grand Unified Theories It is natural to suspect that the electroweak forces and the strong forces should also be connected by gauge rotations. This would imply that all forces among the subatomic particles are actually related by gauge transformations. There is no direct evidence for this, but there are several circumstances that appear to point in this direction. In the present version of the Standard Model, the \ ( SU(3)\) Yang-Mills fields, describing the strong force, indeed exhibit very large coupling strengths, whereas the \(U(1)\) sector, describing the electric (and part of the weak) sector, has a tiny coupling strength. 
One can now use the mathematics of renormalization, in particular the so-called renormalization group, to calculate the effective strengths of these forces at much higher energies. It is found that the \(SU(3)\) forces decrease in strength, due to asymptotic freedom, but that the \(U(1)\) coupling strength increases. The \(SU(2)\) force varies more slowly. At extremely high energies, corresponding to ultra short distance scales, around \(10^{-32}\) cm, the three coupling strengths appear to approach one another, as if that is the place where the forces unite. It was found that \(SU(2)\times U(1)\) and \(SU(3)\) fit quite nicely in a group called \(SU(5)\ .\) They indeed form a subgroup of \(SU(5)\ .\) One may then assume that a Brout-Englert-Higgs mechanism breaks this group down to a \(SU(2)\times U(1)\times SU(3)\) subgroup. One obtains a so-called Grand Unified Field theory. In this theory, one assumes three generations of fermions, each transforming in the same way under \(SU(5)\) transformations (mathematically, they form a \(\mathbf{10}\) and a \(\overline\mathbf{5}\) representation). The \(SU(5)\) theory, however, predicts that the proton can decay, extremely slowly, into leptons and pions. The decay has been searched for but not found. Also, in this model, it is not easy to account for the neutrino mass and its mixings. A better theory was found where \( SU(5)\) is enlarged into \(SO(10)\ .\) The \(\mathbf{10}\) and the \(\overline\mathbf{5}\) representations of \(SU(5) \) together with a single right handed neutrino field, combine in to a \(\mathbf{16}\) representation of \(SO(10)\) (one for each of the three generations). This grand unified model puts the neutrinos at the same level as the charged leptons. Often, it is extended to a supersymmetric version. 9. Final remarks Any gauge theory is constructed as follows. First, choose the gauge group. This can be the direct product of any number of irreducible, compact Lie groups, either of the series \(SU(N)\ ,\) \(SO(N)\) or \(Sp(2N)\ ,\) or the exceptional groups \(G_2,\ F_4,\ E_6, E_7,\) or \(E_8\ .\) Then, choose fermionic (spin 1/2) and scalar (spin 0) fields forming representations of this local gauge group. The left helicity and the right helicity components of the fermionic fields may be in different representations, provided that the anomalies cancel out. Besides the local gauge group, we may impose exact and/or approximate global symmetries as well. Finally, choose mass terms and interaction terms in the Lagrangian, described by freely adjustable coupling parameters. There will be only a finite number of such parameters, provided that all interactions are chosen to be of the renormalizable type (this can now be read off easily from the theory's Lagrangian). There are infinitely many ways to construct gauge theories along these lines. However, it seems that the models that are most useful to describe observed elementary particles, are the relatively simple ones, based on fairly elementary mathematical groups and representations. One may wonder why Nature appears to be so simple, and whether it will stay that way when new particles and interactions are discovered. Conceivably, more elaborate gauge theories will be needed to describe interactions at energies that are not yet attainable in particle accelerators today. Related subjects are Supersymmetry and Superstring theory. They are newer ideas about particle structure and particle symmetries, where gauge invariance also plays a very basic role. 
See Also
Becchi-Rouet-Stora-Tyutin symmetry, Englert-Brout-Higgs-Guralnik-Hagen-Kibble mechanism, Gauge invariance, Slavnov-Taylor identities, Zinn-Justin equation
{"url":"http://www.scholarpedia.org/article/Gauge_theories","timestamp":"2014-04-18T10:35:31Z","content_type":null,"content_length":"76944","record_id":"<urn:uuid:db0d7e2d-43f9-4f18-9433-63a4369b4f8a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
Model Data Structure

• The Import and SimplifyModel commands of the BlockImporter package return a Maple record that describes a converted Simulink model. This help page describes the elements of that record. The first major section, Variables, describes the syntax of the variables used in the fields of the record. The second major section, Fields, describes the format of each of the fields (slots) of the record.

Variables

• Indexed names are used in the generated equations to represent signals, variables, and parameters in the diagram. The base of the indexed name identifies the type of the signal: u is an input signal, y is an output signal, x is a state-variable, fn is a function, Source is a source signal, Sink is a sink signal, and K is a parameter.

Input/Output Signals
• Input/output signals are the inputs/outputs of the blocks in the diagram.
• The base symbol u identifies input signals; the base symbol y identifies output signals.
• The first index is the block identifier for the block to which the signal is referenced.
• The second index is the port number.
• The remaining indices specify the port index of a vector/matrix port.
• For example, the signal u[3,2,1,1] is an input to block 3; it connects to the second inport port, which is a matrix port, and this is the (1,1) index of the matrix.

Functions
• Unevaluated function calls represent expressions too complicated to be readily represented by algebraic expressions.
• The base symbol fn identifies a function.
• The first index is a name corresponding to the type of block that the function represents; MATLABFcn and Lookup are two possibilities.

Parameters
• Parameters correspond to mask parameters and MATLAB® global parameters used in the block equations.
• The base symbol K identifies parameters.
• The first index is the block identifier corresponding to the subsystem block in which the mask parameter is defined. A MATLAB® global variable is assigned a block identifier of 0.
• The second index is a string corresponding to the name of the MATLAB® parameter.
• The values of mask parameters are given in the parameters field (see below).
• For example, K[3,"A0"] is the Simulink mask parameter A0 defined in subsystem 3. K[0,"x"] is the MATLAB® global variable x.

Sink Signals
• A sink signal is a signal that is terminated by a sink block, such as a Scope or Display.
• The base symbol Sink identifies sink signals.
• The first index is the name of the type of block.
• The second index is the block identifier.
• The third index is the port number.
• The remaining indices specify the port index of a vector/matrix port.
• For example, the signal Sink[Scope,3,2,1] is a connection to the Scope with block identifier 3; it connects to the second inport port, which is a vector port, and this is the (1) index of the vector.

Source Signals
• A source signal is a signal that is generated by a source block, such as a Ramp or Step.
• The base symbol Source identifies source signals.
• The first index is the name of the type of block.
• The second index is the block identifier.
• The third index is the port number.
• The remaining indices specify the port index of a vector/matrix port.
• For example, the signal Source[Step,3,2,1] is a connection from the Step with block identifier 3; it comes from the second port, which is a vector port, and this is the (1) index of the vector.
• The Maple expression for the source is contained in the inputeqs field.

State Variables
• A state variable corresponds to a dependent variable in a differential equation.
• The base symbol x identifies state variables.
• The first index is the block identifier for the block with which the state variable is associated.
• The remaining indices specify the particular variable; they generally correspond to the port index of the input signal with which the state variable is associated. For example, the state variables for an integrator with block identifier 3 and a port width of two might be x[3,1,1] and x[3,1,2].

Fields

The data structure consists of a record with the following fields:

• equations = list(equation)
Contains the equations that define (1) the outputs of the blocks of the system, (2) the state equations (if any) of the blocks, and (3) the links between blocks. The left side of each equation should be an output. For equations that define the outputs of a block, these are the names of the output signal, for example, y[4,1,1] = 3*u[4,1,1]. For link equations, these are the inputs of the destination block, for example, u[5,1,1] = y[4,1,1].

• initialeqs = list(function = anything)
Defines the initial values of the state variables of the system. The left side of each equation is an unevaluated function call with the form x[num, ix](0), where num is the block identifier of the functional block in which the state variable appeared, and ix is a sequence of integers that uniquely identify that state variable in the block. The right side of the equation is the initial value of the state variable. It may be a numeric value or a parameter.

• inputvars = list(indexed)
The names of the inputs of the system or subsystem. These are connected to the outputs of source blocks (Sin, Ramp, etc.) and Inport blocks of subsystems.

• linkeqs = list(indexed = indexed)
Equations that identify the outputs of blocks with the inputs of the blocks they connect. For example, the equation u[2,1,1] = y[3,1,2] indicates that the first input port of block 2 is driven by the second element of the first output port of block 3.

• outputvars = list(indexed)
The names of the outputs of the system or subsystem. These are connected to the inputs of sink blocks and Outport blocks of subsystems.

• parameters = list(indexed = anything)
Specifies the values of subsystem mask and MATLAB® global parameters used in equations. The left side of each equation consists of an indexed name K[num,str], where num is the block identifier of the subsystem in which the parameter was defined (0 if the parameter is a MATLAB® global), and str is a string corresponding to the parameter name. The right side of the equation is the value of the name. It may be expressed in terms of other block parameters.

• A table whose indices consist of indexed names and whose entries are corresponding procedures.

• sourceeqs = list(indexed = algebraic)
Equations that define the time-behavior of sources. The left side is a source symbol (see above). The right side is the definition of the source.

• statevars = list(indexed)
The state-variables of the system. State variables are the dependent variables in differential equations.

See Also: BlockImporter, BlockImporter[Import], BlockImporter[SimplifyModel]
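As a rough illustration only (my own sketch, written as a Python-style literal rather than an actual Maple record, with made-up block numbers), the fields described above might be populated like this for a tiny one-integrator model:

# Hypothetical contents; field names follow the help page, values are strings
# written in the indexed-name notation described above.
model = {
    "equations":  ["y[4,1,1] = x[4,1]",        # integrator output is its state
                   "u[5,1,1] = y[4,1,1]"],     # link: block 4 output feeds block 5 input
    "initialeqs": ["x[4,1](0) = 0"],           # integrator starts at zero
    "linkeqs":    ["u[5,1,1] = y[4,1,1]"],
    "parameters": ['K[2,"A0"] = 1.5'],         # mask parameter A0 defined in subsystem 2
    "sourceeqs":  ["Source[Step,3,1,1] = 1"],  # definition of a Step source as a function of time
    "statevars":  ["x[4,1]"],
}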
{"url":"http://www.maplesoft.com/support/help/AddOns/view.aspx?path=BlockImporter/datastructure","timestamp":"2014-04-18T05:46:48Z","content_type":null,"content_length":"178680","record_id":"<urn:uuid:e8282c3c-b06c-4dbc-8ecc-9dbb32914cb0>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Promote your favorite R functions
November 9, 2010, by David Smith

The 27 base and recommended libraries of the standard R 2.12 distribution together contain 3556 functions (you can check using the code posted after the jump). Many of the functions are commonly used: c, data.frame, rnorm, lm. But some of those functions, while being extremely useful, may be less well known to many R users. Some examples I wish I'd learned about earlier include tapply, agrep, and formatC. There are also several really useful help pages that aren't associated with specific functions at all, like Syntax and .Machine, that don't get the exposure they deserve. To help get some of these "hidden gems" of the R documentation better known, we've added a "Function of the Day" section to the home page of inside-R.org. What R functions and help pages do you wish you'd known about earlier? Add a comment to your favorite pages in the Language Reference explaining why they deserve more love, and we'll consider the comments nominations for the Function of the Day.

inside-R.org: Language Reference

# names of the base and recommended packages in R 2.12
base.rec <- rownames(installed.packages(priority = c("base", "recommended")))

n.obj <- rep(0, length(base.rec))
names(n.obj) <- base.rec

# count objects in each package
for (p in base.rec) {
  library(p, character.only = TRUE)
  n.obj[p] <- length(ls(paste("package:", p, sep = "")))
}

# total count
sum(n.obj)
{"url":"http://www.r-bloggers.com/promote-your-favorite-r-functions/","timestamp":"2014-04-19T14:39:22Z","content_type":null,"content_length":"41687","record_id":"<urn:uuid:4776df0a-2dee-4171-8940-97c401850c27>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
XY-Chains, sudokuwiki.org

Looking at exactly the same starting cell, it appears we can make further eliminations, this time of 6s in column 3. We go clockwise, this time, round the rectangle. It proves 6 will either be on B3 or on the rectangle's other corner in column 3, so 6 can be removed from the remaining cells of that column.

If you want to finish the puzzle by yourself, look out for a third elimination with those same four cells using 2s in column 8, or step through with the solver.

Same cells, different XY-Chain.
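As a generic sketch of the rule being used here (Python, my own illustration, not the site's solver): once a chain of bivalue cells proves that a value must sit at one of the two chain ends, that value can be removed from every cell that sees both ends. The chain-walking step itself is omitted; candidates maps each cell to its candidate set, and peers maps each cell to the set of cells sharing its row, column or box.

def xy_chain_eliminations(candidates, end_a, end_b, value, peers):
    """Remove `value` from every cell that sees both verified chain ends."""
    removed = []
    for cell in peers[end_a] & peers[end_b]:          # cells visible from both ends
        if cell not in (end_a, end_b) and value in candidates[cell]:
            candidates[cell].discard(value)           # eliminate the candidate
            removed.append(cell)
    return removed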
{"url":"http://www.sudokuwiki.org/Print_XY_Chains","timestamp":"2014-04-17T01:16:46Z","content_type":null,"content_length":"6305","record_id":"<urn:uuid:96f7bb97-d9bc-413f-8fc9-a6d78a8a571c>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
How do you calculate voltage?

Q: The equation for electric force requires two charges and the distance between them. Wouldn't that mean the voltage increases and decreases depending on its distance from the ground?

A: You seem to be confusing electric potential (e.g. volts) with electric field strength (having units like volts per metre). Or maybe you're confusing the potential at the balloon with a potential induced at a specified distance from it. Note that the actual potential at such a point is the sum of induced potentials from all sources, including the earth.
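For a single point charge, the quantities being contrasted are (standard textbook formulas, added here for reference; they are not from the original thread):

$$F = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2}\ \text{(force, needs two charges)},\qquad E = \frac{1}{4\pi\varepsilon_0}\,\frac{q}{r^2}\ \text{(field, V/m)},\qquad V = \frac{1}{4\pi\varepsilon_0}\,\frac{q}{r}\ \text{(potential, V)}.$$

Both $E$ and $V$ fall off with distance from the charge, but only $V$ is measured in volts, and the potential at any point is the sum of contributions from all charges present, including charge induced on the earth.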
{"url":"http://www.physicsforums.com/showthread.php?p=3881158","timestamp":"2014-04-18T08:14:29Z","content_type":null,"content_length":"27373","record_id":"<urn:uuid:889b6de2-de96-4520-93dd-a4af2f33598b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Algorithm Improvement through Performance Measurement: Part 5

Algorithm Improvement through Performance Measurement: Part 1
Algorithm Improvement through Performance Measurement: Part 2
Algorithm Improvement through Performance Measurement: Part 3
Algorithm Improvement through Performance Measurement: Part 4
Algorithm Improvement through Performance Measurement: Part 5

Random numbers are used in many domains, from networking protocols, to cryptography, to selection of a starting point for integrated circuit place and route algorithms, simulations, and computer graphics. When benchmarking algorithm performance, arrays of random input data at times provide the worst case input (but sometimes do not). In this article, I explore several random-number generators (RNGs), along with their strengths and weaknesses, as well as their effects on the performance of sorting algorithms.

I used the following method in In-place Hybrid N-bit-Radix Sort and In-place Hybrid Binary-Radix Sort to generate random 32-bit unsigned values:

unsigned long randomValue = ((unsigned long)rand()) << 30 |
                            ((unsigned long)rand()) << 15 |
                            ((unsigned long)rand());

This method uses the built-in rand() function, which generates a random number in the range 0 to RAND_MAX (which is 32767). Each call to rand() generates 15 random bits, and three calls to rand() are used to generate 32 random bits for the unsigned long data type (15 bits, 15 bits, and 2 bits). Function srand was first used to seed this pseudo-random generator, to provide a consistent starting point and so produce a repeatable sequence of random numbers to fill an array with.

I used a different random-number generator technique for all performance measurements in Stable Hybrid MSD N-bit-Radix Sort:

float f1 = ranX( &randState ); // 0.0 .. 1.0
float f2 = ranX( &randState ); // 0.0 .. 1.0
unsigned long tmp1 = (unsigned long)( (( 1 << 16 ) - 1 ) * f1 );
unsigned long tmp2 = (unsigned long)( (( 1 << 16 ) - 1 ) * f2 );
unsigned long result = ( tmp1 << 16 ) | tmp2;

where the ranX() function can be the ran0(), ran1(), or ran2() function from [4], with ran2() used for the performance measurements. This method generates 16 bits per ranX() function call. However, fewer bits can be generated per call by adding a masking operation as follows:

float f1 = ranX( &randState ); // 0.0 .. 1.0
unsigned long eightBits = (unsigned long)( (( 1 << 16 ) - 1 ) * f1 ) & 0xff;

This implementation generates 8 random bits per call to the random generation function, by taking the eight least significant bits of the result. Bits other than the least significant ones could be retained by using a different mask followed by a right shift. To produce a 32-bit random number using this method, four results of 8 bits each would need to be generated and concatenated together as:

unsigned long result32bit = ( eightBit[ 0 ] << 24 ) | ( eightBit[ 1 ] << 16 ) |
                            ( eightBit[ 2 ] << 8 )  |   eightBit[ 3 ];

Table 1 shows the number of unique values that each of the random number generators created when filling an array of 100 million 32-bit unsigned elements. Windows C rand(), as well as three functions from [4] and the Mersenne twister [5], are compared. In all cases, 32-bit unsigned array elements were generated by calling each function multiple times. For example, 1 bit at a time required 32 function calls to generate a 32-bit value, and 16 bits at a time required two function calls. The least significant bits were extracted and concatenated into a 32-bit value.
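The same idea, sketched in Python rather than the article's C++ (names and the use of Python's Mersenne-twister-backed random module are my own): draw k low-order bits per call and concatenate draws until 32 bits are filled.

import random

rng = random.Random(2)   # seeded for a repeatable sequence, like calling srand(2)

def rand32_from_chunks(k):
    """Build one 32-bit value from ceil(32/k) draws of k bits each."""
    value, filled = 0, 0
    while filled < 32:
        value = (value << k) | rng.getrandbits(k)   # keep only k low bits per draw
        filled += k
    return value & 0xFFFFFFFF   # trim excess high bits when k does not divide 32

print(hex(rand32_from_chunks(15)))   # e.g. three 15-bit draws, trimmed to 32 bits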
For Table 1, an array of 100 million 32-bit values was generated. STL's unique() function was used to count the number of unique values within this array. All of the functions were seeded with the same value of 2, negating it for ran0(), ran1(), and ran2() as they require. From this simple and fairly weak test (using the STL unique() function) it is evident that the Windows C rand() function can produce poor random number sequences, with only 4K unique numbers within the 100 million element array (only 0.004% unique). However, it can also produce sequences with a large percentage of unique values. This test is weak because it would be fooled by an incrementing 32-bit number sequence, which would have 100 million unique values but would not be random. But the test is adequate to detect this issue. Table 1 shows a problem in the Windows C rand() function that becomes especially evident when extracting 10 bits or fewer per function call. This does not seem to be a problem with only a single lower bit. When extracting 11 bits or more, the C rand() function produced more than 92% unique values. The other four RNGs tested do not exhibit this issue no matter how many bits are extracted per function call; their lower bits do not exhibit problems. These RNGs produce more than 97.7% unique values.
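The same check in Python (my sketch, reusing rand32_from_chunks from above; note that std::unique only collapses adjacent duplicates, which is why the C++ version sorts first, whereas a Python set does the equivalent in one step). As the article notes, the test is weak: a simple incrementing counter would score 100% unique without being random.

def fraction_unique(values):
    """Fraction of distinct values in the sample, analogous to sort + std::unique."""
    return len(set(values)) / len(values)

sample = [rand32_from_chunks(8) for _ in range(1_000_000)]   # smaller than the 100M used for Table 1
print(f"{fraction_unique(sample):.4%} unique")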
{"url":"http://www.drdobbs.com/open-source/algorithm-improvement-through-performanc/223101043","timestamp":"2014-04-23T18:29:03Z","content_type":null,"content_length":"97679","record_id":"<urn:uuid:1fd895e0-8d6d-4a9c-afc6-58150cd6233e>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
Games for the Verification of Timed Systems Vinayak Prabhu EECS Department University of California, Berkeley Technical Report No. UCB/EECS-2008-97 August 15, 2008 Models of timed systems must incorporate not only the sequence of system events, but the timings of these events as well to capture the real-time aspects of physical systems. Timed automata are models of real-time systems in which states consist of discrete locations and values for real-time clocks. The presence of real-time clocks leads to an uncountable state space. This thesis studies verification problems on timed automata in a game theoretic framework. For untimed systems, two systems are close if every sequence of events of one system is also observable in the second system. For timed systems, the difference in timings of the two corresponding sequences is also of importance. We propose the notion of bisimulation distance which quantifies timing differences; if the bisimulation distance between two systems is epsilon, then (a) every sequence of events of one system has a corresponding matching sequence in the other, and (b) the timings of matching events in between the two corresponding traces do not differ by more than epsilon. We show that we can compute the bisimulation distance between two timed automata to within any desired degree of accuracy. We also show that the timed verification logic TCTL is robust with respect to our notion of quantitative bisimilarity, in particular, if a system satisfies a formula, then every close system satisfies a close formula. Timed games are used for distinguishing between the actions of several agents, typically a controller and an environment. The controller must achieve its objective against all possible choices of the environment. The modeling of the passage of time leads to the presence of zeno executions, and corresponding unrealizable strategies of the controller which may achieve objectives by blocking time. We disallow such unreasonable strategies by restricting all agents to use only receptive strategies --- strategies which while not being required to ensure time divergence by any agent, are such that no agent is responsible for blocking time. Time divergence is guaranteed when all players use receptive strategies. We show that timed automaton games with receptive strategies can be solved by a reduction to finite state turn based game graphs. We define the logic timed alternating-time temporal logic for verification of timed automaton games and show that the logic can be model checked in EXPTIME. We also show that the minimum time required by an agent to reach a desired location, and the maximum time an agent can stay safe within a set of locations, against all possible actions of its adversaries are both computable. We next study the memory requirements of winning strategies for timed automaton games. We prove that finite memory strategies suffice for safety objectives, and that winning strategies for reachability objectives may require infinite memory in general. We introduce randomized strategies in which an agent can propose a probabilistic distribution of moves and show that finite memory randomized strategies suffice for all omega-regular objectives. We also show that while randomization helps in simplifying winning strategies, and thus allows the construction of simpler controllers, it does not help a player in winning at more states, and thus does not allow the construction of more powerful controllers. Finally we study robust winning strategies in timed games. 
In a physical system, a controller may propose an action together with a time delay, but the action cannot be assumed to be executed at the exact proposed time delay. We present robust strategies which incorporate such jitters and show that the set of states from which an agent can win robustly is computable. Advisor: Pravin Varaiya and Thomas A. Henzinger BibTeX citation: Author = {Prabhu, Vinayak}, Title = {Games for the Verification of Timed Systems}, School = {EECS Department, University of California, Berkeley}, Year = {2008}, Month = {Aug}, URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-97.html}, Number = {UCB/EECS-2008-97}, Abstract = {Models of timed systems must incorporate not only the sequence of system events, but the timings of these events as well to capture the real-time aspects of physical systems. Timed automata are models of real-time systems in which states consist of discrete locations and values for real-time clocks. The presence of real-time clocks leads to an uncountable state space. This thesis studies verification problems on timed automata in a game theoretic For untimed systems, two systems are close if every sequence of events of one system is also observable in the second system. For timed systems, the difference in timings of the two corresponding sequences is also of importance. We propose the notion of bisimulation distance which quantifies timing differences; if the bisimulation distance between two systems is epsilon, then (a) every sequence of events of one system has a corresponding matching sequence in the other, and (b) the timings of matching events in between the two corresponding traces do not differ by more than epsilon. We show that we can compute the bisimulation distance between two timed automata to within any desired degree of accuracy. We also show that the timed verification logic TCTL is robust with respect to our notion of quantitative bisimilarity, in particular, if a system satisfies a formula, then every close system satisfies a close formula. Timed games are used for distinguishing between the actions of several agents, typically a controller and an environment. The controller must achieve its objective against all possible choices of the environment. The modeling of the passage of time leads to the presence of zeno executions, and corresponding unrealizable strategies of the controller which may achieve objectives by blocking time. We disallow such unreasonable strategies by restricting all agents to use only receptive strategies --- strategies which while not being required to ensure time divergence by any agent, are such that no agent is responsible for blocking time. Time divergence is guaranteed when all players use receptive We show that timed automaton games with receptive strategies can be solved by a reduction to finite state turn based game graphs. We define the logic timed alternating-time temporal logic for verification of timed automaton games and show that the logic can be model checked in We also show that the minimum time required by an agent to reach a desired location, and the maximum time an agent can stay safe within a set of locations, against all possible actions of its adversaries are both computable. We next study the memory requirements of winning strategies for timed automaton games. We prove that finite memory strategies suffice for safety objectives, and that winning strategies for reachability objectives may require infinite memory in general. 
We introduce randomized strategies in which an agent can propose a probabilistic distribution of moves and show that finite memory randomized strategies suffice for all omega-regular We also show that while randomization helps in simplifying winning strategies, and thus allows the construction of simpler controllers, it does not help a player in winning at more states, and thus does not allow the construction of more powerful controllers. Finally we study robust winning strategies in timed games. In a physical system, a controller may propose an action together with a time delay, but the action cannot be assumed to be executed at the exact proposed time delay. We present robust strategies which incorporate such jitters and show that the set of states from which an agent can win robustly is computable.} EndNote citation: %0 Thesis %A Prabhu, Vinayak %T Games for the Verification of Timed Systems %I EECS Department, University of California, Berkeley %D 2008 %8 August 15 %@ UCB/EECS-2008-97 %U http://www.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-97.html %F Prabhu:EECS-2008-97
{"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-97.html","timestamp":"2014-04-21T12:14:03Z","content_type":null,"content_length":"12086","record_id":"<urn:uuid:90dbe3b4-bd04-4ef6-89f1-6ccd8dfb7520>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
have no need to compute chi-square values (except for comparison with the Bayesian conclusions). There are many advantages of a Bayesian approach. As usual, a significant advantage is never having to compute a p value. Better yet is that the Bayesian analysis provides credible intervals on the conjoint probabilities and on any desired comparison of conditions. I will call our modeling framework Poisson exponential ANOVA because it uses a Poisson likelihood distribution with an exponential link function from an underlying ANOVA model. This terminology is not conventional, and it might even be misleading if readers mistakenly infer from the term "ANOVA" that there is a metric predicted variable involved. Nevertheless, the terminology is highly descriptive of the structural elements of the model. The model appears in the lower right cell of Table 14.1, p. 385, where its relation to other cases of the generalized linear model is evident.

22.1 POISSON EXPONENTIAL ANOVA

22.1.1 What the Data Look Like

To motivate the model, we need first to understand the structure of the data. An example of the sort of data we'll be dealing with is shown in Table 22.1. The data come from a classroom poll of students at the University of Delaware (Snee, 1974). Respondents reported their hair color and eye color, with each variable split into four nominal levels as indicated in Table 22.1. The cells of the table indicate the frequency with which each combination occurred in the sample. Each respondent falls in one and only one cell of the table. The data to be predicted are the cell frequencies. The predictors are the nominal variables. This situation is analogous to two-way ANOVA, which also had two nominal predictors but had several metric values in each cell instead of a single frequency. For data like these, we can ask a number of questions. We could wonder about one variable at a time and ask questions such as "Are there more brown-eyed

Table 22.1  Frequencies of Different Combinations of Hair Color and Eye Color (Data from Snee, 1974.)

Eye Color     Hair Color
              Black   Blond   Brunette   Red
Blue            20      94        84      17
Brown           68       7       119      26
Green            5      16        29      14
Hazel           15      10        54      14
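For concreteness, here is the table as a small data structure together with the cell model the section names, sketched in Python (this is my own illustration, not the book's code): a Poisson likelihood whose rate comes from exponentiating an additive, ANOVA-style predictor.

import math

# Table 22.1: eye color (rows) by hair color (columns), frequencies from Snee (1974)
counts = {
    "Blue":  {"Black": 20, "Blond": 94, "Brunette": 84,  "Red": 17},
    "Brown": {"Black": 68, "Blond": 7,  "Brunette": 119, "Red": 26},
    "Green": {"Black": 5,  "Blond": 16, "Brunette": 29,  "Red": 14},
    "Hazel": {"Black": 15, "Blond": 10, "Brunette": 54,  "Red": 14},
}

def cell_rate(beta0, beta_eye, beta_hair):
    """Poisson-exponential ANOVA: the predicted cell frequency is
    lambda = exp(baseline + row deflection + column deflection);
    the observed cell count is modeled as Poisson(lambda)."""
    return math.exp(beta0 + beta_eye + beta_hair)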
{"url":"http://my.safaribooksonline.com/book/-/9780123814852/22dot1-poisson-exponential-anova/2211_what_the_data_look_like","timestamp":"2014-04-17T01:23:14Z","content_type":null,"content_length":"87800","record_id":"<urn:uuid:5a5907cc-e7bd-48b7-8724-1ab568c84f1d>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Local Galois representation with higher coefficient

Question: Suppose K is a local field, G is its Galois group, and V is a finite dimensional vector space over F, which is a subfield of K and totally ramified over $Q_p$. Consider the linear action of G on V (V is not just a $Z_p$ representation). Are there similar theories dealing with such a situation, like Fontaine's theory, something like filtered $\varphi$-modules with strong p-divisibility properties and maybe some extra structure? Are there any references? Thank you!

Tags: ag.algebraic-geometry, arithmetic-geometry, nt.number-theory

Comments:
I have edited it to a simpler case. – TOM Sep 6 '12 at 7:09
You also changed the setting completely. Is $F$ an extension of $K$ or a subfield of $K$?? – Laurent Berger Sep 6 '12 at 7:15
I am sorry, F should be a subfield of K. – TOM Sep 6 '12 at 7:18
If you're looking at linear representations of $G$ with coefficients, then everything works "the same". See for example 3.1 of Breuil-Mézard's 2002 Duke paper. – Laurent Berger Sep 6 '12 at 11:50
It is helpful, thank you! – TOM Sep 6 '12 at 12:08

Answer:
If $F$ is not finite but rather equal to $C_p$ then this is really Sen's theory (see for instance Fontaine's course notes in Astérisque 295). If $F$ is merely a finite extension of $K$, then I'm not sure that you need to introduce a lot of machinery: restrict your representation to $G_F$ so that it's linear, do what you have to do, and then take into account the extra structure that you had. Alternatively, a semilinear representation is the same as an element of $H^1(G,GL_d(F))$, so you could use Galois-cohomological techniques, especially the inflation-restriction sequence with $G_F$ and $G_K$.

EDIT: this answered the question for $F$-semilinear representations of $G_K$ with $F$ an extension of $K$. Since then the OP has modified his question, so my answer is not relevant anymore.
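For reference, the inflation-restriction sequence mentioned in the answer is, in its standard form (added here; stated for an abelian coefficient module $M$, a profinite group $G$ and a closed normal subgroup $N$; in the nonabelian case of $\mathrm{GL}_d(F)$ one only gets an exact sequence of pointed sets in low degree):

$$0 \to H^1(G/N, M^{N}) \xrightarrow{\ \mathrm{inf}\ } H^1(G, M) \xrightarrow{\ \mathrm{res}\ } H^1(N, M)^{G/N} \to H^2(G/N, M^{N}) \to H^2(G, M).$$

Here one would take $G = G_K$ and $N = G_F$, so that $G/N = \mathrm{Gal}(F/K)$ when $F/K$ is Galois.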
{"url":"http://mathoverflow.net/questions/106468/local-galois-representation-with-higher-coefficient","timestamp":"2014-04-24T12:36:04Z","content_type":null,"content_length":"58233","record_id":"<urn:uuid:9bb107cf-e87c-4adb-a665-b5f564ce68a5>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
Improved Easing Functions

Animation is just moving something over time. The rate at which the something moves is defined by a function called an easing equation or interpolation function. It is these equations which make something move slowly at the start and speed up, or slow down near the end. These equations give animation a more life-like feel. The most common set of easing equations come from Robert Penner's book and webpage. Penner created a very complete set of equations, including some fun ones like 'spring' and 'bounce'. Unfortunately their formulation isn't the best. In the usual JavaScript form, each equation is a one-liner of dense arithmetic over five parameters named x, t, b, c, and d. They really obscure the meaning of the equations, making them hard to understand and extend. I think they were designed this way for efficiency reasons, since they were first implemented in Flash ActionScript, where speed would matter on an inefficient interpreter. It makes them much harder to understand and extend, however. Fortunately, we can refactor them to be a lot clearer.

Let's start with the easeInCubic equation. Ignore the x parameter (I'm not sure why it's there since it's not used). t is the current time, starting at zero. d is the duration in time. b and c are the starting and ending values. If we divide t by d before calling this function, then t will always be in the range of 0 to 1, and d can be left out of the function. If we define the returned version of t to also be from 0 to 1, then the actual interpolation of b to c can also be done outside of the function. Here is some code which will call the easing equation then interpolate the results:

var t = t/d;
t = easeInCubic(t);
var val = b + t*(c-b);

We have moved all of the common code outside the actual easing equation. What that leaves is a value t from 0 to 1 which we must transform into a different t. Here's the new easeInCubic function:

function easeInCubic(t) {
    return Math.pow(t,3);
}

That is the essence of the equation: simply raising t to the power of three. The other formulation might be slightly faster, but it's very confusing, and the speed difference today is largely irrelevant since modern VMs can easily optimize the cleaner form.

Now let's transform the second one. An ease out is the same as an ease in except in reverse. If t went from 0 to 1, then the out version will go from 1 to 0. To get this we subtract from 1:

function cubicOut(t) {
    return 1-Math.pow(1-t,3);
}

However, this looks awfully close to the easeIn version. Rather than writing a new equation we can factor out the differences. Subtract t from 1 before passing it in, then subtract the result from one after getting the return value. The out form just invokes the in form:

function easeOutCubic(t) {
    return 1 - easeInCubic(1-t);
}

Now we can write other equations in a similarly compact form. easeInQuad and easeOutQuad become:

function easeInQuad(t) {
    return t*t;
}
function easeOutQuad(t) {
    return 1-easeInQuad(1-t);
}

Now let's consider the easeInOutCubic. This one smooths both ends of the equation. In reality it's just scaling the easeIn to the first half of t, from 0 to 0.5. Then it applies an easeOut to the second half, from 0.5 to 1. Rather than spelling out that algebra by hand, we can compose our previous functions to define it like so:

function easeInOutCubic(t) {
    if(t < 0.5) return easeInCubic(t*2.0)/2.0;
    return 1-easeInCubic((1-t)*2)/2;
}

Much cleaner.
The elastic-out equation, which gives you a cartoon-like bouncing effect, is normally written as a long tangle of special cases and amplitude/period parameters. Here is the reduced form:

function easeOutElastic(t) {
    var p = 0.3;
    return Math.pow(2,-10*t) * Math.sin((t-p/4)*(2*Math.PI)/p) + 1;
}

Moral of the story: "Math is your friend" and "always refactor". These new equations will be included in a future release of Amino, my cross-platform graphics library.

posted Fri Mar 01 2013
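To show how the three pieces (normalizing t, easing it, and interpolating) fit together end to end, here is the same pattern as a small Python sketch (my own port for illustration; the names are mine, not Amino's API):

def ease_in_cubic(t):                 # t in [0, 1] -> eased t in [0, 1]
    return t ** 3

def ease_out_cubic(t):
    return 1 - ease_in_cubic(1 - t)

def animate(start, end, duration, elapsed, ease=ease_out_cubic):
    """Normalize elapsed time, run it through the easing curve, then interpolate."""
    t = min(max(elapsed / duration, 0.0), 1.0)   # clamp to [0, 1]
    return start + ease(t) * (end - start)

print(animate(0, 300, 2.0, 0.5))   # position of a value easing from 0 to 300 over 2 seconds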
{"url":"http://joshondesign.com/2013/03/01/improvedEasingEquations","timestamp":"2014-04-17T15:31:05Z","content_type":null,"content_length":"7938","record_id":"<urn:uuid:225e6fcc-fa1e-48ab-8eae-acbd71683a24>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
A modified epidemiological model for computer viruses. (English) Zbl 1185.68133

Summary: Since computer viruses pose a serious problem to individual and corporate computer systems, a lot of effort has been dedicated to studying how to avoid their deleterious actions, trying to create anti-virus programs acting as vaccines in personal computers or in strategic network nodes. Another way to combat virus propagation is to establish preventive policies based on the whole operation of a system that can be modeled with population models, similar to those used in epidemiological studies. Here, a modified version of the SIR (Susceptible-Infected-Removed) model is presented and it is explained how its parameters are related to network characteristics. Then, disease-free and endemic equilibrium points are calculated, stability and bifurcation conditions are derived, and some numerical simulations are shown. The relations among the model parameters in the several bifurcation conditions allow a network design minimizing virus risks.

MSC: 68M99 Computer system organization; 68N99 Software; 92D30 Epidemiology

References:
[1] Denning, P. J.: Computers under attack (1990)
[2] Tippett, P. S.: The kinetics of computer viruses replication: a theory and preliminary survey, in: Safe Computing: Proceedings of the Fourth Annual Computer Virus and Security Conference, New York, March 1991, pp. 66-87
[3] Cohen, F.: A short course of computer viruses, Computers & Security 8, 149-160 (1990)
[4] Forrest, S.; Hofmeyr, S. A.; Somayaji, A.: Computer immunology, Communications of the ACM 40, No. 10, 88-96 (1997)
[5] Piqueira, J. R. C.; Navarro, B. F.; Monteiro, L. H. A.: Epidemiological models applied to viruses in computer networks, Journal of Computer Science 1, No. 1, 31-34 (2005)
[6] Kephart, J. O.; Hogg, T.; Huberman, B. A.: Dynamics of computational ecosystems, Physical Review A 40, No. 1, 404-421 (1989)
[7] Kephart, J. O.; White, S. R.; Chess, D. M.: Computers and epidemiology, IEEE Spectrum, 20-26 (1993)
[8] Kephart, J. O.; Sorkin, G. B.; Swimmer, M.: An immune system for cyberspace, in: Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Orlando, October 1997, pp. 879-884
[9] Piqueira, J. R. C.; Cesar, F. B.: Dynamical models for computer viruses propagation, Mathematical Problems in Engineering (2008) · Zbl 1189.68036 · doi:10.1155/2008/940526
[10] Billings, L.; Spears, W. M.; Schwartz, I. B.: A unified prediction of computer virus spread in connected networks, Physics Letters A 297, 261-266 (2002) · Zbl 0995.68007 · doi:10.1016/S0375-9601
[11] Newman, M. E. J.; Forrest, S.; Balthrop, J.: Email networks and the spread of computer viruses, Physical Review E 66, 035101-1-035101-4 (2002)
[12] Mishra, B. K.; Saini, D.: Mathematical models on computer viruses, Applied Mathematics and Computation 187, No. 2, 929-936 (2007) · Zbl 1120.68041 · doi:10.1016/j.amc.2006.09.062
[13] Mishra, B. K.; Jha, N.: Fixed period of temporary immunity after run of the anti-malicious software on computer nodes, Applied Mathematics and Computation 190, 1207-1212 (2007) · Zbl 1117.92052 · doi:10.1016/j.amc.2007.02.004
[14] Draief, M.; Ganesh, A.; Massoulié, L.: Thresholds for virus spread on networks, Annals of Applied Probability 18, No. 2, 359-378 (2008) · Zbl 1137.60051 · doi:10.1214/07-AAP470
[15] Piqueira, J. R. C.; De Vasconcelos, A. A.; Gabriel, C. E. C. J.; Araujo, V. O.: Dynamic models for computer viruses, Computers & Security 27, No. 7-8, 355-359 (2008)
[16] Murray, J. D.: Mathematical biology (2002)
[17] Guckenheimer, J.; Holmes, P.: Nonlinear oscillations, dynamical systems and bifurcations of vector fields (1983)
[18] Ogata, K.: Modern control engineering (1997)
[19] Moler, C. B.: Numerical computing with Matlab (2004)
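The review does not spell out the specific modification, but for orientation the baseline SIR system that such models start from is (standard notation, added here):

$$\frac{dS}{dt} = -\beta S I,\qquad \frac{dI}{dt} = \beta S I - \delta I,\qquad \frac{dR}{dt} = \delta I,$$

with $\beta$ the infection (contact) rate and $\delta$ the removal rate. The disease-free equilibrium has $I = 0$, and an endemic equilibrium appears when the basic reproduction number $R_0 = \beta S_0/\delta$ exceeds one, which is the kind of threshold that bifurcation conditions of this sort express in terms of network parameters.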
{"url":"http://zbmath.org/?q=an:1185.68133","timestamp":"2014-04-21T07:18:33Z","content_type":null,"content_length":"25556","record_id":"<urn:uuid:fa03d629-0059-4a39-a941-2960d3380597>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
New trie data structures which support very fast search operations
Results 1 - 10 of 20

1987, Cited by 36 (0 self)
We consider the 2-dimensional range searching problem in the case where all points lie on an integer grid. A new data structure is presented that solves range queries on a U x U grid in O(k + log log U) time using O(n log n) storage, where n is the number of points and k the number of reported answers. Although the query ...

Cited by 32 (8 self)
We introduce and analyze a method to reduce the search cost in tries. Traditional trie structures use branching factors at the nodes that are either fixed or a function of the number of elements. Instead, we let the distribution of the elements guide the choice of branching factors. This is accomplished in a strikingly simple way: in a binary trie, the i highest complete levels are replaced by a single node of degree 2^i; the compression is repeated in the subtries. This structure, the level-compressed trie, inherits the good properties of binary tries with respect to neighbour and range searches, while the external path length is significantly decreased. It also has the advantage of being easy to implement. Our analysis shows that the expected depth of a stored element is Θ(log* n) for uniformly distributed data.

1999, Cited by 27 (0 self)
... fast pattern matching queries. The scheme provides a general framework for representing information about repetitions, i.e., multiple occurrences of the same string in the text, and for using the information in pattern matching. Well-known text indexes, such as suffix trees, suffix arrays, DAWGs and their variations, which we collectively call suffix indexes, can be seen as instances of the ...

1996, Cited by 19 (5 self)
In this paper we consider solutions to the static dictionary problem on RAMs, i.e. random access machines where the only restriction on the finite instruction set is that all computational instructions are in AC0. Our main result is a tight upper and lower bound of Θ(sqrt(log n / log log n)) on the time for answering membership queries in a set of size n when reasonable space is used for the data structure storing the set. Several variations of this result are also obtained. Among others, we show a tradeoff between time and circuit depth under the unit-cost assumption: any RAM instruction set which permits a linear space, constant query time solution to the static dictionary problem must have an instruction of depth Ω(log w / log log w), where w is the word size of the machine (and 2^w the size of the universe). This matches the depth of multiplication and integer division, used in the perfect hashing scheme by Fredman, Komlós and Szemerédi.

J. of Algorithms, 1993, Cited by 15 (4 self)
We introduce data-structural bootstrapping, a technique to design data structures recursively, and use it to design confluently persistent deques. Our data structure requires O(log^3 k) worst-case time and space per deletion, where k is the total number of deque operations, and constant worst-case time and space for other operations. Further, the data structure allows a purely functional implementation, with no side effects. This improves a previous result of Driscoll, Sleator, and Tarjan.

Information Processing Letters, 1990, Cited by 14 (0 self)
In this paper we show how to implement bounded ordered dictionaries, also called bounded priority queues, in O(log log N) time per operation and O(n) space. Here n denotes the number of elements stored in the dictionary and N denotes the size of the universe. Previously, this time bound required O(N) space [E77].

1996, Cited by 11 (1 self)
Paolo Ferragina and S. Muthukrishnan. We consider the following dynamic data structural problem. We are given a rooted tree of n nodes and a set {1, 2, ..., C} of colors. Each node u has a subset of these colors, say of size d_u, and the sum of the d_u over all nodes is D (note D >= C). The problem is to dynamically maintain this tree under updates, that is, insert(p, c) and delete(p, c) operations, and answer find(p, c) queries. The operations insert(p, c) and delete(p, c) respectively add and remove the color c from the node pointed to by pointer p (the tree does not change topology under these dynamic operations). The find(p, c) query returns the nearest ancestor, if any, of the node pointed to by p (possibly that node itself) which has the color c, for 1 <= c <= C. If no such ancestor exists, find(p, c) returns Null. We call this the ...

In Proc. 5th International Workshop on Experimental Algorithms (WEA), 2006, Cited by 8 (1 self)
Abstract. In this paper, we present an experimental study of the space-time tradeoffs for the dictionary problem, where we design a data structure to represent set data, which consist of a subset S of n items out of a universe U = {0, 1, ..., u - 1}, supporting various queries on S. Our primary goal is to reduce the space required for such a dictionary data structure. Many compression schemes have been developed for dictionaries, which fall generally in the categories of combinatorial encodings and data-aware methods and still support queries efficiently. We show that for many (real-world) datasets, data-aware methods lead to a worthwhile compression over combinatorial methods. Additionally, we design a new data-aware building block structure called BSGAP that presents improvements over other data-aware methods.

1994, Cited by 6 (3 self)
Some previously proposed algorithms are re-examined. They were designed to find all sets in a collection that have no subset in the collection, but are easily modified to find all sets that have no supersets. One is shown to have a worst-case running time of O(N^2 / log N), where N is the sum of the sizes of all the sets. This is lower than the only previously known sub-quadratic worst-case upper bound for this problem. Key words: Analysis of algorithms, set-theoretic algorithms, extremal sets. Introduction: Yellin and Jutla [3] tackled the following fundamental problem, for some applications of which see [2]. Given is a collection F = {S_1, ..., S_k}, where each S_i is a set over the same domain. A set is a minimal (resp. maximal) set of F iff it has no strict subset (resp. superset) in F. Find the extremal sets of F, i.e., those that are minimal or maximal. With the problem size chosen as N = sum_i |S_i|, Yellin and Jutla presented an abstract algorithm that requires O(N ...

Algorithmica, Cited by 5 (0 self)
Abstract. We present an O(n log n)-time algorithm to solve the three-dimensional layers-of-maxima problem, an improvement over the prior O(n log n log log n)-time solution. A previous claimed O(n log n)-time solution due to Atallah, Goodrich, and Ramaiyer [SCG'94] has technical flaws. Our algorithm is based on a common framework underlying previous work, but to implement it we devise a new data structure to solve a special case of dynamic planar point location in a staircase subdivision. Our data structure itself relies on a new extension to dynamic fractional cascading that allows vertices of high degree in the control graph.
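For readers new to the structure these abstracts build on, here is a plain binary trie in Python (my own illustration, not from any of the cited papers). The level-compressed trie of the second abstract replaces the top i complete levels with a single node of 2^i children, which is what drives its Θ(log* n) expected depth for uniform keys.

class TrieNode:
    def __init__(self):
        self.children = {}     # maps a bit (0 or 1) to a child node
        self.terminal = False  # True if a stored key ends at this node

def insert(root, key, bits=32):
    node = root
    for i in range(bits - 1, -1, -1):          # walk from the most significant bit
        b = (key >> i) & 1
        node = node.children.setdefault(b, TrieNode())
    node.terminal = True

def contains(root, key, bits=32):
    node = root
    for i in range(bits - 1, -1, -1):
        node = node.children.get((key >> i) & 1)
        if node is None:
            return False
    return node.terminal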
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=600736","timestamp":"2014-04-18T08:32:31Z","content_type":null,"content_length":"37630","record_id":"<urn:uuid:2368d8e6-b33b-4e24-82f8-1b67e22cdec9>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Slides from my JMM talk, "How to Count With Topology"

Back from San Diego, recovering from the redeye. It was a terrific Joint Math Meetings this year; I saw lots of old friends and great talks, but had to miss a lot of both, too. A couple of people asked me for the slides of my talk, "How To Count with Topology." Here they are, in .pdf:

If you find this stuff interesting, these blog posts give a somewhat more detailed sketch of the papers with Venkatesh and Westerland I talked about.

3 thoughts on "Slides from my JMM talk, "How to Count With Topology""

1. Nice talk. The situation as summed up on your last slide ("Topology as conjecture machine") reminds me strongly of the philosophy of random matrix theory as a conjecture machine in analytic number theory.

2. Actually, there's a very close connection. There's a parallel tradition, starting with Friedman and Washington, of developing heuristics for conjectures of this kind via the statistics of l-adic random matrices. Part of the story, which I didn't have time to put in the talk, is that in situations where both machines make a prediction, they are typically the same — so the topological story serves to validate random matrix heuristics. But there are some situations (e.g. those treated in my paper with Jain and Venkatesh about lambda-invariants, or in my paper with Cais and Zureick-Brown about random Dieudonne modules) where I don't know how to tell a topology story, only a random matrix story, and some (like Batyrev-Manin) where I don't know how to tell a random matrix story, only a topology story. Maybe I'll make another post about this at some point!

3. I agree, these slides are very nice! I'd have been happy to attend the talk. A minor addendum to the slides: explicit error terms in the Davenport-Heilbronn theorem were known before the papers which are mentioned (the first due to Belabas, Bhargava and Pomerance; already around 1995, Belabas and Fouvry also proved versions on average over arithmetic progressions, in order to sieve for quadratic number fields with almost-prime discriminant and small 3-rank). Concerning the previous comment, there are definitely connections between these types of conjectures and those involving random matrices, but one may note that whereas the Cohen-Lenstra-type conjectures typically involve the "behavior at s=1" of Dirichlet series (in particular, location and order of poles, or in geometric terms, number of irreducible components), the RMT conjectures for, e.g., moments of L-functions, involve the behavior on the critical line (i.e., they concern zeros of L-functions).

Tagged arithmetic statistics, jmm13, slides, stable cohomology, talks
{"url":"http://quomodocumque.wordpress.com/2013/01/12/slides-from-my-jmm-talk-how-to-count-with-topology/","timestamp":"2014-04-20T09:09:41Z","content_type":null,"content_length":"62027","record_id":"<urn:uuid:77b962b1-eb43-476c-a1e6-cb2addbf841c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Overview of the MathML Package

Calling Sequence

• MathML is an evolving Internet standard for the communication of structured mathematical formulae between applications, especially for use on the World Wide Web. The purveyors of MathML are a committee of the World Wide Web Consortium (W3C) who publish periodic revisions of the standard. The current MathML implementation in Maple is based on revision 2.0 of that standard. The MathML standard is a publicly available document that can be viewed or downloaded by using a Web browser at http://www.w3c.org/Math/. (The precise version of the standard used in the development of this package is given below.)

• MathML, as a standard, is still in its early stages. It will take time before applications fully comply and for it to fulfill its true potential for offering seamless Web connectivity and operability between applications. We have tried to provide an implementation that complies with the standard as it existed at the time of development, and to provide the necessary tools to help you benefit from this existing standard immediately. For the latest information on MathML, and to get useful applets and other components to take maximum advantage of the standard, visit the W3C MathML site.

• Each command in the MathML package can be accessed by using either the long form or the short form of the command name in the command calling sequence. As the underlying implementation of the MathML package is a module, it is also possible to use the form MathML:-command to access a command from the package. For more information, see Module.

List of MathML Package Commands

• The MathML package contains commands for importing and exporting Maple expressions from and to MathML text. The following is a list of available commands: Export, ExportContent, ExportPresentation, Import, ImportContent, ImportModified.

• To display the help page for a particular MathML command, see Getting Help with a Command in a Package.

About MathML

• MathML (or Mathematical Markup Language) is an XML application, which means that it consists of text interspersed with "tags" similar to those found in HTML. For instance, the number 2 can be represented in MathML by the encoding "<math><mn>2</mn></math>". A more complicated expression is encoded by nesting such elements; in a pretty-printed listing, the indentation indicates the hierarchical nesting of the structures formed by these tags (called "elements"). All mathematical expressions are represented in this way, reflecting their nested subexpression structure.

• Because it is necessary, in general, to specify both what an expression means and how it is to appear when rendered, MathML provides two different kinds of encoding for expressions and a mechanism for combining the two. The first is called "Content" MathML. It represents the semantics of an expression, but gives little or no information about how it should appear when the expression is to be printed in a book or on a computer display. A second form of representation, known as "Presentation" MathML, is used to encode the information needed to render an expression properly.

• A given Maple expression can thus be represented semantically by Content MathML. To convey the information needed to print the expression in the usual way, a different sort of encoding is used: this encoding tells an application (such as a Web browser) how to render the expression but gives no indication of its meaning.

• So that applications with widely differing purposes can communicate with each other, MathML allows both representations of an expression to be "packaged" together into a single element. There are, in fact, two ways of doing this: Mixed Mode MathML and Parallel Mode MathML. The Maple MathML package uses Parallel Mode MathML, in which the content representation is bundled to the presentation representation as an XML annotation.

• In addition to the Content and Presentation representations, MathML allows application specific data to be attached to an expression. This package uses and produces such an annotation in the form of a string of Maple language code.

• A graphical user interface to some of the functionality provided by this package is available via the worksheet File menu option Export As HTML with MathML, and is integrated into the cut and paste facilities of the Maple graphical user interface.

• Not every Maple expression can be represented in MathML, nor can every expression written in MathML markup be taken as a valid representation of some Maple object. When it is not possible to effect a translation between Maple and MathML, the commands in this package raise an exception. If possible, some indication of where the translator had difficulties is also returned.

See Also: codegen[eqn], codegen[MathML], Copy as MathML, Exporting as MathML, latex, module, Paste MathML, UsingPackages, with, XMLTools

"Mathematical Markup Language (MathML) Version 2.0" (http://www.w3c.org/TR/MathML2/)
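The worked MathML listings on the original page did not survive extraction. As a stand-in (my own illustration, not Maple's actual output), the expression x + 2 could be encoded in the two forms roughly as follows:

Content MathML:       <math><apply><plus/><ci>x</ci><cn>2</cn></apply></math>
Presentation MathML:  <math><mrow><mi>x</mi><mo>+</mo><mn>2</mn></mrow></math>

The Content form records that this is an application of addition to a variable and a number; the Presentation form only records the row of symbols to be drawn.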
{"url":"http://www.maplesoft.com/support/help/Maple/view.aspx?path=MathML","timestamp":"2014-04-25T09:18:38Z","content_type":null,"content_length":"176940","record_id":"<urn:uuid:f5a36e97-8756-42d4-99b4-98ec2f966f90>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Maple on Athena
Articles in "Maple on Athena"
Page: How can I copy and paste text between Maple and other applications? (IS&T Contributions)
Page: How can I do the least squares problem in Maple? (IS&T Contributions)
Page: How can I plot data in Maple? (IS&T Contributions)
Page: How can I plot differential equations in Maple? (IS&T Contributions)
Page: How can I print Maple plots? (IS&T Contributions)
Page: How can I save, restore and export information with Maple? (IS&T Contributions)
Page: How can I set up a Maple initialization file? (IS&T Contributions)
Page: How can I write functions in Maple? (IS&T Contributions)
Page: How do I input and output data using Maple? (IS&T Contributions)
Page: How do I use Maple? (IS&T Contributions)
Page: What does the error "byte-range lock unlock ignored" mean in Maple? (IS&T Contributions)
Page: Where can I get more information about Maple? (IS&T Contributions)
{"url":"http://kb.mit.edu/confluence/display/category/Maple+on+Athena","timestamp":"2014-04-19T19:39:05Z","content_type":null,"content_length":"63269","record_id":"<urn:uuid:8c329b7b-aba1-46cb-9ab4-cec4cb454225>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
How exactly would I do this? The normal to the ellipse x^2/a^2 + y^2/b^2 = 1 at P(x1, y1) meets the x-axis in N and the y-axis in G. Prove PN/NG = (1-e^2)/e^2.
See this thread for some general comments about ellipses that should help with this problem. Remember that the eccentricity is given by $e^2 = 1 - \frac{b^2}{a^2}$.
I've spent a few days on this but I can't solve it. PLEASE HELP!!!
I'm not entirely surprised. This is a longer and messier calculation than I would have guessed. Here is an outline of how to do it. Take P to be the point $(a\cos\theta,b\sin\theta)$. Then (see the link in my other comment above) the normal at P has equation $yb\cos\theta - xa\sin\theta = (b^2 - a^2)\cos\theta\sin\theta$. Put y = 0 in that equation to see that N is the point $\Bigl(\frac{(a^2-b^2)\cos\theta}a,0\Bigr)$. Put x = 0 to see that G is the point $\Bigl(0,\frac{(b^2-a^2)\sin\theta}b\Bigr)$. Then use the usual distance formula to see that $PN^2 = \Bigl(\frac{b^2\cos\theta}a\Bigr)^2 + b^2\sin^2\theta = \frac{b^2(b^2\cos^2\theta + a^2\sin^2\theta)}{a^2}$. Similarly $NG^2 = (a^2-b^2)^2\Bigl(\frac{\cos^2\theta}{a^2} + \frac{\sin^2\theta}{b^2}\Bigr) = \frac{(a^2-b^2)^2(b^2\cos^2\theta + a^2\sin^2\theta)}{a^2b^2}$. Therefore $\Bigl(\frac{PN}{NG}\Bigr)^2 = \frac{b^4}{(a^2-b^2)^2}$, and so $\frac{PN}{NG} = \frac{b^2}{a^2-b^2} = \frac{a^2(1-e^2)}{a^2e^2} = \frac{1-e^2}{e^2}$.
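For anyone who wants to double-check that outline numerically, here is a quick sketch (Python; the particular values of a, b and theta are arbitrary test choices, not part of the problem). The ratio PN/NG should come out equal to (1 - e^2)/e^2 for any theta:

    import math

    a, b, theta = 5.0, 3.0, 0.7                       # arbitrary ellipse and parameter angle
    e = math.sqrt(1 - b**2 / a**2)                    # eccentricity
    P = (a * math.cos(theta), b * math.sin(theta))
    N = ((a**2 - b**2) * math.cos(theta) / a, 0.0)    # where the normal meets the x-axis
    G = (0.0, (b**2 - a**2) * math.sin(theta) / b)    # where the normal meets the y-axis

    dist = lambda U, V: math.hypot(U[0] - V[0], U[1] - V[1])
    print(dist(P, N) / dist(N, G))                    # PN/NG
    print((1 - e**2) / e**2)                          # should match the line above

With a = 5 and b = 3 both lines print 0.5625, independently of theta, as the algebra predicts.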
{"url":"http://mathhelpforum.com/geometry/130271-ellipse.html","timestamp":"2014-04-17T10:00:23Z","content_type":null,"content_length":"41450","record_id":"<urn:uuid:5a62117d-e6d4-4014-96ce-c2f957710b63>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
Find contract curve, Microeconomics
Consider two individuals M and F who must split 20 units of good X and 10 units of good Y. Suppose we can represent M's preferences with the utility function U_m = X_m^2 Y_m and F's preferences with the utility function U_f = min{X_f, 2Y_f}, where X_m and X_f indicate their X consumption, while Y_m and Y_f indicate their Y consumption.
a) Derive the MRS for M
b) Interpret F's utility function
c) Find and graph the contract curve
d) What is the ratio of the price of X to the price of Y in competitive equilibrium?
e) To which allocation will M and F trade? Indicate this outcome clearly in your graph
Posted Date: 2/25/2013 2:02:57 AM | Location : United States
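For part (a), one common convention takes the marginal rate of substitution to be the ratio of marginal utilities, MRS = MU_X / MU_Y. A small symbolic sketch of that computation (Python with SymPy; the variable names are mine and this is an illustration of the mechanics, not a full answer to the exercise):

    import sympy as sp

    Xm, Ym = sp.symbols('Xm Ym', positive=True)
    Um = Xm**2 * Ym                                   # M's utility function
    MRS = sp.diff(Um, Xm) / sp.diff(Um, Ym)           # MU_X / MU_Y
    print(sp.simplify(MRS))                           # prints 2*Ym/Xm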
{"url":"http://www.expertsmind.com/questions/find-contract-curve-30136506.aspx","timestamp":"2014-04-18T05:30:06Z","content_type":null,"content_length":"29499","record_id":"<urn:uuid:57f9fcb7-0cee-4ca1-9d99-7812d1d5f033>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Elizabeth, NJ Trigonometry Tutor Find an Elizabeth, NJ Trigonometry Tutor Hello,My goal in tutoring is to develop your skills and provide tools to achieve your goals. My teaching experience includes varied levels of students (high school, undergraduate and graduate students).For students whose goal is to achieve high scores on standardized tests, I focus mostly on tips a... 15 Subjects: including trigonometry, chemistry, calculus, statistics ...I share with my students the GRE test-taking strategies I used to score in the 99th percentile, but I also focus on making sure the student has a solid foundation in the core math and English concepts tested. Over the past 4 years, I have tutored many students for the GED, with great results. I am now also tutoring students for the TASC, which has replaced the GED in NYS. 34 Subjects: including trigonometry, English, reading, writing ...Of course, the numbers don't tell the whole story, and I think what I really bring to the table is the patience and experience to bring students towards mastery themselves. Thank you and I look forward to hearing from you! SAT I: Math: 800Verbal: 800SAT II:Physics: 800Chemistry: 800Math IIC: 80... 36 Subjects: including trigonometry, English, chemistry, calculus ...My clients come mainly from disciplines in academia and medicine, but also include executives, software developers, clothing designers, salespeople, a housekeeping crew, members of the Consulate of Ecuador to NY, engineers at a major firm, underprivileged women at a non-profit providing job readi... 39 Subjects: including trigonometry, reading, English, Spanish ...I have been tutoring for the STEM (Science, Technology, Engineering and Mathematics) for about a year. I tutor Math, Biology, Chemistry, Anatomy, Physics, Trigonometry, Algebra, Pre-calculus, Calculus, and Elementary (K-6th). I am very patient and persistent. I tend to change complicated problems to easy ones, by changing them into different steps. 13 Subjects: including trigonometry, reading, chemistry, calculus
{"url":"http://www.purplemath.com/Elizabeth_NJ_Trigonometry_tutors.php","timestamp":"2014-04-19T20:18:53Z","content_type":null,"content_length":"24523","record_id":"<urn:uuid:e57b5844-8b82-45dd-98e2-0604f7de216d>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
Charlestown, PA Prealgebra Tutor Find a Charlestown, PA Prealgebra Tutor ...My goal is to serve you and your learning needs! There is no one-size-fits-all when it comes to education. Let's think outside the box and help you to succeed! 20 Subjects: including prealgebra, reading, statistics, biology ...Algebra 2 continues the basic principles learned in algebra 1 regarding equations and functions. My goal is to provide a simple explanation to complex ideas at the student's level to help them master algebraic skills and concepts. I will work with the student to obtain a better understanding of... 9 Subjects: including prealgebra, chemistry, geometry, algebra 1 ...Of course there will always be some memorization involved, but I try to keep that to a minimum! I am flexible with hours and accessible via text, phone and e-mail to answer quick questions during non-tutoring hours. I look forward to meeting you and will be happy to answer any questions you may have!I have played volleyball for the last ten years, starting my freshman year of high 10 Subjects: including prealgebra, geometry, ASVAB, algebra 1 ...I have almost three years of experience of one-to-one teaching. I have a very good success rate with all of my pupils. I am happy to travel up to 20 miles. 38 Subjects: including prealgebra, reading, public speaking, economics ...I had long term corporate assignments teaching groups from Latin America, an Ecuadorian production foreman, and a family from Italy. I am an Instructional Aide, Autistic Support, at the Lower Merion School District. At Harriton High School I tutor ELL in English, Math, and Chemistry. 35 Subjects: including prealgebra, English, reading, chemistry Related Charlestown, PA Tutors Charlestown, PA Accounting Tutors Charlestown, PA ACT Tutors Charlestown, PA Algebra Tutors Charlestown, PA Algebra 2 Tutors Charlestown, PA Calculus Tutors Charlestown, PA Geometry Tutors Charlestown, PA Math Tutors Charlestown, PA Prealgebra Tutors Charlestown, PA Precalculus Tutors Charlestown, PA SAT Tutors Charlestown, PA SAT Math Tutors Charlestown, PA Science Tutors Charlestown, PA Statistics Tutors Charlestown, PA Trigonometry Tutors Nearby Cities With prealgebra Tutor Chesterbrook, PA prealgebra Tutors Devault prealgebra Tutors Eagle, PA prealgebra Tutors Frazer, PA prealgebra Tutors Gulph Mills, PA prealgebra Tutors Ithan, PA prealgebra Tutors Kimberton prealgebra Tutors Linfield, PA prealgebra Tutors Rahns, PA prealgebra Tutors Romansville, PA prealgebra Tutors Saint Davids, PA prealgebra Tutors Southeastern prealgebra Tutors Strafford, PA prealgebra Tutors Upton, PA prealgebra Tutors Valley Forge prealgebra Tutors
{"url":"http://www.purplemath.com/Charlestown_PA_Prealgebra_tutors.php","timestamp":"2014-04-20T13:30:36Z","content_type":null,"content_length":"24176","record_id":"<urn:uuid:6864b107-fb63-4e9a-8c12-a5bf598275fd>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
These are some educational java applets I wrote to help visualize various concepts in math, physics, and engineering. You should be able to view them with any Java-capable browser. If you don't have Java, get the Java plug-in. Oscillations and Waves Ripple Tank (2-D Waves) Applet Ripple tank simulation that demonstrates wave motion, interference, diffraction, refraction, Doppler effect, etc. 2-D Waves Applet Demonstration of wave motion in 2-D. 3-D Waves Applet Demonstration of wave motion in 3-D. Coupled Oscillations Applet Demonstration of longitudinal wave motion in oscillators connected by springs. Dispersion Applet Dispersion and group velocity. Loaded String Applet Simulation of wave motion of a string. Rectangular Membrane Waves Applet Vibrational modes in a 2-d membrane. Circular Membrane Waves Applet Vibrational modes in a 2-d circular membrane (drum head). Bar Waves Applet Bending waves in a bar. Vowels Applet The acoustics of speech. Box Modes Applet Acoustic standing waves in a 3-d box. Acoustic Interference Applet Generates audio interference between your speakers. Signal Processing Fourier Series Applet Frequency analysis of periodic functions. Digital Filters Filters digital signals and plays the output on your speakers. Electricity and Magnetism: Statics 2-D Electrostatics Applet Demonstrates static electric fields and steady-state current distributions. 2-D Electrostatic Fields Applet Demonstrates electric fields in various 2-D situations; also shows Gauss's law. 3-D Electrostatic Fields Applet Demonstrates electric fields in various 3-D situations. 3-D Magnetostatic Fields Applet Demonstrates magnetic fields in various situations. 2-D Electrodynamics Applet (TE) Demonstrates electromagnetic radiation. 2-D Electrodynamics Applet (TM) Demonstrates electromagnetic radiation, induction, and magnetostatics. Analog Circuit Simulator Applet Demonstrates various electronic circuits. Cavity Modes Applet Electromagnetic waves in a 3-d rectangular cavity. Waveguide Modes Applet Electromagnetic waves in a waveguide. Antenna Applet Generates antenna radiation patterns. Fresnel Diffraction Applet Generates Fresnel diffraction patterns. Quantum Mechanics Hydrogen Atom Applet Shows the orbitals (wave functions) of the hydrogen atom. Molecular Orbitals Applet Shows the orbitals (wave functions) of the hydrogen molecular ion. 1-D Quantum Mechanics Applet Single-particle quantum mechanics states in one dimension. 1-D Quantum Crystal Applet Periodic potentials in one dimension. 2-D Quantum Crystal Applet Periodic potentials in two dimensions. 1-D Quantum Transitions Applet Radiative transitions (absorption and stimulated emission) in one dimension. Atomic Dipole Transitions Applet Radiative transitions (absorption and stimulated emission) in atoms. 2-D Rectangular Square Well Applet Rectangular square well (particle in a box) in two dimensions. 2-D Circular Square Well Applet Circular square well in two dimensions. 2-D Quantum Harmonic Oscillator Applet Harmonic oscillator in two dimensions. Quantum Rigid Rotator Applet Particle confined to the surface of a sphere. 3-D Quantum Harmonic Oscillator Applet Harmonic oscillator in three dimensions. Linear Algebra Dot Product Applet Demonstrates the dot product or scalar product of two vectors. Matrix Applet Demonstrates 2-d transformations using a matrix. Vector Calculus 2-D Vector Fields Applet Demonstrates various properties of vector fields, including divergence and curl, etc. 
3-D Vector Fields Applet Demonstrates vector fields in 3 dimensions. Includes the Lorenz Attractor and Rossler Attractor. Gas Molecules Simulation Applet Demonstrates the kinetic theory of gases. Thermal Camera Pictures Some sample pictures taken with a thermal (infrared) camera. (This is not an applet but I thought I'd throw it in here anyway.) A Sense of Scale Provides a visual comparison of various distances, from very small objects like protons and electrons, to distances between galaxies. (Not an applet, but I thought I'd include it here anyway.) Ordinary Differential Equations Applet Visual differential equation solver. Euler's Equation Applet Demonstrates Taylor series expansion of complex exponentials. Licensing info. Links to other educational sites with math/physics-related information or java applets useful for teaching: And when you get tired of learning, here is some fun stuff:
{"url":"http://www.falstad.com/mathphysics.html","timestamp":"2014-04-17T03:50:00Z","content_type":null,"content_length":"20054","record_id":"<urn:uuid:385e7ca3-775a-4eef-b821-95c46c22bc16>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Would Weight and Height Be Considered Rational or Irrational Re: Would Weight and Height Be Considered Rational or Irrational I am not sure the question has meaning. Those are mathematical terms, they represent sets. In the real world we have mostly measurements. They are accurate to some specified amount of decimal places. Even though we may be talking about a hypotenuse we say 1.41 miles not square root of 2 miles. Since measurements are terminating decimals the answer in my opinion is Rational. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=167123","timestamp":"2014-04-20T21:13:58Z","content_type":null,"content_length":"9675","record_id":"<urn:uuid:b339b08c-0d7f-421b-9413-30b989e5436d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Felix Klein on intuition In 1893 Felix Klein visited Northwestern University in Evanston, Illinois, in the United States and gave twelve Lectures on Mathematics. On 2 September he gave the sixth of these lectures to space intuition. We give below extracts from that lecture:- Space intuition The inquiry naturally presents itself as to the real nature and limitations of geometrical intuition .... [I distinguish] between what I call the naive and the refined intuition. It is the latter that we find in Euclid: he carefully develops his system on the basis of well-formulated axioms, is fully conscious of the necessity of exact proofs, and so forth .... The naive intuition, on the other hand, was especially active during the period of the genesis of differential and integral calculus. Thus Newton [did not ask] himself whether there might not be continuous functions having no derivative .... At the present time we are living in a critical period similar to that of Euclid. It is my private conviction ... that Euclid's period must also have been preceded by a naive stage of development In my opinion, the naive intuition is not exact, while the refined intuition is not properly intuition at all, but arises through the logical development from axioms considered as perfectly exact. The first half of this statement [implies that] we do not picture in our mind an abstract mathematical point, but substitute something concrete for it. In imagining a line, we do not picture a length without breadth, but a strip of a certain width. [Abstractions] in this case are regarded as holding only approximately, or as far as may be necessary .... I maintain that in ordinary life we actually operate with such inexact definitions. Thus we speak without hesitancy of the directions and curvature of a river or a road, although the line in this case certainly has considerable width .... As regards the second half of my proposition, there are actually many cases where the conclusions derived by purely logical reasoning from exact definitions cannot be verified by intuition. To show this, I select examples from the theory of automorphic functions, because in more common geometrical illustrations our judgment is warped by the familiarity of the ideas .... Let any number of non-intersecting circles 1, 2, 3, 4, ... , be given, and let every circle be reflected (i.e. transformed by inversion, or reciprocal radii vectors) upon every other circle; then repeat this operation again and again, ad infinitum. The question is, what will be the configuration formed by the totality of all the circles, and in particular, what will be the position of the limiting points? There is no difficulty in answering these questions by purely logical reasoning, but the imagination seems to fail utterly when we try to form a mental image of the result .... When the original points of contact happen to lie on a circle being excluded, it can be shown analytically that the continuous curve which is the locus of all the points of contact is not an analytical curve. It is easy enough to imagine a strip covering all these points, but when the width of the strip is reduced beyond a certain limit, we find undulations, and it seems impossible to clearly picture the final outcome. Note that we have here an example of a curve with indeterminate derivatives arising out of purely geometrical considerations, while it might be supposed from the usual treatment of such curves that they can only be defined by artificial analytical series .... 
Kopcke has [concluded] that our space intuition is exact as far as it goes, but so limited as to make it impossible for us to picture curves without tangents .... Pasch believes - and this is the traditional view - that it is in the end possible to discard intuition entirely, basing all of science on axioms alone. This idea of building up science purely on the basis of axioms has since been carried still further by Peano, in his logical calculus .... I am of the [firm] opinion that, for the purposes of research it is always necessary to combine intuition with the axioms. I do not believe, for instance, that it would have been possible to derive the results discussed in my [previous] lectures, the splendid researches of Lie, the continuity of the shape of algebraic curves and surfaces, or the most general forms of triangles, without the constant use of geometrical intuition .... What has been said above places geometry among the applied sciences. Let me make a few general remarks on these sciences. I should lay particular stress on the heuristic value of the applied sciences as an aid to discovering new truths in [pure] mathematics. Thus I have shown (in my little book on Riemann's theories) that the abelian integrals can best be understood and illustrated by considering electric currents on closed surfaces. In an analogous way, theorems concerning differential equations can be derived from the consideration of sound-vibrations, and so on .... The ordinary mathematical treatment of any applied science substitutes exact axioms for the approximate results of experience, and deduces from these axioms the rigid mathematical conclusions. [But] it must not be forgotten that mathematical developments transcending the limit of exactness of the science are of no practical value. It follows that a large portion of abstract mathematics remains without any practical application, the amount of mathematics that can be usefully employed in any science being in proportion to the degree of accuracy attained in that science .... As examples of extensive mathematical theories that do not exist for applied science, consider the distinction between the commensurable and the incommensurable. It seems to me, therefore, that Kirchhoff makes a mistake when he says in his Spectral Analyse that absorption takes place only when there is an exact coincidence between the wave-lengths. I side with Stokes, who says that absorption takes place in the vicinity of such coincidences .... All this raises the question of whether it would not be possible to create a, let us say, abridged system of mathematics adapted to the needs of the applied sciences, without passing through the whole realm of abstract mathematics .... [But no such] system ... is ... in existence, and we must for the present try to make the best of the material at hand. What I have said here concerning the use of mathematics in the applied sciences [must] not be interpreted as in any way prejudicial to the cultivation of abstract mathematics as a pure science. Apart from the fact that pure mathematics cannot be supplanted by anything else as a means for developing the purely logical powers of the mind, there must be considered here as elsewhere the necessity of the presence of a few individuals in each country developed in a far higher degree than the rest. Even a slight raising of the general level can be accomplished only when some few minds have progressed far ahead of the average .... 
Here a practical difficulty presents itself in the teaching, let us say, the elements of the calculus. The teacher is confronted with the problem of harmonizing two opposite and almost contradictory requirements. On the one hand, he has to consider the limited and as yet undeveloped intellectual grasp of his students and the fact that most of them study mathematics mainly with a view to the practical applications; on the other, his conscientiousness as a teacher and man of science would seem to compel him to detract in nowise from perfect mathematical rigor, and therefore to introduce from the beginning all the refinements and niceties of modern abstract mathematics. In recent years, university instruction, at least in Europe, has been tending more and more in the latter direction. [If a work like] Cours d'analyse of Camille Jordan is placed in the hands of a beginner a large part of the subject will remain unintelligible, and at a later stage, the student will not have gained the power of making use of the principles in the simple cases occurring in the applied sciences .... It is my opinion that in teaching it is not only admissible, but absolutely necessary, to be less abstract at the start, to have constant regard to the applications, and to refer to the refinements only gradually as the student becomes able to understand them. This is, of course, nothing but a universal pedagogical principle to be observed in all mathematical instruction .... I am led to these remarks by the consciousness of growing danger in Germany of a separation between abstract mathematical science and its scientific and technical applications. Such separation can only be deplored, for it would necessarily be followed by shallowness on the side of the applied sciences, and by isolation on the part of pure mathematics .... JOC/EFR August 2006 The URL of this page is:
{"url":"http://www-gap.dcs.st-and.ac.uk/~history/Extras/Klein_intuition.html","timestamp":"2014-04-20T18:23:56Z","content_type":null,"content_length":"10421","record_id":"<urn:uuid:f2f8c312-c272-4ff9-a359-ba6d42798ebc>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
A Combinatorial Algorithm for Finding Maximum Cuts Finding the maximum cut of a graph is a difficult to compute problem in combinatorial optimization with several applications in the world of engineering and physics. This research develops and evaluates an exact branch and bound algorithm for the maximum cut of unweighted graphs that is designed for improved performance on sparse graphs. The module provides a general overview of the problem along with necessary mathematical background in "The Maxcut Problem" and a brief note on various approaches to the problem in "Several Algorithms". "A New Algorithm" describes a new algorithm for finding maximum cuts. Results of empirical performance evaluation appear in "Empirical Testing", which "Conclusion" further discusses.
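For readers new to the problem, the definition itself is easy to state in code: a cut splits the vertices into two sides, and its value is the number of edges that cross between the sides. The sketch below (Python) finds a maximum cut of a small unweighted graph by exhaustive search; it illustrates what is being optimized, not the branch-and-bound algorithm developed in the module, and the example graph is my own.

    from itertools import combinations

    def max_cut(vertices, edges):
        best_value, best_side = 0, set()
        vs = list(vertices)
        for r in range(len(vs) + 1):
            for side in combinations(vs, r):
                s = set(side)
                value = sum(1 for u, v in edges if (u in s) != (v in s))
                if value > best_value:
                    best_value, best_side = value, s
        return best_value, best_side

    # small example: a 5-cycle, whose maximum cut contains 4 edges
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(max_cut(range(5), edges))                   # (4, {0, 2})

Exhaustive search like this grows exponentially with the number of vertices, which is exactly why exact methods such as branch and bound, designed to prune most of that search space, are of interest.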
{"url":"http://cnx.org/content/m30982/latest/?collection=col10523/latest","timestamp":"2014-04-23T12:59:52Z","content_type":null,"content_length":"116076","record_id":"<urn:uuid:a001a9f3-5030-4bed-b604-b38522499a8a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Let $N$ denote a zero-symmetric left near-ring and $\sigma$ an automorphism of $N$. An additive endomorphism $D$ is called a $\sigma$-derivation if $D\left(xy\right)=\sigma \left(x\right)D\left(y\right)+D\left(x\right)y$ for all $x,y\in N$. This paper extends some commutativity results involving derivations, due to the reviewer and G. Mason [Near-rings and near-fields, Proc. Conf., Tübingen/F.R.G. 1985, North-Holland Math. Stud. 137, 31-35 (1987; Zbl 0619.16024)]. A typical theorem reads as follows: If $N$ is a 3-prime near-ring admitting a nontrivial $\sigma$-derivation $D$ such that ... for all $x,y\in N$, then $N$ is Abelian. Moreover, if $N$ is 2-torsion-free and $D$ and $\sigma$ commute, then $N$ is a commutative ring. 16Y30 Near-rings 16W25 Derivations, actions of Lie algebras (associative rings and algebras) 16U70 Center, normalizer (invariant elements) for associative rings 16U80 Generalizations of commutativity (associative rings and algebras) 16N60 Prime and semiprime associative rings 16W20 Automorphisms and endomorphisms of associative rings
{"url":"http://zbmath.org/?q=an:0992.16035","timestamp":"2014-04-19T09:36:23Z","content_type":null,"content_length":"23624","record_id":"<urn:uuid:a1f276a2-d455-4c35-a82c-0a4f89f61551>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
Applied Differential Equations of Mathematical Physics ISBN-10: 9812814574 ISBN-13: 9789812814579 Description: "Functional analysis is a well-established powerful method in mathematical physics, especially those mathematical methods used in modern non-perturbative quantum field theory and statistical turbulence. This book presents a unique, modern treatment More... Publisher: World Scientific Publishing Company, Incorporated Binding: Hardcover Size: 6.25" wide x 9.25" long x 1.00" tall Weight: 1.386 Language: English 100% Satisfaction Guarantee
{"url":"http://www.textbookrush.com/browse/Books/9789812814579?isbn=9789812814579","timestamp":"2014-04-17T05:06:18Z","content_type":null,"content_length":"90886","record_id":"<urn:uuid:6df65804-c622-4d4a-8d67-658770648615>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
modulo question
Hello. I have given the thread a new purpose, seeing as how it was a double post. Find the complete set of residues modulo 10 whose elements are a) nonnegative b) odd c) even
a) {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
b) and c) are challenging me though. I have a feeling it is going to be too simple. If the modulus were any odd number, I could do this question in a heartbeat (and I am not saying that is any great feat). But seeing how 10 is even, I cannot do the ol' simple a + p ≡ a (mod p) method. What can I do? What should I look for? Is it something to do with Euler's function and negative numbers?
Last edited by chrisc; February 3rd 2011 at 12:09 PM. Note for mod: This was originally a double post that now has a new question (instead of creating a new thread); I have edited the first post.
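One way to experiment with parts (b) and (c) is to test candidate sets directly. A small Python check (the candidate sets below are just examples I picked) tests whether a set of integers forms a complete residue system modulo 10, i.e. hits every residue class exactly once:

    def is_complete_residue_system(candidates, m=10):
        return sorted(c % m for c in candidates) == list(range(m))

    print(is_complete_residue_system(range(10)))          # part (a): True
    print(is_complete_residue_system(range(1, 20, 2)))    # ten odd numbers 1, 3, ..., 19: False

Since every odd integer is congruent to 1, 3, 5, 7 or 9 modulo 10, a set of odd numbers can never cover the even residue classes, and the check above makes that concrete; the same obstruction (with the roles swapped) applies to a set of even numbers.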
{"url":"http://mathhelpforum.com/number-theory/170050-modulo-question.html","timestamp":"2014-04-19T22:35:06Z","content_type":null,"content_length":"30877","record_id":"<urn:uuid:fe1ed87c-6095-48f9-aa51-68e8387f3b3e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
Noun: sum súm Verb: sum (summed,summing) súm 1. Be a summary of "The abstract sums up the main ideas in the paper"; - summarize, summarise [Brit], sum up 2. Determine the sum of - total, tot, tot up, sum up, summate, tote up, add, add together, tally, add up Sounds like: soles, souls Derived forms: summing, summed, sums Type of: accumulation, aggregation, assemblage, assets, cognitive content, collection, content, count, enumerate, mental object, number, numerate, quantity, say, set, state, tell, unit, whole Encyclopedia: Sum
{"url":"http://www.wordwebonline.com/en/SUM","timestamp":"2014-04-20T04:05:11Z","content_type":null,"content_length":"11881","record_id":"<urn:uuid:64dcee40-a80a-464d-b1b4-47d7434c5462>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
the trash of dreams and the unspoken misgivings words for the wisdom are that this is passive agression and when the feelings of some what gets expressed should be more than as the pragmatic say , nothing) but the formula is in this is false and there is no reaching out from the set containing x simpler to say that nothing functions. and so the silent sullies the air with pent-up rage and.. there just isn't any point in trying for a s appear as mirages, revealed to be dirty little recursive functions where every new parentheses is the unburied handle of a hatchet that will complicate things for infinite value s of to come, tripping up the whenever anyone or anything needs make reference to . ( there exists some x such that for all values of y , ¬ <=> E ^ S
{"url":"http://everything2.com/title/the+trash+of+dreams+and+the+unspoken+misgivings","timestamp":"2014-04-16T13:19:26Z","content_type":null,"content_length":"21951","record_id":"<urn:uuid:1beddef6-7e25-4a82-a78a-fff3ffee4fb9>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Math 113: Linear Algebra and Matrix Theory Fall 2013 ● Class is over! Have a great winter break! ● The Midterm is on Tuesday October 29^th in Hewlett 201 from 7-9 pm. The exam will be open book and open notes with no electronic aids and will cover the material from Axler Chapters 1 through 4. ● There will be a study session for the midterm on Thursday October 24th from 5 to 6 pm in 380X. ● For those interested in seeing more examples of proofs, write-ups of last spring’s homework can be found here. ● First day of class: September 24, 2:15 pm in 380-380F Course Description: Math 113 will cover the basics of linear algebra from a theoretical standpoint. We will cover properties of vector spaces, linear transformations, and spectral decompositions. The emphasis will be on theoretical aspects of these topics and producing rigorous proofs of results rather than on computation. The course will also focus on teaching students how to write proofs and doing so will be expected on the homeworks. Math 113 is appropriate for students who have already taken Math 51, although the latter is not a requirement.
{"url":"http://math.stanford.edu/~dankane/113/","timestamp":"2014-04-16T16:17:59Z","content_type":null,"content_length":"57113","record_id":"<urn:uuid:a0b2ef12-8fd2-4e61-a54a-4e9807b81270>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Dr. Terrance J. Quinn - Courses MOBI 7200 Modeling and Simulation 3 SCH Introduction to the modeling of biomolecular structure and dynamics. Covers three broad topics: (a) biomolecular structure; (b) molecular force field origin, composition, and evaluation techniques; and (c) simulation techniques-computational sampling by geometric optimization, Monte Carlo methods, and molecular dynamics. Differential Equations and Linear Algebra. Course includes various aspects of molecular modeling (and/or computational) molecular biochemistry techniques, protein structure, quantum and/or molecular mechanics models, techniques for energy minimization, Monte Carlo sampling, free energy simulations, drug design, active sites, transcription regulation, signal transduction, immune regulation, membrane, fibrous proteins, and virus Other Subjects College Algebra, Trigonometry, Mathematics of Finance, Business Calculus, Linear Programming, Pre-Calculus, Mathematics for Teachers, Applied Calculus, Calculus I &#150; IV, Honors Advanced Mathematics Sequence for Electrical and Civil Engineers (Multi-variable Calculus; Linear Algebra; Fourier Analysis), Probability and Statistics, Discrete Mathematics, Linear Algebra (undergrad. and grad.), Complex Variables, Abstract Algebra (undergrad. and grad.), Foundations of Mathematics (undergrad. and grad.), History and Philosophy of Mathematics, Number Theory, Graph Theory, Geometry, Differential Geometry (undergrad. and grad.), Differential Equations (undergrad. and grad.), Advanced Calculus, Measure Theory (undergrad. and grad.), Analysis, Functional Analysis and Operator Theory (undergrad. and grad.). Newtonian dynamics, gravitation, fluid mechanics, wave mechanics, electromagnetic theory, thermodynamics, Lagrangians and variational calculus, relativity and quantum theory.
{"url":"http://www.mtsu.edu/graduate/mbsphd/faculty/courses_quinn.php","timestamp":"2014-04-19T12:04:01Z","content_type":null,"content_length":"15412","record_id":"<urn:uuid:d9f7e14d-3cbd-4778-8e5b-584ec3298737>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Tutorial on Practical Prediction Theory for Classification Results 1 - 10 of 65 - IEEE Trans. on Pattern Analysis and Machine Intelligence "... Abstract—Recently developed methods for learning sparse classifiers are among the state-of-the-art in supervised learning. These methods learn classifiers that incorporate weighted sums of basis functions with sparsity-promoting priors encouraging the weight estimates to be either significantly larg ..." Cited by 113 (1 self) Add to MetaCart Abstract—Recently developed methods for learning sparse classifiers are among the state-of-the-art in supervised learning. These methods learn classifiers that incorporate weighted sums of basis functions with sparsity-promoting priors encouraging the weight estimates to be either significantly large or exactly zero. From a learning-theoretic perspective, these methods control the capacity of the learned classifier by minimizing the number of basis functions used, resulting in better generalization. This paper presents three contributions related to learning sparse classifiers. First, we introduce a true multiclass formulation based on multinomial logistic regression. Second, by combining a bound optimization approach with a component-wise update procedure, we derive fast exact algorithms for learning sparse multiclass classifiers that scale favorably in both the number of training samples and the feature dimensionality, making them applicable even to large data sets in high-dimensional feature spaces. To the best of our knowledge, these are the first algorithms to perform exact multinomial logistic regression with a sparsity-promoting prior. Third, we show how nontrivial generalization bounds can be derived for our classifier in the binary case. Experimental results on standard benchmark data sets attest to the accuracy, sparsity, and efficiency of the proposed methods. "... We present a practical and statistically consistent scheme for actively learning binary classifiers under general loss functions. Our algorithm uses importance weighting to correct sampling bias, and by controlling the variance, we are able to give rigorous label complexity bounds for the learning p ..." Cited by 53 (6 self) Add to MetaCart We present a practical and statistically consistent scheme for actively learning binary classifiers under general loss functions. Our algorithm uses importance weighting to correct sampling bias, and by controlling the variance, we are able to give rigorous label complexity bounds for the learning process. 1. - NeuroImage , 2009 "... Interpreting brain image experiments requires analysis of complex, multivariate data. In recent years, one analysis approach that has grown in popularity is the use of machine learning algorithms to train classifiers to decode stimuli, mental states, behaviors and other variables of interest from fM ..." Cited by 41 (3 self) Add to MetaCart Interpreting brain image experiments requires analysis of complex, multivariate data. In recent years, one analysis approach that has grown in popularity is the use of machine learning algorithms to train classifiers to decode stimuli, mental states, behaviors and other variables of interest from fMRI data and thereby show the data contain enough information about them. In this tutorial overview we review some of the key choices faced in using this approach as well as how to derive statistically significant results, illustrating each point from a case study. 
Furthermore, we show how, in addition to answering the question of ‘is there information about a variable of interest ’ (pattern discrimination), classifiers can be used to tackle other classes of question, namely ‘where is the information ’ (pattern localization) and ‘how is that information encoded ’ (pattern characterization). 1 "... We present a general PAC-Bayes theorem from which all known PAC-Bayes risk bounds are obtained as particular cases. We also propose different learning algorithms for finding linear classifiers that minimize these bounds. These learning algorithms are generally competitive with both AdaBoost and the ..." Cited by 29 (6 self) Add to MetaCart We present a general PAC-Bayes theorem from which all known PAC-Bayes risk bounds are obtained as particular cases. We also propose different learning algorithms for finding linear classifiers that minimize these bounds. These learning algorithms are generally competitive with both AdaBoost and the SVM. 1. Intoduction For the classification problem, we are given a training set of examples—each generated according to the same (but unknown) distribution D, and the goal is to find a classifier that minimizes the true risk (i.e., the generalization error or the expected loss). Since the true risk is defined only with respect to the unknown distribution D, we are automatically confronted with the problem of specifying exactly what we should optimize on the training data to find a classifier having the smallest possible true risk. Many different specifications (of what should be optimized on the training data) have been provided by using different inductive principles but the final guarantee on the true risk, however, always comes with a so-called risk bound that holds uniformly over a set of classifiers. Hence, the formal justification of a learning strategy has always come a posteriori via a risk bound. Since a risk bound can be computed from what a classifier achieves on the training data, it automatically suggests the following optimization problem for learning algorithms: given a risk (upper) bound, find a classifier that minimizes it. Despite the enormous impact they had on our understanding of learning, the VC bounds are generally very loose. These bounds are characterized by the fact that - J. Machine Learning Res , 2006 "... Given a probability measure P and a reference measure µ, one is often interested in the minimum µ-measure set with P-measure at least α. Minimum volume sets of this type summarize the regions of greatest probability mass of P, and are useful for detecting anomalies and constructing confidence region ..." Cited by 27 (9 self) Add to MetaCart Given a probability measure P and a reference measure µ, one is often interested in the minimum µ-measure set with P-measure at least α. Minimum volume sets of this type summarize the regions of greatest probability mass of P, and are useful for detecting anomalies and constructing confidence regions. This paper addresses the problem of estimating minimum volume sets based on independent samples distributed according to P. Other than these samples, no other information is available regarding P, but the reference measure µ is assumed to be known. We introduce rules for estimating minimum volume sets that parallel the empirical risk minimization and structural risk minimization principles in classification. 
As in classification, we show that the performances of our estimators are controlled by the rate of uniform convergence of empirical to true probabilities over the class from which the estimator is drawn. Thus we obtain finite sample size performance bounds in terms of VC dimension and related quantities. We also demonstrate strong universal consistency and an oracle inequality. Estimators based on histograms and dyadic partitions illustrate the proposed rules. 1 - In Proceedings of the 23rd International Conference on Machine Learning , 2006 "... We show that several important Bayesian bounds studied in machine learning, both in the batch as well as the online setting, arise by an application of a simple compression lemma. In particular, we derive (i) PAC-Bayesian bounds in the batch setting, (ii) Bayesian log-loss bounds and (iii) Bayesian ..." Cited by 16 (2 self) Add to MetaCart We show that several important Bayesian bounds studied in machine learning, both in the batch as well as the online setting, arise by an application of a simple compression lemma. In particular, we derive (i) PAC-Bayesian bounds in the batch setting, (ii) Bayesian log-loss bounds and (iii) Bayesian bounded-loss bounds in the online setting using the compression lemma. Although every setting has different semantics for prior, posterior and loss, we show that the core bound argument is the same. The paper simplifies our understanding of several important and apparently disparate results, as well as brings to light a powerful tool for developing similar arguments for other methods. 1. "... Thompson sampling is one of oldest heuristic to address the exploration / exploitation trade-off, but it is surprisingly unpopular in the literature. We present here some empirical results using Thompson sampling on simulated and real data, and show that it is highly competitive. And since this heur ..." Cited by 14 (2 self) Add to MetaCart Thompson sampling is one of oldest heuristic to address the exploration / exploitation trade-off, but it is surprisingly unpopular in the literature. We present here some empirical results using Thompson sampling on simulated and real data, and show that it is highly competitive. And since this heuristic is very easy to implement, we argue that it should be part of the standard baselines to compare against. 1 , 2006 "... This paper proposes a PAC-Bayes bound to measure the performance of Support Vector Machine (SVM) classifiers. The bound is based on learning a prior over the distribution of classifiers with a part of the training samples. Experimental work shows that this bound is tighter than the original PAC-Baye ..." Cited by 13 (2 self) Add to MetaCart This paper proposes a PAC-Bayes bound to measure the performance of Support Vector Machine (SVM) classifiers. The bound is based on learning a prior over the distribution of classifiers with a part of the training samples. Experimental work shows that this bound is tighter than the original PAC-Bayes, resulting in an enhancement of the predictive capabilities of the PAC-Bayes bound. In addition, it is shown that the use of this bound as a means to estimate the hyperparameters of the classifier compares favourably with cross validation in terms of accuracy of the model, while saving a lot of computational burden. "... We investigate the task of performance prediction for language models belonging to the exponential family. First, we attempt to empirically discover a formula for predicting test set cross-entropy for n-gram language models. 
We build models over varying domains, data set sizes, and n-gram orders, an ..." Cited by 10 (3 self) Add to MetaCart We investigate the task of performance prediction for language models belonging to the exponential family. First, we attempt to empirically discover a formula for predicting test set cross-entropy for n-gram language models. We build models over varying domains, data set sizes, and n-gram orders, and perform linear regression to see whether we can model test set performance as a simple function of training set performance and various model statistics. Remarkably, we find a simple relationship that predicts test set performance with a correlation of 0.9997. We analyze why this relationship holds and show that it holds for other exponential language models as well, including class-based models and minimum discrimination information models. Finally, we discuss how this relationship can be applied to improve language model performance. 1
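To make the Thompson sampling heuristic mentioned in the abstract above concrete, here is a minimal sketch for Bernoulli bandits with Beta posteriors (Python with NumPy; the arm probabilities and horizon are arbitrary illustrative choices, not taken from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    true_p = [0.3, 0.5, 0.7]               # unknown success probabilities of 3 arms
    wins = np.ones(3)                      # Beta(1, 1) prior on each arm
    losses = np.ones(3)

    for t in range(2000):
        samples = rng.beta(wins, losses)   # one posterior draw per arm
        arm = int(np.argmax(samples))      # play the arm with the largest draw
        reward = rng.random() < true_p[arm]
        wins[arm] += reward
        losses[arm] += 1 - reward

    print(wins + losses - 2)               # pull counts; most pulls go to the best arm

The exploration/exploitation trade-off is handled implicitly: arms with uncertain posteriors occasionally produce large draws and get tried, while clearly inferior arms are sampled less and less often.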
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.8.4561","timestamp":"2014-04-19T21:07:32Z","content_type":null,"content_length":"37333","record_id":"<urn:uuid:9b8da7c9-952a-4cb9-9d26-2ea5681ced1e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
February 16th 2011, 05:10 AM can anyone please tell me a what a fractal is and how does it work i tried to find it online but couldnt understand! February 17th 2011, 08:15 AM See this. I have got it from net. February 18th 2011, 03:00 AM cmon man i know this bit, what i wanna know is how does this work i hav chked out sum site myself but i cant get the idea of a fractle and how it is used, may b what i need is a example........just lyk any other mathematical theory! how does this thing works? im not a student this is just to cool my curiosity! thanks in advance February 18th 2011, 05:10 AM cmon man i know this bit, what i wanna know is how does this work i hav chked out sum site myself but i cant get the idea of a fractle and how it is used, may b what i need is a example........just lyk any other mathematical theory! how does this thing works? im not a student this is just to cool my curiosity! thanks in advance I thought the introductory link was pretty informative. I guess then what we need to know to answer your question is just how much Math do you know? That will let us guide you better. February 18th 2011, 08:36 PM dear sir, i wont say im a PhD student but im a science grad from India i run Aptech computer education classes for last 4 years...............but u try to teach me as a student, im sure wiki has everything in detail but what i couldn't get is how a fractal is used in any way.Just gimme an example.pls pls. really appreciate u taking interest in my armature query! February 19th 2011, 02:56 AM mr fantastic dear sir, i wont say im a PhD student but im a science grad from India i run Aptech computer education classes for last 4 years...............but u try to teach me as a student, im sure wiki has everything in detail but what i couldn't get is how a fractal is used in any way.Just gimme an example.pls pls. really appreciate u taking interest in my armature query! Several people have tried to help you. If you cannot find an on-line resource that helps you, I doubt any of us will be able to. One practical application of fractals is in cgi. There are many others. Use Google. cmon man i know this bit, what i wanna know is how does this work i hav chked out sum site myself but i cant get the idea of a fractle and how it is used, may b what i need is a example........just lyk any other mathematical theory! how does this thing works? im not a student this is just to cool my curiosity! thanks in advance The link you were given is an excellent reply to the question you posted. It is not our fault that your original post was not specific enough.
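Since a concrete example seems to be what is missing here: one of the most familiar fractals, the Mandelbrot set, is built by iterating z -> z^2 + c for each point c of the plane and asking whether the iterates stay bounded. A rough sketch (Python; the grid size and iteration limit are arbitrary choices) that prints a coarse text picture of it:

    def escapes(c, max_iter=50):
        z = 0j
        for _ in range(max_iter):
            z = z * z + c                  # the defining iteration
            if abs(z) > 2:                 # once |z| > 2 the orbit escapes to infinity
                return True
        return False

    for row in range(21):
        y = 1.2 - row * 0.12
        line = ''
        for col in range(60):
            x = -2.0 + col * 0.05
            line += ' ' if escapes(complex(x, y)) else '#'
        print(line)

Zooming in on the boundary of the printed shape reveals finer and finer copies of similar structure; that self-similarity at every scale is what makes it a fractal, and it is also why fractals are used to model naturally rough objects such as coastlines and terrain.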
{"url":"http://mathhelpforum.com/differential-geometry/171462-fractal-print.html","timestamp":"2014-04-19T02:58:50Z","content_type":null,"content_length":"8034","record_id":"<urn:uuid:33d658fd-e50f-4970-978a-1ec9b7451a0b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Ecological Archives Ecological Archives E087-081-A2 C. E. Cáceres, S. R. Hall, M. A. Duffy, A. J. Tessier, C. Helmle, and S. MacIntyre. 2006. Physical structure of lakes constrains epidemics in Daphnia populations. Ecology 87:1438 1444. Appendix B. Results from the regression tree analyses. Because some of our metrics varied among years (infection prevalence, host density, Fee’s probability) whereas others (phosphorus, surface area, maximum depth, mean depth, depth ratio) were constant among years, we fit two models. Panel A shows the result of the model that included each lake once with the three-year average for density and prevalence of infection. Panel B shows the results from the second model which considered each annual epidemic to be a unique event. We used the least squares loss function with a stopping rule of four cases per terminal node. Symbols are shaded based on their grouping in the terminal nodes. Numbers above boxes indicate cut values. [Back to E087-081]
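For readers who want to try the same kind of fit on their own data, a regression tree with a least-squares split criterion and small terminal nodes can be grown along these lines (a Python/scikit-learn sketch; the file name, column names, and the exact translation of the stopping rule are placeholders of mine, not the authors' data or software):

    from sklearn.tree import DecisionTreeRegressor, export_text
    import pandas as pd

    df = pd.read_csv('lakes.csv')          # hypothetical table of lake metrics
    X = df[['phosphorus', 'surface_area', 'max_depth', 'mean_depth', 'depth_ratio']]
    y = df['infection_prevalence']

    # default split criterion is least squares; min_samples_leaf approximates
    # a "four cases per terminal node" stopping rule
    tree = DecisionTreeRegressor(min_samples_leaf=4)
    tree.fit(X, y)
    print(export_text(tree, feature_names=list(X.columns)))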
{"url":"http://esapubs.org/archive/ecol/E087/081/appendix-B.htm","timestamp":"2014-04-18T00:18:04Z","content_type":null,"content_length":"2455","record_id":"<urn:uuid:800c0720-9863-483f-96d6-735978be56a0>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: A bag contains 6 purple marbles, 7 azure marbles and 10 orange marbles. What is the chance of drawing an orange marble? If an orange marble is drawn then placed back into the bag, and a second marble is drawn, what is the probability of drawing an azure marble? Give solutions exactly in reduced fraction form, separated by a comma.
There are (6+7+10) marbles in all; the chance of drawing an orange marble is 10/(6+7+10).
Total = 6+7+10 = 23. There are 10 orange marbles, so the chance of drawing an orange marble is 10/23. There are 7 azure marbles, so the probability of drawing an azure marble on the second draw is 7/23.
The chance of drawing azure is 7/(6+7+10).
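A quick check of both fractions (a small Python sketch):

    from fractions import Fraction

    total = 6 + 7 + 10
    print(Fraction(10, total))   # P(orange on the first draw)                 -> 10/23
    print(Fraction(7, total))    # P(azure on the second draw, with replacement) -> 7/23

Because the first marble is replaced, the second draw is from the same 23 marbles, which is why the azure probability does not change.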
{"url":"http://openstudy.com/updates/4e593c200b8b1f45b47975ed","timestamp":"2014-04-18T03:31:40Z","content_type":null,"content_length":"32807","record_id":"<urn:uuid:daa185ac-d33b-4bb5-9597-2ba24d7d82c2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
F Distribution and ANOVA: Facts About the F Distribution Randomly divide the class into four groups of the same size. Have each member of each group record the number of states in the United States he or she has visited. Run an ANOVA test to determine if the average number of states visited in the four groups are the same. Test at a 1% level of significance. Use one of the solution sheets at the end of the chapter (after the homework).
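If the class wants a quick way to check the hand computation, the same one-way ANOVA can be run in a few lines (a Python/SciPy sketch; the four lists are made-up stand-ins for the groups' actual counts):

    from scipy import stats

    g1 = [12, 8, 15, 9, 11]      # states visited, group 1 (example numbers)
    g2 = [7, 14, 10, 13, 9]
    g3 = [20, 5, 12, 8, 16]
    g4 = [11, 9, 14, 10, 7]

    f_stat, p_value = stats.f_oneway(g1, g2, g3, g4)
    print(f_stat, p_value)       # reject H0 at the 1% level only if p_value < 0.01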
{"url":"http://cnx.org/content/m17062/1.11/","timestamp":"2014-04-17T06:59:47Z","content_type":null,"content_length":"104821","record_id":"<urn:uuid:50d3273b-2ddc-496d-be02-8797361a2f94>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
RMSE vs standard deviation
Can anyone explain what the difference is between RMSE and standard deviation? I am using RMSE in multivariate analysis, but is it just the standard deviation? Why another name?
If I recall correctly, the standard deviation is an actual population parameter whereas the RMSE is based on a model (e.g. regression analysis). In other words, the RMSE is an estimator of the standard deviation based on your model results. If it is an unbiased estimator, then it will be equal to the standard error.
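The distinction is easy to see numerically. In the sketch below (Python with NumPy; the data are synthetic), the sample standard deviation measures the spread of y around its own mean, while the RMSE measures the spread of y around a model's fitted values:

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 200)
    y = 2.0 * x + 1.0 + rng.normal(0, 1.5, size=x.size)

    slope, intercept = np.polyfit(x, y, 1)           # simple linear regression
    residuals = y - (slope * x + intercept)

    print(np.std(y, ddof=1))                         # spread about the mean of y (large)
    print(np.sqrt(np.mean(residuals ** 2)))          # RMSE about the fitted line (about 1.5)

If the model explained nothing (a constant fit at the mean of y), the two numbers would coincide up to the degrees-of-freedom correction; the better the model, the smaller the RMSE relative to the standard deviation of y.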
{"url":"http://www.physicsforums.com/showthread.php?t=281219","timestamp":"2014-04-21T04:53:39Z","content_type":null,"content_length":"28619","record_id":"<urn:uuid:7f54db39-6f53-4bbf-9745-56fab2f2cfe9>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: probability question about the dice game
Date: Feb 15, 2013 10:22 AM
Author: David C. Ullrich
Subject: Re: probability question about the dice game
On Thu, 14 Feb 2013 13:34:21 -0800 (PST), pepstein5@gmail.com wrote:
>On Thursday, February 14, 2013 6:05:23 PM UTC, Jussi Piitulainen wrote:
>> pepstein5@gmail.com writes:
>> > David Ullrich is wrong. "X to Y" means that the probability of
>> > winning is (X + Y)/Y.
>> Which you later corrected to the reciprocal X/(X + Y); probabilities
>> need to be between 0 and 1. But then it seems to me that Ullrich says
>> the same, and that's also what I meant.
>No, the reciprocal of (X + Y)/Y is Y/(X + Y), which is what I should have said.
>Ullrich wrongly said X/(X + Y).
What??? I didn't say anything about probabilities! I said something about odds, in particular terminology used to express statements about odds. Here's what I actually said, with a little context:
>> >>> from itertools import product
>> >>> die = {1,2,3,4,5,6}
>> >>> dice = set(product(die, die, die, die))
>> >>> sum(int(max(a,b) > max(c,d)) for a,b,c,d in dice)
>> 505
>> >>> sum(int(max(a,b) <= max(c,d)) for a,b,c,d in dice)
>> 791
>> I'd say her odds are 505 for and 791 against. I hope my gambling
>> vocabulary is not too far off.
>The terminology would be "her odds of winning are 505 to 791".
Nothing at all about probability. Look at the code I was referring to: there are 505 equally likely cases leading to a win and 791 equally likely cases leading to a loss. That makes the odds of winning precisely 505 to 791.
In particular, I said nothing at all about the math; my only comment was about terminology (this has something to do with the fact that the OP's question was about terminology).
It's beyond me what makes you think I said anything about any probability being X/(X+Y).
And btw the terminology I gave _is_ perfectly standard. To know that it's not you'd need to... never mind.
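For completeness, turning those counts into odds and a probability (a small snippet in the same spirit as the quoted session):

    wins, losses = 505, 791
    print(f"odds of winning: {wins} to {losses}")
    print(f"probability of winning: {wins / (wins + losses):.4f}")   # about 0.3897

The two counts add up to 6**4 = 1296, the total number of equally likely outcomes for the four dice.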
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8336891","timestamp":"2014-04-20T06:54:37Z","content_type":null,"content_length":"3802","record_id":"<urn:uuid:e3bcd10e-ce26-48bd-b94e-a5f411263175>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Kinematics Question If a particle undergoes multiple phases of motion (ex. accelerating, then decelerating, then constant acceleration, etc..) then how can you determine where the particle's final position is? I'm imagining a v-t diagram and if you get the area under all of the velocity curves then you get the total displacement, but not the final position (the particle may have been moving backwards...) How do you get the final position? Let's keep it simple and apply this only to one dimension. EDIT: Does integrating the absolute value of all the velocity equations yield total displacement while just integrating yields the final position? Are we to assume that all motion is in one dimension? If the particle is moving backwards the velocity will be negative so the change in displacement during that period, ∫vdt, will be negative. Displacement is the distance from the origin with its direction from the origin (ie. + or - x). The change in displacement is defined as the final displacement (position) minus the initial displacement .
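To see the distinction numerically, here is a small sketch (Python with NumPy; the velocity profile is an arbitrary one-dimensional example). The integral of v gives the net change in position, while the integral of |v| gives the total distance travelled:

    import numpy as np

    t = np.linspace(0, 10, 10001)
    v = np.where(t < 6, 3.0, -2.0)         # forward at 3 m/s for 6 s, then backward at 2 m/s

    displacement = np.trapz(v, t)          # integral of v dt   -> about 3*6 - 2*4 = 10 m
    distance = np.trapz(np.abs(v), t)      # integral of |v| dt -> about 3*6 + 2*4 = 26 m
    final_position = 0.0 + displacement    # initial position plus the net change

    print(displacement, distance, final_position)

So the final position is the initial position plus the integral of v over all the phases of motion, regardless of how many times the velocity changes sign; integrating |v| instead gives the total distance, not the position.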
{"url":"http://www.physicsforums.com/showthread.php?s=ddb5a17017b6994bf35fbc0bec6996d4&p=4626153","timestamp":"2014-04-16T19:04:00Z","content_type":null,"content_length":"33807","record_id":"<urn:uuid:edddd8b3-a449-429e-aeba-4802e93b9e72>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Can someone please help with this?: The length of a rectangle is 2 inches longer than the width. If the area is 320 square inches, find the rectangle's dimensions. Round your answers to the nearest tenth of an inch.

how do we determine the area of any rectangle?

you have a problem with two unknown variables - x and y, one being width and the other being length - and two conditions: 1) length being 2 inches longer than width and 2) area is 320.

@amistre64 Length x Width

correct :) using that and the given information, we can determine the rest. Area = Length x Width ; Area = 320, and it says the Length is 2 times the Width: 320 = (2xWidth)xWidth, so 320 = 2xW^2

i read 2 times larger for some reason ....

same concept tho; just a correct reading :) Area = Length x Width ; Area = 320, and it says the Length is (Width + 2): 320 = (Width+2)xWidth, so 320 = W^2 + 2W

would it be (W)(W+2)=320 ?

then would you subtract 320 from both sides?

yes, and expand the left side; this turns it into a usual looking quadratic as a result

so it would be: W^2+2W-320=0?

correct, and then you just use whatever method you like for determining "w": complete the square, or quadratic formula

since the "c" part is relatively overpowering; id opt complete the square if i had to do it by hand: w^2 + 2w + 1 - 1 - 320 = 0, (w+1)^2 - 321 = 0, (w+1)^2 = 321, w+1 = +- sqrt(321), w = -1 +- sqrt(321)

then use a calculator to work out the decimal :) and ignore the negative since negative length is not defined

yea complete the square seems better in this situation. I always seem to miss a step with the quadratic formula. So W=16.91 and L=18.91?

nearest tenth; but yes

THANKS FOR YOUR HELP!!!!
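A quick numerical check of the algebra in the thread (the variable names are mine): solve w^2 + 2w - 320 = 0 and keep the positive root.

import math

# w^2 + 2w - 320 = 0  ->  w = -1 + sqrt(321) (positive root only)
w = -1 + math.sqrt(321)
l = w + 2

print(round(w, 1), round(l, 1))   # 16.9 18.9
print(w * l)                      # ~320, so the area is recovered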
{"url":"http://openstudy.com/updates/502141f6e4b0713d3b89dfd9","timestamp":"2014-04-20T08:25:14Z","content_type":null,"content_length":"66933","record_id":"<urn:uuid:9c41d758-18bd-4c04-ae64-7250eac13629>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
Help on solving first-order NONhomogeneous PDE

Hello everybody. I'm trying to solve this first-order NONhomogeneous PDE. But because my awfully bad textbook only gave examples on how to solve homogeneous PDEs, I'm not sure what to do. Here's my scratch work. I also know the theorem that if two solutions solve a linear NONhomogeneous PDE, then their difference solves the linear homogeneous PDE. But I don't think that's going to help here.

Problem: Solve $\frac{\partial u}{\partial x}+\frac{\partial u}{\partial y}=1$

Scratch work: I know how to solve the HOMOgeneous PDE: $\frac{\partial u}{\partial x}+\frac{\partial u}{\partial y}=0$. The LHS is just the directional derivative of $u$ in the direction of $[1, 1]$. So if a curve in 3D has $[1,1]$ as a tangent vector, the curve's equation is just $\frac{dy}{dx} = 1$ so $y = x + C_1$ so $x - y = -C_1 = C$. So $u(x,y) = f(x - y)$, where $f$ is differentiable. But how do I solve the NONhomogeneous question from this? Thanks everybody.

Re: Help on solving first-order NONhomogeneous PDE
One solution is u = x. If you let $u = x + v$ you'll end up with $v_x + v_y = 0$. BTW - do you know the method of characteristics?

Re: Help on solving first-order NONhomogeneous PDE
Hello and thanks Danny. So $u(x,y) = x$ could've been just guessed? But if $u(x,y) = x + v$, wouldn't you get $u_x + u_y = (1 + v_x) + 0$? Since $(x + v)_y = 0$ because there is no y in here. And I don't think so since I'm taking a first course in PDEs.

Re: Help on solving first-order NONhomogeneous PDE
Well, you assume that $u(x,y) = v(x,y) + x$. The method of characteristics will probably come up. It's a way to solve $a(x,y,u)u_x + b(x,y,u) u_y = c(x,y,u)$.

Re: Help on solving first-order NONhomogeneous PDE
Well, you would end up with that $v$ in the end. Also, if you let $u = v + x$ then $u_x + u_y = 1$ gives $\left( v_x + 1\right) + \left( v_y + 0\right) = 1$ gives $v_x + v_y = 0$.
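Putting the two posts together, the general solution is u(x, y) = x + f(x - y): the particular solution plus the homogeneous one. A short symbolic check that this satisfies the nonhomogeneous equation for an arbitrary differentiable f (this snippet is an illustration, not from the thread):

import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')          # arbitrary differentiable function

u = x + f(x - y)              # proposed general solution
lhs = sp.diff(u, x) + sp.diff(u, y)

print(sp.simplify(lhs))       # 1, so u_x + u_y = 1 holds for any f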
{"url":"http://mathhelpforum.com/differential-equations/188847-help-solving-first-order-nonhomogeneous-pde.html","timestamp":"2014-04-17T20:37:53Z","content_type":null,"content_length":"53458","record_id":"<urn:uuid:292e07c9-5985-4192-944a-3fe3b79005bd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Order embedding
Re: Order embedding Posted: Sep 16, 2013 7:24 AM
William Elliot wrote:
> Let X,Y be (partially) ordered sets. Are these definitions correct?
> f:X -> Y is order preserving when
> for all x,y, (x <= y implies f(x) <= f(y)).
> f:X -> Y is an order embedding when
> for all x,y, (x <= y iff f(x) <= f(y)).
> f:X -> Y is an order isomorphism when f is surjective
> and for all x,y, (x <= y iff f(x) <= f(y)).
> The following are immediate consequences.
> Order embedding maps and order isomorphisms are injections.
> If f:X -> Y is an order embedding,
> then f:X -> f(X) is an order isomorphism.
> Furthermore the composition of two order preserving, order
> embedding or order isomorphic maps is again resp., order
> preserving, order embedding or order isomorphic.
> Finally, the inverse of an order isomorphism is an order isomorphism.
> That all is the basics of order maps, is it not?
> Or is there more to be included?
Probably all. There are also Galois connections.
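A small finite example, not from the thread, may help separate the first two definitions. Take X = {a, b} with only the reflexive relation (a and b incomparable) and Y = {0, 1} with the usual order; the map a -> 0, b -> 1 is order preserving but not an order embedding, since f(a) <= f(b) while a and b are incomparable in X.

# Posets as (elements, set of pairs (x, y) meaning x <= y).
X = ({'a', 'b'}, {('a', 'a'), ('b', 'b')})      # antichain: a and b incomparable
Y = ({0, 1}, {(0, 0), (1, 1), (0, 1)})          # chain 0 <= 1
f = {'a': 0, 'b': 1}

def order_preserving(f, X, Y):
    # x <= y in X implies f(x) <= f(y) in Y
    return all((f[x], f[y]) in Y[1] for (x, y) in X[1])

def order_embedding(f, X, Y):
    # x <= y in X iff f(x) <= f(y) in Y
    return all(((x, y) in X[1]) == ((f[x], f[y]) in Y[1])
               for x in X[0] for y in X[0])

print(order_preserving(f, X, Y))   # True
print(order_embedding(f, X, Y))    # False: f(a) <= f(b) but a, b are incomparable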
{"url":"http://mathforum.org/kb/message.jspa?messageID=9258202","timestamp":"2014-04-21T05:10:00Z","content_type":null,"content_length":"26597","record_id":"<urn:uuid:ce88dd54-4298-48c4-a21f-28d1f01105bc>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
Ellicott City Algebra 1 Tutor
Find an Ellicott City Algebra 1 Tutor
...I have also taught Physics and Electrical Engineering courses for both undergraduate and graduate students. These courses involved solving differential equations related to applications in physics and electrical engineering. As an undergraduate student in Electrical Engineering and Physics and as a graduate student, I took courses in mathematical methods for physics and engineering.
16 Subjects: including algebra 1, physics, calculus, geometry
...Tutoring is what I do best, and I find it very fulfilling. I've tutored precalculus at Howard Community College, which includes geometry, Algebra I and II, and linear algebra. I accept whatever level the student is currently at.
11 Subjects: including algebra 1, reading, geometry, piano
...I have also received 12 auxiliary credits in Reading via my Maryland Teaching Certification. Phonics can be such a burdensome topic for those students who seem to struggle, but I have found multiple ways to make this type of learning fun and engaging. Once a student has the confidence to learn, I believe you can teach them anything.
8 Subjects: including algebra 1, reading, writing, GED
...Lastly, I have specialized experience in audio editing, blog creation and maintenance, and implementing technology as a tool in the classroom. As a result, teachers are welcome to contact me as well. If you are interested in building your or your child's confidence in English, Math, Japanese, computer skills, or any other subject, I am the correct choice for a tutor.
33 Subjects: including algebra 1, English, writing, reading
...I taught human anatomy and physiology at UMBC for three years. I ran a lab in which we worked with models and animal dissections to understand the anatomy of the human body. I have an expertise in this field and can provide in-depth education and review of the material.
17 Subjects: including algebra 1, biology, GRE, ASVAB
{"url":"http://www.purplemath.com/ellicott_city_algebra_1_tutors.php","timestamp":"2014-04-18T08:30:04Z","content_type":null,"content_length":"24411","record_id":"<urn:uuid:1af89cb6-b1b3-4031-9f91-13eaa98cdbcd>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
Rational Numbers - Grade 10 [CAPS] As described in Review of past work, a number is a way of representing quantity. The numbers that will be used in high school are all real numbers, but there are many different ways of writing any single real number. This chapter describes rational numbers. Figure 1 Khan Academy video on Integers and Rational Numbers
{"url":"http://cnx.org/content/m38348/latest/?collection=col11306/1.3","timestamp":"2014-04-21T14:50:51Z","content_type":null,"content_length":"132074","record_id":"<urn:uuid:fa6d583d-bc6a-43be-8839-fc85218246e1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
Sandy Springs, GA Algebra 2 Tutor Find a Sandy Springs, GA Algebra 2 Tutor I was a National Merit Scholar and graduated magna cum laude from Georgia Tech in chemical engineering. I can tutor in precalculus, advanced high school mathematics, trigonometry, geometry, algebra, prealgebra, chemistry, grammar, phonics, SAT math, reading, and writing. I have been tutoring profe... 20 Subjects: including algebra 2, reading, chemistry, algebra 1 ...My approach is to find the parts of algebra that students are comfortable with and build on those to reach areas that students are less proficient at. When students realize that all of algebra follows rules that they already know, they can usually relax and have fun with it. Geometry is the subject where math teachers bring in more abstract concepts and many students are left behind. 17 Subjects: including algebra 2, chemistry, physics, geometry ...Using the understanding of bonding and periodic properties, students will predict and define the various types of bonding between elements. Students will name and write correct sample formulas for compounds. Students will describe a chemical reaction with an equation and calculate relationships described by the equation. 14 Subjects: including algebra 2, chemistry, physics, SAT math ...I am Georgia certified in Mathematics (grades 6-12) with my Masters in Mathematics Education from Georgia State University. I have classroom experience teaching Algebra and regularly held tutorials for my own students. I know what's expected from each student and I will create a plan of action to help you achieve your personal goals to better understand mathematics. 7 Subjects: including algebra 2, geometry, algebra 1, SAT math Hello, my name is Josh, and I am a doctoral student studying the Syriac language, an ancient form of Aramaic, and Coptic, the last stage of the native Egyptian language. I have experience teaching Latin to undergraduate and graduate students, as well as individual tutoring for high school students ... 5 Subjects: including algebra 2, geometry, algebra 1, prealgebra
{"url":"http://www.purplemath.com/Sandy_Springs_GA_algebra_2_tutors.php","timestamp":"2014-04-20T04:25:01Z","content_type":null,"content_length":"24543","record_id":"<urn:uuid:0686dac9-fc53-4670-bc95-6ca35984fb23>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2013/258
Witness Encryption and its Applications
Sanjam Garg and Craig Gentry and Amit Sahai and Brent Waters

Abstract: We put forth the concept of \emph{witness encryption}. A witness encryption scheme is defined for an NP language $L$ (with corresponding witness relation $R$). In such a scheme, a user can encrypt a message $M$ to a particular problem instance $x$ to produce a ciphertext. A recipient of a ciphertext is able to decrypt the message if $x$ is in the language and the recipient knows a witness $w$ where $R(x,w)$ holds. However, if $x$ is not in the language, then no polynomial-time attacker can distinguish between encryptions of any two equal length messages. We emphasize that the encrypter himself may have no idea whether $x$ is actually in the language. Our contributions in this paper are threefold. First, we introduce and formally define witness encryption. Second, we show how to build several cryptographic primitives from witness encryption. Finally, we give a candidate construction based on the NP-complete \textsc{Exact Cover} problem and Garg, Gentry, and Halevi's recent construction of ``approximate" multilinear maps. Our method for witness encryption also yields the first candidate construction for an open problem posed by Rudich in 1989: constructing computational secret sharing schemes for an NP-complete access structure.

Category / Keywords: public-key cryptography / Multilinear Maps
Date: received 6 May 2013
Contact author: sanjamg at cs ucla edu
Available format(s): PDF | BibTeX Citation
Version: 20130508:202916 (All versions of this report)
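For readers unfamiliar with the Exact Cover problem named in the abstract, the witness relation R(x, w) is easy to state: an instance x is a universe together with a collection of subsets, and a witness w is a sub-collection that covers every element of the universe exactly once. The checker below only illustrates that relation; it says nothing about the encryption scheme itself, and the toy instance is made up.

def exact_cover_witness(universe, subsets, chosen):
    # R(x, w): do the chosen subsets partition the universe exactly?
    # x = (universe, subsets) is the instance, w = chosen (indices into subsets).
    covered = [e for i in chosen for e in subsets[i]]
    return sorted(covered) == sorted(universe)

# Toy instance: universe {1..4}, subsets S0={1,2}, S1={3}, S2={2,3}, S3={4}.
universe = [1, 2, 3, 4]
subsets = [{1, 2}, {3}, {2, 3}, {4}]

print(exact_cover_witness(universe, subsets, [0, 1, 3]))  # True: {1,2}, {3}, {4} cover each element once
print(exact_cover_witness(universe, subsets, [0, 2, 3]))  # False: element 2 is covered twice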
{"url":"http://eprint.iacr.org/2013/258/20130508:202916","timestamp":"2014-04-19T17:10:02Z","content_type":null,"content_length":"3000","record_id":"<urn:uuid:f1118cc8-5b97-4713-aacd-5e1ad17d96d7>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
Check whether function is invertible...?

Check whether $f(x) = [x] + \sqrt{\{x\}}$, $f:\mathbb{R}\rightarrow\mathbb{R}$ (where [.] & {.} represent the greatest integer and fractional part functions respectively) is invertible or not; if yes, then (i) find its inverse, (ii) solve the equation $f(x) = f^{-1}(x)$. I was unable to do it. Any ideas?

Let $n\leq x<n+1$. $\therefore[x]=n$ and $f'(x)=\frac{1}{2\sqrt{x-n}}>0$ when $x$ is not an integer. In case $f(x)$ is an integer then $f(x)=x$, which is obviously invertible. Thus $f(x)$ is clearly invertible. Let $g(x)$ be the inverse of $f(x)$. Since $f^{-1}(x)$ is not identically equal to $f(x)$, solving $f(x)=f^{-1}(x)$ is the same as solving $f(x)=x$. On squaring, $\{x\}=1$ is impossible, i.e. $x$ is an integer.

According to... Nearest Integer Function -- from Wolfram MathWorld the notation $f(x)=[x]$ indicates the 'nearest integer function', so that ... $[x]= n$ , $n-\frac{1}{2} \le x < n + \frac{1}{2}$. The function defined as... $\lfloor x \rfloor = n$ , $n \le x < n+1$ is the so-called 'floor function' and it is indicated as $f(x) = \lfloor x \rfloor$. Kind regards

I think it is up to you to define a particular notation for any function. fardeen_gen has chosen to denote the floor function by [x], so let it be. In textbooks of our country (India) this is how the floor function is defined.

Fardeen_gen in his post first used the notation $f(x)=[x]$ and after 'verbally' described the function as 'greatest integral'... not 'floor function'... at this point some sort of doubt is... Kind regards

I agree. Actually, I was familiar with this question and went on to write the solution right away.
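In the thread's notation ([x] for the integer part, {x} for the fractional part), the inverse found above can be written explicitly as g(y) = [y] + {y}^2, since 0 <= sqrt({x}) < 1 leaves the integer part untouched. A quick numerical sanity check (my own illustration, not from the thread):

import math

def f(x):
    n = math.floor(x)
    return n + math.sqrt(x - n)    # f(x) = [x] + sqrt({x})

def g(y):
    n = math.floor(y)
    return n + (y - n) ** 2        # proposed inverse: g(y) = [y] + {y}^2

for x in [2.0, 2.25, 3.7, -1.5]:
    print(x, g(f(x)))              # g(f(x)) reproduces x, up to floating-point rounding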
{"url":"http://mathhelpforum.com/calculus/93427-check-whether-function-invertible.html","timestamp":"2014-04-16T06:22:52Z","content_type":null,"content_length":"53337","record_id":"<urn:uuid:1766c743-e7a3-4984-98a9-b47dbefbdc4f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
Portability: portable
Stability: provisional
Maintainer: Edward Kmett <ekmett@gmail.com>
Safe Haskell: Safe-Inferred

setmapped :: (Ord i, Ord j) => IndexPreservingSetter (Set i) (Set j) i j

This Setter can be used to change the type of a Set by mapping the elements to new values. Sadly, you can't create a valid Traversal for a Set, but you can manipulate it by reading using folded and reindexing it via setmapped.

>>> over setmapped (+1) (fromList [1,2,3,4])
fromList [2,3,4,5]

setOf :: Getting (Set a) s t a b -> s -> Set a

Construct a set from a Getter, Fold, Traversal, Lens or Iso.

>>> setOf folded ["hello","world"]
fromList ["hello","world"]

>>> setOf (folded._2) [("hello",1),("world",2),("!!!",3)]
fromList [1,2,3]

setOf :: Getter s a -> s -> Set a
setOf :: Ord a => Fold s a -> s -> Set a
setOf :: Iso' s a -> s -> Set a
setOf :: Lens' s a -> s -> Set a
setOf :: Ord a => Traversal' s a -> s -> Set a
{"url":"http://hackage.haskell.org/package/lens-3.8.3/docs/Data-Set-Lens.html","timestamp":"2014-04-19T14:57:54Z","content_type":null,"content_length":"6873","record_id":"<urn:uuid:42ad5348-7867-4ce8-989a-abaf7aa8965c>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Partial fraction again
I got this problem out of the multiple choice section in my math book. It said: if $\frac {x+p}{(x-1)(x-3)}\equiv \frac {q}{x-1}+\frac{2}{x-3}$, what are the values of p and q? Then I got another one that seems to be quite similar: if $x^2+4x+p \equiv (x+q)^2+1$, what are the values of p and q? Having to find two values is what is confusing to me... Can someone please help me with these problems?
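One standard way to handle both identities is to clear the denominators (or expand the square) and match coefficients of like powers of x; this gives p = 1, q = -1 for the first and p = 5, q = 2 for the second. The sympy sketch below just automates that coefficient comparison and is an illustration of the method, not part of the original post.

import sympy as sp

x, p, q = sp.symbols('x p q')

# First identity: (x + p)/((x-1)(x-3)) == q/(x-1) + 2/(x-3).
# Multiply through by (x-1)(x-3) and compare coefficients of x.
diff1 = sp.Poly((x + p) - (q*(x - 3) + 2*(x - 1)), x)
print(sp.solve(diff1.coeffs(), [p, q]))      # p = 1, q = -1

# Second identity: x^2 + 4x + p == (x + q)^2 + 1.
diff2 = sp.Poly((x**2 + 4*x + p) - ((x + q)**2 + 1), x)
print(sp.solve(diff2.coeffs(), [p, q]))      # p = 5, q = 2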
{"url":"http://mathhelpforum.com/algebra/109783-partial-fraction-again.html","timestamp":"2014-04-17T14:00:57Z","content_type":null,"content_length":"49073","record_id":"<urn:uuid:4a84c723-2b04-49cc-8d31-c11ee1f31b81>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Student Resources Exercise 4-2 Exercise 4-3 Exercise 4-5 Exercise 4-6 Exercise 4-7 Exercise 4-8 Exercise 4-10 Exercise 4-12 Exercise 4-13 Exercise 4-14 Exercise 4-15 Exercise 4-17 Exercise 4-18 Exercise 4-19 Exercise 4-20 Exercise 4-21 Exercise 4-7 Problem 4-1A Problem 4-2A Problem 4-4A Problem 4-5A Problem 4-6A Problem 4-1B Problem 4-2B Problem 4-3B Problem 4-4B Problem 4-5B Problem 4-6B Chapter 5 Exercise 5-2 Exercise 5-3 Exercise 5-4 Exercise 5-5 Exercise 5-6 Exercise 5-7 Exercise 5-8 Exercise 5-9 Exercise 5-10 Exercise 5-11 Exercise 5-12 Exercise 5-13 Exercise 5-14 Exercise 5-15 Exercise 5-16 Exercise 5-17 Exercise 5-18 Exercise 5-19 Exercise 5-20 Exercise 5-21 Problem 5-1A Problem 5-2A Problem 5-3A Problem 5-4A Problem 5-5A Problem 5-1B Problem 5-2B Problem 5-3B Problem 5-4B Problem 5-5B Chapter 6 Exercise 6-1 Exercise 6-2 Exercise 6-3 Exercise 6-6 Exercise 6-10 Exercise 6-13 Exercise 6-15 Problem 6-2A Problem 6-4A Problem 6-2B Problem 6-4B Chapter 7 Exercise 7-2 Exercise 7-7 Exercise 7-8 Exercise 7-9 Problem 7-1A Problem 7-2A Problem 7-3A Problem 7-4A Problem 7-5A Problem 7-6A Problem 7-1B Problem 7-2B Problem 7-3B Problem 7-4B Problem 7-5B Problem 7-6B Chapter 8 Exercise 8-1 Exercise 8-2 Exercise 8-3 Exercise 8-6 Exercise 8-7 Exercise 8-8 Exercise 8-9 Exercise 8-11 Exercise 8-14 Exercise 8-20 Exercise 8-21 Problem 8-1A Problem 8-2A Problem 8-3A Problem 8-4A Problem 8-6A Problem 8-1B Problem 8-2B Problem 8-3B Problem 8-4B Problem 8-6B Chapter 9 Exercise 9-1 Exercise 9-4 Exercise 9-6 Exercise 9-7 Exercise 9-8 Exercise 9-10 Problem 9-1A Problem 9-2A Problem 9-3A Problem 9-5A Problem 9-6A Problem 9-1B Problem 9-2B Problem 9-3B Problem 9-5B Problem 9-6B Chapter 10 Exercise 10-2 Exercise 10-3 Exercise 10-4 Exercise 10-5 Exercise 10-10 Exercise 10-11 Exercise 10-12 Exercise 10-13 Exercise 10-14 Exercise 10-15 Exercise 10-16 Exercise 10-18 Exercise 10-19 Exercise 10-20 Exercise 10-21 Problem 10-3A Problem 10-4A Problem 10-5A Problem 10-6A Problem 10-3B Problem 10-4B Problem 10-5B Problem 10-6B Chapter 11 Exercise 11-7 Exercise 11-8 Exercise 11-9 Exercise 11-17 Exercise 11-19 Exercise 11-20 Exercise 11-21 Problem 11-2A Problem 11-3A Problem 11-2B Problem 11-3B Chapter 12 Exercise 12-5 Exercise 12-6 Exercise 12-15 Exercise 12-19 Exercise 12-20 Exercise 12-21 Problem 12-1A Problem 12-2A Problem 12-3A Problem 12-4A Problem 12-5A Problem 12-1B Problem 12-2B Problem 12-3B Problem 12-4B Problem 12-5B Chapter 13 Exercise 13-1 Exercise 13-2 Exercise 13-3 Exercise 13-4 Exercise 13-5 Problem 13-1A Problem 13-2A Problem 13-3A Problem 13-4A Problem 13-1B Problem 13-2B Problem 13-3B Problem 13-4B Updates to Groom and Board Practice Set Within this Errata Sheet, you will find corrections to the files provided. Power Accounting System Software (P.A.S.S.) ISBN: 0-324-20510-4 Prepared by Warren Allen; Dale Klooster This best-selling educational general ledger package is enhanced with a problem checker enabling students to determine if their entries are correct. Use PASS to solve end-of-chapter problems, the continuing problem, comprehensive problems, and practice sets. Those end of chapter materials that were specifically written with P.A.S.S. in mind are designated with an icon in the text. P.A.S.S. is developed specifically for educational purposes but emulates commercial general ledger packages more closely than other educational packages. 
Personal Trainer
ISBN: 0-324-20460-4
Specifically designed to ease the time-consuming task of grading homework, Personal Trainer lets students complete their assigned homework from the text or practice on unassigned homework online. The results are instantaneously entered into a gradebook. With annotated spreadsheets and full-blown gradebook functionality, the greatly enhanced Personal Trainer 3.0 provides an unprecedented real-time, guided, self-correcting learning reinforcement system outside the classroom. Use this resource as an integrated solution for your distance learning or traditional course.

Study Guide for Managerial Accounting, Chapters 1-11 or Financial and Managerial Accounting Chapters 1-26
ISBN: 0-324-22213-0
Prepared by Carl S. Warren, Georgia State University, Athens; James M. Reeve, University of Tennessee, Knoxville
The Study Guide includes quiz and test tips as well as multiple choice, fill-in-the-blank, and true-false questions. The content is also available in WebTutor Advantage.

Working Papers for Managerial Accounting, Chapters 1-11 or Financial and Managerial Accounting Chapters 1-26
ISBN: 0-324-22212-2
The traditional Working Papers are available both with and without problem-specific forms for preparing solutions for Exercises, A and B Problems, the Continuing Problem, and the Comprehensive Problems from the text. These forms, with preprinted headings, provide a structure for the problems, which helps students get started and saves them time. Additional blank forms are included.

Working Papers Plus for Select Exercises and Problems for Managerial Accounting, Chapters 1-11 or Financial and Managerial Accounting Chapters 1-26
ISBN: 0-324-22211-4
Prepared by John W. Wanlass, De Anza College
This alternative to traditional working papers offers invaluable study elements that integrate selected exercises and problems from the text with forms for preparing solutions.
{"url":"http://www.swlearning.com/accounting/warren/managerial_8e/student_resources.html","timestamp":"2014-04-17T12:52:24Z","content_type":null,"content_length":"32508","record_id":"<urn:uuid:31a1bc41-8b8e-432f-9a6e-29fe3578f311>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Use Kirchhoff's Laws To Derive A Linear Ordinary ... | Chegg.com
Image text transcribed for accessibility: Use Kirchhoff's laws to derive a linear ordinary differential equation relating y(t) to x(t). For x(t) = 12 volts, derive the steady-state response (particular solution) of the differential equation. From the circuit, with a DC input of 12 volts, determine the output.
Electrical Engineering
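The circuit itself was only provided as an image, so the exact equation cannot be reconstructed here. Purely as an illustration of the technique, assume a series RC circuit with the output y(t) taken across the capacitor: KVL then gives RC y'(t) + y(t) = x(t), and for a constant 12 V input the particular (steady-state) solution is y = 12 V. A symbolic check of that assumed model:

import sympy as sp

t = sp.symbols('t', positive=True)
R, C = sp.symbols('R C', positive=True)   # assumed series RC circuit, not taken from the original problem
y = sp.Function('y')

ode = sp.Eq(R * C * y(t).diff(t) + y(t), 12)   # KVL for the assumed circuit with x(t) = 12 V
print(sp.dsolve(ode, y(t)))                    # y(t) = C1*exp(-t/(C*R)) + 12, so the steady state is 12 V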
{"url":"http://www.chegg.com/homework-help/questions-and-answers/use-kirchhoff-s-laws-derive-linear-ordinary-differential-equation-relating-y-t-x-t--x-t-12-q979118","timestamp":"2014-04-20T09:10:32Z","content_type":null,"content_length":"21045","record_id":"<urn:uuid:87c192e0-faa4-4bdf-839c-680c7b3f878e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
A Constant-Factor Approximation Algorithm for the Multicommodity Rent-or-Buy Problem
Amit Kumar, Anupam Gupta, Tim Roughgarden
The 43rd Annual IEEE Symposium on Foundations of Computer Science (FOCS'02), Vancouver, BC, Canada, November 16-19, 2002, p. 333. ISBN: 0-7695-1822-2.

Abstract: We present the first constant-factor approximation algorithm for network design with multiple commodities and economies of scale. We consider the rent-or-buy problem, a type of multicommodity buy-at-bulk network design in which there are two ways to install capacity on any given edge. Capacity can be rented, with cost incurred on a per-unit of capacity basis, or bought, which allows unlimited use after payment of a large fixed cost. Given a graph and a set of source-sink pairs, we seek a minimum-cost way of installing sufficient capacity on edges so that a prescribed amount of flow can be sent simultaneously from each source to the corresponding sink. Recent work on buy-at-bulk network design has concentrated on the special case where all sinks are identical; existing constant-factor approximation algorithms for this special case make crucial use of the assumption that all commodities ship flow to the same sink vertex and do not obviously extend to the multicommodity rent-or-buy problem. Prior to our work, the best heuristics for the multicommodity rent-or-buy problem achieved only logarithmic performance guarantees and relied on the machinery of relaxed metrical task systems or of metric embeddings. By contrast, we solve the network design problem directly via a novel primal-dual algorithm.
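To make the rent-or-buy tradeoff described in the abstract concrete: on a single edge carrying f units of flow, renting costs proportionally to f while buying costs a flat amount, so the installed cost is min(c*f, M). The numbers below are invented purely for illustration.

# Per-edge cost in a rent-or-buy instance: rent at unit cost c per unit of flow,
# or pay a one-time cost M for unlimited capacity on the edge.
def edge_cost(flow, c=1.0, M=6.0):
    return min(c * flow, M)

for f in [1, 3, 6, 10, 50]:
    print(f, edge_cost(f))   # renting wins for small flows; buying wins once c*f exceeds M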
{"url":"http://www.computer.org/csdl/proceedings/focs/2002/1822/00/18220333-abs.html","timestamp":"2014-04-20T01:21:07Z","content_type":null,"content_length":"51511","record_id":"<urn:uuid:ee2f025f-ece9-44e9-b4a1-d2c6072e0d53>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Way 2 Freshers

1. 10 people meet and shake hands. The maximum number of handshakes possible if there is to be no “cycle” of handshakes is (A cycle of handshakes is a sequence of k people a1, a2, ……, ak (k > 2) such that the pairs {a1, a2}, {a2, a3}, ……, {ak-1, ak}, {ak, a1} shake hands).
A. 7 B. 6 C. 9 D. 8

2. Alice and Bob play the following coins-on-a-stack game. 20 coins are stacked one above the other. One of them is a special (gold) coin and the rest are ordinary coins. The goal is to bring the gold coin to the top by repeatedly moving the topmost coin to another position in the stack. Alice starts and the players take turns. A turn consists of moving the coin on the top to a position i below the top coin (0 ≤ i ≤ 20). We will call this an i-move (thus a 0-move implies doing nothing). The proviso is that an i-move cannot be repeated; for example once a player makes a 2-move, on subsequent turns neither player can make a 2-move. If the gold coin happens to be on top when it’s a player’s turn then the player wins the game. Initially, the gold coin is the third coin from the top. Then
A. In order to win, Alice’s first move should be a 0-move. B. In order to win, Alice’s first move should be a 1-move. C. Alice has no winning strategy. D. In order to win, Alice’s first move can be a 0-move or a 1-move.

3. After the typist writes 12 letters and addresses 12 envelopes, she inserts the letters randomly into the envelopes (1 letter per envelope). What is the probability that exactly 1 letter is inserted in an improper envelope?
A. 0 B. 12/212 C. 11/12 D. 1/12

4. Given 3 lines in the plane such that the points of intersection form a triangle with sides of length 20, 20 and 30, the number of points equidistant from all the 3 lines is
A. 4 B. 3 C. 0 D. 1

5. A hare and a tortoise have a race along a circle of 100 yards diameter. The tortoise goes in one direction and the hare in the other. The hare starts after the tortoise has covered 1/5 of its distance and that too leisurely. The hare and tortoise meet when the hare has covered only 1/8 of the distance. By what factor should the hare increase its speed so as to tie the race?
A. 40 B. 37.80 C. 8 D. 5

6. A circular dartboard of radius 1 foot is at a distance of 20 feet from you. You throw a dart at it and it hits the dartboard at some point Q in the circle. What is the probability that Q is closer to the center of the circle than the periphery?
A. 0.75 B. 1 C. 0.5 D. 0.25

7. On planet zorba, a solar blast has melted the ice caps on its equator. 8 years after the ice melts, tiny plantoids called echina start growing on the rocks. echina grows in the form of a circle and the relationship between the diameter of this circle and the age of echina is given by the formula d = 4 * √(t – 8) for t ≥ 8, where d represents the diameter in mm and t the number of years since the solar blast. Jagan recorded the radius of some echina at a particular spot as 8mm. How many years back did the solar blast occur?
A. 24 B. 12 C. 8 D. 16

8. For the FIFA world cup, Paul the octopus has been predicting the winner of each match with amazing success. It is rumored that in a match between 2 teams A and B, Paul picks A with the same probability as A’s chances of winning. Let’s assume such rumors to be true and that in a match between Ghana and Bolivia, Ghana the stronger team has a probability of 2/3 of winning the game. What is the probability that Paul will correctly pick the winner of the Ghana-Bolivia game?
A. 4/9 B. 2/3 C. 1/9 D. 5/9

9.
The citizens of planet nigiet are 8 fingered and have thus developed their decimal system in base 8. A certain street in nigiet contains 1000 (in base 8 buildings numbered 1 to 1000. How many 3s are used in numbering these buildings? A. 256 B. 54 C. 192 D. 64 10. 36 people {a1, a2, …, a36} meet and shake hands in a circular fashion. In other words, there are totally 36 handshakes involving the pairs, {a1, a2}, {a2, a3}, …, {a35, a36}, {a36, a1}. Then size of the smallest set of people such that the rest have shaken hands with at least one person in the set is A. 12 B. 13 C. 18 D. 11 11.There are two boxes, one containing 10 red balls and the other containing 10 green balls. You are allowed to move the balls between the boxes so that when you choose a box at random and a ball at random from the chosen box, the probability of getting a red ball is maximizeD. This maximum probability is A. 3/4 B. 14/19 C. 37/38 D. 1/2 12.A hollow cube of size 5 cm is taken, with a thickness of 1 cm. It is made of smaller cubes of size 1 cm. If 4 faces of the outer surface of the cube are painted, totally how many faces of the smaller cubes remain unpainted? A. 900 B. 488 C. 500 D. 800 13.The IT giant Tirnop has recently crossed a head count of 150000 and earnings of $7 billion. As one of the forerunners in the technology front, Tirnop continues to lead the way in products and services in India. At Tirnop, all programmers are equal in every respect. They receive identical salaries ans also write code at the same rate.Suppose 12 such programmers take 12 minutes to write 12 lines of code in total. How many lines of code can be written by 72 programmers in 72 minutes? A. 72 B. 432 C. 12 D. 6 14.The IT giant Tirnop has recently crossed a head count of 150000 and earnings of $7 billion. As one of the forerunners in the technology front, Tirnop continues to lead the way in products and services in India. At Tirnop, all programmers are equal in every respect. They receive identical salaries ans also write code at the same rate.Suppose 12 such programmers take 12 minutes to write 12 lines of code in total. How long will it take 72 programmers to write 72 lines of code in total? A. 18 B. 72 C. 6 D. 12 15.Given a collection of points P in the plane, a 1-set is a point in P that can be separated from the rest by a line; i.e the point lies on one side of the line while the other lie on the other side. The number of 1-sets of P is denoted by n1(P).The maximum value of n1(P) over all configurations P of 11 points in the plane is A. 10 B. 3 C. 5 D. 11 16.Given a collection of points P in the plane, a 1-set is a point in P that can be separated from the rest by a line; i.e. the point lies on one side of the line while the others lie on the other side. The number of 1-sets of P is denoted by n1(P). The maximum value of n1(P) over all configurations P of 19 points in the plane is A. 18 B. 9 C. 3 D. 11 17.Alok and Bhanu play the following min-max game. Given the expression N = 9 + X + Y – Z where X, Y and Z are variables representing single digits (0 to 9), Alok would like to maximize N while Bhanu would like to minimize it. Towards this end, Alok chooses a single digit number and Bhanu substitutes this for a variable of her choice (X, Y or Z). Alok then chooses the next value and Bhanu, the variable to substitute the value. Finally Alok proposes the value for the remaining variable. Assuming both play to their optimal strategies, the value of N at the end of the game would be A. 27 B. 18 C. 20 D. 
0.0 18.Alok and Bhanu play the following min-max game. Given the expression N = X – Y – Z where X, Y and Z are variables representing single digits (0 to 9), Alok would like to maximize N while Bhanu would like to minimize it. Towards this end, Alok chooses a single digit number and Bhanu substitutes this for a variable of her choice (X, Y or Z). Alok then chooses the next value and Bhanu, the variable to substitute the value. Finally Alok proposes the value for the remaining variable. Assuming both play to their optimal strategies, the value of N at the end of the game would be A. 2 B. 4 C. 9 D. -18 19.Alok and Bhanu play the following min-max game. Given the expression N = 38 + X*(Y – Z) where X, Y and Z are variables representing single digits (0 to 9), Alok would like to maximize N while Bhanu would like to minimize it. Towards this end, Alok chooses a single digit number and Bhanu substitutes this for a variable of her choice (X, Y or Z). Alok then chooses the next value and Bhanu, the variable to substitute the value. Finally Alok proposes the value for the remaining variable. Assuming both play to their optimal strategies, the value of N at the end of the game would be A. 38 B. 119 20. 10 suspects are rounded by the police and questioned about a bank robbery. Only one of them is guilty. The suspects are made to stand in a line and each person declares that the person next to him on his right is guilty. The rightmost person is not questioneD. Which of the following possibilities are true? A. All suspects are lying or the leftmost suspect is innocent. B. All suspects are lying and the leftmost suspect is innocent . A. A only B. Neither A nor B C. Both A and B D. B only 21. A sheet of paper has statements numbered from 1 to 40. For all values of n from 1 to 40, statement n says: ‘Exactly n of the statements on this sheet are false.’ Which statements are true and which are false? A. The even numbered statements are true and the odd numbered statements are false. B. The 39th statement is true and the rest are false. C. The odd numbered statements are true and the even numbered statements are false. D. All the statements are false. 22. 34 people attend a party. 4 men are single and the rest are there with their wives. There are no children in the party. In all 22 women are present. Then the number of married men at the party is A. 12 B. 8 C. 16 23. 30 teams enter a hockey tournament. A team is out of the tournament if it loses 2 games. What is the maximum number of games to be played to decide one winner? A. 60 B. 59 C. 61 D. 30 24. A and B play a game of dice between them. The dice consist of colors on their faces (instead of numbers). When the dice are thrown, A wins if both show the same color; otherwise B wins. One die has 4 red face and 2 blue faces. How many red and blue faces should the other die have if the both players have the same chances of winning? A. 3 red and 3 blue faces B. 2 red and remaining blue C. 6 red and 0 blue D. 4 red and remaining blue 25. A and B play a game of dice between them. The dice consist of colors on their faces (instead of numbers). When the dice are thrown, A wins if both show the same color; otherwise B wins. One die has 3 red faces and 3 blue faces. How many red and blue faces should the other die have if the both players have the same chances of winning? A. red and 1 blue faces B. 1 red and 5 blue faces C. 3 red and 3 blue faces D. Any of the solutions given 26. There are two containers A and B. 
A is half filled with wine whereas B which is 3 times the size of A contains one quarter portion wine. If both containers are filled with water and the contents are poured into container C, what portion of container C is wine? A. 30 B. 31 C. 42 D. 25 27. A sheet of paper has statements numbered from 1 to 45. For all values of n from 1 to 45, statement n says “At most n of the statements on this sheet are false”. Which statements are true and which are false? A. The odd numbered statements are true and the even numbered are false. B. The even numbered statements are true and the odd numbered are false. C. All statements are false. D. All statements are false 28. A sheet of paper has statements numbered from 1 to 25. For all values of n from 1 to 25, statement in says “At most n of the statements on this sheet are false”. Which statements are true and which are false? A. The odd numbered statements are true and the even numbered are false. B. All statements are false. C. The even numbered statements are true and the odd numbered are false. D. All statements are true . 29. Alice and Bob play the following chip-off-the-table game. Given a pile of 58 chips, Alice first picks at least one chip but not all the chips. In subsequent turns, a player picks at least one chip but no more than the number picked on the previous turn by the opponent. The player to pick the last chip wins. Which of the following is true? A. In order to win, Alice should pick 14 chips on her first turn. B. In order to win, Alice should pick two chips on her first turn. C. In order to win, Alice should pick one chip on her first turn. 30. Suppose 19 monkeys take 19 minutes to eat 19 bananas. How many minutes would it take 8 monkeys to eat 8 bananas? A. 152 B. 27 C. 19 D. 8 31. Suppose 12 monkeys take 12 minutes to eat 12 bananas. How many monkeys would it take to eat 72 bananas in 72 minutes? A. 6 B. 72 C. 12 D. 18 32. A person drives with constant speed and after some time he sees a milestone with 2 digits. Then travels for 1 hours and sees the same 2 digits in reverse order. 1 hours later he sees that the milestone has the same 2 digits with a 0 between them. What is the speed of the car? A. 54.00 mph B. 45.00 mph C. 27.00 mph D. 36.00 mph 33. Fermat’s Last Theorem is a statement in number theory which states that it is impossible to separate any power higher than the second into two like powers, or, more precisely- If an integer n is greater than 2, then the equation a^n b^n = c^n has no solutions in non-zero integers a, b, and C. Now, if the difference of any two numbers is 9 and their product is 17, what is the sum of their A. 43 B. 45 C. 98 D. 115 34. Alchemy is an occult tradition that arose in the ancient Persian empire. Zosimos of Panopolis was an early alchemist. Zara, reads about Zosimos and decides to try some experiments. One day, she collects two buckets, the first containing one litre of ink and the second containing one litre of col a. Suppose she takes one cup of ink out of the first bucket and pours it into the second bucket. After mixing she takes one cup of the mixture from the second bucket and pours it back into the first bucket. Which one of the following statements holds now? A. There is more cola in the first bucket than ink in the second bucket. B. None of the statements holds true. C. There is as much cola in the first bucket as there is ink in the second bucket. D. There is less cola in the first bucket than ink in the second bucket. 
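A brute-force check of a few of the numerical answers in the set above (question numbers refer to that list); the arithmetic here is mine, added only for verification.

from fractions import Fraction

# Q7: diameter d = 4*sqrt(t - 8); a radius of 8 mm means d = 16 mm, so t = 8 + (16/4)**2 = 24 years.
print(8 + (16 / 4) ** 2)                                        # 24.0

# Q8: Paul picks Ghana with probability 2/3, matching Ghana's 2/3 chance of winning,
# so P(correct) = (2/3)*(2/3) + (1/3)*(1/3).
print(Fraction(2, 3) ** 2 + Fraction(1, 3) ** 2)                # 5/9

# Q9: count the digit 3 in the base-8 numerals 1..1000 (base 8), i.e. 1..512 in decimal.
print(sum(oct(n)[2:].count('3') for n in range(1, 0o1000 + 1)))  # 192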
IBPS CWE For PO’s Quantitative Aptitude Questions With Answers

Simple Interest

1. A sum of Rs.5984 becomes Rs.8976 in 6 years at simple interest. What is the rate of interest?
1) 12% 3) 8 1/3 % 4) 6% 5) None of these

2. What sum of money will amount to Rs.8850 in three years, the rate of interest being 5%, 6% and 7% during the 1st, 2nd and 3rd years respectively?
1) Rs.8500 2) Rs.8000 3) Rs.7800 4) Rs.8300 5) None of these

3. At what rate of interest per annum will a sum of money be 60% more in 5 years?
1) 6% 2) 5% 3) 9% 4) 12% 5) None of these

4. Out of a certain sum of money, 1/3rd is invested at 3%, 1/6th is invested at 6% and the rest at 8%. If the simple interest for 2 years from all these investments is Rs.1020, what is the original sum?
1) Rs.9500 2) Rs.7000 3) Rs.8500 4) Rs.8000 5) None of these

5. A sum of Rs.6800 is lent out in two parts in such a way that the interest on one part at 8% p.a. for 5 years is the same as on the other part at 7.5% p.a. for 6 years. Find the first part.
1) Rs.3200 2) Rs.3600 3) Rs.3300 4) Rs.1800 5) None of these

6. A sum of Rs.1400 becomes Rs.1652 in three years at a certain rate of simple interest. What will be the amount if the rate of interest is increased by 3%?
2) Rs.1748 5) None of these

7. The rate of simple interest on a certain amount of money is 6% p.a. for the first two years, 9% p.a. for the next five years and 13% p.a. for the period beyond seven years. If the total interest on a sum at the end of ten years is Rs.1920, what is the sum?
1) Rs.9500 2) Rs.10500 3) Rs.12500 4) Rs.11750 5) None of these

8. A sum was invested at simple interest at a certain rate for two years. It would have fetched Rs.180 more had it been invested at a 2% higher rate. What was the sum?
1) Rs.8500 2) Rs.4500 3) Rs.5600 4) Rs.9000 5) None of these

9. A certain sum of money lent out at a certain rate of interest per annum doubles itself in 8 years. In how many years will it treble itself?
1) 12 2) 18 5) None of these

10. A certain sum of money amounts to Rs.2016 in two years and Rs.2340 in five years at a certain rate of simple interest. Find the rate.
5) None of these

Compound Interest

11. A sum of money invested at compound interest will become Rs.6560 at the end of the first year and 676 at the end of the second year. What is the sum?
5) None of these

12. What will be the amount on Rs.25000 in 2 years at compound interest, if the rates for the successive years are 4% and 5% per year?
1) Rs.26800 2) Rs.26725 5) None of these

13. What sum will become Rs.6690 in 3 years and Rs.10035 in six years on compound interest?
5) None of these

14. On what sum is the difference between simple and compound interest for two years at 5% per annum Rs.153.60?
4) Can’t be determined 5) None of these

15. The population of a town in 2004 was 12000. What will be the population in 2007, if it increases annually at 10%?
5) None of these

16. What is the difference between simple and compound interest for three years on Rs.16000 at 10% p.a.?
3) Rs.496 4) Can’t be determined 5) None of these

17. A sum of money will become double in 3 years at compound interest. In what time will it become four times itself?
4) Can’t be determined 5) None of these

18. Priya borrowed Rs.4000 at 5% p.a. compound interest. After 2 years she repaid Rs.2210 and after 2 more years she paid the balance with interest. What was the total amount that she paid as interest?
5) None of these

19. What is the compound interest on Rs.25000 for 1 1/2 years at 8% per annum, interest being calculated half yearly?
5) None of these 20.The Compound interest on certain sum of money for the 5th year at 6% per anum is rs.583 what was the compound interest for the fourth year on same sum? 5)None of these 1)3 2)1 3)4 4)3 5)2 6)4 7)1 8)2 9)4 10)5 11)2 12)3 13)4 14)1 15)5 16)3 17)2 18)4 19)3 20)1. Click Here For More Aptitude Questions Aptitude Questions For Bank PO Exams 1. An artical sold at amount of 50% the net sale price is rs 425 .what is the list price of the artical? a) 500 b) 488 c) 480 d) 510 (Ans 500) 2. A man leaves office daily at 7pm A driver with car comes from his home to pick him from office and bring back home.One day he gets free at 5:30 and instead of waiting for driver he starts walking towards home. In the way he meets the car and returns home on car He reaches home 20 minutes earlier than usual. In how much time does the man reach home usually?? (Ans. 1hr 20min) 3. A works thrice as much as B. If A takes 60 days less than B to do a work then find the number of days it would take to complete the work if both work together? Ans. 22½days 4. How many 1’s are there in the binary form of 8*1024 + 3*64 + 3 Ans. 4 5. In a digital circuit which was to implement (A B) + (A)XOR(B), the designer implements (A B) (A)XOR(B) What is the probability of error in it ? 6. A boy has Rs 2. He wins or loses Re 1 at a time If he wins he gets Re 1 and if he loses the game he loses Re 1.He can loose only 5 times. He is out of the game if he earns Rs 5.Find the number of ways in which this is possible? (Ans. 16) 7. If there are 1024*1280 pixels on a screen and each pixel can have around 16 million colors. Find the memory required for this? (Ans. 4MB) 8. On a particular day A and B decide that they would either speak the truth or will lie. C asks A whether he is speaking truth or lying? He answers and B listens to what he said. C then asks B what A has said B says “A says that he is a liar” What is B speaking ?(a) Truth (b) Lie (c) Truth when A lies (d) Cannot be determined 9. If a man buys 1 lt of milk for Rs.12 and mixes it with 20% water and sells it for Rs.15, then what is the percentage of gain? 10. Pipe A can fill a tank in 30 mins and Pipe B can fill it in 28 mins.If 3/4th of the tank is filled by Pipe B alone and both are opened, how much time is required by both the pipes to fill the tank completely ? 11. If on an item a company gives 25% discount, they earn 25% profit. If they now give 10% discount then what is the profit percentage. (a) 40% (b) 55% (c) 35% (d) 30% (Ans. D) 12. A certain number of men can finish a piece of work in 10 days. If however there were 10 men less it will take 10 days more for the work to be finished. How many men were there originally? (a) 110 men (b) 130 men (c) 100 men (d) none of these (Ans. A) 13. In simple interest what sum amounts of Rs.1120/- in 4 years and Rs.1200/- in 5 years ? (a) Rs. 500 (b) Rs. 600 (c) Rs. 800 (d) Rs. 900 (Ans. C) 14. If a sum of money compound annually amounts of thrice itself in 3 years. In how many years will it become 9 times itself. (a) 6 (b) 8 (c) 10 (d) 12 (Ans A) 15. Two trains move in the same direction at 50 kmph and 32 kmph respectively. A man in the slower train observes the 15 seconds elapse before the faster train completely passes by him. What is the length of faster train ? (a) 100m (b) 75m (c) 120m (d) 50m (Ans B) 16. How many mashes are there in 1 squrare meter of wire gauge if each mesh is 8mm long and 5mm wide ? (a) 2500 (b) 25000 (c) 250 (d) 250000 (Ans B) 17. x% of y is y% of ? 
(a) x/y (b) 2y (c) x (d) can’t be determined Ans. C 18. The price of sugar increases by 20%, by what % should a housewife reduce the consumption of sugar so that expenditure on sugar can be same as before ? (a) 15% (b) 16.66% (c) 12% (d) 9% (Ans B) 19. A man spends half of his salary on household expenses, 1/4th for rent, 1/5th for travel expenses, the man deposits the rest in a bank. If his monthly deposits in the bank amount 50, what is his monthly salary ? (a) Rs.500 (b) Rs.1500 (c) Rs.1000 (d) Rs. 900 (Ans C) 20. The population of a city increases @ 4% p.a. There is an additional annual increase of 4% of the population due to the influx of job seekers, find the % increase in population after 2 years ? 21. The ratio of the number of boys and girls in a school is 3:2 Out of these 10% the boys and 25% of girls are scholarship holders. % of students who are not scholarship holders.? 22. 15 men take 21 days of 8 hrs. each to do a piece of work. How many days of 6 hrs. each would it take for 21 women if 3 women do as much work as 2 men? (a) 30 (b) 20 (c) 19 (d) 29 (Ans. A) 23. A man walks east and turns right and then from there to his left and then 45degrees to his right.In which direction did he go (Ans. North west) 24. A student gets 70% in one subject, 80% in the other. To get an overall of 75% how much should get in third subject. 25. A man shows his friend a woman sitting in a park and says that she the daughter of my grandmother’s only son.What is the relation between the two Ans. Daughter 26. How many squares with sides 1/2 inch long are needed to cover a rectangle that is 4 ft long and 6 ft wide (a) 24 (b) 96 (c) 3456 (d) 13824 (e) 14266 27. If a=2/3b , b=2/3c, and c=2/3d what part of d is b/ (a) 8/27 (b) 4/9 (c) 2/3 (d) 75% (e) 4/3 Ans. (b) 28. 2598Successive discounts of 20% and 15% are equal to a single discount of (a) 30% (b) 32% (c) 34% (d) 35% (e) 36 Ans. (b) 29. The petrol tank of an automobile can hold g liters.If a liters was removed when the tank was full, what part of the full tank was removed? (a)g-a (b)g/a (c) a/g (d) (g-a)/a (e) (g-a)/g (Ans. (c)) 30. If x/y=4 and y is not ‘0′ what % of x is 2x-y (a)150% (b)175% (c)200% (d)250% (Ans. (b)) 31. A cylinder is 6 cms in diameter and 6 cms in height. If spheres of the same size are made from the material obtained, what is the diameter of each sphere? (a) 5 cms (b) 2 cms (c) 3 cms (d) 4 cms (Ans C) 32. A rectangular plank (2)1/2 meters wide can be placed so that it is on either side of the diagonal of a square shown below.(Figure is not available)What is the area of the plank? ( Ans :7*(2)1/2 ) 33. What is the smallest number by which 2880 must be divided in order to make it into a perfect square ? (a) 3 (b) 4 (c) 5 (d) 6 (Ans. C) 34. A father is 30 years older than his son however he will be only thrice as old as the son after 5 years what is father’s present age ? (a) 40 yrs (b) 30 yrs (c) 50 yrs (d) none of these (Ans. A) 35. An article sold at a profit of 20% if both the cost price and selling price would be Rs.20/- the profit would be 10% more. What is the cost price of that article? 36. If an item costs Rs.3 in ‘99 and Rs.203 in ‘00.What is the % increase in price? (a) 200/3 % (b) 200/6 % (c) 100% (d) none of these (Ans. A) 37. 5 men or 8 women do equal amount of work in a day. a job requires 3 men and 5 women to finish the job in 10 days how many woman are required to finish the job in 14 days. a) 10 b) 7 c) 6 d) 12 (Ans 7) 38. A simple interest amount of rs 5000 for six month is rs 200. 
what is the anual rate of interest? a) 10% b) 6% c) 8% d) 9% (Ans 8%) 39. In objective test a correct ans score 4 marks and on a wrong ans 2 marks are —. a student score 480 marks from 150 question. how many ans were correct? a) 120 b) 130 c) 110 d) 150 (Ans130) Ans. (b) 40. What is the angle between the two hands of a clock when time is 8:30 Ans. 75(approx) 41. A student is ranked 13th from right and 8th from left. How many students are there in totality ? For More Aptitude Questions Mental Ability Sample Questions For IT Placement Test 1.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. Printer 2. Author 3. Publisher 4. Correspondent 5. Reader 2.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. FLOCK 2. CROWD 3. HERD 4. SWARM 5. TEAM 3.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. Jupiter 2. Sky 3. Star 4. Moon 5. Sun 4.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. Sofa 2. Bed 3. Diwan 4. Chair 5. Table 5.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. Cheese 2. Butter 3. Ghee 4. Milk 5. Curd 6.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. Ginger 2. Tomato 3. Carrot 4. Beet 5. Potato 7.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. Dictionary 2. Magazine 3. News paper 4. Library 5. Book 8.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. Blind 2. Lame 3. Short 4. Deaf 5. Dumb 9.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. Brigade 2. Battalion 3. Commander 4. Troop 5. Platoon 10.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. 7 2. 9 3. 11 4. 13 5. 17 11.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. 13 2. 17 3. 19 4. 23 5. 25 12.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. BY 2. DW 3. GT 4. JQ 5. LP 13.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. February 2. March 3. April 4. May 5. June 14.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. 7 – 21/3 2. 5 x 0/12 3. –3 + 36/12 4. 0 x 8/9 5. 0 + 26/13 15.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. FV 2. SH 3. JQ 4. IR 5. MN 16.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. April 2. June 3. March 4. September 5. November 17.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. January 2. October 3. August 4. June 5. 
December 18.Directions: In the following question, one word is different from the rest. Find out the word which does not belong to the group— 1. Duck 2. Cuckoo 3. Crow 4. Parrot 5. Pigeon 19. Of the following five alternatives, three are same in some way while the rest of the two are same in some other way. Write the smallest number of the group of the two alternatives. 1. 53 2. 59 3. 71 4. 56 5. 62 20.Of the following five alternatives, three are same in some way while the rest of the two are same in some other way. Write the largest number of the group of those two alternatives? 1. 23 2. 22 3. 19 4. 24 5. 36 21.Of the following five alternatives, four are same in some way while the remaining fifth is different. Find the different one— 1. 22 2. 32 3. 52 4. 36 5. 44 22.As ‘Wheel’ is related to ‘Vehicle’ similarly, What is related to ‘Clock’? 1. Hands 2. Nail 3. Stick 4. Pin 5. None of the above 23.As ‘Plateau’ is related to a ‘Mountain’, similarly ‘Bush’ is related to what? 1. Plants 2. Field 3. Forest 4. Trees 5. Stem 24.As ‘Astronomy’ is related to ‘Planets’, similarly ‘Astrology’ is related to what? 1. Satellites 2. Disease 3. Animals 4. Coins 5. None of the above 25.As ‘Earthquake’ is related to ‘Earth’, similarly ‘Thundering’ is related to what? 1. Earth 2. Sea 3. Fair 4. Sky 5. None of the above Click Here To More Aptitude Questions Topic Wise TCS Aptitude Questions and Answers Practice Paper 1. There are seventy clerks working in a company, of which 30 are females. Also, 30 clerks are married; 24 clerks are above 25 years of age; 19 married clerks are above 25 years, of which 7 are males; 12 males are above 25 years of age; and 15 males are married. How many bachelor girls are there and how many of these are above 25? 2. A man sailed off from the North Pole. After covering 2,000 miles in one direction he turned West, sailed 2,000 miles, turned North and sailed ahead another 2,000 miles till he met his friend. How far was he from the North Pole and in what direction? 3. Here is a series of comments on the ages of three persons J, R, S by themselves. S : The difference between R’s age and mine is three years. J : R is the youngest. R : Either I am 24 years old or J 25 or S 26. J : All are above 24 years of age. S : I am the eldest if and only if R is not the youngest. R : S is elder to me. J : I am the eldest R : S is not 27 years old. S : The sum of my age and J’s is two more than twice R’s age. One of the three had been telling a lie throughout whereas others had spoken the truth. Determine the ages of S,J,R. 4. In a group of five people, what is the probability of finding two persons with the same month of birth? 5. A father and his son go out for a ‘walk-and-run’ every morning around a track formed by an equilateral triangle. The father’s walking speed is 2 mph and his running speed is 5 mph. The son’s walking and running speeds are twice that of his father. Both start together from one apex of the triangle, the son going clockwise and the father anti-clockwise. Initially the father runs and the sonwalks for a certain period of time. Thereafter, as soon as the father starts walking, the son starts running. Both complete the course in 45 minutes. For how long does the father run? Where do the two cross each other? 6. The Director of Medical Services was on his annual visit to the ENT Hospital. 
While going through the out patients’ records he came across the following data for a particular day : ” Ear consultations 45; Nose 50; Throat 70; Ear and Nose 30; Nose and Throat 20; Ear and Throat 30; Ear, Nose and Throat 10; Total patients 100.” Then he came to the conclusion that the records were bogus. Was he right? 7. Amongst Ram, Sham and Gobind are a doctor, a lawyer and a police officer. They are married to Radha, Gita and Sita (not in order). Each of the wives have a profession. Gobind’s wife is an artist. Ram is not married to Gita. The lawyer’s wife is a teacher. Radha is married to the police officer. Sita is an expert cook. Who’s who? 8. What should come next? 1, 2, 4, 10, 16, 40, 64, Questions 9-12 are based on the following : Three adults – Roberto, Sarah and Vicky – will be traveling in a van with five children – Freddy, Hillary, Jonathan, Lupe, and Marta. The van has a driver’s seat and one passenger seat in the front, and two benches behind the front seats, one beach behind the other. Each bench has room for exactly three people. Everyone must sit in a seat or on a bench, and seating is subject to the following restrictions: An adult must sit on each bench. Either Roberto or Sarah must sit in the driver’s seat. Jonathan must sit immediately beside Marta. 9. Of the following, who can sit in the front passenger seat ? (a) Jonathan (b) Lupe (c) Roberto (d) Sarah (e) Vicky 10. Which of the following groups of three can sit together on a bench? (a) Freddy, Jonathan and Marta (b) Freddy, Jonathan and Vicky (c) Freddy, Sarah and Vicky (d) Hillary, Lupe and Sarah (e) Lupe, Marta and Roberto 11. If Freddy sits immediately beside Vicky, which of the following cannot be true ? a. Jonathan sits immediately beside Sarah b. Lupe sits immediately beside Vicky c. Hillary sits in the front passenger seat d. Freddy sits on the same bench as Hillary e. Hillary sits on the same bench as Roberto 12. If Sarah sits on a bench that is behind where Jonathan is sitting, which of the following must be true ? a. Hillary sits in a seat or on a bench that is in front of where Marta is sitting b. Lupe sits in a seat or on a bench that is in front of where Freddy is sitting c. Freddy sits on the same bench as Hillary d. Lupe sits on the same bench as Sarah e. Marta sits on the same bench as Vicky 13. Make six squares of the same size using twelve match-sticks. (Hint : You will need an adhesive to arrange the required figure) 14. A farmer has two rectangular fields. The larger field has twice the length and 4 times the width of the smaller field. If the smaller field has area K, then the are of the larger field is greater than the area of the smaller field by what amount? (a) 6K (b) 8K (c) 12K (d) 7K 15. Nine equal circles are enclosed in a square whose area is 36sq units. Find the area of each circle. 16. There are 9 cards. Arrange them in a 3*3 matrix. Cards are of 4 colors. They are red, yellow, blue, green. Conditions for arrangement: one red card must be in first row or second row. 2 green cards should be in 3rd column. Yellow cards must be in the 3 corners only. Two blue cards must be in the 2nd row. At least one green card in each row. 17. Is z less than w? z and w are real numbers. (I) z2 = 25 (II) w = 9 To answer the question, a) Either I or II is sufficient b) Both I and II are sufficient but neither of them is alone sufficient c) I & II are sufficient d) Both are not sufficient 18. A speaks truth 70% of the time; B speaks truth 80% of the time. 
What is the probability that both are contradicting each other? 19. In a family 7 children don’t eat spinach, 6 don’t eat carrot, 5 don’t eat beans, 4 don’t eat spinach & carrots, 3 don’t eat carrot & beans, 2 don’t eat beans & spinach. One doesn’t eat all 3. Find the no. of children. 20. Anna, Bena, Catherina and Diana are at their monthly business meeting. Their occupations are author, biologist, chemist and doctor, but not necessarily in that order. Diana just told the neighbour, who is a biologist that Catherina was on her way with doughnuts. Anna is sitting across from the doctor and next to the chemist. The doctor was thinking that Bena was a good name for parent’s to choose, but didn’t say anything. What is each person’s occupation? Click Here To More TCS Aptitude Questions and Answers Practice Papers Aptitude Questions and Answers With Explanation Aptitude Questions and Answers With Explanation 1.If point P is on line segment AB, then which of the following is always true? (1) AP = PB (2) AP > PB (3) PB > AP (4) AB > AP (5) AB > AP + PB 2. All men are vertebrates. Some mammals are vertebrates. Which of the following conclusions drawn from the above statement is correct. All men are mammals All mammals are men Some vertebrates are mammals. 3. Which of the following statements drawn from the given statements are correct? All watches sold in that shop are of high standard. Some of the HMT watches are sold in that shop. a) All watches of high standard were manufactured by HMT. b) Some of the HMT watches are of high standard. c) None of the HMT watches is of high standard. d) Some of the HMT watches of high standard are sold in that shop. 1. Ashland is north of East Liverpool and west of Coshocton. 2. Bowling green is north of Ashland and west of Fredericktown. 3. Dover is south and east of Ashland. 4. East Liverpool is north of Fredericktown and east of Dover. 5. Fredericktown is north of Dover and west of Ashland. 6. Coshocton is south of Fredericktown and west of Dover. 4. Which of the towns mentioned is furthest of the north – west (a) Ashland (b) Bowling green (c) Coshocton (d) East Liverpool (e) Fredericktown 5. Which of the following must be both north and east of Fredericktown? (a) Ashland (b) Coshocton (c) East Liverpool I a only II b only III c only IV a & b V a & c 6. Which of the following towns must be situated both south and west of at least one other town? A. Ashland only B. Ashland and Fredericktown C. Dover and Fredericktown D. Dover, Coshocton and Fredericktown E. Coshocton, Dover and East Liverpool. 7. Which of the following statements, if true, would make the information in the numbered statements more specific? (a) Coshocton is north of Dover. (b) East Liverpool is north of Dover (c) Ashland is east of Bowling green. (d) Coshocton is east of Fredericktown (e) Bowling green is north of Fredericktown 27. Which of the numbered statements gives information that can be deduced from one or more of the other statements? (A) 1 (B) 2 (C) 3 (D) 4 (E) 6 8. Eight friends Harsha, Fakis, Balaji, Eswar, Dhinesh, Chandra, Geetha, and Ahmed are sitting in a circle facing the center. Balaji is sitting between Geetha and Dhinesh. Harsha is third to the left of Balaji and second to the right of Ahmed. Chandra is sitting between Ahmed and Geetha and Balaji and Eshwar are not sitting opposite to each other. Who is third to the left of Dhinesh? 9. 
If every alternative letter starting from B of the English alphabet is written in small letter, rest all are written in capital letters, how the month “ September” be written. (1) SeptEMbEr (2) SEpTeMBEr (3) SeptembeR (4) SepteMber (5) None of the above. 10. The length of the side of a square is represented by x+2. The length of the side of an equilateral triangle is 2x. If the square and the equilateral triangle have equal perimeter, then the value of x is _______. 11. It takes Mr. Karthik y hours to complete typing a manuscript. After 2 hours, he was called away. What fractional part of the assignment was left incomplete? 12. Which of the following is larger than 3/5? (1) 1/2 (2) 39/50 (3) 7/25 (4) 3/10 (5) 59/100 13. The number that does not have a reciprocal is ____________. 14. There are 3 persons Sudhir, Arvind, and Gauri. Sudhir lent cars to Arvind and Gauri as many as they had already. After some time Arvind gave as many cars to Sudhir and Gauri as many as they have. After sometime Gauri did the same thing. At the end of this transaction each one of them had 24. Find the cars each originally had. 15. A man bought a horse and a cart. If he sold the horse at 10 % loss and the cart at 20 % gain, he would not lose anything; but if he sold the horse at 5% loss and the cart at 5% gain, he would lose Rs. 10 in the bargain. The amount paid by him was Rs._______ for the horse and Rs.________ for the cart. 16. It was calculated that 75 men could complete a piece of work in 20 days. When work was scheduled to commence, it was found necessary to send 25 men to another project. How much longer will it take to complete the work? 17. A student divided a number by 2/3 when he required to multiply by 3/2. Calculate the percentage of error in his result. 18. A dishonest shopkeeper professes to sell pulses at the cost price, but he uses a false weight of 950gm. for a kg. His gain is …%. 19. A software engineer has the capability of thinking 100 lines of code in five minutes and can type 100 lines of code in 10 minutes. He takes a break for five minutes after every ten minutes. How many lines of codes will he complete typing after an hour? 20. A man was engaged on a job for 30 days on the condition that he would get a wage of Rs. 10 for the day he works, but he have to pay a fine of Rs. 2 for each day of his absence. If he gets Rs. 216 at the end, he was absent for work for … days. 21. A contractor agreeing to finish a work in 150 days, employed 75 men each working 8 hours daily. After 90 days, only 2/7 of the work was completed. Increasing the number of men by ________ each working now for 10 hours daily, the work can be completed in time. 22. what is a percent of b divided by b percent of a? (a) a (b) b (c) 1 (d) 10 (d) 100 23. A man bought a horse and a cart. If he sold the horse at 10 % loss and the cart at 20 % gain, he would not lose anything; but if he sold the horse at 5% loss and the cart at 5% gain, he would lose Rs. 10 in the bargain. The amount paid by him was Rs._______ for the horse and Rs.________ for the cart. 24. A tennis marker is trying to put together a team of four players for a tennis tournament out of seven available. males – a, b and c; females – m, n, o and p. All team. For a team of four, all players must be able to play with each other under the following restrictions: b should not play with m, c should not play with p, and a should not play with o. 25 Which of the following statements must be false? 1. b and p cannot be selected together 2. 
c and o cannot be selected together 3. c and n cannot be selected together. players are of equal ability and there must be at least two males in the 26-28. The following figure depicts three views of a cube. Based on this, answer questions 26-28 26. The number on the face opposite to the face carrying 1 is _______ . 27 The number on the faces adjacent to the face marked 5 are _______ . 28. Which of the following pairs does not correctly give the numbers on the opposite faces. (1) 6,5 (2) 4,1 (3) 1,3 (4) 4,2 29. Five farmers have 7, 9, 11, 13 & 14 apple trees, respectively in their orchards. Last year, each of them discovered that every tree in their own orchard bore exactly the same number of apples. Further, if the third farmer gives one apple to the first, and the fifth gives three to each of the second and the fourth, they would all have exactly the same number of apples. What were the yields per tree in the orchards of the third and fourth farmers? 30. Five boys were climbing a hill. J was following H. R was just ahead of G. K was between G & H. They were climbing up in a column. Who was the second? 31-34 John is undecided which of the four novels to buy. He is considering a spy thriller, a Murder mystery, a Gothic romance and a science fiction novel. The books are written by Rothko, Gorky, Burchfield and Hopper, not necessary in that order, and published by Heron, Piegon, Blueja and sparrow, not necessary in that order. 1 (1) The book by Rothko is published by Sparrow. 2 (2) The Spy thriller is published by Heron. 3 (4)The Gothic romance is by Hopper. 31. Pigeon publishes ____________. 32. The novel by Gorky ________________. 33. John purchases books by the authors whose names come first and third in alphabetical order. He does not buy the books ______. 34. On the basis of the first paragraph and statement (2), (3) and (4) only, it is possible to deduce that 1. Rothko wrote the murder mystery or the spy thriller 2. Sparrow published the murder mystery or the spy thriller 3. The book by Burchfield is published by Sparrow. 35. If a light flashes every 6 seconds, how many times will it flash in 3/4 of an hour? 1. (4) A B Since p is a point on the line segment AB, AB > AP 2. Answer: (c) 3 Answer: (b) & (d). 4 – 27.Answer: 5. Answer: Fakis 6. Answer: Since every alternative letter starting from B of the English alphabet is written in small letter, the letters written in small letter are b, d, f… In the first two answers the letter E is written in both small & capital letters, so they are not the correct answers. But in third and fourth answers the letter is written in small letter instead capital letter, so they are not the answers. 7. Answer: x = 4 Since the side of the square is x + 2, its perimeter = 4 (x + 2) = 4x + 8 Since the side of the equilateral triangle is 2x, its perimeter = 3 * 2x = 6x Also, the perimeters of both are equal. (i.e.) 4x + 8 = 6x (i.e.) 2x = 8 -> x = 4. 8. Answer: 5 (y – 2) / y. To type a manuscript karthik took y hours. Therefore his speed in typing = 1/y. He was called away after 2 hours of typing. Therefore the work completed = 1/y * 2. Therefore the remaining work to be completed = 1 – 2/y. (i.e.) work to be completed = (y-2)/y 9. Answer: 10. Answer:1 One is the only number exists without reciprocal because thereciprocal of one is one itself. 11. Answer:Sudhir had 39 cars, Arvind had 21 cars and Gauri had 12 cars. 
Sudhir Arvind Gauri Finally 24 24 24 Before Gauri’s transaction 12 12 48 Before Arvind’s transaction 6 42 24 Before Sudhir’ s transaction 39 21 12 12. Answer: Cost price of horse: Rs. 400 & Cost price of cart: Rs. 200 Let x be the cost of horse & y be the cost of the cart. 10 % of loss in selling horse = 20 % of gain in selling the cart Therefore (10 / 100) * x = (20 * 100) * y x = 2y ———–(1) 5 % of loss in selling the horse is 10 more than the 5 % gain in selling the cart. Therefore (5 / 100) * x – 10 = (5 / 100) * y 5x – 1000 = 5y Substituting (1) 10y – 1000 = 5y 5y = 1000 y = 200 x = 400 from (1) 16. Answer: 30 days. One day work = 1 / 20 One man’s one day work = 1 / ( 20 * 75) No. Of workers = 50 One day work = 50 * 1 / ( 20 * 75) The total no. of days required to complete the work = (75 * 20) / 50 = 17. Answer: 0 % Since 3x / 2 = x / (2 / 3) 18. Answer: 5.3 % He sells 950 grams of pulses and gains 50 grams. If he sells 100 grams of pulses then he will gain (50 / 950) *100 = 19. Answer: 250 lines of codes 20. Answer: 7 days The equation portraying the given problem is: 10 * x – 2 * (30 – x) = 216 where x is the number of working days. Solving this we get x = 23 Number of days he was absent was 7 (30-23) days. 21. Answer: 150 men. One day’s work = 2 / (7 * 90) One hour’s work = 2 / (7 * 90 * 8) One man’s work = 2 / (7 * 90 * 8 * 75) The remaining work (5/7) has to be completed within 60 days, because the total number of days allotted for the project is 150 days. So we get the equation (2 * 10 * x * 60) / (7 * 90 * 8 * 75) = 5/7 where x is the number of men working after the 90th day. We get x = 225 Since we have 75 men already, it is enough to add only 150 men. 22. Answer: (c) 1 a percent of b : (a/100) * b b percent of a : (b/100) * a a percent of b divided by b percent of a : ((a / 100 )*b) / (b/100) * a ))= 1 23. Answer:Cost price of horse = Rs. 400 & the cost price of cart = 200. Let x be the cost price of the horse and y be the cost price of the cart. In the first sale there is no loss or profit. (i.e.) The loss obtained is equal to the Therefore (10/100) * x = (20/100) * y X = 2 * y —————–(1) In the second sale, he lost Rs. 10. (i.e.) The loss is greater than the profit by Rs. 10. Therefore (5 / 100) * x = (5 / 100) * y + 10 ——-(2) Substituting (1) in (2) we get (10 / 100) * y = (5 / 100) * y + 10 (5 / 100) * y = 10 y = 200 From (1) 2 * 200 = x = 400 24. Answer: Since inclusion of any male player will reject a female from the team. Since there should be four member in the team and only three males are available, the girl, n should included in the team always irrespective of others selection. Click Here To More Aptitude Questions Practice Papesr Aptitude Questions For Placements 1. The difference between the local value and face value of 7 in the numeral 667903 is: (a) 0 (b) 7896 (c) 6993 (d) 903 2. The difference between the place values of 7 and 3 in the number 527436 is : (a) 4 (b) 5 (4) 45 (d) 6970 3. The sum of the smallest six-digit number and the gretest five digit number is : (a)199999 (b) 201110 (c) 211110 (d) 1099999 4. lf the largest three digit number is subtracted from the smallest five—dlgit number then the remainder is : (a) 1 (b) 9000 (c) 9001 (dl 90001 5. 5978 + 8134 + 7014 – ? (a) 18236 (b) 19126 (c) 19216 (d) 19226 6. 18266 + 2736 + 413% = ? (ul 81329 (bl 62239 (cl 62319 (dl 62339 7. 39798 + 3798 + 378 = ? a) 43576 (b) 43974 (c) 43984 (d) 49532 8. 9358 – 0014 + 3127 =? (a) 6381 (b) 6471 (c) 6561 (d) 6741 9. 9572 – 4018 – 2164 = ? 
(a) 3300 (b) 8390 (c) 8570 (d) 7718 10. 7589 – ? = 3434 (a) 721 (b) 3246 (c) 4155 (d) 11023 11. 9548 + 7314 = 8362 + ? (a) 8230 (b) 8410 (c) 8500 (d) 8600 12. 7845 – ? = 8461 – 3569 (a) 2593 (b) 2773 (c) 3569 (d) None of these 13. 3573 + 5729 – ?486 = 5821 (a) 1 (b) 2 (c) 3 (d) None of these 14. If 6x43 – 46y9 = 1904, which of the following should come in place of x? (a) 4 (b) 6 (c) 9 (d) Cannot be determined (e) None of these 15. What should be the maximum value of B in the following equation: 5A9 – 7B2 + 9C6 = 823? (a) 6 (b) 6 (c) 7 (d) 9 16. In the following sum, ? stands for which digit? (a) 4 (b) 6 (c) 8 (d) 9 17. 5358 x 51 = ? (a) 273258 (b) 273268 (c) 273348 (d) 273858 18. 360 x 17 = ? (a) 5120 (b) 5320 (c) 6120 (d) 6130 19. 587 x 999 = ? (a) 586413 (b) 587523 (c) 614823 (d) 615173 20. 469157 x 9999 = ? (a) 4688970848 (b) 4886970748 (c) 4091100848 (d) 584649125 21. 9756 x 99999 = ? (a) 796491244 (b) 816491244 (c) 875591244 (d) None of these 22. The value of 112 x 5^4 is (a) 6700 (b) 70000 (c) 76500 (d) 77200 23. 965421 x 625 = ? (a) 575648125 (b) 584838125 (c) 584649125 (d) 585628125 24. 12846 x 593 + 12846 x 407 = ? (a) 12848000 (b) 14203706 (c) 24038606 (d) 24064000 25. 1014 x 986 = ? (a) 998804 (b) 998814 (c) 998904 (d) 999804 26. 1307 x 1307 = ? (a) 1601249 (b) 1607249 (c) 1701249 (d) 11011249 27. 1399 x 1399 = ? (a) 1687401 (b) 1901641 (c) 1943211 (d) 1957201 28. 106 x 106 + 94 x 94 = ? (a) 20032 (b) 20072 (c) 21082 (d) 28082 29. 217 x 217 + 183 x 183 = ? (a) 79698 (b) 80578 (c) 80698 (d) 81268 30. 12345679 x 72 is equal to: (a) 88888888 (b) 988888888 (c) 898989898 (d) 999999998

General Aptitude Sample Questions | GA Sample Questions | General Aptitude (GA) Solved Questions 1) Stock options for employees are the latest step in progression from management ownership to employee ownership. Employee ownership can save loss-making companies. From the following statements, choose the one which, if true, does NOT provide support for the claim above. (a) Employee owned companies generally have higher productivity (b) Employee participation in management raises morale (c) Employee ownership tends to drive up salaries (d) Employee ownership enables workers to share in company profits 2) If log8 3 = 0.5283 and log8 5 = 0.7740, then what is the value of log8 45? (a) 1.6553 (b) 1.8306 (c) 3.8066 (d) 0.8178 3) The following represents the summation of two numbers where X, Y and Z represent distinct digits among 0, 1, 2, …, 9. What does X represent? (a) 6 (b) 7 (c) 8 (d) 9 4) Four places A, B, C and D are situated in a city as follows: B is situated due east of A at a distance of 6 km. C can be reached from B by travelling 2 km due east and then 4 km due north. D is situated due west of C and is at equal distance from A and B. What is the distance between A and D? (a) 3.5 km (b) 4 km (c) 4.5 km (d) 5 km 5) Any government officer who allows bribery to flourish must be subject to ________ . (a) stringency (b) stricture (c) vagary (d) mockery Answers to Sample Questions: 1) (c) 2) (b) 3) (c) 4) (d) 5) (b)

Bank Clerk Exam Quantitative Aptitude Solved Questions | Quantitative Aptitude Solved Questions For Bank Clerk Exam Quantitative Aptitude Solved Questions 1. A square garden has fourteen posts along each side at equal intervals. Find how many posts there are in all four sides: (a) 56 (b) 52 (c) 44 (d) 60 2. Average age of students of an adult school is 40 years.
120 new students whose average age is 32 years joined the school. As a result the average age is decreased by 4 years. Find the number of students of the school after joining of the new students: (a) 1200 (b) 120 (c) 360 (d) 240 3. When Rs 250 added to 1/4th of a given amount of money makes it smaller than 1/3rd of the given amount of money by Rs 100. What is the given amount of money? (a) Rs 350 (b) Rs 600 (c) Rs 4200 (d) Rs 3600 4. Find the least number of candidates in an examination so that the percentage of successful candidates should be 76.8%: (a) 500 (b) 250 (c) 125 (d) 1000w w w.w a y 2 f r e s h e r s . c o m 5. The number of times a bucket of capacity 4 litres to be used to fill up a tank is less than the number of times another bucket of capacity 3 litres used for the same purpose by 4. What is the capacity of the tank? (a) 360 litres (b) 256 litres (c) 48 litres (d) 525 litres 6. a hostel. One day some students were absent as a result, the quantity of rice has been spent in the ratio of 6 : 5. How many students were present on that day? (a) 24 (b) 20 (c) 15 (d) 25 w a y 2 f r e s h e r s . c o m 7. The ratio of daily wages of two workers is 4 : 3 and one gets daily Rs 9 more than the other, what are their daily wages? (a) Rs 32 and Rs 24 (b) Rs 60 and Rs 45 (c) Rs 80 and Rs 60 (d) Rs 36 and Rs 27 8. Find the ratio of purchase price and sell price if there is loss of 12 1/2%. (a) 7 : 8 (b) 8 : 7 (c) 2 : 25 (d) 25 : 2 w w w.w a y 2 f r e s h e r s . c o m 9. 0.8 portion of a tank is filled with water. If 25 litres of water is taken out from the tank, 14 litres of excess water over the half filled up tank remains in it. Find the capacity of the tank. (a) 100 litres (b) 130 litres (c) 200 litres (d) 150 litres 10. The ratio of ages of two persons is 4 : 7 and one is 30 years older than the other. Find the sum of their ages. (a) 210 years (b) 110 years (c) 90 years (d) 140 years 11. The ratio of the age of a gentleman and his wife is 4 : 3. After 4 years this ratio will be 9 : 7. If at the time of their marriage the ratio was 5 : 3, how many years ago they were married? (a) 10 years (b) 8 years (c) 12 years (d) 15 yearsw w w.w a y 2 f r e s h e r s . c o m 12. Sum of two numbers prime to each other is 20 and their L.C.M. is 99. What are the numbers? (a) 8 and 12 (b) 14 and 6 (c) 19 and 1 (d) 11 and 9 13. Find the greatest number which on dividing 107 and 120 leaves remainders 5 and 1 respectively. (a) 25 (b) 6 (c) 9 (d) 17 14.. Express Rs 25 as percentage of Rs 75: (a) 3% (b) 30% (c) . .3 % (d) 33.3% 14. The sum of the present age of the father and his daughter is 42 years. 7 years later, the father will be 3 times old than the daughter. The present age of the father is: (a) 35 (b) 28 (c) 32 (d) 33 15. 42 oranges are distributed among some boys and girls. If each boy gets 3 then each girl gets 6. But if each boy gets 6 and each girl gets 3, it needs 6 more. The number of girls is: (a) 4 (b) 6 (c) 8 (d) 10 16. An alloy of zinc and copper contains the metals in the ratio 5 : 3. The quantity of zinc to be added to 16 kg of the alloy so that the ratio of the metal may be 3 : 1 is: (a) 2 kg (b) 4 kg (c) 3 kg (d) 8 kg Aptitude Questions For IT Companies Placements Exams | Aptitude Questions For Competitive Exams | Aptitude Solved Questions Aptitude Questions For Competitive Exam and IT Placements Solve the following and check with the answers given at the end. 1. It was calculated that 75 men could complete a piece of work in 20 days. 
When work was scheduled to commence, it was found necessary to send 25 men to another project. How much longer will it take to complete the work? 2. A student divided a number by 2/3 when he required to multiply by 3/2. Calculate the percentage of error in his result. 3. A dishonest shopkeeper professes to sell pulses at the cost price, but he uses a false weight of 950gm. for a kg. His gain is …%. 4. A software engineer has the capability of thinking 100 lines of code in five minutes and can type 100 lines of code in 10 minutes. He takes a break for five minutes after every ten minutes. How many lines of codes will he complete typing after an hour? 5. A man was engaged on a job for 30 days on the condition that he would get a wage of Rs. 10 for the day he works, but he have to pay a fine of Rs. 2 for each day of his absence. If he gets Rs. 216 at the end, he was absent for work for … days. 6. A contractor agreeing to finish a work in 150 days, employed 75 men each working 8 hours daily. After 90 days, only 2/7 of the work was completed. Increasing the number of men by ________ each working now for 10 hours daily, the work can be completed in time. 7. what is a percent of b divided by b percent of a? (a) a (b) b (c) 1 (d) 10 (d) 100 8. A man bought a horse and a cart. If he sold the horse at 10 % loss and the cart at 20 % gain, he would not lose anything; but if he sold the horse at 5% loss and the cart at 5% gain, he would lose Rs. 10 in the bargain. The amount paid by him was Rs._______ for the horse and Rs.________ for the cart. 9. A tennis marker is trying to put together a team of four players for a tennis tournament out of seven available. males – a, b and c; females – m, n, o and p. All players are of equal ability and there must be at least two males in the team. For a team of four, all players must be able to play with each other under the following restrictions: b should not play with m, c should not play with p, and a should not play with o. Which of the following statements must be false? 1. b and p cannot be selected together 2. c and o cannot be selected together 3. c and n cannot be selected together. 10. Five farmers have 7, 9, 11, 13 & 14 apple trees, respectively in their orchards. Last year, each of them discovered that every tree in their own orchard bore exactly the same number of apples. Further, if the third farmer gives one apple to the first, and the fifth gives three to each of the second and the fourth, they would all have exactly the same number of apples. What were the yields per tree in the orchards of the third and fourth farmers? 11. Five boys were climbing a hill. J was following H. R was just ahead of G. K was between G & H. They were climbing up in a column. Who was the second? Click Here To Download More Questions and Answers
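As a worked illustration (mine, not part of the question bank) of the first problem above — the same 75-men problem whose solution in the answer key earlier on this page is cut off mid-calculation — the man-days bookkeeping goes like this:

Total work = 75 men x 20 days = 1500 man-days.
After 25 men are withdrawn, 50 men remain, so the time needed = 1500 / 50 = 30 days.
The job therefore takes 30 - 20 = 10 days longer than originally planned.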
{"url":"http://way2freshers.com/type/aptitude","timestamp":"2014-04-25T09:17:44Z","content_type":null,"content_length":"110003","record_id":"<urn:uuid:8aae6367-f233-46bf-a1da-6e3f1bdf9000>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
Chronology for 1AD to 500

About 1AD: Chinese mathematician Liu Hsin uses decimal fractions.
About 20: Geminus writes a number of astronomy texts and The Theory of Mathematics. He tries to prove the parallel postulate.
About 60: Heron of Alexandria writes Metrica (Measurements). It contains formulas for calculating areas and volumes.
About 90: Nicomachus of Gerasa writes Arithmetike eisagoge (Introduction to Arithmetic), which is the first work to treat arithmetic as a separate topic from geometry.
About 110: Menelaus of Alexandria writes Sphaerica, which deals with spherical triangles and their application to astronomy.
About 150: Ptolemy produces many important geometrical results with applications in astronomy. His version of astronomy will be the accepted one for well over one thousand years.
About 250: The Maya civilization of Central America uses an almost place-value number system to base 20.
Diophantus of Alexandria writes Arithmetica, a study of number theory problems in which only rational numbers are allowed as solutions.
By using a regular polygon with 192 sides, Liu Hui calculates the value of π as 3.14159, which is correct to five decimal places.
Iamblichus writes on astrology and mysticism. His Life of Pythagoras is a fascinating account.
Pappus of Alexandria writes Synagoge (Collections), which is a guide to Greek geometry.
Theon of Alexandria produces a version of Euclid's Elements (with textual changes and some additions) on which almost all subsequent editions are based.
About 400: Hypatia writes commentaries on Diophantus and Apollonius. She is the first recorded female mathematician and she distinguishes herself with remarkable scholarship. She becomes head of the Neo-Platonist school at Alexandria.
Proclus, a mathematician and Neo-Platonist, is one of the last philosophers at Plato's Academy at Athens.
About 460: Zu Chongzhi gives the approximation 355/113 to π, which is correct to 6 decimal places.
Aryabhata I calculates π to be 3.1416. He produces his Aryabhatiya, a treatise on quadratic equations, the value of π, and other scientific problems.
About 500: Metrodorus assembles the Greek Anthology consisting of 46 mathematical problems.
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Chronology/1AD_500.html","timestamp":"2014-04-18T18:18:36Z","content_type":null,"content_length":"17829","record_id":"<urn:uuid:945dd755-bd1a-4afd-b719-8ecf52ca3677>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the sum of 'N' natural numbers

/* Write a C program to find the sum of the first N natural numbers */
#include <stdio.h>

int main(void)
{
    int i, N, sum = 0;

    printf("Enter an integer number\n");
    scanf("%d", &N);

    /* add 1, 2, ..., N one term at a time */
    for (i = 1; i <= N; i++)
        sum = sum + i;

    printf("Sum of first %d natural numbers = %d\n", N, sum);
    return 0;
}

Output

Enter an integer number
10
Sum of first 10 natural numbers = 55

Enter an integer number
50
Sum of first 50 natural numbers = 1275

Another straightforward program which uses a for loop to step through the numbers. The sum variable is initialized to zero and the for loop adds each number to it, after which the result is printed. The other way to write this program is to use the formula for the sum of the first n natural numbers, n(n+1)/2.

2 comments:

sri said... good n easy code

Nesha said... Do you know how I can do exactly the same program, except that I want to print the method of finding the sum? For example, N=4
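As the post notes, the same result can also be obtained directly from the closed-form formula n(n+1)/2. Below is a minimal sketch of that variant (it is not from the original post; the structure and variable names are my own). Using a wider type for the product guards against overflow for larger N:

#include <stdio.h>

int main(void)
{
    int N;

    printf("Enter an integer number\n");
    if (scanf("%d", &N) != 1 || N < 0)
        return 1;                      /* reject bad or negative input */

    /* closed form: 1 + 2 + ... + N = N * (N + 1) / 2 */
    long long sum = (long long)N * (N + 1) / 2;

    printf("Sum of first %d natural numbers = %lld\n", N, sum);
    return 0;
}

For N = 10 this prints 55 and for N = 50 it prints 1275, matching the loop version above.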
{"url":"http://cexamples.blogspot.com/2008/07/find-sum-of-n-natural-numbers.html","timestamp":"2014-04-20T16:09:34Z","content_type":null,"content_length":"40792","record_id":"<urn:uuid:6922eda7-564b-45d6-950e-7544839cd0cc>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Baseball Basics: Can you explain the term OPS?

Can you explain the term OPS? This is a question I get asked about all the time, as the term has become more used in everyday baseball lingo. So, I will do my best to explain OPS and why it has become so beneficial.

Definition of OPS: OPS is the abbreviation for the batting stat on-base plus slugging, which is the sum of a hitter's on-base percentage (OBP) and slugging percentage (SLG). Since a batter's success is mainly attributed to hitting for power (SLG) and getting on base (OBP), OPS is the statistical answer that combines both. Ultimately, it is like killing two birds with one stone.

OPS formula breakdown: OPS = OBP + SLG, where OBP = (hits + walks + hit-by-pitch) / (at-bats + walks + hit-by-pitch + sacrifice flies) and SLG = total bases / at-bats.

Debate about OPS: The debate about OPS is how accurately it actually measures a player's offensive worth, because the formula weighs OBP and SLG equally. OBP gives a more accurate measurement of a hitter's ability to score runs than SLG does. This is reflected in the fact that, on average, a player's SLG number is always much higher than his OBP. There is also the human element of whether a player comes up big in critical situations. For example, if a hitter's overall stats are only league average but he consistently excels in game-changing or pressure situations, it adds more to his value. By comparison, a hitter with a slightly higher OBP who does not get on base in big-game circumstances would inevitably be less valuable.

Personally, I see OPS as a better judge of hitters' overall abilities than batting average (AVG). The reason is that OPS measures both a player's ability to hit the ball and his ability to get safely on base.

Comments

1. According to Fangraphs: If you have the choice, use Weighted On-Base Average (wOBA) instead of OPS. OPS weighs both OBP and SLG% the same, while wOBA accounts for the fact that OBP is actually more valuable for scoring. Since it provides context and adjusts for park and league effects, OPS+ is better to use than straight OPS, especially if you're comparing statistics between seasons.

2. Well, what is the difference? Everyone talks batting average, which is not a good tool to measure a batter by. At least OPS incorporates OBP and SLG, which gives a more accurate account. Not foolproof, but better, at least in my opinion.

3. I just explained the difference but yes, OPS is better than BA. But I thought you were reaching out to the more educated baseball fan. And it's foolproof.

4. I understand what you are saying but the section is called "Baseball Basics" and it is there for readers to email me questions that they have about baseball. When I came up with the section it was intended for my friends who are "newbie" baseball fans, so the first few posts are about the very basics of the game. And from there it has become just a Q & A section…..for anyone who has a question. The questions tend to NOT come from educated baseball fans.
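For readers who prefer to see the arithmetic spelled out, here is a small, self-contained C sketch of the OPS calculation using the standard definitions quoted above. The struct, the function names, and the sample numbers are all hypothetical illustrations, not data from the article:

#include <stdio.h>

/* Hypothetical counting stats for one batter. */
struct BattingLine {
    int hits, walks, hbp, at_bats, sac_flies, total_bases;
};

/* OBP = (H + BB + HBP) / (AB + BB + HBP + SF) */
static double obp(const struct BattingLine *b)
{
    return (double)(b->hits + b->walks + b->hbp) /
           (b->at_bats + b->walks + b->hbp + b->sac_flies);
}

/* SLG = total bases / AB */
static double slg(const struct BattingLine *b)
{
    return (double)b->total_bases / b->at_bats;
}

int main(void)
{
    /* Made-up example numbers, only to show the arithmetic. */
    struct BattingLine b = { 180, 70, 5, 550, 5, 320 };
    printf("OBP = %.3f, SLG = %.3f, OPS = %.3f\n",
           obp(&b), slg(&b), obp(&b) + slg(&b));
    return 0;
}

On the made-up line above this prints OBP = 0.405, SLG = 0.582, OPS = 0.987.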
{"url":"http://ladylovespinstripes.com/baseball-basics-explain-term-ops/","timestamp":"2014-04-16T21:53:57Z","content_type":null,"content_length":"72299","record_id":"<urn:uuid:0ae6617b-2df5-402d-860d-31b576e66980>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Number of isolated equilibria

April 16th 2012, 03:59 AM #1
Suppose that for a dynamical system $\dot x = f(x)$, $x \in \mathbb R^n$, there exists a finite number of isolated equilibria, all of them locally stable (i.e. the eigenvalues of the associated Jacobian at each equilibrium have negative real parts). My question is: can the number of equilibria in the statement above exceed one? (Sorry if it is a trivial question.)

April 16th 2012, 05:45 AM #2
Re: Number of isolated equilibria
No. If p and q are two locally stable equilibria, there must exist a locally unstable equilibrium somewhere "between" them (lying on some curve from one to the other).

April 16th 2012, 06:30 AM #3
Re: Number of isolated equilibria
Thanks for participating, but can you provide a proof or a reference to a proof?
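A standard one-dimensional illustration of the claim in reply #2 (this example is mine, not from the thread, and it only covers the case $n = 1$; the general $n$-dimensional statement is not proved in the thread): for $\dot x = f(x) = x - x^3$ the equilibria are $x = -1, 0, 1$. Since $f'(x) = 1 - 3x^2$, we get $f'(\pm 1) = -2 < 0$, so $x = \pm 1$ are both locally stable, while $f'(0) = 1 > 0$, so the equilibrium lying between the two stable ones is unstable. In one dimension this pattern is forced: just to the right of a stable equilibrium $f$ is negative and just to the left of the next stable equilibrium $f$ is positive, so $f$ must cross zero in between at a point where it is non-decreasing, i.e. at an equilibrium that is not stable.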
{"url":"http://mathhelpforum.com/differential-equations/197377-number-isolated-equilibria.html","timestamp":"2014-04-16T10:45:47Z","content_type":null,"content_length":"35306","record_id":"<urn:uuid:fd9b6e24-cf6d-48c7-9a0f-ec8087c47118>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
Famous Theorems of Mathematics/Four color theorem Although technically the four color theorem has been proven, for some – professionals and amateurs alike – attempting to discover a more elegant solution to the Four Color Theorem is an engrossing pastime. In theory nothing more than a pencil, some paper, and some thought should be required. • It is easier to work with graphs than maps. Maps are equivalent to their duals, which are planar graphs. Each region corresponds to a vertex and borders between regions correspond to edges. Borders without length are disregarded. • On the other hand, working with cubic maps, we can switch prove a 3-coloring of the edges, which is equivalent to a 4-coloring of the areas. • It is assumed that it will be easier to prove the Four Color Theorem for fully-triangulated planar graphs. A planar graph which is not fully-triangulated (i.e. it is missing some edges) can easily be made fully-triangulated by adding some edges. Then, after a 4-coloring is found, those additional edges can be removed. However, if you are attempting to solve the Four Color Theorem as a route to solving P = NP, note that it is the finding of a 3-coloring of a [possibly non-fully-triangulated] graph – or proving that such a 3-coloring does not exist – that is hard. • The dual of a fully-triangulated planar graph is a cubic planar graph. Cubic means that every vertex has 3 edges. However, not every cubic planar graph has a Hamiltonian cycle – if it did, it would prove the Four Color Theorem. For examples of cubic planar graphs without a hamiltonian see: Tutte's counter-example with 46 vertices and 60 edges; Kozyrev and Grinberg's counter-example with 44 vertices; and Lederberg's or Barnette and Bosák's counter-example with 38 vertices. • We can prove that we need not discuss graphs with a connectivity lower than 4. According to Tutte, planar graphs with a connectivity of 4 or higher have a Hamiltonian cycle (of the nodes regarding the graph, or of the areas regarding the map). A Hamiltonian cycle provides us with a trivial way to impose a linear order on our nodes (or areas). Here are some of the common pitfalls for the beginner: • Not every graph can be generated by starting with a triangle and adding 1 vertex and 3 edges i.e. keeping the graph fully triangulated as each vertex is added. The smallest graph with minimum degree 4 has 6 vertices. The smallest graph with minimum degree 5 has 12 vertices. • To generate *every* graph, work it the other way around : in every graph you may eliminate a node by contracting two neighboring nodes in to a single node along their connecting edge. Inversely, we can create every graph by expanding a node into two neighboring ones. The Four Color Theorem is equivalent to finding a Bi-BiPartite solution. In other words it is sufficient to separate the vertexes into just 2 sets – if each set is itself BiPartite. This is actually a simplification because it is clear what makes a graph BiPartite: no odd cycles. If the graph can be decomposed into 2 subgraphs such that each subgraph has only even cycles or no cycles, a solution has been found. The Four Color Theorem can be equivalent to finding a solution to an equation or a set of equations and may be easier to work with in matrix form. The simplest equation is of the form (A-B)(A-C)... != 0 where A,B,C,... represent the colors of the vertexes. If 2 vertexes are adjacent, then their values are subtracted. It thus becomes a 'satisfiability problem' in that values for A,B,C... 
must be found that 'satisfy' the inequality. If two adjacent vertexes have the same color, then their difference is 0 and the entire equation yields 0. Inequalities are not easy to solve using matrix. To prove an equation representing a 4-coloring or an equivalent assertion, we might have to find a (matrix) equation describing planarity, our main premise. Minimum Counter Example to the Four Color TheoremEdit The Four Color Theorem (4CT) essentially says that "the vertices of a planar graph may be colored with no more than four different colors." A graph is a set of points (called vertices) which are connected in pairs by rays called edges. In a complete graph, all pairs are connected by an edge. In a planar graph only those pairs whose edges do not cross or intersect are connected. A complete planar graph is one which has exactly $3\cdot n-6$ edges. [$n$ is the number of vertices] Vertices that are connected by an edge are called adjacent vertices Vertices that are not connected are called disjoint vertices. The 4CT implies that adjacent vertices cannot have the same color; and that four colors are sufficient to meet this condition. HYPOTHESIS: If the 4CT is true for all fully triangulated planar graphs (FTPG), then it is true for all planar graphs. Therefore, it is necessary only to prove the 4CT for FTPG's! The degree of a vertex is the number of vertices with which it shares an edge. It is generally agreed that every vertex in a 5-chroma planar graph must have degree $\ge 5$. [A 5-chroma planar graph is a graph that cannot be properly colored with less than five colors. In a "properly" colored graph, no two adjacent vertices will have the same color.] If the 4CT is false, then there must be 5-chroma planar graphs. If such graphs exist, it should be possible to remove vertices and edges from such a graph until the smallest possible 5-chroma graph remains. This is a lot of work, so someone else has already done this for us. This "smallest possible" graph is called a "minimal counter-example to the Four Color Theorem"; more conveniently, MCE/ One way of proving the 4CT to be true is to prove that every graph thought to be an MCE/4CT is really not! An MCE/4CT is by definition is a 5-chroma simple loopless planar graph and by choice, a FTPG. If any vertex is removed from a true MCE/4CT, then a 4-colorable graph will always result. A proposed MCE/4CT is not a true MCE/4CT if it can be shown that there is a least one smaller graph; i.e., one with fewer vertices, that is also 5-chroma. This is a difficult task, but it is not complex. Complex in the sense that there are many configurations to analyze. It is necessary to analyze only one configuration. This is the subgraph of the MCE /4CT which consists of a vertex of degree = 5 and its 5 neighboring vertices. Every MCE/4CT has at least one such vertex. Figure 1 below is a typical subgraph of a fully triangulated planar graph |\ /| | \ / | \ | / Figure 1 where the central vertex v_0 has degree = 5. Every vertex with degree = 5 in a FTPG will have exactly the same local graph. For convenience let graph G be a proposed MCE/4CT. Then Chi(G) = 5; i.e., G is 5-chroma. If v_0 is removed as in Figure 2, | | | | \ / \ / Figure 2 then Chi(G-v) must always be <5! [(G-v) is the subgraph of G that remains after vertex v_0 has been removed.] HYPOTHESIS: There is only one vertex coloring pattern for Figure 2 that will allow Chi(G-v) to equal 4 and at the same time insure that Chi(G) is always greater than 4 when v_0 is restored. The pattern is C1-C2-C3-C4-C3. 
Figure 2 must have a 4-coloring. This is necessary if (G-v) is 4-colorable and G is not 4-colorable. A 3-coloring of Figure 2 would not require G to be 5-chroma. Figure 3, shows a typical 4-coloring of the vertices of the graph in Figure 2. R G | | | | B 5 3 B \ / \ / Figure 3 Usually, Figure 3 can be 3-colored But in this case,the configuration of (G-v) is such that a normal 3-coloring is not possible! None of the configurations of(G-v) are known. It is possible that they are also "unknowable"? Yet is essential that we acknowledge that they may exist! Here's a new approach! Because G is a FTPG, its dual is a cubic planar graph (CPG); i.e., C_g. If G is not 4-C, then C_g cannot be bridgeless! Now consider graph (G-v). It is not fully triangulated. But it can be fully triangulated by the addition of two edges. A typical triangulation is shown in Figure 4. |\ /| | \ / | | \ / | Figure 4. Now graph (G-v) is fully triangulated and its dual is a cubic planar graph; i.e., graph C_v. The a priori presumption is that C_v is bridgeless, and that C_G is not bridgeless!. If you want to disprove the existence of a minimal counter example, you may also concentrate on the "minimal" part. If counter examples exist, there has to be a minimal one. Now if we can prove that every supposed minimal counter example can be used to construct a smaller counter example, we disprove the existence of counter examples at all. Four Color Theorem for Maximal Planar Graphs [MPG]Edit The 4CT for MPGs is a sub theorem for the 4CT for all planar graphs. It simply states that all MPG's are 4-colorable. The advantage of MPGs is that certain statements can be made about their structure; i.e., the way that they are drawn on the page. One convenient way to draw a MPG is to start with a single vertex (V). V is drawn surrounded by 5 or more adjacent vertices. These neighbors of V form a closed ring called a "cycle" graph. This ring is in turn surrounded by a second cycle formed by some or all of the neighbors of the vertices in the first ring. A third closed ring encloses the second ring, etc. This continues until all vertices have been used. The final ring will have only three vertices. Only if the graph is maximal, will it be certain that all of the rings are closed. The concentric ring configuration is a concept that is not easy to prove! The main problem with this ring configuration is not that rings may not be closed, but that rings may be short-circuited, in which case there's not a single "next" ring, but multiple next *rings*. If there is no single outermost ring, contraction to a single node doesn't work and there's no necessity for a single final color. HYPOTHESIS: The colors of each ring are determined by the colors of the next external ring. HYPOTHESIS. It requires 4 colors in one ring to force 4 colors on the next internal ring. For any ring in a MPG there must be four coloring of MPG in which the external ring is 3-colorable. Otherwise the MPG can be used to construct counterexample to 4CT. Just join the nodes of the external ring to another new node. If these hypotheses are true, then the 4CT for MPGs is true. Rationale: The outermost ring can have only three colors. Therefore, none of the rings can have more than three colors! Last modified on 6 March 2011, at 02:45
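Since the discussion above is entirely about whether the vertices of a planar graph admit a proper 4-coloring, a small brute-force sketch may help make the object concrete. This is my own illustration, not part of the Wikibooks text, and exhaustive backtracking like this says nothing about the theorem itself — it merely checks one small example graph (a wheel on five vertices, chosen arbitrarily):

#include <stdio.h>

#define N 5            /* number of vertices in the example graph */
#define COLORS 4       /* number of available colors */

/* Adjacency matrix of a small planar graph: vertex 0 is joined to
   every vertex of the 4-cycle 1-2-3-4 (a wheel). */
static const int adj[N][N] = {
    {0,1,1,1,1},
    {1,0,1,0,1},
    {1,1,0,1,0},
    {1,0,1,0,1},
    {1,1,0,1,0},
};

static int color[N];   /* 0 means "not yet colored" */

/* Try to extend a proper coloring of vertices 0..v-1 to vertex v. */
static int solve(int v)
{
    if (v == N)
        return 1;                       /* all vertices colored */
    for (int c = 1; c <= COLORS; c++) {
        int ok = 1;
        for (int u = 0; u < v; u++)
            if (adj[v][u] && color[u] == c)
                ok = 0;                 /* a neighbor already uses c */
        if (ok) {
            color[v] = c;
            if (solve(v + 1))
                return 1;
            color[v] = 0;               /* backtrack */
        }
    }
    return 0;
}

int main(void)
{
    if (solve(0)) {
        for (int v = 0; v < N; v++)
            printf("vertex %d -> color %d\n", v, color[v]);
    } else {
        printf("no coloring with %d colors found\n", COLORS);
    }
    return 0;
}

Backtracking is exponential in the worst case; the point of the Four Color Theorem is that for planar graphs a 4-coloring is guaranteed to exist, not that one is easy to search for.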
{"url":"http://en.m.wikibooks.org/wiki/Famous_Theorems_of_Mathematics/Four_color_theorem","timestamp":"2014-04-20T13:22:04Z","content_type":null,"content_length":"26530","record_id":"<urn:uuid:cbda04f1-60c3-4521-9527-4f4f500be34a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
"The Hidden Reality" - an Interview with Brian Greene"The Hidden Reality" - an Interview with Brian Greene As I type this, I am aware that, somewhere in the cosmos, another me may be typing an introduction to his interview with the Brian Greene of his respective universe. In-fact, there may be an infinite number of us, typing away – perhaps in precisely the same way – perhaps slightly differently. The math suggests that we are not alone, and our universes may not be alone either. Parallel universes and the theories that predict their existence are the subjects of Brian Greene’s latest book, The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos. Greene is a professor of physics and mathematics at Columbia University, and is recognized for a number of groundbreaking discoveries in his field of superstring theory. His previous books include The Fabric of the Cosmos and The Elegant Universe, a finalist for the Pulitzer Prize and the inspiration for the companion Emmy and Peabody award-winning NOVA series “The Elegant Universe.” Todd: Can you talk about when you first knew you wanted to be a physicist? Brian: Sure. I was a teenager, and I had one of those moments – I think most teenagers do – where you begin to wonder why am I here. What’s it all about? What’s the point of it all, that kinda thing. The immediate next thought for me was that since we humans have been around for a while, people must have been thinking about these very questions for, in some sense, hundreds or thousands of years. So if people have been thinking about this for thousands of years, if there were an answer, we would have it. The fact that we didn’t have an answer led me to suspect that maybe the attempt to find an answer is misguided at some level. Maybe what we really need to be doing is trying to understand the question better. Not why am I here, but how is it that I’m here? How is it that there is a universe? How is it that a universe could give rise to the stars and galaxies, planets and people that we That really led me to sort of shift my focus to trying to understand in some sense the nature of reality rather than trying to answer sort of the more perplexing questions of existence. Of course physics is a subject that focuses on those very questions of the true nature of reality. Todd: Is it fair to say that there was a philosophical path that drove you to science? Brian: I think you could probably say that. It certainly went hand in hand, I should say, with a certain joy that I found in doing mathematical calculations that I can trace that back to many years even before when I was quite young, and my dad taught me the very basic operations of arithmetic. I was five years old or so, and he just set me going, and I recall I’d spend many a weekend multiplying these huge numbers together. I had big sheets of construction paper just sort of for the joy of seeing how the numbers would fit together and combine into new patterns of numbers. I had already a scientific mathematical orientation, which naturally melded with, if you will, the philosophical interest to set me on a direction that I’ve been on ever since. Todd: Can we talk about parallel universes? Brian: Sure. Todd: Let me try to reality check this with you. As I understand it, the various theories that describe how everything works, including general relativity, quantum mechanics and string theory, all allow for the possibilities of parallel universes, right? Brian: That’s right. Todd: So far, so good. 
In some of these universes, you and I might be having this conversation maybe in exactly the same way and maybe a bit differently. Brian: Right. Todd: Am I correct in understanding that the pathways to the various parallel universes through these theories are different? For example, with general relativity, as I understand it, parallel universes can happen because the theory allows for an infinite universe, but there’s only a finite way that stuff can be organized, so sooner or later our universe is bound to repeat. Brian: Yep, all right on target. Todd: This could be a really short interview. Brian: I can fill in any details if you like. Todd: This is actually an elaborate run up to a big question that I hope you can help me with. To continue a bit further down this road, in quantum mechanics, as I understand it, the path to parallel universes is that the fact that an action potentially has multiple outcomes means that you need multiple universes to park all of those outcomes, right? Brian: That’s correct, too. Todd: So if you have 1,000 different possible outcomes, you need 1,000 different universes. Brian: In one particular approach to quantum mechanics, which is the one that comes from that guy who I described, Hugh Everett, who initiated that idea in the late ’50s and people have been developing ever since. There are other approaches to quantum physics which do not require every outcome to happen. There are other approaches to quantum mechanics which attempt to say only one really happens, and those approaches try to introduce new mathematics to make that happen, to bridge from the range of possibilities to only one unique outcome. That approach isn’t convincing too many of us. It’s a possibility, but as I described in the book, even the Many Worlds approach to quantum mechanics is not convincing too many people. I’m pretty out front – if you have read that chapter – on explaining its potential weaknesses, its Achilles heels. The bottom line is as of today we do not know how to navigate from the fuzzy, hazy, probabilistic description of reality that comes from the basic laws of quantum physics. How do you navigate from that to the definite reality that we see when we look around or that we see in our detectors when we do an experiment? We see one definite outcome. The Many Worlds approach is one proposal for how to bridge that gap, and as you say, if it’s correct, then every outcome would happen and every outcome would need to be parked in its own universe. Todd: In string theory we have yet another way to get to multiple universes, right? You have a braneworld within which our universe resides, and you can potentially have multiple brane worlds, right? Brian: That’s exactly right. One of the big developments in the late ’90s through to today is that string theory is not only a theory that contains strings. It also contains these membrane-like objects which we call branes because they can have not only two dimensions – your typical image of a membrane – but they can have three dimensions, which we call a three-brane, co-opting the brane part of membrane and changing the number in front to indicate how many dimensions the brane has. When you study the math of these membranes, you find that we very well could be living on and in one of these three branes, one of these slabs, if you will, with other slabs, other universes out there. That is a direct consequence of the mathematics of these entities. Todd: Branes can be infinite, right? Brian: Yes. 
Todd: So here’s my big question: can you have a multiverse on a single brane? Brian: Absolutely. As you say, if a brane is infinite, then all of the discussion that is in, I guess Chapter 2 of the book, about what happens if there’s infinite space, would apply, as you’re saying, to that brane. You could have a variety of different universes coming from the different branes, and on each of those branes there could be a variety of different universes coming from the infinite expanse. Todd: It turns my head completely inside out and backwards, but in a good way. Brian: Everything you’re saying is right on target. Todd: Can we talk a little about where string theory came from? Brian: Yep. Todd: I understand that for a long time physicists used general relativity to study big things like stars and quantum mechanics to study small things like subatomic particles, but when a situation called for the study of a big thing in a small place like the Big Bang, the two theories just couldn’t work together, right? Brian: Yep, that’s exactly right. The math fell apart. The math of quantum mechanics and the math of general relativity , when they confront one another they are ferocious antagonists and the equations don’t work. Todd: Was it the prediction of black holes that first forced the issue to call out for some way to reconcile general relativity and quantum mechanics? Brian: Is it the first? It’s a good historical question, which I don’t know if it was the first time that people really started to worry about this. Certainly by the late ’40s and early ’50s there was definitely a recognition that the need to put gravity and quantum mechanics together was a significant one. I’m pretty certain if you were to speak to a historian of physics that you would find that even before those concrete examples that I like to use in order to help the reader ground the need for putting gravity and quantum mechanics together in a real world, real universe context, the Big Bang or black holes, people realize that once quantum mechanics came on line in the 1920s and 1930s, quantum mechanics really ultimately speaking to a framework that should apply to everything. It should apply to every force of nature. Even if one wasn’t thinking about black holes and the Big Bang and just thinking quantum mechanically, you’d want to try to bring general relativity into this framework so that you would have a complete theory, not one that was segregated. When it comes to black holes and the Big Bang, sure, now it becomes more concrete, but I think even theoretically people knew this was an issue. Todd: I’m getting the sense from your book and also the NOVA series that things are trendy in physics like anything else and, for a while, unification and the theory of everything just wasn’t really in vogue. Einstein tried to do it, and it didn’t work out. Physicists latched onto quantum mechanics and charged off in that direction, right? Brian: Yep, that’s exactly right. It is the case, as you say, like in many other fields there are areas in physics that are hot for a period of time and everybody wants to work on it, and then they may go cool for a while. I think that’s the nature of human exploration. The thing is, certain problems don’t attract attention at a given moment in time in a given historical episode, not only because they’re out of fashion, but also because they’re just so hard and nobody has any good ideas that it just doesn’t seem fruitful to pursue it further. That’s really the case with Einstein and the unified theory. 
Everybody knew that that was an important goal. Nobody had any really good ideas on how to pursue it, and that’s why Einstein was kinda left out in the cold when he was pursuing the unified theory. What happened in the ’60s and ’70s was that finally some ideas came on line. Once there are ideas, people were game to try to make progress. It was not so much a fad issue as opposed to nobody could figure out what to do back in Einstein’s time. Todd: I guess if Einstein can’t do it, then who can? Brian: Well, that certainly is intimidating. If Einstein were here today, he would be in the thick of it, and it would make perfect sense that now that we have some concrete tools to make progress, he’d no longer be alone in this journey. Todd: Since we’re on an Einstein thread, I’d like to pursue this a bit. I’ve got a question about the cosmological constant. My probably oversimplified understanding is that the cosmological constant was a fudge factor that Einstein introduced into his general theory of relativity because he disagreed with what his math was telling him, which was that the universe was expanding. Had he trusted his math, he would have predicted the expanding universe a decade before Hubble observed it, right? Brian: Yep. Todd: Can you talk a bit more about the story with the cosmological constant and why the story doesn’t really end there? Brian: Sure. Einstein spent ten long years trying to figure out how the force of gravity works. Newton had given us an equation for gravity in the late 1600s that is very good at making predictions about how the planet should move, how the moon should move, all under the influence of gravity, but Newton left out a big part of the story, which is how is gravity transmitted from place to place. How does the sun transmit gravity across the emptiness of space, the 93 million miles that separates us? How does gravity get from there to here? Einstein tries to fix this problem and in so doing doesn’t just fill in a gap in the Newtonian picture. He comes to a whole new version of gravity, his general theory of relativity, this monumental new view of space and time, or space in time, warp, and curve to communicate the force of gravity. That’s great, triumphant, but then he sits down and applies this new mathematical theory of gravity, not to the earth, the sun, the galaxy, but to the whole thing, the universe, the observable universe and comes to what he considered a very unpalatable conclusion, which is that math shows that the universe can’t stand still. The universe has to be either expanding or contracting. That was so at odds with what everybody thought. You look up in the sky and it looks on the larger scale, but nothing’s moving. Nothing’s changing. He was very distraught that his ten-year-long odyssey led to a theory that made a prediction that seemed blatantly wrong. He went back to the math and reconsidered his equations a little bit more fully and found that there was a way in which you could easily modify the equations so that they would no longer imply that the universe was expanding or contracting. This modification as you describe is called the cosmological constant. It’s one more term in the equations. What does it do? Well, if you want a static situation – the example I use in the book is if you have a tug of war and you want it to be static, you need both sides to have equal but opposite pull, equal but opposite forces that will cancel each other. 
The goal for Einstein in seeking a static universe was to counterbalance the attractive pull of gravity. Gravity pulls inward. That’s the force that’s most relevant in the largest scales of the cosmos. If it pulls inward, to balance it, you need a force to push it outward, and that’s what the cosmological constant does. It’s an outward-pushing version of gravity that can counterbalance the usual inward attractive pull of gravity giving rise to a static universe. With this mathematical realization, Einstein was a pretty happy camper. All of a sudden his theory was not in conflict with what everyone believed to be the case about the universe, that it’s static, eternal, unchanging. Ten years later, Edwin Hubble and his coworkers, through their astronomical observations, showed that the universe is expanding. It’s not static. There’s no need to balance gravity because the universe is not balanced. It’s actually dynamic and changing. Einstein is reported to have said that this was a blunder to have modified the equations, and he tossed the cosmological constant into the garbage can. You’re right. The story does not stop there because 80 years later, teams of astronomers are measuring the expansion of space to try to figure out the degree to which the expansion is slowing down over time. Everyone knew that since gravity is attractive it pulls things together and the expansion should slow over time like when you throw a ball up in the air. It goes up, but goes up slower and slower because gravity pulls it back. Shockingly they find that the universe is not slowing down in its expansion. It is speeding up. If it’s speeding up, that means you need an outward push, something that goes the opposite to gravity and that is exactly what the cosmological constant is able to do. It acts opposite to gravity. The astronomers brought back in the cosmological constant, not finely adjusted to exactly cancel attractive gravity, but having a value that would overwhelm attractive gravity giving rise to an outward push that can explain the observations of a universe that’s not just expanding but speeding up in its expansion. That’s the story. Todd: So there are not one, but potentially three stupendous discoveries or predictions that come out of the general theory of relativity. There’s the first that we know and love. There’s the second, which had he left the cosmological constant out, he would have predicted the expanding universe a decade before it was observed. And had he left it in, he would have discovered that the universe is expanding more rapidly. Brian: That’s right. Todd: In “The Elegant Universe” NOVA series, there’s a scene where Leonard Susskind talks about staring at his string equation and seeing a vibrating string. I’m astounded by the way physicists can see the universe in math. Is there a way for you to describe to a layperson how a Leonard Susskind sees vibrating strings in math or how Veneziano recognized the strong force in a 200-year-old Brian: Well, I think there’s two parts to the answer. The more general answer is what physicists do is try to find patterns, patterns in data or patterns in their equations. In essence what we try to do is line up the patterns in our mathematics with the patterns that we observe. All mathematics is is a language that is well tuned, finely honed, to describe patterns; be it patterns in a star, which has five points that are regularly arranged, be it patterns in numbers like 2, 4, 6, 8, 10 that follow very regular progression. 
Math is very good at being able to describe those kinds of patterns. What Leonard, for instance, is saying in that specific example is when you look at the mathematical equations that he was writing on his blackboard, he could see certain patterns imbedded in the mathematics. Those patterns were very directly describable in terms of a string as it vibrates because as a string vibrates, there are very definite patterns. If you just think of a violin string, it can sort of vibrate where the whole string goes up and down in unison, or it can vibrate a little bit more actively where half goes up while the other half goes down and they’re vibrating sort of side by side if you know what I’m saying where the left side is going up while the right is down, the right is up, the left is down. You can do a more complicated version of that where you’ve got the middle of the string is going down and the two sides are going up and vice versa. All those patterns, those very simple pictures of how a string can vibrate, translate into mathematical quantities. Leonard could see those mathematical quantities in the mathematical equation. He said, “Oh wow, those mathematical quantities are falling right into the pattern that I’m familiar with from looking at a vibrating filament, therefore this math is probably describing a vibrating Todd: So much of this is about recognizing the patterns in the math. Brian: Completely. That’s exactly what it is. For instance, when a string vibrates, you know that there’s a main tone – like a violin string has a C – but then there are overtones, and that’s what gives a violin its richness of timbre. A piano, when you play a C, there are different overtones, but again it’s all coming from a vibrating string, and the possible ways in which that vibrating string can be shaped. What Leonard could see in the mathematical equations were all the overtones, and because he could see the overtones, he could see the different representations mathematically of the shape of a vibrating string, he concluded that the math must be describing that kind of an object. Todd: Did I read somewhere that you studied piano somewhere along the line in your career? Brian: Yeah, not much, but yes I did. Todd: Do you think that might have anything to do with your attraction to string theory? Brian: No, none whatsoever, I don’t think. I will say that music as we’re all familiar with, again, is another way that pattern gets represented. What makes a Beethoven symphony spectacular, what makes a Brahms rhapsody spectacular is that the patterns are wondrous. The patterns of the notes are both wondrous, appealing, moving, emotive. Again, all you do with your ear is sense the patterns of the notes. In a way, mathematics is just a different way of representing those kinds of patterns. It represents it in a way that, for reasons that we are still baffled by, the patterns that math is very good at describing seem to emerge in nature in very natural ways. That’s why math is so good at describing the natural world. Todd: Math is the language of nature? Brian: It’s a language of pattern, which for reasons that we don’t know appears to be the language of nature. It’s not impossible that one day we’ll find a better language for describing nature that’s not mathematical. It’s not impossible. I describe that a little bit toward the end of the book. Todd: Do I understand correctly that one of the attractions of physicists to string theory is its elegance as compared with something like the standard model? 
Brian: Yes, that certainly is an appealing quality. Todd: My limited understanding is the standard model potentially you have to update it as new particles are discovered. String theory potentially you can use the same elegant formula as is to describe everything. Brian: Yes, if string theory is right. String theory could be wrong. It has not been experimentally confirmed, and that’s important to underscore. What leaves people dissatisfied with the standard model is that it’s an enormously long equation which has within it 57 distinct particle species. Each one you have to input into the mathematics its mass, its charge, its properties, and the nuclear forces. It just feels ungainly. We can’t help but think that there’s got to be a more compact – a more compact and a more efficient, more effective, more unified description of the particles of nature that doesn’t feel like you’re simply adding particle upon particle upon particle every time you find a new one in an experiment. As you say, string theory is a theory that, at least on paper, has the capacity to do just that. If string theory is right – it’s a big “if” – then different particles are different string vibrations. You don’t really have to update the theory if you find new particles. If the theory’s right, then every particle you find will be corresponding to some particular vibrational pattern of the string. Todd: By comparison, how long is the actual string formula? Brian: The first thing I should say is many of us believe we’ve yet to really find the full string equation. We believe that we have approximate equations, but whether that is the full equation, we’re not sure. The candidate equations that we have for the mathematics of string theory comfortably fit on a single line of a piece of paper. In analyzing those equations you have to do an enormous amount of complex calculation that then takes pages and pages and pages and fills thousands of journal pages even as we speak here today. The starting point is a pretty simple equation, much simpler than the equation of the standard model. Todd: So potentially there’s one equation that fits neatly on one line that potentially describes everything. Brian: That’s right. Todd:You talked about proving or testing the string theory. In your NOVA series, some of your colleagues said that if you can’t test it in the way that we test normal theories, it’s not science, it’s philosophy. Can you talk about testing string theory and about gravitons and sparticles? Brian: First of all, they’re absolutely right in the sense of there’s no reason to believe string theory is right until you experimentally confirm it. There’s just no way around it. Nothing I’ve ever said or really anything my colleagues would say would ever disagree with that. The question then is: are there ways in which you can test string theory? String theory is a huge challenge to test because, as we’re saying, it really comes into its own in pretty extreme realms, the Big Bang, black holes, and so forth. There are potential indirect tests that wouldn’t prove the theory right, but if they gave positive outcomes would be pretty convincing – circumstantial evidence I should say – that we’re heading in the right direction. Some of the examples that you mentioned are right on target. They’re a class of particles that string theory suggests should be in existence. They’re called supersymmetric particles or sparticles for They are partners to the known particles that no one has ever seen. 
We believe we haven’t seen them because they’re heavier than their known counterparts, particles like selectrons, which is a supersymmetric electron, a partner to electrons, squarks, partners of quarks, and so forth. The large Hadron Collider is looking for these supersymmetric particles, these sparticles. If they’re found, it would be enormously exciting. There’s other possibilities of detecting extra dimensions through these missing energy signals that we were talking about where particles slam into each other and some debris is kicked out. It carries away energy, and if you have a missing energy signal, it would suggest that there are other dimensions out there beyond the ones that we know about here. I’ve been working myself on the possibility of testing the string theory through astronomical observations. It’s possible that string theory could have left imprints in the microwave background radiation, this heat left over from the Big Bang. We’re looking for patterns in that heat temperature variations from one point in the sky to the other which if they fit a certain pattern would again be indirect evidence that a string theory is correct. That’s the best we can do today; try to amass a number of different experimental results that all point toward string theory being right. But there’s no way that we can definitively at this moment make any statement one way or the other. Todd: I’ve got a question about practical applications. I understand that it’s challenging or impossible to project practical applications of theoretical physics, but I seized on an example in your book. Can you talk about the role that general relativity plays in today’s GPS? Brian: The way the GPS system works is there are satellites in orbit around the earth. Those satellites need to be able to determine your position with great precision so that the little map in your GPS system in your car places you at the location that you’re really residing at. For those satellites to be able to do their job correctly they need very, very precise timing signals. They’re basically bouncing light signals between you and the satellite to determine exactly where you are. The satellite measures tiny time differences between emission and absorption of the signals that are being sent back and forth. General relativity tells us that because the satellite is in motion and because it is experiencing less gravity than it would if it was on the earth’s surface, general relativity says that time on the satellite elapses at a different rate than it would otherwise. If you don’t take into account that effect of gravity and motion on time, the satellites will make a mistake. They will get your position wrong because their timing will be out of sync. That is a very concrete way in which the general theory of relativity affects something that we do use in our everyday lives. Todd: If we don’t factor general relativity into these clocks, if I’m driving from Las Vegas to L.A., I could end up in the Pacific Ocean. Brian: That’s exactly right. Very quickly these GPS systems would become completely inaccurate if they didn’t take these corrections into account. Todd: I have arrived at my last question, which is, in the course of your career what has surprised you most about the universe. Brian: What has surprised me most about the universe? Well, I’ll give you two answers. The most concrete one, the most shocking result that we found is in fact this accelerated expansion of space that we were talking about before – completely unexpected. 
Most of us when we first encountered it said “come on, that can’t possibly be true,” and yet experiment after experiment is confirming that is true. Speaking from a more global perspective, the most wondrous thing about science – maybe it’s not the most surprising as of now because we’re getting used to it – but the fact that mathematics is so good at describing things that are out there. When I was – I think it was ninth grade – and I was taking my first physics class, the teacher gave us this problem, some simple problem of a ball that was attached to a piece of chewing gum that itself was attached to the ceiling, and someone lets the ball swing by the chewing gum. The goal was to figure out how the ball would move. When I sat down and did those calculations, at the end of it, I got up from my desk, and I ran down to the hall to my father and said, “Can you believe it? Look at this formula.” This mathematics that I just calculated on this piece of paper would really describe how that ball would swing. How powerful is that? I didn’t get up and do the experiment. All I did was calculate. Math describes the world. That, to me, is still the most wondrous thing about all that we do in theoretical physics. Todd: Everything potentially can be rendered into math. Brian: So far that seems to be the case. Todd: Another good reason for kids in the US to study and get their math and science scores up. Brian: Yep, without a doubt. Brian will be appearing January 31st at the Herbst Theater in San Francisco in a conversation with Michael Krasny
{"url":"http://blog.sfgate.com/tmiller/2011/01/28/the-hidden-reality-an-interview-with-brian-greene/","timestamp":"2014-04-20T19:35:40Z","content_type":null,"content_length":"74541","record_id":"<urn:uuid:a5ef27af-4ca8-4b00-b928-2a969f1cc900>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
Triangles Help Needed December 15th 2009, 04:18 PM #1 Sep 2009 Triangles Help Needed Hi, needing some help with the following. Angles are R, S, U, V, T. Using the picture above. If RV is congruent to RT, name the two congruent angles. If RS is congruent to SV, name the two congruent angles. If SRT is congruent to STR, name the two congruent segments. If STV is congruent to SVT, name the two congruent segments. Questions answered Last edited by mr fantastic; December 16th 2009 at 05:47 PM. Reason: Restored deleted question. Are you asking for your homework to be done for you? Ummmmm, no. Please do NOT make judge mental posts based on the questions asked in my posts. I am stay at home mum- and I would like to know what my kids are learning. I don't know anyone who would take the time to take a photo of their "homework" and then sign up on a Forum, then posting it on a thread. Next time maybe think before you flame me or anyone else in a manner like that. Please do NOT make judge mental posts based on the questions asked in my posts. I am stay at home mum- and I would like to know what my kids are learning. I don't know anyone who would take the time to take a photo of their "homework" and then sign up on a Forum, then posting it on a thread. Next time maybe think before you flame me or anyone else in a manner like that. Siobhan, I am very sorry you thought Wilmer unkind. I actually don’t think that Wilmer meant to be - nor do I realize that he/she has been. You could have yourself explained your situation and what you needed done. But by simply posting a question, it did appear that you just expected an answer. We actually want to discourage that kind of posting. As we want to discourage complete answers to such postings. I actually don’t think that Wilmer meant to be - nor do I realize that he/she has been. I don't feel this way. I hope you realize my "Newbie" tag in the Forum Community gets me the least respect - it's hardest for a newbie to adjust. If someone does not have an answer, or a question to my thread, why is there a need to write a reply? Especially a reply like that? This is a Math Help Forum - I would imagine you do use it to get Math Help. I don't feel this way. I hope you realize my "Newbie" tag in the Forum Community gets me the least respect - it's hardest for a newbie to adjust. If someone does not have an answer, or a question to my thread, why is there a need to write a reply? Especially a reply like that? This is a Math Help Forum - I would imagine you do use it to get Math Help. Plato has accurately explained the situation. Thread closed. December 15th 2009, 08:13 PM #2 MHF Contributor Dec 2007 Ottawa, Canada December 16th 2009, 01:22 PM #3 Sep 2009 December 16th 2009, 01:45 PM #4 December 16th 2009, 01:56 PM #5 Sep 2009 December 16th 2009, 05:52 PM #6
{"url":"http://mathhelpforum.com/geometry/120668-triangles-help-needed.html","timestamp":"2014-04-17T12:49:03Z","content_type":null,"content_length":"47511","record_id":"<urn:uuid:af254d69-d353-49d0-8734-031125f8cd1b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
How do I prove if a set of vectors are independent? April 4th 2010, 06:58 AM How do I prove if a set of vectors are independent? I've tried many things but have been unable to answer this question: $v$ and $u_1,...,u_k$ are vectors in $R^n$. Let $v$ be a linear combination of $u_1,...,u_k$ and have a single solution. Prove $u_1,...,u_k$ are independent. Hint: let $u_1,...,u_k$ be dependent vectors.
April 4th 2010, 09:11 AM Hi jayshizwiz, I think you haven't been given a reply until now because some of the parts of your question are hard to follow. You must ask your question so carefully that the reader doesn't have to guess what you mean: what do you mean by "a single solution"? and the hint surely should be: "Suppose that the above holds, but that $u_{1}, \ldots, u_{k}$ are linearly dependent", not merely "Let $u_{1}, \ldots, u_{k}$ be independent", because if we define these vectors to be independent, we cannot contradict ourselves! But if we suppose that they are dependent, then we can contradict the initial assumption. In this case, the proper statement of the problem should be: suppose that each $v\in\mathbb{R}^{n}$ has a unique representation as a linear multiple of the vectors $u_{1},\ldots,u_{k}.$ Then prove that $u_{1},\ldots,u_{k}$ are linearly independent. Proof (I will get you started): Suppose that the statement holds and that $u_{1}, \ldots, u_{k}$ are dependent, and let $v$ be written uniquely as $v = \lambda_{1}u_{1}+\ldots+\lambda_{k}u_{k}$ (1). By linear dependence, there exist scalars $\mu_{1},\ldots,\mu_{k}$ such that $\mu_{1}u_{1}+\ldots+\mu_{k}u_{k} = 0$ (2) with some $\mu_{j}\neq 0,\quad 1\leq j \leq k.$ Then rearranging (2) gives $u_{j}=-\frac{1}{\mu_{j}}\sum_{i\neq j}{\mu_{i}u_{i}}$. Now try to substitute this representation of $u_{j}$ into the original representation of $v$ given in (1). Is this a different expression for $v$ as a linear combination of $u_{1},\ldots,u_{k}$?
April 5th 2010, 10:00 AM Thanks nimon, I don't study in English so I'm trying to translate as best as I can. I still don't know where to continue with this: $v = \lambda_{1}u_{1}+\ldots+\lambda_{k}u_{k}$, $u_{j}=-\frac{1}{\mu_{j}}\sum_{i\neq j}{\mu_{i}u_{i}}$. I don't know which u vector $u_{j}$ belongs to. How do I know to replace it with $u_1$ or with $u_2$?
April 7th 2010, 01:28 AM The vector $u_{j}$ we picked was any vector whose coefficient in the solution of (2) is non-zero, and we know that such a $u_{j}$ exists due to linear dependence. This $j$ could be any number between $1$ and $k$, and we don't want to assume that it is $1$ or $2$, we just know that one of them has non-zero coefficient so we let this be $u_{j}$. Given that $v=\lambda_{1}u_{1}+\ldots+\lambda_{j}u_{j}+\ldots+\lambda_{k}u_{k}$, we can now replace $u_{j}$ in this expression with $u_{j}=-\frac{1}{\mu_{j}}\sum_{i\neq j}{\mu_{i}u_{i}}$ to get: $v=\lambda_{1}u_{1}+\ldots-\lambda_{j}\frac{1}{\mu_{j}}\sum_{i\neq j}{\mu_{i}u_{i}}+\ldots+\lambda_{k}u_{k}$ (3). The notation $\sum\limits_{i\neq j}$ means to sum over all $i=1,\ldots,k$ but not $j$. Now just try and collect the coefficients in (3) to give $v$ as a linear multiple of $\{u_{1},\ldots,u_{k}\}\backslash \{u_{j}\}$. I hope this is helpful, and sorry for the lateness of my reply. Your English seems very good for someone who doesn't study it!
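To make the idea in this thread concrete, here is a small numeric illustration (the vectors are made up for the example, they are not from the problem above): if the $u$'s are dependent, a vector in their span can be written in more than one way, so the "single solution" condition fails.

import numpy as np

# Dependent set in R^3: u3 = u1 + u2
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
u3 = u1 + u2

# Two different coefficient choices that give the same v
v_a = 2*u1 + 3*u2 + 0*u3
v_b = 1*u1 + 2*u2 + 1*u3
print(np.allclose(v_a, v_b))   # True: the representation of v is not unique

This is exactly the contradiction the proof is pointing at, just with numbers instead of symbols.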
{"url":"http://mathhelpforum.com/advanced-algebra/137206-how-do-i-prove-if-set-vectors-independent-print.html","timestamp":"2014-04-17T05:09:15Z","content_type":null,"content_length":"15756","record_id":"<urn:uuid:33a68ab9-7725-4ddf-8c9f-cfbd86f71539>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Help in making sketch or diagrams please January 30th 2010, 04:55 PM Help in making sketch or diagrams please A 40-cm long pendulum is moved 45 degrees from the vertical. How far did the tip of the pendulum rise? A television antenna is situated atop a hill. From a point 200 m from the base of the cliff, the angle of elevation of the top of the antenna is 80 degrees. The angle of elevation of the bottom of the antenna from the same point is 75 degrees. How tall is the antenna? Can some1 help me to sketch dis problems? i just nid the drawing please. nid help.. thx in advance January 30th 2010, 05:05 PM Prove It A 40-cm long pendulum is moved 45 degrees from the vertical. How far did the tip of the pendulum rise? A television antenna is situated atop a hill. From a point 200 m from the base of the cliff, the angle of elevation of the top of the antenna is 80 degrees. The angle of elevation of the bottom of the antenna from the same point is 75 degrees. How tall is the antenna? Can some1 help me to sketch dis problems? i just nid the drawing please. nid help.. thx in advance 1. Draw the pendulum in the two positions you have specified. You will be able to create an isosceles triangle using the two positions of the pendulum and the distance between them. You can use the Cosine Rule to work out the distance between them. Then you can create a Right Angle Triangle using the distance you have just found as the Hypotenuse, and the other two sides as the distance travelled vertically and horizontally.
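For anyone following along once the sketches are drawn, the numbers can be checked with a few lines of trigonometry. Note that this uses the direct right-triangle route for the pendulum (vertical drop $= L\cos 45^\circ$, so the rise is $L(1-\cos 45^\circ)$) rather than the cosine-rule construction suggested in the reply, so treat it only as a cross-check:

import math

L = 40.0                               # pendulum length in cm
theta = math.radians(45)               # swing from the vertical
rise = L - L*math.cos(theta)           # tip rises by L*(1 - cos(theta))
print(round(rise, 2))                  # about 11.72 cm

d = 200.0                              # horizontal distance to the cliff in m
top = d*math.tan(math.radians(80))     # height of the top of the antenna
bottom = d*math.tan(math.radians(75))  # height of the bottom of the antenna
print(round(top - bottom, 1))          # antenna height, about 387.8 m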
{"url":"http://mathhelpforum.com/trigonometry/126335-help-making-sketch-diagrams-please-print.html","timestamp":"2014-04-21T13:56:16Z","content_type":null,"content_length":"5209","record_id":"<urn:uuid:f6727e5c-1a93-4ca8-aaf4-5004b5bcb4d1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
Parallel Cost Analysis of Adaptive GMRES Implementations for Homotopy Methods (1997). Technical Report ncstrl.vatech_cs//TR-97-22, Computer Science, Virginia Polytechnic Institute and State University. Full text available as: Postscript - Requires a viewer, such as GhostView TR-97-22.ps (489588) The success of homotopy methods in solving large-scale optimization problems and nonlinear systems of equations depends heavily on the solution of large sparse nonsymmetric linear systems on parallel architectures. Iterative solution techniques, such as GMRES(k), favor parallel implementations. However, their straightforward parallelization usually leads to a poor parallel performance because of global communication incurred by processors. One variation on GMRES(k) considered here is to adapt the restart value k for any given problem and use Householder reflections in the orthogonalization phase to achieve high accuracy and to reduce the communication overhead. The Householder transformations can be performed without global communications and modified to utilize an arbitrary row distribution of the coefficient matrix. The effect of this modification on the GMRES(k) performance is discussed here, as well as the abilities of parallel GMRES implementations using Householder reflections to maintain fixed efficiency with increase in problem size and number of processors. Theoretical communication cost and isoefficiency analyses are compared with experimental results on an Intel Paragon, Cray T3E, and IBM SP2.
{"url":"http://eprints.cs.vt.edu/archive/00000479/","timestamp":"2014-04-19T09:59:18Z","content_type":null,"content_length":"7732","record_id":"<urn:uuid:401ecf93-0b62-4d68-9587-64eaac87b502>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof that binomial function is a probability function? December 28th 2011, 12:42 PM Proof that binomial function is a probability function? How can I prove that the binomial function is a probability function? I attached the solution I got from my slides...but I dont understand the 2nd part! Thanks for your help! December 28th 2011, 01:06 PM Re: Proof that binomial function is a probability function? The binomial probability is $P(X=k)=\binom{N}{k}p^kq^{N-k}$ where $q=1-p$ Thus, $1 = \left( {p + q} \right)^N = \sum\limits_{k = 0}^N {\binom{N}{k}p^k q^{N - k} }$ December 28th 2011, 02:26 PM Re: Proof that binomial function is a probability function?
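A quick numeric sanity check of the identity above: with an arbitrary choice of N and p, every term of the sum is non-negative and the terms add up to 1, which are the two properties a probability function needs (N = 10 and p = 0.3 below are just example inputs):

from math import comb

N, p = 10, 0.3
terms = [comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1)]
print(all(t >= 0 for t in terms))   # True: each P(X = k) is non-negative
print(sum(terms))                   # 1.0 (up to floating-point rounding)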
{"url":"http://mathhelpforum.com/advanced-statistics/194743-proof-binomial-function-probability-function-print.html","timestamp":"2014-04-18T22:00:49Z","content_type":null,"content_length":"5403","record_id":"<urn:uuid:45c164eb-8c22-44ed-8f80-eeec4ebd77fa>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
GRE/ACT/PSAT Math Most standardized exams require similar skills and test on similar types of problems, especially when it comes to the math sections. The GRE, ACT, and PSAT have more in common than you realize. Professor Charlotte Vilkus teaches Educator’s Standardized Math course specializing in the GRE/ACT/PSAT, and will show you all of her tricks and everything you need to know in order to get the highest score possible. In addition, Charlotte has worked with students across the most popular prep books for all the major exams. Using her experience, she creates an all-inclusive course which begins with a basic overview of math concepts, then covers the types of problems you should expect on test day, and finally guides you through the best ways to approach math questions on standardized tests. Professor Vilkus obtained her BA in civil engineering with a math minor from Loyola Marymount University.
{"url":"http://www.educator.com/test-prep/gre-act-psat-math/vilkus/","timestamp":"2014-04-18T00:14:58Z","content_type":null,"content_length":"135257","record_id":"<urn:uuid:63cf9317-050c-448a-97e5-c65851104cd4>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
equation of tangent lines Find the equation of 2 tangent lines to the graph of f(x)=2x-x^2 that passes through the point (1,5) the tangent line at a point (a,f(a)) is y= f ' (a) (x-a) +f(a) for your problem y=(2-2a)(x-a) + 2a -a^2 For the tangent line to pass through (1,5) 5 = (2-2a)(1-a) + 2a -a^2 solve this for a See Here's Fermat's method of finding tangents that pre-dates the calculus. Any non-vertical line through (1, 5) can be written as y= m(x-1)+ 5. Of course, to be a tangent line, it must touch the parabola at some point: we must have $m(x- 1)+ 5= 2x- x^2$ or $x^2+ (m-2)x+ 5-m=0$. A root of that equation would be the x value of the point of tangency. In fact, to be tangent that equation must have a double root. That means we must have $x^2+ (m-2)x+ 5-m= (x- a)^2= x^2- 2ax+ a^2$ for some a. Comparing coefficients, m- 2= -2a and $5- m= a^2$. From the first equation, m= 2- 2a. Putting that into the second equation, $5- (2- 2a)= 3+ 2a= a^2$. $a^2- 2a- 3= (a- 3)(a+ 1)= 0$. Use those values of a to find m, the slopes of the two tangent lines, and so the tangent lines.
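From the factorisation $(a-3)(a+1)=0$ above, the points of tangency are at $a = 3$ and $a = -1$, giving slopes $m = 2 - 2a = -4$ and $m = 4$. A short sympy check of the algebra already done in this thread:

import sympy as sp

x, a = sp.symbols('x a')
f = 2*x - x**2

# Tangent at x = a passes through (1, 5):  f'(a)*(1 - a) + f(a) = 5
sols = sp.solve(sp.Eq(sp.diff(f, x).subs(x, a)*(1 - a) + f.subs(x, a), 5), a)
print(sols)   # [-1, 3]

for a0 in sols:
    m = sp.diff(f, x).subs(x, a0)
    line = sp.expand(m*(x - a0) + f.subs(x, a0))
    print(line, line.subs(x, 1))   # 4*x + 1 and -4*x + 9, both equal 5 at x = 1

So the two tangent lines are y = 4x + 1 and y = -4x + 9.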
{"url":"http://mathhelpforum.com/calculus/127260-equation-tangent-lines.html","timestamp":"2014-04-18T07:19:21Z","content_type":null,"content_length":"40712","record_id":"<urn:uuid:ff31fc90-d9b4-435a-80b7-119580daa7f4>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
GRE Math Numeric Entry Practice Test 03 1. In a sequence of numbers the first number is 3 and each number after the first is 2 more than 3 times the preceding number. What is the fourth term in the sequence? 2. If the average of 3 and x is 5, and the average of 5 and y is 7, what is the average of x and y? 3. A right circular cylinder has a volume of 81π. If the circumference of the base is 6π, what is the height? 4. Using the digits 1,2,5,9 exactly once in each number, what is the difference between the largest and the smallest number that can be formed? 5. How many numbers between 1 and 100 contain the digit 8? 6. If k is a positive integer, what is the smallest value for k to make 60k a perfect square? 7. What is the sum of x, y and z in the figure above? 8. The school library has 50 action adventure novels, 15 romances and 10 historical novels. Julie wants to take one of each type for her sick cousin to read. How many different choices of three books are available to her? 9. The fraction x/y is altered by decreasing x by 25 per cent and increasing y by 25 percent. The new fraction is what percent less than the original? 10. A CD player chooses a track at random from three discs each with 20 racks. What is the probability that it chooses track 2 of disc 2?
{"url":"http://www.majortests.com/gre/numeric_entry_test03","timestamp":"2014-04-21T04:32:08Z","content_type":null,"content_length":"12345","record_id":"<urn:uuid:f84d4363-b24f-4e0c-8486-f958dcbd081d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
Bapchule Prealgebra Tutor Find a Bapchule Prealgebra Tutor ...Although I do not have a teaching certificate, I do have several people to call for references. God has given me a passion for teaching and I hope to serve Him by using the gifts He has given me:)I have taught Algebra I for at least 20 years using the Saxon and Abeka curricula. I have taught Al... 19 Subjects: including prealgebra, English, writing, algebra 1 ...Another subject I enjoy tutoring is math, most levels through Algebra 2. If you need assistance with English, especially writing, and preparing for AIMS, I can teach you techniques for putting together an outstanding essay. For students who have to deal with ADD or ADHD, I will be a good tutor ... 43 Subjects: including prealgebra, Spanish, reading, writing ...I try to instill in all students a sense of confidence so students will be more open to learning the materials, and trusting in their abilities. I also have an extensive knowledge of American and World history, government, and other social studies. Thank you! 12 Subjects: including prealgebra, algebra 1, elementary math, basketball ...I am also able to adapt different ways of presenting material and making it engaging to the you, as well as adapt to each unique student. I have an unparalleled amount of patience and also make it fun to learn math... even for those who claim to "hate" math! The process of learning and figuring it out for yourself (with my guidance) is more rewarding than having it explained to you. 17 Subjects: including prealgebra, reading, calculus, algebra 1 I am a certified teacher who is highly qualified in 6-9th grade Math. As a teacher, I see how students can get behind in Math. Because Math builds on the previous lesson, if a student doesn't understand one concept this can result in the student not understanding the next concept and then getting frustrated and come to the conclusion that they don't understand and can't do Math. 3 Subjects: including prealgebra, algebra 1, algebra 2
{"url":"http://www.purplemath.com/Bapchule_prealgebra_tutors.php","timestamp":"2014-04-17T04:55:44Z","content_type":null,"content_length":"24054","record_id":"<urn:uuid:9d4cdb73-8602-4039-aef5-da015254c068>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
The Prime Glossary: perfect number
Many ancient cultures endowed certain integers with special religious and magical significance. One example is the perfect numbers, those integers which are the sum of their positive proper divisors. The first three perfect numbers are
• 6 = 1 + 2 + 3,
• 28 = 1 + 2 + 4 + 7 + 14, and
• 496 = 1 + 2 + 4 + 8 + 16 + 31 + 62 + 124 + 248.
The ancient Christian scholar Augustine explained that God could have created the world in an instant but chose to do it in a perfect number of days, 6. Early Jewish commentators felt that the perfection of the universe was shown by the moon's period of 28 days. Whatever significance was ascribed to them, these three perfect numbers above, and 8128, were known to be "perfect" by the ancient Greeks, and the search for perfect numbers was behind some of the greatest discoveries in number theory. For example, in Book IX of Euclid's Elements we find the first part of the following theorem (completed by Euler some 2000 years later). If 2^k - 1 is prime, then 2^(k-1) * (2^k - 1) is perfect, and every even perfect number has this form. It turns out that for 2^k - 1 to be prime, k must also be prime--so the search for perfect numbers is the same as the search for Mersenne primes. Armed with this information it does not take too long, even by hand, to find the next two perfect numbers: 33550336 and 8589869056. See the first page on Mersennes below for a list of all known perfect numbers. While seeking perfect and amicable numbers, Pierre de Fermat discovered Fermat's Little Theorem, and communicated a simplified version of it to Mersenne in 1640. It is unknown if there are any odd perfect numbers. If there are some, then they are quite large (over 300 digits) and have numerous prime factors. But this will no doubt remain an open problem for quite some time. See Also: AmicableNumber, AbundantNumber, DeficientNumber, SigmaFunction
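The Euclid-Euler theorem quoted above can be checked directly for small cases; a quick sketch using plain trial division (fine for small k):

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def proper_divisor_sum(n):
    return sum(d for d in range(1, n) if n % d == 0)

# If 2^k - 1 is prime (a Mersenne prime), then 2^(k-1) * (2^k - 1) is perfect.
for k in range(2, 8):
    if is_prime(2**k - 1):
        n = 2**(k - 1) * (2**k - 1)
        print(k, n, proper_divisor_sum(n) == n)   # 6, 28, 496, 8128 all check out

The next cases, k = 13 and k = 17, give 33550336 and 8589869056, the pair mentioned in the entry; the divisor-sum check just takes longer for those.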
{"url":"http://primes.utm.edu/glossary/page.php?sort=PerfectNumber","timestamp":"2014-04-19T04:42:28Z","content_type":null,"content_length":"6357","record_id":"<urn:uuid:913a0145-d834-4742-b67b-b90c7c7b36cc>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Why Does Matrix Multiplication Work the Way It Does? Date: 09/13/2006 at 22:07:14 From: Casey Subject: Process of Matrix Multiplication Can you please explain WHY the process of matrix multiplication involves multiplying and adding to get each entry? Adding, subtracting and scalar multiplying make sense, but multiplication is complicated and the rationale doesn't make sense to me. I understand the process. I just want to know why it works. Thank you.
Date: 09/16/2006 at 09:53:49 From: Doctor Fenton Subject: Re: Process of Matrix Multiplication Hi Casey, Thanks for writing to Dr. Math. Suppose you have a system of linear equations (for simplicity, I will use two equations in two unknowns, but the principle applies to systems of all sizes).
a11*x1 + a12*x2 = d1
a21*x1 + a22*x2 = d2 .
(The indices are chosen to indicate the row and column of the coefficient, so a12 is the coefficient in the first equation of the second variable, x2.) In solving this system by elimination, you can save a lot of writing by using position, instead of writing all the variable names, plus, minus, and equals signs, and carrying out row operations on the augmented matrix
[ a11 a12 : d1 ]
[ a21 a22 : d2 ]
(I have inserted colons to emphasize that the numbers in the last column are different from the other columns, being the data values on the right side of the equations, while the other columns are coefficients of variables.) In this process, the coefficients become entries in the coefficient matrix. Now, suppose we want to make a linear change of variables, so that we introduce new variables y1 and y2 for which
x1 = b11*y1 + b12*y2
x2 = b21*y1 + b22*y2 .
(Such changes occur if we rotate the coordinate axes, for example, where the y's denote the coordinates in the rotated system.) If you substitute these formulas into the original system of equations, what will the coefficient matrix of the new linear system of equations in the y-variables be? For example, the first equation
a11*x1 + a12*x2 = d1
becomes
a11*(b11*y1 + b12*y2) + a12*(b21*y1 + b22*y2) = d1 ,
or, rearranging,
(a11*b11 + a12*b21)*y1 + (a11*b12 + a12*b22)*y2 = d1 ,
so the first row of the new coefficient matrix for the y-variables is
[ (a11*b11 + a12*b21) (a11*b12 + a12*b22) ]
[ ... ... ]
If we designate the coefficient matrix for the new system of equations in the y variables to be the matrix C, where
C = [ c11 c12 ]
    [ c21 c22 ] ,
then comparing the entries in the two matrices of coefficients, we see
c11 = a11*b11 + a12*b21 and c12 = a11*b12 + a12*b22 .
After you have written out the full coefficient matrix for the y-system of equations, compare it with the two coefficient matrices which are the original coefficient matrix for the original system, and the coefficient matrix of the variable transformation:
A = [ a11 a12 ]    B = [ b11 b12 ]
    [ a21 a22 ] ,      [ b21 b22 ] .
This type of combination of two matrices is an important operation on matrices, and we call it "matrix multiplication", because we find that it has many properties of what we would expect from a multiplication (although it is not commutative). If you have any questions, please write back and I will try to explain further.
- Doctor Fenton, The Math Forum
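A quick numeric check of the derivation above: composing the change of variables x = B*y with the x-system gives exactly the system whose coefficient matrix is the product A*B. The 2x2 matrices and the vector below are arbitrary test values, not part of the original answer:

import numpy as np

A = np.array([[2, -1],
              [0,  3]])        # coefficients of the x-system
B = np.array([[1,  4],
              [-2, 5]])        # change of variables x = B @ y

y = np.array([3, -1])
x = B @ y

print(np.allclose(A @ x, (A @ B) @ y))                     # True: same left-hand sides
print((A @ B)[0, 1], A[0, 0]*B[0, 1] + A[0, 1]*B[1, 1])    # c12 computed both ways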
{"url":"http://mathforum.org/library/drmath/view/70479.html","timestamp":"2014-04-17T07:33:23Z","content_type":null,"content_length":"8345","record_id":"<urn:uuid:a3a4d55a-8d6c-4cae-8bbc-75d9e0e82742>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Ant Colony Optimisation for Backward Production Scheduling Advances in Artificial Intelligence Volume 2012 (2012), Article ID 312132, 12 pages Research Article Ant Colony Optimisation for Backward Production Scheduling ^1Instituto Federal do Parana, Assis Chateaubriand, 80230-150 Curitiba, PR, Brazil ^2Petrobras S.A., 41770-395 Salvador, BA, Brazil ^3Doutor Jose Peroba 225, Apartment no. 1103, 41.770-235 Salvador, BA, Brazil ^4Department of Industrial Engineering, Pontifical Catholic University of Parana, 80215-901 Curitiba, PR, Brazil ^5Department of Management, Universidade Tecnológica Federal do Paraná, 80230-901 Curitiba, PR, Brazil Received 7 May 2012; Accepted 31 July 2012 Academic Editor: Deacha Puangdownreong Copyright © 2012 Leandro Pereira dos Santos et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The main objective of a production scheduling system is to assign tasks (orders or jobs) to resources and sequence them as efficiently and economically (optimised) as possible. Achieving this goal is a difficult task in complex environment where capacity is usually limited. In these scenarios, finding an optimal solution—if possible—demands a large amount of computer time. For this reason, in many cases, a good solution that is quickly found is preferred. In such situations, the use of metaheuristics is an appropriate strategy. In these last two decades, some out-of-the-shelf systems have been developed using such techniques. This paper presents and analyses the development of a shop-floor scheduling system that uses ant colony optimisation (ACO) in a backward scheduling problem in a manufacturing scenario with single-stage processing, parallel resources, and flexible routings. This scenario was found in a large food industry where the corresponding author worked as consultant for more than a year. This work demonstrates the applicability of this artificial intelligence technique. In fact, ACO proved to be as efficient as branch-and-bound, however, executing much faster. 1. Production Scheduling Still a Differential for Competitiveness The globalised world economic scenario makes entrepreneurial competitiveness unavoidable and being competitive has become an indispensable prerequisite to organisations that strive for success. Within this context, manufacturing activities become especially important for they decisively influence performance, directly affecting (and being affected by) forecast, planning, and scheduling Shop-floor production scheduling, which within the hierarchical production planning covers disaggregate and detailed decisions in short time frame, consists in allocating activities (production orders or jobs) to resources, by obeying sequencing and setup restrictions, with focus on getting the best possible results from limited available resources, and, at the same time, aiming at reducing production costs and meeting service levels as fast and efficiently as possible. To make all this happen in cases where production and financial resources are limited and restrictions are many, adequate algorithms techniques and intelligence are necessary. Almost four decades ago, Garey et al. 
[1] classified production scheduling problems as being NP-hard, which in practical ways means that it is very difficult for one to obtain an optimal solution through exact algorithms and also demand unacceptable execution (computer or effort) time. The difficulty in using exact techniques for the solution of these problems leads to the use of approximate methods, known as heuristics or metaheuristics, which try to find good acceptable solutions (not necessarily optimal ones) within reasonable computer time. In this study, a metaheuristic known as ant colony optimisation (ACO) was applied to a specific production scheduling problem found in productive systems having(i)one processing stage,(ii)parallel resources with different production capacities,(iii)backward scheduling (based on due dates), and, (iv)products with many possible (flexible) production routings. This particular type of production scenario was found in a large food industry that makes, among many other different products, chocolate bars and eggs—mostly for Easter festivities—common in many countries worldwide, especially in Latin America. For this particular company, production needs to make all forecasted orders by a given due data, and since demand is highly seasonal, most of the labor is hired under temporary contracts. One can imagine that better production schedules imply less money in hiring temporary workers. Because Easter is an important date for these chocolate products (retailers must receive products one to two months prior to the Easter festivities), backward scheduling approach is also used in this type of scenario. From the literature, one can see that the ACO metaheuristic has been applied to solve complex production scheduling. C. J. Liao and C. C. Liao [2], for example, presented an ACO algorithm applied to agile manufacturing. Shyu et al. [3] proposed an application of ACO to a scheduling shop-floor problem with two machines. Rajendran and Ziegler [4] analyzed two ACO scheduling algorithms for flow-shops. Bauer et al. [5] implemented ACO for solving a production-scheduling problem with one machine. Lin et al. [6] conducted a study using ACO for production scheduling and also proposed the inclusion of two new features inspired by real ants. Ying and Lin [7] also used ant colony systems to solve production scheduling problems. For the evaluation of the implemented ACO algorithm, production schedules are analyzed according to two performance measures:(a)maximum completion time or makespan (i.e., total time needed to manufacture all production orders), and (b)computer processing time (effort) for the creation of a production schedule. To measure the proposed ACO’s efficiency, comparisons are made with a similar system implemented using branch-and-bound optimisation. For these comparisons, different scenarios (configurations) of the proposed problem are tested. This paper explains the proposed ACO implemented and all tests and analysis performed. The paper is organised as follows. Section 2 presents a bird’s eye view on the ACO metaheuristic. Section 3 details the manufacturing scenario. Section 4 shows how the ACO software was implemented. Section 5 describes the design of experiments (DOEs) performed, tests, and analysis conducted. Main conclusions and suggestions for future studies are presented in Section 6. 2. 
ACO Metaheuristic As a background support to understand how ACO was used, this section briefly shows the use of the ACO metaheuristic on the travelling salesman problem (TSP) and also describes the ACO metaheuristic applied to production scheduling. 2.1. ACO Metaheuristic Applied to TSP In ant colony optimisation, a given number of ants leave their nest to search for food and there are many possible paths an ant can take to get there. During their walk, ants leave pheromone, which is a substance that tells other ants about paths they can take for food. Each ant will do a certain number of trips from the nest to the source of food and back to the nest. In each of these trips, the ants deposit in the performed path a certain quantity of pheromone. There will be a standard quantity in the case that the path travelled by the ant does not present improvements compared to the best previous track; otherwise, there will be a larger quantity of pheromone, in case the path travelled by the ant is shorter than the previous best path. Meanwhile, there is a continuous decrease in the existing quantities of pheromone in all paths, due to the pheromone evaporation. Finally, the choice of the paths is based on a probability which depends on the quantity of pheromone on a given arc and its distance. It is important to emphasize that the shorter the path, the greater will be the concentration of the pheromone, and consequently, the greater will be the probability of being chosen. TSP consists in a set of localities to be visited, each one only once, by an agent which, after completing a loop (cycle), has to go back to the origin position. The goal of this problem is to find the shortest path that forms a tour passing through all cities. An instance of TSP can be represented by the valued graph $G = (V, E)$, where $V$ represents the nodes (localities to be visited) and $E$ represents the arcs in the graph, where each arc $(i, j)$ has a cost given by the distance. This distance is denoted by $d_{ij}$ and indicates the distance between the city $i$ and the city $j$, as
$d_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$.  (1)
We assume that there exist $m$ ants in the system and that each ant has the following characteristics. (i) Chooses the next city to be visited with a probability which depends on the distance and the quantity of pheromone in the arcs which link every two cities. (ii) In order to force the ants to perform a feasible tour, transfers to cities which have already been visited are discarded until a tour is completed. (iii) When a loop is completed, each ant deposits a certain quantity of pheromone on the arcs visited. Let $\tau_{ij}(t)$ be the intensity of pheromone on arc $(i, j)$ at time $t$. Each ant at time $t$ chooses the next city to which it will go at time $t + 1$. Defining one ACO iteration as the movements realised by the $m$ ants in the interval $(t, t + 1)$, then $n$ iterations of each ant form a loop, that is, each ant realises a tour passing by all the cities. At the end of every loop, the intensity of the pheromone is updated by
$\tau_{ij}(t + n) = \rho \, \tau_{ij}(t) + \Delta\tau_{ij}$,  (2)
where $\rho$ is a coefficient (constant) with $(1 - \rho)$ representing the pheromone evaporation between the times $t$ and $t + n$ on the arc $(i, j)$, and $\Delta\tau_{ij} = \sum_{k=1}^{m} \Delta\tau_{ij}^{k}$, where $\Delta\tau_{ij}^{k}$ is the quantity of pheromone deposited on the arc $(i, j)$ by the $k$-th ant between the times $t$ and $t + n$. The coefficient $\rho$ has to be adjusted to a value smaller than "1" in order to avoid unlimited accumulation of pheromone. Normally, the intensity of pheromone at time 0, $\tau_{ij}(0)$, is adjusted as a positive constant $c$.
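As a rough illustration of how the update in (2) is carried out, the sketch below applies it to a toy instance. The tours and tour lengths are invented for illustration only, and the deposit uses the ant-cycle quantity Q/L_k given in (4) below; this is not the scheduling implementation described later in the paper.

import numpy as np

n_cities, m_ants = 4, 3
rho, Q, c = 0.5, 100.0, 1.0
tau = np.full((n_cities, n_cities), c)               # tau_ij(0) = c

tours = [[0, 1, 2, 3], [0, 2, 1, 3], [0, 3, 1, 2]]   # one (made-up) tour per ant
lengths = [10.0, 12.0, 15.0]                         # L_k for each tour

delta = np.zeros((n_cities, n_cities))
for tour, L in zip(tours, lengths):
    for i, j in zip(tour, tour[1:] + tour[:1]):      # arcs of the closed tour
        delta[i, j] += Q / L                         # ant-cycle deposit, eq. (4)

tau = rho * tau + delta                              # evaporation + deposit, eq. (2)
print(tau.round(2))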
The rule used to satisfy the constraint that each ant visits different cities is to associate with each ant a list, called the tabu list, which stores the cities that have already been visited and forbids the ant from visiting them again before the tour has been completed. When a tour is completed, the tabu list is used to calculate the present solution of the ant (i.e., the distance travelled in the path). We define tabu_k as the vector which grows dynamically and contains the tabu list of the k-th ant, and tabu_k(s) as the s-th city visited by ant k in the present tour. Defining the attractiveness (visibility) as the quantity \eta_{ij} = 1/d_{ij}, we determine the probability of transition from city i to city j by the k-th ant as p_{ij}^{k}(t) = [\tau_{ij}(t)]^{\alpha} [\eta_{ij}]^{\beta} / \sum_{l \in allowed_k} [\tau_{il}(t)]^{\alpha} [\eta_{il}]^{\beta} if j \in allowed_k, and p_{ij}^{k}(t) = 0 otherwise, where allowed_k is the set of cities not yet visited by the k-th ant, and \alpha and \beta are parameters that control the relative importance of the intensity of the pheromone versus the attractiveness. In this way, the transition probability is a combination of the attractiveness and the intensity of the pheromone at time t. According to Dorigo et al. [8], there are many different forms of computing the value of \Delta\tau_{ij}^{k}. One of them, denominated Ant-cycle, is calculated as \Delta\tau_{ij}^{k} = Q/L_k if the k-th ant used arc (i, j) in its tour (and 0 otherwise), where Q is a constant and L_k is the length of the path travelled by the k-th ant.

2.2. ACO Metaheuristic Applied to Production Scheduling

Based on Dorigo et al. [9], Ventresca and Ombuki [10], and Mazzucco Jr. [11], the representation of the production scheduling problem in the form of ant systems may be built through a disjunctive graph. This graph can be defined as G = (V, A, E), where V is the set of vertices of the graph, which corresponds to the set of operations to be scheduled. Two fictional operations, an origin node (described as "0") and a destination node, are also added to the set V, representing the nest and the food source, respectively. Group A is the set of arcs connecting consecutive operations from the same job (task, activity, or production order), together with the arcs that connect operation 0 to the first real operation of each job and the last operation of each job to the destination node. Group E is the set of edges connecting two operations to be performed by the same resource (in this production scenario, a machine). Each arc carries a pair of numbers, a concentration of pheromone and a visibility; the latter can be considered as the operation processing time at the corresponding node. Figure 1 is a representative ant system graph of a production scheduling problem. In this graph, nodes represent the operations to be performed and each operation corresponds to the processing of a certain quantity of product on a single machine. Each of these operations belongs to a determined job (J1, J2, or J3, respectively). In this example, a job can be defined as a production order with a certain quantity of product that must pass through two machines (M1 and M2). This way, a job is defined as a set of operations. In the graph, the operations numbered 1, 3, and 5 are executed by the same machine, M1. Likewise, operations 2, 4, and 6 are executed by the same machine, M2, and belong to jobs J1, J2, and J3, respectively. The initial and final operations (the "0" node and the destination node) are fictional, that is, they are not performed; however, they are required by the metaheuristic. They only exist so that the graph, by being oriented, may have an initial and a final operation (nest and food nodes in the ACO representation). Since they do not have a processing time, these operations do not affect the production scheduling process. The nodes numbered from 1 to 6 represent the operations to be scheduled, and each operation can be symbolically represented as O(o, j, m), where o represents the operation, j represents the job, and m represents the machine. Each operation is indexed according to its position number and the job it belongs to. Node 1, for instance, corresponds to operation O(1, 1, 1), which means that if an ant passes through this node, the system will schedule the first operation of job 1 on machine 1 (M1). Node 2 corresponds to operation O(2, 1, 2), meaning that operation 2 (the second operation) of job 1 can be assigned to machine 2. The same holds for all the nodes corresponding to a given operation, as shown on the right side of the figure. An orientation set of all the edges transforms the graph in Figure 1 into an oriented graph and represents one of the possible solutions to the modeled problem. In the same way, an orientation set defines a sequence or permutation of the operations processed by each machine.
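Before moving on to the specific manufacturing scenario, the following is a minimal, hedged Java sketch of the two Section 2.1 rules that drive the whole method: the transition probability p_{ij}^{k} and the Ant-cycle pheromone update. It is an illustration of the standard ant system for TSP, not the authors' scheduling code, and all parameter values (alpha, beta, rho, Q) are assumptions chosen only for the example.

    import java.util.*;

    /** Minimal ant-system sketch for TSP (illustrative only, not the paper's implementation). */
    public class AntSystemTsp {
        static double[][] dist;                      // d_ij: distances between cities
        static double[][] tau;                       // tau_ij(t): pheromone intensities
        static double alpha = 1.0, beta = 2.0;       // assumed importance weights
        static double rho = 0.5;                     // persistence; (1 - rho) evaporates
        static double Q = 100.0;                     // Ant-cycle constant
        static Random rnd = new Random();

        /** One ant builds a tour using the transition probability p_ij^k (roulette-wheel selection). */
        static int[] buildTour(int n) {
            boolean[] visited = new boolean[n];      // the tabu list
            int[] tour = new int[n];
            int current = rnd.nextInt(n);
            tour[0] = current;
            visited[current] = true;
            for (int step = 1; step < n; step++) {
                double[] weight = new double[n];
                double sum = 0.0;
                for (int j = 0; j < n; j++) {
                    if (!visited[j]) {
                        double eta = 1.0 / dist[current][j];           // visibility
                        weight[j] = Math.pow(tau[current][j], alpha) * Math.pow(eta, beta);
                        sum += weight[j];
                    }
                }
                double r = rnd.nextDouble() * sum, acc = 0.0;
                int next = -1;
                for (int j = 0; j < n; j++) {
                    if (!visited[j]) {
                        acc += weight[j];
                        if (next == -1 && acc >= r) next = j;          // first city crossing r
                    }
                }
                if (next == -1) {                                      // numerical fallback
                    for (int j = 0; j < n; j++) if (!visited[j]) { next = j; break; }
                }
                tour[step] = next;
                visited[next] = true;
                current = next;
            }
            return tour;
        }

        /** Ant-cycle update: evaporate, then deposit Q / L_k on every arc of every tour. */
        static void updatePheromone(List<int[]> tours) {
            int n = tau.length;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    tau[i][j] *= rho;                                  // evaporation
            for (int[] tour : tours) {
                double len = tourLength(tour);
                for (int s = 0; s < tour.length; s++) {
                    int i = tour[s], j = tour[(s + 1) % tour.length];
                    tau[i][j] += Q / len;                              // delta tau_ij^k
                    tau[j][i] += Q / len;                              // symmetric TSP
                }
            }
        }

        static double tourLength(int[] tour) {
            double len = 0.0;
            for (int s = 0; s < tour.length; s++)
                len += dist[tour[s]][tour[(s + 1) % tour.length]];
            return len;
        }
    }

The same select-then-reinforce loop is what Section 4 adapts to the scheduling graph, with operations taking the place of cities.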
3. The Manufacturing Scenario Considered in this Project

The problem covered in this research consists of optimising production scheduling in systems having only one processing stage with parallel resources and flexible routings. In other words, any product has one single operation (or processing stage), which may require one or more resources. Hence, each job has only one operation, and the objective is then to schedule a set of jobs J = {J[1], J[2], ..., J[S]} within a minimum time-frame (makespan). This scenario was found in a particular food industry where part of this study was accomplished. The considered productive system is characterised by having parallel resources, that is, an operation can be performed by more than one machine or productive resource (as explained below). Thus, there is a set of M machines available. It is worth mentioning that such machines can differ in capacity and efficiency and, therefore, the system presents different processing capacities for the same product. Another important characteristic of the considered productive system is that products can have flexible routings, which means that a particular product can have more than one possible process plan. It is assumed in this paper that each job J can be processed by any of the M machines, that is, each job J has a flexible routing. Schematically, the routings are RF[1], RF[2], ..., RF[S], where RF[1] is the flexible routing of job 1, RF[2] is the flexible routing of job 2, and RF[S] is the flexible routing of job S. Each flexible routing is formed by a set of similar operations: RF[1] = {O(1, 1, 1), ..., O(1, 1, M)}, RF[2] = {O(1, 2, 1), ..., O(1, 2, M)}, and RF[S] = {O(1, S, 1), ..., O(1, S, M)}, where O(1, 1, 1) represents operation 1 of job 1 processed by machine 1, O(1, 2, 1) represents operation 1 of job 2 processed by machine 1, and O(1, S, 1) represents operation 1 of job S processed by machine 1. The system implemented also uses backward scheduling. Each job J has a due date that must be met. The problem thus consists of scheduling all operations in order to minimise the total time needed for their execution (makespan), bearing in mind the delivery due dates of all products. There are also other restrictions in this scenario. (a) Suppliers' lead times: the system should not schedule an order if the needed material(s) is (are) not available. (b) Setup times: this is in fact a setup-dependent scheduling algorithm. The food company studied considers minimizing total production time (makespan) as the main optimisation objective and, therefore, this is the objective function used by the ACO system implemented. This performance measure represents the length (total time) of the production schedule. In other words, it is the ending time of the last job scheduled minus the starting time of the first job scheduled. One can also consider makespan as the ending time of the last operation to be processed minus the starting time of the first machine that begins its operation.
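As a small, hedged illustration of this performance measure (not code from the paper; the ScheduledOperation record is a hypothetical type introduced only for the example, and it relies on Java 16+ records), the makespan of a finished schedule can be computed as follows.

    import java.util.List;

    /** Illustrative makespan calculation: end of the last operation minus start of the first. */
    public class MakespanExample {
        /** Hypothetical record of one scheduled operation (start/end in hours). */
        public record ScheduledOperation(String job, String machine, double start, double end) {}

        public static double makespan(List<ScheduledOperation> schedule) {
            double earliestStart = Double.POSITIVE_INFINITY;
            double latestEnd = Double.NEGATIVE_INFINITY;
            for (ScheduledOperation op : schedule) {
                earliestStart = Math.min(earliestStart, op.start());
                latestEnd = Math.max(latestEnd, op.end());
            }
            return latestEnd - earliestStart;   // total length of the production schedule
        }
    }

Usage is simply makespan(schedule) over the list of operations produced by whichever scheduler is being evaluated.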
Although minimizing the maximum makespan is the objective of the ACO system developed, this research also considered the total computer time (effort) as a performance measure to evaluate ACO against another optimisation technique: branch-and-bound (BB), which was used in another study and serves in this paper as a benchmark to help us evaluate the efficiency of the proposed ACO.

4. The ACO Production Scheduling Implemented

As previously mentioned, each product process plan (routing) is made of a single operation (monostage). Therefore, one can say that each flexible routing has a set of similar operations and, when one of these operations is chosen, the others are discarded. Under the proposed ACO graph model, each job's flexible routing (RF) corresponds to a set of nodes (see the dashed squares in Figure 2). In the beginning these are "virtual" nodes, available for an ant to pass through. Once an ant chooses and passes through a node, it becomes a chosen node, meaning that the operation of this job has been scheduled (all remaining similar possible operations in the RF are discarded). As previously mentioned, the initial node (nest) and the final node (food) are parts of the ACO graph. These nodes are fictitious, that is, they are not actual production scheduling activities, while all other nodes correspond to operations that can be scheduled. The edges correspond to the time or duration of the operation to be scheduled, and there are no edges linking operations from the same group (RF), that is, from the same job, considering that they will be eliminated if one of them is scheduled. There are, however, nonoriented edges that link the operation groups (RFs) and indicate that all the jobs need to be scheduled, regardless of the sequencing (see the connecting arrows among the dashed squares). Ants leave the "Nest" aiming to find a source of "Food"; however, all jobs must be scheduled before the ant gets to the food node (the complete path between the nest and food nodes comprises a feasible schedule). In the beginning, the ants find different sources of food, and as the algorithm evolves, the number of food sources converges to the "best food sources."

4.1. Structure of the Implemented Software

The ACO technique uses the natural behavior of an ant colony, which tries to find the shortest paths between the nest and the food through communication among the agents (ants) using pheromone, a trail of "smell for food" for the other ants. The ACO system has a set of configuration parameters that impact the quality and performance of the algorithm. In this project, six ACO parameters were analysed. (i) Number of ants (NA): the quantity of ants simultaneously searching for food (each ant will create a possible schedule). (ii) Number of travels of each ant (NT): the number of times that each ant will travel between the nest and the source of food. (iii) Quantity of initial pheromone (QIP): the quantity of pheromone that already exists on all the paths before the ants start their travels. (iv) Quantity of added pheromone (QAP): the quantity of pheromone left by an ant on the path taken between the nest and the food nodes.
In this implementation, pheromone is added right after the ant reaches the food node. (v) Evaporation percentage (EP): the quantity of pheromone lost (that evaporates) on each path as time passes. (vi) Best response valorisation (BRP): the better the solution found by an ant, the more pheromone is added to its path. The basics of the ACO system implemented are described in Figure 3. It briefly consists of (a) reading the input data; (b) initializing the system, that is, the ants and their respective tabu and feasible-node lists; (c) creating the graphs for each ant, with their respective nodes and edges. The stopping criterion is based on the total number of ants and the number of travels (from nest to food) each ant must achieve. During the scheduling process, each ant goes from node to node in the ACO graph, and each ant keeps track of its own path as it is created. The logic of this process lies in the fact that as an ant leaves the nest it starts scheduling production orders until it reaches the food (which means that all needed jobs have been scheduled). The scheduling of an order means that a given node was selected and has been included in the ant's path to the food. An ant randomly chooses the next operation (or the next node to go to) based on the operation processing time and on the quantity of pheromone present on the edges connecting the node where the ant currently is and the other possible nodes. The more pheromone exists on an arc (edge), the greater the probability for an ant to select it. When an ant schedules all of its possible operations, it means that it has reached a source of food and one travel is complete. The solution found by each ant is analysed and, depending on the response quality, the path created by each ant will receive a specific quantity of pheromone, according to previously established quality criteria (the better the schedule, the more pheromone is deposited on every arc in the path). Since the probability of selecting a node also depends on the quantity of pheromone on the arc, better solutions tend to influence other ants to choose the same path. When all ants complete the number of needed travels, the scheduling process ends and the best response is presented.

4.2. The ACO System Class Structure

The ant colony optimisation algorithm was implemented following an object-oriented structure. A total of eight object classes were implemented; they are briefly explained next, and the dependencies among these classes are depicted in Figure 4. (i) ACO_SFS: implements setup and configuration procedures, run (execute) commands, evaporation execution, savings, and so forth ("SFS" stands for shop-floor scheduling). (ii) Ant_SFS: implements an ant and tabu list procedures; keeps track of the edges taken and of the current node position; calculates probabilities, verifies the next possible (feasible) nodes, moves to the next node, updates pheromone, and so forth. (iii) Config_ACO_SFS: procedures for the creation and configuration of the ACO. (iv) Graph_SFS: implements the graph that ants walk through. A graph is basically a list of nodes and a list of edges. In the very beginning, a graph has only two nodes: nest and food. (v) Edge_SFS: an edge is a set of two nodes (begin and end node) and information about pheromone. Remember that different ants can walk through the same edge and are affected by the same pheromone on the edge. (vi) Node_SFS: a node corresponds to an operation that can be scheduled.
A production order is basically made of a set of possible operations. (vii) FeasibleNode_SFS: a feasible node is a node in the graph where the ant can go, which means that the ant can "schedule" that operation. (viii) Way_SFS: this class implements the path an ant takes. It is basically a list of edges. Several other classes were also developed. These classes implemented the production scheduling logic and attributes, such as object classes to model resources, production orders, products, production routings (process plans), calendars, and setup matrices, among others (for conciseness, these classes will not be explained). A screenshot of the system is shown in Figure 5.

5. Experiments Planning and Analysis

Two types of experiments have been made in this project: (a) factorial experiments (2^6) have been used to verify the influence of each ACO configuration parameter on the objective function (makespan + computer time); (b) experiments to verify the ACO efficiency in relation to branch-and-bound have been carried out through an analysis of variance.

5.1. Influence of the ACO Configuration Parameters on Performance

The objective of this first type of analysis is twofold: (a) to understand how the values of the input (or configuration) parameters affect the solution quality and the computer execution time needed; (b) these factorial experiments will also be used to help us identify the best (optimum) values for the ACO system configuration parameters. In the factorial analysis performed, two levels for each of the six parameters (BRP, NA, QIP, QAP, NT, and EP) are considered, leading to 64 (2^6) different ACO configurations to be tested. The low and high levels for this analysis are shown in Table 1. After defining the scope of the experiments, an analysis of variance (ANOVA) was performed. For this, each one of the 64 configuration scenarios was run sixteen times (the sample size was chosen arbitrarily). Table 2 shows the ANOVA experiment results obtained. From the analysis of Table 2, and based on the theory about ANOVA, one can conclude that the parameters BRP, EP, NT, QIP, and QAP significantly affected the problem results. The only parameter that did not seem to affect performance was the number of ants (NA), according to a 95% confidence level. It is possible that the difference between the low and high values for the NA parameter was not enough to cause any significant impact on the system performance; also, possibly, the number of travels has hidden some of the effect of this parameter. A detailed analysis is given next. If the best response valorisation parameter is set too low, the quantity of pheromone deposited on the path will not be sufficient to make other ants follow such a path. On the other hand, if BRP is set too high, it forces all ants to follow a single path, prematurely converging to a solution that might be far from a good one. Parameter EP (evaporation rate) determines the quantity of pheromone lost (that evaporates) on the paths as time passes by. The fact that a higher quantity of substance evaporates within a specific period of time can be important, so that a local minimum solution is not chosen. Parameter NT (number of travels that each ant must execute) clearly affected the solution quality. As said before, this parameter probably hid the NA effect on performance. The importance of parameter QIP (quantity of initial pheromone in the system) is noticed when compared to the quantity of pheromone added after a travel is complete.
Setting QIP too high and using a low QAP may not influence ants to take the best paths, but both parameters significantly affect the response quality. Finally, as explained previously, parameter NA (number of ants in the system) had no significant influence on the quality of the problem responses. This result may also be caused by the fact that, in the implemented ACO, the pheromone is only added to the paths after an ant reaches the food node, not during the search. This, however, is closer to the natural behavior of ants, which, by carrying food back to their nest, scratch their bellies on the ground, "leaving" the pheromone. One would assume, however, that the number of ants does affect the answer; however, for the two levels considered in this ACO system (20 and 50 ants), and given the strong influence of NT (number of travels), this was not observed. Table 3 provides a summary of this analysis, showing which variables are and are not significant in the experiments executed. The main conclusion of the first phase of this study is that most of the configuration parameters considered, which affect directly or indirectly the quantity of pheromone on a path, significantly impact the response quality (except the number of ants, as explained before). In the next phase, the configuration parameters used for the comparative tests are set according to the methodology proposed by Montgomery and Runger [12], which basically consists of choosing the variable level for which the sum of the response averages is higher; the parameter levels for the comparative tests were chosen accordingly. As previously mentioned, the parameter referring to the number of ants in the system did not seem to be significant for schedule quality (makespan). According to Montgomery and Runger [12], in such cases, a specific parameter level must be chosen so as to optimise economy, operation, or any other strong technical factor when executing the algorithm. This was the only parameter to be changed, by attributing 10 as its value. In so doing, there was a decrease in the computer time required to achieve the response while maintaining its quality.

5.2. Analysing the ACO Metaheuristic's Efficiency in Comparison to Branch-and-Bound

In order to test the ACO metaheuristic's efficiency in production scheduling optimisation (in scenarios similar to the one adopted in this paper), comparative tests of the solutions generated by the ACO metaheuristic were made against solutions obtained through the use of a branch-and-bound optimisation method (already implemented by the authors of this article). For the execution of this second phase of the experiments, a productive system that makes 20 different items was considered. The experiments also considered products with several possible process plans (i.e., routings). Three scenarios considered five possible machines for the operations and a total of 20, 40, or 60 jobs. Six other scenarios considered ten possible machines for the operations and 20 to 120 jobs to be scheduled. This is summarised in Table 4. Considering a scenario with 5 machines and a particular product A, this product's operation could be done on machine I, II, III, IV, or V. Production capacities of these machines are 5, 6, 7, 8, and 9 units/hour, respectively (so the machines are not identical). Considering a scenario with 10 machines, product A could go to machine I, II, III, IV, V, VI, VII, VIII, IX, or X. Production capacities are 5, 6, 7, 8, 9, 10, 11, 12, 13, and 14 units/hour, respectively.
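To make the role of these non-identical capacities concrete, here is a small hedged sketch (the order quantity of 420 units and the class name are invented for the illustration) showing how one production order's duration differs across the five machines, since a processing time is simply the order quantity divided by the machine's capacity.

    /** Illustrative only: durations of one order's flexible-routing alternatives. */
    public class ProcessingTimeExample {
        public static void main(String[] args) {
            // Capacities (units/hour) for the 5-machine scenario described above.
            double[] capacity = {5, 6, 7, 8, 9};
            double orderQuantity = 420;   // hypothetical quantity for product A

            for (int m = 0; m < capacity.length; m++) {
                double hours = orderQuantity / capacity[m];   // processing time on machine m+1
                System.out.printf("Machine %d: %.1f hours%n", m + 1, hours);
            }
        }
    }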
Each of the 9 scenarios has a number of POs (production orders) to be scheduled. Here, each PO corresponds to a job that comprises a given quantity of a particular product that needs to be made. In this part of the paper, it was considered that each scenario would be characterised by having 20, 40, 60, 80, 100, or 120 POs. Figure 6 shows the quantity of each product to be produced in each PO per scenario. The methodology used for the comparative analysis of the two techniques (ACO and BB) was the two-sample hypothesis test, in order to determine the difference between two averages and the difference between two variances. Table 5 shows the results obtained with the experiments for the previously described scenarios, in which the column named BB brings the results achieved with branch-and-bound and the column named ACO the results with ACO. Firstly, an F-test was applied to the results of the two response variables studied: objective function (makespan) and computer time (effort). This test aimed to verify whether the variances between the two techniques were significantly different. After that, a t-test could be applied to verify how significant the difference between the results was, and then to infer about the ACO technique's efficiency in production scheduling optimisation.

5.2.1. The F-Test for the Response Variables (Prestep)

The F-test was executed to verify the difference in variance between the two samples, regarding both the makespan and the computer time needed to achieve the best results. This is a prestep to define the appropriate test to compare averages (Section 5.2.2). For this test, the Data Analysis package from Microsoft Excel was employed, more specifically the F-test "two samples for variances" function, with a significance level of 0.05. Table 6 shows the results obtained. Since the computed F statistic exceeds the critical value (0.83 > 0.29), the null hypothesis must not be accepted, since it assumes that the variances of the two samples are equal. Hence, there is statistical evidence that the difference between the variances is significant, which means that, within a 95% confidence level, the variance in makespan obtained with ACO is different from the one obtained using BB. Table 7 presents the F-test result regarding the variance analysis of the computer time needed to achieve the results for each technique. Since the computed F statistic again exceeds the critical value (100.69 > 3.44), the null hypothesis shall not be accepted, since it assumes that the variances of the two samples are equal; that is, there is evidence, again, that the difference between the variances is significant concerning the computer time (effort). Summing up, the F-tests showed that the variance regarding the minimisation of the maximum makespan was considerably better (smaller) using BB compared to ACO. However, the computer (processing) time variance using ACO was smaller than that using BB.

5.2.2. t-Test for the Response (Objective) Variables Considered

The F-test results for the two response variables showed that, in both cases, the variances of the samples are different. That leads to the analysis of the difference between the result averages of the two studied objectives (makespan and computer time) through the use of a t-test employing two samples and presuming different variances. To do so, the Data Analysis tool from Microsoft Excel was again used, more specifically the t-test function for two samples presuming unequal variances, with a significance level of 0.05.
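For readers who want to see what the "two samples presuming unequal variances" option computes, the following is a minimal, hedged Java sketch of Welch's two-sample t statistic; it is an illustration only, not the spreadsheet procedure used in the study, and the sample values in main are invented.

    /** Illustrative Welch two-sample t statistic (unequal variances). */
    public class WelchTTest {
        static double mean(double[] x) {
            double s = 0;
            for (double v : x) s += v;
            return s / x.length;
        }

        static double variance(double[] x) {
            double m = mean(x), s = 0;
            for (double v : x) s += (v - m) * (v - m);
            return s / (x.length - 1);   // sample variance
        }

        /** Returns {t statistic, Welch-Satterthwaite degrees of freedom}. */
        static double[] welch(double[] a, double[] b) {
            double va = variance(a) / a.length;
            double vb = variance(b) / b.length;
            double t = (mean(a) - mean(b)) / Math.sqrt(va + vb);
            double df = (va + vb) * (va + vb)
                    / (va * va / (a.length - 1) + vb * vb / (b.length - 1));
            return new double[]{t, df};
        }

        public static void main(String[] args) {
            // Hypothetical makespan samples (hours) for ACO and BB; not the paper's data.
            double[] aco = {118, 125, 121, 130, 127};
            double[] bb  = {120, 122, 121, 123, 122};
            double[] r = welch(aco, bb);
            System.out.printf("t = %.3f, df = %.1f%n", r[0], r[1]);
            // The t statistic is then compared with the critical value (or its p-value)
            // at the chosen significance level, e.g., 0.05.
        }
    }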
Table 8 presents these results. Referring to the makespan obtained with the ACO and BB techniques, the null hypothesis must be accepted, since the p-value of 40% is higher than the significance level adopted (5%); that is, the difference between the averages is not significant. This conclusion is interesting because it says that ACO is behaving as well as BB regarding the minimisation of makespan (in this productive scenario and for the ACO and BB software implemented). The t-test results regarding the computer time of the two compared techniques are presented in Table 9. Regarding computer time (effort), the null hypothesis must be rejected, since the p-value (4.7%) is lower than the significance level adopted (5.0%). In other words, there is a significant difference between the averages. By looking at the averages shown in Table 9, one can confirm that ACO runs much faster than BB. Despite the fact that the makespan was similar using ACO and BB, ant colony optimisation executed much faster than branch-and-bound. Table 10 summarises these results. The objectives that guided this research were supported by two basic ideas. First, the intention to verify whether the ACO metaheuristic was a feasible technique for solving backward production scheduling optimisation in monostage productive systems with parallel manufacturing resources, different production capacities, and flexible routings. Second, the study also intended to evaluate the efficiency of ACO compared to the branch-and-bound technique. By using statistical F-tests and t-tests, the ACO metaheuristic's efficiency was, in fact, verified, taking into account the quality of the generated production plan (in terms of makespan) and the computer time required for the creation of production schedules. Although there was no statistically significant difference between BB and ACO regarding makespan, if one assumes that BB gives a good answer, then ACO will perform similarly. In terms of computer time, however, ant colony optimisation performed much faster than branch-and-bound. It is important to emphasise that these results were obtained for the type of production scenarios considered.

6. Final Considerations

This paper focused on verifying whether ant colony optimisation could be effectively applied to a production scheduling problem found in some types of food industries operating with backward scheduling and considering monostage productive systems, parallel resources, and flexible routings. The analysis studied the metaheuristic's configuration variables regarding their influence on the variations and averages of the response variables. It showed that the ACO configuration parameters (best response valorisation, evaporation, quantity of initial pheromone, quantity of added pheromone, and number of travels) proved to be significant in relation to their influence on the response quality (at a 95% confidence level). The only variable among the studied ones that did not prove to be significant was the number of ants in the system. Besides verifying which system configuration variables affect the algorithm response quality, these experiments also helped to set the ACO configuration variables later used to test the efficiency of the ACO method. Hence, regarding the ACO technique efficiency analysis, quality was measured in terms of the makespan and the computer time spent to achieve good responses, while the efficiency analysis was done by comparing ACO results with branch-and-bound optimisation. Regarding makespan, it was not possible to point out any significant difference between the two methods.
This led us to conclude that ACO is as efficient as BB for backward production scheduling in monostage problems with routing flexibility. Regarding computer time, the t-tests revealed that the ACO technique runs much faster than branch-and-bound in solving the type of production scenario considered, with a large number of resources and jobs (production orders or tasks). One could verify that the time needed to achieve the response increases in a higher proportion using branch-and-bound than using the ACO technique. It is important to point out that the conclusions from this study refer strictly to the type of production problems herein covered. For future studies, some suggestions are: (i) productive systems with more than one processing stage could be considered; (ii) implementation (and tests) in a forward scheduling environment; (iii) different ACO implementation characteristics could also be tested, like, for instance, enabling evaporation to occur at each move of the ants and not only when they reach a food source (the same for pheromone deposit); (iv) different metaheuristic configuration (setup) variables could be tested; and (v) other possible implementations could consider a multiobjective function, with other objectives such as minimizing lateness, average flow time, setups, and resources' idleness.

References

1. M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, 1979.
2. C. J. Liao and C. C. Liao, "An ant colony optimisation algorithm for scheduling in agile manufacturing," International Journal of Production Research, vol. 46, no. 7, pp. 1813–1824, 2008.
3. S. J. Shyu, B. M. T. Lin, and P. Y. Yin, "Application of ant colony optimization for no-wait flowshop scheduling problem to minimize the total completion time," Computers and Industrial Engineering, vol. 47, no. 2-3, pp. 181–193, 2004.
4. C. Rajendran and H. Ziegler, "Two ant-colony algorithms for minimizing total flowtime in permutation flowshops," Computers and Industrial Engineering, vol. 48, no. 4, pp. 789–797, 2005.
5. A. Bauer, B. Bullnheimer, R. F. Hartl, and C. Strauss, An Ant Colony Optimization Approach for the Single Machine Total Tardiness Problem, Department of Management Science, University of Vienna, Vienna, Austria, 1999.
6. B. M. T. Lin, C. Y. Lu, S. J. Shyu, and C. Y. Tsai, "Development of new features of ant colony optimization for flowshop scheduling," International Journal of Production Economics, vol. 112, no. 2, pp. 742–755, 2008.
7. K. C. Ying and S. W. Lin, "Multiprocessor task scheduling in multistage hybrid flow-shops: an ant colony system approach," International Journal of Production Research, vol. 44, no. 16, pp. 3161–3177, 2006.
8. M. Dorigo, V. Maniezzo, and A. Colorni, "Positive feedback as a search strategy," Tech. Rep. 91-016, Dipartimento di Elettronica, Politecnico di Milano, Milano, Italy, 1991.
9. M. Dorigo, V. Maniezzo, and A. Colorni, "Ant system: optimization by a colony of cooperating agents," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 26, no. 1, pp. 29–41, 1996.
10. M. Ventresca and B. Ombuki, "Ant colony optimization for job-shop scheduling problem," Tech. Rep., Department of Computer Science, St.
Catharines, Canada, 2004.
11. J. Mazzucco Jr., Uma abordagem híbrida do problema da programação da produção através dos algoritmos simulated annealing e genético [Ph.D. thesis, Production Engineering, Universidade Federal de Santa Catarina (UFSC)], Florianópolis, Brazil, 1999.
12. D. C. Montgomery and G. C. Runger, Estatística Aplicada à Engenharia, LTC, Rio de Janeiro, Brazil, 2004.
{"url":"http://www.hindawi.com/journals/aai/2012/312132/","timestamp":"2014-04-19T21:15:16Z","content_type":null,"content_length":"195562","record_id":"<urn:uuid:214be783-37e2-4287-838f-bcefbcac964f>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathnium is a program for interactive numerical computations. With its comprehensive library of functions for a variety of problems in applied mathematics, and with its facilities for the definition and manipulation of arrays of numbers as basic data objects, Mathnium allows you to solve numerical problems rather painlessly and without a great deal of programming effort. You can also extend the capabilities of Mathnium by using it as a programming language. While it is not a general purpose language, its facilities are more than sufficient for you to be able to implement your algorithms in a manner which is not only extremely concise but also very close to the mathematical description of the algorithms. Mathnium is written in Java, and it provides convenient language constructs which do not require you to actually write formal Java programs to use existing Java classes. As a result, a very large number of existing libraries of Java classes, most of them available from open source projects, can be very easily used within the Mathnium environment. Mathnium is inspired by Matlab, a very successful and versatile language for scientific computing. Although it should be possible to execute within the Mathnium environment a Matlab program that does not use any functions that are not in the Mathnium library, no attempt has been made (or is planned) to make Mathnium completely compatible with Matlab. The simplest use of Mathnium is as a calculator: you type in an expression or a group of expressions separated by commas, and Mathnium types the results back.
>>2+3, 3^2 # A simple statement
>>for i=1:10 # A control structure
> s=s+i
> end
It is also possible to execute a set of commands from a text file by using the function exec. If you issue the command, Mathnium will read and execute the subsequent commands from the file named cfile.m. Moreover, if the name of a file to be executed has the suffix .m, you just have to type in its name without double quotes. Thus, if myfile is a currently undefined variable, the command will result in the execution of the statements in the file myfile.m. All the input in a line following one of the three strings #, % or // is treated as a comment. Multi-line comments commencing with the string /* are terminated by the string */.
>>a=2 % comment
>>b=10 # comment
>>c=a+b // comment
>>Multi line comment
A Mathnium session may be terminated by using the command exit or quit. In addition to integers and floating point numbers, Mathnium allows you to deal with complex numbers in a natural way. A complex number is specified by either appending the string _j to the imaginary part of the number, or by appending the imaginary part of the number to the string j_. Initially, the value of the variable i is set to be sqrt(-1), and so pre- or post-multiplication by i may also be used to make any real-valued expression imaginary. All the arithmetic operators valid for real numbers can be used with complex numbers as well.
2 + 3i 8 + 3i -3 + 15i 0.8462 + 1.2308i
There is no explicit limit on the number of characters in identifiers (variable names), whose first character must be a letter which may be followed by numeric digits, letters, or underscores. The names may not begin with j_ or i_, as any name of the form j_xxxxx or i_xxxxx is interpreted as sqrt(-1)*xxxxx. Likewise, the variable names may not terminate with _j or _i, as any name of the form xxxxx_j or xxxxx_i is interpreted as sqrt(-1)*xxxxx.
Arrays are basic data types in Mathnium in that they can be used as operands for various operators. A one-dimensional array is just a row or a column of numbers. A two-dimensional array consists of rows of numbers, with each row consisting of the same number of elements. An array with p rows, each of which contains q numbers, is said to be a p by q array or an array containing p rows and q columns. Thus, a one-dimensional array is either a p by 1 or a 1 by q array. Two arrays are of the same size if they have the same number of columns and rows. The simplest way to enter an array is to enclose it in square brackets, separating the rows either by a semi-colon or by a carriage return.
>>a=[2 3 4; 6 3 2]
>>b=[6 5 3 2]
Multi-dimensional arrays may be defined by using the function cat, which catenates a set of specified arrays along a particular dimension. If the first input to the function is one, it stacks the successive elements in rows of a matrix, and if it is two, it stacks them in columns. Clearly, for higher values of the first input, the function cat stacks the elements along the specified dimension of a multi-dimensional array.
>>//catenate along the third dimension
>>cat(3,[2 3 ; 4 5],[7 8; 9 10])
[:,1:2, 1] [:,1:2, 2]
>>//catenate along the fourth dimension
>>cat(4,[2 3 ; 4 5],[7 8; 9 10])
[:,1:2, 1, 1] [:,1:2, 1, 2]
The arithmetic operators + - * / ^ can be used with scalars (i.e., arrays with one row and one column) in the usual way for addition, subtraction, multiplication, division and exponentiation. However, you have to be a little bit more careful when using these operators with arrays. A scalar can be added to an array of numbers by using the addition operator +. The result is an array obtained by adding the scalar to each element of the array used as the operand. The operators -, *, / and ^ can be used in a similar fashion with a scalar and an array as the operands. The operators +, - and ^ can be used with two arrays of the same size as the operands. The result is obtained by the element-wise application of the operator to the corresponding elements of the two arrays, and is an array of the same size as the operands. For the similar element-by-element multiplication or division with two arrays of numbers, you can use the operators .* and ./. The operator .^ corresponds to element-by-element exponentiation.
>>a=[2 3 4]
>>b=[6 7 8]
-4 -4 -4
>>a./b // Right element by element division
0.3333 0.4286 0.5
>>a.\b // Left element by element division
3 2.3333 2
If the operands are arrays, the operators *, / and \ correspond to matrix multiplication, right division, and left division respectively. Thus, these operators can be used with arrays only if the number of rows in the second operand is the same as the number of columns in the first operand. The number of rows in the resulting array is the same as the number of rows in the first operand, and the number of columns in the resulting array is the same as the number of columns in the second operand. Thus, a system of linear equations with a square coefficient matrix can be solved by using the matrix division operator. (If the divisor is not a square matrix, the least square solution of a system of equations is returned.)
>>a=[2 3 ; 8 9]
>>b=[7 9 10; 88 32 67]
33.5 2.5 18.5
-20 1.3333 -9
>>c=[6 2; 9 10;12 18]
-6.3333 2.3333
-0.1667 1.1667
The operators .' and ' can be used for obtaining, respectively, the transpose and the Hermitian conjugate of matrices.
>>a=rand(2,3) // rand(2,3) returns a 2X3 matrix of random numbers.
0.9897 0.1033 0.8171
0.2327 0.0397 0.7688
>>a.' // transpose
0.9897 0.2327
0.1033 0.0397
0.8171 0.7688
>>a=a+j_rand(2,3) // the prefix j_ multiplies the expression by sqrt(-1).
0.9897 + 0.4806i 0.1033 + 0.7386i 0.8171 + 0.3632i
0.2327 + 0.6523i 0.0397 + 0.9303i 0.7688 + 0.1731i
>>a' // Hermitian conjugate
0.9897 - 0.4806i 0.2327 - 0.6523i
0.1033 - 0.7386i 0.0397 - 0.9303i
0.8171 - 0.3632i 0.7688 - 0.1731i
>>a.' // transpose (no conjugation)
0.9897 + 0.4806i 0.2327 + 0.6523i
0.1033 + 0.7386i 0.0397 + 0.9303i
0.8171 + 0.3632i 0.7688 + 0.1731i
The following relational and logical operators are defined in Mathnium:
== equal
!= not equal
> greater than
>= greater than or equal
< less than
<= less than or equal
~ (or !) unary negation
|| (or |) logical or
&& (or &) logical and
The relational and logical operators accept the following types of arguments: (i) scalars, (ii) an array and a scalar, (iii) two arrays of the same size. If the operands for relational operators are scalars, they return 1 (for TRUE) if the relationship is satisfied by the operands; otherwise the result of these operations is zero (for FALSE). In other cases, the relational operators yield an array of zeros or ones obtained by pairwise comparison of each element of the operands. Note that if one of the arguments is a scalar and the other an array, the same scalar is compared with each of the elements of the array. The logical operators behave in the same way as the relational operators except that they accept only integers and integer arrays as operands. The logical operators assume any nonzero integer to be TRUE and zero to be FALSE, and return zeros or ones.
>>[2 10 5]==2
>>[3 8 ; 4 10] != [ 4 8 ; 3 10]
>>[2 7 0]&&0
>>[2 7 0]&&1
>>~[2 3 2]
>>~[2 0 8]
The Mathnium interpreter comes with close to four hundred functions for numerical computations, graphics, and input-output. The detailed descriptions of the functions are available elsewhere. Here is a brief overview of the kinds of computations that can be performed with Mathnium. The functions for numerical computations cover the following areas:
• Linear Algebra
• Integration, Differentiation and Taylor Series Expansion
• Evaluation of Mathematical Functions
• Solution of Nonlinear Equations and Function Minimization
• Ordinary Differential Equations
• One and Two-Dimensional Fast Fourier Transforms
• Data Analysis: Sorting and Simple Statistics
• Polynomial Arithmetic
What follows are some examples of the use of some of the functions for numerical computations. The first function that you should learn to use is rand. It allows you to conveniently generate arrays for input to various functions with which you want to play around in order to familiarize yourself with Mathnium. The call rand(p,q) generates a p by q array of real numbers which are uniformly distributed between 0 and 1.
0.5848 0.7915 0.296 0.4346
0.1586 0.4306 0.5582 0.2974
0.0235 0.3031 0.3388 0.5262
0.7698 0.5108 0.039 0.6881
The last call returns the array of eigenvalues of the matrix a in the diagonals of the output d and the matrix of eigenvectors in the variable c. The call illustrates two basic features of the syntax for function calls in Mathnium: (i) the output arguments are separated from the inputs and (ii) functions can return multiple outputs. These features lead to considerable clarity in function definitions, as you will begin to appreciate as you become more familiar with the system. Let us try to see the accuracy of the eigenvalue computations by calculating the norm of a matrix whose elements should all be zero in an infinite precision calculation.
The function svd computes the singular value decomposition of a matrix: >>a=rand(100,100) # Define a matrix >>[u,s,v]=svd(a) # Compute the SD >>norm(u'*a*v-s) # Check the accuracy of the results The function qr is one more in a long list of Mathnium functions for matrix decompositions. If called with three output arguments, it returns a lower triangular matrix r, a unitary matrix q and a permutation matrix e such that q*r = a*e' Here is an example for the use of qr: >>q'*q // q is unitary. 1 -2.7756e-017 1.1102e-016 -5.5511e-017 -2.7756e-017 1 -4.1633e-017 -1.5613e-017 1.1102e-016 -4.1633e-017 1 -8.3267e-017 -5.5511e-017 -1.5613e-017 -8.3267e-017 1 >>r // r is upper triangular. -1.2933 -0.5724 -0.8276 -0.1615 0 -0.8898 0.0062 -0.1368 0 0 -0.4123 -0.1434 0 0 0 0.1931 >>q*r-a*e' // This should vanish. Scaled by 10^-16 2.2204 2.2204 2.2204 0.5551 0.2776 -1.1102 0.2776 -0.5551 1.1102 0.5551 2.2204 0.5551 2.2204 0 1.1102 0.1388 We give just one more example from linear algebra. The function simplex is an implementation of the simplex algorithm for solution of linear programming problems. The simplest of such problems are of the form Given a row vector c, a column vector b and a matrix a, find the vector x that minimizes c*x subject to the constraint a*x = b. Of course, simplex solves the more general problems also, but the following example is for the restricted problem: >>a=[1 6 -1 0; 0 -3 4 1] >>b=[2 ; 8] >>c=[0 -2 4 0] >>x // The solution >>cx // The optimal cost function >>a*x // This should be the same as b. Most of the common mathematical functions are available in Mathnium. Many Mathnium functions have been defined such that they accept an array of numbers as input, and return a table of the values of the mathematical function for each element of the input array. >>sin([0 pi/10 pi/4]) 0 0.309 0.7071 Columns 1 through 8 1 0.9975 0.99 0.9776 0.9604 0.9385 0.912 0.8812 Columns 9 through 11 0.8463 0.8075 0.7652 Note here that in the call to bessj, for calculating the Bessel function of the first kind of the order zero, the input [0:.1:1] corresponds to a table of numbers between zero and one, with increments of 0.1. This construct with colons can be used to conveniently generate an array with regularly increasing or decreasing values. The input to many Mathnium functions may be in the range of inputs for which they return complex values. In fact, all the Mathnium trigonometric and hyperbolic functions, including the inverse functions, can be called with any real or complex numbers as the input so long as the corresponding value of the function is bounded. >>y=asin(x) % Some of the elements of the input are > 1. >>y % Columns 1 through 3 0 0.2014 0.4115 Columns 4 through 6 0.6435 0.9273 1.5708 Columns 7 through 9 1.5708 + 0.6224i 1.5708 + 0.867i 1.5708 + Columns 10 through 11 1.5708 + 1.1929i 1.5708 + 1.317i >>sin(y) % should be same as x Columns 1 through 3 0 0.2 0.4 Columns 4 through 6 0.6 0.8 1 Columns 7 through 9 1.2 +4.0617e-017i 1.4 +5.9995e-017i 1.6 +7.647 Columns 10 through 11 1.8 +9.1644e-017i 2 +1.0606e-016i The function series returns the Taylor series expansion of an expression in a single variable. The first input to this function is required to be the function (of a single variable) whose series is needed. Such arguments may be specified in in a number of ways including the following: • Use the construct $func or @func to sepecify an existing function func as the argument. 
• Use an inline function of the form @(x)expr where expr is an expression defined for the argument x as, for example, in @(x)sin(x)*x-1. 1 0 -0.1667 0 >>series(@(x)sin(x)/x,0,10) // Use 10 terms in the series to compute the expression. Columns 1 through 8 1 0 -0.1667 0 0.0083 0 -0.0002 0 Column 9 The first input to series is function object, defined here as an inline function. The second input is the value of the variable about which you want to expand the function. In this case the function series yields the limit of sin(x)/x as x goes to zero. The inputs to the function deriv are also similar. The function calculates the first derivative of a scalar function in a single variable. Let us now check the result of deriv . The function fzero yields the root of an expression in single variable: The last input is our guess of the root. Here is a check of the accuracy of the result: As with many other functions, you can use optional input arguments in fzero. The last of the three optional inputs allowed in fzero controls the criterion for convergence of the root. Let us use our own value for this input. You can see that the accuracy of the root has improved. You may also have noted that since we did not want to use the first two of the three allowed optional inputs to froot, we just used nulls in their places. Here is one more example of the use of froot. The function fminbnd minimizes a function of a single variable. The first input to fminbnd is the function to be minimized, and the two subsequent inputs specify the limits of The third input is the name of the function which we want to minimize: >>f // The estimated minimum value >>v // The point at which the minimum is estimated to lie. We can now check the value of the function to be minimized at points near the purported minimum: >>bessj(0,[.99 1 1.01]*v) -0.4025 -0.4028 -0.4025 You can solve a set of equations by the Newton's method by using the function newton, and use fminbfgs to minimize a function of several variables. Both newton and fminbfgs require you to define a function that also returns the Jacobian/gradient of the appropriate quantities. In the following example, the simple equations x[1]*x[2] = 10 x[1]+x[2] = 7 are solved by using newton. >>function [y,dy]=equations(x) > y=[x[1]*x[2]-10;x[1]+x[2]-7] > dy=[x[2] x[1];1 1] If the Jacobian of the set of nonlinear equations to be solved is not available, the function broyden may be used. >>//Solve the two nonlinear equations by the broyden's method. For more details, see the descriptions of these functions. The radix-2 Fast-Fourier transform algorithm is implemented in the function fft. You can calculate the inverse transform by using ifft. >>//Construct a time series with three dominant frequencies >>//added to a sequence of random numbers. >>//Plot the time series. >>title("Time Series") >>//Compute the transfrom. >>//Plot the amplitudes. The functions fft2d and ifft2d calculate the Fast-Fourier transform pairs for two-dimensional arrays. The function rungekutta is an implementation of the adaptive Fourth-order method for the solution of ordinary differential equations. In the example shown here, solution is obtained for the coupled equations governing the Bessel functions of the first kind of order zero and one at twenty points starting at 1, with the distances between the points being 0.1. 
>>r0=1 # The initial value of the independent coordinate >>y0=[bessj(0,r0);bessj(1,r0)] # The initial condition >>h=.1; # The distance between the sampling points >>(r,y)=rungekutta(@(r,y)[-y[2];y[1]-y[2]/r],r0,y0,h,20) # Solve >>yb=[bessj(0,r) bessj(1,r)] # Equivalent values from library functions >>norm(y-yb) # difference As exemplified above for the function fft, Mathnium provides a comprehensive library for graphics. The simplest function to use is plot, which draws an x-y plot of the inputs. As shown below, the plots can be easily annotated and saved as images. >>xlabel(" X ") >>ylabel(" Y ") >>title({"Bessel function","Order 0"}); >>// Save as a 700 X 550 image in a gif file. Functions are available for drawing the following types of graphs: • x-y line and scatter plots. • Two-D and three-d bar charts. • Two-D and three-d pie charts. • Contour plots. • Mesh plots and surface plots of functuons dependent on two variables. Obviously, the usefulness of Mathnium as a tool for numerical computing can be readily extended by using it as a programming language. The usual control structures such as while and for loops, as well as the if-block and switch block are all supported. Further, functions as well as classes can be defined to implement algorithms not already available in the library bundled with Mathnium. Here is the function for computing the Fibonocci sequence. It does not use recursive calls, and stores the results in a static array for quick retrieval of the result in case it has already been function fib(n) if(n<=2);return n;end // Static store for the elements of the sequence that have been already computed. static scratch // initialization // number of elements already available // extend the static store if needed for i=m+1:n // return the element corresponding to the input argument. return scratch[n] The following fractal tree was drawn using the call fractaltree(6). The complete function is shown below the image. >// Display the file containing the function fractaltree Draws a fractal tree of depth n. n should be less than 6 as otherwise it may take too long. function fractaltree(n) branch([0,0],[0 .5],1,pi/2,0,n) hold off function branch(x,y,len,theta,n,nmax) Draws a line between the points (x[1],y[1]) and (x[2],y[2]). and five branches of length len/2. One branch is an extension of the line itself and the other four branches are at -60,-30,30, and 60 degrees with respect to the main branch starting from the point (x[2],y[2]). len: twice the length of the branches theta: the angle between the line and the x-axis. n: recursion depth nmax: maximum recursion depth // Maximum depth of recursion // colors static red=java.awt.Color.red; static green=java.awt.Color.green; // clear the current graphics window if(n==0); cleargraph();end // choose red or green color // The main branch line([x[1] y[1] x[2] y[2]],'color',color); // Initialization axis([-1,1,0,2]); % Scaled limits of the drawing area. axis('square'); // Aspect ratio of the drawing area. notics(); # Suppress ticmarks and tic labels. hold on; // Do not erase // Now draw the five branches for i=1:5 >>// Draw a fractaltree of depth 6 using the function shown above. >>// Save the figure as an image file.
{"url":"http://www.mathnium.com/help/lang/intro.html","timestamp":"2014-04-16T13:08:39Z","content_type":null,"content_length":"29896","record_id":"<urn:uuid:2ae8f7ae-e6bd-40b8-934a-750796c4a9b1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
Mill Valley Science Tutor ...I currently work mainly as a math tutor teaching students in 3rd through 11th grade. I would say what drives me is that I have a colossal love of learning, an unending passion. Ever since first learning about Newton’s laws, I have been inspired and awe-struck by science and mathematics. 40 Subjects: including chemistry, ACT Science, philosophy, reading ...Another approach is to use clever acronyms or phrases as mnemonics tools. Of course, I do not impose one approach or the other on the student. I adapt to each student’s needs. 24 Subjects: including organic chemistry, calculus, precalculus, trigonometry ...Moreover, I completed classes in student development in a Master's program for counseling at the University of Florida. I look forward to working with you! I like teaching zoology (science of animals) in a mentoring-type environment and for small groups. 19 Subjects: including physiology, ecology, biology, European history ...I have been practicing guitar and piano since I was 4, and have studied under Grammy-nominated Jazz pianist -- Mark Little. I have performed at the Beach Chalet with vibraphonist Jerry Gross, and at the Savannah Jazz Club alongside the legendary "Pee Wee" Ellis. My philosophy of tutoring is tha... 48 Subjects: including biology, physical science, anatomy, biochemistry ...I've taught 7th and 8th graders in all subjects, 11th and 12th graders in English and I currently teach ESL to adults from different international backgrounds. I've earned two AmeriCorps Education awards for teaching high school in inner city schools in New York and I have over three years experience coaching high school basketball. I'm also a professionally represented novelist. 31 Subjects: including psychology, biology, philosophy, algebra 1
{"url":"http://www.purplemath.com/mill_valley_ca_science_tutors.php","timestamp":"2014-04-17T01:21:49Z","content_type":null,"content_length":"23980","record_id":"<urn:uuid:22965068-8fa7-4c51-bef1-da03c4510893>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Cumulative Distribution and Residence Time Distribution Curves Consider the following mixture distribution: you can change this distribution's properties by setting values for its four parameters. This Demonstration plots the distribution. For specific parameter values, one can obtain a bimodal distribution, which mimics the residence time distribution (RTD) of a batch reactor. This Demonstration also computes the cumulative distribution curve F(t), which as expected exhibits two plateaus in the case of a bimodal RTD. The cumulative distribution is given by the following definition: F(t) = ∫₀ᵗ E(t') dt', where E(t) is the residence time distribution. [1] H. S. Fogler, Elements of Chemical Reaction Engineering, 3rd ed., Upper Saddle River, NJ: Prentice Hall, 1999.
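As a rough illustration of the relationship above, the sketch below builds a bimodal E(t) as a weighted sum of two Gaussian densities and integrates it numerically with the trapezoidal rule to obtain F(t). The weights, means, and widths are made-up values chosen only to produce two peaks; they are not the parameters used in the Demonstration.

import java.util.function.DoubleUnaryOperator;

public class RtdCurves {
    // Gaussian density with mean mu and standard deviation sigma.
    static double gaussian(double t, double mu, double sigma) {
        double z = (t - mu) / sigma;
        return Math.exp(-0.5 * z * z) / (sigma * Math.sqrt(2.0 * Math.PI));
    }

    public static void main(String[] args) {
        // Assumed example parameters: 40% of the material leaves around t = 2, 60% around t = 8.
        DoubleUnaryOperator e = t -> 0.4 * gaussian(t, 2.0, 0.5) + 0.6 * gaussian(t, 8.0, 1.0);

        int steps = 1500;                 // integrate from 0 to 15 with dt = 0.01
        double dt = 0.01, f = 0.0, prev = e.applyAsDouble(0.0);
        for (int i = 1; i <= steps; i++) {
            double t = i * dt;
            double cur = e.applyAsDouble(t);
            f += 0.5 * (prev + cur) * dt;  // trapezoidal step of F(t) = integral of E(t)
            prev = cur;
            if (i % 100 == 0) {            // print once per time unit
                System.out.printf("t = %4.1f   E = %6.3f   F = %5.3f%n", t, cur, f);
            }
        }
        // F(t) rises steeply near each peak of E(t) and flattens between them,
        // producing the two plateaus mentioned above; F approaches 1 for large t.
    }
}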
{"url":"http://demonstrations.wolfram.com/CumulativeDistributionAndResidenceTimeDistributionCurves/","timestamp":"2014-04-17T00:51:02Z","content_type":null,"content_length":"45177","record_id":"<urn:uuid:5c281a40-c2f8-4e78-b146-7694ceefd004>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
How to determine a prime number in Java A very important question in mathematics and security is telling whether a number is prime or not. This is pretty useful when encrypting a password. In this tutorial, you will learn how to find whether a number is prime in simple cases. Trivial Cases We learned numbers are prime if the only divisors they have are 1 and itself. Trivially, we can check every integer from 2 to itself (exclusive) and test whether it divides evenly. For example, one might be tempted to run this algorithm:

//checks whether an int is prime or not.
boolean isPrime(int n) {
    for (int i = 2; i < n; i++) {
        if (n % i == 0) {
            return false;
        }
    }
    return true;
}

This doesn't seem bad at first, but we can make it faster - much faster. Consider that if 2 divides some integer n, then (n/2) divides n as well. This tells us we don't have to try out all integers from 2 to n. Now we can modify our algorithm:

//checks whether an int is prime or not.
boolean isPrime(int n) {
    for (int i = 2; 2 * i < n; i++) {
        if (n % i == 0) {
            return false;
        }
    }
    return true;
}

With some more efficient coding, we notice that you really only have to go up to the square root of n, because if you list out all of the factors of a number, the square root will always be in the middle (if it happens to not be an integer, we're still ok, we just might over-approximate, but our code will still work). Finally, we know 2 is the "oddest" prime - it happens to be the only even prime number. Because of this, we need only check 2 separately, then traverse odd numbers up to the square root of n. In the end, our code will resemble this:

//checks whether an int is prime or not.
boolean isPrime(int n) {
    //check if n is a multiple of 2
    if (n % 2 == 0) return false;
    //if not, then just check the odds
    for (int i = 3; i * i <= n; i += 2) {
        if (n % i == 0) {
            return false;
        }
    }
    return true;
}

As you can see, we've gone from checking every integer (up to n to find out that a number is prime) to just checking half of the integers up to the square root (the odd ones, really). This is a huge improvement, especially when numbers are large. Let's say you write a program where you're asked to check whether many numbers are prime, not just once. Even though our program above is highly optimized for that algorithm, there exists another way specifically suited for this situation: The Prime Sieve. Here's the basic idea: 1. Assume every integer greater than or equal to 2 is prime. 2. Start at the beginning of the list; if the number is prime, cross every multiple of that number off the list. They are not prime. 3. Go to the next number; if it is crossed out, skip it - it is not prime. If it is not crossed out, it must be prime, so cross out its multiples. 4. Repeat. Let's see what this means. Consider the list (crossed-out numbers are shown in square brackets): 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 ... 2 is prime... cross out its multiples. Our list now looks like: 2 3 [4] 5 [6] 7 [8] 9 [10] 11 [12] 13 [14] 15 [16] 17 [18] 19 [20] ... You can see why 2 is the only even prime. By now doing it with 3, we cross out 6 (already crossed out), 9, 12 (already crossed out), 15, etc. Eventually, your list will look like this: 2 3 [4] 5 [6] 7 [8] [9] [10] 11 [12] 13 [14] [15] [16] 17 [18] 19 [20] ... And our primes are the ones left over: (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, ...). In code, you might want to keep track of this list as an array.
Meaning you'll go through n numbers to set up this "sieve", but you'll make up for it when repeatedly calling the function, since it will return an instantaneous value whether a number is prime or not. Here's what it will look like. Of course, you can edit this yourself to suit your needs:

import java.util.Arrays;

//global array just to keep track of it in this example,
//but you can easily do this within another function.
//will contain true or false values for the first 10,000 integers
boolean[] primes = new boolean[10000];

//set up the primesieve
public void fillSieve() {
    Arrays.fill(primes, true);       // assume all integers are prime.
    primes[0] = primes[1] = false;   // we know 0 and 1 are not prime.
    for (int i = 2; i < primes.length; i++) {
        //if the number is prime,
        //then go through all its multiples and make their values false.
        if (primes[i]) {
            for (int j = 2; i * j < primes.length; j++) {
                primes[i * j] = false;
            }
        }
    }
}

public boolean isPrime(int n) {
    return primes[n]; //simple, huh?
}

what is "boolean isPrime"? no idea. boolean isPrime() is a method you write in Java or another programming language. It produces a boolean value (true or false) which you can use in other logic. hey there.. am trying to find out the prime numbers from 0 to maximum integer value.. but, neither the boolean array, nor the long array, nor the integer array allows me to store such a huge value.. is there any other option…?? Please rephrase your question, and give an example of what you wish to have as a result. The number of primes is not finite. i need to print all the prime numbers from 0 to maximum integer value in java.. the value of max integer is 2,147,483,647.. Now I understand! There is a fundamental time/memory trade-off principle in computer science and you want to put everything into memory. There is no reason for this, because you are only interested in the end result: a big list. My approach would be: write to disk, one at a time, then after a good night's sleep when the program is done (meaning: all file writes had been successful AND the program is finished), read the contents from disk and do as you like, print them on your t-shirt or whatever. Make sure you have 20GB free disk space for the file. So the key to success is IMHO to keep the program stupid and let it calculate every number from scratch. Do not use any sieve method. Since you would only have the program do the job once, the total time between start of programming and end result would be minimal. If you have more time to kill OR suspect power outages, let the program run on your office pc. If you have no office pc, you can write some code to back up after every 1000 writes to another file on another disk drive. When the program starts, check the most recent file in the write folder; if it is corrupt, check the backup drive, and start from that point on again. If you export the program to a jar and put it in your startup folder, when your pc reboots it will pick up from where it started. Something like that. Does that help? The process of deciding if a number is prime is "np-hard" in jargon, because determining the prime factors of a number is hard (if it were not, cryptography would not exist). Meaning there are no methods smarter than the stupid naive approach. Assuming you are good with computers, then you can cheat like this: a) you can find 1+MAXINT/2 separate computers (MAXINT/2 zombies and 1 master) b) each zombie can communicate at light speed with the 1 master computer. The master counts 3 to MAXINT (+2 every time) and activates a zombie computer (for every number being a possible prime).
Every time a number was found to be prime the master saves it to disk. The zombie has the following tasks: 1) become active (get a number) 2) add very fast all multiples of that number and store (except the number gotten), until MAXINT reached. Computers can do that. If MAXINT reached, report READY to the master. 3) report the lowest (initially the original number*2) number on their list. Can be done in parallel with task 2. 4) remove the lowest number. If no number available, report DONE and shutdown the zombie. Now every time the master takes a new number, its zombies hear it and if it is not a prime, one zombie reports not a prime, and all zombies remove any numbers on their list = 1 zombie (3,6,9,..); number=5-> 1 extra zombie (storing 5,10,15 …) etc. Observe the following: 1) the first zombies will do the most work, because the most often their numbers will have to be added in the first place and removed once the master prime moved on. 2) every action is “constant time”, and very little time. say the master is currently at number 293875397, which could be a prime or not, it will have 293 million zombie computers at its disposal and they all can return in an instant if that number is on their list (or not, in which case the master moves on). Of course, there is cost in the activation cost of that many zombies, but that is only one per prime. If you were willing to do this, you would have more fun applying the above approach to crack the codes of the NSA or CIA. //checks whether an int is prime or not. boolean isPrime(int n) { //check if n is a multiple of 2 if (n%2==0) return false; //if not, then just check the odds for(int i=3;i*i<=n;i+=2) { return false; return true; It seems wrong………………………………….. check in 13195 o/p is 3,7,35,65,91… but 35 and 65 r not prime……… running isPrime(65) returns true. //simple one… ur code is little complex for a given no to know is prime or not public static boolean isPrime(int number){ for(int i=2; i<number; i++){ if(number%i == 0){ return false; //number is divisible so its not prime return true; //number is prime now Its a nice solution using the divisibility test but I found something equally interesting using nothing but Math class rounding properties • http://[email protected] DUSNT WORK, 0/10 WOULDNT TRUST AGAIN… ALSO; KFC IS HIRING No it doesn’t. But the reason is that the html functionality sucks when copy pasting. for (int i=from;ifrom are prime for (int i=from;ifrom are prime probably because of the “>” in the comment. Which proves the site itself is not capable of handling code. for (int i=from;i” in the comment. Which proves the site itself is not capable of handling code. Is it not more interesting how much time it will take to factor a really big number into two or more primes (and how many pairs as a side). If the original is a prime, any algorithm will take the most time, of course. How this can be used for encrypting I never understood. Below you will find a tested working class. It uses caching to speed up recalculating + will resize the cache when needed. Its main method contains some sample tests. Enjoy! package kos.lib.math; import java.util.Arrays; import java.util.Vector; * CachedPrimes is used primarily to get the next prime bigger than a given number, * fast. * By its nature it is a singleton (load once, use forever). * Example usage: * CachedPrimes.main( number-to-test cache) : test if 1511 is a prime and also set up a cache. * new CachedPrimes(x).isPrime(x) : for a x decide if it is prime. 
* CachedPrimes cp = new CachedPrimes(x); * loop -&amp;gt; cp.isPrime(x) : for a number of x decide very fast if it is prime. * @author Kos public class CachedPrimes { * DEFAULT_CACHE_OF_PRIMES is the minimum amount of cached primes. * Cache does not shrink below that. * @version 1.1 2012-10-21 adding resize functionality * @version 1.0 2012-08 initial * @author Kos private static final int DEFAULT_CACHE_OF_PRIMES = 100; * safeguard against blowing up the jvm. * Even if client creates 1000 CachedPrimes instances, they all re-use the same cache, * so that part is covered. public static final int MAX_CACHE_OF_PRIMES = 1000*1000*1000; //a billion should suffice. * global array just to keep track of it in this example, * but you can easily do this within another function. * will contain true or false values for the first 10,000 integers. * Note kos: this will still take up unnecessary space, that is, 10000x8 bits. * What you really want is something of 10000/8 bytes. static boolean[] primes=new boolean[DEFAULT_CACHE_OF_PRIMES]; static boolean initialized = false; static int size = DEFAULT_CACHE_OF_PRIMES; //allow for growing the primes array. * sets up the primesieve. * Keep private, so that the call must be made from the constructor, like * CachedPrimes().next() or CachedPrimes.next(); private static void initialize() { Arrays.fill(primes,true); // assume all integers are prime. primes[0]=primes[1]=false; // we know 0 and 1 are not prime. initialized = true; * This method can be used to hack directly into the internal store of cached primes. * Sieving the part [len .._newsize-1] without sieving [1..len]: * fillSieve starts with a 'known set'. * Example usage: * if from &amp;gt; primes.length, will fill, otherwise exit. * @param from * @version 2012-10-21 1.0 initial * @author Kos public static void fillSieve(int from) { int base = 2; //pre: primes[from..] are all false. Some of primes[2..from] are true. for (int i=from;ifrom are prime * pre: Some of primes[2..from] are true. primes[from..] are all true. * if they are a multiple, make em false. for (int i=base;i&amp;lt;primes.length;i++) { * if i is not prime, then primes[i] was already set to false, * because i is a multiple of a prime p and something, and p was already passed, * so only if primes[i]==true we need all multiples up to primes.length. if(primes[i]) { for (int j=base;from&amp;lt;i*j &amp;amp;&amp;amp; i*j&amp;lt;primes.length ; j++) { public CachedPrimes(){ public CachedPrimes(int cache){ * This method is the generic constructor. 
* @version 2012-09 1.0 initial * @author Kos private void _setup(int cache) { primes=new boolean[cache]; * METHODS * This method grows or shrinks the cache * @param newsize * @version 1.0 2012-10-21 initial * @author Kos private static void resize(int newsize){ //no initialize needed because it is private System.out.println(&quot;newsize &amp;gt; &quot;+MAX_CACHE_OF_PRIMES+&quot; -&amp;gt; newsize &quot;+newsize+&quot; rejected -&amp;gt; CachedPrimes cache size kept at &quot; + DEFAULT_CACHE_OF_PRIMES); newsize = MAX_CACHE_OF_PRIMES; if(newsize CachedPrimes cache size kept above &quot; + DEFAULT_CACHE_OF_PRIMES); int len = primes.length; int half = len&amp;gt;&amp;gt;1; int db = len&amp;lt; newsize &amp;amp;&amp;amp; newsize &amp;gt; half ){ System.out.println(&quot;newsize &quot;+newsize+&quot; rejected (size=&quot;+len+&quot;)-&amp;gt; CachedPrimes shrinks only below &quot; + half); return; //debug because it is a side effect of the caller //at least double it, or shrink it to below half: int _newsize = len&amp;lt; newsize &amp;amp;&amp;amp; newsize &amp;lt; db ? db : newsize; boolean[] _primes= Arrays.copyOf(primes,_newsize); //calculate the rest if needed primes = _primes; * until the end.. that is, until [_newsize-1] * This method returns (if already computed, else computes) if n is prime. * Example usage: * isPrime(11) -&amp;gt; true * isPrime(12) -&amp;gt; false * isPrime( 0 ) -&amp;gt; false + warning. * isPrime(-11) -&amp;gt; false + warning. * isPrime(1e1111) -&amp;gt; illegal argument (too big). * @param n * @return * @version 2012- 1.0 initial * @author Kos public static boolean isPrime(int n) { throw new IllegalArgumentException(n+&quot;==n&amp;gt;MAX_CACHE_OF_PRIMES=&quot; + MAX_CACHE_OF_PRIMES+&quot; -&amp;gt; isPrime(n) n too big. Exiting...&quot;); if(n==1) return false; return primes[n]; //simple, huh? * This method returns the same number if it is a prime, or else the nearest prime &amp;gt;n. * Example usage: * CachedPrimes.next(15) -&amp;gt; 17 *CachedPrimes.next(150000) -&amp;gt; resizes the cache, possibly blowing up the jvm. * @param n * @return * @version 1.1 2012-10-24 forgot the _n++ -&amp;gt; loop hang. * @version 1.0 2012-10 initial * @author Kos public static int next(int n){ if (!initialized) initialize(); int _n=n; if( isPrime(_n)) return _n; System.out.println(&amp;quot;resizing to 2*&amp;quot;+primes.length +&amp;quot; ...&amp;quot;); resize( primes.length &amp;lt;n with +10, else to n+10. Doubling is too expensive. * This method tests given numbers by setting up a cache. * If no cache was specified, it is derived from the numbers to test. * Example usage: * main() : test if 1511 is a prime * main( 1 number) : decide if it is prime. 
* main( 101;143,155 ) -&amp;gt; test these numbers * main( 101 10000 ) -&amp;gt; test 101 using cache of 10000 * main( 101,143,155 10000) -&amp;gt; test these numbers) using cache of 10000 * @param args : 0 : number(s) to test for , by default 1511 * 1 : the cache, by default 10000 public static void main(String[] args) { * if there were no arguments: int cache = 100; //default amount of digits each number Vector test = new Vector(); * process the arguments and set up if (args.length == 0){ test.add( 1511 );// default number of primes printed System.out.println(&quot;Running main without arguments.&quot;); System.out.println(&quot;arg[0] -&amp;gt; number to test set to &quot;+ test ); System.out.println(&quot;arg[1] -&amp;gt; cache set to &quot;+ cache ); * process the number(s) to test String[] parts = args[0].split(&quot;,;&quot;); test = new Vector( parts.length); for(String part : parts){ //program will crash on non-numbers. Integer t = Integer.parseInt( part.trim()); test.add( t ); } //no warning, just ignore illegal numbers * compute the cache or read it from second arg: int maxt=0; //first &quot;normal&quot; prime is 2&amp;gt;0. for( int t : test){ if (t&amp;gt;maxt) maxt=t; //for an array of 100, 99 is the max number to test: cache= Integer.parseInt( args[1] ); if (cache=max(test) * output System.out.println(&quot;Cache set to &quot;+cache+&quot;...&quot;); int t = test.get(0); long start = System.currentTimeMillis(); boolean isPrime = new CachedPrimes(cache).isPrime( t ); long delta = System.currentTimeMillis()-start; System.out.println(&quot;Testing &quot;+t+&quot; -&amp;gt; is &quot; + (isPrime ? &quot;not &quot; :&quot;&quot;) + &quot;prime&quot;); System.out.println(&quot;Setting up cache of &quot;+cache+&quot; numbers took &quot;+delta+&quot; ms&quot;); } else{ long start = System.currentTimeMillis(); CachedPrimes cp = new CachedPrimes(cache); long delta = System.currentTimeMillis()-start; System.out.println(&quot;Setting up cache of &quot;+cache+&quot; numbers took &quot;+delta+&quot; ms&quot;); System.out.println(&quot;Testing &quot;+test.size()+&quot; numbers for primeness...&quot;); for(int t: test ){ System.out.println(t+&quot; -&amp;gt; &quot; + (cp.isPrime(t) ? &quot;not &quot; :&quot;&quot;) + &quot;prime&quot;); System.out.println( &quot; 9 &quot; + CachedPrimes.isPrime(9) ); System.out.println( &quot; 11 &quot; + CachedPrimes.isPrime(11) ); System.out.println( &quot;1111 &quot; + CachedPrimes.isPrime(1111) ); System.out.println( &quot;1113 &quot; + CachedPrimes.isPrime(1113) ); System.out.println( &quot;1117 &quot; + CachedPrimes.isPrime(1117) ); }//~class CachedPrimes Guys what are you trying to do here? Trying to learn Java, or trying to program algorithms? Incredible how sloppy you are, and you are trying to re-invent the wheel, and badly at that: Vaengai WTF is for(i=3;i=2) Datruchosen WTF is a PrimeNumberException? Simply use a singleton for creating a cache once. Great Post! this helped me a lot with an app im working on. 
• http://shivpratap.com * Test for prime numbers * @param n * @return public static boolean isPrime(int n) { if(n < 4) return true; //test for all multiples of 2 if ((n &amp; 1) == 0) return false; //test for all multiples of 3 if ((n%3) == 0) return false; //other wise test all odd numbers, but we are checking only for probable prime numbers of the // form 6K+1/6K-1 k>1; int sqrt = (int) Math.sqrt(n); for(int i=6; i<=sqrt; i+=6) { return false; return false; return true; private void generatePrimeLessThan(int x){ int number[] = new int[x]; int i=0; int j=0; boolean isPrime = false; System.out.print(i+&amp;quot; &amp;quot;); private void generatePrimeNumberSeq(int x){ int number[]=new int[x]; int i=3,j=0; int count=1; boolean isPrime=true; System.out.print(number[i]+&amp;quot; &amp;quot;); The code I posted last week had a minor error; here is the correct code (I still haven’t been able to implement a more efficient method of determining primality, however). public static boolean isPrime( int primer ) throws PrimeNumberException int i, j, k = 0; if( primer 2 &amp;&amp; primer%2 == 0 ) return false; for( i = 3; i &lt; primer; i += 2 ) for( j = i; j &lt; primer; j += 2 ) k = i * j; /*I incorrectly wrote this outside the loops */ if( k == primer ) return false; return true; public static boolean isPrime ( int num ) boolean prime = true; int limit = (int) Math.sqrt ( num ); for ( int i = 2; i &lt;= limit; i++ ) if ( num % i == 0 ) prime = false; return prime; Fails on -1. Fails on 1000*1000*1000. Performance also horrible. by the way, where you see “1*/” in my previous post in the first branch statement, should really be less than 1 symbolically … don’t know what’s up with that □ http://www.mkyong.com Hi datruchosen, Current comment does not support posting source code correctly, i’m still working on it. At the moment i modified your code, please review. And really appreciated sharing your isPrime() * This java method, isPrime( z ), takes an integer input, z, and * * determines its primality. If z is prime, isPrime( z ) returns * * true; otherwise it returns false. In the case that an integer value * * z &lt; 1 is entered, isPrime( z ) will throw an exception, as an integer * * z is prime if, and ONLY if, for z = x * y, x = 1, or y = 1. * public static boolean isPrime( int primer ) throws PrimeNumberException int i = 0; int j = 0; int k = 0; if( primer < 1) else if ( primer == 2 ) return true; /*2 is the only even prime*/ else if ( primer % 2 == 0 &amp;&amp; primer &gt; 2 ) return false; /*every other prime is odd*/ k = i * j; for( i = 3; i <= primer; i += 2 ) for( j = i; j < primer; j += 2 ) if( k == primer ) return false; /*not all odds are primes*/ return true; /*when value of primer IS prime*/ // Although this method is effective, as it works for any valid integer input to the method isPrime(), it is inefficient, due to the for loops nested within the if-else statement. A better solution would be to make isPrime() a recursive method.// only solution 1 is correct ~_~ • http://twitter.com/Mexflubber Actually it can be optimized by stoping at your current number squared root, as that’s the highest divisible number that number will have. Also you could create your own class and make it Enumerable, so you can use a yield return without having to calculate all of them before you need them. Isn’t it better to use addition instead of multiplication in the inner for statement? for (int j=2*i;j<primes.length;j+=i) { I'd like to know how to list prime numbers greater than Integer.MAX_VALUE. 
Or use BigInteger.isPrime() BigInteger will fail on heap space if cached and on performance if computed. Also the primes > Integer.MAX_VALUE is infinite, so please specify what you really want. True, the second solution is missing an “=” and I guess it was typeformatted wrong with the html tags. And yeah, the second solution forgot whether n==2. The prime sieve is pretty cool, though. Good job overall, just a couple of typos. Second solutions fails by including 4 as a prime. Third solution fails by excluding 2 as a prime. shut up dumb ugly c.un.t Second solution is a total fail ! Damn how can you publish such things without having them tested . This should work. private static boolean isPrime(int num) { int upLimit = (int) Math.sqrt(num); for (int i = 2; i <= upLimit; i++) { if (num % i == 0) return false; return true;
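Pulling the thread above together: the commenters' failure cases (negative inputs, 0, 1, 2, and large n) can all be handled by adding explicit guards in front of the square-root loop. The version below is one possible cleaned-up variant, not code from the original article, and the main method is just a made-up smoke test.

public class PrimeCheck {
    // Trial division up to sqrt(n), with guards for n < 2, n == 2, and even n.
    public static boolean isPrime(long n) {
        if (n < 2) return false;          // covers negatives, 0 and 1
        if (n == 2) return true;          // the only even prime
        if (n % 2 == 0) return false;     // all other even numbers are composite
        for (long i = 3; i * i <= n; i += 2) {
            if (n % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        long[] tests = {-1, 0, 1, 2, 3, 35, 65, 91, 97, 1_000_000_007L};
        for (long t : tests) {
            System.out.println(t + " -> " + isPrime(t));
        }
    }
}

With these guards, 35, 65 and 91 report false and 2 reports true, which addresses the specific complaints raised in the comments.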
{"url":"http://www.mkyong.com/java/how-to-determine-a-prime-number-in-java/","timestamp":"2014-04-20T03:10:59Z","content_type":null,"content_length":"144652","record_id":"<urn:uuid:fd3bef08-0935-49b6-b70f-f37b7299b29d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Inverse Fourier transform June 11th 2011, 01:35 PM #1 May 2010 Inverse Fourier transform Hi there. I have some trouble with this. I have to find the inverse Fourier transform for: $\frac{e^{i 6\omega}}{\omega}$ So I'm using a table. I have that: $F(sg(t))=\frac{2}{i \omega}$ and $F(\delta(t-t_0))=e^{-i \omega t_0}$ Then $F^{-1}(e^{i6\omega})=\delta(t+6)$ and $F^{-1}\left (\frac{1}{\omega}\right )=F^{-1}\left ( \frac{i}{2}\frac{2}{i \omega}\right )=\frac{i}{2}sg(t)$ Finally, using the properties for convolutions: $F^{-1}\left ( \frac{e^{i 6\omega}}{\omega}\right )=F^{-1}\left ( e^{i 6\omega}\right ) * F^{-1}\left ( \frac{1}{\omega}\right )=2\pi\left[ \delta (t+6)*\frac{i}{2}sg(t)\right]$ where (*) represents the convolution. Well, everything is fine till there, but when I tried to corroborate my result with Mathematica I get: I don't know what's wrong. Last edited by Ulysses; June 11th 2011 at 02:36 PM. Re: Inverse Fourier transform Your output from Mathematica didn't display. Can you re-post it? In any event, I think there's an easier way for this: the Fourier transform of $f(t-t_0)$ is $F(\omega)e^{-i\omega t_0}$. Hence the inverse of $\frac{e^{6i\omega}}{\omega}$ is $\frac{i}{2}\text{sgn}(t+6)$. June 14th 2011, 05:17 PM #2 Senior Member May 2010 Los Angeles, California
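To spell out the reply's shortcut (assuming the convention $F(\omega)=\int f(t)e^{-i\omega t}\,dt$, which is what the table entries quoted above use):
\[ F\{\mathrm{sgn}(t)\} = \frac{2}{i\omega} \;\;\Rightarrow\;\; F\Big\{\tfrac{i}{2}\,\mathrm{sgn}(t)\Big\} = \frac{1}{\omega}, \]
\[ F\{f(t-t_0)\} = F(\omega)e^{-i\omega t_0} \;\;\Rightarrow\;\; F\Big\{\tfrac{i}{2}\,\mathrm{sgn}(t+6)\Big\} = \frac{e^{i6\omega}}{\omega}, \]
so $F^{-1}\big(\frac{e^{i6\omega}}{\omega}\big) = \frac{i}{2}\,\mathrm{sgn}(t+6)$. Note also that $\delta(t+6) * \tfrac{i}{2}\mathrm{sgn}(t) = \tfrac{i}{2}\mathrm{sgn}(t+6)$, so the convolution route in the first post gives the same answer once the stray factor of $2\pi$ is dropped; with the convention above, $F^{-1}(G_1 G_2) = F^{-1}(G_1) * F^{-1}(G_2)$ carries no $2\pi$ prefactor (the $2\pi$ belongs to the product-in-time / convolution-in-frequency direction).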
{"url":"http://mathhelpforum.com/differential-geometry/182852-inverse-fourier-transform.html","timestamp":"2014-04-18T04:52:51Z","content_type":null,"content_length":"34272","record_id":"<urn:uuid:3f39e05f-46ad-4222-9274-36326c022742>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Coordinate Graphing Pictures Printable Introduction for Coordinate Graphing Pictures Printable: The coordinate graph is also called the Cartesian coordinate plane. The graph consists of a pair of perpendicular lines called coordinate axes. The vertical axis is the y axis and the horizontal axis is the x axis. The point of intersection of these two axes is called the origin; it is the zero point of both axes. Furthermore, points to the right of the origin on the x axis and above the origin on the y axis represent positive real numbers, while points to the left of the origin on the x axis or below the origin on the y axis represent negative real numbers. The four regions cut off by the coordinate axes are, in counterclockwise direction from the top right, called the first, second, third and fourth quadrants, respectively. The first quadrant contains all points with two positive coordinates. Coordinate Graphing of X and Y Axis – Coordinate Graphing Pictures Printable: In the graph shown, each point is identified by an ordered pair (x, y) of numbers; the x coordinate is the first number and the y coordinate is the second. To plot a point on the graph when given its coordinates, draw perpendicular lines from the corresponding positions on the number lines; the point lies where the two lines intersect. To find the coordinates of a given point on the graph, draw perpendicular lines from the point to the axes and read off the two numbers, separated by a comma. In this case, point A has coordinates (4, 2) and point B has coordinates (-3, -5). For any two points A and B with coordinates (X_A, Y_A) and (X_B, Y_B), respectively, the distance between A and B is AB = √((X_A - X_B)^2 + (Y_A - Y_B)^2). This is commonly known as the distance formula, and it follows from the Pythagorean theorem. Plotting Points of Coordinate Graphing – Coordinate Graphing Pictures Printable: Plot the point (-3, 2) on the coordinate graph below. Step 1 is to find the point -3 along the x axis and draw a dashed vertical line through that point (see the diagram below). Step 2 is to find the point 2 along the y axis and draw a dashed horizontal line through that point. Step 3 is to plot the point where the two dashed lines intersect with a solid circle. This is the point (-3, 2). See the figures below. This is how a coordinate graphing printable will display the point.
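As a quick worked example of the distance formula, using the two points named above: AB = √((4 - (-3))^2 + (2 - (-5))^2) = √(7^2 + 7^2) = √98 = 7√2 ≈ 9.9. So A(4, 2) and B(-3, -5) are about 9.9 units apart.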
{"url":"http://makematheasy.edublogs.org/2013/01/30/coordinate-graphing-pictures-printable/","timestamp":"2014-04-18T08:32:37Z","content_type":null,"content_length":"22512","record_id":"<urn:uuid:6bb007b5-23db-4c52-afcd-c6e44913d944>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: November 1996 [00004] [Date Index] [Thread Index] [Author Index] Re: Speed of dot product in Mathematica • To: mathgroup at smc.vnet.net • Subject: [mg5247] Re: Speed of dot product in Mathematica • From: Robert Knapp <rknapp> • Date: Fri, 15 Nov 1996 03:33:58 -0500 • Organization: Wolfram Research • Sender: owner-wri-mathgroup at wolfram.com Carlos A. Felippa wrote: > The speed of dot products in Mathematica 2.2 depends > significantly on implementation. For example, timing > n=10000; a=Table[1.,{n}]; b=a; > Print[Timing[a.b]]; > Print[Timing[s=Sum[a[[i]]*b[[i]],{i,1,n}]]] > Print[Timing[s=0;Do[s+=a[[i]]*b[[i]],{i,1,n}]]]; > Print[Timing[s=0;For[i=0,i<=n,i++,s+=a[[i]]*b[[i]] ]]]; > on a Mac 8500 gives > {0.0666667 Second, 10000.} > {1.48333 Second, 10000.} > {2.38333 Second, Null} > {3.95 Second, Null} > Is there a way to speed up the Sum form, for example using Compile, so that > it achieves a performance similar to that of the built-in dot operator? In Mathematica 2.2, there is no good way to use Compile on the Sum version. However, in Mathematica 3.0, it is very easy to compile and the results are quite good. Timings below are on a Pentium Pro 200. n=10000; a=Table[1.,{n}]; b=a; {0.04 Second,10000.} {0.34 Second,10000.} cf1 = Compile[{{x, _Real, 1},{y, _Real, 1}},x.y]; cf2 = Compile[{{x, _Real, 1},{y, _Real, 1}}, {0.01 Second,10000.} {0.06 Second,10000.} So you can improve the Sum version so that the built in Dot is only 1.5 times faster. However at the same time, the Compiled Dot is even faster > This is important in matrix routines where the dot product involves only > portions of rows or columns, or where the stride is not unity. The improvement I showed above should work for these also. > BTW, several of the LinearAlgebra Package functions (e.g. lufactor) > use the For-loop implementation. As shown above, that has the worst > performance, being 60 times slower than the built-in operator. There is a new built in command called LUDecomposition in version 3.0, which is much more efficient thatn this. Rob Knapp
{"url":"http://forums.wolfram.com/mathgroup/archive/1996/Nov/msg00004.html","timestamp":"2014-04-17T04:28:34Z","content_type":null,"content_length":"36326","record_id":"<urn:uuid:62ed9536-761d-424e-b70c-020040e832f5>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Atmos. Chem. Phys., 6, 847–861, 2006, doi:10.5194/acp-6-847-2006 (Atmospheric Chemistry and Physics, ISSN 1680-7324, Copernicus GmbH, Göttingen, Germany). 2-D reconstruction of atmospheric concentration peaks from horizontal long path DOAS tomographic measurements: parametrisation and geometry within a discrete approach. A. Hartl, B. C. Song, and I. Pundt, Institute of Environmental Physics, University of Heidelberg, Germany. Published 17 March 2006. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. This article is available from http://www.atmos-chem-phys.net/6/847/2006/acp-6-847-2006.html The full text article is available as a PDF file from http://www.atmos-chem-phys.net/6/847/2006/acp-6-847-2006.pdf In this study, we theoretically investigate the reconstruction of 2-D cross sections through Gaussian concentration distributions, e.g. emission plumes, from long path DOAS measurements along a limited number of light paths. This is done systematically with respect to the extension of the up to four peaks and for six different measurement setups with 2-4 telescopes and 36 light paths each. We distinguish between cases with and without additional background concentrations. Our approach parametrises the unknown distribution by local piecewise constant or linear functions on a regular grid and solves the resulting discrete, linear system by a least squares minimum norm principle. We show that the linear parametrisation not only allows better representation of the distributions in terms of discretisation errors, but also better inversion of the system. We calculate area integrals of the concentration field (i.e. total emissions rates for non-vanishing perpendicular wind speed components) and show that reconstruction errors and reconstructed area integrals within the peaks for narrow distributions crucially depend on the resolution of the reconstruction grid. A recently suggested grid translation method for the piecewise constant basis functions, combining reconstructions from several shifted grids, is modified for the linear basis functions and proven to reduce overall reconstruction errors, but not the uncertainty of concentration integrals. We suggest a procedure to subtract additional background concentration fields before inversion. We find large differences in reconstruction quality between the geometries and conclude that, in general, for a constant number of light paths increasing the number of telescopes leads to better reconstruction results. It appears that geometries that give better results for negligible measurement errors and parts of the geometry that are better resolved are also less sensitive to increasing measurement errors.
{"url":"http://www.atmos-chem-phys.net/6/847/2006/acp-6-847-2006.xml","timestamp":"2014-04-17T18:27:09Z","content_type":null,"content_length":"5244","record_id":"<urn:uuid:c68480c7-b9d5-4589-a605-3eac1fbf4af9>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
The Gravity of Photons dmitry - from my understanding, given any particular atom emitting a photon, we know the exact energy required for an electron to drop to the next lowest level when it emits a photon, and thus we do know its frequency. Using your laser analogy, "A photon of red-orange light from a HeNe laser has a wavelength of 632.8 nm. Using the equation gives a frequency of 4.738 × 10^14 Hz or about 474 trillion cycles per second." So, we do know exactly the frequency of that photon, and thus cannot know anything about its position, as per HUP. However, a wiki article states this: "Being massless, they cannot be localized without being destroyed; technically, photons cannot have a position eigenstate, and, thus, the normal Heisenberg uncertainty principle ΔxΔp > h / 2 does not pertain to photons." Here is a quote from an article on the Copenhagen interpretation: "It's more than simply saying we don't know which slit the photon passes through. The photon doesn't pass through just one slit at all. In other words, as the photon passes through the slits, not only don't we know its location, it doesn't even have a location. It doesn't have a location until we observe it on the film. This paradox is the heart of what has come to be called the Copenhagen interpretation of quantum physics." source - Also, "the photon is a bit of a problem, because it has turned out to be impossible to identify a position operator for it... These findings tell us two things: first, unlike electrons, photons really can't be localized to an arbitrary precision, and second, a position operator is meaningless because there really is no position to operate on." source - APS -
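For reference, the frequency quoted for the HeNe line follows directly from f = c/λ: f = (2.998 × 10^8 m/s) / (632.8 × 10^-9 m) ≈ 4.738 × 10^14 Hz, i.e. about 474 trillion cycles per second, which is where the number in the quoted passage comes from.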
{"url":"http://www.physicsforums.com/showthread.php?t=381246&page=2","timestamp":"2014-04-16T10:28:45Z","content_type":null,"content_length":"67878","record_id":"<urn:uuid:4ffc2c61-da43-4c51-bdd7-9b98370d7d62>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
§2.2: Influences and derivatives Given a voting rule $f : \{-1,1\}^n \to \{-1,1\}$ it's natural to try to measure the "influence" or "power" of the $i$th voter. One can define this to be the "probability that the $i$th vote affects the outcome". Definition 12 We say that coordinate $i \in [n]$ is pivotal for $f : \{-1,1\}^n \to \{-1,1\}$ on input $x$ if $f(x) \neq f(x^{\oplus i})$. Here we have used the notation $x^{\oplus i}$ for the string $(x_1, \dots, x_{i-1}, -x_i, x_{i+1}, \dots, x_n)$. Definition 13 The influence of coordinate $i$ on $f : \{-1,1\}^n \to \{-1,1\}$ is defined to be the probability that $i$ is pivotal for a random input: \[ \mathbf{Inf}_i[f] = \mathop{\bf Pr}_{{\boldsymbol{x}} \sim \{-1,1\}^n}[f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})]. \] Influences can be equivalently defined in terms of "geometry" of the Hamming cube: Fact 14 For $f : \{-1,1\}^n \to \{-1,1\}$, the influence $\mathbf{Inf}_i[f]$ equals the fraction of dimension-$i$ edges in the Hamming cube which are boundary edges. Here $(x,y)$ is a dimension-$i$ edge if $x$ and $y$ agree in all coordinates but the $i$th; it is a boundary edge if $f(x) \neq f(y)$. Examples 15 For the $i$th dictator function $\chi_i$ we have that coordinate $i$ is pivotal for every input $x$; hence $\mathbf{Inf}_i[\chi_i] = 1$. On the other hand, if $j \neq i$ then coordinate $j$ is never pivotal; hence $\mathbf{Inf}_j[\chi_i] = 0$ for $j \neq i$. Note that the same two statements are true about the negated-dictator functions. For the constant functions $\pm 1$, all influences are $0$. For the $\mathrm{OR}_n$ function, coordinate $1$ is pivotal for exactly two inputs, $(-1, 1, 1, \dots, 1)$ and $(1, 1, 1, \dots, 1)$; hence $\mathbf{Inf}_1[\mathrm{OR}_n] = 2^{1-n}$. Similarly, $\mathbf{Inf}_i[\mathrm{OR}_n] = \mathbf{Inf}_i[\mathrm{AND}_n] = 2^{1-n}$ for all $i \in [n]$. The $\mathrm{Maj}_3$ is depicted in Figure 2; the points where it's $+1$ are coloured grey and the points where it's $-1$ are coloured white. Its boundary edges are highlighted in black; there are $2$ of them in each of the $3$ dimensions. Since there are $4$ total edges in each dimension, we conclude $\mathbf{Inf}_i[\mathrm{Maj}_3] = 2/4 = 1/2$ for all $i \in [3]$. For majority in higher dimensions, $\mathbf{Inf}_i[\mathrm{Maj}_n]$ equals the probability that among $n-1$ random bits, exactly half of them are $1$. This is roughly $\frac{\sqrt{2/\pi}}{\sqrt{n}}$ for large $n$ — we will see this in the exercises and in a future chapter. Influences can also be defined more "analytically" by introducing the derivative operators. Definition 16 The $i$th (discrete) derivative operator $\mathrm{D}_i$ maps the function $f : \{-1,1\}^n \to {\mathbb R}$ to the function $\mathrm{D}_i f : \{-1,1\}^n \to {\mathbb R}$ defined by \[ \mathrm{D}_i f (x) = \frac{f(x^{(i\mapsto 1)}) - f(x^{(i \mapsto -1)})}{2}. \] Here we have used the notation $x^{(i \mapsto b)} = (x_1, \dots, x_{i-1}, b, x_{i+1}, \dots, x_n)$. Notice that $\mathrm{D}_if(x)$ does not actually depend on $x_i$. The operator $\mathrm{D}_i$ is a linear operator: i.e., $\mathrm{D}_i(f+g) = \mathrm{D}_i f + \mathrm{D}_i g$.
If $f : \{-1,1\}^n \to \{-1,1\}$ is boolean-valued then $$\mathrm{D}_if(x) = \begin{cases} 0 & \text{if coordinate i is not pivotal for x,} \\ \pm 1 & \text{if coordinate i is pivotal for x.} \end{cases} \label{eqn:derivative-of-boolean}$$ Thus $\mathrm{D}_if(x)^2$ is the $0$-$1$ indicator for whether $i$ is pivotal for $x$ and we conclude that $\mathbf{Inf}_i[f] = \mathop{\bf E}[\mathrm{D}_if({\boldsymbol{x}})^2]$. We take this formula as a definition for the influences of real-valued boolean functions. Definition 17 We generalize Definition 13 to functions $f : \{-1,1\}^n \to {\mathbb R}$ by defining the influence of coordinate $i$ on $f$ to be \[ \mathbf{Inf}_i[f] = \mathop{\bf E}_{{\boldsymbol{x}} \sim \{-1,1\}^n}[\mathrm{D}_if({\boldsymbol{x}})^2] = \|\mathrm{D}_i f\|_2^2. \] The discrete derivative operators are quite analogous to the usual partial derivatives. For example, $f : \{-1,1\}^n \to {\mathbb R}$ is monotone if and only if $\mathrm{D}_i f(x) \geq 0$ for all $i$ and $x$. Further, $\mathrm{D}_i$ acts like formal differentiation on Fourier expansions: Proposition 18 Let $f : \{-1,1\}^n \to {\mathbb R}$ have the multilinear expansion $f(x) = \sum_{S \subseteq [n]} \widehat{f}(S)\,x^S$. Then $$\label{eqn:deriv-formula} \mathrm{D}_i f(x) = \sum_{\substack{S \subseteq [n] \\ S \ni i}} \widehat{f}(S)\,x^{S \setminus \{i\}}.$$ Since $\mathrm{D}_i$ is a linear operator, the proof follows immediately from the observation that \[ \mathrm{D}_i x^S = \begin{cases} x^{S \setminus \{i\}} & \text{if $S \ni i$,} \\ 0 & \text{if $S \not \ni i$.} \end{cases} \] By applying Parseval's Theorem to the Fourier expansion \eqref{eqn:deriv-formula}, we obtain a Fourier formula for influences: Theorem 19 For $f : \{-1,1\}^n \to {\mathbb R}$ and $i \in [n]$, \[ \mathbf{Inf}_i[f] = \sum_{S \ni i} \widehat{f}(S)^2. \] In other words, the influence of coordinate $i$ on $f$ equals the sum of $f$'s Fourier weights on sets containing $i$. This is another good example of being able to "read off" an interesting combinatorial property of a boolean function from its Fourier expansion. In the special case that $f : \{-1,1\}^n \to \{-1,1\}$ is monotone there is a much simpler way to read off its influences: they are the degree-$1$ Fourier coefficients. In what follows, we write $\widehat{f}(i)$ in place of $\widehat{f}(\{i\})$. Proposition 20 If $f : \{-1,1\}^n \to \{-1,1\}$ is monotone then $\mathbf{Inf}_i[f] = \widehat{f}(i)$. Proof: By monotonicity, the $\pm 1$ in \eqref{eqn:derivative-of-boolean} is always $1$; i.e., $\mathrm{D}_if(x)$ is the $0$-$1$ indicator that $i$ is pivotal for $x$. Hence $\mathbf{Inf}_i[f] = \mathop{\bf E}[\mathrm{D}_i f] = \widehat{\mathrm{D}_if}(\emptyset) = \widehat{f}(i)$, where the third equality used Proposition 18. $\Box$ This formula allows us a neat proof that for any $2$-candidate voting rule that is monotone and transitive-symmetric, all of the voters have small influence: Proposition 21 Let $f : \{-1,1\}^n \to \{-1,1\}$ be transitive-symmetric and monotone. Then $\mathbf{Inf}_i[f] \leq 1/\sqrt{n}$ for all $i \in [n]$. Proof: Transitive-symmetry of $f$ implies that $\widehat{f}(i) = \widehat{f}(i')$ for all $i, i' \in [n]$ (using Exercise 1.29(a)); thus by monotonicity, $\mathbf{Inf}_i[f] = \widehat{f}(i) = \widehat{f}(1)$ for all $i \in [n]$. But by Parseval, $1 = \sum_S \widehat{f}(S)^2 \geq \sum_{i=1}^n \widehat{f}(i)^2 = n \widehat{f}(1)^2$; hence $\widehat{f}(1) \leq 1/\sqrt{n}$. $\Box$ This bound is slightly improved in the exercises.
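As a quick check of Theorem 19 against Examples 15 (a worked example not in the text): the $3$-bit majority function has the multilinear expansion, easily verified on its $8$ inputs, \[ \mathrm{Maj}_3(x) = \tfrac{1}{2}x_1 + \tfrac{1}{2}x_2 + \tfrac{1}{2}x_3 - \tfrac{1}{2}x_1x_2x_3, \] so by Theorem 19, \[ \mathbf{Inf}_1[\mathrm{Maj}_3] = \widehat{f}(\{1\})^2 + \widehat{f}(\{1,2,3\})^2 = \tfrac{1}{4} + \tfrac{1}{4} = \tfrac{1}{2}, \] matching the edge-counting computation in Examples 15 (and similarly for coordinates $2$ and $3$).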
The derivative operators are very convenient for functions defined on $\{-1,1\}^n$ but they are less natural if we think of the Hamming cube as $\{\mathsf{True}, \mathsf{False}\}^n$; for the more general domains we'll look at in later chapters they don't even make sense. We end this section by introducing some useful definitions that will generalize better later on. Definition 22 The $i$th expectation operator $\mathrm{E}_i$ is the linear operator on functions $f : \{-1,1\}^n \to {\mathbb R}$ defined by \[ \mathrm{E}_i f (x) = \mathop{\bf E}_{{\boldsymbol{x}}_i}[f(x_1, \dots, x_{i-1}, {\boldsymbol{x}}_i, x_{i+1}, \dots, x_n)]. \] Whereas $\mathrm{D}_i f$ isolates the part of $f$ depending on the $i$th coordinate, $\mathrm{E}_i f$ isolates the part not depending on the $i$th coordinate. In the exercises you are asked to verify the following: Proposition 23 For $f : \{-1,1\}^n \to {\mathbb R}$, □ $\displaystyle \mathrm{E}_i f (x) = \frac{f(x^{(i \mapsto 1)}) + f(x^{(i \mapsto -1)})}{2}$, □ $\displaystyle \mathrm{E}_i f (x) = \sum_{S \not \ni i} \widehat{f}(S)\,x^{S}$, □ $\displaystyle f(x) = x_i \mathrm{D}_i f(x) + \mathrm{E}_i f(x)$. Note that in the decomposition $f = x_i \mathrm{D}_i f + \mathrm{E}_i f$, neither $\mathrm{D}_i f$ nor $\mathrm{E}_i f$ depends on $x_i$. This decomposition is very useful for proving facts about boolean functions by induction on $n$. Finally, we will also define an operator very similar to $\mathrm{D}_i$ called the $i$th Laplacian: Definition 24 The $i$th directional Laplacian operator $\mathrm{L}_i$ is defined by \[ \mathrm{L}_i f = f - \mathrm{E}_i f. \] Notational warning: many authors use the negated definition, $\mathrm{E}_i f - f$. In the exercises you are asked to verify the following: Proposition 25 For $f : \{-1,1\}^n \to {\mathbb R}$, □ $\displaystyle \mathrm{L}_i f (x) = \frac{f(x)- f(x^{\oplus i})}{2}$, □ $\displaystyle \mathrm{L}_i f (x) = x_i \mathrm{D}_i f(x) = \sum_{S \ni i} \widehat{f}(S)\,x^{S}$, □ $\displaystyle \langle f, \mathrm{L}_i f \rangle = \langle \mathrm{L}_i f, \mathrm{L}_i f \rangle = \mathbf{Inf}_i[f]$.
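The definitions above are easy to check by brute force for small n. The sketch below (not from the text) enumerates all inputs in {-1,1}^3, counts pivotal coordinates for the 3-bit majority function, and should print 0.5 for each coordinate, in agreement with Examples 15.

public class Influence {
    // 3-bit majority: +1 if at least two coordinates are +1, else -1.
    static int maj3(int[] x) {
        return (x[0] + x[1] + x[2] > 0) ? 1 : -1;
    }

    public static void main(String[] args) {
        int n = 3;
        double[] pivotalCount = new double[n];
        // Enumerate all 2^n inputs via the bits of a counter.
        for (int mask = 0; mask < (1 << n); mask++) {
            int[] x = new int[n];
            for (int i = 0; i < n; i++) {
                x[i] = ((mask >> i) & 1) == 1 ? 1 : -1;
            }
            int fx = maj3(x);
            for (int i = 0; i < n; i++) {
                x[i] = -x[i];            // flip coordinate i
                if (maj3(x) != fx) {
                    pivotalCount[i]++;   // coordinate i is pivotal on this input
                }
                x[i] = -x[i];            // flip it back
            }
        }
        for (int i = 0; i < n; i++) {
            System.out.println("Inf_" + (i + 1) + "[Maj_3] = " + pivotalCount[i] / (1 << n));
        }
    }
}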
{"url":"http://www.contrib.andrew.cmu.edu/~ryanod/?p=351","timestamp":"2014-04-18T03:05:45Z","content_type":null,"content_length":"78293","record_id":"<urn:uuid:bbbea41c-1a38-424a-8feb-d75d5f23045d>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
Falsificationism and Statistical Learning Theory: Comparing the Popper and Vapnik-Chervonenkis Dimensions D. Corfield, B. Schölkopf and V. Vapnik Journal for General Philosophy of Science Volume 40, Number 1, , 2009. We compare Karl Popper’s ideas concerning the falsifiability of a theory with similar notions from the part of statistical learning theory known as VC-theory. Popper’s notion of the dimension of a theory is contrasted with the apparently very similar VC-dimension. Having located some divergences, we discuss how best to view Popper’s work from the perspective of statistical learning theory, either as a precursor or as aiming to capture a different learning activity. PDF - Requires Adobe Acrobat Reader or other PDF viewer.
{"url":"http://eprints.pascal-network.org/archive/00006305/","timestamp":"2014-04-19T14:41:39Z","content_type":null,"content_length":"7652","record_id":"<urn:uuid:ff15588c-54c0-4b33-ac49-5f55a81b11c6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Back to index Generalized Search Trees (GiST) for Database Systems J. M. Hellerstein, J.F. Naughton, A. Pfeffer One-line summary: GiST's are like R-trees with virtual methods, so they can support a variety of data types (extensible); special-case optimizations can be included to increase performance with simple linear-range data. Overview/Main Points • Search key: any arbitrary predicate that holds for each datum below the key. • Search tree: hierarchy of partitions of a dataset, in which each partition has categorization that holds for all data in the partition. • Unlike R-trees, don't require that p --> q (p is a predicate below node N, q is a predicate at node N). R-trees are overly restrictive since this is in fact the case, so we might do better by allowing the lower predicates to be some property logically orthogonal to the upper predicates. • Methods you need to "override": 1. Consistent(E,q): given E=(p,ptr), return false if (p^q) can be guaranteed unsatisfiable (i.e. datum cannot be in this subtree). 2. Union(E1,E2,...,En): return some predicate that holds for all tuples stored below E1 thru En. E.g. find an r such that (p1 OR p2 OR ... OR pN) --> r. 3. Compress 4. Decompress 5. Penalty(E1,E2): domain-specific penalty to insert E2 into subtree rooted at E1, e.g. the "area penalty" from R-trees. 6. PickSplit(P): split P into two sets of entries each of size at least kM. Tree minimum fill factor controlled here. • Specialization routines FindMin and numeric compares can be used to optimize behavior for linearly-ordered domains. • Issues: key overlap may occur either because of data overlap or because key compression destroys distinguishing information (e.g. bounding boxes overlap even though objects don't). • Hot spot: specific predicate satisfiable by many tuples; correlation factor measures the likelihood that (p OR q) --> (p^q). How does GiST behave with hot spots? Unknown. • Future work: indexability--is a given collection of data practically "indexable" for a given set of queries? (need an "indexability theory"); indexing nonstandard domains; query optimization should account for (not-well-defined) cost of searching a GiST; lossy key compression techniques. Unifies B-trees, R-trees, and others into a generalized extensible structure. (Shades of C++ come to mind.) Also the only publication I know to correctly use the singular "spaghetto". Extensibility: two examples were pre-existing, third was closely related to R-trees (set indexing). A "way out" example might have been interesting. Evidently lots of work yet to be done to flesh this stuff out (as advertised in class!). Back to index
{"url":"http://carlstrom.com/stanford/quals/mirror/swig.stanford.edu/pub/summaries/database/gist.html","timestamp":"2014-04-20T13:18:55Z","content_type":null,"content_length":"3472","record_id":"<urn:uuid:a3ec950b-0d1f-4abc-8ca9-fe27ece7f79a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Dan Tyler Abell University of Maryland "For his contributions in applying advanced mathematical theory of Taylor series of several complex variables to determine the domain of convergence for dynamical systems and for his contribution in advancing and determining an optimal symplectification scheme for the Taylor map applicable particularly to long term tracking in accelerator physics." Dan Abell earned his BA in physics at Swarthmore College in 1982. He then taught physics and math at the Dublin School, a small prep school in New Hampshire, before going on to pursue graduate work. He received an M.Sc. from the University of Maryland in 1989 for experimental work in surface physics, and then switched to theoretical work in dynamical systems and accelerator theory. Working under the guidance of Prof. Alex Dragt, Dan earned his Ph.D., also from the University of Maryland, in 1995. Their research centered on two particular topics: (i) the relationship between the domain of convergence of a given Taylor series map and the singularities of the motion in the underlying dynamical system; and (ii) optimal schemes for symplectifying a given Taylor series map for the purposes of doing long-term tracking studies in accelerator physics. Dan is continuing the research on Cremona symplectification, the latter topic---now with particular emphasis on applications to the Large Hadron Collider.
{"url":"http://www.aps.org/units/dpb/awards/recipient.cfm?first_nm=Dan&last_nm=Abell&year=1996","timestamp":"2014-04-19T17:51:27Z","content_type":null,"content_length":"15419","record_id":"<urn:uuid:80700574-81dc-4bfb-aa20-3b9341c989b0>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
KEY NOUN: pl. ge·om·e·tries a. The mathematics of the properties, measurement, and relationships of points, lines, angles, surfaces, and solids. b. A system of geometry: Euclidean geometry. c. A geometry restricted to a class of problems or objects: solid geometry. d. A book on geometry. a. Configuration; arrangement. b. A surface shape. 3. A physical arrangement suggesting geometric forms or lines. Middle English , from Old French, from Latin , from Greek , from to measure land ; see in Indo-European roots OTHER FORMS:ge·om
{"url":"https://education.yahoo.com/reference/dictionary/entry/geometry","timestamp":"2014-04-20T13:27:00Z","content_type":null,"content_length":"23576","record_id":"<urn:uuid:a43f4f7b-49f0-4303-957e-1c1071154ccc>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
There's been more feedback discussion at Climate Audit . And since people are talking from different professional points of view (lots from EEs lately) I thought it would be useful to try to draw together the terminologies, and relate the concepts. Here's a TOC: I'll draw heavily on in this post. Here's a diagram of a feedback amplifier: Engineers think of how this circuit would operate on sinusoids, so the open loop gain A and the feedback factor β may be frequency dependent. The formula for closed loop gain is: If β is positive, in this convention, the feedback is negative, because it is fed back to the negative input port. And the closed loop gain is less than open-loop. But note that the statement that it is negative is frequency dependent. The closed loop gain is the factor by which you multiply a frequency component of the input I(f) to get the output O(f).\[O(f)=A_{fb}(f) I(f)\] And if you go to the time domain by inverse Fourier Transform, this becomes a convolution: \[O(t)=\int_{-\infty}^{t}\hat{A}(t-\tau)I(\tau) d\tau\] where \(\hat{A}\) is the inverse Fourier Transform of the closed loop gain A Factor: For consistency, set α = -β. Then the ranges are: 1. α < 0 : negative feedback - stable 2. 0 < α < 1/A[OL] : positive feedback - stable (but getting less so) 3. α > 1/A[OL] : positive feedback - unstable (oscillation) EE's often use a to represent time series in a kinda frequency domain way. I think of it as a modified discrete-time Fourier transform . It renders the transfer function as, often, a rational function and allows you to study the stability and other aspects of the response in terms of the location of its poles. More later Time Series Feedback is the AR part of . The transfer function is analogous to a version with exogenous inputs (ARMAX) . Or you can look at the original version as formally analogous with the noise acting as input. Anyway, if you treat the lag operator as a symbolic variable, you see again a rational function acting as a transfer function. That is developed into a Z-transform equivalence Factor: In time series, there isn't a commonly used equivalent of the boundary between positive and negative feedback, in the amplifier sense. There is a stability criterion, which can be got from the Z-transform expression. It requires locating the roots of the denominator polynomial (poles). Then, in this formulation, the roots have to have magnitude >1 for stability. For 1st order: \ [y_n+a_1 y_{n-1}=...\] the requirement is that |a[1]| < 1. Climate - equilibrium This goes back at least to the Cess definition used by Zhang et al 94 , for example. Suppose you have equilibrium T=0 and flux=0. Then you impose a forcing G W/m2. The system incorporates a number of temperature dependent flux terms F , which, with the sign convention of G, each vary with T as = -λ So the new equilibrium temperature is T = G/λ, where λ is the sum of the λ . The nett feedback is negative, else runaway, but the components of the sum could be positive or negative. Globally, for example, there is a base "no-feedback sensitivity" feedback of about 3.3 W/m2/C, based just on BB radiation. The exact amount is not important here. That gives the often quoted 1.2 C per CO2 doubling. Although not usually thought of as a feedback, it goes into the sum of feedback factors. The λ that add to it are called negative feedbacks, because they add to the stabilization. Those that subtract are positive. Comparison with Elec and Time Series - much simpler.It's equilibrium (DC) - there are no dynamics. 
Time does not appear. Climate - non-equilibrium DS11 and SB11 add some thermal dynamics, with the equation \[C dT/dt = G - \lambda T\] measuring the time response to the perturbation provided by G. Their time scales are too short to assume equilibrium. This approaches steady state as the temperature approaches G/λ, the sensitivity value. C is the thermal capacity. Adding the gross dynamics does not change the feedback concepts (though it expresses the potential for thermal runaway). In EE terms, C is a single capacitance,and you could think of it as a resistance (1/λ)-capacitance(C) circuit. But I don't think that changes any of our feedback issues. The DE shows an ARMA analogue if you convert the derivative to a difference: \[T_n-T_{n-1} = -\lambda {\delta}t T_n + {\delta}t F(t_n) \] or \[T_n = (T_{n-1}+ {\delta}t F(t_n))/(1+\lambda {\delta}t) T_n \] which starts to look like the closed loop gain equation. But it's also very like ARMAX(1,0,1) The differential equation has the solution: \[O(t)=\int_{-\infty}^{t}e^{\lambda*(t-\tau)}F(\tau) d\tau\] very like the iFT time domain expression of the closed loop gain. It has, however, a restricted transfer function, corresponding in EE terms to a single pole. This can be seen by Laplace Transform: \[s \hat{T} - T(0)=-\lambda \hat{T} +\hat{F}\] or \[\hat{T}= (T(0) + \hat{F})/(s +\ There are plenty of ways to generalise the de approach. T could be a vector and λ a square matrix, which would give multiple poles. Or λ could be a function of t, perhaps with a convolution Comparison with Elec and Time Series - simple dynamics of heat storage. But there are no time-scales associated with the individual feedbacks. They are not reactive. I don't think there is any need to consider phase shift in the feedback. 8 comments: 1. That's useful. In my experience lapsed EEs are often thrown by positive feedback, assuming it must inevitably lead to runaway. This was initially incomprehensible to me, as your feedback equation shows - positive feedback < 1 does not lead to runaway. And indeed this is the basis of many early (super-regenerative) radio receivers, which used a single amplifier stage with positive feedback to achieve high gains with few components. I think the problem may spring from the fact that many electronic components vary in gain with temperature, and so the gain can vary significantly over time (including going > 1), as a result of which positive feedback tends to be avoided in practice. Kevin C 2. Kevin, Yes, you can hear the super-regenerative effect in an acoustic feedback loop just below where it starts howling. Very frequency-selective - it sounds awful. There's the same gap in climate. WV feedback is positive but does not presently lead to runaway. 3. Nick Stokes: WV feedback is positive but does not presently lead to runaway. Good discussion, NIck. Only comment I have to add is that in general when you have net amplification (positive feedback > 1), this is unphysical unless there is a corresponding stabilizing nonlinearity (mathematically the nonlinearity needs to appear in the "damping" sector of the system of differential equations. Nonlinearities that just make the system stiffer for example, won't stabilize runaway In general effect of the nonlinearity will be to "dampen" the amplification, so that at a sufficiently highly amplitude operating point, the system will remain stable, in spite of the underlying If you start the system at a low enough temperature, the system will 'run away' until it reaches a "just stable" operating point. 
Similarly, start it at too high a temperature, and it will drop until it oscillates about the stability point. Likely one can extend these concepts to the case, for example, of a "step function" in CO2 forcing (pick either sign).
Another point to make is the fact that water can change state in our climate, and the frozen version of it drastically affects the net albedo of the Earth; there is a sort of net-amplification/nonlinearity rolled into one here. Get enough warming, and you will get a runaway condition, in which most of the ice melts (as the ice albedo decreases, that acts as a stabilizing nonlinearity). Similarly, if you get enough cooling, the increase in albedo can lead to a runaway precipitation event (e.g., an ice age). I'll let you take a stab at that example, if you want, as to what that would be classified as in the EE literature. I suspect chemistry may have language better suited for this type of scenario.

4. Thanks, Carrick. Yes, people talk too freely about positive feedback leading to infinite heating etc. It's really just a transition to a state where non-linearities change the gradient so the same feedback doesn't apply. I think the classic electric circuit with high positive feedback is the bistable multivibrator. It takes a pulse to get from its quasi-stable state, then it quickly passes through the region where it is a positive feedback amplifier, then to its other state. In climate, one time where we might have had instability was in the recovery from the Younger Dryas period. A sudden temperature change (maybe) but soon limited by the non-linear response. In chemistry, I think the language is "explosion" :)

5. Or maybe "runaway chemical reaction"? Or thermal runaway? It's also instructive to look at historical examples of runaway climate change and look at the reasons they were thought to occur. (This doesn't include ice ages, which I think should also be considered, as well as sudden shifts from glaciation to near ice-free conditions.) These seem to be the main ones:
Water freezing (response to cooling, negative feedback of frozen water)
Ice melting (response to heating from other forcings, usually)
Methane released (associated with thawing or tectonic activity)
Massive volcanic eruptions (tectonic activity)
Asteroid impacts
Biosphere disruptive events (introduction of new species, either through evolution or tectonic plates)

6. Nice post, Nick. You sent me around the internet reading up on Z transforms.

7. Thanks, Jeff, I've always liked Z-transforms - polynomials seem to look more natural.

8. And β of the climate system is -0.4 because the ~200 W/m2 solar absorbed by the system is amplified to ~500 W/m2 to the surface.
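As a concrete companion to the one-box energy-balance equation C dT/dt = F − λT discussed in the post, here is a minimal numerical sketch. It is not taken from DS11, SB11 or the post; the parameter values and the implicit (backward-Euler) update are illustrative assumptions only. It steps the rearranged difference equation T_n = (T_{n−1} + δt·F/C)/(1 + λ·δt/C) forward under a step forcing and shows the approach to the steady-state value F/λ.

```java
// Minimal sketch of the one-box energy-balance model C dT/dt = F - lambda*T.
// All numbers below are illustrative assumptions, not values from the post.
public class OneBoxModel {
    public static void main(String[] args) {
        double C = 8.0;        // thermal capacity (arbitrary units)
        double lambda = 1.3;   // feedback parameter; steady state is F / lambda
        double F = 3.7;        // step forcing switched on at t = 0
        double dt = 0.1;       // time step
        double T = 0.0;        // temperature anomaly, starts at equilibrium

        for (int n = 1; n <= 500; n++) {
            // Backward-Euler update, the same rearrangement used in the post (with C kept explicit):
            // T_n = (T_{n-1} + dt*F/C) / (1 + lambda*dt/C)
            T = (T + dt * F / C) / (1.0 + lambda * dt / C);
            if (n % 100 == 0) {
                System.out.printf("t = %5.1f   T = %.4f%n", n * dt, T);
            }
        }
        System.out.printf("Steady state F/lambda = %.4f%n", F / lambda);
    }
}
```

The same recursion makes the ARMA analogy visible: each step is a weighted average of the previous temperature and the current forcing, with no runaway as long as λ > 0.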
{"url":"http://moyhu.blogspot.com/2011/09/feedback.html","timestamp":"2014-04-18T18:10:22Z","content_type":null,"content_length":"118763","record_id":"<urn:uuid:4e72faaf-cab0-4514-8ab1-fbe29e339223>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
GAISE Report

The American Statistical Association (ASA) funded the Guidelines for Assessment and Instruction in Statistics Education (GAISE) Project, which consists of two groups, one focused on K-12 education and one focused on introductory college courses. The report in the link below presents the recommendations developed by the college group. The report includes a brief history of the introductory college course and summarizes the 1992 report by George Cobb that has since been considered a generally accepted set of recommendations for teaching these courses. The six recommendations are:
1. Emphasize statistical literacy and develop statistical thinking
2. Use real data
3. Stress conceptual understanding rather than mere knowledge of procedures
4. Foster active learning in the classroom
5. Use technology for developing conceptual understanding and analyzing data
6. Use assessments to improve and evaluate student learning
The report concludes with suggestions for how to make these changes, and includes numerous examples in the appendix to illustrate details of the recommendations.
http://www.amstat.org/Education/gaise

The Undergraduate Statistics Education Initiative (USEI) encourages math programs to "explore ways to expand and improve undergraduate statistics education" with proposals and recommendations for programs beyond the teaching of introductory statistics. In 1999 a committee funded by the ASA met to "Promote Undergraduate Statistics to Improve the Workforce of the Future". This initiative has led to multiple workshops and symposia.
http://www.amstat.org/education/index.cfm?fuseaction=usei

The 2004 CUPM Guide recommends that "every mathematical science major should study statistics or probability with an emphasis on data analysis". CUPM Curriculum Guide 2004, a report by the Committee on the Undergraduate Program in Mathematics of The Mathematical Association of America.
http://www.maa.org/cupm/cupm2004.pdf

CRAFTY Workshop in Statistics

The Curricular Foundations Workshop in Statistics, a Curriculum Reform and the First Two Years (CRAFTY) workshop on Statistics, stated:
• The 1991 CUPM recommendation that every mathematical sciences major should study statistics or probability with an emphasis on data analysis has been almost universally ignored.
• There are even more compelling reasons for the recommendation today than there were 10 years ago: (1) Data analysis plays a crucial role in many aspects of academic, professional, and personal life. (2) The job market for mathematics majors is largely in fields that use data. (3) Future teachers will need knowledge of statistics and data analysis to be current with the new NCTM Standards and with the new and highly popular AP Statistics course. (4) The study of statistics provides an opportunity for students to gain frequent experience with the interplay between abstraction and context that we regard as critical for all mathematical sciences students.
Moore, T., Peck, R., and Rossman, A. (2000), "Calculus Reform and the First Two Years (CRAFTY)." http://www.maa.org/cupm/crafty/cf_project.html

The Bio2010 committee recommended that:
• Concepts, examples, and techniques from mathematics and the physical and information sciences should be included in biology courses and biological concepts and examples should be included in other science courses.
• Faculty in biology, mathematics, and physical sciences must work collaboratively to find ways of integrating mathematics and physical sciences into life science courses as well as providing avenues for incorporating life science examples that reflect the emerging nature of the discipline into courses taught in mathematics and physical sciences. BIO2010: Transforming Undergraduate Education for Future Research Biologists. (2003), “A New Biology Curriculum.” National Academies Press, Chapter 2, pp. 47-48. http://www.nap.edu/books/0309085357/ My Favorite Articles about Statistics Education: 1. Bryce, Gould, Notz, and Peck, “Curriculum Guidelines for Bachelor of Science Degrees in Statistical Science,” American Statistician, Feb. 2001, (55) No. 1, page 9. 2. Cobb, G. (1992), “Teaching Statistics”, in L.A. Steen (ed.) Heeding the Call for Change: Suggestions for Curricular Action, Mathematical Association of America, The committee’s report was unanimously endorsed by the Board of Directors of the American Statistical Association. 3. Cobb, G. (1993), ‘Reconsidering Statistics Education: A National Science Foundation Conference’, Journal of Statistics Education 1(1) 4. delMas, R., Garfield, J., and Chance, B., “Tools for Teaching and Assessing Statistical Inference,” #DUE-9752523, 2/1/98-10/31/2000. http://www.gen.umn.edu/research/stat_tools/ 5. delMas, R., Garfield, J., and Chance, B., “The Web-based ARTIST Project,” #DUE-0206571, 8/15/2002-4/30/2006. http://www.gen.umn.edu/artist/ 6. Gal, I. (2002). Adults’ Statistical Literacy: Meanings, Components, Responsibilities. International Statistical Review, 70, 1-51. 7. Garfield , J. (2000) Evaluating the Statistics Education Reform. Final Report to the National Science Foundation. http://education.umn.edu/EdPsych/Projects/Impact.html 8. Garfield, J., Hogg, B., Schau, C., and Whittinghill, D. (2002), “First Courses in Statistical Science: The Status of Educational Reform Efforts.” Journal of Statistics Education, V(10), Number 2. 9. Journal of Statistics Education Data Archive, http://www.amstat.org/publications/jse/ 10. Moore D., and discussants (1997), “New pedagogy and new content: the case of statistics,” International Statistical Review, 65, pp123-165. 11. Moore, T., Editor, (2000), Resources for Undergraduate Instructors Teaching Statistics, MAA Notes (52), The MAA and the ASA. 12. Pearl, D., “CAUSEweb: A Digital Library of Undergraduate Statistics Education.” #DUE-0333672, 10/1/03 – 9/3005. http://www.causeweb.org 13. Rumsey, D. J. (2002). Statistical Literacy as a Goal for Introductory Statistics Courses. Journal of Statistics Education [Online], 10(3). 14. Snell, L., Doyle, P., Garfield, J., Moore, T., Peterson, B., and Shah, N., (1999), Chance Project Website, including Chance News and Chance Course, NECUSE and #DUE-9653416. http:// 15. Utts, J. (2003). What educated citizens should know about statistics and probability? The American Statistician, 57 (2), 74-79.
{"url":"http://web.grinnell.edu/individuals/kuipers/stat2labs/gaise.html","timestamp":"2014-04-17T01:39:42Z","content_type":null,"content_length":"16629","record_id":"<urn:uuid:31cd5500-b087-46a2-a5b5-902486555de8>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
Package org.apache.commons.math3.analysis.differentiation

This package holds the main interfaces and basic building block classes dealing with differentiation.

Class Summary:
• DerivativeStructure — Class representing both the value and the differentials of a function.
• DSCompiler — Class holding "compiled" computation rules for derivative structures.
• FiniteDifferencesDifferentiator — Univariate functions differentiator using finite differences.
• GradientFunction — Class representing the gradient of a multivariate function.
• JacobianFunction — Class representing the Jacobian of a multivariate vector function.
• SparseGradient — First derivative computation with large number of variables.

Package Description:
The core class is DerivativeStructure which holds the value and the differentials of a function. This class handles some arbitrary number of free parameters and arbitrary differentiation order. It is used both as the input and the output type for the UnivariateDifferentiableFunction interface. Any differentiable function should implement this interface. The UnivariateFunctionDifferentiator interface defines a way to differentiate a simple UnivariateFunction and get a UnivariateDifferentiableFunction. Similar interfaces also exist for multivariate functions and for vector or matrix valued functions.
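A short usage sketch may help make the package summary concrete. It follows the Commons Math 3.x API for DerivativeStructure as I understand it (a constructor taking the number of free parameters, the derivative order, the parameter index and the value); the function f(x) = x²·sin(x) and the evaluation point are arbitrary illustrative choices, and the exact signatures should be checked against the library version in use.

```java
import org.apache.commons.math3.analysis.differentiation.DerivativeStructure;

public class DerivativeStructureExample {
    public static void main(String[] args) {
        // One free parameter (index 0), derivatives up to order 2, evaluated at x = 2.0.
        DerivativeStructure x = new DerivativeStructure(1, 2, 0, 2.0);

        // f(x) = x^2 * sin(x); the value and its differentials are carried along together.
        DerivativeStructure f = x.multiply(x).multiply(x.sin());

        System.out.println("f(2)   = " + f.getValue());
        System.out.println("f'(2)  = " + f.getPartialDerivative(1));
        System.out.println("f''(2) = " + f.getPartialDerivative(2));
    }
}
```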
{"url":"http://jwork.org/scavis/api/doc.php/org/apache/commons/math3/analysis/differentiation/package-summary.html","timestamp":"2014-04-20T00:58:23Z","content_type":null,"content_length":"26249","record_id":"<urn:uuid:62c86d11-eb6d-4009-9f05-45b7f61a0d5f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
Limit involving floor function + limit involving fundamental trig limit HELP! :)

April 14th 2013, 11:18 PM
Limit involving floor function + limit involving fundamental trig limit HELP! :)
Hi all! I have a couple of questions I'm struggling with in my University homework, and would love some guidance! I'm not after just the answer; that does not help me learn!
First up is the following limit: Attachment 27962
I am struggling to understand how I should go about this question. I am aware the answer is -1, but would like some guidance as to how to get onto the right track towards that answer.
Next up is the following limit: Attachment 27963
I believe I can apply L'Hopital's rule, but the question specifically asks me to write it in the form: Attachment 27964
where y and z are functions of x, so that the fundamental trig limit can be utilized, and A is a real number scalar. Thus, I am not sure how to express it in that form, so guidance as to what trig identities I should use etc. will be greatly appreciated!
Thanks in advance for any help!

April 15th 2013, 02:30 AM
Re: Limit involving floor function + limit involving fundamental trig limit HELP! :)
First up is the following limit: Attachment 27962
Next up is the following limit: Attachment 27963
For the first one, graph the function near $x=-2$. You do not need L'Hopital's rule for the second:
$\frac{\sin(2x)}{\sin(5x)}=\frac{2}{5}\,\frac{\frac{\sin(2x)}{2x}}{\frac{\sin(5x)}{5x}}$.

April 15th 2013, 08:37 AM
Re: Limit involving floor function + limit involving fundamental trig limit HELP! :)
I've attached some remarks on the floor function. I hope it helps. Attachment 27969

April 16th 2013, 12:36 AM
Re: Limit involving floor function + limit involving fundamental trig limit HELP! :)
Thanks very much for the help guys, much appreciated!
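For readers following the thread, here is the trig limit spelled out, assuming (the attachments with the exact statement are not reproduced here) that the limit is taken as x → 0:

\[
\lim_{x\to 0}\frac{\sin(2x)}{\sin(5x)}
=\lim_{x\to 0}\frac{2}{5}\cdot\frac{\dfrac{\sin(2x)}{2x}}{\dfrac{\sin(5x)}{5x}}
=\frac{2}{5}\cdot\frac{1}{1}=\frac{2}{5},
\]

since the fundamental trig limit gives \(\sin(y)/y \to 1\) as \(y \to 0\), applied once with \(y = 2x\) and once with \(y = 5x\).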
{"url":"http://mathhelpforum.com/calculus/217498-limit-involving-floor-function-limit-involving-fundamental-trig-limit-help-print.html","timestamp":"2014-04-21T00:17:10Z","content_type":null,"content_length":"7841","record_id":"<urn:uuid:cb29a2ed-29fe-4d64-8da8-83d9a2f9ba8d>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project

Modular Arithmetic

This Demonstration visually illustrates various properties of modular arithmetic by creating an "operation table" modulo n, where 0 is represented by black, 1 by white, and other values by intermediate colors. The allowed numbers can be restricted to be nonzero or the units modulo n, and the operations are modular addition, subtraction, multiplication, powers, and sums of squares. This visual display of modular arithmetic can be used to illustrate different principles of modular arithmetic (such as the existence of additive or multiplicative inverses modulo n). Because 0 is always represented by black and 1 by white, these values are easy to spot in the table. For example, to determine which numbers have multiplicative inverses mod 12, you simply need to move the modulus slider to 12, set the operation to times, and then determine which rows of the table have the white "1" square. By using this Demonstration, you can see a wide range of values and different operations for modular arithmetic, which is quite useful for those beginning to study algebra/number theory.
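The same test described above can be carried out numerically. Here is a small sketch (not part of the Wolfram Demonstration itself) that builds the "times" table mod 12 and lists which residues have multiplicative inverses, i.e., which rows contain a 1:

```java
// Build the multiplication table mod n and report which residues are units
// (have a multiplicative inverse). Mirrors the "white 1 square" test in the text.
public class ModularTable {
    public static void main(String[] args) {
        int n = 12; // modulus; 12 matches the example in the description
        int[][] table = new int[n][n];
        for (int a = 0; a < n; a++) {
            for (int b = 0; b < n; b++) {
                table[a][b] = (a * b) % n;
            }
        }
        // A residue a is a unit mod n exactly when some b gives a*b == 1 (mod n).
        System.out.print("Units mod " + n + ":");
        for (int a = 0; a < n; a++) {
            for (int b = 0; b < n; b++) {
                if (table[a][b] == 1) {
                    System.out.print(" " + a);
                    break;
                }
            }
        }
        System.out.println(); // expected output: 1 5 7 11
    }
}
```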
{"url":"http://demonstrations.wolfram.com/ModularArithmetic/","timestamp":"2014-04-17T06:45:03Z","content_type":null,"content_length":"43290","record_id":"<urn:uuid:4f724290-00ee-4904-8949-af5047f0b86f>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Arthur C. Clarke's "The Wall of Darkness" - what's the point?
Replies: 10   Last Post: Apr 2, 2013 3:31 PM

Re: Arthur C. Clarke's "The Wall of Darkness" - what's the point?
Posted: Apr 1, 2013 7:28 AM

Robert Clark <rgregoryclark@yahoo.com> wrote:
> Thanks for bringing this thread up again which I hadn't seen when it
> first appeared. I didn't know Clarke had written such a story with
> such a highly abstract mathematical topic at its focus. As the others
> mentioned, I think Clarke just wanted to write a story based on the
> topological concept of a "one-sided" surface.

I thought I was familiar with all of ACC's science fiction, but I must have overlooked this one. I've just read it - a most interesting storyline. Thanks to the OP for mentioning it.

Nige Danton - Replace the obvious with g.m.a.i.l
{"url":"http://mathforum.org/kb/message.jspa?messageID=8800586","timestamp":"2014-04-20T09:28:26Z","content_type":null,"content_length":"26331","record_id":"<urn:uuid:7229a3ff-5ff1-4a2f-8b4b-8f189948c66f>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
A Agresti Affiliation: University of Florida Country: USA Research Grants Detail Information 1. Dealing with discreteness: making 'exact' confidence intervals for proportions, differences of proportions, and odds ratios more exact A Agresti Department of Statistics, University of Florida, Gainesville, Florida 32611 8545, USA Stat Methods Med Res 12:3-21. 2003 2. Modelling ordered categorical data: recent advances and future challenges A Agresti Department of Statistics, University of Florida, Gainesville, Florida 32611 8545, USA Stat Med 18:2191-207. 1999 ..Throughout, we suggest problem areas for future research and we highlight challenges for statisticians who deal with ordinal data... 3. Frequentist performance of Bayesian confidence intervals for comparing proportions in 2 x 2 contingency tables Alan Agresti Department of Statistics, University of Florida, Gainesville, Florida 32611, USA Biometrics 61:515-23. 2005 4. Simple improved confidence intervals for comparing matched proportions Alan Agresti Department of Statistics, University of Florida, Gainesville, FL 32611 8545, USA Stat Med 24:729-40. 2005 ..The improvement of the interval for the difference of probabilities is to add two observations to each sample before applying it. The improvement for estimating an odds ratio transforms a confidence interval for a single proportion... 5. Effects and non-effects of paired identical observations in comparing proportions with binary matched-pairs data Alan Agresti Department of Statistics, University of Florida, Gainesville, FL 32611 8545, USA Stat Med 23:65-75. 2004 ..We also discuss extension of this result to matched sets... 6. On small-sample confidence intervals for parameters in discrete distributions A Agresti Department of Statistics, University of Florida, Gainesville 32611 8545, USA Biometrics 57:963-71. 2001 ..We illustrate for a variety of discrete problems, including interval estimation of a binomial parameter, the difference and the ratio of two binomial parameters for independent samples, and the odds ratio... 7. Exact inference for categorical data: recent advances and continuing controversies A Agresti Department of Statistics, University of Florida, Gainesville, 32611 8545, U S A Stat Med 20:2709-22. 2001 ..In general, adjusted exact methods based on the mid-P-value seem a reasonable way of reducing the severity of this problem... 8. On logit confidence intervals for the odds ratio with small samples A Agresti Department of Statistics, University of Florida, Gainesville 32611 8545, USA Biometrics 55:597-602. 1999 9. Modeling a categorical variable allowing arbitrarily many category choices A Agresti Department of Statistics, University of Florida, Gainesville 32611 8545, USA Biometrics 55:936-43. 1999 ..These tests are alternatives to the weighted chi-squared test and the bootstrap test proposed by Loughin and Scherer for this hypothesis... 10. Strategies for comparing treatments on a binary response with multi-centre data A Agresti Department of Statistics, University of Florida, Gainesville 32611 8545, USA Stat Med 19:1115-39. 2000 ..This article discusses these matters in the context of various strategies for analysing such data, in particular focusing on special problems presented by sparse data... 11. Simultaneous confidence intervals for comparing binomial parameters Alan Agresti Department of Statistics, University of Florida, Gainesville, Florida 32611, USA Biometrics 64:1270-5. 
2008 ...For the difference of proportions, the proposed method has performance comparable to a method proposed by Piegorsch (1991, Biometrics 47, 45-52)...

12. Modeling and inference for an ordinal effect size measure
Euijung Ryu, Department of Health Sciences Research, Mayo Clinic, Rochester, MN 55905, USA
Stat Med 27:1703-17. 2008 ...The methods are illustrated for a study comparing treatments for shoulder-tip pain...

13. Multivariate extensions of McNemar's test
Bernhard Klingenberg, Department of Mathematics and Statistics, Williams College, Williamstown, Massachusetts 01267, USA
Biometrics 62:921-8. 2006 ...We apply the test to safety data for a drug, in which two doses are evaluated by comparing multiple responses by the same subjects to each one of them...

Research Grants

Alan Agresti; Fiscal Year: 2000

Alan Agresti; Fiscal Year: 1993 ...For both types of analyses, semi-parametric methods will be developed to handle cases in which traditional maximum likelihood approaches are awkward or infeasible...

Alan Agresti; Fiscal Year: 2003 ...The two types of improved intervals will be compared (in some cases, approximate may be better because of the inherent conservativeness of exact methods for discrete data), and extensions will be developed for stratified data...
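Item 4 in the publication list above describes improving the interval for a difference of proportions by adding two observations to each sample before applying it. The sketch below implements the published Agresti–Caffo version of that idea (add one success and one failure to each sample, then use the Wald formula); the counts are made-up illustrative numbers, not data from any of the papers listed.

```java
// Agresti-Caffo style adjusted interval for p1 - p2:
// add one success and one failure to each sample, then apply the Wald interval.
public class AdjustedDifferenceInterval {
    public static void main(String[] args) {
        int x1 = 8,  n1 = 20;   // successes / trials in sample 1 (illustrative)
        int x2 = 4,  n2 = 25;   // successes / trials in sample 2 (illustrative)

        double p1 = (x1 + 1.0) / (n1 + 2.0);   // adjusted proportions
        double p2 = (x2 + 1.0) / (n2 + 2.0);
        double se = Math.sqrt(p1 * (1 - p1) / (n1 + 2.0) + p2 * (1 - p2) / (n2 + 2.0));
        double z = 1.96;                        // approximate 95% normal quantile

        double diff = p1 - p2;
        System.out.printf("estimate %.3f,  95%% CI (%.3f, %.3f)%n",
                          diff, diff - z * se, diff + z * se);
    }
}
```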
{"url":"http://www.labome.org/expert/usa/university/agresti/a-agresti-221317.html","timestamp":"2014-04-20T21:06:56Z","content_type":null,"content_length":"23536","record_id":"<urn:uuid:8c337598-14af-4cf0-af0f-b8206bb966d9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
Speculations on biology, information and complexity

Given the complexity of the subject addressed in this article by Gregory Chaitin, an IBM researcher, I will leave it in English for biologists and mathematicians to consider cum grano salis. I have in mind here especially two professors with whom I have exchanged friendly and respectful emails – Dr. Décio Krause (Introdução aos fundamentos axiomáticos da ciência, São Paulo, Editora Pedagógica e Universitária, 2002), Department of Philosophy, UFSC, and J. C. M. Magalhães, Fundamentos lógicos da teoria sintética da evolução, doctoral thesis in Genetics, UFPR. Chaitin does not defend intelligent design. Several links to his articles can be found on the site of a major American university.

Speculations on biology, information and complexity
Gregory Chaitin, IBM Research, New York

Abstract: It would be nice to have a mathematical understanding of basic biological concepts and to be able to prove that life must evolve in very general circumstances. At present we are far from being able to do this. But I'll discuss some partial steps in this direction plus what I regard as a possible future line of attack.

Can Darwinian evolution be made into a mathematical theory? Is there a fundamental mathematical theory for biology?

Darwin = math ?!

In 1960 the physicist Eugene Wigner published a paper with a wonderful title, "The unreasonable effectiveness of mathematics in the natural sciences." In this paper he marveled at the miracle that pure mathematics is so often extremely useful in theoretical physics. To me this does not seem so marvelous, since mathematics and physics co-evolved. That however does not diminish the miracle that at a fundamental level Nature is ruled by simple, beautiful mathematical laws, that is, the miracle that Nature is comprehensible.

I personally am much more disturbed by another phenomenon, pointed out by I. M. Gel'fand and propagated by Vladimir Arnold in a lecture of his that is available on the web, which is the stunning contrast between the relevance of mathematics to physics, and its amazing lack of relevance to biology!

Indeed, unlike physics, biology is not ruled by simple laws. There is no equation for your spouse, or for a human society or a natural ecology. Biology is the domain of the complex. It takes 3 × 10^9 bases = 6 × 10^9 bits of information to specify the DNA that determines a human being.

Darwinian evolution has acquired the status of a dogma, but to me as a mathematician it seems woefully vague and unsatisfactory. What is evolution? What is evolving? How can we measure that? And can we prove, mathematically prove, that with high probability life must arise and evolve? In my opinion, if Darwin's theory is as simple, fundamental and basic as its adherents believe, then there ought to be an equally fundamental mathematical theory about this, that expresses these ideas with the generality, precision and degree of abstractness that we are accustomed to demand in pure mathematics.

Look around you. We are surrounded by evolving organisms, they're everywhere, and their ubiquity is a challenge to the mathematical way of thinking. Evolution is not just a story for children fascinated by dinosaurs. In my own lifetime I have seen the ease with which microbes evolve immunity to antibiotics. We may well live in a future in which people will again die of simple infections that we were once briefly able to control. Evolution seems to work remarkably well all around us, but not as a mathematical theory!
In the next section of this paper I will speculate about possible directions for modeling evolution mathematically. I do not know how to solve this difficult problem; new ideas are needed. But later in the paper I will have the pleasure of describing a minor triumph. The program-size complexity viewpoint that I will now describe to you does have some successes to its credit, even though they only take us an infinitesimal distance in the direction we must travel to fully understand evolution.

A software view of biology: Can we model evolution via evolving software?

I'd like to start by explaining my overall point of view. It is summarized here:

Life = Software ?
program >>> COMPUTER >>> OUTPUT
DNA >>> DEVELOPMENT/PREGNANCY >>> organism
(Size of program in bits) ≈ (Amount of DNA in bases) × 2

So the idea is firstly that I regard life as software, biochemical software. In particular, I focus on the digital information contained in DNA. In my opinion, DNA is essentially a programming language for building an organism and then running that organism. More precisely, my central metaphor is that DNA is a computer program, and its output is the organism.

And how can we measure the complexity of an organism? How can we measure the amount of information that is contained in DNA? Well, each of the successive bases in a DNA strand is just 2 bits of digital software, since there are four possible bases. The alphabet for computer software is 0 and 1. The alphabet of life is A, G, C, and T, standing for adenine, guanine, cytosine, and thymine. A program is just a string of bits, and the human genome is just a string of bases. So in both cases we are looking at digital information.

My basic approach is to measure the complexity of a digital object by the size in bits of the smallest program for calculating it. I think this is more or less analogous to measuring the complexity of a biological organism by 2 times the number of bases in its DNA.

Of course, this is a tremendous oversimplification. But I am only searching for a toy model of biology that is simple enough that I can prove some theorems, not for a detailed theory describing the actual biological organisms that we have here on earth. I am searching for the Platonic essence of biology; I am only interested in the actual creatures we know and love to the extent that they are clues for finding ideal Platonic forms of life.

How to go about doing this, I am not sure. But I have some suggestions. It might be interesting, I think, to attempt to discover a toy model for evolution consisting of evolving, competing, interacting programs. Each organism would consist of a single program, and we would measure its complexity in bits of software. The only problem is how to make the programs interact! This kind of model has no geometry; it leaves out the physical universe in which the organisms live. In fact, it omits bodies and retains only their DNA. This hopefully helps to make the mathematics more tractable. But at present this model has no interaction between organisms, no notion of time, no dynamics, and no reason for things to evolve. The question is how to add that to the model.

Hopeless, you may say. Perhaps not! Let's consider some other models that people have proposed. In von Neumann's original model creatures are embedded in a cellular automata world and are largely immobile. Not so good!
There is also the problem of dissecting out the individual organisms that are embedded in a toy universe, which must be done before their individual complexities can be measured. My suggestion in one of my early papers that it might be possible to use the concept of mutual information---the extent to which the complexity of two things taken together is smaller than the sum of their individual complexities---in order to accomplish this, is not, in my current opinion, particularly fruitful.

In von Neumann's original model we have the complete physics for a toy cellular automata universe. Walter Fontana's ALChemy = algorithmic chemistry project went to a slightly higher level of abstraction. It used LISP S-expressions to model biochemistry. LISP is a functional programming language in which everything---programs as well as data---is kept in identical symbolic form, namely as what are called LISP S-expressions. Such programs can easily operate on each other and produce other programs, much in the way that molecules can react and produce other molecules.

I have a feeling that both von Neumann's cellular automata world and Fontana's algorithmic chemistry are too low-level to model biological evolution. (A model with perhaps the opposite problem of being at too high a level is Douglas Lenat's AM = Automated Mathematician project, which dealt with the evolution of new mathematical concepts.) So instead I am proposing a model in which individual creatures are programs. As I said, the only problem is how to model the ecology in which these creatures compete. In other words, the problem is how to insert a dynamics into this static software. (Thomas Ray's Tierra project did in fact create an ecology with software parasites and hyperparasites. The software creatures he considered were sequences of machine language instructions coexisting in the memory of a single computer and competing for that machine's memory and execution time. Again, I feel this model was too low-level. I feel that too much micro-structure was included.)

Since I have not been able to come up with a suitable dynamics for the software model I am proposing, I must leave this as a challenge for the future and proceed to describe a few biologically relevant things that I can do by measuring the size of computer programs. Let me tell you what this viewpoint can buy us that is a tiny bit biologically relevant.

Pure mathematics has infinite complexity and is therefore like biology

Okay, program-size complexity can't help us very much with biological complexity and evolution, at least not yet. It's not much help in biology. But this viewpoint has been developed into a mathematical theory of complexity that I find beautiful and compelling---since I'm one of the people who created it---and that has important applications in another major field, namely metamathematics. I call my theory algorithmic information theory, and in it you measure the complexity of something X via the size in bits of the smallest program for calculating X, while completely ignoring the amount of effort which may be necessary to discover this program or to actually run it (time and storage space). In fact, we pay a severe price for ignoring the time a program takes to run and concentrating only on its size. We get a beautiful theory, but we can almost never be sure that we have found the smallest program for calculating something.
We can almost never determine the complexity of anything, if we choose to measure that in terms of the size of the smallest program for calculating it! This amazing fact, a modern example of the incompleteness phenomenon first discovered by Kurt Gödel in 1931, severely limits the practical utility of the concept of program-size complexity. However, from a philosophical point of view, this paradoxical limitation on what we can know is precisely the most interesting thing about algorithmic information theory, because that has profound epistemological implications.

The jewel in the crown of algorithmic information theory is the halting probability Ω, which provides a concentrated version of Alan Turing's 1936 halting problem. In 1936 Turing asked if there was a way to determine whether or not individual self-contained computer programs will eventually stop. And his answer, surprisingly enough, is that this cannot be done. Perhaps it can be done in individual cases, but Turing showed that there could be no general-purpose algorithm for doing this, one that would work for all possible programs.

The halting probability Ω is defined to be the probability that a program that is chosen at random, that is, one that is generated by coin tossing, will eventually halt. If no program ever halted, the value of Ω would be zero. If all programs were to halt, the value of Ω would be one. And since in actual fact some programs halt and some fail to halt, the value of Ω is greater than zero and less than one.

Moreover, Ω has the remarkable property that its numerical value is maximally unknowable. More precisely, let's imagine writing the value of Ω out in binary, in base-two notation. That would consist of a binary point followed by an infinite stream of bits. It turns out that these bits are irreducible, both computationally and logically:

· You need an N-bit program in order to be able to calculate the first N bits of the numerical value of Ω.
· You need N bits of axioms in order to be able to prove what are the first N bits of Ω.
· In fact, you need N bits of axioms in order to be able to determine the positions and values of any N bits of Ω, not just the first N bits.

Thus the bits of Ω are, in a sense, mathematical facts that are true for no reason, more precisely, for no reason simpler than themselves. Essentially the only way to determine the values of some of these bits is to directly add that information as a new axiom. And the only way to calculate individual bits of Ω is to separately add each bit you want to your program. The more bits you want, the larger your program must become, so the program doesn't really help you very much. You see, you can only calculate bits of Ω if you already know what these bits are, which is not terribly useful. Whereas with π = 3.1415926... we can get all the bits or all the digits from a single finite program; that's all you have to know. The algorithm for π compresses an infinite amount of information into a finite package. But with Ω there can be no compression, none at all, because there is absolutely no structure.

Furthermore, since the bits of Ω in their totality are infinitely complex, we see that pure mathematics contains infinite complexity. Each of the bits of Ω is, so to speak, a complete surprise, an individual atom of mathematical creativity.
Pure mathematics is therefore, fundamentally, much more similar to biology, the domain of the complex, than it is to physics, where there is still hope of someday finding a theory of everything, a complete set of equations for the universe that might even fit on a T-shirt. In my opinion, establishing this surprising fact has been the most important achievement of algorithmic information theory, even though it is actually a rather weak link between pure mathematics and biology. But I think it's an actual link, perhaps the first.

Computing Ω in the limit from below as a model for evolution

I should also point out that Ω provides an extremely abstract---much too abstract to be satisfying---model for evolution. Because even though Ω contains infinite complexity, it can be obtained in the limit of infinite time via a computational process. Since this extremely lengthy computational process generates something of infinite complexity, it may be regarded as an evolutionary process.

How can we do this? Well, it's actually quite simple. Even though, as I have said, Ω is maximally unknowable, there is a simple but very time-consuming way to obtain increasingly accurate lower bounds on Ω. To do this simply pick a cut-off t, and consider the finite set of all programs p up to t bits in size which halt within time t. Each such program p contributes 1/2^|p|, 1 over 2 raised to p's size in bits, to Ω. In other words,

Ω = lim_{t→∞} ∑_{|p| ≤ t, p halts within time t} 2^{−|p|}.

This may be cute, and I feel compelled to tell you about it, but I certainly do not regard this as a satisfactory model for biological evolution, since there is no apparent connection with Darwin's theory of evolution.

The classical work on a theoretical mathematical underpinning for biology is von Neumann's posthumous book [2]. (An earlier account of von Neumann's thinking on this subject was published in [1], which I read as a child.) Interestingly enough, Francis Crick---who probably contributed more than any other individual to creating modern molecular biology---for many years shared an office with Sydney Brenner, who was aware of von Neumann's thoughts on theoretical biology and self-reproduction. This interesting fact is revealed in the splendid biography of Crick [3]. For a book-length presentation of my own work on information and complexity, see [4], where there is a substantial amount of material on molecular biology. This book is summarized in my recent article [5], which however does not discuss biology. A longer overview of [4] is my Alan Turing lecture [6], which does touch on biological questions. For my complete train of thought on biology extending over nearly four decades, see also [7,8,9,10,11].

For information on Tierra, see Tom Ray's home page at http://www.his.atr.jp/~ray/.
For information on ALChemy, see
For information on Douglas Lenat's Automated Mathematician, see [12] and the Wikipedia entry
For Vladimir Arnold's provocative lecture, the one in which Wigner and Gel'fand are mentioned,
Wigner's entire paper is itself on the web

1. J. Kemeny, "Man viewed as a machine," Scientific American, April 1955, pp. 58-67.
2. J. von Neumann, Theory of Self-Reproducing Automata, University of Illinois Press, Urbana, 1967.
3. M. Ridley, Francis Crick, Eminent Lives, New York, 2006.
4. G. Chaitin, Meta Math!, Pantheon Books, New York, 2005.
5. G. Chaitin, "The limits of reason," Scientific American, March 2006, pp. 74-81. (Spanish translation published in the May 2006 issue of Investigación y Ciencia.)
6. G. Chaitin, "Epistemology as information theory: from Leibniz to Ω," European Computing and Philosophy Conference, Västerås, Sweden, June 2005.
7. G. Chaitin, "To a mathematical definition of 'life'," ACM SICACT News, January 1970, pp. 12-18.
8. G. Chaitin, "Toward a mathematical definition of 'life'," R. Levine, M. Tribus, The Maximum Entropy Formalism, MIT Press, 1979, pp. 477-498.
9. G. Chaitin, "Algorithmic information and evolution," O. Solbrig, G. Nicolis, Perspectives on Biological Complexity, IUBS Press, 1991, pp. 51-60.
10. G. Chaitin, "Complexity and biology," New Scientist, 5 October 1991, p. 52.
11. G. Chaitin, "Meta-mathematics and the foundations of mathematics," Bulletin of the European Association for Theoretical Computer Science, June 2002, pp. 167-179.
12. D. Lenat, "Automated theory formation in mathematics," pp. 833-842 in volume 2 of R. Reddy, Proceedings of the 5th International Joint Conference on Artificial Intelligence, Cambridge, MA, August 1977, William Kaufmann, 1977.
{"url":"http://pos-darwinista.blogspot.com/2006/11/especulaes-em-biologia-informao-e_13.html?pfstyle=wp","timestamp":"2014-04-17T15:27:39Z","content_type":null,"content_length":"121934","record_id":"<urn:uuid:474641b0-8ec7-4041-9ba0-76c9f0ac961c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00032-ip-10-147-4-33.ec2.internal.warc.gz"}