Basic biology and nanotechnology

November 30th 2011, 02:00 PM #1, Junior Member (Aug 2008)

I am not quite sure where this one is supposed to go, but I don't think there are any advanced math concepts in the solution of this problem, so I am posting it here...

/* --- --- */
Chromosome packing is a key aspect of eukaryotic systems.

a) The first level of chromosome packing is the nucleosome. A single nucleosome core is 11 nm in diameter and contains 147 base pairs (bp) of DNA. The linker region of a nucleosome can contain up to 80 base pairs. The DNA double helix measures 0.34 nm/bp.

i. What packing ratio (the ratio of DNA length to nucleosome core diameter) has been achieved by wrapping DNA around the histone octamer?

ii. The packing ratio of the whole nucleosome is going to be lower than that of the nucleosome core alone, due to the linker region. Assuming the packing ratio of the whole nucleosome is 2.12, determine the number of nucleotides in the linker region.
/* --- --- */

The first part is easy: the ratio equals the number of base pairs times their size, divided by the diameter of the sphere, i.e. 147 * 0.34 / 11, which is around 4.54. But the second part seems quite elusive... I think the base pairs that are not wrapped around a nucleosome core are not packed at all, so their packing ratio is 1. But I am completely lost after that...

November 30th 2011, 05:24 PM #2, Re: Basic biology and nanotechnology

This is not a math question. Place your interesting question in a biology forum.

December 1st 2011, 02:10 AM #3, Junior Member (Aug 2008), Re: Basic biology and nanotechnology

It is a math problem, with relevance to reality. To make it sound more math-like, you can replace "nucleosome core" with "sphere" and "base pairs" with "circles". When I said I was not sure where to place the problem, I was not wondering which site to post it to, only where on this site to post it, because I am not quite sure which math category this falls under. But it definitely is mathematics...
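One way to make progress on part (ii), following the poster's own suggestion (this is my sketch, not something the thread settles): treat the 147 bp core as packed into the 11 nm sphere, and the L linker base pairs as completely unpacked (packing ratio 1), then solve the resulting linear equation for L.

```python
# Sketch of part (ii), under the model suggested in the post: the 147 bp
# core packs into an 11 nm sphere, while the L linker base pairs remain
# unpacked (packing ratio 1, i.e. they contribute length L * 0.34 nm).
BP_NM = 0.34    # nm of DNA per base pair
CORE_BP = 147   # base pairs wrapped in the core
CORE_D = 11.0   # nucleosome core diameter, nm
RATIO = 2.12    # given packing ratio of the whole nucleosome

# (CORE_BP + L) * BP_NM / (CORE_D + L * BP_NM) = RATIO, solved for L:
L = (CORE_BP * BP_NM - RATIO * CORE_D) / ((RATIO - 1) * BP_NM)
```

Under these assumptions L comes out to roughly 70 bp, which is consistent with the stated maximum of 80 bp for the linker.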
Source: http://mathhelpforum.com/algebra/193107-basic-biology-nanotechnology.html (retrieved 2014-04-16)
MathGroup Archive: October 2002 [00088]

Re: RE: final results: creating adjacency matrices

• To: mathgroup at smc.vnet.net
• Subject: [mg36968] Re: [mg36934] RE: [mg36584] final results: creating adjacency matrices
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Thu, 3 Oct 2002 00:17:35 -0400 (EDT)
• References: <200210020732.DAA20749@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com

"Moliterno, Thomas" wrote:
> First, thanks to all, and in particular Bobby Treat, for your help with
> this question. The best solution was as follows:
>
> lst = ReadList["c:\\data.txt", {Number, Number}]
>
> adjacenceMatrix[x:{{_, _}..}] := Module[{actors, events},
>   {actors, events} = Union /@ Transpose[x];
>   Array[If[MemberQ[x, {actors[[#1]], events[[#2]]}], 1, 0] &,
>     {Length[actors], Length[events]}]]
>
> a = adjacenceMatrix[lst];
> b = a . Transpose[a];
> c = b (1 - IdentityMatrix[Length[b]])
>
> c is the desired symmetric matrix with off-diagonal values >= 0,
> indicating the number of times two actors participate in the same event.
> The diagonal is set to 0.
>
> A few items in response to Bobby's message, below. While c is, in fact,
> a huge matrix with lots of cells equal to zero, that is exactly how we
> need it structured for our analysis and research question (not relevant
> to the list, but I'd be happy to discuss off list). Processing time is
> actually not too bad! I'm running a PIII 900 with 512 SDRAM, and the
> code ran a 177 x 3669 matrix in under 90 seconds. MatrixForm[c]
> presented no problems in viewing in the front end, but then it's only
> 177 x 177.
>
> Thanks again to all,
> Tom
>
> Thomas P. Moliterno
> Graduate School of Management
> University of California, Irvine
> tmoliter at uci.edu
> [...]

There are several ways to go about this, and which is best will vary based on the relative number of events vs. number of actors. Below I show three variations. The first is a minor recoding of the one above. The second iterates over all pairs of actors. The third looks at all events for common actors. I then show three examples. The first two methods have the advantage that they do not require that events be positive integers. With some extra work the third method could also get around this restriction.

toAdjacency0[data:{{_, _}..}] := Module[
  {actors, events, mat1, mat2},
  {actors, events} = Union /@ Transpose[data];
  mat1 = Array[If[MemberQ[data, {actors[[#1]], events[[#2]]}], 1, 0] &,
    {Length[actors], Length[events]}];
  mat2 = mat1 . Transpose[mat1];
  mat2*(1 - IdentityMatrix[Length[mat2]])]

toAdjacency1[origdata_] := Module[
  {data = Union[origdata], mat},
  data = Map[Last, Split[data, #1[[1]] === #2[[1]] &], {2}];
  mat = Table[If[j > k, Length[Intersection[data[[j]], data[[k]]]], 0],
    {j, Length[data]}, {k, Length[data]}];
  mat + Transpose[mat]]

toAdjacency2[origdata_] := Module[
  {data = Sort[Map[Reverse, Union[origdata]]], mat, dim, len, event},
  data = Map[Last, Split[data, #1[[1]] === #2[[1]] &], {2}];
  dim = Length[Union[Flatten[data]]];
  len = Length[data];
  mat = Table[0, {dim}, {dim}];
  Do[
    event = data[[j]];
    Do[
      Do[
        mat[[event[[m]], event[[k]]]] += 1;
        mat[[event[[k]], event[[m]]]] += 1,
        {k, m + 1, Length[event]}],
      {m, Length[event] - 1}],
    {j, len}];
  mat]

data1 = Table[{Random[Integer, {1, 1000}], Random[Integer, 50]}, {10000}];
data2 = Table[{Random[Integer, {1, 1000}], Random[Integer, 100]}, {10000}];
data3 = Table[{Random[Integer, {1, 1000}], Random[Integer, 200]}, {10000}];

Timings are on a 1.5 GHz machine running the Mathematica 4.2 kernel under Linux.
In[107]:= Timing[m0 = toAdjacency0[data1];]
Out[107]= {5.44 Second, Null}

In[108]:= Timing[m1 = toAdjacency1[data1];]
Out[108]= {10.5 Second, Null}

In[109]:= Timing[m2 = toAdjacency2[data1];]
Out[109]= {16.24 Second, Null}

In[110]:= m0 === m1 === m2
Out[110]= True

Note that for this example the result is not terribly sparse (less than 20% zeros):

In[112]:= Count[Flatten[m0], 0]
Out[112]= 191374

In[115]:= Timing[m0 = toAdjacency0[data2];]
Out[115]= {11.51 Second, Null}

In[116]:= Timing[m1 = toAdjacency1[data2];]
Out[116]= {10.92 Second, Null}

In[117]:= Timing[m2 = toAdjacency2[data2];]
Out[117]= {9.07 Second, Null}

Curiously, this was the first example I tried, and all three methods perform about the same in this case. The result, not surprisingly, is more sparse (40%), because we have the same number of actors and pairs as previously, but now with more events to spread out over the pairs.

In[118]:= Count[Flatten[m0], 0]
Out[118]= 403232

When we get sparser still, the third method begins to dominate and the first is relatively slower.

In[119]:= Timing[m0 = toAdjacency0[data3];]
Out[119]= {22.73 Second, Null}

In[120]:= Timing[m1 = toAdjacency1[data3];]
Out[120]= {10.88 Second, Null}

In[121]:= Timing[m2 = toAdjacency2[data3];]
Out[121]= {4.96 Second, Null}

Now sparsity is over 60%.

In[122]:= Count[Flatten[m0], 0]
Out[122]= 624350

The relative speed of this last method, in this instance, derives from the fact that individual event lists are on average half the size of the previous case. Hence the main loop is expected to improve on average by a factor of 2 (you get a factor of 4 for iterating over all pairs in each event, but lose a factor of 2 because there are twice as many event lists). My guess is that a preprocessor that assesses the number of actors vs. the number of events would be the best way to choose between the first and third methods (which, inexplicably, are labelled as methods 0 and 2). It is not clear to me whether the middle approach will ever dominate.
I have not given much thought to concocting examples where it would, because offhand I suspect they would be pathological, as in dense and with large intersections. As a last remark, I'll note that these might run significantly faster if coded with Compile. Whether that is viable depends on the form of the data. In the above example, where everything is a machine integer, that approach would certainly work.

Daniel Lichtblau
Wolfram Research
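For readers without Mathematica, the event-iteration idea (the third method above) translates readily to plain Python; all names below are my own, hypothetical choices, not identifiers from the original post. It produces the same co-membership matrix as the incidence-matrix method: each event contributes 1 to every pair of its member actors, and the diagonal stays zero.

```python
# Hypothetical Python translation of the "iterate over events" method;
# produces the same matrix as b = a . Transpose[a] with zeroed diagonal.
def co_membership(pairs):
    """pairs: iterable of (actor, event). Returns (actors, matrix) where
    matrix[i][j] counts the events shared by actors[i] and actors[j],
    with the diagonal set to 0, as in the post."""
    pairs = set(pairs)                       # Union[...]: drop duplicates
    actors = sorted({a for a, _ in pairs})
    by_event = {}
    for a, e in pairs:                       # event -> set of member actors
        by_event.setdefault(e, set()).add(a)
    idx = {a: i for i, a in enumerate(actors)}
    n = len(actors)
    mat = [[0] * n for _ in range(n)]
    for members in by_event.values():
        ms = sorted(members)
        for i, a in enumerate(ms):
            for b in ms[i + 1:]:             # each shared event adds 1
                mat[idx[a]][idx[b]] += 1
                mat[idx[b]][idx[a]] += 1
    return actors, mat
```

As in the Mathematica version, the work is proportional to the sum of squared event sizes, which is why it wins when events are small and the matrix is sparse.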
Source: http://forums.wolfram.com/mathgroup/archive/2002/Oct/msg00088.html (retrieved 2014-04-19)
Express a Function in Terms of Another Function

Date: 09/18/97 at 13:29:20
From: Migdalia Anes
Subject: Expressing a function in terms of another function

The altitude perpendicular to the hypotenuse of a right triangle is 12 cm. Express the length of the hypotenuse in terms of the perimeter.

I tried lots of things, but they're wrong. I really don't know how to get it!

Date: 09/24/97 at 17:08:08
From: Doctor Rob
Subject: Re: Expressing a function in terms of another function

Draw a diagram of a right triangle with hypotenuse h, perimeter p, one leg x, and the other leg p-h-x, with the altitude of length 12 drawn from the right angle to the hypotenuse:

            /|
           / |
          /  |
         /   |
      h /    | p-h-x
       /     |
      /      |
     /`-.12  |
    /    `-. |

Applying the Pythagorean Theorem to the two small triangles gives the two parts of the hypotenuse as Sqrt[x^2-144] and Sqrt[(p-h-x)^2-144]. Their sum must be h. The Pythagorean Theorem on the large triangle gives another equation. Together we have:

   x^2 + (p-h-x)^2 = h^2,
   h = Sqrt[x^2-144] + Sqrt[(p-h-x)^2-144].

To remove the radicals from the second equation, rewrite it as

   h - Sqrt[x^2-144] = Sqrt[(p-h-x)^2-144],

and square both sides:

   h^2 - 2*h*Sqrt[x^2-144] + x^2 - 144 = (p-h-x)^2 - 144.

Add 144 to both sides, and replace (p-h-x)^2 by h^2 - x^2 from the first equation:

   h^2 - 2*h*Sqrt[x^2-144] + x^2 = h^2 - x^2.

Isolating the radical on the right side, you get

   x^2 = h*Sqrt[x^2-144].

Now square both sides again, and the radicals disappear:

   x^4 = h^2*(x^2-144).

Combine this with the first equation, and you get the system

   x^4 - h^2*x^2 + 144*h^2 = 0,
   x^2 + (p-h-x)^2 - h^2 = 0.

You want to eliminate x from these equations, to get one equation involving h and p alone. Expand the second equation:

   x^2 + p^2 + h^2 + x^2 - 2*h*p - 2*p*x + 2*h*x - h^2 = 0,
   2*x^2 + (2*h-2*p)*x + (p^2-2*h*p) = 0.

Find the quotient Q of the quartic polynomial divided by the quadratic one as polynomials in x. Subtract Q times the quadratic one from the quartic one. That will leave, fortuitously, a quadratic equation in h. Factor it, and set each factor linear in h equal to zero.
These can be solved for h to give the desired expressions. Of course you have to check your answer, since squaring twice could have introduced spurious solutions! Actually, you can use the known fact that h < p/2 to throw out one of the solutions. Once you find h in terms of p, there will be two values of x for each value of p, because we didn't specify if x was the longer leg or the shorter one. Thus x will be the root of a quadratic equation, namely the last one given above, with coefficients functions of p. -Doctor Rob, The Math Forum Check out our web site! http://mathforum.org/dr.math/
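As a sanity check on where the elimination lands (this shortcut is mine, not Doctor Rob's route): writing y = p-h-x, combining x + y = p - h, x^2 + y^2 = h^2, and the area relation x*y = 12*h gives (p-h)^2 - 24h = h^2, i.e. h = p^2/(2(p+12)). A quick check against the 15-20-25 right triangle, whose altitude to the hypotenuse is exactly 15*20/25 = 12:

```python
from math import isclose

# Claimed closed form for the hypotenuse in terms of the perimeter,
# derived from (p - h)^2 - 24*h = h^2 (my derivation, stated above).
def hyp_from_perimeter(p):
    return p**2 / (2 * (p + 12))

# 15-20-25 right triangle: altitude to hypotenuse is 12, perimeter 60.
p = 15 + 20 + 25
h = hyp_from_perimeter(p)   # should give the true hypotenuse, 25
```

With p = 60 this returns h = 3600/144 = 25, matching the known triangle, and the quadratic 2x^2 + (2h-2p)x + (p^2-2hp) = 0 then has roots x = 15 and x = 20, the two legs.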
Source: http://mathforum.org/library/drmath/view/54555.html (retrieved 2014-04-19)
IMLOG10 function

Returns the common logarithm (base 10) of a complex number in x + yi or x + yj text format.

Inumber is the complex number for which you want the common logarithm.

● Use COMPLEX to convert real and imaginary coefficients into a complex number.
● The common logarithm of a complex number can be calculated from the natural logarithm as follows: log10(x + yi) = (log10 e) ln(x + yi).

The example may be easier to understand if you copy it to a blank worksheet.

● Create a blank workbook or worksheet.
● Select the example in the Help topic. Note: Do not select the row or column headers.
● Press CTRL+C.
● In the worksheet, select cell A1, and press CTRL+V.
● To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas button.

Formula: =IMLOG10("3+4i")
Description (Result): Logarithm (base 10) of 3+4i (0.69897 + 0.402719i)
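The same computation can be reproduced outside Excel; for instance, Python's `cmath.log` accepts an optional base, which is equivalent to scaling the natural logarithm by log10(e):

```python
import cmath

# Base-10 logarithm of a complex number, as IMLOG10 computes it.
z = 3 + 4j
w = cmath.log(z, 10)
# Real part is log10(|z|) = log10(5); imaginary part is
# atan2(4, 3) * log10(e), matching the worksheet result
# 0.69897 + 0.402719i.
```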
Source: http://office.microsoft.com/en-us/excel-help/imlog10-function-HP010062331.aspx (retrieved 2014-04-16)
(NBI) A problem I'm looking forward to

One of the prompts for week 3 of the new blogger initiation is to show a math problem that we particularly like. In preparation for all of my new classes this summer, I spent quite a bit of time working through all the homework problems. I came across one at the end of chapter 2 in my precalculus book that I am really excited about. It is at the end of a unit on polynomial functions. A large portion of the unit is spent on characteristics of the graphs of polynomials and curve sketching. The instruction focuses on key things like the sign of the leading coefficient, the degree of the polynomial, and symmetry based on whether the function is even or odd.

List at least three reasons that the graph shown is not the graph of $f(x)={-4x^3-3x^2+5x+2}$.

I really like this problem for a few reasons. I think it does a good job of synthesizing all of the material covered in the unit. A student who answers it correctly shows that they understand the concepts covered in the unit. It also gets students writing instead of just "solving math problems," which is a point of emphasis for our administrator this year. I'm looking forward to seeing what my students do with this problem when the time comes. I'll let you know when we get there! What do you think?

Until next time . . .
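The referenced graph did not survive here, but the kinds of features students would check against it can be stated numerically; this list is my reading of the unit's emphasis, not the blog's answer key:

```python
# Features of f any candidate graph must match: negative leading
# coefficient and odd degree force f -> -inf as x -> +inf and
# f -> +inf as x -> -inf, and the y-intercept is f(0) = 2.
def f(x):
    return -4*x**3 - 3*x**2 + 5*x + 2

y_intercept = f(0)   # graph must pass through (0, 2)
right_end = f(10)    # large negative: falls to the right
left_end = f(-10)    # large positive: rises to the left
```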
Source: http://reflectionsandtransformations.wordpress.com/2012/09/04/nbi-a-problem-im-looking-forward-to/ (retrieved 2014-04-19)
What is the improper integral from x=0 to x=+infinity of dx/(x^4+1)? - Homework Help - eNotes.com

We want to evaluate

`int^(oo)_0 (dx)/(x^4+1)`

We can use partial fractions, with the trick of factoring `x^4+1=(x^2-sqrt(2)x+1)(x^2+sqrt(2)x+1)`.

Now we can use partial fractions to write

`1/((x^2-sqrt(2)x+1)(x^2+sqrt(2)x+1)) = (Ax+B)/(x^2-sqrt(2)x+1)+(Cx+D)/(x^2+sqrt(2)x+1)`

I assume you can do the partial fractions and get `A=-sqrt(2)/4`, `C=sqrt(2)/4` and `B=D=1/2`.

We also have to complete the square on the bottom:

`x^2-sqrt(2)x+1/2+1/2=(x-sqrt(2)/2)^2+1/2=1/2(2(x-sqrt(2)/2)^2+1)`

and similarly `x^2+sqrt(2)x+1=1/2(2(x+sqrt(2)/2)^2+1)`.

Then we need to integrate the resulting arctangent pieces (from the constant parts of the numerators over `2(x-sqrt(2)/2)^2+1` and `2(x+sqrt(2)/2)^2+1`) and the two logarithmic pieces (from the `Ax` and `Cx` parts). When you do this we get

`int 1/(x^4+1)dx=sqrt(2)/4(arctan(sqrt(2)x+1)+arctan(sqrt(2)x-1)+1/2ln((x^2+sqrt(2)x+1)/(x^2-sqrt(2)x+1)))+C`

Evaluating this at x=0 we get 0. For the upper limit,

`lim_(x->oo) arctan(sqrt(2)x+1)=pi/2`

`lim_(x->oo) arctan(sqrt(2)x-1)=pi/2`

and `lim_(x->oo) ln((x^2+sqrt(2)x+1)/(x^2-sqrt(2)x+1))=0`

Therefore

`int^(oo)_0 1/(x^4+1)dx = sqrt(2)/4(pi/2+pi/2+1/2*0) = (sqrt(2)pi)/4`
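As a numerical sanity check (mine, not part of the original answer), the antiderivative above can be evaluated at 0 and at a large x and compared with the claimed value sqrt(2)*pi/4:

```python
from math import atan, log, sqrt, pi

# Antiderivative F(x) of 1/(x^4+1) from the worked solution.
def F(x):
    r2 = sqrt(2)
    return (r2/4) * (atan(r2*x + 1) + atan(r2*x - 1)
                     + 0.5*log((x*x + r2*x + 1) / (x*x - r2*x + 1)))

# F(big) - F(0) should approach the claimed value sqrt(2)*pi/4.
approx = F(1e8) - F(0)
exact = sqrt(2) * pi / 4
```

The two agree to well within floating-point tolerance, which is a useful guard against sign slips in the partial fractions.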
Source: http://www.enotes.com/homework-help/what-integral-improper-x-0-x-infintty-y-dx-x-4-1-306446 (retrieved 2014-04-20)
Math Help

March 24th 2009, 12:37 AM #1 (Mar 2009)

ICICI Bank offers a 1-year loan to a company at an interest rate of 20 percent payable at maturity, while Citibank offers one on a discount basis at a 19% interest rate for the same period. How much should ICICI Bank decrease/increase its interest rate to match the effective interest rate of Citibank?

1. Increase by 3.5%
2. Decrease by 1.8%
3. Increase by 1%
4. Decrease by 1.4%

March 24th 2009, 03:13 AM #2 (MHF Contributor, Apr 2005)

A "discount note" subtracts the interest from the original amount borrowed; then at the end you pay back the amount "borrowed". So, for example, if you wanted to wind up with $1000 in hand from Citibank at 19% interest (how old is this problem?), how much would you have to "borrow"? Well, if you borrowed A dollars for one year, you would have to pay 0.19A in interest, so you would actually receive A - 0.19A = 0.81A = 1000. You would have to "borrow" A = 1000/0.81, about 1235 dollars, and the interest would be about $235. If, instead, you borrowed $1000 from ICICI Bank at rate r, you would have to pay 1000r = 235 to match that, or r = 235/1000 = 0.235, i.e. 23.5%. ICICI Bank would have to increase its rate by 3.5% in order to match Citibank's rate.

March 24th 2009, 07:29 PM #3 (Mar 2009)

I really appreciate you taking the time to explain how it works. However, is there an easier way to help me understand? I am actually not very good at math, and this question was asked in an entrance test for an MBA (Master in Business Administration) at a management institute. The answer you gave matches the solved answer sheet for the entrance exam, but I am still not very clear on how you reached that conclusion. Is this question supposed to be related to percent increase/decrease and discounts?
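The calculation in reply #2 boils down to one line: on a discount loan the rate d is charged on the face value but you only get to use the fraction 1 - d of it, so the effective rate is d/(1 - d). A minimal sketch:

```python
# Effective cost of a discount loan: 19% of the face value F is withheld
# up front, so the borrower pays 0.19*F to use 0.81*F for one year.
d = 0.19
effective = d / (1 - d)        # about 0.2346, i.e. roughly 23.5%
adjustment = effective - 0.20  # ICICI's 20% rate must rise by about 3.5%
```

This reproduces answer choice 1 (increase by 3.5%) without tracking dollar amounts.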
Source: http://mathhelpforum.com/business-math/80335-interest-rate.html (retrieved 2014-04-17)
libMesh: A C++ Finite Element Library

I have a user trying to configure libMesh on a Linux cluster, but he keeps running into problems with TBB. He's not alone, as I have also had a few other reports from people who still receive errors even after adding the --disable-tbb flag to the configure line. Are we being overly aggressive when snooping for TBB? I haven't really dived into this much yet, but I just wanted to see if anyone is aware of a quick fix. The compile errors are attached at the bottom of this thread.

---------- Forwarded message ----------
From: Capps, Nathan Allen <ncapps@...>
Date: Wed, Feb 27, 2013 at 8:30 AM
Subject: RE: Compiling problems
To: "Permann, Cody J" <cody.permann@...>
Cc: "marie.backman@..." <marie.backman@...>

I disabled tbb as you suggested. It seems to produce a similar error.

*From:* Permann, Cody J [cody.permann@...]
*Sent:* Wednesday, February 27, 2013 10:17 AM
*To:* Capps, Nathan Allen
*Subject:* Re: Compiling problems

If you take a look at the "build_libmesh_moose.sh" script, it essentially calls libMesh's configure with a couple of switches we need to build MOOSE. You can simply add more flags to that script, in this case "--disable-tbb". You can always run "./configure --help" in libmesh to see what flags and options are available. It's possible you might have to turn off cppthreads too, depending on how your system is set up.

On Tue, Feb 26, 2013 at 5:29 PM, Capps, Nathan Allen <ncapps@...> wrote:
> Thank you for your input. How do I turn off packages which are not
> needed, and how do I turn off TBB?
> Nathan
> Sent from my iPhone
> On Feb 26, 2013, at 6:28 PM, "Permann, Cody J" <cody.permann@...>
> wrote:
> Well - all of your problems are pointing to an issue with TBB (Intel's
> Threaded Building Blocks). It looks like libMesh is detecting it but
> something isn't quite right with that package. You might want to just
> disable it (--disable-tbb).
Yeah - you won't be able to run threaded but > that's probably not much of a concern until you are running jobs on several > thousand processors. > In general, there are a lot of packages that libMesh looks for and > configures if they are on the system. You don't need them all so you can > either try to figure out why it's not working or you can just turn it off. > BTW - if you want to send output (which is useful) just send it as plain > text :) > Cody > On Tue, Feb 26, 2013 at 4:10 PM, Capps, Nathan Allen <ncapps@...>wrote: >> Cody, >> I apologize for this being a week later. Here is the output when >> compiling lib mesh. I sent you the whole output in hopes to essentially >> help you help me lol. Also I have cc a Brian's new post doc. She is going >> to be using the MOOSE system and is helping me get MOOSE compiled on are >> local cluster. If there is information you need, just let me know. >> Nathan >> ------------------------------ >> *From:* Permann, Cody J [cody.permann@...] >> *Sent:* Wednesday, February 20, 2013 3:58 PM >> *To:* Capps, Nathan Allen >> *Subject:* Re: Compiling problems >> Are you using our libmesh build script? Is there more to this >> message? It tells me which package is broken but not why. >> Cody >> On Wed, Feb 20, 2013 at 1:35 PM, Capps, Nathan Allen <ncapps@...>wrote: >>> Cody, >>> I have been trying to compile the Moose system on my local cluster, >>> and I have been running into a few errors in libmesh listed below. Also I >>> have the petcs-3.3-p4. Do you have any clues into what the problem might be? >>> collect2: ld returned 1 exit status >>> make[1]: *** [getpot_parse-opt] Error 1 >>> make[1]: Leaving directory `/home/ncapps/projects/trunk/libmesh/build' >>> make: *** [install-recursive] Error 1 >>> Nathan
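For reference, the workaround discussed in this thread amounts to passing extra switches to libMesh's configure (either directly or via the build_libmesh_moose.sh script mentioned above). The --disable-tbb spelling is taken from the thread; --disable-cppthreads is my guess by analogy for "turning off cppthreads", so confirm both against ./configure --help on your version:

```shell
# Check which threading-related switches this libMesh version knows about,
# then re-run configure with the TBB probe (and, if needed, the C++
# threading model) disabled before rebuilding.
./configure --help | grep -i -e tbb -e threads
./configure --disable-tbb --disable-cppthreads
make && make install
```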
Source: http://sourceforge.net/p/libmesh/mailman/libmesh-users/?viewmonth=201302 (retrieved 2014-04-21)
Is the Riemann Hypothesis equivalent to a $\Pi_1$ sentence?

1) Can the Riemann Hypothesis (RH) be expressed as a $\Pi_1$ sentence?

More formally,

2) Is there a $\Pi_1$ sentence which is provably equivalent to RH in PA?

(This is mentioned in P. Odifreddi, "Kreiseliana: about and around George Kreisel", A K Peters, 1996, on page 257. Feferman mentions that when Kreisel was trying to "unwind" the non-constructive proof of Littlewood's theorem, he needed to deal with RH. Littlewood's proof considers two cases: there is a proof if RH is true and there is another one if RH is false. But it seems that in the end, Kreisel used a $\Pi_1$ sentence weaker than RH which was sufficient for his purpose.)

So we have two proofs that RH is equivalent to a $\Pi_1$ sentence. The first is DMR 1974: http://books.google.ca/books?id=4lT3M6F745sC&pg=PA335

$$\forall n >0 \ . \ \left(\sum_{k \leq \delta(n)}\frac{1}{k} - \frac{n^2}{2} \right)^2 < 36 n^3 $$

The second is by J. Lagarias: http://www.math.lsa.umich.edu/~lagarias/doc/elementaryrh.ps

$$\forall n>60 \ .\ \sigma(n) < \exp(H_n)\log(H_n)$$

But both use theorems from the literature that make it difficult to judge whether they can be formalized in PA. The reason I mentioned PA is that, for Kreisel's purpose, the proof should be formalized in a reasonably weak theory. So a new question would be:

3) Can these two proofs of "RH is equivalent to a $\Pi_1$ sentence" be formalized in PA?

EDIT: Why is this interesting? Here I will try to explain why this question was interesting from Kreisel's viewpoint only. Kreisel was trying to extract an upper bound out of the non-constructive proof of Littlewood. His "unwinding" method works for theorems like Littlewood's theorem if they are proven in a suitable theory. The problem with this proof was that it was actually two proofs:

1. If the RH is false then the theorem holds.
2. If the RH is true then the theorem holds.
If I remember correctly, the first one already gives an upper bound, but the second one does not. Kreisel argues that the second part can be formalized in an arithmetic theory (similar to PA) and that his method can extract a bound out of it, assuming that RH is provably equivalent to a $\Pi_1$ sentence. (Generally, adding $\Pi_1$ sentences does not allow you to prove the existence of more functions.) This is the part where he needs to replace the usual statement of RH with a $\Pi_1$ statement. It seems that in the end, in place of proving that RH is $\Pi_1$, he shows that a weaker $\Pi_1$ statement suffices to carry out the second part of the proof, i.e. he avoids the problem in this case.

A simple application of proving that RH is equivalent to a $\Pi_1$ sentence in PA is the following: if we prove a theorem in PA+RH (even when the proof seems completely non-constructive), then we can extract an upper bound for the theorem out of the proof. Note that for this purpose, we don't need to know whether RH is true or false.

Note: Feferman's article mentioned above contains more details and reflections on "Kreisel's Program" of "unwinding" classical proofs to extract constructive bounds. My own interest was mainly out of curiosity. I read in Feferman's paper that Kreisel mentioned this problem and then avoided it, so I wanted to know if anyone has dealt with it.

Tags: lo.logic, nt.number-theory

How do you formulate RH in PA? For a Pi_1 statement equivalent to RH in ZFC, see this paper by Jeff Lagarias, An elementary problem equivalent to the Riemann hypothesis, Amer. Math. Monthly, 109 (2002), 534-543. math.lsa.umich.edu/~lagarias/zeta.html – François G. Dorais♦ Jul 14 '10 at 13:01

Feferman says that Kreisel is not clear about this.
Kreisel claims that proofs in large parts of the theory of functions of a complex variable can be presented in an arithmetical system he calls $Z_\mu$, and then moves to constructive approximations to zeros of analytic functions. So it seems to me that he is using the computable functions to approximate the roots of the zeta function to express RH. But again, there are no details on how he formalizes these in the language of arithmetic. – Kaveh Jul 14 '10 at 13:31

Thank you for the link. I haven't finished reading it, but it seems to me that problem E in the paper is easily expressible in the language of PA. It would be nice if the proof is also formalizable in PA (or a conservative extension of it). – Kaveh Jul 14 '10 at 13:40

(I'm posting my comment as a partial answer.) – François G. Dorais♦ Jul 14 '10 at 15:42

François G. Dorais and Andres Caicedo have both given partial answers to my question. I think checking that the proofs can be formalized in PA may need more serious work. I would prefer to accept both of the answers, but there is only one accepted answer, so I guess it is more appropriate to accept the first one. Thank you. – Kaveh Jul 15 '10 at 12:26

4 Answers

I don't know the best way to express RH inside PA, but the following inequality
$$\sum_{d \mid n} d \leq H_n + \exp(H_n)\log(H_n),$$
where $H_n = 1+1/2+\cdots+1/n$ is the $n$-th harmonic number, is known to be equivalent to RH. [J. Lagarias, An elementary problem equivalent to the Riemann hypothesis, Amer. Math. Monthly, 109 (2002), 534-543.] The same paper mentions another inequality of Robin,
$$\sum_{d \mid n} d \leq e^\gamma n \log\log n \qquad (n \geq 5041),$$
where $e^\gamma = 1.7810724\ldots$, which is also equivalent to RH. Despite the appearance of $\exp,$ $\log$ and $e^\gamma$, it is a routine matter to express these inequalities as $\Pi^0_1$ statements. (Indeed, the details in Lagarias's paper make this even simpler than one would originally think.)

Oh, yes, that's a nice article. For some reason I thought it was older. – Andres Caicedo Jul 14 '10 at 15:59

I think that a reasonable way to state RH in the first-order language of arithmetic is $|\pi(x)-Li(x)| = O(\sqrt{x} \ln x)$. It's not too much of a stretch to say that this estimate is "why we care" about RH, so I think it's just as good as (if not better than!) the more familiar statement about the zeros of $\zeta(s)$. – Timothy Chow Jul 15 '10 at 4:33

Lagarias says that for your first equation, the inequality needs to be strict for n >= 1. And the paper also uses a strict inequality for the second equation. Once the inequalities are strict, it is no longer a routine matter to express them as $\Pi^0_1$. – Russell O'Connor Jul 15 '10 at 8:54

It seems to me that it should not be difficult, as we can use the error estimates for the approximations of $\exp$ and $\log$ to solve the problem of strictness you mentioned. – Kaveh Jul 15 '10 at 12:00

@Russell O'Connor: Actually the rhs not being an integer would suffice, since the lhs is an integer, but I see your point: strict inequality is a $\Sigma_1$ (r.e.) relation over the real numbers.

I checked the proof again, and it seems that the theorem mentioning the strict inequality for $1<n$ is just an extra, since if I am not making a mistake (again), they show that RH implies the strict form, but to prove RH it suffices to use the weak inequality version. – Kaveh Jul 15 '10 at 13:16

Yes. This is a consequence of the Davis-Matiyasevich-Putnam-Robinson work on Hilbert's 10th problem, and some standard number theory. A number of papers have details of the $\Pi^0_1$ sentence. To begin with, take a look at the relevant paper in Mathematical developments arising from Hilbert's problems (Proc. Sympos. Pure Math., Northern Illinois Univ., De Kalb, Ill., 1974), Amer.
Math. Soc., Providence, R. I., 1976. Thanks for the reference. Lagarias' paper also notes that this paper by M. Davis, Y. Matijasevic, and Julia Robinson (DMR 1974) give an elementary ($\Pi_1$) equivalent of the RH. Kreisel's outlines of a direct proof is also mentioned in DMR 1974, p. 334. They give an alternative indirect proof that the RH is equivalent to a $\Pi_1$ formula. Unfortunately, their proof is also using theorems from literature that (like Lagarias' paper) makes it unclear if the equivalence is provable in PA. The list of theorems they are using is on page 335. – Kaveh Jul 15 '10 at 11:55 Here is a link to page 335 of the book: books.google.ca/books?id=4lT3M6F745sC&pg=PA335 – Kaveh Jul 15 '10 at 12:05 add comment One can write a program that, given enough time, will eventually detect the presence of zeros off the critical line if any exist, by computing contour integrals of $\zeta' (s)/ \zeta(s)$ on a sequence of small squares (with rational vertices) exhausting increasingly fine finite grids that cover more and more of the critical strip to greater and greater height. From the formulae for analytic continuation of $\zeta (s) $ one can extract effective moduli of uniform continuity and from that one can approximate the integral by dividing each side of the square into some large number of equal pieces, approximating the function at those rational points, and calculating the Riemann sum. The necessary accuracy can be determined from the modulus of continuity and formulas for $\zeta$. (The grids I have in mind come within $1/n$ of the sides of the critical strip, with height going from $0$ to $n$, and are divided into squares of size $1/n^2$, so eventually any zero will be isolated inside one such square.) EDIT: to express RH in Peano Arithmetic, there are two ways. up vote One is to use Matiyasevich (sp?) theorem that for any halting problem one can construct a Diophantine equation whose solvability is equivalent to halting. 
Or, in the same vein, use the Matiyasevich/Robinson approach to Diophantine-encode an elementary inequality equivalent to RH, as was done in the Matiyasevich-Davis-Robinson paper on Hilbert's 10th Problem: Positive Aspects of a Negative Solution. Another way is to express enough complex analysis in Peano Arithmetic to carry out the contour integral argument above, which can be done because ultimately everything involves formulas and estimates that can be made sufficiently explicit. How to do this is explained in Gaisi Takeuti's essay Two Applications of Logic to Mathematics.

EDIT-2: re: verifications of RH, the ZetaGrid distributed computation checked that at least the first 100 billion ($10^{11}$) zeros, ordered by imaginary part, are on the critical line. The zero computations are opposite to the $\Pi_1$ approach: instead of falsifying RH if it's wrong, if run for unlimited time they would validate RH as far as the program can reach, but could get stuck if there are double zeros anywhere. The algorithms assume RH and whatever other conjectures are useful for finding zeros, such as the absence of multiple roots, or GUE spacings between zeros. Every time they locate another zero, a contour integral then verifies that there are no other zeros up to that height, and RH continues to hold. But if there is a double zero the program could get stuck in an endless attempt to show that it's a single zero. Single zeros off the line would be detected by most algorithms, but not necessarily localized: once you know one is there you can take a big gulp and run a separate program to find it precisely. (Concerning the philosophical interest of the $\Pi_1$ formulation of RH, see also the comments under the question.)

Andres Caicedo's answer is the correct one, but my comment is too big to fit in a comment box.
Here is a Haskell program that exhibits the Riemann Hypothesis:

    rh :: Integer -> Bool
    rh n = (h - n'^2/2)^2 < 36*n'^3
      where
        n' = toRational n
        h = sum [1/toRational k | k <- [1..d]]
        d = product [product [e j | j <- [2..m]] | m <- [2..n-1]]
        e x = foldr gcd 0 [a | a <- [2..x], x `mod` a == 0]

The Riemann Hypothesis is equivalent to saying that the program rh returns True on all positive inputs. This equivalence is, of course, mathematical equivalence and not logical equivalence. Once we prove or disprove the Riemann Hypothesis it will be known to be mathematically equivalent to a $\Delta^0_0$ statement.
+1 for giving an actual program – Daniel Miller Jul 11 '13 at 2:07

Not the answer you're looking for? Browse other questions tagged lo.logic nt.number-theory or ask your own question.
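The Lagarias criterion discussed in the comments above has exactly this machine-checkable, $\Pi^0_1$ flavor. As a hedged sketch of my own (not from the thread): the inequality $\sigma(n) \le H_n + e^{H_n}\ln(H_n)$ can be tested for small $n$ in a few lines of Python. A finite check of course proves nothing about RH; it only shows the elementary shape of the statement.

```python
from math import exp, log

def sigma(n):
    # sum-of-divisors function sigma(n)
    return sum(d for d in range(1, n + 1) if n % d == 0)

def harmonic(n):
    # H_n = 1 + 1/2 + ... + 1/n
    return sum(1.0 / k for k in range(1, n + 1))

# Lagarias: RH is equivalent to
#   sigma(n) <= H_n + exp(H_n) * log(H_n)   for all n >= 1.
# (Small tolerance added for floating-point roundoff.)
for n in range(1, 200):
    h = harmonic(n)
    assert sigma(n) <= h + exp(h) * log(h) + 1e-9, n
print("Lagarias inequality holds for n < 200")
```

The tightest small cases are highly composite numbers such as n = 12 and n = 120, where the two sides come within a few percent of each other.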
Burlington, WA Math Tutor Find a Burlington, WA Math Tutor I have a Bachelor of Science in Biology and an MBA with emphasis in Business Analysis. This means that I love math and science! I don't have any special skills that make me good at either of 17 Subjects: including statistics, ACT Math, SAT math, algebra 2 ...On my first attempt on my SAT before going to college, I received an 800 on the critical reading section. I also performed well on the math and writing sections, allowing me to later be honored as a National Merit Finalist. This in turn led to many of the scholarships that financed over 75 percent of my undergraduate career at Texas A&M. 36 Subjects: including geometry, probability, SAT math, statistics ...My experience as a tutor includes work with Vietnamese, Russian, Mexican, Japanese, Thai, Chinese, Ukrainian, and Taiwanese students. Often, when working with them, we have found that the difficulties are not about intellectual ability, but more often they relate to understanding of language, ge... 25 Subjects: including algebra 1, algebra 2, vocabulary, grammar ...I am a soprano and can offer a great deal of support to female voices, but I have taught male students who either have been starting the voice change process or have already settled into their tenor/bass register. I begin my students with folk songs to teach them the basics of vocal technique an... 26 Subjects: including algebra 2, special needs, piano, dyslexia ...I spent a year living and studying in Nagoya, Japan, and it was the best year of my life. I've really developed a deep passion for the language and culture of Japan, and I can't imagine my life without it! Through my 8 years of studying Japanese, I've been helping my peers understand grammar and study vocabulary, kanji, and whatever else they need through personal tutoring sessions. 15 Subjects: including prealgebra, reading, Japanese, writing
Index of Refraction

Date: 10/10/2001 at 14:01:45
From: Shelby
Subject: Index of refraction

Here is my question: At Norbert's backyard Halloween party, Dilbert, a guest, is considering jumping into Norbert's backyard swimming pool. Dilbert can't swim, but as he looks at the surface of the water, looking down at a 45-degree angle, the pool appears to be only 6 feet deep. "It's okay, I'm six feet, six inches tall!" Dilbert assures his host. "Hold it!" says Norbert. "It's deeper than you think!" How deep is the water?

I know the index of refraction of air is 1, and 1.33 for water, but I don't know what equation to plug them into to find the depth of the water. This is what I thought the correct equation is to solve for the other angle: n1 sin x1 = n2 sin x2, so 1.33 sin 45 degrees = sin x2. If this is correct, I came up with 70 degrees. But I don't know how to apply this in finding out the depth of the water! Am I way off course? Help! Thanks!

Date: 10/10/2001 at 15:21:31
From: Doctor Douglas
Subject: Re: Index of refraction

[Diagram: Dilbert's eye is at D, above the water; A is a second viewpoint directly above E. O and E are points on the water surface, with the vertical line V-O-U through O. The dotted line from D through O makes a 45-degree angle with the vertical and, extended straight, would reach point C; the refracted ray actually travels from O down to B, which lies directly below E.]

Dilbert looks into the water, and (assuming that the entire pool is filled with air) surmises that point C is 6 feet below point E. The dotted line makes a 45-degree angle with the vertical line UV. In other words, the angle of incidence (angle DOV) is 45 degrees. But in fact D sees point B along the refracted ray. Dilbert can observe that point B is directly below E, for example, by looking directly down on point E from point A.

You are of course applying Snell's law: n1 sin(x1) = n2 sin(x2), but you have to make sure that the n1 goes with the x1. In this case the dotted line is along 45 degrees, and it is in the air, so x1 = 45 deg goes with n1 = 1.0 (index of refraction of air):

    1.0 sin(45 deg) = 1.33 sin(x2)

You can solve for x2, which is the angle that the ray from O to B makes with the vertical OU, i.e. angle UOB.
Now you'll need to perform some trigonometry (knowing that the distance CE is 6 feet) to determine the actual depth (BE) of the pool. I hope that helps. Please write back if you need more explanation of what's going on.

- Doctor Douglas, The Math Forum

Date: 10/10/2001 at 16:25:02
From: Doctor Rick
Subject: Re: Index of refraction

Hi, Shelby. I see that Doctor Douglas answered your question. I'd like to add some explanation of why it works. It never seemed very obvious to me that an object on the bottom would appear to be directly above where it really is; but we can understand it by considering the psychophysics of depth perception. The big question is, how does Dilbert decide how deep the pool appears? The answer is, he uses his binocular vision.

Consider an object just lying on the ground in front of you. The ray from that object to your left eye is in a vertical plane. The ray from the object to your right eye is in another vertical plane. It is the angle between these planes that your brain uses to determine distance: the smaller the distance, the more your eyes need to turn toward each other to look at the object. Knowing the distance between your eyes, and the angle between the planes, the brain can "calculate" the distance to the intersection of the two planes. This information, along with the angle of depression of either line of sight, is enough to compute the location of the object.

We don't need to do the full calculation; we don't need to know the distance between the eyes, or the angle between the planes. All we have to note is that the intersection of the planes is a vertical line. Your brain finds this vertical line, then it finds the intersection of this line with the extension of the ray passing into the eye. That's where it thinks the object is. What happens when the object is at the bottom of the pool? Then the path followed by a light ray from the object to either eye is not a straight line, but the path is still in that same vertical plane.
Therefore the true position of the object is on the same vertical line (the intersection of the planes) as the image - the place where Dilbert's brain tells him it is located.

[Diagram: the true, bent path of a light ray from the object up through the surface to the eye, and the straight extension of the above-water ray; the image lies on that straight extension, on the same vertical line as the object but not as deep.]

I have drawn the true (bent) path followed by a light ray, and the straight path that Dilbert's brain assumes the ray has followed. His brain thinks the object is at the place I labeled "image."

- Doctor Rick, The Math Forum
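The trigonometry Doctor Douglas describes can be carried through numerically. This is my own sketch with the numbers from the problem (apparent depth 6 ft along a 45-degree sight line), not part of the original exchange:

```python
import math

n_air, n_water = 1.0, 1.33
apparent_depth = 6.0          # feet: what Dilbert infers along the 45-degree line
x1 = math.radians(45.0)       # angle of incidence in air (angle DOV)

# Snell's law: n1 sin(x1) = n2 sin(x2), with n1 the medium of the incident ray.
x2 = math.asin(n_air * math.sin(x1) / n_water)

# At 45 degrees the horizontal offset OE equals the apparent depth CE.
horizontal = apparent_depth   # OE = 6 ft

# The true depth BE satisfies tan(x2) = OE / BE.
true_depth = horizontal / math.tan(x2)
print(round(true_depth, 2))   # about 9.55 feet: deeper than Dilbert's 6 ft 6 in
```

The refracted angle comes out near 32 degrees, so the pool is roughly 9.5 feet deep, and Norbert is right to stop Dilbert.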
Inverse Relationship Between Addition and Subtraction

Once children are proficient in finding related facts, they are ready to discover fact families.

Materials: per child: 10 red cubes and 10 blue cubes
Preparation: none
Prerequisite Skills and Concepts: Children should understand addition and subtraction. They should also understand sums through 18 and subtracting from 18 or less, or the numbers they are working on. They should also understand related facts.

Remind children about the related facts that you covered. Take time to review several pairs of related addition and subtraction facts.
• Say: Today we are going to learn something new. Remember, we talked about related facts. Well, today we are going to talk about a fact family. All the facts in a fact family are related.
• Pick up 4 red cubes and 2 blue cubes.
• Say: I will put the red cubes together and put the blue cubes together. Now I will place the blue cubes on the end of the red cubes to make a train.
• Ask: What fact can you make from this train? (4 + 2 = 6)
• Write each of the facts on the board as you introduce them.
• Say: Watch as I turn the train end to end.
• Ask: What fact do you see now? What is the same about the two facts? (2 + 4 = 6; they use the same 3 numbers.)
• Ask: Can we show a subtraction fact using this train? How? Elicit from children that you can break one of the colors of cubes off the train.
• Break the blue cubes off the train.
• Ask: What subtraction fact have I made? (6 - 2 = 4)
• Put the train back together. This time break off the red cubes.
• Ask: What subtraction fact have I made this time? (6 - 4 = 2)
• Say: What do you notice about the four facts that we just made? Children should say that there are two addition and two subtraction facts and that all four facts have the same 3 numbers.
• Ask: How are the numbers related? The same numbers are used in all four facts.
• Repeat with other fact families that have four facts. Then use 3 red and 3 blue cubes to make a train.
• Ask: What is the addition fact? What will happen when I turn this train end to end? (3 + 3 = 6; it is the same.)
• Break 3 red cubes off the train.
• Ask: What is the subtraction fact? Will the fact change if I break off the blue cubes? (6 - 3 = 3; no.)
• Ask: What can you tell me about a doubles fact family? Children should say that it has only 2 facts in it: one addition and one subtraction.
• Have children work with the cubes to make fact families. Then write a fact on the board and invite children to write the complete family. Be sure to include some doubles facts.
• Finally, give children 3 numbers such as 3, 2, and 5.
• Ask: What fact family can you make from these 3 numbers? (3 + 2 = 5, 2 + 3 = 5, 5 - 2 = 3, 5 - 3 = 2)
• Repeat with other numbers, including doubles.

Wrap-Up and Assessment Hints
Children need lots of practice with related facts. As you assess, give one fact and have the children write the fact family. Or, ask questions such as, "Can you write a related addition fact for 12 - 5 = 7?" As you observe, check to see that children are using the correct sign and are doing the correct operation for the sign that is in the exercise.
Quantum laser turns electron wave into memory | EE Times
News & Analysis

ANN ARBOR, Mich. - How many electrons does it take to remember the entire contents of the Library of Congress? Only one, according to University of Michigan professor Philip Bucksbaum. Since electrons, like all elementary particles, are actually waves, Bucksbaum has found a way to phase-encode any number of ones and zeros along a single electron's continuously oscillating waveform. "Our work in quantum-phase registers is highly experimental, but theoretically there is really no limit to how long a string of 1s and 0s you can store in one," said Bucksbaum.

In practice, Bucksbaum's team is a long way from quantum-phase memory devices. They are sticking with byte-wide vectors encoded with an ultrahigh-speed laser on a single cesium atom. With that setup, the team can store information in quantum-phase bytes instead of on quantum bits, as with most quantum computer designs. So far, all work with quantum computing has used the binary property of electron spin, which is either up or down. "Most other researchers are using the spin of a quantum particle as a storage medium. Quantum-phase data storage is much more flexible but also very new. It may turn out to be a step toward quantum computers, or it could be a complete dead end," said Bucksbaum.

Quantum phase has long been of interest to researchers, because theoretically atoms could take on unfamiliar characteristics by selectively altering the phase of their waveform. For example, with a polymer, changing its constituent atoms' phase could mimic the wave-function phase of a metal, thereby imbuing plastic with the strength of steel. Quantum phase for data storage was first proposed by Lov Grover at Lucent Technologies Bell Laboratories (Murray Hill, N.J.) in 1997.
Bucksbaum's group was the first to test the theory experimentally, and so far it works like a charm.

Waves in bathtub

"Grover speculated that quantum data registers could store and retrieve data by allowing you to search many locations simultaneously; we tested one of his algorithms and confirmed it," said Bucksbaum.

Quantum mechanics holds that electrons behave like the waves sloshing in a bathtub: they exist simultaneously in an infinite number of locations, or quantum states, within the single wave. In the bathtub example, the surface waves define sets of points, any one of which has a certain probability of being in the wave's location at any one moment. By sculpting designer wave packets and injecting them into an electron's waveform, Bucksbaum encoded strings of 1s and 0s with laser blasts that reverse the phase from the natural state of the waveform. "Our wave packets enable us to engineer atoms by adjusting the amounts and quantum phases of an atom's electrons," he said.

Usually an electron is bound to an atom but can nevertheless exist in many states simultaneously, like all the points along a wave stretching from one side of a bathtub to the other. All the points are there at once, and all the points along the wave are constantly changing, enabling a specific sequence of waves to encode information. Bucksbaum used a laser to encode parallel phase reversals along the waveform of an atom's electrons: a pulsating stream of 8-bit phase reversals. A second reference stream enabled the researchers to read back out the original bits by decoding the phase reversals, thereby recovering the stored information like a data register. "There are an infinite number of individually addressable states, the Coulomb potentials, where quantum bits can be stored," said Bucksbaum.

An electron's wave is called a "probability wave," because it is in each possible quantum state simultaneously, each with a certain probability.
For instance, the probability that a wave is at the highest point is proportional to the number of places where the waveform is currently hitting its highest point. For atoms, the infinite number of quantum states comes from the different bound states of the electrons in their orbits. These orbits come in an infinite variety of "quantum jumps" between states; the quantum states are conveniently numbered 1, 2, 3 and so forth. Bucksbaum's laser excited his cesium atom to states 25 through 32 to encode his 8-bit quantum bytes. Each of these high-energy states has an associated amplitude that can be modulated with ultrashort laser bursts, essentially storing an 8-bit vector into a quantum register.

When needed later, the register's value can be read out from the single atom by hitting it again with the laser, this time with a reference encoding. After the second burst, the atom has been driven to two different states by the two laser pulse streams, causing the two "alternate realities" inside the atom to interfere with each other, like dropping two pebbles into a pond: both states exist simultaneously along with all the other miscellaneous states (ripples), albeit with a smaller probability. The interference pattern between the two wave packets enables the 1s and 0s of the original register's value to be decoded, since interference occurs only at positions in the original value that differ from the reference beam. "The way a data register works is that the electron is simultaneously in many states, meaning the electron is simultaneously occupying many parts of the register, so you can store the whole stream in a single atom," said Bucksbaum.

Bucksbaum has verified the function of his single-atom quantum register with all types of bit patterns, similar to the test patterns used to verify the proper function of a CPU after manufacture. So far he has not found any values that can't be stored and retrieved from a single atom.
"Now we want to find out how long information can be stored," he said.
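The phase-reversal scheme the article describes can be illustrated with a toy numeric model. This is purely my own sketch in ordinary array arithmetic, not a simulation of the actual cesium experiment; the byte stored is an arbitrary example:

```python
from math import sqrt

bits = [1, 0, 1, 1, 0, 0, 1, 0]     # hypothetical byte to store
n = len(bits)

# Equal-amplitude superposition over n states; the encoding pulse flips
# the phase (sign) of every state whose bit is 1.
amp = 1 / sqrt(n)
encoded = [-amp if b else amp for b in bits]

# Reference wave packet: the same superposition with no phase flips.
reference = [amp] * n

# Interference: encoded + reference cancels exactly at the positions
# where a phase reversal was applied, so zero amplitude marks a 1.
interference = [e + r for e, r in zip(encoded, reference)]
decoded = [1 if abs(a) < 1e-12 else 0 for a in interference]
print(decoded)   # recovers [1, 0, 1, 1, 0, 0, 1, 0]
```

The point of the toy model is only that interference against a reference wave singles out the positions where the stored value differs from the reference, which is the readout principle the article attributes to the experiment.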
Viraj_Deshpande @ PaGaLGuY

Dressed in shorts and a crumpled white t-shirt and sitting coyly in his apartment in Dombivali, on the outskirts of Mumbai, Shashank Prabhu looks more like the teenager next door who you'd see rushing to his tuition classes. But once he begins to speak, the confidence and command of a topper are quite evident. Prabhu topped t...

My solution: Every no. can be expressed in the form 9k, 9k+1, 9k-1, 9k+2, 9k-2, 9k+3, 9k-3, 9k+4 or 9k-4, where k is an integer. 2998 = 9 * 333 + 1. Thus, 1 to 3000 (excluding 1 and 3000) has 334 nos. of type 9k+2 (because of no. 2999) and 333 nos. of each of the types 9k+1, 9k, ...

6711+4179 is possible. First_timer's solution (above) is correct.

Test of divisibility by 11: the difference between the sum of the odd-placed digits and the sum of the even-placed digits must be a multiple of 11 (including 0). Looking at the options, their (sum of even-placed digits - sum of odd-placed digits) = (8-7=)1, (14-7=)7, (8-10=)-2 and (14-10=)4 respectively. y=1, z=7, w=3 or 6 or 9, x=2 or 4 or 6 or 8. Thus the nos. (and t...

(x+y)^2 = x^2+2xy+y^2 = 2007 - 54x^2 = 9(223 - 6x^2). Thus, (x+y) = 3k, where k is an integer. 9k^2 = 9(223-6x^2) gives 6x^2 = 223-k^2. Since 3 divides (223 - k^2), k^2 should be of the form (3p+1). Values of k^2 of the form (3p+1) for which (223-k^2)/6 is also a perfect square are found by listing the...

Area of triangle = 0.5*b*c*sinA. Thus area = 10sinA. Clearly, this is <= 10, equality existing when A = 90 degrees. Bound means the maximum value it can attain.

RHS is not 22752*x. It is only one no. with x in its units place. Hope this clears the confusion.

Their HCF is 45 means they don't have any other common factor except 45; otherwise it would've figured in the HCF. Hence, once we divide both by 45, what remains should be a pair of co-prime nos. No. of divisors of 21600 with HCF 45 = no. of co-prime divisors of 21600/45 (= 480). This should help. There is some formula for the no. of pairs of co-prime divisors. Can't recollect now.
Will post here if I manage to recollect in time.
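The principle in the last answer, that after dividing out the HCF of 45 the two numbers must be co-prime, can be counted directly for 21600/45 = 480 by brute force. This is my own hedged sketch, not the formula the poster was trying to recall:

```python
from math import gcd

def divisors(n):
    # all positive divisors of n (brute force is fine at this size)
    return [d for d in range(1, n + 1) if n % d == 0]

# Pairs (a, b) with a < b, both dividing 480 = 21600/45 and gcd(a, b) = 1:
# each such pair corresponds to a pair (45a, 45b) whose HCF is exactly 45.
ds = divisors(480)
pairs = [(a, b) for i, a in enumerate(ds) for b in ds[i + 1:]
         if gcd(a, b) == 1]
print(len(pairs))   # 49 such pairs
```

The count agrees with the prime-factorization argument: 480 = 2^5 * 3 * 5, so there are (2*5+1)(2*1+1)(2*1+1) = 99 ordered co-prime pairs of divisors, and (99 - 1)/2 = 49 unordered pairs with a < b.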
Magnetic Field of a Moving Charge
Phys 4, Section 2
Feb 20, 2002

Experiments show that the magnetic field of a moving charge can be expressed as:

    B = (μ[o]/4π) q v × r̂ / r^2

μ[o] ≡ 4π × 10^-7 N·s^2/C^2 is called the permeability of free space. The constant ε[o] that is used in electric field calculations is called the permittivity of free space. Note that ε[o]μ[o] = 1/c^2.

Two protons with a vertical displacement of r between them move in the x-y plane parallel to the x-axis at the same speed v (small compared to c). When they are both at x = 0, what is the ratio of the electric/magnetic forces between them?

    F[E] = kq^2/r^2 = q^2/(4πε[o]r^2)

To get F[B] acting on the top charge, first find B caused by the bottom charge:

    B = (μ[o]/4π) q v / r^2

So, the force this field exerts on the top charge is:

    F[B] = qvB = (μ[o]/4π) q^2 v^2 / r^2

Comparing the ratio of F[B] to F[E]:

    F[B]/F[E] = μ[o]ε[o] v^2 = v^2/c^2

Last modified on February 27, 2002
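The final ratio can be sanity-checked numerically. This is my own sketch using standard SI values, with an example speed chosen for illustration:

```python
mu0 = 4e-7 * 3.141592653589793    # permeability of free space, N s^2/C^2
eps0 = 8.8541878128e-12           # permittivity of free space, C^2/(N m^2)
c = 2.99792458e8                  # speed of light, m/s

v = 1.0e6                         # example proton speed, small compared to c
ratio = mu0 * eps0 * v**2         # F_B / F_E
print(ratio)                      # ~1.11e-5, matching (v/c)**2
```

Since mu0 * eps0 = 1/c^2, the magnetic force between the two protons is smaller than the electric force by the factor (v/c)^2, which is tiny at nonrelativistic speeds.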
Topic: Re: POW writing to be done
Replies: 0

Re: POW writing to be done
Posted: Jul 17, 1997 11:00 AM

>Now that you've had a chance to talk about the problem, I'd like your
>group to write up some pieces addressing some or all of the following
>issues. Break it up so that each person is doing part of the work.
>What is your answer? How did you get it? Did you all agree?
>What are similar problems that you have used in your classroom?
>How would you use this problem in your classroom?
>How would you change this problem if you were going to use it?
>How does this problem fit in with your current curricular focus or
>focuses? (patterns, functions, and problem solving, cooperative
>learning, or whatever)
>After you've done a bit of this writing, take a look at the solutions
>submitted by students and talk about them in your group. What do you
>see? Do you hear anything surprising to you? Do you see things that you
>talked about when you were solving the problem in your group?
> -Annie

This is a group answer from Jodi Fleishman, Marcia Radbill and Ursala Rice.

>What are similar problems that you have used in your classroom?
I've used pattern problems, specifically the Rule of 360, in teaching students to solve LogoWriter circle-equation programming problems in math a few years ago. I.e., fill in the blank in the equation to complete the

>How would you change this problem if you were going to use it?
It could be used to determine which student may sit next to the teacher in my computer lab or in the lunch room. (Which seat would they be sitting in to be next to the teacher?) Our group figured out that you could use rows or lines, and by connecting the first and last seats on the rows, the solution still works.
>How does this problem fit in with your current curricular focus or
>focuses? (patterns, functions, and problem solving, cooperative
>learning, or whatever)
This works very well with supporting the math curriculum (problem solving, etc.), as well as with ongoing cooperative learning activities, something our inner-city students need to practice on a daily basis.

These answers are from Marcy.
Is there a notion of a zeta function of a morphism?

The Hasse-Weil zeta function is defined only for arithmetic schemes. By an arithmetic scheme I will mean a scheme $X$ together with a morphism of finite type $X\rightarrow S$, where $S$ is an affine Dedekind scheme (a $0$- or $1$-dimensional nonsingular affine scheme). Actually, in the case where $S$ is $0$-dimensional, I believe it is only defined for $S$ being $\operatorname{Spec}$ of a finite field. The Hasse-Weil zeta function is defined like so: first you define it for varieties over finite fields, and then, if $S$ is one-dimensional, you define the zeta function as the product of the zeta functions of the fibers.

It seems rather arbitrary for it to be defined only in these cases. Is there a definition of a zeta function of a morphism of finite type (or maybe flat?) in general, even when $S$ is $\geq 2$-dimensional? I would be surprised if there isn't, but I've never heard of such an entity.

Tags: nt.number-theory, zeta-functions, ag.algebraic-geometry

Have you seen the discussion at golem.ph.utexas.edu/category/2010/07/… ? – Qiaochu Yuan Aug 9 '11 at 17:34

Do you mean "even when S is not $\ge2$ dimensional", or do you mean "even when S is $\ge2$ dimensional"? – Joe Silverman Aug 9 '11 at 18:28

Probably not what you want, but if one views the zeta function over a finite field as encoding the number of fixed points of iterates of the Frobenius map, then one is led to do the same thing for the fixed points of the iterates of an arbitrary map. These types of zeta functions have been studied in dynamical systems. – Joe Silverman Aug 9 '11 at 18:31

What is the shape of an answer you are hoping for? E.g., should it be a holomorphic function? For "relative" algebraic geometry, you typically want something of local nature on the base, and given the form of usual zeta functions (which you seem to want as the result when $S=\operatorname{Spec}(\mathbb{Z})$), it's hard to see what form it would take. – Moosbrugger Aug 9 '11 at 18:55

If $X$ and $S$ are finite type over $\operatorname{Spec}(\mathbb{Z})$, then what you've said is indeed well-defined and is just the zeta function of $X$. – Moosbrugger Aug 9 '11 at 23:38
Source Code Archive

gauss_elim.m: Gaussian elimination with back substitution (this is a demonstration routine which does not incorporate any pivoting strategies)
LUfactor.m: determine an LU decomposition for a given matrix (performs partial pivoting)
companion routine to LUfactor: given an LU decomposition and a right-hand side vector, performs forward and backward substitution to determine the solution to the linear system
tridiagonal.m: solve a system of linear equations which has a tridiagonal coefficient matrix
jacobi.m: perform Jacobi method iterations
gauss_seidel.m: perform Gauss-Seidel iterations
sor.m: perform SOR iterations
conj_grad.m: conjugate gradient method
newton_sys.m: Newton's method for a system of equations
broyden.m: Broyden's method for a system of equations
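The archive's routines are MATLAB files; as a rough illustration of what a routine like jacobi.m does, here is a minimal Jacobi iteration sketch in Python. The function name, stopping rule, and example system below are my own choices, not taken from the archive.

```python
def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve A x = b by Jacobi iteration. Convergence is guaranteed when
    A is strictly diagonally dominant."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        # Each new component uses only the previous iterate's values.
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Example: a small diagonally dominant system.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = jacobi(A, b)
# Exact solution is (1/11, 7/11).
```

Gauss-Seidel and SOR differ only in that each sweep reuses the freshly updated components (SOR additionally blends old and new values with a relaxation factor).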
Using the Slope-Intercept Formula

Date: 9/27/95 at 22:30:28
From: Andy Plotkin
Subject: Algebra II Honors High School

Are y = 11x + 6 and y - 7 = 11(x - 3) the same thing?

"Has x-intercept of 5 and slope of -2/3" How would I write an equation for the line described?

Thank You,
Wendy Plotkin

Date: 9/27/95 at 22:45:28
From: Doctor Andrew
Subject: Re: Algebra II Honors High School

If you can find two different points (x,y) that satisfy both equations, you've got the same line, since two points uniquely define a line. You could also show that they are algebraically equivalent by solving each one for zero (0 = ax + by + c), setting the non-zero sides equal to each other, and seeing if you can get 0 = 0.

>"Has x-intercept of 5 and slope of -2/3" How would I
>write an equation for the line described?

The slope/intercept form of the equation for a line is:

y = mx + b

where m is the slope and b is the y-intercept. However, you don't know the y-intercept; you know the x-intercept. The x-intercept is the value of x such that y = 0. So if you plug the slope into m, the x-intercept value into x, and let y = 0, you can solve for b, the y-intercept.

Hope this helps. Good luck!

-Doctor Andrew, The Geometry Forum
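Dr. Andrew's recipe can be carried out explicitly for this problem; a small sketch in exact arithmetic (the variable names are mine):

```python
from fractions import Fraction

# At the x-intercept, y = 0, so 0 = m * x_int + b, which gives b = -m * x_int.
m = Fraction(-2, 3)
x_int = Fraction(5)

b = -m * x_int              # b = 10/3, the y-intercept
y_at_x_int = m * x_int + b  # sanity check: the line should pass through (5, 0)

# So the requested line is y = -(2/3) x + 10/3.
```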
Math vs. maths Math and maths are equally acceptable abbreviations of mathematics. The only difference is that math is preferred in the U.S. and Canada, and maths is preferred in the U.K., Australia, and most other English-speaking areas of the world. Neither abbreviation is correct or incorrect. You may hear arguments for one being superior to the other, and there are logical cases for both sides. One could argue maths is better because mathematics ends in s, and one could argue math is better because mathematics is just a mass noun that happens to end in s. In any case, English usage is rarely guided by logic, and these usage idiosyncrasies are often arbitrary. If you were raised in a part of the world where people say maths, then maths is correct for you, and the same is of course true of math. Don’t listen to anyone who says otherwise. North America Math is the strong suit of students at the Ward Elementary School, where 50 percent of third grade students scored “advanced.” [Boston Globe] Math professors are appalled at the lack of math skills they see in some education students … [Winnipeg Free Press] Apollo paid less than $100 million to acquire Carnegie Learning, a provider of computer-based math tutorials. [The Atlantic] Outside North America It lasted a long 40 minutes, which is how I remember maths lessons. [Financial Times (U.K.)] But scratch below the surface and you’ll find the maths is seriously flawed. [Sydney Morning Herald] The Government has been under pressure from business and employer groups to boost standards in maths. [Irish Times]
Linear independence

June 2nd 2008, 06:38 PM #1 Junior Member (Apr 2008)

Let u, v be distinct vectors in a vector space V. Show that {u,v} is linearly dependent if and only if u or v is a multiple of the other.

June 2nd 2008, 07:00 PM #2

So we have a statement $p \iff q$, where p is "u, v are linearly dependent" and q is "u or v is a multiple of the other". Since we are proving an if-and-only-if statement, we need to prove both directions.

So let's start with $p \implies q$. We assume that u and v are linearly dependent. So by definition there exist scalars $c_1, c_2 \in \mathbb{R}$, not both zero, such that $c_1 u + c_2 v = 0$. Without loss of generality $c_1 \neq 0$ (otherwise swap the roles of u and v). Now if we solve this equation for u we get $u = -\frac{c_2}{c_1} v$. Therefore u is a multiple of v. Done.

Now for the other direction, $q \implies p$. We assume that one is a multiple of the other, say $v = ku$. Now we subtract v from both sides to get $0 = ku - 1\cdot v$. This is a linear combination equal to zero in which at least one coefficient (namely $-1$) is nonzero. So the vectors are dependent.

I hope this helps.

June 2nd 2008, 07:06 PM #3 Junior Member (Apr 2008)

Yes, it helps greatly.
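A quick numerical sanity check of the second direction (the example vectors and the scalar k below are mine): if v = ku, then ku - 1·v is the zero vector, a dependence relation whose coefficient -1 is nonzero.

```python
# Hypothetical vectors in R^3 with v = 2u.
k = 2.0
u = [1.0, -3.0, 0.5]
v = [k * ui for ui in u]

# The combination k*u - 1*v should be the zero vector.
combo = [k * ui - vi for ui, vi in zip(u, v)]
```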
Martins Creek Algebra Tutor Find a Martins Creek Algebra Tutor ...I have helped students of all levels learn to manage time, improve skills, and set goals to improve their grades in many subjects. Also I am prepared to supply materials and lessons to aid students with their study habits. I have been sewing ladies' and children's garments as well as curtains and other household items for many years. 22 Subjects: including algebra 1, reading, writing, English ...I will do everything in my power to get you or your child the best test scores possible.An introduction to algebra includes topics such as linear equations, ratios, quadratic equations, special factorizations, complex numbers, graphing linear and quadratic equations, linear and quadratic inequali... 34 Subjects: including algebra 1, algebra 2, English, physics ...I'm a Princeton graduate in Mechanical Engineering specializing in math, science, and test prep. I scored 790M/780W/760CR on my SATs; I am a National Merit Finalist for the PSAT, and I earned perfect 5s on: Physics C E&M, Calculus BC, Physics C Mech, Biology, Psychology, Physics B, and English L... 26 Subjects: including algebra 2, English, writing, algebra 1 ...Over the last 15 years I've worked for several different test-prep companies. All were pretty good, but my instruction incorporates the best techniques from each place. There are a lot of SAT coaches who CLAIM to be experts. 23 Subjects: including algebra 1, algebra 2, English, calculus ...Because of the enjoyment I got out of teaching skiing, I decided to pursue a career in education. In 2012 I obtained my Master's of Education from Lehigh University. I currently have over 1 year of experience teaching at both the middle and high school level and hold Pennsylvania certifications in middle level math, middle level science, elementary education, and special education. 17 Subjects: including algebra 1, reading, geometry, biology
Unknown and Unknowable

See also: Workshop on the Known, the Unknown, and the Unknowable. What Have We Learned? What Should Be Explored?

Typically students learn what is known in science while scientists study the unknown. An area that is starting to be explored is what is unknowable in principle. Starting with the seminal papers of Gödel and Turing, this century has witnessed a stream of impossibility results, including undecidability and intractability. But these results concern formal systems. Do they limit scientific knowledge?

I first spoke about these issues at a panel discussion in memory of the physicist Heinz Pagels on February 1, 1989, and at the Second Annual Charles Babbage Foundation Lecture in October, 1989. My talk at the 25th Anniversary of the Computer Science Department, Carnegie-Mellon University, delivered in 1990, appears in the Proceedings. See What is Scientifically Knowable?, Proceedings, Twenty Fifth Anniversary Symposium, School of Computer Science, Carnegie-Mellon University, Addison-Wesley, 1991, 489-503.

One scientific problem that I've considered is protein folding. This is something that nature does easily but which we are unable to simulate and which theory suggests is very difficult to do. Possible reasons for this dissonance are presented in On Reality and Models, in Boundaries and Barriers: On the Limits to Scientific Knowledge (J. Casti and A. Karlqvist, eds.), Addison-Wesley, 1996, 238-251.

Other papers include:

● The Unknown and the Unknowable, Columbia University Department of Computer Science, Technical Report, and Santa Fe Institute Working Paper, 1997.
● Do Negative Results from Formal Systems Limit Scientific Knowledge?, Complexity, Fall, 1997, 29-31.
● Varieties of Limits to Scientific Knowledge, to appear in Complexity, 1998 (with P. Hut and D. Ruelle).
● Non-Computability and Intractability: Does it Matter to Physics?, Working paper, Santa Fe Institute, 1998.
In May, 1996, there was a workshop at the Santa Fe Institute on Fundamental Sources of Unpredictability. Ten of the papers appeared as a special Proceedings issue of Complexity, Vol. 3, No. 1, 1997. The Proceedings Editors were J. Hartle, P. Hut, and J. F. Traub. There are two formal systems that concern us. One is the mathematical model which is chosen by the scientist. Continuous models are common in fields varying from physics to economics. The real or complex number field is assumed. The second formal model is the model of computation. Computer scientists tend to favor the Turing Machine model. For scientific problems the real-number model has a number of advantages. The pros and cons of the real-number model versus the Turing Machine model are given in Chapter 8 of Complexity and Information.
Graph Showing Percentage And Actual Values - Excel

I have created a clustered bar chart with percentages on the Y axis. I would like to display the actual values (that this percentage was calculated from) next to each individual bar. How can I do this? Many thanks

Similar Excel Video Tutorials
Decrease Values By 35% - See Mr Excel and excelisfun demonstrate two methods to subtract 35% from a column of values: Mr Excel: Create formula for remaining percentage (1 ...

Helpful Excel Macros
Get Values from a Chart - This macro will pull the values from a chart in Excel and list those values on another spreadsheet. This will get the s

Similar Topics

I am attempting to display a chart as percentages, and include a data table on the bottom of the chart; however, I want to show the data table as the actual values that the percentages represent, rather than the percentages which come through by default. Is there a way to do this? I have attached a worksheet (2007) as an example.

Hey everyone, I'm really hoping someone can help me with this... I need to plot percentages over time in a line graph in Excel. I don't want to have to do a percentage equation in the spreadsheet; I just want Excel to take two sets of values and display the percentage in the chart. For example, I need B1 as a percentage of B2 for week 1, C1 as a percentage of C2 for week 2, etc. Can someone please offer a suggestion for how to do this? I would really appreciate it. Also, would it be possible to link data from other sheets in the workbook into one single chart?

I have successfully built a spreadsheet that changes ranges and data required in a graph. I am stuck at two points. Firstly, one of the selection options to appear in the graph is for percentage or actual values. Is there a way that the chart axis can automatically detect this and switch between percentage values and "normal values"?
Secondly, though not directly associated, I would like a pivot table filter to automatically change depending on the value entered into a different cell. I guess this would need VBA; any help please!

I want to do a simple chart - the Y axis will be percent (from 0 to 100) and the X axis will be year (02, 03, 04). I want to use a line chart with markers displayed at each data value - the data values will be 34 (or 25%) for 02, 131 (or 96%) for 03, and 137 (or 100%) for 04. I can get it to plot the percentages nicely, but I can't figure out how to get it to put the actual quantities as data labels. Any help you could provide would be appreciated.

Hello there, I am doing bar graphs like the one I am attaching. The graph axis needs to be % and show the % sign; this is achieved. However, the values for each bar, while being % values, should not display the % sign - I asked about that in another thread, but help will be welcome here as well. Then the second and more tricky question I have is to display the actual values (not the percentage ones) somehow in the graph for each of the bars. Can anybody help me with that? I've heard there is some trick with an invisible secondary axis - can somebody point out a straightforward process? And I need to do it a ton of times, so hopefully it is not too troublesome. Thanks a lot in advance!

I'd like to have the x-axis values as displayed on the chart be different than the actual values plotted on the graph. Specifically, my actual x-axis values are date/time values, so they're huge mother labels like "October 12, 2009 13:52:27". I absolutely don't want these as my x-axis increment values. Instead I'd like to have the x-axis increment values enumerated as 1, 2, 3... while actually plotting the true x-values on the chart itself. Is there a way to do this in Excel 2007? Thanks - AJ.

There is probably a really easy solution to this, but I just can't figure it out!!!
I performed an investigation, which was to return a certain percentage of data (e.g. 1%, 2%, 5%, 10%, 25%, 50%, 75% or 100%) and time how long it took for the data to be displayed to the screen. I performed the investigation on each percentage twice, so I now have two sets of results for each percentage. I would like to display these results in a line chart, but Excel won't let me arrange the line chart how I wish. I would like the time taken to be on the X axis and the percentage on the Y axis. No matter how many different ways I try, I cannot get this to happen. I have tried drawing my table of results with the percentages along the top and then down the side, and I've tried adding in extra series and then taking them out after the graph has been drawn, but nothing works. If anyone could advise, I would be so grateful. Many thanks.

I have a straightforward line chart with values from, say, 80 to 101 on the y axis. However I want the real 90.5 axis value to display as 100, and then BOTH the real 89 & 92 values (i.e. 90.5 plus and minus 1.5) to display as 90. Similarly I want both the 86 & 95 axis values (i.e. 90.5 plus and minus 4.5) to display as 75. In summary, although the actual series lines are plotted normally, I want the y axis to display something different, i.e. reading upwards the axis will show 60, 75, 90, 100, 90, 75, 60. Has anyone got any ideas? Usual TIA.

Hi folks. I'm currently working on a spreadsheet to give a target update for various partners I work with. I need to work out a percentage value for a quarter, which is dependent on the input of a percentage-of-retail-target figure.
So for instance I have:

Jan - 125% of retail target
Feb - 60%
March - 100%

I need to show a quarter value, which I have come to the conclusion is:

=1-(((1-B8)+(1-B9)+(1-B10))/3)

which just so happens to be the average value, and works out as 95%. But if these percentage figures are derived from an actual/target like:

Jan - 4 target, 5 actual
Feb - 5 target, 3 actual
March - 6 target, 6 actual

then my "true" quarterly percentage works out at 15 target, 14 actual, which is 93.33%. Is there a better way to work out the quarterly percentage?

I currently have a set of data that is shown in a column chart. At the moment, the 'y' axis has the scale set to the values and each bar is shown as the value itself. Is there any way of showing the figure on the bar as a percentage of the total, whilst keeping the 'y' axis as the values? Hope this makes sense?? Please note this is for a column chart, not a bar chart.

I have created a simple bar chart (with percentage as its y-axis) but I have one value that has a much larger percentage than the other (x-axis) values. As all values are shown to the same scale, it is making all slight differences between the other values look completely insignificant. Is there a way we could alter the scale on the y-axis, so that it goes 0% to 30% with a gap to 200% to 250%?

What I have: I inherited a chart and table that I am required to use. The chart is supposed to display table values and percentages. As the table and chart are now, values have to be manually entered into the table and then the chart with each new use. Table data isn't linked to the chart. I tried to figure out how the percentages are calculated, without success. What I need: I would like to enter Occupied hours in the table and display those hours and the unoccupied hours in the chart, with the percentage each value is of 168 hours.

I have created a dynamic chart with the option of viewing the data as percentage data or the actual numerical (non-percentage) data.
The select button I created selects the rows to chart either percentages or actual values as a line chart. The problem is I have 4 charts in a quadrant style and the maximum on the y scale varies for each, as it is automatic. I want all charts to show a max of 100% on the scale to make them relative when viewing, but if I set the max to 1.0, then when I hit the button and the chart changes to regular values, e.g. 0-1500, a max of 1.0 obviously shows no data. Any thoughts?

I am doing absolutely masses of simple sums in Excel (X for Mac) to then produce charts for a report - but I am having to illustrate the actual figures as whole percentages and that is getting me hot under the collar - as my little lists rarely add up to an exact 100%. Now, I can understand where the discrepancy is creeping in with the rounding and all - but is there a sneaky way to get Excel to make the adjustment for me so that the percentages always total 100%? Here is an example from my sheet, where the actual figures add up to 200, and the cell showing the percentage figure has a simple formula at present of, for example, "=(B71/B76)", where B71 is the actual figure (in this case, 12) and B76 is the sum of all the actual figures (200 below), and the cell is then set to be a percentage with zero decimal places.

12   6%
105  53%
44   22%
31   16%
8    4%
200

So you can see that my percentages here total 101%! I have spent hours trying to research a sneaky way and am starting to think I'm asking for the impossible - and will just have to 'adjust' the percentages manually to make my 90 charts look accurate! Many thanks as always; here's hoping.

I'm trying to create a pie chart showing a percentage breakdown based upon a set of values that includes repeating values. For example, for the list: I'd like a pie chart that shows the percentage breakdown of the four unique values, not represent each line as its own slice. I'm not sure how to do this in Excel without manually calculating the percentages.
Any suggestions would be much appreciated.

I'm creating a chart that shows a distribution of values, comparing them month by month (see attached spreadsheet). In addition to showing the actual values on the chart, I'd like to show the percentage for that month. For example, in my spreadsheet, in month '1', there are 11 values. 4 of them are labeled as '14.3'. I'd like to show the 14.3 column as being 4 high, but also, I'd like to show that it's 36% (4 divided by 11) of the total numbers gathered for that month. Is there a way to do this?

I am trying to find a formula to create my chart column. I have managed to see the total column and the actual column, but the percentage column remains at `0`, and I am not sure if I have the formula right:

        1     2     3     4
Total   275   1050  1140  310
Actual  120   1000  1050  250
%       44%   95%   92%   81%

The percentage is actual divided by total, which gives 44% (rounded up). The actual and total columns appear correct, but the % column is showing `0`. Can you advise? Thank you.

Hi again everyone. I have a graph that has 5 variables in a stacked column. The X axis is year; the Y axis is total number of people. On the same graph, I'd like to label each individual series with its % of the total for that year. I've done this before by making individual text boxes and manually entering in the percentages, but I feel like there must be a better way to do this. I tried a line-column graph and plotted the percentages on a second Y axis (percentage), but the data points don't line up with their corresponding bar chunk. I hope I'm making sense; I've been staring at this for quite a while. Thanks!!

Hi! I have a graph which displays the movement of a parameter in percentages. On the y-axis the percentages are displayed, e.g. 5,0%. I want to show these numbers on this axis without the '%' sign, so e.g. 5,0. Which custom format should/can I use? I'm using Excel 2007.

I want to create a chart with 2 values, e.g. below:

name       marks  percentage
Total no.  76     100%
Pass       54     71%
Fail       22     29%
No result  41     54%

I need to create a chart where I want to show both the marks and the percentage on a Clustered Column 3D chart. When I try to create the chart I get only the marks bars, and the percentage bars are just shown flat on the chart. Please help. Thanks for your answer.

I am trying to create a chart that has the primary axis as a stacked chart that shows 2 figures (number of sales & number of referrals). I am then adding a line that shows percentage across the chart on a secondary axis. The issue is that I need this line to show the percentage of sales that were referrals. I have the chart created; however, due to the referrals changing each month, the line is not accurately showing the percentage. The percentage number is correct; it is the spot where the line intersects that is not calculating properly on the stacked bars. Any thoughts?

I have a pie chart and I don't want to display percentages, but the whole number value from the source. When I format the data label I get a fraction instead of the whole number. I want the chart to display the actual numbers, and not a percentage; how can I do this?

Hello all. I have a pie chart that simply displays total number of items and number of remaining items. I have the percentages manually calculated but wanted the pie chart to also display the percentages. The problem is that when I select Format Data Series, Data Labels, Percentage... the percent displayed on the chart is incorrect. For example, I set up a test chart with 100 total items with 50 items remaining. The calculated percentage is 50% but the pie chart is displaying 33.33%, which is obviously incorrect. Anyone know how to correct the pie chart percentages?

I'm trying to create a simple line graph for percentages. Every five years from 1960 to 2000 I have a percentage (ranging from 5 to 30%).
I want one axis to be the year and the other to be the percentage. Next I'll need to show the same graph with a shockingly steep increase, probably by starting the percentages at zero. I am able to get the line graph with the information but not with the corresponding years. Thanks for any help.

So I can't seem to find a way to do this, but I've heard before that you can link data labels to other data than the chart is showing. For instance, I am comparing actual data to forecasted data. I want the data label to show the variance between the two in percentage form:

Forecast = 200
Actual = 210
Variance = (210 - 200) / 200 = 5%

I have a 3D Clustered Column which shows the Actual and Forecast. The only remaining part is the Variance that I want above those two columns. Does anybody know how to do this?
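Two of the questions earlier in this thread are really small arithmetic questions and can be sanity-checked outside Excel: the quarterly-target post compares the average of monthly percentages against the ratio of sums, and the "my percentages total 101%" post is the classic rounding problem, fixable with the largest-remainder method. A sketch in Python (the function name and structure are mine; the numbers are the posters'):

```python
import math

# 1) Quarterly percentage: average of ratios vs. ratio of sums.
targets = [4, 5, 6]   # Jan, Feb, March targets
actuals = [5, 3, 6]   # Jan, Feb, March actuals

avg_of_ratios = sum(a / t for a, t in zip(actuals, targets)) / len(targets)  # 0.95
ratio_of_sums = sum(actuals) / sum(targets)  # 14/15, about 93.33%

# 2) Whole-number percentages that always total 100 (largest-remainder method).
def round_to_100(values):
    """Round each value's share of sum(values) to whole percentages
    that sum to exactly 100."""
    total = sum(values)
    exact = [100.0 * v / total for v in values]
    floors = [math.floor(e) for e in exact]
    shortfall = 100 - sum(floors)
    # Hand the missing points to the entries with the largest remainders.
    order = sorted(range(len(values)),
                   key=lambda i: exact[i] - floors[i], reverse=True)
    for i in order[:shortfall]:
        floors[i] += 1
    return floors

shares = round_to_100([12, 105, 44, 31, 8])  # sums to 100 instead of 101
```

The two quarterly figures only agree when every month has the same target; the ratio of sums weights each month by its target. In Excel, the largest-remainder adjustment would need helper columns (exact share, FLOOR, remainder rank); as far as I know there is no single built-in function for it.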
What is the center of Qcoh(X)?

The center of a category $C$ is the monoid $Z(C)=\text{End}_{C^C}(1_C)$. Thus it consists of all families of endomorphisms $M \to M$ of objects $M \in C$ such that for every morphism $M \to N$ the resulting diagram commutes. If $C$ is an $Ab$-category, this is actually a ring. For example, the center of $\text{Mod}(A)$ is the center of $A$, if $A$ is a (noncommutative) ring.

Now my question is: What is the center of $\text{Qcoh}(X)$, where $X$ is a scheme?

Observe that there is a natural map $\Gamma(\mathcal{O}_X,X) \to Z(\text{Qcoh}(X))$: a global section is mapped to the endomorphisms of the quasi-coherent modules which are given by multiplication with this section. Also, there is a natural map $Z(\text{Qcoh}(X)) \to \Gamma(\mathcal{O}_X,X)$, which takes a compatible family of endomorphisms to the image of the global section $1$ of $\mathcal{O}_X$. The composite $\Gamma(\mathcal{O}_X,X) \to Z(\text{Qcoh}(X)) \to \Gamma(\mathcal{O}_X,X)$ is the identity, but what about the other composite? If $X$ is affine, it also turns out to be the identity.

At the end of his thesis about the Reconstruction Theorem, Gabriel proves that $\Gamma(\mathcal{O}_X,X) \to Z(\text{Qcoh}(X))$ is an isomorphism if $X$ is a noetherian scheme (using recollements of localizing subcategories). I'm pretty sure that the proof just uses that $X$ is quasi-compact and quasi-separated. Now what about the general case?

Note that this is about the reconstruction of the structure sheaf of $X$. Since Rosenberg generalized this to arbitrary schemes, it is tempting to look at his proof. But if I understand correctly, Rosenberg uses a structure sheaf on the spectrum of an abelian category which avoids the above problems and uses $Z(\text{Mod}(X))=\Gamma(\mathcal{O}_X,X)$, which is certainly true (use extensions by zero). But I'm not sure, because Rosenberg refers to a proof step (a4) which is not there ...

Edit: Angelo has proven it below if $X$ is quasi-separated. Now what happens if $X$ is not quasi-separated?

Tags: homological-algebra, ag.algebraic-geometry
Edit: Angelo has proven it below if $X$ is quasi-separated. Now what happens if $X$ is not quasi-separated? homological-algebra ag.algebraic-geometry add comment 2 Answers active oldest votes In the case of a quasi-separated scheme, the center of the category of quasi-coherent sheaves is $\mathcal O(X)$. Suppose that $f$ is in the center. Let $a \in \mathcal O(X)$ be the scalar that describes the action of $f$ on $\mathcal O_X$; it is enough to show that if $a = 0$ then $f = 0$. Suppose that $M$ is a quasi-coherent sheaf, and that $s$ is a section of $M$ over an up vote open subscheme $U$ of $X$; we need to show that $f_M(s) = 0$. Call $j\colon U \to X$ the embedding; then $j$ is quasi-compact, because $X$ is quasi-separated, so $\overline M := j_*(M\mid_U) 11 down $ is quasi-coherent. The adjuntion map $M \to \overline M$ induces an isomophism $M(U) \simeq \overline M(U)$. Call $\overline s$ the image of $s$ in $\overline M$; is enough to show that vote $f_{\overline M}\overline s = 0$. But $\overline s$ extends to all of $X$, so it in the image of a map $\mathcal O_X \to \overline M$, and the thesis follows. There is no such natural map $M \to \overline{M}$, since it is not compatible with respect to the restriction of $X$ to $U$. – Martin Brandenburg Oct 11 '10 at 15:47 Does this version work? – Angelo Oct 11 '10 at 16:01 Yes, of course. I wonder why I have not seen this, because I already started with $\overline{M}$ before asking the question here ... – Martin Brandenburg Oct 11 '10 at 17:05 Being a bit picky, I think in this argument we need U to be quasi-compact. Otherwise I don't see why $$U\rightarrow X$$ is quasi-compact. – Yuhao Huang Oct 25 '11 at 5:38 @Yuhao Huang: right, but it suffices to show that $f_M(s_{\vert V})=0$ whenever $V\subset U$ is affine (because $f_M(s_{\vert V})=f_M(s)_{\vert V}$). 
– Laurent Moret-Bailly Oct 25 '11

I've surely misunderstood your question or made a simple mistake, but it seems to me that if $X$ is allowed to vary then $Z({\rm QCoh}(X))$ is a sheaf in the Zariski topology---call it $\underline{Z}_X$---and there is a morphism of sheaves $\mathcal{O}_X \rightarrow \underline{Z}_X$. It's an isomorphism in general because it's an isomorphism when $X$ is affine.

Edit: Martin's comment explains the simple mistake I made above, but I think it might be possible to salvage the argument. I'm a little nervous about this though...

Let $A_X$ be the smallest additive subcategory of the category of $\mathcal{O}_X$-modules that contains ${\rm QCoh}(X)$ and is closed under arbitrary products and kernels. Any endomorphism of the identity functor of ${\rm QCoh}(X)$ extends uniquely to an endomorphism of the identity functor of $A_X$ (since product and kernel are functors). Therefore $Z(A_X) = Z({\rm QCoh}(X))$.

Since $f^{-1}$ commutes with arbitrary products and kernels, the categories $A_X$ for varying $X$ form a fibered category (actually a stack) over the Zariski site of $X$. If $f : U \rightarrow X$ and $G \in A_U$ then $f_\ast G \in A_X$ (see the proof that the pushforward of a quasi-coherent sheaf under a quasi-compact, quasi-separated morphism is quasi-coherent in EGA (1971) I.6.7.1, taking into account that $f_\ast$ commutes with arbitrary inverse limits). If $U$ is an open embedding, this implies that $f^\ast : A_X \rightarrow A_U$ is essentially surjective (since $f^\ast f_\ast G = G$). Therefore any endomorphism of the identity functor of $A_X$ can be restricted to an endomorphism of the identity functor of $A_U$. Therefore as $X$ varies, $Z(A_X) = Z({\rm QCoh}(X))$ forms a sheaf in the Zariski topology.

Edit 2: I have to argue that $f_\ast$ carries $A_U$ into $A_X$. I'm mimicking the argument from EGA here, but I can't see Martin's objection.
We can assume that $X$ is affine (since $A$ is a stack). If $f$ is affine, this is because $f_\ast$ commutes with arbitrary inverse limits and $f_\ast$ carries ${\rm QCoh}(U)$ into ${\rm QCoh}(X)$.

Let $U_i$ be a cover of $U$ by open affines, and let $U_{ij}$ be the pairwise intersections. Let $f_i$ and $f_{ij}$ be the restrictions of $f$. Let $G_i = {f_i}_\ast G \big|_{U_i}$ and let $G_{ij} = {f_{ij}}_\ast G \big|_{U_{ij}}$. Then $f_\ast G = \ker( \prod_i G_i \rightarrow \prod_{ij} G_{ij} )$ and the $G_i$ are in $A_X$ because the $U_i \rightarrow X$ are affine. If the $G_{ij}$ are also in $A_X$ then so is $f_\ast G$. This will be the case if $f$ is separated, since then the $U_{ij}$ will be affine. But this implies the general case because the $U_{ij}$ will be quasi-affine, hence separated, over $X$, so $G_{ij} = {f_{ij}}_\ast G \big|_{U_{ij}}$ will be in $A_X$ by the case mentioned above.

No, the restriction functor does not have to induce a homomorphism between the centers. This is only the case if the inclusion is quasi-compact (and this has been dealt with before). However, look at mathoverflow.net/questions/38009/… for a context where this actually does make sense. – Martin Brandenburg Oct 14 '10 at 20:19

Nice construction. I've tried to check the details. It is clear that $Z(A_X) \to Z(Qcoh(X))$ is injective, and it is actually clear how to define the inverse map. But is it true that this will be well-defined? The endomorphism of a product will be the product of the endomorphisms, but how do we show that these endomorphisms are natural? I think we can repair it when we also make $A_X$ closed under subobjects, but then other problems arise ... What do you think? – Martin Brandenburg Oct 18 '10 at 9:44

Also there are problems proving that the direct image $A_U \to A_X$ is well-defined. I can reduce it to the case that $U$ is affine. We cannot copy the EGA proof. – Martin Brandenburg Oct 18 '10 at 16:31

I don't see the problem with the EGA argument, so I'm copying it in above.
Can you point out my mistake? Checking naturality seems like a serious problem. How would having subobjects inside $A_X$ help? – Jonathan Wise Oct 19 '10 at 7:19

The case $X$ affine is clear to me, I should have remarked that. The problem is that in order to show that $A$ is a stack, I actually need the claim about direct images. You seem to use another argument. Which one? – Martin Brandenburg Oct 19 '10 at 16:01
13.2.5 Natural Convection and Buoyancy-Driven Flows

When heat is added to a fluid and the fluid density varies with temperature, a flow can be induced due to the force of gravity acting on the density variations. Such buoyancy-driven flows are termed natural-convection (or mixed-convection) flows and can be modeled by FLUENT.

The importance of buoyancy forces in a mixed convection flow can be measured by the ratio of the Grashof and Reynolds numbers:

Gr/Re^2 = g β ΔT L / v^2

where g is the gravitational acceleration, β is the thermal expansion coefficient, ΔT is the temperature difference, L is the length scale, and v is the velocity scale. When this number approaches or exceeds unity, you should expect strong buoyancy contributions to the flow. Conversely, if it is very small, buoyancy forces may be ignored in your simulation. In pure natural convection, the strength of the buoyancy-induced flow is measured by the Rayleigh number:

Ra = g β ΔT L^3 ρ / (μ α)

where μ is the dynamic viscosity and α is the thermal diffusivity. Rayleigh numbers less than 10^8 indicate a buoyancy-induced laminar flow, with transition to turbulence occurring over the range 10^8 < Ra < 10^10.

Modeling Natural Convection in a Closed Domain

When you model natural convection inside a closed domain, the solution will depend on the mass inside the domain. Since this mass will not be known unless the density is known, you must model the flow in one of the following ways:

• Perform a transient calculation. In this approach, the initial density will be computed from the initial pressure and temperature, so the initial mass is known. As the solution progresses over time, this mass will be properly conserved. If the temperature differences in your domain are large, you must follow this approach.
• Perform a steady-state calculation using the Boussinesq model (described in Section 13.2.5). In this approach, you will specify a constant density, so the mass is properly specified. This approach is valid only if the temperature differences in the domain are small; if not, you must use the transient approach.

For a closed domain, you can use the incompressible ideal gas law only with a fixed operating pressure. It cannot be used with a floating operating pressure. You can use the compressible ideal gas law with either floating or fixed operating pressure.
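As a rough numerical illustration of the Grashof/Rayleigh criteria above (this is not FLUENT code; the function names are my own, and the symbols follow the definitions given in the text):

```cpp
// Illustrative helpers for the dimensionless criteria above (not FLUENT code).
// Symbols: g gravity, beta thermal expansion coefficient, dT temperature
// difference, L length scale, U velocity scale, nu kinematic viscosity,
// alpha thermal diffusivity.

// Gr/Re^2 = g*beta*dT*L / U^2 measures the importance of buoyancy
// relative to inertia in a mixed-convection flow.
double grashof_over_re2(double g, double beta, double dT, double L, double U) {
    return g * beta * dT * L / (U * U);
}

// Rayleigh number Ra = g*beta*dT*L^3 / (nu*alpha)
// (equivalently g*beta*dT*L^3*rho / (mu*alpha), since nu = mu/rho).
double rayleigh(double g, double beta, double dT, double L,
                double nu, double alpha) {
    return g * beta * dT * L * L * L / (nu * alpha);
}

// Buoyancy contributions are strong when Gr/Re^2 approaches or exceeds unity.
bool buoyancy_significant(double gr_over_re2) {
    return gr_over_re2 >= 1.0;
}
```

For example, for air-like properties, a 10 K difference over a 1 m scale at a 0.1 m/s velocity gives Gr/Re^2 well above unity, while the same ΔT over a 0.1 m scale yields a Rayleigh number far below the 10^8 laminar threshold.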
See Section 9.6.4 for more information about the floating operating pressure option.

The Boussinesq Model

For many natural-convection flows, you can get faster convergence with the Boussinesq model than you can get by setting up the problem with fluid density as a function of temperature. This model treats density as a constant value in all solved equations, except for the buoyancy term in the momentum equation:

(ρ - ρ0) g ≈ -ρ0 β (T - T0) g     (13.2-18)

where ρ0 is the (constant) density of the flow, T0 is the operating temperature, and β is the thermal expansion coefficient. Equation 13.2-18 is obtained by using the Boussinesq approximation ρ = ρ0 (1 - β ΔT) to eliminate ρ from the buoyancy term. This approximation is accurate as long as changes in actual density are small; specifically, it is valid when β (T - T0) << 1.

Limitations of the Boussinesq Model

The Boussinesq model should not be used if the temperature differences in the domain are large. In addition, it cannot be used with species calculations, combustion, or reacting flows.

Steps in Solving Buoyancy-Driven Flow Problems

The procedure for including buoyancy forces in the simulation of mixed or natural convection flows is described below.

1. Activate the calculation of heat transfer. Define → Models → Energy...
2. Define the operating conditions. Define → Operating Conditions
3. Define the boundary conditions. Define → Boundary Conditions...
4. Set the parameters that control the solution. Solve → Controls → Solution...

See also Section 13.2.2 for information on setting up heat transfer calculations.

Operating Density

When the Boussinesq approximation is not used, the operating density ρ0 appears in the body-force term in the momentum equations as (ρ - ρ0) g. This form of the body-force term follows from the redefinition of pressure in FLUENT as

p's = ps - ρ0 g x     (13.2-19)

The hydrostatic pressure in a fluid at rest is then p's = 0.

Setting the Operating Density

By default, FLUENT will compute the operating density by averaging over all cells. In some cases, you may obtain better results if you explicitly specify the operating density instead of having the solver compute it for you. For example, if you are solving a natural-convection problem with a pressure boundary, it is important to understand that the pressure you are specifying is p's in Equation 13.2-19.
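The Boussinesq buoyancy term (Equation 13.2-18) can be sketched in the same spirit; the function names and the validity cutoff below are illustrative assumptions on my part, not FLUENT settings:

```cpp
// Rough sketch of the Boussinesq buoyancy term (not FLUENT code; names mine).
// The body force (rho - rho0)*g is approximated by -rho0*beta*(T - T0)*g,
// with rho0 the constant reference density, T0 the operating temperature,
// and beta the thermal expansion coefficient.
double boussinesq_body_force(double rho0, double beta, double T, double T0,
                             double g) {
    return -rho0 * beta * (T - T0) * g;
}

// The approximation is accurate only while beta*(T - T0) remains much less
// than 1; `tol` is an illustrative cutoff, not a FLUENT parameter.
bool boussinesq_valid(double beta, double T, double T0, double tol) {
    double x = beta * (T - T0);
    if (x < 0) x = -x;
    return x < tol;
}
```

For water-like values (ρ0 = 1000 kg/m^3, β = 2e-4 1/K) a 10 K excess over the operating temperature gives a downward body force of about -19.6 N/m^3, and β(T - T0) = 0.002 is comfortably small.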
Although you will know the actual pressure ps, you will need to know the operating density in order to determine p's from ps; therefore, you should explicitly specify the operating density rather than use the computed average.

In some cases the specification of an operating density will improve convergence behavior, rather than the actual results. For such cases use the approximate bulk density value as the operating density and be sure that the value you choose is appropriate for the characteristic temperature in the domain. Note that if you are using the Boussinesq approximation for all fluid materials, the operating density does not appear in the body-force term of the momentum equation, and consequently you need not specify it.

Solution Strategies for Buoyancy-Driven Flows

For high-Rayleigh-number flows you may want to consider the solution guidelines below. In addition, the guidelines presented in Section 13.2.3 for solving other heat transfer problems can also be applied to buoyancy-driven flows. Note, however, that no steady-state solution exists for some laminar, high-Rayleigh-number flows.

Guidelines for Solving High-Rayleigh-Number Flows

When you are solving a high-Rayleigh-number flow (Ra > 10^8), you should follow one of the two procedures outlined below.

The first procedure uses a steady-state approach:

1. Start the solution with a lower value of Rayleigh number (e.g., 10^7) and run it to convergence using the first-order scheme.
2. To change the effective Rayleigh number, change the value of gravitational acceleration (e.g., from 9.8 to 0.098 to reduce the Rayleigh number by two orders of magnitude).
3. Use the resulting data file as an initial guess for the higher Rayleigh number and start the higher-Rayleigh-number solution using the first-order scheme.
4. After you obtain a solution with the first-order scheme you may continue the calculation with a higher-order scheme.

The second procedure uses a time-dependent approach to obtain a steady-state solution [139]:

1. Start the solution from a steady-state solution obtained for the same or a lower Rayleigh number.
2. Estimate the time constant of the flow as described in [32], and choose a time step that resolves it; a much larger time step will fail to capture the transient behavior.
3. After the oscillations (whose typical frequency is given by Equation 13.2-21) have decayed, the solution approaches steady state.

Postprocessing Buoyancy-Driven Flows

The postprocessing reports of interest for buoyancy-driven flows are the same as for other heat transfer calculations.
See Section 13.2.4 for details.

© Fluent Inc. 2006-09-20
How to Tell Military Time

Edited by Shadow dude, Carolyn Barratt, Davjohn, Tipsy and 7 others

The twenty-four-hour clock is not only used by the military; it's standard practice in many countries outside of North America. However, since it's rarely used outside the military in North America, the twenty-four-hour clock has come to be known as "military time." If you want to know how to tell military time, just follow these easy steps.

1. Understand the military clock. The military clock starts at midnight, known as 0000 hours. This is called "Zero Hundred Hours." Instead of having a twelve-hour clock that resets twice, in military time you work with one clock that starts with 0000 at midnight and runs all the way until 2359 hours (11:59 p.m.), when it resets to 0000 hours at midnight again. Note that the military clock does not use a colon to separate the hours and minutes.
□ For example, while 1 a.m. is 0100 hours, 1 p.m. is 1300 hours.
□ Contrary to popular belief, the military does not also call midnight 2400 hours, or "Twenty-Four Hundred Hours."
2. Learn how to write the hours from midnight until noon in military time. To write the hours from midnight until noon in military time, just add a zero before the hour and two zeros after it. 1 a.m. is 0100 hours, 2 a.m. is 0200 hours, 3 a.m. is 0300 hours, and so on. When you reach the two-digit hours, 10 a.m. and 11 a.m., just write 1000 hours for 10 a.m. and 1100 hours for 11 a.m. Here are a few more examples:
□ 4 a.m. is 0400 hours.
□ 5 a.m. is 0500 hours.
□ 6 a.m. is 0600 hours.
□ 7 a.m. is 0700 hours.
□ 8 a.m. is 0800 hours.
3. Learn how to write the hours from noon until midnight in military time. Things get a bit trickier as the hours ascend from noon until midnight. In military time, you don't start a new twelve-hour cycle after noon; you continue to count beyond 1200 instead. Therefore, 1 p.m. becomes 1300 hours, 2 p.m. becomes 1400 hours, 3 p.m.
becomes 1500 hours, and so on. This continues until midnight, when the clock resets. Here are a few more examples:
□ 4 p.m. is 1600 hours.
□ 5 p.m. is 1700 hours.
□ 6 p.m. is 1800 hours.
□ 10 p.m. is 2200 hours.
□ 11 p.m. is 2300 hours.
4. Learn how to say the hours in military time. If you're dealing with whole hours without any minutes, saying them aloud is easy. If the first digit is a zero, say it as "Zero," then say the next digit, followed by "Hundred Hours." If the first digit is a 1 or 2, say the first two digits together as a single two-digit number, followed by "Hundred Hours." Here are some examples:
□ 0100 hours is "Zero One Hundred Hours."
□ 0200 hours is "Zero Two Hundred Hours."
□ 0300 hours is "Zero Three Hundred Hours."
□ 1100 hours is "Eleven Hundred Hours."
□ 2300 hours is "Twenty-Three Hundred Hours."
☆ Note that in the military, "Zero" is always used to signify the zero digit in front of a number. "Oh" is used more casually.
☆ Note that using "hours" is optional.
5. Learn how to say hours and minutes in military time. Saying the time in military lingo is a bit trickier when you're dealing with hours and minutes, but you can quickly get the hang of it. When you tell military time, state the four-digit number as two pairs of two-digit numbers. For example, 1545 becomes "Fifteen Forty-Five Hours." Here are some more rules for this process:
□ If there are one or more zeros at the front of the number, say them. 0003 is "Zero Zero Zero Three Hours" and 0215 is "Zero Two Fifteen Hours."
□ If there are no zeros in the first two digits of the number, then just say the first two digits as a pair, and do the same with the last two digits. 1234 becomes "Twelve Thirty-Four Hours" and 1444 becomes "Fourteen Forty-Four Hours."
□ If the last number ends in zero, just think of it as the ones unit paired with the tens digit to its left.
Therefore, 0130 is "Zero One Thirty."
6. Learn to convert from military time to regular time. Once you know how to write and say military time, you can become a pro at converting from military to regular time. If you see a number greater than 1200, you've reached the afternoon hours, so just subtract 1200 from that number to get the time on the 12-hour clock. For example, 1400 hours is 2 p.m. in standard time, because you get 200 when you subtract 1200 from 1400. 2000 hours is 8 p.m. because when you subtract 1200 from 2000, you get 800.
□ If you're looking at a time less than 1200, then you know you're working with the hours from midnight until noon. Simply use the first two digits to get the a.m. hour and the last two digits to get the minutes.
☆ For example, 0950 hours means 9 hours and 50 minutes, or 9:50 a.m. 1130 hours means 11 hours and 30 minutes, or 11:30 a.m.
• You can subtract 12 from any value 12 or higher to give you the actual time in standard time. Example: 21:00 - 12 = 9:00 PM
• The more you practice telling military time, the easier it will be.
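The conversion rule in step 6 is simple enough to sketch in code (a small illustrative helper; the function name and formatting choices are my own):

```cpp
#include <cstdio>
#include <string>

// Convert a 24-hour ("military") time given as an integer HHMM in [0, 2359]
// to a 12-hour clock string, e.g. 1400 -> "2:00 p.m.", 950 -> "9:50 a.m.".
std::string military_to_standard(int hhmm) {
    int hour = hhmm / 100;    // first two digits: the hour
    int minute = hhmm % 100;  // last two digits: the minutes
    const char* suffix = (hour < 12) ? " a.m." : " p.m.";
    int h12 = hour % 12;
    if (h12 == 0) h12 = 12;   // 0000 -> 12:00 a.m., 1200 -> 12:00 p.m.
    char buf[16];
    std::snprintf(buf, sizeof buf, "%d:%02d%s", h12, minute, suffix);
    return std::string(buf);
}
```

So `military_to_standard(1400)` yields "2:00 p.m." and `military_to_standard(950)` yields "9:50 a.m.", matching the worked examples above.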
How to find vector for the quaternion from X Y Z rotations

I am creating a very simple project on OpenGL and I'm stuck with rotations. I am trying to rotate an object independently on all 3 axes: X, Y, and Z. I've had sleepless nights due to the "gimbal lock" problem after rotating about one axis. I've then learned that quaternions would solve my problem. I've researched quaternions and implemented them, but I haven't been able to convert my rotations to quaternions. For example, if I want to rotate around the Z axis 90 degrees, I just create the {0,0,1} vector for my quaternion and rotate it around that axis 90 degrees using the code here: http://iphonedevelopment.blogspot.com/2009/06/opengl-es-from-ground-up-part-7_04.html (the most complicated matrix towards the bottom)

That's ok for one vector, but, say, I first want to rotate 90 degrees around Z, then 90 degrees around X (just as an example). What vector do I need to pass in? How do I calculate that vector? I am not good with matrices and trigonometry (I know the basics and the general rules, but I'm just not a whiz) but I need to get this done. There are LOTS of tutorials about quaternions, but I seem to understand none (or they don't answer my question). I just need to learn to construct the vector for rotations around more than one axis combined.
UPDATE: I've found this nice page about quaternions and decided to implement them this way: http://www.cprogramming.com/tutorial/3d/quaternions.html

Here is my code for quaternion multiplication:

    void cube::quatmul(float* q1, float* q2, float* resultRef){
        float w = q1[0]*q2[0] - q1[1]*q2[1] - q1[2]*q2[2] - q1[3]*q2[3];
        float x = q1[0]*q2[1] + q1[1]*q2[0] + q1[2]*q2[3] - q1[3]*q2[2];
        float y = q1[0]*q2[2] - q1[1]*q2[3] + q1[2]*q2[0] + q1[3]*q2[1];
        float z = q1[0]*q2[3] + q1[1]*q2[2] - q1[2]*q2[1] + q1[3]*q2[0];

        resultRef[0] = w;
        resultRef[1] = x;
        resultRef[2] = y;
        resultRef[3] = z;
    }

Here is my code for applying a quaternion to my modelview matrix (I have a tmodelview variable that is my target modelview matrix):

    void cube::applyquat(){
        float& x = quaternion[1];
        float& y = quaternion[2];
        float& z = quaternion[3];
        float& w = quaternion[0];

        float magnitude = sqrtf(w * w + x * x + y * y + z * z);
        if(magnitude == 0){
            x = 1;
            w = y = z = 0;
        }
        if(magnitude != 1){
            x /= magnitude;
            y /= magnitude;
            z /= magnitude;
            w /= magnitude;
        }

        tmodelview[0] = 1 - (2 * y * y) - (2 * z * z);
        tmodelview[1] = 2 * x * y + 2 * w * z;
        tmodelview[2] = 2 * x * z - 2 * w * y;
        tmodelview[3] = 0;
        tmodelview[4] = 2 * x * y - 2 * w * z;
        tmodelview[5] = 1 - (2 * x * x) - (2 * z * z);
        tmodelview[6] = 2 * y * z - 2 * w * x;
        tmodelview[7] = 0;
        tmodelview[8] = 2 * x * z + 2 * w * y;
        tmodelview[9] = 2 * y * z + 2 * w * x;
        tmodelview[10] = 1 - (2 * x * x) - (2 * y * y);
        tmodelview[11] = 0;

        glGetFloatv(GL_MODELVIEW_MATRIX, tmodelview);
    }

And my code for rotation (that I call externally), where quaternion is a class variable of the cube:

    void cube::rotatex(int angle){
        float quat[4];
        float ang = angle * PI / 180.0;
        quat[0] = cosf(ang / 2);
        quat[1] = sinf(ang/2);
        quat[2] = 0;
        quat[3] = 0;
        quatmul(quat, quaternion, quaternion);
    }

    void cube::rotatey(int angle){
        float quat[4];
        float ang = angle * PI / 180.0;
        quat[0] = cosf(ang / 2);
        quat[1] = 0;
        quat[2] = sinf(ang/2);
        quat[3] = 0;
        quatmul(quat, quaternion, quaternion);
    }

    void
cube::rotatez(int angle){
        float quat[4];
        float ang = angle * PI / 180.0;
        quat[0] = cosf(ang / 2);
        quat[1] = 0;
        quat[2] = 0;
        quat[3] = sinf(ang/2);
        quatmul(quat, quaternion, quaternion);
    }

I call rotatex, say, 10-11 times with 1 degree each time, but my cube gets rotated almost 90 degrees after those 10-11 calls of 1 degree, which doesn't make sense. Also, after calling rotation functions on different axes, my cube gets skewed, becomes 2-dimensional, and disappears (a column in the modelview matrix becomes all zeros) irreversibly, which obviously shouldn't happen with a correct implementation of quaternions.

Tags: c++, opengl, matrix, quaternion

Do you want help with quaternions, or with rotating an object? – Beta Mar 25 '12 at 20:52

Well, I've gone with the quaternion path and wrote code, so it'd be helpful if you have a quaternion-involving solution for rotation. What I simply need to implement is the ability to rotate an object with 3 degrees of freedom in 3D space. – Can Poyrazoğlu Mar 25 '12 at 20:54

Maybe the book "Quaternions and Rotation Sequences" by Jack Kuipers may help you. There is a lot to say about X-Y-Z rotations (aka Euler angles). For example, are you rotating around the old X axis or the new one (the axis 'attached' to the object you are rotating)? – ascobol Mar 25 '12 at 21:20

I need to rotate around world coordinates, not the attached ones (else it would be easy with regular rotations/Euler angles). – Can Poyrazoğlu Mar 25 '12 at 21:22

1 Answer
That is, it's possible to take a current orientation as a quaternion and then ask the question "how do I rotate that around the Z axis by 90 degrees and use that as my new orientation?", but it's not useful to ask "my current orientation is defined by these X-Y-Z Euler angles, how do I turn that into a quaternion?". up vote 7 down vote accepted A full treatment of the relevant parts of quaternions would be pretty lengthy. This site may help you out. It's worth noting that the site you linked to appears to really be talking about axis-angle rotations, not quaternions. Edit: The code you posted is correct except that the signs for tmodelview[6] and tmodelview[9] are wrong. thanks. I understand that I need to be storing my orientation (hence, a quaternion), and forget about angles in that one. your link is quite helpful, but one thing I still don't understand is that the question that you are talking about: "how do I rotate that around the Z axis by 90 degrees and use that as my new orientation?" – Can Poyrazoğlu Mar 25 '12 at 21:54 2 Quaternion multiplication. If you have a quaternion q that refers to your current orientation, and a quaternion r that is a rotation around the Z axis by 90 degrees, then the quaternion r*q is your new orientation. An exact parallel to matrix multiplication, really. – John Calsbeek Mar 25 '12 at 21:57 i'll be looking into that – Can Poyrazoğlu Mar 25 '12 at 22:59 i've done the multiplication using the link here cprogramming.com/tutorial/3d/quaternions.html but as soon as i try to rotate on another axis, i lose a dimension (a column becomes all zero) and the object becomes a 2d plane, irreversibly. – Can Poyrazoğlu Mar 26 '12 at 22:31 There's little to no way to debug that without seeing the code. – John Calsbeek Mar 26 '12 at 22:52 show 3 more comments Not the answer you're looking for? Browse other questions tagged c++ opengl matrix quaternion or ask your own question.
Tractable Tree Convex Constraint Networks

Yuanlin Zhang and Eugene C. Freuder

A binary constraint network is tree convex if we can construct a tree for the domain of the variables so that, for any constraint, no matter what value one variable takes, all the values allowed for the other variable form a subtree of the constructed tree. It is known that a tree convex network is globally consistent if it is path consistent. However, if a tree convex network is not path consistent, enforcing path consistency on it may not make it globally consistent. In this paper, we identify a subclass of tree convex networks which are locally chain convex and union closed. This class of problems can be made globally consistent by path consistency and is thus tractable. More interestingly, we also find that some scene labeling problems can be modeled by tree convex constraints in a natural and meaningful way.
[SOLVED] Pointwise and Uniform Convergence of x^n e^{-nx}

January 25th 2010, 03:53 PM

There is a thread similar to this already in existence, but it is quite old and it didn't help me very much. I need to determine the convergence of $\lim_{n \to \infty} x^n e^{-nx}$ for $x \in [0,1]$ and for $x \in [0, +\infty)$.

Trying to use l'Hopital's Rule, I get

$\lim_{n \to \infty} x^n e^{-nx} = \lim_{n \to \infty} \frac{x^n}{e^{nx}}$

$\lim_{n \to \infty} \frac{x^n}{e^{nx}} \stackrel{H}{=} \lim_{n \to \infty} \frac{\log(x)\, x^{n-1}}{e^{nx}}$

Using l'Hopital $n$ times results in

$\lim_{n \to \infty} \frac{x^n}{e^{nx}} \stackrel{H}{=} \lim_{n \to \infty} \frac{(\log x)^n}{e^{nx}}$

However, I am not sure this even makes sense, because $f_n$ is not continuous with respect to $n$. So, what am I supposed to do?

January 25th 2010, 07:12 PM — Prove It

Is this a series or a sequence?

January 25th 2010, 07:15 PM

It is a sequence. What I have to do is determine pointwise and uniform convergence on those intervals. I can get the uniform convergence once I get the pointwise convergence, but I can't get past that.

January 25th 2010, 08:06 PM

It is clearly pointwise convergent. Now, do you think it's uniformly convergent? Will your choice of $N$ in the definition of convergence depend on $x$?

January 26th 2010, 05:09 AM

Like I said, I don't need help with the uniform convergence. If it is clearly pointwise convergent then it should be easy to prove, however I am unable to figure out how.

January 26th 2010, 06:54 AM

So, I have a bit of an idea, but can't seem to figure it out. Writing the sequence as

$\lim_{n \to \infty} \left(\frac{x}{e^x}\right)^n$,

this will converge to zero if $\frac{x}{e^x} < 1$, will converge to one if $\frac{x}{e^x} = 1$, and diverges if $\frac{x}{e^x} > 1$.

January 26th 2010, 07:03 PM

Consider the function $f(x)=\ln (x) - x$. Then $f'(x)= \frac{1}{x} - 1$, and thus $f$ is increasing on $(0,1)$ and decreasing on $(1, \infty)$, so $f(x)\leq f(1)=-1$ for all $x\in (0, \infty)$. Now consider $g_y(x)= x^y e^{-yx}=e^{y(\ln (x) - x)}$ with $y>0$; then $0\leq |g_y(x)| \leq e^{-y} \rightarrow 0$ as $y\rightarrow \infty$. This proves both pointwise and uniform convergence on $[0,\infty)$.
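The bound in the final post — $x e^{-x} \le e^{-1}$ for all $x \ge 0$, hence $0 \le x^n e^{-nx} \le e^{-n}$ independently of $x$ — is also easy to sanity-check numerically. A small sketch (the helper names are my own, and a grid search is of course a check, not a proof):

```cpp
#include <cmath>

// Numerical sanity check of the bound used above: g(x) = x * exp(-x)
// attains its maximum e^{-1} at x = 1, so x^n e^{-nx} = g(x)^n <= e^{-n},
// which tends to 0 uniformly on [0, infinity).
double g(double x) { return x * std::exp(-x); }

// Coarse grid maximum of g over [0, hi] (illustrative check only).
double grid_max(double hi, int steps) {
    double best = 0.0;
    for (int i = 0; i <= steps; ++i) {
        double v = g(hi * i / steps);
        if (v > best) best = v;
    }
    return best;
}
```

On a grid over [0, 10] the maximum of g lands at x = 1 with value e^{-1} ≈ 0.3679, so for any fixed n the supremum of x^n e^{-nx} is at most e^{-n}, matching the uniform-convergence argument.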
Brane trust: tunneling and stringy physics “As is often the case in science, everybody contributes their piece, forming a complete picture only after years hard work,” Amanda Weltman tells PhysOrg.com. Weltman, a scientist at the University of Cape Town and at Cambridge University, believes that she and her collaborators have found another piece of that puzzle, especially with regard to string theory. “We took a tool developed in quantum field theory and adapted it to study stringy physics.” Weltman and her colleagues, Adam Brown and Saswat Sarangi at Columbia University, and Benjamin Shlaer at the University of Colorado, explain their work in a piece titled “Enhanced Brane Tunneling and Instanton Wrinkles,” published in Physical Review Letters. Weltman says she is most interested in understanding dynamics in what is known in string theory as the landscape. “In the early years of string theory, people hoped that there would be a unique ground state describing our universe.” She describes a valley between two hills, and then points out that there are other valleys with similar low points, or vacua. “There may be a landscape of such vacua with many different ways that each can be reached,” she continues. “This realization of a whole landscape of possible values has opened new questions for the Weltman says that she and her collaborators “asked a very basic question: If the universe were in one of the myriad of such vacua then how long would it stay there?” To answer this question they “studied tunneling between different vacua, including stringy degrees of freedom.” “The concept of tunneling has been around in quantum theory for a while,” she points out. “The notion is that if you have a quantum particle hitting a wall, the probability of it appearing on the other side of the wall is non-zero, unlike its classical counterpart. 
We are now studying such tunneling in the context of strings and branes in string theory rather than just particles in quantum Keeping with the idea of the valley, Weltman explains that a field in a valley would try to get to the other side of the hill – and into another valley. “You would think that the higher the barrier, the harder it would be, much as if you tried to cycle over a hill.” She pauses. “It didn’t work that way. The higher the barrier, the easier the tunneling was.” Weltman says that using the old quantum techniques to study the more complicated landscape of string theory brought out quite a different answer than many would expect. She speaks of tunneling in terms of D-branes, which are located at the ends of strings. These branes represent the boundaries of the strings in string theory. Weltman visualizes it as one brane at either end of a length of In an email, Weltman expounds on how the brane tunneling works: “Our interpretation of this result is that rather than the field tunneling through, a new brane-antibrane pair is created on the other side, and the antibrane tunnels back and annihilates the original brane. By raising the barrier height, the nucleation of such pairs, and consequently tunneling, is enhanced. You get faster tunneling than you would naively expect.” Right now, like much of string theory, this brane tunneling concept is in its early stages. “The next thing is to study concrete models.” Weltman says it is already being worked on. “We want to be as specific as possible, and see how it changes what people have looked at before.” She continues: “In other areas of research, people are studying these vacua and looking for classes of vacua that would give us the features we expect of the standard model – the particle masses, their couplings and the correct number of generations. To explain why we are in such a vacuum state, we must understand the dynamics of string theory. The difficulty lies in getting it right. 
“It’s not easy getting everything at the same time, especially when we include the cosmological constant,” Weltman admits. “Any mechanism which drives us to small values of the cosmological constant, but keeps us from zero, would be compelling.”

Copyright 2007 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.

Oct 31, 2007

By the Aether Wave Theory, our generation of the universe is formed by an inertial environment, which was itself formed by previous generations of the universe. Such a model has even found some support in LQG theory recently (see the http://space.news.../dn12853 article). In this way, the vacuum is formed by density fluctuations of a black hole forming the previous generation of the universe. These fluctuations are kept at a distance by their surface curvature, followed by "surface tension," in the same way that tiny mercury droplets are repelled from each other. Compared with string theory, there is no need here for many mechanisms to keep such a structure conditionally stable (a low value of the cosmological constant); everything is driven by simple-to-comprehend Newtonian mechanics. So there is no need to seek a stable solution among the vacuum landscape permutations by proposing one abstract assumption after another. Despite this, I'd consider the tunneling aspect of string theory (which is quantum field theory anyway) if it would lead to some testable predictions. Can somebody propose some?
SNP set analysis for detecting disease association using exon sequence data

Rare variants are believed to play an important role in disease etiology. Recent advances in high-throughput sequencing technology enable investigators to systematically characterize the genetic effects of both common and rare variants. We introduce several approaches that simultaneously test the effects of common and rare variants within a single-nucleotide polymorphism (SNP) set based on logistic regression models and logistic kernel machine models. Gene-environment interactions and SNP-SNP interactions are also considered in some of these models. We illustrate the performance of these methods using the unrelated individuals data from Genetic Analysis Workshop 17. Three true disease genes (FLT1, PIK3C3, and KDR) were consistently selected using the proposed methods. In addition, compared to logistic regression models, the logistic kernel machine models were more powerful, presumably because they reduced the effective number of parameters through regularization. Our results also suggest that a screening step is effective in decreasing the number of false-positive findings, which is often a big concern for association studies.

High-throughput sequencing technologies have been evolving extraordinarily fast in the past few years. They have been recently applied to genome-wide association studies to study the effects of both common and rare variants. The different natures of these two types of variants call for distinct methods. For common variants, association tests based on individual SNPs are still widely used. However, such approaches suffer from multiple comparison problems and do not take into account possible interactions among variants. To overcome these limitations, analyses based on single-nucleotide polymorphism (SNP) sets have been developed to test the joint effect (either linear or nonlinear) of variants within a SNP set. For instance, Wu et al.
[1] proposed a kernel-machine-based method for association studies; this approach is flexible for modeling various interactions and nonlinear effects. Mukhopadhyay et al. [2] derived similarity scores of genotypes between pairs of individuals using a kernel and then used these scores as the response variable in an analysis of variance (ANOVA) model to establish association between genotypes and phenotypes. Such methods tend to be more powerful and flexible than individual-SNP analysis. Although many genome-wide association studies in the past focused on common variants, it is now widely believed that for complex diseases, rare variants are more likely to be functional than common variants [3]. Because rare variants usually have low marginal effects, multiple rare variants within a SNP set (e.g., a gene or a pathway) are often combined into a single variable to be used in tests for association. For example, Li and Leal [4] proposed a method for collapsing multiple rare variants into a single indicator that recorded whether or not the genome contained any rare variants for the SNP set under consideration; Madsen and Browning [5] proposed a weighted-sum score, where the weight for each variant indicator (0 for absent and 1 for present) was proportional to the inverse of its estimated standard deviation in the population. An overview of rare variant collapsing methods is provided by Dering et al. [6]. To effectively detect association signals, investigators might find it beneficial to jointly model the common and rare variants and to account for correlations among both variants. For this purpose, in this paper we introduce several methods to jointly model the common and rare variants within a SNP set. Note that throughout this paper SNPs with minor allele frequency (MAF) less than 1% are treated as rare variants and all other SNPs are treated as common variants. 
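The Madsen-Browning weighted-sum idea described above can be sketched in a few lines of Python. This is only an illustrative reading of the description, not the authors' implementation: the pseudo-counts used to smooth the allele frequency, and the choice to estimate that frequency from the unaffected group, are assumptions on my part.

```python
import math

def weighted_sum_scores(genotypes, affected):
    """Sketch of a Madsen-Browning-style weighted-sum score for rare
    variants: each variant's count is down-weighted by its estimated
    standard deviation in the population (estimated here from the
    unaffected group, with pseudo-counts to avoid zero weights).

    genotypes: list of per-individual lists of minor-allele counts (0/1/2)
    affected:  list of booleans, one per individual
    returns:   one weighted-sum score per individual
    """
    n = len(genotypes)
    p = len(genotypes[0])
    unaffected = [g for g, a in zip(genotypes, affected) if not a]
    n_u = len(unaffected)
    weights = []
    for k in range(p):
        m_u = sum(g[k] for g in unaffected)         # minor alleles among unaffected
        q = (m_u + 1) / (2 * n_u + 2)               # smoothed allele frequency
        weights.append(math.sqrt(n * q * (1 - q)))  # ~ s.d. of the allele count
    return [sum(g[k] / weights[k] for k in range(p)) for g in genotypes]

# Invented toy example: carriers of rare alleles get higher scores.
scores = weighted_sum_scores(
    [[1, 0], [0, 0], [2, 1], [0, 0]],
    [True, False, True, False],
)
```

Individuals carrying no rare alleles score zero, and carrying more (or rarer) alleles raises the score, which is the intended behavior of the collapsing approach.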
We start with logistic regression models, including gene-environment interaction terms, and derive score statistics for testing the presence of any marginal or interaction effects. We then consider logistic kernel machine models, which can incorporate both interactions among SNPs and gene-environment interactions. This type of model is an extension of the method proposed by Wu et al. [1] and Liu et al. [7]. We also introduce a summary score for combining common variants based on the idea of principal fitted components [8], which is then used to reduce the dimensionality of the logistic regression model. We then use the 200 independently simulated data sets for unrelated individuals from Genetic Analysis Workshop 17 (GAW17) [9] to illustrate these methods, where a SNP set is defined as the observed SNPs (common and rare) within a gene. We also use a two-stage procedure, consisting of a screening stage and a testing stage, when analyzing the GAW17 data. The results suggest that the kernel machine methods enjoy better power than the score tests and that the screening stage helps to reduce the number of false-positive findings.

Logistic regression models and score tests

For the ith individual (i = 1, …, n), let the response y_i be 0 if the individual is unaffected and 1 if affected. Let X_i be a q × 1 covariate vector (including an intercept term), z_i be a p × 1 vector of SNP genotypes (or summary scores) for a given gene (SNP set) under testing, and s_i be the environment covariate, which is also included in X_i. We consider the following logistic regression model with gene-environment interactions:

logit P(y_i = 1) = X_i^T β + z_i^T a + s_i z_i^T b.     (1)

The goal is to test the null hypothesis H_0: a = b = 0, and we consider the corresponding score statistic. For a detailed derivation and expression of the score statistic, see Wang et al. [10].

Logistic kernel machine models

Following Wu et al. [1] and Liu et al. [7], we now extend Eq.
(1) to a semiparametric logistic regression model:

logit P(y_i = 1) = X_i^T β + h(z_i) + s_i g(z_i),     (3)

where h(·) and g(·) belong to reproducing kernel Hilbert spaces generated by the kernels K(·, ·) and G(·, ·), and h(·) and g(·) can be estimated by maximizing a penalized likelihood (Eq. (4)). Following Liu et al. [7], the solutions to Eq. (4) have the same form as the penalized quasi-likelihood estimators from a logistic mixed model (Eq. (5)), where h_i and g_i are independent random effects. Denote τ = 1/λ; then H_0: h(·) = g(·) = 0 in Eq. (3) can be reformulated as testing the absence of the variance components, H_0: τ_h = τ_g = 0. Following Wu et al.'s [1] and Liu et al.'s [7] papers, we consider the (two-dimensional) test statistic Q*, which is based on the score statistics of the variance components; its components can be approximated by scaled chi-square distributions [7]. Finally, we construct a combined test statistic from Q*; the corresponding p-value is then computed from a chi-square approximation with υ degrees of freedom. For detailed derivations and expressions of Q*, see Wang et al. [10]. Note that when both K and G are linear kernels, Eq. (3) reduces to Eq. (1).

Summary score for common variants

For a gene with p common variants, we introduce the summary score (Eq. (9)), where I_{ik} is the number of times the kth variant is observed in the ith individual, m_k^A and m_k^U are the numbers of times the kth variant is observed among affected and unaffected individuals, respectively, and n^A and n^U are the total numbers of affected and unaffected individuals, respectively. This summary score is derived based on the idea of principal fitted components for dimension reduction [8].

Two-stage procedure

We propose a two-stage procedure to analyze the GAW17 data. In the screening stage, genes that do not show any statistical significance are filtered out. The main purpose of this stage is to achieve dimension reduction and at the same time to retain genes that are more likely to be associated with the disease. In the testing stage, we apply various methods to test the subset of genes that have passed the screening criteria. In the screening stage, both genetic effects and gene-environment interaction effects are investigated, and common and rare variants are handled differently.
Common variants are tested in the three subpopulations (Europeans, Asians, and Africans) separately, whereas rare variants are studied based on the whole population. For each gene, the genotypes of the common variants (coded 0, 1, or 2, denoting the number of minor alleles) are treated as a vector and the Hotelling T^2 test is used to test whether there is a mean difference between the affected and unaffected individuals [4]. For rare variants, weighted-sum scores [5] are derived for the synonymous and nonsynonymous groups, denoted WS[syn] and WS[nonsyn], respectively. Then a two-dimensional Hotelling T^2 test is performed based on WS[syn] and WS[nonsyn].

To test gene-environment interactions, we consider the null hypothesis Corr(G, E|Y = 0) = Corr(G, E|Y = 1). We take the difference between Fisher's z transformations of the sample correlations for the affected and unaffected groups as the test statistic. Again, instead of testing each variant individually, we use combined scores for both common variants (Eq. (9)) and rare variants (the weighted-sum score) and test gene-environment interactions for each SNP set as a whole. In addition, for rare variants, we consider only the nonsynonymous variants. In all the tests, the p-values are determined by permuting disease status (while keeping the total numbers of affected and unaffected individuals unchanged). Finally, genes are deemed to pass the screening and become candidates for the testing stage if they have (unadjusted) p-values smaller than a prespecified threshold (e.g., 0.1) for at least one of the tests.

In the testing stage, two kinds of models are considered: logistic regression models (Eq. (1)) and logistic kernel machine models (Eq. (5)). For all models, the covariates vector consists of Age, Sex, two principal component scores to account for population structure (see Results section for more details), and an environmental factor (Smoke status).
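The gene-environment screening statistic described above, the difference of Fisher z transformations of the group-wise sample correlations, can be sketched as follows. This is an illustrative sketch with invented toy data, not the authors' code.

```python
import math

def fisher_z(r):
    """Fisher's z transformation of a correlation coefficient (|r| < 1)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def pearson(x, y):
    """Sample Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def ge_stat(g, e, y):
    """Statistic for H0: Corr(G, E | Y=0) = Corr(G, E | Y=1): the
    difference of Fisher z transformations of the group-wise sample
    correlations. A p-value would then be obtained by permuting y,
    as described in the text."""
    g1 = [gi for gi, yi in zip(g, y) if yi]
    e1 = [ei for ei, yi in zip(e, y) if yi]
    g0 = [gi for gi, yi in zip(g, y) if not yi]
    e0 = [ei for ei, yi in zip(e, y) if not yi]
    return fisher_z(pearson(g1, e1)) - fisher_z(pearson(g0, e0))

# Toy data: G and E correlate positively among the affected (y = 1)
# and negatively among the unaffected (y = 0), so the statistic is large.
stat = ge_stat([0, 1, 2, 3, 0, 1, 2, 3],
               [0, 1, 2, 4, 3, 1, 2, 0],
               [1, 1, 1, 1, 0, 0, 0, 0])
```

Swapping the affected and unaffected labels flips the sign of the statistic, so a two-sided permutation test treats both directions of the difference symmetrically.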
For rare variants, we further introduce a combined weighted-sum score:

WS[combined] = WS[syn] + 2 WS[nonsyn],     (13)

where nonsynonymous variants receive more weight. For logistic regression models, we consider two different scenarios for the common variants, one using the original genotypes (referred to as logistic regression) and the other using the common score (Eq. (9)) with the weights calculated based on the corresponding screening data set (referred to as the logistic common score). In addition, WS[combined] is used for both scenarios. Finally, score statistics are calculated and the p-values are determined using theoretical chi-square distributions.

For logistic kernel machine models (Eq. (5)), the original genotypes are used for common variants. We consider two different schemes for the kernels. One uses linear kernels for both K and G; the other uses a quadratic kernel for K, which models interactions among variants, and a linear kernel for G. For the quadratic kernel case, WS[combined] is used and the method is referred to as the quadratic rare WS[combined] method. For the linear kernel case, two scenarios are considered for combining rare variants, one using WS[combined] (referred to as the linear rare WS[combined] method) and the other using WS[nonsyn] (referred to as the linear rare WS[nonsyn] method). Moreover, for the kernel machine methods, the weighted-sum scores for rare variants and the genotypes of the common variants are both standardized (to have mean 0 and standard deviation 1) before model fitting. In total, we consider five different methods in the testing stage, which are summarized in Table 1.

GAW17 data description

The GAW17 data we analyzed in this paper have 200 replicates, each consisting of data for 697 unrelated individuals. The genotypes, age, and sex of these individuals are from real studies and are kept fixed across the 200 replicates. One environmental risk factor (smoking status) and a binary disease status were simulated for each replicate [9].
Moreover, in all these replicates, the total numbers of affected and unaffected individuals are fixed at 209 and 488, respectively, which reflects the population prevalence of this disease. The 697 individuals were from seven different sources: Denver Chinese, Han Chinese, Japanese, Luhya, Yoruba, CEPH (European-descended residents of Utah), and Tuscan. Through principal components analysis on about 1,000 common variants (distance ≥ 50,000 bp) with MAF larger than 10%, the first two principal components clearly divide the sample into three distinct clusters, corresponding to Africans (Luhya and Yoruba), Asians (Chinese and Japanese), and Europeans (CEPH and Tuscan).

The genotype data consist of 24,487 SNPs from 3,205 genes on 22 autosomal chromosomes. The MAF for 74% of the SNPs is less than 1%; in our analysis, these are treated as rare variants, whereas all other SNPs are treated as common variants. Moreover, 2,208 genes contain at least one common variant, and the maximum number of common variants within a gene is 52. A total of 2,476 genes contain at least one rare variant, and the maximum number is 179. One hundred sixty-two rare variants were removed from the analysis because they appeared in only one individual. Genes with a rare variant event occurring in less than 1% of individuals were removed, and 2,534 genes were left for subsequent analysis. Genotypes are coded as 0, 1, or 2, indicating the number of minor alleles at each locus.

Results

We randomly divided the 200 simulated replicates into 100 pairs. For each pair, one data set was used for screening and the other was used for testing. Across the 100 screening data sets, if a 0.1 threshold was used, the mean number of genes passing screening was 1,307, and 8 genes (RUNX2, MUC3A, TMEM67, NIBP, AKAP2, GOLGA1, USP5, and FLT1) were selected at least 95 times. If a 0.05 threshold was used, the mean number of genes passing screening was 824 and 1 gene (FLT1) was selected 95 times.
For each pair of screening and testing data sets, genes that passed the screening step were tested using the five methods described in the Methods section. P-values were adjusted using the Holm procedure [11], which is an improvement of the Bonferroni procedure and controls the family-wise error rate. A gene was then said to be selected by a method if its corresponding adjusted p-value was less than 0.1. Throughout the 100 pairs of screening and testing data sets, if a threshold of 0.1 was used in the screening step, then four genes (FLT1, PIK3C3, KDR, and PRR4) were selected more than 10 times by at least one of the five testing methods. In contrast, if no screening was used (i.e., all 2,534 genes were passed to the testing stage), nine genes were selected more than 10 times by at least one of the five testing methods. The selection frequencies of these genes are illustrated in Figure 1.

Figure 1. Frequently selected genes and their selection frequencies. For each gene, the height of the bar represents the number of times it has been selected across the 100 screening-testing pairs.

As can be seen from Figure 1, FLT1 was selected more than 40 times using the linear rare WS[combined] method and more than 50 times using the linear rare WS[nonsyn] method. Moreover, PIK3C3 and KDR were selected about 20 times using the linear rare WS[combined] method and the quadratic rare WS[combined] method, respectively. Note that the quadratic kernel model is capable of capturing some of the SNP-SNP interaction effects, whereas the linear kernel model is not. Thus the fact that the quadratic kernel works better for KDR may imply that there are potential SNP-SNP interaction effects in this gene, which may result from the complicated disease model and/or the correlation structure among the SNPs. Compared with the kernel machine methods, the two logistic regression methods gave less consistent results in terms of gene selection across the replicates.
Furthermore, summarizing information from common variants by using the common score seemed to improve the power of the logistic regression model slightly. Gene FLT1 is on chromosome 13, and it contains 35 SNPs, of which 25 are rare variants. Applying the logistic regression model with gene-environment interaction (Eq. (1)) to the first replicate indicated that the (common) variant C13S523 was highly significantly associated with disease status (nominal p = 0.000817). This variant is nonsynonymous with a MAF of 6.7%. The weighted-sum score of the rare variants in FLT1 also showed evidence of association (nominal p = 0.0033). Gene KDR is on chromosome 4, with 14 rare variants and 2 common variants. Gene PIK3C3 has 7 variants (6 rare variants and 1 nonsynonymous common variant). It also seemed that this common variant was the reason that PIK3C3 was picked by the linear rare WS[combined] method about 20 times across the 100 screening-testing pairs.

The results were obtained without knowledge of the underlying disease model. Afterward, we examined the GAW17 simulation model [9]. It turns out that FLT1, PIK3C3, and KDR are true disease-susceptibility genes. However, other genes reported in Figure 1 were not directly related to disease status. By comparing the top and bottom panels in Figure 1, we see that the procedure with a screening step is effective in eliminating such genes. A closer look at the results reveals that these genes are mainly filtered out by the screening step. For instance, TAS2R48 was detected as a significant gene in 18 (out of 100) data pairs by the linear rare WS[combined] method when no screening was applied. However, for 15 of these 18 pairs, TAS2R48 would not pass the screening step if a 0.1 threshold was used.

In this paper, we considered SNP set analysis for detecting disease-susceptibility variants using exon sequence data. In large-scale association studies, there is often a need to combine information across variants to improve detection power.
This is especially the case for rare variants. Here, we adopted the weighted-sum score of Madsen and Browning [5] to summarize information across rare variants within each SNP set. In addition, we proposed a summary score based on principal fitted components [8] to combine information across common variants. Moreover, the large number of variants poses challenges, such as multiple comparisons and modeling various interactions. To address this issue, we extended the logistic kernel machine methods used by Wu et al. [1] and Liu et al. [7] to include gene-environment interactions. Compared to logistic regression models, the logistic kernel machine models were more powerful, estimating the degrees of freedom in a data-adaptive way by accounting for correlations among the SNPs. Thus they reduced the effective number of parameters and consequently enjoyed improvements in power. Kernel machine models also had greater flexibility in modeling interactions and nonlinearity. We also applied a two-stage procedure consisting of a screening stage and a testing stage to the GAW17 data. The results suggest that the screening stage is effective in decreasing the number of false-positive findings, which is often a big concern for association studies.

Authors’ contributions

RW carried out data analysis and participated in the development of methods. JP and PW led the development of methods. All authors participated in drafting the manuscript. All authors read and approved the final manuscript.

This work is supported by National Institutes of Health (NIH) grant R01 GM082802 from the National Institute of General Medical Sciences. The Genetic Analysis Workshop is supported by NIH grant R01 GM031575. This article has been published as part of BMC Proceedings Volume 5 Supplement 9, 2011: Genetic Analysis Workshop 17. The full contents of the supplement are available online at http://
Topic: pooled t test vs "unpooled" t test
Replies: 1   Last Post: Nov 6, 1996 10:34 AM

pooled t test vs "unpooled" t test
Posted: Nov 4, 1996 8:39 PM

To this observation

>> -- My guess would be that if a stat-program presented just one
>> "two-sample t-test", it *would* be the "Student's t-test" which pools
>> the variances. (The t in "t-test" is not capitalized.)

Bob Hayden commented

> Minitab only pools the variances if you explicitly ask it to. I
> think Minitab's default is the correct default. You lose very little
> if the variances ARE the same and you gain a lot if they are not.

In "What is Statistics?" (Chapter 1 in the MAA's Notes Number 21, _Perspectives on Contemporary Statistics_), David Moore says: "The pooled-sample t test ... is somewhat robust against unequal sigmas if the sample sizes are equal, but not otherwise." He uses T_p for the pooled statistic. Then, arguing for the "unpooled" statistic, labeled T, he asserts that "A substantial literature ... demonstrates the accuracy of this [i.e., the "unpooled" T] approximation for even quite small samples, and demonstrates in addition that when in fact sigma_1 = sigma_2, using T sacrifices very little power relative to T_p." (p. 15)

It's this kind of "expert testimony" that is missing from so many textbooks at the elementary level--the kind of testimony that would help us make wiser decisions about the relative merits of various procedures when working with students. (This debate about the two t tests is not all one-sided, of course. I believe I recall Paul Velleman arguing that a good reason to teach the pooled test is that it generalizes readily to ANOVA.)

Here's another comment by David Moore whose essence is missing from most elementary textbooks: "...
the F ratio for comparing variances is almost worthless. ... The reason for this ... is that no data are exactly normal. ... the F ratio is so sensitive to even small departures from normality as to be almost useless." (same source, p. 14)

Bruce King
Department of Mathematics and Computer Science
Western Connecticut State University
181 White Street
Danbury, CT 06810

Date     Subject                                  Author
11/4/96  pooled t test vs "unpooled" t test       KINGB@WCSUB.CTSTATEU.EDU
11/6/96  Re: pooled t test vs "unpooled" t test   Joe H Ward
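To make the pooled/unpooled distinction above concrete, here is a small Python sketch (standard library only) of the two statistics: Student's pooled t and Welch's "unpooled" t with the Satterthwaite degrees of freedom. This is an illustration, not any package's implementation.

```python
import math
from statistics import mean, variance

def two_sample_t(a, b, pooled=True):
    """Two-sample t statistic and its degrees of freedom.
    pooled=True  -> Student's t test (pools the two sample variances)
    pooled=False -> Welch's "unpooled" t with Satterthwaite df
    """
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances (n - 1 denominator)
    diff = mean(a) - mean(b)
    if pooled:
        sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
        return diff / math.sqrt(sp2 * (1 / na + 1 / nb)), na + nb - 2
    se2 = va / na + vb / nb
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return diff / math.sqrt(se2), df

# Toy samples with equal sizes and equal sample variances.
a = [1, 2, 3, 4, 5]
b = [2, 3, 4, 5, 6]
t_pooled, df_pooled = two_sample_t(a, b, pooled=True)
t_welch, df_welch = two_sample_t(a, b, pooled=False)
```

With equal sample sizes and equal sample variances, as in this toy case, the two statistics and degrees of freedom coincide exactly, which illustrates Moore's point that little is sacrificed by not pooling; they diverge when the variances or sample sizes differ.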
Hardness of approximation of Dominating Set

It is stated throughout the computational complexity literature that the Dominating Set problem is NP-hard to approximate within a factor of $\Omega(\log n)$. To my knowledge, the first and only proof available (Lund and Yannakakis, 1994) relies on a well-known L-reduction from Set Cover to Dominating Set (also reported on Wikipedia), which implies that the two problems are equivalent in terms of approximation ratio. Because Set Cover is NP-hard to approximate within a factor of $\Omega(\log n)$, the same holds for Dominating Set.

I have reasons to believe that this may be an incorrect deduction. Recall that, in Set Cover, the parameter $n$ is the size of the universe set. In contrast, the number of sets given as input, $m$, could be exponentially larger than $n$. Because the L-reduction from Set Cover to Dominating Set constructs a graph on $n+m$ vertices, this graph may have size exponential in $n$. Now, in Dominating Set, the "$n$" that is used in approximation bounds is in fact the number of vertices. It follows that, using this reduction, only a ratio of $O(\log \log n)$ can be deduced for Dominating Set, as opposed to $\Omega(\log n)$.

Can this proof be fixed in some easy way (or is my reasoning incorrect)?

If I understand your objection correctly, I think this issue may be resolved by observing that the problem size of Set Cover is related to the size of the universe set and all the subsets. The subsets constitute part of the input for Set Cover. – mhum May 6 '13 at 23:11

Using my notation, the problem size of Set Cover is $m\cdot n$. This is correct, but all the approximation bounds are always given just in terms of $n$. That is, there is a greedy algorithm that achieves a $\ln n$ approximation ratio (no $m$ involved), and it is NP-hard to achieve a $c\cdot\log n$ approximation ratio (no $m$ involved).
Hence, when reducing to Dominating Set, $n$ cannot be the number of vertices, but should be its logarithm. – Giovanni Viglietta May 7 '13 at 1:51

I stand corrected. The approximation ratios for Set Cover do appear to be independent of the number of sets. I'm not sure why I thought otherwise. – mhum May 7 '13 at 15:19

I agree this is sort of confusing. I guess this is why people tend to say nonchalantly that Set Cover and Dominating Set are equivalent as approximation problems (because they L-reduce to each other), and THEREFORE Dominating Set is not approximable within $\Omega(\log n)$, either. Well, these are two different $n$'s, so we should pay attention... – Giovanni Viglietta May 7 '13 at 17:48

2 Answers

I had a private conversation with Dana Moshkovitz (whom I thank), who confirmed that, in Alon, Moshkovitz, and Safra (2006), the hard instances of Set Cover resulting from a rather involved gap-preserving reduction are all such that $m\leqslant {\rm poly}(n)$. Hence, after the L-reduction to Dominating Set, we have graphs on $|V|$ vertices, such that $|V|=n+m\leqslant a\cdot n^k+b$, for some constants $a>0$, $b\geqslant 0$, $k\geqslant 1$. It follows that, since Set Cover is $NP$-hard to approximate within a factor of $\Omega(\log n)$, Dominating Set is $NP$-hard to approximate within a factor of
$$\Omega(\log n) = \Omega\left(\log\left(\frac{|V|-b}{a}\right)^{\frac 1k}\right)= \Omega\left(\frac{\log(|V|-b)-\log a}{k}\right)=\Omega(\log |V|).$$
Determining optimal values for $a$, $b$, $k$ remains open.

It is my understanding that these matters have been overlooked by most authors, as no explicit mention of them is ever made, to the best of my knowledge. Usually it is just stated that Set Cover and Dominating Set are "equivalent" under L-reductions, hence the $c\cdot\log n$ hardness carries over to Dominating Set.
Even the observation of the crucial inequality $m\leqslant {\rm poly}(n)$ resulting from the reductions to Set Cover has often been neglected by most authors (Feige being an exception), as well as the determination of an optimal constant factor for the hardness of approximation of Dominating Set.

I have a partial answer (thanks to Uriel Feige for pointing this out), but the main problem is still open. According to Feige (1998), it is quasi-$NP$-hard to approximate Set Cover within a ratio of $(1-o(1))\ln n$, and in all the hard Set Cover instances $n>m$ holds. Hence, after the reduction to Dominating Set, the number of vertices is at most $2n$, which implies once again an $\Omega(\log n)$ lower bound. Of course, this does not completely answer my question, because it only shows quasi-$NP$-hardness (i.e., the approximation ratio is not achievable in polynomial time unless $NP\subset TIME(n^{{\rm polylog}\, n})$), as opposed to $NP$-hardness.

The current state of the art for Set Cover, obtained by Alon, Moshkovitz, and Safra (2006), is that it is $NP$-hard to approximate within a $c\cdot \log n$ factor. Even after inspecting their construction, it is not clear to me whether $n>m$ can be inferred for all hard Set Cover instances as well. Actually, even $n^\lambda >m$ would suffice, for some $\lambda\geqslant 1$. A related paper by Raz and Safra (1997) claims a similar result, but with a lower constant factor $c$. However, I cannot find any proof of this claim. If anyone can find it, it can be checked whether $n^\lambda >m$ at least there.
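As an aside, the $\ln n$ greedy upper bound mentioned in the comments, the algorithmic counterpart of the hardness results discussed in this thread, can be sketched as follows (an illustrative Python sketch on an invented toy instance):

```python
def greedy_set_cover(universe, subsets):
    """The classical greedy algorithm for Set Cover: repeatedly pick the
    subset covering the most still-uncovered elements. Its cover is at
    most H_n <= ln(n) + 1 times optimal, where n is the universe size,
    matching the Omega(log n) hardness up to constant factors."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("the subsets do not cover the universe")
        chosen.append(best)
        uncovered -= best
    return chosen

# Toy instance with universe size n = 6.
cover = greedy_set_cover(range(6), [{0, 1, 2}, {3, 4}, {4, 5}, {0, 3}, {5}])
```

Note that the running time is polynomial in $n + m$, which is exactly why the question's distinction between the universe size and the number of sets matters for how the bound is stated.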
Universality of Painlevé functions in random matrix models

Seminar Room 1, Newton Institute

Several types of critical phenomena take place in the unitary random matrix ensembles (1/Z_n) e^{-n Tr V(M)} dM defined on n-by-n Hermitian matrices M in the limit as n tends to infinity. The first type of critical behavior is associated with the vanishing of the equilibrium measure at an interior point of the spectrum, while the second type is associated with higher-order vanishing at an endpoint. The two types are associated with special solutions of the Painlevé II and Painlevé I equations, respectively. The quartic potential is the simplest case where this behavior occurs and serves as a model for the universal appearance of Painlevé functions in random matrix models.
how many ways can rooks be placed?

September 12th 2009, 04:11 AM #1

Here is a tricky counting problem. "How many ways can two red and four blue rooks be placed on an 8-by-8 chessboard so that no two rooks can attack one another?" I believe that all placements of 6 non-attacking rooks on the board would be $(C(8,6))^{2}\cdot 6!$, but the two different colors are tricky. Or am I making it too difficult?

September 12th 2009, 04:45 AM #2 Grand Panjandrum (Nov 2005)

Quote:
Here is a tricky counting problem. "How many ways can two red and four blue rooks be placed on an 8-by-8 chessboard so that no two rooks can attack one another?" I believe that all placements of 6 non-attacking rooks on the board would be $(C(8,6))^{2}\cdot 6!$, but the two different colors are tricky. Or am I making it too difficult?

That result is for six indistinguishable rooks (I am trusting you on that, I have not checked it). So the final answer is this times the number of ways that the six can be coloured such that two are red and four are blue, which is 15 ways. (Given any acceptable configuration of indistinguishable rooks, they can be numbered in order from the top left in the order they appear when read from left to right down the rows. Then we have the problem of assigning the colours to the numbered rooks for this configuration.)
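A quick computational sanity check of the counting argument above (my own sketch, with a brute-force verification on a smaller board since enumerating six rooks on the full 8-by-8 board is slow):

```python
from itertools import combinations
from math import comb, factorial

def non_attacking_colored(n, red, blue):
    """Count placements of `red` red and `blue` blue mutually non-attacking
    rooks on an n-by-n board: choose the rows, choose the columns, match
    them up (a bijection), then choose which rooks are red."""
    k = red + blue
    return comb(n, k) ** 2 * factorial(k) * comb(k, red)

def brute_force(n, red, blue):
    """Direct enumeration for small boards: place k = red + blue rooks on
    distinct rows and columns, then color them in comb(k, red) ways."""
    k = red + blue
    cells = [(r, c) for r in range(n) for c in range(n)]
    total = 0
    for placement in combinations(cells, k):
        rows = {r for r, _ in placement}
        cols = {c for _, c in placement}
        if len(rows) == k and len(cols) == k:   # no two rooks attack
            total += comb(k, red)
    return total

answer = non_attacking_colored(8, 2, 4)  # the original problem
check = (brute_force(4, 1, 2), non_attacking_colored(4, 1, 2))  # small-board check
```

Both entries of `check` come out equal (288 on the 4-by-4 board), and the original problem gives 28² · 720 · 15 = 8,467,200.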
{"url":"http://mathhelpforum.com/advanced-statistics/101824-how-many-ways-can-rooks-placed.html","timestamp":"2014-04-18T03:28:48Z","content_type":null,"content_length":"35084","record_id":"<urn:uuid:52df9d85-25ce-41a5-8e04-777c4affbe53>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
I'm a little sad to see Pluto demoted as a planet. The key question is how we update: "my very educated mother just served us nine pickles." The Economist, always on the forefront of important issues, suggests: "my very educated mother just served us noodles." Kottke had a contest for planet mnemonics, with this excellent winner (using nine planets as a protest): "My! Very educated morons just screwed up numerous planetariums."
{"url":"http://gorithm.blogs.com/gorithm/science/","timestamp":"2014-04-19T09:23:22Z","content_type":null,"content_length":"38131","record_id":"<urn:uuid:47e9c865-5854-4622-a566-5cb8c304f6cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
Advanced Placement
Students should be able to:
- work with functions represented in a variety of ways: graphical, numerical, analytical, or verbal, and understand the connections among these representations.
- understand the meaning of the derivative in terms of a rate of change and local linear approximation, and use derivatives to solve a variety of problems.
- understand the meaning of the definite integral both as a limit of Riemann sums and as the net accumulation of change, and use integrals to solve a variety of problems.
- understand the relationship between the derivative and the definite integral as expressed in both parts of the Fundamental Theorem of Calculus.
- communicate mathematics both orally and in well-written sentences, and explain solutions to problems.
- model a written description of a physical situation with a function, a differential equation, or an integral.
- use technology to help solve problems, experiment, interpret results, and verify conclusions.
- determine the reasonableness of solutions, including sign, size, relative accuracy, and units of measurement.
- develop an appreciation of calculus as a coherent body of knowledge and as a human accomplishment.

AP Government and Politics: United States
The AP United States Government and Politics course is designed to provide the student with an experience equivalent to a one-semester college introductory course. Students will be expected to move beyond factual recall into critical analysis of the creation, function, and process of government. As stated in the College Board 2010 course description, this course will: give students an analytical perspective on government and politics in the United States. This course includes both the study of general concepts used to interpret U.S. government and politics and the analysis of specific examples. 
It also requires familiarity with the various institutions, groups, beliefs, and ideas that constitute U.S. government and politics . . . students should become acquainted with the variety of theoretical perspectives and explanations for various behaviors and outcomes.

AP Computer Science A
This course is intended to serve both as an introductory course for computer science majors and as a course for people who will major in other disciplines that require significant involvement with technology. It is not a substitute for the usual college-preparatory mathematics courses. Pre-requisites: The necessary prerequisites for entering the AP Computer Science A course include knowledge of basic algebra and experience in problem solving. A student in the AP Computer Science A course should be comfortable with functions and the concepts found in the uses of functional notation, such as f(x) = x + 2 and f(x) = g(h(x)). It is important that students and their advisers understand that any significant computer science course builds upon a foundation of mathematical reasoning that should be acquired before attempting such a course.

AP English Language and Composition
This course engages students in becoming skilled readers of prose written in a variety of rhetorical contexts, and in becoming skilled writers who compose for a variety of purposes. Both their writing and their reading should make students aware of the interactions among a writer's purposes, audience expectations, and subjects, as well as the way genre conventions and the resources of language contribute to effectiveness in writing.

AP English Literature and Composition
This course engages students in the careful reading and critical analysis of imaginative literature. Through the close reading of selected texts, students deepen their understanding of the ways writers use language to provide both meaning and pleasure for their readers. 
As they read, students consider a work's structure, style and themes, as well as such smaller-scale elements as the use of figurative language, imagery, symbolism and tone.

AP Statistics
The purpose of the AP course in statistics is to introduce students to the major concepts and tools for collecting, analyzing and drawing conclusions from data. Students are exposed to four broad conceptual themes:
- Exploring Data: Describing patterns and departures from patterns.
- Sampling and Experimentation: Planning and conducting a study.
- Anticipating Patterns: Exploring random phenomena using probability and simulation.
- Statistical Inference: Estimating population parameters and testing hypotheses.
Pre-requisites: The AP Statistics course is an excellent option for any secondary school student who has successfully completed a second-year course in algebra and who possesses sufficient mathematical maturity and quantitative reasoning ability.

AP U.S. History
This course is designed to provide students with the analytic skills and factual knowledge necessary to deal critically with the problems and materials in U.S. history. The program prepares students for intermediate and advanced college courses by making demands upon them equivalent to those made by full-year introductory college courses. Students should learn to assess historical materials—their relevance to a given interpretive problem, reliability, and importance—and to weigh the evidence and interpretations presented in historical scholarship.

AP World History
The purpose of the AP World History course is to develop greater understanding of the evolution of global processes and contacts in different types of human societies. This understanding is advanced through a combination of selective factual knowledge and appropriate analytical skills. The course highlights the nature of changes in global frameworks and their causes and consequences, as well as comparisons among major societies. 
It emphasizes relevant factual knowledge, leading interpretive issues, and skills in analyzing types of historical evidence.
{"url":"http://www.foresttrailacademy.com/advanced-placement-courses.html","timestamp":"2014-04-18T03:00:49Z","content_type":null,"content_length":"95366","record_id":"<urn:uuid:a3cc76ac-a3e9-4d17-ac42-a7f74b5301a3>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
Computer Science 5541 Artificial Intelligence
Homework Assignment 5 (20 points)
Due December 15, 2009

1. For the following set of examples:
□ a1, b1, c1, d1, +
□ a2, b2, c2, d2, -
□ a3, b1, c3, d1, -
□ a2, b1, c2, d2, +
□ a1, b2, c3, d1, -
□ a2, b1, c2, d2, +
□ a3, b2, c3, d2, -
□ a2, b1, c3, d2, -
Calculate the decision tree that would be produced by ID3; show all of your work.

2. A test for a rare disease is 99% accurate in identifying the disease when the person has the disease and 97% accurate when the person does not have the disease. 0.5% of the population has the disease. Rich takes the test and it comes up positive (the test indicates he may have the disease). Using Bayes' rule, what is the likelihood Rich actually has the disease (hint: the probability that Rich has the disease plus the probability that Rich does not have the disease must add up to 1)? If he takes a second test, and the tests are conditionally independent, how likely is it that he has the disease?

3. How are reinforcement learning and partial order planning similar? How do they differ?
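Not an official solution, but the Bayes computation in problem 2 can be sketched in a few lines of Python (the function name is my own); with conditionally independent tests the likelihoods simply multiply.

```python
def posterior_after_tests(prior, sensitivity, specificity, n_positive):
    """P(disease | n_positive positive results from conditionally independent tests)."""
    p_pos_given_disease = sensitivity ** n_positive
    p_pos_given_healthy = (1 - specificity) ** n_positive
    numerator = p_pos_given_disease * prior
    return numerator / (numerator + p_pos_given_healthy * (1 - prior))

one_test = posterior_after_tests(0.005, 0.99, 0.97, 1)
two_tests = posterior_after_tests(0.005, 0.99, 0.97, 2)
print(round(one_test, 4), round(two_tests, 4))  # roughly 0.1422 and 0.8455
```

Even a 99%-sensitive test leaves Rich with only about a 14% chance of having the disease after one positive result, because the disease is so rare; a second independent positive pushes it to about 85%.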
{"url":"http://www.d.umn.edu/~rmaclin/cs5541/fall2009/homework/05.html","timestamp":"2014-04-20T03:19:12Z","content_type":null,"content_length":"1640","record_id":"<urn:uuid:c52047f7-8f80-4cf4-b576-7c9dda1ed0bd>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help
Posted by Sara on Wednesday, August 22, 2007 at 6:55pm.
O.k. I am confused. So here is what I did.
Mass of empty flask plus stopper: 37.34g
Mass of stoppered flask plus water: 63.67g
Mass of stoppered flask plus liquid: 52.01g
Mass of water: 63.67 - 37.34 = 26.33
Volume of flask (density of H2O at 25°C: 0.9970 g/cm^3):
Mass of liquid: 52.01g - 37.34g = 14.67g
Density of liquid:
That was part 1 (density of unknown liquid). Here comes part 2, density of unknown metal.
Mass of stoppered flask plus metal: 61.28g
Mass of stoppered flask + metal + water: 78.19g
Mass of metal: 61.28g - 37.34g = 23.94g (Note that 37.34 is the mass of the stoppered flask from before)
Mass of water: 78.19g - 61.28g = 16.91g
Volume of water: (16.91)/(0.9970) = 16.96cm^3
Volume of metal: I did the following: (23.94g)/(0.9970g/cm^3) = 24.01cm^3
Then using this answer I need to calculate the density of the metal: (23.94g)/(24.01cm^3) = 0.9970 g/cm^3
But why would the density for water at 25°C, which was given, be the same as the density of the metal? Shouldn't it be different? So I came to the conclusion that in the end I must be doing something wrong because my answer is like walking around in circles... (the last two calculations), so I either don't really get the concept or I made a mistake from the get go... or I am using the wrong numbers to calculate certain parts. So, this is where I need your help. I hope it's clearer when I show you exactly what I did.

• Chemistry - DrBob222, Wednesday, August 22, 2007 at 7:09pm
You are getting confused with the numbers. Go back to square 1. You were correct before that mass H2O = 26.33 g, mass liquid = 14.67 g. Volume of H2O = 26.33/0.9970 = 26.409 cc = volume of the liquid. So density of liquid = mass/volume = 14.67/26.409 = 0.55549, which rounds to 4 significant figures as 0.5555. Your original error was in miscalculating the volume of the liquid. 
The volume of the liquid is the same as the volume of the water, and the volume of water is the mass of water divided by the density of water.

• Chemistry - DrBob222, Wednesday, August 22, 2007 at 7:33pm
Mass of empty flask plus stopper: 37.34g
Mass of stoppered flask plus water: 63.67g
Mass of stoppered flask plus liquid:
Mass of water: 63.67 - 37.34 = 26.33
Volume of flask (density of H2O at 25°C: 0.9970 g/cm^3):
You are ok to here. You have used the mass of the EMPTY flask. The volume of the flask is the grams H2O in the flask divided by the density of the water, which is 26.33 (your figures from above)/0.9970 (the density of the water at the temperature at which the experiment was conducted) = 26.409 cc = volume of flask.
Mass of liquid: 52.01g - 37.34g = 14.67g
OK here.
Density of liquid:
No, the density of the liquid is the mass of the liquid (14.67 is correct) divided by the volume of the liquid. The volume of the liquid is the same as the volume of the water that was in the flask, which is 26.409 cc. That's why you went through the whole process of determining the volume of the flask by weighing the water and using the density of water to calculate the volume of the water, which also is the volume of the flask. So, density of liquid = 14.67/26.409 = 0.55549 g/cc, which to 4 significant figures is 0.5555 g/cc. Let's get the first part straight before tackling the second part. I will let you see what you were doing incorrectly for the first part; then using that knowledge work through the density of the metal. Repost that part if you get stuck. Thanks for showing your work. It makes it easier to catch what is going wrong.
That was part 1 (density of unknown liquid). Here comes part 2, density of unknown metal. 
Mass of stoppered flask plus metal: 61.28g
Mass of stoppered flask + metal + water: 78.19g
Mass of metal: 61.28g - 37.34g = 23.94g (Note that 37.34 is the mass of the stoppered flask from before)
Mass of water: 78.19g - 61.28g = 16.91g
Volume of water: (16.91)/(0.9970) = 16.96cm^3
Volume of metal: I did the following: (23.94g)/(0.9970g/cm^3) = 24.01cm^3
Then using this answer I need to calculate the density of the metal: (23.94g)/(24.01cm^3) = 0.9970 g/cm^3
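For what it's worth, the whole calculation can be scripted. This sketch follows DrBob222's method; for part 2 it takes the metal's volume by displacement (flask volume minus the volume of the water added alongside the metal), which is where the student's attempt went wrong (a metal's volume is not its mass divided by the density of water). Variable names are my own.

```python
# Densities via the flask (pycnometer) method, using the masses from the thread.
RHO_WATER = 0.9970  # g/cm^3, density of water at 25 C (given)

m_flask = 37.34              # stoppered empty flask
m_flask_water = 63.67        # flask + water
m_flask_liquid = 52.01       # flask + unknown liquid
m_flask_metal = 61.28        # flask + metal
m_flask_metal_water = 78.19  # flask + metal + water

# Flask volume from the mass of water it holds
v_flask = (m_flask_water - m_flask) / RHO_WATER      # ~26.41 cm^3

# Part 1: liquid fills the same flask volume
m_liquid = m_flask_liquid - m_flask                  # 14.67 g
rho_liquid = m_liquid / v_flask                      # ~0.5555 g/cm^3

# Part 2: metal volume by displacement
m_metal = m_flask_metal - m_flask                    # 23.94 g
m_water2 = m_flask_metal_water - m_flask_metal       # 16.91 g
v_water2 = m_water2 / RHO_WATER                      # ~16.96 cm^3
v_metal = v_flask - v_water2                         # space the metal occupies
rho_metal = m_metal / v_metal                        # ~2.53 g/cm^3

print(round(rho_liquid, 4), round(rho_metal, 2))
```

The liquid comes out at 0.5555 g/cm^3, matching DrBob222's number, and the metal at roughly 2.53 g/cm^3, which is at least physically sensible (unlike 0.9970, which was just the density of water echoed back).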
{"url":"http://www.jiskha.com/display.cgi?id=1187823339","timestamp":"2014-04-19T15:04:35Z","content_type":null,"content_length":"12787","record_id":"<urn:uuid:a19c4290-dcd6-4ec0-b99c-423d9904abf6>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
open intervals (proof) January 12th 2011, 02:47 AM #1 Feb 2010
open intervals (proof)
Hello all,
Let A be a bounded subset of $\mathbb{R}$ which contains infinitely many different points. Prove that there exists at least one point $x_{0}$ such that every open interval around $x_{0}$ contains infinitely many different points from A.

This doesn't seem true to me. If $A=\{ \frac{1}{2^n}\mid n\in \mathbb{N}\}$ then the open interval $(\frac{1}{2^n}-\frac{1}{2^{n+1}},\frac{1}{2^n}+\frac{1}{2^{n+1}})$ contains only $\frac{1}{2^n}$. Is there a flaw in this counterexample I'm not seeing?
Last edited by DrSteve; January 12th 2011 at 04:49 AM. Reason: Fixed counterexample

Oops - I meant $\frac{1}{2^n}$. I'm going to edit my post. Now tell me if there's something wrong with it.

Now that works nicely as a counter-example. I suspect the OP meant to add that $A$ is closed.

It doesn't say that the point $x_0$ has to be in A. In this example, take $x_0=0$.

In this case, I would try to construct a limit point. First, $A$ can be contained in a closed interval $I=[a,b]$. Split $I$ in half. One of the halves must contain infinitely many points from $A$. Take that one and call it $A_1$. Then split $A_1$ in half. One of those halves must contain infinitely many points from $A$, so take that one and call it $A_2$. Continue this procedure indefinitely to obtain a sequence of closed intervals $A_1, A_2, A_3,\ldots$ with $A_1\supset A_2\supset A_3\supset\ldots$ whose lengths are $\frac{b-a}{2},\frac{b-a}{2^2},\frac{b-a}{2^3},\ldots$, respectively. Now, by the Nested Interval theorem, $\bigcap A_n$ contains a single point, and that point is the limit point we are looking for.

Merely note that since $A$ is bounded, $\overline{A}$ is bounded and thus compact. Thus there is an infinite sequence of distinct points in $\overline{A}$, and by compactness (or the Bolzano-Weierstrass property, to be exact) this sequence must have a limit point.
In any $T_1$ space (if this doesn't mean anything, ignore it and replace it with $\mathbb{R}$) a point is a limit point if and only if every neighborhood contains infinitely many points of the set. January 12th 2011, 03:12 AM #2 Senior Member Nov 2010 Staten Island, NY January 12th 2011, 03:26 AM #3 January 12th 2011, 04:47 AM #4 Senior Member Nov 2010 Staten Island, NY January 12th 2011, 06:13 AM #5 January 12th 2011, 06:28 AM #6 January 12th 2011, 06:37 AM #7 January 12th 2011, 07:15 AM #8 January 13th 2011, 10:58 PM #9
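The bisection argument in the thread can be illustrated numerically (this is only an illustration, not a proof): with a finite sample of $A=\{1/2^n\}$ and a majority rule standing in for "contains infinitely many points," the repeated halving homes in on the limit point 0.

```python
# Numerical sketch of the nested-interval bisection: at each step keep the
# half-interval that holds the majority of the sampled points of A.
def bisect_limit_point(points, a, b, steps=50):
    for _ in range(steps):
        mid = (a + b) / 2
        left = sum(1 for x in points if a <= x <= mid)
        right = sum(1 for x in points if mid < x <= b)
        if left >= right:
            b = mid     # keep the left half
        else:
            a = mid     # keep the right half
    return (a + b) / 2  # the nested intervals shrink to this point

A = [1 / 2**n for n in range(1, 200)]
print(bisect_limit_point(A, 0.0, 1.0))  # converges toward 0
```

After 50 halvings the interval has length $(b-a)/2^{50}$, so the returned midpoint is within machine precision of the limit point $x_0=0$, the point the thread identifies for this set.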
{"url":"http://mathhelpforum.com/differential-geometry/168114-open-inertvals-proof.html","timestamp":"2014-04-16T06:13:51Z","content_type":null,"content_length":"66022","record_id":"<urn:uuid:d8af62d7-7ea5-4236-b565-efe487c692df>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Re: Berkeley and nonstandard analysis Robert Tragesser RTragesser at compuserve.com Mon Jan 31 08:51:49 EST 2000 It is worth considering that the logical error Berkeley points up is not one the logically hyper-acute Leibniz (could have) made.-- Leibniz sharply distinguished methods of demonstration [where logic is central] from methods of discovery [where the method leads to an obviously correct formula but not necessarily to a logically sound demonstration of the correctness of the formula]. The latter (his "calculus" was a method of discovery, not of demonstration) is happy to come up with the obviously correct formula and is not gloomy about not being able to give a logically sound/rigorous demonstration. Bos made the point that when looking at the geometric picture(s), it is obviously good mathematical sense to let the dx vanish to get the right formula for d(x^2)/dx, say; never mind that the algebra/logic leaves much to be desired -- that the algebra/logic is deficient doesn't get in the way of seeing that the formula sans the tail dx is the right one, it just gets in the way of giving a logically coherent demonstration that it is. (By the time one moves from division to cancelling the dx, one has shifted frames from algebra to intuitive geometry. . .so there isn't any sort of logical error FORCING drawing a contradiction, just a switch of frame of consideration.) (By the way, indeed, Bos observes that for higher order _Eulerian_ infinitesimals, the geometric picture breaks down; here the failure of the logic catches up with "the infinitesimal calculus" as a method of discovery.--In this sense one would expect that Berkeley could make his strongest case with higher order fluxions. But of course Berkeley still loses -- for the mathematicians' response to this dead end is to sharpen up the logic, that is, to find the right concepts.) west(running)brook connecticut usa More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2000-January/003708.html","timestamp":"2014-04-18T04:37:30Z","content_type":null,"content_length":"4203","record_id":"<urn:uuid:d6714f2f-402a-4c5b-bdb1-0d56fcd10655>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
PI Product Overview / Piezo Stage, NanoPositioning Stage, Microscopy Scanning Stage / Selection Guide

Piezo NanoPositioning and Scanning Stages Selection Guide
*Models Description Axes *Travel [µm] *Feedback PDF

Objective Nano-Focus Devices
PIFOC®. Long range microscope objective piezo nanopositioner, compact & light weight, QuickLock Z 100, 250, 400 capacitive P-721.CDQ PIFOC®. Microscope objective piezo nanopositioner. Now with QuickLock Z 100 capacitive P-720 PIFOC®. Compact, open-loop microscope objective piezo nanopositioner Z 100 no P-722 P-723 PIFOC®. Microscope objective piezo nanopositioner Z 200, 350 LVDT

Open Loop Single Axis Stages
P-280 Piezo nanopositioning stage. Low cost, XY and XYZ combinations X 30, 50, 100 no P-290 Vertical piezo nanopositioning stage. Large travel range Z 1000 no P-287 Vertical/tilt piezo nanopositioning stage. Large travel range Z, θ[x] 700, 12 mrad no

Closed Loop, Single Axis Stages with Direct Metrology Feedback
Direct metrology measures the position of the moving platform rather than strain in the actuator (as is common with lower-precision piezoresistive strain gauge sensors). This results in higher linearity scans and provides superior responsiveness, resolution, repeatability and stability at the sub-nanometer level. Download paper on state-of-the-art nanopositioning stages. An improvement of up to 3 orders of magnitude in dynamic linearity can be achieved with the Dynamic Digital Linearization option in the E-710 controller.

*Models Description Axes *Travel [µm] *Feedback PDF
P-783 Vertical piezo nanopositioning stage. Large travel range, closed-loop, compact Z 300 LVDT P-620.Z - P-622.Z PIHera vertical piezo nanopositioning stage family. Low cost, compact, accurate, very long travel. Z 50, 100, 250 capacitive P-772 Ultra-compact piezo nanopositioning stage. Fast and accurate, smallest closed-loop stage with capacitive position feedback X 10 capacitive P-780 Piezo nanopositioning stage.
Compact, fast, for small loads X 80 LVDT P-750 High-load piezo nanopositioning stage. Extremely straight motion, very stiff X 75 capacitive, LVDT P-752 NanoAutomation® stage. Very fast and accurate, for FBG writing and disk drive test setups X 15, 30 capacitive P-753 NanoAutomation® actuator/stage. Very small and accurate, works as both stage and linear actuator Z & X 12, 25, 38 capacitive P-620.1 - P-625.1 PIHera piezo nanopositioning stage family. Low cost, compact, accurate, very long travel. X (XY, Z) 100, 250, 500 capacitive

Multi Axis, Serial Kinematics Stages
*Models Description Axes *Travel [µm] *Feedback PDF
P-281 P-282 Open-loop piezo nanopositioning stage family. Low cost, compact XY, XYZ 30, 50, 100 no P-611 NanoCube XYZ piezo nanopositioning stage. Smallest XYZ closed-loop stage with 100 µm travel XYZ 100 SGS P-762 Piezo nanopositioning stage family. Compact, 20 X 20 mm clear aperture. 1 through 5 axis versions X, XY, XYZ, Z, Zθ[x]θ[y], XYZθ[x]θ[y] 100, ±2.6 mrad LVDT P-620.2 - P-625.2 PIHera XY piezo nanopositioning stage family. Very compact, long travel, low cost, very precise. Direct metrology XY (Z, XYZ) 50, 100, 250, 500 capacitive

Multi-Axis, Parallel Metrology, Parallel Kinematics Stages
Why Parallel Metrology / Parallel Kinematics? Parallel metrology can "see" all controlled degrees of freedom simultaneously. These stages provide superior trajectory control to serial kinematics stages. Download paper on state-of-the-art nanopositioning stages. An improvement of up to 3 orders of magnitude in dynamic linearity can be achieved with the Dynamic Digital Linearization option in the E-710 controller.
XY, XYZ 6 capacitive P-733 XY piezo Nano-scanning and positioning stage. NEW: Standard Vacuum Version. Fast and accurate XY positioner, clear aperture XY 100x100 capacitive P-733.2DD High-speed, direct-drive XY, XYZ nanoscanning stages. Clear aperture. Flat and extremely fast: 2.2 kHz Fres. UHV version available. XY, XYZ 30x30 (x10) capacitive P-734 XY Nanoscanning stage. Special design for extremely straight & flat motion (low nm-range). Clear aperture XY 100 capacitive P-770 Large aperture (200 X 200 mm) XY piezo nanopositioning stage. XY 200 LVDT P-517 - P-527 "P-500"-series XY, XYZ, XYθ[z] piezo nanopositioning & scanning stages Clear aperture, active trajectory control, special version with 6 degrees of freedom XY, XYZ, XYθ[z] Up to 200 in XYZ, 2 mrad capacitive P-561.3DD Series PIMars XYZ, direct-drive, high-speed, piezo stages for scanning microscopy. Very fast, clear aperture, active trajectory control XY, XYZ 45 XY, 11 Z capacitive P-562 PIMars series XY & XYZ, long range piezo scanning stages. Up to 300x300x300 µm. Clear aperture, active trajectory control XY, XYZ 100x100x100, 300x300x250 capacitive P-558 "P-500" series vertical & tip/tilt nanopositioning stages. Clear aperture, capacitive sensors Z, Zθ[x]θ[y] Up to 200 in Z, 2 mrad tip/tilt capacitive Closed and Open Loop Vertical / Tilt Nanopositioning Stages P-620.Z - P-622.Z PIHera vertical piezo nanopositioning stage family. Low cost, compact, accurate, very long travel. Z 50, 100, 250 capacitive P-558 "P-500" series vertical & tip/tilt nanopositioning stages. Clear aperture, capacitive sensors Z, Zθ[x]θ[y] Up to 200 in Z, 2 mrad tip/tilt capacitive P-762.ZL P-762.TL Piezo nanopositioning stages. P-762.ZL: elevation stage; P-762.TL: tilt/elevation stage with aperture Z, & 5-axis versions 100, ±2.6 mrad LVDT P-783 Vertical piezo nanopositioning stage. Large travel range, closed-loop, compact Z 300 LVDT P-290 Vertical piezo nanopositioning stage.
Large travel range Z 1000 no P-287 Vertical/tilt piezo nanopositioning stage. Large travel range Z, θ[x] 700, 12 mrad no "Piezo Hexapod" 6DOF Parallel-Kinematic Stages *Models Description Axes *Travel [µm] *Feedback PDF P-587 Piezo Hexapod 6DoF nanopositioning & scanning stage. Sophisticated closed-loop, 6-axis system XYZ, θ[x]θ[y]θ[z] 800 / 10 mrad capacitive Ultrasonic Piezo Linear Motor Driven Stages *Custom dimensions, sensors, designs for volume buyers. Capacitive sensors and LVDT sensors are direct metrology sensors and provide the highest precision. For piezo steering mirrors, tip/tilt positioners see the "Piezo Active Optics" section.
{"url":"http://www.physikinstrumente.de/products/section2/Piezo_NanoPositioning_System_Selection_Guide.htm","timestamp":"2014-04-21T07:04:07Z","content_type":null,"content_length":"36878","record_id":"<urn:uuid:e2ddc559-e6a2-4e31-bdd9-9491d6f6df7c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Identity for cos(wt+phase)cos(wt) December 29th 2010, 07:29 PM #1 Dec 2010
Identity for cos(wt+phase)cos(wt)
I am looking for an identity to simplify the following:
I assume it will be something similar to the power reduction identity:

Dear laguna92651,
You can use the product to sum identity. $\cos A\cos B=\frac{1}{2}\left[\cos(A+B)+\cos(A-B)\right]$ For a complete list of trigonometric identities you can refer to List of trigonometric identities - Wikipedia, the free encyclopedia. Hope this will help you.

I looked at that identity many times, but having the phase as part of one of the functions threw me. But all of a sudden with your reply, I could see it clearly for some reason. It didn't dawn on me what A and B represented, I guess, which made it way harder than it was. I got what I would expect; this is an amplitude modulation problem I was working on.
December 29th 2010, 09:14 PM #2 Super Member Dec 2009 December 29th 2010, 09:48 PM #3 Dec 2010
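For what it's worth, applying the product-to-sum identity with A = wt + φ and B = wt gives cos(wt + φ)cos(wt) = ½[cos(2wt + φ) + cos(φ)], which can be checked numerically in a few lines of Python:

```python
import math
import random

def lhs(w, t, phi):
    return math.cos(w * t + phi) * math.cos(w * t)

def rhs(w, t, phi):
    # product-to-sum with A = w*t + phi, B = w*t
    return 0.5 * (math.cos(2 * w * t + phi) + math.cos(phi))

random.seed(0)
for _ in range(1000):
    w, t, phi = (random.uniform(-10, 10) for _ in range(3))
    assert abs(lhs(w, t, phi) - rhs(w, t, phi)) < 1e-12
print("identity holds numerically")
```

The constant term ½cos(φ) is exactly the DC offset that shows up when mixing two tones of the same frequency, which is why this identity turns up in amplitude modulation problems like the one in the thread.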
{"url":"http://mathhelpforum.com/trigonometry/167103-identity-cos-wt-phase-cos-wt.html","timestamp":"2014-04-19T00:40:19Z","content_type":null,"content_length":"35004","record_id":"<urn:uuid:e45a5fe7-893c-459d-be66-f8b284a85900>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] qr decomposition gives negative q, r ? Sturla Molden sturla@molden... Wed Nov 21 05:20:15 CST 2012 On 21 Nov 2012 at 09:34, Virgil Stokes <vs@it.uu.se> wrote: > But, again, it is an issue for the algorithm given in Table 3 (p. 2248 of paper). Look at step 8. and equation (30). As stated in this step "The square-root of the filtered state-error covariance" is returned as <tblatex-1.png> from step 5. where a QR decomposition is performed for the triangularization. The <tblatex-2.png> matrix must have diagonal elements > 0 (I leave this to you to think about). When I perform step 5. with MATLAB, I always get a <tblatex-1.png> that has diagonal elements > 0 for my application (which is satisfying). When I perform step 5. with numpy, for the same matrix on the RHS of (27) I do not always get diagonal elements > 0 for <tblatex-1.png> --- herein lies the problem that I have been trying to explain. There is nothing in the definition of QR that says R must be positive definite. NumPy will give you a valid QR factorization computed with LAPACK, and so will Matlab. You might blame it on the LAPACK version, or Matlab might do an undocumented normalization as post-processing. Matlab is not a standard for the "right answer" in linear algebra. Any numerical code requiring R to be PD is erroneous, even in Matlab. Then again, it is not rocket science to post-process Q and R so that R is PD. If you need to decompose a covariance matrix P = R*R' with R PD it sounds like you use QR as a Cholesky factorization. If you want a Cholesky factor of P, you know where to find it. (Not that I recommend using it.) More information about the SciPy-User mailing list
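A sketch of the post-processing Sturla mentions (the helper name is my own, not a NumPy API): flip the sign of every row of R whose diagonal entry is negative, and flip the matching column of Q, so the product Q·R is unchanged while R's diagonal becomes positive.

```python
import numpy as np

def qr_positive_diagonal(a):
    """QR factorization post-processed so that R has a positive diagonal.

    LAPACK (used by NumPy, and by MATLAB under the hood) only guarantees a
    valid factorization; the signs of R's diagonal are arbitrary.  With a
    diagonal sign matrix S (entries +/-1), Q*R = (Q*S)*(S*R), so flipping
    signs in matched pairs leaves the product unchanged.
    """
    q, r = np.linalg.qr(a)
    signs = np.sign(np.diag(r))
    signs[signs == 0] = 1.0          # leave exactly-zero pivots alone
    q = q * signs                    # scale the columns of Q
    r = r * signs[:, None]           # scale the rows of R
    return q, r

rng = np.random.default_rng(42)
a = rng.standard_normal((5, 5))
q, r = qr_positive_diagonal(a)
assert np.all(np.diag(r) > 0)
assert np.allclose(q @ r, a)
```

This makes the factorization unique for a full-rank matrix, which is what square-root Kalman filter recipes implicitly assume when they treat R as "the" square root of a covariance.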
{"url":"http://mail.scipy.org/pipermail/scipy-user/2012-November/033667.html","timestamp":"2014-04-21T15:14:09Z","content_type":null,"content_length":"4494","record_id":"<urn:uuid:35eca3b2-0cce-49db-80a0-796c3c003847>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
halt problem
October 27th 2010, 12:49 PM
well I am not sure if it's the right place to ask this question but here it goes - it is said (the halting problem) that no algorithm can be written to tell whether a program will halt or not on some given input. I think one only has to find the values for which the program will enter an infinite loop and use these values in the algorithm to make a halt-checking program. Can anyone shed more light on it?
October 27th 2010, 08:45 PM
The >>Halting Problem<<
October 27th 2010, 09:51 PM
already read it but still can't get it. I posted this question here because it is related to maths programs. My question is related to programming only: why can't such a program be made? Just find the values for which the program runs into an infinite loop, make the program halt, and make it give output 1 if such a value is encountered and 0 otherwise, i.e. no loop.
October 27th 2010, 10:13 PM
It can't be done because if it could it would imply a contradiction. Mark Chu-Carroll gives a more programmer-oriented explanation on his blog >>here<<
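The contradiction can be sketched in Python (the names `halts` and `contrary` are illustrative only, not a real API): if a perfect halting oracle existed, the program below would have to both halt and loop forever when fed to itself, so no such oracle can be written.

```python
# Sketch of the classic diagonalization argument.  Suppose a perfect
# oracle halts(program, arg) existed; then contrary(contrary) is a
# contradiction, because contrary does the opposite of whatever the
# oracle predicts about it.

def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) eventually halts.

    No such function can exist; this stub only marks where it would go."""
    raise NotImplementedError("no algorithm can implement this")

def contrary(program):
    if halts(program, program):   # if the oracle says "halts"...
        while True:               # ...loop forever,
            pass
    return                        # otherwise halt immediately.

# contrary(contrary) halts      <=>  halts(contrary, contrary) is False
# contrary(contrary) loops      <=>  halts(contrary, contrary) is True
# Either way the oracle is wrong about this input: contradiction.
```

Note this is stronger than "we just haven't found the loop-detecting values yet": the argument shows that any candidate halt-checker is defeated by some program built from the checker itself.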
{"url":"http://mathhelpforum.com/math-software/161189-halt-problem-print.html","timestamp":"2014-04-16T10:40:13Z","content_type":null,"content_length":"6434","record_id":"<urn:uuid:3c157def-4aa4-4841-b1df-ef0d4a2ac7d3>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
You might be a mathematician if
ganesh wrote: You Might Be a Mathematician if...
you are fascinated by the equation e^(i*pi) +1=0
you know by heart the first fifty digits of pi.
you have tried to prove Fermat's Last Theorem.
you know ten ways to prove Pythagoras' Theorem.
your telephone number is the sum of two prime numbers.
you have calculated that the World Series actually diverges.
you are sure that differential equations are a very useful tool.
you comment to your wife that her straight hair is nice and parallel.
when you say to a car dealer "I'll take the red car or the blue one", you must add "but not both of them."
How many of them apply to you? And which ones?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Full Member
Re: You might be a mathematician if
I have to do all that to become a mathematician! It's easier to think that I am a mathematician.
There are 10 kinds of people in the world, people who understand binary and people who don't.
Re: You might be a mathematician if
Okay, but how many of them apply to you?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: You might be a mathematician if
I am afraid none of them. I am feeling very dejected. Can you make another list?
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. 
Re: You might be a mathematician if

Agnishom's Scoresheet
• you are fascinated by the equation e^(i*pi) + 1 = 0 - Very much
• you know by heart the first fifty digits of pi. - Only 40
• you have tried to prove Fermat's Last Theorem. - Yes
• you know ten ways to prove Pythagoras' Theorem. - No, only two of them
• your telephone number is the sum of two prime numbers. - Unfortunately, it was odd, so I tried 65's telephone number
• you have calculated that the World Series actually diverges. - I have no idea what a World Series is
• you are sure that differential equations are a very useful tool. - Yes
• you comment to your wife that her straight hair is nice and parallel. - Not applicable right now, but it surely would be if I had a wife
• when you say to a car dealer "I'll take the red car or the blue one", you must add "but not both of them." - Yes

bobbym wrote:
I am afraid none of them. I am feeling very dejected. Can you make another list?

Here you go: http://img.spikedmath.com/comics/456-to … tician.png

Last edited by Agnishom (2013-06-30 13:42:01)

Re: You might be a mathematician if

Thanks for providing the new list. I have one of those.

Re: You might be a mathematician if

Would you please explain points 8 and 9 in the new list?

Re: You might be a mathematician if

Those are two logic symbols: for all and there exists. But I do not get it. The other one is incomprehensible to me. I only got 3.

Re: You might be a mathematician if

How much do you score on the first list? How do you pronounce LaTeX?

Re: You might be a mathematician if

I scored zero and am glad of it. "Lah teck," I am pretty sure.

Re: You might be a mathematician if

How can you score zero? Don't the following apply to you?
• you are sure that differential equations are a very useful tool.
• you comment to your wife that her straight hair is nice and parallel.
• when you say to a car dealer "I'll take the red car or the blue one", you must add "but not both of them."

Re: You might be a mathematician if

Agnishom wrote:
Okay, but how many of them apply to you?

Only 3.

Re: You might be a mathematician if

It is understood I am only buying one. Who buys two cars? I would never use any math on any woman. Not unless I despised her. DEs are nothing more than approximations. Then, to solve them, one usually has to resort to numerics or linearize them further, turning it into an approximation of an approximation.

Re: You might be a mathematician if

"It is understood I am only buying one. Who buys two cars."
Does that stop you from using the XOR operator?

"I would never use any math on any woman. Not unless I despised her."
Really? Would you use math on chocolate chips? You surely use them a lot.

Re: You might be a mathematician if

I love XOR. My favorite command back in the old Z-80 days was XOR A. Nope, I eat chocolate chips.

"You surely use them a lot."
They do crop up in numerical analysis. But as you know, I never do any physics.

Re: You might be a mathematician if

Z-80 was a type of game player?

Re: You might be a mathematician if

Showing that the author of that page has probably never even seen a Z-80! #3 of your second list is the one!

Re: You might be a mathematician if

Hi Agnishom

For #9: Do you know what those symbols mean by themselves?
For #3: I thought LaTeX was pronounced "lay teck".

Last edited by anonimnystefy (2013-06-30 19:08:35)

The limit operator is just an excuse for doing something you know you can't.
"It's the subject that nobody knows anything about that we can all talk about!" ― Richard Feynman
"Taking a new step, uttering a new word, is what people fear most." ― Fyodor Dostoyevsky, Crime and Punishment

Re: You might be a mathematician if

Knuth says "lah teck."

Re: You might be a mathematician if

I do not know anything about the symbols.

Re: You might be a mathematician if

For all and there exists.

Re: You might be a mathematician if

I still do not get the joke.

Re: You might be a mathematician if

In mathematical logic the two symbols are used to express certain facts. For example: for every x there is (at least one) y which is greater than x. Or, even further simplified: there is no largest number. The thing is, it is very common to have an exists symbol right after the for-all symbol, like in the expression above. So the joke would translate to: for all for-all symbols there exists a there-exists symbol.

Re: You might be a mathematician if

I can not tell the difference between Logic and joke Logic.
The ages of a group of women are approximately normally distributed with a mean of 48 years and a standard deviation of 6 years. One woman is randomly selected from the group, and her age is observed.

A. Find the probability that her age will fall between 54 and 60 years is_____.
B. Find the probability that her age will fall between 47 and 52 years is_____.
C. Find the probability that her age will be less than 35 is_____.
D. Find the probability that her age will exceed 40 years is_____.
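For readers who want to check answers of this kind, here is one way to compute the four probabilities numerically, sketched in Python with only the standard library (an erf-based normal CDF; this code is not part of the original question):

```python
from math import erf, sqrt

def normal_cdf(x, mu=48.0, sigma=6.0):
    """P(X <= x) for X ~ Normal(mu, sigma), via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def prob_between(a, b, mu=48.0, sigma=6.0):
    """P(a < X < b)."""
    return normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma)

p_a = prob_between(54, 60)   # A: about 0.1359
p_b = prob_between(47, 52)   # B: about 0.3137
p_c = normal_cdf(35)         # C: about 0.0151
p_d = 1.0 - normal_cdf(40)   # D: about 0.9088
```

The same values can be read off a standard normal table after standardizing, e.g. for part A: P(54 < X < 60) = Φ(2) − Φ(1).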
Time Series Regression III: Influential Observations

When considering the empirical limitations that affect OLS estimates, Belsley et al. [1] advise that collinearities be addressed first. A next step is to look for influential observations, whose presence, individually or in groups, has measurable effects on regression results. We distinguish the essentially metric notion of "influential observation" from the more subjective notion of "outlier," which may include any data that do not follow expected patterns. We begin by loading relevant data from the previous example on "Collinearity & Estimator Variance," and continue the analysis of the credit default model presented there:

load Data_TSReg2

Influential observations arise in two fundamentally distinct ways. First, they may be the result of measurement or recording errors. In that case, they are just bad data, detrimental to model estimation. On the other hand, they may reflect the true distribution of the innovations process, exhibiting heteroscedasticity, skewness, or leptokurtosis for which the model fails to account. Such observations may contain abnormal sample information that is nevertheless essential to accurate model estimation. Determining the type of influential observation is difficult when looking at data alone. The best clues are often found in the data-model interactions that produce the residual series. We investigate these further in the example on "Residual Diagnostics."

Preprocessing influential observations has three components: identification, influence assessment, and accommodation. In econometric settings, identification and influence assessment are usually based on regression statistics. Accommodation, if there is any, is usually a choice between deleting data, which requires making assumptions about the DGP, or else implementing a suitably robust estimation procedure, with the potential to obscure abnormal, but possibly important, information.
Time series data differ from cross-sectional data in that deleting observations leaves "holes" in the time base of the sample. Standard methods for imputing replacement values, such as smoothing, violate the CLM assumption of strict exogeneity. If time series data exhibit serial correlation, as they often do in economic settings, deleting observations will alter estimated autocorrelations. The ability to diagnose departures from model specification, through residual analysis, is compromised. As a result, the modeling process must cycle between diagnostics and respecification until acceptable coefficient estimates produce an acceptable series of residuals.

The fit method of the LinearModel class computes many of the standard regression statistics used to measure the influence of individual observations. These are based on a sequence of one-at-a-time row deletions of jointly observed predictor and response values. Regression statistics are computed for each delete-1 data set and compared to the statistics for the full data set. Significant changes in the coefficient estimates after deleting an observation are the main concern. The fitted model property Diagnostics.dfBetas scales these differences by estimates of the individual coefficient variances, for comparison:

dfBetas = M0.Diagnostics.dfBetas;
hold on
hold off
xlabel('Observation Deleted')
ylabel('Scaled Change in Coefficient Estimate')
title('{\bf Delete-1 Coefficient Estimate Changes}')
axis tight
grid on

Effects of the deletions on pairs of coefficient estimates are made apparent in a matrix of 2-D scatter plots of the changes:

title('{\bf Delete-1 Coefficient Estimate Changes}')

With sufficient data, these scatters tend to be approximately elliptical [2]. Outlying points can be labeled with the name of the corresponding deleted observation by typing gname(dates) at the command prompt, then clicking on a point in the plots.
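The delete-1 idea is easy to demonstrate outside MATLAB. Below is a minimal pure-Python sketch (illustrative only, not the toolbox implementation) for a one-predictor regression: refit with each observation removed and record the change in the slope estimate. With an outlier planted at the high-leverage end of the data, the largest delete-1 change occurs exactly where the outlier sits.

```python
def slope(x, y):
    """OLS slope for simple linear regression (closed form)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    return sxy / sxx

def delete1_slope_changes(x, y):
    """Change in the slope estimate when each observation is deleted."""
    b_full = slope(x, y)
    changes = []
    for i in range(len(x)):
        xi = x[:i] + x[i + 1:]
        yi = y[:i] + y[i + 1:]
        changes.append(slope(xi, yi) - b_full)
    return changes

# Clean line y = 2x + 1 with an outlier planted at the last observation:
x = list(range(10))
y = [2 * xi + 1 for xi in x]
y[9] += 10

changes = delete1_slope_changes(x, y)
most_influential = max(range(len(changes)), key=lambda i: abs(changes[i]))  # index 9
```

Deleting the outlier restores the slope to exactly 2, so the change at index 9 (here about −0.545) dominates all the others.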
Alternatively, Cook's distance, found in the Diagnostics.CooksDistance property of the fitted model, is a common summary statistic for these plots, with contours forming ellipses centered around the origin (that is, dfBeta = 0). Points far from the center in multiple plots have a large Cook's distance, indicating an influential observation:

cookD = M0.Diagnostics.CooksDistance;
ylabel('Cook''s Distance')
title('{\bf Cook''s Distance}')
axis tight
grid on

If beta_hat_(i) is the coefficient vector estimated with the ith observation deleted from the data, then Cook's distance is, up to scaling, the squared Euclidean distance between the fitted response vectors X*beta_hat and X*beta_hat_(i). As a result, Cook's distance is a direct measure of the influence of an observation on fitted response values.

A related measure of influence is leverage, which uses the normal equations to write y_hat = H*y, where H = X*(X'*X)^(-1)*X' is the hat matrix, computed from the predictor data alone. The diagonal elements of H are the leverage values, giving componentwise proportions of the observed response y contributing to the corresponding fitted values in y_hat. The leverage values, found in the Diagnostics.Leverage property of the fitted model, emphasize different sources of influence:

leverage = M0.Diagnostics.Leverage;
title('{\bf Leverage}')
axis tight
grid on

Another common measure of influence, the Mahalanobis distance, is just a scaled version of the leverage. The Mahalanobis distances in X0 can be computed using d = mahal(X0,X0), in which case the leverage values are given by h = d/(T0-1)+(1/T0). Additional diagnostic plots can be created by retrieving other statistics from the Diagnostics property of a fitted model, or by using the plotDiagnostics method of the LinearModel class.

Before deleting data, some kind of economic meaning should be assigned to the influential points identified by the various measures. Cook's distance, associated with changes in the overall response, shows a sharp spike in 2001. Leverage, associated with predictor data alone, shows a sharp spike in 1988.
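For intuition, both measures can be computed by hand in the one-predictor case. Here is a small pure-Python sketch (it mirrors the standard definitions, not the toolbox code): for simple regression with an intercept, the leverage is h_i = 1/n + (x_i - xbar)^2 / Sxx, and Cook's distance is D_i = e_i^2 * h_i / (p * s^2 * (1 - h_i)^2) with p = 2 coefficients and s^2 the MSE.

```python
def leverage(x):
    """Diagonal of the hat matrix for simple regression with intercept."""
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return [1.0 / n + (xi - xbar) ** 2 / sxx for xi in x]

def cooks_distance(x, y):
    """Cook's distance D_i = e_i^2 * h_i / (p * s^2 * (1 - h_i)^2), p = 2."""
    n, p = len(x), 2
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    s2 = sum(e * e for e in resid) / (n - p)   # mean squared error
    h = leverage(x)
    return [e * e * hi / (p * s2 * (1 - hi) ** 2) for e, hi in zip(resid, h)]

x = list(range(10))
y = [2 * xi + 1 for xi in x]
y[9] += 10                   # planted outlier at a high-leverage point

h = leverage(x)              # leverages sum to the number of coefficients (2)
d = cooks_distance(x, y)     # the planted outlier dominates Cook's distance
```

A useful sanity check on any leverage computation: the h_i always sum to the number of estimated coefficients, since that sum is the trace of the hat matrix.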
It is also noteworthy that after the sudden increase in leverage and a period of high default rates, the predictor BBB bends upward after 1991, and the percentage of lower-grade bonds begins to trend. (See a plot of the predictors in the example on "Linear Models.") Some clues are found in the economic history of the times. 2001 was a period of recession in the U.S. economy (second vertical band in the plots above), brought on, in part, by the collapse of the speculative Internet bubble and a reduction in business investments. It was also the year of the September 11 attacks, which delivered a severe shock to the bond markets. Uncertainty, rather than quantifiable risk, characterized investment decisions for the rest of that year. The 1980s, on the other hand, saw the beginning of a long-term change in the character of the bond markets. New issues of high-yield bonds, which came to be known as "junk bonds," were used to finance many corporate restructuring projects. This segment of the bond market collapsed in 1989. Following a recession (first vertical band in the plots above) and an oil price shock in 1990-1991, the high-yield market began to grow again, and matured. The decision to delete data ultimately depends on the purpose of the model. If the purpose is mostly explanatory, deleting accurately recorded data is inappropriate. If, however, the purpose is forecasting, then it must be asked if deleting points would create a presample that is more "typical" of the past, and so the future. The historical context of the data in 2001, for example, may lead to the conclusion that it misrepresents historical patterns, and should not be allowed to influence a forecasting model. Likewise, the history of the 1980s may lead to the conclusion that a structural change occurred in the bond markets, and data prior to 1991 should be ignored for forecasts in the new regime. 
For reference, we create both of the amended data sets:

% Delete 2001:
d1 = (dates ~= 2001); % Delete 1
datesd1 = dates(d1);
Xd1 = X0(d1,:);
yd1 = y0(d1);

% Delete dates prior to 1991, as well:
dm = (datesd1 >= 1991); % Delete many
datesdm = datesd1(dm);
Xdm = Xd1(dm,:);
ydm = yd1(dm);

The effects of the deletions on model estimation are summarized below. Dataset arrays provide a convenient format for comparing the regression statistics across models:

Md1 = LinearModel.fit(Xd1,yd1);
Mdm = LinearModel.fit(Xdm,ydm);

% Model mean squared errors:
MSEs = dataset({M0.MSE,'Original'},...

% Coefficient estimates:
Coeffs = dataset({M0.Coefficients.Estimate,'Original'},...

% Coefficient standard errors:
StdErrs = dataset({M0.Coefficients.SE,'Original'},...

MSEs =

             Original     Delete01     Post90
    MSE      0.0058287    0.0032071    0.0023762

Coeffs =

             Original     Delete01      Post90
    Const    -0.22741     -0.12821      -0.13529
    AGE       0.016781     0.016635      0.014107
    BBB       0.0042728    0.0017657     0.0016663
    CPF      -0.014888    -0.0098507    -0.010577
    SPR       0.045488     0.024171      0.041719

StdErrs =

             Original     Delete01     Post90
    Const    0.098565     0.077746     0.086073
    AGE      0.0091845    0.0068129    0.013024
    BBB      0.0026757    0.0020942    0.0030328
    CPF      0.0038077    0.0031273    0.0041749
    SPR      0.033996     0.025849     0.027367

The MSE improves with deletion of the point in 2001, and then again with deletion of the pre-1991 data. Deleting the point in 2001 also has the effect of tightening the standard errors on the coefficient estimates. Deleting all of the data prior to 1991, however, severely reduces the sample size, and the standard errors of several of the estimates grow larger than they were with the original data.

[1] Belsley, D. A., E. Kuh, and R. E. Welsch. Regression Diagnostics. Hoboken, NJ: John Wiley & Sons, 1980.

[2] Weisberg, S. Applied Linear Regression. Hoboken, NJ: John Wiley & Sons, Inc., 2005.
p-divisible group

In great generality, for an integer $p$ a $p$-divisible group is a codirected diagram of abelian group objects in a category $C$ where the abelian group objects are (equivalently) the kernels of the maps given by multiplication with a power of $p$; these kernels are also called $p^n$-torsions. In the classically studied case, $p$ is a prime number, $C$ is the category of schemes over a commutative ring (mostly a field of prime characteristic), and the abelian group schemes occurring in the diagram are assumed to be finite. In this case the diagram defining the $p$-divisible group can be described in terms of the growth of the order of the group schemes in the diagram. Note that there is also a notion of divisible group.

Fix a prime number $p$, a positive integer $h$, and a commutative ring $R$. A $p$-divisible group of height $h$ over $R$ is a codirected diagram $(G_u, i_u)_{u \in \mathbb{N}}$ where each $G_u$ is a finite commutative group scheme over $R$ of order $p^{u h}$, which also satisfies the property that

$0\to G_u \stackrel{i_u}{\to} G_{u +1}\stackrel{p^u}{\to} G_{u +1}$

is exact. In other words, the maps of the system identify $G_u$ with the kernel of multiplication by $p^u$ in $G_{u +1}$. Some authors refer to the colimit $\mathrm{colim}\, G_u$ of the system as the $p$-divisible group. Note that if everything is affine, $G_u=\mathrm{Spec}(A_u)$ and, with the colimit computed in affine schemes, $\mathrm{colim}\, G_u = \mathrm{Spec}(\lim A_u)$.

It can be checked that a $p$-divisible group over $R$ is a $p$-torsion commutative formal group $G$ for which $p\colon G \to G$ is an isogeny.

The kernel of raising to the $p^u$ power on $\mathbb{G}_m$ (sometimes called $p^u$-torsion) is a group scheme $\mu_{p^u}$. The colimit $\lim_{\to} \mu_{p^u}=\mu_{p^\infty}$ is a $p$-divisible group of height $1$.
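For the multiplicative example just given, the defining conditions can be checked directly (a restatement for convenience, with height $h = 1$):

```latex
\operatorname{ord}(\mu_{p^u}) \;=\; p^{u},
\qquad
0 \longrightarrow \mu_{p^u} \stackrel{i_u}{\longrightarrow} \mu_{p^{u+1}}
\stackrel{p^u}{\longrightarrow} \mu_{p^{u+1}}
\quad \text{is exact,}
```

so that $\mu_{p^u}$ is identified with $\ker\bigl(p^u \colon \mu_{p^{u+1}} \to \mu_{p^{u+1}}\bigr)$, i.e. with the kernel of raising to the $p^u$-th power, and the height is $h = 1$.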
The eponymous example ($p$-divisible groups are sometimes called Barsotti-Tate groups) is a special case of the previous one, namely the Barsotti-Tate group of an abelian variety. Let $X$ be an abelian variety over $R$ of dimension $g$; then the multiplication map by $p^u$ has kernel $_{p^u}X$, which is a finite group scheme over $R$ of order $p^{2g u}$. The natural inclusions satisfy the conditions for the limit, denoted $X(p)$, to be a $p$-divisible group of height $2g$.

A theorem of Serre and Tate says that there is an equivalence of categories between divisible, commutative, formal Lie groups over $R$ and the category of connected $p$-divisible groups over $R$, given by $\Gamma \mapsto \Gamma (p)$, where $\Gamma(p)=\lim_{\to} \mathrm{ker}(p^n)$. In particular, every connected $p$-divisible group is smooth.

The Cartier dual

• Given a $p$-divisible group $G$, each individual $G_u$ has a Cartier dual $G_u^D$, since they are all group schemes. There are also maps $j_u$ that make the composite $G_{u+1}\stackrel{j_u}{\to} G_u \stackrel{i_u}{\to} G_{u +1}$ the multiplication by $p$ on $G_{u +1}$. After taking duals, the composite is still the multiplication by $p$ map on $G_{u +1}^D$, so it is easily checked that $(G_{u}^D, j_{u}^D)$ forms a $p$-divisible group, called the Cartier dual.

• One of the important properties of the Cartier dual is that one can determine the height of a $p$-divisible group (often a hard task in the abstract) using the dimension of the formal group and that of its dual. For any $p$-divisible group $G$, we have the formula $ht(G)=ht(G^D)=\dim G + \dim G^D$.

Dieudonné modules

For the moment see display of a p-divisible group.

Examples

• The dual $\mu_{p^\infty}^D\simeq \mathbb{Q}_p/\mathbb{Z}_p$.

• For an abelian variety $X$, the dual is $X(p)^D=X^t(p)$, where $X^t$ denotes the dual abelian variety.
Another proof that $X(p)$ has height $2g$ is to note that $X$ and $X^t$ have the same dimension $g$, so using our formula for height we get $ht(X(p))=2g$.

The category of étale $p$-divisible groups is equivalent to the category of $p$-adic representations of the fundamental group of the base scheme.

p-divisible groups and crystals

References: Weinstein

Relation to crystalline cohomology

In derived algebraic geometry

See Lurie. For references concerning Witt rings and Dieudonné modules see there.

Original texts and classical surveys

• Barsotti, Iacopo (1962), "Analytical methods for abelian varieties in positive characteristic", Colloq. Théorie des Groupes Algébriques (Bruxelles, 1962), Librairie Universitaire, Louvain, pp. 77-85, MR 0155827

• Demazure, Michel (1972), Lectures on p-divisible groups, Lecture Notes in Mathematics 302, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0060741, ISBN 978-3-540-06092-5, MR 034426, web

• Grothendieck, Alexander (1971), "Groupes de Barsotti-Tate et cristaux", Actes du Congrès International des Mathématiciens (Nice, 1970), 1, Gauthier-Villars, pp. 431-436, MR 0578496

• Messing, William (1972), The crystals associated to Barsotti-Tate groups: with applications to abelian schemes, Lecture Notes in Mathematics 264, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0058301, MR 0347836

• Serre, Jean-Pierre (1995) [1966], "Groupes p-divisibles (d'après J. Tate)", Exp. 318, Séminaire Bourbaki 10, Paris: Société Mathématique de France, pp. 73-86, MR 1610452, web

• Shatz, Stephen, "Group Schemes, Formal Groups, and $p$-Divisible Groups", in Arithmetic Geometry, ed. Gary Cornell and Joseph Silverman, 1986

• Tate, John T. (1967), "p-divisible groups", in Springer, Tonny A., Proc. Conf. Local Fields (Driebergen, 1966), Berlin, New York: Springer-Verlag, MR 0231827

Modern surveys

Further development of the theory
Foundations of Computational Mathematics

August 10, 1998 to December 23, 1998

Organizers: Felipe Cucker (co-Chair), Arieh Iserles (co-Chair), Tien Yien Li, Mike Overton, Jim Renegar, Mike Shub (co-Chair), Steve Smale, and Andrew Stuart

Description

This half year program is organized around four topics:

• Complexity of numerical computations
• homotopy methods
• optimization and interior point methods
• differential equations

The main emphasis is on the relationships among these topics.

Solving systems of equations is among the oldest and thorniest problems of mathematics. For centuries mathematicians have alternated between 'negative' and 'positive' results. An example of a negative result is the famous theorem of Abel and Galois, who proved that there exist polynomials of degree 5 in one variable whose zeros cannot be expressed by means of the four basic arithmetic operations and root extractions. A positive result of tremendous applicability is Newton's method. It associates to a function f (e.g. one of the polynomials just mentioned) another function N[f] with the following property. If z is a point close enough to a root of f, then the sequence z_0 = z, z_{i+1} = N[f](z_i) converges to that root; moreover, the distance between z_i and the root is at most that between z_0 and the root divided by 2^(2^i - 1). In more recent years mathematicians have focused on questions such as what exactly "close enough" or "in general" mean in the description above. Can one decide whether a point z is close enough? Will the sequence starting with a specific z_0 converge to a root of f?

Image courtesy of Scott Sutherland, SUNY Stony Brook

In the accompanying picture, bad initial points (those for which the sequence fails to converge to a root) for the polynomial f(z) = (z^2 - 1)(z^2 + 0.16) are colored black. Note the complicated pattern arising from this set of bad points. The clear implication is that the question whether an initial point is 'good' might well be an undecidable problem.
Yet, to formally prove such an assertion requires the development of a formal computational model and a deep understanding of the geometry of decidable sets. It has been accomplished very recently.

The situation described above is in a sense very simple. In general, one needs to deal with functions in several variables and occasionally inequalities must be considered as well. Such problems are ubiquitous in optimization, where the goal is to find x maximizing (or minimizing) a function f subject to inequality constraints. Moreover, in a realistic computing environment it is important to consider the effects produced by round-off and other sources of error.

The goal of the FoCM program at MSRI is to develop a better understanding of how mathematics underpins numerical calculations. We hope to address ourselves to the four following issues and the interplay among them:

• Complexity: What is the 'cost' inherent in a computational problem that no algorithm can circumvent?
• Optimization: How to find the best value of a function (or a functional) subject to constraints?
• Homotopy: How does the knowledge of the solution of a 'nearby' problem assist us to compute the problem in hand?
• Geometric integration: How to compute approximate solutions that share qualitative properties with the true solution of the problem?

All these issues share two important characteristics. Their methodology requires deep and demanding pure mathematics, while their understanding is vital to the development of a new generation of powerful and reliable algorithms in scientific computing.
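The Newton iteration described earlier is easy to experiment with. Below is a short Python sketch (for illustration only) applying N[f] to the quartic from the picture, f(z) = (z^2 - 1)(z^2 + 0.16), whose roots are ±1 and ±0.4i:

```python
def newton(f, df, z0, tol=1e-12, max_iter=100):
    """Iterate z <- z - f(z)/f'(z) until |f(z)| is small."""
    z = z0
    for _ in range(max_iter):
        fz = f(z)
        if abs(fz) < tol:
            break
        z = z - fz / df(z)
    return z

f = lambda z: (z**2 - 1) * (z**2 + 0.16)
df = lambda z: 2 * z * (2 * z**2 - 0.84)   # derivative of f

root1 = newton(f, df, 2.0)    # real start: converges to the root z = 1
root2 = newton(f, df, -2.0)   # converges by symmetry to z = -1
```

Feeding in complex starting points (e.g. `newton(f, df, 0.3 + 0.5j)`) and coloring each point of a grid by the root it reaches reproduces the basin picture described above.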
Quick Smooth Manifold Question

May 17th 2010, 09:17 PM #1

This may be a stupid question, but why is it that, given a topological manifold $X$ and some atlas $\mathfrak{A}$, there exists a unique $C^{\infty}$ structure $\mathfrak{A}^*$ on $X$ which contains $\mathfrak{A}$? I can see why there exists some $C^{\infty}$ structure: just define $\Omega$ to be the set of all atlases on $X$ containing $\mathfrak{A}$, order it in the natural way, and apply Zorn's lemma. But why is it unique? Is it because, by the way one constructs the ordering, any two maximal atlases $\mathfrak{M},\mathfrak{N}$ would need to be comparable and thus equal? Any help would be appreciated!
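For reference, a standard way to get both existence and uniqueness at once, without Zorn's lemma (this is the usual textbook argument, sketched here for convenience), is to write the maximal atlas down explicitly:

```latex
\mathfrak{A}^{*} \;=\; \bigl\{\, (U,\varphi) \;:\; (U,\varphi)\ \text{is a chart on}\ X
\ \text{smoothly compatible with every chart of}\ \mathfrak{A} \,\bigr\}.
```

One checks, using that the charts of $\mathfrak{A}$ cover $X$, that any two charts of $\mathfrak{A}^{*}$ are smoothly compatible with each other, so $\mathfrak{A}^{*}$ is itself a smooth atlas containing $\mathfrak{A}$. Any smooth atlas containing $\mathfrak{A}$ consists of charts compatible with $\mathfrak{A}$ and is therefore contained in $\mathfrak{A}^{*}$; hence $\mathfrak{A}^{*}$ is maximal, and any maximal atlas containing $\mathfrak{A}$ must equal it.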
Phoebus: A Distributed System for Processing Very Large Graphs Phoebus is a system written in Erlang for distributed processing of very large graphs that span billions of vertices and edges. Phoebus is an implementation of the Pregel paper published by Google Research. Phoebus supports a distributed model of computation similar to MapReduce, but more tuned to graph processing. From the Google Research Blog post by Grzegorz Malewicz: Take, for example, geographic locations. A relatively simple analysis of a standard map (a graph!) can provide the shortest route between two cities. But progressively more sophisticated analysis could be applied to richer information such as speed limits, expected traffic jams, roadworks and even weather conditions. In addition to the shortest route, measured as sheer distance, you could learn about the most scenic route, or the most fuel-efficient one, or the one which has the most rest areas. All these options, and more, can all be extracted from the graph and made useful provided you have the right tools and inputs. Essentially, Phoebus makes calculating data for each vertex and edge in parallel possible on a cluster of nodes. Makes me wish I had a massively large graph to test it with.
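The Pregel vertex-centric model is easy to sketch outside Erlang. The Python toy below is not Phoebus's API (all names are illustrative); it runs single-source shortest paths in bulk-synchronous supersteps: each superstep, every vertex that received messages updates its state and sends messages along its out-edges, and the computation halts when no messages remain — the same pattern Phoebus distributes across a cluster of Erlang nodes.

```python
import math

def pregel_sssp(edges, source):
    """Single-source shortest paths in the Pregel style.
    edges maps each vertex to a list of (neighbor, weight) pairs."""
    vertices = set(edges) | {u for outs in edges.values() for u, _ in outs}
    dist = {v: math.inf for v in vertices}
    inbox = {source: [0.0]}            # messages delivered this superstep
    while inbox:                       # vote to halt: no messages => done
        outbox = {}
        for v, msgs in inbox.items():  # "compute" runs per vertex (conceptually in parallel)
            best = min(msgs)
            if best < dist[v]:         # improved distance: adopt it and notify neighbors
                dist[v] = best
                for u, w in edges.get(v, ()):
                    outbox.setdefault(u, []).append(best + w)
        inbox = outbox                 # synchronization barrier between supersteps
    return dist
```

The "scenic route" or "fuel-efficient route" variants from the quote amount to changing the edge weights and the per-vertex combine step (here, `min`).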
A Month of Math Software – March 2013 Welcome to the latest edition of A Month of Math Software, where I look back over the last month and report on all that is new and shiny in the world of mathematical software. I've recently restarted work after the Easter break, so it seems fitting that I offer you all Easter eggs courtesy of Lijia Yu and R. Enjoy!
General purpose mathematical systems
MATLAB add-ons
GPU accelerated computation
Statistics and visualisation
Finite elements
Method and apparatus for encoding data for transfer over a communication channel - Patent # 5559835 - PatentGenius
Method and apparatus for encoding data for transfer over a communication channel (12 images)
Inventor: Betts; William L. (St. Petersburg, FL)
Date Issued: September 24, 1996
Application: 08/391,328
Filed: February 21, 1995
Assignee: Lucent Technologies Inc. (Murray Hill, NJ)
Primary Examiner: Chin; Wellington
Assistant Examiner: May; Timothy J.
Attorney Or Agent: Malvone; Christopher N.
U.S. Class: 375/265; 375/298; 714/792
Field Of Search: 375/265; 375/263; 375/264; 375/266; 375/269; 375/290; 375/298; 371/43; 332/103
U.S. Patent Documents: 4941154; 5022053; 5040191; 5159610; 5162812; 5195107; 5249200; 5265127
Foreign Patent Documents: 2652692
Other References:
Telecommunications Industry Association, Paper TR-30.1/93-04-14, "Implementation of precoding in V.fast" by Vedat Eyuboglu, et al., General DataComm Inc., Motorola Codex, Rockwell International, Atlanta, Ga., 15 Apr. 1993.
Telecommunications Industry Association, Paper TR-30.1/93-06-23, "ISI Coder-Combined Coding & Precoding", Rajiv Laroia, AT&T, Baltimore, Md., 14-18 Jun. 1993.
CCITT, Paper 989-1992, "Trellis precoding vs. linear pre-emphasis: test results", Motorola Information Systems, Baltimore, Md., Jul. 31-Aug. 2, 1991.
International Telecommunications Union, Telecommunications Standards Sector (ITU-TSS), Period 1993-96, "4D codes for V.fast", Motorola Information Systems, Rockville, Md., May 12-14.
European Search Report dated Apr. 11, 1995, regarding EPO Application No. 94 30 4979.
European Transactions On Telecommunications and Related Technologies, May-Jun. 1993, Italy, vol. 4, No. 3, pp. 243-256, "Advanced Modulation Techniques For V.Fast"; M. V.
Eyuboglu, et al.

Abstract: In a system that uses a dither signal in the production of a transmitted signal, the recoverability of an original trellis code is maintained while forming the dither signal using a modulo value that is equal to the distance between two adjacent symbols. This is accomplished by forming individual modulo counts for each of the orthogonal components produced by the transmitter's 3-tap FIR filter. The modulo counts and the bits from the trellis encoder are used to substitute the constellation subset identified by the trellis encoder with another constellation subset. The substituted subset is used for transmission and results in recovery of the original trellis code by the trellis decoder in the receiver.

Claim: I claim:

1. A method of transmitting data from a transmitter to a receiver over a communication channel, comprising the steps of: generating a first signal using an output from a filter that receives prior transmitted signal points as inputs; selecting a subset of a predefined signal point constellation based on at least one bit of a data word and a modulo count derived by counting modifications made to an amplitude of said signal, said modifications resulting in said amplitude being between an upper predetermined threshold and a lower predetermined threshold; selecting a signal point in said subset based on at least one bit of said data word; and transmitting a second signal representative of said signal point over the communication channel.

2. The method of claim 1, wherein said step of selecting said subset comprises using an output of a state machine that enters one of a plurality of predefined states, a current state of said state machine being a function of at least one bit of said data word and a prior state of said state machine.

3. The method of claim 2, wherein said state machine produces a trellis code.

4. The method of claim 2, wherein said state machine produces a differential code.

5.
The method of claim 2, wherein said state machine remains in one state for at least two signal point periods.

6. The method of claim 1, wherein said modulo count comprises conducting a separate modulo count on each orthogonal component of said signal.

7. The method of claim 1, wherein a variable modulo value is used to produce said modulo count.

8. The method of claim 1, further comprising adding a dither signal to a third signal representative of said signal point to produce said second signal representative of said signal point.

9. The method of claim 8, wherein said dither signal comprises orthogonal components X and Y, where -2.sup.-m .ltoreq. X .ltoreq. 2.sup.-m, -2.sup.-m .ltoreq. Y .ltoreq. 2.sup.-m, and 2(2.sup.-m) is the distance between adjacent signal points in said signal point constellation.

10. The method of claim 9, wherein m is an integer.

11. A method of transmitting data from a transmitter to a receiver over a communication channel, comprising the steps of: identifying a first subset of a predefined signal point constellation based on at least one bit of a data word; generating a signal using an output from a filter that receives prior transmitted signal points as inputs; selecting a second subset of said predefined signal point constellation based on said first subset and a modulo count derived by counting modifications made to an amplitude of said signal, said modifications resulting in said amplitude being between an upper predetermined threshold and a lower predetermined threshold, said second subset having a signal point selected based on at least one bit of said data word; and transmitting a second signal representative of said signal point over the communication channel.

12.
The method of claim 11, wherein said step of identifying a first subset comprises using an output of a state machine that enters one of a plurality of predefined states, a current state of said state machine being a function of at least one bit of said data word and a prior state of said state machine.

13. The method of claim 12, wherein said state machine produces a trellis code.

14. The method of claim 12, wherein said state machine produces a differential code.

15. The method of claim 12, wherein said state machine remains in one state for at least two signal point periods.

16. The method of claim 11, wherein said subsets are rotationally related.

17. The method of claim 11, wherein said modulo count comprises conducting a separate modulo count on each orthogonal component of said signal.

18. The method of claim 11, wherein a variable modulo value is used to produce said modulo count.

19. The method of claim 11, further comprising adding a dither signal to a third signal representative of said signal point to produce said second signal representative of said signal point.

20. The method of claim 19, wherein said dither signal comprises orthogonal components X and Y, where -2.sup.-m .ltoreq. X .ltoreq. 2.sup.-m, -2.sup.-m .ltoreq. Y .ltoreq. 2.sup.-m, and 2(2.sup.-m) is the distance between adjacent signal points in said signal point constellation.

21. The method of claim 20, wherein m is an integer.

22.
A method of transmitting data from a transmitter to a receiver over a communication channel, comprising the steps of: identifying a first subset of a predefined signal point constellation based on at least one bit of a data word; generating a signal using an output from a filter that receives prior transmitted signal points as inputs; selecting a second subset of said predefined signal point constellation based on said first subset and a modulo count derived by counting modifications made to an amplitude of said signal, said modifications resulting in said amplitude being between an upper predetermined threshold and a lower predetermined threshold, said second subset having a signal point selected based on at least one bit of said data word; modifying an amplitude of at least one orthogonal component of a second signal representative of said signal point to form a modified signal point, said modification being a function of said signal point's position in said constellation; and transmitting a third signal representative of said modified signal point over the communication channel.

23. The method of claim 22, wherein said step of identifying a first subset comprises using an output of a state machine that enters one of a plurality of predefined states, a current state of said state machine being a function of at least one bit of said data word and a prior state of said state machine.

24. The method of claim 23, wherein said state machine produces a trellis code.

25. The method of claim 23, wherein said state machine produces a differential code.

26. The method of claim 23, wherein said state machine remains in one state for at least two signal point periods.

27. The method of claim 22, wherein said modulo count comprises conducting a separate modulo count on each orthogonal component of said signal.

28. The method of claim 22, wherein a variable modulo value is used to produce said modulo count.

29.
The method of claim 22, further comprising adding a dither signal to a fourth signal representative of said signal point to produce said second signal representative of said signal point.

30. The method of claim 29, wherein said dither signal comprises orthogonal components X and Y, where -2.sup.-m .ltoreq. X .ltoreq. 2.sup.-m, -2.sup.-m .ltoreq. Y .ltoreq. 2.sup.-m, and 2(2.sup.-m) is the distance between adjacent signal points in said signal point constellation.

Description:

CROSS-REFERENCE TO RELATED APPLICATIONS

Related subject matter is disclosed in the applications assigned to the same assignee hereof identified as Ser. No. 08/076,603, filed Jun. 14, 1993, entitled "Intersymbol Interference Channel Coding Scheme" and Ser. No. 08/141,301, filed Oct. 22, 1993, entitled "A Method and Apparatus for Adaptively Providing Precoding and Preemphasis Conditioning to Signal Data for Transfer Over a Communication Channel".

FIELD OF THE INVENTION

The invention relates to encoding data for transfer over a communication channel; more specifically, communicating data over a telephone communication channel which is susceptible to inter-symbol interference.

DESCRIPTION OF THE RELATED ART

U.S. Pat. No. 5,162,812, entitled "Technique for Achieving the Full Coding Gain of Encoded Digital Signals", discloses a system in which a transmitted signal is encoded using a trellis code and precoded using a generalized partial response filter. FIG. 1 illustrates the transmitter disclosed in the aforementioned U.S. Patent. Serial-to-parallel converter 10 converts the incoming data to parallel words. Trellis encoder 12 encodes the parallel word to provide increased immunity to inter-symbol interference. Symbol mapper 14 maps the trellis encoded word to a signal point in a predefined symbol or signal point constellation. The symbol specified by symbol mapper 14 is in the form of a complex number which is received by precoding unit 16.
Precoding unit 16 is used to compensate for signal distortions that are introduced at a receiver when the receiver passes the symbol through a noise whitening filter. Received symbols are passed through a noise whitening filter to compensate for the communication channel's colored noise and thereby improve proper decoding of the trellis code. Precoder 16 includes transversal filter 18 and non-linear filter 20. Non-linear filter 20 is in the form of a modulo device that repeatedly subtracts or adds a value of 2L until the output .alpha. of the device satisfies -L .ltoreq. .alpha. .ltoreq. L. Non-linear filter 20 is used to compensate for any instability introduced by filter 18. The output of encoder 16 is modulated by modulator 19 using a modulation technique such as QAM (quadrature amplitude modulation). The output of modulator 19 is filtered by filter 20, passed through hybrid 22, and then out to local telephone loop 24.

A similar system is disclosed in a paper presented to Technical Committee TR-30 of the Telecommunications Industry Association (TIA) in Atlanta, Ga. on Apr. 15, 1993. The paper is entitled "Implementation of Precoding in V-FAST", authored by Eyuboglu et al. FIG. 2 illustrates the precoder disclosed in the paper. Precoder 30 is similar to precoder 16. In this embodiment, both the FIR filter and the modulo device are in the feedback loop. The FIR filter is disclosed as a 3-tap filter, and the output of the modulo device is subtracted from the input to the precoder.

Both of the aforementioned systems precode the data so that there is compensation for the effects of the noise whitening filter in the receiver. Unfortunately, both systems have drawbacks. The first system is only useful for square symbol constellations and thereby prevents using more efficient constellations. The second system uses a relatively large dither signal.
The large dither signal varies transmitted signal power by a relatively large amount that may exceed the maximum allowable power for the communication channel. As a result, the amount of signal space allotted to the constellation must be decreased to accommodate the variations in transmitted power. Decreasing the constellation's signal space decreases the space between signal points in the constellation and decreases noise immunity.

SUMMARY OF THE INVENTION

The present invention is not limited to square constellations, and it decreases the amplitude of the dither signal. The dither signal is decreased by using a smaller modulo value to generate the dither signal while maintaining the ability to recover the original trellis code in the receiver. The recoverability of the original trellis code is achieved by using a modulo count, which was formed while producing the dither signal, to select a substitute constellation subset for the constellation subset identified by the trellis encoder.

BRIEF DESCRIPTION OF THE DRAWING

FIG. 1 illustrates a prior art transmitter;
FIG. 2 illustrates a precoder used in a transmitter;
FIG. 3 illustrates the transmitter section of one embodiment of the present invention;
FIG. 4 illustrates a symbol or signal point constellation;
FIG. 5 illustrates a state machine with 64 states;
FIG. 6 illustrates a state machine with 16 states;
FIG. 7 is a subset selection table;
FIG. 8 is a subset substitution table;
FIG. 9 illustrates the receiver section of one embodiment of the present invention;
FIG. 10 illustrates the transmitter section of another embodiment of the present invention;
FIG. 11 illustrates the receiver section of another embodiment of the present invention;
FIG. 12 illustrates the transmitter section of the present invention with a non-linear encoder;
FIG. 13 illustrates the receiver section of the present invention with a non-linear decoder;
FIG. 14 is a block diagram of the non-linear encoder;
FIGS.
15-17 illustrate warped constellations with different values for g;
FIG. 18 illustrates an unwarped constellation; and
FIG. 19 is a block diagram of the non-linear decoder.

DESCRIPTION OF THE PREFERRED EMBODIMENT

FIG. 3 illustrates the transmitter section of one embodiment of the present invention. Serial data is received by serial-to-parallel converter 40. The output of serial-to-parallel converter 40 is an L-bit word. Bits 1 to n are sent to differential encoder 42, and the remaining n+1 to L bits are sent to symbol mappers 44a, 44b, 44c, 44d, and 46a, 46b, 46c and 46d. Bits n+1 to L are mapped into different signal point or symbol constellation subsets by the mappers. Taken together, the subsets comprise the overall transmit constellation. The output of each mapper is a complex number with orthogonal components. The complex number identifies a symbol in a symbol constellation subset. The outputs from the mappers are received by mux 48. Differential encoder 42 differentially encodes some of bits 1 through n. Differentially encoded bits, as well as the unchanged data bits, are passed to trellis encoder 50. Trellis encoder 50 produces trellis bits Y.sub.0 through Y.sub.n. Bits Y.sub.0 through Y.sub.n are received by trellis enhancer 52. Trellis enhancer 52 also receives inputs x-count and y-count from modulo device 54. Based on the values Y.sub.0 -Y.sub.n and the values of x-count and y-count, trellis enhancer 52 controls mux 48 to select one of the mapper outputs. The output of mux 48, signal e(k), is received by summer 58. Dither signal d(k) from modulo device 54 is subtracted from signal e(k) in summer 58. The output of summer 58, signal x(k), is fed to modulator 60, passband filter 62 and hybrid 64. The output of summer 58 is also fed to three-tap finite impulse response (FIR) filter 66. The output of filter 66 is received by modulo device 54 to produce outputs x-count, y-count and d(k).
During each symbol period, serial-to-parallel converter 40 produces parallel word (I.sub.1 -I.sub.L).sub.k. Bits I.sub.n+1 -I.sub.L are passed to the mappers. The mappers output a signal point or symbol in a predefined constellation subset based on bits I.sub.n+1 -I.sub.L. FIG. 4 illustrates an 8-way partitioned symbol constellation. Bits I.sub.1 -I.sub.L are encoded as one of the symbols in the constellation. The constellation shows that there are eight constellation subsets making up the overall constellation. The subsets consist of signal points labeled a, b, c, d, A, B, C and D, where like letters belong to the same subset. In a 4-way partitioned constellation where there are four subsets, the upper and lower case form of each letter is considered part of the same subset. Data bits I.sub.1 through I.sub.n and a trellis bit are used to select one of the eight subsets. Data bits I.sub.n+1 through I.sub.L are used to identify a particular symbol or signal point within the subset. Differential encoder 42 and trellis encoder 50 use bits I.sub.1 -I.sub.n to choose a constellation subset. In this embodiment n=5; however, it may have other values. Differential encoder 42 differentially encodes bits I.sub.2 -I.sub.3 in accordance with the differential encoding table to produce bits J.sub.2 and J.sub.3.

[Differential Encoding table: inputs I.sub.2, I.sub.3 and previous outputs J.sub.2', J.sub.3' determine outputs J.sub.2, J.sub.3; the table entries were lost in extraction.]

Bits I.sub.1, J.sub.2, J.sub.3, I.sub.4 and I.sub.5 are fed to trellis encoder 50. Trellis encoder 50 can be any finite state machine. These types of state machines are well known in the art, and two such state machines are shown in FIGS. 5 and 6. The state machine of FIG. 5 is a 64-state machine and the state machine of FIG. 6 is a 16-state machine. State machines with other numbers of states may be used.
In the case of the 64-state machine, bits J.sub.2, I.sub.1, J.sub.3 and I.sub.4 areused as inputs. The outputs of the state machine are bits Y.sub.0 -Y.sub.5, where bits Y.sub.1 -Y.sub.5 equal bits I.sub.1, J.sub.2, J.sub.3, I.sub.4 and I.sub.5, respectively. The devices labeled 80 are adders and the devices labeled 82 are delays. Bits Y.sub.0 -Y.sub.5 are used to identify constellation subsets that are used with remaining bits I.sub.n+1 -I.sub.L. The state machines of FIGS. 5 and 6 are used to output a new Y.sub.0 bit every symbol period for two-dimensional trellis encoding, andevery other symbol period for 4-dimensional encoding. If a new set of outputs is produced each symbol period, delay elements 82 act as a one-symbol period delay, and if a new output is produced every other symbol period, elements 82 act as two symbolperiod delays. When used to produce a new set of outputs Y.sub.0 through Y.sub.5 every two symbol periods, the selection of subsets is shown in Table 1 of FIG. 7. The Table illustrates which constellation subsets will be used during the two symbolperiods. The first letter identifies the constellation subset used during the first symbol period, and the second letter identifies the constellation subset used during the second symbol period. (If two-dimensional encoding is used, only the firstletter is used.) For example, if Y.sub.0 through Y.sub.5 equal 000010, bits (I.sub.n+1 -I.sub.L).sub.k-1 will be encoded using constellation subset "a" and bits (I.sub.n+1 -I.sub.L).sub.k will be encoded using constellation subset "A". If a constellation with a 4-way partition is used, the 16-state machine of FIG. 6 is used to produce bits Y.sub.0 -Y.sub.3. (In this case, n=3), where bits Y.sub.1, Y.sub.2 and Y.sub.3 equal bits I.sub.1, J.sub.2 and J.sub.3, respectively. Table 1 is used with Y.sub.4 and Y.sub.5 set equal to 0, and with lower and upper case forms of the same letter belonging to the same constellation subset. 
It is also possible to practice the present invention without the use of the encoders of FIGS. 5 or 6. In this case, n=2 and bits I.sub.1 and I.sub.2 are fed to the differential encoder. the J.sub.2 and J.sub.3 bits from the differentialencoder are used as bits Y.sub.2 and Y.sub.3. In this embodiment, two-dimensional coding is used and the differential encoder produces a new output for each symbol period. Table 1 is used with Y.sub.0, Y.sub.1, Y.sub.4 and Y.sub.5 set equal to 0, andwith the second letter in each table entry ignored. Returning to the case of an 8-way partitioned constellation, mappers 44a through 44d, and 46a through 46d, identify a symbol in constellation subsets a, b, c, d and A, B, C, D, respectively, based on bits I.sub.n+1 -I.sub.L. The desired mapperoutput is selected using mux 48 which is controlled by trellis enhancer 52. Trellis enhancer 52 substitutes the constellation subset identified by Table 1 and bits Y.sub.0 through Y.sub.n (in this example n=5), based on the value of x-cnt and y-cnt from modulo device 54. Table 2 of FIG. 8 illustrates the subsetsubstitutions. Trellis enhancer 52 operates mux 48 in accordance with Table 2 so that the proper substitution occurs. The output of mux 48 is received by summer 58. Before trellis enhancer 52 substitutes a constellation subset for the one identified by bits Y.sub.0 -Y.sub.n, FIR filter 66 computes output p(k) based on its memory of past transmitted symbols (in the case of a 3-tap filter, the past threesymbols). FIR filter 66 is a 3-tap filter that is well known in the art. Coefficients for the filter are obtained during training in a manner well known in the art and specified by standards committees such as the ITU (International TelecommunicationUnion, formerly the CCITT) in ITU Recommendation V.32 bis. The output of the FIR filter is received by modulo device 54. 
Modulo device 54 performs a modulo operation on each of the orthogonal components of the symbol to produce a separate modulo count,x-cnt and y-cnt, for the X and Y orthogonal components of filter 68's output. If the output of the FIR filter is positive and greater than 2.sup.-m for a particular orthogonal component of p(k), then modulo value 2(2.sup.-m) is subtracted an integralnumber of times from that component of p(k) until the result is less than or equal to 2.sup.-m. The number of subtractions is counted by incrementing a respective x or y counter. If the output of the FIR filter is negative and less than or equal to-2.sup.-m for a particular orthogonal component of p(k), then modulo value 2(2.sup.-m) is added an integral number of times to that component of p(k) until the result is greater than or equal to -2.sup.-m. The number of additions is counted bydecrementing the respective x or y counter. The counters are arithmetic base 4; that is, decrementing two-bit value 00 by 1 produces two-bit value 11, and incrementing two-bit value 11 by 1 produces two-bit value 00. These counts are provided totrellis enhancer 52 via lines x-cnt and y-cnt. The portion of signal p (k) that remains after these subtractions/additions is provided to summer 58 as signal d(k). Signal d(k) is called the dither signal. After performing these calculations, trellisenhancer 52 uses x-cnt, y-cnt and bits Y.sub.0 through Y.sub.n to substitute constellation subsets in accordance with Table 2. (For 4-way partitioned constellations, upper and lower case versions of the same letter are considered identical and only thefirst four columns of Table 2 are necessary.) The resulting output from mux 48 is sent to summer 58 where value d(k) is subtracted to produce signal x(k). This signal is provided to modulator 60, filter 62 and hybrid 64 in a conventional manner. The count of additions or subtractions is computed independently for each orthogonal axis of the output from filter 66. 
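The per-axis fold-and-count operation performed by modulo device 54 is compact in code. The Python sketch below is illustrative only (the function names are my own, and the edge-case handling at exactly ±2^-m follows my reading of the text): each orthogonal component of p(k) is brought into the range [-2^-m, 2^-m] by repeatedly adding or subtracting the modulo value 2(2^-m), and the signed step count is kept modulo 4 as the patent's base-4 counters do.

```python
def modulo_axis(p, m=7, base=4):
    """Fold one orthogonal component of p(k) into [-2^-m, 2^-m].
    Returns the residue (one axis of the dither d(k)) and the
    base-`base` count of subtractions/additions (x-cnt or y-cnt)."""
    lim = 2.0 ** -m
    count = 0
    while p > lim:            # too positive: subtract 2(2^-m), increment the counter
        p -= 2 * lim
        count += 1
    while p <= -lim:          # too negative: add 2(2^-m), decrement the counter
        p += 2 * lim
        count -= 1
    return p, count % base    # base-4 wraparound counter

def dither(px, py, m=7):
    # Apply the fold independently on each axis, as modulo device 54 does.
    (dx, xcnt), (dy, ycnt) = modulo_axis(px, m), modulo_axis(py, m)
    return (dx, dy), (xcnt, ycnt)
```

The resulting x-cnt and y-cnt then index Table 2 to pick the substitute subset, and (dx, dy) is the dither d(k) subtracted in summer 58.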
The counts can be maintained using arithmetic base 4 for 8-way partition constellations and arithmetic base 2 for 4-waypartition constellations. These counts are used by the trellis enhancer 52 to perform the substitutions in accordance with Table 2. When using large symbol constellations, a larger dither signal is tolerable because the larger dither signal reduces error propagation in the receiver's reconstruction filter. In order to accommodate a variety of constellations it may bedesirable to use a variable modulo device. A variable modulo device performs similarly to modulo device 54 with the following differences. If the output of the FIR filter is positive and greater than K2.sup.-m for a particular orthogonal component ofp(k), then modulo value 2K(2.sup.-m) is subtracted an integral number of times from that component of p(k) until the result is less than or equal to K2.sup.-m. The number of subtractions is counted by incrementing a respective x or y counter K times thenumber of subtractions. If the output of the FIR filter is negative and less than or equal to -K2.sup.-m for a particular orthogonal component of p(k), then modulo value 2K(2.sup.-m) is added an integral number of times to that component of p(k) untilthe result is greater than or equal to -K2.sup.-m. The number of additions is counted by decrementing the respective x or y counter K times the number of additions. The variable K is an integer that is greater than 1 for large constellations and equalto 1 for small With regard to the value 2.sup.-m, and in reference to FIG. 4, the spacing between symbols is shown to be 2.times.2.sup.-m. The value 2.sup.-m is an arbitrary scaler where m is preferably an integer such as 7 or 8. FIG. 9 illustrates a receiver that is used with the present invention. A signal is received from local loop 24 through hybrid 64. The receive signal then passes through linear equalizer 100. Demodulator/linear Equalizer 100 is well known inthe industry. 
The signal then passes into noise whitening filter 102. Noise whitening filter 102 compensates for colored noise that is introduced by the communication channel. It is desirable to have white noise so that the trellis code can besuccessfully decoded. Noise whitening filter 102 comprises three-tap FIR filter 104 and summer 106. FIR filter 104 is well known in the industry and has the same tap values as FIR filter 66 in the remote transmitter of FIG. 3. The whitened signal r(k)is fed to trellis decoder 108. Trellis decoder 108 executes the well known Viterbi algorithm to recover the trellis code and bits I.sub.1 -I.sub.n. The recovered trellis code is used to identify the transmitted constellation subset. This informationis supplied to enhancement unit 110 of reconstruction filter 112. Trellis enhancement unit 110 also receives the x-cnt and y-cnt outputs of modulo device 114. The output of trellis decoder 108 is signal y'(k) and represents a signal having an expanded number of symbols or signal points that extend beyond the constellation of FIG. 4. Constellation expansion is a result of noise whitening filter 102 andits complementary filter and modulo device in the remote transmitter. To eliminate this expansion, FIR filter 116 and summer 118 operate to perform the inverse of noise whitening filter 102. The coefficients of 3 tap FIR filter 116 are the same as FIRfilters 104 and 66 in the remote transmitter. The output of FIR filter 116 is labeled p'(k) and is fed to modulo device 114. Modulo device 114 operates in the same manner as the remote modulo device 54. As was described with regard to modulo device54, modulo device 114 produces signals x-cnt and y-cnt. The output of modulo device 114 is signal d'(k) which is an estimate of signal d(k). Signal d'(k) is combined with signal x'(k) from summer 118 in summer 120. The output of summer 120 is signale'(k). The output of summer 120 is fed to slicers 122a, b, c and d, and slicers 124a, b, c and d. 
Slicers 122a, b, c and d and slicers 124a, b, c and d are used to determine which symbol of constellation subsets a, b, c, d, and A, B, C and D,respectively, are represented by signal e'(k). Mux 126 is used to select the output of one of the aforementioned slicers. Mux 126 is controlled using trellis enhancement unit 110. Trellis enhancement unit 110 uses the bits Y'.sub.0 -Y'.sub.n toidentify the transmitted constellation subset, and inputs x-cnt and y-cnt of modulo device 114 are used in accordance with Table 2 to identify the original constellation subset that was replaced with the transmitted subset. Once the original subset hasbeen identified, the slicer associated with that subset is selected using mux 126. The output of mux 126 is then fed to parallel-to-serial converter 128 to recover the originally provided data stream. FIG. 10 illustrates an alternative embodiment for selecting substitute constellation subsets in the transmitter. In this embodiment mappers 44a, b, c, d and 46a, b, c, d are replaced with mappers 140 and 142. Each mapper maps signal containingbits I.sub.n+1 to I.sub.1 into a constellation subset. In this embodiment, there are eight constellation subsets that are grouped into two groups of four. In each group of four, the constellation subsets are rotationally related to each other by 90degree phase shifts. As a result, by selecting the output of mapper 140 or 142, mux 144 selects one of the two groups of four subsets. A particular subset within a group of four is selected through the use of multiplier 146. The subset from mux 144can be rotated by 0, 90, 180 or 270 degrees to produce any one of the four subsets associated with each mapper. As a result, trellis enhancement device 52 has two outputs, one output selects between mapper 140 and 142 using mux 144, and the other outputindicates to multiplier 146 that a 0, 90, 180 or 270 degree phase shift should be initiated. 
This operation provides the advantage of using a smaller number of mappers as compared to the embodiment of FIG. 3. In a similar manner, FIG. 11 illustrates an alternative embodiment of the receiver shown in FIG. 9. Signal e'(k) is received by multiplier 150, and the output of multiplier 150 is fed to slicers 152 and 154. The outputs of slicers 152 and 154 are selected using mux 156. Trellis enhancement unit 110 provides inputs to multiplier 150 and mux 156. As discussed with regard to FIG. 9, trellis enhancement unit 110 uses the received subset identity from trellis decoder 108, and the x-cnt and y-cnt inputs from modulo device 114, to identify the original constellation subset. As discussed with regard to FIG. 10, multiplier 150 is used to rotate the received symbol by 0, 90, 180 or 270 degrees to reverse the effect of multiplier 146. Mux 156 is used to pick the appropriate slicer output to recover the original data. FIGS. 12 and 13 illustrate another embodiment of the present invention. With regard to FIG. 12, the transmitter is modified by placing preprocessing unit 200 between serial-to-parallel converter 40, and mappers 140 and 142. The processor can be used to perform functions such as fractional rate encoding, modulus conversion, shaping by rings, and constellation switching. Additionally, the output of summer 58 is fed to non-linear encoder 300 before being passed to modulator 60. With regard to FIG. 13, the receiver has been modified to include non-linear decoder 400 between demodulator/linear equalizer 100 and noise whitening filter 102. Non-linear decoder 400 compensates for the action of non-linear encoder 300. In addition, post-processing unit 202 is placed between mux 156 and parallel-to-serial converter 128. Post-processing unit 202 forms the inverse of preprocessing unit 200. The non-linear encoder compensates for non-linear characteristics of the transmission channel. 
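The rotate-by-multiples-of-90-degrees operation performed by multiplier 146 (and reversed by multiplier 150 in the receiver) is easy to sketch: treating each two-dimensional signal point as a complex number turns the rotation into a single multiply by a power of 1j. The sample subset below is made up for illustration, not one of the patent's actual constellation subsets:

```python
# Rotating a constellation subset by k quarter-turns via complex multiplication.

def rotate(points, quarter_turns):
    """Rotate every point by quarter_turns * 90 degrees about the origin."""
    factor = 1j ** (quarter_turns % 4)
    return [p * factor for p in points]

subset = [1 + 1j, 3 + 1j, 1 + 3j]      # hypothetical subset
transmitted = rotate(subset, 1)        # transmitter applies +90 degrees
recovered = rotate(transmitted, 3)     # receiver applies +270 (i.e. -90) degrees
```

Since the four rotations form a group, the receiver undoes a rotation by k quarter-turns by applying 4 - k quarter-turns, which is exactly the reversal attributed to multiplier 150.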
The non-linear encoder warps the constellation by adjusting the positions of its signal points in accordance with a warp function which models the inverse of that component of the non-linear characteristic of the transmission channel which is known a priori. In the case of a PCM system, for example, that component is typically a logarithmic function of the magnitude of the signal being transmitted--the so-called .mu.-law characteristic. Thus, an inverse logarithmic function, i.e., an exponential function, of the magnitude of the transmitted signal is used to warp the constellation. Because the constellation warping is deterministic, it is possible for the receiver to "unwarp" the received signal points prior to applying them to the Viterbi decoder using the inverse of the warp function and thereby modeling the known non-linear component of the channel characteristic. (In the case of a PCM system, the inverse function is the inverse of the .mu.-law characteristic and is, more particularly, a logarithmic function.) As a result, the Viterbi decoder can use the standard, unmodified Viterbi decoding algorithm. In reference to FIG. 14, the X and Y orthogonal values in signal x(k) are warped by being multiplied by a warp multiplier w generated in accordance with a selected warp function. Specifically, the warp multiplier is generated by encoder 302, which provides it on lead 304 to multipliers 306 and 308. The latter carry out the aforementioned multiplication and the resulting warped values are applied to modulator 60 which, in standard fashion, generates a modulated line signal representing the stream of warped signal points. It is presumed that the communication channel includes a PCM system so that the overall channel characteristic has a known non-linear component which is a function of instantaneous signal magnitude, that function being the .mu.-law characteristic. 
Accordingly, the warp function used by encoder 302 to generate warp multiplier w is a function of the signal magnitude of the transmitted signal points. That is, the magnitude is an independent variable in the warp function. To this end, encoder 302 includes magnitude computer 310, which receives the X and Y values from leads 312 and 314 and determines the magnitude p.sub.t of each signal point by computing the value p.sub.t =.sqroot.X.sup.2 +Y.sup.2. That value of p.sub.t is then applied to warp generator 316, which receives a warp factor g on lead 318 from within the modem or communication device. This factor--which is another independent variable in the warp function--is selected as a function of the degree to which it is desired to warp the overall signal constellation which, in turn, is a function of the known component of the non-linear characteristic of the channel--in this case, the .mu.-law characteristic. In the present illustrative embodiment, warp generator 316 generates a preliminary warp multiplier w' in accordance with the warp function where P.sub.t =p.sub.t /g. This relation is a series approximation to the (exponential) inverse of the .mu.-law characteristic ##EQU1## Moreover, where a different non-linear relationship obtains in the channel, a different inverse of that function would be used by warp generator 316. For example, if the channel includes an ADPCM system, where the signal processing algorithm changes over time, as a function of signal magnitude, then the value of g used by warp generator 316 would be adapted in such a way as to model the inverse of that algorithm. The function used by the warp generator could also take into account how one expects noise in the channel to differently affect low- and high-magnitude signal points of the constellation. 
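The transmitter-side pipeline just described (magnitude computer 310 feeding warp generator 316, then multiplication of X and Y) can be sketched as follows. The extracted text does not reproduce the patent's series formula (the ##EQU1## placeholder), so a plain exponential stand-in is used for the inverse-mu-law warp; the warp factor g and the sample point are illustrative values only:

```python
import math

# Transmitter-side warping sketch with a stand-in exponential warp function.
# Not the patent's actual series approximation.

def warp_multiplier(x, y, g):
    p_t = math.hypot(x, y)     # magnitude computer 310: sqrt(X^2 + Y^2)
    P_t = p_t / g              # normalized magnitude, as in P_t = p_t / g
    # Stand-in warp: increases with magnitude and tends to 1 as P_t -> 0,
    # so small-magnitude points are nearly unchanged.
    return (math.exp(P_t) - 1.0) / P_t if P_t > 0 else 1.0

x, y = 3.0, 4.0                # one signal point (magnitude 5)
g = 10.0                       # hypothetical warp factor
w = warp_multiplier(x, y, g)
warped = (w * x, w * y)        # multipliers 306 and 308
```

Multiplying both coordinates by the same w preserves the point's direction and stretches only its magnitude, which is what lets a magnitude-only inverse undo the warp in the receiver.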
Depending on the value of warp factor g and the range of values for p.sub.t, it may be the case that multiplying preliminary warp multiplier w' by X and Y would result in warped signal points that cause the peak and/or average power limits of the channel to be exceeded. Accordingly, preliminary warp multiplier w' is processed within encoder 302 by automatic gain control (AGC) 320 to generate the aforementioned warp multiplier w on lead 304. The AGC has a very long time constant, thereby providing a scaling function which, after an adaptation period, will be essentially constant for any given constellation and warp factor g. This serves to impose an upper limit on the value of warp multiplier w which avoids any exceeding of the channel power limits. FIGS. 15-17 show the warped versions of the constellation of FIG. 18 that result from the warping just described using different values of warp factor g. The particular value of warp factor g that is used will depend on the application and may be determined empirically. In any case, it will be appreciated that each of the warped signal points of the constellation of FIGS. 15-17 is related to a respective signal point of the base constellation of FIG. 18 in accordance with a predetermined warp function. Turning, now, to the receiver of FIG. 13, and in reference to FIG. 19, the signal from demodulator/linear equalizer 100 represents the demodulator/equalizer's best estimate of the in-phase and quadrature-phase components of the transmitted signal points, designated X.sub.r and Y.sub.r, the subscript "r" denoting "receiver." These components are "unwarped" by non-linear decoder 400 by multiplying them by an unwarping multiplier W. Specifically, that multiplier is generated by decoder 402, which provides multiplier W on lead 404 to multipliers 406 and 408 in a manner described below. 
Multipliers 406 and 408 carry out the aforementioned multiplication, and the resulting unwarped in-phase and quadrature-phase values on leads 410 and 412 are applied to noise whitening filter 102. Referring to decoder 402, its job is to determine the value of p.sub.r of the received signal points and, armed with a knowledge of the value of warp factor g, to perform the inverse of the warping that was undertaken in the transmitter. Thus, decoder 402 includes magnitude computer 414, which computes the value of p.sub.r from the received X.sub.r and Y.sub.r values on leads 416 and 418, and unwarp generator 420 which, responsive to the value of warp factor g on lead 422, generates unwarp multiplier W in accordance with the relation where P.sub.r =p.sub.r /g. This is the inverse of the relation by which preliminary warp multiplier w' was generated and is a series approximation--usable for P.sub.r <1--to the (logarithmic) .mu.-law characteristic ##EQU2## For P.sub.r .gtoreq.1, a different approximation would be used. Note that the value of the magnitude p.sub.r that is used in the expression for unwarp multiplier W is the value computed from the received signal points. This value of p.sub.r will typically be at least a little different from the value used to generate warp multiplier w in the transmitter owing to the noise component superimposed on the received signal points. This means that the amount by which a point is unwarped will be slightly different than the amount by which it was warped. Advantageously, however, this difference will tend to bring the signal points, upon being unwarped, into tighter loci about their corresponding positions in the base constellation than if, for example, the unwarping were to be carried out employing the value of p.sub.t used in the transmitter (assuming that value could, in fact, be made known to, or could be computed in, the receiver). 
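The inverse relationship between the encoder's warp and the decoder's unwarp can be illustrated on magnitudes alone. Since the series approximations (##EQU1##, ##EQU2##) are not reproduced in this text, the sketch below pairs a closed-form exponential stand-in warp with its exact logarithmic (mu-law-like) inverse; g and the sample magnitude are made-up values:

```python
import math

def warp_mag(p, g):
    # Stand-in exponential warp of a point's magnitude (not the patent's series).
    return g * (math.exp(p / g) - 1.0)

def unwarp_mag(q, g):
    # Exact inverse of the stand-in warp: a logarithmic, mu-law-like map.
    return g * math.log(1.0 + q / g)

g = 10.0          # hypothetical warp factor, shared by both ends
p_t = 5.0         # transmitted magnitude
q = warp_mag(p_t, g)        # magnitude after warping
p_back = unwarp_mag(q, g)   # receiver recovers the original magnitude
```

In the noiseless case the round trip is exact; with noise, the receiver unwarps using the received magnitude p.sub.r, which, as noted above, tends to pull points into tighter loci around their base-constellation positions.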
The foregoing relates to noise that was superimposed on the transmitted signal points after the .mu.-law encoding in the channel has been carried out. However, at the point in time that they are subjected to the .mu.-law encoding in the channel, the transmitted signal points have already been somewhat perturbed due to noise and other channel effects occurring between the transmitter and the codec within the channel in which the .mu.-law encoding is actually carried out. Thus the warped signal points are not warped from the ideal signal point positions of FIG. 18, but rather from positions that are just a little bit displaced therefrom. Using the inverse of the .mu.-law characteristic in the receiver does not take account of this. The effect is very minor, so that the approach described hereinabove does work quite well. It is, however, possible to take account of that effect, thereby providing results that are even better. In particular, it is known that, in the absence of warping, the noise associated with each received signal point--due to the non-linear A/D converter in a PCM system--may be closely represented by an equation of the form ##EQU3## where n is the root-mean-square (r.m.s.) value of the noise associated with a signal point of magnitude p. The constants a and b depend upon the properties of the communication channel and the transmit and receive filters. In situations, such as that postulated here, in which the transmission channel superimposes multiplicative noise onto the received signal points, it is advantageous for the warp function and its inverse to be such that, upon warping, the distance between adjacent signal points is proportional to the r.m.s. noise associated with those points. As a result, the noise superimposed on each received signal point is independent of its position in the constellation and the difference of error probabilities associated with different signal points is minimized. 
If the constellation contains a large number of signal points, this property is achieved by a warp function where P.sub.t =p.sub.t /g and g=b/a. This relation is a series approximation to a hyperbolic sine function ##EQU4## Since the values of a and b are dependent on the communication channel and are generally not known a priori, g may be adapted as before, or may be calculated from measurement of the received noise so as to determine the ratio b/a. The corresponding receiver unwarp multiplier is generated according to the relation which is a series approximation to the inverse hyperbolic sine function ##EQU5## valid for P.sub.r <1. After the unwarping operation is carried out, the original constellation with equal spacing of signal points is approximately restored, with approximately equal noise power associated with each signal point. The foregoing merely illustrates the principles of non-linear encoding/decoding. Thus, although logarithmic and sinh functions are discussed herein, other functions may be advantageous in particular circumstances. In a simple implementation, warp factor g can be pre-set in the transmitter and receiver based on the expected characteristics of the channel. In a more sophisticated application, one might adaptively determine g by having the receiver examine the dispersion of the received signal points about the expected signal points and then use that measurement to adapt the value of g in the receiver while making that value known to the transmitter via, for example, conventional diagnostic channel communications between the two modems or communication devices. Although the various functional blocks of the transmitter and receiver are shown for pedagogic clarity as individual discrete elements, the functions of those blocks could and, with present technology, typically would be carried out by one or more programmed processors, digital signal processing (DSP) chips, etc., as is well known to those skilled in the art. 
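The sinh-based warp and its inverse-hyperbolic-sine unwarp described above can be sketched directly from the closed forms rather than the patent's series approximations (the ##EQU4## and ##EQU5## placeholders). The value of g = b/a below is illustrative; in practice a and b would be estimated from the channel's measured noise:

```python
import math

def warp(p, g):
    # g*sinh(p/g) = p + p^3/(6 g^2) + ..., i.e. approximately the identity
    # for small magnitudes and increasingly expansive for large ones.
    return g * math.sinh(p / g)

def unwarp(q, g):
    # Exact inverse: the inverse hyperbolic sine, scaled the same way.
    return g * math.asinh(q / g)

g = 2.0                               # hypothetical b/a ratio
mags = [0.5, 1.0, 2.0, 4.0]           # sample point magnitudes
warped = [warp(p, g) for p in mags]
restored = [unwarp(q, g) for q in warped]
```

Because the warp stretches large magnitudes more than small ones, spacing between warped points grows with magnitude, matching the goal of making inter-point distance track the r.m.s. noise; the asinh step then approximately restores the original equal spacing.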
The invention is disclosed in the context of a system using two-dimensional constellations. However, it is equally applicable to systems using constellations of any dimensionality, as will be well appreciated by those skilled in the art. It is also important to note that the invention is not limited to modem technology but rather applies to any type of signal transmission system and/or environment in which inter-symbol interference and/or deterministic, non-linear effects are present. Thus it will be appreciated that many and varied arrangements may be devised by those skilled in the art which, although not explicitly shown or described herein, embody the principles of the invention and are thus within its spirit and scope. * * * * * 
Here's the question you clicked on: solve 3x^2 - 5x - 11 = 0; find the x-intercepts of f(x) = 3x^2 - 5x - 11.
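For what it's worth, the x-intercepts come straight from the quadratic formula:

```python
import math

# x-intercepts of f(x) = 3x^2 - 5x - 11, i.e. solutions of 3x^2 - 5x - 11 = 0.
a, b, c = 3, -5, -11
disc = b * b - 4 * a * c            # discriminant: 25 + 132 = 157
x1 = (-b + math.sqrt(disc)) / (2 * a)   # (5 + sqrt(157)) / 6
x2 = (-b - math.sqrt(disc)) / (2 * a)   # (5 - sqrt(157)) / 6
```

Since the discriminant (157) is positive but not a perfect square, the parabola has two irrational x-intercepts.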
Technical Notes 032 Mopeds and Motorcycles

Mopeds and motorbikes are two-wheeled motorised vehicles. Motorbikes are more powerful than mopeds, and therefore can reach higher speeds. There were 205 million mopeds and motorcycles in the world by 2002, 33 for every 1000 people.

Data sources

The World Bank World Development Indicators 2005 time series: Two-wheelers (in theory per 1,000 people, IS.VEH.2CYL.P3) was used as the source of these estimates. Two-wheelers refers to mopeds and motorcycles but not bicycles. The original source is cited as the International Road Federation, World Road Statistics and data files. To make these estimates, many of the original data values were assumed to be awry by several orders of magnitude; more care than usual is therefore required in interpreting these figures. The number of these vehicles in some countries may be overestimated; all assumptions are documented in the spreadsheet. Assumptions were directed by the previous numbers recorded for motorbikes and mopeds in that territory, and by ensuring that a plausible ratio of cars to two-wheelers existed.

The quote used on the poster comes from George Davey Smith. It was first published in the British Medical Journal in 1991. However it was sourced from page 524 of a later publication, which is referenced below. George Davey Smith (2003) Afterword: Still wanting to be James Dean. p.523-525. in George Davey Smith (editor) (2003) Health inequalities: Lifecourse approaches. The Policy Press, Bristol.

Excel sheets

Below is an explanation of each of the columns in the excel file: Column A = Unique numerical territory (see 001). Column B = Region and territory names (see 001). Column C = Region code (see 001). Column D = The ISO 3 code, or ISO ALPHA-3 (see 001). Column E = Estimated mopeds and motorcycles, in millions. 
This is calculated by multiplying the mopeds and motorcycles per thousand people (F) by the population of a territory in millions (G), then dividing this by 1000 (E = F * G / 1000). Column F = Mopeds and motorcycles per thousand people. This is calculated by dividing the total number of mopeds and motorcycles in millions (H) by population in millions (G), and then multiplying this by 1000 (F = H / G * 1000). If no data was available in column H, then the regional average is used. Column G = Population in 2002, in millions. See the technical notes for 'Total Population' for the sources of this data (002). Column H = Mopeds and motorcycles, for the most recent date, in millions. The number from column Y is taken, which gives the most recent number of mopeds and motorcycles, per thousand people, provided for the period 1997-2002. If this is not available the maximum number recorded for the period 1990-1996 is used. This number is then multiplied by the population in millions (G), and divided by 1000. If no data is available over the period of 1990-2002, #N/A is shown. The source data is on a separate sheet.
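The column arithmetic described above (F = H / G * 1000, then E = F * G / 1000) is easy to check with made-up figures for a single territory; these are not actual Worldmapper data values:

```python
# Spreadsheet-style computation for one hypothetical territory.

population_m = 10.0    # column G: population in 2002, millions
total_m = 0.33         # column H: mopeds and motorcycles, millions

per_thousand = total_m / population_m * 1000         # column F
estimated_m = per_thousand * population_m / 1000     # column E
```

Note that when column H is present, E simply recovers H; the per-thousand intermediate matters for territories where H is missing and a regional average per-thousand rate is substituted instead.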
Electron. J. Diff. Eqns., Vol. 1998(1998), No. 30, pp. 1-38.

Exponentially slow traveling waves on a finite interval for Burgers' type equation

P. P. N. de Groen & G. E. Karadzhov

Abstract: In this paper we study, for small positive $\varepsilon$, a Burgers'-type equation on the bounded spatial domain $[-1,1]$.

Submitted March 10, 1998. Published November 20, 1998. Math Subject Classification: 35B25, 35K60. Key Words: Slow motion, singular perturbations, exponential precision, Burgers' equation.

Show me the PDF file (326K), TEX file, and other files for this article.

P. P. N. de Groen, Vrije Universiteit Brussel, Department of Mathematics, Pleinlaan 2, B-1050, Brussels, Belgium. E-mail: pdegroen@vub.ac.be
G. E. Karadzhov, Bulgarian Academy of Sciences, Institute of Mathematics and Informatics, Sofia, Bulgaria. E-mail: geremika@math.bas.bg

Return to the EJDE web page
Post #1430598 12-08-2012 10:03 AM #0 Join Date Feb 2009 Large Format When we use a real-world lens: multiple thick elements and cemented compound elements having different indexes of refraction, the lens simply won’t conform exactly to the thin lens equation. This is annoying insofar as the calculated values differ from the actual values produced by the lens. If our needs aren’t fussy, the approximations the TLE provides are useful and reasonably accurate. The calculated values for typical enlarger lenses are somewhat less than the actual measured values. The enlarger lens behaves as though it had a somewhat longer focal length than specified by the maker. By measuring the negative-to-print distance and magnification of the projection, we can calculate the focal length of the imaginary equivalent thin lens that corresponds to these measured values. By substituting the equivalent focal length into subsequent equations we can get better approximations to the actual values of our system. I’ve found that for the 6-element enlarging lenses I’ve tested and for which I have the maker’s specified focal length, the ratio (equivalent thin lens focal length)/(actual focal length) is about 1.026, i.e. the equivalent focal length exceeds the actual focal length by about 2.6%.
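The measurement-based calculation described in the post follows from the thin lens conjugate relation: at magnification m the total negative-to-print distance is D = f(m + 1)^2 / m, so the equivalent thin lens focal length is f = D m / (m + 1)^2. A small sketch with made-up measurements (the formula ignores the principal-plane separation of a real multi-element lens, which is precisely why the equivalent value differs from the maker's specification):

```python
# Equivalent thin lens focal length from a measured negative-to-print
# distance D and magnification m. D and m below are hypothetical.

def equivalent_focal_length(D_mm, m):
    """Thin lens value: D = f (m + 1)^2 / m  =>  f = D m / (m + 1)^2."""
    return D_mm * m / (m + 1) ** 2

D = 500.0     # hypothetical negative-to-print distance, mm
m = 8.0       # hypothetical magnification
f_eq = equivalent_focal_length(D, m)   # about 49.4 mm for these numbers
```

Comparing f_eq against the maker's nominal focal length gives the ratio the post reports (about 1.026 for the 6-element enlarging lenses tested).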
Moraga Trigonometry Tutor Find a Moraga Trigonometry Tutor ...I spent the next two years tutoring intro linear algebra on a daily basis at the DVC math lab. I then took an advanced linear algebra class at UC Santa Cruz and received a C (horribly difficult class for math/computer science majors about to finish their bachelor's degrees). During my time at UC... 15 Subjects: including trigonometry, reading, calculus, writing ...I enjoy one on one tutoring with students who are temporarily discouraged but are willing to put in the time to improve their grasp of mathematics. The students find that their arithmetic prowess gives them the confidence to master subjects such as algebra, geometry and trigonometry. I teach Stat... 10 Subjects: including trigonometry, calculus, statistics, geometry ...I am the oldest of five children and have years of experience with kids and tutoring in all subjects from elementary school to middle school. I also tutor Math and Biology for high-school and college students. I am a very patient, kind, understanding person and will go out of my way to help kids do their very best in school. 8 Subjects: including trigonometry, calculus, algebra 1, precalculus ...At Cal Berkeley, I specialized in artificial intelligence theory, taking graduate courses in analysis, probability and manifolds, as well as CS courses in data structures, functional programming, and artificial intelligence. My fascination with the subject arose from my interest in the brain and ... 28 Subjects: including trigonometry, English, reading, calculus ...In addition, I also can help the students to understand the basic concepts of Physics like motions, pressures, force, wave, energy and light. I helped one of my friends improve her grade in Introduction to Physics class from D to B. I have a brother who is in grade 6, and I always help him to do Math and check his work. Besides, I'm a tutor of two girls who are in grades 5 and 6. 
18 Subjects: including trigonometry, calculus, precalculus, statistics
Optimization Problem
April 9th 2010, 08:18 AM #1
Apr 2010

I'm really confused on this problem.

A company wishes to run a utility cable from point A on the shore to an installation at point B on the island. The island is 6 miles from the shore (point O). It costs $3000 per mile to run the cable on land and $5000 per mile underwater. Assume that the cable starts at A and runs along the shoreline, then angles and runs underwater to the island. Let x represent the distance from O at which the underwater portion of the cable begins, and let the distance between A and O be 9 miles.
1. Write the total cost C as a function of x.
2. Find the point at which the line should begin to angle in order to minimize the total cost.
3. What is the minimum cost?

April 9th 2010, 09:33 AM #2
Senior Member Nov 2009

You want to find where to put point P so that $3000 * (length of AP) + $5000 * (length of PB) is a minimum.
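Following the hint in the reply, one way to write down the cost function and check the calculus (setting C'(x) = 0 gives x / sqrt(x^2 + 36) = 3/5):

```python
import math

# Cost of running (9 - x) miles on land at $3000/mile, then the straight
# underwater run of sqrt(x^2 + 6^2) miles at $5000/mile.
def cost(x):
    return 3000 * (9 - x) + 5000 * math.sqrt(x * x + 36)

# C'(x) = -3000 + 5000 x / sqrt(x^2 + 36) = 0
#   =>  25 x^2 = 9 (x^2 + 36)  =>  16 x^2 = 324  =>  x = 4.5 miles from O.
x_star = 4.5
min_cost = cost(x_star)     # 3000 * 4.5 + 5000 * 7.5 = $51,000
```

So the cable should angle toward the island 4.5 miles from O (that is, 4.5 miles before reaching O when walking from A), for a minimum total cost of $51,000.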
ordered fields with the bounded value property

Say that an ordered field $F$ satisfies the bounded value property if, for all $a < b$ in $F$ and for every continuous function $f$ from $[a,b]_F := \{x \in F : a \leq x \leq b\}$ to $F$, there exists $B$ in $F$ such that $-B < f(x) < B$ for all $x$ in $[a,b]_F$. (Here we say $f$ is continuous if it satisfies the usual $\epsilon$, $\delta$ definition, where all quantification is over $F$.) Does there exist a non-Archimedean ordered field with the bounded value property? I show in http://jamespropp.org/reverse.pdf (see the second paragraph on page 9) that every Archimedean ordered field satisfying the bounded value property is isomorphic to the reals, but my proof that the bounded value property implies the Archimedean property (see the first paragraph on page 9) is incorrect (thanks to Ricky Demer for pointing out my mistake). In attempting to fix my proof, I am starting to wonder if in fact the implication fails. For instance, does the surreal number system have the bounded value property? I don't see how to prove that it does. All I can show is that if $F$ satisfies the bounded value property and contains a cofinal set $S$ whose cardinality is less than or equal to that of the continuum, then $F$ is Archimedean. (Proof: Let $g:[0,1] \rightarrow F$ be a function that takes on all values in $S$, and for all $x$ in $[a,b]_R$ with standard part $\overline{x}$ let $f(x) = g(\overline{x})$. If $F$ is non-Archimedean, $f$ is continuous on $[a,b]_F$ and unbounded.) But, even leaving aside constructivist qualms about how one constructs $g$ from $S$, clearly this approach won't work for the surreal numbers or for sufficiently large fields within the Field of surreal numbers. real-analysis lo.logic

related to your paper but not this question: For the cut property, wouldn't it be nicer to drop the condition that $A\cup B = R$ and use the second conclusion?
– Ricky Demer Jul 29 '11 at 2:15
@Ricky: That was Tarski's feeling too. His version of the cut property is more flexible but I don't think students find it as intuitive, since the range of possibilities for the sets $A$ and $B$ becomes vastly greater. I personally find the narrower statement easier to visualize and hence more compelling, but that's purely an esthetic preference. If one were to develop real analysis from my version of the cut principle, your version should probably be Theorem 1! – James Propp Jul 29 '11 at 15:41
@James, this is a reply to your latest comment on my answer, but the comments there were piling up. I cannot see how to eliminate AC from the proof given in my answer, if one uses the Schmerl method; and I suspect the same holds for Sikorski's construction. I am also a bit puzzled about requiring no use of $AC$ in the counterexample since (1) that was not specified anywhere in your question, and (2) there is very little to say about uncountable model theory and algebra in the absence of choice. – Ali Enayat Jul 29 '11 at 21:44
@James: despite the pessimism expressed in my previous comment about eliminating the axiom of choice $AC$, a "miracle has happened", and I can now see how to take advantage of a key feature of Schmerl's construction to eliminate $AC$; by tomorrow I will put a PS on my answer in which I will outline how this is done. – Ali Enayat Jul 31 '11 at 1:57
I don't see why that cardinal must be regular. For that matter, I don't see why it can't have countable cofinality. – Ricky Demer Aug 1 '11 at 7:42

1 Answer

EDIT NOTE: A postscript has been added to indicate why the answer does not change if one is forced to work in $ZF+AC_\omega$ (prompted by a query of James Propp). Thanks to James Propp, Ricky Demer, and Emil Jeřábek for catching infelicities of the past versions.

There are nonarchimedean fields with the bounded value property. 
Let's begin with a key definition: an ordered field $F$ satisfies the $\kappa$-Bolzano-Weierstrass property, abbreviated $BW(\kappa)$, if every bounded sequence $x_\alpha$ of length $\kappa$ in $F$ has a convergent subsequence of length $\kappa$. So the Bolzano-Weierstrass theorem says that $\Bbb{R}$ satisfies $BW(\aleph_{0})$.

Sikorski (1948) proved that for every uncountable regular cardinal $\kappa$ there is an ordered field of cardinality and cofinality $\kappa$ that satisfies $BW(\kappa)$. Since every archimedean ordered field has countable cofinality, the following Lemma, when coupled with Sikorski's theorem above (with $\kappa$ chosen as $\aleph_1$), shows that nonarchimedean fields with the bounded value property exist. Note that the proof of the Lemma is an adaptation of the usual real-analysis proof of the boundedness of continuous functions on closed bounded intervals, using $BW(\aleph_{0})$.

Lemma. Let $\kappa$ be a regular cardinal. If $F$ is an ordered field of cofinality $\kappa$ such that $F$ satisfies $BW(\kappa)$, then $F$ has the bounded value property.

Proof: Choose an increasing unbounded sequence $x_\alpha$ of elements of $F$, where $\alpha \in \kappa$. If $f[a,b]$ has no upper bound for a continuous function $f$, then for each $\alpha < \kappa$ there is some $t_{\alpha} \in [a,b]$ with $f(t_{\alpha}) > x_{\alpha}$. By $BW(\kappa)$ there is some unbounded subset $U$ of $\kappa$ such that the subsequence $S := \{t_{\alpha} : \alpha \in U\}$ converges to some $c \in [a,b]$. Therefore by continuity of $f$, the sequence $f(S)$ converges to $f(c)$. But a convergent sequence of length $\kappa$ must be bounded (the regularity of $\kappa$, and the assumption that $F$ has cofinality $\kappa$, comes to the rescue here), and yet $f(S)$ is clearly unbounded by construction. 
This contradiction shows that $f[a,b]$ is bounded above; similar reasoning shows that $f[a,b]$ is bounded below (or just replace $f$ by its absolute value). QED Some references: Sikorski's Theorem appears in: Roman Sikorski, On an ordered algebraic field. Soc. Sci. Lett. Varsovie. C. R. Cl. III. Sci. Math. Phys. 41 (1948), 69–96 (1950). A proof of Sikorski's theorem can also be found in the following paper (Cor. 2.7), as a corollary of a vast generalization of Sikorski's theorem; the paper is an impressive showcase for the interaction between deep methods of models of arithmetic and higher set theory with field theory. James Schmerl, Models of Peano arithmetic and a question of Sikorski on ordered fields. Israel J. Math. 50 (1985), no. 1-2, 145–159. PS. One can show, using some machinery from the model theory of arithmetic, that working only in $ZF+AC_\omega$, Schmerl's proof can produce a well-orderable field $F$ of cardinality and cofinality $\aleph_1$ that satisfies $BW(\aleph_1)$. This allows one to obtain a non-archimedean field with the bounded value property, entirely within $ZF+AC_\omega$. Thanks for your reply, Ali. The first place where I don't follow you is in your restatement of the Bolzano-Weierstrass Theorem. Did you really mean to say that the ordered field has cardinality $\kappa$? Note that the Bolzano-Weierstrass Theorem concerns sequences of length $\aleph_0$ chosen from a set of cardinality $\aleph_1$. So something seems amiss here. – James Propp Jul 27 '11 at 22:47 @James, you are right, I should have just said: "the usual Bolzano-Weierstrass Theorem", I will fix that. – Ali Enayat Jul 27 '11 at 23:13 @James: I ended up removing any cardinality restrictions from $BW(\kappa)$, which does not affect my answer; but in case you look at Schmerl's paper note that he stipulates the cardinality of $F$ being greater than $\kappa$ as part of the definition of $BW(\kappa)$. – Ali Enayat Jul 27 '11 at 23:28 How do you get the third sentence of your proof?
– Ricky Demer Jul 28 '11 at 1:25 I also appear to be missing something: doesn't Cantor's original (en.wikipedia.org/wiki/…) proof show that any countable ordered field does not satisfy $BW(\aleph_0)$ ? – Ricky Demer Jul 28 '11 at 1:52
Basics: Significant Figures After my post the other day about rounding errors, I got a ton of requests to explain the idea of significant figures. That’s actually a very interesting topic. The idea of significant figures is that when you’re doing experimental work, you’re taking measurements – and measurements always have a limited precision. The fact that your measurements – the inputs to any calculation or analysis that you do – have limited precision, means that the results of your calculations likewise have limited precision. Significant figures (or significant digits, or just “sigfigs” for short) are a method of tracking measurement precision, in a way that allows you to propagate your precision limits throughout your calculation. Before getting to the rules for sigfigs, it’s helpful to show why they matter. Suppose that you’re measuring the radius of a circle, in order to compute its area. You take a ruler, and eyeball it, and end up with the circle’s radius as about 6.2 centimeters. Now you go to compute the area: π=3.141592653589793… So what’s the area of the circle? If you do it the straightforward way, you’ll end up with a result of 120.76282160399165 cm^2. The problem is, your original measurement of the radius was far too crude to produce a result of that precision. The real area of the circle could easily be as high as 128, or as low as 113, assuming typical measurement errors. So claiming that your measurements produced an area calculated to 17 digits of precision is just ridiculous. As I said, sigfigs are a way of describing the precision of a measurement. In that example, the measurement of the radius as 6.2 centimeters has two digits of precision – two significant digits. So nothing computed using that measurement can meaningfully have more than two significant digits – anything beyond that is in the range of roundoff errors – further digits are artifacts of the calculation, which shouldn’t be treated as meaningful. 
The rules for significant figures are pretty straightforward: 1. Leading zeros are never significant digits. So in "0.0000024", only the "2" and the "4" could be significant; the leading zeros aren't. 2. Trailing zeros are only significant if they're measured. So, for example, if we used the radius measurement above, but expressed it in micrometers, it would be 62,000 micrometers. I couldn't claim that as 5 significant figures, because I really only measured two. On the other hand, if I actually measured it as 6.20 centimeters, then I could claim three significant digits. 3. Digits other than zero in a measurement are always significant. 4. In multiplication and division, the number of significant figures in the result is the smallest of the number of significant figures in the inputs. So, for example, if you multiply 5 by 3.14, the result will have one significant digit; if you multiply 1.41421 by 1.732, the result will have four significant digits. 5. In addition and subtraction, you keep the number of significant digits in the input with the smallest number of decimal places. That last rule is tricky. The basic idea is, write the numbers with the decimal point lined up. The point where the last significant digit occurs first is the last digit that can be significant in the result. For example, let's look at 31.4159 plus 0.000254. There are 6 significant digits in 31.4159; and there are 3 significant digits in 0.000254. Let's line them up to add: 31.4159 + 0.000254. The "9" in 31.4159 is the significant digit occurring in the earliest decimal place – so it's the cutoff line. Nothing smaller than 0.0001 can be significant. So we round off 0.000254 to 0.0003; the result still has 5 significant digits. Significant figures are a rather crude way of tracking precision. They're largely ad-hoc. There is mathematical reasoning behind these rules – so they do work pretty well most of the time.
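The rules above are easy to see in code. Here's a minimal Python sketch (my own, not from the post) that trims a computed result back to the number of significant figures justified by the measurement, using the circle-area example from earlier:

```python
import math

def round_sigfigs(x: float, n: int) -> float:
    """Round x to n significant figures; Python's 'g' format does the work."""
    return float(f"{x:.{n}g}")

radius = 6.2                   # measured: two significant figures
area = math.pi * radius ** 2   # raw result: 120.76282160399165...

# Rule 4: a product keeps the smaller input's significant-figure count,
# so only two digits of the computed area are meaningful.
print(round_sigfigs(area, 2))  # -> 120.0, i.e. 1.2e2
```

The same helper reproduces the addition example: `round_sigfigs(0.000254, 1)` gives `0.0003`.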
The "right" way of tracking precision is error bars: every measurement has an error range, and those error ranges propagate through your calculations, so that you have a precise error range for every calculated value. That's a much better way of measuring potential errors than significant digits. But most of the time, unless we're in a very careful, clean, laboratory environment, we don't really know the error bars for our measurements. Significant digits are basically a way of estimating error bars. (And in fact, the mathematical reasoning underlying these rules is based on how you handle error bars.) The beauty of significant figures is that they're so incredibly easy to understand and to use. Just look at any computation or analysis result described anywhere, and you can easily see if the people describing it are full of shit or not. For example, you can see people claiming to earn 2.034523% on some bond; they're not, unless they've invested a million dollars, and then those last digits are pennies – and it's almost certain that the calculation that produced that figure of 2.034523% was done based on inputs which had a lot less than 7 significant digits. The way that this affects the discussion of rounding is simple. The standard rules I stated for rounding are for rounding one significant digit. If you're doing a computation with three significant digits, and you get a result of 2.43532123311112, anything after the 5 is noise. It doesn't count. It's not really there. So you don't get to say "But it's more than 2.435, so you should round up to 2.44." It's not more: the stuff that's making you think it's more is just computational noise. In fact, the "true" value is probably somewhere +/-0.005 of that – so it could be slightly more than 2.435, but it could also be slightly less. The computed digits past the last significant digit are insignificant – they're beyond the point at which you can say anything accurate.
So 2.43532123311112 is the same as 2.4350000000000 if you’re working with three significant digits – in both cases, you round off to 2.44 (assuming even preference). If you count the trailing digits past the one digit after the last significant one, you’re just using noise in a way that’s going to create a subtle upward bias in your computations. On the other hand, if you’ve got a measured value of 2.42532, with six significant figures, and you need to round it to 3 significant figures, then you can use the trailing digits in your rounding. Those digits are real and significant. They’re a meaningful, measured quantity – and so the correct rounding will take them into account. So even if you’re working with even preference rounding, that number should be rounded to three sigfigs as 2.43. 1. #1 The Science Pundit March 4, 2009 Thanks for this explanation. I guess my objection to your last post was that I was thinking of a situation like you described in your last paragraph here, when you were describing the rounding rules for a situation akin to that described in the penultimate paragraph here. 2. #2 Ed March 4, 2009 If you’re doing a computation with three significant digits, and you get a result of 2.43532123311112, anything after the 5 is noise. It doesn’t count. It’s not really there. Why? I mean, isn’t rounding to 2.44 more likely to yield an answer close to the true value? What makes 2.43 (assuming you favor odds in the last sig fig) a better approximation of what the error bars do than 2.44? And for that matter, aren’t error bars just an approximation for what we should “really” be using, which is probability distributions over all possible values? 3. #3 Juneappal March 4, 2009 Have you seen Chris Mulliss’s work on this topic? He ran a bunch of monte carlo trials with different sig fig methods, and found that for mult/division and exponentials, the standard methods are less accurate than other methods. 
Specifically, for those operations, he recommends adding one digit onto the least significant argument of the input terms. Here are his results for the standard method: And for his "improved" method: 4. #4 BenE March 4, 2009 Error bars, and by association significant digits, have always made me quite uneasy. Maybe you can shed some light Mark. My perplexion can be summarized as follows: What well stated and correct math construct are they an approximation of? It seems to me that the problem is a result of the fact that we are thinking at the interface between theoretical real numbers and real world measurements, along with the fact that most real numbers cannot precisely be represented in a finite amount of time and space. In a theoretical setting, when not using error bars or significant digits the standard practice with real numbers seems to be to write down a bunch of digits after the decimal and then assume it represents the same thing as if we had followed by an infinite number of zeros, an assumption which also makes me quite uneasy. But what happens in the real world where we must manipulate numbers that don't have infinite precision? We can set intervals and follow the rules, but what do these intervals represent? Is it a limiting bound that "true" values are assumed never to cross? Is it a measure of variance, an interval that signals that a certain proportion of the samples are known to be within? If so, can we assume a central tendency to the distribution of the samples? And if so, wouldn't it make sense to keep more digits to know where the center of the tendency should be? Otherwise aren't we throwing out information? But then how many digits should we keep? Furthermore, the numbers used to represent the intervals, should they have a confidence interval too? Recursively?? How many digits should we write on each number in this example: 3.56 +-0.56+- 0.045 +-…? What mathematical principles govern all of this?
My layman intuition is that there is an information theoretic explanation to it all. That it might have to do with the diminishing returns on information content of extra digits in the face of rough measurements. A kind of criterion meant to save our efforts which justifies not bothering with too many digits. I am not a mathematician but this has been quite the enigma for me and I really wish someone would point me towards some insight. I really feel like I am missing a fundamental part of mathematics that is key to understanding the relation between real numbers and the real reality. Does anyone have a clue? 5. #5 Uncephalized March 5, 2009 Good Post! I was taught these rules at least 6 years ago, in my basic high school science courses. Now as an engineering student I am constantly amazed at how few of my classmates know them. They'll leave things with 5 or 6 sig figs when they were only given a measurement with 2, and if you ask them why they did it, they'll give you a blank stare. I don't think this really gets taught to kids in public school, at least in AZ (I was in a good private school in HS), and I know no one ever bothered to explain it to us as freshmen. 6. #6 William Wallace March 5, 2009 The real area of the circle could easily be as high as 128, or as low as 113, assuming typical measurement errors. Of course these doesn't stop some atheists from criticizing 1KI 7:23, which describes the circumference of a circle measuring 10 cubits as being 30 cubits. They're largely ad-hoc. There is mathematical reasoning behind these rules – so they do work pretty well most of the time. Again, I pretty much agree with you. However, you should take a look at sigma-delta modulators (SDMs). These devices can, for example, use a single bit resolution analog to digital converter to come up with a measurement that has many more bits of resolution, e.g., 12 bits. And the 1-bit A/D converter doesn't even have to be all that balanced to get good results.
I have a basic grasp of SDMs, but still have difficulty explaining to others how taking a low-precision measurement repeatedly can generate high resolution estimates. I can convince myself from time to time, when I decide to look at it again, but I soon forget the details. 7. #7 William Wallace March 5, 2009 replace "these doesn't" with "these observations don't" replace "measuring 10 cubits" with "having a 10 cubit diameter" 8. #8 Alex Besogonov March 5, 2009 Typo: "will have on significant digit;" – should probably be "will have one significant digit;". 9. #9 regordane March 5, 2009 I agree with #3 and that's what I was always taught to do. Include the first "insignificant" digits from the input in the calculation and then round the result to the correct number of significant ones. I also agree with #5. I'm sick of seeing papers in peer reviewed medical journals with ridiculous degrees of spurious precision. It's something I pick up when I referee but clearly others don't. 10. #10 Kristian Z March 5, 2009 What about the distinction between accuracy and precision? I was taught that these are two different things. The precision is the number of digits used, regardless of whether it's justified. In Mark's example, 2.034523% would have a precision of six decimal places, but likely an accuracy much less than that (and therefore the precision used is not justified). 11. #11 Jens March 5, 2009 I agree with some of the commenters before me that significant digits (or error bars, of which "significant digits" are just one example) are a very weird approximation of the concept of I would expect in most cases that the "real" value is well modeled by something like a Gaussian distribution around the measured value (with as much precision as possible, no reason to "drop digits" there). "Significant digits" or error bars suppose a uniform distribution within a given interval.
When you do your calculations using the distributions (assuming they are symmetric) your result is a distribution centered around the value you get when calculating using the centers of your initial distributions (if I'm not mistaken). Dropping digits at any point of your calculation before obtaining the final result just arbitrarily shifts the centers of your distributions. Do you have any reason to believe that this would improve the model you're using? If you obtain 2.43532123311112 as the center of your distribution, why would you possibly shift it to 2.435 and then suddenly claim that that's as close to 2.43 as to 2.44? It clearly is not. It may be (depending on the shape of your probability distribution) that 2.43 is almost as probable as 2.44, but arbitrarily shifting your distributions around is certainly not the best way to come to that conclusion. 12. #12 misterjohn March 5, 2009 On your point 4 about multiplying 5 by 3.14, I would disagree, as you are assuming that the 5 is a rounded value. It may however be exact. Imagine for example changing a 5 dollar bill to some currency where you got 3.14 to the dollar. Then an answer of 15.70 would make perfect sense. I know you've tried to make the post simple, but context is very important here, and there are other times when you've over-simplified. Writing as a statistician, I'd say that if you found the mean of about 100 integer values, it would be quite reasonable to give the answer correct to 2 decimal places, which could well be a value with 2 or more significant figures than the original values. Similar results would occur in almost any statistical computation, from Standard Deviation onwards. I would agree that there are many people who give values to a completely spurious level of accuracy; I saw a case recently when something like £20000 in 1870 was said to be equivalent to 1234567.89 present day pounds. 13.
#13 pjb March 5, 2009 Imagine for example changing a 5 dollar bill to some currency where you got 3.14 to the dollar. Then an answer of 15.70 would make perfect sense. It would also make perfect cents. 14. #14 Todd P March 6, 2009 On your point 4 about multiplying 5 by 3.14, I would disagree, as you are assuming that the 5 is a rounded value. It may however be exact. Imagine for example changing a 5 dollar bill to some currency where you got 3.14 to the dollar. If the 5 is exact, then you aren't multiplying 5 by 3.14. You are multiplying 5.0000000… (with infinite significant zeroes) by 3.14. If the exchange rate is exactly 3.14 (e.g. if the mystery currency divides into even hundredths and this is a real cash transaction, not an electronic exchange where you can have portions of the smallest unit of currency), then you are really multiplying 5.0000000… by 3.1400000000… (again with infinite sig figs). The result in this case has infinite sig figs, but would probably be listed as 15.70 because we don't need to know about fractions of the smallest unit of currency when counting out cash. But the original statement is correct. When scientists are using sig figs, saying you multiply 5 by 3.14 means the 5 is measured to 1 sig fig. 15. #15 MJM March 7, 2009 Those of us who grew up using slide rules to do calculations understood these principles fairly well. Another skill we learned was to estimate the magnitude of our calculation first, so that we didn't put the decimal in the wrong place. It's amazing how first calculators and then computer spreadsheets led to such ridiculous claims of "accuracy". I can't remember how many times I had to review this with the talented young engineers from good schools that I managed, but there's obviously something missing in the way we teach simple mathematical concepts these days.
Finding maximum of two vectors without a loop? If there are two vectors, say x and y. for (i in 1:length(x)) z[i] = max(x[i],y[i]) Can you please help me to perform this without using a loop? r vectorization don't forget to consider clicking "accept" on an answer that satisfactorily answers your questions ... – Ben Bolker Dec 30 '12 at 17:41 Related: Compute the minimum of a pair of vectors – Joshua Ulrich Dec 30 '12 at 18:18 This is documented in ?max. – Joshua Ulrich Dec 30 '12 at 18:18 2 Answers Assuming that the vectors x and y are of the same length, pmax is your function. z = pmax(x, y) If the lengths differ, the pmax expression will return different values than your loop, due to recycling. Yes, of course. Thank you very much. – Jevgenijs Strigins Dec 30 '12 at 17:41 For completeness sake I include a solution which uses apply: Z = cbind(x,y) apply(Z, 1, max) I don't know how the different solutions compare in terms of speed, but, @JevgenijsStrigins, you could check quite easily. apply is very probably much slower than pmax ... – Ben Bolker Dec 30 '12 at 19:15 I agree, but I added apply because it is much more flexible in terms of the functions it can apply. – Paul Hiemstra Dec 30 '12 at 19:16 sure. library(benchmark); set.seed(101); x <- runif(1000); y <- runif(1000); benchmark(apply(cbind(x,y),1,max),pmax(x,y)) shows that pmax is about 40x faster (don't know how much of that is the cost of cbind()) – Ben Bolker Dec 30 '12 at 19:17 There might be some overhead because of cbind, but I cannot imagine that it would lead to a 40 times decrease in speed. – Paul Hiemstra Dec 30 '12 at 19:19
Pleasantville, NY Math Tutor Find a Pleasantville, NY Math Tutor ...With guidance and practice, the examination becomes a challenge you can meet with a level head. When you walk into the test center, feeling confident and prepared, you know you will do your very best. In addition to an extensive background in Accounting - I qualified as an Audit manager with KPMG... 55 Subjects: including calculus, discrete math, differential equations, career development ...I am very well qualified to teach and tutor students in nearly any subject from pre-Kindergarten through elementary, middle, high school, college, and graduate school. I tutor students in Mathematics, the Sciences (Biology and Chemistry) at any level, English, and Spanish, as well as many specialt... 90 Subjects: including calculus, European history, general computer, precalculus ...In addition, I try to foster a learning environment that motivates and builds success. I take pleasure in showing students that with appropriate instruction and a little hard work they are capable of significantly more than they imagined. My goal is to eventually get to a point where a student can be self-motivated to tackle any problem they come up against. 12 Subjects: including discrete math, differential equations, algebra 1, algebra 2 ...I graduated with a BA in Biological Sciences and a concentration in Neurobiology and Behavior. I am also an aspiring Neurologist and look forward to going to medical school. I have vast experience as a tutor and have worked with students in numerous subject areas from elementary Math to graduate level Biology. 25 Subjects: including ACT Math, physics, probability, prealgebra ...I was a Sun Certified Java and Solaris instructor for California State University, San Bernardino. I have more than 15 years of industry experience in programming. Computer Science is a tough topic in general.
12 Subjects: including algebra 1, algebra 2, geometry, prealgebra
Homework Help Posted by Anonymous on Sunday, March 10, 2013 at 11:54pm. We are given this table, which is Benford's law for these questions; it applies to the numbers used when filing your taxes: k : P(X=k) — 1: 0.301, 2: 0.176, 3: 0.125, 4: 0.097, 5: 0.079, 6: 0.067, 7: 0.058, 8: 0.051, 9: 0.046 1.) If you select two numbers randomly from anywhere on your taxes, what is the probability of selecting a number that starts with 2 and a number that starts with 5, in any order? 2.) If you randomly select a number on your taxes, what is the probability that it will NOT begin with the digit 4? 3.) If you select three numbers randomly from anywhere on your taxes, what is the probability that all three numbers start with the number 1 OR that none of the numbers start with the number 1? Please show work, I want to understand how to do these.. • Statistics - Dr. Jane, Monday, March 11, 2013 at 10:54am 1) If I am looking at this correctly, I don't see any number that starts with 2, but I see one number that ends with 5. Your probability is .079 2) I don't see any number beginning with 4. The probability that it won't be a 4 is the sum of all of the probabilities in your list, which should = 1. 3) I don't see numbers starting with 1. I only see 2. So, the problem says that all 3 numbers start with 1. I would have to say 0. You can add up all of the probabilities of the numbers that don't start with one. Or you can add up the probabilities for the two numbers that do start with 1 and subtract it from 1.
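Reading the table as first-digit (Benford) probabilities — so, e.g., P(first digit is 2) = 0.176 — and assuming the selected numbers are independent, the three questions can be sketched in Python. This is my own working, not part of the original thread:

```python
# Benford first-digit probabilities from the table in the question
p = {1: 0.301, 2: 0.176, 3: 0.125, 4: 0.097, 5: 0.079,
     6: 0.067, 7: 0.058, 8: 0.051, 9: 0.046}

# 1) one number starting with 2 and one starting with 5, in either order:
#    P(2 then 5) + P(5 then 2)
q1 = p[2] * p[5] + p[5] * p[2]      # = 2 * 0.176 * 0.079, about 0.0278

# 2) a single number that does NOT begin with the digit 4
q2 = 1 - p[4]                       # = 0.903

# 3) three numbers that all start with 1, OR none of which start with 1
q3 = p[1] ** 3 + (1 - p[1]) ** 3    # about 0.0273 + 0.3415 = 0.3688

print(q1, q2, q3)
```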
ASA 129th Meeting - Washington, DC - 1995 May 30 .. Jun 06 4aSA6. Power transmission variance predictions in complex systems with dissipation. John Burkhardt, Richard Weaver, Dept. of Theor. and Appl. Mech., Univ. of Illinois, Urbana, IL 61801 The eigenstatistics of damped complex systems are examined for the purpose of improving statistical response predictions such as those formulated in statistical energy analysis and statistical room acoustics. Both the statistical distribution of modal decay rates and the effect of dissipation on the intermodal correlation of the natural frequencies described by the gaussian orthogonal ensemble (GOE) of random matrix theory are explored. It is found that the modal decay rates are distributed according to a chi-square distribution whose degree depends on the distribution of damping in the system and the wavelength of the disturbance. The intermodal correlations of the natural frequencies are found to be unaffected by the presence of moderate damping. Level repulsion (the absence of near degeneracies) and spectral rigidity (the near regularity of the spectrum) are found to conform to the prediction of the GOE provided the system is reverberant. The variance of the power transmission function for an irregularly shaped membrane is formulated using a GOE-type natural frequency spectrum and chi-square distributed modal decay rates. Numerical simulations of membranes are performed which confirm the calculated effect of a distribution of decay rates on power transmission characteristics.
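The modal model the abstract describes — GOE-type natural frequencies paired with chi-square distributed decay rates — can be sampled numerically. A toy Python/NumPy sketch (my own illustration, not the authors' code; the matrix size and the chi-square degree k = 4 are arbitrary choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def goe_frequencies(n, rng):
    """Eigenvalues of a random real symmetric (GOE-type) matrix, standing in
    for the natural frequencies of a reverberant, irregularly shaped system."""
    a = rng.normal(size=(n, n))
    return np.linalg.eigvalsh((a + a.T) / 2.0)

n_modes = 300
freqs = goe_frequencies(n_modes, rng)

# Modal decay rates drawn from a chi-square distribution; the degree k would
# depend on the damping distribution and the wavelength of the disturbance
# (k = 4 here is purely illustrative).
k = 4
decay = rng.chisquare(k, size=n_modes)

# Nearest-neighbor level spacings: a GOE spectrum exhibits level repulsion,
# so very small spacings are rare compared to an uncorrelated spectrum.
spacings = np.diff(freqs)  # eigvalsh returns eigenvalues in ascending order
```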
Find a Grove Hall, MA Science Tutor

...While tutoring in my junior and senior year of college, I tutored freshmen in calculus. I took geometry in high school. I have been able to use the principles over and over in my career as a practicing engineer.
10 Subjects: including physics, mechanical engineering, calculus, algebra 1

...Currently, I work as a Research Associate at Harvard Medical School, teaching occasionally. Overall, I have more than 15 years of teaching experience as a tutor and in-class teacher. The major part of my teaching experience is in the field of Chemistry, including General, Physical, Organic, and Analytical Chemistry as well as Biochemistry.
23 Subjects: including chemical engineering, organic chemistry, biology, calculus

...As part of my degree (major in Neuroscience and minor in Chemistry) I was required to take 4 classes of upper-level chemistry. I took 2 semesters of organic chemistry and 2 semesters of biochemistry. In these 4 classes I learned a vast amount of information and, more importantly, I gained the tools necessary to look at a problem in chemistry and ask the right questions.
45 Subjects: including organic chemistry, physics, botany, ACT Science

...It's been over 10 years since I've been in this field. I'm an independent contractor working part-time with a different company. My travel has expanded to Canada and the United Kingdom.
1 Subject: nursing

...I live in Cambridge and I'm available evenings and weekends. If you want more information, do not hesitate to contact me. I've taken organic chemistry I and organic chemistry II at Stevens Institute of Technology successfully. In addition, I've taken general chem I and II as well as labs for both general chem and organic chem.
31 Subjects: including organic chemistry, ACT Science, chemical engineering, mechanical engineering
{"url":"http://www.purplemath.com/grove_hall_ma_science_tutors.php","timestamp":"2014-04-17T15:42:18Z","content_type":null,"content_length":"24098","record_id":"<urn:uuid:cea39116-34ec-45cf-b3ee-ee58748ac4c0>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Re: In "square root of -1", should we say "minus 1" or "negative 1"?
Replies: 1   Last Post: Dec 4, 2012 10:09 PM

Posted: Dec 4, 2012 10:09 PM

On Dec 4, 2012, at 10:58 AM, Joe Niederberger <niederberger@comcast.net> wrote:

> Simply working through the logic of some time-reversal scenario or other way of illustrating the sign rule may in fact stand in the way of deeper understanding, because the person may think that somehow "proves" the matter.
> It's good, but it's not enough by itself.

Can you describe more of what you mean by deeper understanding? If the teacher stands in front of the class and works through the logic or describes some analogy, and the students nod in agreement, then I don't think that is proof of any understanding (on the students' part). But if the students are able to tackle some pretty clever problems involving these elements, and even extend the elements past what the teacher is able to deliver in a limited amount of time, then I don't think you can get any deeper than that.

Taking "negative numbers" as an example, what would represent "deeper" understanding? I know one thing I have now that I didn't have when I was 13: a much more "experienced" understanding. But I do know that there was an aha instant before which my understanding of negative numbers was not fully ripe and after which it was fully ripe. And when I say fully ripe I mean that after that point in time, there is nothing you could describe to me involving negative numbers that I wouldn't understand. In other words, I got it. How do you get deeper than that, other than experience?
Bob Hansen
{"url":"http://mathforum.org/kb/message.jspa?messageID=7932551","timestamp":"2014-04-16T04:33:24Z","content_type":null,"content_length":"19793","record_id":"<urn:uuid:a7d5cedb-ccc8-4134-97fd-38a81dca01fd>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
context extension

In dependent type theory, context extension introduces new free variables into the context. If $T$ is a type in a context $\Gamma$, then the extension of $\Gamma$ by (a free variable of) the type $T$ is the context denoted $\Gamma, x\colon T$ (where $x$ is a new variable). (We have said ‘the’ extension of $\Gamma$ by $T$ using the generalised the; but it may literally be unique using certain conventions for handling alpha equivalence.)

Categorical semantics

The categorical semantics of context extension is the inverse image of the base change geometric morphism (or its analog for hyperdoctrines) along the projection morphism $T \to \Gamma$: it is the middle functor $(-)\times T$ of the adjoint triple $\sum_{x : T} \dashv (-)\times T \dashv \prod_{x : T}$ in the slice

$(\mathbf{H}_{/\Gamma})_{/T} \stackrel{\overset{\prod_{x : T}}{\to}}{\stackrel{\overset{(-)\times T}{\leftarrow}}{\underset{\sum_{x : T}}{\to}}} \mathbf{H}_{/\Gamma}$

Generally speaking, a morphism $\Delta \to \Gamma$ in the category of contexts (an interpretation of $\Gamma$ in $\Delta$) is a display morphism iff there is an isomorphism $\Delta \leftrightarrow \Theta$ where $\Theta$ is an extension of $\Gamma$. (This might not actually be true in all type theories, or maybe it should be taken as the definition of ‘display morphism’; I'm not sure.)

The observation that context extension forms an adjoint pair/adjoint triple with quantifiers is due to

• Bill Lawvere, Adjointness in Foundations, (TAC), Dialectica 23 (1969), 281-296

and further developed in

• Bill Lawvere, Equality in hyperdoctrines and comprehension schema as an adjoint functor, Proceedings of the AMS Symposium on Pure Mathematics XVII (1970), 1-14.

Revised on November 23, 2012 02:26:33 by Urs Schreiber
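The adjoint triple $\sum_{x : T} \dashv (-)\times T \dashv \prod_{x : T}$ has a purely logical shadow that is easy to check in a proof assistant: a constant proposition can be pulled out of the premise of an existential, and pushed into the conclusion of a universal. A hedged Lean 4 sketch of this propositional special case (not the full categorical statement on the page):

```lean
-- Σ ⊣ weakening:   ((∃ x : T, P x) → Q) ↔ (∀ x : T, P x → Q)
example (T : Type) (P : T → Prop) (Q : Prop) :
    ((∃ x : T, P x) → Q) ↔ (∀ x : T, P x → Q) :=
  ⟨fun h x hx => h ⟨x, hx⟩, fun h ⟨x, hx⟩ => h x hx⟩

-- weakening ⊣ Π:   (Q → ∀ x : T, P x) ↔ (∀ x : T, Q → P x)
example (T : Type) (P : T → Prop) (Q : Prop) :
    (Q → ∀ x : T, P x) ↔ (∀ x : T, Q → P x) :=
  ⟨fun h x hq => h hq x, fun h hq x => h x hq⟩
```

Here weakening of $Q$ along $x : T$ plays the role of $(-)\times T$, with $\exists$ and $\forall$ as its left and right adjoints.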
{"url":"http://www.ncatlab.org/nlab/show/context+extension","timestamp":"2014-04-17T09:36:19Z","content_type":null,"content_length":"38679","record_id":"<urn:uuid:376579a4-8b73-4988-902a-d88592cc7003>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 49

, 2004 "... Elliptic curves have been intensively studied in number theory and algebraic geometry for over 100 years and there is an enormous amount of literature on the subject. To quote the mathematician Serge Lang: It is possible to write endlessly on elliptic curves. (This is not a threat.) Elliptic curves also figured prominently in the recent proof of Fermat's Last Theorem by Andrew Wiles. Originally pursued for purely aesthetic reasons, elliptic curves have recently been utilized in devising algorithms for factoring integers, primality proving, and in public-key cryptography. In this article, we aim to give the reader an introduction to elliptic curve cryptosystems, and to demonstrate why these systems provide relatively small block sizes, high-speed software and hardware implementations, and offer the highest strength-per-key-bit of any known public-key scheme. ..." Cited by 369 (17 self)

, 2000 "... Abstract. We present an index-calculus algorithm for the computation of discrete logarithms in the Jacobian of hyperelliptic curves defined over finite fields. The complexity predicts that it is faster than the Rho method for genus greater than 4. To demonstrate the efficiency of our approach, we describe our breaking of a cryptosystem based on a curve of genus 6 recently proposed by Koblitz. ..." Cited by 78 (6 self)

, 2000 "... We develop a generic framework for the computation of logarithms in finite class groups. The model allows to formulate a probabilistic algorithm based on collecting relations in an abstract way independently of the specific type of group to which it is applied, and to prove a subexponential running time if a certain smoothness assumption is verified. The algorithm proceeds in two steps: First, it determines the abstract group structure as a product of cyclic groups; second, it computes an explicit isomorphism, which can be used to extract discrete logarithms. ..." Cited by 54 (9 self)

, 1990 "... A new probabilistic algorithm for the determination of class groups and regulators of an algebraic number field F is presented. Heuristic evidence is given which shows that the expected running time of the algorithm is exp(sqrt(log D · log log D))^{c+o(1)}, where D is the absolute discriminant of F, where c ∈ R_{>0} is an absolute constant, and where the o(1)-function depends on the degree of F. Computing the class group and the regulator of an algebraic number field F are two major tasks of algorithmic algebraic number theory. In the last decade, several regulator and class group algorithms have been suggested (e.g. [16], [17], [18], [3]). In [2] the problem of the computational complexity of those algorithms was addressed for the first time. This question was then studied in [2] in great detail. The theoretical results and the computational experience show that computing class groups and regulators is a very difficult problem. More precisely, it turns out that even under the a... ..." Cited by 51 (5 self)

, 1996 "... We present new algorithms for computing Smith normal forms of matrices over the integers and over the integers modulo d. For the case of matrices over Z_d, we present an algorithm that computes the Smith form S of an A ∈ Z_d^{n×m} in only O(n^{ℓ−1} m) operations from Z_d. Here, ℓ is the exponent for matrix multiplication over rings: two n × n matrices over a ring R can be multiplied in O(n^ℓ) operations from R. We apply our algorithm for matrices over Z_d to get an algorithm for computing the Smith form S of an A ∈ Z^{n×m} in O~(n^{ℓ−1} m · M(n log ||A||)) bit operations (where ||A|| = max |A_{i,j}| and M(t) bounds the cost of multiplying two ⌈t⌉-bit integers). These complexity results improve significantly on the complexity of previously best known Smith form algorithms (both deterministic and probabilistic) which guarantee correctness. The Smith normal form is a canonical diagonal form for equivalence of matrices over a ..." Cited by 42 (4 self)

- Bull. Amer. Math. Soc , 1992 "... Abstract. In this paper we discuss the basic problems of algorithmic algebraic number theory. The emphasis is on aspects that are of interest from a purely mathematical point of view, and practical issues are largely disregarded. We describe what has been done and, more importantly, what remains to be done in the area. We hope to show that the study of algorithms not only increases our understanding of algebraic number fields but also stimulates our curiosity about them. The discussion is concentrated on three topics: the determination of Galois groups, the determination of the ring of integers of an algebraic number field, and the computation of the group of units and the class group of that ring of integers. ..." Cited by 40 (3 self)

- CODES AND CRYPTOGRAPHY, LNCS 1746 , 1999 "... ..."

, 1999 "... In this paper we discuss various aspects of cryptosystems based on hyperelliptic curves. In particular we cover the implementation of the group law on such curves and how to generate suitable curves for use in cryptography. This paper presents a practical comparison between the performance of elliptic curve based digital signature schemes and schemes based on hyperelliptic curves. We conclude that, at present, hyperelliptic curves offer no performance advantage over elliptic curves. ..." Cited by 30 (5 self)

, 1996 "... In this article we survey recent developments concerning the discrete logarithm problem. Both theoretical and practical results are discussed. We emphasize the case of finite fields, and in particular, recent modifications of the index calculus method, including the number field sieve and the function field sieve. We also provide a sketch of some of the cryptographic schemes whose security depends on the intractability of the discrete logarithm problem. Let G be a cyclic group generated by an element t. The discrete logarithm problem in G is to compute for any b ∈ G the least non-negative integer e such that t^e = b. In this case, we write log_t b = e. Our purpose, in this paper, is to survey recent work on the discrete logarithm problem. Our approach is twofold. On the one hand, we consider the problem from a purely theoretical perspective. Indeed, the algorithms that have been developed to solve it not only explore the fundamental nature of one of the basic s... ..." Cited by 24 (1 self)

"... Abstract. We propose constructing provable collision resistant hash functions from expander graphs. As examples, we investigate two specific families of optimal expander graphs for provable hash function constructions: the families of Ramanujan graphs constructed by Lubotzky-Phillips-Sarnak and Pizer respectively. When the hash function is constructed from one of Pizer's Ramanujan graphs (the set of supersingular elliptic curves over F_{p^2} with ℓ-isogenies, ℓ a prime different from p), then collision resistance follows from hardness of computing isogenies between supersingular elliptic curves. We estimate the cost per bit to compute these hash functions, and we implement our hash function for several members of the LPS graph family and give actual timings. ..." Cited by 19 (2 self)
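Several of the abstracts above revolve around the discrete logarithm problem: given a cyclic group generated by t, find the least e ≥ 0 with t^e = b. For scale, a minimal generic method — baby-step giant-step, not the index-calculus algorithms the papers actually describe — can be sketched as follows (function and variable names are mine; requires Python 3.8+ for `pow` with a negative exponent):

```python
from math import isqrt

def bsgs(t, b, n, p):
    """Baby-step giant-step: least e >= 0 with t**e ≡ b (mod p),
    where the order of t divides n. Returns None if no solution exists."""
    m = isqrt(n) + 1
    baby = {}
    x = 1
    for j in range(m):                 # baby steps: store t^j -> j
        baby.setdefault(x, j)
        x = x * t % p
    factor = pow(t, -m, p)             # t^(-m) mod p
    gamma = b % p
    for i in range(m):                 # giant steps: b * t^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    return None

print(bsgs(2, 9, 10, 11))  # 2^6 = 64 ≡ 9 (mod 11), so 6
```

This O(√n) time-and-memory method is the usual baseline against which the subexponential index-calculus approaches above are measured.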
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=281511","timestamp":"2014-04-19T18:46:04Z","content_type":null,"content_length":"36136","record_id":"<urn:uuid:72b9db00-8ede-4c35-8fb8-14959cd410bc>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Wadsworth, IL Prealgebra Tutor

...I first began my journey helping others in college when I realized that I had a strong work ethic that allowed me to teach myself even if the material presented in class was inadequate. I often found myself mentoring colleagues through Chemistry classes, Philosophy classes, Spanish classes, and ...
26 Subjects: including prealgebra, Spanish, English, reading

...My students have had great success with my tutoring. In fact, one of my ACT students raised her Math score from a 25 to a 31 in two and a half months! I thoroughly enjoy teaching math and working with young people of all ages! I taught freshman and sophomore calculus and differential equations at a major engineering college. It was a comprehensive two-year curriculum.
18 Subjects: including prealgebra, physics, GRE, calculus

...During this time, I was available to students for additional assistance outside of classroom hours; this included help with general concepts, laboratory assignments, study materials, and test preparation. I am passionate about my discipline and love teaching and bringing enthusiasm to students while helping them to expand their knowledge. Education: B.S.
10 Subjects: including prealgebra, calculus, algebra 1, algebra 2

...I also recently graduated from Trinity International University with a bachelor's degree. When beginning college at Trinity International University in 2009, the very first class that I had to take was on study skills. It was an intense lesson in how to study and learn so that each student was given the best opportunity to succeed in the rigorous returning-adult program.
38 Subjects: including prealgebra, reading, English, writing

...I am passionate about teaching people how to excel. My Human Development major allowed me to take courses in a multitude of disciplines: biology, chemistry, psychology, child observation, and teaching.
I discovered early in life that I have a knack for inspiring and motivating people to surpass their expectations.
13 Subjects: including prealgebra, chemistry, English, reading
{"url":"http://www.purplemath.com/Wadsworth_IL_Prealgebra_tutors.php","timestamp":"2014-04-21T05:13:34Z","content_type":null,"content_length":"24510","record_id":"<urn:uuid:b699f51e-89a5-4aed-a1ab-4d1f21c58d9f>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
On September 20th, Hope Hardware received an invoice for 26 items at $234 each with terms 5/10, 3/20, N/30 EOM. Hope Hardware reduced the bill by $1,990 on the last day of the first discount period. A cheque for $1,478 arrived on the last day of the second discount period. A payment to completely pay off the remainder of the invoice arrived on the final day before a penalty would have been incurred.

a) What was the amount of the first payment? $ ____________
b) What was the outstanding balance after the first payment was made? $ ____________
c) What was the amount credited for the cheque for $1,478 when it was received during the second discount period? $ ____________
d) What was the amount of the last payment to pay off the invoice completely? $ ____________
e) What was the due date for the final payment? (Choose Month-Enter Date) __________ ____________

The invoice for 26 items at $234 each amounts to 26 × 234 = $6,084 in total. The terms 5/10, 3/20, n/30 EOM mean the entire balance is due within 30 days after the end of the month in which the invoice is issued, but the buyer can take a 5% cash discount by paying within 10 days, or a 3% discount by paying within 20 days, after the end of that month. The invoice issue date is Sep 20. Hope Hardware reduced the bill by $1,990 on the last day of the first discount period.

a) Let the actual amount of the first payment be $x. A payment of $x taken at the 5% discount earns a credit of x/0.95, so

`x/0.95 = 1990`
`rArr x = 0.95*1990 = $1,890.50`

b) The outstanding balance after the first payment was $(6,084 − 1,990) = $4,094.

A cheque for $1,478 arrived on the last day of the second discount period.
c) The amount credited at this point was 1478/0.97 = $1,523.71.

d) The amount of the last payment to pay off the invoice completely was $(4,094 − 1,523.71) = $2,570.29.

e) The due date for the final payment was October 30 (30 days after the end of September).
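The arithmetic above can be checked end to end in a few lines (variable names are mine; the discount handling follows the posted solution, crediting payment ÷ (1 − discount)):

```python
items, price = 26, 234
invoice = items * price                 # total invoice: $6084

# a) payment that earns a $1,990 credit at the 5% discount
pay1 = 0.95 * 1990                      # $1890.50

# b) balance after the $1,990 credit is applied
balance = invoice - 1990                # $4094

# c) credit earned by a $1,478 cheque at the 3% discount
credit2 = 1478 / 0.97                   # ≈ $1523.71

# d) final payment clears the remainder at full price
pay3 = balance - credit2                # ≈ $2570.29

print(round(pay1, 2), balance, round(credit2, 2), round(pay3, 2))
```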
{"url":"http://www.enotes.com/homework-help/september-20-th-hope-hardware-received-an-invoice-462415","timestamp":"2014-04-24T00:55:24Z","content_type":null,"content_length":"26808","record_id":"<urn:uuid:64716b9d-b0a7-48c3-96a1-519f432aca50>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 95 "... In this paper we formalize the notion of a ring signature, which makes it possible to specify a set of possible signers without revealing which member actually produced the signature. Unlike group signatures, ring signatures have no group managers, no setup procedures, no revocation procedures, and ..." Cited by 1754 (4 self) Add to MetaCart In this paper we formalize the notion of a ring signature, which makes it possible to specify a set of possible signers without revealing which member actually produced the signature. Unlike group signatures, ring signatures have no group managers, no setup procedures, no revocation procedures, and no coordination: any user can choose any set of possible signers that includes himself, and sign any message by using his secret key and the others ’ public keys, without getting their approval or assistance. Ring signatures provide an elegant way to leak authoritative secrets in an anonymous way, to sign casual email in a way which can only be verified by its intended recipient, and to solve other problems in multiparty computations. The main contribution of this paper is a new construction of such signatures which is unconditionally signer-ambiguous, provably secure in the random oracle model, and exceptionally efficient: adding each ring member increases the cost of signing or verifying by a single modular multiplication and a single symmetric encryption. - IEEE TRANSACTIONS ON NEURAL NETWORKS , 2005 "... Data analysis plays an indispensable role for understanding various phenomena. Cluster analysis, primitive exploration with little or no prior knowledge, consists of research developed across a wide variety of communities. The diversity, on one hand, equips us with many tools. On the other hand, the ..." Cited by 231 (3 self) Add to MetaCart Data analysis plays an indispensable role for understanding various phenomena. 
Cluster analysis, primitive exploration with little or no prior knowledge, consists of research developed across a wide variety of communities. The diversity, on one hand, equips us with many tools. On the other hand, the profusion of options causes confusion. We survey clustering algorithms for data sets appearing in statistics, computer science, and machine learning, and illustrate their applications in some benchmark data sets, the traveling salesman problem, and bioinformatics, a new field attracting intensive efforts. Several tightly related topics, proximity measure, and cluster validation, are also discussed. - JACM , 1976 "... ABSTRACT A linear time algorithm to obtain a minimum finish time schedule for the two-processor open shop together with a polynomial time algorithm to obtain a minimum finish time preemptive schedule for open shops with more than two processors are obtained It Is also shown that the problem of obtai ..." Cited by 93 (4 self) Add to MetaCart ABSTRACT A linear time algorithm to obtain a minimum finish time schedule for the two-processor open shop together with a polynomial time algorithm to obtain a minimum finish time preemptive schedule for open shops with more than two processors are obtained It Is also shown that the problem of obtaining mimmum fimsh time nonpreemptlve schedules when the open shop has more than two processors is - IEEE/ACM Trans. Networking , 1996 "... Continually growing number of users have to exchange increasing amounts of information. Local Area Networks (LANs) are commonly used as the communication infrastructure that meets the demands of the users in the local environment. These networks typically consist of several LAN segments connected to ..." Cited by 34 (0 self) Add to MetaCart Continually growing number of users have to exchange increasing amounts of information. Local Area Networks (LANs) are commonly used as the communication infrastructure that meets the demands of the users in the local environment. 
These networks typically consist of several LAN segments connected together via bridges. In this paper we describe an algorithm for designing LANs with the objective of minimizing the average network delay. The topology design includes issues such as determination of the number of segments in the network, allocating the users to the different segments and determining the interconnections and routing among the segments. The determination of the optimal LAN topology is a very complicated combinatorial optimization problem. Therefore, a heuristic algorithm that is based on genetic ideas is used. Numerical examples are provided and the quality of the designs obtained by using the algorithm is compared with lower bounds on the average network delay that are develo... - Artificial Intelligence , 2005 "... Artificial Intelligence, to appear Maximum Boolean satisfiability (max-SAT) is the optimization counterpart of Boolean satisfiability (SAT), in which a variable assignment is sought to satisfy the maximum number of clauses in a Boolean formula. A branch and bound algorithm based on the Davis-Putnam- ..." Cited by 32 (1 self) Add to MetaCart Artificial Intelligence, to appear Maximum Boolean satisfiability (max-SAT) is the optimization counterpart of Boolean satisfiability (SAT), in which a variable assignment is sought to satisfy the maximum number of clauses in a Boolean formula. A branch and bound algorithm based on the Davis-Putnam-Logemann-Loveland procedure (DPLL) is one of the most competitive exact algorithms for solving max-SAT. In this paper, we propose and investigate a number of strategies for max-SAT. The first strategy is a set of unit propagation or unit resolution rules for max-SAT. We summarize three existing unit propagation rules and propose a new one based on a nonlinear programming formulation of max-SAT. The second strategy is an effective lower bound based on linear programming (LP). 
We show that the LP lower bound can be made effective as the number of clauses increases. The third strategy consists of a a binary-clause first rule and a dynamicweighting variable ordering rule, which are motivated by a thorough analysis of two existing well-known variable orderings. Based on the analysis of these strategies, we develop an exact solver for both max-SAT and weighted max-SAT. Our experimental results on random problem instances and many instances from the max-SAT libraries show that our new solver outperforms most of the existing exact max-SAT solvers, with orders of magnitude of improvement in many cases. - COMPUTER COMMUNICATIONS JOURNAL , 1996 "... In conventional multicast communication, the source carries a single conversation with all destination nodes. If a node on the path to any destination becomes congested, the throughput to all destinations is reduced, thus treating some destination nodes unfairly. We consider a window-controlled mult ..." Cited by 31 (3 self) Add to MetaCart In conventional multicast communication, the source carries a single conversation with all destination nodes. If a node on the path to any destination becomes congested, the throughput to all destinations is reduced, thus treating some destination nodes unfairly. We consider a window-controlled multipoint connection and study the use of destination set grouping, where the destination set can be split into disjoint subgroups with the source carrying independent conversations with each subgroup. We present a static grouping heuristic that can obtain near optimum grouping for static network environments and a dynamic grouping protocol which can adjust the grouping and the window sizes per group in response to changing network conditions. The performance of the static grouping heuristic and the dynamic grouping protocol are studied using simulation and compared with single-group multicasting. - IEEE Transactions on Computers , 1990 "... 
A connected hypercube with faulty links and/or nodes is called an injured hypercube. To enable any non-faulty node to communicate with any other non-faulty node in an injured hypcrcube, the information on component failures has to be made available to non-faulty nodes so as to route messages around ..." Cited by 31 (2 self) Add to MetaCart A connected hypercube with faulty links and/or nodes is called an injured hypercube. To enable any non-faulty node to communicate with any other non-faulty node in an injured hypcrcube, the information on component failures has to be made available to non-faulty nodes so as to route messages around the faulty components. We propose first a distributed adaptive fault-tolerant routing scheme for an injured hypercube in which each node is required to know only the condition of its own links. Despite its simplicity, this scheme is shown to be capable of routing messages successfully in an injured hypercube as long as the number of faulty components is less than n. Moreover, it is proved that this scheme routes messages via shortest paths with a rather high probability and the expected length of a resulting path is very close to that of a shortest path. Since the assumption that the number of faulty components is less than n in an n-dimensional hypercube might limit the usefulness of the above scheme, we also introduce a routing scheme based on depth-first search which works in the presence of an arbitrary number of faulty components. Due to the insufficient information on faulty components, however, the paths chosen by the above scheme may not always be the shortest. To guarantee all messages to be routed via shortest paths, we propose to equip every node with more information than that on its own links. The effects of this additional information on routing efficiency are analyzed, and the additional information to be kept at each node for the shortest path routing is determined. 
Several examples and remarks are also given to illustrate our results. Index Terms: Injured and regular hypercubes, distributed adaptive fault-tolerant routing, depth-first search, looping effects, network delay tables, failure information. - the International Journal of Robotics Research "... A "modular" robotic system consists of joint and link modules that can be assembled in a variety of configurations to meet different or changing task requirements. However, due to typical symmetries in module design, different assembly configurations may lead to robotic structures which are kinemati ..." Cited by 26 (1 self) A "modular" robotic system consists of joint and link modules that can be assembled in a variety of configurations to meet different or changing task requirements. However, due to typical symmetries in module design, different assembly configurations may lead to robotic structures which are kinematically identical, or isomorphic. This paper considers how to enumerate the non-isomorphic assembly configurations of a modular robotic system. We introduce an Assembly Incidence Matrix (AIM) to represent a modular robot assembly configuration. Then we use symmetries of the module geometry and graph isomorphisms to define an equivalence relation on the AIMs. Equivalent AIMs represent isomorphic robot assembly configurations. Based on this equivalence relation, we propose an algorithm to generate non-isomorphic assembly configurations of an n-link tree-like robot with different joint and link module types. Examples demonstrate that this method is a significant improvement over a brute force enu... - IEEE 29th Design Automation Conference , 1992 "... In this paper, we demonstrate that the "dual" intersection graph of the netlist strongly captures circuit properties relevant to partitioning. We apply this transformation within an existing testbed that uses an eigenvector computation to derive a linear ordering of nets, rather than modules [12]. W ..." 
Cited by 22 (9 self) In this paper, we demonstrate that the "dual" intersection graph of the netlist strongly captures circuit properties relevant to partitioning. We apply this transformation within an existing testbed that uses an eigenvector computation to derive a linear ordering of nets, rather than modules [12]. We then find a good module partition with respect to the ratio cut metric [23] via a sequence of incremental independent-set computations in bipartite graphs derived from the net ordering. An efficient matching-based algorithm called IG-Match was tested on MCNC benchmark circuits as well as additional industry examples. Results are very encouraging: the algorithm yields an average of 28.8% improvement over the results of [23]. The intersection graph representation also yields speedups over, e.g., the method of [11], due to additional sparsity in the netlist representation. 1 Preliminaries A standard model for VLSI layout associates a graph G = (V; E) with the circuit netlist; vertices in...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=167439","timestamp":"2014-04-19T19:52:39Z","content_type":null,"content_length":"39687","record_id":"<urn:uuid:15dcb845-fac5-4e55-97e2-5a16c1b41586>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
Plymouth Meeting SAT Math Tutor Find a Plymouth Meeting SAT Math Tutor ...My more than 30 years' experience as a student, published author and teacher has given me a wealth of knowledge and background to help students dissect a written passage to extract the author's intent. I fully understand the student's frustration with trigonometry. I will help students to feel more comfortable with the subject matter, by relating it to real situations. 60 Subjects: including SAT math, reading, English, writing ...Mason. PS: I have two Ivy League degrees - Columbia BA and Wharton MBA - and have been an instructor for the SAT, GMAT and LSAT for the Princeton Review. Personally, I missed 5 questions on the SAT, 3 questions on the GMAT, received a perfect score on the LSAT and scored in the 99th perce... 23 Subjects: including SAT math, reading, English, writing I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University. 9 Subjects: including SAT math, geometry, algebra 2, GRE ...Understand directed numbers in equations. 4. Solve problems using equations and inequalities. 5. Solve problems using exponents and roots. 6. 27 Subjects: including SAT math, calculus, statistics, geometry Hi! My name is Kristin and I have taught middle school math for the past 6 years. I've enjoyed tutoring students in elementary and middle school for the past 10 years. 21 Subjects: including SAT math, reading, statistics, algebra 1
{"url":"http://www.purplemath.com/Plymouth_Meeting_SAT_math_tutors.php","timestamp":"2014-04-16T16:26:40Z","content_type":null,"content_length":"24129","record_id":"<urn:uuid:1a866744-7b76-4ed7-bd1b-8c0de924eea5>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Last Update Time Of Cell NOTE: The CALL function was disabled in one of the later Excel97 updates, and is not available in Excel2000 or 2002. The reason for this was that it presented a severe security problem -- it allowed a worksheet cell to call any procedure from any DLL, regardless of the potentially destructive consequences of that procedure. This page contains information that will most likely not work on your installation of Excel. Using the Excel4 macro function library, you can easily create a formula that returns the date and time of the last update of a cell. Suppose we have a value in A1, and want to put in B1 the time that A1 was last updated. Simply enter the following formula in B1: To get the last time that any cell in a range was updated, use Note that if A1 contains a formula, the time will be updated whenever A1 or any of its precedents are changed. This may cause confusion if the value of A1 does not change when one of its precedents does. For example, suppose A1 contains the formula =MAX(10, A2) And B1 contains the first time formula described above. If we change A2 from 5 to 6, A1 will not change, but the last-updated time will change. This is because one of its precedents has changed, even though the resulting value remains the same. For more information about the CALL function and the Excel4 function library, please see The CALL Function page, which appears on my web site compliments of Laurent Longre.
{"url":"http://www.cpearson.com/excel/lasttime.htm","timestamp":"2014-04-21T02:28:30Z","content_type":null,"content_length":"4657","record_id":"<urn:uuid:d71ef6c7-f69f-499f-b242-7e3fc252acb3>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Contact Lens Spectrum the contact lens exam A Fine Tool for Soft Toric Adjustments BY JENNIFER L. SMYTHE, OD, MS, FAAO A significant advancement in fitting soft toric contact lenses is the ability to calculate cross-cylinder effects using a sphero-cylinder over-refraction and a cross-cylinder calculator. Before the availability of these calculators, practitioners primarily determined lens power parameters using the "left add, right subtract" (LARS) method of adjusting for lens rotation. Such calculations are limited in that LARS will not reveal underlying: • errors in refraction • vertex calculation errors • possible cylinder masking in thicker/stiffer ballasted lenses • lens draping effects secondary to the underlying corneal topography In other words, many unknown power variables may exist. A simple soft lens over-refraction can pick up virtually all of these variables and, when calculated properly, patients can achieve excellent visual results. Figure 1. ToriTrack calculates the final prescription based on three variables. My Calculator of Choice Several cross-cylinder calculators are available. The one I've found most useful is CooperVision's ToriTrack. This calculator doesn't require on-eye estimation of lens rotation, which can be a significant source of error. The program can determine the resultant lens power to order based on three known variables: baseline manifest refraction, contact lens power and sphero-cylinder over-refraction. ToriTrack also helps in selecting appropriate empirical lens parameters based on corneal diameter, apical radius of curvature and refractive error. It selects a base curve based on the overall sagittal height of the cornea and suggests lens power to compensate for tear lens effects that occur with certain ballasted designs. How it Works You can find ToriTrack at www.coopervision.com. 
If you're fitting a high modulus or thicker, ballasted lens design such as Proclear or Hydrasoft (CooperVision), click the appropriate box and enter the spectacle prescription from the phoropter. Click "calculate," and the prescription appears, including compensation for vertex distance and a potential tear lens. If you chose a low modulus or non-ballasted design, then the calculator suggests a power that doesn't require presumed lacrimal lens compensation. After you apply the diagnostic lens and allow it to settle, perform a sphero-cylinder over-refraction. Click "over-refraction" to enter this third variable. The program then calculates the final prescription (Figure 1). In this example, the patient requires a lower sphere and cylinder power in addition to a modified axis based on the cross cylinder effects from a lens that rotates. If we applied LARS alone in this example, the patient would be over-corrected in both principal power meridians (although the axis might be close). The LARS method points us in the right direction only with respect to the axis; it doesn't help determine sphere or cylinder power. Also Consider These ToriTrack is limited in that it relies on a stable manifest refraction for baseline calculations. In cases of irregular astigmatism, consider using programs that ask for the amount and direction of lens rotation, such as those found on www.eyedock.com or the Sunsoft Calculator (Ocular Sciences, Inc.). Dr. Smythe is an associate professor of optometry at Pacific University and is in private group practice in Beaverton, Oregon. Contact Lens Spectrum, Issue: June 2004
{"url":"http://www.clspectrum.com/printarticle.aspx?articleID=12598","timestamp":"2014-04-20T15:58:57Z","content_type":null,"content_length":"9045","record_id":"<urn:uuid:21e57cb3-ca49-4dfb-bb93-f7276f604261>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
class Repair p where

Repair and RepairToFL deal with repairing old patches that were written out due to bugs or that we no longer wish to support. Repair is implemented by collections of patches (FL, Named, PatchInfoAnd) that might need repairing.

Instances:
RepairToFL p => Repair (FL p)
RepairToFL p => Repair (Named p)
RepairToFL p => Repair (PatchInfoAnd p)

class Apply p => RepairToFL p where

RepairToFL is implemented by single patches that can be repaired (Prim, Patch, RealPatch). There is a default so that patch types with no current legacy problems don't need to have an implementation.

Instances:
RepairToFL Prim
RepairToFL Prim
PrimPatch prim => RepairToFL (Patch prim)
PrimPatch prim => RepairToFL (RealPatch prim)

class Check p where

Instances:
Check p => Check (RL p)
Check p => Check (FL p)
Check (Patch prim)
Check p => Check (Named p)
PrimPatch prim => Check (RealPatch prim)
{"url":"http://hackage.haskell.org/package/darcs-2.8.4/docs/Darcs-Patch-Repair.html","timestamp":"2014-04-17T04:30:01Z","content_type":null,"content_length":"10241","record_id":"<urn:uuid:a9106dde-eefe-4a58-a6ac-76a8a1e782e0>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Undergraduate Program Become a master problem solver As a math major at Missouri State University, you can spend some time exploring our varied courses – from actuarial science to statistics or abstract algebra. MSU’s math graduates have gone on to careers such as computer analyst, actuary, lawyer, statistician and teacher. A variety of options for you Our program offers you the opportunity to focus on one of the following tracks: Actuarial Mathematics This is the science of evaluating the likelihood of future events and managing risk. People trained in this field often work for insurance companies. Applied Mathematics This branch of the discipline applies mathematical theory and methods to physics, biology, medicine, computer sciences, life sciences, etc. It includes mathematical modeling, which transforms real-world problems into mathematical problems. General Mathematics With this focus, you can study various aspects of mathematics, and choose from the University’s broad range of courses. Statistics deals with the collection, organization and interpretation of data.
{"url":"http://math.missouristate.edu/undergraduate/","timestamp":"2014-04-20T08:18:06Z","content_type":null,"content_length":"14602","record_id":"<urn:uuid:133c4cd5-6bb8-45a9-94e2-8b3b5c7817a8>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
Guttenberg, NJ Geometry Tutor Find a Guttenberg, NJ Geometry Tutor ...Does your child have difficulty with spelling? Do you have a student who has difficulties with writing such as generating or getting ideas onto paper, organizing writing and grammatical problems? Do you have a student whose life you would like to enrich with piano lessons? 30 Subjects: including geometry, English, piano, reading ...I have a Bachelor of Science in Physics earned from a world-leading research institution in astrophysics (University of La Laguna/Canary Islands Institute of Astrophysics). I have extensive physics knowledge and tutoring experience at all levels. I am deeply passionate about the subject and love... 17 Subjects: including geometry, chemistry, calculus, Spanish ...I've taught children in Pre K and between 1st-4th grade, as well as teenagers and college students how to approach, better grasp and finally apply this readiness to the actual test-taking. I've dissolved writer's blocks, test anxieties, fears of reading + writing, and made studying more fun than ... 31 Subjects: including geometry, Spanish, English, writing ...I am an expert in tutoring algebra, geometry, fractions, ratios, percents, decimals, etc. I also have scored highly on comparative exams like the SAT and GRE. I am qualified to tutor in this subject due to my experience in the subject. 22 Subjects: including geometry, Spanish, chemistry, algebra 1 ...Throughout my career, I have tutored over a hundred students at many educational levels; from elementary school through graduate level. In addition, I have years of experience tutoring both SAT and ACT verbal and math sections as well as the entire GRE. Teaching is one of my biggest passions and one of the things I do best. 
23 Subjects: including geometry, English, Spanish, writing
{"url":"http://www.purplemath.com/Guttenberg_NJ_geometry_tutors.php","timestamp":"2014-04-19T23:20:44Z","content_type":null,"content_length":"24129","record_id":"<urn:uuid:4558a248-86db-4c23-ae95-78afdcfb380c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Cecelia (Total # Posts: 20) In a problem on percentages there are three quantities that are involved. These are the base, the amount, and the rate percent. Express the relation between them in the form of an equation and explain with examples how to find each one of them given the other two quantities intermediate algebra 0.3x + 0.2y=5, 0.5x + 0.4y=11 using the substitution method A 36.0mL sample of 1.20 M KBr and a 56.0mL sample of 0.700 M KBr are mixed. The solution is then heated to evaporate water until the total volume is 60.0mL. What is the molarity of the KBr in the final solution? The mass percentage of chloride ion in a 29.00 sample of seawater was determined by titrating the sample with silver nitrate, precipitating silver chloride. It took 42.42 of 0.2998 silver nitrate solution to reach the equivalence point in the titration. Multiply whole number by denominator and add numerator. Put this number as numerator with original denominator. For example: 9 2/11 would be 9*11 which is 99 add that to 2, which gives you 101. SO final answer is 101/11. 1 7/15= (15*1) + 7=22 SO 22/15 Introductory Physics I am really nervous about mathematics, so I just want to make sure I am doing these problems correctly. I figure if I am getting the incorrect answer, I must be doing them incorrectly and need to work on them some more. I THINK I have them correct, but I just want to verify. E... Solve by the elimination method. 5x-15y=2 and 5x-15y =3 Is there a solution of an ordered pair? 
Are there infinitely many solutions? Or is there no solution? I got that there is no solution. use the quotient rule to differentiate the following k(t)=e^3t/4-t^2 (-2<t<2) find the stationary points of the following f(x)=x^3-3x^2-24x-7 find the definite integral of the following h(u)=cos^2(1/8*u) (8th*u) find the indefinite integral of the following g(x)=21-12x^3/x (x>0) find the indefinite integral of the following f(t)=6cos(3t)+5e^-10t find the y coordinates of each stationary point of f(x)=x^3-3x^2-24x-7 solve the following simultaneous equations using matrices 3x-6y=24 and -4x+5y=-23 Given g(x)= 1/3(x-2)^2-3 2<x<5 state the domain and image set g^-1 and find its rule given function G(x)=1/3(x-2)^2-3 2<x<5 sketch a graph of the function y=g^-1(x) and find the inverse function g^-1 the equation of the circle is x^2-8x+y^2+4y+11=0 find any coordinates that intersect the line y=-x-1 given the following equation for a circle x^2-8x+y^2+4y+11=0 find its centre and radius
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Cecelia","timestamp":"2014-04-20T23:36:36Z","content_type":null,"content_length":"9977","record_id":"<urn:uuid:449e5882-8e5f-4a9b-b689-74dfcb2737b0>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with calculating pi (beginner) 02-27-2013 #1 Registered User Join Date Feb 2013 San Juan, PR I have only just begun programming and i am a bit stuck on a current logic problem. The program i made (which is very simple) for the purpose of calculating pi is based on the infinite alternating series: 4/1 - 4/3 + 4/5 - 4/7 … I've already worked out the mathematics and it works, but it's very inefficient (since the input is the number of iterations). Obviously the series converges to pi as it goes towards infinity, but as there is no way to repeat a program an infinite number of times (without waiting an infinite amount of time) the technique i've employed is flawed. I was wondering if someone could give me a suggestion as to how i could make my program more efficient and get a more accurate reading of pi (without waiting a really long time for the answer). Thank you. The source code i have up until now is:

#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    double pi, series, entry, number = 1, alternating_sign;
    int count;
    char ans;
    do
    {
        cout << "Entry:";
        cin >> entry;
        for (count = 0; count < entry; count++)
        {
            alternating_sign = pow((-1.0), (count));
            series += alternating_sign / (number);
            number += 2;
        }
        pi = 4 * series;
        cout << "Pi equals " << pi << endl;
        cout << "Try again?\n";
        cin >> ans;
    } while (ans == 'Y' || ans == 'y');
    return 0;
}

Try this one. π²/6 = 1/1² + 1/2² + 1/3² + 1/4² + 1/5² + 1/6² + 1/7² + 1/8² + ... Code - functions and small libraries I use It’s 2014 and I still use printf() for debugging. "Programs must be written for people to read, and only incidentally for machines to execute. 
" —Harold Abelson And here is the analysis I had made on a post that was discussing this series I provided. I calculated the absolute error of that (where x' is the machine number): x' = 3.141592645, x = 3.141592653 (actual value). |ε| = |x - x'| = 0.000000008 = 0.08 * 10^(-7) < 0.5*10^(-7), which tells us that the machine value is going to be accurate to at most 7 decimal digits. Here it is exactly 7 digits. Then I calculated the absolute relative error: |ρ| = |ε| / x = 0.02546479 * 10^(-7) = 2.546479 * 10^(-9) < 5 * 10^(-9), which says that at most nine significant digits are going to be accurate. Here 8 digits are accurate. Thanks for the help, that really sounds much simpler You need to initialize the variable "series" to 0. to std10093 I figured that i would probably have to do that. But instead of placing a value exact to 8 digits i thought of making a program where the user types in the measured accuracy. I wanted to be able to input, let's say, a 20 digit measure; but i guess that would probably take a long time as well By the way, do you know of any online or literary resource where i could obtain some further information on creating algorithms. The current textbook i'm using doesn't really have a lot of information on the development of logical sequences which is obviously crucial to programming. thanks fightmx, stupid error on my part. Although i guess i also aimed too high wanting to calculate any number of accuracy on my computer. Check out this formula then. Might help. By the way, you can see the code for the other formula here. If you google you may find more
You can let him set the term at which to stop. I am not sure what you are asking for. A great book for giving you an understanding and making you think like a scientist in algorithms is the one by Cormen. Google for it Thanks a lot! I've barely started and i'm already trying to do things for myself
{"url":"http://cboard.cprogramming.com/cplusplus-programming/154697-help-calculating-pi-beginner.html","timestamp":"2014-04-16T23:03:37Z","content_type":null,"content_length":"81554","record_id":"<urn:uuid:7d4833f0-0767-4425-9301-0c659d7d8c74>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
how does m = b (or slope = y-intercept)? February 14th 2011, 07:50 PM #1 Jul 2009 How can I prove that COVAR(x,y)/VAR(x) = b? I'm pretty sure the first part equals m, but I don't see how the slope and y-intercept can be the same thing, unless they're both zero. Any nudge in the right direction would be greatly appreciated. My understanding is, for $\displaystyle y=a+bx$ then $\displaystyle a=\bar{y}-b\bar{x}, b= \frac{cov(x,y)}{var(x)}$ So this works if you call the equation $\displaystyle y=mx+c$, as many textbooks do: in that notation the slope is labelled m instead of b, so cov(x,y)/var(x) is the slope either way, while the y-intercept is the separate quantity a (or c). Thanks, pickslides!
{"url":"http://mathhelpforum.com/advanced-statistics/171307-how-does-m-b-slope-y-intercept.html","timestamp":"2014-04-16T08:29:13Z","content_type":null,"content_length":"34927","record_id":"<urn:uuid:53dd2ea0-7241-4249-8009-23331b4eabf5>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the center and radius of a circle that has a diameter with endpoints (-9, -6) and (-1,0). Show work. a) center (4,3); radius 5 b) center (8,6); radius 10 c) center (-5,-3); radius 5 d) center (-10,-6); radius 10 • one year ago
{"url":"http://openstudy.com/updates/508f099de4b0ad6205372f12","timestamp":"2014-04-19T19:36:29Z","content_type":null,"content_length":"61143","record_id":"<urn:uuid:54101124-f510-482b-9c5b-0935ee840dbd>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Noncommutative Geometry, Quantum Fields and Motives The scope of Non-commutative Geometry, Quantum Fields and Motives is titanic, as the title already indicates. Non-commutative geometry is of course closely associated with Alain Connes, and this has been the case for many years now, certainly ever since his deep and definitive work hit the presses over a quarter of a century ago, at that time focused primarily on the (vast) area where algebraic and differential topology, algebraic geometry, von Neumann and C* algebras, and K-theory meet. Over the intervening years non-commutative geometry has taken shape more and more and rumors surfaced from time to time that Connes’ focus was ultimately nothing less than the Riemann Hypothesis. Well, there’s a lot more to it than rumor, but it’s only fair to say that there’s even more to it than that. It’s really all about a philosophy of how deep themes in mathematics, specifically number theory and theoretical physics (particularly quantum field theory) are interrelated, and how different themes along these lines influence one another. For instance, there’s Chapter 1, weighing in at well over 300 pages: a great deal of quantum electrodynamics-spawned material featuring Feynman’s formalism of “path integrals” and diagrams as well as the notorious business of renormalization, non-commutative spaces, and Grothendieck’s motives in a non-commutative setting. Next, there’s truly spellbinding material on Riemann’s zeta function and non-commutative geometry (Chapter 2, at only about 100 pages), followed by quantum statistical mechanics connected to a good deal of arithmetic and algebraic geometry (everything from class field theory and Kronecker-Weber to elliptic curves and Shimura varieties) and Galois theoretic themes in the broader sense (Chapter 3: almost 150 pages). Finally we get almost 200 pages (Chapter 4) on “endomotives, thermodynamics, and the Weil explicit formula.” It is revealing to quote from Connes-Marcolli’s § 5.7 (p. 
671) to get an idea of what forces are conspiring to reach a common goal: The fact that we do not work here with Hilbert spaces means … that we do not have the restriction of unitarity … [and] will be able to obtain a trace formula [realizing Weil’s explicit formula] which is not only [i.e. merely] semilocal … The RH [Yes! It’s the Riemann Hypothesis] will then be equivalent to a positivity statement … [W]e formulate the trace formula in a cohomological version, which is closer to what happens in the classical setting with the action of Frobenius on étale cohomology. And one discerns shades of RH for function fields, of course, and to be sure, on p. 674 ff. we read, as a prelude to §6, “We now proceed to compare the setting we have developed in terms of the non-commutative geometry of adèle class space with the classical algebra-geometric setting of the Weil proof of RH for function fields.” So the cat is out of the bag, really: what greater mathematical objective can there be than to realize RH for number fields and RH for function fields as two sides of the same coin, the discriminators being, as it were, algebraic geometry and (or versus) non-commutative geometry? This is manifestly one aspect of the rationale for what the entire sweeping (and evolving) program is about, with a complementary aspect being quantum physics in its post-Feynman form. What a wild, wild ride! So, what is required of the reader, then? Well, a lot. It would be folly to try to read this book without a preliminary knowledge of QED + QFT (well, I guess QFT will do), done for mathematicians. Here there is the (likewise titanic, but actually very readable) two-volume source, Quantum Fields and Strings: A Course for Mathematicians, now available in paperback. There are other sources, of course, but it’s useful to stick with things written for us mathematicians. The critical thing is to learn the yoga of Feynman diagrams, and to get a good idea of what renormalization is all about. 
The latter topic is certainly covered at length in the aforementioned Quantum Fields and Strings. Next, you had better be comfortable with the French approach to algebraic geometry (Leray → H. Cartan → Serre → Grothendieck), as opposed to, say, the Zariski approach. Then there’s Chevalley and Weil, of course: adèles and idèles and (at least) Tate’s thesis. Furthermore, given the focus of RH for function fields, it would be a good idea if some familiarity were present there, too. Beyond this, well, it’s important to have a good “graduate school and beyond” grounding in commutative algebra, homological algebra, functional analysis, and so on. And, oh yes, be sure to know a load of number theory, e.g. elliptic curves, modular forms — well, you get the idea. And it would also be good if you know a bit about Grothendieck’s motifs. So there it is. There are no two ways about it: this is a very dense book, and has to be worked through very slowly, with lots of margin work, and outside reading. But to go that route is both virtuous and psychologically instructive for a scholar, and the material at hand is obviously of huge importance, depth, and elegance. Michael Berg is Professor of Mathematics at Loyola Marymount University in Los Angeles, CA.
{"url":"http://www.maa.org/publications/maa-reviews/noncommutative-geometry-quantum-fields-and-motives","timestamp":"2014-04-16T15:00:27Z","content_type":null,"content_length":"100508","record_id":"<urn:uuid:91d7bb37-294d-4d03-9148-69ee3d826401>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
The philosophy of applied mathematics I told a guest at a recent party that I use mathematics to try to understand migraines. She thought that I ask migraine sufferers to do mental arithmetic to alleviate their symptoms. Of course, what I really do is use mathematics to understand the biological causes of migraines. My work is possible because of a stunning fact we often overlook: the world can be understood mathematically. The party goer's misconception reminds us that this fact is not obvious. In this article I want to discuss a big question: "why can maths be used to describe the world?", or to extend it more provocatively, "why is applied maths even possible?" To do so we need to review the long history of the general philosophy of mathematics — what I will loosely call metamaths. What is applied maths? A stunning fact: the world can be understood mathematically. Before we go any further, we should be clear on what we mean by applied mathematics. I will borrow a definition given by an important applied mathematician of the 20th and 21st centuries, Tim Pedley, the GI Taylor Professor of Fluid Mechanics at the University of Cambridge. In his Presidential Address to the Institute of Mathematics and its Applications in 2004, he said "Applying mathematics means using a mathematical technique to derive an answer to a question posed from outside mathematics." This definition is deliberately broad — including everything from counting change to climate change — and the very possibility of such a broad definition is part of the mystery we are discussing. The question of why mathematics is so applicable is arguably more important than any other question you might ask about the nature of mathematics. Firstly, because applied mathematics is mathematics, it raises all the same issues as those traditionally arising in metamaths. Secondly, being applied, it raises some of the issues addressed in the philosophy of science. 
I suspect that the case could be made for our big question being in fact the big question in the philosophy of science and mathematics. However, let us now turn to the history of metamaths: what has been said about mathematics, its nature and its applicability? The long history of mathematics generally lacks a distinction between pure and applied maths. Yet in the modern era of mathematics over, say, the last two centuries, there has been an almost exclusive focus on a philosophy of pure mathematics. In particular, emphasis has been given to the so-called foundations of mathematics — what is it that gives mathematical statements truth? Metamathematicians interested in foundations are commonly grouped into four camps. Formalists, such as David Hilbert, view mathematics as being founded on a combination of set theory and logic (see Searching for the missing truth), and to some extent view the process of doing mathematics as an essentially meaningless shuffling of symbols according to certain prescribed rules. Logicists see mathematics as being an extension of logic. The arch-logicists Bertrand Russell and Alfred North Whitehead famously took hundreds of pages to prove (logically) that one plus one equals two. Intuitionists are exemplified by LEJ Brouwer, a man about whom it has been said that "he wouldn't believe that it was raining or not until he looked out of the window" (according to Donald Knuth). This quote satirises one of the central intuitionist ideas, the rejection of the law of the excluded middle. This commonly accepted law says that a statement (such as "it is raining") is either true or false, even if we don't yet know which one it is. By contrast, intuitionists believe that unless you have either conclusively proved the statement or constructed a counter example, it has no objective truth value. (For an introduction to intuitionism read Constructive mathematics.) Moreover, intuitionists put a strict limit on the notions of infinity they accept.
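Two of these foundational claims have crisp illustrations in a modern proof assistant. The following Lean 4 sketch is an editorial illustration, not part of the original article, and the comparison with Principia Mathematica is only suggestive since the formal systems differ. It shows that 1 + 1 = 2 holds by sheer computation in Lean's type theory, that the law of the excluded middle enters only as a classical axiom, and that its double negation is nevertheless provable constructively, which is the precise sense in which intuitionists neither assert nor deny the law.

```lean
-- In Lean's type theory the numerals compute, so 1 + 1 = 2 is
-- definitionally true: the proof is just reflexivity, with no
-- hundreds of pages needed (the systems differ from Principia's,
-- so this contrast is illustrative only).
theorem one_plus_one : 1 + 1 = 2 := rfl

-- The law of the excluded middle is not provable in Lean's
-- constructive core; it is available only as a classical axiom:
#check (Classical.em : ∀ p : Prop, p ∨ ¬p)

-- Its double negation, however, IS intuitionistically provable:
-- we cannot assert `p ∨ ¬p` outright, but we can refute its refutation.
theorem not_not_em (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))
```

If both theorems typecheck without appeal to `Classical.em`, the point stands: the intuitionist position is a restriction on which axioms may be used, not a claim that the excluded middle is false.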
They believe that mathematics is entirely a product of the human mind, which they postulate to be only capable of grasping infinity as an extension of an algorithmic one-two-three kind of process. As a result, they only admit enumerable operations into their proofs, that is, operations that can be described using the natural numbers. Finally, Platonists, members of the oldest of the four camps, believe in an external reality or existence of numbers and the other objects of mathematics. For a platonist such as Kurt Gödel, mathematics exists without the human mind, possibly without the physical universe, but there is a mysterious link between the mental world of humans and the platonic realm of mathematics. It is disputed which of these four alternatives — if any — serves as the foundation of mathematics. It might seem like such rarefied discussions have nothing to do with the question of applicability, but it has been argued that this uncertainty over foundations has influenced the very practice of applying mathematics. In Mathematics: The loss of certainty, Morris Kline wrote in 1980 that "The crises and conflicts over what sound mathematics is have also discouraged the application of mathematical methodology to many areas of our culture such as philosophy, political science, ethics, and aesthetics [...] The Age of Reason is gone." Thankfully, mathematics is now beginning to be applied to these areas, but we have learned an important historical lesson: the choice of where mathematics is applied has a sociological dimension that is sensitive to metamathematical problems. What does applicability say about the foundations of maths? The logical next step for the metamathematician who bothers to think about the applicability of mathematics would be to ask what each of the four foundational views has to say about our big question.
Discussions along this line have been written by a number of mathematicians and scientists, such as Roger Penrose in the book The road to reality, or Paul Davies in his book The mind of god. I would like to take a different path here by reversing the "logical" next step: I want to ask "what does the applicability of mathematics have to say about the foundations of mathematics?" In asking this question I take for granted that there is no serious disagreement about whether mathematics is applicable: the entire edifice of modern science and technology, depending heavily as it does on the mathematisation of nature, bears witness to this fact. So what can a formalist say to explain the applicability of mathematics? If mathematics really is nothing other than the shuffling of mathematical symbols in the world's longest running and most multiplayer game, then why should it describe the world? What privileges the game of maths to describe the world rather than any other game? Remember, the formalist must answer from within the formalist worldview, so no Plato-like appeals to a deeper meaning of maths or hidden connection to the physical world is allowed. For similar reasons, the logicists are left floundering, for if they say "well, perhaps the universe is an embodiment of logic", then they are tacitly assuming the existence of a Platonic realm of logic which can be embodied. This turns logicism into a mere branch of platonism, which, as we shall see below, comes with its own grave problems. Thus for both formalists and non-platonist logicists the very existence of applicable mathematics poses a problem apparently fatal to their position. Neither logicism nor formalism is widely believed any more, despite the cliché that mathematicians are platonists during the week and formalists at the weekend.
Both perspectives fell out of favour for reasons other than the potentially fatal one about the applicability of mathematics, reasons largely connected with the work of Gödel, Thoralf Skolem, and others. (See Gödel and the limits of logic.) Is the world inherently mathematical or is maths a construct of the human mind? The third proposed foundation, intuitionism, never really garnered much support in the first place. To this day, it is muttered about in dark tones by most working mathematicians, if it is considered at all. What is seen as a highly restricted toolkit for proofs and a bizarre notion of limbo, in which a statement is neither true nor false until a proof has been constructed one way or the other, make this viewpoint unattractive to many mathematicians. However, the central idea of the enumerable nature of processes in the universe appears to be deduced from reality. The physical world, at least as we humans perceive it, seems to consist of countable things and any infinity we might encounter is a result of extending a counting process. In this way, perhaps intuitionism is derived from reality, from the apparently at-most-countably infinite physical world. It appears that intuitionism offers a neat answer to the question of the applicability of mathematics: it is applicable because it is derived from the world. However, this answer may fall apart on closer inspection. For a start, there is much in modern mathematical physics, including for example quantum theory, which requires notions of infinity beyond the enumerable. These aspects may therefore lie forever beyond the explicatory power of intuitionistic mathematics. There is one modern idea which could benefit from the finitist logic of the intuitionists: so-called digital physics. It holds that the Universe is akin to a giant computer.
The fundamental particles, for example, are described by the quantum state they happen to be in at a given moment, just as the bit from computer science is defined by its value of 0 or 1. Just like a computer, the Universe is based on information about states and its evolution could in theory be simulated by a giant computer. Hence the digital physics motto, "It from bit". But this world view too fails to be truly intuitionistic and seems to sneak in some platonic ideas. The bit of information theory seemingly posits a platonic existence of information from which the physical world is derived. But more fundamentally, intuitionism has no answer to the question of why non-intuitionistic mathematics is applicable. It may well be that a non-intuitionistic mathematical theorem is only applicable to the natural world when an intuitionistic proof of the same theorem also exists, but this has not been established. Moreover, although intuitionistic maths may seem as if it is derived from the real world, it is not clear that the objects of the human mind need faithfully represent the objects of the physical Universe. Mental representations have been selected for over evolutionary time, not for their fidelity, but for the advantage they gave our forebears in their struggles to survive and to mate. Created in the image of mathematics Formalism and logicism have failed to answer our big question. The jury is out on whether inuitionism might do so, but huge conceptual difficulties remain. What, then, of Platonism? Galileo Galilei, who believed that the world was written in the language of maths, facing the Roman Inquisition for proclaiming that the Earth moved around the Sun. Painting by Cristiano Banti. Platonists believe that the physical world is an imperfect shadow of a realm of mathematical objects (and possibly of notions like truth and beauty as well). 
The physical world emerges, somehow, from this platonic realm, is rooted in it, and therefore objects and relationships between objects in the world shadow those in the platonic realm. The fact that the world is described by mathematics then ceases to be a mystery as it has become an axiom: the world is rooted in a mathematical realm. But even greater problems then arise: why should the physical realm emerge from and be rooted in the platonic realm? Why should the mental realm emerge from the physical? Why should the mental realm have any direct connection with the platonic? And in what way do any of these questions differ from those surrounding ancient myths of the emergence of the world from the slain bodies of gods or titans, the Buddha-nature of all natural objects, or the Abrahamic notion that we are "created in the image of God"? Indeed, the belief that we live in a divine Universe and partake in a study of the divine mind by studying mathematics and science has arguably been the longest-running motivation for rational thought, from Pythagoras, through Newton, to many scientists today. "God", in this sense, seems to be neither an object in the space-time world, nor the sum total of objects in that physical world, nor yet an element in the platonic world. Rather, god is something closer to the entirety of the platonic realm. In this way, many of the difficulties outlined above which a platonist faces are identical with those faced by theologians of the Judeo-Christian world — and possibly of other religious or quasi-religious systems. The secular icon Galileo believed that the "book of the universe" was written in the "language" of mathematics — a platonic statement begging an answer (if not the question) if ever there was one. Even non-religious mathematical scientists today regularly report feelings of awe and wonder at their explorations of what feels like a platonic realm — they don't invent their mathematics, they discover it. 
Paul Davies goes further in The Mind of God, and highlights the two-way nature of this motivation. Not only may a mathematician be driven to understand mathematics in a bid to glimpse the mind of God (a non-personal God like that of Spinoza or Einstein), but our very ability to access this "key to the universe" suggests some purpose or meaning to our existence. In fact, the hypothesis that the mathematical structure and physical nature of the universe and our mental access to study both is somehow a part of the mind, being, and body of a "god" is a considerably tidier answer to the questions of the foundation of mathematics and its applicability than those described above. Such a hypothesis, though rarely called such, has been found in a wide variety of religious, cultural, and scientific systems of the past several millennia. It is not natural, however, for a philosopher or scientist to wholeheartedly embrace such a view (even if they may wish to) since it tends to encourage the preservation of mystery rather than the drawing back of the obscuring veil. Penrose's three-worlds diagram. Roger Penrose has most lucidly illustrated some of this mystery with a three-worlds diagram. The platonic, physical, and mental worlds are the three in question, and he sketches them as spheres arranged in a triangle. A cone then connects the platonic world with the physical: in its most general form, the diagram shows the narrow end of the cone penetrating the platonic world and the wider part penetrating some of the physical world. This is to show that (at least some of) the physical world is embedded in at least some of the platonic world. A similar cone connects the physical to the mental world: (at least some of) the mental world is embedded in the physical world. Finally, and most mysteriously, the triangle is completed by a cone from the mental to the platonic world: (at least some of) the platonic world is embedded in the mental world. Each cone, each world, remains a mystery.
We seem to have reached the rather depressing impasse in which none of the four proposed foundations of mathematics can cope unambiguously with the question of the applicability of mathematics. But I want you to finish this essay instead with the feeling that this is incredibly good news! The teasing out of the nuances of the big question — why does applied mathematics exist? — is a future project which could yet yield deep insight into the nature of mathematics, the physical universe, and our place within both systems as embodied, meaning-making, pattern-finding systems. About the author Phil Wilson is a senior lecturer in mathematics at the University of Canterbury, New Zealand. He applies mathematics to the biological, medical, industrial, and natural worlds. Submitted by Anonymous on March 23, 2012. Mathematics studies structures and is the language in which we can talk about these beings in a rather precise way (define them, express their properties, behaviour,...). If we take a simplistic view of the universe, we could say that it is made of lego bits. These bits happen to fit with each other or can create new bits when something happens to them. These can be put together to form structures (or maybe they are structures already but where do we start?), and we have an increase of complexity that after some time gave rise to things like us. There may be universes where these bits cannot be put together, or the structures they form are more unstable than in ours. Since structures can be talked about with Mathematics, it appears as if Mathematics can explain our universe. I say appears because it is a byproduct of the original structures and because OUR Mathematics does not really study all possible structures. The fact is that we do a very human type of Mathematics. If you want me to put a number on it, I would say that with today's Mathematics we study less than 1% of what is out there.
We are mostly interested in "nice" structures (that appeal to our sense of beauty). For example, a Ring in which let's say multiplication is associative for all its elements (otherwise it would not be called a Ring) is nicer than a "Ring" in which multiplication with all elements but one (or two, three...) is associative. Why is it nicer you ask? Well, for starters it is easier to define, easier to find examples to work with and behaviour is more predictable. Just try to find an example of the other "Ring" or try to prove that such a structure does not exist (there you have a life's work). So far we have been lucky that our "nice" Mathematics can explain so much of the universe. But if we find things we cannot explain, then it becomes very interesting: Is it because we have not developed Mathematics enough? Or is it because there are structures that cannot be explained with Mathematics at all? In the latter case, are there universes out there that work on such structures? Is ours one of them and we have not realized it? There is surely a limit to what our "nice" Mathematics can explain. What do we do when we get there? Well, we can try to evolve faster and become something better at math :) Submitted by .mau. on July 14, 2011. Hi Phil! I may be dead wrong, but I thought that Intuitionists do not accept at all actual infinite in proving theorems: the only kind of infinite they would accept is the potential one (like in Euclid axioms: the line is not infinite, it may be prolonged at wish). Is it true, or is constructivism only a subsect of intuitionism?
There is also the freedom in which axioms you have in your bedrock intuitionistic logic. I'm not particularly clear on all of these distinctions myself. I wrote another article on Plus about this sort of thing, and my distinguished intuitionistic colleague Douglas Bridges thought it was OK so you might want to take a look. But he also said that some points were not properly developed so take it only as a starting point! Submitted by .mau. on July 15, 2011. It's something of a surprise that Brouwer, after proving this theorem, decided to start constructivism. Or maybe it was just a logical reaction :-) Submitted by PhilWilson on July 15, 2011. Hi .mau. I think that Brouwer was already beginning to reject non-constructive methods before he developed his fixed-point theorems, and indeed some of these ideas were swirling around the mathematical zeitgeist at the time. But who knows what it was that triggered his formulation of a mini-revolution? These things are a product of a creative free will, I would say, and are thus somewhat Submitted by Anonymous on July 12, 2011. First, I think it's great that you're taking the time to answer comments. It's rare that authors take full advantage of the conversational potential the internet allows, as opposed to more traditional forms of publishing. I consider myself a philosopher with only an elementary understanding of mathematical foundations, but I've always found Gödel's use of the liar's paradox to undermine Russell and Whitehead's Principia especially brilliant. I tend to be drawn to subversive intellectual accomplishments more than straightforward constructive projects. Though I also greatly respect the work of those like Gell-Mann and Feynman who have worked on the nuts and bolts of the Standard Model, which is probably humanity's greatest mathematical project.
I'm not a huge fan of mereology (http://plato.stanford.edu/entries/mereology/), but for a while I've felt that its central occupation, the relation of parts to wholes, is the same problem underlying metamathematical concerns. Things like, what constitutes a unit? Does the property of divisibility only inhere within some objects, or all of them? In reference to math, we could ask these questions more like, how do we bound objects in order to pick them out for counting? How do we maintain the identity of an object after it is unbound into parts or fractions? And how do those fractions become independent units themselves? Seems like the question of how math is applicable to the world has to grapple with these types of questions at some point. If only because I place our denoting of physical objects as units prior to our employment of those units for counting. I'm only basing that on evolutionary assumptions, but I don't think it's a stretch as far as assumptions go. My usual initial response to these questions is that we've adopted a method of differentiating units based on what we discern to be separate objects using visual perception. As it seems the dominant human sense, and especially tuned to delineating the edges of macroscopic objects. I'm certainly not the first to suggest that if we used a different faculty than vision for getting around in the world, then our math, as well as a lot of other things, might be quite different. Thus, we're left with a kind of chicken-and-egg situation. Does our visual field separate objects because objects really are separable (and not necessarily just at our normal scale of perception), or do we think of objects as separable because we've had such success with a visual system that stresses the distinction between objects? Just one more thing. I take the problems of units to be the inverse of the problems of infinity.
Instead of just asking how can there be an endless succession of things, we might also ask how a thing can be separate from everything else? Submitted by PhilWilson on July 14, 2011. Hi Anonymous. I was bowled over by your comment, and not just because of the praise at the beginning. (Answering questions here has been a delight because I do so enjoy discussing such topics. I'm also fascinated that a lot of the questioners appear to be adults, on this site aimed at high school kids.) Your comment was so deep that I barely know where to begin. I'm afraid that I cannot answer it in the way it deserves as I am currently in Palo Alto, attending a workshop at the American Institute of Mathematics and am very busy indeed. I think that you have identified a very deep and important aspect of the scientifico-mathematical enterprise which is certainly a part of what I was discussing here, although not something I had emphasised in the article. How do we define our basic unit? This seems to me to be (a) partly arbitrary, (b) non-algorithmic, and (c) always wrong, although hopefully useful. It is almost certainly impossible to define the "edge" of a human on a quantum scale, or even an atomic scale, yet this "unitisation" of human-scale objects lies at the heart of what so many consider to be "natural" about the natural numbers. So I certainly agree that there is something about our visual perceptions which define units, and hence the natural numbers, and hence counting - we discretise the world and call it natural. This only seems to deepen the mystery of why a system of thought which develops from such crude beginnings should be able to uncover the depth of quantum mechanics and more besides. I would be keen on discussing these matters in a more formal way, so do please feel free to contact me at dr dot phil dot wilson at gmail dot com Submitted by Anonymous on July 11, 2011. I appreciate the appeal of your question. 
I was a philosopher before I began seriously studying mathematical physics, and even though I had already considered the basic questions, I found myself struck anew by the regularity with which nature has been found to conform to the most abstract imaginative leaps of rare minds. Nevertheless I believe that the question is more to be dissolved than solved. It is akin to the question of how language can "latch onto reality" (the answer to which lies in understanding our own Some of the questions you ask (e.g. "why should the universe be comprehensible at all?") may be illuminated by evolutionary considerations. An amoeba responding to being poked is already "comprehending" (a tiny piece of) the world in a sense, and higher animals display a finely-tuned set of useful, reality-conforming behaviors requiring discrimination and goal-pursuit. Their mental constructs "conform to reality" because (and to the extent that) they worked in the past (and others didn't). (And of course we know that past results are no guarantee…) At the same time, it helps to be aware that we have no idea what it would mean for the universe NOT to be AT ALL comprehensible. We can contrast understanding here with ignorance there, or complete understanding with partial understanding; but we have no concept of a world without any understanding at all. So this is why I say that your question is at bottom a pseudo-question (which may nevertheless be very educational and fruitful along the way to being dissolved). I don't think your 4 positions are exhaustive. I'd say you're missing a position of Realism, which says that mathematical structures inhere in nature, without requiring any "independent realm" to give them "reality". It is nature that gives them reality. 
For we cannot conceive of a nature without mathematical structure--without unity and multiplicity, spatial orientations, boundaries, sets… And by the way I'd like to add this open-ended list of mathematical categories to your account of the natural genesis of mathematics out of counting. Not just counting, but geometrical intuition and discrimination of natural sets seem to me to be equally grounded in an evolutionary account. You say that a theistic picture is tidier than the others, but are afraid of embracing religious baggage. I do think that a Spinozan pantheism is a viable solution, as long as you realize that it is really nothing more than a counterweight to the kind of materialism that denies any structure or intelligibility to the world independently of humans. To call the world "divine" in this sense is just to acknowledge that before intelligent beings, the world already had intelligible structure, and contained the potentiality for the causal generation of intelligence. Submitted by PhilWilson on July 14, 2011. Hi, Anonymous, and thanks for a great comment. I am currently attending a workshop at the American Institute of Mathematics and am very busy indeed, so I am afraid that your answer will not get the full and in-depth response which it merits. I think that you raise some excellent points. While I am very receptive to the evolutionary approach - and while I think that it can be extended to provide a definition of mathematical beauty in terms of maximum fecundity - it leaves me thinking "and yet, and yet . . .". It simply does not, to my mind, address the thorny issue of why completely abstracted mathematical ideas should turn out to be so central to our understanding of the physical universe. You are correct to point out that we have no notion of what an incomprehensible universe would be like.
This does not answer the question of why our universe is comprehensible and, as far as I can see, does not even "dissolve" the question or render it a "pseudo-question". (I like the notion of "dissolve" vs solve, by the way!) I don't understand what you mean by nature "giving reality" to mathematical notions. This seems to avoid the question. I am only afraid of embracing theological baggage to the extent of claiming that it answers beyond doubt the question before us. I am quite happy to load that baggage onto my life-train for other reasons, reasons which tend to subjectively exclude a Spinozan pantheism. I do agree with you that the potentiality for intelligence evidenced by our existence is an awe-inspiring aspect of the universe. Submitted by Anonymous on July 6, 2011. Reading your article a radical thought came to my mind. Being a mathematician or a physicist (like myself) we are used to treating mathematics as the ultimate tool in understanding nature. We develop mathematical formalisms that describe nature and then marvel at how well mathematics is suited to describe our subject of interest. But is mathematics really so universal as we make it out to be, or do we just see it in that light because we are so used to it? The foundations of mathematics evolved in our ancestors for the need to count things in order to survive. Mathematics was used for centuries by the Babylonians, Egyptians, Greeks and other highly developed cultures. But it took the genius of a Newton or Leibniz to develop differential calculus and to be able to properly describe an object's motion when a variable force acts upon it. This formalism uses infinities and infinitely small intervals and is quite a long way from the simple act of counting. So my answer is: The world can be understood mathematically, only because we develop the mathematical tools that allow us to understand it.
Constructions, like differential calculus, do not follow inevitably from the basic principles of mathematics, but only from our need to describe nature. But this reverses the dependency. The universe is not inherently mathematical, but mathematics is constructed to be universal. Submitted by Anonymous on February 9, 2012. A mathematics fan myself (and a self proclaimed philosopher :), though everyone is a philosopher in their own rights), I do believe in the universality of mathematics but feel that mathematics is just an aspect of reality and not an all pervading foundation. Mathematics can be seen as a language for describing certain kinds of correlations between objects, events etc. So whether mathematics is all pervading depends on whether i) Everything in this world is correlated ii) Can mathematics describe all correlations in this world. Irrespective of whether i) is true or not, I think it's not too difficult to construct examples to show ii) is not true. As many would already have thought of, feelings like love, etc. cannot be explained by mathematics. (Reminds me of this valentine's day quote by H.L. Mencken: Love is the triumph of imagination over intelligence). On a different note, some of you may enjoy this tangentially related blog post of mine: http://janakspen.blogspot.in/2011/05/infinite-soul-and-bit-of-discrete-m... I can say this to extend Holger's comment: Most of the things or phenomena around us can be measured (if not objectively, then perhaps subjectively), and measurement is very much mathematical. So one may feel that everything is mathematical. But in reality, measurement or correlation between entities is just one aspect of their existence. So it's wrong to say mathematics can describe or is the basis of everything. I did enjoy aspects of this article .. this has become one of my favorite sites on the internet :)
It is true that relations of the heart are not the same as relations of numbers. For people not believing this: no mathematical equation can predict what gift your valentine will want on this Valentine's Day :) Having said that, I think there is no end to the argument over whether emotions like love are matters of the heart or are produced by mathematically controlled events in the brain. And now on a lighter note, for all those who believe that mathematics can describe everything including love, two proofs. Firstly, some gems of quotes from this page of mathematical love quotes for Valentine's Day:

* I'm a fraction of what I should be without you... you complete me.
* You and I are living proof one plus one equals three, happy valentines day to a dad to be.

Now this is one great use of applied mathematics, right? Caution: use the above quotes only if your valentine is a geek :) Second proof: these beautiful love equations from http://www.walkingrandomly.com/?p=2326 Also do read this interesting article on valentine programming; apparently people are researching how mathematics can explain lost love and what not. Applied mathematics is a whole ocean out there.

Submitted by PhilWilson on July 8, 2011. Hi Holger. Yes, this seems to me to be a very good and valid point. We identify a subset of real-world experiences all of which are well-described by mathematics and then step back and marvel at the "universality" of mathematics. I'm reminded of this. Nevertheless, there remains the question of exactly why we have access to a "language" (to take George's point below) which we can (perhaps) abstract from (a subset of) the world, play with in our minds, and then accurately re-apply to (a subset of) the world. We take this for granted, but, as I am trying to establish, this fact is stunning and (perhaps) non-obvious.

Submitted by Anonymous on July 5, 2011. I take the view that mathematics is simply an extension of language, made more precise by use of abbreviated symbolism.
This doesn't seem to be the same as either "formalism" or "logicism", though it has aspects of both. On this basis, mathematics is a language developed to describe "structures", from simple things like sets up to complex scientific theories. This approach makes no presumptions about the nature of what is being described, i.e. the world, other than that it can be described. Mathematics is indeed inadequate to describe many aspects of the world. It is effective in areas where elements can be clearly defined. On this view, "pure" mathematics is an outgrowth of this practical process of developing methods of mathematical description, abstracting it to describe formalisms not yet having any application in the world other than to mathematical ideas themselves. George Jelliss

Submitted by PhilWilson on July 8, 2011. Hi George, and thanks for your comment. If mathematics is a language, why should it have the precision it appears to have? Why should the universe be describable and comprehensible at all? If mathematics is abstracted from the world, why should we be able to develop it in non-world realms ("mathematical ideas themselves", as you put it)? Why are we able to make abstractions, and in a precise way? I realise that in asking such questions I am in a way moving beyond your points and in some way beyond the point of the article. But I am trying to emphasise that in trying to "solve" this question, we employ assumptions that raise larger, wider questions. This shouldn't be taken as an excuse not to raise such questions! Indeed, I was trying to inculcate a sense that there is still great mystery in the world.

Submitted by Anonymous on July 13, 2011. It seems that different languages have different levels of precision. English, with its notorious ambiguity, jargon, idiom, &c, seems to be very imprecise.
German, with its very specific words for very specific things and a high level of succinctness, seems much more precise. Could it be that mathematics as a language is just the most precise language that we currently know of?

Submitted by PhilWilson on July 14, 2011. It is possible that math is "just" the most-precise currently-known language. But this doesn't answer any of the questions we have been raising, although it does possibly reframe the overall question to be something more like "how accurately are we able to describe patterns, and is math the optimal language for so doing?". I also note that precision is something, but not everything. In human languages, it is possible to convey things by ambiguity and choice of words that it is impossible to encode formally in precise language. Understanding what "feeling under a cloud" means takes a whole lifetime of experience. Languages are embedded in culture and personal experience, such that semantics require that context even if syntactics do not. Vaguer languages, such as Japanese, can convey much subtler meanings than English or German. There is possibly a way to maximise a weighted sum of the creativity which ambiguity allows and the accuracy which precision allows.

Submitted by Anonymous on July 2, 2011. I'd like to start this comment with a question: when you talk about a "Platonic world", do you mean a distinct, clearly defined world, completely different from the physical world, or is it a short-hand term used to describe the underlying order and internal coherence of physical phenomena? If the former, I don't think a discussion is possible, because: where do you draw the line? As you very well put it in your article, such a hypothesis would be, for all practical purposes, virtually indistinguishable from a religious one, because it entails assuming all order and coherence are derived from a world to which we cannot have access empirically, and is thus impossible to prove scientifically.
As Albert Szent-Györgyi once put it: "Thus Aristotle laid it down that a heavy object falls faster than a light one does. The important thing about this idea is not that he was wrong, but that it never occurred to Aristotle to check it." My point being that the empirical aspect of scientific inquiry is just as important as the logical/mathematical one, as Galileo himself so aptly proved. If the latter, then I don't see why the formal justification of mathematics would be wrong, provided you assume that the symbols (and their possible relations) are adjusted to coincide with physical phenomena and, to a great extent, are able to predict them. According to this definition, mathematics would ultimately be a scientific enterprise concerned with the coherence and consistency of the sets of symbols and relations we invent as we come across new phenomena; and, since these relations can be studied without empirical analysis, mathematics can go further than other, more heavily empirical sciences, which allows it to predict the behavior of phenomena we have yet to come across (assuming these phenomena follow the same underlying principles as those encountered before). I hope the above made sense, and that I did not misunderstand any of the concepts you used.

Submitted by PhilWilson on July 5, 2011. Hi Rodrigo. Your comment was very coherent, and absolutely right in identifying a need for clarity concerning what is meant by a "platonic realm". Perhaps I wasn't clear enough in the article. Anyway, of the two options you mention it is the former: a completely distinct, transcendent, non-physical realm. This definition raises the problem that you and I have mentioned: apparent difficulties with empirical investigation. But I would like to ask you what you mean by "prove scientifically". Are you working in a Popperian scheme?
The trouble with doing so is that it is itself more of an over-arching classification (science/not-science), or perhaps a working definition, rather than an actual established, provable fact, or even something open to empirical investigation. In fact, to the extent that it is open to empirical study, it is plainly falsified in the working lives of almost every scientist. Alternatives, such as Quine's holism, are perhaps closer to the everyday experience of most scientists. Either way, both schemes may allow for an empirical test of the platonic hypothesis, if I may call it that. Because after all, although the platonic realm is distinct and independent of the physical realm, the one influences the other (I'm talking from within the hypothesis, you understand). So it is possible that someone may be able to come up with a (falsifiable?) statement about the existence of the platonic realm which can be investigated with the empirical tools of the physical realm. As for the second option you mention, that the platonic realm is a kind of shorthand for a quasi-empirical science of mathematics (a science of pattern, essentially), this is not what I meant. It is a perspective which has its advocates, but which few mathematicians or mathematical scientists find appealing (this doesn't make it wrong, of course). There are serious implications of such a viewpoint for the entire scientific program.

Submitted by Anonymous on July 8, 2011. It was precisely with Popper's scheme in mind (and, to a lesser extent, Hume's simpler "How do you know?") that I wrote my previous comment, although, as you said, very few scientists work with said scheme in mind, given its obvious impracticality and lack of precisely defined methods. I didn't even know Quine's holism existed; thank you for providing me with the link.
As for the rather liberal use of the words 'empiric' and 'scientific' in my previous post, it was because I wanted to stress the importance of the 'applied' part of this article's title. When it comes to applicability, the old engineering adage says it all: "In engineering, you're not done when there's nothing more to add, but when there's nothing more to take away". When developing a philosophy aimed at giving some sort of justification for the applicability of abstract mathematical principles to concrete, physical reality, shouldn't the prospective scientist/philosopher/logician guide his reasoning by that same adage? I can't think of a more gross violation of the most basic principles that guide applied mathematics than literally inventing a whole world to justify itself. I apologize if this all sounds rather vague, but I lack the preparation or background to give a more rigorous and detailed defense of my case, assuming I have one at all. P.S. I'm currently trying to study this subject on my own using Mendelson's "Introduction to mathematical theory", Pierce's "Introduction to information theory" and Devlin's "Information and logic". Are there any other good books I could use to study this, preferably mathematical in format?

Submitted by PhilWilson on July 14, 2011. Hi Rodrigo, First, my apologies for my slow reply. I am currently attending a workshop at the American Institute of Mathematics in California and have been working very hard! I like the idea of stopping when there is nothing more to take away. It reminds me of the adage about how Michelangelo went about his sculpting. I note also, however, that this is logically irreducible to an algorithm: if you are only "subtracting", then how do you define the initial condition? Let me ask you, however, why it is that the philosophy of a branch of science should follow the same rules as the science itself?
Don't worry about what you feel may be vagueness: no-one knows the answers to these deep questions we are discussing, so we all of us can have as much fun as we like in discussing them! Your preparation for doing so may in fact be much better than my own, which has largely grown organically through reading popular science and being fortunate enough to have been able to bend the ear of talented and indulgent philosophers. Have fun!

Submitted by Anonymous on June 30, 2011. Very interesting post. My 2 cents: why can maths be applied? Whatever one's approach to understanding our universe is, we have to agree that, as it exists, there needs to be some notion of "inner coherence" (call it physical laws or whatever). On the other hand there is maths, which in my understanding is no more (and no less) than a grammar whose associated language is all that is coherent (I should extend here but will not). To me, physics studies this particular "coherent reality" and maths studies any possible "coherent reality". N.B. a topological variety is a potential reality to me as long as it is coherent and can be described "grammatically", so to speak. Pedro Pablo

Submitted by PhilWilson on July 1, 2011. Hi, Pedro, and thanks for your comments. Your suggestion that maths studies structure, or pattern, or "coherent realities" of all forms, and that physics is one of these, and that the universe in which we live is another, is certainly popular among mathematicians and mathematical scientists. But it does seem to displace the central question in favour of even harder questions! These include the following. Why should physics be coherent - why not be too difficult for us to understand, or not coherent at all? Must the universe be coherent for life of any kind to arise? Is there a hidden anthropic principle in this line of argument, requiring a vast array of unknowable parallel universes?
You might be able to sense that while I find your "answer" appealing, I also think that it raises harder and more troubling questions!

Submitted by Anonymous on June 28, 2011. Would you really need three realms (mental, physical, platonic) to answer the question? I am no mathematician, but wouldn't it be a plausible model to assume that the mind (the mental realm) reflects those properties of the physical realm that were important to our survival at some evolutionary time point? This would obviously include rules like cause and effect and the natural numbers. If these inbuilt rules were correct, they could by inference (likewise a mental rule reflecting properties of the physical world) be used to extrapolate more and more of the rule framework of the physical realm. This would also explain why applied mathematics (in the chosen definition) can describe counter-intuitive physics, i.e. it works beyond the limits where the mind correctly reflects the physical world. This model obviously is still platonic, but I do not see that it needs three realms.

Submitted by PhilWilson on June 30, 2011. Daniel, you make some good points which strike at the heart of part of what this article is about. But let me try to unpick a little of what you say to see if I understand you correctly. You are worried about needing three realms. The two realms you seem to affirm are the mental and the physical. Even this is a leap, actually, since a pure materialist would say that the mental realm can be reduced to the physical, that our language for describing the mental realm is a kind of shorthand for what is "really" happening on a physical level. I disagree with this perspective, and you can find coherent book-length arguments against it in pop science books by Robert Laughlin (A Different Universe) and Stuart Kauffman (Reinventing the Sacred), and elsewhere. So you and I agree that there are at least two realms, the mental and the physical.
While neither is reducible to the other, they are not independent either. So the real question which I think you are asking is, "Do we need to posit the existence of a platonic realm?" Well, to some extent, my article is all about this very question! But while I think that there is a lot of currency in your evolutionary argument, it doesn't seem to me to satisfactorily address all of the issues. As I said in the article, the evolutionary pressure would have been on (a) survival, and (b) mating. While understanding cause and effect, and being able to count distinct objects, and so on, would presumably have helped, it doesn't seem obvious to me that the leap you make to an evolutionary explanation for advanced mathematical ability is a necessary one, or one well-founded in the hypothesis of mathematics being somehow an encoding of the physical world in the mental. However, I think that John Barrow is a fan of this idea - I remember him discussing the work of Rosen in this context in his book "Impossibility". I'm afraid I can't get the reference for you now because I am in quake-shattered Christchurch, and I cannot get in to my office building! As far as I can tell, an evolutionary approach to the question of the existence of applied mathematics does not address its ability to work in counter-intuitive realms, its apparent timelessness, its felt sense of independence, its coherence, its ability to explain the physical universe, or even our ability to find the universe comprehensible at all!

Submitted by Anonymous on July 1, 2011. Thank you for your answer, Phil. If you don't mind, I would like to defend my evolutionary explanation a bit (neglecting the other parts of your answer for the moment). In a way, there is a lot of applied mathematics in biology - a spectacular example would be echolocation in bats or dolphins, but there are many others.
Admittedly, those are mostly specialized systems and not all-purpose processors like the human mind, but I still think they illustrate that advanced mathematical abilities can be acquired by evolution. The human ability to apply mathematical rules and - probably more important for real mathematics - to combine them freely might be a chance product of their coexistence in the same mental apparatus. The chance element is of course a weak point here, but then again, chance is an essential part of evolution, and there are quite a few examples where unsuspected inventions have been co-opted by evolution for some unforeseen purpose (e.g. feathers for flight). I would also like to outline a possible explanation for the ability of mathematics to work in counter-intuitive realms which is in line with an evolutionary approach: the rules used to interpret the physical world (or to manipulate it) would be more useful the more general (and accurate) they were. I would further posit that intuition is based on what we can directly experience through our senses or actions, but those experiences are constrained by the implementation of our physical body. However, mental rules that are able to interpret data accurately in the working range of our body do not necessarily fail where the body fails, in the same way that the validity of a temperature scale does not stop where a particular thermometer reaches the end of its dynamic range.

Submitted by PhilWilson on July 5, 2011. Hi Daniel. The great thing about a conversation like this one is that since no-one knows the answers we can have a lot of fun defending different viewpoints! I must admit, however, that I don't really have a strong opinion on this matter one way or the other, so it is wonderful for me to find myself attracted to what you are saying - and then to try to play devil's advocate with your ideas. You say that there is a lot of applied math in biology. With my devil's advocate (tail?)
on, I would say that really you are only allowed to say that that which we call (applied) math can be found in biology. I mean to say, that if you are saying that echolocation and the like actually are mathematics, then you already have a platonic view, to some extent. You're saying that there is an embodiment of an abstract principle, and that the universe (or the bit of it identified with the bat) is somehow solving equations to work out the source of an echo. This may be true, or it may be true that our mathematics is (as I think you want to say) a language, honed by evolution, for describing patterns, some of which can be found in nature. So I think that there may be a contradiction in your advocacy of "finding math in nature" and "math being an evolutionary product". If it is not a contradiction, then it may instead be a circular argument, in that you say (a) math evolved, because (b) math is in nature, because (a) math evolved. The problem with this is the same as when I pointed out that to say that the bat is "doing" math (unconsciously, presumably) is to assume that that which we call math is identical to how the universe works. And this is unknown. You also point to what are known as Darwinian preadaptations: incidental features of an organism which were not directly selected, but which later turned out to be advantageous in another, unforeseen way. Your example of feathers, which are hypothesised to have evolved for warmth and possibly plumage display, but which later enabled controlled flight, is a good one. You then seem to want to argue that math was selected for its utility in the (energy? length? speed? mass?) range of our bodies, and then had a Darwinian preadaptation utility in more advanced forms of math. First, I'm not convinced that being able to do math consciously has a utility of any form. 
We may certainly need it subconsciously, like the bat, or like a recent study of a certain Brazilian tribe which showed that they have excellent spherical geometry intuition but not even natural numbers, arithmetic, or a sense of time or object persistence. Secondly, even if the first point were true, we still come back to the question of why we should be able to extend math beyond the range of our bodies and still find it has utility. Just because Darwinian preadaptations exist doesn't help to explain our Big Question - in fact, it tells us almost nothing about it. Why should math extend this way? Why should the universe be comprehensible at all? As a footnote of sorts, Stuart Kauffman has written a lot about the mathematical structure of evolution, and John Barrow has been one of the brave few to raise the question of the mystery of the universe's comprehensibility - and potential limits on our own understanding of it.

Submitted by Anonymous on June 27, 2011. Cute quotation from Knuth - however, Knuth was wrong. Brouwer would have no problem with the statement 'either it is raining outside or it is not raining outside' because he knew you could go and look. He did not insist on your actually looking. He had a problem with things like König's lemma - not quite the same. I have a problem with things like König's lemma too, but in the great scheme of things it is not very important.

Submitted by PhilWilson on June 30, 2011. Well, of course Knuth's comment was made in jest, and if it does slightly miss the mark in terms of its accuracy, it is bang on target in terms of capturing the discomfort many mathematicians feel about Brouwer's perspective.
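As an aside added here (not from the original thread), the flavour of non-constructive reasoning that Brouwer objected to is captured by a classical textbook argument, which proves an existence claim without identifying a witness:

```latex
% Classical, non-constructive proof that there exist irrational numbers
% a, b with a^b rational.
%
% Case 1: if \sqrt{2}^{\sqrt{2}} is rational, take a = b = \sqrt{2}.
% Case 2: otherwise \sqrt{2}^{\sqrt{2}} is irrational; take
%         a = \sqrt{2}^{\sqrt{2}}, b = \sqrt{2}, and then
\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}}
  = \sqrt{2}^{\,\sqrt{2}\cdot\sqrt{2}}
  = \left(\sqrt{2}\right)^{2} = 2 .
% The proof invokes the law of the excluded middle without telling us
% which case actually holds -- exactly the step an intuitionist rejects.
```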
Heights in Diophantine geometry. Paperback reprint of the 2006 original. (English) Zbl 1130.11034. New Mathematical Monographs 4. Cambridge: Cambridge University Press (ISBN 978-0-521-71229-3/pbk). xvi, 652 p. £35.00; $70.00 (2007). The first edition of this outstanding, very comprehensive and nearly sweeping monograph on height functions in modern Diophantine geometry appeared just a little more than one year ago. As for its precise rich contents, its masterly design, and its emphatic appraisal, we may therefore refer to our very recent review of this text (Zbl 1115.11034). The appreciation for this treasure of mathematical writing is certainly best documented by the very fact that, in the meantime, the distinguished "Doob Prize" (2008) for the best recent mathematical monograph has been awarded to the two authors in regard to their book "Heights in Diophantine Geometry". In the 2008 Doob Prize citation it is particularly stressed that this treatise masterly combines the various aspects of Diophantine geometry, both from the perspective of arithmetic geometry and of transcendental number theory, and that the choice of subjects is extremely broad. Also, it is emphasized that the text is essentially self-contained, yet surprisingly accessible given the great depth of the material. Finally, it is assessed that the book is a masterpiece regarding its original approach, its incomparable comprehensiveness, its elegance of exposition, its appealing style of writing, and its unrivalled profundity and accuracy. The present second edition is a paperback reprint of the original (Zbl 1115.11034). However, the authors have taken the opportunity to correct the few minor typing errors in the 2006 original edition, thereby even increasing the already high degree of perfection of their prize-winning book. It just remains to repeat the meanwhile widely common opinion about this outstanding monograph: "Heights in Diophantine Geometry" by E.
Bombieri and W. Gubler is a fundamental and pioneering standard text in the field, which will undoubtedly serve as a basic source for the future development of number theory and arithmetic geometry as a whole.

11G50 Heights
11-02 Research monographs (number theory)
14G40 Arithmetic varieties and schemes; Arakelov theory; heights
11G30 Curves of arbitrary genus or genus $\ne 1$ over global fields
11G10 Abelian varieties of dimension $>1$
14K15 Arithmetic ground fields (abelian varieties)
Quasi-compact maps in Number Theory

Can someone give me an example of a non-quasi-compact morphism of schemes which arises naturally in the field of Algebraic Number Theory?

nt.number-theory ag.algebraic-geometry ac.commutative-algebra

Just a start: suppose you have such a morphism. Its domain has to be non-noetherian, since every open subset of a noetherian space is quasi-compact. So the natural question to ask is: what are examples of non-noetherian schemes that arise naturally in algebraic number theory? I cannot think of any offhand, since usually in algebraic number theory the focus is on Dedekind domains, which are noetherian. – Jamie Weigandt Apr 16 '11 at 18:01

Yes. I agree. That is the question to ask! – Andrew Stout Apr 16 '11 at 18:14

@REX: could you explain why you're looking for such an example? For what it's worth, one non-noetherian scheme that algebraic number theorists might think about is $Spec(\overline{\mathbb Q})\times_{Spec(\mathbb Q)}Spec(\overline{\mathbb Q})$, the absolute Galois group of $\mathbb Q$. The inclusion of the complement of the identity (diagonal) $\overline{\mathbb Q}$-point is a non-quasi-compact open immersion. However, I'd guess this doesn't arise "naturally" in algebraic number theory. – Anton Geraschenko Apr 16 '11 at 18:28

Thanks for your response. I am looking for a "nice" example of a morphism of schemes $f: X\rightarrow Y$ such that the scheme-theoretic image $Z$ is not equal to $\overline{f(X)}$, which also comes along with a nice picture. All the examples I have seen so far seem "contrived." Of course, I may have to adjoin a nilpotent to some of these examples, but that's not a big deal. – Andrew Stout Apr 16 '11 at 18:59

How about the adele ring?
– Kevin Ventullo Apr 16 '11 at 21:00

3 Answers

A typical non-Noetherian ring that would arise in algebraic number theory would be the ring $\mathbb Z_p \otimes_{\mathbb Z_{(p)}} \mathbb Z_p$, where I am writing $\mathbb Z_{(p)}$ to denote the localization of $\mathbb Z$ at the prime ideal $(p)$, and $\mathbb Z_p$ to denote its completion (the usual ring of $p$-adic integers). Such tensor products come up in considerations related to faithfully flat descent, and can arise for example in justifying (in certain situations) the passage from working over the localization $\mathbb Z_{(p)}$ to its completion $\mathbb Z_p$. (The kind of thing I have in mind is studying finite flat group schemes over $\mathbb Z$, say, by working over $\mathbb Z[1/p]$ and $\mathbb Z_p$ separately. It is not hard to justify working over $\mathbb Z[1/p]$ and $\mathbb Z_{(p)}$ separately, but to justify the replacement of $\mathbb Z_{(p)}$ by $\mathbb Z_p$, one needs to make (or at least, might naturally find oneself making) a descent argument, in which the tensor product written above could play a role.) Another (perhaps simpler) example of a non-Noetherian ring that naturally appears in algebraic number theory is the ring of all algebraic integers.

Just want to point out that M. Artin has a wonderful theorem ("formal glueing of module categories") that says: given a ring $R$ and an element $f$, specifying an $R$-module $M$ is equivalent to specifying an $R_f$-module $M_f$, an $\hat{R}$-module $\hat{M}$ (where $\hat{R}$ is the $f$-adic completion of $R$), and an isomorphism between $M_f$ and $\hat{M}$ over $\hat{R}_f$; this procedure preserves tensor products, and so passes to algebras, quasi-projective (group) schemes, etc. In particular, one can quite often avoid contemplating big rings like $\hat{R} \otimes_R \hat{R}$ for making descent arguments. – Bhargav Apr 16 '11 at 18:56

Dear Bhargav, Thanks for this comment.
It was because of these sorts of results that I put in my weasely parenthetical remarks "or at least ..."! Best wishes, Matt – Emerton Apr 16 '11 at 19:10

About this "formal glueing" theorem: it was also proved by Ferrand and Raynaud (appendix to "Fibres formelles...", Ann. Sci. ENS 1970). Artin's version is in "Algebraization of formal moduli II" (also 1970) and, if I remember correctly, Artin attributes it to Grothendieck. In any case the result is not very hard to prove, and would deserve to be better known. – Laurent Moret-Bailly Apr 17 '11 at 7:25

Dear Laurent, Thanks for adding these details, and for the additional references. The reference I knew for this was a paper of Beauville (perhaps with a coauthor?), and I'm glad to learn of some other (earlier) sources. Best wishes, Matthew – Emerton Apr 17 '11 at 21:10

The paper of Ferrand and Raynaud is on Numdam: numdam.org/item?id=ASENS_1970_4_3_3_295_0 – ACL Apr 18 '11 at 12:30

Rex, about your scheme-theoretic image problem, I don't know how "contrived" the following example is (I am afraid it is not particularly related to number theory). Notations: $R$ is a discrete valuation ring, $t$ a uniformizer, $R_n:=R/(t^{n+1})$ ($n\in\mathbb{N}$), $X_n=\mathrm{Spec}\,R_n$, $A=\prod_n R_n$. Take $X:=\coprod_n X_n$ and $Y:=\mathrm{Spec}\,A$. There is a natural open immersion $f:X\to Y$ since each $X_n$ embeds in $Y$ as an open and closed subscheme. The scheme-theoretic image of $f$ is $Y$: since $Y$ is affine, it just means that each $x\in A$ vanishing on each $X_n$ is zero, which is obvious. (In fact, $A=\Gamma(X,\mathcal{O}_X)$.) But $X$ is not topologically dense in $Y$: indeed, consider $x=(t,t,\dots)\in A$. Then $x$ is locally nilpotent on $X$ but not nilpotent on $Y$, hence the open set $D(x)\subset Y$ is nonempty and disjoint from $X$.

I really like this example. Again, $A$ is not Noetherian, which prevents us from applying Krull's Intersection Theorem.
You have exhibited elements of $A$ contained in $inf(A) = \cap_{n=1}^{\infty} A/\mathfrak{m}^n$. It seems that these are exactly the elements in the topological closure of $Y$. It is nice to see the interaction between the $\mathfrak{m}$-adic topology of $A$ and the Zariski topology on $X$. I might add, now that you have given this great example, that if I choose an ultrafilter on $\mathbb{N}$, then the ultraproduct $B = A/\sim$ would still be non-Noetherian, and the same problem would arise. – Andrew Stout Apr 18 '11 at 13:53

@Laurent Actually, something needs to be added to your setup: we need $R$ to not be artinian, or else $x$ will be nilpotent on $Y$. – Andrew Stout Apr 18 '11 at 19:58

$R$ is a DVR, hence not artinian. – Laurent Moret-Bailly Apr 19 '11 at 6:36

Dear Laurent, I think in your second last line you mean "But $X$ is not topologically dense in $Y$". Best wishes, Matthew – Emerton Apr 19 '11 at 6:48

OK, but be careful with non-noetherian rings: it may happen that $\mathfrak{m}^2=\mathfrak{m}\neq0$. – Laurent Moret-Bailly Apr 19 '11 at 18:33

You might see non-quasi-compact maps in the context of universal covers of maximally degenerate pointed curves, and depending on who you ask, this might be called algebraic number theory. Specifically, a maximally degenerate positive genus (proper) curve $X$ has the form of a connected graph made out of finitely many projective lines intersecting transversely, where each line has exactly 3 special points (namely intersections and markings). If $x$ is a marked point, then $\pi_1^{geom}(X,x)$ is a finitely generated free group. One then has a universal cover $(\tilde{X},\tilde{x}) \to (X,x)$, where $\tilde{X}$ is a tree of projective lines. The covering map is étale but not quasi-compact.
Some brief web searching suggests that there seem to be some more modern treatments using rigid analytic techniques (that I don't really understand). I'm afraid this doesn't answer the more focused question about scheme-theoretic image that you posed in the comments.

I appreciate the answer anyway. I will have to look into this more. – Andrew Stout Apr 18 '11 at 13:58
st: Reindex/rebase time series? From "Michael S. Hanson" <mshanson@mac.com> To statalist@hsphsun2.harvard.edu Subject st: Reindex/rebase time series? Date Thu, 14 Apr 2005 17:21:29 -0400 series variables in a Stata dataset. The observations are quarterly, and all need to be rebased (for example) to the annual average value for (say) the year 2000. The following code will find that average value:
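The Stata code that followed was cut off in this copy of the post. To illustrate the rebasing idea itself, here is a small Python sketch (the quarter labels and values below are invented for the example, not from the original message): compute the year-2000 average and express every observation relative to it, so the base-year average equals 100.

```python
# Rebase a quarterly series so that the year-2000 average equals 100.
# Illustrative sketch only; data and names are hypothetical.

quarters = ["1999q4", "2000q1", "2000q2", "2000q3", "2000q4", "2001q1"]
values   = [ 95.0,     98.0,     100.0,    102.0,    104.0,    106.0 ]

# Average over the four quarters of the base year 2000.
base = sum(v for q, v in zip(quarters, values) if q.startswith("2000")) / 4

# Each observation as an index relative to the base-year average.
rebased = [100.0 * v / base for v in values]
```

The same two steps (compute the base-year mean, then divide through) are what the Stata approach in the thread is doing with its generated average variable.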
integration using tables January 31st 2008, 03:28 PM #1 Sep 2007 integration using tables I need to find the integral of: $\sin^{-1}(x^{1/2})$. What substitution should I use, and what identity do I need to use to do this? There are always various ways to tackle these, but you could try parts. Let $u=\sin^{-1}(\sqrt{x}), \;\ dv=dx, \;\ du=\frac{1}{2\sqrt{x(1-x)}}dx, \;\ v=x$. This leads to: $x\sin^{-1}(\sqrt{x}) - \int \frac{x}{2\sqrt{x(1-x)}}\,dx$. Now, continue? January 31st 2008, 04:59 PM #2
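Carrying the integration by parts through to a closed form (a worked continuation added here for illustration; the substitution $x=\sin^2\theta$ is one standard choice, not taken from the original thread):

```latex
\int \sin^{-1}(\sqrt{x})\,dx
  = x\sin^{-1}(\sqrt{x}) - \int \frac{x}{2\sqrt{x(1-x)}}\,dx .
% Substitute x = \sin^2\theta, dx = 2\sin\theta\cos\theta\,d\theta:
\int \frac{x}{2\sqrt{x(1-x)}}\,dx
  = \int \sin^2\theta\,d\theta
  = \frac{\theta}{2} - \frac{\sin\theta\cos\theta}{2} + C .
% With \theta = \sin^{-1}(\sqrt{x}) and \sin\theta\cos\theta = \sqrt{x-x^2}:
\int \sin^{-1}(\sqrt{x})\,dx
  = \left(x - \tfrac{1}{2}\right)\sin^{-1}(\sqrt{x}) + \tfrac{1}{2}\sqrt{x - x^2} + C .
```

Differentiating the last line gives back $\sin^{-1}(\sqrt{x})$, since the two algebraic terms produced by the product rule cancel exactly.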
exponential distribution May 2nd 2008, 12:00 PM exponential distribution Hi, can anybody help me with the exponential distribution? The speed of wind can be modelled with an exponential distribution with a mean of 40 miles per hour. At a random location, what is the probability that the wind speed is less than 25 miles per hour? What is the probability that at a random location the wind speed will exceed 100 miles an hour? Now I know that this can be modelled as $\lambda e^{-\lambda x}$, and I also know that in that equation $\lambda$ can be replaced by 1/40 (I think!). But I'm not sure how to work out the question. Can anybody give me a hand?

May 2nd 2008, 12:10 PM There is just a $\lambda$ missing: $f_X(x)=\lambda\exp(-\lambda x)$ with $x\geq 0$.

"and i also know that in that equation that lamda can be replaced by 1/40 But i'm not sure how to work out the question. Can anybody give me a hand?"

Denote by $V$ the random variable associated to the wind speed. The probability that the wind speed is less than $V_0$ is given by $P(V\leq V_0)=\int_0^{V_0}\lambda\exp(-\lambda x)\,\mathrm{d}x$. As the wind speed is either greater than $V_0$ or less than $V_0$, $P(V\geq V_0)=1-P(V\leq V_0)$. Hope that helps.
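The two probabilities asked for follow directly from evaluating that integral, which gives the exponential CDF $P(V \leq v_0) = 1 - e^{-\lambda v_0}$. A short numeric check (added for illustration, not part of the original thread):

```python
import math

mean = 40.0          # mean wind speed in mph
lam = 1.0 / mean     # rate parameter: for an exponential, lambda = 1/mean

# P(V <= 25) = 1 - exp(-lambda * 25)   (CDF)
p_below_25 = 1.0 - math.exp(-lam * 25.0)

# P(V >= 100) = exp(-lambda * 100)     (survival function, 1 - CDF)
p_above_100 = math.exp(-lam * 100.0)
```

This gives roughly a 46.5% chance of wind below 25 mph and about an 8.2% chance of wind above 100 mph.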
Numerical Integration - Simpson's Rule Error March 31st 2010, 08:25 PM #1 Aug 2009 Numerical Integration - Simpson's Rule Error For numerical integration, Simpson's Rule is: $\int_{x_0}^{x_2} f(x) dx = \frac{h}{3} (y_0 + 4y_1 + y_2) - \frac{h^5}{90} f^{iv} (c)$ where $h = x_2 - x_1 = x_1 - x_0$ and $c$ is between $x_0$ and $x_2$. There is, however, a truncation error which arises when using Simpson's rule to approximate an integral. An expression for this error can be derived using Taylor series, but I'm having trouble deriving it. Can anyone please help? Simpson's Rule and series sound like calculus topics. You may have better luck getting help on this if you post this in the calculus section. April 1st 2010, 05:35 PM #2
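A quick way to see the error term at work (an illustrative sketch, not from the original thread): take $f(x) = x^4$, whose fourth derivative is the constant 24, so the $\frac{h^5}{90} f^{iv}(c)$ term is exact rather than just a bound.

```python
# Simpson's rule on f(x) = x^4 over [0, 2] with x0=0, x1=1, x2=2 (h = 1).
# Since f''''(x) = 24 everywhere, the error term is exact here.
h = 1.0
y0, y1, y2 = 0.0**4, 1.0**4, 2.0**4

simpson = h / 3.0 * (y0 + 4.0 * y1 + y2)   # = 20/3 ~ 6.6667
exact = 2.0**5 / 5.0                       # integral of x^4 on [0,2] = 32/5 = 6.4
error_term = h**5 / 90.0 * 24.0            # h^5/90 * f''''(c) = 4/15 ~ 0.2667

# simpson - error_term recovers the exact integral for this f
```

For general $f$ the fourth derivative varies, so the formula only guarantees some $c$ in $(x_0, x_2)$ making the identity hold; the constant-$f^{iv}$ case above is just the cleanest demonstration.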
Homework Help Post a New Question | Current Questions How to get quick responses to your math questions Math is a wide subject, ranging from K to 11, college and university. Then there is algebra, trigonometry, geometry, arithmetic, calculus, number theory, ... etc. Not all teachers answer all math questions (many do). If you would give a little more detail on which branch of ... Saturday, May 25, 2013 at 9:45am Rusk High School A desert scrub ecosystem is found in all of the following South Asian countries Friday, May 24, 2013 at 9:57am Grammar (Writeacher) or (Ms. Sue) On your paper, write the comparative and superlative degrees of the following modifiers. If the degrees can be formed in two ways, write the -er and -est forms. 1. high 2. friendly 3. fully 4. low 5. steep 6. painful 7. early 8. small 9. brisk 10. near My answers are the ... Thursday, May 23, 2013 at 10:58pm Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Thursday, May 23, 2013 at 6:51pm Math needs help Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Thursday, May 23, 2013 at 6:30pm Ms.Sue, I'm about to graduate high school! Can you believe it?
I started using Jiskha in 7th grade :) Thursday, May 23, 2013 at 5:10pm the difference of n and 12 is greater than 13 Thursday, May 23, 2013 at 3:05pm Criminal justice What are the primary modifications of the classical school made by neoclassical criminology? Define moral panic according to Stanley Cohen. Compare the concept to the disaster analogy, especially the politics of media construction of particular moral panics. In your opinion ... Thursday, May 23, 2013 at 12:48pm Criminal justice What are the primary modifications of the classical school made by neoclassical criminology? Thursday, May 23, 2013 at 12:44pm life sciences ways in wich human/environmental problems impacts on the community/school Thursday, May 23, 2013 at 8:46am 6th grade science The erosive energy of rivers alters landscapes over periods of millions of years. Which of the following increases a river's rate of erosion? A)low gradient B)high discharge C)small load D)decreased Wednesday, May 22, 2013 at 11:24pm World Geography Hey Kaitlyn, I'm also doing CA. I got a 60 on this quiz cause I couldn't find anything! It's so frustrating right? Anyway, I'm not sure if your still needing help, but we could get together and help each other out if you'd like. I know the years almost over... Wednesday, May 22, 2013 at 10:49pm a. work=force*distance=53cos18*122 b. at constant speed, up the incline, he goes h distance upward, or 122sin6.7 high work done=mgh=10*g*122sin6.7 Wednesday, May 22, 2013 at 8:36pm 5 ways in which the human or environmental problem impact on the community.life orientation activity in RAMA SEC SCHOOL BOLOBEDU(Mahekgwe) i dnt hv answr i jst ned aid,4rm those who khows dat.plz help us,so dat we cn build a god country by being eductd by mokwena lebogang for mre inf conct 0762654325 Wednesday, May 22, 2013 at 3:52pm water flow over a falls 12.5m high. 
If the potential energy of the water is all converted to thermal energy calculate the temperature difference between the water at the top and buttom of the falls. Wednesday, May 22, 2013 at 3:41pm Please answer this question as soon as posibble #1 - As PsyDAG has written, if you really want the help of an expert in this subject, you'll follow directions and put the SUBJECT in the School Subject box. #2 - There's no way to tell if you are paraphrasing well or summarizing or plagiarizing unless we know what the... Wednesday, May 22, 2013 at 2:13pm English needs help Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Wednesday, May 22, 2013 at 2:04pm Please answer this question as soon as posibble I did the paraphrases for the story is it the correct way. Lifeguard Assignment Chris doesn t like children, but he works as lifeguard at the Brigeland Community swimming. He does not like children a lot, but he is working as a lifeguard at the Brigeland Community ... Wednesday, May 22, 2013 at 1:34pm English Please answer ASAP Lifeguard Assignment Chris doesn t like children, but he works as lifeguard at the Brigeland Community swimming. He does not like children a lot, but he is working as a lifeguard at the Brigeland Community swimming pool. He spends most of his time at the swimming pool. It... Wednesday, May 22, 2013 at 11:30am We're learning disks, shells, and cylinders in school but we have a substitute and I've been trying to teach this to myself. Can you check them please? =) Thank you! 1) Find the volume of the solid formed when the region bounded by curves y=x^3 + 1, x= 1, and y=0 is ... Tuesday, May 21, 2013 at 9:30pm lincoln highschool Be sure you read carefully. For instance you didn't read carefully to find that the school subject, not your school should be on the above line. 
Tuesday, May 21, 2013 at 8:18pm the angle of depresssion from the top of a building to a point on the ground is 59 degrees. how far is the point on the ground from the top of the building if the building is 87 meters high? Tuesday, May 21, 2013 at 7:25pm The question I have is to solve the problem. A model rocket is launched from the ground with an initial speed of 50 feet per second. The equation that models its height, h feet, off the ground t seconds after it was fired is h=-16ft^2+50t a)How high is the rocket 1 seconds ... Tuesday, May 21, 2013 at 5:12pm We're learning disks, shells, and cylinders in school but we have a substitute and I've been trying to teach this to myself. Can you check them please? =) Thank you! 1) Find the volume of the solid formed when the region bounded by curves y=x^3 + 1, x= 1, and y=0 is ... Tuesday, May 21, 2013 at 4:52pm Math 104 Students, please view the "Submit a Clickable Rubric Assignment" in the Student Center. Instructors, training on how to grade is within the Instructor Center. Assignment 2: Financial Project Due Week 7 and worth 55 points Five (5) years ago, you bought a house for $... Tuesday, May 21, 2013 at 11:17am Math 104 Students, please view the "Submit a Clickable Rubric Assignment" in the Student Center. Instructors, training on how to grade is within the Instructor Center. Assignment 2: Financial Project Due Week 7 and worth 55 points Five (5) years ago, you bought a house for $... Tuesday, May 21, 2013 at 11:13am Math- 5th grade Shauna baked bread rolls to sell at a school fair. 2/5 of the rolls were chocolate rolls, 1/3 were raisin rolls, 1/4 of the remainder were strawberry rolls, and the rest were vanilla rolls. What fraction of the rolls were vanilla rolls? Tuesday, May 21, 2013 at 1:03am 7th grade math I taught in a rural school in southwestern Michigan. Mostly I taught middle school English and social studies with an occasional 7th grade math class to keep me on my toes. 
:-) Monday, May 20, 2013 at 8:14pm psych hw help can you check this for me : For those who graduate from high school, get a full-time job, and marry before they have their first child, the probability that they will be poor is 2%. But, if those things are absent, 76% will be poor. a) What is the causal relationship ... Monday, May 20, 2013 at 7:04pm psychology HW CHECK PLEASE can you check this for me : For those who graduate from high school, get a full-time job, and marry before they have their first child, the probability that they will be poor is 2%. But, if those things are absent, 76% will be poor. a) What is the causal relationship ... Monday, May 20, 2013 at 5:34pm Dear Jiskha Yes!! Yes I did. Plus I am in junior high. What are you trying to say XD lol. But seriously. I have learned. and I am in junior high. Monday, May 20, 2013 at 5:18pm Dear Jiskha It's a very poor idea to keep switching names. It's very junior-high-ish. And I see you have already been testing your limits in the math post above. Have you learned anything today?? Monday, May 20, 2013 at 4:40pm Drug addiction occurs when: A. an individual will lie about taking a drug. B. a drug no longer causes a person to get high, but they take it again. C. biological or psychological dependence on taking the drug develops. D. a person can be without the drug no longer than five days. Monday, May 20, 2013 at 4:19pm 1. nathan is climbing 25ft ladder oeaning against a tree. the foot of the ladder is 15ft from the base of the tree, what is the measure of the angle the ladder makes with ground. 2.erin is flying an plan at 3000 ft high, she sees her house at 32 degrees angle of depression ... Monday, May 20, 2013 at 3:06pm #1 -- What school subject is called "ashford unversity"? I never heard of such a course. #2 -- What does "SEP" stand for? #3 -- What "following"? Monday, May 20, 2013 at 7:23am Can somebody please help me write a summary news lead using the story below. 
The lead may be no longer than 20 words and must be written in the active voice A home at 2481 Santana Avenue was burglarized between the hours of 1 p.m. and 4 p.m. yesterday afternoon. The owner of ... Monday, May 20, 2013 at 6:02am PSY210 Psychological Statistics Simply reporting measures of central tendency or measures of variability will not tell the whole story. Using the following information, what else does a psychologist need to know or think about when interpreting this information? A school psychologist decided to separate some... Monday, May 20, 2013 at 1:50am the americans with disabilities act requires wheelchair ramps no greater than 1 over 12 or 1/12 .whta is the minimum horizontal length 2-foot-high ramp can have? Monday, May 20, 2013 at 1:43am Physics needs help Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Sunday, May 19, 2013 at 5:04pm PSY210 Psychological Statistics what else does a psychologist need to know or think about when interpreting this information? A school psychologist decided to separate some classes by gender to see if learning improved. She looked at student scores on the final exam and obtained the following information: ... Sunday, May 19, 2013 at 3:25pm Students will be paid to report illegal items in school under the new Tattle Tale program. Sunday, May 19, 2013 at 2:02pm The Tattle-Tale Program is meeting to plan on a reward for student who report any classmate who bring drug or handgun to school Sunday, May 19, 2013 at 1:42pm Can somebody please help me write a summary news lead using the story below. The lead may be no longer than 20 words and must be written in the active voice. Gary hubard, superintendent of schools, announced a new program for your local school district. It is called me ... 
Sunday, May 19, 2013 at 12:19pm PQ is a vertical tower 95m high, the points R and S are on the same horizontal plane as Q, the angle of elevation of P from R is 35 if QS =155m and Q from RQS=48 1.Calculate the distance of QR 2.Calculate the distance of RS 3.Calculate the are of QRS Sunday, May 19, 2013 at 9:15am PSY210 Psychological Statistics Simply reporting measures of central tendency or measures of variability will not tell the whole story. Using the following information, what else does a psychologist need to know or think about when interpreting this information? A school psychologist decided to separate some... Sunday, May 19, 2013 at 1:09am Math 4 questions "at most" means ≤ "minimuum of" means ≥ "less than" means < "low of" means ≥ "high of" means ≤ You will be more likely to get an answer quickly if you show your attempts. Saturday, May 18, 2013 at 10:47pm I posted these questions earlier a lot of times but noone answered them and i really need help so can someone help me. i dont want all the answers, can someone just do like the first 2 so i get an example of how to do them. thanks 1. represent each with an ineqaulity a.) time ... Saturday, May 18, 2013 at 9:56pm Math 4 questions I posted these questions earlier but noone answered them and i really need help so can someone help me. 1. represent each with an ineqaulity a.) time spent on the activity can be at most 13 mins b.) the volume of the container must be a minimum of 1.8 L and a max of 2.5 L 2. ... Saturday, May 18, 2013 at 9:28pm Math help urgent I have a few questions that i need help with so can someone pleese help me thanks:) 1. represent each with an ineqaulity a.) time spent on the activity can be at most 13 mins b.) the volume of the container must be a minimum of 1.8 L and a max of 2.5 L 2. in canada by law any ... Saturday, May 18, 2013 at 8:05pm can you check my grammar and spelling? and does it look better? 
Erin Oldham Calvin Brown 20 March 2013 ASL 2 ASL 2 Research paper I am writing about interviewing three Deaf Persons for my ASL research paper. The first person I interviewed is Christian DeGuzman. He has been ... Saturday, May 18, 2013 at 6:48pm 1. Larry is a very aggressive four-year-old child. He watches many television programs that contain violence. It s most likely that A. he will become overactive. C. his behavior won t change. B. he will become depressed. D. his behavior will become more aggressive. 2... Saturday, May 18, 2013 at 5:00pm Math/statistics (?) Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. What is your question? Saturday, May 18, 2013 at 12:16pm hsa practice high school assessment and thank you Saturday, May 18, 2013 at 12:10pm If you really want the help of an expert in this subject, you'll follow directions and put the SUBJECT in the School Subject box. Friday, May 17, 2013 at 8:07pm week 5 104 HIS world civilizations 11 You want to cheat!!!???? Plagiarism is a serious offense against academic standards. Many students have been thrown out of school for copying someone else's work. Beware! Friday, May 17, 2013 at 7:36pm One day I decide I want to see how high I can throw a rock. I throw a rock straight up in the air and it has a speed of 30 m/s when it leaves my hand. I then realize this was foolish, as I need to get out of the way before the rock comes back down and hits me on the head. How ... Friday, May 17, 2013 at 6:40am A human cannonball is launched from a cannon at 26.4 m/s at 20.4 degrees above the horizontal. (Assume he/she lands in a net the same height as the cannon.) How high in the air does he/she go? Thursday, May 16, 2013 at 7:48pm Freshman Sophomore Junior Senior Left-handed batters 4 6 5 4 Right-handed batters 13 10 11 12 A school baseball team has 65 players. 
What is the probability that a randomly chosen player is a junior or a right-handed batter? Thursday, May 16, 2013 at 6:26pm Kendall asked 40 randomly-selected seniors at his high school about their plans for after they graduated. Twenty-nine students said they planned to go to college. If there are 380 seniors at Kendall s high school, estimate the number who plan to go to college. Thursday, May 16, 2013 at 5:45pm Chemistry needs help Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Thursday, May 16, 2013 at 11:15am geometry!!!! PLEASE HELP!!!! There are 300 students at a local high school. On "All Students Exercise Day", Sam, the Physical Education instructor, wants them to line up in a perfect rectangular formation (same number of students in each row). However, there must be more than 1 student in each ... Thursday, May 16, 2013 at 11:01am If you really want the help of an expert in this subject, you'll follow directions and put the SUBJECT in the School Subject box. Thursday, May 16, 2013 at 10:02am How much force would you need to apply to a 1.7kg object to have it reach 5.8 m/s if it starts from rest? Thursday, May 16, 2013 at 9:47am Leadership Quality Is staying after school to help file papers and grade papers a leadership quality? Wednesday, May 15, 2013 at 8:49pm how can the plan of action help you address a current problem or challenge you are having with school Wednesday, May 15, 2013 at 6:30pm Why would you multiply by 6? You're right to multiply by 3 to find that she lives 15,840 feet from school. Wednesday, May 15, 2013 at 5:03pm It is 3 miles from Sally's home to school. How many feet is that? Wednesday, May 15, 2013 at 4:52pm Technology Skills class???? Grade 9??? I teach an Information Technology course in Grade 9, which may be similar to yours. We develop keyboarding skills, and learn various kinds of computer-related skills. 
We include everything from using Excel to create spreadsheets with formulae, to using MovieMaker to make our ... Wednesday, May 15, 2013 at 1:15pm 11. In which one of the following sentences is the simple subject also the complete subject? A. Crocodiles, quiet as logs, lurked on the riverbank. B. Detectives Homer Fry and Janine Small looked high and low for clues. C. Large and small dinosaurs stalked the grassy plains. ... Wednesday, May 15, 2013 at 12:40pm Talk about the things you don't have to do on Sunday. I don't have to study hard. I don't have to see teachers at school. I don't have to see my homeroom teacher. I don't have to walk to school. I don't have to clean my classroom. I don't have to ... Wednesday, May 15, 2013 at 4:45am ted sells his homemade peanut butter for 1.60 a jar at the local farmers Market the jar is 8 cm in diameter and 10 cm high he decides he will also sell peanut butter in jats that are 16 cm in diameter and 20 cm high.What should he charge if he uses the same price per cubic ... Wednesday, May 15, 2013 at 1:30am a manufacturer makes right triangular prisms like the one shown for refracting light. They will be packed in boxes 12.5 cm long, 2.5 cm wide and 22.5 cm high. How many prisms can fit the box? Wednesday, May 15, 2013 at 1:23am a manufacturer makes right triangular prisms like the one shown for refracting light. They will be packed in boxes 12.5 cm long, 2.5 cm wide and 22.5 cm high. How many prisms can fit the box? Wednesday, May 15, 2013 at 1:22am a person jumps off of a 6 foot high diving board with an initial velocity of 13 feet per second. how many seconds does it take the person to hit the water? Tuesday, May 14, 2013 at 8:14pm What will happen in the future? We will fly to the USA in an hour. We will visit Mars. We will go to the moon with our family. We will travel to many planets without paying a lot of money. We will live on Jupitor. We will live on another planet. We will live for more than 200 ... 
Tuesday, May 14, 2013 at 6:47pm Current GPA is your grade average for this year. The cumulative GPA is your grade average since you entered this school. Tuesday, May 14, 2013 at 6:05pm HELP PLEASE!!!!!!! maybe she just want help what is so wrong with help dont give a crap if you have to do your homework i came in here to see if you guy wolud have said something that has to do with it but i go to online school to there is not a textbook for this its a DISCUSSION for SOCIAL ... Tuesday, May 14, 2013 at 2:14pm Advantages and Disadvantages of School Lunches Never doubt that a small group of thoughtful committed citizens can change the world; indeed, it's the only thing that ever has. Margaret Mead once said this, which means that we as Americans need to make a ... Tuesday, May 14, 2013 at 12:03pm AP Statistics A school psychologist reports that the mean number of hours the students at this school sleep each night is 8 hours. The students believe the mean is not 8 hours. To find an estimate of the true mean, they select a random sample of 15 students from their school and ask how ... Tuesday, May 14, 2013 at 11:43am Logan Academy students take part in four field trips during the school year. They visit a library, a museum, a park, and a zoo. How many different ways can the four trips be ordered for the school Monday, May 13, 2013 at 11:21pm Rohnda is in charge of buying food for the school picnic, she found that hot dogs are in packages of 24, buns in packages of 18, ketchup in boxes of 12, and mustard in boxes of 15. what is the least number of students that can be feed if each student receives 1 of each item ... Monday, May 13, 2013 at 9:00pm HARD CHEM QUESTION- any help appreciated Phosgene (COCl2) is a poisonous gas that dissociates at high temperature into two other poisonous gases, carbon monoxide and chlorine with equilibrium constant Kp = 0.0041 at 600 K. Find the equilibrium composition of the system after 0.124 atm of COCl2 is allowed to reach ... 
Monday, May 13, 2013 at 8:48pm Why did Airhead eat the dollar he brought to school Monday, May 13, 2013 at 7:24pm Can you proofread my essay? The assignment was to write a problem-solution essay on a big social issue in our world. Never doubt that a small group of thoughtful committed citizens can change the world; indeed, it's the only thing that ever has. A quote once ... Monday, May 13, 2013 at 3:41pm Chemistry needs help Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Monday, May 13, 2013 at 1:17pm Math - Probability in a primary school 70% of the boys and 55% of the girls can ride a bicycle. if a boy or a girl are chosen at random, what is the probability that both of them can ride a bicycle? Monday, May 13, 2013 at 1:08pm in a primary school 70% of the boys and 55% of the girls can ride a bicycle. if a boy or a girl are chosen at random, what is the probability that both of them can ride a bicycle? Monday, May 13, 2013 at 12:04pm Life orientation alcohol abuse causes road carnage because when people are driving under the influence of alcohol they drive at higher speed thats de reason why road carnage is so high Monday, May 13, 2013 at 11:48am The acceleration due to gravity near the surface of Mars is 3.72ms^-2 . A rock is thrown straight up from the surface with an initial velocity of 23ms^-2 . How high does it go? Monday, May 13, 2013 at 6:57am your school librarian has asked your class for some help on the purchase of some new bookcases with two shelves each. She has 300 new books she needs to shelve. One half of the books are half an inch thick. Ones third of them are one fourth of an inch thick, and the rest are ... Monday, May 13, 2013 at 12:29am Since Jessica s participation in local politics increased significantly after she joined her school s political science club, it is clear that her involvement in that club led her to take an interest in politics. 
The argument above is flawed because Sunday, May 12, 2013 at 8:04pm Chemistry needs help Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Sunday, May 12, 2013 at 1:07pm Indicate your specific subject in the "School Subject" box, so those with expertise in the area will respond to the question. Also, we do not do your homework for you. Although it might take more effort to do the work on your own, you will profit more from your ... Sunday, May 12, 2013 at 1:04pm let the size of the cut-out square be x cm by x cm then after bending, the base of the box will be 30-2x by 30-2x and x cm high Volume = x(30-2x)^2 = x(900 - 120x + 4x^2) = 4x^3 - 120x^2 + 900x d (Volume)/dx = 12x^2 - 240x + 900 = 0 for a max/min of Volume 12x^2 - 240x + 900 = ... Sunday, May 12, 2013 at 6:43am EARLY CHILDHOOD EDUCATION & AFTER-SCHOOL DAY CARE 02604300 1.b 2.c 3.d 4.d 5.a 6.b 7.c 8.b 9.a 10.c 11.a 12.b 13.c 14.a 15.d 16.c 17.b 18.c 19.b 20.d 100% Right Saturday, May 11, 2013 at 3:11am 100% Right answers 02604300 Saturday, May 11, 2013 at 3:09am la porte high school h= gt²/2, t=2h/g=2 12/9.8=2.45 s s=v t = 15 t=15 2.45 = 36.7 m Friday, May 10, 2013 at 4:19pm Pages: <<Prev | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | Next>>
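The box-volume derivation in the listing above stops at the quadratic $12x^2 - 240x + 900 = 0$. As a quick numeric check (added here for illustration, not part of the original posts), solving it and keeping the admissible root:

```python
import math

# Solve 12x^2 - 240x + 900 = 0 (equivalently x^2 - 20x + 75 = 0).
a, b, c = 12.0, -240.0, 900.0
disc = math.sqrt(b * b - 4.0 * a * c)
roots = sorted([(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)])  # [5.0, 15.0]

# The base 30 - 2x must stay positive, so only x < 15 is admissible;
# x = 5 cm gives the maximum volume.
x = roots[0]
volume = x * (30.0 - 2.0 * x) ** 2   # 5 * 20^2 = 2000 cubic cm
```

So cutting 5 cm squares from the 30 cm sheet maximizes the box at 2000 cm³, consistent with the derivative set to zero above.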
Gravity in game

I need help trying to understand how to implement gravity in a game. I think I understand the concept of these derivatives: 1. Position vector: position of an object. 2. Velocity vector: change in position over time. 3. Acceleration: change in velocity over time. My problem is how do I implement gravity and force into the game? I know the position of the player. I know the direction it is facing by using sin and cos of theta. Can I not use the direction vector as my velocity? That is one place in which I am confused. How do I use the velocity vector to move in the direction in which I am facing? How do I get the acceleration of the player: force = mass * acceleration; acceleration = force/mass; do I just set mass and force to be a random value? Can someone please clarify this in a simple manner. And if velocity is the change in position over time, what is the use of speed? Thanks in advance.

"1. Position Vector: position of an object. 2. Velocity Vector: change in position over time. 3. Acceleration is change in velocity over time. I know the direction it is facing by using sin and cos of theta. Can I not use the direction vector as my velocity?"

You can, but why would you? Position is calculated from velocity, not the other way around.

"How do I use the velocity vector to move in the direction in which I am facing?"

The thing is that the direction you're facing can be different from the direction you're traveling. Think cars on an icy road. Facing is useful, however, if you want to work with car-like physics - to accelerate, you'd apply a force in the direction the car is facing.

"How do I get the acceleration of the player: force = mass * acceleration; acceleration = force/mass; do I just set mass and force to be a random value?"

You can use any units you want, but it might be easiest to keep track of real-world values: force in newtons and mass in kilograms. Force is a vector, by the way.
So, to implement some simple gravity in the z-direction, in pseudo-code:

acceleration = <0,0,m*g> / m; //a = f/m, gravity force = m*g, positive z is down
velocity += acceleration * dt; //dt = elapsed time since last update
position += velocity * dt;

Note that if you have multiple forces acting on an object, you'd just add them together and stick the sum into your a=f/m equation. I'd recommend these articles as an intro to game physics.

Thanks for the really fast reply. How is the direction in which I am facing different from the direction I am travelling in? It appears to be the same to me! :'( Oh, and I am working with 2d vectors.

Apply a net acceleration due to gravity per frame. The acceleration you choose can be anything, but gravity on earth is 9.8 m/s^2. This may or may not have any meaning within the context of your game units. Think of side-stepping (strafing) in a first-person shooter. You can face one direction but be walking sideways. In a simple 2d game this probably isn't an issue.

Oh right, I see. If only explanations were as simple and straightforward as that :D So how do I apply this to my game? I know my initial position, and I need to get to a final position, well, wherever the player decides to stop. I know the direction vector. But I don't know which correct figure to use for the velocity, force, and mass. This is what I hate about games tutorials and maths: all these random numbers, and no explanation of why they're chosen. You can guess, I am crap at maths. If I had an initial position of (320,100), say, on the graph, and I added (0,1) to the position, it would mean I moved down the y-axis of the graph. If that is the case, is (0,1) my velocity? I know what it tells me: I am looking down. (0,1) has to be velocity because it changes the position of the player, right? So if that is correct, how do I use (or how can I arrive at the correct figure for) the acceleration and speed?

Ok, take a step back and breathe.
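The gravity pseudo-code in this thread can be fleshed out into a short runnable sketch. The mass, timestep, and starting values below are illustrative choices, not from the thread; it uses 2D vectors with positive y pointing down, matching the asker's setup:

```python
# Semi-implicit Euler integration of gravity, 2D, positive y is down.
# All numbers here are illustrative, chosen only for the example.
g = 9.8            # m/s^2, acceleration due to gravity
mass = 10.0        # kg (arbitrary for the demo; it cancels out for gravity)
dt = 0.1           # timestep in seconds

position = [0.0, 0.0]
velocity = [3.0, 0.0]              # initially moving right at 3 m/s

for _ in range(10):                # simulate one second in ten steps
    force = [0.0, mass * g]        # sum all forces acting on the object here
    acceleration = [f / mass for f in force]               # a = F / m
    velocity = [v + a * dt for v, a in zip(velocity, acceleration)]
    position = [p + v * dt for p, v in zip(position, velocity)]
```

With multiple forces (thrust, drag, gravity), you would sum them into `force` before dividing by mass, exactly as the reply above describes.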
Then go and read this article on vectors: Euclidean vector - Wikipedia, the free encyclopedia

Now after you have read that article, say you have a position (10, 10) and you are looking along the vector (1, 0) (I will refer to this as the look vector). What this tells you is that you are facing straight down the x-axis at the point (10, 10), and only that. This has no relation whatsoever to velocity or movement direction.

Now say you want to move straight along the y-axis at speed x; this would give you a movement vector or velocity vector. In this case the velocity vector could be x*(0, 1) (x being a scalar). Think about what this means... what part of that formula tells you the direction of the movement, and what part tells you about the speed of the movement? Now what would the speed of the velocity vector (2, 2) be?

The look vector is not to be confused with the velocity vector. The look vector is only used as an orientation vector; it describes how you are oriented in the game world (which way you are facing). The velocity vector describes how you are moving in the game world (what direction you are moving in and what speed you are moving at). This does not mean they cannot point in the same direction (if you move straight forward they would), but there is no restriction whatsoever that a velocity vector is bound to the look vector.

Now to understand gravity you need to understand what a force is, how it relates to speed (and how it changes speed), and how a set of forces acts on an object and changes its speed. Force - Wikipedia, the free encyclopedia gives you a read on that, but you should really try to understand the above before going into forces. Basically you need to understand mechanics: Mechanics - Wikipedia, the free encyclopedia. And some basic linear algebra (vectors).
> Then go and read this article on vectors [...] What this tells you is that you are facing straight down the x-axis at the point (10, 10) only. This has no relation whatsoever to velocity or movement direction.

Facing down, does it not mean I am looking right? To look left I would have to do (-1,0).

> Now say you want to move straight along the y-axis at speed x, this would give you a movement vector or velocity vector. In this case the velocity vector could be x*(0, 1) (x being a scalar). Think about what this means... what part of that formula tells you the direction of the movement and what part tells you about the speed of the movement?

So (0,1) is the look vector, and I can move down the y-axis... but what does the scalar x represent? Speed?

> Now what would the speed of the velocity vector (2, 2) be?

I honestly don't know the answer to that. Can't it have any amount of speed?

Read the links I provided. Edit: Didn't see your first question in the quote box. I meant facing down in the same context as in facing down a road; bad wording on my part. No, the velocity vector part of my post has nothing to do with the look vector part. I gave you the names of the two parts that x*(0,1) contains, so while dealing with velocity vectors, forget look vectors for now.

> Read the links I provided. [...] while dealing with velocity vectors, forget look vectors for now.

Hey, yeah I read them, it got confusing with all those maths symbols.
But a speed is a scalar and doesn't tell me which direction I am facing. The velocity tells me the speed and the direction I am facing. So I assume that x is the speed and (0,1) is the velocity. But if (2,2) is the velocity, which I think it is, I have no idea what the speed is unless I get the length of the vector, sqrt(8), and divide by a time. I just don't know lol

The speed of a vector (2,2) is the length or magnitude of the vector. To start out you can do very simple Euler integration without friction / drag:

accel = force / mass;
vel += accel * dt;
pos += vel * dt;

Ah, you see, this is what confuses me. How is "length" or "distance" equivalent to speed? Speed *is* not distance. I mean, we have the equation s = d/t; as you noticed in my previous post, I got the magnitude of the velocity vector and divided by time: sqrt(8)/time. So how is the length of a vector a speed?

Basically the length of a vector is not an actual length in the way you think it is. In the case of a velocity vector, the vector holds the following information: the direction and the speed. Think of a vector as an information container. It holds two kinds of information: the direction of the vector and the length of the vector. How this information is used depends on the context in which the vector is used. I seriously suggest you study linear algebra, take a course at a local college if you can, or just search the web. I cannot provide you better answers than these, really.

Alright, I get it, thanks. And I got it to work. Thanks for the help.

Getting it to work may not be your only goal here. The important lesson here is that you understand vectors backwards and forwards. If you do not, then when you come up against this next time you will be just as unprepared as you were this time. Vectors are one of the fundamental building blocks of computer graphics and linear algebra.
Polynomial Exponents Multiplication and Division

Examine the problem below. Recall that multiplication is implied when there is no sign between a variable or set of parentheses and a number, another variable, or another set of parentheses. Therefore, in the first problem the x^3 and y^4 are being multiplied. In the next problem the x^2 and x are being multiplied; the difference is that a * is present, which explicitly indicates multiplication. We will solve this problem, then return to the first problem on the page.

(x^2 * x)^3

Because there is no addition or subtraction inside the parentheses, the exponent can just be "distributed" in and simplified:

(x^(2*3) * x^3)
x^6 * x^3
x^9

Notice that this gives the same result as if we had simplified the inside of the parentheses first, as we have done below.

(x^2 * x)^3
(x^3)^3
x^9

So why are there two different methods of solving this problem? The first method, where the exponent was distributed in, can be applied to the first problem on this page, whereas the second method cannot: x^3 and y^4 are unlike terms, so the inside of those parentheses cannot be simplified first.

We will now apply the "distribute in" method to the first problem presented on this page. This method will also work when the terms are being divided, like the problem below:

(x^2 / x)^3

Again, the exponent is just "distributed" in:

(x^(2*3) / x^3)
(x^6 / x^3)
x^3

The next page explains what to do when you encounter fractions.
Proving Angles Congruent – Geometry Proof

Posted on October 8, 2009 by Mr. Pi

If you have been reading my math blog at all, then you know I have been posting my YouTube videos here and giving each one a brief description. I am ready to get back in action and prove some angles congruent! One of the easiest ways to prove angles congruent is with knowledge of the Vertical Angles Theorem. The Vertical Angles Theorem states that vertical angles are congruent.

In the proof of the Vertical Angles Theorem, you have to establish a relationship between angles 1 and 3 and angles 2 and 3. Both pairs of angles are supplementary pairs, thus each sum is 180 degrees, which can be seen in statement 2 of the above proof. Now that statement 2 is established, you can state that the sum of the measures of angles 1 and 3 is equal to the sum of the measures of angles 2 and 3. This is shown in statement 3 in the above proof. Now this equation is really useful, because it can be reduced to "the measure of angle 1 is equal to the measure of angle 2," which is very close to what must be proved. Since the measures are equal, the angles are also congruent. See statements 4 and 5 in the above proof.

I hope to incorporate this into my class somehow. I will get back to you and let you know.

Filed under: Geometry, Theorem Proof, Vertical Angles Theorem

Comments:

Thanks. I kinda understand it now. God bless! :)

• Janina, great. Do you have any specific questions I could answer?

I understand it a lot more now. Thank you very much :D

• Thank you for taking the time to comment, Shannon. I am glad my work was able to help you. Kind regards, Mr. Pi.
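The algebra behind statements 2 through 5 of the proof can be written compactly. Assuming, as in the post, that angles 1 and 3 and angles 2 and 3 are supplementary pairs:

```latex
\begin{aligned}
m\angle 1 + m\angle 3 &= 180^\circ \\
m\angle 2 + m\angle 3 &= 180^\circ \\
\Rightarrow\quad m\angle 1 + m\angle 3 &= m\angle 2 + m\angle 3 \\
\Rightarrow\quad m\angle 1 &= m\angle 2
\quad\Rightarrow\quad \angle 1 \cong \angle 2
\end{aligned}
```

Subtracting the common measure of angle 3 from both sides is the step that turns statement 3 into statement 4.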