The D.R.K Club (FFXIV: Woes and Wonderments)

Nilatai wrote: I picked up a Haub +1 for 200k on my server.

Haha, I remember when that was one of my money makers.

Raelix wrote: if i had 10g for every time someone called me "milch" on here or BG, i'd have sold gil and never had to work IRL.

I'd call you Mitchell, and imagine you to be Joe Don Baker.

Raelix wrote: I'd have way more inventory space if I didn't have this wicked collection. The only moghouse furniture I'll ever want is some kind of weapon rack to display six at a time... I'd need three XD Edited, Oct 8th 2012 2:10am by Raelix

I've been wondering for years why you idle in that.

Anyone in the mood for a math problem? What I've done so far is factor the bottom to split it and use partial fractions... so you end up with x(x+sqrt(-1+i))(x-sqrt(-1+i))(x+sqrt(-1-i))(x-sqrt(-1-i)), or x((x^2)+(1+i))((x^2)+(1-i)). The partial fraction split would be... left side = A, B, C, D, E over each of those factors, or A/x then Bx+C & Dx+E if you used the x^2 factors. Multiply by the LCD... then... yeah. I'm not sure if this is something simple and I'm just getting bogged down by so many factors, variables, and the ever so wonderful i. But if you could point me in a direction to try, that would be great.

Busaman wrote: Raelix wrote: I'd have way more inventory space if I didn't have this wicked collection. The only moghouse furniture I'll ever want is some kind of weapon rack to display six at a time... I'd need three XD I've been wondering for years why you idle in that.

No HQ boxes in idle set. Idle refresh piece. Gothic gear useless but super rare and unique looking and sorta matches. Ice Spikes legs to go with Gothic gives me about a 50% ice spikes rate. Used to make people say the stupidest sh*t on FFXIAH to try and discredit me when they didn't like facts.

I looked at the question for 5 seconds, hence I have no idea how to solve it or what the solution is at this time, but two things I'd try (and in this order): Integration by Parts. My goal would be to keep things clean. I have no clue how this problem comes into what your class is covering, so I might be off.

We just covered by parts and partial fractions so you're not far off. By parts seems like it would be messy as @#%^ though.

Then in that case... Also, I already punched it in. Solution (honestly, this really isn't even spoiler material, the problem is so @#%^ed up, but it uses partial fractions!). Your professor is such a dick, that's all I can say. Edited, Oct 9th 2012 4:06pm by xypin

Yeah, I had plugged it into wolfram and couldn't really make heads or tails of it. The way wolfram handles partial fractions is different than the way we learned it. :( The problem is worth 10 extra points on our next test so he certainly isn't a dick lol.

Ahh, I was thinking this was a homework problem. Then in this case, it's probably easier to solve partial fractions twice because of the imaginary parts. N = numerator of original problem, D = denominator of original problem. So you'd first find partial fractions for x*(x^4 + 2x^2 + 2). You'll need to solve the following equation:

N/D = A/x + (Bx^3 + Cx^2 + Dx + E) / (x^4 + 2x^2 + 2)

The numerator of a partial fraction is one degree less than its denominator. From here, multiply out and solve the equation. The second part is a bit tricky.
You've already got the denominator parts: x^2 + 1 - i and x^2 + 1 + i. But since there is an imaginary part, for the partial fractions you want to use the form Ax + Bx*i + C + D*i. Even though the imaginary part is only a constant in the denominator, you still need it as a function of x for the numerator. If it is indeed the case that there is no imaginary part (in this case B), then that variable will be equal to 0. This is how I was taught partial fractions anyway. Is this any help? Edited, Oct 9th 2012 4:39pm by xypin

My current attempt. The way we were taught was if the factor is linear you can put a constant variable above it. If it's quadratic you put a linear above it (Bx+C). I think the way I'm currently trying will just lead to a dead end with a complicated system of equations to solve (or can't be solved). I was trying to get the coefficients to line up so it was like... (B+D)x^4 on one side and -1/2 x^4 on the other, so you could equate (B+D) = -1/2... but that didn't seem to work either. I feel like my background in algebra just isn't quite up to par for this problem, or I'm just going about it the wrong way. And of course, you're always a good help xyp. <3 Edited, Oct 9th 2012 5:01pm by Siralin

Get any further? Image of my partial fraction solutions: http://i180.photobucket.com/albums/x308/xypin/math.jpg (Warning, might be more confusing) PDF document outlining my partial fraction work: https://dl.dropbox.com/u/80054039/math_solution.pdf (Slightly more detailed) If you're still stuck on that part, this might be helpful. Oh, I pulled out a 1/2 in the second part, it's there... just not there. So keep that in mind if you try this method. Also, in the link you posted, while your setup isn't unsolvable, it's just really, really hard (something may or may not be wrong with the algebra, didn't check toooo closely). The difficulty comes from the fact that B, C, D, and E are all complex. The real(B-D) + imag(C+E) has to equal 2 while the other parts have to sum to 3/2. Setting B = a + bi and similar for the other variables if you wanted to continue from that point would probably fix this. Edited, Oct 9th 2012 6:51pm by xypin

Where does the (Bx^3 + Cx^2 + Dx + E) come from in your solution? Is that just part of the way you were taught the decomposition?

Siralin wrote: Where does the (Bx^3 + Cx^2 + Dx + E) come from in your solution? Is that just part of the way you were taught the decomposition?

The first part is finding partial fractions for... N / (x(x^4 + 2x^2 + 2)). You actually answer your own question here:

Siralin wrote: The way we were taught was if the factor is linear you can put a constant variable above it. If it's quadratic you put a linear above it (Bx+C).

The denominator will be quartic (x^4), so the numerator is cubic (x^3). Edited, Oct 9th 2012 6:58pm by xypin

Got it... and going from the first line to the second, how did the Bx^3 end up as Bx^4? :( That feels like a stupid question, but I'm not seeing where the x came from that was distributed through.

You're trying to find P = (Bx^3 + Cx^2 + Dx + E)/(x^4 + 2x^2 + 2). Since D = x*(x^4 + 2x^2 + 2),

D*P = x*(x^4 + 2x^2 + 2)*(Bx^3 + Cx^2 + Dx + E)/(x^4 + 2x^2 + 2) = x*(Bx^3 + Cx^2 + Dx + E)

I'll update the pdf file in a minute as well. Oh, I see, I missed an x... sorry for the typo, the pdf is updated, should clear up the confusion. Edited, Oct 9th 2012 7:15pm by xypin

Going to try it with the (Bx^3 + Cx^2 + Dx + E) way. That seems like it will work out better (easier) than with my way. Will report back! You're my hero. I'm nearing the point of saying @#%^ this problem.
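As a purely illustrative aside: the thread never pins down the numerator N of the actual integrand, so take N = 1 just to see the mechanics. With that assumption, the A/x + (Bx^3 + Cx^2 + Dx + E)/(x^4 + 2x^2 + 2) setup described above works out to A = 1/2, B = -1/2, C = 0, D = -1, E = 0, i.e.

$$\frac{1}{x(x^4+2x^2+2)}=\frac{1}{2x}-\frac{x^3+2x}{2(x^4+2x^2+2)},$$

and substituting $u=x^2$ in the second term gives

$$\int\frac{dx}{x(x^4+2x^2+2)}=\frac{1}{2}\ln|x|-\frac{1}{8}\ln(x^4+2x^2+2)-\frac{1}{4}\arctan(x^2+1)+C.$$

The actual exam problem, with its own numerator, changes the constants but not the mechanics.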
You have to do partial fractions like 2 more times to come to a really @#%^ing convoluted answer. Ugh. The moment I popped it into Wolfram, I gave up on solving the problem beyond computing the first three partial fractions. Since there is a fourth part to the solution, I'm guessing the (x^2 + 1 - i) integral can be separated, but my interest stopped at the end of that pdf. I just wanted to prove to myself that I could at least do that much.

I pushed through and got an answer. Going to check back over it tomorrow. Probably not right... but it seemed to work out. We'll see when my brain doesn't want to slap me. Here we go.... https://dl.dropbox.com/s/nz62u45uugmun0r/MathEC.docx?dl=1 (Obviously it could be simplified more... [factor then combine lns], but @#%^ that because @#%^ this problem.) Edited, Oct 10th 2012 2:46pm by Siralin

Relevant: bottom of page 1 in your document. Ok, so the mistake isn't terrible, but either your B or D variable is incorrect on the bottom of page 1 (A and C are correct). Edited, Oct 10th 2012 3:23pm by xypin

I think I originally had (4/2i)+1 = B which apparently equals (1-2i) >.> I'm wondering how long it takes until someone says STFU WITH THE MATH YOU GUYS TAKE IT TO PM.

Second page, middle – given your solution for A and B... A + B =/= 0. I would guess A is missing a 1/2. Also, where the hell is the physicist? I'm only an engineer and I'm pretty sure physics is 300% more math than engineering. It's just a typo, but second page, second to last equation, the denominator should be x^2 + 1 - i, not x^2 + 1 + i. Edited, Oct 10th 2012 4:03pm by xypin

He didn't even care when I mentioned it on facebook!

Can't blame him for that. Gaxe, troll harder, you bore me.

Don't care.

He wasn't even trolling though... Sounded pretty reasonable to me. Few people enjoy this kind of stuff like xyp. I did have a stab at that problem, but after a while I gave up because I suck at calculus, apparently. It's a pretty evil problem. Did you get an answer in the end? I could put it in my Physics message board for Uni? XD

I think the last link Sira posted is the answer, mostly. I pointed out a few minor errors, but quickly became bored with that.

I should have a read through what he posted, then. Haven't had a spare minute the past few days. For some reason they've decided it's a good idea for the first four weeks of term that the Physicists should do more Mathematics based lectures than actual Mathematics degree students. So burned out by it! INB4 Rae gets here. {Blood Cuisses}

You know, it's funny. When you're wrestling with some sort of moral dilemma, how songs suddenly jump out at you. For example, right now I'm particularly drawn to this song by Cliff Richard. I won't go into the why of it, cuz this isn't f*cking livejournal. Just thought it was, y'know, weird. Too bad it was done first in the goddamn 50's. Edited, Oct 15th 2012 1:55am by Raelix

Pst, they had Joe Kittinger leading the team. Everyone knows it was previously done. (At a lower altitude, slower speed, and with a drogue chute.) The only record they didn't break was the longest freefall time, which can be attributed to the lack of a drogue chute. If you're trying to say it's not special because they were able to do it 50 years ago, gfy.

It's funny to point out the drogue chute caused more problems than it solved, but Kittinger still broke the speed of sound (714 mph, sound <680 mph at those altitudes) even with the drogue. Hey, you know what? Felix did have a drogue 'just in case'. It's a load of hype I don't buy into.
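On the speed-of-sound numbers being argued over here: assuming dry air ($\gamma = 1.4$, $R = 287\ \mathrm{J\,kg^{-1}K^{-1}}$) and stratospheric temperatures in the roughly 217 to 230 K range that the standard atmosphere gives over the altitudes in question (an assumption, not a figure from the thread),

$$a=\sqrt{\gamma R T},\qquad a(216.65\,\mathrm{K})\approx 295\ \mathrm{m/s}\approx 660\ \mathrm{mph},\qquad a(230\,\mathrm{K})\approx 304\ \mathrm{m/s}\approx 680\ \mathrm{mph},$$

so the quoted 714 mph would be modestly supersonic at those temperatures while 614 mph would not, which is exactly why the two figures lead to opposite conclusions.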
20% higher is not 20% cooler. Yes, I guess I will eventually be that guy that says "Guys, this was already done at 1/10th the cost back in 1969" when private industry reaches a certain milestone. Kinda hard to take modern high-performance aircraft seriously when you know the A-12 was designed in the 50's too. This isn't rose-tinted glasses and nostalgia, those engineers had balls. Today they'd say "WTF is this round window bullsh*t? You're dealing with 14 f*cking PSI, you aren't building a f*cking submarine. And get rid of this @#%^-ass hybrid rocket sh*t and put some f*cking LH2/LOX tanks in, you pansy f*cks." 50's Engineers made things in Flash White. Burt Rutan thinks he makes things in Flash Gordon. Edited, Oct 15th 2012 8:06am by Raelix

Everything that I'm finding (looked past wiki) is showing that he only hit 614 mph, which is below the speed of sound at those altitudes. Even the Air Force's website states this. I was only able to find the 714 number at one link, and it stated that Kittinger had given this speed in later interviews.

Raelix wrote: This isn't rose-tinted glasses

No, it's your own "I hate society and anything they think is cool" vision. You're taking the fact that this is a hyped marketing campaign and running with it to the point of discrediting the guy and the "mission." Sure there are some frat boys and video game majors out there going "BRO THIS IS THE COOLEST THING EVER THIS GUY HAS THE BIGGEST BALLS MAYBE HE SHOULD TAKE A PISS OFF THE BALLOON LOL THINK OF ALL THE SCIENCE THEY'RE GONNA GET FROM THIS BRO... @#%^ING SCIENCE," but that doesn't change the fact that it's pretty neat and a cool accomplishment. You would probably have some of the same (justifiable) criticisms as you do now if they had done this without big sponsors and it wasn't in the media, but you'd also be eating this sh*t up and bringing it to us on your "I'm cooler than you because I know about things like this" platter.

You're confusing me with Turin, dangus. I'm not saying jumping 300 feet on a motorcycle is lame because Knievel did 140 feet on a softtail harley. I'm saying media hype about a 'jump from space' that isn't even a third of the way to legal 'space' and being overseen by the guy who did it 50 years ago is just 'meh'. Edited, Oct 15th 2012 8:10am by Raelix

I wouldn't compare you to Turin, but you are being a bit of a twat. I hate the media as much as anyone else (ask my GF, she has to listen to me rant about sensationalist article titles about science and technology all the time), but you have to understand that they need to make it out to be awesome (and more than it actually is) because they're sinking a sh*t load of money into it hoping they'll see a return. You can't blame them for this, but you also don't have to buy into it. (Which you're clearly not.) Edited, Oct 15th 2012 10:18am by Siralin

Develop a suit that can survive higher speeds. Use 50 years of bettered materials and technology. Hell, bring back ribbon chutes. Good? Now jump from 200k. It's awesome, sure, but it's a rerun.

I'm good with that. It was quite fun to watch though. Footage from this jump is undeniably better, just sayin. ;)

space jump is obviously cool regardless of whether it's in the 50s or now. imagine doing it. sh*t is crazy.

Since I've somehow gotten dragged into this, Raelix, you're being a douche again. Stop it.

milich wrote: space jump is obviously cool regardless of whether it's in the 50s or now.

I really don't see why it's cool.
From what I understand, he failed at everything he set out to do, so it was just some dude parachuting.

I don't particularly care if you thought it was cool or meh, but he broke all but one record that he set out for.
{"url":"http://wow.allakhazam.com/forum.html?forum=260&mid=126175514163454467&p=170","timestamp":"2014-04-19T18:10:05Z","content_type":null,"content_length":"189313","record_id":"<urn:uuid:0e74aec7-f691-45ef-a758-5f1bab87f024>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Epimorphisms have dense range in TopHausGrp?

Consider the category of Topological Groups with continuous homomorphisms. Then a continuous homomorphism $f:G\rightarrow H$ with dense range is an epimorphism. Is the converse true? If not, what about for locally compact groups? Even for groups, without topology, this is not trivial -- Wikipedia points me to a simple proof given by Linderholm, "A Group Epimorphism is Surjective", The American Mathematical Monthly Vol. 77, No. 2 (Feb., 1970), pp. 176-177, see http://www.jstor.org/pss/2317336. It is far from obvious to me that this argument extends to the topological case (but perhaps it does). Edit: As suggested in the comments, I really meant to ask about Hausdorff topologies.

gr.group-theory topological-groups ct.category-theory

There is rarely need to use abbreviations in MO questions (and, in your title, calling maps with dense image surjective is strange!) – Mariano Suárez-Alvarez♦ Feb 23 '11 at 22:17

I don't think the abbreviations hinder one's ability to read the question, myself. Though I agree that the way the title is worded is momentarily confusing... – Yemon Choi Feb 23 '11 at 23:00

I replaced the abbreviations - the meaning of 'cts' may not leap out at someone whose first language is not English, and 'homo' is a particularly inelegant abbreviation IMHO. – David Roberts Feb 23 '11 at 23:18

It's not true that continuous homomorphisms with dense range are epimorphisms unless you work in the category of Hausdorff topological groups. This is just because you can give any group the indiscrete topology, and in that context all maps have dense image. Alternatively, you can consider the inclusion $\mathbb{Q}\to\mathbb{R}$, which equalises the projection and the zero map to the non-Hausdorff group $\mathbb{R}/\mathbb{Q}$. – Neil Strickland Feb 24 '11 at 7:42

Mariano is right. I would even say that a mathematical text should never contain any abbreviation. This rule has been observed, I think, by the best authors: Bourbaki (also in English), Grothendieck, Serre, Cartan and his seminarists,... – Georges Elencwajg Feb 24 '11 at 9:15

1 Answer

Google, MathSciNet and some ferreting lead me to MR1235755 (94m:22003) Uspenskiĭ, Vladimir (D-MNCH), The solution of the epimorphism problem for Hausdorff topological groups, Sem. Sophus Lie 3 (1993), no. 1, 69–70, where the review indicates that the answer is negative in general, but positive for locally compact groups; this latter case was apparently treated in MR0492044 (58 #11204) Nummela, Eric C., On epimorphisms of topological groups, Gen. Topology Appl. 9 (1978), no. 2, 155–167. The case of compact groups had been done earlier by Poguntke: MR0263978 (41 #8577) Poguntke, Detlev, Epimorphisms of compact groups are onto, Proc. Amer. Math. Soc. 26 (1970), 503–504, and this apparently inspired the authors of the following paper: MR1338245 (96c:46054) Hofmann, K. H. (D-DARM); Neeb, K.-H. (D-ERL-MI), Epimorphisms of $C^*$-algebras are surjective, Arch. Math. (Basel) 65 (1995), no. 2, 134–137.

It's interesting that the answer is "yes" for Hausdorff topological spaces, and "yes" for groups, but "no" for Hausdorff topological groups. – Greg Marks Feb 23 '11

Many thanks. I guess I really should have been able to find the Nummela paper myself, given its title! – Matthew Daws Feb 24 '11 at 8:51
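For reference, the implication the question takes as given (dense range implies epimorphism, in the Hausdorff case) is the standard one-line argument: if $g,h\colon H\to K$ are continuous homomorphisms into a Hausdorff topological group with $g\circ f=h\circ f$, then

$$\{\,y\in H:\ g(y)=h(y)\,\}$$

is a closed subgroup of $H$ containing the dense set $f(G)$, hence all of $H$, so $g=h$.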
{"url":"http://mathoverflow.net/questions/56453/epimorphisms-have-dense-range-in-tophausgrp?sort=votes","timestamp":"2014-04-19T12:40:18Z","content_type":null,"content_length":"61053","record_id":"<urn:uuid:4036afcd-dfee-4e66-a28a-e46134e56f21>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantitative measurement of infinite dimensionality

I recently encountered the metric mean dimension, which is a numerical metric invariant of (discrete time, compact space) dynamical systems that refines topological entropy for infinite-entropy systems. I am wondering if anything similar can be found in the literature for any metric notion of dimension (let's say that "metric" means bi-Lipschitz invariant). Put another way, I have a compact metric space $X$ that has infinite dimension for any sensible notion of dimension, and I would like to make this statement quantitative. I see two ways to do this. The first one is to mimic the box dimension, and consider the (extra-polynomial) growth rate of the smallest number of $\varepsilon$-balls needed to cover $X$ when $\varepsilon^{-1}$ goes to infinity. This is the simplest way to go, but I am concerned by the fact that box dimension does not behave as nicely as Hausdorff dimension (for example, countable spaces can have positive box dimension). The second one, suggested by Greg Kuperberg, is to mimic Hausdorff dimension but replace the family of "size functions" $(x\mapsto x^s)_s$ by another family with similar properties, like $(x\mapsto \exp(-\lambda/x))_\lambda$. My question is the following: do you know any example of such an invariant in the literature? Where is it used, and for what purpose?

mg.metric-geometry hausdorff-dimension dimension-theory

3 Answers

Gideon Schechtman and I speculated on a notion of dimension (we call it complexity) of a general metric space that comes from the theory of Lipschitz $p$-summing operators that Farmer and I introduced. A metric space has finite complexity provided the Lipschitz $1$-summing norm of the identity function on the space is finite. For an infinite set with all distances one, which we consider a simple metric space, the Lipschitz $1$-summing norm of the identity is two. For $\mathbb{R}^n$, this parameter is about $n^{1/2}$. When the Lipschitz $1$-summing norm of the identity is infinite, the asymptotics of the Lipschitz $(p,1)$-summing norm of the identity as $p$ decreases to one describes the complexity of the space (the point being that for $p>1$, this parameter is always finite and tends to the Lipschitz $1$-summing norm when $p$ decreases to one). For our speculation, see the last paragraph of section 5 in our paper Diamond graphs and super-reflexivity, J. Topology and Analysis 1 (2009), no. 2, 177–189. We have not followed up on this notion and have no idea whether it is good for anything.

Thanks; this construction seems however more difficult to interpret than the more or less straightforward examples I gave. I don't have a feel for how difficult it is to estimate these summing norms. – Benoît Kloeckner Mar 25 '11 at 12:35

It is certainly more difficult to understand than the examples you mentioned, and it is difficult to compute for many examples. It is however quite natural when you come into metric geometry from geometric functional analysis. – Bill Johnson Mar 26 '11 at 12:05

After failing to find any evidence that the notions I asked for have been previously defined, I chose to write things down. The resulting paper is available: A generalization of Hausdorff dimension applied to Hilbert cubes and Wasserstein spaces. Wasserstein spaces were my initial target, while (generalized) Hilbert cubes are handy reference spaces.
By the way, I should stress that using the family of functions $(x\mapsto \exp(-\lambda/x))_\lambda$ suggested in the question is a bad idea: the resulting analogue of Hausdorff dimension is not bi-Lipschitz invariant. One has to use cruder families like $(x\mapsto \exp(-x^{-s}))_s$.

Is it good policy to accept my own answer so that the question is not left open?

I don't see a problem with accepting your own answer. – S. Carnahan♦ Sep 26 '11 at 16:35

See my paper Centered densities and fractal measures, New York Journal of Mathematics 13 (2007) 33-87. Some references are also at the end of it. In particular, Boardman, Goodey, and McClure.

Thanks for the references. I find it somewhat surprising that so much has been done for generalized Hausdorff measures and so little for generalized Hausdorff dimensions. – Benoît Kloeckner Oct 16 '11 at 9:36
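For readers who have not met the "size functions" being generalized in this thread: this is the classical gauge-function (Hausdorff function) construction. For a gauge $h$,

$$\mathcal{H}^h(X)=\lim_{\delta\to 0}\ \inf\Big\{\sum_i h(\operatorname{diam}U_i)\ :\ X\subseteq\bigcup_i U_i,\ \operatorname{diam}U_i\le\delta\Big\},$$

so $h(x)=x^s$ recovers the usual $s$-dimensional Hausdorff measure, and the families discussed above replace $x^s$ with gauges such as $\exp(-x^{-s})$.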
{"url":"http://mathoverflow.net/questions/59281/quantitative-measurement-of-infinite-dimensionality","timestamp":"2014-04-16T19:39:41Z","content_type":null,"content_length":"64940","record_id":"<urn:uuid:d377784e-14ea-4736-939c-af5677a3ea83>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
A Guide to jumping to conclusions in the NBA and Employee of the Year for 2009-2010

Posted on 11/18/2010 by Arturo Galletti

My first move as the manager of the machine shop was to introduce standardized work. – Taiichi Ohno, father of the Toyota Production System

Quality over Quantity. Consistency. These are the hallmarks on which truly excellent organizations are founded. The centerpiece of the Toyota Production System is the reduction of variability. Variability leads to waste and loss, and that is anathema to true success. This is going to get very real, very fast. Before we start slinging tables and numbers, feel free to go read and review the Basics.

Success in the NBA should be no different. I've talked before (see here for example) about the value of consistency night in and night out. Were I a GM, a player/employee who consistently delivers night in and night out would be the ideal employee. One of my priorities would be to have a system to evaluate total productivity and productivity variation for all the players on my roster. Luckily, Wins Produced provides just such a framework for this analysis. The quicker I can complete this analysis and determine the value and potential of my roster, the better my edge against other GMs in trades. My enemy here is time and a small sample size. I want to reach valid conclusions on player talent ahead of the market, but I am aware that the quicker I reach conclusions, the larger my error will be. This post will focus on two things: player variability/reliability and the size of the error introduced by sample size. I want to rate the players and I want to know how quickly I can do it.

For this I'm going to need a hell of a lot of data. Luckily I have Andres Alvarez, and his mad skills (All Powered by Nerd Numbers), at my disposal. Andres went out and did splits for every player and every game for last season. Did you know that 442 players combined for 24796 individual games played in the NBA last season? Now you do, thanks to Andres. With all this Wins Produced data in hand for the 2009-2010 season, I can go off and do an analysis of value, variability and predictive value of the numbers by chronological sample size.

Now, not every game qualifies for this analysis. To qualify as a sample, I'm requiring ten minutes played for a game. For a player to qualify for the ranking/evaluation sample, I want at least 20 game samples and 800 minutes played. For the correlation analysis, I'm going to use players with at least 50 game samples. This leaves 232 players and 16616 game samples for correlation (all players for 2009-2010 with at least 50 games with >10 MP, avg is 71 games) and 286 players and 18936 game samples for the Reliability Value (or Employee of the Year) rankings (minimum 20 games with >10 minutes played & >799 minutes total). Enough with the talking, let's get to variability.

How good is that sample on the internet?

I've said before that the beginning of any season is a tantalizing time full of promise, expectations, but mostly questions. Will my favorite team/player be better or worse than expected? Will a team's surprising/disappointing start prove to be a mirage or be sustained thru 82 games? How fast can we start jumping to conclusions? For the media, who have no real conscience or memory, the answer can be measured in nanoseconds. For the hopefully rational group of people that are my readers, this is a much tougher question. We know of things like the law of large numbers (LLN).
As the number of samples in a data set increases we will get closer and closer to the real value of something and conversely, the error (or more accurately the possibility of it) gets larger and larger the smaller the sample. So rushing to judgement based on a small sample is premature. A larger sample size is called for before we can make any solid conclusions. We know that already. Frankly just saying get a larger sample size is a little fuzzy for my tastes. Luckily, you know me, data, excel and math by now. If there’s some sort of answer to be found, I’m going to give it the ol’Harvard try. Let’s take a look at last year’s game data. We’ll look at numbers for the full year and for chronological game samples for 5,10,15,20,25,30 and 40 games. We’ll look at: • Raw Productivity (ADJP48) correlation to final Season Number: This is how closely the sample correlates to the final full season number for the player • Average Total Error in ADJP48 from Sample to total for season (ADJP48): This is the difference from the sample value to the final full season number for the player expressed in Raw Productivity per 48 minutes (ADJP48) • % Avg. Total Error in ADJP48 from Sample to total for season: This is the difference from the sample value to the final full season number for the player expressed as % of Final season number • Avg. Absolute Error in ADJP48 from Sample to total for season (ADJP48) : This is the absolute difference from the sample value to the final full season number for the player expressed in Raw Productivity per 48 minutes (ADJP48) • % Avg. Absolute Error in ADJP48 from Sample to total for season: This is the absolute difference from the sample value to the final full season number for the player expressed as % of Final season number • Std Deviation of Raw Productivity (ADJP48) correlation to final Season Number: This is how closely the sample variation correlates to the final full season variation for the player • % Avg. Absolute Error in stddev of ADJP48 from Sample to total for season: This is the absolute difference from the sample variation to the final full season variation for the player expressed as % of Final season variation Table Time! Fascinating. Let’s analyze. The second column tells us that once we have a 15 games sample the correlation is above 75% (which is good). 20 games is above 80%, 30 gives us 90% and 40 games is almost a lock at 94%. So at this point in the season player productivity for the full year can be predicted with about 70% accuracy (depending on sample size). In two more weeks this should be close to 80%. In terms of overall error (column #3 & #4), we do not see a lot of variation. This means that league wide productivity and things like position adjustments and replacement levels for players can be very accurately set with a very small sample (5 games yields a 2% variation). As for absolute error (columns 5& 6), we see a similar story as with the correlation data. Right now you’d expect player productivity variation for the rest of the year to be about 15% to 20%. By the middle of december, this’ll be down to about 10%. For the actual variability (columns 7 &8), the results are a little different. Correlation increases more linearly the larger the sample. However for absolute population variation the percentages track absolute error. So at this point you have a fair idea of a player game to game variation. 
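The post doesn't show the computation behind the table, but the idea is straightforward: for each sample size n, average every qualifying player's first n games, compare that with his full-season figure, and summarize across players. A rough sketch of that calculation follows; the frame and column names (game_logs, player, game_no, adjp48) are placeholders, not Arturo's actual dataset.

    import pandas as pd

    def sample_vs_season(game_logs: pd.DataFrame, n_games: int) -> pd.Series:
        """Correlate each player's first n qualifying games with his full season.

        game_logs is assumed to hold one row per (player, game) with columns
        'player', 'game_no' (chronological) and 'adjp48', already filtered to
        games with 10+ minutes played, mirroring the post's cutoff.
        """
        season_avg = game_logs.groupby("player")["adjp48"].mean()
        first_n = (
            game_logs.sort_values("game_no")
            .groupby("player")
            .head(n_games)
            .groupby("player")["adjp48"]
            .mean()
        )
        both = pd.concat({"sample": first_n, "season": season_avg}, axis=1).dropna()
        return pd.Series({
            "corr_to_season": both["sample"].corr(both["season"]),
            "avg_abs_error": (both["sample"] - both["season"]).abs().mean(),
        })

    # e.g. the 5/10/15/20/25/30/40-game rows of the table:
    # results = pd.DataFrame({n: sample_vs_season(game_logs, n) for n in (5, 10, 15, 20, 25, 30, 40)})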
So to synthesize, at this point in the season there's about a 30% uncertainty in the numbers (assuming the data follows the 2009-2010 pattern, but this is a safe assumption). By the end of the year this'll be down to 15%, and by the All-Star break to 5%. I expect this might be improved by eliminating injured players and rookies. That covers the hard math portion of our program. Let's do some fun rankings!

Employee of the Year for the NBA in 2009-2010

A lot of you out there are going through your own year-end evaluations. Hopefully, you feel these are a fair assessment of your contribution to the success of your enterprise. Your value and your consistency were measured, and you were rated fairly in comparison to your peers. So the total opposite of the typical All-NBA balloting. What I will attempt here is to evaluate players based on the guidelines set above. I'll look at numbers for 2009-2010: WP48, Wins Produced, WP48 Std dev, and WP48 I can expect 85% of the time. I'll rank each player in each category and average the ranks. The player with the lowest average rank gets the overall highest rating. If you remember, we have 286 players and 18936 game samples for the Employee of the Year rankings (minimum 20 games with >10 minutes played & >799 minutes total). Table time again:

So LeBron James in a landslide. The top ten is rounded out by: Jason Kidd, Rajon Rondo, Mike Miller, Andre Iguodala, Pau Gasol, Al Horford, Ben Wallace, David Lee and Steve Nash. The bottom ten reflects guys who should not by any means be on your team (sorry Mr. Pargo, Rasheed and others, but ball don't lie). If we use this to do my own All-NBA teams we get: Thabo Sefolosha was a huge surprise in the first team. Zach Randolph for the second team. But I guess together with everyone else on this list, they got the job done night in and night out.

35 Responses to "A Guide to jumping to conclusions in the NBA and Employee of the Year for 2009-2010"

1. You realize if you want me to shut up about Andre Iguodala you can't put him top five on your list! Podcast topic #1 for next week! Consider it marked. Awesome work as always. I'm confused though, shouldn't Kobe be on your first team not Gasol :p

2. Those are some interesting results. Dwight Howard is top 5 in WP48, WP, and Worst Day, but 199th in variability. Basically, he bounces around from average to elite every other game. How strange. It would be interesting to rank the stuff by other categories. Like most variability for usage groupings (ie >27% usage, 20-27%, etc). Most variability for each WP48 grouping (>.3, then .25-.3, then .2-.25, etc). My eyes seem to be telling me ball distributors are the least variable and the shooters and rebounders are the most variable players. I wonder if that's right. Great work. Now the next step is finding the variability in these players (and your ranking) when only up against the top defenses in the league. Say, top 8.

3. Great stuff, now we're really getting somewhere. :) This all brings up a larger question regarding consistency, such as a preference for the consistently bad over the variably brilliant, when that's your basic choice. Not surprisingly, the topic has come up consistently regarding Vladi's playing time. But as of yet, it has only come up in fan discussions – no mainstream medium has touched upon it.

4. So, this raises the following question: If 12 games into the season, there is about 70% correlation to final WP numbers, shouldn't we be able to put some upper and lower bounds on team win totals?

5.
In addition to Howard: Boozer, Durant, C.Paul, G.Wallace, & Duncan all have their values decimated by variability – do we really want a metric that puts Al Jefferson ahead of Duncan and Jonas Jerebko ahead of Dwyane Wade because they are consistently bad and Duncan/Wade are inconsistently great? I think variability might make sense as a factor when comparing players within tiers, but makes no sense in putting players into tiers (if that makes sense).

6. I apologize for not letting go of this, but yesterday I had a question about Gerald Wallace and whether small sample size accounted for his current ranking. You replied: "So we have: Gerald Wallace with 0.283 avg WP48 but with a stdev of .315. That data suggests that either his variability is high (and you can't depend on him every night as he literally disappears) or that his average will come down as the season progresses. A larger sample will reveal the truth. Either one suggests that you can't count on him to carry/build your team around." Now I'm looking at your new table of a measure of variability/consistency and see that Wallace's high variability drops him to 35th on your list. That's not shabby! But he ranked 5th in Wins Produced with 19.4! So I join Neal in thinking "variability might make sense as a factor when comparing players within tiers, but makes no sense in putting players into tiers". In fact, someone like Wallace might be winning you games you have no business winning by going off on a given night. I can already hear the counter-argument that he might lose you games you should have won on the days he disappears. In any case, he presents a nice test of your thesis.

7. How does this turn out if you check correlations to the rest of the season versus the full season? For trade deadline purposes, that's the correlation that matters. It doesn't matter if they're a star for the year including before the trade, but how they perform strictly after. A regression with a couple of years of prior stats and the season up to point X. How much confidence is generated for the rest of the season after X? Is this year's bump for real after X games of >10 MP?

8. Off topic: can we use the numbers to categorize players within positions? I'm thinking that there are three basic types at each position: possessors (at least 2/3rds of production from possessions), scorers (at least 2/3rds of production from scoring), and possessor-scorers (everyone else). Having a way to categorize them would assist in team construction. At some point we might even be able to quantify the synergy (deviation from expected performance) generated based upon different combinations of typed positions in a statistically significant way. The most fundamental question is: is it better to have a team (or frontcourt, backcourt, etc…) of complementary specialists who maximize the natural advantages of each position (i.e. pair a scoring C with a possessor PF or vice versa), or is it better to have balance (possessor-scorers) at every position? I have no idea what the answer to that question is. I would tend to go with the team of specialists, but there is a strategic advantage in the uncertainty that a team with complete balance has when it comes to trying to defend them and in overcoming bad nights by one of the parts.

9. Awesome article!… any way you can put the final player list in a table/spreadsheet for quick searching?

10. It would be interesting to see how this translates to player performance in the post season.
You would think the most consistent would perform better, but maybe the highly variable players are variable because they take it easy too often in the regular season.

11. I think what you're seeing with the high variability players is the heavy weight that WP assigns to rebounds, first, and also to assists. These likely vary more from game to game than shooting efficiency, and since they play such a big role in WP, the players who do well in those categories will have a lot of variance. If you sorted players based on the proportion of their WP derived from rebounds, I think you will find that correlates pretty well with variability. The same may be true for assists. I'd guess that shooting guards and low-rebound forwards will have the least variance.

12. it is truly unbelievable how much Jason Kidd produces night in night out – the perception in the media certainly seems to be that he was Hall of Fame but is now a good piece. It's almost as if you can't mention his name without saying he gets burned by quick guards (who doesn't in the league?). I don't ever hear his name come up as one of the elite or even top 5 point guards, yet the metrics certainly suggest he is one of a handful of top-flight players in the game. Arturo, this work is fantastic and really allows me to watch basketball and enjoy many of the intricate details of the game that actually produce wins.

13. awesome work arturo, i think this analysis is fantastic for finding those mid tier players that won't demand a massive salary but will produce consistent effort and results night in night out – for example sefolosha, jerebko, haywood. these guys would be fantastic value for playoff teams with star players, as you know they will play their role and produce regularly. As we all know, playing to potential can more easily be achieved against the lower ranked clubs, but how do these players perform against the best, in the playoffs when it counts?
{"url":"http://arturogalletti.wordpress.com/2010/11/18/a-guide-to-jumping-to-conclusions-in-the-nba-and-employee-of-the-year-for-2009-2010/","timestamp":"2014-04-20T15:51:30Z","content_type":null,"content_length":"126936","record_id":"<urn:uuid:0a0ba23d-0e69-4e28-95ce-42a91bc491ce>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Modeling Residential Electricity Usage with R

January 30, 2013 By Lloyd Spencer

Wow, I can't believe it has been 11 months since my last blog posting! The next series of postings will be related to the retail energy field. Residential power usage is satisfying to model as it can be forecast fairly accurately with the right inputs. Partly as a consequence of deregulation, there is now more data available than before. As in prior postings I will use reproducible R code each step of the way.

For this posting I will be using data from Commonwealth Edison [ComEd] in Chicago, IL. ComEd makes available historical usage data for different rate classes. In this example I use rate class C23. First we must download the data and fix column and header names:

    # load historical electric usage data from ComEd website
    # edit row and column names

Next we hypothesize some explanatory variables. Presumably electricity usage is influenced by day of the week, time of the year and the passage of time. Therefore we construct a set of explanatory variables and use them to predict usage for a particular hour of the day [16] as follows:

    # construct time related explanatory variables

While all the input variables are highly predictive, something seems to be missing in overall predictive accuracy:

    Call:
    lm(formula = ComEd[, 16] ~ timcols)

    Residuals:
         Min       1Q   Median       3Q      Max
    -1.11876 -0.16639 -0.01833  0.12886  1.64360

    Coefficients:
                    Estimate Std. Error t value Pr(>|t|)
    (Intercept)      1.30253    0.02577  50.545  < 2e-16 ***
    timcolsweekday  -0.08813    0.02226  -3.959 7.93e-05 ***
    timcolsTime      0.06901    0.00959   7.196 1.03e-12 ***
    timcolsS1        0.37507    0.01450  25.863  < 2e-16 ***
    timcolsC1       -0.17110    0.01424 -12.016  < 2e-16 ***
    timcolsS2       -0.23303    0.01425 -16.351  < 2e-16 ***
    timcolsC2       -0.37622    0.01439 -26.136  < 2e-16 ***
    timcolsS3       -0.05689    0.01434  -3.967 7.65e-05 ***
    timcolsC3        0.13164    0.01424   9.246  < 2e-16 ***
    ---
    Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

    Residual standard error: 0.3679 on 1331 degrees of freedom
    Multiple R-squared: 0.6122, Adjusted R-squared: 0.6099
    F-statistic: 262.7 on 8 and 1331 DF, p-value: < 2.2e-16

A plot of the residuals clearly shows something else is going on. As you may have already concluded, there is seasonality in the data beyond what we have fitted with the seasonal variables. Let's load some temperature data for the Chicago area and see how it relates to this data. First we load the weather data from the NOAA database. It is stored in annual fixed-width files which we stitch together:

    # Load weather data from NOAA ftp site
    for (yearNum in 2009:2013)
    # Change missing data to NAs
    # create cross reference date index
    dateInd <- sapply(ComEd[,1], function(x) which(KORDtemps[,1] == x))

Next we plot the residuals from our regression vs. the prevailing max temperature for the day. As expected, there is something going on relating temperature to residuals. In my next blog posting I will discuss approaches to fitting this weather data to the demand model...
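The post doesn't show how the explanatory variables are built, but the coefficient names in the summary (a weekday flag, a time trend, and S1/C1 through S3/C3) read like three pairs of annual Fourier harmonics. Under that assumption, which is a guess rather than the author's actual code, the fitted model for the hour-16 usage series is roughly

$$y_t \approx \beta_0+\beta_w\,\mathbf{1}[\mathrm{weekday}_t]+\beta_T\,t+\sum_{k=1}^{3}\Big(a_k\sin\tfrac{2\pi k\,d_t}{365.25}+b_k\cos\tfrac{2\pi k\,d_t}{365.25}\Big)+\varepsilon_t,$$

with $d_t$ the day of the year.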
{"url":"http://www.r-bloggers.com/modeling-residential-electricity-usage-with-r/","timestamp":"2014-04-16T10:26:14Z","content_type":null,"content_length":"61189","record_id":"<urn:uuid:9a239e75-6714-4883-ac24-65fdf4de36bb>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
least upper bound and greatest lower bound

September 22nd 2007, 08:24 AM

How would you prove this question: Let A and B be nonempty sets of real numbers. Define A-B = {a-b : a in A, b in B}. Show that if A and B are bounded, then sup(A-B) = sup A - inf B and inf(A-B) = inf A - sup B. Thanks a lot for any help.

September 22nd 2007, 09:52 AM

Let $m_1=\inf A, \ M_1=\sup A, \ m_2=\inf B, \ M_2=\sup B$. Let $M=M_1-m_2$. Let $a\in A, \ b\in B\Rightarrow a\leq M_1,\ b\geq m_2\Rightarrow a-b\leq M_1-m_2$, so $M$ is a majorant for $A-B$. Let $\epsilon >0$. Then $M-\epsilon=M_1-\frac{\epsilon}{2}-\left(m_2+\frac{\epsilon}{2}\right)$. There exist $a\in A$ and $b\in B$ such that $a>M_1-\frac{\epsilon}{2}$ and $b<m_2+\frac{\epsilon}{2}$. Then $a-b>M_1-\frac{\epsilon}{2}-\left(m_2+\frac{\epsilon}{2}\right)=M-\epsilon$, so $M-\epsilon$ is not a majorant. So, $M=\sup A-\inf B$ is the supremum of $A-B$. Prove the second part in the same way.

September 22nd 2007, 09:59 AM

Suppose that $U_A = \sup (A)\quad \& \quad L_B = \inf (B)$; then $\left( {\forall x \in A} \right)\left[ {x \le U_A } \right]\quad \& \quad \left( {\forall y \in B} \right)\left[ {y \ge L_B } \right]$. Now this means that $\left( {\forall x \in A} \right)\left[ {x \le U_A } \right]\quad \& \quad \left( {\forall y \in B} \right)\left[ { - y \le - L_B } \right]$. So $U_A - L_B$ is an upper bound for the set $A - B$. Now show that $U_A - L_B$ is the least upper bound. $\left( {\forall \varepsilon > 0} \right)\left[ {\exists a \in A:U_A - \frac{\varepsilon }{2} < a \le U_A } \right]\left[ {\exists b \in B:L_B \le b < L_B + \frac{\varepsilon }{2}} \right]$. From which we see that $\left( {a - b} \right) \in \left[ {A - B} \right]\quad \& \quad \left( {U_A - L_B } \right) - \varepsilon < a - b \le U_A - L_B$. Thus no number less than $U_A - L_B$ is an upper bound for $A - B$, making $U_A - L_B$ the least upper bound for $A - B$. You need to fill in many details. But this is the basic idea.
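A concrete check of the identity, which may help before wading into the epsilon bookkeeping: take $A=[0,1]$ and $B=[2,3]$. Then

$$A-B=[-3,-1],\qquad \sup(A-B)=-1=\sup A-\inf B,\qquad \inf(A-B)=-3=\inf A-\sup B.$$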
{"url":"http://mathhelpforum.com/calculus/19327-least-upper-bound-greatest-lower-bound-print.html","timestamp":"2014-04-20T15:10:50Z","content_type":null,"content_length":"10876","record_id":"<urn:uuid:122d73f9-ae3b-4038-9ed7-3f0a807ab493>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Explicit Construction of Families of LDPC Codes of girth at least six Results 1 - 10 of 18 - IEEE Transactions on Information Theory , 2006 "... Abstract — One approach to designing structured low-density parity-check (LDPC) codes with large girth is to shorten codes with small girth in such a manner that the deleted columns of the parity-check matrix contain all the variables involved in short cycles. This approach is especially effective i ..." Cited by 14 (1 self) Add to MetaCart Abstract — One approach to designing structured low-density parity-check (LDPC) codes with large girth is to shorten codes with small girth in such a manner that the deleted columns of the parity-check matrix contain all the variables involved in short cycles. This approach is especially effective if the parity-check matrix of a code is a matrix composed of blocks of circulant permutation matrices, as is the case for the class of codes known as array codes. We show how to shorten array codes by deleting certain columns of their parity-check matrices so as to increase their girth. The shortening approach is based on the observation that for array codes, and in fact for a slightly more general class of LDPC codes, the cycles in the corresponding Tanner graph are governed by certain homogeneous linear equations with integer coefficients. Consequently, we can selectively eliminate cycles from an array code by only retaining those columns from the parity-check matrix of the original code that are indexed by integer sequences that do not contain solutions to the equations governing those cycles. We provide Ramsey-theoretic estimates for the maximum number of columns that can be retained from the original parity-check matrix with the property that the sequence of their indices avoid solutions to various types of cycle-governing equations. This translates to estimates of the rate penalty incurred in shortening a code to eliminate cycles. Simulation results show that for the codes considered, shortening them to increase the girth can lead to significant gains in signalto-noise ratio in the case of communication over an additive white Gaussian noise channel. Index Terms — Array codes, LDPC codes, shortening, cyclegoverning equations - IEEE Trans. Information Theory , 2006 "... Abstract — Let C be an [n, k, d] binary linear code with rate R = k/n and dual C ⊥. In this work, it is shown that C can be represented by a 4-cycle-free Tanner graph only if: pd ⊥ ≤ $r np(p − 1) + n2 ..." Cited by 7 (0 self) Add to MetaCart Abstract — Let C be an [n, k, d] binary linear code with rate R = k/n and dual C ⊥. In this work, it is shown that C can be represented by a 4-cycle-free Tanner graph only if: pd ⊥ ≤ $r np(p − 1) + "... It is well known that certain combinatorial structures in the Tanner graph of a low-density parity-check code exhibit a strong influence on its performance under iterative decoding. These structures include cycles, stopping/trapping sets and parameters such as the diameter of the code. In general, i ..." Cited by 5 (1 self) Add to MetaCart It is well known that certain combinatorial structures in the Tanner graph of a low-density parity-check code exhibit a strong influence on its performance under iterative decoding. These structures include cycles, stopping/trapping sets and parameters such as the diameter of the code. 
In general, it is very hard to find a complete characterization of such configurations in an arbitrary code, and even harder to understand the intricate relationships that exist between these entities. It is therefore of interest to identify a simple setting in which all the described combinatorial structures can be enumerated and studied within a joint framework. One such setting is developed in this paper, for the purpose of analyzing the distribution of short cycles and the structure of stopping and trapping sets in Tanner graphs of LDPC codes based on idempotent and symmetric Latin squares. The parity-check matrices of LDPC codes based on Latin squares have a special form that allows for connecting combinatorial parameters of the codes with the number of certain sub-rectangles in the Latin squares. Sub-rectangles of interest can be easily identified, and in certain instances, completely enumerated. The presented study can be extended in several different directions, one of which is concerned with modifying the code design process in order to eliminate or reduce the number of configurations bearing a negative influence on the performance of the code. Another application of the results includes determining to which extent a configuration governs the behavior of the bit error rate (BER) curve in the waterfall and error-floor regions. - IEEE Transactions on Information Theory , 2008 "... In this correspondence, we study the minimum pseudo-weight and minimum pseudo-codewords of low-density parity-check (LDPC) codes under linear programming (LP) decoding. First, we show that the lower bound of Kelly, Sridhara, Xu and Rosenthal on the pseudo-weight of a pseudo-codeword of an LDPC code ..." Cited by 4 (0 self) Add to MetaCart In this correspondence, we study the minimum pseudo-weight and minimum pseudo-codewords of low-density parity-check (LDPC) codes under linear programming (LP) decoding. First, we show that the lower bound of Kelly, Sridhara, Xu and Rosenthal on the pseudo-weight of a pseudo-codeword of an LDPC code with girth greater than 4 is tight if and only if this pseudo-codeword is a real multiple of a codeword. Then, we show that the lower bound of Kashyap and Vardy on the stopping distance of an LDPC code is also a lower bound on the pseudo-weight of a pseudo-codeword of this LDPC code with girth 4, and this lower bound is tight if and only if this pseudo-codeword is a real multiple of a codeword. Using these results we further show that for some LDPC codes, there are no other minimum pseudo-codewords except the real multiples of minimum codewords. This means that the LP decoding for these LDPC codes is asymptotically optimal in the sense that the ratio of the probabilities of decoding errors of LP decoding and maximum-likelihood decoding approaches to 1 as the signal-to-noise ratio leads to infinity. Finally, some LDPC codes are listed to illustrate these results. Index Terms: LDPC codes, linear programming (LP) decoding, fundamental cone, pseudo-codewords, pseudo-weight, stopping sets. - Designs, Codes, and Cryptog , 2004 "... We study sets of lines of AG(n, q) and P G(n, q) with the property that no three lines form a triangle. As a result the associated point-line incidence graph contains no 6-cycles and necessarily has girth at least 8. One can then use the associated incidence matrices to form binary linear codes whic ..." Cited by 4 (1 self) Add to MetaCart We study sets of lines of AG(n, q) and P G(n, q) with the property that no three lines form a triangle. 
As a result the associated point-line incidence graph contains no 6-cycles and necessarily has girth at least 8. One can then use the associated incidence matrices to form binary linear codes which can be considered as LDPC codes. The relatively high girth allows for efficient implementation of these codes. We give two general constructions for such triangle-free line sets and give the parameters for the associated codes when q is small. 1 , 2005 "... We study a class of quasi-cyclic LDPC codes. We provide both a Grobner basis approach, which leads to precise conditions on the code dimension, and a graph theoretic prospective, that lets us guarantee high girth in their Tanner graph. Experimentally, the codes we propose perform no worse than rando ..." Cited by 2 (2 self) Add to MetaCart We study a class of quasi-cyclic LDPC codes. We provide both a Grobner basis approach, which leads to precise conditions on the code dimension, and a graph theoretic prospective, that lets us guarantee high girth in their Tanner graph. Experimentally, the codes we propose perform no worse than random LDPC codes with their same parameters, which is a significant achievement for algebraic "... Abstract—We consider turbo-structured low-density paritycheck (TS-LDPC) codes—structured regular codes whose Tanner graph is composed of two trees connected by an interleaver. TS-LDPC codes with good girth properties are easy to construct: careful design of the interleaver component prevents short c ..." Cited by 2 (1 self) Add to MetaCart Abstract—We consider turbo-structured low-density paritycheck (TS-LDPC) codes—structured regular codes whose Tanner graph is composed of two trees connected by an interleaver. TS-LDPC codes with good girth properties are easy to construct: careful design of the interleaver component prevents short cycles of any desired length in its Tanner graph. We present algorithms to construct TS-LDPC codes with arbitrary column weight � ! P and row weight � and arbitrary girth �. We develop a linear complexity encoding algorithm for a type of TS-LDPC codes—encoding friendly TS-LDPC (EFTS-LDPC) codes. Simulation results demonstrate that the bit-error rate (BER) performance at low signal-to-noise ratio (SNR) is competitive with the error performance of random LDPC codes of the same size, with better error floor properties at high SNR. Index Terms—Error floor, girth, interleaver, low-density paritycheck (LDPC) codes, turbo-structured. "... Abstract—In this paper, we determinate the shortest balanced cycles of quasi-cyclic low-density parity-check (QC-LDPC) codes. We show the structure of balanced cycles and their necessary and sufficient existence conditions. Furthermore, we determine the shortest matrices of balanced cycle. Finally a ..." Cited by 1 (1 self) Add to MetaCart Abstract—In this paper, we determinate the shortest balanced cycles of quasi-cyclic low-density parity-check (QC-LDPC) codes. We show the structure of balanced cycles and their necessary and sufficient existence conditions. Furthermore, we determine the shortest matrices of balanced cycle. Finally all nonequivalent minimal matrices of the shortest balanced cycles are presented in this paper. Index Terms—Girth, quasi-cyclic low-density parity-check (QC-LDPC) codes, balanced cycles. I. , 2005 "... In [6] a class of quasi-cyclic LDPC codes has been proposed, whose information rate is 1/2. We generalize that construction to arbitrary rates 1/s and we provide a Grobner basis for their dual codes, under a non-restrictive condition. 
As a consequence, we are able to determine their dimension. 1 ..." Add to MetaCart In [6] a class of quasi-cyclic LDPC codes has been proposed, whose information rate is 1/2. We generalize that construction to arbitrary rates 1/s and we provide a Grobner basis for their dual codes, under a non-restrictive condition. As a consequence, we are able to determine their dimension. 1 , 2004 "... We construct two families of low-density parity-check codes using point-line incidence structures in PG(3, q). The selection of lines for each structure relies on the geometry of the two classical quadratic surfaces in PG(3, q), the hyperbolic quadric and the elliptic quadric. ..." Add to MetaCart We construct two families of low-density parity-check codes using point-line incidence structures in PG(3, q). The selection of lines for each structure relies on the geometry of the two classical quadratic surfaces in PG(3, q), the hyperbolic quadric and the elliptic quadric.
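A small, self-contained illustration of the structure the first abstract exploits (a generic array-code construction for a prime p, written for this listing rather than taken from any of the papers above): the Tanner graph of a binary parity-check matrix H has a 4-cycle exactly when two columns of H share at least two rows, so girth greater than 4 can be checked with one matrix product.

    import numpy as np

    def array_code_H(p: int, k: int) -> np.ndarray:
        """k x p grid of p x p circulant blocks P^(i*j); p should be prime."""
        P = np.roll(np.eye(p, dtype=int), 1, axis=1)   # cyclic shift permutation matrix
        powers = [np.linalg.matrix_power(P, s) for s in range(p)]
        return np.block([[powers[(i * j) % p] for j in range(p)] for i in range(k)])

    def has_4_cycle(H: np.ndarray) -> bool:
        """A Tanner graph has a 4-cycle iff two columns of H overlap in 2+ rows."""
        overlap = H.T @ H
        np.fill_diagonal(overlap, 0)
        return bool((overlap >= 2).any())

    H = array_code_H(p=7, k=3)    # 21 x 49 parity-check matrix
    print(H.shape, "4-cycle present:", has_4_cycle(H))   # (21, 49) 4-cycle present: False

For p = 7, k = 3 this reports no 4-cycles, consistent with the standard fact that array codes built on a prime p have girth at least 6.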
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=549839","timestamp":"2014-04-25T02:13:32Z","content_type":null,"content_length":"37422","record_id":"<urn:uuid:9f870d09-5c81-4583-a98f-32d8a0a3a915>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
Electrons in the Band Gap Thanks... In most semiconductor physics textbooks, they say that the probability of finding an electron at the Fermi energy (the chemical potential, as you say) is 0.5. What does this mean, and why should the probability curve be symmetric about the Fermi level (i.e., 0.5 probability) above zero kelvin? Again, they are using terminology carried over from metals. Look at the Fermi level for a metal. At T > 0 K, the Fermi function starts to evolve from a step function into a curve rounded at the top with a tail at the bottom. At any temperature, the probability of occupation at the Fermi level (given by the Fermi function, i.e., the occupation number) is always one half. So what your text is doing is carrying that definition over to the semiconductor, which is what I said before. It isn't entirely wrong if you "extrapolate" the statistics of the occupation number in the valence band and the conduction band of the semiconductor, but it is sloppy and confusing to say that, since obviously there are NO states in the gap.
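To make the 0.5 statement concrete, here is a small illustrative Python sketch (my own addition, not part of the thread) evaluating the Fermi–Dirac occupation function. It shows the value at the chemical potential is exactly one half, and that f(μ+δ) + f(μ−δ) = 1 — the symmetry asked about — at any nonzero temperature. The numerical values are arbitrary examples.

```python
import math

def fermi_dirac(E, mu, T, k_B=8.617333262e-5):
    """Occupation probability of a state at energy E (eV) for chemical
    potential mu (eV) and temperature T (K); k_B is in eV/K."""
    if T == 0:
        return 1.0 if E < mu else (0.5 if E == mu else 0.0)
    return 1.0 / (math.exp((E - mu) / (k_B * T)) + 1.0)

mu, T = 0.56, 300.0                                   # made-up example values
print(fermi_dirac(mu, mu, T))                         # 0.5 exactly at E = mu
for delta in (0.05, 0.1, 0.2):                        # symmetry: f(mu+d) + f(mu-d) = 1
    print(fermi_dirac(mu + delta, mu, T) + fermi_dirac(mu - delta, mu, T))
```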
{"url":"http://www.physicsforums.com/showthread.php?p=1097540","timestamp":"2014-04-20T03:25:48Z","content_type":null,"content_length":"42525","record_id":"<urn:uuid:0e7c0c16-4eac-4fba-ae8c-d6bb47b8b413>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] feasibility (was PA Incompleteness) Vaughan Pratt pratt at cs.stanford.edu Wed Oct 17 03:30:41 EDT 2007 From: Vladimir Sazunov > 2. Another approach is based on Parikh's formalization of feasible > numbers and, as I believe, more adequate formalization which I > mentioned in my recent posting to FOM > http://www.cs.nyu.edu/pipermail/fom/2007-October/012042.html > Here it is assumed that the "semi-set" (using the term of Vopenka) F of > feasible numbers contains 0,1 and is closed under +, but is bounded by > 2^1000 (Alternatively, forall n log log n < 10 is postulated what means > that 2^1024.) If we are talking about the "real" world here, may I conservatively suggest 2^^6, a stack of six 2's? (Less conservatively, four 2's and a 3, but six 2's is simpler as well as offering a handy margin of error. The notation 2^^6 adapts to ASCII Knuth's notation for the next operation after exponentiation in the Grzegorczyk hierarchy.) Rationale: it is plausible that there are fewer than 2^256 qubits in what we understand by the "real world," which therefore would have at most 2^(2^256) observable states, noting that 256 = 2^(2^3) < 2^^4, whence a state count of six 2's or 2^^6. 2^1024 is tiny, being the square root of the number of memory states of the MITS Altair 8800, which at $400 in 1975 was the first PC readily available to consumers. Its 256 bytes of RAM gave it 2^2048 states for its RAM alone, rather less than a stack of four 2's (2^65536 = 2^^4, the number of states of RAM in the enhanced MAI JOLT kit I assembled in September 1975). For that money today you can buy a terabyte of hard drive offering well over 2^^5 states. Another two in the stack takes you to 2^^6, which should see you through the foreseeable future of the universe. Unless of course you were planning to reason about the universe with either dynamic or temporal logic. In that case you would appear to be looking at 2^^7 possible predicates on the states of the universe. On the other hand the counterargument implied by http://boole.stanford.edu/pub/coimbra.pdf suggests 2^^5 as a more reasonable estimate of the number of such predicates, on the ground that the 2^^6 states form a complete atomic Boolean algebra or CABA, predicates on which are better understood as complete ultrafilters, i.e. the qubits themselves as the atoms of the CABA, than simply as subsets of its underlying set. Number theory will no doubt demand larger numbers, feasibility be damned, but for reasoning about the real world, 2^^6, a stack of six 2's, may well suffice. Vaughan Pratt More information about the FOM mailing list
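For readers unfamiliar with the up-arrow notation used above, the following small Python sketch (my own illustration, not part of Pratt's post) evaluates Knuth's 2^^n tower and compares the magnitudes mentioned by counting decimal digits rather than printing the numbers themselves.

```python
def tetrate(base, height):
    """Knuth's base^^height: a right-associated power tower of the given height."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

# 2^^4 and 2^^5 are still ordinary Python integers.
print(tetrate(2, 4))                    # 65536
print(len(str(tetrate(2, 5))))          # 19729 decimal digits
# 2^^6 = 2**(2**65536) is far too large to materialise; even the *number of
# digits* of 2^^6 is roughly 2^^5 * log10(2), itself a ~19729-digit number.
# For scale, the "tiny" 2**1024 mentioned above has:
print(len(str(2 ** 1024)), "digits")    # 309
```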
{"url":"http://www.cs.nyu.edu/pipermail/fom/2007-October/012063.html","timestamp":"2014-04-20T21:03:24Z","content_type":null,"content_length":"5193","record_id":"<urn:uuid:a9784e6a-244f-4881-ac50-42c5d040b1ec>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Combinations and Spans

Throughout this discussion I am assuming we are dealing with a real vector space V. Every time I mention a collection of vectors I will assume that these vectors are in V. Vectors will usually be in bold, for example w or v. Scalars will be in italics, for example k or c. Remember, for real vector spaces scalars are just real numbers. For complex vector spaces scalars are complex numbers. Typical vector spaces are:
● R^n = {all n-tuples with real entries} = { (x[1], x[2], … , x[n]) | x[i] is a real number }
● M[mn] = {all m by n matrices with real entries}
● P[n] = {all polynomials of degree less than or equal to n}
● C(a,b) = {set of all continuous functions on the interval (a,b)}
● C^r(a,b) = {set of all functions which have r continuous derivatives on (a,b)}
Subsets are denoted by ⊆. Subspaces will be denoted by ≤.

Definition: A vector w is a linear combination of the vectors v[1], v[2], … , v[r] if it can be expressed in the form w = k[1]v[1] + k[2]v[2] + … + k[r]v[r], where k[1], k[2], … , k[r] are scalars. If r = 1 then we have w = k[1]v[1], and we say that w is a scalar multiple of v[1].

Lemma 1: If a nonzero vector w is a scalar multiple of v then v is a scalar multiple of w.
│Think │Write │
│Remember in a proof always start the proof by feeding back the given, which is called the hypothesis. │Since w is a scalar multiple of v, then w = kv for some scalar k. │
│Check to make sure we have used all of the given. We never used that w is nonzero. What does that tell us? │Since w is nonzero, k must be nonzero, because if k were zero we would have w = kv = 0v = 0. │
│We want to write v as a scalar multiple of w. This means we need to write "v equals". │Since k is nonzero we can write v = (1/k)w. │
│Make sure to state your conclusion. │Therefore v is a scalar multiple of w. │

Ex. 1: -14+9x+4x^2 is a linear combination of 2+3x and 5-x^2 since -14+9x+4x^2 = 3(2+3x) + (-4)(5-x^2).

Ex. 2: -14+8x+4x^2 is not a linear combination of 2+3x and 5-x^2, since if -14+8x+4x^2 = k[1](2+3x) + k[2](5-x^2) then this could be rearranged to -14+8x+4x^2 = (2k[1]+5k[2]) + (3k[1])x + (-k[2])x^2. Two polynomials are equal if their coefficients are equal. So the above equation becomes 3 equations in two unknowns:
2k[1] + 5k[2] = -14
3k[1] = 8
-k[2] = 4
This system has no solution (you can eyeball it or use matrices). So there are no scalars that work; therefore -14+8x+4x^2 is not a linear combination of 2+3x and 5-x^2.

Since the scalars are found by solving a system of equations, it is possible to have infinitely many solutions for the choice of scalars.

Ex 3: Here w = 2v[1]+v[2]+v[3] = -1v[1]+2v[2]+2v[3]. When a vector can be written as two different linear combinations of a set of vectors, then we say {v[1], v[2], v[3]} form a dependent set. We will come back to this later.

Because saying "vector w is/is not a linear combination of the vectors v[1], v[2], … , v[r]" is so wordy, mathematicians of course invented another word to use instead of this phrase.

Definition: Span{v[1], v[2], … , v[r]} is the set of all linear combinations of v[1], v[2], … , v[r].

To save me typing let S be the set of vectors v[1], v[2], … , v[r]. In the definition above span is used as a noun. It is a set too. Notice that v[i] ∈ Span{S} for 1 ≤ i ≤ r, so S ⊆ Span{S} ⊆ V. You can also take the span of V, but Span{V} = V. Now if vector w is a linear combination of the vectors v[1], v[2], … , v[r] we will write w ∈ Span{S}. If vector w is not a linear combination of the vectors v[1], v[2], … , v[r] we will write w ∉ Span{S}.

Theorem 2: If S is the set of vectors v[1], v[2], … , v[r] in V then,
a. Span{S} is a subspace of V.
b. If any subspace W of V contains S then W also contains Span{S}. I.e., if W ≤ V and S ⊆ W then Span{S} ≤ W.

Because of the second part of the Theorem, Span{S} is said to be "the smallest subspace of V that contains S". If we let U = Span{S} then we say v[1], v[2], … , v[r] span U. Here the first span in Span{S} is used as a noun, but the second span is used as a verb.

Proof: Part a. To prove a subset is a subspace you must only show three things.
│Think │Write │
│You must show it is nonempty. Can you show zero is in Span{S}? │Since 0 = 0v[1]+0v[2]+ … +0v[r], we have 0 is a linear combination of the vectors in S, so it is in Span{S}. Thus Span{S} is nonempty. │
│You need to check closure. Make sure to start with a hypothesis telling where things come from. │Let w[1] and w[2] be vectors in Span{S}. Let a be a real number. │
│Interpret your hypothesis. │This means that w[1] and w[2] are linear combinations of the vectors in S. │
│Sometimes you need to further interpret. Notice that you must use different letters for the scalars, since if you used the same letters you would be implying that the scalars were equal, which would mean the vectors would be equal. │Therefore there exist scalars k[1], k[2], … , k[r] such that w[1] = k[1]v[1]+k[2]v[2]+ … +k[r]v[r], and there are scalars m[1], m[2], … , m[r] such that w[2] = m[1]v[1]+m[2]v[2]+ … +m[r]v[r]. │
│Now you need to show that w[1]+aw[2] is in Span{S}. What does this mean? It means that this new vector can be written as a linear combination of the vectors in S. This normally involves plug and chug. │w[1]+aw[2] = (k[1]v[1]+k[2]v[2]+ … +k[r]v[r]) + a(m[1]v[1]+m[2]v[2]+ … +m[r]v[r]) = (k[1]+am[1])v[1] + (k[2]+am[2])v[2] + … + (k[r]+am[r])v[r]. The last equality followed because vectors satisfy all of our normal associative, commutative, and distributive rules under vector addition and scalar multiplication. │
│Make a conclusion about closure. │Thus w[1]+aw[2] ∈ Span{S}, so Span{S} is closed under vector addition and scalar multiplication. │
│Make a conclusion about the Theorem. │Since Span{S} is nonempty and closed under vector addition and scalar multiplication, it follows that Span{S} is a subspace of V. │

Proof: Part b.
│Think │Write │
│Remember in a proof always start the proof by feeding back the given, which is called the hypothesis. │Let W be a subspace of V that contains S. │
│To show that W contains Span{S} means you are showing Span{S} ⊆ W. To do this you must start with an element in Span{S} and show it is also in W. │Let w be a vector in Span{S}. │
│Interpret. │Therefore there exist scalars k[1], k[2], … , k[r] such that w = k[1]v[1]+k[2]v[2]+ … +k[r]v[r]. │
│Somehow connect to the given. What does the given mean? It means W is closed under vector addition and scalar multiplication. Now this was for two vectors; by induction, if you add up r vectors in W the resulting vector is in W. │Since the vectors v[i] ∈ W and W is closed under vector addition and scalar multiplication, it follows that w is in W. │
│Conclude. │Therefore Span{S} ⊆ W. │

Ex 4: Describe Span{(1,2)}. Span{(1,2)} = { a(1,2) where a is a scalar }. This is the straight line through the origin with slope 2.

Ex 5: Show that Span{ (1,0,3,4), (3,0,-1,3) } ⊂ Span{ (1,0,3,4), (3,0,-1,3), (2,1,5,6) }.
Since (1,0,3,4) and (3,0,-1,3) are in Span{ (1,0,3,4), (3,0,-1,3), (2,1,5,6) }, part b of the theorem above tells us that Span{ (1,0,3,4), (3,0,-1,3) } ⊆ Span{ (1,0,3,4), (3,0,-1,3), (2,1,5,6) }. We must still show that the sets cannot be equal.
To do this you must find one element of the larger set that is not in the smaller set, or you must show that the "size" of the larger set is bigger than the "size" of the smaller set. Notice that (2,1,5,6) is not in Span{ (1,0,3,4), (3,0,-1,3) }, so the sets are not equal. If the third vector in the larger set had a 0 in the second component it would not be as easy to say it was not in the smaller set; you would have to test whether or not it was, like in Ex 1 and 2.

Ex 6: Show Span{ (1,-2,3), (5,6,-1), (3,2,1) } = Span{ (5,6,-1), (3,2,1) }.
Like above, it follows immediately from Theorem 2 that Span{ (1,-2,3), (5,6,-1), (3,2,1) } ⊇ Span{ (5,6,-1), (3,2,1) }. To finish the proof you must show that Span{ (1,-2,3), (5,6,-1), (3,2,1) } ⊆ Span{ (5,6,-1), (3,2,1) } too. (Don't forget how to show subsets!) Let w ∈ Span{ (1,-2,3), (5,6,-1), (3,2,1) }. (Interpret: what does this mean?) Therefore there exist scalars k[1], k[2], k[3] such that w = k[1](1,-2,3)+k[2](5,6,-1)+k[3](3,2,1). We need to show that we can find scalars, which will be functions of k[1], k[2], k[3], such that w = m[1](5,6,-1)+m[2](3,2,1). This changes to a system:
k[1] + 5k[2] + 3k[3] = 5m[1] + 3m[2]
-2k[1] + 6k[2] + 2k[3] = 6m[1] + 2m[2]
3k[1] - k[2] + k[3] = -1m[1] + 1m[2]
Despite its backward form this is a system of 3 equations in two unknowns m[1] and m[2]. The left side of each of the above equations is just a real number; it is not unknown, since the scalars k[1], k[2], k[3] are not unknown. Solving this system we get m[1] = -k[1]+k[2] and m[2] = 2k[1]+k[3]. Since there is a solution to our system we could find the scalars, thus w ∈ Span{ (5,6,-1), (3,2,1) }. This implies that Span{ (1,-2,3), (5,6,-1), (3,2,1) } ⊆ Span{ (5,6,-1), (3,2,1) }. We therefore can conclude that Span{ (1,-2,3), (5,6,-1), (3,2,1) } = Span{ (5,6,-1), (3,2,1) }.

Independent and Dependent Sets

When the span of a set of vectors does not change by removing one of its members, we say that the larger set of vectors is a dependent set, or the vectors themselves are said to be dependent. So the larger set of vectors { (1,-2,3), (5,6,-1), (3,2,1) } in Example 6 forms a dependent set. To determine if the smaller set was a dependent set you would have to try removing a vector and see if the span changes. Be careful: if you remove one and the span does change, that does not mean the set is not dependent. You would have to then try removing a different one and check again, etc., until you have tried removing every one. If the span always changes then the set is not dependent. This is a lousy way to check for dependency. Before I can give you a better way I need to remind you about notation. A subset T of a set S is called proper if it is nonempty and is not all of the vectors in S. If you know T is a proper subset of S you should write T ⊂ S; if you don't know if it is proper or you don't care, you write T ⊆ S.

Definition: A set S of vectors v[1], v[2], … , v[r] is a dependent set of vectors if there exists a choice of scalars k[1], k[2], … , k[r], with at least one scalar not being zero, such that k[1]v[1]+k[2]v[2]+ … +k[r]v[r] = 0.

Sometimes it is easier to use the following: The set S is a dependent set if any of the following occur.
a. Span{S} = Span{T} where T is some proper subset of S. OR
b. One vector in S can be written as a linear combination of the others. OR
c. S contains the zero vector.
Choices a and b could actually have been used for the definition; in other words the definition is equivalent to bullet a, which is equivalent to bullet b.
Bullet c is not equivalent to the definition. Bullet c says if you know your set contains the zero vector you know it is dependent, but if you know it does not contain the zero vector you don't know anything.
● Bullet c implies definition. If S contains the zero vector then for the purpose of this proof we can assume that the zero vector is v[1], so we have 5·0 + 0v[2] + … + 0v[r] = 0, where the first scalar, 5, is not zero.
● Definition implies bullet b. In the definition assume that the nonzero scalar is k[d]. Then the equation in the definition can be rewritten as v[d] = -(k[1]/k[d])v[1] - … - (k[d-1]/k[d])v[d-1] - (k[d+1]/k[d])v[d+1] - … - (k[r]/k[d])v[r]. So v[d] is a linear combination of the others.
● b implies definition. Assume that v[d] is a linear combination of the others. So v[d] = m[1]v[1] + … + m[d-1]v[d-1] + m[d+1]v[d+1] + … + m[r]v[r]. Rearranging we obtain m[1]v[1] + … + m[d-1]v[d-1] + (-1)v[d] + m[d+1]v[d+1] + … + m[r]v[r] = 0.
● Definition implies bullet a. Since the definition implies bullet b, it suffices to show that bullet b implies bullet a. Assume that v[d] is a linear combination of the others. Let T be the set of all vectors in S except for v[d]. By Theorem 2 we have that Span{T} ⊆ Span{S}. We must show that Span{S} ⊆ Span{T}. Let w ∈ Span{S}; then there exist scalars k[1], k[2], … , k[r] such that w = k[1]v[1] + k[2]v[2] + … + k[r]v[r]. Substituting our equation for v[d] into this last equation gives w as a linear combination of the vectors in T. In other words w ∈ Span{T}. We conclude that Span{T} = Span{S}.
● Bullet a implies definition. Since bullet b implies the definition, it suffices to show that bullet a implies bullet b. So we are given that there exists some proper subset T of S with Span{T} = Span{S}. Since T is proper there is at least one vector in S that is not in T. Assume it is v[d]. Since v[d] ∈ S ⊆ Span{S} = Span{T}, it follows that v[d] is a linear combination of the vectors in T. But since the vectors in T are also in S, it follows that this linear combination can also be viewed as a linear combination of vectors in S. This is just the statement in bullet b. So we are done.

Definition: A set is called independent if it is not dependent. In other words a set is independent if
a. Span{T} ⊂ Span{S} for every proper subset T of S. OR
b. No vector in S can be written as a linear combination of the others. OR
c. If you try to solve k[1]v[1]+k[2]v[2]+ … +k[r]v[r] = 0 for the scalars, the only solution is k[1] = k[2] = … = k[r] = 0.

Now don't forget all of Chapters 1 and 2. If the equation in bullet c in the definition of independent translates to r equations and r unknowns, it has a unique solution iff the determinant is nonzero; thus if the determinant is nonzero the set is independent, and if the determinant is zero the set will be dependent. If instead it translates to m equations and r unknowns where r is bigger than m, then there are infinitely many solutions, so the set will be dependent.

Ex 7: Determine if the set of vectors { (1,-2,3), (5,6,-1), (3,2,1) } is independent or dependent a different way than in Example 6.
Assume we can find scalars k[1], k[2], k[3] such that k[1](1,-2,3)+k[2](5,6,-1)+k[3](3,2,1) = 0 = (0,0,0). This becomes (k[1]+5k[2]+3k[3], -2k[1]+6k[2]+2k[3], 3k[1]-k[2]+k[3]) = (0,0,0), which translates to three equations and three unknowns by setting the first coordinate equal to zero, the second coordinate equal to zero, etc.
k[1] + 5k[2] + 3k[3] = 0
-2k[1] + 6k[2] + 2k[3] = 0
3k[1] - k[2] + k[3] = 0
So we can use determinants: the determinant of the coefficient matrix is 1(6·1 - 2·(-1)) - 5((-2)·1 - 2·3) + 3((-2)(-1) - 6·3) = 8 + 40 - 48 = 0. Thus the vectors are dependent. Much easier, right?
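As a quick numerical check of Ex 7 (my own illustration, not part of the original notes), the same determinant test can be run with NumPy; the given vectors are placed as the columns of the matrix.

```python
import numpy as np

# Columns are the vectors from Ex 7: (1,-2,3), (5,6,-1), (3,2,1).
A = np.array([[1, 5, 3],
              [-2, 6, 2],
              [3, -1, 1]], dtype=float)

print(np.linalg.det(A))          # ~0.0, so the columns are linearly dependent
print(np.linalg.matrix_rank(A))  # 2, confirming one vector is redundant
```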
Please notice that the vectors given appear as columns in our matrix or determinant, not as rows.

Ex 8: Determine if the set of vectors {sin^2x, cos^2x, 5} is an independent or dependent set.
We start the same way. Assume we can find scalars k[1], k[2], k[3] such that k[1]sin^2x + k[2]cos^2x + k[3]5 = 0 regardless of the value of x. You cannot change the scalars if x changes. In other words you actually have infinitely many equations: k[1]sin^2(π) + k[2]cos^2(π) + k[3]5 = 0, k[1]sin^2(2π) + k[2]cos^2(2π) + k[3]5 = 0, k[1]sin^2(0) + k[2]cos^2(0) + k[3]5 = 0, k[1]sin^2(π/2) + k[2]cos^2(π/2) + k[3]5 = 0, etc. Even though we used four values of x we only really got two different equations. This is not a good approach. Maybe you remember your trig and "see" the answer as dependent because 5sin^2x + 5cos^2x + (-1)5 = 0. If you don't, and the functions are differentiable, you can try to see if the Wronskian is helpful.

Theorem 3: Given a set of functions f[1], f[2], … , f[n] in C^(n-1)(-∞,∞). If the set is dependent then the Wronskian W(x) = 0 for every x in the interval (-∞,∞).
Proof: To save me typing I will assume n = 2; if n is bigger the same argument works, but with more typing. Since the set is dependent we can find scalars k and m, at least one of which is nonzero, such that kf + mg = 0. The 0 vector is the function Zero(x) = 0 for all real x in the interval (-∞,∞). So our equation becomes kf(x) + mg(x) = 0 for all real x in the interval (-∞,∞). Differentiating, we also have kf′(x) + mg′(x) = 0 for all real x. Since either k or m is nonzero, this two-equation system in k and m has a nontrivial solution for every x in the interval (-∞,∞). This implies that for every x in (-∞,∞) the coefficient matrix is not invertible, or equivalently that its determinant, the Wronskian, is zero for every x in (-∞,∞). Thus if the functions are linearly dependent in C^1(-∞,∞) then W(x) = Zero(x) = 0 for every x in the interval (-∞,∞).

Lemma 4: If there exists an x such that W(x) ≠ 0 then the functions are independent.
Proof: If they were dependent then W(x) = 0, contradiction.

Remark: Although Theorem 3 and Lemma 4 are often helpful, you must remember that the converse of Theorem 3 is not true. In other words, if W(x) = 0 for all x in an interval [a,b] it does not mean the functions are dependent in C^r[a,b].

Ex 9: Show that if f = sin^2x and g = |sinx|sinx then W(x) = 0 on the interval [-1,1] but the vectors are independent on [-1,1].
W(x) = f(x)g′(x) - f′(x)g(x) = sin^2x·(|sinx|sinx)′ - (sin2x)(|sinx|sinx). Remember to think in radians. Since on the interval [0,1] sinx and sin2x are both positive, the absolute values are not needed, and then it is clear that W(x) = 0. On the interval [-1,0) sinx is negative and sin2x is negative, so the expression becomes W(x) = sin^2x(-sin2x) - sinx·sin2x·(-sinx) = 0. So we have W(x) = 0. Now if we just look at kf(x) + mg(x) = 0 when x = 1 and when x = -1, we obtain the two equations k + m = 0 and k - m = 0, which imply k = m = 0. Thus the functions are independent.

Ex 10: Show that f = sin^2x, g = cos^2x, h = 5 is a dependent set and W(x) = 0. We already showed dependence in Ex 8, and then Theorem 3 guarantees W(x) = 0 for every x.

Ex 11: Determine if the vectors f = x and g = sinx form a dependent set or independent set of vectors in C^1(-∞,∞). Calculate the Wronskian: W(x) = x·cosx - 1·sinx = x cosx - sinx. This function has value -π when x = π, so it is not identically zero. Lemma 4 tells us that the functions must therefore be independent.
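A symbolic check of Ex 11 (again my own addition, not from the notes) computes the same Wronskian with SymPy and evaluates it at x = π.

```python
import sympy as sp

x = sp.symbols('x')
f, g = x, sp.sin(x)

# Wronskian as the 2x2 determinant | f  g ; f'  g' |
W = sp.Matrix([[f, g], [sp.diff(f, x), sp.diff(g, x)]]).det()
print(sp.simplify(W))        # x*cos(x) - sin(x)
print(W.subs(x, sp.pi))      # -pi, nonzero, so f and g are independent (Lemma 4)
```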
{"url":"http://homepage.smc.edu/mazorow_moya/Math13/Classnotes/linear_combinations_and_spans.htm","timestamp":"2014-04-18T13:07:34Z","content_type":null,"content_length":"72616","record_id":"<urn:uuid:8636ff8b-8d3b-47f4-8516-79da1463f9fb>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Emerson, NJ Math Tutor Find an Emerson, NJ Math Tutor ...As long as the student is trying hard to understand something, I will not give up on trying to help them learn. TRAVEL I am based in Manhattan, however I am able to travel to other boroughs or Jersey via train. Depending on your distance and the time needed to travel, I may request an additional amount to my rate to compensate for travel time/train fare. 4 Subjects: including algebra 1, algebra 2, geometry, prealgebra I recently graduated magna cum laude from Ramapo College with a bachelor's degree in biochemistry. I also completed the college honors program, and a two-year research program to earn TAS research honors with distinction. Throughout college, I worked as both an English tutor and a chemistry tutor for college students, and gained experience with a wide variety of learning types. 17 Subjects: including calculus, phonics, public speaking, algebra 1 ...I hold teaching certifications in both NJ & NY, K-12, and am highly qualified in math and science. I am currently working at a high school as the Science department chair and as a biology and environmental science teacher. I have tutored students in all of these areas for over 25 years. 16 Subjects: including prealgebra, reading, SAT math, ACT Math ...I genuinely love to teach math and truly care about my students. Also, I have a strong background in math prepping for the SAT, ACT, GRE, GMAT, ISEE, and SSAT. My results have been very good in improving test scores. 22 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel My name is Amarachi, and I enjoy teaching Math, Science and Business. Tutoring provides me with a continuous opportunity to improve in my learning and teaching skills because it constantly challenges me to adapt to new skill sets in order to best serve the needs of my student. I am well equipped to be Math, Science and Business Tutor. 21 Subjects: including calculus, chemistry, elementary (k-6th), Microsoft Excel
{"url":"http://www.purplemath.com/emerson_nj_math_tutors.php","timestamp":"2014-04-20T11:14:35Z","content_type":null,"content_length":"23932","record_id":"<urn:uuid:a33d4c46-4a26-4d43-aa60-462ab73efc31>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Using Fractions Chapter 3: Using Fractions Created by: CK-12 In Math 7, the learning content is divided into Concepts. Each Concept is complete and whole, providing focused learning on an indicated objective. Theme-based Concepts provide students with experiences that integrate the content of each Concept. Students are given opportunities to practice the skills of each Concept through real-world situations, examples, guided practice and independent practice sections. There are also video links provided to give students an audio/visual way of connecting with the content. In this Chapter, students will engage with Concepts on comparing and ordering fractions, operations with fractions, mixed numbers, operations with mixed numbers, relationships between fractions and decimals, measuring with customary units of measurement and converting customary units of measurement. Chapter Outline Chapter Summary Having completed this chapter, students are now ready to move on to the next Chapter. Each Concept has provided students with an opportunity to engage and practice skills in many Concepts including comparing and ordering fractions, operations with fractions, mixed numbers, operations with mixed numbers, relationships between fractions and decimals, measuring with customary units of measurement and converting customary units of measurement.
{"url":"http://www.ck12.org/book/CK-12-Middle-School-Math-Concepts---Grade-7/r2/section/3.0/","timestamp":"2014-04-16T16:50:16Z","content_type":null,"content_length":"101768","record_id":"<urn:uuid:de7a52d1-251f-4570-baae-ad506af474b9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry Tutors Smyrna, GA 30080 Everything about Mathematics and Economics: High school to college ...I have also worked with pivot tables and their relationships and macros. Also, I have done package installations, printing with/without grids, copying and pasting to word and hence print. , as one of the foundations of Mathematics, involves study of various... Offering 10+ subjects including geometry
{"url":"http://www.wyzant.com/geo_Atlanta_geometry_tutors.aspx?d=20&pagesize=5&pagenum=5","timestamp":"2014-04-16T08:00:37Z","content_type":null,"content_length":"64265","record_id":"<urn:uuid:1d977026-04e0-4860-a8cc-60f61416f168>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
2667 -- Barber's problem Barber's problem Time Limit: 1000MS Memory Limit: 65536K Total Submissions: 657 Accepted: 151 Tom is a barber in a small town. He owns a little store, several apprentices and good credit for his high quality service. However, as his customers become more and more, he got some problems. Let's look at the procedure of the haircut at first. Customers are served to change clothes for haircut; after having the hair washed, start the haircut; wash the hair again after that, get the hair dried, change the clothes back and then pay. In Tom's store, he does every haircut, dry work and cash job by himself and leaves the changing clothes and washing hair to his apprentices. Assume each action takes 1 unit time, as shown in Fig. 1. Usually, Tom pipelines the whole procedure, so that he could serve several customers at the same time. Here is the simple case for two customers: However, when three customers are coming together, the simple pipeline will meet the problem: At time 5 and time 7, both customer 1 and 3 need Tom. It is too bad, Tom could only do one thing in 1-unit time, but how could he let his customers wait for him with their hair wet? Of course this is not a big deal for Tom. He just add some wait units in his procedure, as described in Fig. 4: Nobody will complain for a little rest after haircut and dry, and now all three customers are served in time (Fig. 5). But what should Tom do for more complicated situations? Now Tom needs your help. Assuming Tom is able to know when the customer comes, you will judge whether it is possible to add some wait units into the procedure to satisfy: (1) Enable all the customers to be served as soon as they come (Customers are impatient and will leave if they could not be served in time). (2) Eliminate the collisions in the pipeline (The collision means that Tom has to serve two and more than two customers at the same time, but many customers can be served to change clothes or wash hair at the same time, because there are enough apprentices) Some other notes you need to keep in mind: (1) The wait units you add must be finite. (2) The wait units can be adjacent. (3) The procedure for every customer should be the same. Nobody is happy to wait more time than others. (4) You can only add wait units into procedure, but not exchange or remove any existing step. The original sequence must be kept: Change-Wash-Cut-Wash-Dry-Change-Pay. There are several test cases in the input. The following lines describe test cases. Each line for one case is given in such format: (n[1], n[2], ... n[k] ) 0 < n[i] <= 20, 0 < k <= 20 It gives the time when customers come. n[i] represents the interval time of each sequential customer. The parenthesis means that the customer will come periodically. For example, assuming the first customer at time 1, if the customers arrive at 1, 3, 5, 7, 9, 11, 13, ... the representation is (2). The representation (1,3,7) represents that the customers will come at time 1, 2, 5, 12, 13, 16, 23, 24, 27, 34... The example shown in Fig. 5 the description corresponds to the representation (1), and of course more customers will come in time 4, 5, 6, 7... A line containing "(0)" ends the input. Your task is to decide whether there is a solution to add FINITE wait units into the haircut procedure to eliminate the collision in the pipeline and enable all the customers to be served as soon as he comes. If it can be solved, print "Yes", otherwise "No", each in one line. 
For example, for the input (1), you can add 2 wait units as shown in Fig. 5 above to solve the situation of 3 customers, but when more and more customers come, you can't satisfy everyone except adding more and more wait units. There is not an end. So the answer is "No". For the input (2,3,7), you may add 4 wait units into the procedure to satisfy the pipeline, as explained in Fig. 6. So the answer is "Yes". Sample Input Sample Output Beijing 2005 Preliminary
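Only the spacing between Tom's three personal tasks (cut, dry, pay) matters for collisions, since shifting the whole procedure shifts every customer equally. The following rough Python sketch (my own illustration, not an accepted judge solution) fixes a choice of gaps between those tasks and simulates the first few hundred customers of a periodic arrival pattern, reporting whether Tom is ever double-booked; the particular gap values tried below are assumptions for illustration.

```python
def arrivals(pattern, count, first=1):
    """Arrival times generated by a periodic interval pattern such as (2, 3, 7)."""
    times, t, i = [], first, 0
    while len(times) < count:
        times.append(t)
        t += pattern[i % len(pattern)]
        i += 1
    return times

def collision_free(pattern, gap_cut_dry, gap_dry_pay, customers=300):
    """Tom is needed at offsets 2, 2+g1 and 2+g1+g2 after each arrival
    (g1, g2 >= 2, since a wash or a clothes change separates his tasks).
    Return False if two customers ever claim the same instant of his time."""
    offsets = (2, 2 + gap_cut_dry, 2 + gap_cut_dry + gap_dry_pay)
    busy = set()
    for a in arrivals(pattern, customers):
        for off in offsets:
            t = a + off
            if t in busy:
                return False
            busy.add(t)
    return True

print(collision_free((1,), 2, 2))        # False: the unpadded layout fails, as in Fig. 3
print(collision_free((2, 3, 7), 4, 4))   # True for this padded layout (4 extra wait units)
```

A finite simulation like this can only refute a particular padding; showing that no finite padding can ever work (as for the pattern (1)) needs the periodicity argument the statement hints at.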
{"url":"http://poj.org/problem?id=2667","timestamp":"2014-04-20T13:19:18Z","content_type":null,"content_length":"9758","record_id":"<urn:uuid:32c8d553-a824-4632-9191-4f81c0c53fe0>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
Advanced Calculus/Real Analysis Help March 7th 2013, 09:54 AM #1 Mar 2013 Advanced Calculus/Real Analysis Help Hello Everyone! I have 2 questions that I am extremely stuck on and I was wondering if anyone knew how to help! Anything would be great. Here they are: (1) Use Bolzano-Weierstrass to prove the intermediate value theorem: "If f is a continuous function on [a,b] and f(a) & f(b) have opposite signs, then there is a point c on (a,b) such that f(c)=0." (2) Prove that every convergent sequence has an increasing or decreasing subsequence, or maybe both. Do so using the fact that the limit as n goes to infinity of x_n is equal to l. Thank you! Re: Advanced Calculus/Real Analysis Help Problem 1. Let f be a continuous function on the interval I, [x,y] with [x,y] subset of I, and t $\in$ R with f(x) < t < f(y); then there exists s $\in$ (x,y) s.t. f(s) = t. W.L.O.G. suppose that f(x) < t < f(y). Let U = {u $\in$ [x,y] | f(u) < t}; x $\in$ U, so U is non-empty. Clearly, the set U is bounded above by y. Let s = sup U, and n $\in$ N. Since s is the sup of U, s - 1/n is not an upper bound for U, so there exists x_n $\in$ U such that s - 1/n < x_n $\leq$ s. For all n $\in$ N, x_n $\in$ U implies that f(x_n) < t (this is how U is defined). You can see that x_n converges to sup U = s. By continuity, f(x_n) converges to f(s) $\leq$ t. Finish the proof: using a similar argument, you want to show that f(s) $\geq$ t. Hint: Use Bolzano-Weierstrass, the fact that s = sup U, and the fact that there is a convergent sequence y_n → s such that each y_n is not in U. Problem 2 We first need to define a dominant term. The nth term of a sequence is dominant if for all m > n, a_m < a_n. In other words the dominant term is greater than all of the elements that come after it. (Note that this term may be defined a little differently in your book, or an inferior term may be discussed. Anyway, the proof would not be any different.) Now, we continue our proof by inspecting two cases. 1. First case: A sequence contains infinitely many dominant terms. - Find the first dominant term, then the second, then the third. Do you see a pattern? You can obtain a decreasing subsequence by listing all of the dominant terms in order. 2. Second case: A sequence contains finitely many dominant terms. - List all of the dominant terms. Then, select the element a_k that comes right after the last dominant term, and construct your increasing subsequence with those elements. Do you see why? Let me know if you run into problems. Last edited by jll90; March 7th 2013 at 06:23 PM. Re: Advanced Calculus/Real Analysis Help Thank you so much for your response, I really appreciate you taking your time out and helping me. For number 1 I'm not really sure how to use sup as my class has not gotten to it yet. For number 2 what exactly do you mean by dominant term? Thank you! Re: Advanced Calculus/Real Analysis Help The supremum is the least upper bound of a set. What do we mean by a least upper bound? Upper Bound. Let S be a subset of R. If there exists an M $\in$ R such that for all x $\in$ S, one has x $\leq$ M, then M is an upper bound for S. In other words, an upper bound for a set is a number greater than or equal to any element of the set. Take the set A ⊆ R, A = {1,2,3}. Well, 4, by definition, is an upper bound of this set, right? 4 is greater than any element in the set A. Similarly, 3.000001 and 7/2 are upper bounds.
3 is also an upper bound, in particular the least upper bound, since 3 is greater than or equal to all of the elements in that set and less than any other upper bound. Least upper bound, supremum: Let S be a subset of R. Suppose S is bounded above (has an upper bound). If there exists s ∈ R such that s is an upper bound for S and for all upper bounds M of S, one has s ≤ M, then we call s the least upper bound or supremum of S, denoted sup S. In other words, the least upper bound is the smallest of the upper bounds. Take the set T = { x | x $\in$ R, x $\leq$ 1}. What are some upper bounds for this set? What is the least upper bound of this set? Is the least upper bound of this set T in the set or not? What is the supremum of the empty set? What can you say about this? Note: Don't forget to look for the definition of infimum. An infimum is the greatest lower bound of a set (or a sequence), depending on the context. It is defined similarly. Also, the existence of supremums and infimums relies on the Completeness Axiom. This is crucial in Real Analysis. "Let S ⊆ R be nonempty and bounded above. Then, sup S ∈ R." A lot of your proofs will revolve around the idea of having a supremum (or infimum). Depending on your analysis course, you may be asked to prove that the set of reals is complete. In any case, like I said, this idea is fundamental in Real Analysis. Moving on to dominant terms, let a_n = 1/n: a_1 = 1, a_2 = 1/2, ... This sequence has infinitely many dominant terms. Why? 1 > 1/2, and 1/2 is greater than all of the terms after it. Any term in the sequence is greater than all of the terms following it. Hence, every term of this sequence is a dominant term. Let's construct a sequence, say: 1, 4, 2, 4, 1, 2, 1, 4, 5, 7, 9, 0, 0, 0, 0, 0... (all zeroes from this point on). The dominant term of this sequence is 9. Why? Look at the definition. Why is 1 not a dominant term? Are there finitely many or infinitely many dominant terms? Let me know if you have further questions! Last edited by jll90; March 12th 2013 at 08:16 PM.
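To see the dominant-term idea on the examples just given, here is a small illustrative Python function (my own addition, not from the thread) that lists the dominant positions of a finite prefix of a sequence. Note that the final position of any finite prefix is vacuously dominant — an artifact that disappears for the infinite sequences discussed above.

```python
def dominant_indices(seq):
    """Positions whose value is strictly greater than every later value in the list.
    (On a finite prefix the final position is vacuously dominant.)"""
    best, out = float('-inf'), []
    for i in range(len(seq) - 1, -1, -1):   # single right-to-left scan
        if seq[i] > best:
            out.append(i)
            best = seq[i]
    return out[::-1]

harmonic = [1 / n for n in range(1, 9)]
print(dominant_indices(harmonic))   # every index: the 1/n sequence has only dominant terms

example = [1, 4, 2, 4, 1, 2, 1, 4, 5, 7, 9, 0, 0, 0, 0]
print(dominant_indices(example))    # [10, 14]; only 9 (index 10) stays dominant once the zeros continue forever
```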
{"url":"http://mathhelpforum.com/advanced-math-topics/214394-advanced-calculus-real-analysis-help.html","timestamp":"2014-04-21T09:22:50Z","content_type":null,"content_length":"43646","record_id":"<urn:uuid:4e76f95b-6ad3-45c7-87c6-6fbf60d754da>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Idledale Precalculus Tutor Find a Idledale Precalculus Tutor ...I worked as a tutor and instructional assistant at American River College in Sacramento, CA for 15 years. I also attended UC Berkeley as an Engineering major.I took this class at American River College in Sacramento, CA. I received an A, one of the highest grades in the class. 11 Subjects: including precalculus, calculus, statistics, geometry ...Learning in a fun and easy going environment is my tutoring style. I have two years of experience doing document translation in Chinese to English or English to Chinese for the College. I have taken Differential Equations Class as one part of our Advanced Mathematics classes, when I was in College. 27 Subjects: including precalculus, calculus, physics, algebra 1 ...I was then hired by the ULL Athletic Department to help fellow student athletes maintain high GPA's and academic eligibility. Subjects ranged from remedial algebra, geometry, trigonometry, to calculus, calculus-based physics, and chemistry. In graduate school, I instructed undergraduate courses... 16 Subjects: including precalculus, chemistry, calculus, physics ...If you need help with Math for the ACT let me know. We will tackle it together. I have worked for a video production company for the past 5 years. 22 Subjects: including precalculus, English, algebra 1, geometry ...Geometery is often the first math class taught by means of memorizing theorems without reference to the real world, and this makes the topic unnecessarily abstract and hard to grasp. But everything Euclid wrote came as a result of looking at the world around him, and if geometry is taught from t... 57 Subjects: including precalculus, chemistry, calculus, reading
{"url":"http://www.purplemath.com/Idledale_Precalculus_tutors.php","timestamp":"2014-04-18T11:29:06Z","content_type":null,"content_length":"23925","record_id":"<urn:uuid:b5f0db33-4119-41e1-b649-e382e0a078ed>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
Inductor Impedance Calculator The inductor impedance calculator calculates the impedance of an inductor based on the value of the inductance, L, of the inductor and the frequency, f, of the signal passing through the inductor, according to the formula XL = 2πfL. A user enters the inductance, L, and the frequency, f, and the result will automatically be calculated and shown. The impedance result which is displayed above is in units of ohms (Ω). The impedance calculated is a measure of the inductor's resistance to a signal passing through. Inductors have higher impedance to higher frequency signals; conversely, they have lower impedance to signals of lower frequency. This means that lower frequency signals will have lower impedance (or resistance) passing through an inductor, while higher frequency signals will have higher impedance passing through an inductor. This means that in our calculator, above, the higher the frequency you enter, the higher the impedance will be, and the lower the frequency you enter, the lower the impedance will be. The inductance of the inductor has the same effect as the frequency of the signal: the higher the inductance of the inductor, the higher the impedance. Conversely, the lower the inductance, the lower the impedance. Related Resources How to Calculate the Voltage Across an Inductor How to Calculate the Current Through an Inductor Inductive Reactance Impedance Matching Calculator
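Assuming only the page's formula XL = 2πfL, a minimal version of the calculation might look like this (a hypothetical sketch, not the site's actual implementation; the component values are arbitrary examples):

```python
import math

def inductive_reactance(frequency_hz, inductance_henries):
    """Magnitude of an ideal inductor's impedance, X_L = 2*pi*f*L, in ohms."""
    return 2 * math.pi * frequency_hz * inductance_henries

# Example: a 10 mH inductor at 1 kHz and at 100 kHz -- higher frequency, higher impedance.
print(inductive_reactance(1_000, 0.010))    # ~62.8 ohms
print(inductive_reactance(100_000, 0.010))  # ~6283 ohms
```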
{"url":"http://www.learningaboutelectronics.com/Articles/Inductor-impedance-calculator.php","timestamp":"2014-04-18T07:30:16Z","content_type":null,"content_length":"6607","record_id":"<urn:uuid:7869dca5-8449-42d3-a722-e5df2f36e61b>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
UT Dallas 2012 Undergraduate Catalog School of Natural Sciences and Mathematics Mathematics (B.S.) Mathematics is both a profession and an indispensable tool for many types of work. As a tool, mathematics is a universal language that has been crucial in formulating and expressing ideas not only in science and engineering, but also in many other areas such as business and the social sciences. As probably the oldest and most basic science, it provides the key to understanding the major technological achievements of our time. Of equal importance, knowledge of mathematics may help provide a student with the type of uncompromising and clear-sighted thinking useful in considering the problems of many other disciplines. The Mathematics degree program encompasses mathematics, statistics, and applied mathematics. Applied mathematics and statistics continue to enjoy a rapid growth. Students have the opportunity of applying their expertise to any of a number of fields of application. For the student to be more effective in such applications, Mathematics also offers degree programs allowing additional emphasis in the areas of actuarial science, computer science, electrical engineering, and management. Those interested in obtaining both a B.S. in Mathematics and Teacher Certification in the state of Texas should consult the UT Dallas Teacher Development Center or the UTeach office for specific requirements as soon as possible after formal admission to the University. See the Teacher Education Certification Program section of the catalog for additional information. The Mathematics degree program also prepares students for graduate studies. An accelerated B.S./M.S. Fast Track program is available which provides the opportunity for undergraduate students to satisfy some of the requirements of the master's degree while they are completing the bachelor's degree in Mathematics. Professors: Larry P. Ammann, Michael Baron, Sam Efromovich, Matthew J. Goeckner, M. Ali Hooshyar, Wieslaw Krawcewicz, Patrick L. Odell (Emeritus), Istvan Ozsvath, Viswanath Ramakrishna, Ivor Robinson (Emeritus), Robert Serfling, Janos Turi, John W. Van Ness (Emeritus) Associate Professors: Zalman I. Balanov, Pankaj Choudhary, Mieczyslaw Dabkowski Assistant Professors: Yan Cao, Tobias Hagge, Qiongxia Song Clinical Associate Professor: Natalia Humphreys Research Assistant Professor: Qingwen Hu Senior Lecturers III: David L Lewis, Paul Stanford, Bentley T Garrett, Senior Lecturers II: Manjula Foley, Yuly Koshevnik, Joanna Robinson, William Monte Scott Senior Lecturers I: Mohammad Akbar, Diana Cogan, Malgorzata Dabkowska, Anatoly Eydelzon, Richard Ketchersid, Brady McCary, Jigarkumar Patel, Michael Tseng Adjunct Professors: Jose Carlos Gomez Larranage, Adolfo Sanchez Valenzuela Affiliated Faculty: Herve Abdi (BBS), Raimund J. Ober (ECS/EE), Alain Bensoussan (JSOM), Titu Andreescu (ECS/SME), John Wiorkowski (JSOM) The Program in Mathematics Students seeking a degree in Mathematics may specialize in Mathematics, Statistics, or Applied Mathematics, and receive a B.S. degree. Each specialization allows some flexibility in electives so that students can better adapt their degree plans to their educational goals. Mathematics Specialization: For students interested in a career in mathematics and for students interested in continuing on to graduate work in mathematics, applied mathematics, math education, and related areas. 
Statistics Specialization: For students interested in probability and statistical models and their use in data analysis and decision-making and for students interested in continuing on to graduate work in statistics, biostatistics, actuarial science, and other statistics related areas. Applied Mathematics Specialization: For students interested in mathematics for the purpose of using it broadly in various areas of application and for students interested in continuing on to graduate work in applied mathematics and related areas. The UTeach option may be added to the BS degree in Mathematics. UTeach Dallas Option degree plans are streamlined to allow students to complete both a rigorous Bachelor of Science or Bachelor of Arts degree and all course work for middle or high school teacher certification in four years. Teaching Option degrees require deep content knowledge combined with courses grounded in the latest research on math and science education. While most graduates go on to classroom teaching, UTeach alums are also prepared to enter graduate school and to work in discipline related industry. Bachelor of Science in Mathematics Degree Requirements (120 semester credit hours) All majors with specialization in either Mathematics or Statistics are strongly urged to meet with assigned departmental advisors every semester. I. Core Curriculum Requirements^1: 42 semester credit hours Communication (6 semester credit hours) 3 semester credit hours Communication (RHET 1302) 3 semester credit hours Communication Elective (NATS 4310 or MATH 4390 or MATH 4399)^2 Social and Behavioral Sciences (15 semester credit hours) 6 semester credit hours Government (GOVT 2301 and GOVT 2302) 6 semester credit hours American History 3 semester credit hours Social and Behavioral Science Elective Humanities and Fine Arts (6 semester credit hours) 3 semester credit hours Fine Arts (ARTS 1301) 3 semester credit hours Humanities (HUMA 1301) Mathematics and Quantitative Reasoning (6 semester credit hours) 6 semester credit hours Calculus (MATH 2417 and 2419)^3 Science (9 semester credit hours) Mathematics/Applied Mathematics Specialization PHYS 2125 Physics Laboratory I PHYS 2126 Physics Laboratory II PHYS 2325 Mechanics or PHYS 2421 Honors Physics I - Mechanics and Heat PHYS 2326 Electromagnetism and Wave or PHYS 2422 Honors Physics II - Electromagnetism and Waves And an additional science course approved by the assigned departmental advisor. Statistics Specialization PHYS 2325/2125 Mechanics with Laboratory and PHYS 2326/2126 Electromagnetism and Waves with Laboratory or PHYS 2421 Honors Physics I - Mechanics and Heat with Laboratory and PHYS 2422 Honors Physics II - Electromagnetism and Waves with Laboratory or CHEM 1311/1111 and CHEM 1312/1112 General Chemistry I and II with Laboratory And an additional science course approved by the assigned departmental advisor. II. 
Major Requirements: 48 semester credit hours Major Preparatory Courses (15 semester credit hours) CS 1337^4 Computer Science I MATH 2417 Calculus I^3^, ^5 MATH 2418^4 Linear Algebra MATH 2419 Calculus II^3^, ^5 MATH 2420^4 Differential Equations with Applications MATH 2451^4 Multivariable Calculus with Applications Major Core Courses (21 semester credit hours) MATH 3310 Theoretical Concepts of Calculus MATH 3311 Abstract Algebra I MATH 3379 Complex Variables MATH 4301 Mathematical Analysis I MATH 4302 Mathematical Analysis II MATH 4334 Numerical Analysis NATS 4310 Advanced Writing in the Natural Sciences and Mathematics^2 STAT 4351 Probability Major Related Courses (12 semester credit hours) Applied Mathematics Specialization MATH 4341 Topology MATH 4355 Methods of Applied Mathematics MATH 4362 Partial Differential Equations STAT 4382 Stochastic Processes Mathematics Specialization MATH 3312 Abstract Algebra II MATH 3380 Differential Geometry MATH 4341 Topology 3 semester credit hours upper-division guided elective^6 Statistics Specialization STAT 3355 Data Analysis for Statisticians and Actuaries STAT 4352 Mathematical Statistics STAT 4382 Stochastic Processes 3 semester credit hour upper-division guided elective^6 III. Elective Requirements: 30 semester credit hours Advanced Electives (6 semester credit hours) All students are required to take at least six semester credit hours of advanced electives outside their major field of study. These must be either upper-division classes or lower-division classes that have prerequisites. Free Electives (24 semester credit hours) Both lower- and upper-division courses may count as electives, but the student must complete at least 51 semester credit hours of upper-division credit to qualify for graduation. BS in Actuarial Sciences The department offers a BS in Actuarial Sciences (see the program within this catalog for additional information). Mathematics or Statistics with Computer Science Emphasis Applied Mathematics Specialization or Statistics Specialization together with following courses: CS 2305 Discrete Mathematics for Computing I CS 2336 Computer Science II CS 3305 Discrete Mathematics for Computing II CS 3376 C/C++ Programming in a UNIX Environment CS 3345 Data Structures and Introduction to Algorithmic Analysis CS 4337 Organization of Programming Languages CS 3340 Computer Architecture Mathematics or Statistics with Electrical Engineering Emphasis Applied Mathematics Specialization or Statistics Specialization together with following courses: EE 3101 Electrical Network Analysis Laboratory EE 3111 Electronic Circuits Laboratory EE 3120 Digital Circuits Laboratory EE 3301 Electrical Network Analysis EE 3311 Electronic Circuits EE 3320 Digital Circuits EE 4301 Electromagnetic Engineering I Mathematics or Statistics with Management Emphasis Mathematics Specialization, Applied Mathematics Specialization or Statistics Specialization together with following courses: ACCT 2301 Introductory Financial Accounting ACCT 2302 Introductory Management Accounting BLAW 2301 Business and Public Law FIN 3320 Business Finance MIS 3300 Introduction to Management Information Systems OBHR 3310 Organizational Behavior NOTE: Students transferring into Mathematics at the upper division level are expected to have completed all of the 1000- and 2000- level mathematics core course requirements. Bachelor of Science in Mathematics with UTeach Option Degree Requirements (120 semester credit hours) I. 
Core Curriculum Requirements^1: 42 semester credit hours Communication (6 semester credit hours) 3 semester credit hours Communication (RHET 1302) 3 semester credit hours Communication Elective (NATS 4390/NATS 4399)^2 Social and Behavioral Sciences (15 semester credit hours) 6 semester credit hours Government (GOVT 2301 and GOVT 2302) 6 semester credit hours American History 3 semester credit hours Social and Behavioral Science Elective Humanities and Fine Arts (6 semester credit hours) 3 semester credit hours Fine Arts (ARTS 1301) 3 semester credit hours Humanities (HUMA 1301) Mathematics and Quantitative Reasoning (6 semester credit hours) 6 semester credit hours Calculus (MATH 2417 and 2419)^3 Science (9 semester credit hours) Mathematics/Applied Mathematics Specialization PHYS 2125 Physics Laboratory I PHYS 2126 Physics Laboratory II PHYS 2325 Mechanics or PHYS 2421 Honors Physics I - Mechanics and Heat PHYS 2326 Electromagnetism and Waves or PHYS 2422 Honors Physics II - Electromagnetism and Waves and an additional acceptable science course Statistics Specialization PHYS 2325/2125 Mechanics with Laboratory and PHYS 2326/2126 Electromagnetism and Waves with Laboratory or PHYS 2421 Honors Physics I - Mechanics and Heat with Laboratory and PHYS 2422 Honors Physics II - Electromagnetism and Waves with Laboratory or CHEM 1311/1111 and CHEM 1312/1112 General Chemistry I and II with Laboratory and an additional acceptable science course II. Major Requirements: 50 semester credit hours Major Preparatory Courses (17 semester credit hours beyond core curriculum) CS 1337^4 Computer Science I MATH 2417 Calculus I^3 MATH 2418^4 Linear Algebra MATH 2419 Calculus II^3 MATH 2420^4 Differential Equations with Applications MATH 2451^4 Multivariable Calculus with Applications Major Core Courses (21 semester credit hours beyond core curriculum) MATH 3310 Theoretical Concepts of Calculus MATH 3311 Abstract Algebra I MATH 3379 Complex Variables MATH 4301 Mathematical Analysis I MATH 4302 Mathematical Analysis II MATH 4334 Numerical Analysis NATS 4390/4399 Research Methods^2 STAT 4351 Probability Major Related Courses (12 semester credit hours) Applied Mathematics Specialization MATH 4341 Topology MATH 4355 Methods of Applied Mathematics MATH 4362 Partial Differential Equations STAT 4382 Stochastic Processes Mathematics Specialization MATH 3312 Abstract Algebra II MATH 4341 Topology 3 semester credit hours upper-division guided elective^6 Statistics Specialization STAT 3355 Data Analysis for Statisticians and Actuaries STAT 4352 Mathematical Statistics STAT 4382 Stochastic Processes 3 semester credit hour upper-division guided elective^6 III. Elective Requirements: 28 semester credit hours Advanced Electives (6 semester credit hours) All students are required to take at least six semester credit hours of advanced electives outside their major field of study. These must be either upper-division classes or lower-division classes that have prerequisites. UTeach courses can fulfill this requirement. 
UTeach Requirements (18 semester credit hours beyond core curriculum and advanced electives) NATS 1141 UTeach Step 1 NATS 1143 UTeach Step 2 NATS 3341 Knowing and Learning in Mathematics and Science NATS 3343 Classroom Interactions HIST 3328 History and Philosophy of Science and Medicine NATS 4390/4399 Research Methods^2 NATS 4341 Project-Based Instruction NATS 4694 UTeach Student Teaching, 8-12 Science and Mathematics or NATS 4696 UTeach Student Teaching, 4-8 Science and Mathematics NATS 4141 UTeach Student Teaching Seminar MATH 3303 Introduction to Mathematical Modeling Free Electives (4 semester credit hours) Both lower- and upper-division courses may count as electives, but the student must complete at least 51 semester credit hours of upper-division credit to qualify for graduation. Minor in Mathematics Students not majoring in Mathematics or Statistics may obtain a minor in Mathematics or Statistics by satisfying the following requirements: 18 semester credit hours of mathematics or statistics, 12 semester credit hours of which must be chosen from the following courses: Mathematics Minor: MATH 3310, MATH 4334 and two more upper-division mathematics courses that satisfy degree requirements by students in Mathematics. Statistics Minor: STAT 4351, STAT 4352 and two more upper-division mathematics courses that satisfy degree requirements by students in Statistics. Fast Track Baccalaureate/Master's Degrees For students interested in pursuing graduate studies in Mathematics, the Mathematics Department offers an accelerated B.S. / M.S. Fast Track that involves taking graduate courses instead of several advanced undergraduate courses. Acceptance into the Fast Track is based on the student's attaining a GPA of at least 3.200 in all mathematics classes and being within 30 semester credit hours of graduation. Fast Track students may, during their senior year, take 15 graduate semester credit hours that may be used to complete the baccalaureate degree. After admission to the graduate program, these 15 graduate semester credit hours may also satisfy requirements for the master's degree. Fast Track programs are offered in mathematics with specializations in applied mathematics and statistics. 1. Curriculum Requirements can be fulfilled by other approved courses from accredited institutions of higher education. The courses listed in parentheses are recommended as the most efficient way to satisfy both Core Curriculum and Major Requirements at UT Dallas. 2. A Major course requirement that also fulfills a Core Curriculum requirement. If semester credit hours are counted in the Core Curriculum, students must complete additional coursework to meet the minimum requirements for graduation. Course selection assistance is available from the undergraduate advisor. 3. Two semester credit hours of Calculus are counted as electives; six semester credit hours are counted in Core Curriculum. 4. Indicates a prerequisite class to be completed before enrolling in upper-division classes. 5. MATH 2417 and 2419 requirements can be fulfilled by completing MATH 2413, 2414, and 2415. 6. Approval of Mathematics department advisor required. 7. Another MATH course, i.e. MATH 3380, may be substituted if MATH 3321 is not offered. Updated: April 6, 2014 - Visitor: 14102
{"url":"http://catalog.utdallas.edu/2012/undergraduate/programs/nsm/mathematics/showprint","timestamp":"2014-04-19T14:31:21Z","content_type":null,"content_length":"215903","record_id":"<urn:uuid:44999b6b-6335-4d3e-b13f-4ceec06d6487>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
-Transform of Sub-fBm and an Application to a Class of Linear Subfractional BSDEs Advances in Mathematical Physics Volume 2013 (2013), Article ID 827192, 11 pages Research Article The -Transform of Sub-fBm and an Application to a Class of Linear Subfractional BSDEs ^1College of Information Science and Technology, Donghua University, 2999 North Renmin Road, Songjiang, Shanghai 201620, China ^2School of Sciences, Ningbo University of Technology, 201 Fenghua Road, Ningbo 315211, China ^3Department of Mathematics, College of Science, Donghua University, 2999 North Renmin Road, Songjiang, Shanghai 201620, China Received 28 May 2013; Revised 7 June 2013; Accepted 8 June 2013 Academic Editor: Ming Li Copyright © 2013 Zhi Wang and Litan Yan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Let be a subfractional Brownian motion with index . Based on the -transform in white noise analysis we study the stochastic integral with respect to , and we also prove a Girsanov theorem and derive an Itô formula. As an application we study the solutions of backward stochastic differential equations driven by of the form , where the stochastic integral used in the above equation is Pettis integral. We obtain the explicit solutions of this class of equations under suitable assumptions. 1. Introduction As an extension of Brownian motion, Bojdecki et al. [1, 2] introduced and studied a rather special class of self-similar Gaussian processes which preserves many properties of the fractional Brownian motion of the Weyl type here and below. This process arises from occupation time fluctuations of branching particle systems with Poisson initial condition. This process is called the subfractional Brownian motion (sub-fBm). The so-called sub-fBm with index is a mean zero Gaussian process with and the covariance for all , . For , coincides with the standard Brownian motion . is neither a semimartingale nor a Markov process unless . So many of the powerful techniques from stochastic analysis are not available when dealing with . As a Gaussian process, it is possible to construct a stochastic calculus of variations with respect to (see, e.g., Alòs et al. [3] and Nualart [4]). The sub-fBm has properties analogous to those of fractional Brownian motion and satisfies the following estimates: Thus, Kolmogorov's continuity criterion implies that subfractional Brownian motion is Hölder continuous of order for any . But its increments are not stationary. More works for sub-fBm can be found in Bojdecki et al. [5], Liu and Yan [6], Shen and Chen [7], Tudor [8–11], Yan et al. [12–14], and the references therein. On the other hand, it is well known that general backward stochastic differential equations (BSDEs) driven by a Brownian motion were first studied by Pardoux and Peng [15], where they also gave a probabilistic interpretation for the viscosity solution of semilinear partial differential equations. Because of their important value in various areas including probability theory, finance, and control, BSDEs have been subject to the attention and interest of researchers. A survey and complete literature for BSDEs could be found in Peng [16]. Recently, motivated by stochastic control problems, Biagini et al. [17] first studied linear BSDEs driven by a fractional Brownian motion, where existence and uniqueness were discussed in order to study a maximum principle. 
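Most of the displayed equations in this article were lost in text extraction. For orientation, the covariance and the increment estimates referred to above are, in the standard notation of the sub-fBm literature (cf. Bojdecki et al. [1]):

$$ E\bigl[S_s^H S_t^H\bigr]=s^{2H}+t^{2H}-\tfrac12\Bigl[(s+t)^{2H}+|t-s|^{2H}\Bigr],\qquad s,t\ge 0, $$

$$ \bigl[(2-2^{2H-1})\wedge 1\bigr]\,|t-s|^{2H}\;\le\;E\bigl[(S_t^H-S_s^H)^2\bigr]\;\le\;\bigl[(2-2^{2H-1})\vee 1\bigr]\,|t-s|^{2H}, $$

so that, in particular, $E[(S_t^H)^2]=(2-2^{2H-1})\,t^{2H}$.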
Bender [18] gave explicit solutions for a linear BSDEs driven by a fractional Brownian motion, and Hu and Peng [19] studied the linear and nonlinear BSDEs driven by a fractional Brownian motion using the quasi-conditional expectation. More works for the BSDEs driven by Brownian motion and fractional Brownian motion can be found in Bisumt [20], Geiss et al. [21], Karoui et al. [22], Ma et al. [23], Maticiuc and Nie [24], Peng [25], and the references therein. In this paper, we study the BSDEs driven by a sub-fBm of the form where the stochastic integral used in above equation is Pettis In recent years, there has been considerable interest in studying fractional Brownian motion due to its applications in various scientific areas including telecommunications, turbulence, image processing, and finance and also due to some of its compact properties such as long-range dependence, self-similarity, stationary increments, and Hölder's continuity (see, e.g., Mandelbrot and van Ness [26], Biagini et al. [27], Hu [28], Mishura [29], Li [30], Li and Zhao [31, 32], and Lim and Muniandy [33]). Moreover, many authors have proposed to use more general self-similar Gaussian processes and random fields as stochastic models. Such applications have raised many interesting theoretical questions about self-similar Gaussian processes and fields in general. Therefore, other generalizations of Brownian motion have been introduced such as sub-fBm, bifractional Brownian motion, and weighted-fractional Brownian motion. However, in contrast to the extensive studies on fractional Brownian motion, there has been little systematic investigation on other self-similar Gaussian processes. The main reason for this is the complexity of dependence structures for self-similar Gaussian processes which do not have stationary increments. The sub-fBm has properties analogous to those of fractional Brownian motion (self-similarity, long-range dependence, Hölder paths, the variation, and the renormalized variation). However, in comparison with fractional Brownian motion, the sub-fBm has nonstationary increments and the increments over nonoverlapping intervals are more either weakly or strongly correlated and their covariance decays polynomially as a higher rate in comparison with fractional Brownian motion (for this reason in Bojdecki et al. [1] is called subfractional Brownian motion). The above mentioned properties make sub-fBm a possible candidate for models which involve long-range dependence, self-similarity, and nonstationary. Thus, it seems interesting to study the BSDEs driven by a sub-fBm. This paper is organized as follows. Section 2 contains some basic results. In Section 3, we give a definition of subfractional Itô integral based on an -transform in white noise analysis. As an application we establish a Girsanov theorem for this integral. In Section 4, we give an Itô formula for functionals of a Wiener integral for a sub-fBm. We also discuss the geometric sub-fBm in this section. Section 5 considers the BSDEs (3). Finally, we will conclude the paper in Section 6. 2. Preliminaries In this section, we briefly recall some basic definitions and results of sub-fBm. Throughout this paper we assume that is arbitrary but fixed and let be a one-dimensional sub-fBm with Hurst index defined on . 
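As a quick numerical illustration of the process being studied (the sampling scheme, function names, and parameter values below are illustrative choices, not taken from the paper), a sub-fBm path can be generated directly from the covariance recalled above by Cholesky factorisation:

```python
import numpy as np

def subfbm_cov(s: float, t: float, H: float) -> float:
    """Covariance of sub-fractional Brownian motion (standard formula)."""
    return s**(2*H) + t**(2*H) - 0.5*((s + t)**(2*H) + abs(t - s)**(2*H))

def simulate_subfbm(n_steps: int = 500, T: float = 1.0, H: float = 0.7, seed: int = 0):
    """Sample one sub-fBm path on [0, T] by Cholesky factorisation of its covariance matrix."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n_steps, T, n_steps)                # exclude t = 0, where the variance is 0
    cov = np.array([[subfbm_cov(si, tj, H) for tj in t] for si in t])
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))   # tiny jitter for numerical stability
    path = L @ rng.standard_normal(n_steps)
    return np.concatenate(([0.0], t)), np.concatenate(([0.0], path))

if __name__ == "__main__":
    times, path = simulate_subfbm()
    print(times[:3], path[:3])
```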
To simplify, we denote , and let be a two-sides Brownian motion and We also denote (i): the usual -norm, and the corresponding inner product is denoted by ; (ii): the Schwartz space of rapidly decreasing smooth functions of real valued; (iii): the Wiener integral of the function ; (iv): the -field generated by ; (v): the -norm. can be written as a Volterra process with the following moving average representation: where , , . The sub-fBm is also possible to construct a stochastic calculus of variations with respect to the Gaussian process , which will be related to the Malliavin calculus. Some surveys and complete literatures for Malliavin calculus of Gaussian process could be found in Alòs et al. [3], Nualart [4] and Tudor [9, 10], Zähle [34], and the references therein. Let . Consider Weyl's type fractional integrals of order if the integrals exist for almost all , and Marchand's type fractional derivatives of order if the limit exists in for some , where for . Define the operator where and denotes the gamma function defined by Recall that we now give a stochastic version of the Hardy-Littlewood theorem as follows. Theorem 1 (Theorem 2.10 in [35]). Let and let the operators be defined as above. Then is a continuous operator from into if and . Define the function for any Borel function on . Then the function is odd, which is called the odd extension of . Based on the moving average representation (5), we can show the following proposition. Proposition 2. Let the operators be defined as above. Then and admits the following integral representation: for all . We finally recall the -transform. The -transform is an important tool in white noise analysis. Here we give a definition and state some results that do not depend on properties of the white noise space. Denote the -transform of (see, e.g., [35, 36] for more details) by where the Wick exponential: : of is given by The -transform has the following important properties. The -transform is injective; that is, for all , implies that . Let be a sequence that converges to in ; then: : converges to: : in . for . Hence it can deduce a probability measure on by especially, for , we can rewrite the -transform as Let be a progressively measurable process such that Then is the unique element in with -transform given by The Wiener integral is the unique element in with -transform given by The following result points out that the operators interchanges with the -transform. Lemma 3 (Lemma 2.9 in [35]). Let exist for some . Then one has for all . In the case the convergence of the fractional derivative on the right-hand side is in the sense, if . In particular, the operators interchange with the -transform. 3. A Subfractional Itô Integral In this section, based on the -transform we aim to define the subfractional Itô integral, denoted by with , and introduce the Girsanov theorem. To this end, inspired by the Hitsuda-Skorohod integral, we define the subfractional Itô integral as the unique random variable such that for all , provided the integral exists under suitable conditions. According to (12) and Property (), we have Combining this with the fact () in Section 2, we give the following definition. Definition 4. Let be a Borel set. A mapping is said to be subfractional Itô integrable on if for any , and there is a such that for all . 
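Since the displays are missing here as well, it may help to recall the standard white-noise definitions this passage relies on, stated in the notation of Bender [35, 36] (the paper's own normalisation may differ slightly): for $\eta\in\mathcal S(\mathbb R)$ and square-integrable $\Phi$,

$$ (S\Phi)(\eta)=E\bigl[\Phi\,{:}\mathrm e^{\langle\cdot,\eta\rangle}{:}\bigr],\qquad {:}\mathrm e^{\langle\omega,\eta\rangle}{:}=\exp\Bigl(\langle\omega,\eta\rangle-\tfrac12\|\eta\|_{L^2}^2\Bigr), $$

the Wiener integral satisfies $S\bigl(\int f\,dB\bigr)(\eta)=\int f(s)\eta(s)\,ds$, and the probability measure mentioned above is $dP_\eta={:}\mathrm e^{\langle\cdot,\eta\rangle}{:}\,dP$, so that $(S\Phi)(\eta)=E_{P_\eta}[\Phi]$.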
It is important to note that in the above definition is unique because the -transform is injective, which is called the subfractional Itô integral of on and we denote it by In this paper, sub-fractional Itô integralalways refers to the -transform approach proposed in Definition 4. Proposition 5. The following statements hold. (1)For anyone has(2)Let be subfractional Itô integrable for . Then Proof. These results are some simple examples. Recall that the Wick product of is an element such that for all . The following theorem expresses the relationship between the subfractional Itô integral defined as above and the integral based on Wick product . Theorem 6. Let and ; then in the sense that if one side is well defined then so is the other, and both coincide. We can obtain it by calculating the -transform of both sides. In particular, for , this theorem implies that It means that the subfractional Itô integral is the -limit of Wick-Riemann sums for some suitable processes. That is, for some suitable processes , where is a partition of with and the convergence is in . Now we calculate the expectation of a subfractional Itô integral under a measure . Theorem 7. Let and be given by (15). If the following assumptions hold: (1) is subfractional Itô integrable, and ; (2) and for , One then has Proof. Let be given such that converges to in , we have the identity It can be easily obtained that the left-hand side of (33) converges to the same side of (32) by Theorem 1 and () in Section 2. Then we just need to prove the right-hand side of (33) converges to (32) correspondingly. By Lemma 3, applying the fractional integration by parts rule, we have which is bounded by We can easily show that converge to zero, as , respectively, by Hölder's inequality. This completes the proof. Remark 8. Under the assumptions of Theorem 7, exists as a Pettis integral (see Definition 2.3 in [35]). In fact, for all , Thus, the property of the Pettis integral deduces Now, we establish a Girsanov theorem for subfractional Itô integral. Consider the measure , , the probability space carries a two-side Brownian motion given by according to the classical Girsanov theorem. On this probability space, we denote the -transform with respect to the measure ; that is, and the following identity holds: for all . Theorem 9. Let the assumptions of Theorem 7 be satisfied, and Then, the identity holds in -almost surely. Proof. We apply Theorem 7 to , . It is easy to check that according to Lemma 2.5 in [36]. By Theorem 7 and (40), it follows The second identity based on the fact that exists as a Pettis integral which is proved in Remark 8. The proof is complete. 4. An Itô Formula In this section, we prove an Itô formula for a subfractional Wiener integral using the -transform approach. An indefinite subfractional Wiener integral is understood as a process for all provided is a deterministic function such that the above integral exists as a subfractional Itô integral for all . Proposition 10. Assume that is continuous for , and -Hölder continuous with for . Then the indefinite subfractional Wiener integral exists, and Proof. We should prove that and exists. For , since is continuous on , by Hardy-Littlwood theorem, it is obvious that . For , similar to the argument in Proposition 5.1 in [35], there exists a function , such that Hence, , and so is . is a deterministic function implies that exists. Next, consider the -transform of the right-hand side in (45), then by (19), we obtain that This completes the proof. 
The following lemma is essential to the proof of our Itô's formula. Lemma 11. Let be continuous and . Then one has In particular,(1)for all , ; (2) is differentiable in , and for all , one has Proof. For , the following identity holds: Then, Equation (48) easily follows and the other assertions are trivial. Remark 12. Since the right of (48) is not hold when , there is a lack of a result similar to the above Lemma. Hence, we only consider the case of constant , and we have . Now we give the following Itô formula. Theorem 13. Let , such that () be an indefinite subfractional Wiener integral; that is, for all , , where is continuous when , constant when ; (); () there exists constants and such that Then the following equality holds in : Proof. It suffices to show that both sides have the same -transform. Indeed, by Definition 4, the integral of the left-hand side has the -transform given by Henceforth, we just need to show the right-hand side has the same result. Firstly, we show the integrals of the right-hand side exist in . Without loss of generality, denote , , , , and . By the growth condition (52), we obtain Consequently, exists. For the last one, by Lemma 11 and Remark 12, we have Hence, the last integral exists as a Pettis integral in the -sense. On the other hand, denote the heat kernel as follows: Thanks to the classical Girsanov theorem, for arbitrary , under the measure , we can easily calculate that is a Gaussian random variable with mean and variance . Thus, we obtain Moreover, by , integration and differentiation can be interchanged. Since the heat kernel fulfills , we have Consequently, Compared with (54), the proof can be The objective of this part is to define the geometric sub-fBm and establish an Itô formula with respect to it. Definition 14. Let , , and , Then one calls a geometric sub-fBm with coefficients , , , , provided the right-hand side exists as an element of for all . Theorem 15. Let , such that (i) be a geometric sub-fBm with continuous coefficients , and let be a constant when ; (ii), hold. Then the following equality holds in : Proof. Let Then, apply Theorem 13 to , and the result is obvious. The special case yields the following. Corollary 16. Let be a geometric sub-fBm as in Theorem 15; then for all , For this reason, one calls it “geometric sub-fBm”. 5. Explicit Solution of a Class of Linear Subfractional BSDEs General BSDEs driven by a Brownian motion are usually of the form where are given. The generator is a -adapted process for every pair , the terminal value is a -measureable random variable, and denotes the filtration generated by . We say a pair is a solution of this equation, if the processes which are -adapted and satisfy a suitable integrability condition solve the equation -almost After these preparations, we now turn to the problems to solve the BSDEs driven by a sub-fBm of the form where are given. The generator is a -adapted process for every pair , the terminal value is a -measureable random variable, and denotes the filtration generated by . We say a pair is a solution of this equation, if the processes which are -adapted and satisfy a suitable integrability condition solve the equation -almost surely. Let us recall a result about the following PDE, which is a parabolic partial differential equation solved by the heat equation (see Theorem 9 in [18]). Let the following conditions be satisfied: () and is strictly increasing with and ; (); () and there exists constant and such that for all , . 
Then the PDE has a classical solution given by Next we give the main result of this paper. Theorem 17. Let and . Suppose the following conditions are satisfied: () is continuous when , constant when , and there exist constants , such that ; (); () holds with and with bounded on ; () and there exist constants , and such that for all , . Then the BSDEs, have a solution of the form Proof. Let ; from Lemma 11 and Remark 12, we have satisfies . By the growth condition , is yielded, and follows from . Henceforth, is a classical solution of the PDE Moreover, by Lemma 10 and Corollary 11 in [18], suppose that , which fulfills the conditions of Theorem 13 for all and . Consequently, Next, according to Definition 4 and the growth condition, exists when tends to zero. On the other hand, similar to (58), we obtain By the growth and the continuity conditions of , we have converges to as tends to zero. Now it remains to show the existence of the last integral of (72). In fact, there exists a constant , such that For , (48) and yieldFor , , we obtain This means that is well defined, which completes the proof. The above theorem also holds for geometric sub-fBm as described in the following proposition. Proposition 18. Let a geometric sub-fBm , and is continuous and of polynomial growth. Then Theorem 17 holds with the terminal value of the form . Proof. We just need to apply Theorem 17 with and . The regularity of the obtained solutions is described as follows. Proposition 19. Let , as defined in Theorem 17. Then . Moreover, when . It is a straightforward result in view of the growth condition of . 6. Conclusion We have presented the subfractional Itô integral using the method of the -transform. A Girsanov theorem with respect to the subfractional Itô integral and an Itô formula for functionals of a subfractional Wiener integral has been established. As an application, we obtain explicit solutions for a class of linear BSDEs driven by a sub-fBm with arbitrary Hurst parameter under suitable The Project was sponsored by NSFC (11171062, 40901241), Innovation Program of Shanghai Municipal Education Commission(12ZZ063), the Research Project of Education of Zhejiang Province (Y201326507) and Natural Science Foundation of Zhejiang Province (Y5090377). 1. T. Bojdecki, L. G. Gorostiza, and A. Talarczyk, “Sub-fractional Brownian motion and its relation to occupation times,” Statistics & Probability Letters, vol. 69, no. 4, pp. 405–419, 2004. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 2. T. Bojdecki, L. G. Gorostiza, and A. Talarczyk, “Some extensions of fractional Brownian motion and sub-fractional Brownian motion related to particle systems,” Electronic Communications in Probability, vol. 12, pp. 161–172, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 3. E. Alòs, O. Mazet, and D. Nualart, “Stochastic calculus with respect to Gaussian processes,” The Annals of Probability, vol. 29, no. 2, pp. 766–801, 2001. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 4. D. Nualart, The Malliavin Calculus and Related Topics, Springer, Berlin, Germany, 2nd edition, 2006. View at MathSciNet 5. T. Bojdecki, L. G. Gorostiza, and A. Talarczyk, “Fractional Brownian density process and its self-intersection local time of order $k$,” Journal of Theoretical Probability, vol. 17, no. 3, pp. 717–739, 2004. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 6. 
J. Liu and L. Yan, "Remarks on asymptotic behavior of weighted quadratic variation of subfractional Brownian motion," Journal of the Korean Statistical Society, vol. 41, no. 2, pp. 177–187, 2012.
7. G. Shen and C. Chen, "Stochastic integration with respect to the sub-fractional Brownian motion with $H\in \left(0,1/2\right)$," Statistics & Probability Letters, vol. 82, no. 2, pp. 240–251, 2012.
8. C. Tudor, "Some properties of the sub-fractional Brownian motion," Stochastics, vol. 79, no. 5, pp. 431–448, 2007.
9. C. Tudor, "Inner product spaces of integrands associated to subfractional Brownian motion," Statistics & Probability Letters, vol. 78, no. 14, pp. 2201–2209, 2008.
10. C. Tudor, "Some aspects of stochastic calculus for the sub-fractional Brownian motion," Analele Universitatii Bucuresti. Matematica, vol. 57, no. 2, pp. 199–230, 2008.
11. C. Tudor, "On the Wiener integral with respect to a sub-fractional Brownian motion on an interval," Journal of Mathematical Analysis and Applications, vol. 351, no. 1, pp. 456–468, 2009.
12. L. Yan, K. He, and C. Chen, "The generalized Bouleau-Yor identity for a sub-fractional Brownian motion," Science China Mathematics, 2013.
13. L. Yan and G. Shen, "On the collision local time of sub-fractional Brownian motions," Statistics & Probability Letters, vol. 80, no. 5-6, pp. 296–308, 2010.
14. L. Yan, G. Shen, and K. He, "Itô's formula for a sub-fractional Brownian motion," Communications on Stochastic Analysis, vol. 5, no. 1, pp. 135–159, 2011.
15. É. Pardoux and S. G. Peng, "Adapted solution of a backward stochastic differential equation," Systems & Control Letters, vol. 14, no. 1, pp. 55–61, 1990.
16. S. Peng, "Backward stochastic differential equations," in Nonlinear Expectations, Nonlinear Evaluations and Risk Measures, Lecture Notes in Chinese Summer School in Mathematics, Weihai, 2004.
17. F. Biagini, Y. Hu, B. Øksendal, and A. Sulem, "A stochastic maximum principle for processes driven by fractional Brownian motion," Stochastic Processes and their Applications, vol. 100, pp. 233–253, 2002.
18. C. Bender, "Explicit solutions of a class of linear fractional BSDEs," Systems & Control Letters, vol. 54, no. 7, pp. 671–680, 2005.
19. Y. Hu and S. Peng, "Backward stochastic differential equation driven by fractional Brownian motion," SIAM Journal on Control and Optimization, vol. 48, no. 3, pp. 1675–1700, 2009.
20. J.-M. Bismut, "Conjugate convex functions in optimal stochastic control," Journal of Mathematical Analysis and Applications, vol. 44, pp. 384–404, 1973.
21. C. Geiss, S. Geiss, and E. Gobet, "Generalized fractional smoothness and ${L}_{p}$-variation of BSDEs with non-Lipschitz terminal condition," Stochastic Processes and their Applications, vol. 122, no. 5, pp. 2078–2116, 2012.
22. N. El Karoui, S. Peng, and M. C. Quenez, "Backward stochastic differential equations in finance," Mathematical Finance, vol. 7, no. 1, pp. 1–71, 1997.
23. J. Ma, P. Protter, and J. M. Yong, "Solving forward-backward stochastic differential equations explicitly—a four step scheme," Probability Theory and Related Fields, vol. 98, no. 3, pp. 339–359, 1994.
24. L. Maticiuc and T. Nie, "Fractional backward stochastic differential equations and fractional backward variational inequalities," http://arxiv.org/abs/1102.3014.
25. S. G. Peng, "Backward stochastic differential equations and applications to optimal control," Applied Mathematics and Optimization, vol. 27, no. 2, pp. 125–144, 1993.
26. B. B. Mandelbrot and J. W. van Ness, "Fractional Brownian motions, fractional noises and applications," SIAM Review, vol. 10, pp. 422–437, 1968.
27. F. Biagini, Y. Hu, B. Øksendal, and T. Zhang, Stochastic Calculus for Fractional Brownian Motion and Applications, Springer, London, UK, 2006.
28. Y. Hu, "Integral transformations and anticipative calculus for fractional Brownian motions," Memoirs of the American Mathematical Society, vol. 175, no. 825, 2005.
29. Y. S. Mishura, Stochastic Calculus for Fractional Brownian Motion and Related Processes, Springer, Berlin, Germany, 2008.
30. M. Li, "On the long-range dependence of fractional Brownian motion," Mathematical Problems in Engineering, vol. 2013, Article ID 842197, 5 pages, 2013.
31. M. Li and W. Zhao, "On $1/f$ noise," Mathematical Problems in Engineering, vol. 2012, Article ID 673648, 23 pages, 2012.
32. M. Li and W. Zhao, "Quantitatively investigating locally weak stationarity of modified multifractional Gaussian noise," Physica A, vol. 391, no. 24, pp. 6268–6278, 2012.
33. S. C. Lim and S. V. Muniandy, "On some possible generalizations of fractional Brownian motion," Physics Letters A, vol. 266, no. 2-3, pp. 140–145, 2000.
34. M. Zähle, "Integration with respect to fractal functions and stochastic calculus. I," Probability Theory and Related Fields, vol. 111, no. 3, pp. 333–374, 1998.
35. C. Bender, "An $S$-transform approach to integration with respect to a fractional Brownian motion," Bernoulli, vol. 9, no. 6, pp. 955–983, 2003.
36. C. Bender, "An Itô formula for generalized functionals of a fractional Brownian motion with arbitrary Hurst parameter," Stochastic Processes and their Applications, vol. 104, no. 1, pp. 81–106, 2003.
{"url":"http://www.hindawi.com/journals/amp/2013/827192/","timestamp":"2014-04-20T19:06:28Z","content_type":null,"content_length":"976421","record_id":"<urn:uuid:71766eca-896e-4680-aae4-9d593589df72>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Concerning definition of formulas Vaughan Pratt pratt at cs.stanford.edu Fri Oct 5 20:57:19 EDT 2007 Saurav's question (on how to break the vicious circle between the definition of wff and the definition of ZFC which seemed interdependent) was responded to initially by Andrej Bauer in terms of three methodologies for syntax, namely trees, inductive types, and initial algebras, and then by Richard Heck on Sept. 30 who proposed breaking the circle using "ordinary" set theory understood as being prior to ZFC. In response to Heck's post I proposed on Sunday taking his non-ZFC (naive) set theory approach a step further to eliminate any dependency at all on recursion. Regrettably this step was rejected by the moderator as "confusing the entirely syntactic notion of formula with semantic notions." During the week spent defending the purely syntactic content of my post to the board, Hazen, Avron, Forster, Aitken, Fugard, McCarthy, Jones, and Blum also responded, making it clear that this issue bothered many people, not least students who were puzzled by it when first exposed to it. The board having approved my post as of today, here's my proposal, rather longer now than on Sunday as a result of my attempts to clarify it to the board's satisfaction, but it is surely reasonable that clarity trump brevity on FOM. Answering Feferman's "what rests on what", the following assumes only pages 1-33 of Halmos, Naive Set Theory, namely up to the chapter on functions, material that every college mathematics student should possess. No recursion or induction, either explicit, or implicit in the use of numbers, is involved in the definition. The wffs accommodated are those in prenex normal form with a full DNF matrix, sufficient for all definitional purposes in bootstrapping up from naive set theory. The definition permits some flexibility regarding variants as per the notes below. DEFINITION OF "WFF" All sets are assumed finite. 1. Language. Let S (Sigma) be any alphabet. A (relation) *symbol* is a pair (s,A) where s \in S and A is a set constituting the formal parameters or *arity* (as a set) of the symbol. A *language* L is a set of symbols. 2. Wff. A *wff* W = ((E <= Q <= V), P, m, <) over a language L is a 4-tuple such that (i) V is the set of variables used in W, a subset Q of which are the quantified variables, with a subset E of Q being the existentially quantified variables. V\Q is the set of free variables, while Q\E is the set of universally quantified variables. An *atom* (atomic formula) is a pair ((s,A), b: A --> V) consisting of a symbol (s,A) of L and a binding b of its parameters to variables. Example: R(x,y,...,z) where x,y,...,z \in V. (ii) P is a set of atoms, namely those appearing in W. A *disjunct* is a subset D of P. The intended interpretation of D is as the conjunction of the elements of D conjoined with the conjunction of the logically negated elements of P\D. (So every atom appears exactly once as a literal in every disjunct D, bearing a sign, + if in D, - if in P\D. There are therefore 2^|P| possible disjuncts.) (iii) m is a set of disjuncts (disjunctive normal form). These are the disjuncts whose disjunction forms the matrix of W. (iv) < linearly orders Q from left to right (prenex normal form). So if x < y and y is in E but not x, corresponding to the quantification ...Ax...Ey...m, then y depends on x. 1. A refinement of (iv) is to allow < to be a partial order. 
This permits Henkin-style quantification in which w can depend on x and y on z without entailing any other dependencies the way the linear order must. 2. With the exception of Q (and hence E), the finiteness restriction is inessential. This makes essential use of the existence of the free complete atomic Boolean algebra on an arbitrary set P, namely 2^(2^P), in striking contrast to the 1964 Gaifman-Hales result that infinite free complete Boolean algebras do not exist. 3. Dropping finiteness for Q raises the question of the meaning of quantification when Q is say the reals and E the rationals as a subset of the reals, with < as their standard numerical order. An intermediate generalization would be to allow Q to be infinite but require < to be a partial order of finite height (no infinite chains), so that each variable in E depends on only finitely many variables in Q\E. 4. The definition is "framework-neutral" in the sense that it is equally applicable to conventional formulations of first order logic as encountered in any introduction to logic; to Tarski's cylindric algebra (CA) formulation; and to Lawvere's conception of existential and universal quantifiers as respectively left and right adjoints to substitution. In CA the set V is fixed for all the elements of a given algebra, whereas in Lawvere's approach V increases with substitution and decreases with quantification. This definition of wff then provides considerable flexibility in answering Sid Smith's question yesterday of how to understand quantifiers: if none of those three work for you, at least you have a reasonably clean syntactic foundation on which to erect your fourth solution. 5. A student picking up Paul Cohen's "Decision Procedures for Real and p-Adic Fields," 1969 and reading in the first paragraph, after a listing of the logical and relation symbols and variables, "Of course, these symbols must be used in the grammatically correct fashion with which we are all familiar," might wonder why anything more is needed. One response would be to ask the student to make sense of Notes 1-4 on the basis of that account. Vaughan Pratt More information about the FOM mailing list
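Pratt's 4-tuple translates almost literally into a data structure. The following Python sketch is illustrative only: all names in it are invented, and it checks just the containment conditions from clauses (i)-(iv).

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

Symbol = Tuple[str, FrozenSet[str]]                  # (name, arity given as a set of formal parameters)
Atom = Tuple[Symbol, Tuple[Tuple[str, str], ...]]    # (symbol, binding of parameters to variables)

@dataclass(frozen=True)
class Wff:
    E: FrozenSet[str]                # existentially quantified variables, E <= Q
    Q: FrozenSet[str]                # quantified variables, Q <= V
    V: FrozenSet[str]                # all variables used in the wff
    P: FrozenSet[Atom]               # the atoms appearing in the wff
    m: FrozenSet[FrozenSet[Atom]]    # the matrix: a set of disjuncts, each a subset of P
    order: Tuple[str, ...]           # left-to-right (prenex) order on Q

    def __post_init__(self) -> None:
        assert self.E <= self.Q <= self.V
        assert set(self.order) == set(self.Q)
        assert all(D <= self.P for D in self.m)

# Example: Ax Ey R(x, y), the matrix being the single all-positive disjunct {R(x, y)}.
R: Symbol = ("R", frozenset({"p1", "p2"}))
a: Atom = (R, (("p1", "x"), ("p2", "y")))
w = Wff(E=frozenset({"y"}), Q=frozenset({"x", "y"}), V=frozenset({"x", "y"}),
        P=frozenset({a}), m=frozenset({frozenset({a})}), order=("x", "y"))
```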
{"url":"http://www.cs.nyu.edu/pipermail/fom/2007-October/012008.html","timestamp":"2014-04-17T04:21:26Z","content_type":null,"content_length":"8060","record_id":"<urn:uuid:4298c22f-e4a3-4f03-b0be-febc3588d5e2>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
Square Constructed Upon A Given Line
An illustration showing how to construct a square upon a given line. "With AB as radius and A and B as centers, draw the circle arcs AED and BEC. Divide the arc BE in two equal parts at F, and with EF as radius and E as center, draw the circle CFD. Join A and C, B and D, and C and D, which completes the required square."
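The construction is easy to check numerically; the coordinates below are an arbitrary choice (AB taken as the unit segment from A = (0, 0) to B = (1, 0)), and the check itself is not part of the original illustration.

```python
import numpy as np

A, B = np.array([0.0, 0.0]), np.array([1.0, 0.0])
E = np.array([0.5, np.sqrt(3) / 2])                    # the arcs of radius AB about A and B meet at E
F = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])   # midpoint of arc BE on the circle about A
r = np.linalg.norm(E - F)                              # radius EF of the circle CFD

C, D = np.array([0.0, 1.0]), np.array([1.0, 1.0])      # where the circle about E meets the two arcs
print(np.isclose(np.linalg.norm(C - E), r), np.isclose(np.linalg.norm(D - E), r))      # True True
print(np.isclose(np.linalg.norm(C - A), 1.0), np.isclose(np.linalg.norm(D - B), 1.0))  # True True
# A, C, D, B are therefore the vertices of a unit square erected on AB.
```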
{"url":"http://etc.usf.edu/clipart/49900/49914/49914_construction.htm","timestamp":"2014-04-20T06:23:56Z","content_type":null,"content_length":"11543","record_id":"<urn:uuid:45207442-1ac4-47d8-befc-4849d87467ec>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
Guide Entry 07.03.06
Yale-New Haven Teachers Institute
Astronomy: The Mathematician's Perspective, by Maria Stockmal
Guide Entry to 07.03.06:
This curriculum unit offers a way to teach mathematics through astronomy. Sometimes teaching mathematics becomes a matter of routine, and keeping students engaged is a challenge in today's world of competition from electronics. Since astronomy is a fascinating subject that captures everyone's attention, it is no wonder that there is a desire to write an astronomical unit to be used in the classroom to teach secondary school mathematics. Instruction in astronomy does not have to be confined to science classes; it may be used in mathematics classes to teach mathematical concepts in a captivating way.
The unit proposed instructs about graphs, slope, the Pythagorean Theorem, trigonometric ratios, measure of an arc length, and area of a sector. It contains all the astronomical information and data necessary for students to construct graphs, and an idea of how to introduce finding the slope of a line. The astronomy ideas used to teach geometric concepts are more involved, and so astronomical information, data, and sample lesson plans are included. (Recommended for Mathematics, Algebra I, and Geometry, grades 8-10.)
{"url":"http://www.yale.edu/ynhti/curriculum/guides/2007/3/07.03.06.x.html","timestamp":"2014-04-20T15:56:58Z","content_type":null,"content_length":"4476","record_id":"<urn:uuid:c12fe614-5992-42a0-ab29-9a0d0d1f14de>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
motion, equation of: mathematical formula that describes the position, velocity, or acceleration of a body relative to a given frame of reference. Newton's second law, which states that the force F acting on a body is equal to the mass m of the body multiplied by the acceleration a of its centre of mass, F = ma, is the basic equation of motion in classical mechanics. If the force acting on a body is known as a function of time, the velocity and position of the body as functions of time can, theoretically, be derived from Newton's equation by a process known as integration. For example, the gravitational force acting on a falling body accelerates it and is the weight of the body W, which is constant with respect to time; thus, using Newton's equation, W = mg and m = W/g. Substituting these values in Newton's equation gives W = (W/g)a, from which a = g: a falling body accelerates at the constant rate g, the acceleration of gravity. Acceleration is the rate of change of velocity with respect to time, so that by integration the velocity v in terms of time t is given by v = gt. Velocity is the time rate of change of position S, and, consequently, integration of the velocity equation yields S = (1/2)gt^2. If the force acting on a body is specified as a function of position or velocity, the integration of Newton's equation may be more difficult. When a body is constrained to move in a specified manner on a fixed path, it may be possible to derive the position-time equation; from this equation the velocity-time and acceleration-time equations can, theoretically, be obtained by a process known as differentiation.
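A minimal numerical check of the constant-acceleration case described above; the value g = 9.81 m/s^2, the step size, and the time span are assumptions made for the illustration.

```python
g, dt, T = 9.81, 1e-4, 2.0         # gravitational acceleration, time step, total time
v = s = 0.0
for _ in range(int(T / dt)):
    v += g * dt                    # integrate a = g once to get velocity
    s += v * dt                    # integrate velocity to get position
print(f"numeric:  v = {v:.3f} m/s, s = {s:.3f} m")
print(f"analytic: v = {g * T:.3f} m/s, s = {0.5 * g * T**2:.3f} m")   # v = gt, S = (1/2)gt^2
```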
{"url":"http://media-3.web.britannica.com/eb-diffs/101/394101-9008-53960.html","timestamp":"2014-04-17T21:31:52Z","content_type":null,"content_length":"7758","record_id":"<urn:uuid:435420a1-c72d-4b32-a39c-7c258a42a175>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
Yahoo Groups
Vague idea for factorization in time N^(1/5+o(1))

Crandall & Pomerance's book, in addition to explaining how N^(1/4+o(1)) is the fastest known time bound for a rigorous deterministic factoring algorithm, also explains that there is another deterministic algorithm, invented by D. Shanks and based on the theory of "class numbers," which runs in time N^(1/5+o(1)) PROVIDED the extended Riemann hypothesis is true. [This was turned into an N^(1/5+o(1)) EXPECTED time randomized algorithm not needing to assume the ERH, by Anitha Srinivasan: Computations of class numbers of real quadratic fields, Mathematics of Computation 67, 223 (1998) 1285-1308.]

D. Shanks: Class number, a theory of factorization, and genera, Proc. Symp. Pure Math., Amer. Math. Soc. 20 (1971) 415-440.
R. A. Mollin & H. Williams: Computation of the class number of real quadratic fields, Utilitas Math., 41 (1992) 259-308.
H. W. Lenstra, Jr.: On the calculation of regulators and class numbers of quadratic fields, Lond. Math. Soc. Lect. Note Ser. 56 (1982) 123-150.
R. J. Schoof: Quadratic fields and factorization, Computational methods in number theory (H. W. Lenstra, Jr., and R. Tijdeman, eds.), Math. Centrum, Number 155, part II, Amsterdam 1 (1983) 235-286.

Anyhow, I have an idea. It seems plausibly likely to me that one can prove a Shanks-like algorithm will factor N in N^(1/5+o(1)) time under the assumption the ERH is FALSE. This, combined with Shanks' result under the assumption the ERH is true, will yield an unconditional rigorous deterministic N^(1/5+o(1))-time factoring algorithm, a new record, which will work on two paths simultaneously, one assuming the ERH true, the other assuming it false.

These authors employ certain sums and products representing Dirichlet L-functions which they evaluate approximately. Given the ERH, they don't have to work too hard to get the approximation good enough. However, it seems to me that if the ERH is false, then these sums will also be well-behaved, just in a different sense.

I warn you this whole approach does not look attractive from the standpoint of producing a factoring program you'd want to program & use (even if it is valid). It probably would be purely of theoretical interest.
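A back-of-envelope sense of what the exponent buys for a 100-digit N (constants and the o(1) terms in the real bounds are ignored; this is only an order-of-magnitude illustration):

```python
digits = 100                        # a 100-digit number N ~ 10**100
for e, label in ((1 / 4, "N^(1/4)"), (1 / 5, "N^(1/5)")):
    print(f"{label}: roughly 10^{digits * e:.0f} operations")
# N^(1/4): roughly 10^25 operations
# N^(1/5): roughly 10^20 operations
```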
{"url":"https://groups.yahoo.com/neo/groups/primenumbers/conversations/topics/23901?o=1&d=-1","timestamp":"2014-04-18T10:46:51Z","content_type":null,"content_length":"41299","record_id":"<urn:uuid:e24ccd49-3c64-42a1-a60f-4e67a7d189ca>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
Sums of Powers of Positive Integers - Johann Faulhaber (1580-1635), Germany Johann Faulhaber was an arithmetic teacher (rechenmeister), who spent his entire life in Ulm, Germany. He also was an enthusiastic adherent of the new Protestantism who highlighted the number 666 in all of his books, whether religious or mathematical, and whose numerological predictions put him in constant conflict with religious and civil authorities. In his 1631 Academia Algebrae, Faulhaber presented formulas for sums of powers of the first n positive integers from the 13^th to the 17^th powers (Faulhaber, folios Bi verso-Cii recto). He had presented formulas for sums of powers up to the seventh power in his 1614 Newer Arithmetischer Wegweyser and up to the 12^th power in his 1617 Continuatio seiner neuen Wunderkunsten (Schneider, p. 138; see also Faulhaber, ff. Aiii verso-Aiv recto). Furthermore, D.E. Knuth has argued that, in Academia Algebrae, Faulhauber actually encoded sums of powers of positive integers up to the 23^rd power (Knuth, pp. 282-283). Figure 10. Faulhaber’s presentation of his formula for the sum of the 13^th powers in Academia Algebrae, folios Bi verso-Bii recto, is printed across a two-page spread. An electronic copy of Academia Algebrae, including folios Bi verso and Bii recto, is available from the University of Dresden Digital Library. (Reproduced by permission of Columbia University Library. For images of these two pages and the title page with rulers alongside to show size, see "Johann Faulhaber's Academia Algebrae" in "Mathematical Treasures.") Faulhaber's formulas in his Academia Algebrae are presented both in terms of n and in terms of n(n + 1)/2. Actually, rather than being given in terms of an unknown n or x, they are presented using an early algebraic notation for powers of the unknown called "cossist" notation. Faulhaber's symbols for these powers are close to Recorde's (see Figure 8) and are listed in folio Dii verso of his Academia Algebrae, available electronically from the University of Dresden Digital Library. In his notation, the symbol for the square is a script "z" that stands for zensus (square), and the symbol for the cube a script "c" that stands for cubus (cube). The symbol for the fourth power, actually two script "z"s, stands for zensus de zensus (square of square), and that for the sixth power, "zc," zensicubus (square of cube). The symbol for the fifth power, similar to the German "ss," stands for sursolidum; for the seventh power, this symbol is preceded by a "b", so we get b-sursolidum. Similarly, Faulhaber's symbol for the eleventh power is c-sursolidum, for the 13th power d-sursolidum, and so on for the other prime powers. Continuing in this manner, we see that the 14th power symbol, then, stands for zensus de b-sursolidum (square of seventh power). We can rewrite Faulhaber's instructions for the sum of the 13th powers (see Figure 10) in modern notation as follows: Given n, multiply $${{n^2 + n} \over 2}$$ by 2764 to obtain 1382n^2 + 1382n, then subtract 691 from this expression to obtain 1382n^2 + 1382n – 691. 
Next, subtract $${{n^4 + 2n^3 + n^2 } \over 4} \cdot 4720,$$ or $$1180n^4 + 2360n^3 + 1180n^2;$$ add $${{n^6 + 3n^5 + 3n^4 + n^3 } \over 8} \cdot 4592,$$ or $$574n^6 + 1722n^5 + 1722n^4 + 574n^3;$$ subtract $${{n^8 + 4n^7 + 6n^6 + 4n^5 + n^4 } \over {16}} \cdot 2800,$$ or $$175n^8 + 700n^7 + 1050n^6 + 700n^5 + 175n^4;$$ and add $${{n^{10} + 5n^9 + 10n^8 + 10n^7 + 5n^6 + n^5 } \over {32}} \cdot 960,$$ or $$30n^{10} + 150n^9 + 300n^8 + 300n^7 + 150n^6 + 30n^5,$$ to obtain the expression $$30n^{10} + 150n^9 + 125n^8 - 400n^7 - 326n^6 + 1052n^5 + 367n^4 - 1786n^3 + 202n^2 + 1382n - 691,$$ which we are to divide by 105. Note that $${{n^2 + n} \over 2} = {{n(n + 1)} \over 2}$$ and that the successive quotients whose products by certain numbers are being added and subtracted are $$\left( {{{n(n + 1)} \over 2}} \right)^2, \left( {{{n(n + 1)} \over 2}} \right)^3, \left( {{{n (n + 1)} \over 2}} \right)^4,\ {\rm and} \left( {{{n(n + 1)} \over 2}} \right)^5.$$ The polynomial we have obtained so far cannot be the correct formula for the sum of the 13^th powers because it is of degree 10 rather than degree 14. Faulhaber’s next instruction is to multiply the expression obtained so far by $${n^4 + 2n^3 + n^2 \over 4}\,\,\, {\rm or}\,\,\, \left(n(n + 1)\over 2 \right)^2$$ to obtain the sum of the 13^th powers, $${{30n^{14} + 210n^{13} + 455n^{12} - 1001n^{10} + 2145n^8 - 3003n^6 + 2275n^4 - 691n^2 } \over {420}}.$$ As D.E. Knuth pointed out (Knuth, p. 277), Faulhaber also described in his presentation above the formula $${{960N^7 - 2800N^6 + 4592N^5 - 4720N^4 + 2764N^3 - 691N^2 } \over {105}},$$ where N = n(n + 1)/2, for the sum of the 13^th powers. In Academia Algebrae, Faulhaber presented similar formulas for sums of 14^th through 17^th powers, and gave a formula for the sum of the eighth powers as well (f. Diii). According to Faulhaber scholar Ivo Schneider, Faulhaber's early training as a weaver helped him see relationships between long columns containing numerical values of powers, sums of powers, and powers and products of sums of powers (see Schneider, pp. 131-139). Indeed, when these columns were arranged side-by-side, they would have resembled somewhat the warp threads – the vertical threads – in a loom, and Faulhaber's alternating pattern of addition and subtraction of terms may have corresponded, for him, to the weaving of weft threads – horizontal threads – over and under the warp threads. Even so, patterns or formulas for writing sums of 18th and higher powers have never been readily apparent to readers of Faulhaber's presentation of his formulas for sums of 13th through 17th powers in Academia Algebrae!
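Both closed forms quoted above are easy to confirm by machine; the small check below (mine, not part of the article) verifies that the polynomial in n, the polynomial in N = n(n + 1)/2, and the direct sum all agree.

```python
def direct(n):
    return sum(k**13 for k in range(1, n + 1))

def poly_n(n):        # the degree-14 polynomial in n quoted above, divided by 420
    return (30*n**14 + 210*n**13 + 455*n**12 - 1001*n**10 + 2145*n**8
            - 3003*n**6 + 2275*n**4 - 691*n**2) // 420

def poly_N(n):        # Knuth's form in N = n(n + 1)/2, divided by 105
    N = n * (n + 1) // 2
    return (960*N**7 - 2800*N**6 + 4592*N**5 - 4720*N**4 + 2764*N**3 - 691*N**2) // 105

print(all(direct(n) == poly_n(n) == poly_N(n) for n in range(1, 60)))   # True
```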
{"url":"http://www.maa.org/publications/periodicals/convergence/sums-of-powers-of-positive-integers-johann-faulhaber-1580-1635-germany","timestamp":"2014-04-17T01:37:01Z","content_type":null,"content_length":"109902","record_id":"<urn:uuid:8703e414-9207-44b5-9041-3e1f04cc959a>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Minus times a minus is a plus
My response to the blog wars about multiplying negative numbers. Mostly inspired by Eric's comment on Mike Croucher's Walking Randomly. Big image, links to a PDF (of vector goodness).
I wanted to put the Inkscape SVG source inside the PNG image. But it turns out wordpress.com "optimises" the image, which means my klever hack doesn't work. Bad wordpress.com.

2009-10-05 at 15:25:20
I prefer the "money" explanation. Having a negative amount of money means owing the money (anyone with a bank account will understand this), so to say I have -£1000 in my account makes perfect sense. If I am paying a mortgage of £600 a month, I can work out what my balance is (absent other transactions) by multiplying by the number of months from now. 2 months ahead gives me 2 * -600 pounds, i.e. -1200 pounds. 2 months in the past gives me (-2) * (-600) pounds, i.e. £1200 more.
At the time I gave this any real thought (when I was doing a PGCE) I did indeed have some serious financial problems, so this was foremost in my mind.
There is some suggestion that Indians (who used numbers for accounting) found -ve numbers easier to conceptualise (and therefore use) than Greek-based mathematicians who were used to numbers as measures of geometric extent.
Your geometrical explanation (which is great by the way) does lapse into algebra. I don't think it needs to, but you can see how the -ve is so much more obvious with money (which can be naturally negative) than distance (when you have to understand co-ords to do the job).
Double entry bookkeeping is of course something you use if you are a bit nervous about minuses 8-).

2009-10-05 at 15:57:45
Wikipedia on the history of negative numbers suggests that confusion about negative numbers was widespread among Europeans (even European mathematicians) as late as the 18th century: for example, in 1758 the British mathematician Francis Maseres claimed that negative numbers "darken the very whole doctrines of the equations and make dark of the things which are in their nature excessively obvious and simple".
In some sense the "minus times minus" question is an improvement on the 18th century situation, since it basically assumes the existence of negative numbers, with at least some understanding of them, even if complete understanding of how to operate on them is still missing. Maybe at some point we will reach this level of public understanding of imaginary and complex numbers? I am cautiously optimistic.

2009-10-05 at 19:01:04
The trouble with complex numbers is motivation. Why bother? At school level you can try to do this via the completion of a field idea – complex numbers are sufficient to supply all the roots you need for any polynomial. Of course those roots don't look like they appear on any graph but… Sadly, a rather slender reason for the complex plane.
The real motivation for complex numbers is that complex *analysis* is so wonderful. Sadly you can't see that until you've learned some analysis – I am told that this is now solely post 16. Maybe in a generation only university students will learn it. A shame.
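One way to answer the question raised in the comment of 2013-05-21 further down, namely why (-b)(-d) = +bd can be asserted without already assuming that a minus times a minus is a plus, is the usual ring-axiom argument, which uses only distributivity, x·0 = 0, and 1 + (-1) = 0:

$$ 0=(-1)\cdot 0=(-1)\bigl(1+(-1)\bigr)=(-1)\cdot 1+(-1)(-1)=-1+(-1)(-1), $$

so adding 1 to both sides gives $(-1)(-1)=1$; and since $-x=(-1)x$ (because $x+(-1)x=(1+(-1))x=0$), it follows that $(-b)(-d)=(-1)(-1)\,bd=bd$.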
2009-10-05 at 16:01:18 Leo Rogers identifies three sources for the confusion: [ed: I added LI markers to make the three points typographically clearer; sorry about the bold] the difference between the operation of subtraction and the object (a negative number), since the same sign is used for both the language involved like ‘minus minus 3′ as opposed to ‘subtract negative 3′ separating the physical model or analogy (be it profit/loss or rise/fall in temperature or rotation/direction in the plane) from the rules of operating on the entities. 2009-10-07 at 19:45:48 Of course I had <ul>…<li>… in the original. As usual, WordPress destroyed my markup. 2013-05-21 at 21:49:01 “algebraiclly we know (a-b)(c-d) = ac + a(-d) + (-b)c + (-b)(-d) ” how did u know this ,the proof of it might contain (-) * (-) u should prove it without using (-) * (-) 2009-10-05 at 17:43:48 Using algebra to explain this is a non-starter, at least the way that maths is taught here. Anyone who hasn’t deeply internalized this result (minus times minus is plus) either is too young to have encountered algebra or has been so turned off maths that the algebra will bring them out in hives. Nice pictures though. 2009-10-05 at 18:25:19 I totally agree, FWIW. 2009-10-05 at 18:51:29 Part of the difficulty here is that your areas aren’t directed. If they were (say) elements of an alternating algebra, then you might make some progress. You could teach directed areas to children pre-algebra, but no-one does. 2009-10-05 at 17:48:21 I think Gareth has the right idea here: negative numbers are different from subtracting. Failing to mentally distinguish them is the same kind of mistake as converting “10C warmer” to “50F warmer” (which category error is seen disturbingly often). 2009-10-05 at 18:53:26 I like it. Good for people with a strong geometric intuition. 2009-10-05 at 18:58:44 One of the things I found very positive (in terms of improvements in pedagogy) during my PGCE (c.1999 I think) was the strong emphasis on building on the intuitions that one’s students happened to have and the methods that worked for them. Of course this means you have to be a much better mathematician and understand many ways of looking at the same problem, but all the better (and all the more interesting) for the teacher. It will just happen that there is probably a way to help the student “get” what you are trying to say, even if others fail. We had a completely incomprehensible “explanation” for – x – = + when I was taught that involved the number line (a dreary place to start I feel) and though we learned it I don’t think it helped at The *easiest* way to understand it – but only if you are teaching your children in a really off the wall way – is that rotating through an angle of \pi twice is obviously going to leave you facing the way you started. I know of no-one that starts using the complex plane before multiplication is taught, but you could. 2009-10-05 at 19:39:44 Of course, there are many audiences for these explanations. You might be talking to a class of kids, or one interested friend over a glass of wine. So I agree that ideally you’ve got many ways to understand the same thing. For example: Some of the other blog explanations for this topic use reflection about zero on the number line. This is a way to describe your rotation in the complex plane, but without discussing complex numbers. 2009-10-05 at 19:43:18 All reflections are pale shadows of higher dimensional rotations. It seems a shame to hide the real truth from one’s interlocutor. 
Sadly, needs sometimes must. 2009-10-05 at 19:12:40 To use the complex plane, one has to abstract number away from reality. Three isn’t three apples or £3 or three fingers. Once you have abstracted numbers into these entities which have their own properties, which are fun to play with in their own right – independent of the apples, oranges, money, sweets, or football scores – then the step to complex (and to other kinds of numbers) is easy. But: this abstraction isn’t really taught in primary school, although it certainly could be and a lot of children do learn to abstract number at a very early age (I could stretch to a claim that some of these children are actually not learning not to). So a lot of people never really learn to abstract numbers. 2009-10-05 at 19:16:01 “not learning not to”: I taught my son to do long multiplication and division in about an hour, when he was about 5 or 6. A couple of years later, over a ridiculously long period of time, he was then taught some bizarro-world crippletastic method of multiplying numbers of 2 or 3 digits together (only ever 2 or 3 digits). In the process, of course, he unlearned long multiplication and division, and essentially learned that multiplying and dividing are hard things best done with a calculator. 2009-10-05 at 19:33:26 One superb thing about my early maths education is that my teachers actually gave a commentary on competing algorithms. So she preferred using “equal addition” as a subtraction algorithm, but explained to us how to use decomposition and also why she thought that equal addition was superior algorithmically (empirical evidence suggests decomposition is easier to teach but more error prone). Having that sort of education is very useful (if you are bright enough to follow it) because it means that you aren’t confused by the way that other people do things. It sounds like your son’s maths tuition was terrible. You should never overwrite methods they already know and that was very strongly “policy” in ’99. Sadly I was aware that lots of my fellow students (at a top maths PGCE course) did not care much for “policy” with disastrous consequences. The same bunch had one student who couldn’t accept that 1=0.9 recurring. No really. 2009-10-06 at 09:51:51 Re subtraction algorithms: What is “equal addition” and “decomposition” ? 2009-10-07 at 09:07:02 “bizarro-world crippletastic” was maybe too harsh. It was really just long multiplication with every little intermediate sum written out in its own separate box. When multiplying 435 by 27, I learned (and taught my son) to do this: wherein the only marks one makes on the paper are those, with possibly a tiny carry digit in 3 places (depending on one’s fluency). The “boxes method” would involve drawing a grid of (I guess) six boxes, doing each digit multiplication in a separate box, then adding down the columns, then adding across the rows. Maybe. Something like that. So you’d write this: Something like that. I can see that this method has pedagogical value: it shows the exact mechanics – what is going on under the hood in a multiplication. But actually using this technique for doing a set of sums involves lots more writing, and lots of opportunity to make elementary errors (in particular, losing track of columns, so the 600 would be 60 or 6000). So one often gets the wrong answer, therefore multiplication is difficult and mysterious. 2009-10-07 at 09:12:14 But yes, maths teaching at primary schools is crap. 
2009-10-07 at 09:49:31 And in fact one multiplies 435 by 27 by saying “400 25s is 10k, 35 25s is 750 and 125, 2 435s is 870, that makes 11k7 and some change”. I’m not sure this is possible to teach to any child who doesn’t do a lot of mental arithmetic. Which they are not required to do. So I suspect that this is a dying art, possibly inaccessible to anyone under 30. 2009-10-07 at 09:56:56 I saw some pretty good mental arithmetic stuff when I was doing teaching practice. There was a move towards making sure children did get exposed to mental arithmetic and the SAT tests had a mental element to them (no working on the paper for instance). A plus of this is that you can teach estimation. Efforts to do so without a mental arithmetic test fail because the kids use a calculator and then round off. Consider (a discussion I had with Gareth Rees recently) what volume of gas at 25 degrees C is occupied by 1 mole? The answer is (in Litres): In your head you will think (“that’s about 8 times 300 divided by a 100 or 24″). Kids can do that because there’s no way they can try to do anything else. Since the actual answer is just 24.4654313 we weren’t doing too badly. So don’t despair of mental arithmetic too much. 2009-10-07 at 09:59:00 The paper (column) methods all require reasonable mental arithmetic, and this is one of the reasons I deliberately use paper methods: so I can practice my mental arithmetic. 2009-10-07 at 11:01:02 The main difficulty with mental arithmetic is remembering all the intermediate results (and in re-organizing the computation so that the intermediate results are small enough to fit in one’s feeble working memory). On paper there’s no such difficulty. 2009-10-07 at 11:25:22 @Francis: interesting. I can usually remember that the molar gas volume is 24 (whereas I never knew the gas constant was 8 until now), but I have massive “K confusion” when it comes to physics. I can never remember whether burning 12 Kg of Carbon will give 24 litres or 24 kilolitres of CO2. It takes me a moment’s thought to realise it must be 24 kilolitres. My intuition is very bad at visualising some large quantities and can easily be out by a factor of 1000. 2009-10-07 at 11:44:17 @drj: that is why I sneakily used J/L for atmospheric pressure rather than Pascals: three fewer 0′s. I have the same problem too. 2009-10-05 at 19:52:48 On the last issue (0.9… ?=? 1), there’s a really interesting paper by Tall and Schwarzenberger, “Conflicts in the Learning of Real Numbers and Limits“. Quote: “Teachers do not help the situation if they show clearly that they feel uneasy with the limit process and so pass on their fears to their pupils.” 2009-10-06 at 09:58:41 http://online.edfac.unimelb.edu.au/485129/wnproj/subtract/algorith.htm is a link to an explanation. Basically when in a column you subtract larger from smaller you add a 10 to it, where does that 10 come from? In Equal Addition you add it to the next digit to subtract, in decomposition you “borrow” it from the next digit to the left. Decomposition means that you sometimes have to make repeated “borrows” for a single subtraction, in equal addition the same thing would cascade through each 2009-10-06 at 10:38:32 Thanks. It seems I was taught (and use) “equal addition”. I always thought it was called “the standard column subtraction method”, but I guess “standards” can vary. 
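The computation being described is presumably the ideal-gas rearrangement V = RT/P with the pressure written in J/L (i.e. kPa), which is what makes the three zeros disappear. A quick check of both the mental estimate and the quoted value — the constants below are my own, not taken from the comment:

    R = 8.314462618      # gas constant, J/(mol K)
    T = 298.15           # 25 degrees C in kelvin
    P = 101.325          # one atmosphere expressed in J/L (i.e. kPa)

    print(R * T / P)     # about 24.465 litres per mole
    print(8 * 300 / 100) # 24.0 -- the in-your-head estimate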
2009-10-06 at 10:45:44 Decomposition went through a phase of being very popular, such that for a while a generation of parents could not understand the algorithms their children were learning (if you see what I Of course since we read left to right the way you really subtract is to start at the left digit and work right, messier but hey. 2009-10-07 at 09:24:54 One of the views some of us came to on the PGCE (looking at the research evidence) is that some of the cognitive problems at age 11 (year 7) were entirely generated by the methods of teaching in primary school. The classic (though there are many) is of course the equation 3 divided by 5 = 1 2/5 (from one bit of research a surprising number of children gave this answer). This works thus: (i) FACT: you cannot divide a smaller number by a larger number (“it doesn’t go”); (ii) so we need to see how many times 3 goes into 5 => answer 1 remainder 2; (iii) what do we do with the 2? Well we were dividing by 5 weren’t we so it must be 2 fifths. Hence: 1 2/5 The “doesn’t go” is a major block for some children, but it is just plain false. Of course you can divide 3 by 5 and most really quite young children can already do that (try it with a cake, chocolate bars, or something similar). They already have the right intuitions so telling them that “it doesn’t go” tramples on any real intuition they have about division and makes them think that fractions are somehow weirder things. You then have to teach them all this later on. It means that “divide a half by a quarter” seems much harder than it needs to be. Again a surprising number of teens can do this if they don’t think about it but if they write it out it all goes pear shaped and they end up with stupid answers. Recently my wife was revising some mathematics for an exam and I realised that she hadn’t understood (vulgar) fractions at school either. Asking for a “half of 2 thirds” at first left her blank though of course a “half of two apples” and a “half of two cats” made perfect sense. Easy when you understand what fractions are all about – impossible if you have been mystified by them. At my PGCE interview I was asked to explain how to divide one fraction by another. Sadly this is done badly. Why? One fundamental problem is that most primary teachers don’t understand what they are teaching, hence sowing further confusion. I know people who obtained their maths GCSE on the fifth attempt in order to qualify as a primary school teacher.
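The examples in this exchange — 3 divided by 5, a half of two thirds, a half divided by a quarter — can all be checked directly with Python's fractions module. A small sketch, purely illustrative:

    from fractions import Fraction

    print(Fraction(3, 5))                   # 3/5  -- "it goes" perfectly well
    print(Fraction(1, 2) * Fraction(2, 3))  # 1/3  -- half of two thirds
    print(Fraction(1, 2) / Fraction(1, 4))  # 2    -- how many quarters fit in a half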
{"url":"http://drj11.wordpress.com/2009/10/05/minus-times-a-minus-is-a-plus/","timestamp":"2014-04-16T16:10:02Z","content_type":null,"content_length":"124224","record_id":"<urn:uuid:5c50c50d-bc38-43df-9608-a080ecfedce7>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Document Distance: Program Using Dictionaries

Problem Definition | Data Sets | Programs: v1 - v2 - v3 - v4 - v5 - v6 | Program Using Dictionaries

Here is one more version of the document distance code which uses hashing more thoroughly (PY). It achieves a running time of Θ(n). This linear time bound is optimal because any solution must at least look at the input.

This version makes three changes compared to Document Distance: Program Version 6.

First, count_frequency no longer converts the dictionary it computes into a list of items. This will be useful for computing the inner products later. The only changed line is the final return, which is now return D instead of return D.items().

Second, word_frequencies_for_file no longer calls merge_sort on the frequency mapping. Because the frequencies are stored in a dictionary, we no longer need to sort them.

The third and main change is the new version of inner_product, which works directly with dictionaries instead of merging sorted lists:

    def inner_product(D1, D2):
        """
        Inner product between two vectors, where vectors are represented as
        dictionaries of (word, freq) pairs.

        Example: inner_product({"and":3, "of":2, "the":5},
                               {"and":4, "in":1, "of":1, "this":2}) = 14.0
        """
        sum = 0.0
        for key in D1:
            if key in D2:
                sum += D1[key] * D2[key]
        return sum

The code is actually simpler now, because we no longer need the code for sorting. This version runs about three times faster on our canonical example, t2.bobsey.txt t3.lewis.txt.
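To see how inner_product fits into the pipeline, here is a minimal sketch of the remaining pieces. The real program reads words from files; the names count_frequency_sketch and document_angle below are illustrative stand-ins rather than the course code, and the sketch assumes the inner_product definition above is in scope:

    import math

    def count_frequency_sketch(words):
        """Return a dictionary mapping each word to how many times it occurs."""
        D = {}
        for w in words:
            D[w] = D.get(w, 0) + 1
        return D

    def document_angle(D1, D2):
        """Angle between two word-frequency vectors (the document distance)."""
        numerator = inner_product(D1, D2)
        denominator = math.sqrt(inner_product(D1, D1) * inner_product(D2, D2))
        return math.acos(numerator / denominator)

    doc1 = count_frequency_sketch("the cat sat on the mat".split())
    doc2 = count_frequency_sketch("the cat ate the canary".split())
    print(document_angle(doc1, doc2))   # about 0.84 radians

Because both frequency maps are dictionaries, each inner product is a single pass over one dictionary with constant-time lookups in the other, which is what gives the overall Θ(n) bound.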
{"url":"http://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/dd_dict.htm","timestamp":"2014-04-19T20:29:47Z","content_type":null,"content_length":"10572","record_id":"<urn:uuid:86cd3546-328b-4853-9730-d9dab91e564b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
The most complicated problem in Derivative of Inverse Trigonometric Functions September 12th 2009, 09:59 PM The most complicated problem in Derivative of Inverse Trigonometric Functions Hello everyone, i'm new in here . I do hope I'm welcome in here. Pls.. give me some complicated, not basic, really complicated problems in derivatives of inverse trigonometric.. If possible pls. post it asap... many thanks :D September 12th 2009, 10:01 PM May I ask why? Or, does the name say it all? BTW I am an MHF Ambassador(Giggle), and as such, I speak on behalf of the forum when I say: September 12th 2009, 10:02 PM Prove It September 12th 2009, 10:09 PM mr fantastic Go to the library and find calculus books. Each book will have examples of what you're looking for. Try also searching the Calculus subforum. And I also suggest you try using Google. And, what the heck, perhaps you could ask your teacher too. September 12th 2009, 10:13 PM Ok, Ok... I'll give him one. This isn't that difficult, but try this and we'll see where you're at. Try that on for size. Ooops he said derivatives... I'm not a very good listener. How about... Show that $\frac{d}{dx}\arcsin{x}=\frac{1}{\sqrt{1-x^2}}$ . September 12th 2009, 10:44 PM actually i have browsed and photocopied exercises in books in our library.. But the problems are so easy,... but when our professor gives an exam... It's so difficult as compared to the problems in the books.. Pls.. give me a set of problems thanks. September 12th 2009, 11:30 PM here answer this one click the blue text http://www.mathhelpforum.com/math-he...tml#post364022 September 13th 2009, 02:47 AM mr fantastic As I said earlier, ask you Professor (that's part of his job). Also, past exam papers should be available (ask your institute's library, which is where they are usally archived) for you to work We don't have time to construct difficult questions for you (and how are we to know what is easy, difficult and impossible for you anyway. Everything is relative.) and we don't have time to write out solutions to such questions (which I assume you would want when you got stuck). The main purpose of MHF is to help people with questions thay can't do, not to provide an extension program for people who already understand the work and don't really need help. September 13th 2009, 03:34 AM Students are constantly complaining that the problems on the test are harder than the homework problems. Actually professors typically make an effort to see that most of the test problems are easier than most of the homework problems. What they really mean is that on a test they don't have the text book open, or the answer in the back of the book, or friends working with them, etc.
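For the record, the arcsine challenge posed above has a short standard derivation — the usual implicit-differentiation argument, written out here for completeness rather than taken from the thread:

Let $y = \arcsin{x}$ with $-\frac{\pi}{2} \le y \le \frac{\pi}{2}$, so that $\sin{y} = x$. Differentiating implicitly gives $\cos{y}\,\frac{dy}{dx} = 1$, hence

$\frac{dy}{dx} = \frac{1}{\cos{y}} = \frac{1}{\sqrt{1-\sin^2{y}}} = \frac{1}{\sqrt{1-x^2}}$,

where the positive square root is the right choice because $\cos{y} \ge 0$ on $\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$.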
{"url":"http://mathhelpforum.com/calculus/101984-most-complicated-problem-derivative-inverse-trigonometric-functions-print.html","timestamp":"2014-04-20T11:54:41Z","content_type":null,"content_length":"11657","record_id":"<urn:uuid:1cd728ed-cb02-49f3-bbc9-2cffa230afc5>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
A Symbolic Derivation of Beta-splines of Arbitrary Order Brian A. Barsky and Gadiel Seroussi EECS Department University of California, Berkeley Technical Report No. UCB/CSD-91-633 June 1991 Beta-splines are a class of splines with applications in the construction of curves and surfaces for computer-aided geometric design. One of the salient features of the Beta-spline is that the curves and surfaces thus constructed are geometrically continuous, a more general notion of continuity than the one used in ordinary B-splines. The basic building block for Beta-splines of order k is a set of Beta-polynomials of degree k-1, which are used to form the Beta-spline basis functions. The coefficients of the Beta-polynomials are functions of certain shape parameters Beta s; i. In this paper, we present a symbolic derivation of the Beta-polynomials as polynomials over the field K n of real rational functions in the indeterminates Beta s; i. We prove, constructively, the existence and uniqueness of Beta-polynomials satisfying the design objectives of geometric continuity, minimum spline order, invariance under translation, and linear independence, and we present an explicit symbolic procedure for their computation. The initial derivation, and the resulting procedure, are valid for the general case of discretely-shaped Beta-splines of arbitrary order, over uniform knot sequences. By extending the field K n with symbolic indeterminates z s representing the lengths of the parametric intervals, the result is generalized to discretely-shaped Beta-splines over non-uniform knot sequences. BibTeX citation: Author = {Barsky, Brian A. and Seroussi, Gadiel}, Title = {A Symbolic Derivation of Beta-splines of Arbitrary Order}, Institution = {EECS Department, University of California, Berkeley}, Year = {1991}, Month = {Jun}, URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/1991/6395.html}, Number = {UCB/CSD-91-633}, Abstract = {Beta-splines are a class of splines with applications in the construction of curves and surfaces for computer-aided geometric design. One of the salient features of the Beta-spline is that the curves and surfaces thus constructed are geometrically continuous, a more general notion of continuity than the one used in ordinary B-splines. The basic building block for Beta-splines of order <i>k</i> is a set of Beta-polynomials of degree <i>k</i>-1, which are used to form the Beta-spline basis functions. The coefficients of the Beta-polynomials are functions of certain shape parameters Beta<i>s</i>;<i>i</i>. In this paper, we present a symbolic derivation of the Beta-polynomials as polynomials over the field K<i>n</i> of real rational functions in the indeterminates Beta<i>s</i>;<i>i</i>. We prove, constructively, the existence and uniqueness of Beta-polynomials satisfying the design objectives of geometric continuity, minimum spline order, invariance under translation, and linear independence, and we present an explicit symbolic procedure for their computation. The initial derivation, and the resulting procedure, are valid for the general case of discretely-shaped Beta-splines of arbitrary order, over uniform knot sequences. By extending the field K<i>n</i> with symbolic indeterminates z<i>s</i> representing the lengths of the parametric intervals, the result is generalized to discretely-shaped Beta-splines over non-uniform knot sequences.} EndNote citation: %0 Report %A Barsky, Brian A. 
%A Seroussi, Gadiel %T A Symbolic Derivation of Beta-splines of Arbitrary Order %I EECS Department, University of California, Berkeley %D 1991 %@ UCB/CSD-91-633 %U http://www.eecs.berkeley.edu/Pubs/TechRpts/1991/6395.html %F Barsky:CSD-91-633
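The kind of symbolic computation the abstract describes can be sketched for the simplest interesting case — cubic (k = 4), uniform, discretely-shaped — with sympy. The snippet below is only an illustration of setting up Beta-constraints and solving for segment coefficients over the field of rational functions in the shape parameters; the constraint form, the normalization chosen, and the names beta1, beta2 are my assumptions, not the paper's actual procedure:

    import sympy as sp

    t, beta1, beta2 = sp.symbols('t beta1 beta2')

    # One basis function as four cubic segments, each in a local parameter t in [0, 1].
    c = sp.symbols('c0:16')
    segs = [sum(c[4 * j + k] * t**k for k in range(4)) for j in range(4)]

    def at(f, n, t0):
        """n-th derivative of f with respect to t, evaluated at t0."""
        return sp.diff(f, t, n).subs(t, t0)

    eqs = []
    # Outer ends of the support: value, first and second derivative all vanish.
    eqs += [at(segs[0], n, 0) for n in range(3)]
    eqs += [at(segs[3], n, 1) for n in range(3)]
    # Interior joints: Beta-constraints expressing G2 geometric continuity.
    for j in range(3):
        L, R = segs[j], segs[j + 1]
        eqs.append(at(R, 0, 0) - at(L, 0, 1))
        eqs.append(at(R, 1, 0) - beta1 * at(L, 1, 1))
        eqs.append(at(R, 2, 0) - (beta1**2 * at(L, 2, 1) + beta2 * at(L, 1, 1)))
    # Normalization: the basis functions overlapping a knot sum to one there.
    eqs.append(at(segs[1], 0, 0) + at(segs[2], 0, 0) + at(segs[3], 0, 0) - 1)

    sol = sp.solve(eqs, c, dict=True)[0]
    basis = [sp.simplify(s.subs(sol)) for s in segs]
    print(basis)   # segment coefficients as rational functions of beta1, beta2

    # With beta1 = 1, beta2 = 0 the segments are expected to collapse to the
    # familiar uniform cubic B-spline pieces t^3/6, (1+3t+3t^2-3t^3)/6, ...
    print([sp.simplify(b.subs({beta1: 1, beta2: 0})) for b in basis])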
{"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/1991/6395.html","timestamp":"2014-04-16T16:08:13Z","content_type":null,"content_length":"7637","record_id":"<urn:uuid:1d10bc59-efa6-4158-b772-cfa25b9426ee>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
Determining a generating function (of a restricted form) up vote 0 down vote favorite Inspired by a recent problem for linear recurrence relations I have the following question (which may be too much to hope for). The Catalan numbers (just to give a specific example) have generating Suppose I had the the first 20 or 50 terms of the left-hand size and wondered if there was an expression such as the right-hand side which agreed with it at least as far as I had? I would value answers in other forms but I'll ask about the following: Given a finite integer sequence is there a reasonable procedure to discover if it might have a generating function of the form $\frac{p(x)+\sqrt{r(x)}}{q(x)}$ where $p,q,r$ are polynomials with many fewer coefficients between them than the length of the series. generating-functions co.combinatorics 1 You could try the seriestoalgeq command of the Maple package gfun. See algo.inria.fr/libraries/papers/gfun.html. – Richard Stanley Jun 4 '11 at 1:43 add comment 2 Answers active oldest votes In effect you're asking whether a given power series $y(x) = \sum_{n=0}^\infty a_n x^n$ satisfies a quadratic equation $a(x) y^2 + b(x) y + c(x) = 0$ with $a,b,c$ polynomials of low degree in $x$, say degree less than $\delta$. This is a linear system in $3\delta$ variables (the coefficients of $a,b,c$), so it's easy to test whether it has a nontrivial solution; and once you have several more than $3\delta$ coefficients of $y$ your linear system has several more equations than unknowns so the existence of a nontrivial solution is strong evidence that you've found the correct formula for $y$. In the case of the Catalan numbers, if we take $\delta=2$ we're looking for a linear dependence among the sequences of coefficients of $1,x,y,xy,y^2,xy^2$, which are respectively 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, ... 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, ... 0, 1, 1, 2, 5, 14, 42, 132, 429, 1430, ... up vote 8 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, ... down vote 0, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, ... and in this simple case you'll find the linear relation $1 - y + xy^2 = 0$ "by inspection", but in any case a linear algebra package will find the corresponding dependence $(1, 0, -1, 0, 0, 1)$ automatically. Presumably that's what's done at some level by the seriestoalg routine that Richard Stanley recommends. Naturally this idea works for finding dependencies whose degree in $y$, call it $d$, exceeds 2; for instance you can find a cubic over ${\bf Q}(x)$ satisfied by the generating function with $a_n = (3n)! / (n! (2n+1)!)$. [The proof is a standard exercise in residue calculus; for two elementary alternatives see my "one-page papers" at http://www.math.harvard.edu/~elkies/ Misc/catalan.pdf and http://www.math.harvard.edu/~elkies/Misc/catalan2.pdf .] For large $\delta$ there are algorithms for finding such relations that are more efficient than generic linear algebra, analogous to (but simpler than) the lattice reduction that's used for finding integer relations among real numbers known to some accuracy. I don't know if such an algorithm is implemented in seriestoalg or in another available package. Fantastic, thanks! – Aaron Meyerowitz Jun 4 '11 at 4:04 For large $\delta$ use guessAlg in FriCAS... I'm not sure whether advertising this is already spam, in case it is, please let me know. – Martin Rubey Jun 4 '11 at 17:25 add comment This sort of question was flogged half to death by D. 
Bailey some twenty five years ago, using lattice reduction techniques. I am sure there are other references, but see http://crd.lbl.gov/~dhbailey/dhbpapers/pslq-cse.pdf

Thanks for this early reference. It seems though that Bailey's paper addresses "the lattice reduction that's used for finding integer relations among real numbers known to some accuracy", which I mentioned at the end of my answer, but not the analogous (and simpler) question that the OP asked. Yes, there are bound to be references for this generating-function analogue too. – Noam D. Elkies Jun 4 '11 at 15:53
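The accepted recipe above — look for a linear dependence among the coefficient sequences of 1, x, y, xy, y², xy² — is easy to try directly. A small sketch (my own illustration, using sympy only for the exact nullspace) that recovers 1 − y + xy² = 0 from the first ten Catalan numbers:

    from math import comb
    import sympy as sp

    N = 10
    cat = [comb(2 * n, n) // (n + 1) for n in range(N)]   # 1, 1, 2, 5, 14, ...

    def shift(seq):               # multiply a truncated power series by x
        return [0] + seq[:-1]

    def mult(a, b):               # product of two truncated power series
        return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N)]

    one = [1] + [0] * (N - 1)
    x = shift(one)
    y = cat
    y2 = mult(y, y)

    # Columns are the coefficient sequences of 1, x, y, xy, y^2, xy^2.
    M = sp.Matrix([one, x, y, shift(y), y2, shift(y2)]).T
    print(M.nullspace())
    # One basis vector, proportional to (1, 0, -1, 0, 0, 1),
    # i.e. 1 - y + x*y^2 = 0, the familiar Catalan functional equation.

With ten coefficients and only six unknown columns the system is comfortably overdetermined, so finding a nontrivial null vector is already strong evidence for the relation, as the answer above points out.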
{"url":"http://mathoverflow.net/questions/66863/determining-a-generating-function-of-a-restricted-form","timestamp":"2014-04-19T14:57:05Z","content_type":null,"content_length":"61010","record_id":"<urn:uuid:fb5120e3-a855-4992-8323-a96fc5fd49a5>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving Quartic

April 2nd 2006, 11:18 AM #1
Dec 2005

Solving Quartic

Find all roots of x^4 + i = 0. Write answers in trigonometric form. Don't know how to solve quartics. Any help would be appreciated.

April 2nd 2006, 01:05 PM #2
Global Moderator
Nov 2005
New York City

Find all roots of x^4 + i = 0. Write answers in trigonometric form. Don't know how to solve quartics. Any help would be appreciated.

You need to use a theorem known as De Moivre's. You need to express
$x^4 = -i$
Now, express $-i$ in trigonometric form,
$-i = \cos\left(-\frac{\pi}{2}\right) + i\sin\left(-\frac{\pi}{2}\right)$
By De Moivre's Theorem,
$x = \cos\left(-\frac{\pi}{8}\right) + i\sin\left(-\frac{\pi}{8}\right)$
But this is only one solution; there are 3 more (because of the degree). In this case, we divide $2\pi$ by the degree, (4), to get $\pi/2$, and each other solution is obtained by adding $\pi/2$:
$x=\left(\cos\frac{3\pi}{8}+i\sin\frac{3\pi}{8} \right)$
$x=\left(\cos\frac{7\pi}{8}+i\sin\frac{7\pi}{8} \right)$
$x=\left(\cos\frac{11\pi}{8}+i\sin\frac{11\pi}{8} \right)$
If you want you can add $2\pi$ to your first solution to get,
$x=\left(\cos\frac{3\pi}{8}+i\sin\frac{3\pi}{8} \right)$
$x=\left(\cos\frac{7\pi}{8}+i\sin\frac{7\pi}{8} \right)$
$x=\left(\cos\frac{11\pi}{8}+i\sin\frac{11\pi}{8} \right)$
$x=\left(\cos\frac{15\pi}{8}+i\sin\frac{15\pi}{8} \right)$
The beauty of the second result is that it is very easy to see that each one is divided by 8 and the numerators increase by 4.

April 2nd 2006, 02:17 PM #3
Dec 2005

when you solve for roots do you always solve for x?

April 2nd 2006, 03:11 PM #4
Global Moderator
Nov 2005
New York City

when you solve for roots do you always solve for x?

Not sure I understand your question

April 2nd 2006, 03:44 PM #5
Dec 2005

$x^4 + i = 0$
You subtracted $i$ from both sides to get $x^4=-i$
Why did you subtract $i$?
Last edited by c_323_h; April 2nd 2006 at 03:46 PM.

April 2nd 2006, 03:51 PM #6
Global Moderator
Nov 2005
New York City

$x^4 + i = 0$
You subtracted $i$ from both sides to derive $x^4=-i$
Why did you subtract $i$?

You mean, in a regular polynomial you factor one side so you can set its factors equal to zero, but over here you do not do this? Because this is a special polynomial equation, $x^n = y$, $y \neq 0$. We can show that there are EXACTLY $n$ solutions, and they are called Roots of Unity, which are of quite some importance in Abstract Algebra/Field Theory. Whenever you see an equation like this, you bring the constant term to the other side and then use De Moivre's Theorem to find its roots. It is interesting to note that the solutions you get are in trigonometric form. Some of them can be simplified in algebraic terms (only using +, -, ×, / and roots) while some of them cannot (impossible), and thus you need to leave them in the form they are in.
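The four roots listed above are easy to check numerically. A small Python sketch, not part of the original thread — using cmath here is just one convenient way to do it:

    import cmath

    angles = [3 * cmath.pi / 8 + k * cmath.pi / 2 for k in range(4)]  # 3pi/8, 7pi/8, 11pi/8, 15pi/8
    roots = [cmath.exp(1j * a) for a in angles]                       # cos(a) + i sin(a)

    for x in roots:
        print(x, abs(x**4 + 1j))   # residuals of order 1e-15: each x satisfies x^4 + i = 0

Each angle differs from the previous one by pi/2, so raising to the fourth power sends all four to the same point, -i, exactly as the De Moivre argument predicts.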
{"url":"http://mathhelpforum.com/pre-calculus/2424-solving-quartic.html","timestamp":"2014-04-16T17:46:48Z","content_type":null,"content_length":"51574","record_id":"<urn:uuid:4fca84a2-15d6-4254-b1d4-5cb4576736ed>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
This series covers all areas of research at Perimeter Institute, as well as those outside of PI's scope. The standard method to study nonperturbative properties of quantum field theories is to Wick rotate the theory to Euclidean space and regulate it on a Euclidean Lattice. An alternative is "fuzzy field theory". This involves replacing the lattice field theory by a matrix model that approximates the field theory of interest, with the approximation becoming better as the matrix size is increased. The regulated field theory is one on a background noncommutative space. I will describe how this method works and present recent progress and surprises. Several current experiments probe physics in the approximation in which Planck's constant and Newton's constant may be neglected, but, the Planck mass, is relevant. These include tests of the symmetry of the ground state of quantum gravity such as time delays in photons of different energies from gamma ray bursts. I will describe a new approach to quantum gravity phenomenology in this regime, developed with Giovanni Amelino-Camelia, Jersy Kowalski-Glikman and Laurent Freidel. Traditional condensed matter physics is based on two theories: symmetry breaking theory for phases and phase transitions, and Fermi liquid theory for metals. Mean-field theory is a powerful method to describe symmetry breaking phases and phase transitions by assuming the ground state wavefunctions for many-body systems can be approximately described by direct product states. The Fermi liquid theory is another powerful method to study electron systems by assuming that the ground state wavefunctions for the electrons can be approximately described by Slater determinants. How many interacting quantum (field) theories of four-dimensional geometry are there which have General Relativity as their classical limit? Some of us still harbour hopes that a quantum theory of gravity is "reasonably unique", i.e. characterized by a finite number of free parameters. One framework in which such universality may manifest itself is that of "Quantum Gravity from Causal Dynamical Triangulations (CDT)". Cosmic strings, generic in brane inflationary models, may be detected by the current generation of gravitational wave detectors. An important source of gravitational wave emission is from isolated events on the string called cusps and kinks. I first review cosmic strings, discussing their effective action and motion, and showing how cusps and kinks arise dynamically. I then show how allowing for the motion of the strings in extra dimensions gives a potentially significant reduction in signal strength, and comment on current LIGO bounds. Scattering amplitudes in gauge theories and gravity have extraordinary properties that are completely invisible in the textbook formulation of quantum field theory using Feynman diagrams. In the standard approach--going back to the birth of quantum field theory--space-time locality and quantum-mechanical unitarity are made manifest at the cost of introducing huge gauge redundancies in our description of physics. To a first approximation, everything that happens at the Large Hadron Collider at CERN is a strong interaction process. If signals of supersymmetric particles or other new states are found at the LHC, the events that produce those signals will represent parts per trillion of the total sample of proton-proton scattering events and parts per billion of the sample of events with hard scattering of quarks and gluons. 
Can we predict the rates of QCD processes well enough to control their contribution to a tantalizing signal? What physics insights can assist this process? Graphene-like materials provide a unique opportunity to explore quantum-relativistic phenomena in a condensed matter laboratory. Interesting phenomena associated with the internal degrees of freedom, spin and valley, including quantum spin-Hall effect, have been theoretically proposed, but could not be observed so far largely due to disorder and density inhonogeneity. We show that weak magnetic field breaks the symmetries that protect flavor (spin, valley) degeneracy, and induces large bulk non-quantized flavor-Hall effect in graphene. Quantum error correcting codes and topological quantum order (TQO) are inter-connected fields that study non-local correlations in highly entangled many-body quantum states. In this talk I will argue that each of these fields offers valuable techniques for solving problems posed in the other one. First, we will discuss the zero-temperature stability of TQO and derive simple conditions that guarantee stability of the spectral gap and the ground state degeneracy under generic local perturbations. These conditions thus can be regarded as a rigorous definition of TQO. The average quantum physicist on the street believes that a quantum-mechanical Hamiltonian must be Dirac Hermitian (invariant under combined matrix transposition and complex conjugation) in order to guarantee that the energy eigenvalues are real and that time evolution is unitary. However, the Hamiltonian $H=p^2+ix^3$, which is obviously not Dirac Hermitian, has a real positive discrete spectrum and generates unitary time evolution, and thus it defines a fully consistent and physical quantum theory.
{"url":"https://www.perimeterinstitute.ca/video-library/collection/colloquium?page=11","timestamp":"2014-04-19T04:01:59Z","content_type":null,"content_length":"65844","record_id":"<urn:uuid:04ef06ba-0a7d-4ebd-9dbc-d2c85ffcc68d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Bayesian Interpolation Results 1 - 10 of 416 - LEARNING IN GRAPHICAL MODELS , 1995 "... ..." - ADVANCES IN LARGE MARGIN CLASSIFIERS , 1999 "... The output of a classifier should be a calibrated posterior probability to enable post-processing. Standard SVMs do not provide such probabilities. One method to create probabilities is to directly train a kernel classifier with a logit link function and a regularized maximum likelihood score. Howev ..." Cited by 699 (0 self) Add to MetaCart The output of a classifier should be a calibrated posterior probability to enable post-processing. Standard SVMs do not provide such probabilities. One method to create probabilities is to directly train a kernel classifier with a logit link function and a regularized maximum likelihood score. However, training with a maximum likelihood score will produce non-sparse kernel machines. Instead, we train an SVM, then train the parameters of an additional sigmoid function to map the SVM outputs into probabilities. This chapter compares classification error rate and likelihood scores for an SVM plus sigmoid versus a kernel method trained with a regularized likelihood error function. These methods are tested on three data-mining-style data sets. The SVM+sigmoid yields probabilities of comparable quality to the regularized maximum likelihood kernel method, while still retaining the sparseness of the SVM. , 2001 "... This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classication tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the `relevance vec ..." Cited by 552 (5 self) Add to MetaCart This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classication tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the `relevance vector machine' (RVM), a model of identical functional form to the popular and state-of-the-art `support vector machine' (SVM). We demonstrate that by exploiting a probabilistic Bayesian learning framework, we can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while oering a number of additional advantages. These include the benets of probabilistic predictions, automatic estimation of `nuisance' parameters, and the facility to utilise arbitrary basis functions (e.g. non-`Mercer' kernels). - Bioinformatics , 2001 "... Motivation: DNA microarrays are now capable of providing genome-wide patterns of gene expression across many different conditions. The first level of analysis of these patterns requires determining whether observed differences in expression are significant or not. Current methods are unsatisfactory ..." Cited by 294 (2 self) Add to MetaCart Motivation: DNA microarrays are now capable of providing genome-wide patterns of gene expression across many different conditions. The first level of analysis of these patterns requires determining whether observed differences in expression are significant or not. Current methods are unsatisfactory due to the lack of a systematic framework that can accommodate noise, variability, and low replication often typical of microarray data. Results: We develop a Bayesian probabilistic framework for microarray data analysis. 
At the simplest level, we model log-expression values by independent normal distributions, parameterized by corresponding means and variances with hierarchical prior distributions. We derive point estimates for both parameters and hyperparameters, and regularized expressions for the variance of each gene by combining the empirical variance with a local background variance associated with neighboring genes. An additional hyperparameter, inversely related to the number of empirical observations, determines the strength of the background variance. Simulations show that these point estimates, combined with a t-test, provide a systematic inference approach that compares favorably with simple t-test or fold methods, and partly compensate for the lack of replication. Availability: The approach is implemented in a software called Cyber-T accessible through a Web interface at www.genomics.uci.edu/software.html. The code is available as Open Source and is written in the freely available statistical language R. and Department of Biological Chemistry, College of Medicine, University of California, Irvine. To whom all correspondence should be addressed. Contact: pfbaldi@ics.uci.edu, tdlong@uci.edu. 1 - Neural Computation , 1999 "... We introduce the independent factor analysis (IFA) method for recovering independent hidden sources from their observed mixtures. IFA generalizes and unifies ordinary factor analysis (FA), principal component analysis (PCA), and independent component analysis (ICA), and can handle not only square no ..." Cited by 219 (9 self) Add to MetaCart We introduce the independent factor analysis (IFA) method for recovering independent hidden sources from their observed mixtures. IFA generalizes and unifies ordinary factor analysis (FA), principal component analysis (PCA), and independent component analysis (ICA), and can handle not only square noiseless mixing, but also the general case where the number of mixtures differs from the number of sources and the data are noisy. IFA is a two-step procedure. In the first step, the source densities, mixing matrix and noise covariance are estimated from the observed data by maximum likelihood. For this purpose we present an expectation-maximization (EM) algorithm, which performs unsupervised learning of an associated probabilistic model of the mixing situation. Each source in our model is described by a mixture of Gaussians, thus all the probabilistic calculations can be performed analytically. In the second step, the sources are reconstructed from the observed data by an optimal non-linear ... , 2000 "... The support vector machine (SVM) is a state-of-the-art technique for regression and classification, combining excellent generalisation properties with a sparse kernel representation. However, it does suffer from a number of disadvantages, notably the absence of probabilistic outputs, the requirement ..." Cited by 214 (6 self) Add to MetaCart The support vector machine (SVM) is a state-of-the-art technique for regression and classification, combining excellent generalisation properties with a sparse kernel representation. However, it does suffer from a number of disadvantages, notably the absence of probabilistic outputs, the requirement to estimate a trade-off parameter and the need to utilise `Mercer' kernel functions. In this paper we introduce the Relevance Vector Machine (RVM), a Bayesian treatment of a generalised linear model of identical functional form to the SVM. 
The RVM suffers from none of the above disadvantages, and examples demonstrate that for comparable generalisation performance, the RVM requires dramatically fewer kernel functions. - IEEE Transactions on Image Processing , 1996 "... The human visual system appears to be capable of temporally integrating information in a video sequence in such a way that the perceived spatial resolution of a sequence appears much higher than the spatial resolution of an individual frame. While the mechanisms in the human visual system which do t ..." Cited by 211 (10 self) Add to MetaCart The human visual system appears to be capable of temporally integrating information in a video sequence in such a way that the perceived spatial resolution of a sequence appears much higher than the spatial resolution of an individual frame. While the mechanisms in the human visual system which do this are unknown, the effect is not too surprising given that temporally adjacent frames in a video sequence contain slightly different, but unique, information. This paper addresses how to utilize both the spatial and temporal information present in a short image sequence to create a single high-resolution video frame. A novel observation model based on motion compensated subsampling is proposed for a video sequence. Since the reconstruction problem is ill-posed, Bayesian restoration with a discontinuity-preserving prior image model is used to extract a high-resolution video still given a short low-resolution sequence. Estimates computed from a low-resolution image sequence containing a subp... - Machine Learning , 1997 "... We discuss Bayesian methods for learning Bayesian networks when data sets are incomplete. In particular, we examine asymptotic approximations for the marginal likelihood of incomplete data given a Bayesian network. We consider the Laplace approximation and the less accurate but more efficient BIC/MD ..." Cited by 178 (10 self) Add to MetaCart We discuss Bayesian methods for learning Bayesian networks when data sets are incomplete. In particular, we examine asymptotic approximations for the marginal likelihood of incomplete data given a Bayesian network. We consider the Laplace approximation and the less accurate but more efficient BIC/MDL approximation. We also consider approximations proposed by Draper (1993) and Cheeseman and Stutz (1995). These approximations are as efficient as BIC/MDL, but their accuracy has not been studied in any depth. We compare the accuracy of these approximations under the assumption that the Laplace approximation is the most accurate. In experiments using synthetic data generated from discrete naive-Bayes models having a hidden root node, we find that (1) the BIC/MDL measure is the least accurate, having a bias in favor of simple models, and (2) the Draper and CS measures are the most accurate. 1 , 1996 "... Graphical techniques for modeling the dependencies of random variables have been explored in a variety of different areas including statistics, statistical physics, artificial intelligence, speech recognition, image processing, and genetics. Formalisms for manipulating these models have been develop ..." Cited by 167 (12 self) Add to MetaCart Graphical techniques for modeling the dependencies of random variables have been explored in a variety of different areas including statistics, statistical physics, artificial intelligence, speech recognition, image processing, and genetics. 
Formalisms for manipulating these models have been developed relatively independently in these research communities. In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs. Furthermore, the existence of inference and estimation algorithms for more general graphical models provides a set of analysis tools for HMM practitioners who wish to explore a richer class of HMM structures. Examples of relatively complex models to handle sensor fusion and coarticulation in speech recognition are introduced and treated within the graphical model framework to illustrate the advantages of the general approach. - Neural Computation , 1992 "... Three Bayesian ideas are presented for supervised adaptive classifiers. First, it is argued that the output of a classifier should be obtained by marginalising over the posterior distribution of the parameters; a simple approximation to this integral is proposed and demonstrated. This involves a `mo ..." Cited by 152 (10 self) Add to MetaCart Three Bayesian ideas are presented for supervised adaptive classifiers. First, it is argued that the output of a classifier should be obtained by marginalising over the posterior distribution of the parameters; a simple approximation to this integral is proposed and demonstrated. This involves a `moderation' of the most probable classifier 's outputs, and yields improved performance. Second, it is demonstrated that the Bayesian framework for model comparison described for regression models in (MacKay, 1992a, 1992b) can also be applied to classification problems. This framework successfully chooses the magnitude of weight decay terms, and ranks solutions found using different numbers of hidden units. Third, an information--based data selection criterion is derived and demonstrated within this framework. 1 Introduction A quantitative Bayesian framework has been described for learning of mappings in feedforward networks (MacKay, 1992a, 1992b). It was demonstrated that this `evidence' fram...
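One of the citations above (the 1996 paper on probabilistic independence networks) notes that the forward-backward algorithm for HMMs is a special case of general PIN inference. As a reminder of what the forward recursion actually computes, here is a minimal sketch with made-up numbers — the matrices A, B and the observation sequence are arbitrary and not taken from any of the papers listed:

    import numpy as np

    # Toy 2-state HMM: transition matrix A, emission matrix B, initial distribution pi.
    A = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
    B = np.array([[0.9, 0.1],        # rows: states, columns: P(observation | state)
                  [0.2, 0.8]])
    pi = np.array([0.5, 0.5])

    obs = [0, 1, 1, 0]               # an observation sequence

    # Forward pass: alpha[i] = P(observations so far, current state = i)
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]

    print(alpha.sum())               # likelihood P(obs) of the whole sequence

The backward pass is the mirror-image recursion, and together they give the posterior state marginals that the cited paper recovers as a special case of general inference on the corresponding graph.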
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.27.9072","timestamp":"2014-04-18T21:50:50Z","content_type":null,"content_length":"39295","record_id":"<urn:uuid:e7bcfd76-895c-4fa9-8c79-ac03851c57a3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Description of Accuplacer Tests Reading This test is designed to measure how well you understand what you read. Some ask you to decide how two sentences are related. Other questions ask you questions about reading Comprehension passages of various lengths. You will be asked to interpret and draw conclusions from what you have read. Twenty questions are asked. Sentence Skills Two kinds of questions are given in this test. You will be asked to correct a sentence by choosing a word or phrase to substitute for an underlined portion of a sentence. In the other type of question, you will be asked to rewrite a sentence is a specific way without changing the meaning. Twenty questions are asked. The Arithmetic test measures your skills in three primary categories: • Operations with whole numbers and fractions Includes: addition, subtraction, multiplication, division, recognizing equivalent fractions and mixed numbers Arithmetic • Operations with decimals and percents Includes: addition, subtraction, multiplication, and division percent problems, decimal recognition, fraction percent equivalencies, and estimation problems • Applications and problem solving Includes: rate, percent, and measurement problems, geometry problems, distribution of a quantity into its fractional parts. Seventeen questions are asked. When may I take The test is given up until 2 hours before the closing at that particular testing center. Please check the hours on this website. No appointment is necessary. There are also three categories in the Elementary Algebra Test: • Operations with integers and rational numbers Includes: computation with integers and negative rationals, the use of absolute values, and ordering Elementary • Operations with algebraic expressions Algebra Includes: evaluations of simple formulas, expressions, and adding, subtracting monomials and polynomials, the evaluation of positive rational roots and exponents, simplifying algebraic fractions, and factoring • Equation solving, inequalities, and word problems Includes: solving verbal problems presented in algebraic context, geometric reasoning, the translation of written phrases into algebraic expressions, graphing Twelve questions are asked. The College-Level mathematics test assesses proficiency from intermediate algebra through precalculus. The categories covered include: • Algebraic operations Includes: simplifying rational algebraic expressions, factoring and expanding polynomials, manipulating roots and exponents • Solutions of equations and inequalities Includes: the solution of linear and quadratic equations by factoring, expanding polynomials, manipulating roots and exponents College-Level • Coordinate geometry Mathematics Includes: plane geometry, the coordinate plane, straight lines, conics, sets of points in a plane, graphs of algebraic functions • Application and other algebra topics Asks about: complex numbers, series and sequences, determinants, permutations, combinations, fractions, word problems. • Functions and trigonometry Presents questions about: polynomial, algebraic, exponential, logarithmic, trigonometric functions. Twenty questions are asked.
{"url":"http://www.weber.edu/TestingCenter/accuplacer-description.html","timestamp":"2014-04-21T04:32:47Z","content_type":null,"content_length":"22949","record_id":"<urn:uuid:b982f19a-b063-4c34-829d-4a08817efb6c>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
Toltec, AZ Math Tutor Find a Toltec, AZ Math Tutor ...I would welcome the opportunity to help you! Regards, Fred Reviews from previous students: "An inspiration. He touches the lives of everyone he teaches and gives students a new confidence in 15 Subjects: including algebra 1, algebra 2, calculus, geometry ...I have taught elementary math both one-on-one and in a classroom setting. It is extremely rewarding to me to help give younger students a firm grounding in the basic principles of arithmetic and basic number concepts that will carry them through subsequent years of math classes. I readily adapt my teaching style and curriculum to suit the needs and individual learning styles of each 33 Subjects: including algebra 1, algebra 2, ACT Math, piano Hi,I'm a friendly high school senior that is enthusiastic in the subject of Mathematics. I hope to tutor any student and I hope to work my hardest with you to reach your full potential. Here's a big accomplishment that I want you to know:I am already used to hearing “I need your help,” from elementary students. 67 Subjects: including algebra 2, American history, biology, chemistry ...I have a Masters Degree in Math and Math Education. My specialties are Algebra and Geometry. I am also proficient in Calculus. 13 Subjects: including statistics, algebra 1, algebra 2, calculus ...I have assisted hundreds of students in calculus and hope to work with you in the future! Let my love of chemistry infect your son, daughter, or perhaps yourself! I have been tutoring and taught chemistry for years and find immense enjoyment assisting others in the subject. 10 Subjects: including algebra 1, algebra 2, calculus, chemistry Related Toltec, AZ Tutors Toltec, AZ Accounting Tutors Toltec, AZ ACT Tutors Toltec, AZ Algebra Tutors Toltec, AZ Algebra 2 Tutors Toltec, AZ Calculus Tutors Toltec, AZ Geometry Tutors Toltec, AZ Math Tutors Toltec, AZ Prealgebra Tutors Toltec, AZ Precalculus Tutors Toltec, AZ SAT Tutors Toltec, AZ SAT Math Tutors Toltec, AZ Science Tutors Toltec, AZ Statistics Tutors Toltec, AZ Trigonometry Tutors Nearby Cities With Math Tutor Arizona City Math Tutors Corona De Tucson, AZ Math Tutors Corona, AZ Math Tutors Dudleyville Math Tutors Eleven Mile Corner, AZ Math Tutors Eleven Mile, AZ Math Tutors Gu Achi, AZ Math Tutors Haciendas De Tena, PR Math Tutors Litchfield, AZ Math Tutors Little Tucson, AZ Math Tutors Mescal, AZ Math Tutors Pisinemo, AZ Math Tutors Redington, AZ Math Tutors Santa Rita Foothills, AZ Math Tutors Sil Nakaya, AZ Math Tutors
{"url":"http://www.purplemath.com/Toltec_AZ_Math_tutors.php","timestamp":"2014-04-19T17:27:00Z","content_type":null,"content_length":"23748","record_id":"<urn:uuid:8ed94256-3398-42a1-bbbc-fc9eed24c201>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
ALEX Lesson Plans Subject: Mathematics (K), or Technology Education (K - 2) Title: Lucky Charms colors and shapes Description: Students will complete a hands-on activity by sorting, estimating, counting, and graphing data. Students will sort the Lucky Charms by color and/shape. Students will interpret the data and create a graph. Subject: Mathematics (K), or Technology Education (K - 2) Title: Heart Graphing Description: During this lesson students will complete hands-on activities.Technology will be incorporated with this lesson. Students will sort conversation heart candy by colors. Students will then use their data to complete picture graphs. Subject: Arts Education (1), or English Language Arts (1), or English Language Arts (1), or Mathematics (K), or Science (1) Title: Get Jiggy With It! Description: Students will exhibit their understanding of character through improvisation. Students will listen to a story and act out the actions. They will demonstrate the verbs and adjectives that describe their characters' movements. Subject: Mathematics (K - 2) Title: "Bursting with Math" Description: Using "Starburst" jelly beans to sort, graph and add. Subject: Mathematics (K) Title: Is Your Order Up or Down? Description: The students will be afforded opportunities to actively explore ordering numbers (from least to greatest and greatest to least). The students will engage in a whole group and cooperative learning group setting to explore ordering whole numbers. Students will also explore interactive web activities in an effort to enhance their understanding of ordering whole numbers.This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation. Subject: English Language Arts (2), or English Language Arts (2), or Mathematics (K - 3), or Technology Education (K - 2) Title: "No More Money Trouble" Description: This lesson will allow students to identify and count money. They will enjoy playing with coins that look like real money. This lesson is guaranteed to motivate students through the use of hands-on and cooperative group activities. This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation. Subject: Mathematics (K - 2), or Technology Education (K - 2) Title: Congruent Figures Description: This lesson is an introduction to a unit about congruent shapes. Students will begin by listening to a book about shapes. They will then complete several hands-on activities, including sorting shapes, using their bodies to form congruent shapes, and using activities on the Internet to practice forming congruent shapes. Subject: Mathematics (K) Title: Counting Backward from Ten Description: Students will be introduced in this lesson to the concept of orally counting backward from ten using a book, number cards, and the Internet. Subject: Character Education (K - 12), or English Language Arts (1 - 3), or English Language Arts (1 - 3), or Mathematics (K - 2), or Technology Education (K - 5) Title: It's Your Birthday, Dr. Seuss! Description: This lesson plan is intended to be used during the month of March when the class is preparing to celebrate Dr. Seuss Day. After listening to and reading different stories by Dr. Seuss, the students have opportunities to compare and contrast story elements through critical thinking and Venn diagrams. Subject: Mathematics (K) Title: Fishy Addition! 
Description: In this lesson, students will use manipulatives to act out and solve simple addition problems. Subject: English Language Arts (K), or English Language Arts (K), or Mathematics (K) Title: There's Magic In Green Eggs and Ham Description: As a part of a unit about Dr. Suess, students will learn about sequential order. Student understanding will be aided by the use of hands-on preparation of green eggs and ham, dictation, flannel board, sentence strips, and a living center. Subject: English Language Arts (1), or English Language Arts (1), or Mathematics (K - 2), or Science (1), or Technology Education (K - 2) Title: "Nuts" About Peanuts!! (Writing) Description: This lesson will be implemented as part of a unit about plants. The students will describe the characteristics of a peanut and peanut butter (using their five senses) and record their observations/descriptions on a graphic organizer. Student understanding will be enhanced with the use of books, class discussions, and the Internet. Subject: English Language Arts (1), or English Language Arts (1), or Mathematics (K - 2), or Science (1), or Technology Education (K - 2) Title: "Nuts" About Peanuts! (Reading) Description: The lesson will be implemented as part of a unit about plants. The students will learn about the growth cycle of a peanut and use a Five-Step Sequence Think-sheet to sequence the steps. They will also make predictions and complete activities to check their predictions/draw conclusions (including math skills of graphing and comparing). Student understanding will be enhanced with the use of books, class discussions, and the Internet. Subject: English Language Arts (1), or English Language Arts (1), or Mathematics (K - 1), or Science (1), or Technology Education (K - 2) Title: Leo Lionni’s Little Blue and Little Yellow Description: Students will use Leo Lionni’s literature and his website to learn about the author, mix primary colors, summarize and rewrite a story, and use graphing skills to determine the classes’ favorite color. Subject: English Language Arts (2), or English Language Arts (2), or Mathematics (K - 2), or Science (1 - 2), or Technology Education (K - 2) Title: Very Busy Spiders Description: The students will work from a computer and a specified Webquest to complete a unit on spiders. The lessons will incorporate literature, writing, science, art, and graphing. Thinkfinity Lesson Plans Subject: Mathematics Title: Building Numbers to Five Description: In this lesson, one of a multi-part unit from Illuminations, students make groups of zero to five objects, connect number names to the groups, compose and decompose numbers, and use numerals to record the size of a group. Visual, auditory, and kinesthetic activities are used to help students begin to acquire a sense of number. Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Mathematics Title: Further Investigating Greater Than, Less Than, and Equal To Description: Students build upon their understanding of greater than, less than, and equal to by observing quantity and making comparison using varied instructional materials. The fish cut-out, with its mouth open, represents the greater than or less than symbol; the clam cut-out represents the equal to symbol. Using fish lips as a transition point, children will apply their understanding of greater, less, and equal to the standard symbols as the teacher introduces symbolic notation at an developmentally appropriate level. 
Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Mathematics Title: Count to 10: Looking Back and Moving Forward Description: In this lesson, one of a multi-part unit from Illuminations, students review this unit by creating, decomposing, and comparing sets of zero to ten objects and by writing the cardinal number for each set. Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Mathematics Title: Multiple Patterns Description: In this lesson, one of a multi-part unit from Illuminations, students explore patterns that involve doubling. They use objects and numbers in their exploration and record them using a Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Mathematics Title: Building Sets of Seven Description: In this lesson, one of a multi-part unit from Illuminations, students construct and identify sets of seven objects. They compare sets of up to seven items and record a set of seven in chart form. Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Health,Mathematics Title: Food Pyramid Power: Description: In this six-lesson unit from Illuminations, students use algebraic thinking to explore properties of addition. The food pyramid is the starting point for these explorations. The learning experiences foster facility with the process standards of communication, connections, representation, and reasoning and proof. Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Health,Mathematics Title: Pyramid Power Description: In this lesson, one of a multi-part unit from Illuminations, students make sets of a given number, explore relationships between numbers, and write numbers that enumerate how many elements are in a group. They make and record sets of one more and one less than a given number. They have the opportunity to apply their reasoning and communication skills in this lesson. Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Language Arts,Mathematics Title: How Many Buttons? Description: In this lesson, one of a multi-part unit from Illuminations, students review classification, make sets of a given number, explore relationships between numbers, and find numbers that are one more and one less than a given number. They apply their knowledge of classification as they play a game similar to bingo. Several pieces of literature appropriate for use with this lesson are Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Mathematics Title: Island Inequality Mat Description: The concepts of greater than, less than, and equal to are explored in this 2-lesson unit where students create piles of food on two islands, and use a paper fish, whose mouth is open, pointing towards the greater amount, introducing students to the associated symbols Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Mathematics Title: Button Trains Description: In this lesson, one of a multi-part unit from Illuminations, students describe order by using vocabulary such as before, after, and between. They also review and use both cardinal and ordinal numbers. Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Mathematics Title: Shirts Full of Buttons Description: In this lesson, one of a multi-part unit from Illuminations, students explore subtraction in the comparative mode by answering questions such as How many more? and How many less? as they match sets of buttons. They also make and discuss bar graphs based on the number of buttons they are wearing. 
Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Mathematics Title: Look at Me Description: In this six-lesson unit, from Illuminations, students collect data and display it with tally marks, pictographs, bar graphs, and glyphs. Students explore logical and numerical relationships, review one-to-one correspondence, explore patterns and the relationships between numbers, and model addition sentences with missing addends. Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Mathematics, Science Title: Sampling Rocks Description: The purpose of this lesson, from Science NetLinks, is for students to learn about sampling through an investigation of rocks found in the schoolyard. In this lesson, students collect and analyze a sampling of rocks from the schoolyard. They sort the collected rocks by characteristics such as size, weight, and color, to see if any generalizations can be made about the types of rocks that can be found in the schoolyard. Thinkfinity Partner: Science NetLinks Grade Span: K,1,2 Subject: Mathematics Title: Calculating Patterns Description: In this eight lesson unit, from Illuminations, students represent patterns in different ways. They solve problems; make, explain, and defend conjectures; make generalizations; and extend and clarify their knowledge. Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Mathematics Title: Who's in the Fact Family? Description: In this lesson, one of a multi-part unit from Illuminations, students explore the relation of addition to subtraction. Students use problem-solving skills to find fact families, including those in which one addend is zero or in which the addends are alike. Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 Subject: Mathematics Title: Begin with Buttons: Looking Back and Moving Forward Description: In this lesson, one of a multi-part unit from Illuminations, students review the work of the previous lessons through a variety of activity stations, one of which involves using an interactive Web site. Students model with buttons and record addition and subtraction. Thinkfinity Partner: Illuminations Grade Span: K,PreK,1,2 ALEX Podcasts The Game of Compare This activity is like the card game WAR. The students turn over cards and compare the numbers on the cards. The student with the larger number retrieves the cards. If the same number is turned over, the students turn over another card and compare the new numbers to decide who get the cards. The Great Bear Race Racing Bears is a counting game. The math focus points are counting spaces and moving on a gameboard and thinking strategically about the moves on a gameboard. Calendar Math In this podcast our class will demonstrate how we use calendar math each day. We discussed the days of the week. We graphed the daily, monthly and yearly weather. We used the 100's chart to practice skip counting and even and odd numbers. We used straws to count the days of school, grouping ones, tens and one hundreds. Traveling Numbers This podcast shows the various places numbers can be found when traveling. It also shows the shapes that can be found in traffic signs. Web Resources Informational Materials Know Your Numbers in a Flash This website creates flashcards or allows teachers to create their own flashcards to assist students in number recognition and illustrations for one-to-one correspondence. 
Teacher Tools Know Your Numbers in a Flash This website creates flashcards or allows teachers to create their own flashcards to assist students in number recognition and illustrations for one-to-one correspondence.
{"url":"http://alex.state.al.us/all.php?std_id=53521","timestamp":"2014-04-20T23:32:28Z","content_type":null,"content_length":"190306","record_id":"<urn:uuid:f9c2011d-3212-4de7-9e80-56b0d323a800>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
C++ templates 06-05-2012 #1 Registered User Join Date Sep 2009

C++ templates

Hi, I'm fairly new to C++. I'm trying to make a priority queue with a binary search tree. My BST class is

template <class T>
class BST
{
    class Node
    {
    public:
        T Value;
        Node* Left;
        Node* Right;

        Node(const T& value)
        {
            Value = value;
            Left = NULL;
            Right = NULL;
        }
    };

    Node* Root;

public:
    BST()
    {
        Root = NULL;
    }

    void Add(const T& value)
    {
        Node* node = new Node(value);
        // ...
    }
};

#include "BST.h"

template <class T, class K>
class PriorityQueue
{
    class PriorityNode
    {
    public:
        T Value;
        K Priority;

        PriorityNode(T value, K priority)
        {
            Value = value;
            Priority = priority;
        }

        inline bool operator<(const PriorityNode& node) const;
        inline bool operator==(const PriorityNode& node) const;
    };

    BST<PriorityNode> Bst;

public:
    void Add(T value, K priority)
    {
        PriorityNode* node = new PriorityNode(value, priority);
        // ...
    }
};

My problem is here:

PriorityNode* node = new PriorityNode(value, priority);

When I try to add to the Bst, it expects type T instead of PriorityNode.

Sorry, but I cannot reproduce your problem, though I did get a compile error having to do with the fact that PriorityNode does not have a default constructor (you should be using the constructor initialiser list in various places instead of what you are doing now). I suggest that you post the smallest and simplest program that you think should compile but which demonstrates the error. Also, post the error messages.
C + C++ Compiler: MinGW port of GCC Version Control System: Bazaar Look up a C++ Reference and learn How To Ask Questions The Smart Way

This is so muddled up! A "Binary Search Tree" is something completely different from a "Priority Queue". A "Priority Queue" strongly implies a heap data structure. A heap should not be implemented as a binary tree at all. It is actually much harder to implement a heap as a binary tree and it is a lot less efficient. Choose which of these things it is supposed to be. Once you've done that we can work out which bits of this code can be salvaged.
My homepage Advice: Take only as directed - If symptoms persist, please see your debugger Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"

This is about as simple as I can make it (you can remove operator overloads I guess). My error is

error C2512: 'PriorityQueue<T,K>::PriorityNode' : no appropriate default constructor available BST.h 19 1

I'm compiling with VS 2012 if that helps. Also, this is an assignment, so I am required to use a BST to implement a priority queue. My plan was to have a BST that is of type PriorityNode, and the PriorityNode overloads <, >, and ==, and does that comparison based on the priority. The value of the PriorityNode will probably be a vector of whatever values have that priority.

Stab in the dark ...

template <class T>
class BST
{
    class Node
    {
        T Value;
        Node* Left;
        Node* Right;

        Node(const T& value)
        {
            Value = value;
            Left = NULL;
            Right = NULL;
        }
    };
    // ...
};

This code requires T to be default-constructible. My guess is, whatever type you specialize this template for just doesn't have a parameterless default constructor. To fix that, you could do what laserlight suggested and INITIALIZE rather than assign.

Node(const T& value) : Value( value ), Left( NULL ), Right( NULL ) {}

I am required to use a BST to implement a priority queue.

Are you sure the requirement was "binary search tree" and not just "binary tree"? A heap built from a simple binary tree is easier than messing about building a balanced search tree.

My guess is, whatever type you specialize this template for just doesn't have a parameterless default constructor.

That should probably be "instantiate" instead of "specialize".

Uh ... yeah, sorry. Language barrier problem.

Thanks, that worked. I will look up constructor initializers.
edit: yes, it has to be a BST
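For reference, here is one minimal, self-contained way the two classes could look once the initializer-list fix is applied. This is a sketch, not the original poster's code: the operator bodies, the Add() internals, and the pass-by-const-reference signatures are assumptions made for illustration.

#include <cstddef>

template <class T>
class BST
{
    struct Node
    {
        T Value;
        Node* Left;
        Node* Right;

        // Initializer list: T is never default-constructed, so it needs no default constructor.
        Node(const T& value) : Value(value), Left(NULL), Right(NULL) {}
    };

    Node* Root;

public:
    BST() : Root(NULL) {}

    void Add(const T& value)
    {
        Node* node = new Node(value);
        // ... walk the tree with operator< and attach the node ...
    }
};

template <class T, class K>
class PriorityQueue
{
    struct PriorityNode
    {
        T Value;
        K Priority;

        PriorityNode(const T& value, const K& priority) : Value(value), Priority(priority) {}

        // Order nodes by priority only (assumed; the original bodies were not posted).
        bool operator<(const PriorityNode& other) const { return Priority < other.Priority; }
        bool operator==(const PriorityNode& other) const { return Priority == other.Priority; }
    };

    BST<PriorityNode> Bst;

public:
    void Add(const T& value, const K& priority)
    {
        // BST<PriorityNode>::Add expects a const PriorityNode&, so pass an object, not a pointer.
        Bst.Add(PriorityNode(value, priority));
    }
};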
{"url":"http://cboard.cprogramming.com/cplusplus-programming/148998-cplusplus-templates.html","timestamp":"2014-04-17T23:03:07Z","content_type":null,"content_length":"70542","record_id":"<urn:uuid:63e19962-c596-42d0-bc56-ab60be97371c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Conjugate gradients squared method x = cgs(A,b) [x,flag] = cgs(A,b,...) [x,flag,relres] = cgs(A,b,...) [x,flag,relres,iter] = cgs(A,b,...) [x,flag,relres,iter,resvec] = cgs(A,b,...) x = cgs(A,b) attempts to solve the system of linear equations A*x = b for x. The n-by-n coefficient matrix A must be square and should be large and sparse. The column vector b must have length n. You can specify A as a function handle, afun, such that afun(x) returns A*x. Parameterizing Functions explains how to provide additional parameters to the function afun, as well as the preconditioner function mfun described below, if necessary. If cgs converges, a message to that effect is displayed. If cgs fails to converge after the maximum number of iterations or halts for any reason, a warning message is printed displaying the relative residual norm(b-A*x)/norm(b) and the iteration number at which the method stopped or failed. cgs(A,b,tol) specifies the tolerance of the method, tol. If tol is [], then cgs uses the default, 1e-6. cgs(A,b,tol,maxit) specifies the maximum number of iterations, maxit. If maxit is [] then cgs uses the default, min(n,20). cgs(A,b,tol,maxit,M) and cgs(A,b,tol,maxit,M1,M2) use the preconditioner M or M = M1*M2 and effectively solve the system inv(M)*A*x = inv(M)*b for x. If M is [] then cgs applies no preconditioner. M can be a function handle mfun such that mfun(x) returns M\x. cgs(A,b,tol,maxit,M1,M2,x0) specifies the initial guess x0. If x0 is [], then cgs uses the default, an all-zero vector. [x,flag] = cgs(A,b,...) returns a solution x and a flag that describes the convergence of cgs. ┃ Flag │ Convergence ┃ ┃ 0 │ cgs converged to the desired tolerance tol within maxititerations. ┃ ┃ 1 │ cgs iterated maxit times but did not converge. ┃ ┃ 2 │ Preconditioner M was ill-conditioned. ┃ ┃ 3 │ cgs stagnated. (Two consecutive iterates were the same.) ┃ ┃ 4 │ One of the scalar quantities calculated during cgs became too small or too large to continue computing. ┃ Whenever flag is not 0, the solution x returned is that with minimal norm residual computed over all the iterations. No messages are displayed if the flag output is specified. [x,flag,relres] = cgs(A,b,...) also returns the relative residual norm(b-A*x)/norm(b). If flag is 0, then relres <= tol. [x,flag,relres,iter] = cgs(A,b,...) also returns the iteration number at which x was computed, where 0 <= iter <= maxit. [x,flag,relres,iter,resvec] = cgs(A,b,...) also returns a vector of the residual norms at each iteration, including norm(b-A*x0). Using cgs with a Matrix Input A = gallery('wilk',21); b = sum(A,2); tol = 1e-12; maxit = 15; M1 = diag([10:-1:1 1 1:10]); x = cgs(A,b,tol,maxit,M1); displays the message cgs converged at iteration 13 to a solution with relative residual 2.4e-016. Using cgs with a Function Handle This example replaces the matrix A in the previous example with a handle to a matrix-vector product function afun, and the preconditioner M1 with a handle to a backsolve function mfun. The example is contained in the file run_cgs that ● Calls cgs with the function handle @afun as its first argument. ● Contains afun as a nested function, so that all variables in run_cgs are available to afun and myfun. The following shows the code for run_cgs: function x1 = run_cgs n = 21; b = afun(ones(n,1)); tol = 1e-12; maxit = 15; x1 = cgs(@afun,b,tol,maxit,@mfun); function y = afun(x) y = [0; x(1:n-1)] + ... [((n-1)/2:-1:0)'; (1:(n-1)/2)'].*x + ... 
    [x(2:n); 0];
function y = mfun(r)
    y = r ./ [((n-1)/2:-1:1)'; 1; (1:(n-1)/2)'];

When you enter

x1 = run_cgs

MATLAB® software returns

cgs converged at iteration 13 to a solution with relative residual 2.4e-016.

Using cgs with a Preconditioner. This example demonstrates the use of a preconditioner. Load west0479, a real 479-by-479 nonsymmetric sparse matrix.

load west0479;
A = west0479;

Define b so that the true solution is a vector of all ones.

b = full(sum(A,2));

Set the tolerance and maximum number of iterations.

tol = 1e-12;
maxit = 20;

Use cgs to find a solution at the requested tolerance and number of iterations.

[x0,fl0,rr0,it0,rv0] = cgs(A,b,tol,maxit);

fl0 is 1 because cgs does not converge to the requested tolerance 1e-12 within the requested 20 iterations. In fact, the behavior of cgs is so poor that the initial guess (x0 = zeros(size(A,2),1)) is the best solution and is returned as indicated by it0 = 0. MATLAB stores the residual history in rv0. Plot the behavior of cgs.

xlabel('Iteration number');
ylabel('Relative residual');

The plot shows that the solution does not converge. You can use a preconditioner to improve the outcome. Create a preconditioner with ilu, since A is nonsymmetric.

[L,U] = ilu(A,struct('type','ilutp','droptol',1e-5));
Error using ilu
There is a pivot equal to zero. Consider decreasing the drop tolerance or consider using the 'udiag' option.

MATLAB cannot construct the incomplete LU as it would result in a singular factor, which is useless as a preconditioner. You can try again with a reduced drop tolerance, as indicated by the error message.

[L,U] = ilu(A,struct('type','ilutp','droptol',1e-6));
[x1,fl1,rr1,it1,rv1] = cgs(A,b,tol,maxit,L,U);

fl1 is 0 because cgs drives the relative residual to 4.3851e-014 (the value of rr1). The relative residual is less than the prescribed tolerance of 1e-12 at the third iteration (the value of it1) when preconditioned by the incomplete LU factorization with a drop tolerance of 1e-6. The output rv1(1) is norm(b) and the output rv1(14) is norm(b-A*x2). You can follow the progress of cgs by plotting the relative residuals at each iteration starting from the initial estimate (iterate number 0).

xlabel('Iteration number');
ylabel('Relative residual');

[1] Barrett, R., M. Berry, T. F. Chan, et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM, Philadelphia, 1994.

[2] Sonneveld, Peter, "CGS: A fast Lanczos-type solver for nonsymmetric linear systems," SIAM J. Sci. Stat. Comput., January 1989, Vol. 10, No. 1, pp. 36–52.

See Also bicg | bicgstab | function_handle | gmres | ilu | lsqr | minres | mldivide | pcg | qmr | symmlq
{"url":"http://www.mathworks.com.au/help/matlab/ref/cgs.html?nocookie=true","timestamp":"2014-04-24T19:15:04Z","content_type":null,"content_length":"46541","record_id":"<urn:uuid:0da6bda2-a2bc-4991-86a9-48b024501387>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
Using Z-Scores to Calculate Probability

Date: 05/25/2008 at 16:56:42
From: Kate
Subject: Statistics

Trains carry bauxite ore from a mine in Canada to an aluminum processing plant in northern New York state in hopper cars. Filling equipment is used to load ore into the hopper cars. When functioning properly, the actual weights of ore loaded into each car by the filling equipment at the mine are approximately normally distributed with a mean of 70 tons and a standard deviation of 0.9 ton. If the mean is greater than 70 tons, the loading mechanism is overfilling. If the filling equipment is functioning properly, what is the probability that the weight of the ore in a randomly selected car will be 70.7 tons or more?

Date: 05/27/2008 at 18:46:08
From: Doctor Grosch
Subject: Re: Statistics

Hi, Kate:

This is a problem in determining the Z-function. Visualize a diagram of the normal, bell-shaped distribution. Visualize also how the standard deviation fits in the diagram. The area under the normal curve from +1 SD to -1 SD comprises about 2/3 of the total area of the curve. That is, for this problem, the mean sits at 70 tons on the horizontal axis and the area under the normal curve from the points on the horizontal axis from 70-0.9 tons to 70+0.9 tons (69.1 tons to 70.9 tons) contains about 2/3 of the area of the whole curve. Now, the area under the curve from -infinity to the mean is half the total area and that from the mean to +infinity is the other half of the total area. In this context area is the same as probability.

The question is what the probability is of having 70.7 tons or more of ore in a randomly selected car. Don't let the "randomly selected" bit spook you. That's a necessary precondition for getting a reliable answer. Don't be concerned about how the workers in the field make sure that the car is randomly selected. Just accept, as a given, that they've managed to select the car randomly and go from there.

Your main task, in a problem like this, is to figure out the critical z-value and to make certain that you pick it correctly. As you probably know, the formula for z follows:

    Z = (x - mu) / sigma

where x is the point in question, namely 70.7 tons of ore, mu is the mean value (in this case, 70 tons of ore), and sigma is the standard deviation.

    Z = (70.7 - 70) / 0.9

    Z = 0.7/0.9 = 0.778

That gives you the entry-value for the tabulated z-value in your table for the normal curve. Look at the following link:

Hyperstat Online: Normal Distribution

It's a normal-curve calculator. All you have to do is to put your z-value into the indicated space and it calculates the area or probability for you and gives you a picture of the situation on the normal curve. The only difference from the problem-statement is that the curve is centered on zero as its mean, whereas your problem centers the mean at 70. You can disregard that distinction because calculating the Z-value universalizes the normal curve and centers its mean at zero.

You are interested in the area of the curve ABOVE the critical Z-value, so put your figure, 0.778, in the indicated space (above) and you see the area or probability: 0.218, to three significant figures, which is usually adequate. The calculator gives you six significant figures but the last three are meaningless, for most purposes.

- Doctor Grosch, The Math Forum
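In more compact notation, the calculation above (assuming the stated normal model with mean 70 and standard deviation 0.9, and writing Phi for the standard normal CDF) is:

$$
Z = \frac{X - \mu}{\sigma} = \frac{70.7 - 70}{0.9} \approx 0.778,
\qquad
P(X \ge 70.7) = P(Z \ge 0.778) = 1 - \Phi(0.778) \approx 0.218 .
$$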
{"url":"http://mathforum.org/library/drmath/view/72187.html","timestamp":"2014-04-16T17:21:33Z","content_type":null,"content_length":"8715","record_id":"<urn:uuid:9fe002e1-3704-4a37-b362-f1927de40601>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Adeles A generalization of number concept is proposed. One can replace integer n with n-dimensional Hilbert space and sum + and product × with direct sum ⊕ and tensor product ⊗ and introduce their co-operations, the definition of which is highly non-trivial. This procedure yields also Hilbert space variants of rationals, algebraic numbers, p-adic number fields, and even complex, quaternionic and octonionic algebraics. Also adeles can be replaced with their Hilbert space counterparts. Even more, one can replace the points of Hilbert spaces with Hilbert spaces and repeat this process, which is very similar to the construction of infinite primes having interpretation in terms of repeated second quantization. This process could be the counterpart for construction of n^th order logics and one might speak of Hilbert or quantum mathematics. The construction would also generalize the notion of algebraic holography and provide self-referential cognitive representation of mathematics. This vision emerged from the connections with generalized Feynman diagrams, braids, and with the hierarchy of Planck constants realized in terms of coverings of the imbedding space. Hilbert space generalization of number concept seems to be extremely well suited for the purposes of TGD. For instance, generalized Feynman diagrams could be identifiable as arithmetic Feynman diagrams describing sequences of arithmetic operations and their co-operations. One could interpret ×[q] and +[q] and their co-algebra operations as 3-vertices for number theoretical Feynman diagrams describing algebraic identities X=Y having natural interpretation in zero energy ontology. The two vertices have direct counterparts as two kinds of basic topological vertices in quantum TGD (stringy vertices and vertices of Feynman diagrams). The definition of co-operations would characterize quantum dynamics. Physical states would correspond to the Hilbert space states assignable to numbers. One prediction is that all loops can be eliminated from generalized Feynman diagrams and diagrams are in projective sense invariant under permutations of incoming (outgoing legs).
{"url":"http://vixra.org/abs/1203.0059","timestamp":"2014-04-19T06:51:56Z","content_type":null,"content_length":"8707","record_id":"<urn:uuid:3814e430-4648-493a-b43b-2b4d50595fc8>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
Polygonal Meshes Author Polygonal Meshes Ranch Hand Hi, I'm experimenting with polygonal meshes to make 3D objects. Joined: Apr 11, 2005 I can draw two polygons (connected by an edge), each calling: Posts: 743 But there seems to be a slight gap between them, when the edge is not parallel/perpendidular to edge of screen, allowing the background to faintly show through. So the endpoints of this line eg (20,20) and (50,70) will occur in both polygons. I can partially fix this by adding 1 to the x-coordinate of the all the points on the right side of each polygon, to extend the polygon on the right side, to seal up the gap. This backfires once you rotate the polygons so that the right edge becomes the left edge, since here you would have to subtract 1, to fill the gap by extending the new left side Is there any simpler way, like setting some property on the Graphics2D object, or something like that? Thanks for any help. Ranch Hand I just did a little test Joined: Apr 11, 2005 If I use: Posts: 743 Then the edges of polygons are smooth, but gaps exist. Then the edges of polygons are jagged, but no gaps. So far, I can have one but not the other, unless I start fiddling with adding/subracting 1, to bridge the gaps. Is there anyway to achieve both? Thanks again for any help. Ranch Hand Joined: Jan 14, 2004 Sometimes the stroke control hint seems to help when precision is important. Posts: 1535 subject: Polygonal Meshes
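For anyone running into the same seam problem, below is a small sketch of the hint setup being discussed. The class and constant names are standard Java2D, but the exact combination that closes the gaps for a particular mesh is an assumption to experiment with, not a guaranteed fix; MeshPanel and the sample coordinates are made up for illustration.

import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Polygon;
import java.awt.RenderingHints;
import javax.swing.JPanel;

public class MeshPanel extends JPanel {
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;

        // Antialiasing smooths the polygon edges...
        g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                            RenderingHints.VALUE_ANTIALIAS_ON);

        // ...while STROKE_PURE asks Java2D to use exact, un-normalized coordinates,
        // which often removes the faint background line along shared edges.
        g2.setRenderingHint(RenderingHints.KEY_STROKE_CONTROL,
                            RenderingHints.VALUE_STROKE_PURE);

        Polygon p = new Polygon(new int[]{20, 50, 60}, new int[]{20, 70, 10}, 3);
        g2.fillPolygon(p);
        g2.drawPolygon(p);   // outlining each filled polygon can also hide hairline seams
    }
}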
{"url":"http://www.coderanch.com/t/346384/GUI/java/Polygonal-Meshes","timestamp":"2014-04-16T16:53:57Z","content_type":null,"content_length":"24033","record_id":"<urn:uuid:4cc73579-ff73-411c-9139-f040a7588fdf>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: A CRITERION FOR HNN EXTENSIONS OF FINITE p-GROUPS TO BE RESIDUALLY p

Abstract. We give a criterion for an HNN extension of a finite p-group to be residually p.

1. Statement of the Main Results

By an HNN pair we mean a pair (G, φ) where G is a group and φ: A → B is an isomorphism between subgroups A and B of G. Given such an HNN pair (G, φ) we consider the corresponding HNN extension G∗ = ⟨G, t | t^{-1}at = φ(a), a ∈ A⟩ of G, which we denote, by slight abuse of notation, as G∗ = ⟨G, t | t^{-1}At = φ(A)⟩. Throughout this paper we fix a prime number p, and by a p-group we mean a finite group of p-power order. We are interested in the question under which conditions an HNN extension of a p-group is residually a p-group. (HNN extensions of finite groups are always residually finite [BT78, Co77].) Recall that given a property P of groups, a group G is said to be residually P if for any non-trivial g ∈ G there exists a homomorphism from G onto a group with property P that maps g to a non-trivial element.
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/092/3188992.html","timestamp":"2014-04-20T16:07:50Z","content_type":null,"content_length":"8078","record_id":"<urn:uuid:a8560707-b11e-42ec-a1a5-339215b1d526>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
Is a quasi-iso in Lie algebra cohomology necessarily an iso?

Let $\mathfrak g$ be a Lie algebra (if it matters, right now I only care about finite-dimensional Lie algebras in characteristic $0$, although I'm never opposed to hearing about more general cases). Recall that it determines a differential graded algebra ("the complex that computes Lie algebra cohomology"), with $k$th component $\wedge^k \mathfrak g^*$ and differential determined by the bracket. It is a complex by the Jacobi identity. Thinking in terms of supergeometry, I will call this dga $\mathcal C^\infty([-1]\mathfrak g)$. Moreover, if $\mathfrak h \to \mathfrak g$ is a homomorphism of Lie algebras, then we get a homomorphism of dgas $\mathcal C^\infty([-1]\mathfrak g)\to\mathcal C^\infty([-1]\mathfrak h)$, so that $\mathcal C^\infty\circ [-1]$ is a contravariant functor. The dga is a complete invariant: the functor is full and faithful.

Suppose that $\mathfrak h,\mathfrak g$ are two Lie algebras and $f: \mathfrak h \to \mathfrak g$ a Lie algebra homomorphism. Suppose furthermore that the corresponding map $f^*: \mathcal C^\infty ([-1]\mathfrak g)\to\mathcal C^\infty([-1]\mathfrak h)$ is a quasi-isomorphism, i.e. it induces an isomorphism on cohomology. Does it follow that $f$ is an isomorphism?

When I ask it this way, it sounds strongly like the answer should be "no": almost never is cohomology a complete invariant. For example, the cohomology in degree $1$ sees only the abelianizations of $\mathfrak g,\mathfrak h$. But on the other hand, research I'm doing on Lie algebroids suggests that the similar statement with "algebra" replaced by "algebroid" throughout should be true. I don't see a direct proof even in the "algebra" case, but I feel like there should be either a trivial counterexample or an easy argument in favor. In either case, though, and maybe because it's late at night, I'm stuck. Which is all to say that secretly I care about algebroids, so if any of y'all know a good reference for the problem at that generality, please send it my way. But I will happily accept an answer just for algebras if one is provided.

lie-algebras lie-algebra-cohomology cohomology rt.representation-theory

I like to think of $C^\infty([-1]g)$ as being the Lie-algebra version of the (nerve of the) 1-object Lie groupoid associated to the 1-connected Lie group G integrating g. This is of course related to the classifying space BG. Probably your f corresponds to a homotopy equivalence of H and G, but I can't make any of my intuitions (such as they are) rigorous. – David Roberts Sep 20 '10 at 9:44

If you replace the Lie algebra cohomology with the Lie algebra bar homology then a theorem like this may hold. Any thoughts? – James Griffin Sep 21 '10 at 10:09

4 Answers

Here are some comments to your answer that I hope will be helpful (it's still sufficiently confusing that I might make some mistakes). Given an $L_{\infty}$ algebra, we can define the "Koszul dual" Chevalley $\it{chains}$ $C_*(L)$ (for several reasons it's more natural to think of the Chevalley chains rather than cochains). The ordinary notion of a quasi-isomorphism of $L_{\infty}$ is a morphism of Chevalley complexes whose first Taylor coefficient induces a quasi-isomorphism of complexes. We could ask what happens if we declare two Lie algebras weakly equivalent if there is a quism of Chevalley complexes? We have seen that this notion can differ from the standard one. Another example is when $g_1=sl_2(\mathbb{C})$ and $g_2=\mathbb{C}[2]$, an abelian Lie algebra concentrated in degree 2. From this point of view, your notion is a very natural relaxing of the usual notion of quasi-isomorphism --- it's the notion of weak equivalence transported from Koszul duality. It's worth noting that the two notions will coincide if your dg-Lie algebra is concentrated in positive degree (or more generally with some nilpotency hypotheses on the action of the degree zero piece $g^0$ on your dg lie algebra.) This explains Trial's answer above.

It's useful to have the rational homotopy picture in mind. Given a 1-connected CW complex X which is finite in each degree, you can construct a Lie model L for X whose $H_{*}(L)=\pi_{*}(\Omega(M))$ with Whitehead bracket. One gets chains on your space by taking $C_{*}(L)$. To go back, you have something known as the cobar construction which spits out $C_{*}(\Omega(M))$, which is U(L), the universal enveloping algebra of L. Again, things start to break down in the non-simply connected case if you don't assume some hypothesis about the action of $\pi_1$ on the higher homotopy groups. From this point of view, the comment at the end of the last paragraph is a reflection of the fact that in these instances, when you have an isomorphism on homology groups, you have an isomorphism on homotopy groups.

Finally, when thinking about the idea that the more relaxed notion gives you a derived equivalence, you must define the derived category properly. Let's think about the abelian case. The most classical case of Koszul duality says that given a finite dimensional graded vector space, $D^{+}(SV) \cong D^{+}(\wedge(V^*))$. Here we are considering the derived category of modules which are bounded below. It's pretty easy to see that this cannot be extended to an equivalence of unbounded derived categories. The right notion which makes the whole thing work for unbounded derived categories can be found in Positselski's works and is called the coderived category. Let g be a Lie algebra over $\mathbb{C}$. Then there is an equivalence between the derived category of modules over U(g) and the coderived category of co-modules over its Chevalley complex $C_{*}(g)$, in which $M \to C_{*}(g,M)$. Under the assumption g is finite, we can consider these co-modules to be modules over C*(g) (now looking at cochains again) and look at the corresponding localization of the category of dg-modules over C*(g). What we see is that we are localizing at a smaller set of weak equivalences than quasi-isomorphisms.

This is awesome, thanks! But I'm excited and confused in one place. Namely, given a Lie algebra $L$ (satisfying...), I construct some space $X$, which is roughly "$\mathrm BL$", or "derived $\mathrm BL$", with $\operatorname{Chains}(X) = \operatorname{CE Chains}(L)$. Fine. Then I think you said I should take loops $\Omega X$, and lo and behold, $\operatorname{Chains}(\Omega X) = \mathrm U(L)$. This shouldn't be surprising, because $\mathrm U(L)$ is a cocommutative coalgebra, as is (should be) $\operatorname{Chains}(Y)$, and also because I expect $\Omega\mathrm BG$ to be (continued) – Theo Johnson-Freyd Mar 21 '11 at 14:46

(continuation) roughly the same as $G$, and $\mathrm U(L)$ is a good model for the "formal group" integrating $L$. Except, I actually expect $\Omega\mathrm BG = G/G^{\rm ad}$, the adjoint groupoid. Or, no, $\operatorname{Maps}(S^1 \to \mathrm BG) = G/G^{\rm ad}$. In this setting, I'm confused whether there's a difference between $\Omega(-)$ and $\operatorname{Maps}(S^1,-)$. This isn't really about your question, more just me expressing a confusion I have. – Theo Johnson-Freyd Mar 21 '11 at 14:55

Ah, nevermind, I've unconfused myself. A homotopy of maps from $S^1$ is allowed to rotate the basepoint around the circle, and this is precisely what generates the adjoint action. A homotopy of based loops is not, and so I just expect $\Omega\mathrm B G = G$. Nevermind all my mumbling. – Theo Johnson-Freyd Mar 21 '11 at 14:57

It seems to me that if $\frak{g}$ is a 1-dimensional Lie algebra and $\frak{h}$ is a nonabelian 2-dimensional Lie algebra then a nontrivial map $\frak{h}\to\frak{g}$ will induce an isomorphism on (trivial coefficients) cohomology.

Yes, thanks. Now that it's afternoon, I also thought of that example. Thanks! – Theo Johnson-Freyd Sep 20 '10 at 20:35

If $\mathfrak{g}$ and $\mathfrak{h}$ are say nilpotent and finite dimensional then the map $\mathfrak{g}\to\mathfrak{h}$ is an isomorphism. It follows from uniqueness of the minimal Sullivan model, in this case $\bigwedge\mathfrak{g}^*$ (there should be a simpler reason, but I don't see it). As noted above by Tom Goodwillie, it is not true for solvable Lie algebras.

I think what's going on more deeply is the following. I will describe it here for two reasons: (1) so that experts might correct errors, and (2) so that someone who wanders here via Google might get more data.

There is something called "Koszul duality" — or rather, there are many such things — and I don't understand any of them. But some version of it connects dg Lie algebras, or rather $L_\infty$ algebras, to dg commutative algebras, or rather $E_\infty$ algebras. In one direction, the map from $L_\infty$ algebras to $E_\infty$ algebras is essentially what I have described as "Lie algebra cohomology": in particular, every Lie algebra is an $L_\infty$ algebra supported only in degree $0$, and every dg commutative algebra is an $E_\infty$ algebra in which all higher associators vanish, and the map I have described is the part of Koszul duality restricted to these particular cases. Then the word "duality" means that somehow there should be an inverse to this map, and the word $_\infty$ means "only care up to quasi-isomorphism". Of course, as we've seen, there is not an inverse if the source is "dg Lie algebras up to quasi-iso" and the target is "dg commutative algebras up to quasi-iso": the one- and non-abelian-two-dimensional Lie algebras are not quasi-iso as dg Lie algebras, but their chain complexes are quasi-iso as dg commutative algebras. (Perhaps the problem is that I don't understand the words "$E_\infty$" well enough — are the chain complexes for the one- and two-dimensional Lie algebras the same $A_\infty$ algebra?)

So where's the duality? At least one answer is (as far as I can tell — I am very not an expert) that if two Lie algebras have quasi-isomorphic cohomology, then their derived representation theories are the same. Here the word "derived" means "consider chain complexes in the category, and force quasi-isomorphisms to be isomorphisms." In particular, you force exact sequences to be zero, and so in particular you force extensions to split.

Well, for the example above, you can simply dive in and work out enough of the representation theory. The representation theory of the one-dimensional Lie algebra goes by the name "undergraduate linear algebra": it is the theory of a vector space along with a matrix. The indecomposable representations (provided that I work in an algebraically closed setting, and I am already in characteristic $0$, so let's call the ground field $\mathbb C$) are Jordan blocks: matrices that look like $$ \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix} $$ There are nontrivial extensions whenever $\lambda = \lambda'$. But there is a unique irreducible representation for each $\lambda \in \mathbb C$, and it is one-dimensional, and this is all of them.

The full representation theory of the two-dimensional Lie algebra is much more complicated, but again the irreducibles are all one-dimensional and parameterized by $\lambda \in \mathbb C$. Indeed, the abelianization map from the two-dimensional algebra to the one-dimensional algebra induces the obvious bijection on irreducible representations. This is essentially a proof (where "essentially" means "modulo everything I don't understand") that the derived categories of representations are the same. Or maybe it's not a proof: better is that it is a symptom of the equivalence of the derived representation theory.

If I am correct in the story I have said, then it should be that quasi-isos between semisimple Lie algebras are necessarily isos: there just aren't very many interesting chain complexes in a semisimple category. But there is enough I don't understand that I would not say this with any certainty. If you see this post and know more than I do, please explain further in the comments. If you see a way to extract from this "answer" a question that we can post and attract more attention, that would also be awesome!

(1) Almost, but $C^*(\mathfrak{g};k)=RHom_g(k,k)$ only knows about the part of the derived category generated by the trivial module. So it's not an equivalence between the whole derived categories, but only that part. (1b) I don't think your analysis of Tom's example is correct. (2) Also if the category of representations of a semi-simple Lie algebra really were semi-simple, there would be no cohomology at all. The cohomology records the existence of interesting chain complexes, but these chain complexes are made of infinite-dimensional modules, hence the failure of semi-simplicity. – Ben Wieland Mar 21 '11 at 23:31

(3) A quasi-isomorphism of $A_\infty$-algebras yields an equivalence of derived categories. The category of modules over a Lie algebra has the further structure of being symmetric monoidal (and this preserves the subcategory generated by the unit object, also known as the trivial representation). This is recorded by the $E_\infty$-structure on the algebra. So an equivalence of $E_\infty$-algebras is equivalent to a symmetric monoidal equivalence. – Ben Wieland Mar 21 '11 at 23:35
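For concreteness, here is the small computation behind the one-dimensional versus two-dimensional example above (a sketch; sign conventions for the Chevalley-Eilenberg differential vary). Take $\mathfrak h = \langle x, y \mid [x,y] = x\rangle$, $\mathfrak g$ one-dimensional abelian with basis $t$, and $f:\mathfrak h\to\mathfrak g$ with $f(x)=0$, $f(y)=t$ (any nontrivial map must kill $x$, since $x=[x,y]$). With trivial coefficients and $d\xi(a,b) = -\xi([a,b])$,
$$
d x^* = -\,x^*\wedge y^*, \qquad d y^* = 0,
$$
so $H^0(\mathfrak h)=k$, $H^1(\mathfrak h)=k\langle y^*\rangle$, $H^2(\mathfrak h)=0$, while $H^*(\mathfrak g)$ is $k$ in degrees $0$ and $1$ (spanned by $1$ and $t^*$) and zero above. The pullback $f^*$ sends $t^*\mapsto y^*$, hence induces an isomorphism on cohomology even though $f$ is not injective.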
{"url":"http://mathoverflow.net/questions/39366/is-a-quasi-iso-in-lie-algebra-cohomology-necessarily-an-iso/40061","timestamp":"2014-04-17T21:42:32Z","content_type":null,"content_length":"81148","record_id":"<urn:uuid:e4555836-8864-434d-ab12-017b2f12c6c0>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
[R-sig-ME] CHOLMOD error in lmer-specifying nested random effects Eli Swanson eliswanson at gmail.com Fri Aug 7 18:17:06 CEST 2009 Hi all, I'm working on an analyzing some data right now that's causing me some difficulties. I have 2 questions, and would really appreciate any advice that people can offer. I don't post data because there are over 2000 cases. sessionInfo() is posted at the end of my post. The data was collected during focal animal surveys (FAS). The response variable is the number of minutes a cub was observed nursing (Nurse.min). The fixed predictors are 1) the number of minutes the mother was present during the FAS (Mom.min), 2) the mother's social rank (Mom.rank), and 3)sex of cub(Sex). The random effects are 1) Identity of mom, and 2) identity of cub, as there are a large number of observations for every mother, and most of the cubs. Cub is nested within mom, and when I include sex in the model, my understanding is that I have to nest it within cub. Question the first: I try to create the following model, leaving out sex for the moment, because my understanding is that with only two cases, sex complicates Is this model correctly specified? I receive a CHOLMOD warning, as follows: 11918.870: 0.289063 0.172748 -0.0419576 0.0575048 0.00979307 CHOLMOD warning: 7u‡e_ Error in mer_finalize(ans) : Cholmod error `not positive definite' at file:../Cholesky/t_cholmod_rowfac.c, line 432 When I try to include Sex as a fixed effect, I'm even less sure I'm specifying the model correctly but my model looks like this: And my output: 0: 11926.160: 0.289063 0.172748 0.172336 0.0572882 0.00311534 -0.244655 -0.683584 CHOLMOD warning: 7u‡e_ Error in mer_finalize(ans) : Cholmod error `not positive definite' at file:../Cholesky/t_cholmod_rowfac.c, line 432 The model works if I specify it as follows, but I can't get it to work with sex, and I think I'm specifying the random effects incorrectly Generalized linear mixed model fit by the Laplace approximation Formula: Nurse.min ~ Mom.min + Mom.rank + (1 | Mom:Cub) Data: data AIC BIC logLik deviance 11381 11404 -5687 11373 Random effects: Groups Name Variance Std.Dev. Mom:Cub (Intercept) 2.1313 1.4599 Number of obs: 2234, groups: Mom:Cub, 70 Fixed effects: Estimate Std. Error z value Pr(>|z|) (Intercept) -1.4669098 0.1962970 -7.47 7.84e-14 *** Mom.min 0.0688668 0.0007541 91.32 < 2e-16 *** Mom.rank 0.0676121 0.0069510 9.73 < 2e-16 *** Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Correlation of Fixed Effects: (Intr) Mom.mn Mom.min -0.134 Mom.rank -0.332 0.115 So, am I specifying these models correctly? What could be the problem? I only have 1 observation for some of the cubs, and I thought this might be the problem, but when I remove these cubs, the error remains. I'm not sure if it makes a difference, but I do have slightly overdispersed data (its 0-inflated i think, but not hugely so), hence my second question: When I use the quasipoisson family to estimate the model like so (specifying the model the only way i could get it to work, even though this may be incorrect) Then, by my understanding, estimate the degree of overdispersion: How badly overdispersed is this? I tried using MCMCglmm with a "zipoisson", but its giving me strange results (in fact, estimates for the fixed effects that appear to have an opposite sign of those I find with lmer). As I'm not an expert in Bayesian methods, I assume I'm specifying something wrong there and would prefer to stick with lmer for now if possible. Can I just continue to use quasipoisson? 
I know that the standard errors are inflated, but if im not worried about that does this mean that parameters are correctly estimated? R version 2.9.0 (2009-04-17) LC_COLLATE=English_United States.1252;LC_CTYPE=English_United States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] lme4_0.999375-28 Matrix_0.999375-24 lattice_0.17-22 loaded via a namespace (and not attached): [1] grid_2.9.0 i would appreciate any help that people can offer, Thank you very much, Eli Swanson Department of Zoology Ecology, Evolutionary Biology, and Behavior Program Michigan State University More information about the R-sig-mixed-models mailing list
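A standard lme4 recipe for this situation, sketched with the variable names from the message above, is to nest cub within mom and absorb extra-Poisson variation with an observation-level random intercept. This is an illustration of the usual approach rather than a reproduction of the poster's exact calls (which are not shown above), so treat the formulas as assumptions:

library(lme4)

## one level per focal survey, used as an observation-level random effect
data$obs <- factor(seq_len(nrow(data)))

## cub nested within mom: (1 | Mom/Cub) expands to (1 | Mom) + (1 | Mom:Cub)
m1 <- glmer(Nurse.min ~ Mom.min + Mom.rank + Sex + (1 | Mom/Cub),
            data = data, family = poisson)

## observation-level intercept as an alternative to quasipoisson for overdispersion
m2 <- glmer(Nurse.min ~ Mom.min + Mom.rank + Sex + (1 | Mom/Cub) + (1 | obs),
            data = data, family = poisson)

## rough overdispersion check for m1: Pearson chi-square over residual df
rdf <- nrow(data) - length(fixef(m1)) - length(VarCorr(m1))
sum(residuals(m1, type = "pearson")^2) / rdf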
{"url":"https://stat.ethz.ch/pipermail/r-sig-mixed-models/2009q3/002712.html","timestamp":"2014-04-17T18:24:59Z","content_type":null,"content_length":"7863","record_id":"<urn:uuid:2fd3d42c-d06a-49a2-b0bd-82c3e74f2a80>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
When does an insight count as evidence? - Less Wrong Bayesianism, as it is presently formulated, concerns the evaluation of the probability of beliefs in light of some background information. In particular, given a particular state of knowledge, probability theory says that there is exactly one probability that should be assigned to any given input statement. A simple corrolary is that if two agents with identical states of knowledge arrive at different probabilities for a particular belief then at least one of them is irrational. A thought experiment. Suppose I ask you for the probability that P=NP (a famous unsolved computer science problem). Sounds like a difficult problem, I know, but thankfully all relevant information has been provided for you --- namely the axioms of set theory! Now we know that either P=NP is proveable from the axioms of set theory, or its converse is (or neither is proveable, but let's ignore that case for now). The problem is that you are unlikely to solve the P=NP problem any time soon. So being the pragmatic rationalist that you are, you poll the world's leading mathematicians, and do some research of your own into the P=NP problem and the history of difficult mathematical problems in general to gain insight into perhaps which group of mathematicians may be more reliable, and to what extent thay may be over- or under-confident in their beliefs. After weighing all the evidence honestly and without bias you submit your carefully-considered probability estimate, feeling like a pretty good rationalist. So you didn't solve the P=NP problem, but how could you be expected to when it has eluded humanity's finest mathematicians for decades? The axioms of set theory may in principle be sufficient to solve the problem but the structure of the proof is unknown to you, and herein lies information that would be useful indeed but is unavailable at present. You cannot be considered irrational for failing to reason from unavailable information, you say; rationality only commits you to using the information that is actually available to you, and you have done so. Very well. The next day you are discussing probability theory with a friend, and you describe the one-in-a-million-illness problem, which asks for the probability that a patient has a particular illness, which is known to exist within only one in a million individuals, given that a particular diagnostic test with known 1% false positive rate has returned positive. Sure enough, your friend intuits that there is a high chance that the patient has the illness and you proceed to explain why this is not actually the rational answer. "Very well", your friend says, "I accept your explanation but I when I gave my previous assessment I was unaware of this line of reasoning. I understand the correct solution now and will update my probability assignment in light of this new evidence, but my previous answer was made in the absence of this information and was rational given my state of knowledge at that point." "Wrong", you say, "no new information has been injected here, I have simply pointed out how to reason rationally. Two rational agents cannot take the same information and arrive at different probability assignments, and thinking clearly does not constitute new information. Your previous estimate was irrational, full stop." By now you've probably guessed where I'm going with this. 
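(For concreteness, the posterior the friend should have computed, assuming as the story implicitly does that the test is also perfectly sensitive, is
$$
P(\text{ill}\mid +) = \frac{P(+\mid\text{ill})\,P(\text{ill})}{P(+\mid\text{ill})\,P(\text{ill}) + P(+\mid\text{healthy})\,P(\text{healthy})}
= \frac{1\cdot 10^{-6}}{1\cdot 10^{-6} + 0.01\cdot(1-10^{-6})} \approx 10^{-4},
$$
i.e. about one chance in ten thousand, not a near-certainty.)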
It seems reasonable to assign some probability to the P=NP problem in the absence of a solution to the mathematical problem, and in the future, if the problem is solved, it seems reasonable that a different probability would be assigned. The only way both assessments can be permitted as rational within Bayesianism is if the proof or disproof of P=NP can be considered evidence, and hence we understand that the two probability assignments are each rational in light of differing states of knowledge. But at what point does an insight become evidence? The one-in-a-million-illness problem also requires some insight in order to reach the rational conclusion, but I for one would not say that someone who produced the intuitive but incorrect answer to this problem was "acting rationally given their state of knowledge". No sir, I would say they failed to reach the rational conclusion, for if lack of insight is akin to lack of evidence then any probability could be "rationally" assigned to any statement by someone who could reasonably claim to be stupid enough. The more stupid the person, the more difficult it would be to claim that they were, in fact, irrational. We can interpolate between the two extremes I have presented as examples, of course. I could give you a problem that requires you to marginalize over some continuous variable, and with an appropriate choice for x I could make the integration very tricky, requiring serious math skills to come to the precise solution. But at what difficulty does it become rational to approximate, or do a So, the question is: when, if ever, does an insight count as evidence? Comments (37) Sort By: Best Insight has to count as evidence when you are dealing with finite computational agents. Now we know that either P=NP is proveable from the axioms of set theory, or its converse is This isn't to be said casually; it would be a huge result if you could prove it, and very different from the case of neither being provable. Most things that are true about the natural numbers are not provable in any given set of axioms. I mention this because I read that and woke up and said "What? Really?" and then read the following parenthetical and was disappointed. I suggest editing the text. If we don't know anything in particular about the relation of P=NP to set theory, it shouldn't be said. Also, alexflint, you mean "negation", not "converse". I've heard either that P = NP is known to be falsifiable or that its negation is. I don't remember which I heard. I've heard either that P = NP is known to be falsifiable or that its negation is. I don't remember which I heard. I'm not quite sure what you mean by this. Falsifiable isn't really a word that makes sense in mathematics. P = NP is clearly falsifiable (give a proof that P!=NP) as is it's negation (give a polynomial time algorithm for an NP complete problem), Scott Aarsonson has a paper summarising the difficulties in proving whether or not the P vs. NP question is formally independent of the Zermelo Fraenkel axioms: Is P vs NP Formally Independent (PDF The paper is (obivously) pretty technical , but the take-home message is contained n the last sentence: So I’ll state, as one of the few definite conclusions of this survey, that P = NP is either true or false. It’s one or the other. But we may not be able to prove which way it goes, and we may not be able to prove that we can’t prove it. I'm not quite sure what you mean by this. Falsifiable isn't really a word that makes sense in mathematics. 
P = NP is clearly falsifiable (give a proof that P!=NP) as is it's negation (give a polynomial time algorithm for an NP complete problem) Sure it makes sense. Something is falsifiable if if it is false, it can be proven false. It's not obvious, given that P != NP, that there is a proof of this; nor is it obvious, given that P = NP, that for one of the polynomial-time algorithms for an NP-complete problem, there is a proof that it actually is such a thing. Though there's certainly an objective truth or falsehood to P = NP, it's possible that there is no proof of the correct answer. Something is falsifiable if if it is false, it can be proven false. Isn't this true of anything and everything in mathematics, at least in principle? If there is "certainly an objective truth or falsehood to P = NP," doesn't that make it falsifiable by your It's not always that simple (consider the negation of G). (If this is your first introduction to Gödel's Theorem and it seems bizarre to you, rest assured that the best mathematicians of the time had a Whiskey Tango Foxtrot reaction on the order of this video. But turns out that's just the way it is!) I know they get overused, but Godel's incompleteness theorems provide important limits to what can and cannot be proven true and false. I don't think they apply to P vs NP, but I just note that not everything is falsifiable, even in principle. What purpose are you after with this query? It sounds dangerously much like a semantic discussion, though I may be failing to see something obvious. "Wrong", you say, "no new information has been injected here, I have simply pointed out how to reason rationally. I'm not sure if this line makes sense. If somebody points out the correct way to interpret some piece of evidence, then that correct way of interpreting it is information. Procedural knowledge is knowledge, just as much as declarative. Putting it in another way: if you were writing a computer program to do something, you might hard-code into it some way of doing things, or you might build some sort of search algorithm that would let it find the appropriate way of doing things. Here, hard-coding corresponds to a friend telling you how something should be interpreted, and the program discovering itself corresponds to a person discovering it herself. If you hard-code it, you are still adding extra lines of code into the program - that is, adding information. What purpose are you after with this query? It sounds dangerously much like a semantic discussion, though I may be failing to see something obvious. Fair question, I should've gotten this clear in my mind before I wrote. My observation is that there are people who reason effectively given their limited computation power and others who do not (hence the existence of this blog), and my question is by what criteria we can distinguish them given that the Bayesian definition of rationality seems to falter here. If somebody points out the correct way to interpret some piece of evidence, then that correct way of interpreting it is information. Procedural knowledge is knowledge, just as much as I would agree except that this seems to imply that probabilities generated by a random number generator should be considered rational since it "lacks" the procedural knowledge to do otherwise. This is not just semantics because we perceive a real performance difference between a random number generator and a program that multiplies out likelihoods and priors, and we would like to understand the nature of that difference. 
To me also, the post sounds more like it's equivocating on the definition of "rationality" than asking a question of the form either "What should I do?" or "What should I expect?" But at what difficulty does it become rational to approximate, or do a meta-analysis? It depends on how much resources you choose to, or can afford to, devote to the question. Say I have only a few seconds to ponder the product 538 time 347, and give a probability assignment for its being larger than 150,000; for its being larger than 190,000; and for its being larger than My abilities are such that in a limited time, I can reach definite conclusions about the two extreme values but not about the third; I'd have to be content with "ooh, about fifty-fifty". Given more time (or a calculator!) I can reach certainty. If we're talking about bounded rationality rather than pure reason, the definition "identical states of knowledge" needs to be extended to "identical states of knowledge and comparable expenditure of Alternatively you need to revise your definition of "irrational" to admit degrees. Someone who can compute the exact number faster than I can is perhaps more rational than I am, but my 1:1 probability for the middle number does not make me "irrational" in an absolute sense, compared to someone with a calculator. I wouldn't call our friend "irrational", though it may be appropriate to call them lazy. In fact, the discomfort some people feel at hearing the words "irrational" or "rational" bandied about can perhaps be linked to the failure of some rationalists to attend to the distinction between bounded rationality and pure reason... the more stupid the person, the more difficult it would be to claim that they were, in fact, irrational. So ? You already have a label "stupid" which is descriptive of an upper bound on the resources applied by the agent in question to the investigation at hand. What additional purpose would the label "irrational" serve ? Insight doesn't exactly "count as evidence". Rather, when you acquire insight, you improve the evidence-strength of the best algorithm available for assigning hypothesis probabilities. Initially, your best hypothesis-weighting algorithms are "ask an expert" and "use intuition". If I give you the insight to prove some conclusion mathematically, then the increased evidence comes from the fact that you can now use the "find a proof" algorithm. And that algorithm is more entangled with the problem structure than anything else you had before. I think this post makes an excellent point, and brings to light the aspect of Bayesianism that always made me uncomfortable. Everyone knows we are not really rational agents; we do not compute terribly fast or accurately (as Morendil states), we are often unaware of our underlying motivations and assumptions, and even those we know about are often fuzzy, contradictory and idealistic. As such, I think we have different ways of reasoning about things, making decisions, assigning preferences, holding and overcoming inconsistencies, etc.. While it is certainly useful to have a science of quantitative rationality, I doubt we think that way at all... and if we tried, we would quickly run into the qualitative, irrational ramparts of our minds. Perhaps a Fuzzy Bayesianism would be handy: something that can handle uncertainty, ambivalence and apathy in any of its objects. Something where we don't need to put in numbers where numbers would be a lie. 
Doing research in biology, I can assure you that the more decimal places of accuracy I see, the more I doubt its reliability. If you are envisioning some sort of approximation of Bayesian reasoning, perhaps one dealing with an ordinal set of probabilities, a framework that is useful in everyday circumstances, I would love to see that suggested, tested and evolving. It would have to encompass a heuristic for determining the importance of observations, as well as their reliability and general procedures for updating beliefs based on those observations (paired with their reliability). Was such a thing discussed on LW? Let me be the first to say I like your username, though I wonder if you'll regret it occasionally... Thank you, and thank you for the link; didn't occur to me to check for such a topic. when, if ever, does an insight count as evidence? I suspect you use the term "insight" to describe something that I would classify as a hypothesis rather than observation (evidence is a particular kind of observation, yes?). Consider Pythagoras' theorem and an agent without any knowledge of it. If you provide the agent with the length of the legs of a right-angled triangle and ask for the length of the hypotenuse, it will use some other algorithm/heuristic to reach an answer (probably draw and measure a similar triangle). Now you suggest the theorem to the agent. This suggestion is in itself evidence for the theorem, if for no other reason then because P(hypothesis H | H is mentioned) > P(H | H is not mentioned). Once H steals some of the probability from competing hypotheses, the agent looks for more evidence and updates it's map. Was his first answer "rational"? I believe it was rational enough. I also think it is a type error to compare hypotheses and evidence. If you define "rational" as applying the best heuristic you have, you still need a heuristic for choosing a heuristic to use (i.e. read wikipedia, ask expert, become expert, and so on). If you define it as achieving maximum utility, well, then it's pretty subjective (but can still be measured). I'd go for the latter. P.S. Can Occam's razor (or any formal presentation of it) be classified as a hypothesis? Evidence for such could be any observation of a simpler hypotheses turning out to be a better one, similar for evidence against. If that is true, then you needn't dual-wield the sword of Bayes and Occam's razor; all you need is one big Bayesian blade. P.S. Can Occam's razor (or any formal presentation of it) be classified as a hypothesis? Sadly, no; this is the "problem of induction" and to put it briefly, if you try to do what you suggest you end up having to assume what you're trying to prove. If you start with a "flat" prior in which you consider every possible Universe-history to be equally likely, you can't collect evidence for Occam's razor. The razor has to be built in to your priors. Thus, Solomonoff's lightsaber. then you needn't dual-wield the sword of Bayes and Occam's razor; all you need is one big Bayesian blade Sweet, but according to the wiki the lightsaber doesn't include full Bayesian reasoning, only the special case where the likelihood ratio of evidence is zero. One could argue that you can reach the lightsaber using the Bayesian blade, but not vice versa. The lightsaber does include full Bayesian reasoning. You make the common error of viewing the answer as binary. A proper rationalist would be assigning probabilities, not making a binary decision. 
In the one-in-a-million example, your friend thinks he has the right answer, and it is because he thinks he is very probably right that he is irrational. In the P=NP example, I do not have any such certainty. I would imagine that a review of P=NP in the manner you describe probably wouldn't push me too far from 50/50 in either direction. If it pushed me to 95/5, I'd have to discredit my own analysis, since people who are much better at math than I am have put a lot more thought into it than I have, and they still disagree. Now, imagine someone comes up with an insight so good that all mathematicians agree P=NP. That would obviously change my certainty, even if I couldn't understand the insight. I would go from a rationally calibrated ~50/50 to a rationally calibrated ~99.99/.01 or something close. Thus, that insight certainly would be evidence. That said, you do raise an interesting issue about meta-uncertainty that I'm still mulling over. ETA: P=NP was a very hypothetical example about which I know pretty much nothing. I also forgot the fun property of mathematics that you can have the right answer with near certainty, but no one cares if you can't prove it formally. My actual point was about thinking answers are inherently binary. The mistake of irrational actor who has ineffective tools seems to be his confidence in his wrong answer, not the wrong answer itself. Now, imagine someone comes up with an insight so good that all mathematicians agree P=NP. All mathematicians already agree that P != NP. I'm not sure quite how much more of a consensus you could ask for on an unsolved maths problem. (see, e.g., Lance Fortnow or Scott Aaronson) In a 2002 poll of 100 researchers, 61 believed the answer is no, 9 believed the answer is yes, 22 were unsure, and 8 believed the question may be independent of the currently accepted axioms, and so impossible to prove or disprove. 8 believed the question may be independent of the currently accepted axioms, and so impossible to prove or disprove. Wouldn't that imply P != NP since otherwise there would be a counterexample? There is a known concrete algorithm for every NP-complete problem that solves that problem in polynomial time if P=NP: Generate all algorithms and run algorithm n in 1/2^n fraction of the time, check the result of algorithm n if it stops and output the result if correct. Nice! More explicitly: if the polynomial-time algorithm is at (constant) index K in our enumeration of all algorithms, we'd need about R*2^K steps of the meta-algorithm to run R steps of the algorithm K. Thus, if the algorithm K is bound by polynomial P(n) in problem size n, it'd take P(n)*2^K steps of the meta-algorithm (polynomial in n, K is a constant) to solve the problem of size n. Wouldn't that imply P != NP since otherwise there would be a counterexample? No. It could be that there is an algorithm that solves some NP-complete problem in polynomial time, yet there is no proof that it does so. We could even find ourselves in the position of having discovered an algorithm that runs remarkably fast on all instances it's applied to, practically enough to trash public-key cryptography, yet although it is in P we cannot prove it is, or even that it You mean, a substantial majority of sane and brilliant mathematicians. Never abuse a universal quantifier when math is on the line! 
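The dovetailing idea described above — run algorithm n on roughly a 1/2^n share of the steps and check every output with a fast verifier — can be sketched concretely. The sketch below only illustrates the scheduling: the "programs" are stand-in Python generators rather than a genuine enumeration of all algorithms, and the subset-sum instance, verifier, and constants are invented for the example.

from itertools import combinations

NUMS = [3, 34, 4, 12, 5, 2]          # toy subset-sum instance (assumed)
TARGET = 9

def verifier(candidate):
    """Polynomial-time check of a proposed certificate."""
    return sum(candidate) == TARGET

def make_program(n):
    """Stand-in for "algorithm n": enumerates subsets of size (n mod 6) + 1."""
    size = (n % len(NUMS)) + 1
    return (list(c) for c in combinations(NUMS, size))

def universal_search(max_programs=8, rounds=1000):
    programs = [make_program(n) for n in range(max_programs)]
    for r in range(rounds):
        for n, prog in enumerate(programs):
            # program n only steps on rounds divisible by 2^n, i.e. it gets
            # roughly a 1/2^n share of the total work
            if r % (2 ** n) != 0:
                continue
            try:
                candidate = next(prog)
            except StopIteration:
                continue                 # this program has halted without success
            if verifier(candidate):
                return candidate
    return None

print(universal_search())                # prints a verified subset, e.g. [4, 5]

Because the verifier runs in polynomial time, the meta-algorithm pays only the constant-factor 2^K overhead discussed above relative to whichever enumerated program solves the problem fastest.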
You're (all) right, of course, there are several mathematicians who refuse to have an opinion on whether P = NP, and a handful who take the minority view (although of the 8 who did so in Gasarch's survey 'some' admitted they were doing it just to be contrary, that really doesn't leave many who actually believed P=NP). What this definitively does not mean is that it's rational to assign 50% probability to each side; my main point was that there is ample evidence to suggest that P != NP (see the Scott Aaronson post I linked to above) and a strong consensus in the community that P!=NP. To insist that one should assign 50% of one's probability to the possibility that P=NP is just plain wrong. If nothing else, Aaronson's "self-referential" argument should be enough to convince most people here that P is probably a strict subset of NP.

All mathematicians already agree that P != NP.

Not all of them.

Just because there are some that disagree doesn't mean we must assign 50% probability to each case.

I wasn't claiming that we must assign 50% probability to each case. (please note that this is my first post)

I found the phrasing in terms of evidence to be somewhat confusing in this case. I think there is some equivocating on "rationality" here and that is the root of the problem. For P=NP (if it or its negation is provable), a perfect Bayesian machine will (dis)prove it eventually. This is an absolute rationality; straight rational information processing without any heuristics or biases or anything. In this sense it is "irrational" to not be able to (dis)prove P=NP ever. But in the sense of "is this a worthwhile application of my bounded resources" rationality, for most people the answer is no. One can reasonably expect a human claiming to be "rational" to be able to correctly solve one-in-a-million-illness, but not to have (or even be able to) go through the process of solving P=NP. In terms of fulfilling one's utility function, solving P=NP given your processing power is most likely not the most fulfilling choice (except for some computer scientists). So we can say this person is taking the best trade-off between accuracy and work for P=NP because it requires a large amount of work, but not for one-in-a-million-illness because learning Bayes' rule is very little work.

I don't think the two examples you gave are technically that different. Someone giving an "intuitive" answer to the diagnostic question is basically ignoring half the data; likewise, someone looking for an answer to P=NP using a popularity survey is ignoring all other data (e.g. the actual math). The difference is whether you know what data you are basing your evaluation on and whether you know you have ignored some. When you can correctly state what your probability is conditional on, you are presenting evidence.
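Since the "one-in-a-million illness" diagnostic question keeps being referred to in this thread without its numbers, here is a small worked sketch of the Bayes' rule calculation behind it. The prevalence and test error rates below are assumed purely for illustration (the thread never states them); the point is only how strongly the base rate dominates the intuitive answer.

def posterior(prior, sensitivity, false_positive_rate):
    """P(ill | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

prior = 1e-6                 # one-in-a-million base rate
sensitivity = 0.99           # P(positive | ill), assumed for the example
false_positive_rate = 0.01   # P(positive | healthy), assumed for the example

print(posterior(prior, sensitivity, false_positive_rate))
# ~9.9e-05: even after a positive test the illness is still very unlikely,
# which is exactly what the "intuitive" answer misses by ignoring the base rate.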
{"url":"http://lesswrong.com/lw/1lk/when_does_an_insight_count_as_evidence/","timestamp":"2014-04-18T15:42:26Z","content_type":null,"content_length":"156778","record_id":"<urn:uuid:0e7b0141-cab5-4887-992f-5db85a29425a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Handshaking Lemma May 29th 2009, 09:34 AM #1 Handshaking Lemma Handshaking Lemma. At a convention, the number of delegates who shake hands an odd number of times is even. Proof. Let $D_{1}, \ldots , D_n$ be the delegates. Apply double counting to the set of ordered pairs $(D_i, D_j)$ for which $D_i$ and $D_j$ shake hands with each other at the convention. Let $x_i$ be the number of times that $D_i$ shakes hands, and $y$ the total number of handshakes that occur. The number of pairs is $\sum_{i=1}^{n} x_i$. However each handshake gives rise to two pairs $(D_i, D_j)$ and $(D_j, D_i)$. So $\sum_{i=1}^{n} x_i = 2y$. But how does this imply that the number of delegates that shakes hands an odd number of times is even? Because $n$ could be even and each $x_i$ could be even (e.g. $2+4+6+8 = 20$). Last edited by Sampras; May 29th 2009 at 09:52 AM. In the general case we have the following: Let $A = \{a_{1}, \ldots, a_{m} \}$ and $B = \{b_1, \ldots, b_n \}$ be sets. Let $S$ be a subset of $A \times B$. Suppose that, for $i=1, \ldots m$, the element $a_i$ is the first component of $x_i$ pairs in $S$, while, for $j= 1, \ldots, n$, the element $b_j$ is the second component of $y_j$ pairs in $S$. Then $|S| = \sum_{i=1}^{m} x_i = \sum_{j =1}^{n} y_j$. So suppose $A = \{1,3,4,5,6 \}$ and $B = \{7,9,2 \}$. So if $1$ is the first component of $x_1$ pairs of $S$, $3$ is the first component of $x_2$ pairs of $S$ etc.. and $7$ is the second component of $y_1$ pairs of $S$ etc...then $\sum_{i=1}^{5} x_i = \sum_{j=1}^{3} y_j = |S|$. But we don't know what $x_i$ and $y_j$ are. We just know that they represent the number of pairs with the first component being $a_i$ and the second component being $b_j$ respectively. Is this the whole point? Isn't this the same as taking an unweighted, undirected graph and saying that: The number of vertices with odd degree in a graph must be even? If so, there is a pretty simple proof, unless you are trying to prove it using some other method besides graphs. Ok, here's a proof of this theorem using the graphs: Let $G = (V,E)$ be an unweighted, undirected graph. Lets look at the sum of the degrees of its vertices (I'll call it M, though it is often referred to as K): $M = \sum_{v\in V} deg_G (v)$ Applying the double-counting, each edge $e \in E$ will be counted twice, once for each each vertex to which it is incident. As a result, the sum MUST be twice the number of edges of G. $M = 2E$, which is even. Now, all we do is take out the sum of all the degrees of vertices that are even to get the sum of all the odd degree vertices: $\sum_{v \in V} deg_G (v) - \sum_{v \in V: deg_G (v) = 2m} deg_G (v) = \sum_{v \in V: deg_G (v) = 2m + 1} deg_G (v)$ This result on the right hand side must still be even, since it is the difference of two even numbers. Because the sum is of exclusively odd terms, then there must be an even number of them for the sum on the right side to be even, thus the desired result is achieved. Ok, here's a proof of this theorem using the graphs: Let $G = (V,E)$ be an unweighted, undirected graph. Lets look at the sum of the degrees of its vertices (I'll call it M, though it is often referred to as K): $M = \sum_{v\in V} deg_G (v)$ Applying the double-counting, each edge $e \in E$ will be counted twice, once for each each vertex to which it is incident. As a result, the sum MUST be twice the number of edges of G. $M = 2E$, which is even. 
Now, all we do is take out the sum of all the degrees of vertices that are even to get the sum of all the odd degree vertices: $\sum_{v \in V} deg_G (v) - \sum_{v \in V: deg_G (v) = 2m} deg_G (v) = \sum_{v \in V: deg_G (v) = 2m + 1} deg_G (v)$ This result on the right hand side must still be even, since it is the difference of two even numbers. Because the sum is of exclusively odd terms, there must be an even number of them for the sum on the right side to be even, thus the desired result is achieved.

I see...so in my case we don't even consider things like $2+4+6+8$.

Well, what you're saying is true, but the handshaking lemma specifically talks about vertices with an ODD degree. With a sum like 2 + 4 + 6 + 8, you're dealing with all even degrees, which is going to be even anyway.
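For anyone who would like to see the lemma "in action" as well as on paper, here is a small sketch that builds random graphs and checks that the number of odd-degree vertices is always even; the random-graph model and its parameters are arbitrary choices for the illustration.

import random
from collections import defaultdict

def random_graph(n, p, seed=0):
    """Edges of an Erdos-Renyi-style undirected graph on vertices 0..n-1."""
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]

def odd_degree_count(n, edges):
    deg = defaultdict(int)
    for u, v in edges:          # each edge contributes to two degrees (the double count)
        deg[u] += 1
        deg[v] += 1
    return sum(1 for v in range(n) if deg[v] % 2 == 1)

for trial in range(5):
    edges = random_graph(20, 0.3, seed=trial)
    odd = odd_degree_count(20, edges)
    assert odd % 2 == 0          # the handshaking lemma
    print(trial, odd)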
{"url":"http://mathhelpforum.com/discrete-math/90965-handshaking-lemma.html","timestamp":"2014-04-16T16:43:16Z","content_type":null,"content_length":"59638","record_id":"<urn:uuid:e429268f-7bf4-44d2-8ff9-8d5659a5c6a1>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Wauwatosa, WI Prealgebra Tutor Find a Wauwatosa, WI Prealgebra Tutor ...Contact me today and get started soon.I've tutored college and high school algebra for the past 4 years. I know the concepts very well and can provide a comprehensive study plan to get you back on track. I've tutored college algebra and precalc for the past 4 years, and I am fully qualified to teach high school as well as advanced algebra in college. 46 Subjects: including prealgebra, chemistry, geometry, statistics ...Since I am a native speaker and a professional language user, the German I teach is always state of the art - whether it is conversational, formal, or creative. For me, language is a dynamic process, and communication is a showcase of skills, character, and personality. Since my college days, I have tutored people in different fields of German, history, geography, and reading. 8 Subjects: including prealgebra, reading, German, American history ...Send me an email!Review and test preparation for the Armed Services Vocational Aptitude Battery (ASVAB) includes Arithmetic Reasoning, Mathematic Knowledge, Word Knowledge, and Paragraph Comprehension. Because the tests are timed, it is essential to review and practice first. I can help you pre... 18 Subjects: including prealgebra, reading, English, geometry ...Born and raised in Japan, and also went back to teach English in Japan, I am naturally sensitive to diverse cultures and situations. I have a BS in Secondary Education, with Certifications in Middle Math, English as a Second Language, Elementary Education, and Special Education. In my teaching ... 12 Subjects: including prealgebra, reading, algebra 1, ESL/ESOL ...I hold a Bachelor in Science with majors in physics and chemistry, and have taught and tutored chemistry in a high school setting for 13 years. French is my primary language. I am fluent in French since I was child. 12 Subjects: including prealgebra, chemistry, French, physics
{"url":"http://www.purplemath.com/wauwatosa_wi_prealgebra_tutors.php","timestamp":"2014-04-16T19:21:38Z","content_type":null,"content_length":"24412","record_id":"<urn:uuid:938df6ac-3066-461d-97da-390afdc9f2be>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability help
November 16th 2010, 08:03 AM #1

Hi guys, I have been set a few examples to work through and I've had a go at them. I'm just wondering if you could help me double check them as I'm not entirely happy.

An electronic assembly consists of two subsystems, A and B, say. After manufacture each assembly is tested.
P(A is faulty) = 0.04
P(B is faulty but A is not faulty) = 0.01
P(Both A & B are faulty) = 0.025

Question 1: Calculate P(A is faulty but B is not). My answer: 0.04 - 0.025 = 0.015
Question 2: Calculate P(B is faulty). My answer: 0.01 + 0.025 = 0.035
Question 3: Do faults in the two subsystems occur independently? My answer: The two subsystems are independent as they do not affect each other.
Question 4: Calculate P(A is faulty given that B is faulty). My answer: 0.04 + 0.015 - 0.04 x 0.015 = 0.0544

So I know the last one is almost definitely wrong, and is there any way to mathematically determine whether the two subsystems are independent of each other? Any help is appreciated. Cheers guys.

Q1 and Q2 seem fine; drawing a Venn diagram will help, with events A and B being that the corresponding subsystem is faulty. Two events are independent if and only if $P(A \cap B) = P(A)\cdot P(B)$. From the question we are given $P(A \cap B)=0.025$ and $P(A)=0.04$. In Q2 you calculated that $P(B)=0.035$, so the RHS is $P(A)\cdot P(B)=0.04 \cdot 0.035=0.0014$; therefore they cannot be independent. For Question 4, use $P(A|B)=\frac{P(A \cap B)}{P(B)}$.
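Here is the same arithmetic written out as a short sketch, in case it helps to see the independence check and the conditional probability side by side (the tolerance constant is just an implementation detail):

p_A = 0.04                  # P(A faulty)
p_B_not_A = 0.01            # P(B faulty and A not faulty)
p_A_and_B = 0.025           # P(A and B both faulty)

p_A_not_B = p_A - p_A_and_B          # 0.015  (Question 1)
p_B = p_B_not_A + p_A_and_B          # 0.035  (Question 2)

independent = abs(p_A * p_B - p_A_and_B) < 1e-12
print(round(p_A * p_B, 4), p_A_and_B, independent)   # 0.0014 vs 0.025 -> not independent

p_A_given_B = p_A_and_B / p_B        # Question 4, using P(A|B) = P(A and B) / P(B)
print(round(p_A_given_B, 4))         # ~0.7143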
{"url":"http://mathhelpforum.com/advanced-statistics/163430-probability-help.html","timestamp":"2014-04-21T08:36:15Z","content_type":null,"content_length":"33497","record_id":"<urn:uuid:acdd9fdb-1e56-4413-a610-b0633d15bd78>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
A Practical Approach to Drawing Undirected Graphs Results 1 - 10 of 13 , 1996 "... . We report on our experiments with five graph drawing algorithms for general undirected graphs. These are the algorithms FR introduced by Fruchterman and Reingold [5], KK by Kamada and Kawai [11], DH by Davidson and Harel [1], Tu by Tunkelang [13] and GEM by Frick, Ludwig and Mehldau [6]. Implement ..." Cited by 44 (1 self) Add to MetaCart . We report on our experiments with five graph drawing algorithms for general undirected graphs. These are the algorithms FR introduced by Fruchterman and Reingold [5], KK by Kamada and Kawai [11], DH by Davidson and Harel [1], Tu by Tunkelang [13] and GEM by Frick, Ludwig and Mehldau [6]. Implementations of these algorithms have been integrated into our GraphEd system [9]. We have tested these algorithms on a wide collection of examples and with different settings of parameters. Our examples are from original papers and by our own. The obtained drawings are evaluated both empirically and by GraphEd's evaluation toolkit. As a conclusion we can confirm the reported good behaviour of the algorithms. Combining time and quality we recommend to use GEM or KK first, then FR and Tu and finally DH. 1 Introduction Graph drawing has become an important area of research in Computer Science. There is a wide range of applications including data structures, data bases, software engineering, VLSI te... , 1998 "... JAWAA is a simple command language for creating animations of data structures and displaying them with a Web browser. Commands are stored in a script file that is retrieved and run by the JAWAA applet when the applet's Web page is accessed through the Web. JAWAA commands allow for creation and movem ..." Cited by 36 (4 self) Add to MetaCart JAWAA is a simple command language for creating animations of data structures and displaying them with a Web browser. Commands are stored in a script file that is retrieved and run by the JAWAA applet when the applet's Web page is accessed through the Web. JAWAA commands allow for creation and movement of primitive objects (circles, lines, text, rectangles) and data structure objects (arrays, stacks, queues, lists, trees and graphs). A JAWAA script can be generated as the output of a program written in any language. 1 Introduction An animation of a data structure is helpful to students as an educational aid in two ways, first as an alternative view in understanding a newly presented data structure or algorithm and second as an aid in debugging a program that uses the data structure. An animation can be easier to understand and remember than a textual representation, especially when one can interact with the animation by trying different input. Furthermore, using animations to debug p... - Theoretical Computer Science , 1999 "... Spring algorithms are regarded as effective tools for visualizing undirected graphs. One major feature of applying spring algorithms is to display symmetric properties of graphs. This feature has been confirmed by numerous experiments. In this paper, firstly we formalize the concepts of graph symmet ..." Cited by 22 (3 self) Add to MetaCart Spring algorithms are regarded as effective tools for visualizing undirected graphs. One major feature of applying spring algorithms is to display symmetric properties of graphs. This feature has been confirmed by numerous experiments. 
In this paper, firstly we formalize the concepts of graph symmetries in terms of "reflectional" and "rotational" automorphisms; and characterize the types of symmetries, which can be displayed simultaneously by a graph layout, in terms of "geometric" automorphism groups. We show that our formalization is complete. Secondly, we provide general theoretical evidence of why many spring algorithms can display graph symmetry. Finally, the strength of our general theorem is demonstrated from its application to several existing spring algorithms. 1 Introduction Graphs are commonly used in Computer Science to model relational structures such as programs, databases, and data structures. A good graph "layout" gives a clear understanding of a structural model; a ba... , 1999 "... Graphs are ubiquitous, finding applications in domains ranging from software engineering to computational biology. While graph theory and graph algorithms are some of the oldest, most studied fields in computer science, the problem of visualizing graphs is comparatively young. This problem, known as ..." Cited by 19 (0 self) Add to MetaCart Graphs are ubiquitous, finding applications in domains ranging from software engineering to computational biology. While graph theory and graph algorithms are some of the oldest, most studied fields in computer science, the problem of visualizing graphs is comparatively young. This problem, known as graph drawing, is that of transforming combinatorial graphs into geometric drawings for the purpose of visualization. Most published algorithms for drawing general graphs model the drawing problem with a physical analogy, representing a graph as a system of springs and other physical elements and then simulating the relaxation of this physical system. Solving the graph drawing problem involves both choosing a physical model and then using numerical optimization to simulate the physical system. In this , 2008 "... Abstract — Systems biologists use interaction graphs to model the behavior of biological systems at the molecular level. In an iterative process, such biologists observe the reactions of living cells under various experimental conditions, view the results in the context of the interaction graph, and ..." Cited by 16 (3 self) Add to MetaCart Abstract — Systems biologists use interaction graphs to model the behavior of biological systems at the molecular level. In an iterative process, such biologists observe the reactions of living cells under various experimental conditions, view the results in the context of the interaction graph, and then propose changes to the graph model. These graphs serve as a form of dynamic knowledge representation of the biological system being studied and evolve as new insight is gained from the experimental data. While numerous graph layout and drawing packages are available, these tools did not fully meet the needs of our immunologist collaborators. In this paper, we describe the data information display needs of these immunologists and translate them into design decisions. These decisions led us to create Cerebral, a system that uses a biologically guided graph layout and incorporates experimental data directly into the graph display. Small multiple views of different experimental conditions and a data-driven parallel coordinates view enable correlations between experimental conditions to be analyzed at the same time that the data is viewed in the graph context. 
This combination of coordinated views allows the biologist to view the data from many different perspectives simultaneously. To illustrate the typical analysis tasks performed, we analyze two datasets using Cerebral. Based on feedback from our collaborators we conclude that Cerebral is a valuable tool for analyzing experimental data in the context of an interaction graph model. Index Terms—Graph layout, systems biology visualization, small multiples, design study. 1 - DAMATH: Discrete Applied Mathematics and Combinatorial Operational Research , 1999 "... Suppose that G = (U; L; E) is a bipartite graph with vertex set U [L and edge set E U L. A typical convention for drawing G is to put the vertices of U on a horizontal line and the vertices of L on another horizontal line, and then to represent edges by line segments between the vertices that d ..." Cited by 5 (1 self) Add to MetaCart Suppose that G = (U; L; E) is a bipartite graph with vertex set U [L and edge set E U L. A typical convention for drawing G is to put the vertices of U on a horizontal line and the vertices of L on another horizontal line, and then to represent edges by line segments between the vertices that determine them. \Edge concentration" is known as an eective method to draw dense bipartite graphs clearly. The key in the edge concentration method is to reduce the number of edges, while the graph structural information is retained. The problem of having a maximal reduction on the number of edges by the edge concentration method was left open. In this paper we show that this problem is NP-hard. Keywords: Graph drawing, Bipartite graph, Edge cover, NP-complete. 1 Introduction Graphs are commonly used in computer science to model relation structures such as programs, databases, and data structures. A good graph drawing gives a clear understanding of a structural model; a bad , 1999 "... Algorithms based on force-directed placement and virtual physical models have become one of the most effective techniques for drawing undirected graphs. Spring-based algorithms that are the subject of this thesis are one type of force-directed algorithms. Spring algorithms are simple. They produce g ..." Cited by 4 (0 self) Add to MetaCart Algorithms based on force-directed placement and virtual physical models have become one of the most effective techniques for drawing undirected graphs. Spring-based algorithms that are the subject of this thesis are one type of force-directed algorithms. Spring algorithms are simple. They produce graphs with approximately uniform edge lengths, distribute nodes reasonably well, and preserve graph symmetries. A problem with these algorithms is that depending on their initial layout, it is possible that they find undesirable drawings associated with some local minimum criteria. In addition, it has always been a challenge to determine when a layout is stable in order to stop the algorithm. In this thesis, we develop a simple but effective cost function that can determine a node layout quality as well as the quality of the entire graph layout during the execution of a Spring algorithm. We use this cost function for producing the initial layout of the algorithm, for helping nodes move out ... - Proceedings of GD’97, Graph Drawing, 5 th International Symposium , 1997 "... Query refinement is a powerful tool for a document search and retrieval system. 
Lexical navigation—that is, the exploration of a network that expresses relations among all possible query terms—provides a natural mechanism for query refinement. An essential part of lexical navigation is the visualiza ..." Cited by 4 (2 self) Add to MetaCart Query refinement is a powerful tool for a document search and retrieval system. Lexical navigation—that is, the exploration of a network that expresses relations among all possible query terms—provides a natural mechanism for query refinement. An essential part of lexical navigation is the visualization of this network. This dynamic visualization problem is essentially one of incrementally drawing and manipulating a non-hierarchical graph. In this paper, we present the graph-drawing system we have developed for lexical navigation. 1. - IEEE Trans. on Visualization and Computer Graphics , 2009 "... Abstract—We introduce several novel visualization and interaction paradigms for visual analysis of published protein-protein interaction networks, canonical signaling pathway models, and quantitative proteomic data. We evaluate them anecdotally with domain scientists to demonstrate their ability to ..." Cited by 3 (0 self) Add to MetaCart Abstract—We introduce several novel visualization and interaction paradigms for visual analysis of published protein-protein interaction networks, canonical signaling pathway models, and quantitative proteomic data. We evaluate them anecdotally with domain scientists to demonstrate their ability to accelerate the proteomic analysis process. Our results suggest that structuring protein interaction networks around canonical signaling pathway models, exploring pathways globally and locally at the same time, and driving the analysis primarily by the experimental data, all accelerate the understanding of protein pathways. Concrete proteomic discoveries within T-cells, mast cells, and the insulin signaling pathway validate the findings. The aim of the paper is to introduce novel protein network visualization paradigms and anecdotally assess the opportunity of incorporating them into established proteomic applications. We also make available a prototype implementation of our methods, to be used and evaluated by the proteomic community. Index Terms—Biological (genome or protein) databases, Data and knowledge visualization, Graphs and networks, Interactive data exploration and discovery, Visualization techniques and methodologies. 1 "... In visual datamining proximity data, which encodes the relationship between some entities as a distance, is often available. This proximity data is inherently high-dimensional and can be mapped into a low-dimensional 2D or 3D target space such that the points in the target space adhere to the specif ..." Add to MetaCart In visual datamining proximity data, which encodes the relationship between some entities as a distance, is often available. This proximity data is inherently high-dimensional and can be mapped into a low-dimensional 2D or 3D target space such that the points in the target space adhere to the specified distances. The target space can then be visualised as scatterplot.
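To make the "spring algorithm" idea running through these abstracts concrete, here is a bare-bones force-directed layout sketch in the general Fruchterman-Reingold style: repulsion between every pair of nodes, attraction along edges, and a temperature that caps and cools the per-step displacement. It only illustrates the scheme; it is not a reimplementation of FR, KK, GEM, or any other cited system, and the constants are arbitrary.

import math, random

def spring_layout(nodes, edges, iters=200, k=1.0, seed=0):
    rng = random.Random(seed)
    pos = {v: [rng.uniform(-1, 1), rng.uniform(-1, 1)] for v in nodes}
    t = 0.1                                     # initial "temperature"
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for u in nodes:                         # pairwise repulsion
            for v in nodes:
                if u == v:
                    continue
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[u][0] += dx / d * f
                disp[u][1] += dy / d * f
        for u, v in edges:                      # attraction along each edge
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            for a, s in ((u, -1), (v, +1)):
                disp[a][0] += s * dx / d * f
                disp[a][1] += s * dy / d * f
        for v in nodes:                         # move a limited amount, then cool
            dx, dy = disp[v]
            d = math.hypot(dx, dy) or 1e-9
            pos[v][0] += dx / d * min(d, t)
            pos[v][1] += dy / d * min(d, t)
        t *= 0.98
    return pos

print(spring_layout([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))   # a roughly square 4-cycle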
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=896816","timestamp":"2014-04-17T07:12:25Z","content_type":null,"content_length":"40120","record_id":"<urn:uuid:9ea56dd1-6a01-447a-b1bb-56b7c2237f19>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Where the really hard problems are Results 1 - 10 of 462 , 1992 "... . A constraint satisfaction problem involves finding values for variables subject to constraints on which combinations of values are allowed. In some cases it may be impossible or impractical to solve these problems completely. We may seek to partially solve the problem, in particular by satisfying ..." Cited by 427 (23 self) Add to MetaCart . A constraint satisfaction problem involves finding values for variables subject to constraints on which combinations of values are allowed. In some cases it may be impossible or impractical to solve these problems completely. We may seek to partially solve the problem, in particular by satisfying a maximal number of constraints. Standard backtracking and local consistency techniques for solving constraint satisfaction problems can be adapted to cope with, and take advantage of, the differences between partial and complete constraint satisfaction. Extensive experimentation on maximal satisfaction problems illuminates the relative and absolute effectiveness of these methods. A general model of partial constraint satisfaction is proposed. 1 Introduction Constraint satisfaction involves finding values for problem variables subject to constraints on acceptable combinations of values. Constraint satisfaction has wide application in artificial intelligence, in areas ranging from temporal r... - J. ARTIFICIAL INTELLIGENCE RESEARCH , 1993 "... This paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. In this approach one starts with an inconsistent assignment for a set of variables and searches through the space of possible repairs. The search can be guided by a value-order ..." Cited by 398 (6 self) Add to MetaCart This paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. In this approach one starts with an inconsistent assignment for a set of variables and searches through the space of possible repairs. The search can be guided by a value-ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. The heuristic can be used with a variety of different search strategies. We demonstrate empirically that on the n-queens problem, a technique based on this approach performs orders of magnitude better than traditional backtracking techniques. We also describe a scheduling application where the approach has been used successfully. A theoretical analysis is presented both to explain why this method works well on certain types of problems and to predict when it is likely to be most effective. - Journal of Artificial Intelligence Research , 1994 "... There has been substantial recent interest in two new families of search techniques. One family consists of nonsystematic methods such as gsat; the other contains systematic approaches that use a polynomial amount of justification information to prune the search space. This paper introduces a new te ..." Cited by 360 (14 self) Add to MetaCart There has been substantial recent interest in two new families of search techniques. One family consists of nonsystematic methods such as gsat; the other contains systematic approaches that use a polynomial amount of justification information to prune the search space. This paper introduces a new technique that combines these two approaches. 
The algorithm allows substantial freedom of movement in the search space but enough information is retained to ensure the systematicity of the resulting analysis. Bounds are given for the size of the justification database and conditions are presented that guarantee that this database will be polynomial in the size of the problem in question. 1 INTRODUCTION The past few years have seen rapid progress in the development of algorithms for solving constraintsatisfaction problems, or csps. Csps arise naturally in subfields of AI from planning to vision, and examples include propositional theorem proving, map coloring and scheduling problems. The probl... - Artificial Intelligence , 1994 "... I present several computational complexity results for propositional STRIPS planning, i.e., STRIPS planning restricted to ground formulas. Different planning problems can be defined by restricting the type of formulas, placing limits on the number of pre- and postconditions, by restricting negation ..." Cited by 299 (3 self) Add to MetaCart I present several computational complexity results for propositional STRIPS planning, i.e., STRIPS planning restricted to ground formulas. Different planning problems can be defined by restricting the type of formulas, placing limits on the number of pre- and postconditions, by restricting negation in pre- and postconditions, and by requiring optimal plans. For these types of restrictions, I show when planning is tractable (polynomial) and intractable (NPhard) . In general, it is PSPACE-complete to determine if a given planning instance has any solutions. Extremely severe restrictions on both the operators and the formulas are required to guarantee polynomial time or even NP-completeness. For example, when only ground literals are permitted, determining plan existence is PSPACE-complete even if operators are limited to two preconditions and two postconditions. When definite Horn ground formulas are permitted, determining plan existence is PSPACE-complete even if operators are limited t... , 1994 "... . Constraint satisfaction problems have wide application in artificial intelligence. They involve finding values for problem variables where the values must be consistent in that they satisfy restrictions on which combinations of values are allowed. Two standard techniques used in solving such p ..." Cited by 206 (12 self) Add to MetaCart . Constraint satisfaction problems have wide application in artificial intelligence. They involve finding values for problem variables where the values must be consistent in that they satisfy restrictions on which combinations of values are allowed. Two standard techniques used in solving such problems are backtrack search and consistency inference. Conventional wisdom in the constraint satisfaction community suggests: 1) using consistency inference as preprocessing before search to prune values from consideration reduces subsequent search effort and 2) using consistency inference during search to prune values from consideration is best done at the limited level embodied in the forward checking algorithm. We present evidence contradicting both pieces of conventional wisdom, and suggesting renewed consideration of an approach which fully maintains arc consistency during backtrack search. 1 Introduction Constraint satisfaction problems (CSPs) involve finding values for - In CP , 2000 "... . When multiple agents are in a shared environment, there usually exist constraints among the possible actions of these agents. 
A distributed constraint satisfaction problem (distributed CSP) is a problem to find a consistent combination of actions that satisfies these inter-agent constraints. Vario ..." Cited by 203 (7 self) Add to MetaCart . When multiple agents are in a shared environment, there usually exist constraints among the possible actions of these agents. A distributed constraint satisfaction problem (distributed CSP) is a problem to find a consistent combination of actions that satisfies these inter-agent constraints. Various application problems in multi-agent systems can be formalized as distributed CSPs. This paper gives an overview of the existing research on distributed CSPs. First, we briefly describe the problem formalization and algorithms of normal, centralized CSPs. Then, we show the problem formalization and several MAS application problems of distributed CSPs. Furthermore, we describe a series of algorithms for solving distributed CSPs, i.e., the asynchronous backtracking, the asynchronous weak-commitment search, the distributed breakout, and distributed consistency algorithms. Finally,we showtwo extensions of the basic problem formalization of distributed CSPs, i.e., handling multiple local variables, and dealing with over-constrained problems. Keywords: Constraint Satisfaction, Search, distributed AI 1. - Artificial Intelligence , 2002 "... We propose a new translation from normal logic programs with constraints under the answer set semantics to propositional logic. Given a normal logic program, we show that by adding, for each loop in the program, a corresponding loop formula to the program’s completion, we obtain a one-to-one corresp ..." Cited by 201 (6 self) Add to MetaCart We propose a new translation from normal logic programs with constraints under the answer set semantics to propositional logic. Given a normal logic program, we show that by adding, for each loop in the program, a corresponding loop formula to the program’s completion, we obtain a one-to-one correspondence between the answer sets of the program and the models of the resulting propositional theory. In the worst case, there may be an exponential number of loops in a logic program. To address this problem, we propose an approach that adds loop formulas a few at a time, selectively. Based on these results, we implement a system called ASSAT(X), depending on the SAT solver X used, for computing one answer set of a normal logic program with constraints. We test the system on a variety of benchmarks including the graph coloring, the blocks world planning, and Hamiltonian Circuit domains. Our experimental results show that in these domains, for the task of generating one answer set of a normal logic program, our system has a clear edge over the state-of-art answer set programming systems Smodels and DLV. 1 1 , 1998 "... Renewed motives for space exploration have inspired NASA to work toward the goal of establishing a virtual presence in space, through heterogeneous effets of robotic explorers. Information technology, and Artificial Intelligence in particular, will play a central role in this endeavor by endowing th ..." Cited by 188 (16 self) Add to MetaCart Renewed motives for space exploration have inspired NASA to work toward the goal of establishing a virtual presence in space, through heterogeneous effets of robotic explorers. 
Information technology, and Artificial Intelligence in particular, will play a central role in this endeavor by endowing these explorers with a form of computational intelligence that we call remote agents. In this paper we describe the Remote Agent, a specific autonomous agent architecture based on the principles of model-based programming, on-board deduction and search, and goal-directed closed-loop commanding, that takes a significant step toward enabling this future. This architecture addresses the unique characteristics of the spacecraft domain that require highly reliable autonomous operations over long periods of time with tight deadlines, resource constraints, and concurrent activity among tightly coupled subsystems. The Remote Agent integrates constraint-based temporal planning and scheduling, robust multi-threaded execution, and model-based mode identification and reconfiguration. The demonstration of the integrated system as an on-board controller for Deep Space One, NASA's rst New Millennium mission, is scheduled for a period of a week in late 1998. The development of the Remote Agent also provided the opportunity to reassess some of AI's conventional wisdom about the challenges of implementing embedded systems, tractable reasoning, and knowledge representation. We discuss these issues, and our often contrary experiences, throughout the paper. , 1995 "... ... quickly across a wide range of hard SAT problems than any other SAT tester in the literature on comparable platforms. On a Sun SPARCStation 10 running SunOS 4.1.3 U1, POSIT can solve hard random 400-variable 3-SAT problems in about 2 hours on the average. In general, it can solve hard n-variable ..." Cited by 161 (0 self) Add to MetaCart ... quickly across a wide range of hard SAT problems than any other SAT tester in the literature on comparable platforms. On a Sun SPARCStation 10 running SunOS 4.1.3 U1, POSIT can solve hard random 400-variable 3-SAT problems in about 2 hours on the average. In general, it can solve hard n-variable random 3-SAT problems with search trees of size O(2 n=18:7 ). In addition to justifying these claims, this dissertation describes the most significant achievements of other researchers in this area, and discusses all of the widely known general techniques for speeding up SAT search algorithms. It should be useful to anyone interested in NP-complete problems or combinatorial optimization in general, and it should be particularly useful to researchers in either Artificial Intelligence or Operations Research. - Journal of Heuristics , 1995 "... The competitive nature of most algorithmic experimentation is a source of problems that are all too familiar to the research community. It is hard to make fair comparisons between algorithms and to assemble realistic test problems. Competitive testing tells us which algorithm is faster but not w ..." Cited by 119 (2 self) Add to MetaCart The competitive nature of most algorithmic experimentation is a source of problems that are all too familiar to the research community. It is hard to make fair comparisons between algorithms and to assemble realistic test problems. Competitive testing tells us which algorithm is faster but not why. Because it requires polished code, it consumes time and energy that could be spent doing more experiments. This paper argues that a more scientific approach of controlled experimentation, similar to that used in other empirical sciences, avoids or alleviates these problems. 
We have confused research and development; competitive testing is suited only for the latter. Most experimental studies of heuristic algorithms resemble track meets more than scientific endeavors. Typically an investigator has a bright idea for a new algorithm and wants to show that it works better, in some sense, than known algorithms. This requires computational tests, perhaps on a standard set of benchmark p...
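The min-conflicts repair idea from the Minton et al. abstract above is easy to state in code. The sketch below applies it to n-queens, the example used in that abstract; the step limit and the random tie-breaking are my own choices, not details taken from the paper.

import random

def conflicts(cols, row, col):
    """Number of queens attacking a queen placed at (row, col)."""
    return sum(1 for r, c in enumerate(cols)
               if r != row and (c == col or abs(c - col) == abs(r - row)))

def min_conflicts(n, max_steps=100_000, seed=0):
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]   # queen in row r sits at column cols[r]
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not conflicted:
            return cols                           # consistent assignment found
        r = rng.choice(conflicted)                # repair a randomly chosen conflicted queen
        counts = [conflicts(cols, r, c) for c in range(n)]
        best = min(counts)
        cols[r] = rng.choice([c for c in range(n) if counts[c] == best])
    return None                                   # give up after max_steps repairs

print(min_conflicts(8))                           # e.g. a list of 8 non-attacking columns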
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.97.3555","timestamp":"2014-04-21T02:12:59Z","content_type":null,"content_length":"40696","record_id":"<urn:uuid:d572f266-181a-4420-a723-ad0679ee6b3f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Contradiction proofs
October 28th 2007, 03:10 AM #1

These are from lecture and I don't understand them thoroughly. Any tips that could help me to understand this better would be greatly appreciated. Here are the proofs.
1) A positive integer is called prime if its only positive divisors are 1 and itself. Otherwise it is called composite.
2) There are infinitely many primes.

What don't you understand: the definition of a prime, that there are infinitely many of them, or the proof of the latter?

I got some messy notes from a classmate on these and I can't figure out the proofs of the statements. That was all that I wanted to see. Prove both statements. Thanks.

The first statement is just a definition, there's nothing to prove. For the second, the most famous (and perhaps first) proof of the infinitude of primes was given by Euclid. It may be the proof you want. See here or here; at the first link, other proofs besides Euclid's are shown.

This is a sidebar. I join many others who disagree with #1 as a definition of prime. I know that many use it. I even heard Keith Devlin use it on National Public Radio. However, that definition makes 1 a prime. What do you think is the best definition of a prime?
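If it helps to see Euclid's argument computationally: for any finite list of primes, the number (product of the list) + 1 leaves remainder 1 on division by every prime in the list, so its smallest prime factor is a prime the list missed, and hence no finite list can contain them all. A small sketch (the helper names are my own):

def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n                      # n itself is prime

def new_prime_outside(primes):
    product = 1
    for p in primes:
        product *= p
    q = smallest_prime_factor(product + 1)
    assert q not in primes        # every listed prime divides the product,
                                  # so none of them divides product + 1
    return q

print(new_prime_outside([2, 3, 5, 7, 11, 13]))
# 59, because 2*3*5*7*11*13 + 1 = 30031 = 59 * 509 -- note that product + 1
# need not itself be prime; it just has a prime factor outside the list.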
{"url":"http://mathhelpforum.com/discrete-math/21486-contradiction-proofs.html","timestamp":"2014-04-18T13:36:44Z","content_type":null,"content_length":"55217","record_id":"<urn:uuid:e0b8436a-287f-450b-8d85-86b99aa3e045>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
Jackpot Casino. Roulette, blackjack, slots, poker and craps. Tutorial Game rules How to play. The objective is to guess where the ball lands when it stops spinning. Our game implements the French (or European) Roulette which has 36 numbers and only a single zero. This variant of Roulette is played in European and South America casinos. In a single-zero roulette the number order on the roulette wheel should adhere to the following clockwise sequence: 0-32-15-19-4-21-2-25-17-34-6-27-13-36-11-30-8-23-10-5-24-16-33-1-20-14-31-9-22-18-29-7-28-12-35-3-26. Place bet(s) on the desired spots – move them using the stylus. When you are ready with the bets, tap the spinning roulette wheel. When the ball stops moving, the winning number is displayed in the top left corner and the resulting sum is shown in the top right corner. Bet rules. Betting on a single number pays 35 to 1, on two numbers pays 17 to 1, on three numbers pays 11 to 1, on four numbers – 8 to 1, on six numbers – 5 to 1. Column bets and dozen bets pay 2 to 1. Red, Black, Even, Odd, Low, High bets pay 1 to 1. │ Betting on two cells (10 │ Betting on three cells (7, 8 │ Betting on four cells (13, 14, 16 │ Betting on six cells (31, 32, 33, 34, │ "1" bets on 0, 2 and 3; "5" bets on 0, 1, 2 and 3; "20" │ │ and 11) │ and 9) │ and 17) │ 35 and 36) │ bets on 0, 1 and 2 │ How to play. The rules vary from place to place, so you have to find out the exact rules of the specific casino before you start gambling. In BlackJack you always play against the dealer. The cards are valued as follows: an Ace can count as either 1 or 11, it's assumed to always have the value that makes the best hand. The cards from 2 through 9 are valued as indicated. The 10, Jack, Queen, and King are all valued at 10. The value of a hand is simply the sum of the value of each card in the hand. The objective is to have a hand value that is closer to 21 than that of the dealer, without going over 21 (“busting”). You bust when your cards total to more than 21 and you lose automatically. First you have to place bets, to move them to the specified rectangular area. The maximum bet sum is the current sum on your account. You can see the current sum on your account by clicking the corner of the credit card in the low left corner. After placing the bets click “Deal”. The dealer will deal the cards. The second dealer’s card is turned down. Then you have to make one of the following: hit, stand, split or double. Hit means to draw another card. Stand means no more cards. Double: double your initial bet following the initial two-card deal, but you can hit one card only. Split: split the initial two-card hand into two and play them separately - allowed only when the two first cards are of equal value. If the dealer’s shown card is an Ace, you are offered to buy insurance against dealer’s BlackJack. It costs half the bets. If the dealer has a 10 face down and makes a blackjack, insurance pays at 2-1 odds, but loses if the dealer does not. After that the dealer turns up the down card. By rule, on counts of 17 or higher the dealer must stay; on counts of 16 or lower the dealer must draw. If you make a total of 21 with the first two cards (a 10 or a face and an Ace), you win. This is called BlackJack. If both you and the dealer have BlackJack or the same hand count, it is called a Push or a Tie (or a Stand-off) and you get your bet back. BlackJack experts have carefully studied the game and found out some strategies which can help you beat the casino at BlackJack. 
The following strategies are a starting point, those who would like to learn more should consult some professional books such as The World's Greatest Blackjack Book by Lance Humble. Basic strategy │Your Hand │Dealer's Up Card │ │ │2 │3 │4 │5 │6 │7 │8 │9 │10│A │ │5,6,7,8 │H │H │H │H │H │H │H │H │H │H │ │9 │H │D │D │D │D │H │H │H │H │H │ │10 │D │D │D │D │D │D │D │D │H │H │ │11 │D │D │D │D │D │D │D │D │D │H │ │12 │H │H │S │S │S │H │H │H │H │H │ │13,14 │S │S │S │S │S │H │H │H │H │H │ │15,16 │S │S │S │S │S │H │H │H │H │H │ │17,18,19,20 │S │S │S │S │S │S │S │S │S │S │ │Soft 13,14 │H │H │H │D │D │H │H │H │H │H │ │Soft 15,16 │H │H │D │D │D │H │H │H │H │H │ │Soft 17 │H │D │D │D │D │H │H │H │H │H │ │Soft 18 │S │D │D │D │D │S │S │H │H │H │ │Soft 19,20 │S │S │S │S │S │S │S │S │S │S │ │A-A │SP│SP│SP│SP│SP│SP│SP│SP│SP│SP│ │2-2,3-3 │SP│SP│SP│SP│SP│SP│H │H │H │H │ │4-4 │H │H │H │SP│SP│H │H │H │H │H │ │5-5 │D │D │D │D │D │D │D │D │H │H │ │6-6 │SP│SP│SP│SP│SP│H │H │H │H │H │ │7-7 │SP│SP│SP│SP│SP│SP│H │H │H │H │ │8-8 │SP│SP│SP│SP│SP│SP│SP│SP│SP│SP│ │9-9 │SP│SP│SP│SP│SP│S │SP│SP│S │S │ │10-10 │S │S │S │S │S │S │S │S │S │S │ H=Hit, S=Stand, D=Double, SP=Split Vegas Downtown Single-Hand strategy │Your Hand │Dealer's Up Card │ │ │2 │3 │4 │5 │6 │7 │8 │9 │10│A │ │8 │H │H │H │H │H │H │H │H │H │H │ │9 │D │D │D │D │D │H │H │H │H │H │ │10,11 │D │D │D │D │D │D │D │D │D │D │ │12 │H │H │S │S │S │H │H │H │H │H │ │13,14,15,16 │S │S │S │S │S │H │H │H │H │H │ │17 │S │S │S │S │S │S │S │S │S │S │ │A-2 │H │H │H │D │D │H │H │H │H │H │ │A-3,A-4,A-5 │H │H │D │D │D │H │H │H │H │H │ │A-6 │H │D │D │D │D │H │H │H │H │H │ │A-7 │D │D │D │D │D │S │S │H │H │S │ │A-8 │S │S │S │S │D │S │S │S │S │S │ │2-2,3-3,6-6 │SP│SP│SP│SP│SP│SP│H │H │H │H │ │4-4 │H │H │H │SP│SP│H │H │H │H │H │ │5-5 │D │D │D │D │D │D │D │D │H │H │ │7-7 │SP│SP│SP│SP│SP│SP│SP│H │H │H │ │8-8, A-A │SP│SP│SP│SP│SP│SP│SP│SP│SP│SP│ │9-9 │SP│SP│SP│SP│SP│S │SP│SP│S │S │ │10-10 │S │S │S │S │S │S │S │S │S │S │ H=Hit, S=Stand, D=Double, SP=Split Straight Hand strategy │Your Hand │Dealer's Up Card │ │ │2│3│4│5│6│7│8│9│10 │A│ │8 │H│H│H│H│H│H│H│H│H │H│ │9 │H│D│D│D│D│H│H│H│H │H│ │10 │D│D│D│D│D│D│D│D│H │H│ │11 │D│D│D│D│D│D│D│D│D │H│ │12 │H│H│S│S│S│H│H│H│H │H│ │13,14,15,16 │S│S│S│S│S│H│H│H│H │H│ │17+ │S│S│S│S│S│S│S│S│S │S│ H=Hit, S=Stand, D=Double, SP=Split How to play. The objective is to get the best combinations of symbols on the selected payline(s). Click "Bet One" to place one bet, you can place not more than 20 bets. Click "Bet Max" to place the maximum allowed bet. Click the arrow(s) and choose the payline(s). Your bet amount will be divided between the selected paylines. By clicking the arrows you can regulate the bet distribution. You can click "Cash Out" to cancel the bets. Click "Spin All" to launch the machine. When the drums stop spinning, the result will be shown. The result is calculated for each selected payline according to the following rules. How to play. The objective is to get as much money as possible by spinning the drums. Click "Bet One" to place one bet. One bet gives you one attempt to spin up or down one cylinder. Press "Arrow Up" button above the selected cylinder and it will spin up by one image cell. You can place as many bets as there are on your account. All five lines (three horizontal and two diagonal) are in the game. If the three images on a single line are the same, the line wins. The number of the remaining bets (attempts) is shown in the left box counter, the amount gained in the given game is shown in the right box counter. 
The objective of the Craps game is to predict the roll of the dice. To place a bet tap on a coin and move it to the desired area on the betting table. You cannot remove a placed bet. The first round is called the Come Out Roll. The marker is off. The Pass bet pays at 1 to 1 if the result is 7 or 11 and loses if the roll is 2, 3, or 12 (craps). If any other number is rolled, it becomes the Point and the on-marker is placed on that number. The dice is then rolled until the same point happens again or until a 7 appears. Don't Pass is the inverse of the Pass bet , except that 12 is a push and your bet will be returned in case of 12. In the subsequent rounds you can place wagers on Come and Don't Come areas. The Come bet wins if a 7 or 11 is rolled and loses on a 2, 3 or 12. Any other number becomes your Come Point and your initial Come bet moves to the appropriate number on the table. To win you must roll your Come Point before a 7. If a 7 comes before, you'll lose. The Don't Come bet is the opposite of the Come bet , except that in case of a 12 you have a push and your bet is returned. Additional bets - Odds - are also possible. Once a Point has been thrown you may bet up to 2 times your Pass Line bet on the 'odds.' The odds are simply an additional wager that the Point will be rolled before a 7. After the Come Out round you can place additional wagers on your Pass, Don't Pass, Come or Don't Come bets. Taking odds on Pass and Come on 4 and 10 pays 2 to 1; on 5 and 9 pays 3 to 1; on 6 and 8 pays 6 to 5. Laying odds on Don't Pass and Don't Come pays in the opposite way. There are other bets as well. At any time you can make a Place bet on a specific number (4, 5, 6, 8, 9, 10). The bet wins if the number is rolled before a 7 and loses in the opposite case. 4 and 10 pay 9 to 5; 5 and 9 pay 7 to 5; 6 and 8 pay 7 to 6. Buy bet works like a Place bet, but the payouts are different. 4 and 10 pay 2 to 1; 5 and 9 pay 3 to 2; 6 and 8 pay 6 to 5. A 5% commission is levied on the bet. Place this bet on the top of the numbered box. Lay bet is similar to Pay Bet, but wins if a 7 is rolled before the chosen number. Place this bet on the bottom of the numbered box. Lay Bet on 4 or 10 pays 1 to 2 (e.g. 2 dollars will bring you 1 dollar). Lay Bet on 5 or 9 pays 2 to 3 (e.g. 3 dollars will bring you 2 dollars). Lay Bet on 6 or 8 pays 5 to 6 (e.g. 6 dollars will bring you a 5-dollar win). Because of these payoffs you must take care to wager the correct amount to receive the correct payoff. The casino rounds down on uneven payoffs. You may place the Field bet at any time. The bet pays 1 to 1 if one of the following numbers roll: 3, 4, 9, 10, 11. The bet pays double if 2 or 12 rolls. There are four Hardway bets : double 2, double 3, double 4 and double 5. The bet wins if the appropriate double is rolled before a 7. The bet loses if a 7 is rolled before the chosen double or if any other combination totalling this number rolls. Double 2 and double 5 pay 7 to 1, double 3 and double 4 pay 9 to 1. Proposition bets include betting on Any Craps (2, 3, or 12), betting on Seven and betting on Horn (2, 3, 11, 12). These bets are one-roll bets. They win if the appropriate number is rolled and lose in the opposite case. The payout is usually drawn on the table. Video Poker is one of the most popular card games in the world. The video poker game is played with a 52- or 54-card deck with (or without) Jokers. The objective is to get the highest winning poker Click "Bet One" to place one bet. 
Click "Bet Max" if you want to place the highest bet. Once you have made a bet, click "Deal" and the machine will deal you five cards. After evaluating your cards you can decide to hold any of these cards. To do so, click "Hold" under the chosen card(s). You can cancel the hold in the same way. After that press "Deal" and all the unheld cards will be replaced by other cards from the pack. Then you'll get the result. If you have one of the winning poker combinations you'll gain money according to the following rules: │Combination │Payment coefficient │ │Jacks or Better │1 │ │Two Pair │2 │ │Three of a Kind │3 │ │Straight │4 │ │Flush │5 │ │Full House │7 │ │Four of a Kind │20 │ │Straight Flush │50 │ │Flush Royal │250 │ A straight flush is a poker hand such as Q♣ J♣ 10♣ 9♣ 8♣, which contains five cards in sequence, all of the same suit. An ace-high straight flush such as A♥ K♥ Q♥ J♥ 10♥ is known as a royal flush. Four of a kind, or quads, is a poker hand such as 7♥ 7♦ 7♣ 7♠, which contains four cards of one rank, and an unmatched card. A full house, also known as a boat or a full boat, is a poker hand such as J♥ J♠ J♣ 8♣ 8♠, which contains three matching cards of one rank, plus two matching cards of another rank. A flush is a poker hand such as Q♠ 10♠ 7♠ 6♠ 4♠, which contains five cards of the same suit, not in rank sequence. A straight is a poker hand such as Q♥ J♠ 10♦ 9♠ 8♣, which contains five cards of sequential rank, of varying suits. Video Poker. Variants. Jacks or Better Jacks or Better poker game is played with a 52-card deck. This is the simplest poker variant, it does not use any wilds or jokers. The lowest winning hand is a pair of Jacks (or better). Tens or Better Tens or Better poker is played with a 52-card deck. This is the simplest poker variant, it does not use any wilds or jokers. The lowest winning hand is a pair of Tens (or better). Deuces Wild Deuces Wild poker is played with a 52-card deck. Deuces ( 2's ) are considered to be "wild" cards. It means they can be used as any cards to complete the best possible poker hand. For example, they can be used as the missing cards to complete the straight. The lowest straight (A2345) is not allowed. Aces and Eights Aces and Eights poker is played with a 52-card deck. There are special winning combinations in this game: "Four Aces or Eights" and "Four Sevens". The lowest possible winning hand is a pair of Jacks (or higher).. One Eyed Jack One Eyed Jack poker is played with a 52-card deck. Jacks of Hearts and Spades ( two cards ) are considered to be "wild" cards. It means they can be used as any cards to complete the best possible poker hand. Joker Poker The Joker poker game is played with a 53-card deck. A Joker card is added to a standard 52-card deck. The Joker card can be used as any card to complete the best possible poker hand. Carribean poker. Caribbean poker is played with a deck of 52 cards. In order to participate in the game, you must first place an "ante bet". You are then given five cards - all of them face up. The dealer also receives five cards; four cards are dealt face down and one card face up. In Caribbean poker, the player and the dealer compare hands formed from their five cards. No additional cards are dealt. When you have evaluated your cards and the dealer's up card, you must decide whether to make a bet or to surrender. If you wish to challenge the dealer, you must place a bet by clicking "Bet". The bet always equals twice the ante. 
If you do not wish to challenge the dealer's hand, you must press "Surrender" and lose your ante. When you receive a good hand, you will naturally want to place a bet to challenge the dealer's hand. When the bet has been placed, the dealer reveals his/her four remaining cards and the hands are compared. The Dealer Must Qualify. In Caribbean poker, the dealer's hand must contain at least one ace and one king in order to qualify. If the dealer's hand does not qualify, you receive 1 to 1 on your ante and your bet is returned to you without winnings. If the dealer's hand does qualify with a value of at least one ace + one king, the best hand wins. Your winning hand receives 1 to 1 on the ante plus the winnings on your bet, which are calculated according to the winnings table. In our implementation of Caribbean Poker we ignore the "Dealer must qualify" rule for the sake of more addictive gambling. The rules are the same for the dealer and for the player. When the dealer and the player receive poker hands of equal value (i.e. a push game), both the ante and the bet are returned to the player. Please note that a pair of two Jacks is considered equal to a pair of two Kings, that is, in combinations (pair, two pairs, three of a kind etc.) card values are not taken into account. Betting Limits: The ante bet limits in Caribbean Poker are from 1 to 200.
The winnings calculation table
│Hand            │Pays     │
│Nothing         │1 to 1   │
│One Pair        │1 to 1   │
│Two Pairs       │2 to 1   │
│Three of a Kind │3 to 1   │
│Straight        │4 to 1   │
│Flush           │5 to 1   │
│Full House      │7 to 1   │
│Four of a Kind  │20 to 1  │
│Straight Flush  │50 to 1  │
│Royal Flush     │100 to 1 │
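The payout tables above lend themselves to a small sketch. The Python fragment below is an illustration only — it is not part of the game software described here, and the function and variable names are invented. It settles a single Caribbean poker round under the simplified no-qualify rule stated above, comparing hands by category only, exactly as the text specifies.

    # Illustrative sketch: resolving one Caribbean poker round with the payout
    # table above.  Hand evaluation is assumed to happen elsewhere; a hand is
    # represented here only by its rank-category name.
    PAYS = {  # "x to 1" multipliers applied to the call bet
        "Nothing": 1, "One Pair": 1, "Two Pairs": 2, "Three of a Kind": 3,
        "Straight": 4, "Flush": 5, "Full House": 7, "Four of a Kind": 20,
        "Straight Flush": 50, "Royal Flush": 100,
    }
    RANK_ORDER = list(PAYS)  # index in this list = relative strength of the category

    def settle(ante, player_rank, dealer_rank, surrendered=False):
        """Return the player's net result for one round."""
        bet = 2 * ante                      # the call bet always equals twice the ante
        if surrendered:
            return -ante
        p, d = RANK_ORDER.index(player_rank), RANK_ORDER.index(dealer_rank)
        if p > d:                           # player wins: even money on ante + table odds on bet
            return ante + bet * PAYS[player_rank]
        if p < d:                           # dealer wins: ante and bet are lost
            return -(ante + bet)
        return 0                            # equal categories push: ante and bet returned

    print(settle(5, "Full House", "Two Pairs"))   # 5 + 10*7 = 75

Note that ties within a category are treated as a push, matching the rule above that card values inside a combination are not taken into account.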
{"url":"http://www.mobile-stream.com/casino_tutorial.html","timestamp":"2014-04-16T22:54:16Z","content_type":null,"content_length":"38811","record_id":"<urn:uuid:76f349ac-a791-47c6-ad87-e936ef3194b6>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Riemann's Critical Line
Would it be correct to say that knowing the position of the zeros allows the number of primes up to x to be calculated more accurately than otherwise (than Gauss's calc for instance?). That would be fair, but maybe not quite in the sense of the word "calculate" that you would mean. Better control of the zeros gives better bounds for the junk term in the formula for [tex]\pi(x)[/tex]. Maybe a simpler example of asymptotics is in order. [tex](x^2+x)\sim x^2[/tex] means as x gets really large the ratio [tex](x^2+x)/x^2[/tex] tends to 1. If you've taken any calculus, you've seen this plenty of times. If not, try putting in very large numbers to get an idea of what is happening. So we can use the simpler [tex]x^2[/tex] to approximate [tex]x^2+x[/tex] when x is large. This approximation is imperfect though, especially if you want exact values of [tex]x^2+x[/tex]. If you put [tex]x=10^{10}[/tex], then even though the ratio [tex](x^2+x)/x^2[/tex] is within 10 decimal places of 1, the absolute error [tex](x^2+x)-x^2[/tex] is an enormous [tex]10^{10}[/tex]. In terms of the graphs of the functions [tex]x^2+x[/tex] and [tex]x^2[/tex], if you zoomed out very far, the graphs would be indistinguishable, but up close there's a huge gap. Now control on the junk term in the prime number theorem tells us how fast the ratio of [tex]\pi(x)[/tex] and our simpler formula (such as Gauss's logarithmic integral estimate I gave in the last post) is going to 1. The absolute error can (and does) still get extremely large. This means we will never be able to calculate [tex]\pi(x)[/tex] to the nearest integer using the prime number theorem. However, asymptotics are good for many applications. For example, having a better control of the junk will let us calculate the distribution well enough to say certain primality testing algorithms work properly. I haven't really given you a sense of what I mean by calculate here, but hopefully I've given you more sense of what it's not. Sorry for the naive questions but I'm trying to understand what it is that Riemann did with the Zeta function, or what it is that the function does, but without much (any?) idea of the actual mathematics involved. That groovy expression for zeta I gave a couple posts back has a form in terms of an infinite sum [tex]\zeta(s)=\prod_{p\ \text{prime}}(1-p^{-s})^{-1}=\sum_{n=1}^{\infty}\frac{1}{n^{s}}[/tex] (by the way, this second equality can be thought of as an analytic representation of the fundamental theorem of arithmetic). You may recognize this better: if s=1 you get the harmonic series [tex]\sum_{n=1}^{\infty}\frac{1}{n}[/tex] which you may have seen diverges to infinity. Before Riemann, Euler had considered this infinite sum only for real values of s. Riemann allowed s to wander over the complex plane. A problem was the infinite sum (or the infinite product over the primes) was not well behaved if the real part of s was less than or equal to 1 (this is directly related to the divergence of the harmonic series above). Riemann used some magic (pun intended) to show there was a way to extend the definition of the zeta function to allow all complex values. Riemann then went on to do many great things. He showed that the zeta function had no zeros with real part greater than 1. He conjectured (possibly had a proof for) very accurate estimates on the number of zeros in the critical strip. He proved a formula that gives [tex]\pi(x)[/tex] explicitly in terms of the zeros of zeta.
The main term in this formula was also more accurate than Gauss's, though he was unable to prove that it was in fact the 'main term' (meaning the junk was small). And of course he conjectured his famous hypothesis. His formula for [tex]\pi(x)[/tex] was a grand thing. Up to this point, Gauss had conjectured [tex]\pi(x)\sim \int_{2}^{x}\frac{1}{\log(t)}dt[/tex], but no one could prove it. Riemann's formula reduced this problem to proving that the zeros were in 'the right locations'. In fact it turned out that if you could show there were no zeros on the line real part of s=1 then the junk term in Riemann's formula would be 'small enough' to conclude that Gauss's asymptotic estimate was correct. This was done, but not by Riemann. He laid out the tools needed to prove the prime number theorem. I hope that gives you at least a very coarse outline of what's what. With the recent publicity of the Clay prize ($1 million for solving the Riemann Hypothesis) there have been a few books aimed at a general audience on the subject; you might consider picking one up (if you're paying for it, look at it very closely to see if it has a level of math you're comfortable with). They'd probably do a better job of explaining things to you (though I'm happy to answer any questions you have!)
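To make the "ratio tends to 1, absolute error still grows" point concrete, here is a small Python check. It is my own illustration, not from the thread, and it assumes SymPy is available; li is the logarithmic integral and primepi the exact prime-counting function.

    from sympy import li, primepi

    for x in [10**3, 10**4, 10**5, 10**6]:
        approx = float(li(x))        # Gauss-style estimate (the small li(2) offset is ignored here)
        exact = int(primepi(x))      # actual number of primes up to x
        print(x, exact, round(approx),
              round(approx / exact, 4),      # ratio, creeping toward 1
              round(approx - exact, 1))      # absolute gap, which keeps growing

The printed ratios should drift toward 1 while the last column grows, which is exactly the distinction being drawn above between the asymptotic statement and an exact count.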
{"url":"http://www.physicsforums.com/showthread.php?p=365293","timestamp":"2014-04-17T18:24:59Z","content_type":null,"content_length":"56311","record_id":"<urn:uuid:5423ffe6-0793-4fd0-9cc5-a80187c16b27>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Another Cycloid problem, attempting to describe a graphic proof. April 18th 2012, 12:56 PM #1 Junior Member Mar 2012 Another Cycloid problem, attempting to describe a graphic proof. Okay, have a look at that picture I drew... What you are looking at is a reproduction of a diagram on a blueprint for a VERY old Jaguar XK over head cam design (I can't post). I provided what I thought were prudent dimensions... everything else... you should be able to assume. If not please ask. (okay the circle... to make drawing the plot lines easier, I used the right side for the 30° and the left side for the 15°+30° increments) The description on the blueprint is "cycloidal motion curve" Take a look at my older thread... Integration of a cycloid. and you will see what I have been working on. I have to ask... is this diagram I posted... a graphical proof of one of the integrals for a cycloid? It looks a lot like the 1st integral plot... but also looks a lot like the first part of the 2nd integral plot. I would love some insight to this! Re: Another Cycloid problem, attempting to describe a graphic proof. It's not one of the integrals, no. The left half is a graph of x = t - sin(t), half scale. I.e. x = t/2; y = (t - sin(t))/2. Not sure where that gets you. Re: Another Cycloid problem, attempting to describe a graphic proof. Don't ya love it when you can't sit back and see something simple for what it is. Thanks Re: Another Cycloid problem, attempting to describe a graphic proof. Ya know... On one hand Im not going to feel too bad about this, because it came right off a Jaguar engine blueprint calling it a cycloid. Maybe that's why british cars never seem to run well. Re: Another Cycloid problem, attempting to describe a graphic proof. So why do cam design people call functions that are a simple sin wave or at least the simple harmonic motions a Cycloid? I am looking at a figure in "Cam Design and Manufacturing Handbook, second edition, Robert L. Norton" Page 134 Figure 6-8 Half-Cycloidal functions for use on a rise segment. The "a" plot does not look like a cycloid... or is it? Cam Design and Manufacturing Handbook - Robert L. Norton - Google Books I'm trying to figure out a "Cycloidal Acceleration" Curve... Just need to make sure that I'm not being completely stupid. Last edited by Heloman; September 4th 2012 at 12:27 PM. Re: Another Cycloid problem, attempting to describe a graphic proof. Ah, right, so they do use a simple sin wave as the acceleration curve (*), and they call the resulting displacement curve (but not the acceleration) cycloidal because it projects from a circle as per your diagram up top, or from the vertical component of the corresponding vertically-traced cycloid, said component isolated by the groove machine in that video. Which, I take it, would all be gratuitous if it didn't provide a neat graphical method of fitting one complete cycle into the required rectangle. Re: Another Cycloid problem, attempting to describe a graphic proof. Well... the only thing I have to go off of is a technical report from the late 60's with 1 sentence in it saying "The cam is characterized by its dual-frequency cycloidal acceleration corners." I took that sentence to mean that the acceleration plot is cycloidal. It could just as well mean that the acceleration corners are of a dual frequency cycloidal form.... thus the simple See if a picture really is worth 1000 words... P1 to P2 is what I have been trying to define... either its based on a parametric cycloid acceleration... or... 
it's based on a sin wave acceleration. The next part of the game has been trying to figure out how to get acceleration curve 1 and deceleration curve 2 to have equal slopes at P2 and P3 so that they flow into each other. Where we left off in the other thread I had figured out how to make acceleration curves 1 and 2 identical, with P1(x1,y1) and P4(x4,y4) being defined inputs. But what I am dealing with right now is trying to figure out the "dual-frequency" part of "...cycloidal acceleration corners". If in fact it turns out that these acceleration curves are just sin waves... that cam book should have what I need to figure it out. Either way... I feel a lot smarter than when I started this project. I guess what it comes down to now is to run an analysis to try and figure out which of the two is the more efficient. Do you think the parametric based curve is even viable?
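For what it's worth, the "cycloidal" rise law that cam texts such as Norton describe has a pure sine-wave acceleration, and integrating it twice gives the displacement below. The following Python fragment is only a sketch of that textbook law, not a reconstruction of the Jaguar profile being discussed; the lift h and rise time T are made-up inputs and the function name is mine.

    import math

    def cycloidal_rise(t, h=1.0, T=1.0):
        """Displacement, velocity and acceleration at time t for a cycloidal rise of lift h over time T."""
        u = t / T
        s = h * (u - math.sin(2 * math.pi * u) / (2 * math.pi))
        v = (h / T) * (1 - math.cos(2 * math.pi * u))
        a = (2 * math.pi * h / T**2) * math.sin(2 * math.pi * u)
        return s, v, a

    for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
        print(t, [round(x, 4) for x in cycloidal_rise(t)])

The acceleration is one full sine cycle over the rise, so it starts and ends at zero — which is the property that makes people attach the word "cycloidal" to what is, underneath, a simple sine-wave acceleration curve.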
{"url":"http://mathhelpforum.com/calculus/197515-another-cycloid-problem-attempting-describe-graphic-proof.html","timestamp":"2014-04-16T13:37:54Z","content_type":null,"content_length":"52547","record_id":"<urn:uuid:ee10b4e9-887e-4f9f-a906-55b13142f79c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Why do we want to have orthogonal bases in decompositions?
In the decompositions I encountered so far, we always had an orthogonal set of bases. For example, in Singular Value Decomposition we had orthogonal right and left singular vectors; in the [discrete] cosine transform (or [discrete] Fourier transform) we again had orthogonal bases. To describe any vector $x \in \mathbb{C}^N$, we need to have $N$ independent basis vectors, but independent doesn't necessarily mean orthogonal. My intentions behind selecting orthogonal vectors are as follows:
• The solution is not unique for $x$ if the bases are not orthogonal.
• It is easy to find the solution numerically by projecting $x$ onto each vector, and this solution doesn't depend on the order of the bases. Otherwise, it depends on the order.
• If we are talking about some set of vectors, they might be correlated in the original space, but uncorrelated in the transformed space, which might be important when analyzing the data, in dimensionality reduction or compression.
I'm trying to understand the big picture. Do you think that I am right with these? Do you have any suggestions? What is the main reason for selecting orthogonal bases?
na.numerical-analysis big-picture fa.functional-analysis linear-algebra
Frequently you don't (or can't) use an orthonormal basis in decompositions. It begs the question of what a decomposition is for, I suppose. If you're interested in the dynamical properties of the matrices -- like understanding $A^k$ for all $k$ -- and if say $A$ is an integer matrix, you'll probably want your conjugating matrices to be integer matrices as well, so it's very unlikely they'll be orthogonal matrices. – Ryan Budney Oct 13 '10 at 18:03
One answer to this is that in many situations, orthogonal elements represent elements of a system that are in some sense non-interacting, so that in solving a problem they can be handled independently. This is the case in particular when the elements are the eigendirections of a self-adjoint operator. – Dick Palais Oct 13 '10 at 18:08
4 Answers
Your first point, non-uniqueness, is definitely false. One of the basic facts in linear algebra is precisely that for any fixed set of basis vectors (we don't even have to work on a vector space endowed with an inner product, so orthogonality doesn't come in at all), a given vector has a unique decomposition. For if that were not true, let the basis vectors be $e_1, \ldots, e_n$; then you have two sets of numbers $a_1, \ldots a_n$ and $b_1, \ldots, b_n$ such that $$ a_1 e_1 + \cdots + a_n e_n = x = b_1 e_1 + \cdots + b_n e_n \implies (a_1 - b_1) e_1 + \cdots + (a_n - b_n) e_n = 0$$ If the sets $a_*$ and $b_*$ are not identical, this implies that $e_1\ldots e_n$ are linearly dependent, contradicting the assumption that they form a basis. The second point, however, is a biggie. Without an inner product you cannot define an orthogonal projection. Now, generally, this is not too much of a problem. Given the basis vectors $e_1,\ldots, e_n$, finding the coordinates $a_1,\ldots, a_n$ of a given vector $x$ in this basis is just solving a linear system of equations, which actually is not too hard numerically, in finite dimensional systems. In infinite dimensional systems this inversion of the transformation matrix business gets slightly tricky. The key is to note that without using the orthogonal projection, you cannot answer the question "what is the length of $x$ in the direction of $e_1$?"
without knowing the entire set of basis vectors. (Remember that without using the orthogonal projection, you need to solve a linear system of equations to extract the coordinates; if you only specify one of the basis vectors, you do not have a complete system of equations, and the solution is underdetermined. I suspect this is what you had in the back of your mind for the first point.) This is actually a very fundamental fact in geometry, regarding coordinate systems. (I've once heard this described as the "second fundamental mistake in multivariable calculus" made by many students.) Using the orthogonal projection/the inner product, you can answer the question, as long as you allow only orthogonal completions of the basis starting from the known vector. This fact is immensely useful when dealing with infinite dimensional systems.
I also don't quite like your formulation of the third point. A vector is a vector is a vector. It is independent of the coordinate representation. So I'd expect that if two vectors are correlated (assuming correlation is an intrinsic property), they better stay correlated without regard to choice of bases. What is more reasonable to say is that two vectors may be uncorrelated in reality, but not obviously so when presented in one particular coordinate system, whereas the lack of correlation is immediately obvious when the vectors are written in another basis. But this observation has rather little to do with orthogonality. It only has some relation to orthogonality if you define "correlation" by some inner product (say, in some presentations of Quantum Mechanics). But then you are just saying that orthogonality of two vectors is not necessarily obvious, except when it is.
My personal philosophy is more one of practicality: the various properties of orthogonal bases won't make solving the problem harder. So unless you are in a situation where those properties don't make solving the problem easier, and some other basis does (like what Ryan Budney described), there's no harm in preferring an orthogonal basis. Furthermore, as Dick Palais observed above, one case where an orthogonal basis really falls out naturally at you is the case of the spectral theorem. The spectral theorem is, in some sense, the correct version of your point 3: in certain situations, there is a set of basis vectors that are mathematically special. And this set happens to always be orthogonal.
Edit: A little more about correlation. This is what I like to tell students when studying linear algebra. A vector is a vector. It is an object, not a bunch of numbers. When I hand you a rectangular box and ask you how tall the box is, the answer depends on which side is "up". This is similar to how you should think of the coordinate values of a vector inside a basis: it is obtained by a bunch of measurements. (Picking which side is "up" and measuring the height in that direction, however, in a non-orthogonal system, will require knowing all the basis vectors. See my earlier point.) The point is that to quantitatively study science, and to perform numerical analysis, you can only work with numbers, not physical objects. So you have to work with measurements. And in your case, the correlation you are speaking of is correlation between the measurements of (I suppose) different "properties" of some object. And since what and how you measure depends on which basis you choose, the correlation between the data will also depend on which basis you choose.
If you pick properties of an object that are correlated, then your data obtained from the measurements will also be correlated. The PCA you speak of is a way to disentangle that. It may be difficult to determine whether two properties of an object are correlated. Maybe the presence of a correlation is what you want to detect. The PCA tells you that, if there were in fact two independent properties of an object, but you just chose a bad set of measurements so that the properties you measure do not "line up" with the independent properties (that is, the properties you measure have a little bit of each), you can figure it out with a suitable transformation of the data at the end of the day. So you don't need to worry about choosing the best set of properties to measure.
Thank you for the simple proof for my first point. You are definitely right. I didn't think on it clearly and accepted it as correct when a friend of mine told me that it should be that way. For the third point, I will check what the spectral theorem gives us. Actually, the correlation/dependence was in my mind because of principal component analysis, which is close to singular value decomposition. For example, in this image (bit.ly/cyt22i), when we change the bases, the data becomes independent (but for these gaussian distributed samples, in fact). Is correlation independent from the bases? – İsmail Arı Oct 14 '10 at 9:41
I see, I didn't quite understand what you mean by correlation. Let me add one more paragraph at the bottom. – Willie Wong Oct 14 '10 at 13:46
I think I understood the mathematical way of seeing vectors. I always thought of them as being a bunch of numbers actually because of my engineering-oriented education. Thank you for the additional explanation. – İsmail Arı Oct 14 '10 at 19:45
I assume you are seeing things from a numerical point of view, since you only mentioned computation-oriented decompositions. So here's a computational motivation: orthogonal matrices have condition number 1, thus multiplying and dividing by them is a numerically tame operation that does not increase the (norm-wise) errors. For instance, even when you work with symplectic matrices, you usually look for symplectic and orthogonal matrices.
Maybe some examples of non-orthogonal decompositions are helpful: In the context of wavelets one frequently uses bi-orthonormal systems, i.e. two sets of vectors $(u_i)$ and $(v_i)$ such that $\langle u_i,v_j\rangle = \delta_{i,j}$. In this setting the calculations of coefficients can be done stably and efficiently. In real life, such bi-orthogonal wavelet bases are at the heart of the JPEG 2000 compression standard. Another important notion in this context is the notion of frames.
Well, when multiplying a vector ${\bf x}$ (say of norm 1) with a matrix ${\bf A}$ you are expecting to get a vector that lies on the surface of the ellipse $\left\|{\bf A}{\bf x}\right\|^2_2$. The directions of the axes of this ellipse are the eigenvectors and the eigenvalues dictate the equatorial radii. So every time you multiply a vector with that matrix, this decomposition into axes and radii tells you how your matrix distorts an "input" vector. That might be one pictorial reason :)
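A small numerical companion to the points made in these answers (an illustration only; the array sizes and names are arbitrary): with an orthonormal basis each coordinate of $x$ is a single dot product, while with a merely independent basis you must solve a linear system that involves every basis vector at once.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4)

    Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # columns of Q: an orthonormal basis
    coords_orth = Q.T @ x                               # coordinate i = <q_i, x>, computed independently

    B = rng.standard_normal((4, 4))                     # columns: independent but not orthogonal
    coords_gen = np.linalg.solve(B, x)                  # needs every basis vector to get any single coordinate

    print(np.allclose(Q @ coords_orth, x), np.allclose(B @ coords_gen, x))  # True True

Both reconstructions recover $x$, but only the orthonormal case lets you ask "how much of $x$ lies along $e_1$?" without knowing the rest of the basis — which is the fact the accepted answer keeps returning to.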
{"url":"http://mathoverflow.net/questions/42040/why-do-we-want-to-have-orthogonal-bases-in-decompositions?sort=oldest","timestamp":"2014-04-16T14:08:34Z","content_type":null,"content_length":"75424","record_id":"<urn:uuid:c7e39bca-95b7-4fe3-b8ba-11e87b75bcbb>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
An inequality involving operator and trace norms up vote 4 down vote favorite Consider two square matrices $A, B \in \mathbb{R}^{n \times n}$ and let $\| \cdot\|_1$ and $\|\cdot\|$ be, respectively, the trace norm (the sum of singular values) and the usual operator norm (the maximum of singular values). Is there a known bound for the following quantity? $$ \sup\{\alpha > 0: \; \alpha \, \text{tr}(A^TB) \le \| A+B\|_1, \, \forall B \; \text{s.t.} \;\|B\| \le 1\} $$ inequalities matrix-analysis operator-theory Cross-posted on MSE. – 1015 Jul 7 '13 at 20:29 1 Maybe I'm missing something. If $A = 0$, then $\alpha$ becomes unbounded....? – Suvrit Jul 9 '13 at 17:42 1 @suvrit: the MSE version is already edited. It is annoying with all these cross-posts... – András Bátkai Jul 9 '13 at 18:57 @András: ah, ok. You are right, too much cross-posting! – Suvrit Jul 9 '13 at 22:08 add comment 1 Answer active oldest votes Since I cant comment, I will leave this thought here. Since $||\cdot||_1$ and $||\cdot||$ (as you defined them) are dual norms, it must be that tr$((A+B)^TX)\leq||A+B||_1$ for any $X$ such that $||X||\leq 1$. Therefore, tr$(A^TB)\leq ||A+B||_1 - ||B||^2<||A+B||_1$ (since tr$(B^TB)\geq||B||^2$). up vote 4 down vote accepted (edit: fixed typo and replaced $||B||$ with $||B||^2$ in the final step) 1 It is rather $\|B\|^2=\|B^TB\|=\rho(B^TB)\leq \mbox{tr}(B^TB)$. And since $\|B\|\leq 1$... But you still have $\|A+B\|_1-\mbox{tr}(B^TB)\leq \|A+B\|_1$, though. – 1015 Jul 7 '13 at 20:46 @julien. You're right. I fixed the typo. Thanks! – Skoro Jul 7 '13 at 21:03 @Skoro, thanks. Your argument shows that the supremum is at least one. But can it be bigger? For example, are there matrices $A\neq 0$ for which that supremum is at least, say 3? I am interested in bounds relating that quantity to the spectrum of $A$. – passerby51 Jul 8 '13 at 18:53 3 Interesting. It looks like one can get an upper bound on $\alpha$ by substituting $B = A/||A||$. This results in the following bound: $\alpha \leq \frac{||A||_1(||A||+1)}{tr(A^ TA)} = \frac{(\sum \sigma_i)(\max \sigma_i+1)}{\sum \sigma_i^2}$. maybe this could be controlled using the condition number of $A$. – Skoro Jul 8 '13 at 21:07 add comment Not the answer you're looking for? Browse other questions tagged inequalities matrix-analysis operator-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/136005/an-inequality-involving-operator-and-trace-norms","timestamp":"2014-04-16T11:25:40Z","content_type":null,"content_length":"60431","record_id":"<urn:uuid:b9d61ade-4340-4525-9def-2ac107dba95b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
The Equation Of A Transverse Wave Traveling Along ... | Chegg.com The equation of a transverse wave traveling along a very longstring is given by = 6.6sin(0.013 ), where are expressed in centimeters and is in seconds. Determine the following values. (a) the amplitude (b) the wavelength (c) the frequency (d) the speed (e) the direction of propagation of the wave (f) the maximum transverse speed of a particle in the string (g) the transverse displacement at = 3.5 cm when = 0.26 s
{"url":"http://www.chegg.com/homework-help/questions-and-answers/equation-transverse-wave-traveling-along-longstring-given-y-66sin-0013-x-42-t-wherex-y-exp-q541837","timestamp":"2014-04-20T22:24:13Z","content_type":null,"content_length":"26372","record_id":"<urn:uuid:0551c36b-1e74-42b3-a25e-0ab64af603fe>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Demonstration of {estout} I wrote a short talk demonstrating the use the R package {estout} for tonight’s New England R Users Group meeting. NB this is not a discussion of the econometric model, but rather a demonstration of how to get publication-quality results out of R efficiently. The basic functions of {estout} are modeled on the Stata package estout. Once the R user has a dataset and a regression format in memory, {estout} will 1. Print tables of summary statistics in CSV or LaTeX format. 2. Print regression results in CSV or LaTeX format. All the normal bells and whistles for econometrics are in there: reporting both coefficient estimates and their standard errors, asterisks for alpha=0.10, 0.05, and 0.01 significance levels, R-squared and number of observations. Options to customize are clearly marked in the documentation. 2 thoughts on “Demonstration of {estout}” 1. Hi Ben, I just discovered this package today and it’s fantastic! One question: when I loop over models, I store them as elements in a list (called “res”). My guess is that I can do lapply(res, eststo), right? Also, does estout work fine with glm? I asked this because the table doesn’t come out exactly right (R^2 missing). Posted by | September 21, 2013, 9:00 pm □ I haven’t tried using the lapply method. One possible pitfall is that you would need to watch the behavior of the stored models. If you wanted individual tables printed, rather than several columns in a table, you might need to store the model, print it, and clear it in several steps. Two developments you should try for yourself. First is the stargazer package that will summarize data frames and models elegantly for LaTeX. Second is the knitr package that produces either LaTeX or HTML output. Recommend you work with sweave files or r markdown files in RStudio, that facilitates drafting and compiling them. Thank you for reading. Let me know how your apply code works with estout. Posted by | September 27, 2013, 10:47 pm
{"url":"http://benmazzotta.wordpress.com/2010/08/24/demonstration-of-estout/","timestamp":"2014-04-20T14:12:58Z","content_type":null,"content_length":"74167","record_id":"<urn:uuid:0fd05f68-6506-4f47-b80c-525ad921d3fa>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Assessment of Strength-Probability-Time Relationships in Ceramics STP601: Assessment of Strength-Probability-Time Relationships in Ceramics Lenoe, EM Chief mechanic of materials and research mathematician, Army Materials and Mechanics Research Center, Watertown, Mass. Neal, DM Chief mechanic of materials and research mathematician, Army Materials and Mechanics Research Center, Watertown, Mass. Pages: 23 Published: Jan 1976 In the past few years a number of test procedures have evolved as a result of attempts to observe stable crack growth in ceramics under constant stress conditions. The experimental procedures have included double cantilever, double torsion, in-plane moment, and controlled flaws in beam bending tests. These procedures are briefly reviewed. Available slow crack growth data for hot pressed silicon nitride, which has been presented in the literature, is utilized to estimate survival times for various stress levels. The computations are completed in several ways: (a) use of strength data obtained at various loading rates; (b) deterministic integration of the equations; and (c) in a Monte Carlo sense, wherein the controlling parameters are assumed to possess realistic variability. The end product of each set of computations is a design stress-survival time relationship, and the purpose of this paper is to compare these life estimates and comment on the adequacy of each method. Additional motivations were to assess the status of properties information, and to establish, if possible, reasonable bounds on the accuracy of these probability-based life estimates. Examination of the data and application of the two probability techniques led to the conclusion that the procedures are appropriate for order of magnitude estimates, which are generally but not necessarily conservative. It was evident that additional data, improved experimental procedures, and further analysis of specimens were required. Since the exponents of the crack velocity-stress intensity functions in a power law form are large and typically cover a wide range for ceramics of interest, that is, 4 < m < 50, difficulties were encountered in use of numerical simulation procedures. With m ⩽ 10, for example, the resulting life functions were widely distributed. Furthermore, the mean value estimates were highly unstable and for large m did not appear amenable to economical digital simulation. Accordingly, a number of different trial functions, including logarithmic, exponential, and polynomial approximations were employed to represent subcritical crack behavior. The final function selected was of the form F = A^0exp(mK^1), where F = K^I/V. This form seemed appropriate since log F versus K^I resulted in reasonable requirements on number of simulations. The m values were appreciably lower than the power law representation. Application of the Monte Carlo method to lifetime estimates of ceramics provided an error tolerance for K^I and allowed calculation of the probability density function. fractures (materials), ceramics, silicon nitrides, fracture properties life (durability) probability theory, Monte Carlo method, crack propagation Paper ID: STP28638S Committee/Subcommittee: E08.06 DOI: 10.1520/STP28638S
{"url":"http://www.astm.org/DIGITAL_LIBRARY/STP/PAGES/STP28638S.htm","timestamp":"2014-04-19T02:22:40Z","content_type":null,"content_length":"16009","record_id":"<urn:uuid:76ad42e0-88fa-4c03-bba1-ce9d7d6c90f7>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
how the assignment happened Author how the assignment happened Ranch Hand public class Arg{ Joined: Dec 04, 2000 public static void main(String argv[]){ Posts: 31 Arg inc = new Arg(); int i =0; i = i++; System.out.println(i); //line 1, 0??? int k =0; System.out.println(k++); //0 System.out.println(k); //line3, 1 //hi, my confusion is why the line 1 still give 0? //after i= i++, the i should be 1 now even if it is post increment. //it should like the same in the line 3. Someone please explain to me, thanks! Ranch Hand Michael Lin, Joined: Nov 27, 2000 int i =0; // line 1 Posts: 73 i = i++; // line 2 There are 3 steps involved in this : 1. First in line2 i is assigned 0 (cos of line 1, and i++ is a postincrement operator) 2. Then i++ is evaluated ( so i = 1) 3. Now i = 0 is evaluated ( from step 1) Hope this clears Ranch Hand Hi micheal, Joined: Dec 15, 2000 I have he same doubt that how the assignment is happening. Posts: 68 I tried this which is the equivalent of your code. public class StrangeAssignment public static void main(String a[]) int i = 0; i = i = i+1; // i++ is equivalent to i=i+1. But the answer is 1 !!. Really it is a strange assignment. I tried the same in C/C++ it's giving 1. Regards<BR>---------<BR>vadiraj<P><BR>*****************<BR>There's a lot of I in J.<BR>***************** Ranch Hand ok lets take this expression Joined: Nov 04, 2000 i = i++ + i Posts: 81 what is the answer .. its one.. what it did was it writes the value of i before the + sign ( i=0) which is zero till now and then increments the value of i by 1 our expresion becoms i = 0 +... and i becomes 1 so now the value of i after + sign is assigned 1( as now i = 1) so the final expression becomes ( till now i is one) i = 0 + 1 now calculation the expression we get i = 1 if u got this now we will go back to ur question now our expression is i = i++ first i=0 so after the = sign i is assigned 0 and the value of i is incremented by 1 and i becomes 1 and our expression becomes i = 0 ( till now the value of i is one) but after evaluating this expression i becomes 0 Does this mean that the following statements Joined: Dec 27, 2000 public class Postincrement{ Posts: 5 public static void main(String args[]){ i=i++ + i++; System.out.println(i) ; would print 0 but the value of the variable i is 2 ? Ranch Hand I think confusion is due to behaviour of operator in C. Joined: Dec 12, 2000 In C it will do Posts: 71 i= 0; i=i++;/* 2 */ ans i =1 C do it as assign i=0 on line two then increment it by 1. Java seems workingin different way I am the most eligible bachelor in whole world, but only known in limited territory!!! Digital Intoxication Blog subject: how the assignment happened
{"url":"http://www.coderanch.com/t/196186/java-programmer-SCJP/certification/assignment-happened","timestamp":"2014-04-20T01:30:57Z","content_type":null,"content_length":"28290","record_id":"<urn:uuid:5e98ae27-29b0-46e8-9858-8d16b86a561b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Equation of a Circle January 29th 2008, 03:55 PM Equation of a Circle Well the knowledge of it I just simply forgot about and don't remember a single thing about how to do it. Write the equation of a circle with center (4,0) and passing through the point (12,6). I don't want just an answer, I need the method of how to do it. Also I might also need help understanding this question as well Write the equation of a circle with center (6,6) and tangent in both axes. January 29th 2008, 03:59 PM Well the knowledge of it I just simply forgot about and don't remember a single thing about how to do it. Write the equation of a circle with center (4,0) and passing through the point (12,6). I don't want just an answer, I need the method of how to do it. Also I might also need help understanding this question as well Write the equation of a circle with center (6,6) and tangent in both axes. the equation of a circle is given by: $(x - h)^2 + (y - k)^2 = r^2$ where $(h,k)$ is the center and $r$ is the radius so fill in what you know for your circle. you are also given the additional information that when $x = 12$, $y = 6$, you can use this to solve for $r$ for the second question, you know that the circle passes through (0,6) and (6,0) ........which tells you the radius is 6
{"url":"http://mathhelpforum.com/geometry/27073-equation-circle-print.html","timestamp":"2014-04-18T15:50:47Z","content_type":null,"content_length":"5996","record_id":"<urn:uuid:5baeeeb3-96ee-4d8b-a278-3474e95c8519>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
CFP CADE-17 Workshop "Type-theoretic Languages: Proof search and Semanti CFP CADE-17 Workshop "Type-theoretic Languages: Proof search and Semantics" CALL FOR CONTRIBUTIONS Workshop on TYPE-THEORETIC LANGUAGES: PROOF-SEARCH AND SEMANTICS (in conjunction with CADE-17) CMU Pittsburgh, Pennsylvania 16 or 21 June, 2000 DEADLINE FOR SUBMISSION: 1 April, 2000. A one day workshop on "Type-Theoretic Languages: Proof Search and Semantics" will be held in June 2000 in conjunction with the 17th Conference in Automated DEduction (http://www.research.att.com/conf/cade/). Hardcopies of the preliminary proceedings will be distributed at the workshop. Final proceedings will be published as a volume in Electronic Notes in Theoretical Computer Science, Elsevier Science Much recent work has been devoted to type theory and its applications to proof- and program-development in various logical settings. The focus of this workshop is on proof-search, with a specific interest on semantic aspects of, and semantics approaches to, type-theoretic languages and their underlying logics (e.g., classical, intuitionistic, linear, substructural). Such languages can be seen as logical frameworks for representing proofs and in some cases formalize connections between proofs and programs that support program-synthesis. The theory of proof-search has developed mostly along proof-theoretic lines but using many type-theoretic techniques. The utility of type-theoretic methods suggests that semantic methods of the kind found to be valuable in the semantics of programming languages should be useful in tackling the main outstanding difficulty in the theory of proof-search, i.e., the representation of intermediate stages in the search for a proof. An adequate semantics would represent both the space of searches and the space of proofs and give an account of the recovery of proofs (which are extensional objects) from searches (which are more intensional objects). It would distinguish between different proof-search strategies and permit analyses of their relative merits. The objective of the workshop is to provide a forum for discussion between, on the one hand, researchers interested in all aspects of proof-search in type theory, logical frameworks and their underlying (e.g., classical, intuitionistic, substructural) logics and, on the other, researchers interested in the semantics of computation. Topics of interest, in this context, include but are not restricted to the following: - Foundations of proof-search in type-theoretic languages (sequent calculi, natural deduction, logical frameworks, etc.); - Systems, methods and techniques related to proof construction or to counter-models generation (tableaux, matrix, resolution, semantic techniques, proof plans, etc.); - Decision procedures, strategies, complexity results; - Logic programming as search-based computation, integration of model-theoretic semantics, semantic foundations for search spaces; - Computational models based on structures as games and - Proof synthesis vs. program-synthesis and applications, equational theories and rewriting; - Applications of proof-theoretic and semantics techniques to the design and implementation of theorem provers. Researchers interested in presenting their works are invited to send an extended abstract (up to 10 pages) by e-mail submissions of Postscript files to the Programme Chair (Didier.Galmiche@loria.fr) before April 1, 2000. Papers will be reviewed by peers, typically members of the Programme Committee. 
Additional information will be available through WWW address: D. Galmiche (LORIA & UHP, Nancy) - Programme Chair P. Lincoln (SRI, Stanford) F. Pfenning (CMU, Pittsburgh) D. Pym (Queen Mary & Westfield College, Univ. of London) J. Smith (Chalmers Univ., Goeteborg) Deadline for submissions: 1 April, 2000. Notification of acceptance: 1 May, 2000. Workshop date: June 16 or 21, 2000. Programme Chair Didier Galmiche LORIA - CNRS & UHP Nancy I B\^atiment LORIA 54506 Vandoeuvre-les-Nancy Phone: +33 3 83 59 20 15 Fax: +33 3 83 41 30 79 email: Didier.Galmiche@loria.fr URL: http://www.loria.fr/~galmiche/TTL-PSS00.html
{"url":"http://www.seas.upenn.edu/~sweirich/types/archive/1999-2003/msg00338.html","timestamp":"2014-04-20T14:07:16Z","content_type":null,"content_length":"6835","record_id":"<urn:uuid:0b0f2dd7-d4c8-4ff9-9caa-96dace79f9a7>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Another harmonic mean approximation June 26, 2010 By xi'an Martin Weinberg posted on arXiv a revision of his paper, Computing the Bayesian Factor from a Markov chain Monte Carlo Simulation of the Posterior Distribution, that is submitted to Bayesian Analysis . I have already mentioned this paper in a previous post, but I remain unconvinced of the appeal of the paper method, given that it recovers the harmonic mean approximation to the marginal likelihood … The method is very close to John Skilling’s nested sampling, except that the simulation is run from the posterior rather than from the prior, hence the averaging on the inverse likelihoods and hence the harmonic mean connection. The difficulty with the original (Michael Newton and Adrian Raftery’s) harmonic mean estimator is attributed to “a few outlying terms with abnormally small values of” the likelihood, while, as clearly spelled out by Radford Neal, the poor behaviour of the harmonic mean estimator has nothing abnormal and is on the opposite easily explainable. I must admit I found the paper difficult to read, partly because of the use of poor and ever-changing notations and partly because of the lack of mathematical rigour (see, e.g., eqn (11)). (And maybe also because of the current heat wave.) In addition to the switch from prior to posterior in the representation of the evidence, a novel perspective set in the paper seems to be an extension of the standard harmonic mean identity that relates to the general expression of Gelfand and Dey (1994, Journal of the Royal Statistical Society B) when using an indicator function as an instrumental function. There is therefore a connection with our proposal (made with Jean-Michel Marin) of considering an HPD region for excluding the tails of the likelihood, even though the set of integration is defined as “eliminating the divergent samples with $L_i ll 1$“. This is essentially the numerical Lebesque algorithm advanced as one of two innovative algorithms by Martin Weinberg. I wonder how closely related the second (volume tesselation) algorithm is to Huber and Schott’s TPA algorithm, in the sense that TPA also requires a “smaller” integral…. Filed under: Bayes factor harmonic mean estimator HPD region nested sampling for the author, please follow the link and comment on his blog: Xi'an's Og » R daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or One Response to Another harmonic mean approximation 1. Rbloggers (R bloggers website) on June 26, 2010 at 4:26 pm Another harmonic mean approximation: Martin Weinberg posted on arXiv a revision of… [link to post] #rstats
{"url":"http://www.r-bloggers.com/another-harmonic-mean-approximation/","timestamp":"2014-04-21T12:19:14Z","content_type":null,"content_length":"43119","record_id":"<urn:uuid:2ca560f0-feb9-4f39-90a6-44ae39b23db1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
How The Go Stone Moves: Intro To Rigid Body Dynamics - gafferongames.com How The Go Stone Moves: Intro To Rigid Body Dynamics Hello, I’m Glenn Fiedler and welcome to Virtual Go, my project to simulate a Go board and stones. In previous articles we have mathematically defined the shape of a go stone and tessellated this shape so it can be drawn with 3D graphics hardware. Now we want to make the go stone move. I want the stone to have mass and obey Newton’s laws of motion so that the simulation is physically accurate. The stone should be accelerated by gravity and fall downwards. I also want the stone to rotate so it tumbles realistically as it falls through the air. Now lets see how we can simulate this motion with a virtual go stone. Lets go! The Rigid Body Assumption Try biting down on a go stone some time and you’ll agree: go stones are very, very hard. Golf balls are pretty hard too, but if you look at a golf ball being hit by a club in super-slow motion, you’ll see that it deforms considerably during impact. The same thing happens to all objects in the real world to some degree. Nothing is truly rigid. No real material is so hard that it never deforms. But this is not the real world. It’s a simulation and here we are free to make whatever assumptions we want. And the smartest simplification we can make at this point is to assume that the go stone is perfectly rigid and does not deform under any circumstance. This is known as the rigid body assumption. Working in Three Dimensions Because the go stones are rigid all we need to represent the current position of the go stone is the position of its center point P. As the center point moves so does the rest of the stone. We’ll represent this position using a three dimensional vector P. Lets define the axes so we know what the x,y,z components of P mean: • Positive x is to the right • Positive y is up • Positive z is into the screen This is what is known as a left-handed coordinate system. So called because I can use the fingers on my left hand to point out each positive axis direction without breaking my fingers. I’ve chosen this based purely on personal preference. Also, I’m left-handed and I like my fingers. Linear Motion Now we want to make the stone move. To do this we need the concept of velocity. Velocity is also a vector but it’s not a point like P. In this case you can think of it more like a direction and a length. The direction of the velocity vector is the direction the stone is moving and the length is the speed that it is moving in some unit per-second. Here I’ll use centimeters per-second because go stones are small. For example, if we the stone to move to the right at a rate of 5 centimeters per-second then the velocity vector is (5,0,0). To make the stone move all we have to do is add this velocity to the stone position once per-second. Now this isn’t very exciting obviously. We’d like to move the stone more smoothly. So instead of updating once per-second, lets update 60 frames per-second (fps). How much should the stone move each frame? Well, we want to move the same amount of distance in 60 small steps instead of one big one. So if we take 60 steps, but each step is 1/60th the amount of movement, the stone moves at the same overall rate. You can generalize this to any framerate with the concept of delta time or “dt”. To calculate delta time simply invert the frames per second: dt = 1/fps and you have the amount of time per-frame in Then, simply multiply velocity by delta time and you have the change in position per-frame. 
Add this to position each frame and the stone moves according to the current velocity, independent of const float fps = 60.0f; const float dt = 1 / fps; while ( !quit ) stone.rigidBody.position += stone.rigidBody.velocity * dt; RenderStone( stone ); This is actually a type of numerical integration. Specifically, we are integrating velocity to find the change in position over time. Gravitational Acceleration Next we want to add gravity. To do this we need to change velocity each frame by some amount downwards due to gravity. Change in velocity is known as acceleration. Gravity provides a constant acceleration of 9.8 meters per-second, per-second, or 98 centimeters per-second, per-second. Acceleration due to gravity is also a vector, and since gravity pulls objects down the acceleration vector is (0,-98,0). So how much does gravity accelerate the go stone in 1/60th of a second? Well, 98 * 1/60 = 1.633… Hey wait. This is exactly what we did with velocity to get position! Yes it is. It’s exactly the same. Acceleration integrates to velocity just like velocity integrates to position. And both are multiplied by dt to find the amount to add per-frame, where dt = 1/fps. const float gravity = 9.8f * 10; const float fps = 60.0f; const float dt = 1 / fps; while ( !quit ) stone.rigidBody.velocity += vec3f( 0, -gravity, 0 ) * dt; stone.rigidBody.position += stone.rigidBody.velocity * dt; RenderStone( stone ); Now that we’ve added acceleration due to gravity the go stone moves in a parabola just like it does in the real world. Angular Motion Now lets make the stone rotate! First we have to define how we represent the orientation of the stone. For this we’ll use a quaternion. Next we need the angular equivalent of velocity known as… wait for it… angular velocity. It too is a vector aka a direction and a length. The direction is the axis of rotation and the length is the rate of rotation in radians per-second. One full rotation is 2*pi radians or 360 degrees so if the length of the angular velocity vector is 2*pi the object rotates around the axis once per-second. Because we are using a left handed coordinate system the direction of rotation is clockwise about the positive axis. You can remember this by sticking your thumb of your left hand in the direction of the axis of rotation and curling your fingers. The direction your fingers curl is the direction of rotation. Notice that if you do the same thing with your right hand the rotation is the other way. How do we integrate orientation from angular velocity? Orientation is a quaternion and angular velocity is a vector. We can’t just add them together. The solution requires a reasonably solid understanding of quaternion math and how it relates to complex numbers. Long story short we need to convert our angular velocity into a quaternion form and then we can integrate that just like we integrate any other vector. For a full derivation of this result please refer to this excellent article. Here is the code that I use to convert angular velocity into quaternion form: inline quat4f AngularVelocityToSpin( quat4f orientation, vec3f angularVelocity ) const float x = angularVelocity.x(); const float y = angularVelocity.y(); const float z = angularVelocity.z(); return 0.5f * quat4f( 0, x, y, z ) * orientation; Once I have this spin quaternion, I can integrate it to find the change in the orientation quaternion just like any other vector. 
const float fps = 60.0f;
const float dt = 1 / fps;

while ( !quit )
{
    stone.rigidBody.position += stone.rigidBody.velocity * dt;
    RenderStone( stone );
}

This is actually a type of numerical integration. Specifically, we are integrating velocity to find the change in position over time.

Gravitational Acceleration

Next we want to add gravity. To do this we need to change velocity each frame by some amount downwards due to gravity. Change in velocity is known as acceleration. Gravity provides a constant acceleration of 9.8 meters per-second, per-second, or 98 centimeters per-second, per-second. Acceleration due to gravity is also a vector, and since gravity pulls objects down the acceleration vector is (0,-98,0). So how much does gravity accelerate the go stone in 1/60th of a second? Well, 98 * 1/60 = 1.633… Hey wait. This is exactly what we did with velocity to get position! Yes it is. It's exactly the same. Acceleration integrates to velocity just like velocity integrates to position. And both are multiplied by dt to find the amount to add per-frame, where dt = 1/fps.

const float gravity = 9.8f * 10;
const float fps = 60.0f;
const float dt = 1 / fps;

while ( !quit )
{
    stone.rigidBody.velocity += vec3f( 0, -gravity, 0 ) * dt;
    stone.rigidBody.position += stone.rigidBody.velocity * dt;
    RenderStone( stone );
}

Now that we've added acceleration due to gravity the go stone moves in a parabola just like it does in the real world.

Angular Motion

Now lets make the stone rotate! First we have to define how we represent the orientation of the stone. For this we'll use a quaternion. Next we need the angular equivalent of velocity known as… wait for it… angular velocity. It too is a vector aka a direction and a length. The direction is the axis of rotation and the length is the rate of rotation in radians per-second. One full rotation is 2*pi radians or 360 degrees so if the length of the angular velocity vector is 2*pi the object rotates around the axis once per-second.

Because we are using a left handed coordinate system the direction of rotation is clockwise about the positive axis. You can remember this by sticking your thumb of your left hand in the direction of the axis of rotation and curling your fingers. The direction your fingers curl is the direction of rotation. Notice that if you do the same thing with your right hand the rotation is the other way.

How do we integrate orientation from angular velocity? Orientation is a quaternion and angular velocity is a vector. We can't just add them together. The solution requires a reasonably solid understanding of quaternion math and how it relates to complex numbers. Long story short we need to convert our angular velocity into a quaternion form and then we can integrate that just like we integrate any other vector. For a full derivation of this result please refer to this excellent article. Here is the code that I use to convert angular velocity into quaternion form:

inline quat4f AngularVelocityToSpin( quat4f orientation, vec3f angularVelocity )
{
    const float x = angularVelocity.x();
    const float y = angularVelocity.y();
    const float z = angularVelocity.z();
    return 0.5f * quat4f( 0, x, y, z ) * orientation;
}

Once I have this spin quaternion, I can integrate it to find the change in the orientation quaternion just like any other vector.
{"url":"http://gafferongames.com/virtualgo/how-the-go-stone-moves/","timestamp":"2014-04-19T17:37:34Z","content_type":null,"content_length":"32241","record_id":"<urn:uuid:2a487598-c4df-4246-b1f8-df7f9e65d45f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: On Minimum­Area Hulls \Lambda Esther M. Arkin y Yi­Jen Chiang z Martin Held x Joseph S. B. Mitchell -- Vera Sacristan k Steven S. Skiena \Lambda\Lambda Tae­Cheon Yang yy We study some minimum­area hull problems that generalize the notion of convex hull to star­shaped and monotone hulls. Specifically, we consider the minimum­area star­shaped hull problem: Given an n­vertex simple polygon P , find a minimum­area, star­shaped polygon P \Lambda containing P . This problem arises in lattice packings of translates of multiple, non­identical shapes in material layout problems (e.g., in clothing manufacture), and has been recently posed by Daniels and Milenkovic. We consider two versions of the problem: the restricted version, in which the vertices of P \Lambda are constrained to be vertices of P , and the unrestricted version, in which the vertices of P \Lambda can be anywhere in the plane. We prove that the restricted problem falls in the class of ``3sum­hard'' (sometimes called ``n 2 ­hard'') problems, which are suspected to admit no solutions in o(n 2 ) time. Further, we give an O(n 2 ) time algorithm, improving the previous bound of O(n 5 ). We also show that the unrestricted problem can be solved in O(n 2 p(n)) time, where p(n) is the time needed to find the roots of two equations in two unknowns, each a polynomial of degree O(n). We also consider the case in which P \Lambda is required to be monotone, with respect to an unspecified direction; we refer to this as the minimum­area monotone hull problem. We give a matching lower and upper bound of \Theta(n log n) time for computing P \Lambda in the restricted version,
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/207/3741926.html","timestamp":"2014-04-19T08:06:04Z","content_type":null,"content_length":"9086","record_id":"<urn:uuid:7cbd734f-9a1f-4be3-9e85-8662a56c201b>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
[R-sig-phylo] How to detect phylogenetic signal (lambda) in one unscaled trait? tgarland at ucr.edu tgarland at ucr.edu Tue Mar 22 20:36:15 CET 2011 Hi Alberto, I'll jump in here. Aside from anything you would do with Pagel's lambda, Grafen's rho, or an OU or ACDC transform, it is useful to have a value for the K statistic, as presented here: Blomberg, S. P., T. Garland, Jr., and A. R. Ives. 2003. Testing for phylogenetic signal in comparative data: behavioral traits are more labile. Evolution 57:717-745. In that paper (see pages 720-721), we surveyed a lot of traits on a lot of trees, and so you can compare your K values with what we show. For the traits that were obviously correlated with body mass (e.g., leg length, brain mass, metabolic rate), we first computed size-corrected values in the following way. 1. log-transform the trait and body mass. 2. Use a phylogenetic regression method (e.g. independent contrasts, PGLS, maybe a regression with a transform) to obtain the allometric equation. 3. Divide the trait by body mass raised to the allometric scaling exponent (i.e., the slope from #2), then take the log of that quantity. 4. Compute the K statistic. ---- Original message ---- Date: Tue, 22 Mar 2011 20:37:58 +0200 From: Alberto Gallano <alberto.gc8 at gmail.com> Subject: [R-sig-phylo] How to detect phylogenetic signal (lambda) in one unscaled trait? To: r-sig-phylo at r-project.org >This is a repost of an earlier question, after my colleague helped me with >my English: >To calculate signal in PGLS multiple regression (with say two >variables) I can use the following model: >lambdaModel <- gls(Y ~ X + bodymass, correlation=corPagel(1, tree), >This will take account of body mass when assessing the strength of >relationship between Y and X. This calculates lambda for the residuals and >is better than calculating lambda for each trait (according to >2010). My question is, If I only want to find phylogenetic signal in one >(unscaled) variable, should I use the model: >lambdaModel <- gls(Y ~ bodymass, correlation=corPagel(1, tree), >Will this give the lambda value for Y after controlling for body mass? Or, >would it be better to 'correct' for body mass first, using a ratio (Y / >body mass), and then calculate lambda for this scaled trait, using >lambdaModel <- fitContinuous(tree, scaled_Y, model="lambda") >kind regards, > [[alternative HTML version deleted]] >R-sig-phylo mailing list >R-sig-phylo at r-project.org More information about the R-sig-phylo mailing list
{"url":"https://stat.ethz.ch/pipermail/r-sig-phylo/2011-March/001164.html","timestamp":"2014-04-20T20:58:32Z","content_type":null,"content_length":"6238","record_id":"<urn:uuid:910a59c6-49d3-4789-a8ee-f51964808818>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Missing Multipliers Copyright © University of Cambridge. All rights reserved. 'Missing Multipliers' printed from http://nrich.maths.org/ Ewan from Wilson's School sent us this solution explaining how he worked out the headings after revealing eight of the cells. Bradley from Bream Bay College shared his strategy for working out the headings after revealing seven of the cells: I revealed the first 4 horizontal answers and the first 4 vertical answers. That enabled me to work out the headings of the first column and row. Once I had worked out those headings I could work out the rest of the grid. Here is an example: │X│2 │3 │8 │10│ │4│8 │12│32│40│ │6│12│ │ │ │ │9│18│ │ │ │ │8│16│ │ │ │ Mollie, Jasmine, Zander, Thomas, Nicholas and Geor, from St Peters CEVC Primary School in Easton sent us this solution: We found out that if you start with 4 diagonal numbers from top left to bottom right and two other numbers, you can always solve the problem, unless one or more of the numbers is a square number- (except 16 & 36) - in which case you could do it in less. Square numbers are very useful because both the multiplying numbers are the same. We have always solved the problem in 6 or less. Michael from Wilson's School also had a strategy for working out the headings after revealing just 6 cells: The way I do it is to reveal the diagonal from the top-left to the bottom-right, then the second up on the left and the top-right. Here is an example of this: │X│? │? │? │? │ │?│24│ │ │22 │ │?│ │14│ │ │ │?│36│ │18│ │ │?│ │ │ │132 │ This method provides 2 columns and 2 rows with 2 numbers. You then look for a common factor in one of the rows or columns with two numbers: e.g. 22 and 132 = 11. From this, you could work out that the heading for the bottom row is 132/11 = 12, and that the heading for the top row is 2. This then makes it very easy to find the other '?'s e.g. 24/2 = 12, therefore the heading of the first column is 12. When all of this example has been worked out, it looks like this: │X │12│2 │6 │11 │ │2 │24│ │ │22 │ │7 │ │14│ │ │ │3 │36│ │18│ │ │12│ │ │ │132 │ Alexander, also from Wilson's School, sent us this solution that showed that a similar strategy could be used when the six exposed cells are in a different position. We also received good solutions from Annie, Sean, Jake G. and Julie T., all from East Vincent Elementary School in the United States, and Shaun and Jack from Wilson's School. Editor's comment: it is possible to reveal six cells which do not include cells on the leading diagonal (from top left to bottom right) and still work out the headings. Some of you may want to think about which combinations of six cells you would want to reveal.
{"url":"http://nrich.maths.org/7382/solution?nomenu=1","timestamp":"2014-04-18T03:00:02Z","content_type":null,"content_length":"9525","record_id":"<urn:uuid:c0aa10ea-4b7a-41a6-893b-03a6ef2a1c25>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Comp 280: History and Philosophy of Computing, Winter 2009
Links

Below are some links to webpages that might be of interest to you. (If you know other websites that you think should be linked here, please let me know!)

(c) Dirk Schlimm 1/03/09
{"url":"http://www.cs.mcgill.ca/~cs280/links.html","timestamp":"2014-04-17T21:25:45Z","content_type":null,"content_length":"8676","record_id":"<urn:uuid:0b4b49bf-e3fd-4893-a846-a1176186c488>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
PICList Thread '[PIC]: working out percentages'

2002\02\21@041638 by MATTHEWS, DEAN (D.)

I am using a PIC16F877 and the CCS compiler. Is there a maths function that will enable me to calculate 50% of an integer variable in a register and store the answer in another variable, or would I calculate it myself? Also, is an integer 8 bits or 16 bits long? Some books say it is 8 bits and others 16 bits.

2002\02\21@043410 by Ashley Roll

Hi Dean,

If it's always 50% then just divide by 2, which is efficiently implemented by a right shift by one bit. CCS-C should optimise it to a rotate right:

    int Val, Result;
    Result = Val / 2;

If the percentages change then you will need to change that to multiply by 100 then divide by the percentage. Note - be careful you don't overrun the integer. You need to do the multiply first.

In CCS-C the "int" type by default is 8 bits. This is always compiler dependent; if you want to make sure of the size you're using, try using int8, int16 and int32 as the types. So if you had an 8 bit number and you wanted to get an arbitrary percentage, you would need to promote it to a 16 bit number so you don't overflow it:

    int8 Val, Result;
    int8 Percent;
    int16 temp;
    temp = (int16) Val * 100;
    temp = temp / Percent;
    Result = (int8) temp;

The type in brackets is a cast to that type of the following identifier. Hope that helps,
Ashley Roll, Digital Nemesis Pty Ltd

2002\02\21@054007 by Richards, Justin P

50%, if that is the same as a half, then right shifting the register will give you 50% of what you had before. Hope that's what you were after.

2002\02\21@055114 by MATTHEWS, DEAN (D.)

Oh yes, thanks.

2002\02\21@090807 by Olin Lathrop

> Is there a maths function that will enable me to calculate 50% of an
> integer variable

Shift right one bit.

2002\02\21@102355 by Bond, Peter

> I am using a PIC16f877 and CCS compiler. Is there a maths
> function that will enable me to calculate 50% of an integer
> variable in a register and store the answer in another
> variable, or would I calculate it myself.

I may be missing something here, but is there a problem with doing a right shift? Or are you looking for something more generalised than /2?

> Also is an integer 8 bit or 16 bit long. some books say it
> is 8bit and others 16 bit.

"It depends on the compiler" is the short answer. I'm assuming here that if you're referring to integer types, you are using a compiler...

With the stuff I more normally use (gcc on 860s), an int is 32 bit, a short int 16 and a byte (usually defined as an unsigned char) is 8 - but, for portability, you should never depend upon those being consistent. This has been discussed before IIRC - perhaps a search of the archives would help.

2002\02\21@103144 by MATTHEWS, DEAN (D.)

What is IIRC?

2002\02\21@104023 by Alan B. Pearce

> What is IIRC?

If I Remember Correctly

2002\02\21@104212 by Amaury Jacquot

On Thu, Feb 21, 2002 at 10:28:26AM -0500, MATTHEWS, DEAN (D.) wrote:
> What is IIRC?

If I Remember Correctly

2002\02\21@114936 by Kübek Tony

Richards, Justin P (and others providing the same answer) wrote:
> 50% if that is the same as a half then right shifting the register will give
> you 50% of what you had before.
> Hope that's what you were after

Well, this is of course correct; however, there is an issue with rounding, as it would truncate the result, i.e. 2->1, 3->1, 4->2 etc. If 'normal' rounding is to be taken into account, i.e. 0.5 is rounded to 1 and 0.4 is rounded to 0, you also need to: check the lowest bit of the number; if '1' then add one to the number, *then* divide by two (right shift by one). Nit picking, yes, but it could be an issue.

2002\02\21@120308 by Spehro Pefhany

At 05:32 PM 2/21/02 +0100, you wrote:
> Well this is of course correct, however there is an issue with rounding, as it
> would truncate the result, i.e. 2->1, 3->1, 4->2 etc.

There's also an issue if the number is negative. In the case of a 2's complement number, you have to duplicate the sign bit if you are doing an arithmetic right shift. The PIC doesn't have a specific instruction for this, but you can left shift with W as the destination to shove the sign bit into the carry, then right-shift the carry in. For unsigned numbers, of course, you'd just clear the carry first.

2002\02\21@131025 by Olin Lathrop

> check lowest bit of number, if '1' then add one to number *then* divide by
> two

You don't need to check if the low bit is 1. Simply add 1. This will generate a carry into the rest of the number if the low bit was one.

Of course you now have to worry about the original number being at maximum. You either have to check this, or add one to the *shifted* number if the original low bit was 1. However, in most cases truncation is either fine or is actually what you want.

2002\02\21@153854 by Spehro Pefhany

At 12:16 PM 2/21/02 -0500, I wrote:
> There's also an issue if the number is negative.

OK, I see that you're using C. In that case you should use /2 if the number is signed. Shifting (>>1) should be avoided because the results are officially "implementation defined", thus it is non-portable code. See: ISO/IEC 9899:1999 (E) 6.5.4 (the current C standard).

2002\02\21@162202 by Bond, Peter

> What is IIRC?

If I Recall Correctly. A few more abbreviations:
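To make the two ideas in this thread concrete — rounded halving without overflow, and taking an arbitrary percentage of an 8-bit value — here is a small C-style sketch. It uses standard <stdint.h> types rather than the CCS-specific int8/int16 names, the function names are made up for illustration, and the percentage version computes p% of v (v*p/100), assuming p <= 100.

    #include <stdint.h>

    /* Halve an unsigned 8-bit value with round-to-nearest instead of truncation.
       Widening to 16 bits first avoids the overflow Olin mentions when the
       input is already at its maximum (255 + 1 would wrap in 8 bits). */
    static uint8_t halve_rounded(uint8_t v)
    {
        uint16_t t = (uint16_t)v + 1u;   /* add the rounding bit */
        return (uint8_t)(t >> 1);        /* then shift right by one */
    }

    /* Take p percent of an 8-bit value, widening to 16 bits so the
       intermediate multiply cannot overflow. Assumes percent <= 100. */
    static uint8_t percent_of(uint8_t v, uint8_t percent)
    {
        uint16_t t = (uint16_t)v * percent;  /* widen before multiplying */
        return (uint8_t)(t / 100u);          /* scale back down */
    }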
{"url":"http://www.piclist.com/techref/postbot.asp?by=thread&id=%5BPIC%5D%3Aworking+out+percentages&w=body&tgt=post&at=20020221054007a","timestamp":"2014-04-16T13:34:37Z","content_type":null,"content_length":"31063","record_id":"<urn:uuid:6ff17200-3ebb-4d8a-95ac-dff323f6e83b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometry word problem

56. The planet Mercury completes one rotation on its axis every 59 days. Through what angle (measured in degrees) does it rotate in
a) one day
b) one hour
c) one minute

To solve it, would you take 360 degrees divided by 59, which would give us 6.1 degrees, and then divide further by 24 and then by 60 for the remaining parts? I'm confused because I would think that the measurements would have to be the same to divide them... or am I on the right track? Thanks in advance!

Yes, your approach is correct, provided the concepts of day, hour and minute are those of Earth. Technically, for any planet a day means the time it takes to complete one rotation on its axis; Earth takes 24 hrs to complete one rotation on its axis, so the day on Earth means 24 hrs.

Ohh, I am talking crap. Just leave it.. your approach is right..

(a) 1/59 x 360 = answer for a
(b) 1/24 x answer for a = answer for b
(c) 1/60 x answer for b = answer for c
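For reference, carrying the arithmetic through (values rounded to three significant figures):

$$\text{(a) } \frac{360^\circ}{59} \approx 6.10^\circ \text{ per day}, \qquad \text{(b) } \frac{6.10^\circ}{24} \approx 0.254^\circ \text{ per hour}, \qquad \text{(c) } \frac{0.254^\circ}{60} \approx 0.00424^\circ \text{ per minute}.$$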
{"url":"http://mathhelpforum.com/trigonometry/75971-trigonometry-word-problem.html","timestamp":"2014-04-20T11:42:04Z","content_type":null,"content_length":"38905","record_id":"<urn:uuid:2825ebb8-0aec-4fda-965f-9bf060703901>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with simplifying a 2nd order PDE

I have been given the equation:

dp/dt = 4 + (1/e)*(d/de(e*dp/de))

(the d's are partial derivatives). I am trying to solve for p. I am told to make the assumption that the equations are separable and then convert the equation into an ordinary differential equation.

What I have done so far: set the RHS = 0,

4 + (1/e)*(d/de(e*dp/de)) = 0

and expand using the product rule:

4 + (1/e)*dp/de + d/de(dp/de)

Is what I have done correct?

Re: Help with simplifying a 2nd order PDE

There's a couple of things you can do here! First, assume solutions in the form $p = T(t) + E(e)$. This gives $T' = 4 + E'' + \frac{E'}{e}.$ This then implies that $T = (a+4)t + b$, leaving the ODE $E'' + \frac{E'}{e} = a$. Second, let $p = 4t + P$. This reduces your PDE to $P_t = P_{ee} + \frac{P_e}{e}$. Then assume solutions of the form $P = T(t) E(e)$ and you will get two ODEs for $T$ and $E$.
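Spelling out the last step of the second approach (the separation constant $-\lambda$ is my notation, not from the thread): substituting $P = T(t)E(e)$ into $P_t = P_{ee} + P_e/e$ and dividing through by $TE$ gives

$$\frac{T'}{T} \;=\; \frac{E'' + E'/e}{E} \;=\; -\lambda,$$

since a function of $t$ alone can equal a function of $e$ alone only if both are constant. This yields the two ordinary differential equations

$$T' + \lambda T = 0, \qquad E'' + \frac{1}{e}E' + \lambda E = 0.$$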
{"url":"http://mathhelpforum.com/differential-equations/187497-help-simplifying-2nd-order-pde.html","timestamp":"2014-04-25T07:40:40Z","content_type":null,"content_length":"35797","record_id":"<urn:uuid:8cddd04b-7b5b-4a0d-ba71-c431f10a43d3>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
Merge Columns

The Excel merge columns process is as easy as merging cells. Follow these quick and easy steps to learn how to merge columns in Excel.

Why Would You Want To Merge Columns In Excel?

All you have to do is have a quick look in the Microsoft Office related forums on the internet to know that many people don't know how to merge columns in Excel. Often people receive advice that just doesn't work. Worry no longer; here is the definitive guide to the Excel merge columns mystery. But first, why would you want to merge columns?

Imagine that you had a column of first names and then another column containing second names of people who had taken a test. You are now required to produce a column that contains the full name of each candidate. You know how to merge two cells so that the result contains the full name for just one person, but you need to apply that formula to the names in all rows. This is where merging columns in Excel comes in.

How Do You Want To Merge Columns?

The first question you have to ask yourself is what kind of column merge you want to perform? The example given above is called concatenation: if cell A1 contains "Joe" and cell B1 contains "Bloggs", then the concatenation of the two is "JoeBloggs". Note that you would have to take care to insert a space between the two names yourself, but this is easy to do. However, when some people say "merge", they mean "add". It may be that you are simply required to add the contents of two columns together. Confirm the requirements first!

Merging Columns In Excel

Now that we've clarified what merging columns actually means, we can explore how to do it. The first step is to perform the merge for the first cells. Let's go back to our first example and suppose that we are merging column A, which contains first names, with column B, which contains second names. We'll put the merged columns into column C. To merge cell A1 with cell B1 we would type the following into cell C1:

=A1&" "&B1

Remember to insert the space as shown. When you press enter, cell C1 will contain the full name for that first row. Believe it or not, that was the hardest bit. We now need to apply that formula to the remaining cells in those columns. To do this, select the first merged cell and then hover the mouse over the bottom right corner of the cell until you see the plus sign. Drag the cursor downwards until the selection includes all the remaining cells in the merged column. When you release the mouse, the merged column should contain the data from the first and second columns merged.

Using The Concatenate Function

You can also use the CONCATENATE function in Excel to merge two pieces of data together. The syntax is as follows:

=CONCATENATE(A1," ",B1)

This formula would be useful for merging first and last names as it also inserts a space between the two. With these two methods of concatenating covered, let's move on to merging multiple columns.

Merging Multiple Columns In Excel

Merging multiple columns in Excel is a simple extension of merging two columns. Suppose you need to merge columns A, B, C, D and E and put the result in column F. First of all you would need to type one of the following in cell F1:

• =A1&B1&C1&D1&E1
• =CONCATENATE(A1,B1,C1,D1,E1)

Pressing enter merges the data into cell F1. As we did before, make cell F1 active, hover over the bottom right corner and then drag the cursor downwards.
{"url":"http://www.msexcel07.com/excel-merge-columns.htm","timestamp":"2014-04-20T00:38:10Z","content_type":null,"content_length":"12065","record_id":"<urn:uuid:0d5c2eae-d759-462c-b41c-52aa9d184838>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
FIG. 1. (a) Phonon dispersion for monolayer graphene monoxide, (b) the Grüneissen parameter for acoustic phonon modes in GMO. The inset in (b) shows the unit cell of centered rectangular GMO in real space and its first Brillouin zone with the high symmetry points and lines labeled.

FIG. 2. Color maps of intrinsic thermal conductivities of GMO as a function of both temperature and lateral size along the XΓ direction (a) and ΓK direction (b). (c) and (d) show the thermal conductivity of GMO in (a) and (b) normalized with respect to the thermal conductivity of graphene along the MΓ direction, respectively. The lateral size starts from 2.5μm so that the calculated thermal conductivity is diffusive, since average LA and TA phonon mean free paths for GMO are calculated as 0.48μm and 0.13μm along the armchair and the zigzag directions, respectively.

FIG. 3. Color maps of the ratio of thermal conductivity of LA mode to TA mode as a function of temperature and lateral size for GMO along the XΓ direction (a), GMO along the ΓK direction (b), and graphene along the MΓ direction (c), respectively.

FIG. 4. Phonon dispersions of the ZA mode around the zone center under isotropic strains for (a) GMO and (b) graphene, respectively. (c) Grüneissen parameters of the LA and TA modes of GMO along the XΓ (upper panel) and ΓK (lower panel) directions with respect to lattice angle. (d) Thermal conductivity of GMO as a function of lattice angle at room temperature and lateral size of 5μm along the XΓ and ΓK directions. The unstrained lattice angle is 130°. Purple curves in (c) and (d) are fittings.
{"url":"http://scitation.aip.org/content/aip/journal/apl/102/22/10.1063/1.4808448","timestamp":"2014-04-16T23:30:49Z","content_type":null,"content_length":"72792","record_id":"<urn:uuid:8da644b5-2b53-402b-b1cf-df594aae4ce8>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
How many three-digit integers can be divided by 2 to produce a new integer with the same tens digit and units digit as the original integer?
A. None
B. One
C. Two
D. Three
E. Four

@shubhamsrg its 4, jst wanna see wat logic u apply

200 400 600 800 are only possibilities

First, just think of the ones digits. Any number that has a units digit of 2, for example, will produce a units digit of 1 or 6 when divided by 2. 4s will produce 2s or 7s, 6s will produce 3s or 8s, 8s will produce 4s or 9s. The only one that’s gonna work for you are 0s, which will produce 0s or 5s. And there’s your breakthrough. The only integers that are gonna work are gonna be the even hundreds. and hence is your ans..

gt it :)

glad you did! :)
{"url":"http://openstudy.com/updates/50cf1b28e4b0031882dcadec","timestamp":"2014-04-21T12:21:45Z","content_type":null,"content_length":"37484","record_id":"<urn:uuid:4737b783-9bb8-4049-b3ff-359ec28ba3f5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
A productive synergy of being a visiting fellow at the Center is that most of my social life consists of interacting with other fellows, and so philosophy gets done even in leisure time. Not all of it is philosophy, however, as evidenced by the following item that Bert Leuridan and I concocted over pizza and beer. The Visiting Fellows Paradox Step one: Note that there are things which are considered to be paradoxes, things which are commonly referred to as the such-and-so paradox , which nevertheless are not paradoxes. As one example, consider the Birthday Paradox: it only takes 23 people for the probability of at least two people having the same birthday to be greater than .5. As another, consider Simpson's Paradox: variables which are positively correlated in each subpopulation of a population can be negatively correlated in itself. A common thing to say about such things is that prima facie these are not really at all. Rather, they are just surprising facts. Step two: A paradox arises when a plausible line of reasoning leads to a contradictory conclusion. It is easy to show that the cases described in step one are secunda facie paradoxical. The proof goes like Take one of these so-called paradoxes. Either it is after all a genuine paradox or it is not. If the former disjunct obtains, then the matter is shown. If the latter, its being called `the such-and-so paradox' gives us reason to believe that it a paradox; yet it is not a paradox, by assumption. Letting P be `this is a paradox', we have a reason to believe P and also to believe not-P. So we have a paradox. The latter disjunct, in which the commonly-called paradox's not being a paradox produces a paradox, shifts from using to mentioning the original alleged paradox. Rather than showing that the original statement about birthdays was a paradox, for example, it shows that the Birthday Paradox figures in a distinct but derivative paradox. Call this a second-order paradox. Step three: In the proof above, we derived a paradox by cases. It was either an ordinary, first-order paradox or a meta-level, second-order paradox. In this step, we propose a paradox which does not require the disjunction, one which is exclusively a second-order paradox. We call this the Visiting Fellows Paradox We rule out the first disjunct - that it is paradoxical in the usual way - by refusing to explain to you the ordinary, first-order content of the Visiting Fellows Paradox. It is not, however, at all paradoxical on its own. We assure you of that. The second-order paradox arises from the juxtaposition of reasons for thinking it is a paradox (because it is called one) and reasons for thinking it is not (because of our assurances). We provided our assurances in the previous paragraph, so all that remains is for it to be commonly called a paradox. One natural way to accomplish this is to publish a paper describing the new paradox so-called in a respected, scholarly journal. The success of our construction, and so the Visiting Fellows Paradox actually being a paradox, relies on the paper's being accepted by qualified referees of that scholarly journal. Yet, quite naturally, qualified referees will only accept the paper if it describes an actual paradox. It follows from this that the authority of philosophers (referees, in this case) allows them to make something a paradox which would otherwise be merely a curiosity. Surely, this is a surprising fact. We decline to give a name to this fact, lest we unwittingly contribute to a further paradox. 
{"url":"http://laser.fontmonkey.com/foe/index.php?entry=A-paradox-arises-over-beer","timestamp":"2014-04-21T12:58:16Z","content_type":null,"content_length":"20588","record_id":"<urn:uuid:8c28bda6-3c61-43fd-b244-c8d587f3d774>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
Compositing a camera matrix... [Archive] - OpenGL Discussion and Help Forums

I'm trying to implement a camera model using a 3x3 matrix for camera rotation and a 3-vector for camera translation/position. I want to keep it simple for now, so I only want to implement camera translation and rotation around its own (the camera's coordinate system) axes. Supposing I have a camera position vector cam_pos[3] and a camera rotation matrix cam_rot[9], how can I compose a 4x4 homogeneous camera matrix (for use with glLoadMatrixf or glMultMatrixf) so that the camera is first translated to cam_pos and then rotated in place (around this new position/coordinate system) according to cam_rot? Thanx in advance for any help...
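One way this is commonly done, sketched below under explicit assumptions (the function name is made up, cam_rot is assumed to be stored row-major as cam_rot[3*row + col] and to hold the camera-to-world rotation, and the output is in OpenGL's column-major order): the matrix loaded onto the modelview stack has to be the inverse of the camera's own transform, i.e. the transposed rotation combined with the negated, rotated position.

    // Sketch only: builds a view matrix suitable for glLoadMatrixf, given a
    // camera-to-world rotation (row-major 3x3) and a world-space camera position.
    void BuildViewMatrix( const float cam_rot[9], const float cam_pos[3], float out[16] )
    {
        // Upper-left 3x3 = transpose of the camera rotation. Because the output
        // is column-major, copying cam_rot's rows into out's columns performs
        // the transpose implicitly.
        for ( int col = 0; col < 3; ++col )
            for ( int row = 0; row < 3; ++row )
                out[ 4*col + row ] = cam_rot[ 3*col + row ];

        // Bottom row (0, 0, 0, 1).
        out[3] = out[7] = out[11] = 0.0f;
        out[15] = 1.0f;

        // Last column = -(R^T * cam_pos): move the world by the negated,
        // rotated camera position.
        for ( int row = 0; row < 3; ++row )
            out[ 12 + row ] = -( cam_rot[ 3*0 + row ] * cam_pos[0] +
                                 cam_rot[ 3*1 + row ] * cam_pos[1] +
                                 cam_rot[ 3*2 + row ] * cam_pos[2] );
    }

If cam_rot uses the opposite convention (world-to-camera, or column-major storage), the transpose and the translation term have to be adjusted accordingly.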
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-156093.html","timestamp":"2014-04-17T07:32:24Z","content_type":null,"content_length":"7134","record_id":"<urn:uuid:270fd16b-6fd0-4db9-8198-e51dfa64f4aa>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Modular curves of genus zero and normal forms for elliptic curves

This is maybe the first question I actually need to know the answer to! Let $N$ be a positive integer such that $\mathbb{H}/\Gamma(N)$ has genus zero. Then the function field of $\mathbb{H}/\Gamma(N)$ is generated by a single function. When $N = 2$, the cross-ratio $\lambda$ is such a function. A point of $\mathbb{H}/\Gamma(2)$ at which $\lambda = \lambda_0$ is precisely an elliptic curve in Legendre normal form $$y^2 = x(x - 1)(x - \lambda_0)$$ where the points $(0, 0), (1, 0)$ constitute a choice of basis for the $2$-torsion. When $N = 3$, there is a modular function $\gamma$ such that a point of $\mathbb{H}/\Gamma(3)$ at which $\gamma = \gamma_0$ is precisely an elliptic curve in Hesse normal form $$x^3 + y^3 + 1 + \gamma_0 xy = 0$$ where (I think) the points $(\omega, 0), (\omega^3, 0), (\omega^5, 0)$ (where $\omega$ is a primitive sixth root of unity) constitute a choice of basis for the $3$-torsion. Question: Does this picture generalize? That is, for every $N$ above does there exist a normal form for elliptic curves which can be written in terms of a generator of the function field of $\mathbb{H}/\Gamma(N)$ and which "automatically" equips the $N$-torsion points with a basis? (I don't even know if this is possible when $N = 1$, where the Hauptmodul is the $j$-invariant.) If not, what's special about the cases where it is possible?

nt.number-theory modular-forms elliptic-curves ag.algebraic-geometry
Also, you can't get around the problem of not having a single universal family parameterized by j ; the curves with j=0 and 1728 have extra automorphisms, and there is no getting around it. – Emerton Mar 4 '10 at 3:03 1 Although I'm quite late, I want to remark that the N=4-case is treated in a paper of Shioda: projecteuclid.org/… – Lennart Meier Aug 22 '12 at 8:42 show 3 more comments 4 Answers active oldest votes I think the answer to your question is the content of Velu's thesis: Courbes elliptiques munies d'un sous-groupe $Z/NZ\times \mu_N$. In there, he explicitly writes down the up vote 6 down vote universal elliptic curve over $X(p)$ for $p>3$. add comment As I mentioned in connection with an answer to another question, it is not generally true for elliptic curves $f:E \rightarrow S$ over a base $S$ that there is a global embedding of $E$ into $\mathbf{P}^2_ S$. For example, if $S = {\rm{Spec}} ( A )$ for a Dedekind domain $A$ whose class group is nontrivial, it could fail (and sometimes does fail). The necessary and sufficient condition is that the line bundle $\omega_{E/S} = f_{\ast}(\Omega^1_{E/S})$ on $S$ is trivial. Example: If $S$ is the complement of a non-empty finite set of rational points in the projective line over a field $k$ then it is the spectrum of a localization of $k[x]$ and hence has trivial Picard group. Thus, the obstructions vanish and a global embedding exists. This applies to the modular curve $Y(N)$ over $\mathbf{Q}$ (geometrically connected over $\mathbf{Q}(\zeta_ N)$ via the Weil $N$-torsion pairing) when $N = 3, 4, 5$. Of course, to then really find the normal form in explicit terms requires real work and not just this kind of "brain work". up vote In case the elliptic curve is the universal one over some (fine) moduli scheme $S$ and this line bundle obstruction vanishes, such as if we know the stronger fact that ${\rm{Pic}}(S) = 1$, 8 down then such a global embedding must exist and its determination can then be regarded as a "normal form". On the other hand, consider universal elliptic curves $E \rightarrow Y$ over fine modular curves $Y$ whose "level structure" doesn't dominate one of the fine ones of genus 0, such as $Y(p)$ with a prime $p > 5$. To figure out if there is a Weierstrass form for $E$ over the entire affine base $Y$ (i.e., the projective plane doesn't need to be replaced with a projective space bundle, as is needed for general families of elliptic curves) one has to determine precisely if $\omega_{E/Y}$ is trivial. This amounts to the existence of modular functions which "transform" under the corresponding "congruence subgroup" (such as $\Gamma(p)$) like a weight-1 form and have no zeros or poles on the upper half-plane (if working over $\mathbf{C}$), and so can be analyzed concretely by thinking about Klein form in the case of full-level problems. Thanks for the response! I don't exactly have much background here, but from what I can tell you're confirming and elaborating on what Pete mentioned in the comments. I guess I should mention that I would be perfectly happy with an embedding into a higher-dimensional projective space as long as its defining equations could be written explicitly in terms of a Hauptmodul. Does one exist for N = 1, 2? – Qiaochu Yuan Mar 4 '10 at 5:18 I'm only saying there is a way to define "normal form", a conceptual reason for its existence in some cases, and obstruction in general. (There are ways to work with elliptic curves other than Weierstrass equations. Useful!) 
The "Legendre elliptic curve" over Q(lambda) has no Legendre form over Q(lambda) relative to other ordered 2-torsion bases. Why? If you understand that, you'll better understand why N = 2 is subtle (N=1 is more so). Hint: what extra structure on an elliptic curve with 2-torsion basis corresponds to a compatible "Lengendre structure"? x = a/ t^2 + ...., look at a. – BCnrd Mar 4 '10 at 6:17 "For example, if S=Spec(A) for a Dedekind domain A whose class group is nontrivial, it could fail (and sometimes does fail)." When A is the ring of integers in a number field K, a result of Silverman says that every ideal class c occurs as the obstruction for some quadratic twist of any fixed elliptic curve E|K. Cf. MR0804199 (86k:11030) Silverman, Joseph H., Weierstrass equations and the minimal discriminant of an elliptic curve. Mathematika 31 (1984), no. 2, 245--251 (1985). – Chandan Singh Dalawat Mar 4 '10 at 10:35 add comment I believe the following results that appear in papers of Rubin and Silverberg can be very useful here. Let $N=3,4,$ or $5$ and let $Y_N$ be the (non-compact) modular curve over $\mathbb{Q}$ which parametrizes $(E,P,C)$ where $E$ is an elliptic curve, $P$ is a point of order $N$ on $E$ and $C$ is a cyclic subgroup of order $N$ on $E$, and $C$ and $P$ generate $E[N]$. The curve $Y_N$ is isomorphic to one connected component of $Y(N)$, and $Y_N(\mathbb{C})$ is isomorphic to $\mathbb{H}/\Gamma(N)$. Let $X_N$ be the compactification of $Y_N$. Rubin and Silverberg describe explicit isomorphisms $f_N:X_N \cong \mathbb{P}^1$, with $f(u) = (A_u,P_u,C_u)$ and give equations for $A_u$, here: 1) [Rubin and Silverberg] for $N=3$ and $5$ in Families of elliptic curves with constant mod p representations up vote 4 down vote and 2) [Silverberg] for $N=4$ in ``Explicit families of elliptic curves with prescribed mod $N$ representations'', in Modular forms and Fermat's last theorem, Cornell, Silverman, Stevens (Editors), Springer, p. 447 - 461. I hope that helps, add comment The first thing you'd need in order to define a normal form is unirationality of the moduli space (otherwise you don't even have the correct number of parameters). In dimension 1, this means that you (at least) need the modular curve to be of genus 0, at which point we may look at the The On-Line Encyclopedia of Integer Sequences Here is how you can do n-torsion assuming you know the m-torsion solution and m divides n (and of course, the moduli space is genus 0): up vote 1 Let z be the moduli space parameter, and E(z) be the universal plane curve. Let $a_i(z),b_i(z)$ be n-torsion points on E(z) which span the set of n-torsion points; let $l_i(z)$ in the dual down vote projective plane be the line connecting $a_i(z),b_i(z)$. Then the locus of $l_i(z)$ is a plane curve, which is -- by our assumption -- rational. Now use your favorite "Italian" method to find an explicit rationalization of a rational plane curve. The coordinates of the universal projective plane are determined by the four points $0, a_i(z), b_i(z), a_i(z)+b_i(z)$. Note that from the original list of 2..10,12,13,16,18,25 you are now left with the task of finding solutions to 5,7,13. Qiaochu specified that he was looking at the genus zero cases. (Unless that was edited in after you gave this answer?) 
– David Speyer Mar 4 '10 at 16:40 @David: sure, and the first step in solving the genus 0 cases is knowing which ones they are – David Lehavi Mar 5 '10 at 6:02 I think the confusion is that Qiaochu asked about $X(N)$, and David Lehavi seems to be thinking about $X_0(N)$. – Jamie Weigandt Oct 8 '10 at 1:10 add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory modular-forms elliptic-curves ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/17031/modular-curves-of-genus-zero-and-normal-forms-for-elliptic-curves/57673","timestamp":"2014-04-19T14:54:19Z","content_type":null,"content_length":"82722","record_id":"<urn:uuid:d1dd8bc6-a8f8-48bf-8752-9459f4bf9b5b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
coordinates for eight points on the coordinate plane (q > p > 0): A (0, 0), B (p, 0), C (q, 0), D (p + q, 0), E (0, q), F (p, q), G (q, q), and H (p + q, q). Which four points, if any, are on the vertices of a rectangle?
a. A, B, F, and E
b. C, E, F, and G
c. D, C, E, and G
d. F, E, D, and C

can you tell me whats your attempt in this question?

how did you try this question?
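For reference (this check is not part of the original thread): A(0, 0), B(p, 0), F(p, q) and E(0, q) have two horizontal sides of length p and two vertical sides of length q, so ABFE is a rectangle. In option b the points E, F and G all lie on the line y = q, and in options c and d the sides joining the top and bottom pairs are not perpendicular to them for general q > p > 0, so choice (a) appears to be the intended answer.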
{"url":"http://openstudy.com/updates/50f6de8ae4b027eb5d996e43","timestamp":"2014-04-19T19:36:30Z","content_type":null,"content_length":"32586","record_id":"<urn:uuid:67dbe92a-0986-4dba-bfce-1feb357c6e3a>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Radio astronomy - integration time vs. sample rate

As you know, any radiometer has a low-pass filter (LPF) at its output. The term 'integration time' relates to the device named an 'ideal integrator' used as the LPF. This is not a good LPF, since its power response looks like (sin f)^2 / f^2. However, such a response corresponds to the 'moving average' filter, which can be performed programmatically (after sampling) but not in continuous time as a physical device. Instead of this kind of LPF you can use any filter you prefer, but take into account the 'effective integration time', which is defined in Kraus's book, Radio Astronomy, Chapter 7. This effective integration time is defined via the Effective Noise Bandwidth (ENB) of your real LPF. At this point it does not matter whether you are using an ADC or not. If you use an ADC, the sampling frequency should be equal to or greater than Fcutoff*2; otherwise you get the 'aliasing' phenomenon ('sampling theorem': Shannon, Nyquist, Kotelnikov). Fcutoff is the cutoff frequency of your LPF (before the ADC).
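For reference, the standard textbook relations behind this post (these are the usual radiometer definitions, not something quoted from the post itself):

$$\tau_{\mathrm{eff}} \approx \frac{1}{2\,\Delta f_{\mathrm{LF}}}, \qquad \Delta T_{\mathrm{rms}} = \frac{T_{\mathrm{sys}}}{\sqrt{\Delta f_{\mathrm{HF}}\,\tau_{\mathrm{eff}}}},$$

where $\Delta f_{\mathrm{LF}}$ is the effective noise bandwidth of the post-detection low-pass filter and $\Delta f_{\mathrm{HF}}$ is the pre-detection (RF/IF) bandwidth.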
{"url":"http://www.physicsforums.com/showthread.php?t=549088","timestamp":"2014-04-17T12:48:16Z","content_type":null,"content_length":"35652","record_id":"<urn:uuid:ff9de585-8488-45a9-8209-2a85f81a4007>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
Projective and Affine varieties I'm having a little trouble seeing something Harris says in his intro book on alg. geom. Say X is contained in P^n. Harris says that X is a projective variety iff X intersect U_i is an affine variety for each i=0,...,n, where U_i are the points [Z_0 , Z_1 , ... , Z_n] in P^n with Z_i =/ 0. I'm a little confused about how he claims this: If X is a projective variety, say its the locus of the homogeneous polynomials F_α(Z_0, ..., Z_n). Say we define on A^n the polynomials f_α(z_1, ..., z_n) = F_α(1,z_1, ...,z_n) = F_α(Z_0, ..., Z_n) / Z^d_0 where d is the degree of F_α and z_i are the local coords (z_i = Z_i/Z_0). Then he claims the zero locus of the f_α is X intersect U_0. Now since there's a bijection between U_0 and A^n, are we just identifying X intersect U_0 with its image via the local coordinates, meaning its an affine variety too? For the other direction, I don't really see this. If for example X intersect U_0 is an affine variety, say its the locus of f_α(z_1, ..., z_n),then we can define homogeneous polynomials F_α(Z_0, ... Z_n) = Z^d_0 f_α(Z_1/Z_0,..., Z_n/Z_0) where d = deg(f_α). But then is the zero locus of the F_α just X? Any help would be appreciated!
{"url":"http://www.physicsforums.com/showthread.php?p=4201242","timestamp":"2014-04-17T12:45:03Z","content_type":null,"content_length":"27549","record_id":"<urn:uuid:7edb0606-2432-40f1-abb1-9b9f608544f6>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum mechanics describes strange phenomena far beyond the realm of everyday experience. Computer visualization is sometimes the only method to observe what cannot be seen by any other means. "Visual Quantum Mechanics" is a systematic exploration of quantum mechanics using computer-generated animations. These are the main results: "Visual Quantum Mechanics" and "Advanced Visual Quantum Mechanics" - Two textbooks about quantum mechanics, each with a cover CD containing hundreds of short movie clips illustrating quantum phenomena. The software for both books won the European Academic Software Award EASA (2000 and 2004). A few examples (from about 700 movies coming with the two books). You need QuickTime to view these films. The explanations require some knowledge of quantum mechanics. Here are some additional movies about topics not covered in the books. A collection of Mathematica packages for visualizing complex-valued functions, for solving the Schrödinger and Dirac equations, and related topics. Outside of quantum mechanics, these packages are specially useful for visualizing complex analytic functions. (See the image gallery). A new version for Mathematica 6 is available. An openGL-based visualization tool for visualizing complex-valued functions and spinor-valued wave functions in three dimensions. Colored isosurfaces, slice planes, flux lines (see the sample images Work in progress - this is the starting point of a collection of "reusable learning objects" for elementary quantum mechanics (for teaching in high schools) that builds on the achievements of the "Visual Quantum Mechanics" project. Presently, this is mainly in German, but there are a few units available in English and Japanese.
{"url":"http://www.uni-graz.at/imawww/vqm/pages/index_start.html","timestamp":"2014-04-16T11:24:16Z","content_type":null,"content_length":"9068","record_id":"<urn:uuid:0dc50d22-21ee-4b83-8a67-0540c847a49c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Doing a research degree in IT? 04-22-2005 #1 Doing a research degree in IT? Hello everyone, I was wondering whether it is possible to do a research degree (e.g. M.Phil, Ph.D) in IT without involving with complex mathematics. I've been googling around for some time but so far what I've found that all existing/prospective research projects (in universities) involve serious mathematics. I would like to have a research degree but unfortunately maths are certainly not my strongest Does it mean that I have no hope of having a research degree in IT ? Maybe you could swing that in Information Systems, but I'm skeptical that you could do it in CS. Have you considered Business with a concentration in MIS? That might be possible without much math. Really, though, I can't imagine someone working in any of the sciences without a strong basis in mathematics. Even Psych majors need plenty of Statistics. What is it you want to research in CS that you think won't require mathematics? Well...as per my understanding, doing a research means specializing in any one area. So, what I assume that it could be anything I am interested in. For example, it could be specializing in database design, internet security, software testing procedures...things like that. I am quite comfortable with programming (C/Java/VB etc.). As long it is related with computers, I am happy with it. But, as I said before, I haven't find any such things offered by universities I've looked upon so far. Pretty much any graduate level science degree is going to require a bit of math. Its just the nature of the beast. Lets take the three things you mentioned (database design, internet security, software testing). All three require a good deal of math: Database design: How would you know which method has the best case, worst case, and mean case and when to switch between them to optimize without math? Internet Security: Internet Security == encryption most of the time. Encryption == math, lots of it at a high level Software Testing: Aka verification. One Word: proofs You man not need the higher level maths to do them but the courses teaches you how to approach the problems and adds more tools to your bag. You may never due an intergration but you'll probably use some of the skills you learn in the classes. Thanks Thantos. I don't mind if some maths are involved in the research, I believe that I can handle that Although research degrees require a higher level of specialization, your department is still going to want you to have some familiarity with most of the field. For PhDs, this means taking a large, comprehensive test on your field, the name of which I can't for the life of me remember currently. "The computer programmer is a creator of universes for which he alone is responsible. Universes of virtually unlimited complexity can be created in the form of computer programs." -- Joseph "If you cannot grok the overall structure of a program while taking a shower, you are not ready to code it." -- Richard Pattis. 04-22-2005 #2 04-22-2005 #3 04-22-2005 #4 04-22-2005 #5 04-22-2005 #6
{"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/64657-doing-research-degree.html","timestamp":"2014-04-23T10:58:32Z","content_type":null,"content_length":"61788","record_id":"<urn:uuid:b840a98c-0c38-4d7e-bd98-1ef157064636>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
divergent series

Hello, I'm working on this problem:

Prove that for any real x, the series SUM n=2 to infinity of 1/(log n)^x diverges.

So far, I have applied the Cauchy condensation test, which says that (for nonincreasing nonnegative terms) the series converges if and only if SUM 2^n * a_{2^n} converges. I got:

(1/(log 2)^x) * SUM 2^n / n^x

I know that SUM 1/n^x converges if x > 1, but 2^n will explode, so I'd have convergent (sometimes) times divergent = divergent. Can I do that?
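One way to tighten the last step ("convergent times divergent = divergent" is not a valid rule in general, but it is not needed here): for $x \le 0$ the terms $1/(\log n)^x = (\log n)^{-x}$ do not tend to $0$, so the series diverges immediately. For $x > 0$ the terms are positive and decreasing, so Cauchy condensation applies, and since

$$2^n a_{2^n} = \frac{1}{(\log 2)^x}\cdot\frac{2^n}{n^x} \longrightarrow \infty \quad\text{as } n \to \infty,$$

the condensed series has terms that do not tend to $0$, hence it diverges, and therefore $\sum_{n\ge 2} 1/(\log n)^x$ diverges as well.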
{"url":"http://www.physicsforums.com/showthread.php?t=99306","timestamp":"2014-04-18T10:38:54Z","content_type":null,"content_length":"34503","record_id":"<urn:uuid:3bba46f7-0976-4f34-85c3-e667769f7e41>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Joy of Programming: About the Java Overflow Bug - Open Source For You

In this column, we’ll discuss a common overflow bug in JDK, which surprisingly occurs in widely used algorithms like binary search and mergesort in C-based languages.

How does one calculate the average of two integers, say i and j? Trivial, you would say: it is (i + j) / 2. Mathematically, that’s correct, but it can overflow when i and j are either very large or very small when using fixed-width integers in C-based languages (like Java). Many other languages like Lisp and Python do not have this problem. Avoiding overflow when using fixed-width integers is important, and many subtle bugs occur because of this problem. In his popular blog post [1], Joshua Bloch (Java expert and author of books on Java intricacies) writes about how a bug [2] in the binarySearch and mergeSort algorithms was found in his code in the java.util.Arrays class in JDK. It read as follows:

public static int binarySearch(int[] a, int key) {
    int low = 0;
    int high = a.length - 1;

    while (low <= high) {
        int mid = (low + high) / 2;
        int midVal = a[mid];
        if (midVal < key)
            low = mid + 1;
        else if (midVal > key)
            high = mid - 1;
        else
            return mid; // key found
    }
    return -(low + 1); // key not found.
}

The bug is in line 6—int mid = (low + high) / 2;. For large values of ‘low’ and ‘high’, the expression overflows and becomes a negative number (since ‘low’ and ‘high’ represent array indexes, they cannot be negative). However, this bug is not really new—rather, it is usually not noticed. For example, the classic K & R book [3] on C has the same code (pg 52). For pointers, the expression (low + high) / 2 is wrong and will result in a compiler error, since it is not possible to add two pointers. So, the book’s solution is to use subtraction (pg 113):

mid = low + (high-low) / 2

This finds ‘mid’ when ‘high’ and ‘low’ are of the same sign (they are pointers, they can never be negative). This is also a solution for the overflow problem we discussed in Java. Is there any other way to fix the problem? If ‘low’ and ‘high’ are converted to unsigned values and then divided by 2, it will not overflow, as in:

int mid = ( (unsigned int) low + (unsigned int) high) / 2;

But Java does not support unsigned numbers. Still, Java has an unsigned right shift operator (>>>)—it fills the vacated leftmost (high-order) bits with 0 (positive values remain as positive numbers; also known as ‘value preserving’). For the Java right shift operator >>, the sign of the filled bit is the value of the sign bit (negative values remain negative and positive values remain positive; also known as ‘sign-preserving’). Just as an aside for C/C++ programmers: C/C++ has only the >> operator and it can be sign or value preserving, depending on implementation. So we can use the >>> operator in:

int mid = (low + high) >>> 1;

The result of (low + high), when treated as an unsigned value and right-shifted by 1, does not overflow! Interestingly, there is another nice ‘trick’ to finding the average of two numbers: (i & j) + (i ^ j)/2. This expression looks strange, doesn’t it? How do we get this expression? Hint: It is based on a well-known Boolean equality, for example, as noted in [4]: “(A AND B) + (A OR B) = A + B = (A XOR B) + 2 (A AND B)”.

A related question: How do you detect overflow when adding two ints? It’s a very interesting topic and is the subject for next month’s column.

3. The C Programming Language, Brian W. Kernighan, Dennis M. Ritchie, Prentice-Hall, 1988.
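To see the failure concretely (illustrative numbers, not from the original column): with 32-bit two’s complement ints, take low = high = 1,600,000,000. Then low + high = 3,200,000,000, which exceeds Integer.MAX_VALUE = 2,147,483,647 and wraps around to 3,200,000,000 − 2^32 = −1,094,967,296; dividing that by 2 gives a negative ‘mid’, which in binarySearch typically surfaces as an ArrayIndexOutOfBoundsException. Shifting the same bit pattern with >>> 1 instead treats it as the unsigned value 3,200,000,000 and yields 1,600,000,000, the correct midpoint.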
{"url":"http://www.opensourceforu.com/2009/02/joy-of-programming-about-the-java-overflow-bug/","timestamp":"2014-04-16T07:33:11Z","content_type":null,"content_length":"75867","record_id":"<urn:uuid:b0b1489c-39d8-4a27-8be8-558737edb58c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
Economic growth and its determinants in Pakistan.
Subject: Foreign investments; Economic growth
Authors: Shahbaz, Muhammad; Ahmad, Khalil; Chaudhary, A.R.
Pub Date: 12/22/2008
Publication: Pakistan Development Review
Publisher: Pakistan Institute of Development Economics
Audience: Academic
Format: Magazine/Journal
Subject: Business, international; Social sciences
Copyright: COPYRIGHT 2008 Reproduced with permission of the Publications Division, Pakistan Institute of Development Economics, Islamabad, Pakistan.
ISSN: 0030-9729
Issue Date: Winter, 2008
Source Volume: 47
Source Issue: 4
Computer Subject: Company growth
Accession Number: 228122160

Full Text: This paper aims to investigate the impact of macroeconomic variables on economic growth after the Structural Adjustment Programme (SAP) in Pakistan. In doing so, the study utilises quarterly time series data from 1991Q1 to 2007Q4. The advanced Autoregressive Distributed Lag (ARDL) approach has been employed for co-integration, and an error correction model (ECM) for short-run results, in the case of Pakistan. Empirical investigations indicate that credit to the private sector (financial development), foreign direct investment and the inflow of remittances correlate positively with economic growth in the long run. A high inflation rate and trade-openness slow down the growth rate in the short as well as the long run.

JEL classification: O1, C22
Keywords: Growth, ARDL, Cointegration
Beneficiaries of economic reforms are consume by poor governance, lack of transparency in economic policies, high level of corruption, high burden of internal and external debts and interest rate payments on these debts, weak sitttation of law and order, and improper implementation of economic policies. Singer (1995) argues that the SAP are based on the assumption that the first and most necessary step is to get the macroeconomic fundamentals right. Supply will respond to the fight environment and proper price enticement and this leads to sustainable growth. This seems to neglect some of the SAP impediment to domestic supply. Furthermore, a small developing open economy has limited international capital mobility or financial integration; higher domestic saving results in higher investment and economic growth under the assumption of "investment and domestic savings are highly correlated" (Fledstien-Horioka Hypothesis). A Brief Look on Relevant Literature Barro (1996) seems to document that high inflation in a country reduces the rate of economic growth. Many studies find no strong positive association between openness and growth of the economy. Grilli and Milesi-Ferretti (1995) do not support the hypothesis that inflow of foreign capital promotes growth. Rodrik (1998) shows no significant correlation between financial liberalisation and growth in small open economies. Similarly, Edison (2003) does not find strong evidence of a relationship between trade liberalisation and growth. He also concludes that financial integration does not promote the growth per se, without controlling for some economic, financial, institutional and policy characteristics. Edwin and Shajehan (2001) support that apart from growth in the labour force, investment in skill and technology, as well as low inflation rate and open trade polices, are important for economic growth. Moreover, the ability to adopt beneficial technological shocks in order to increase efficiency is also necessary. Since many developing countries have a large agricultural sector, adverse supply shocks in this sector are likely to originate an adverse impact on economic growth. Growth in agriculture has a positive impact on industrial and service sector's growth, social infrastructure is an important determinant of the investment decisions [Krishna (2004)]. The author however stresses that there is a need for exploring other approaches to explain economic growth from all perspectives. Recent empirical studies confirm that natural resources, climate, topography and 'land lockedness' have a direct impact on economic growth affecting agricultural productivity, economic structure, transportation costs and competitiveness in goods markets [Sachs and Warner (1997), Bloom and Sachs (1998); Masters and McMillan (2001); Armstrong and Read (2004)]. However, others [e.g. Rodrik, et al. (2002); Easterly and Levine (2003)] find no effect of geography on growth after controlling for institutions. Edwin and Shajehan (2001) empirically suggest that apart from growth in the labour force, investment in both physical and human capital, as well as low inflation and trade liberalisation polices are essential for economic growth. They also suggest the ability to adopt technological changes in order to increase efficiency is also important. 
Klein and Olivei (2003) utilises quadratic interaction between income per capita and capital inflow or financial liberalisation and2 established a positive and significant effect of capital account openness along with stock market liberalisation on economic growth for middle-income countries but not for poor and rich countries. In small, open economies, absorption capacity for capital is limited because the financial markets are impulsive. The excessive capital inflows towards small open economies might cause "Dutch" disease phenomena and asymmetric information might be inefficient use of capital [Carlos, et al. (2001); Hauskrecht, et al. (2005)]. Stark and Lucas (1988); Taylor (1992); and Faini (2002) establish the positive relationship between remittances and economic growth. Empirical evidence of previous studies of the impact of worker's remittances on economic growth as well as poverty reduction is mixed [Juthathip (2007)]. The results suggests that, remittances have a significant impact on poverty reduction in developing economies through increasing income tends to relax the consumption constraints of the poor, they have a nominal impact on growth working through enhance in both domestic investment and human capital development. On the basis of recent and quite literal evidence, surveyed by Lopez and Olmedo (2005) analyse the positive impact of remittances on education and entrepreneurship at the household-level. The mechanism through which remittances can positively affect growth can be better results in micro-econometric studies based on household-level data. (3) Chaudhary, et al. (2002) investigate the role of trade instability on investment and economic growth. The results show that export instability does not affect economic growth and investment in Pakistan. However export instability could affect foreign exchange earnings and as a result it could have negative impact on imports and economic growth. Chaudhary, et al. (2007) examine the impact of trade policy on economic growth in Bangladesh. Results strongly support a long-run positive and significant relationship among exports, imports and economic output for Bangladesh. Furthermore, empirical evidence shows relationships between exports and output growth and also between imports and output growth in the short-run. A strong feedback effect between import growth and export growth has also been established. A number of studies have examined determinants of economic growth in case of Pakistan in terms of a mixture of factors that includes income, real interest rate, dependency ratios, foreign capital inflows, foreign aid, changes in terms of trade, and openness of the economy such as [Iqbal (1993, 1994), Khilji and Mahmood (1997); and Shabbir and Mahmood (1992)]. Iqbal (1994) seems to investigate the relationship between structural adjustment lending and real output growth. The empirical results indicate negative link between structural adjustment lending and output growth, and worsening the terms of trade and economic output in the country. Finally, favourable weather condition and real domestic savings stimulate the real economic growth rate. Furthermore, Iqbal (1995) examines a three-gap model and concludes that real devaluation, increased foreign demand and capacity utilisation are main contributors of economic growth in the country. Khilji and Mahmood (1997) seem to document that military expenditures are contractionary to economic growth in the case of Pakistan. 
Shabbir and Mahmood (1992) posit a positive impact of foreign private investment on real GNP growth. The empirical results by Iqbal and Zahid (1998) reveal that openness of trade is positively associated with economic growth, while the budget deficit and external debt reduce the growth of output. Iqbal and Sattar (2005) come to the conclusion that foreign workers' remittances impact economic growth positively with high significance. Public and private investment is also an important source of economic growth in the country, but the inflation rate, external debt and a worsening terms of trade appear to be correlated negatively with economic growth. Finally, Shahbaz (2009) reassesses the impact of some macroeconomic variables on economic growth. The results reveal that financial sector development improves the performance of the economy in the long run. Credit to the private sector as a share of GDP, used as a proxy for financial development, is a good predictor of economic growth for the case of Pakistan. Similarly, a rise in exports and investment boosts economic growth, while inflation and imports both reduce economic growth. High economic growth is found to be associated with a small size of the government.

To better understand the growth process, this study develops an empirical model using a time series approach for the country-specific case of Pakistan. It attempts to explore some of the necessary factors for sustained economic growth in the country. The rest of the paper is organised as follows: Section II explains the model and data collection procedure, Section III describes the methodological framework and Section IV investigates the empirical results. Finally, Section V presents the conclusion and policy recommendations.

II. MODEL AND DATA

International Financial Statistics (IFS) (2008) and the Economic Survey of Pakistan (various issues) have been combed to obtain the data for the said variables. Finally, quarterly data for GDP per capita has been collected from Ahmed (2007). (4) The study utilises the data period from 1991Q1 up to 2007Q4. A log-linear model has been constructed to find the required linkages; it provides better results than a simple linear regression. The above-discussed literature permits us to construct the empirical model as follows:

$GDPR = \phi_0 + \phi_1 FD + \phi_2 FDI + \phi_3 REM + \phi_4 TR + \phi_6 INF + v_i$   (1)

where GDPR = GDP per capita; FD = credit to the private sector as a share of GDP, a proxy for financial development; FDI = financial openness, proxied by foreign direct investment as a share of GDP; REM = workers' remittances; TR = (Exports + Imports)/GDP, a proxy for trade-openness; and INF = annual inflation.

Financial sector development stimulates economic growth. Financial development improves the productivity of investment projects and lowers transaction costs, which encourages more investment activity. Development of the financial sector also increases savings in the country, and savings in turn contribute positively to economic growth [Pagano (1993)]. Financial openness (foreign direct investment) promotes economic growth through improved technology transfer, efficiency, improvement in the quality of production factors and enhanced production. This not only leads to an increase in exports but also raises the savings rate in the country. The increased savings rate stimulates investment opportunities that ultimately lead to faster growth of output and employment [Khor (2000)]. It is expected that increased foreign remittances raise economic growth.
A continuous flow of remittances is an important source for lowering the current account deficit and external borrowing as well. Remittances play a role in reducing external debt and maintaining the exchange rate of an economy. External shocks are absorbed through a sustainable flow of foreign remittances. So the impact of remittances on economic growth may be positive, because foreign remittances are a prerequisite for accelerating real output in the country [Iqbal and Sattar (2005)]. International trade theory posits that economic growth can be accelerated through the openness of an economy, via the effects of increased competition and easy access to trade opportunities on the efficiency of resource allocation. Positive externalities, such as access to advanced technology with its spillover effects and the availability of necessary inputs from the rest of the world, increase domestic output and hence economic growth in the country. In the literature, there are two arguments about the link between inflation and economic growth. In the first view, Mundell (1963) and Tobin (1965) document that high inflation increases the cost of holding money. This leads to a shift of capital out of money portfolios, which improves investment and hence economic growth. But a rise in inflation retards economic growth through various channels. For instance, the cost of capital, which rises with high inflation, lessens the investment rate and the rate of capital accumulation, which lowers the real growth rate. A high inflation rate causes the inflation tax to rise and weakens the incentive to work. Thus a rise in unemployment will not only reduce real output but also lower economic growth.

III. METHODOLOGICAL FRAMEWORK

In recent times, Ng-Perron (2001) developed four test statistics utilising GLS de-trended data $D^d_t$. The calculated values of these tests are based on the forms of the Phillips-Perron (1989) $Z_\alpha$ and $Z_t$ statistics, the Bhargava (1986) $R_1$ statistic, and the Elliott, Rothenberg and Stock (1996) point-optimal statistic. The terms are defined as follows:

$k = \sum_{t=2}^{T} (D^d_{t-1})^2 / T^2$   (2)

while the de-trended GLS modified statistics are given below:

[mathematical expression not reproducible in the source]   (3)

If $x_t = \{1\}$ in the first case and $x_t = \{1, t\}$ in the second. (5)

In the economic literature, many methods are commonly used for conducting the cointegration test; the most widely used methods include the residual-based Engle-Granger (1987) test, and the maximum-likelihood-based Johansen (1991) and Johansen-Juselius (1990) tests. All these require that the variables in the system be of equal order of integration. The residual-based co-integration tests are inefficient and can lead to contradictory results, especially when there are more than two I(1) variables under consideration. Recently, an emerging body of literature led by Pesaran and Shin (1995), Pesaran, Shin and Smith (1996), Pesaran and Shin (1997), and Pesaran, Shin and Smith (2001) has introduced an alternative co-integration technique known as the "Autoregressive Distributed Lag" or ARDL bounds testing approach. It is argued that ARDL has numerous advantages over conventional techniques like the Engle-Granger and Johansen cointegration approaches. The first advantage of ARDL is that it can be applied irrespective of whether the underlying regressors are purely I(0), purely I(1) or mutually co-integrated [Pesaran and Pesaran (1997)].
The second advantage of using the bounds testing approach to co-integration is that it performs better than the Engle and Granger (1987), Johansen (1990) and Phillips and Hansen (1990) co-integration tests in small samples [see for more details Haug (2002)]. The third advantage of this approach is that the model takes a sufficient number of lags to capture the data generating process in a general-to-specific modelling framework [Laurenceson and Chai (2003)]. Finally, ARDL also carries information about structural breaks in time series data. However, Pesaran and Shin (1995) contended that appropriate modification of the orders of the ARDL model is sufficient to simultaneously correct for residual serial correlation and the problem of endogenous variables.

Under certain conditions, Pesaran and Shin (1995) and PSS (6) [Pesaran, Shin, and Smith (2001)] established that the long-run association among macroeconomic variables may be investigated by employing the autoregressive distributed lag model. Once the lag order of the ARDL procedure is selected, OLS may be utilised for estimation and identification. Valid estimates and inferences can be drawn in the presence of a unique long-run relationship. Such inferences may be made not only on long-run but also on short-run coefficients, which leads us to conclude that the ARDL model is correctly augmented to account for contemporaneous correlations between the stochastic terms of the data generating process (DGP), and also that ARDL estimation is possible even where explanatory variables are endogenous. Moreover, ARDL remains valid irrespective of the order of integration of the explanatory variables, but the ARDL procedure will collapse if any variable is integrated at I(2). The PSS (2001) procedure is implemented to estimate the error correction model given by the following equation:

[mathematical expression not reproducible in the source]   (4)

The PSS F-test is estimated by imposing a joint zero restriction on the $\delta$'s in the error correction model. The distribution of the PSS F-test is non-standard [Chandan (2002)], which is why lower and upper critical bounds are generated by PSS (1996). The lag order of the ARDL model is selected on the basis of the lowest value of the AIC or SBC. After empirical estimation, if PSS (2001) confirms the presence of a unique cointegration vector among the variables, this shows that one is the outcome variable while the others are forcing variables in the model. On the basis of the selected ARDL, long-run and short-run estimates can be investigated in two steps [Pesaran and Shin (1995)]. The long-run relationship for the said variables can be established by estimating the following ARDL model by means of Ordinary Least Squares (OLS):

$Y_t = \bar{\omega} + \sum_{i=1}^{p} \beta_i Y_{t-i} + \sum_{i=0}^{q} \upsilon_i X_{t-i} + v_t$   (5)

where $v$ is a normally distributed error term. The long-run (cointegration) coefficients can then be obtained from

$Y_t = \alpha + \rho X_t + \mu_t$   (6)

From Equation (6):

[mathematical expression not reproducible in the source]   (7)

Firstly, we try to find out the direction of the relationship between economic growth and its determinants in the case of Pakistan by analysing the PSS F-test statistic. The calculated F-statistic is compared with the critical values tabulated by Pesaran and Pesaran (1997) or Pesaran, et al. (2001). (7) The ARDL method estimates $(p+1)^k$ regressions in order to obtain the optimal lag length for each variable, where p is the maximum number of lags to be used and k is the number of variables in the equation.
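Equation (4) does not survive in the source text. For orientation only, the conditional error-correction (bounds-testing) equation for a model like equation (1) is commonly written in the following general form; this is an illustrative sketch of the standard PSS setup, not necessarily the authors' exact specification:

$\Delta GDPR_t = \delta_0 + \sum_{i=1}^{p} a_i \Delta GDPR_{t-i} + \sum_{i=0}^{q} b_i \Delta X_{t-i} + \delta_1 GDPR_{t-1} + \delta_2 X_{t-1} + \varepsilon_t$

where $X_t$ collects the regressors (FD, FDI, REM, TR, INF), and the PSS F-test imposes the joint restriction $\delta_1 = \delta_2 = 0$ on the level terms.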
The model can be selected using model selection criteria like the Schwarz-Bayesian Criterion (SBC) (8) and Akaike's Information Criterion (AIC). SBC is known as a parsimonious criterion, selecting the smallest possible lag length, whereas AIC is known for selecting the maximum relevant lag length. In the second step, the long-run relationship is estimated using the selected ARDL model. When there is a long-run relationship between the variables, there should exist an error correction representation:

$\Delta GDPR = \phi_0 + \phi_1 \Delta FD + \phi_2 \Delta FDI + \phi_3 \Delta REM + \phi_4 \Delta TR + \phi_6 \Delta INF + \eta \mu_{t-1} + v_t$   (8)

Finally, the error correction model is estimated. The error correction model results indicate the speed of adjustment back to the long-run equilibrium after a short-run shock. To determine the goodness of fit of the ARDL model, diagnostic tests are conducted. The diagnostic or sensitivity tests examine the serial correlation, autoregressive conditional heteroscedasticity, normality of the error term and heteroscedasticity associated with the model.

IV. EMPIRICAL FINDINGS

ARDL has the advantage of avoiding the classification of variables into I(0) or I(1), since there is no need for unit root pre-testing. As argued by Sezgin and Yildirim (2002), ARDL can be applied regardless of the stationarity properties of the variables in the sample and allows for inferences on long-run estimates, which is not possible under alternative co-integration techniques. In contrast, according to Ouattara (2004), in the presence of I(2) variables the computed F-statistics provided by PSS (2001) become invalid, because the bounds test is based on the assumption that the variables should be I(0) or I(1). Therefore, the implementation of unit root tests in the ARDL procedure might still be necessary in order to ensure that none of the variables is integrated of order I(2) or beyond. For this purpose, the Ng-Perron (2001) test is employed, which is more powerful and reliable for small data sets. To find out the order of integration, the ADF [Dickey and Fuller (1979)], P-P [Phillips and Perron (1989)] and DF-GLS [Elliott, et al. (1996)] tests are often used. (9) Due to their poor size and power properties, these tests are not reliable for small sample data sets [Dejong, et al. (1992) and Harris (2003)]: they seem to over-reject the null hypothesis when it is true and accept it when it is false. Therefore, the Ng-Perron test is utilised to overcome the above-mentioned problems regarding the order of integration of the variables. Results of the unit root estimation reveal that all variables have a unit root problem in their level form, as shown in Table 1. The established order of integration leads us to apply the ARDL approach to find out cointegration among the macroeconomic variables. The lag length for the conditional error correction version of the ARDL model has been obtained by means of the Schwarz-Bayesian criterion (SBC) and the Akaike information criterion (AIC) through a vector autoregression (VAR). With this type of time series data set, we cannot take more than 4 lags (see Table 2). The calculated F-statistic is 5.674, which is higher than the upper bound of 4.37 and the lower bound of 3.29 at the 1 percent level of significance. This implies that the alternative hypothesis of cointegration may be accepted. It is concluded that cointegration prevails among the macroeconomic variables.
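As a side note on interpreting the error-correction term $\eta\mu_{t-1}$ in equation (8) (a back-of-the-envelope illustration of mine, not from the paper): with a coefficient of roughly the size the authors report below, say $\eta = -0.27$ per observation period, the implied half-life of a deviation from the long-run path is

$t_{1/2} = \ln(0.5) / \ln(1 - 0.27) \approx -0.693 / -0.315 \approx 2.2$ periods,

that is, a little over two quarters for half of any disequilibrium to be worked off, under that assumption.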
After establishing cointegration among running actors in model, we can employ ARDL regression to investigate the long run elasticities. Table 3 reveals the impact of independent variables on dependent one. Improved performance of financial sector enhances the speed of economic growth significantly. It is concluded that 9 percent improvement in the efficiency of financial sector causes the economic growth by 4.18 percent to rise. Continuous inflows of remittances effect economic growth positively with minimal significance. Economic growth is negatively caused by increased trade-openness significantly. This reveals the low demand of country's exports in international market due to low quality. Trade history of the country shows the high dependence on imports as compare to exports which increases trade deficit and hence slows down the speed of economic growth. Financial openness correlates positively with economic activity in the country and improves economic growth rate. A 10 percent increase in FDI inflows (financial openness) will improve economic growth by 0.3 percent. Inflationary situation retards the economic growth, 0.16 percent of economic growth is eroded by 10 percent increase in inflation. The [ecm.sub.t-1] coefficient indicates how quickly/slowly variables return to equilibrium and it should have a negative sign with high significance. The error correction term, [ecm.sub.t-1] shows the speed of modification required to re-establish equilibrium in the short-run model. Bannerjee, et al. (1998) argue that the error correction term is significant at 5 percent level of significance. The coefficient of [ecm.sub.t-1] is equal to -0.2705 for the short-run model and implies that deviation from the long-term economic growth is corrected by 27.05 percent over each year. The lag length of the short-run model is selected on the basis of the schwartz bayesian criteria. In short span of time, economic growth is improved through previous supporting policies. Development of financial sector declines economic growth significantly as shown in Table 4. This shows that improvements in financial sector could not stimulate the economic activity in short span of time. Actually, financial activities take time to contribute in economic activity through capital formation process. Financial openness affects the economic growth positively but insignificant. Remittances and inflation lower down economic growth insignificantly. Finally, trade-openness and economic growth are inversely linked with significance. Sensitivity Analysis and Stability Tests The results for serial correlation, autoregressive conditional heteroskedasticity, normality and heteroskedasticity (sensitivity analysis) are presented in Table 2. These results show that the short-run model passed the diagnostic tests. The empirical estimations indicate that there is no evidence of autocorrelation and that the model passes the test for normality, the error term is also proved to be normally distributed. There is no existence of white heteroscedasticity in the model. Finally, for analysing the stability of the long-run coefficients together with the short-run dynamics, the cumulative sum (CUSUM) and the cumulative sum of squares (CUSUMsq) are applied. According to Pesaran and Shin (1999), the stability of the estimated coefficient of the error correction model should also be empirically investigated. A graphical representation of CUSUM and CUSUMsq is shown in Figures 1 and 2. 
Following Bahmani-Oskooee and Nasir (2004) the null hypothesis (i.e., that the regression equation is correctly specified) cannot be rejected if the plot of these statistics remains within the critical bounds of the 5 percent significance level. As it is clear from Figures 1 and 2, the plots of both the CUSUM and the CUSUMsq are within the boundaries, and, hence these statistics confirm the stability of the long-run coefficients of the regressors that affect the economic growth in the country. The stability of the selected ARDL model specification was evaluated using the CUSUM and the CUSUMsq of the recursive residual test for structural stability [see Brown, Durbin and Evans (1975)]. The model appears to be stable and correctly specified given that neither the CUSUM nor the CUSUMsq test statistics exceed the bounds of the 5 percent level of significance (see figure given in appendix). V. CONCLUSIONS AND POLICY IMPLICATIONS Over the last two decades the determinants of economic growth have been the primary focus of theoretical and applied research. Generally, it has been observed that both developing and developed countries with strong macroeconomic fundamentals tend to grow faster than those without them. Despite the lack of a unifying theory, there are several partial theories that argue the role of various factors in determining the economic growth. This study explores some of the causal factors for sustained economic growth in the country after the Structural Adjustment Programme (SAP). This programme was initiated as part of a massive world-wide policy measures under the directive of 1MF. It aimed to improve the balance of payments through devaluation of local currency, cutting the fiscal deficit and reducing subsidies, decreasing government size and liberalising trade. Empirical psychology reveals that ARDL bounds testing approach employed to find out the cointegration among running macroeconomic variables. ARDL F-statistic confirmed about the existence of long run association. Financial sector's development seems to stimulate economic activity and hence increases economic growth in long span of time but in short run. Remittances are positively correlated with economic growth in the country. Trade-openness erodes economic growth while financial openness promotes it. Domestic investment activities generate employment opportunities and in resulting contribute to improve economic growth. Finally, increased inflation and economic growth correlated inversely in the country. The findings show that structural adjustment program adopted by government was totally failed to fill its objectives. This study could not incorporate other important macroeconomic variables for economic growth due unavailability of data (quarterly). There is a need to make comprehensive study to find out impact of other macroeconomic variables in the country. Further research on this particular topic will provide inclusive policy implications to enhance growth rate in the country. [FIGURE 1 OMITTED] [FIGURE 2 OMITTED] Ahmed, M. Farooq and I. Batool (2007) Estimating Quarterly Gross Fixed Capital Formation. (SBP Working Paper Series No. 17.) Armstrong, H. and R. Read (2004) The Economic Performance of Small States and Islands: The Importance of Geography. Paper presented at Islands of the World VIII International Conference, Bahmani-Oskooee, M. and A. Nasir (2004) ARDL Approach to Test the Productivity Bias Hypothesis. Review of Development Economics Journal 8, 483-488. Bannerjee, A., J. Dolado, and R. 
Mestre (1998) Error-Correction Mechanism Tests for Co-integration in Single Equation Framework. Journal of Time Series Analysis 19, 267-83. Barro, R. (1991) Economic Growth in a Cross Section of Countries. Quarterly Journal of Economics 106, 407-442. Barro, R. (1996) Determinants of Economic Growth: A Cross-country Empirical Study. MIT Press Books. Barro, Robert J. (1996) Determinants of Economic Growth: A Cross-country Empirical Study. National Bauru of Economic Research. (NBER Working Paper No. 5698.) Barro, Robert J. and Wong-Wha Lee (1994) Sources of Economic Growth. Carnegie-Rochester Conference Series on Public Policy 40:1, 1-46. Bhargava, A. (1986) On the Theory of Testing for Unit Roots in Observed Time Series. The Review of Economic Studies 53:3, 369-384. Bloom, D. and J. Sachs (1998) Geography, Demography and Economic Growth in Africa. Brookings Papers on Economic Activity 2, 207-295. Brown, R. L., J. Durbin, and J. M. Ewans (1975) Techniques for Testing the Constance of Regression Relations Overtime. Journal of Royal Statistical Society, 149-72. Carlos, et al. (2001) When Does Capital Account Liberalisation Help More Then Hurts? NBER Working Paper Series. (Working Paper 8414). Chandana, Kularatne (2001) An Examination of the Impact of Financial Deepening on Long-Run Economic Growth: An Application of a VECM Structure to a Middle-Income Country Context. 2001 Annual Forum at Misty Hills, University of the Witwatersrand, Johannesburg. Chaudhary, A., Shirazi and Chaudhary Munir (2007) Trade Policy and Economic Growth within Bangladesh: A Revisit. Pakistan Economic and Social Review 45, 1-26. Chaudhary, M. Aslam and Ashfaq A. Qaisrani (2002) Trade Instability, Investment and Economic Growth in Pakistan. Pakistan Economic and Social Review 9, 57-73. Dejong, D. N., J. C. Nankervis, and N. E. Savin (1992) Integration versus Trend Stationarity in Time Series. Econometrica 60, 423-33. Dickey, D. A., and W. A. Fuller (1981) Likelihood Ratio Statistics for Autoregressive Time Series with a Unit Root. Econometrica 49, 057-1072. Dickey, D. and W. A. Fuller (1979) Distribution of the Estimates for Autoregressive Time Series with Unit Root. Journal of the American Statistical Association 74, 427-31. Dollar D. and A. Kraay (2002) Growth Is Good for the Poor. Journal of Economic Growth 7, 195-225. Dustmann, C. and O. Kirchkamp (2002) The Optimal Duration Migration and Activity Choice after Remigration. Journal of Development Economics 67, 351-72. Easterly, W. and R. Levine (2003) Tropics, Germs and Crops: How Endowments Influence Economic Development. Journal of Monetary Economics 50, 3-39. Easterly, William (1999) The Ghost of Financing Gap: Testing the Growth Model Used in the International Finance Institutions. Journal of Development Economics 60:2, 423-438. Edison, H. J. (2003) Do Indicators of Financial Crises Work? An Evaluation of An Early Warning System. International Journal of Finance and Economics 8:1, 11-53. Edwin, Dewan and Shajehan Hussein (2001) Determinants of Economic Growth: Panel Data Approach. Economics Department Reserve Bank of Fiji (Working Paper 01/04). Elliot, G., T. J. Rothenberg, and J. H. Stock (1996) Efficient Tests for an Autoregressive Unit Root. Econometrica 64, 813-36. Engle, R. F. and C. W. J. Granger (1987) Co-integration and Error Correction Representation: Estimation and Testing. Econometrica 55, 251-276. Fagerberg, J. (1987)A Technology Gap Approach to Why Growth Rates Differ. Faini, Riccardo (2002) Development, Trade, and Migration. 
Proceedings from the ABCDE Europe Conference, 1-2, pp: 85-116. Fajnzylber, P., D. Lederman, and N. Loayza (2002) What Causes Violent Crime? European Economic Review 46, 1323-57. Fischer, Stanley (1993) The Role of Macroeconomic Factors in Growth. Journal of Monetary Economics 32, 485-512. Grilli, Vittorio and Gian Maria Milesi-Ferretti (1995) Economic Effects and Structural Determinants of Capital Controls. IMF Staff Papers 42, 517-551. Harris, R. and R. Sollis (2003) Applied Time Series Modeling and Forecasting. West Sussex: Wiley. Hauskrecht, A. and Nhan Le (2005) Capital Account Liberalisation for a Small Economy. Indiana University, Kelley School of Business, Department of Business Economics and Public Policy in its series (Working Papers 13.) Iqbal, Z. (1993) Institutional Variations in Savings Behaviour in Pakistan. The Pakista, Development Review 32:4, 1293-1311. Iqbal, Z. (1994) Macroeconomic Effects of Adjustment Lending in Pakistan. The Pakistan Development Review 33:4, 1011-1031. Iqbal, Z. and A. Sattar (2005) The Contribution of Workers' Remittances to Economic Growth in Pakistan. Pakistan Institute of Development Economics, Islamabad. (Research Report No. 187.) Iqbal, Z. and G. Zahid (1998) Macroeconomic Determinants of Economic Growth in Pakistan. The Pakistan Development Review 37:2, 125-148. Johansen, S. (1991) Estimation and Hypothesis Testing of Co-integrating Vectors in Gaussian Vector Autoregressive Models. Econometrica 59, 1551-80. Johansen, Soren and Katarina Juselius (1990) Maximum Likelihood Estimation and Inference on Cointegration--with Applications to the Demand for Money. Oxford Bulletin of Economics and Statistics 52, 169-210. Juthathip, Jongwanich (2007) Workers' Remittances, Economic Growth and Poverty in Developing Asia and the Pacific Countries. Economic and Social Commission for Asia and the Pacific. (Working Paper WP/07/01.) Khilji, N. M. and A. Mahmood (1997) Military Expenditures and Economic Growth in Pakistan. The Pakistan Development Review 36:4, 791-808. Khor, M. (2000), Globalisation and the South. Some Critical Issues. Third World Network, Penang. Klein, M. and G. Olivei (1999) Capital Account Liberalisation, Financial Depth, and Economic Growth. (NBER Working Paper 7348). Klein, M. W. and G. Olivei (2000) Capital Account Liberalisation, Financial Depth and Economic Growth. Unpublished, Fletcher School of Law and Diplomacy, Tuft University, Boston, MA. Krebs, T. and P. Krishna (2004) Trade Policy, Income Volatility and Welfare. Econometric Society 2004 North American Summer Meetings 367, Econometric Society. Lichtenberg, F. (1992) R&D Investment and International Productivity Differences. (NBER Working Paper No. 4161.) Lopez, Cordova E. and A. Olmedo (2005a) International Remittances and Development: Existing Evidence, Policies and Recommendations. Paper prepared for the G-20 Workshop on "Demographic Challenges and Migration" held in Sydney on 27-28 August 2005. Lucas, Robert E. Jr. (1988) On the Mechanics of Economic Development. Journal of Monetary Economics 22:1 (July), 3A2. Masters, W. and M. McMillan (2001) Climate and Scale in Economic Growth. Journal of Economic Growth 6, 167-186. McCormick, B. and J. Wahba (2001) Overseas Work Experience, Savings, and Entrepreneurship amongst Return Migrants to LDCs. Scottish Journal of Political Economy 48:2, 164-78. Mundell, R. (1963) Inflation and Real Interest. Journal of Political Economy 71, 280-283. Ng, S. and P. 
Perron (2001) Lag Length Selection and the Construction of Unit Root Test with Good Size and Power. Econometrica 69, 1519-54. Nishat, Muhammad and Nighat Bilgrami (1991) The Impact of Migrant Workers, Remittances on Pakistan's Economy. Pakistan Economic and Social Review 29, 21-41. Ouattara, B. (2005) Foreign Aid and Fiscal Policy in Senegal. University of Manchester. (Mimeographed.) Pagano (1993) Financial Markets and Growth. European Economic Review 37, 613-22. Perron, P. (1989) The Great Crash, The Oil Price Shock, and the Unit Root Hypothesis. Econometrica 57, 1361-1401. Pesaran and Shin (1995, 1998) An Autoregressive Distributed Lag Modeling Approach to Co-integration Analysis. (DAE Working Papers No. 9514.) Pesaran, et al. (1996) Testing for the Existence of a Long Run Relationship. (DAE Working Papers No. 9622.) Pesaran, M. Hasem and Bahrain Pesaran (1997) Working with Microfit 4.0: Interactive Econometric Analysis. Oxford: Oxford University Press. Pesaran, M. Hasem, Yongcheol Shin, and Richard J. Smith (2001) Bounds Testing Approaches to the Analysis of Level Relationships. Journal of Applied Econometrics 16, 289-326. Phillips, P. and B. E. Hansen (1990) Statistical Inference in Instrumental Variables Regression with I(l) Processes. Review of Economic Studies 57:1, 99-125. Rodrik, D. (1998) Trade Policy and Economic Performance in Sub-Saharan Africa. National Bureau of Economic Research. (NBER Working Papers 6562). Rodrik, D., Subramanian A., and F. Trebbi (2002) Institutions Rule: the Primacy of Institutions over Geography and Integration in Economic Development. (NBER Working Paper No. 9305.) Romer, P. (1986) Increasing Returns and Long-Run Growth. Journal of Political Economy 92, 1002-1037. Romer, P. M. (1990) Human Capital and Growth. Paper presented at the Carnegie-Rochester Conference on Economic Policy, Rochester, New York. Sachs, J. and A. Warner (1997) Sources of Slow Growth in African Economies. Journal of African Economies 6, 335-76. Selami, Sezgin and Julide Yildirm (2002) The Demand for Turkish Defence Expenditure. Defence and Peace Economics 13, 121-128. Sezgin, Selami and Julide Yildirm (2002) The Demand for Turkish Defence Expenditure. Defence and Peace Economics 13, 121-128. Shabbir, T. and A. Mahmood (1992) The Effects of Foreign Private Investment on Economic Growth in Pakistan. The Pakistan Development Review 31:4, 831-841. Shahbaz, M. (2009) A Reassessment of Finance-Growth Nexus for Pakistan: Under the Investigation of FMOLS and DOLS Techniques. The ICFAIi University Journal of Applied Economics 1, 65-80. Singer, Hans (1995) Are the Structural Adjustment Programmes Successful? Pakistan Journal of Applied Economics 11:1 and 2. Solow, R. M. (1956) A Contribution to the Theory of Economic Growth. Quarterly Journal of Economics 70, 65-94. Stark, O. and R. Lucas (1988) Migration, Remittances and the Family. Economic Development and Cultural Change 36, 465-81. Taylor, J. E. (1992) Remittances and Inequality Reconsidered: Direct, Indirect and Intertemporal Effects. Journal of Policy Modeling 14, 187-208. Tobin, J. (1965) Money and Economic Growth. Econometrica 33, 671-684. Ulku, H. (2004) R&D Innovation and Economic Growth: An Empirical Analysis. (IMF Working Paper 185.) (1) Previous theories on growth were based on the assumption of constant return to scale. But increasing productivity due to improvements in human capital, technological developments, more investment in research and development (R&D) violate this assumption. 
This phenomenon has been stressed in various endogenous growth models. The strong relation between innovation and economic growth has also been empirically affirmed by many studies [see Fagerberg (1987); Lichtenberg (1992); Ulku (2004)]. (2) The result is supporting with the view that poorer countries do not have the legal, social, and political institution would necessary to full enjoy the benefits of capital account (3) Better understanding to see Lopez Cordova (2005) on education a study for Mexico McCormick and Wahba (2001) on entrepreneurship in Egypt; Dustmann and Kirchkamp (2002) on entrepreneurship in Turkey: Nishat and Bilgrami (1991) also found that remittances have positive impact on consumption, investment and imports. Similarly, Iqbal and Sattar (2005) found that workers' remittances appeared to be the third important source of capital for economic growth in Pakistan. (4) Research Analyst in State Bank of Pakistan. (5) [bar.[alpha]] = -7, If xt = {1} and [bar.c] = -13.7 [bar.[alpha]] = -7, If xt = {1,t}. (6) This theoretical formation of ARDL is based on Chandan (2002). (7) If the F-test statistic exceeds the upper critical value, the null hypothesis of no long-run relationship can be rejected regardless of whether the underlying orders of integration of the variables are I(0) or I(1). Similarly, if the F-test statistic falls below the lower critical value, the null hypothesis is not rejected. However, if the sample F-test statistic falls between these two bounds, the result is inconclusive. When the order of integration of the variables is known and all the variables are I(1), the decision is made based on the upper bounds. Similarly, if all the variables are I(0), then the decision is made based on the lower bounds. (8) The mean prediction error of AIC based model is 0.0005 while that of SBC based model is 0.0063 [Shrestha (2003)]. (9) We also utilised these three tests but decision in based on Ng-Perron test. Muhammad Shahbaz and Khalil Ahmad are MPhil students and A. R. Chaudhary is Professor of Economics at National College of Business Administration and Economics, Lahore, Pakistan. Table 1 Unit Root Estimation Ng-Perron at Level Variables MZa MZt MSB MPT GDPR -2.24659 -0.93072 0.41428 34.5487 FD -6.31198 -1.77509 0.28123 14.4366 FDI -4.62640 -1.40700 0.30412 18.9167 REM -4.20373 -1.35165 0.32154 20.7106 TR -3.22751 -1.27009 0.39352 28.2284 INF -0.06434 -1.71566 0.28291 15.0046 Ng-Perron at First Difference GDPR -20.2050 (b) -3.17820 0.15730 4.51153 FD -17.9143 (b) -2.98975 0.16689 5.10575 FDI -23.9579 (a) -3.46080 0.14445 3.80512 REM -34-3966 (a) -4.14688 0.12056 2.65036 TR -31.8459 (a) -3.98930 0.12527 2.86748 INF -28.0171 (a) -3.73957 0.13347 3.27150 Note: a (b) representing significance at 1 percent (5 percent) level of significance. 
Table 2 Lag Length and Cointegration Estimation Akaike Information Schwarz Log Lag- order Criteria Criteria Likelihood F-statistics 3 -11.950 -8.1045 50.9202 5.441 4 -13.215 -8.1553 93.2191 5.674 Short-run Diagnostic Tests Serial Correlation LM Test = 0.2671 (0.6073) ARCH Test: 1.3585 (0.264831) Heteroscedisticity Test = 0.9213 (0.5586) Jarque-Bera Test = 0.3850 (0.8248) Table 3 Long Run Correlations Dependent Variable: GDPR Variable Coefficient T-statistic Coefficient T-statistic Constant 12.047 120.077 (a) 11.992 146.153 (a) FD 0.4642 8.0658 (a) 0.4065 7.4253 (a) REM 0.0693 3.3141 (a) 0.0565 2.9702 (a) TR -0.2369 -3.5454 (a) -0.2568 -4.1965 (a) FDI 0.0283 1.8024 (a) -- -- INV -- -- 0.2058 3.8476 (a) INF -0.0155 -1.6453 (c) -0.0132 -1.6351 (c) R-squared = 0.9229 R-squared = 0.931572 Adjusted R-squared = 0.9165 Adj-R-squared = 0.925870 Akaike info Criterion = -2.588 Akaike info Criterion = -2.75 Schwarz Criterion = -2.390 Schwarz Criterion = -2.560 F-Statistic = 146.049 F-Statistic = 163.36 Prob(F-statistic) = 0.000 Prob(F-statistic) = 0.000 Durbin-Watson = 1.95 Durbin-Watson = 2.10 Note: a (c) represent the significance at 1 percent (10 percent) level of significance. Table 4 Short-run Correlations Dependent Variable = AGDPR Variable Coefficient Std. Error t-Statistic Prob. Constant 0.0301 0.0081 3.7012 0.0005 [DELTA][GDPR.sub.t-1] 0.2642 0.1336 1.9774 0.0530 [DELTA][GDPR.sub.t-2] 0.0540 0.0768 0.7032 0.4849 [DELTA]FDI 0.0038 0.0102 0.3762 0.7082 [DELTA]FD -1.0276 0.0996 -10.308 0.0000 [DELTA][FD.sub.t-1] 0.3408 0.1658 2.6553 0.0446 [DELTA]REM -0.0048 0.0269 -0.1818 0.8564 [DELTA]TR -0.1690 0.0596 -2.8339 0.0064 [DELTA]INF -0.0073 0.0052 -1.4073 0.1650 [ecm.sub.t-1] -0.2705 0.1328 -2.0372 0.0464 R-squared = 0.9387 Adjusted R-squared = 0.9287 Akaike info criterion = -3.7654 Schwarz criterion = -3.4309 F-statistic = 93.6420 Durbin-Watson stat = 1.9488 Prob(F-statistic) = 0.0000 Gale Copyright 2008 Gale, Cengage Learning. All rights reserved.
{"url":"http://www.biomedsearch.com/article/Economic-growth-its-determinants-in/228122160.html","timestamp":"2014-04-21T07:50:10Z","content_type":null,"content_length":"58883","record_id":"<urn:uuid:436ac9d0-efc5-4e96-8ad3-81854494ecde>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Cherry Hill Township, NJ Math Tutor Find a Cherry Hill Township, NJ Math Tutor ...I look forward to working with you and your child!I am a certified and current teacher in the public schools. My New Jersey certification is k-12. In PA, I am certified k-6. 12 Subjects: including prealgebra, trigonometry, algebra 1, algebra 2 I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun... 12 Subjects: including calculus, trigonometry, SAT math, ACT Math ...This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and non-euclidean geometry. I taught Trigonometry with a national tutoring chain for five years. I have taught Trigonometry as a private tutor since 2001. 12 Subjects: including geometry, logic, algebra 1, algebra 2 ...My philosophy is just that, show them all that is good about them so they will feel empowered to do more. I use a variety of methods in teaching, some students are hands on/visual students other are more logical...as such I use a blend of teaching techniques that are as diverse as our children. ... 13 Subjects: including differential equations, logic, prealgebra, reading ...I am located in the South Jersey area and would be happy to tutor you in your home or at a location that is convenient to you. I look forward to hearing from you!I have been trained in classical piano since the age of 4, and jazz piano since the age of 15. I have won national competitions and performed all across the country. 12 Subjects: including algebra 1, algebra 2, biology, prealgebra Related Cherry Hill Township, NJ Tutors Cherry Hill Township, NJ Accounting Tutors Cherry Hill Township, NJ ACT Tutors Cherry Hill Township, NJ Algebra Tutors Cherry Hill Township, NJ Algebra 2 Tutors Cherry Hill Township, NJ Calculus Tutors Cherry Hill Township, NJ Geometry Tutors Cherry Hill Township, NJ Math Tutors Cherry Hill Township, NJ Prealgebra Tutors Cherry Hill Township, NJ Precalculus Tutors Cherry Hill Township, NJ SAT Tutors Cherry Hill Township, NJ SAT Math Tutors Cherry Hill Township, NJ Science Tutors Cherry Hill Township, NJ Statistics Tutors Cherry Hill Township, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/Cherry_Hill_Township_NJ_Math_tutors.php","timestamp":"2014-04-19T06:58:42Z","content_type":null,"content_length":"24470","record_id":"<urn:uuid:3758a9d4-ea4b-4ea4-bc4f-45d00325e095>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
The encyclopedic entry of power factor

The power factor of an electric power system is defined as the ratio of the real power to the apparent power, and is a number between 0 and 1 (frequently expressed as a percentage, e.g. 0.5 pf = 50% pf). Real power is the capacity of the circuit for performing work in a particular time. Apparent power is the product of the current and voltage of the circuit. Due to energy stored in the load and returned to the source, or due to a non-linear load that distorts the wave shape of the current drawn from the source, the apparent power can be greater than the real power.

Because the cost of each power line and transformer in a distribution system depends on the peak current it is designed to handle, a distribution system that is designed to handle the higher currents caused by loads with low power factor will cost more than a distribution system that delivers the same useful energy to loads with a power factor closer to 1.

Power factor in linear circuits

In a purely resistive AC circuit, voltage and current waveforms are in step (or in phase), changing polarity at the same instant in each cycle. Where reactive loads are present, such as capacitors or inductors, energy storage in the loads results in a time difference between the current and voltage waveforms. This stored energy returns to the source and is not available to do work at the load. Thus, a circuit with a low power factor will have higher currents to transfer a given quantity of real power than a circuit with a high power factor. Circuits containing purely resistive heating elements (filament lamps, strip heaters, cooking stoves, etc.) have a power factor of 1.0. Circuits containing inductive or capacitive elements (lamp ballasts, motors, etc.) often have a power factor below 1.0. For example, in electric lighting circuits, normal power factor ballasts (NPF) typically have a value of 0.4-0.6. Ballasts with a power factor greater than 0.9 are considered high power factor ballasts (HPF).

The significance of power factor lies in the fact that utility companies supply customers with volt-amperes, but bill them for watts. Power factors below 1.0 require a utility to generate more than the minimum volt-amperes necessary to supply the real power (watts). This increases generation and transmission costs. For example, if the load power factor were as low as 0.7, the apparent power would be 1.4 times the real power used by the load. Line current in the circuit would also be 1.4 times the current required at 1.0 power factor, so the losses in the circuit would be doubled (since they are proportional to the square of the current). Alternatively, all components of the system such as generators, conductors, transformers, and switchgear would be increased in size (and cost) to carry the extra current. Utilities typically charge additional costs to customers who have a power factor below some limit, which is typically 0.9 to 0.95. Engineers are often interested in the power factor of a load as one of the factors that affect the efficiency of power transmission.

Definition and calculation

AC power flow has three components: real power (P), measured in watts (W); apparent power (S), measured in volt-amperes (VA); and reactive power (Q), measured in reactive volt-amperes (VAr). The power factor is defined as:

$\mbox{power factor} = \frac{P}{S}.$

In the case of a perfectly sinusoidal waveform, P, Q and S can be expressed as vectors that form a vector triangle such that:
$S^2 = P^2 + Q^2.$

If $\phi$ is the phase angle between the current and voltage, then the power factor is equal to $\left|\cos\phi\right|$, and:

$P = S \left|\cos\phi\right|.$

Since the units are consistent, the power factor is by definition a dimensionless number between 0 and 1. When the power factor is equal to 0, the energy flow is entirely reactive, and stored energy in the load returns to the source on each cycle. When the power factor is 1, all the energy supplied by the source is consumed by the load. Power factors are usually stated as "leading" or "lagging" to show the sign of the phase angle, where leading indicates a negative sign.

If a purely resistive load is connected to a power supply, current and voltage will change polarity in step, the power factor will be unity (1), and the electrical energy flows in a single direction across the network in each cycle. Inductive loads such as transformers and motors (any type of wound coil) consume reactive power, with the current waveform lagging the voltage. Capacitive loads such as capacitor banks or buried cable generate reactive power, with the current phase leading the voltage. Both types of loads will absorb energy during part of the AC cycle, which is stored in the device's magnetic or electric field, only to return this energy back to the source during the rest of the cycle.

For example, to get 1 kW of real power, if the power factor is unity, 1 kVA of apparent power needs to be transferred (1 kW ÷ 1 = 1 kVA). At low values of power factor, more apparent power needs to be transferred to get the same real power. To get 1 kW of real power at 0.2 power factor, 5 kVA of apparent power needs to be transferred (1 kW ÷ 0.2 = 5 kVA). This apparent power must be produced and transmitted to the load in the conventional fashion, and is subject to the usual distributed losses in the production and transmission processes.

Linear loads

Electrical loads consuming alternating current power consume both real power and reactive power. The vector sum of real and reactive power is the apparent power. The presence of reactive power causes the real power to be less than the apparent power, and so the electric load has a power factor of less than 1. The reactive power increases the current flowing between the power source and the load, which increases the power losses through transmission and distribution lines. This results in additional costs for power companies. Reactive power can require the use of wiring, switches, circuit breakers, transformers and transmission lines with higher current capacities. Therefore, power companies require their customers, especially those with large loads, to maintain their power factors above a specified amount (usually 0.90 or higher) or be subject to additional charges. Electricity utilities measure reactive power used by high demand customers and charge higher rates accordingly. Some consumers install power factor correction schemes at their factories to cut down on these higher costs.

Power factor correction of linear loads

It is often possible and desirable to adjust the power factor of a system to near 1.0. This practice is known as power factor correction and is achieved by switching in or out banks of inductors or capacitors. For example, the inductive effect of motor loads may be offset by locally connected capacitors. When reactive elements supply or absorb reactive power near the point of reactive loading, the apparent power draw as seen by the source is reduced and efficiency is increased.
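Before looking at how correction is applied in practice, a small illustrative Java sketch (mine, not part of the original entry) makes the 1 kW arithmetic above concrete: it computes apparent power and line current for a single-phase load at a few power factors. The 230 V supply voltage is just an assumed example value.

public class PowerFactorDemo {
    /** Apparent power S (VA) needed to deliver realPowerW at the given power factor. */
    static double apparentPower(double realPowerW, double powerFactor) {
        return realPowerW / powerFactor;
    }

    /** Line current (A) for a single-phase load: I = S / V. */
    static double lineCurrent(double apparentPowerVA, double voltageV) {
        return apparentPowerVA / voltageV;
    }

    public static void main(String[] args) {
        double p = 1000.0;   // 1 kW of real power
        double v = 230.0;    // assumed single-phase supply voltage

        for (double pf : new double[] {1.0, 0.7, 0.2}) {
            double s = apparentPower(p, pf);
            double i = lineCurrent(s, v);
            System.out.printf("pf=%.1f  S=%.0f VA  I=%.1f A%n", pf, s, i);
        }
        // pf=1.0  S=1000 VA  I=4.3 A
        // pf=0.7  S=1429 VA  I=6.2 A   (I^2*R losses roughly double vs. pf=1)
        // pf=0.2  S=5000 VA  I=21.7 A
    }
}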
Power factor correction may be applied either by an electrical power transmission utility to improve the stability and efficiency of the transmission network, or it may be installed by individual electrical customers to reduce the costs charged to them by their electricity supplier. A high power factor is generally desirable in a transmission system to reduce transmission losses and improve voltage regulation at the load.
Power factor correction brings the power factor of an AC power circuit closer to 1 by supplying reactive power of opposite sign, adding capacitors or inductors which act to cancel the inductive or capacitive effects of the load, respectively. For example, the inductive effect of motor loads may be offset by locally connected capacitors. If a load has a capacitive component, inductors (also known as reactors in this context) are connected to correct the power factor. In the electricity industry, inductors are said to consume reactive power and capacitors are said to supply it, even though the reactive power is actually just moving back and forth on each AC cycle.
The reactive elements can create voltage fluctuations and harmonic noise when switched on or off. They will supply or sink reactive power regardless of whether there is a corresponding load operating nearby, increasing the system's no-load losses. In a worst case, reactive elements can interact with the system and with each other to create resonant conditions, resulting in system instability and severe overvoltage fluctuations. As such, reactive elements cannot simply be applied at will, and power factor correction is normally subject to engineering analysis.
An automatic power factor correction unit is used to improve power factor. A power factor correction unit usually consists of a number of capacitors that are switched by means of contactors. These contactors are controlled by a regulator that measures power factor in an electrical network. To be able to measure power factor, the regulator uses a CT (current transformer) to measure the current in one phase. Depending on the load and power factor of the network, the power factor controller will switch the necessary blocks of capacitors in steps to make sure the power factor stays above 0.9 or other selected values (usually demanded by the energy supplier). A typical unit consists of:
1. Network connection points
2. Inrush limiting contactors
3. Capacitors (single-phase or three-phase units, delta-connection)
4. Transformer (suitable voltage transformation to suit control power: contactors, ventilation, ...)
Instead of using a set of switched capacitors, an unloaded synchronous motor can supply reactive power. The reactive power drawn by the synchronous motor is a function of its field excitation. This is referred to as a synchronous condenser. It is started and connected to the electrical network. It operates at full leading power factor and puts VARs onto the network as required to support a system's voltage or to maintain the system power factor at a specified level. The condenser's installation and operation are identical to large electric motors. Its principal advantage is the ease with which the amount of correction can be adjusted; it behaves like an electrically variable capacitor. Unlike capacitors, the amount of reactive power supplied is proportional to voltage, not the square of voltage; this improves voltage stability on large networks.
Synchronous condensers are often used in connection with high voltage direct current transmission projects or in large industrial plants such as steel mills.
Non-linear loads
Non-sinusoidal components
In circuits having only sinusoidal currents and voltages, the power factor effect arises only from the difference in phase between the current and voltage. This is narrowly known as "displacement power factor". The concept can be generalized to a total, distortion, or true power factor where the apparent power includes all harmonic components. This is of importance in practical power systems which contain non-linear loads such as rectifiers, some forms of electric lighting, electric arc furnaces, welding equipment, switched-mode power supplies and other devices.
Non-linear loads create harmonic currents in addition to the original (fundamental frequency) AC current. Addition of linear components such as capacitors and inductors cannot cancel these harmonic currents, so other methods such as filters or active power factor correction are required to smooth out their current demand over each cycle of alternating current and so reduce the generated harmonic currents.
A typical multimeter will give incorrect results when attempting to measure the AC current drawn by a non-sinusoidal load and then calculate the power factor. A true RMS multimeter must be used to measure the actual RMS currents and voltages (and therefore apparent power). To measure the real power or reactive power, a wattmeter designed to properly work with non-sinusoidal currents must be used.
Switched-mode power supplies
A particularly important class of non-linear loads is the millions of personal computers that typically incorporate switched-mode power supplies (SMPS) with rated output power ranging from a few watts to more than 1 kW. Historically, these very-low-cost power supplies incorporated a simple full-wave rectifier that conducted only when the mains instantaneous voltage exceeded the voltage on the input capacitors. This leads to very high ratios of peak-to-average input current, which also lead to a low distortion power factor and potentially serious phase and neutral loading concerns.
A typical switched-mode power supply first makes a DC bus, using a bridge rectifier or similar circuit. The output voltage is then derived from this DC bus. The problem with this is that the rectifier is a non-linear device, so the input current is highly non-linear. That means that the input current has energy at harmonics of the frequency of the voltage. This presents a particular problem for the power companies, because they cannot compensate for the harmonic current by adding simple capacitors or inductors, as they could for the reactive power drawn by a linear load. Many jurisdictions are beginning to legally require power factor correction for all power supplies above a certain power level.
Regulatory agencies such as the EU have set harmonic limits as a method of improving power factor. Declining component cost has hastened implementation of two different methods. To comply with the current EU standard EN61000-3-2, all switched-mode power supplies with output power of more than 75 W must include at least passive PFC. 80 PLUS power supply certification requires a power factor of 0.9 or more.
Passive PFC
The simplest way to control the harmonic current is to use a filter: it is possible to design a filter that passes current only at line frequency (e.g. 50 or 60 Hz).
This filter reduces the harmonic current, which means that the non-linear device now looks like a linear load. At this point the power factor can be brought to near unity, using capacitors or inductors as required. This filter requires large-value high-current inductors, however, which are bulky and expensive. This is a simple way of correcting the nonlinearity of a load by using capacitor banks. It is not as effective as active PFC.
Active PFC
An active power factor corrector (active PFC) is a power electronic system that controls the amount of power drawn by a load in order to obtain a power factor as close as possible to unity. In most applications, the active PFC controls the input current of the load so that the current waveform is proportional to the mains voltage waveform (a sine wave). Common types of active PFC are boost, buck, and buck-boost converters. Active power factor correctors can be single-stage or multi-stage.
In the case of a switched-mode power supply, a boost converter is inserted between the bridge rectifier and the main input capacitors. The boost converter attempts to maintain a constant DC bus voltage on its output while drawing a current that is always in phase with and at the same frequency as the line voltage. Another switch-mode converter inside the power supply produces the desired output voltage from the DC bus. This approach requires additional semiconductor switches and control electronics, but permits cheaper and smaller passive components. It is frequently used in practice. For example, an SMPS with passive PFC can achieve a power factor of about 0.7–0.75, an SMPS with active PFC up to 0.99, while an SMPS without any power factor correction has a power factor of only about 0.55–0.65.
Due to their very wide input voltage range, many power supplies with active PFC can automatically adjust to operate on AC power from about 100 V (Japan) to 240 V (UK). That feature is particularly welcome in power supplies for laptops.
Measuring power factor
Power factor in a single-phase circuit (or balanced three-phase circuit) can be measured with the wattmeter-ammeter-voltmeter method, where the power in watts is divided by the product of measured voltage and current. The power factor of a balanced polyphase circuit is the same as that of any phase. The power factor of an unbalanced polyphase circuit is not uniquely defined.
A direct reading power factor meter can be made with a moving coil meter of the electrodynamic type, carrying two perpendicular coils on the moving part of the instrument. The field of the instrument is energized by the circuit current flow. The two moving coils, A and B, are connected in parallel with the circuit load. One coil, A, will be connected through a resistor and the second coil, B, through an inductor, so that the current in coil B is delayed with respect to the current in A. At unity power factor, the current in A is in phase with the circuit current, and coil A provides maximum torque, driving the instrument pointer toward the 1.0 mark on the scale. At zero power factor, the current in coil B is in phase with the circuit current, and coil B provides torque to drive the pointer towards 0. At intermediate values of power factor, the torques provided by the two coils add and the pointer takes up intermediate positions.
Another electromechanical instrument is the polarized-vane type. In this instrument a stationary field coil produces a rotating magnetic field (connected either directly to polyphase voltage sources or to a phase-shifting reactor if a single-phase application).
A second stationary field coil carries a current proportional to the current in the circuit. The moving system of the instrument consists of two vanes which are magnetized by the current coil. In operation the moving vanes take up a physical angle equivalent to the electrical angle between the voltage source and the current source. This type of instrument can be made to register for currents in both directions, giving a 4-quadrant display of power factor or phase angle.
Digital instruments can be made that either directly measure the time lag between voltage and current waveforms and so calculate the power factor, or measure both true and apparent power in the circuit and calculate the quotient. The first method is only accurate if voltage and current are sinusoidal; loads such as rectifiers distort the waveforms from the sinusoidal shape.
English-language power engineering students are advised to remember: "ELI the ICE man" or "ELI on ICE" – the voltage E leads the current I in an inductor L, the current leads the voltage in a capacitor C. Or even shorter: CIVIL – in a Capacitor the I (current) leads V (voltage); V (voltage) leads I (current) in an inductor L.
{"url":"http://www.reference.com/browse/power%20factor","timestamp":"2014-04-19T19:23:20Z","content_type":null,"content_length":"102804","record_id":"<urn:uuid:983608a5-9156-467a-8895-1ae43010083e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
1.7 Variance and Standard Deviation
From UBC Wiki
Another important quantity related to a given random variable is its variance. The variance is a numerical description of the spread, or the dispersion, of the random variable. That is, the variance of a random variable X is a measure of how spread out the values of X are, given how likely each value is to be observed.
Variance and Standard Deviation of a Discrete Random Variable
The variance, Var(X), of a discrete random variable X is
$\text{Var}(X) = \sum_{k=1}^{N} \Big(x_k - \mathbb{E}(X)\Big)^2\textrm{Pr}(X=x_k)$
where N is the total number of possible values of X.
The standard deviation, σ, is the positive square root of the variance:
$\sigma(X) = \sqrt{\text{Var}(X)}$
Observe that the variance of a random variable is always nonnegative (since probabilities are nonnegative, and the square of a number is also nonnegative). Observe also that much like the expectation of a random variable X, the variance (or standard deviation) is a weighted average of an expression of observable and calculable values. More precisely, notice that
$\text{Var}(X) = \mathbb{E}\left(\left[X - \mathbb{E}(X)\right]^2\right).$
Example: Test Scores
Using the test scores example of the previous sections, calculate the variance and standard deviation of the random variable X associated to randomly selecting a single exam.
The variance of the random variable X is given by
\begin{align} \text{Var}(X) &= \sum_{k=1}^{N} (x_k - \mathbb{E}(X))^2 \textrm{Pr}(X=x_k) \\ &= (30-64)^2 \frac{3}{10} + (60 - 64)^2\frac{2}{10} + (80 - 64)^2 \frac{3}{10} + (90-64)^2 \frac{1}{10} + (100-64)^2 \frac{1}{10} \\ &= 624 \end{align}
The standard deviation of X is then
$\sigma(X) = \sqrt{624}\approx 24.979992$
Interpretation of the Standard Deviation
For most "nice" random variables, i.e. ones that are not too wildly distributed, the standard deviation has a convenient informal interpretation. Consider the intervals
$S_m = \left[\mathbb{E}(X) - m\sigma(X),\ \mathbb{E}(X) + m\sigma(X)\right],$
for some positive integer m. As we increase the value of m, these intervals will contain more of the possible values of the random variable X. A good rule of thumb is that for "nicely distributed" random variables, all of the most likely possible values of the random variable will be contained in the interval $S_3$. Another way to say this is that, for discrete random variables, most of the PMF will live on the interval $S_3$. We will see in the next chapter that a similar interpretation holds for continuous random variables.
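The following does not appear on the wiki page; it is a small computational check of the worked example above, re-evaluating E(X), Var(X), σ and the rule-of-thumb interval S_3 directly from the stated PMF.

import math

# Test-scores PMF from the example: score -> probability
pmf = {30: 0.3, 60: 0.2, 80: 0.3, 90: 0.1, 100: 0.1}

mean = sum(x * p for x, p in pmf.items())               # E(X) = 64
var = sum((x - mean) ** 2 * p for x, p in pmf.items())  # Var(X) = 624
sigma = math.sqrt(var)                                  # about 24.98

# Rule-of-thumb interval S_3 = [E(X) - 3*sigma, E(X) + 3*sigma]
s3 = (mean - 3 * sigma, mean + 3 * sigma)

print(mean, var, sigma)  # 64.0 624.0 24.979991993...
print(s3)                # roughly (-10.9, 138.9), which contains every possible score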
{"url":"http://wiki.ubc.ca/index.php?oldid=172089","timestamp":"2014-04-19T22:06:39Z","content_type":null,"content_length":"25147","record_id":"<urn:uuid:ffa26c02-99a3-4369-904a-01d4c1bc6f64>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
John Horton Conway From Wikipedia, the free encyclopedia John Horton Conway (born 26 December 1937) is a British mathematician active in the theory of finite groups, knot theory, number theory, combinatorial game theory and coding theory. He has also contributed to many branches of recreational mathematics, notably the invention of the cellular automaton called the Game of Life. Conway is currently Professor of Mathematics and John Von Neumann Professor in Applied and Computational Mathematics at Princeton University. He is also currently a visiting professor at CUNY's Queens College. He studied at Cambridge, where he started research under Harold Davenport. He received the Berwick Prize (1971),^1 was elected a Fellow of the Royal Society (1981),^2 was the first recipient of the Pólya Prize (LMS) (1987),^1 won the Nemmers Prize in Mathematics (1998) and received the Leroy P. Steele Prize for Mathematical Exposition (2000) of the American Mathematical Society. He has an Erdős number of one.^3 Conway's parents were Agnes Boyce and Cyril Horton Conway. He was born in Liverpool.^4 He became interested in mathematics at a very early age and his mother recalled that he could recite the powers of two when he was four years old. At the age of eleven his ambition was to become a mathematician. After leaving secondary school, Conway entered Gonville and Caius College, Cambridge to study mathematics. He was awarded his BA in 1959 and began to undertake research in number theory supervised by Harold Davenport. Having solved the open problem posed by Davenport on writing numbers as the sums of fifth powers, Conway began to become interested in infinite ordinals. It appears that his interest in games began during his years studying at Cambridge, where he became an avid backgammon player, spending hours playing the game in the common room. He was awarded his doctorate in 1964 and was appointed as College Fellow and Lecturer in Mathematics at the University of Cambridge. He left Cambridge in 1986 to take up the appointment to the John von Neumann Chair of Mathematics at Princeton University. Conway resides in Princeton, New Jersey. He has seven children by various marriages, three grandchildren and four great-grand children. He has been married three times; his first wife was Eileen, and his second wife was Larissa. He has been married to his third wife, Diana, since 2001.^5 Combinatorial game theory Among amateur mathematicians, he is perhaps most widely known for his contributions to combinatorial game theory (CGT), a theory of partisan games. This he developed with Elwyn Berlekamp and Richard Guy, and with them also co-authored the book Winning Ways for your Mathematical Plays. He also wrote the book On Numbers and Games (ONAG) which lays out the mathematical foundations of CGT. He is also one of the inventors of sprouts, as well as philosopher's football. He developed detailed analyses of many other games and puzzles, such as the Soma cube, peg solitaire, and Conway's soldiers. He came up with the angel problem, which was solved in 2006. He invented a new system of numbers, the surreal numbers, which are closely related to certain games and have been the subject of a mathematical novel by Donald Knuth. He also invented a nomenclature for exceedingly large numbers, the Conway chained arrow notation. Much of this is discussed in the 0th part of ONAG. He is also known for the invention of Conway's Game of Life, one of the early and still celebrated examples of a cellular automaton. 
His early experiments in that field were done with pen and paper, long before personal computers existed. In the mid-1960s with Michael Guy, son of Richard Guy, he established that there are sixty-four convex uniform polychora excluding two infinite sets of prismatic forms. They discovered the grand antiprism in the process, the only non-Wythoffian uniform polychoron. Conway has also suggested a system of notation dedicated to describing polyhedra called Conway polyhedron notation. He extensively investigated lattices in higher dimensions, and determined the symmetry group of the Leech lattice.
Geometric topology
Conway's approach to computing the Alexander polynomial of knot theory involved skein relations, by a variant now called the Alexander-Conway polynomial. After lying dormant for more than a decade, this concept became central to work in the 1980s on the novel knot polynomials. Conway further developed tangle theory and invented a system of notation for tabulating knots, nowadays known as Conway notation, while completing the knot tables up to 10 crossings.
Group theory
He worked on the classification of finite simple groups and discovered the Conway groups. He was the primary author of the ATLAS of Finite Groups giving properties of many finite simple groups. He, along with collaborators, constructed the first concrete representations of some of the sporadic groups. More specifically, he discovered three sporadic groups based on the symmetry of the Leech lattice, which have been designated the Conway groups. With Simon P. Norton he formulated the complex of conjectures relating the monster group with modular functions, which was named monstrous moonshine by them. He introduced the Mathieu groupoid, an extension of the Mathieu group M12 to 13 points.
Number theory
As a graduate student, he proved the conjecture by Edward Waring that every integer could be written as the sum of 37 numbers, each raised to the fifth power, though Chen Jingrun solved the problem independently before the work could be published.^6 He has also done work in algebra, particularly with quaternions. Together with Neil James Alexander Sloane, he invented the system of icosians.^7
For calculating the day of the week, he invented the Doomsday algorithm. The algorithm is simple enough for anyone with basic arithmetic ability to do the calculations mentally. Conway can usually give the correct answer in under two seconds. To improve his speed, he practices his calendrical calculations on his computer, which is programmed to quiz him with random dates every time he logs on. One of his early books was on finite state machines.
Theoretical physics
In 2004, Conway and Simon B. Kochen, another Princeton mathematician, proved the Free will theorem, a startling version of the No Hidden Variables principle of Quantum Mechanics. It states that given certain conditions, if an experimenter can freely decide what quantities to measure in a particular experiment, then elementary particles must be free to choose their spins in order to make the measurements consistent with physical law. In Conway's provocative wording: "if experimenters have free will, then so do elementary particles."
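The Doomsday algorithm mentioned above can be sketched in a few lines. This is the standard textbook Gregorian-calendar formulation with the usual published century anchors and monthly "doomsday" dates; it is an illustration only, not Conway's own code or his mental shortcuts.

def doomsday(year):
    """Weekday (0=Sunday .. 6=Saturday) shared by all of the year's 'doomsday' dates."""
    anchors = {0: 2, 1: 0, 2: 5, 3: 3}   # Gregorian century anchors: 2000s=Tue, 2100s=Sun, 2200s=Fri, 2300s=Wed
    y = year % 100
    return (anchors[(year // 100) % 4] + y + y // 4) % 7

def day_of_week(year, month, day):
    """Weekday of a date, offset from the nearest memorable doomsday in its month."""
    leap = (year % 4 == 0 and year % 100 != 0) or year % 400 == 0
    month_doomsday = [3 + leap, 28 + leap, 14, 4, 9, 6, 11, 8, 5, 10, 7, 12]
    names = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
    return names[(doomsday(year) + day - month_doomsday[month - 1]) % 7]

print(day_of_week(1937, 12, 26))  # Sunday  (Conway's date of birth)
print(day_of_week(2009, 10, 9))   # Friday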
He has (co-)written several books including the ATLAS of Finite Groups, Regular Algebra and Finite Machines, Sphere Packings, Lattices and Groups,^8 The Sensual (Quadratic) Form, On Numbers and Games , Winning Ways for your Mathematical Plays, The Book of Numbers, On Quaternions and Octonions,^9 The Triangle Book (written with Steve Sigur)^10 and in summer 2008 published The Symmetries of Things with Chaim Goodman-Strauss and Heidi Burgiel. See also 1. ^ ^a ^b LMS Prizewinners 2. ^ Conway, J. H., Croft, H. T., Erdos, P., & Guy, M. J. T. (1979). On the distribution of values of angles determined by coplanar points. J. London Math. Soc.(2), 19(1), 137–143. 3. ^ "John Conway". www.nndb.com. Retrieved 2010-08-10. 4. ^ Guy, Richard K. (1989). "Review: Sphere packings, lattices and groups, by J. H. Conway and N. J. A. Sloane". Bull. Amer. Math. Soc. (N.S.) 21 (1): 142–147. 5. ^ Baez, John C. (2005). "Review: On quaternions and octonions: Their geometry, arithmetic, and symmetry, by John H. Conway and Derek A. Smith". Bull. Amer. Math. Soc. (N.S.) 42 (2): 229–243. 6. ^ http://www.goodreads.com/book/show/1391661.The_Triangle_Book Further reading External links Wikimedia Commons has media related to John Horton Conway. • O'Connor, John J.; Robertson, Edmund F., "John Horton Conway", MacTutor History of Mathematics archive, University of St Andrews. by O'Connor and Robertson • Charles Seife, "Impressions of Conway", The Sciences • Mark Alpert, "Not Just Fun and Games", Scientific American April 1999. (official online version; registration-free online version) • Jasvir Nagra, "Conway's Proof Of The Free Will Theorem" [3] • John Conway: "Free Will and Determinism in Science and Philosophy" (Video Lectures)[4] • Conway, John Horton; Curtis, Robert Turner; Norton, Simon Phillips; Parker, Richard A; Wilson, Robert Arnott (1985). Atlas of Finite Groups: Maximal Subgroups and Ordinary Characters for Simple Groups. Oxford University Press. ISBN 0-19-853199-0. • Video of Conway leading a tour of brickwork patterns in Princeton, lecturing on the ordinals, and lecturing on sums of powers and Bernoulli numbers. • "Bibliography of John H. Conway" – Princeton University, Mathematics Department • Conway, John H. "Does John Conway hate his Game of Life?" (video). Brady Haran. Retrieved 4 March 2014. Video commentary by Conway on his game. • Conway, John H. "Inventing Game of Life" (video). Brady Haran. Retrieved 7 March 2014. Video commentary by Conway on his game.
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=John_Horton_Conway","timestamp":"2014-04-16T10:39:31Z","content_type":null,"content_length":"114815","record_id":"<urn:uuid:768be662-8238-4bae-bd10-516a3b491fff>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
Weird algebra
August 12th 2013, 11:58 AM
Weird algebra
(x+1)/(x-2)=3/(x-2)+5 - Wolfram|Alpha
Why on earth can't I just multiply by (t-2) and get the solution t=2 ?! To make it absolutely clear that I should be able to just multiply by (t-2), I can rewrite this equation as:
I am VERY confused.. I do realize that t can't be 2 because of the denominator becoming 0, but why does it give me the algebraic solution t=2 and then it doesn't work?
August 12th 2013, 02:54 PM
Re: Weird algebra
I'm finding out that this is due to a problem in algebra called "extraneous solutions". Why does this problem arise? I'm having problems finding it... Is algebra flawed?
August 12th 2013, 04:28 PM
Re: Weird algebra
The reason that extraneous solutions sometimes occur is because if you were to raise both sides of the equation to a higher power such as cubed, you would increase the number of solutions. This new number might not satisfy the original equation.
August 12th 2013, 04:47 PM
Re: Weird algebra
Well it wouldn't give you the solution t=2 if you followed the rules of algebra and did not take $\frac{0}{0}=1$ (which is what you are doing when you cancel $\frac{t-2}{t-2}$ to 1). Simply put, this equation has no solution because it is wrong to begin with; you can see this more clearly if you rearrange it. Nothing divided by itself can equal 0, so you can see that the equation is false to begin with.
August 12th 2013, 05:23 PM
Re: Weird algebra
Quote:
Well it wouldn't give you the solution t=2 if you followed the rules of algebra and did not take $\frac{0}{0}=1$ (which is what you are doing when you cancel $\frac{t-2}{t-2}$ to 1). Simply put, this equation has no solution because it is wrong to begin with; you can see this more clearly if you rearrange it. Nothing divided by itself can equal 0, so you can see that the equation is false to begin with.
I realize what the problem is now. However, I did not break any rules of algebra as you claim. Thanks though.
August 12th 2013, 06:02 PM
Re: Weird algebra
Quote:
(x+1)/(x-2)=3/(x-2)+5 - Wolfram|Alpha
Why on earth can't I just multiply by (t-2) and get the solution t=2 ?! To make it absolutely clear that I should be able to just multiply by (t-2), I can rewrite this equation as:
I am VERY confused.. I do realize that t can't be 2 because of the denominator becoming 0, but why does it give me the algebraic solution t=2 and then it doesn't work?
You were going to break a rule of algebra when you planned to multiply by t-2. If you look at it in steps you'll see that
$\frac{t+1}{t-2}=\frac{3}{t-2}+5$
Multiply both sides by t-2
$\frac{(t+1)(t-2)}{t-2}=\frac{3(t-2)}{t-2}+5(t-2)$
At this point you have made the mistake of turning (t-2)/(t-2) into 1
August 12th 2013, 06:18 PM
Re: Weird algebra
August 12th 2013, 06:38 PM
Re: Weird algebra
It is not true for all t. When t=2 it is not equal to 1. So when you are trying to find a solution to the equation when you make the step to turn (t-2)/(t-2) into 1 you must make a note that any conclusions you reach are under the assumption that t is not equal to 2.
And when you continued to find a solution you reached the conclusion that t=2. As I said "any conclusions you reach are under the assumption that t is not equal to 2." so your conclusion that t=2 is under the assumption that t is not equal to 2. That is correct, but I don't feel like I am breaking any "rules of algebra". I feel that the rules of algebra are leading me to a false conclusion due to the ALGEBRA expecting a higher degree equation...Is that crazy?
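For completeness, and not as part of the original thread, the computation the posters are debating can be written out explicitly. Clearing the denominator is only legitimate when $t \neq 2$:
$\frac{t+1}{t-2}=\frac{3}{t-2}+5 \;\Rightarrow\; t+1 = 3 + 5(t-2) \;\Rightarrow\; t+1 = 5t-7 \;\Rightarrow\; 4t = 8 \;\Rightarrow\; t = 2.$
The candidate $t=2$ contradicts the assumption under which the denominator was cleared (and makes both denominators zero), so it is extraneous and the original equation has no solution.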
{"url":"http://mathhelpforum.com/algebra/221169-weird-algebra-print.html","timestamp":"2014-04-18T21:50:24Z","content_type":null,"content_length":"17081","record_id":"<urn:uuid:5e9be436-0211-4769-886a-880c7c85e445>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Odd / Even Functions.
September 11th 2010, 01:48 PM
Odd / Even Functions.
Hi, I have a test on pre calc topics on Monday in my Calc class. I am reviewing odd and even functions right now. Here is the problem:
True or False: The function f(x)= x^-3 is an odd function.
I have true and this is what i did to prove it: (-x)^-3 = -x since it equals -x it's odd right? But then I graphed it on my graphing calculator and it did not look right O_o... am I doing something wrong? Can i be neither?
September 11th 2010, 02:01 PM
Quote:
Hi, I have a test on pre calc topics on Monday in my Calc class. I am reviewing odd and even functions right now. Here is the problem:
True or False: The function f(x)= x^-3 is an odd function.
I have true and this is what i did to prove it: (-x)^-3 = -x since it equals -x it's odd right? But then I graphed it on my graphing calculator and it did not look right O_o... am I doing something wrong? Can i be neither?
if function $f(-x) = -f(x)$ function is odd.
if function $f(-x) = f(x)$ function is even.
if function is $f(-x) \neq f(x)$ and $f(-x) \neq -f(x)$ than function is neither odd neither even :D
$f(x) = x^{-3} = \frac {1}{x^3}$
$f(-x) = - \frac {1}{x^3} = - f(x)$
so it's odd function :D
and here you are graph of that odd function :D
{"url":"http://mathhelpforum.com/pre-calculus/155847-odd-even-functions-print.html","timestamp":"2014-04-19T12:46:16Z","content_type":null,"content_length":"6229","record_id":"<urn:uuid:38acedaa-7b9d-4f62-a25d-103f83376942>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
Verify that R^3\{2 points} is homotopy equivalent to S^2 V S^2 November 22nd 2011, 12:57 AM #1 Oct 2011 Verify that R^3\{2 points} is homotopy equivalent to S^2 V S^2 I need to show that $\mathbb{R}^3\backslash A$ where $A$ is two distinct points is homotopically equivalent to $S^2\vee S^2$. I can see this geometrically but am finding it difficult to come up with an explicit proof. Trying to find a deformation retraction to the two spheres but I can't work out the actual equations. Help please! Re: Verify that R^3\{2 points} is homotopy equivalent to S^2 V S^2 I need to show that $\mathbb{R}^3\backslash A$ where $A$ is two distinct points is homotopically equivalent to $S^2\vee S^2$. I can see this geometrically but am finding it difficult to come up with an explicit proof. Trying to find a deformation retraction to the two spheres but I can't work out the actual equations. Help please! You're actually supposed to find the explicit deformation retract--blah, sounds like a Hatcher problem. Try to find an explicit map that does the following. First try extending from each of the balls outwards (in each direction not facing the other ball) a line and imagine that you are shrinking $\mathbb{R}^3$ along those lines by a factor of $t$, etc. From there you'll have to make the shapes right. Re: Verify that R^3\{2 points} is homotopy equivalent to S^2 V S^2 You're actually supposed to find the explicit deformation retract--blah, sounds like a Hatcher problem. Try to find an explicit map that does the following. First try extending from each of the balls outwards (in each direction not facing the other ball) a line and imagine that you are shrinking $\mathbb{R}^3$ along those lines by a factor of $t$, etc. From there you'll have to make the shapes right. I just emailed my lecturer and he said it doesn't have to be an explicit equation. pheww. so i'm guessing i just have to describe the deformation retraction... which seems pretty easy. November 22nd 2011, 06:53 AM #2 November 22nd 2011, 12:02 PM #3 Oct 2011
{"url":"http://mathhelpforum.com/differential-geometry/192459-verify-r-3-2-points-homotopy-equivalent-s-2-v-s-2-a.html","timestamp":"2014-04-17T22:16:37Z","content_type":null,"content_length":"40108","record_id":"<urn:uuid:033533b4-f726-4c26-bcf1-9f97d1681af2>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
Total # Posts: 14 Suppose you devised a training program to raise student scores on a standardized test, such as ACT, or AIMS (like in Arizona). You first administer the test to a random sample of students, record their scores, administer the training to these students, and then administer the ... Suppose you devised a training program to raise student scores on a standardized test, such as ACT, or AIMS (like in Arizona). You first administer the test to a random sample of students, record their scores, administer the training to these students, and then administer the ... a scatter plot of the number of women in the workforce versus the number of Christmas trees sold in the United States for each year between 1930 and the present, you would find a very strong correlation. Does one cause the other? Why do you think this correlation might be true? A political action committee decided to poll its membership to determine if they were in favor of a certain potential law, similar to gun control. The results that came back were 125 in favor of the law, and 120 against. Given that the crew at the center would use the fact tha... What are three audience characteristics you think are important to identify when conducting an audience analysis Simplify the expression below. Express the answer so that all exponents are positive. Whenever an exponent is 0 or negative, we assume that the base is not 0. (7x^-5/4y^-5)^-6 Is the answer 4096x^30/ Find the real solutions of this equation. If there is no real solution enter NONE. (3X+1)^1/2=8 Is the answer 2.93 Write a formula representing the following function. The strength, S, of a beam is proportional to the square of its thickness, h. NOTE: Use k as the proportionality constant. Is it S=kh^2 How do you simplify the sqrt of z^2 + 5. Any assistance would be greatly appreciated. f(t)=1/3(t+145)^2 - 67 Find the inverse? is it t=1/3(f+145)^2-67 For a given take-off weight, the take-off distance increases if the air temperature increases. For a specific example, if a KC-135 weighs 200,000 lbs, the take-off distance is modeled by f(t)=1/3 (t+145)^2 - 67. If D = f(t) , where D is the take-off distance as a function of t,... The cost of producing q articles is given by the function C=f(q)=100+2q. (A) Find the formula for the inverse function. (B) Explain in practical terms what the inverse function tells you. I am pretty sure the answer to A is f^-1(C)=C/2-50 For B, the inverse of the function is ... The question is as follows, let c=f(A) be the cost in dollars, of building a store of area A square feet. In terms of cost and square feet what do the following quantities represent? (A) f(10,000) (B) f^-1 (20,000) I am pretty sure the answerfor A is C=10,000 and for part B it... * A and B Two wires long and parallel, each carrying a distance of 1 m in two opposite directions, if the trend A equals one third B (Ia=1/3Ib), founding after the point which lies on the line where the two vertical magnetic field when the outcome is zero. This makes no sense ...
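As a check that is not part of the original posts: for the radical equation above, $(3x+1)^{1/2}=8$, squaring both sides gives $3x+1=64$, so $x=21$; substituting back, $\sqrt{3\cdot 21+1}=\sqrt{64}=8$, so $x=21$ is the only real solution. The guessed value $x\approx 2.93$ gives $\sqrt{9.79}\approx 3.1$, not 8.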
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Hamilton","timestamp":"2014-04-19T22:35:03Z","content_type":null,"content_length":"9701","record_id":"<urn:uuid:362f76f1-70b4-478f-8af8-54f90d947ce4>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
The Second International Conference on Mathematical Modeling and Analysis of Populations in Biological Systems
October 9-11, 2009
Huntsville, Alabama

About the Conference
The general theme of the conference will be on the theory, modeling, and analysis of the temporal dynamics of biological populations. A special emphasis at this second conference will be placed on the modeling of epidemics. Mathematical modeling and analysis can be used to give insight and better understanding of the dynamics of biological populations. Mathematical models are built on trade-offs between biological accuracy and mathematical tractability. Of particular importance are the effects on a population's dynamics of modeling assumptions concerning spatial or temporal heterogeneities or concerning heterogeneities among the characteristics of individuals within the population and how these characteristics affect the way they interact with their environment. The invited speakers will address a wide variety of theoretical issues, applications (ecological, epidemiological, etc.), and case studies that illustrate the connection of models with data.

Conference Organizers
Conference Coordinator
• Jia Li, Department of Mathematical Science, University of Alabama in Huntsville, li@math.uah.edu
Scientific Advisory Committee
• Jim Cushing, Department of Mathematics, the University of Arizona (Chair)
• Thomas Banks, Center for Research in Scientific Computation, North Carolina State University
• Fred Brauer, Department of Mathematics, the University of British Columbia
• Carlos Castillo-Chavez, Department of Mathematics and Statistics, Arizona State University
• Karl Hadeler, Lehrstuhl für Biomathematik, University of Tübingen
• Mac Hyman, Theoretical Division, Los Alamos National Laboratory
Organizing Committee
• Jia Li, Department of Mathematical Science, University of Alabama in Huntsville (Chair), li@math.uah.edu
• Jim Cushing, Department of Mathematics, the University of Arizona, cushing@math.arizona.edu
• Saber Elaydi, Department of Mathematics, Trinity University, selaydi@trinity.edu

Speakers will be invited to submit papers for possible publication in special issues of JBD and JDEA that will be devoted to the conference.

Conference Schedule

Friday, Oct. 9
8:00 AM - Registration
8:30 AM - Opening Remarks, President Williams (UAH), Room 107
8:45 AM - Plenary talk, Room 107: Horst Thieme (Chair: Banks)
9:45 AM - Break
Morning parallel sessions: Room 109 (Chair: Gumel), Room 103 (Chair: Elaydi), Room 105 (Chair: Burns)
10:00 AM - Lou (109), Sacker (103), Banks (105)
10:30 AM - Hadeler (109), Hayward (103), Nisbet (105)
11:00 AM - Break
11:15 AM - Brauer (109), Henson (103), Nevai (105)
11:45 AM - Gaff (109), Robertson (103), Maroun (105)
12:15 PM - Lunch on Your Own
1:15 PM - Registration
1:45 PM - Plenary talk, Room 107: Heesterbeek (Chair: Hadeler)
2:45 PM - Break
Afternoon parallel sessions: Room 109 (Chair: Martcheva), Room 103 (Chair: Henson), Room 105 (Chair: Sacker)
3:00 PM - Chitnis (109), Liu (103), Lenhart (105)
3:30 PM - Ruan (109), Cushing (103), Ding (105)
4:00 PM - Break
4:15 PM - Xu (109), Jang (103), Rong (105)
4:45 PM - Wesley (109), Yakubu (103), Gumel (105)
5:20-6:00 PM - NSF Programs Related to Mathematical and Computational Biology, Kostova (NSF), Room 107
7:30 PM - Reception and Poster Session, Room 301

Saturday, Oct. 10
8:30 AM - Registration
8:45 AM - Plenary talk, Room 107: Lou Gross (Chair: Hyman)
9:45 AM - Break
Morning parallel sessions: Room 109 (Chair: Yakubu), Room 103 (Chair: Nisbet), Room 158 (Chair: Ruan)
10:00 AM - Hyman (109), Ackleh (103), Huang (158)
10:30 AM - Song (109), X. Li (103), Ai (158)
11:00 AM - Break
11:15 AM - Martcheva (109), Smith (103), Deng (158)
11:45 AM - Tridane (109), Wolkowicz (103), B. Li (158)
12:15 PM - Lunch on Your Own
1:15 PM - Registration
1:45 PM - Plenary talk, Room 107: Jianhong Wu (Chair: Brauer)
2:45 PM - Break
Afternoon parallel sessions: Room 109 (Chair: Lou), Room 103 (Chair: Ackleh), Room 158 (Chair: Huang)
3:00 PM - Bortz (109), Mickens (103), H. Wang (158)
3:30 PM - Dukic (109), Rael (103), J.X. Li (158)
4:00 PM - Break
4:15 PM - Eladdadi (109), Hong (103), Zou (158)
4:45 PM - Sindi (109), Leite (103), Shuai (158)

Sunday, Oct. 11
8:30 AM - Registration
8:45 AM - Plenary talk, Room 107: Alun Lloyd (Chair: Cushing)
9:45 AM - Break
Parallel sessions: Room 109 (Chair: Lenhart), Room 103 (Chair: Wolkowicz), Room 158 (Chair: Smith)
10:00 AM - Feng (109), Olofsson (103), Burns (158)
10:30 AM - Bourouiba (109), Dib (103), Weiss (158)
11:00 AM - Zhao (109), Boushaba (103), Kostova (158)
End of Conference at 11:30 AM!
Conference Speakers and Participants

Confirmed Plenary Speakers
1. Lou Gross, Departments of Ecology and Evolutionary Biology and Mathematics, University of Tennessee
2. Hans Heesterbeek, Faculty of Veterinary Medicine, Utrecht University
3. Alun Lloyd, Department of Mathematics, North Carolina State University
4. Horst Thieme, School of Mathematics and Statistics, Arizona State University
5. Jianhong Wu, Department of Mathematics and Statistics, Center for Disease Modeling, York University

Invited Speakers
1. Azmy Ackleh, Department of Mathematics, University of Louisiana at Lafayette
2. Shangbing Ai, Department of Mathematical Sciences, University of Alabama in Huntsville
3. Thomas Banks, Center for Research in Scientific Computation, Center for Quantitative Science in Biomedicine, North Carolina State University
4. David Bortz, Department of Applied Mathematics, University of Colorado
5. Lydia Bourouiba, Department of Mathematics and Statistics, York University
6. Khalid Boushaba, Department of Mathematics, Iowa State University
7. Fred Brauer, Department of Mathematics, University of British Columbia
8. John Burns, Department of Mathematics, Virginia Tech
9. Nakul Chitnis, Malaria Modeling Team, Swiss Tropical Institute, Liverpool School of Tropical Medicine
10. Jim Cushing, Department of Mathematics, University of Arizona
11. Keng Deng, Department of Mathematics, University of Louisiana at Lafayette
12. Youssef Dib, Department of Mathematics and Physics, University of Louisiana at Monroe
13. Wandi Ding, Department of Mathematical Sciences, Middle Tennessee State University
14. Vanja Dukic, Department of Health Studies, University of Chicago
15. Amina Eladdadi, Department of Mathematics, College of Saint Rose
16. Zhilan Feng, Department of Mathematics, Purdue University
17. Holly Gaff, Virginia Modeling, Analysis and Simulation Center, Old Dominion University
18. Abba Gumel, Department of Mathematics, University of Manitoba
19. Karl Hadeler, Lehrstuhl für Biomathematik, University of Tübingen
20. Jim Hayward, Department of Biology, Andrews University
21. Shandelle Henson, Department of Mathematics, Andrews University
22. Dawei Hong, Department of Computer Science, Rutgers University
23. Wenzhang Huang, Department of Mathematical Sciences, University of Alabama in Huntsville
24. Mac Hyman, Department of Mathematics, Tulane University
25. Sophia Jang, Department of Mathematics and Statistics, Texas Tech University
26. Tanya Kostova-Vassilevsk, DMS, National Science Foundation
27. Maria Leite, Department of Mathematics, University of Oklahoma
28. Suzanne Lenhart, Department of Mathematics, University of Tennessee
29. Bingtuan Li, Department of Mathematics, University of Louisville
30. Jia Li, Department of Mathematical Sciences, University of Alabama in Huntsville
31. Jiaxu Li, Department of Mathematics, The University of Louisville
32. Xue-Zhi Li, Department of Mathematics, Xingyang Normal University
33. Rongsong Liu, Department of Mathematics, University of Wyoming
34. Yuan Lou, Department of Mathematics, Ohio State University
35. Maia Martcheva, Department of Mathematics, University of Florida
36. Mariette Maroun, Department of Mathematics and Physics, University of Louisiana at Monroe
37.
Ron Mickens, Department of Physics, Clark University 38. Andrew Nevai, Department of Mathematics, University of Central Florida 39. Roger Nisbet, Department of Ecology, Evolution and Marine Biology, University of California, Santa Barbara 40. Peter Olofsson, Department of Mathematics, Trinity University 41. Rosalyn Rael, Department of Ecology and Evolutionary Biology, University of Michigan 42. Suzanne Robertson, Mathematical Biosciences Institute, Ohio State University 43. Libin Rong, Theoretical Biology and Biophysics, LANL 44. Shigui Ruan, Department of Mathematics, University of Miami 45. Robert Sacker, Department of Mathematics, University of Southern California 46. Zhisheng Shuai, Department of Mathematical and Statistical Sciences, University of Alberta 47. Suzanne Sindi, Center for Computational Molecular Biology, Brown University 48. Hal Smith, Department of Mathematics and Statistics, Arizona State University 49. Baojun Song, Department of Mathematical Sciences, Montclair State University 50. Abdessamad Tridane, Department of Applied Sciences and Mathematics, Arizona State University at the Polytechnic Campus 51. Haiyan Wang, Division of Mathematical and Natural Sciences, Arizona State University 52. Howie Weiss, Department of Mathematics, Georgia Tech 53. Curtis Wesley, Department of Mathematics, Louisiana State University at Shreveport 54. Gail Wolkowicz, Department of Mathematics and Statistics, McMaster University 55. Dashun Xu, Department of Mathematics, Southern Illinois University, Carbondale 56. Abdul-Aziz Yakubu, Department of Mathematics, Howard University 57. Shan Zhao, Department of Mathematics, University of Alabama 58. Xingfu Zou, Department of Applied Mathematics, University of Western Ontario Poster Presenters 1. Erin Bodine, Department of Mathematics, University of Tennessee 2. Thanate Dhirasakdanon, School of Mathematics and Statistics, Arizona State University 3. Heather Finotti, Department of Mathematics, University of Tennessee 4. Zhun Han, School of Mathematics and Statistics, Arizona State University 5. Kim Meyer, Department of Mathematics, University of Louisville 6. Thembinkosi Mkhatshwa, Department of Mathematics, Marshall University 7. Anna Mummert, Department of Mathematics, Marshall University 8. Sherry Towers, Department of Applied Statistics, Purdue University 9. Roy Trevino, School of Mathematics and Statistics, Arizona State University 10. Xiaohong Wang, Mathematical, Computational & Modeling Sciences Center, Arizona State University 11. Xiuquan Wang, Department of Mathematics, Southern Illinois University, Carbondale 12. Mohammed Yahdi, Department of Mathematics and Computer Science, Ursinus College 13. Yiding Yang, Department of Mathematics, Purdue University 14. Feng Yu, Department of Statistics and Epidemiology, RTI International 15. Peng Zhong, Department of Mathematics, University of Tennessee 1. Joy Agee, Department of Biology, University of Alabama in Huntsville 2. Folashade Agusto, National Institute for Mathematical & Biological Synthesis, University of Tennessee 3. Barbara Benitez-Gucciardi, Department of Mathematics, Houston Baptist University 4. Fengjuan Chen, Department of Mathematics, Zhejiang Normal University 5. Amina Dozier, Department of Mathematical Sciences, University of Alabama in Huntsville 6. Sean Ellermeyer, Department of Mathematics and Statistics, Kennesaw State University 7. Michael Kelly, National Institute for Mathematical & Biological Synthesis, University of Tennessee 8. 
Douglas Langille, Department of Mathematical Sciences, University of Alabama in Huntsville 9. Michael Lawton, Department of Ecology and Evolutionary Biology, University of Tennessee 10. Nianpeng Li, Department of Mathematics, Howard University 11. Junliang Lu, Department of Mathematical Sciences, University of Alabama in Huntsville 12. Shushuang Man, Department of mathematics & computer science, Southwest Minnesota State University 13. Marco Martinez, Department of Mathematics, University of Tennessee 14. Myla Menitt, Department of Mathematical Sciences, University of Alabama in Huntsville 15. Jing Qing, Department of Mathematics, University of Miami 16. Paul Salceanu, Department of Mathematics, University of Louisiana at Lafayette 17. Deirdre Watts, Department of Mathematical Sciences, University of Alabama in Huntsville 18. Aaron Willmon, Department of Mathematics, Walter State Community College Titles and Abstracts Plenary Speakers Dr. Louis Gross Departments of Ecology and Evolutionary Biology and Mathematics, University of Tennessee NIMBioS and the Math/Biology Interface One indicator of the potential for mathematical approaches to enhance research across the biological sciences is the increased funding in this area by agencies such as the NSF. The advent of the National Institute for Mathematical and Biological Synthesis, as a second major NSF-funded center at this interface (MBI being the other), provides evidence that the field is not only deserving of enhanced support, but additionally that there are advantages to multiple approaches to foster the growth of these interdisciplinary interactions. NIMBioS provides multiple routes for increased research and educational connections between these fields and to foster connections as well to other areas such as computation and social science. I will describe the opportunities that NIMBioS provides, give examples of the research and educational initiatives already underway, and provide some personal thoughts about future directions in both the development of general theory and the application to important practical issues. Dr. Hans Heesterbeek Faculty of Veterinary Medicine, University of Utrecht The basic reproduction number $R_{0}$ is arguably the most important quantity in infectious disease epidemiology. The next-generation matrix (NGM) is the natural basis for the definition and calculation of $R_{0}$ where finitely many different categories of individuals are recognised. I clear up confusion that has been around in the literature concerning the construction of this matrix, specifically for the most frequently used so-called compartmental models. I present a detailed easy recipe for the construction of the NGM from basic ingredients derived directly from the specifications of the model. We show that two related matrices exist which we define to be NGM with large domain and the NGM with small domain. The three matrices together reflect the range of possibilities encountered in the literature for the characterisation of $R_{0}$. I show how they are connected and how their construction follows from the basic model ingredie! nts, and establish that they have the same non-zero eigenvalues, the largest of which is the basic reproduction number $R_{0}$. Although I present formal recipes based on linear algebra, the construction of the NGM by way of direct epidemiological reasoning is strongly encouraged, using the clear interpretation of the elements of the NGM and of the model ingredients. 
I present a selection of examples as a practical guide to the methods. Finally, I will show several applications of next-generation matrices for epidemiological systems, notably the possible insights that can be gained from sensitivity analysis of $R_{0}$ using the NGM. The largest part of this lecture is based on recent joint work with Odo Diekmann (Utrecht) and Mick Roberts (Auckland): The construction of next-genenation matrices for compartmental epidemic systems (Diekmann, Heesterbeek & Roberts, submitted). The latter part is based on the paper: Elasticity analysis in epidemiology: an application to tick-borne infections (Matser, Hartemink, Heesterbeek, Galvani & Davis; Ecology Letters, 2009, in press) Dr. Alun Lloyd Department of Mathematics, North Carolina State University Mosquito borne infections, most notably malaria and dengue, kill over a million people every year. Traditional control measures (such as insecticides) against these infections in developing countries have had mixed success. A novel avenue of attack involves the production and release of mosquitoes that have been manipulated or genetically engineered to be less able, or even unable, to transmit infection. The manipulated mosquitoes will, however, be less fit than native (wildtype) mosquitoes, and so would not be expected to spread in the wild. Selfish genesones that are able to "bend" the laws of Mendelian inheritance, getting transmitted to a higher fraction of offspring than would be expectedhave been suggested as a way of overcoming this problem, driving the desired trait into wild populations. Mathematical modeling is playing an important role in several largescale projects that are currently under way to assess the feasibility of these genetic techniques. In this talk I shall discuss the biology of some of the approches and the accompanying modeling work, illustrating how a number of different models are being used as the projects move along the path from labbased studies to possible field deployment. Dr. Horst R. Thieme School of Mathematical and Statistical Sciences, Arizona State University (Joint work with Hal L. Smith) The theory of persistence is designed to provide an answer to such questions as which species, in a mathematical model of interacting species, will survive over the long term. In a mathematical model of an epidemic, will the disease drive a host population to extinction or will the host persist? Can a disease remain endemic in a population? Persistence theory can give a mathematically rigorous answer to these questions: it establishes an positive long-term lower bound for the component of a dynamical system such as population size or disease prevalence; if persistence is uniform, this lower bound does not depend on the initial state of the system. Persistence theory conveniently uses the language of dynamical systems, notably semiflows on metric spaces. A powerful but also restricting assumption is the existence of a compact attractor of points. This assumption excludes, among other things, the consideration of growing populations. This talk explores how much it can be relaxed. The Lotka-Volterra predator-prey system shows that some features of a compact attractor must be retained for uniform persistence. Applications are presented to the spread of infectious diseases in growing populations and to dividing cells in a chemostat with age-dependent resource uptake and division rates. Dr. 
Jianhong Wu Department of Mathematics and Statistics, Centre for Disease Modeling, York University Virulent outbreaks of Highly Pathogenic Avian Influenza since 2005 have raised the question about the roles of migratory and wild birds in this disease's transmission dynamics. Despite increased monitoring, the role of wild waterfowl as the primary source of the highly pathogenic H5N1 has not been clearly established, and the consequence of outbreaks of HPAI among species of wild birds for the local and non-local ecology where migratory species are established has not been quantified. Understanding the entangled dynamics of migration and the disease dynamics is key to planning of prevention and control strategies for humans, migratory birds and the poultry industry. This talk will introduce the various factors involved in the spatial spread of H5N1 in Asia and present the results of a few dynamical models of seasonal migration linking the local dynamics during migratory stopovers to the larger-scale migratory routes. The effect of repeated epizootic at specific migratory stopovers for Bar-headed geese (Anser indicus) will be discussed as an illustration of the ecological impact of H5N1 outbreaks. Issues relevant to the co-existence and interaction of low and high pathogenic strains will be addressed, and some challenging problems in the theory of monotone periodic processes and nonlinear dynamical systems described by delay differential equations with periodic coefficients will be presented. (This talk is based on projects in collaboration with Lydia Bourouiba, Venkaka Duvvuri, Stephen Gourley, Rongsong Liu and Sasha Alexandra Teslya.) Invited Talks Competitive Exclusion in a Discrete-Time, Stage-Structured Population Model Dr. Azmy Ackleh Department of Mathematics, University of Louisiana at Lafayette We develop and analyze a discrete-time stage-structured population model that describes the competition of two similar species. We show that if one of the species has invasion reproductive number greater than one and the other has invasion reproduction number less than one, then competitive exclusion occurs and the winner species is the one with the larger invasion reproductive number. Stationary Periodic and Homoclinic Solutions for 1-D Nonlocal Reaction-diffusion Equations Dr. Shangbing Ai Department of Mathematical Sciences, University of Alabama in Huntsville Spatially periodic patterns for 1-D nonlocal reaction-diffusion equations arise from various biological models. The problem reduces to study periodic and homoclinic solutions of differential equations with perturbations containing convolution terms. We consider the case that the system is time-reversible. Assuming the unperturbed system has a family of periodic orbits surrounded by a homoclinic orbit, we establish the persistence of these solutions for the perturbed system. Estimation of Cell Proliferation Dynamics Using CFSE Data Dr. H. T. Banks Center for Research in Scientific Computation, Center for Quantitative Science in Biomedicine, North Carolina State University Advances in fluorescent labeling of cells as measured by flow cytometry have stimulated recent illuminating quantitative studies of proliferating populations of cells. We discuss our recent efforts on a new class of mathematical models based on fluorescence intensity as a structure variable to describe the evolution in time of proliferating cells labeled by carboxyfluorescein succinimidyl ester (CFSE). Early models and several extensions/modifications are discussed. 
Suggestions for improvements are presented and analyzed with respect to statistical significance for better agreement between model solutions and experimental data. These investigations reveal that the new decay/label loss and time-dependent effective proliferation and death rates which we introduce do indeed provide improved fits of the model to data as well as new understanding of the data itself. Statistical models for the observed variability/noise in the data are discussed with implications for uncertainty quantification. The resulting new cell dynamics models should prove useful in proliferation assay tracking and modeling, with applications in numerous areas of disease progression (such as cancer, HIV and other viruses, etc.) as well as in microbiology.

Fragmentation and Aggregation of Bacterial Emboli
Dr. David Bortz
Department of Applied Mathematics, University of Colorado
Klebsiella pneumoniae is one of the most common causes of intravascular catheter infections, potentially leading to life-threatening bacteremia. These bloodstream infections dramatically increase the mortality of illnesses and often serve as an engine for sepsis. Our current model for the dynamics of the size-structured population of aggregates in a hydrodynamic system is based on the Smoluchowski coagulation equations. In this talk, I will discuss the progress of several investigations into properties of our model equations. In particular, I will focus on: a) accurate characterization of the fractal properties of the aggregates, b) a differential geometry approach to fragmentation modeling, and (time permitting) c) self-similar solutions to the equations.

Effect of Cross-Immunity between High and Low Pathogenic Strains of Avian Influenza in Wild Birds at Seasonal Migration Stopovers
Dr. Lydia Bourouiba
Department of Mathematics and Statistics, York University
Many species of wild birds are identified to be highly susceptible to the highly pathogenic strain of H5N1 despite being known as natural reservoirs of low pathogenic avian influenza viruses. Understanding the disease dynamics of avian influenza in wild birds at various stopovers of their seasonal migration is important for the evaluation of both the role of wild birds in the spread of H5N1 and the ecological impact of H5N1 outbreaks on these species. Recent experimental studies identified a temporary cross-immunity between low pathogenic strains and the highly pathogenic H5N1 strain in certain species of wild birds. The data focused on species of birds which are more susceptible to these strains. In this talk, I will discuss the impact of this cross-immunity observed at the individual bird level on the population as a whole. The effect of the seasonal prevalence of the low pathogenic strains on the change of the highly pathogenic strain disease dynamics will be discussed in the context of previous epidemics observed in bird populations. This talk is based on projects in collaboration with Sasha Alexandra Teslya and Jianhong Wu.

A Mathematical Feasibility for the Use of Aptamers in Chemotherapy and Imaging
Dr. Khalid Boushaba
Department of Mathematics, Iowa State University
A challenge for drug design is to create molecules with optimal function that also partition efficiently into the appropriate in vivo compartment(s).
This is particularly true in cancer treatments because cancer cells upregulate their expression of multidrug resistant transporters, which necessitates a higher concentration of extracellular drug to promote sufficiently high intracellular concentrations for cell killing. Pharmacokinetics can be improved by ancillary molecules, such as cyclodextrins, that increase the effective concentrations of hydrophobic drugs in the blood by providing a hydrophobic binding pocket. However, the extent to which the extracellular concentration of drug can be increased is limited. A second approach, different from the "push" mechanism just discussed, is a "pull" mechanism by which the effective intracellular concentration of a drug is increased by a molecule with an affinity for the drug that is located inside the cell. Here we propose and give a proof in principle that intracellular RNA aptamers might perform this function. The mathematical model considers the following: Suppose I denotes a drug (inhibitor) which must be distributed spatially throughout a cell, but tends to remain outside the cell due to the transport properties of the cell membrane. Suppose that E, a deleterious enzyme that binds to I, is expressed by the cell and remains in the cell. Here we evaluate the use of an intracellular aptamer with affinity for the inhibitor (I) to increase the efficiency of inhibitor transport across the cell membrane. We show that this outcome will occur if (1) the aptamer binds the inhibitor neither much more tightly nor much more weakly than the enzyme does and (2) the aptamer is much more diffusible in the cell cytoplasm than the enzyme. We illustrate these possibilities with numerical simulations. The ability of the aptamer to increase the intracellular concentration of its ligand (the inhibitor in the above case) could also be put to use for imaging the cell. Thus, we propose and show by simulation that an intracellular aptamer can be enlisted for an integrated approach to increasing inhibitor effectiveness and imaging aptamer-expressing cells.

Backward Bifurcations in a Simple Disease Transmission Model
Dr. Fred Brauer
Department of Mathematics, University of British Columbia
We describe a simple disease transmission model with demographics, imperfect vaccination, and recovery with temporary immunity. We derive a necessary condition for the existence of a backward bifurcation which cannot be satisfied if the immunity is permanent. For sufficiently rapid disease dynamics, this condition is also sufficient.

Sensitivity Analysis of Cancer Models with Proliferating and Quiescent Cells
Dr. John A. Burns
Interdisciplinary Center for Applied Mathematics, Virginia Tech
In this presentation we discuss a model of tumor growth that includes quiescent cells and the immune system response to a cycle-phase-specific drug. Tumor cells can be divided into proliferating or cycling cells and non-proliferating or quiescent cells. A cell is considered "cancerous" when it has lost its ability to regulate cell growth and division, leading to rapid uncontrolled growth of malignant cells. The model considers three populations of cancer cells and the immune system. The three populations considered in this talk are the quiescent cells, the tumor cells during interphase and the tumor cells during mitosis. Delay differential equations are used to model the system to take into account the phases of the cell cycle.
We then focus on a particular method (the Sensitivity Equation Method) for computing the model sensitivities and use these sensitivities to help predict the long term behavior of the model with and without drug treatment.

Mathematical Modeling of Malaria Epidemiology and Control
Dr. Nakul Chitnis
Malaria Modeling Team, Swiss Tropical Institute, Liverpool School of Tropical Medicine
Malaria interventions are usually prioritized using efficacy estimates from intervention trials, without considering the context of existing intervention packages or long term dynamics. We use numerical simulation of mathematical models of malaria in humans and mosquitoes to provide robust quantitative predictions of the effectiveness of different strategies in reducing transmission, morbidity and mortality. We link individual-based stochastic simulation models for malaria in humans with a deterministic model for mosquito infection and survival, incorporating variations in host exposure to infectious bites, naturally acquired immunity to infection and disease, effects of co-infection, and variations in human infectiousness. We can reasonably well reproduce malariological patterns in endemic areas, including non-monotonic relationships between parasite prevalence and disease incidence with host age and force of infection, and provide quantitative relationships between malaria morbidity and mortality and increasing coverage of vector control interventions, intermittent preventive treatment in infants, and different vaccines.

The Fundamental Bifurcation Theorem and Darwinian Matrix Models
Dr. J. M. Cushing
Department of Mathematics, University of Arizona

Stability of a Delay Equation Arising from a Juvenile-Adult Model
Dr. Keng Deng
Department of Mathematics, University of Louisiana at Lafayette
We consider a delay equation that has been formulated from a juvenile-adult population model. We give conditions on the vital rates to ensure local stability of the positive equilibrium. We also show that under certain conditions the trivial equilibrium is asymptotically stable. We then make numerical simulations to describe the rich dynamical behavior of the model.

In Vivo and In Vitro HSV-1 Infections: Latency-Reactivation by a Systems Theory Approach
Dr. Youssef Dib
Department of Mathematics and Physics, University of Louisiana at Monroe
A nonlinear mathematical model for HSV-1 viral infections will be derived from its biological background. Differentiated cells are the host of this virus. Once infected, such a cell would survive as long as it hosts this virus. It is assumed that both HSV-1's DNA and the nuclear DNA of the differentiated cell depend on thyroid hormone liganded with its receptor. Numerical simulations demonstrating the biological relevance will be shown. In addition, future research directions for this model will be discussed.

Optimal Control for a Tick Disease Model Using Hybrid ODE Systems
Dr. Wandi Ding
Department of Mathematical Sciences, Middle Tennessee State University
We are considering an optimal control problem for a type of hybrid system involving ordinary differential equations and a discrete time feature. One state variable has dynamics in only one season of the year and has a jump condition to obtain the initial condition for that corresponding season in the next year. The other state variable has continuous dynamics. Given a general objective functional, existence, necessary conditions and uniqueness for an optimal control are established.
We apply our approach to a tick-transmitted disease model with age structure in which the tick dynamics change seasonally while hosts have continuous dynamics. The goal is to maximize disease-free ticks and minimize infected ticks through an optimal control strategy of treatment with acaricide. Numerical examples are given to illustrate the results.

Tracking Influenza Using Particle Learning Algorithms
Dr. Vanja Dukic
Department of Health Studies, University of Chicago
In this talk we introduce a novel approach, based on the particle learning (PL) methodology, for classic epidemic models from the family of susceptible-exposed-infected-recovered (SEIR) models. The proposed approach is particularly well-suited to on-line learning and surveillance of infectious diseases. As compared to the widely used MCMC (O'Neill and Roberts 1999, Elderd et al. 2006, Leman et al. 2009) and perfect sampling (Fearnhead and Meligkotsidou 2004) based methods, the PL method, which is based on the clever use of sufficient statistics as added states, is more robust, easier to implement, and readily generalizable to problems with more complex dynamics.

Mathematical Modeling of the Effects of HER2 Over-Expression in Breast Cancer
Dr. Amina Eladdadi
Department of Mathematics, The College of Saint Rose
(Joint work with David Isaacson, Department of Mathematical Sciences, Rensselaer Polytechnic Institute)
Members of the type I receptor tyrosine kinase (RTK) family, which consists of the epidermal growth factor receptor (EGFR), HER2 (ErbB2), HER3 (ErbB3) and HER4 (ErbB4), play a crucial role in the growth and differentiation of both normal and malignant mammary epithelial cells. The carcinogenic effects of HER2 protein overexpression on cell growth and cell proliferation have been observed in a variety of experimental systems. These observations suggest that HER2 overexpression provides tumor cells with a growth advantage leading to a more aggressive phenotype. Although these effects have been attributed to high levels of HER2 expression, there have been no quantitative linkages between HER2 expression levels and the proliferation rate of HER2-overexpressing cells. To investigate the effects of HER2 receptor overexpression on cell proliferation, we have developed a mathematical model that describes the proliferative behavior of HER2-overexpressing cells as a function of the HER2 expression level. The proliferation model formulates the cell proliferation rate as a function of the cell surface HER2 and EGFR receptor numbers and ligand concentration. The model enables us to simulate the proliferative behavior of HER2-overexpressing cells with various HER2 and EGFR expression levels at various ligand concentrations. Numerical simulations of the model give good agreement with the experimental data, in which an increase in HER2 receptors leads to increased cell proliferation.

Modeling the Evolutionary Implications of Influenza Medication Strategies
Dr. Zhilan Feng
Department of Mathematics, Purdue University
(Joint work with J. Glasser, R. Liu, Z. Qiu, D. Xu, and Y. Yang)
Medication and treatment are important measures for the prevention and control of influenza. However, the benefit of antiviral use can be compromised if drug-resistant strains arise. Consequently, not only may the epidemic size increase with a higher level of treatment, but the viruses may also become more resistant to the antiviral drugs. We use a mathematical model to explore the impact of antiviral treatment on the transmission dynamics of influenza.
The model includes both drug-sensitive and drug-resistant strains. Analytical and numerical results of the model show that the conventional quantity used for the control reproduction number is not appropriate for gaining insights into the disease dynamics. We derive a new reproduction number by considering multiple generations of infection, and demonstrate that this new reproduction number provides a more reasonable measure for evaluating control programs as well as the evolutionary implications of influenza medication.

Optimal Control of Tick-Borne Disease
Dr. Holly Gaff
Virginia Modeling, Analysis and Simulation Center, Old Dominion University
Human monocytic ehrlichiosis (Ehrlichia chaffeensis), or HME, is a tick-transmitted, rickettsial disease with growing impact in the United States. The risk of a tick-borne disease such as HME to humans can be estimated using the prevalence of that disease in the tick population. A deterministic model for HME is explored to investigate the underlying dynamics of prevalence in tick populations, particularly when spatial considerations are allowed. Optimal control is applied to this model to identify how limited resources can best be used to reduce the risk of tick-borne diseases to humans.

Mathematical Recipe for HIV Elimination in Resource-Poor Settings
Dr. Abba Gumel
Department of Mathematics, University of Manitoba
I will present a model for the transmission dynamics of HIV/AIDS in a population, and show how such a model (and its analyses) could provide a cost-effective roadmap for the effective control and/or elimination of HIV in a resource-poor setting, such as Nigeria.

The Largest Basic Reproduction Number in Multi-Type Epidemic Models
Dr. Karl Hadeler
Department of Mathematical and Statistical Sciences, Arizona State University
The basic reproduction number for a multi-group epidemic model depends on the distribution of types. Determining the worst case amounts to maximizing the spectral radius ρ(XA), where A is a given non-negative matrix and X is a variable non-negative diagonal matrix with trace equal to one. Lower bounds for the maximum can be obtained and improved without computing eigenvalues. Upper bounds can be computed using the maximal eigenvalue of the matrix A. (Joint work with Ludwig Elsner)
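As a concrete illustration of the maximization problem just described, the following minimal sketch searches the simplex of type distributions numerically (a crude Monte-Carlo search for illustration only; the matrix A below is hypothetical, and the lower and upper bounds derived in the talk are not implemented here):

    import numpy as np

    def spectral_radius(M):
        # Spectral radius: largest modulus of the eigenvalues of M.
        return max(abs(np.linalg.eigvals(M)))

    def worst_case_R0(A, n_samples=20000, seed=0):
        # Search over diagonal X >= 0 with trace(X) = 1 for the largest rho(X A).
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        best_val, best_x = -np.inf, None
        for _ in range(n_samples):
            x = rng.dirichlet(np.ones(n))          # a random point on the probability simplex
            val = spectral_radius(np.diag(x) @ A)
            if val > best_val:
                best_val, best_x = val, x
        return best_val, best_x

    A = np.array([[1.0, 2.0],
                  [0.5, 1.5]])                     # hypothetical non-negative matrix
    r_worst, x_worst = worst_case_R0(A)
    print(r_worst, x_worst, spectral_radius(A))    # compare with rho(A) as a reference value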
Socially-Induced Ovulation Synchrony in a Seabird Colony: a Discrete-Time Model
Dr. James L. Hayward
Department of Biology, Andrews University
(Joint work with Shandelle M. Henson and J. M. Cushing)
Spontaneous oscillator synchrony has been documented in a wide variety of electrical, mechanical, chemical, and biological systems, including the menstrual cycles of women and estrous cycles of Norway rats. In temperate regions, many colonial birds breed seasonally in a time window set by photoperiod; some studies have suggested that heightened social stimulation in denser colonies can lead to a tightened annual reproductive pulse. It has been unknown, however, whether the analogue of menstrual synchrony occurs in birds, that is, whether avian ovulation cycles can synchronize on a daily timescale within the annual breeding pulse. We present data on every-other-day egg-laying synchrony in a breeding colony of glaucous-winged gulls (Larus glaucescens) and show that the level of synchrony declined with decreasing colony density. We also discuss a discrete-time mathematical model based on the hypothesis that preovulatory luteinizing hormone surges synchronize through social stimulation.

The Selective Advantage of Ovulation Synchrony in Colonial Seabirds: a Darwinian Dynamics Model
Dr. Shandelle M. Henson
Department of Mathematics, Andrews University
(Joint work with J. M. Cushing and James L. Hayward)
The existence of socially-induced ovulation synchrony in colonial seabirds begs the question of selective advantage. We pose a discrete-time population model for colonial seabirds, incorporating a social stimulation parameter that can induce ovulation synchrony during the breeding season. Using the birth rate as a bifurcation parameter, we prove the existence of a transcritical bifurcation of positive periodic solutions, and show that the bifurcation is supercritical in the absence of social stimulation. In the presence of social stimulation, the bifurcation can become subcritical, and the branch of positive solutions bends back to the right, lying above the branch for which there is no social stimulation. If the population model is coupled to a dynamic model for an evolving trait related to social stimulation, the resulting Darwinian dynamics model predicts that the system will evolve to a state for which ovulation synchrony exists.

A New Look at Stochastic Resonance Enhancement of Mammalian Auditory Information Processing
Dr. Dawei Hong
Center for Computational and Integrative Biology, Department of Computer Science, Rutgers University

Dynamics of an SIS Type of Reaction-Diffusion Epidemic Model
Dr. Wenzhang Huang
Department of Mathematical Sciences, University of Alabama in Huntsville
Recently an SIS epidemic reaction-diffusion model with Neumann (or no-flux) boundary condition has been proposed and studied by several authors to understand the dynamics of disease transmission in a spatially heterogeneous environment in which the individuals are subject to random movement. Many important and interesting properties have been obtained, such as the role of the diffusion coefficients in defining the reproduction number, the global stability of the disease-free equilibrium, the existence of positive endemic steady states, and the asymptotic profiles of the endemic steady states as one of the diffusion coefficients becomes sufficiently small (or large). In this research we study a modified SIS diffusion model with the Dirichlet boundary condition. Results on the dynamics of disease transmission and open problems on the model will be presented.

Using Models to Help Control the H1N1 Pandemic
Dr. Mac Hyman
Department of Mathematics, Tulane University
We must use all of the tools available to advance epidemic models, from qualitative insight to quantitative predictions, to devise effective strategies to minimize the impact and spread of infectious diseases such as the current H1N1 flu pandemic. I will review the lessons learned from mathematical modeling of previous epidemics, with the goal of identifying ways that our mathematical models can be used to help improve the effectiveness of public health intervention measures. In particular, I will describe how mathematical models can estimate the benefits and the costs of projected interventions and project the requirements that an epidemic will place on the health care system.

Dynamics of an Age-Structured Population with Allee Effects and Harvesting
Dr. Sophia Jang
Department of Mathematics and Statistics, Texas Tech University
In this talk we introduce a discrete-time, age-structured single population model with Allee effects and harvesting. It is assumed that survival probabilities from one age class to the next are constants and the fertility rate is a function of weighted total population size. Global extinction is certain if the maximal growth rate of the population is less than one. The model can have multiple attractors, and the asymptotic dynamics of the population depend on its initial distribution if the maximal growth rate is larger than one. An Allee threshold depending on the components of the unstable interior equilibrium is derived when only the last age class can reproduce. The population becomes extinct if its initial population distribution is below the threshold. Harvesting on any particular age class can decrease the magnitude of the possible stable interior equilibrium and increase the magnitude of the unstable interior equilibrium simultaneously.

On the Existence of the Error Threshold of the Quasispecies Model
Dr. Tanya Kostova
National Science Foundation
The quasispecies theory was introduced approximately 30 years ago by Eigen and Schuster. It became very popular within the virology community when experimental evidence showed that viruses have such high mutation rates that viral populations consist of numerous diverse genotypes. It has been widely accepted that the quasispecies model predicts that the fittest genotype (with the highest replication rate) loses dominance when the mutation rate becomes sufficiently high. These conclusions have been largely based on computer simulations and have led to the definition of the so-called error threshold. I show that it is easy to construct counterexamples where the fittest genotype remains dominant independently of the value of the mutation rate and therefore the error threshold does not exist.

Multistability and Oscillations in Feedback Loops
Dr. Maria Leite
Department of Mathematics, University of Oklahoma
(Joint work with Yunjiao Wang, University of Manchester)
Feedback loops are found to be important network structures in biological systems. Recently, the dynamical role of feedback loops has received extensive attention. In this talk we discuss some of the interesting dynamical features of these loops, such as multistability and oscillations.

Optimal Control of Treatments in a Cholera Model
Dr. Suzanne Lenhart
Department of Mathematics, University of Tennessee
While cholera has been a recognized disease for two centuries, there is no strategy for its effective control. We formulate a mathematical model to include essential components such as a hyperinfectious, short-lived bacterial state, a separate class for mild human infections, and waning disease immunity. A new result quantifies contributions to the basic reproductive number from multiple infectious classes. Using optimal control theory, parameter sensitivity analysis, and numerical simulations, a cost-effective balance of multiple intervention methods is compared for two endemic populations. Results provide a framework for designing cost-effective strategies for diseases with multiple intervention methods.
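For readers unfamiliar with the optimal control formulation mentioned in the preceding abstract, a schematic version (illustrative only; the state system, control set and objective used in the talk differ) minimizes a cost functional such as $J(u)=\int_0^T [\,I(t) + B\,u(t)^2\,]\,dt$ over admissible controls $0 \le u(t) \le u_{\max}$, subject to the epidemic dynamics with the intervention $u$ entering the transmission or treatment terms; the optimal control is then characterized through Pontryagin's maximum principle and typically computed numerically by a forward-backward sweep.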
Spreading Speeds and Traveling Wave Solutions in Partially Monotone Systems
Dr. Bingtuan Li
Department of Mathematics, University of Louisville
Investigating the spreading speeds and traveling waves of spatial multiple-species models has been fascinating and challenging. Most of the existing results on the spread of species assume that the system is monotone throughout the region of biological interest. In this talk, we will present mathematical results on the spatial spread of partially monotone models in the form of reaction-diffusion equations and in the form of integro-difference equations. By a partially monotone model we mean that the model is monotone near an unstable equilibrium from which the spatial transition moves away. In such a model species interact with each other to promote growth and migration in a cooperative manner in one region while they may behave differently in other regions. A partially monotone model may generate complicated dynamics including chaos. We will show results on the so-called "linear determinacy" that equates the spreading speed in the full nonlinear model with the spread rate in the system linearized about the leading edge of the invasion. We will then show that the spreading speed can be characterized as the slowest speed of a class of traveling wave solutions. We will finally discuss the applications of the general mathematical results to some specific ecological models in which the predator-prey interaction can be incorporated. (Joint work with Hans F. Weinberger, University of Minnesota)

Impact of Mosquito Transgenes on Malaria Transmission
Dr. Jia Li
Department of Mathematical Sciences, University of Alabama in Huntsville
We formulate continuous-time models for interactive wild and transgenic mosquitoes. With a fundamental analysis of their dynamics, we introduce the transgenic mosquitoes into a simple compartmental malaria transmission model. We study the dynamics of the simple malaria model and the model with the transgenic mosquitoes, and investigate the impact of transgenic mosquitoes on malaria transmission.

Delay Dependent Conditions for Global Stability of an Intravenous Glucose Tolerance Test Model
Dr. Jiaxu Li
Department of Mathematics, University of Louisville
Diabetes mellitus has become an epidemic disease closely tied to life style. Detecting the onset of diabetes is one of the fundamental steps in the treatment of diabetes, including determining insulin sensitivity and glucose effectiveness. An effective method for this end is the intravenous glucose tolerance test (IVGTT). Several mathematical models have been proposed and some are widely used in clinics. The most recent model, proposed by P. Palumbo, S. Panunzi and A. De Gaetano (2007), demonstrates reasonable profiles with their experimental data. To analytically ensure the global stability of the basal equilibrium, several attempts have been made. The existing results are either delay independent conditions, or the convergence holds only for a specific type of solutions. In this talk, we study the global stability and obtain delay dependent conditions that ensure a globally asymptotically stable equilibrium. An easy-to-check condition that estimates the upper bound of the time delay is also given.

An Age-Structured Two-Strain Epidemic Model with Super-Infection
Dr. Xue-Zhi Li
Department of Mathematics, Xinyang Normal University
This article focuses on the study of an age-structured two-strain model with super-infection. Explicit expressions for the basic reproduction numbers and the invasion reproduction numbers corresponding to strain one and strain two are obtained. It is shown that the infection-free steady state is globally stable if the basic reproduction number R_0 is below one. Existence of strain one and strain two exclusive equilibria is established.
Conditions for local stability or instability of the exclusive equilibria of strain one and strain two are established. Existence of a coexistence equilibrium is also obtained under the condition that both invasion reproduction numbers are larger than one. Keywords: age-structured; two-strain epidemic model; super-infection; basic reproduction number; invasion reproduction number; exclusive equilibrium; coexistence equilibrium; stability.

A Mathematical Approach to Study the Impact of Predators on Vegetation Succession
Dr. Rongsong Liu
Department of Mathematics, University of Wyoming
In order to study the role of predators in vegetation succession, we use a system of ordinary differential equations to model the interaction among two plant species, herbivores, and predators. A toxin-determined functional response is applied to describe the interactions between the plant species and herbivores, and a Holling Type II functional response is used to model the interactions between herbivores and predators. In order to study how the predators impact the succession of vegetation, we derive the invasion condition. Numerical simulations are conducted to reinforce the analytical results.

Tracking Prey or Tracking the Prey's Resource?
Dr. Yuan Lou
Department of Mathematics, Ohio State University
We consider a continuous environment with an arbitrary distribution of resources, randomly diffusing prey that consume the resources, and predators that consume the prey. Our model introduces a class of movement rules in which the direction of the predators' movement is determined (i) randomly, (ii) by prey density, and/or (iii) by the density of the prey's resource. We find that, for some resource distributions, predators that track the gradient of the prey's resource may have an advantage compared to predators that track the gradient of prey directly.

Permanence of the Water Hyacinth in Northeast Louisiana
Dr. Mariette Maroun
Department of Mathematics and Physics, University of Louisiana at Monroe
The invasive species water hyacinth was introduced into the United States of America in the late 1800s. This aquatic plant consists of three different stages. It takes over upon introduction into fresh water bodies because of its sexual and asexual reproductive system. From the early 1970s until now, scientists have been trying to control it in different manners, from chemical to biological. A model will be provided to show that long term control of this plant depends only on its survival.

Avian Influenza: Modeling, Analysis, and Data Fitting
Dr. Maia Martcheva
Department of Mathematics, University of Florida
Low Pathogenic Avian Influenza (LPAI) virus, which circulates in wild bird populations in mostly benign form, is suspected to have mutated into a highly pathogenic (HPAI) strain after transmission to domestic birds. HPAI has recently garnered worldwide attention because of the "spillover" infection of this strain from domestic birds to humans, primarily those in the poultry industry, causing significant human fatality and thus creating potentially favorable conditions for another flu pandemic. We use an ordinary differential equation model to describe the complex dynamics of the HPAI virus, which epidemiologically links a number of species in a multi-species community. We include the wild bird population as a periodic source feeding infection to the coupled domestic bird-human system. We also account for mutation between the low and high pathogenic strains.
We fit our model to the actual number of human avian influenza cases obtained from the WHO, and estimate the relevant reproduction numbers and invasion reproduction numbers. We conclude that low pathogenic avian influenza is maintained in the domestic bird population through "spillover" from wild birds, while high pathogenic avian influenza is endemic in the domestic bird population.

A NSFD Scheme for a Model of Respiratory Virus Transmission
Dr. Ronald Mickens
Department of Physics, Clark Atlanta University
We construct a nonstandard finite difference numerical integration scheme for an SIRS model of respiratory virus transmission. Our work extends that done by A. J. Arenas et al. (Computers and Mathematics with Applications, Vol. 56 (2008), 670-678) by using the system's exact conservation law to place constraints on the discretization. The scheme satisfies a positivity condition for all time step-sizes. We note that neither of the latter two conditions holds for the scheme derived by Arenas et al.

Stability of Choice in the Honey Bee Nest-Site Selection Process
Dr. Andrew Nevai
Department of Mathematics, University of Central Florida
A pair of compartment models for the honey bee nest-site selection process is introduced. The first model represents a swarm of bees deciding whether a site is viable, and the second characterizes its ability to select between two viable sites. The one-site assessment process has two equilibrium states: a disinterested equilibrium (DE) in which the bees show no interest in the site and an interested equilibrium (IE) in which bees show interest. In analogy with epidemic models, basic and absolute recruitment numbers (R0 and B0) are defined as measures of the swarm's sensitivity to dancing by a single bee. If R0 is less than one then the DE is locally stable, and if B0 is less than one then it is globally stable. If R0 is greater than one then the DE is unstable and the IE is stable under realistic conditions. In addition, there exists a critical site quality threshold Q* above which the site can attract some interest (at equilibrium) and below which it cannot. There also exists a second critical site quality threshold Q** above which the site can attract a quorum (at equilibrium) and below which it cannot. The two-site discrimination process, which examines a swarm's ability to simultaneously consider two sites differing in both site quality and discovery time, has a stable DE if and only if both sites' individual basic recruitment numbers are less than one. Numerical experiments are performed to study the influences of site quality on quorum time and the outcome of competition between a lower quality site discovered first and a higher quality site discovered second.

Stochastic Energy Budgets, Integrate-and-Fire Models, and Population Dynamics
Dr. Roger Nisbet
Department of Ecology, Evolution and Marine Biology, University of California, Santa Barbara
In many adult animals, energy-rich material is accumulated as "reserves" that are eventually converted to reproductive material and released in a pulse. Similarly, achieving reproductive maturity in juveniles typically involves sustained energy allocation, with the onset of reproduction occurring after some threshold level is achieved. These observations have motivated the study of a stochastic bioenergetic model involving integration of a varying energy input and firing on achieving some threshold, similar to the well-studied integrate-and-fire (I-F) neuron model.
The model dynamics in a periodically variable environment give insight into the synchronization of reproduction in many organisms, and I shall show an example involving spawning corals. I-F dynamics are also central to stage-structured population models with time-varying maturation delays. I shall discuss an application to zooplankton populations.

Growing, Slowing, Growing: A Branching Process Model of How Cells Survive Telomere Shortening
Dr. Peter Olofsson
Department of Mathematics, Trinity University
Telomeres are specialized structures found at the ends of chromosomes. During DNA replication, telomeres shorten and once a critical length is reached, the cell stops dividing and becomes senescent. A cell population that experiences telomere shortening exhibits initial exponential growth, and once senescent cells start dominating the population, growth slows down and the population size levels off, resulting in a typical sigmoid-shaped growth curve. However, experimental data on the yeast Saccharomyces cerevisiae indicate that some populations regain exponential growth after slowing down. The explanation for this phenomenon is that some cells develop ways to maintain short telomeres and become "survivors." We suggest a branching process model that takes into account random variation in individual cell cycle times, telomere shortening, finite lifespan of mother cells, and the possibility of survivorship. We identify and estimate crucial parameters such as the cell cycle mean and variance, and the probability of an individual cell becoming a survivor, and compare our model to experimental data.

Evolutionary Changes in Competitive Outcomes
Dr. Rosalyn Rael
Department of Ecology and Evolutionary Biology, University of Michigan
Evolution is a natural process that can occur on time scales commensurate with ecological dynamics and result in changes in the expected outcomes of interactions such as competition. Evolutionary game theory is a modeling technique that combines both ecological dynamics and evolution to form "Darwinian dynamics." With this method, natural selection is viewed as a game, where traits are strategies that affect payoff in the form of species' fitness. I will give a brief introduction to Darwinian dynamics and describe applications of this modeling approach to two-species competition. Using evolutionary game theory, we show how evolution can lead to the coexistence of species or other outcomes not expected in the absence of evolution. I will discuss the conditions necessary to see such changes, and show that these results compare well with data from classic flour beetle experiments.

Formation of Spatial Patterns in Stage-Structured Populations with Density Dependent Dispersal
Dr. Suzanne Robertson
Mathematical Biosciences Institute, Ohio State University
Spatial segregation among life cycle stages has been observed in many stage-structured species, both in homogeneous and heterogeneous environments. We investigate density dependent dispersal of life cycle stages as a mechanism responsible for this separation by using stage-structured, integrodifference equation (IDE) models that incorporate density dependent dispersal kernels. After investigating mechanisms that can lead to spatial patterns in two-dimensional juvenile-adult IDE models, we construct spatial models to describe the population dynamics of the flour beetle species
T. castaneum, T. confusum and T. brevicornis, and use them to assess density dependent dispersal mechanisms that are able to explain spatial patterns that have been observed in these species.

Asymmetric Division of Activated Latently Infected Cells May Explain the Divergent Decay Kinetics of the HIV Latent Reservoir
Dr. Libin Rong
Theoretical Biology and Biophysics, Los Alamos National Laboratory
Most HIV-infected patients, when treated with combination therapy, achieve viral loads that are below the current limit of detection of standard assays after a few months. Despite this, virus eradication from the host has not been achieved. Latent, replication-competent HIV-1 can generally be identified in resting memory CD4+ T cells in patients with "undetectable" viral loads. Turnover of these cells is extremely slow, but virus can be released from the latent reservoir quickly upon cessation of therapy. In addition, a number of patients experience transient episodes of viremia, or HIV-1 blips, even with suppression of the viral load to below the limit of detection for many years. The mechanisms underlying the slow decay of the latent reservoir and the occurrence of intermittent viral blips have not been fully elucidated. In this study, we address these two issues by developing a mathematical model that explores a hypothesis about latently infected cell activation. We propose that asymmetric division of latently infected cells upon sporadic antigen encounter may both replenish the latent reservoir and generate intermittent viral blips. Interestingly, we show that occasional replenishment of the latent reservoir induced by reactivation of latently infected cells may reconcile the differences between the divergent estimates of the half-life of the latent reservoir in the literature.

Modelling the Transmission Dynamics and Control of Hepatitis B Virus in China
Dr. Shigui Ruan
Department of Mathematics, University of Miami
Hepatitis B is a potentially life-threatening liver infection caused by the hepatitis B virus (HBV) and is a major global health problem. HBV is the most common serious viral infection and a leading cause of death in mainland China. Around 130 million people in China are carriers of HBV, almost a third of the people infected with HBV worldwide and about 10% of the general population in the country; among them 30 million are chronically infected. Every year, 300,000 people die from HBV-related diseases in China, accounting for 40 - 50% of HBV-related deaths worldwide. Despite an effective vaccination program for newborn babies since the 1990s, which has reduced chronic HBV infection in children, the incidence of hepatitis B is still increasing in China. We propose a mathematical model to understand the transmission dynamics and prevalence of HBV infection in China. Based on the data reported by the Ministry of Health of China, the model provides an approximate estimate of the basic reproduction number R0 = 2.406. This indicates that hepatitis B is endemic in China and is approaching its equilibrium with the current immunization programme and control measures. Although China has made great progress in increasing coverage among infants with the hepatitis B vaccine, it has a long and hard battle to fight in order to significantly reduce the incidence and eventually eradicate the virus. Keywords: hepatitis B virus, mathematical modeling, transmission dynamics, basic reproduction number, disease endemic equilibrium.
*Research was partially supported by the State Scholarship Fund of the China Scholarship Council (Z.L.), NSFC grant #10825104 and the China MOE Research Grant (W.Z.), and NSF grant DMS-0715772 (S.R.)

Global Stability in a Multi-Species Periodic Leslie-Gower Model
Dr. Robert Sacker
Department of Mathematics, University of Southern California
The $d$-species Leslie-Gower competition model is studied in which all the parameters are $p$-periodic. It is shown that whenever the coupling is small, there is a positive $p$-periodic state that is exponentially asymptotically stable and globally attracts all initial states having positive coordinates.

Global-Stability Problems for Coupled Systems Arising in Population Dynamics
Mr. Zhisheng Shuai
Department of Mathematical and Statistical Sciences, University of Alberta
(Joint work with Michael Li)
We study the global-stability problem of equilibria for coupled systems of differential equations arising in population dynamics. Using results from graph theory, we develop a systematic approach to construct global Lyapunov functions/functionals for coupled systems from individual Lyapunov functions/functionals for vertex systems. We apply our general approach to several coupled systems in ecology and epidemiology, for example, a single species model with dispersal, a predator-prey model with dispersal, and a multi-group epidemic model with time delays.

Modeling the Evolution of Repetitive Sequence in DNA
Dr. Suzanne Sindi
Center for Computational Molecular Biology, Brown University
There are families of nearly identical sequences within the genomes of human, fly, worm and every non-microbial genome that has been determined. Such sequences were originally hypothesized to be "junk DNA," but biologists continue to find many functions these sequences perform. Several features of repetitive DNA follow power law distributions; a natural question is how such distributions have emerged over time from individual duplication events. I will describe mathematical models that demonstrate how power law and generalized Pareto distributions can emerge naturally from random duplication and deletion in a genome.

Lyapunov Exponents and Persistence
Dr. Hal Smith
Department of Mathematics and Statistics, Arizona State University
(Joint work with P. Salceanu)
We will show that Lyapunov exponents can be employed in establishing persistence of discrete and continuous-time finite dimensional dynamical systems.

Transmission of Avian Influenza between Two Patches
Dr. Baojun Song
Department of Mathematical Sciences, Montclair State University
Patch models are constructed and analyzed to study the role of a migratory bird population in the transmission of the highly pathogenic H5N1 strain of avian influenza. Our discrete models consider a migratory bird population and two local bird populations. The local bird populations live in their own patches and the migratory birds migrate back and forth between the patches seasonally. The models are tested using the prevalence of avian influenza in Mallard Ducks by season in the United States and Canada from 1974 to 1986. Both our analytic results and simulations predict a pattern of seasonal oscillation in the prevalence of avian influenza in Mallard Ducks. A variety of demographic growth modes are discussed. The models with most nonlinear reproduction or nonlinear survival functions undergo period-doubling bifurcations.
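To make the structure of such a patch model concrete, the following toy sketch couples a migratory flock to a resident flock in each of two patches under a simple seasonal switching rule (all parameter values and the SIS-type within-patch dynamics are hypothetical illustrations, not the models analyzed in the talk):

    import numpy as np

    def simulate(years=20, beta=0.4, gamma=0.25):
        # Migratory flock alternates between patch 0 (breeding) and patch 1 (wintering),
        # mixing with the resident flock of whichever patch it currently occupies.
        S_m, I_m = 990.0, 10.0                     # migratory susceptibles / infecteds
        res = [[1000.0, 0.0], [1000.0, 5.0]]       # resident [S, I] in each patch
        prevalence = []
        for _ in range(years):
            for patch in (0, 1):                   # breeding season, then wintering season
                for _ in range(26):                # roughly 26 weekly time steps per season
                    S_r, I_r = res[patch]
                    N = S_m + I_m + S_r + I_r
                    force = beta * (I_m + I_r) / N
                    new_m, new_r = force * S_m, force * S_r
                    rec_m, rec_r = gamma * I_m, gamma * I_r
                    S_m, I_m = S_m - new_m + rec_m, I_m + new_m - rec_m
                    S_r, I_r = S_r - new_r + rec_r, I_r + new_r - rec_r
                    res[patch] = [S_r, I_r]
                prevalence.append(I_m / (S_m + I_m))
        return np.array(prevalence)

    print(simulate()[:8])                          # migratory-flock prevalence at season ends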
Dynamics of Killer T Cells and Immunodominance in the Influenza Infection
Dr. Abdessamad Tridane
Department of Applied Sciences and Mathematics, Arizona State University at the Polytechnic Campus
Antigen-specific killer T cells (CD8+ cells) play an important role in virus clearance. The aim of this talk is to introduce and analyze mathematical models of the dynamics of killer T cells and the differential expansion of antigen-specific CD8+ cells, called immunodominance, in the influenza infection. Understanding the qualitative impact of killer T cells is very important for the design of T-cell-based vaccines that promote early virus clearance. The systematic analysis of these model systems shows that the behaviors of the models are similar for high killer T cell densities, generating reasonable dynamics. Our models try to shed some light on possible explanations of some aspects of immunodominance in influenza infection by studying the effect of the epitope of the antigen presented on the surface of the infected cells and the effect of Interferon-γ.

Traveling Wave Solutions of Delayed Reaction-Diffusion Equations
Dr. Haiyan Wang
Division of Mathematical and Natural Sciences, Arizona State University
Traveling wave phenomena in reaction-diffusion equations arise from many biological problems. Combining upper and lower solutions, monotone iterations and fixed point theorems, we study the existence and asymptotic behavior of traveling wave solutions for nonmonotone reaction-diffusion equations with nonlocal delay. Our results extend and improve some related results in the literature.

A Formula for the Time-Dependent Transmission Rate in an SIR Model from Data
Dr. Howie Weiss
Department of Mathematics, Georgia Tech
The transmissibility of many infectious diseases varies significantly in time, but has been thought impossible to measure directly. Based on solving an inverse problem for SIR-type systems, we devise a mathematical algorithm to recover the time-dependent transmission rate from infection data. We apply our algorithm to historic UK measles data and observe that for most cities the main spectral peak of the transmission rate has a two-year period. Our construction clearly illustrates the danger of overfitting an epidemic transmission model with a variable transmission rate function.
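As a rough illustration of the idea of recovering a time-dependent transmission rate from data (a naive finite-difference caricature under a standard SIR assumption $S' = -\beta(t) S I / N$; it is not the inverse-problem algorithm developed in the talk, and real incidence data would require smoothing and handling of under-reporting):

    import numpy as np

    def recover_beta(t, S, I, N):
        # Estimate beta(t) pointwise from dS/dt = -beta(t) * S * I / N.
        dSdt = np.gradient(S, t)
        return -dSdt * N / (S * I)

    # Synthetic check: simulate a known beta(t), then try to recover it.
    N, gamma = 1.0e6, 1.0 / 14.0
    t = np.linspace(0.0, 200.0, 2001)
    beta_true = 0.25 * (1.0 + 0.3 * np.cos(2.0 * np.pi * t / 365.0))
    S, I = np.empty_like(t), np.empty_like(t)
    S[0], I[0] = N - 100.0, 100.0
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        inc = beta_true[k] * S[k] * I[k] / N       # new infections per unit time
        S[k + 1] = S[k] - dt * inc
        I[k + 1] = I[k] + dt * (inc - gamma * I[k])
    print(recover_beta(t, S, I, N)[:5], beta_true[:5])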
Models for the Spread of Hantavirus between Reservoir and Spillover Species
Dr. Curtis Wesley
Department of Mathematics, Louisiana State University at Shreveport
The modeling of interspecies transmission has the potential to provide more accurate predictions of disease persistence and emergence dynamics. We describe various models which are motivated by recent work on hantavirus in rodent communities in Paraguay. Each model is a system of ordinary differential equations (ODE) developed for modeling the spread of hantavirus between a reservoir and a spillover species. The basic reproduction number is calculated for each model, with global stability results given for some models. Numerical simulations are created that illustrate the dynamics of each model.

Competition in the Presence of a Virus in an Aquatic Environment
Dr. Gail Wolkowicz
Department of Mathematics and Statistics, McMaster University
Recent research has determined that viruses are much more prevalent in aquatic environments than previously imagined. We derive a model of competition between two populations of bacteria for a single limiting nutrient in a chemostat where a virus is present. It is assumed that the virus can only infect one of the populations, the population that would be a more efficient consumer of the resource in a virus-free environment, in order to determine whether the introduction of a virus can result in coexistence of the competing populations. Criteria for the global stability of the disease-free and endemic steady states are obtained. It is also shown that it is possible to have multiple attracting endemic steady states, oscillatory behavior involving Hopf and homoclinic bifurcations, and a hysteresis effect. Mathematical tools that are used include Lyapunov functions, persistence theory, and bifurcation analysis.

Evolution of Schistosome's Drug Resistance and Virulence
Dr. Dashun Xu
Department of Mathematics, Southern Illinois University, Carbondale
Motivated by some recent empirical studies on Schistosoma mansoni, we use a set of ordinary differential and integral equations to investigate the role of drug treatments of human hosts in the evolution of drug resistant parasites. By studying evolutionarily singular strategies (ESS) of parasites, we found that high drug resistance (and low virulence) is likely to develop for high drug treatment rates, which usually tend to promote monomorphism as the evolutionary endpoint. Our study also shows that coinfection of the intermediate host does not affect the drug resistance and virulence levels of parasites, but tends to destabilize ESS points and hence promote dimorphism or even polymorphism as the evolutionary endpoint.

The Impact of Periodic Proportional Harvesting Policies on TAC-Regulated Fishery Systems
Dr. Abdul-Aziz Yakubu
Department of Mathematics, Howard University
We extend the TAC-regulated fish population model of Ang et al. to include stock under compensatory and overcompensatory dynamics with and without the Allee effect. We focus on a periodic harvesting strategy, a subset of variable proportion harvesting strategies that includes constant harvesting as a special case. Both periodic and constant harvesting strategies have the potential to stabilize complex overcompensatory stock dynamics with or without the Allee effect. Furthermore, we show that both strategies force a sudden collapse of TAC fishery systems that exhibit the Allee mechanism. However, in the absence of the Allee effect, TAC fishery systems decline to zero smoothly under high exploitation. As case studies, we apply the TAC theoretical model framework to Gulf of Alaska Pacific halibut data from the International Pacific Halibut Commission (IPHC) annual reports and Georges Bank Atlantic cod data from the Northeast Fisheries Science Center (NEFSC) Reference Document 08-15. We show that TAC does a good job of preventing the collapse of halibut, while cod is endangered. Furthermore, we observe that the likelihood of stock collapse increases with increased weather variability.

Geometric and Potential Driving Formation and Evolution of Biomolecular Surfaces
Dr. Shan Zhao
Department of Mathematics, University of Alabama
In this talk, I will present some geometrical flow equations for the theoretical modelling of biomolecular surfaces in the context of multiscale implicit solvent models. When a less polar macromolecule is immersed in a polar environment, surface free energy minimization occurs naturally to stabilize the system. This motivates us to propose a new concept, the minimal molecular surface (MMS), for modelling the solvent-biomolecule interface.
The intrinsic curvature force is used in the MMS model to drive the surface formation and evolution. To further account for the local variations near the biomolecular surfaces due to interactions between solvent molecules, and between solvent and solute molecules, we recently proposed some new potential driven geometric flows, which balance the intrinsic geometric forces with the potential forces induced by the atomic interactions. High order geometric flows are also considered and tested for biomolecular surface modelling. Extensive numerical experiments are carried out to demonstrate the proposed concept and algorithms. Comparison is given to a classical model, the molecular surface. Unlike the molecular surface, biomolecular surfaces generated by our approaches are typically free of geometric singularities. (This is joint work with Peter Bates and G.W. Wei, Michigan State University.)

Dynamics of an Epidemic Model with Non-Local Infections for Diseases with Latency over a Patchy Environment
Dr. Xingfu Zou
Department of Applied Mathematics, University of Western Ontario
Assuming that an infectious disease in a population has a fixed latent period and that the latent individuals of the population may disperse, we formulate an SIR model with a simple demographic structure for a population living in an $n$-patch environment (cities, towns, or countries, etc.). The model is given by a system of delay differential equations with a fixed delay accounting for the latency and a non-local term caused by the mobility of the individuals during the latent period. Assuming irreducibility of the travel matrices of the infection related classes, an expression for the basic reproduction number $\mathcal{R}_0$ is derived, and it is shown that the disease free equilibrium is globally asymptotically stable if $\mathcal{R}_0<1$, and becomes unstable if $\mathcal{R}_0>1$. In the latter case, there is at least one endemic equilibrium and the disease will be uniformly persistent. When $n=2$, two special cases allowing reducible travel matrices are considered to illustrate the joint impact of the disease latency and population mobility on the disease dynamics. In addition to the existence of the disease free equilibrium and an interior endemic equilibrium, the existence of a boundary equilibrium and its stability are discussed for these two special cases.

Poster Presentations

Discrete Time Optimal Control of Species Augmentation: Augment then Grow
Ms. Erin Bodine
Department of Mathematics, University of Tennessee

Stability of the Endemic Coexistence Equilibrium for One Host and Two Parasites
Mr. Thanate Dhirasakdanon
School of Mathematics and Statistics, Arizona State University
For an SI type endemic model with one host and two parasite strains with complete cross protection between the parasite strains, we study the stability of the endemic coexistence equilibrium, where the host and both parasite strains are present. Our model assumes reduced fertility and increased mortality of infected hosts. The model also assumes that one parasite strain is exclusively vertically transmitted and cannot persist just by itself. We find several sufficient conditions for the equilibrium to be locally asymptotically stable. One of them is that the horizontal transmission is of density-dependent (mass-action) type. If the horizontal transmission is of frequency-dependent (standard) type, then, under certain conditions, the equilibrium can be unstable and undamped oscillations can occur.
We support and extend our analytical results by numerical simulations and by two-dimensional plots of stability regions for various pairs of parameters.

Optimal Control of Growth Coefficient in a Steady State Population Model
Dr. Heather Finotti
Department of Mathematics, University of Tennessee
(Joint work with Ding, Lenhart, Lou, and Ye)
We study the control problem of maximizing the net benefit in the conservation of a single species with a fixed amount of resources. The existence of an optimal control is established, and the uniqueness and characterization of the optimal control are investigated. Numerical simulations illustrate several cases, for both one- and two-dimensional domains, in which several interesting phenomena are found. Some open problems are discussed.

An Age-Structured Setup of the Bacteria-Bacteriophage Model
Mr. Zhun Han
School of Mathematics and Statistics, Arizona State University
The mathematical model of the bacteria-bacteriophage interaction has been an interesting topic since the 1960s. In this alternate setup, we introduce an age structure on the infected species and rewrite the model as a combination of delay differential equations and integral equations. We show that this alternate setup coincides with an existing model. However, by employing this age structure, we may have a better view in the biological sense.

The Spreading Speed and Traveling Wave Solutions of a Spatial Competition Model
Ms. Kim Meyer
Department of Mathematics, University of Louisville
Integro-difference equations are used to model the spatial spread of species with nonoverlapping generations. We look at a two species competition model with Ricker's growth functions in the form of integro-difference equations. We investigate the spatial dynamics of how an introduced competitor spreads into a habitat pre-occupied by a resident species. We found a formula for the so-called spreading speed at which the resident species retreats and the introduced species expands in space. We also obtained conditions under which the spreading speed can be characterized as the slowest speed of a class of traveling wave solutions. In addition, we conducted numerical simulations and showed that a traveling wave solution can have a complicated tail. (Joint work with Bingtuan Li)

Mathematical Modeling of SARS with a Focus on Super-Spreading Events (SSEs)
Mr. Thembinkosi Mkhatshwa
Department of Mathematics and Applied Sciences, Marshall University
"Super-spreading events" (SSEs) have been cited as one of the major factors responsible for the spread of severe acute respiratory syndrome (SARS), the first epidemic of the 21st century. The understanding of these SSEs is critical to understanding the spread of SARS. We present a modification of the basic SIR disease model, an SIPR model, which captures the effect of the SSEs.

Modeling Antibiotic Resistant Bacteria in the Mud River, WV
Dr. Anna Mummert
Department of Mathematics, Marshall University
When antibiotics are used by humans, or for livestock or crop production, antibiotic resistant bacteria enter the environment. The antibiotic resistant bacteria wash into rivers, where the antibiotic resistance gene is transferred to naturally occurring bacteria in the river. In this poster we present and study a system of ordinary differential equations that models antibiotic resistant bacteria in rivers. The influence of bacteria entering the river due to nearby land use appears in the model as an external forcing term.
The model is compared with data from the Mud River, WV.

Bacteriophage Infection Dynamics: Multiple Host Binding Sites
Mr. Roy Trevino
School of Mathematical and Statistical Sciences, Arizona State University

We construct a stochastic model of bacteriophage parasitism of a host bacterium that accounts for demographic stochasticity of host and parasite and allows for multiple bacteriophage adsorption to a host. We analyze the associated deterministic model, identifying the basic reproductive number for phage proliferation, showing that host and phage persist when it exceeds unity, and establishing that the distribution of adsorbed phage on a host is binomial with a slowly evolving mean. Not surprisingly, extinction of the parasite, or of both host and parasite, can occur for the stochastic model.

Novel H1N1: Predicting the Course of a Pandemic
Dr. Sherry Towers
Department of Applied Statistics, Purdue University

In April 2009 a new strain of H1N1 was identified, causing a spring epidemic in Mexico and a summer wave of infection in the US and elsewhere. Because influenza is seasonal in nature (more infectious in winter than summer in the northern hemisphere), world health officials anticipate a second, larger fall wave, similar to that seen in 1918. We examine the prevalence of H1N1 in the US during summer 2009. In a unique study, we use this information, along with what we know about the seasonal behavior of influenza, to predict the prevalence of influenza during fall 2009, and examine the efficacy of the planned CDC H1N1 vaccination campaign.

Dispersal Behavior with Biased Edge Movement Between Two Different Habitat Types
Ms. Xiuquan Wang
Department of Mathematics, Southern Illinois University, Carbondale

(Joint work with Drs. Mingqing Xiao, John D. Reeve, Dashun Xu, and James T. Cronin) We analyze the behavior of a reaction-diffusion model of organisms to study insect diffusion. The main idea is based on a set of hypotheses about the scale and structure of the spatial environment and the way that insects disperse through it, expressing how an insect responds to edges between two different habitats; its behavior can then be analyzed according to the shape of the corresponding distribution curves. Analysis of the reaction-diffusion equations also provides a simulation of mean occupancy times for insects in the different habitats.

Mathematical Modeling of the Vancomycin-Resistant Enterococci
Dr. Mohammed Yahdi
Department of Mathematics and Computer Science, Ursinus College

A mathematical model of the Vancomycin-Resistant Enterococci (VRE) is introduced. It consists of a system of three coupled nonlinear differential equations with twelve parameters, such as fitness costs, rates of colonization, and hygiene compliance. Equilibrium point simulation and outbreak analysis are performed to visualize and measure the impact of the parameters on the spread of the antibiotic-resistant VRE, and to provide optimal control strategies.

Effect of Host Heterogeneity on the Coevolution of Parasite and Host
Ms. Yiding Yang
Department of Mathematics, Purdue University

A system of differential equations which models the disease dynamics of schistosomiasis is used to study the evolution of parasite virulence. The model incorporates both the definitive human hosts and two strains of intermediate snail hosts. An age structure of human hosts is considered to reflect the age-dependent transmission rate and age-targeted drug treatment rate.
The basic parasite reproductive number $R_i$ for strain $i$ snail hosts is computed, as is the invasion reproductive number $R_{ij}$ for strain $i$ snail hosts when strain $j$ snail hosts are at equilibrium. We establish the criterion for strain $i$ to invade strain $j$ snail hosts, and the criterion is used to examine the evolutionary dynamics of the snail hosts and the parasite.

Modeling Disease Transmission with Age and Spatial Heterogeneities
Dr. Feng Yu
Statistics and Epidemiology, Statistics Research Division, RTI International

We have developed a comprehensive disease epidemic model that takes both age heterogeneity and spatial heterogeneity into account. The model features an age structure that captures the dynamics of a disease epidemic with seasonal effects for various age groups. This model also simulates the impact of migration among geographic locations on disease transmission. In addition, a vaccination strategy has been built into the model for disease intervention and control. We implemented the model using the AnyLogic™ software package, with a graphical user interface for presentation of results and for changing parameters interactively. For illustration purposes, we will use measles transmission as an example.

Different Orders of Harvesting an Integrodifference Model
Ms. Peng Zhong
Department of Mathematics, University of Tennessee

An optimal control harvesting problem for a population modeled by an integrodifference equation model is considered. The proportion to be harvested is taken to be the control. The goal is to find the optimal harvesting control that maximizes the profit. The effect of the order of harvesting on the optimal control for integrodifference equations is studied.

Conference Location

All talks will be held in the Shelby Center for Science and Technology on the UAHuntsville Campus, 550 Sparkman Drive, Huntsville, AL 35899.

Located on the UAH Campus
(256) 721-9428
• Complimentary deluxe continental breakfast
• Shuttle service available
• Business center available for copies, fax machine, computer / printer, ATM, etc.
• Complimentary pass to the University Fitness Center
• Coffee makers with complimentary coffee
• Study area with desk and chair
• Mini refrigerators, microwaves, iron & ironing board, radio/alarm clocks in all the rooms
• Remote control televisions with 70 cable channels
• Telephone with voice mail and data port capability
• Individual thermostatically controlled air conditioning and heat
• All interior corridor entrances

Call (256) 721-9428 to make reservations. Mention you are attending the conference hosted by the UAH Math Department. Room rates are
• Single Room, Non-Smoking/1 Double: $67.00
• Double Room, Non-Smoking/2 Double: $71.00
• Double Room, Smoking/2 Double: $71.00
• Junior Suite/1 King: $89.00

From Huntsville International Airport:
• Start out going north on Glenn Hearn Blvd SW toward John Harrison Dr. SW
• Merge onto I-565 E / US 72 Alt E toward Huntsville.
• Take Exit 15 toward Madison Pike / Sparkman Dr / Bob Wallace Ave.
• Take the Sparkman Dr / Bob Wallace Ave ramp.
• Turn left onto Bob Wallace Ave SW / Sparkman Dr NW. Continue to follow Sparkman Dr NW.
• End at 550 Sparkman Dr NW

From Birmingham or Nashville:
• Take I-65 Exit to I-565 E / US 72 Alt E toward Huntsville
• Take Exit 15 toward Madison Pike / Sparkman Dr / Bob Wallace Ave.
• Take the Sparkman Dr / Bob Wallace Ave ramp.
• Turn left onto Bob Wallace Ave SW / Sparkman Dr NW. Continue to follow Sparkman Dr NW.
• End at 550 Sparkman Dr NW

From Atlanta:
• Merge onto US-72 W via the ramp on the left toward Huntsville.
• US-72 W becomes I-565 W.
• Take the Sparkman Dr / Bob Wallace Ave Exit, Exit 15, toward Madison Pike.
• Keep right at the fork to go on Sparkman Dr NW
• End at 550 Sparkman Dr NW

4801 Governors House Drive, Huntsville, AL 35805
Located 1.4 miles from UAH campus
(256) 430-1778
• One king-sized bed or two queen-sized beds
• Microwave, refrigerator, coffee maker, iron & ironing board
• Ergonomic Herman Miller Mirra chair, Garden Sleep System bed
• 26" flat screen high-definition television
• Adjustable lighting, desk level electrical outlets
• 2 phone lines in each room and voice mail
• Complimentary wired and wireless high-speed Internet access
• Secure PrinterOn remote printing

Call (256) 430-1778 to make reservations. Mention you are attending the conference hosted by the UAH Math Department. Room rates are
• Single Rooms/King or Queen: $89.00

From Huntsville International Airport:
• Start out going north on Glenn Hearn Blvd SW toward John Harrison Dr. SW
• Merge onto I-565 E / US 72 Alt E toward Huntsville.
• Take Exit 15 toward Madison Pike / Sparkman Dr / Bob Wallace Ave.
• Take the Sparkman Dr / Bob Wallace Ave ramp.
• Merge onto Bob Wallace Ave. SW.
• Turn left onto Governors House Drive SW
• End at 4801 Governors House Drive SW

From Birmingham or Nashville:
• Take I-65 Exit to I-565 E / US 72 Alt E toward Huntsville
• Take Exit 15 toward Madison Pike / Sparkman Dr / Bob Wallace Ave.
• Take the Sparkman Dr / Bob Wallace Ave ramp.
• Merge onto Bob Wallace Ave. SW.
• Turn left onto Governors House Drive SW
• End at 4801 Governors House Drive SW

From Atlanta:
• Merge onto US-72 W via the ramp on the left toward Huntsville.
• US-72 W becomes I-565 W.
• Take the Sparkman Dr / Bob Wallace Ave Exit, Exit 15, toward Madison Pike.
• Keep right at the fork to go on Sparkman Dr NW
• Turn left onto Governors House Drive SW
• End at 4801 Governors House Drive SW

To UAH Campus:
• Start out going west on Governors House Dr SW toward Bob Wallace Ave SW.
• Turn right onto Bob Wallace Ave SW.
• Bob Wallace Avenue becomes Sparkman Drive.
• Turn right onto UAH campus

Registration and Abstract Submission

There will be a registration fee of $100 per participant, except students (with confirmed student status). To register, you need to complete the online form below to provide contact information. You also need to download, print, and complete the registration form, and mail it to the following address:

Attn: Ms. Tami Lang
Department of Mathematical Sciences
Shelby Center Room 258A
University of Alabama in Huntsville
Huntsville, AL 35899

With your registration form, please send a check payable to "Math Department, UAH" denominated in US dollars, no later than September 7, 2009, to the address above. The late registration fee will be $150 after September 7. You may also pay your registration fee by cash or check at the registration desk of the conference, with the late registration fee, if you wish. Unfortunately, we won't be able to accept credit card or online payment.

**Online Form disabled.

Financial Support

Our National Science Foundation grant has been recommended for funding. The grant stipulates that we provide financial support to a limited number of U.S.-based junior faculty, postdoctoral researchers, and graduate students. This includes up to $75 for lodging per night (maximum 3 nights) and up to $500 in travel allowance for each person funded.
In order to be considered for funding, please print and complete the PDF Application Form, and mail it along with the supporting documents to the address below.

Dr. Jia Li, Chair
Department of Mathematical Sciences
Shelby Center Room 258A
University of Alabama in Huntsville
Huntsville, AL 35899

The supporting documents should include a CV and, for graduate students, a recommendation letter from the student's supervisor. Qualified participants are encouraged to apply.
{"url":"http://www.uah.edu/science/departments/math/research/special-conferences/290-main/science/science-mathematical-science/3854-the-second-international-conference-on-mathematical-modeling-and-analysis-of-populations-in-biological-systems-2","timestamp":"2014-04-19T00:34:43Z","content_type":null,"content_length":"156282","record_id":"<urn:uuid:a7a61715-79d6-4ba7-8018-d0553b247417>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Inherent High Correlation of Individual Motility Enhances Population Dispersal in a Heterotrophic, Planktonic Protist

Quantitative linkages between individual organism movements and the resulting population distributions are fundamental to understanding a wide range of ecological processes, including rates of reproduction, consumption, and mortality, as well as the spread of diseases and invasions. Typically, quantitative data are collected on either movement behaviors or population distributions, rarely both. This study combines empirical observations and model simulations to gain a mechanistic understanding and predictive ability of the linkages between individual movement behaviors and population distributions of a single-celled planktonic herbivore. In the laboratory, microscopic 3D movements and macroscopic population distributions were simultaneously quantified in a 1 L tank, using automated video- and image-analysis routines. The vertical velocity component of cell movements was extracted from the empirical data and used to motivate a series of correlated random walk models that predicted population distributions. Validation of the model predictions with empirical data was essential to distinguish amongst a number of theoretically plausible model formulations. All model predictions captured the essence of the population redistribution (mean upward drift), but only models assuming long correlation times (on the order of minutes) captured the variance in population distribution. Models assuming correlation times of at least 8 minutes predicted the least deviation from the empirical observations. Autocorrelation analysis of the empirical data failed to identify a de-correlation time in the up to 30-second-long swimming trajectories. These minute-scale estimates are considerably greater than previous estimates of second-scale correlation times. Considerable cell-to-cell variation and behavioral heterogeneity were critical to these results. Strongly correlated random walkers were predicted to have significantly greater dispersal distances and more rapid encounters with remote targets (e.g. resource patches, predators) than weakly correlated random walkers. The tendency to disperse rapidly in the absence of aggregative stimuli has important ramifications for the ecology and biogeography of planktonic organisms that perform this kind of random walk.

Author Summary

Organism movement is fundamental to how organisms interact with each other and the environment. Such movements are also important on the population level and determine the spread of disease and invasion, reproduction, consumption, and mortality. Theoretical ecologists have sought to predict population dispersal rates, which are often hard to measure, from individual movement behaviors, which are often easier to measure. This problem has been non-trivial. This manuscript contributes seldom-available, simultaneously measured movement behaviors and population distributions of a single-celled planktonic organism. The empirical data are used to distinguish amongst a set of plausible theoretical modeling approaches and suggest that organism movements are highly correlated, meaning movement direction and speed are consistent over several minutes. Previous estimates suggested persistence only lasted several seconds. Minute-scale correlations result in much more rapid organism dispersal and greater dispersal distance, indicating that organisms encounter and impact a greater portion of their surrounding habitat than previously suspected.
Citation: Menden-Deuer S (2010) Inherent High Correlation of Individual Motility Enhances Population Dispersal in a Heterotrophic, Planktonic Protist. PLoS Comput Biol 6(10): e1000942. doi:10.1371/
Editor: Simon A. Levin, Princeton University, United States of America
Received: June 7, 2010; Accepted: August 25, 2010; Published: October 21, 2010
Copyright: © 2010 Susanne Menden-Deuer. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported through funding from the National Science Foundation (Bio OCE 0826205) and the German National Science Foundation (Deutsche Forschungsgemeinschaft). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.

Introduction

Movement is fundamental to many ecological processes and often dictates relevant biotic and abiotic encounter rates, particularly for planktonic organisms inhabiting a highly dynamic and heterogeneous habitat. On the individual level, movement impacts encounter rates with favorable (e.g. mates, resources) and unfavorable (e.g. disease, consumers) targets. On the population level, these microscopic encounters directly affect growth and mortality rates, dispersal rates, population distributions, the spread of disease and invasion, home ranges, reproduction and survival (e.g. [1]). Particularly for microorganisms, recent methodological advances have enabled the high-resolution quantification of organism movements (e.g. [2]), their statistical features (e.g. [3]) and changes therein in response to external stimuli (e.g. [4]). Significant efforts have sought to establish mechanistic linkages between these individual movement behaviors and the resulting population distributions (reviewed in [5]). Deciphering these linkages for planktonic and other organisms provides powerful tools to predict rates of organism encounters with environmentally relevant factors and, ultimately, their ecological function.

Efforts to bridge the gap between individual movement behaviors and large-scale population dispersal have been intense, especially in spatial ecology. Random walk theory has been a particularly powerful approach. Founded on observations of the irregular motions of pollen, i.e. Brownian motion [6], random walk theory relates organism movements in terms of speed, direction or turning rate to probabilities of particle distribution [7]–[9]. Correlated random walk models, which assume correlation in successive movement direction, turning angle or velocity, have been particularly successful in linking movements and dispersion in diverse organisms [5], [10]–[12]. Every formulation of a random walk model rests on a set of assumptions about the underlying movement parameters, their changes over time and their dependence on internal or external stimuli [13]. Predicted rates of population distribution are extremely sensitive to the underlying assumptions and to the exact model formulation [14]–[16]. As was recently shown for movement data of single-celled algae, widely used models with differing assumptions may yield significantly different predictions of organism distributions [17]. Thus, it is impossible to determine the most appropriate set of assumptions based on theoretical considerations alone.
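For readers less familiar with correlated random walks, it may help to keep in mind a standard textbook result (this is general background, not a formula from the paper): for a one-dimensional walk whose velocity autocorrelation decays exponentially with correlation time $\tau$ and whose velocity variance is $\sigma_w^2$, the mean squared displacement is

\begin{align*}
\langle z^2(t)\rangle &= 2\,\sigma_w^2\,\tau\left[\,t-\tau\left(1-e^{-t/\tau}\right)\right],\\
\langle z^2(t)\rangle &\approx \sigma_w^2\,t^2 \quad (t\ll\tau,\ \text{ballistic regime}),\\
\langle z^2(t)\rangle &\approx 2\,\sigma_w^2\,\tau\,t \quad (t\gg\tau,\ \text{diffusive regime, effective diffusivity } D\approx\sigma_w^2\,\tau).
\end{align*}

The practical point is that the correlation time sets where the crossover from ballistic to diffusive motion occurs and, in the diffusive regime, how fast the positional variance grows.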
Concurrent empirical data on both organism movements and the resulting population distributions, and on the stimuli that modulate these distributions, are necessary to inform predictive model formulations. In a recent advancement, [18] developed empirical methods that allow the simultaneous quantification of individual movement behaviors and population distributions of free-swimming, planktonic organisms in stable and spatially structured environments. The approach taken here was to use these methods and empirically motivate a series of individual-based, hidden Markov models to predict population distributions, and to examine the goodness of fit between model-predicted and empirically measured distributions. The goals of this study were (1) to examine the feasibility of reproducing empirically observed population distributions from individual movement behaviors and (2) to identify the key characteristics necessary to adequately link individual movements with population distributions. Advancement on these goals is necessary for developing analytical solutions to random walks and predicting individual encounter rates, population distributions and, ultimately, the role of movement in driving organism abundance and distribution patterns. The results of both empirical and numerical analyses strongly suggest that motility patterns of some planktonic protists must have correlation times on the order of minutes.

Results

Organism swimming behaviors and vertical distributions were measured in 3D using vertically moveable, stereo video cameras that recorded in randomized order at 6 vertically separated horizons. Each video segment yielded both individual movement behaviors and abundance of organisms. The footage was processed through a series of automated video-analysis steps that yielded organism positions, which were then used to reconstruct and analyze 3D movement behaviors. The empirical movement data consist of a total of 1032 movement trajectories of Oxyrrhis marina swimming freely within a 1 L, 30-cm-high column of filtered seawater, several cm away from the nearest wall. The minimum trajectory length was 3 seconds, with 124 trajectories exceeding 10 seconds in duration. In total, these observations represent 108 minutes of movement data, with a median trajectory length of 5.2 seconds and the longest observation 33 seconds. The mean swimming speed was 235 (±103) μm s^-1 and the mean swimming direction was 57° (±34°) off the vertical axis. The frequency distribution of the over 24000 empirically determined vertical velocities shows that their distribution is non-Gaussian, with a significant negative skewness (Fig. 1). Thus, the population was characterized by few strong down-swimmers and many, relatively slower up-swimmers. The median vertical velocity was 118 μm s^-1, with a considerable standard deviation of 110 μm s^-1. There was some indication that the population either underwent behavioral shifts during the time of observation, or that there were multiple behavioral types represented within this clonal lineage of O. marina. Vertical velocity significantly increased over the period of observation (p = 0.01), whereas there were no significant differences among vertical velocities measured at the six depths in the water column (p = 0.13). The frequency distribution of vertical velocities remained positively biased, irrespective of the time elapsed since introduction. The consistent upward swimming bias indicates that this bias was inherent to the organisms and not a function of the point of introduction at the base of the water column.
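As an aside (not part of the article), the kind of velocity summary statistics reported here, and the trajectory autocorrelation used in the analysis that follows, are straightforward to compute. The sketch below uses NumPy/SciPy with invented sample data and hypothetical variable names; it is not the author's Matlab/Tracker3D code.

import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)

# Invented vertical-velocity sample (um/s), negatively skewed like Fig. 1:
# many moderate up-swimmers plus a tail of faster down-swimmers.
w = np.concatenate([rng.normal(150, 60, 9000), rng.normal(-250, 120, 1000)])
print("median:", np.median(w), "sd:", w.std(), "skewness:", skew(w))

def normalized_autocorrelation(v, max_lag):
    """Normalized autocorrelation of one velocity series (mean removed).
    Lag 0 is 1 by construction; a de-correlation time would show up as the
    lag where the coefficient first approaches zero."""
    v = np.asarray(v, float) - np.mean(v)
    denom = np.sum(v * v)
    return np.array([np.sum(v[:len(v) - k] * v[k:]) / denom
                     for k in range(max_lag + 1)])

# One hypothetical 30-s trajectory sampled at 4 Hz (dt = 0.25 s):
t = np.arange(0, 30, 0.25)
w_traj = 120 + 30 * np.sin(2 * np.pi * t / 5) + 10 * rng.standard_normal(t.size)
acf = normalized_autocorrelation(w_traj, max_lag=60)   # lags up to 15 s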
Figure 1. Frequency distribution of empirically measured vertical velocities for all swimming trajectories. Negative values indicate downward and positive values upward swimming. The probability density function of a normal distribution with the same mean and variance is superimposed to show the negative skew in the empirical data. This indicates that the empirical velocity data contained more, and stronger, downward swimmers and more, relatively weaker, upward swimmers than normally distributed data.

Simultaneously with measuring individual movement behaviors, the population distribution of Oxyrrhis marina was quantified throughout the entirety of the tank over 1.5 hours (Fig. 2). The time course of abundance changes is shown in three successive vertical profiles (i.e. passes) that each lasted 20–30 minutes. In the laboratory, the population showed a progressive upward drift, slowly increasing the number of cells at higher horizons. Because cells were introduced at the bottom of the tank, abundances at the upper horizons were initially low. A few individuals were seen rising upward rapidly, arriving at the top of the tank within the first 40 minutes (passes 1 and 2). The majority of individuals remained in the lower half of the tank for the first 40 minutes. After approximately 1 hr, the population appeared uniformly distributed throughout the tank.

Figure 2. Empirical abundance of organisms observed at 6 vertical horizons in three successive passes, lasting 20–30 minutes each. Standard errors of the abundance estimates are contained within the data symbols. The population was slowly moving upward and dispersed throughout the water column.

An individual-based, biased random walk model was formulated to establish linkages between individual movement behaviors on the microscopic level and the macroscopic population distributions and changes therein. To seed this model, individual movement behaviors needed to be characterized both in terms of their vertical velocities and their correlation times. The movement paths showed highly periodic movements (Fig. 3, left panel), with correlation coefficients failing to asymptotically approach zero (bottom, right panel) and net distance traveled growing rapidly (top, right panel), as would be expected for highly correlated movements. Individual movement paths were characterized by high degrees of auto-correlation in all three dimensions. De-correlation of velocities was not observable over the measured path durations. The auto-correlation coefficient calculated for the entirety of all trajectories failed to identify a de-correlation time in the up to 33-second-long observations. However, the sample size for trajectories longer than 15 seconds was low (30 trajectories). Thus, autocorrelation analysis suggested that correlation times were at least 30 seconds but did not identify a distinct correlation time scale. Given this uncertainty, 12 correlation times between 1 and 1800 seconds were chosen for the model analysis.

Figure 3. Empirical, 30-second, 3D trajectory of Oxyrrhis marina (left panel) and the corresponding net distance traveled (top, right panel) and autocorrelation coefficients for the three velocity components (bottom, right panel). There was no evidence of de-correlation in motion (i.e. a transition from ballistic to diffusive motion). De-correlation would be indicated by a change in the slope of the line showing maximum distance travelled over time or by the correlation coefficients approaching zero.
The cell continued to progress with a high degree of correlation even over 30 seconds. De-correlation was not observable in any of the paths recorded.

Predictions of population distribution from empirically measured vertical velocities, made with a series of individual-based simulation models, showed that the empirically observed mean upward drift of the population was captured well by all model predictions, irrespective of the assumed correlation time (Fig. 4). Correlation times of 1 second predicted the population to cluster tightly in the vertical as cells moved upward through the water column (Fig. 4, panels 2 & 3). After 30 minutes of simulation, the mean vertical position of this population was 17.5 cm, with a standard deviation of 0.5 cm. The empirically observed, greater variance of the population dispersal and the delay in upward flux of the majority of the population were not predicted by model iterations assuming short correlation times. Simulations assuming minute-scale correlation times did capture the increased variance in population distribution (Fig. 4, panels 6 & 7).

Figure 4. Individual-based model predictions of vertical population distributions of individuals performing a random walk with increasing correlation time (stated in seconds above each panel). The initial distribution is shown in the leftmost panel. Duration of the simulation was 30 minutes for 1000 individuals. Note the difference in x-axis ranges. Increases in correlation time resulted in rapid increases in the variance of the population distribution.

Variance in population distribution increased rapidly with increasing correlation time over the first 30 minutes of model simulation (Fig. 5). Correlation times of 1 second resulted in low and nearly constant variance in population distributions. Increased correlation times of up to 100 seconds led to more rapid dispersal, with standard deviations increasing by approximately 1 mm per minute. Assumed correlation times above 100 seconds predicted cells distributed throughout the water column, and standard deviations of the population distributions increased at nearly 5 mm per minute. Increasing correlation time led to the emergence of the behavioral heterogeneity observed in the empirical data, signified by greater cell-to-cell variation in movement and resultant position.

Figure 5. Mean standard deviations of model-predicted population distributions over time as a function of correlation time. Error bars show standard deviations of triplicate runs. Standard deviations were low and nearly constant for correlation times of 1 second or less. For correlation times up to 100 s the variability in population distribution increased moderately, and rapidly for correlation times above 100 s. After 30 minutes, the standard deviation in population distribution for correlation times of 500 seconds or more was over 20-fold greater than for an uncorrelated random walk.

As is frequently observed (e.g. [9]), the uncorrelated random walk model predicted a Gaussian cohort advancing upward at high cell concentration in close proximity. Longer correlation times resulted in rapid increases in population dispersal and more rapid spreading throughout the water column (Fig. 6). Although the mean net dispersal distance was identical irrespective of the correlation time, long correlation times resulted in much higher variance in net dispersal distances, because some individuals remained near the point of entry for the entirety of the model simulation, whereas a few fast, upward-swimming cells reached the surface of the tank within a few minutes.
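As a rough, back-of-the-envelope reading of this trend (my gloss, not a calculation from the paper): in the diffusive regime of the standard relation quoted earlier, the positional spread after a fixed time scales with the square root of the correlation time,

\[
\sigma_z(t)\approx\sqrt{2\,\sigma_w^2\,\tau\,t}\ \ (t\gg\tau), \qquad\text{so}\qquad \frac{\sigma_z(\tau_2)}{\sigma_z(\tau_1)}\approx\sqrt{\tau_2/\tau_1},
\]

so each factor of 100 in correlation time buys roughly a factor of 10 in spread. For the longest correlation times considered here the 30-minute runs are not yet fully diffusive ($t$ is not $\gg\tau$), so this scaling is only indicative there.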
Behavioral heterogeneity was also suggested by the variance in the empirically measured vertical velocities (Fig. 1). The assumption of longer correlation times reproduced the observed cell-to-cell variation in motility, suggesting that behavioral heterogeneity is an important contributor to the observed behaviors and population distributions, even though the source population was clonal.

Figure 6. Mean distance the farthest 25th percentile of the population moved in 30-minute simulations in an infinite water column. Maximum dispersal distance increased rapidly with increasing correlation time, and model organisms with correlation times of 100 seconds or more were predicted to move on average two, and up to three, times farther than those with lower correlation times.

Root mean square error (RMSE) of model predictions compared to the empirical distribution data decreased significantly with increasing correlation time (Fig. 7). Model predictions differed most from empirical observations when the assumed correlation was weak. Abundance predictions from highly correlated random walk models with correlation times of 500 seconds or more differed least from the empirical data. RMSE was highest, and statistically significantly different, among models assuming correlation times of 1 to 300 seconds. RMSE estimates for correlation times of 500 seconds or more were lowest and statistically indistinguishable from one another, suggesting a minimum correlation time of 8 minutes. Further refinement, or an upper limit of the correlation time, was not identifiable based on this comparison of empirical and predicted population distributions.

Figure 7. Root mean square error (RMSE) of triplicate, model-predicted distributions decreased significantly with increasing correlation time relative to the empirically measured distribution. Error bars are three standard deviations of the mean. RMSE was scaled to the maximum RMSE estimate, at a correlation time of 0.25 seconds. Empirical and simulation data were sampled with identical order and frequency. Model predictions for correlation times of 500 seconds or more were statistically indistinguishable from one another and deviated the least from the empirical data.

The time and space scales of the model simulations were expanded to a 15 m water column, and the simulations were run for 24 hrs to explore the consequences of correlation duration on individual dispersal distances as well as population distributions. Total population size, evaluation frequency and all other aspects of the simulation were identical to those used in the simulations evaluated above and stated in the methods. Within-patch retention mechanisms have been clearly demonstrated for this species [18] but were not implemented in the simulation. First, expansion of the time and space scales of the model illustrated how longer correlation times increased population dispersal and thus the variance in distribution. Based on the empirically measured vertical velocity distribution, organisms moving with a correlation time of 1 second occupied a vertical range of 10 cm after 12 hours. For organisms with correlation times of 300 and 900 seconds, the predicted vertical ranges were 2.5 and 4 m, respectively. Thus, longer correlation times increased population dispersal rates by at least an order of magnitude. Second, the individual dispersal distance of the farthest-traveling 25th percentile increased rapidly as correlation time increased. An individual with a correlation time of 900 seconds would travel on average twice as far, and up to 3 times farther, than an individual with a weakly correlated random walk. Therefore, individuals with highly correlated random walk behaviors are expected to encounter remote targets more rapidly than weakly correlated random walkers.
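A minimal sketch of the kind of model–data comparison described above (scaled RMSE between predicted and observed abundance profiles) might look as follows; the arrays and error levels here are made up for illustration and do not reproduce the paper's actual data or sampling order.

import numpy as np

rng = np.random.default_rng(0)

def rmse(predicted, observed):
    """Root mean square error between two abundance profiles."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.sqrt(np.mean((predicted - observed) ** 2))

# Hypothetical observed abundances at six horizons for three passes.
observed = np.array([[40, 25, 15, 10, 6, 4],
                     [30, 26, 20, 15, 12, 10],
                     [20, 19, 18, 17, 16, 15]], dtype=float)

# Stand-in "model predictions" for three correlation times; in practice these
# would come from the random walk simulation, sampled at the same horizons,
# in the same order and at the same times as the empirical data.
predictions = {tau: observed + rng.normal(0, err, observed.shape)
               for tau, err in [(1, 8.0), (100, 4.0), (500, 1.5)]}

errors = {tau: rmse(pred, observed) for tau, pred in predictions.items()}
max_error = max(errors.values())
scaled = {tau: value / max_error for tau, value in errors.items()}
print(scaled)   # smaller values = closer match to the observations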
Simulation of a 1-m-thick phytoplankton prey layer within the 15 m water column provides quantitative estimates of the impact of correlation time on the encounter of remote targets. Dimensions of the phytoplankton layer were based on empirical measurements in a shallow, coastal fjord [19]. Higher correlation times led to considerably earlier arrival of 25% of the population within the prey layer, over 1 hour earlier in the case of a correlation time of 1800 seconds (Fig. 8, top panel). Populations with strongly correlated random walks remained within this prey layer over 2 hours longer than populations with weakly correlated random walks (Fig. 8, bottom panel).

Figure 8. Relative time of first arrival (top panel) and residence duration (bottom panel) of the 25th fastest percentile of correlated random walkers in a simulated prey patch (1 m thickness) within a 15 m water column. Differences in arrival and residence time were calculated relative to uncorrelated random walkers. Larger correlation times result in over 1 hour earlier arrival and over 2 hours longer residence in the target prey patch.

Discussion

Long correlation times and cell-to-cell variation were identified as key characteristics necessary to reproduce empirically observed population distributions from individual movement behaviors. Simultaneous measurements of both individual movement behaviors and population distributions were essential in linking microscopic movement behaviors with macroscopic population distributions. The results strongly suggest that motility patterns of some planktonic protists must have correlation times on the order of minutes, rather than seconds as is currently thought. Persistent similarity of movement in individual cells resulted in vastly higher dispersal rates for the population and significantly increased predicted rates of encounter with remote targets.

The correlation times estimated here far exceed previously measured correlation times. Previous studies suggest that transitions from highly correlated movements to more diffusive motion occur within 10 seconds for taxonomically diverse planktonic organisms ranging from bacteria to copepods [20], [21]. Uncorrelated movements were not identifiable in the present set of several hundred movement tracks. To identify the correlation time scales empirically would require minute-scale observations of hundreds of individuals. The longer correlation times observed in this study may be due to the much larger than typical observation volume used, which may have resulted in longer free path lengths.

The consequences of long correlation times in individual motility patterns of plankton are significant. Planktonic organisms live in a nutritionally dilute environment. Recent, high-resolution observations in the ocean have shown that phytoplankton, the principal prey of many heterotrophic protists, are frequently concentrated in discrete layers or patches, rather than uniformly distributed [22]. Early hypotheses identified that planktonic predators must exploit these patches to sustain measured levels of secondary productivity [23]. Asexually reproducing organisms in particular can quickly transfer increased resource availability into increased growth and abundance. Long correlation times of individual movements result in significant increases in predicted dispersal distances of individuals and thus increases in the probability of encountering remote targets, including prey patches.
Conversely, the probability of less advantageous encounters, including those with predators, is also increased [20], unless dispersive escape responses are evoked. For clonal organisms, the increased probability of encountering unexploited resource patches may offset the increased risk of mortality due to predator encounters. The exact rate of encounter of remote targets will depend on the distribution, size and persistence of the targets. Irrespective of target characteristics, individuals with long correlation times will encounter specific targets faster, given their, on average, almost two- and up to three-fold greater dispersal distance. Behavioral modifications in response to prey-derived stimuli that lead to consumer aggregations within resource patches are well documented (e.g. [18], [24], [25]) and are expected to provide further advantages to consumers exploiting dilute environments.

At the population level, increased rates of population dispersal would erode aggregations and patchiness. It is noteworthy that the population also contained a small fraction of strong downward swimmers, which would further increase population dispersal rates. In the absence of aggregative stimuli, the behavioral heterogeneity observed here may serve an important dispersive function and provide adaptive advantages that counteract long-term aggregations. Long correlation times may have a homogenizing effect in light of the many physical and biological processes that lead to cell aggregations and patchiness. This dispersive behavior could lead to reduced competition among cells [26], reduced risk from predators attracted to high cell concentrations [27] and reduced risk of the entire population being subjected to a localized risk or condition. Accelerated population dispersion may also counteract the tendency toward cluster formation due to rapid asexual reproduction in planktonic organisms [28].

It is unknown how constant the measured rates of dispersal are over time. The observations here were made shortly after organisms were introduced into the tank; thus, population distributions were transient and dispersal rates likely at their maximum. The experimental set-up deliberately did not include any stimulus that would either limit (e.g. aggregation) or enhance dispersal, so that the measured dispersal rates were independent of external stimuli. However, organisms likely modulate their dispersal rates both over time and in response to external cues. Such modulation of correlation time has been suggested as an effective prey-search strategy for organisms lacking sensory capacity [15]. Similarly, [29] identified high variance in the turn rate of freshwater zooplankton (Daphnia spp.) and proposed that variation in movement behaviors has adaptive advantages.

The observed upward bias was a consistent characteristic of the measured swimming behaviors, irrespective of the point of introduction or the time of sampling. The same vertical bias was previously observed for the same species, and the presence of a prey stimulus significantly reduced but did not eliminate the upward bias [18]. In the absence of aggregative stimuli, this upward bias would ultimately lead to surface aggregations of organisms. Although surface aggregations were indeed observed in the laboratory, the stable, convection-suppressing conditions of this laboratory set-up are neither realistic nor characteristic of planktonic habitats.
The dynamic hydrography characteristic of the coastal ocean, including breaking internal waves, shear instability at boundaries and turbulent mixing, may counteract the observed net upward flux of organisms and prevent aggregations at the surface. Reported eddy diffusivities are an order of magnitude higher than the upward swimming velocities measured here and would counteract surface aggregations [30]. Given these dispersive factors, an inherent upward-swimming bias may hold adaptive advantages for planktonic organisms in the ocean, which is characterized by weak horizontal but strong and predictable vertical gradients in resource availability.

The data presented here strongly suggest that the correlation times of motility patterns for some planktonic organisms are significantly longer than currently assumed. Long correlation times suggest that organisms with these motility patterns have higher dispersal rates and higher encounter rates with remote targets than organisms with only weakly correlated random walks. Simultaneous empirical observations of individual movement behaviors and the resulting population distributions were essential in linking statistical properties of cell movements to predictions of population distribution, a connection across disparate time and space scales. Model simulations of organism movements and population distributions were necessary to extrapolate beyond empirically measurable time and space scales. Verification of model predictions against empirical observations helped distinguish among a number of reasonable model formulations and, ultimately, estimate the minimum correlation time. Quantifying the magnitude of the correlation time provides a basis for estimating individual encounter rates as well as population distributions. These quantitative tools are indispensable for predicting organism distributions and their function in the environment.

Methods

Culture of microorganisms

The heterotrophic dinoflagellate Oxyrrhis marina was used to study the effects of swimming behaviors on population dispersal. O. marina is 12–18 μm in length and is a globally distributed species [31]. Cells swim and steer with the aid of perpendicular transverse and longitudinal flagella that each propagate helicoidal waves [32]. O. marina was fed the haptophyte prey alga Isochrysis galbana, grown in nutrient-amended filtered seawater, f/2 [33]. Cultures were maintained on a 16:8 hr light:dark cycle, at 18°C and 50 μmol photons m^-2 s^-1 provided by cool and warm white lights. The cultures were not axenic. The salinity of the medium was 30. Both predator and prey cultures showed positive growth in all tested media ranging in salinity from 24 to 32. Cultures were transferred every 4–6 days to maintain exponential growth. Cell concentrations of both predator and prey cultures were determined with a Coulter Multisizer (Beckman Coulter, Miami, Florida) just prior to experiments. Predators were starved for 48 hrs prior to the experiment to minimize variation between cells.

Empirical data collection & extraction

Organism swimming behaviors and vertical distributions were measured in complete darkness in a 1 L, octagonal Plexiglas tank of 30 cm height at ambient room temperature. All organisms were introduced at the bottom of the tank and observations were made without external stimuli. To suppress water movement, the water column was stabilized through a weak, linear salinity gradient, ranging from 28 to 30.
Video images were captured with two infra-red-sensitive cameras (Cohu 4815-3000/000) equipped with Nikon 60 mm Micro Nikkor lenses, with illumination provided by infra-red light-emitting diodes (Ramsey Electronics, 960 nm). The cameras were mounted on a vertically movable stage. Vertical position was controlled through a ruler fixed to the side of the stage. Video was recorded at 15 frames per second. Prior to these experiments, it was verified that some cells reached the top of the experimental tank within 15 minutes, and filming was commenced after a waiting period of 15 minutes. Footage was collected in the center of the water column at six equally spaced horizons, approximately 5 cm apart, for 2 minutes every 20 to 30 minutes, for a total duration of 1.5 hours. This resulted in 3 video segments being collected at each horizon. At the beginning of the experiment the order of sampling horizons was randomized. The position of organisms in the video footage was determined with ImageJ image-processing software by removing stationary background objects and thresholding. A three-dimensional calibration grid was used to convert video pixel dimensions to physical units. The stereoscopic field of view was approximately 1.8 cm wide, 1.3 cm high and 4.0 cm deep. Thus, cells within a volume of approximately 9 ml were observed. These movement data were unencumbered by frequently encountered methodological limitations such as low temporal resolution and physical restriction (e.g. container size), and the 3D rendition avoids underestimates of swimming velocities and directions. Three-dimensional swimming paths were generated from pixel positions by Tracker3D, a Matlab-based motion analysis package for tracking organism movement written by Danny Grünbaum (Univ. of Washington). Before analysis, swimming paths were smoothed with a cubic spline to remove high-frequency noise. Individual movement statistics were calculated from 3D swimming paths, subsampled at 0.25-second intervals, including only trajectories of at least 3 seconds duration. Abundance of O. marina was estimated from the average number of 3D trajectories observed in each video frame. Further details on the water column set-up, filming and data collection are reported in [18].

Model formulation

An individual-based, hidden Markov model was formulated to predict the vertical redistribution of the Oxyrrhis marina population in the water column. The successive positions and movement parameters for each individual were modeled explicitly, based on the empirically observed behaviors. The magnitude and frequency distribution of empirically measured vertical velocities provided the basis for modeled velocities (Fig. 1). Negative velocities indicate downward movement. The $i$-th model organism at time $t$ was characterized by a position $z_i(t)$ and a vertical velocity $w_i(t)$, and was associated with a specific swimming trajectory, randomly drawn from the entirety of observed paths; it was then assigned the first velocity measured within that path. Triplicate model iterations were evaluated at a time-step frequency of 4 Hz with 1000 individuals each. Successive organism positions were calculated as $z_i(t+\Delta t) = z_i(t) + w_i(t)\,\Delta t$, with $\Delta t = 0.25$ s. Model organisms encountering the upper or lower boundaries were assigned movement paths with net downward or upward movements, respectively. The model was chosen to be 1-dimensional, since there were no horizontal gradients in external stimuli and the variable of interest was the rate of vertical population redistribution in the water column.
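A compact sketch of this kind of individual-based, 1-D correlated random walk is given below. It is not the author's Matlab implementation: the empirical trajectory library is replaced by synthetic velocity series, the symbols follow the notation reconstructed above (position z, vertical velocity w, time step Δt = 0.25 s), boundary handling is simplified, and the trajectory-resampling rule anticipates the correlation-time implementation described in the next section (a new trajectory is drawn with probability Δt/τ per step).

import numpy as np

rng = np.random.default_rng(42)

# Stand-in "empirical" trajectory library (synthetic, for illustration only).
# Each trajectory is a sequence of vertical velocities (m/s) sampled at 4 Hz;
# trajectories are replayed cyclically for simplicity.
DT = 0.25                          # time step, s (4 Hz)
N_TRAJ, TRAJ_LEN = 200, 40         # 200 trajectories of 10 s each
traj_library = 118e-6 + 110e-6 * rng.standard_normal((N_TRAJ, TRAJ_LEN))

TANK_HEIGHT = 0.30                 # m

def simulate(tau, n_ind=1000, duration=1800.0):
    """1-D correlated random walk: each individual replays velocities from a
    randomly assigned trajectory and switches to a new trajectory with
    probability DT/tau per step (tau = correlation time, s)."""
    n_steps = int(duration / DT)
    z = np.zeros(n_ind)                            # start at the tank bottom
    traj = rng.integers(0, N_TRAJ, size=n_ind)     # assigned trajectory index
    idx = np.zeros(n_ind, dtype=int)               # position within trajectory
    p_switch = min(DT / tau, 1.0)
    for _ in range(n_steps):
        w = traj_library[traj, idx]                # current vertical velocity
        z = z + w * DT                             # z(t + dt) = z(t) + w*dt
        # Simplified boundary handling: out-of-range individuals are clipped
        # back into the tank and given a new trajectory (a simplification of
        # the paper's rule of assigning net up/down paths at the boundaries).
        out = (z < 0) | (z > TANK_HEIGHT)
        z = np.clip(z, 0.0, TANK_HEIGHT)
        switch = out | (rng.random(n_ind) < p_switch)
        traj[switch] = rng.integers(0, N_TRAJ, size=switch.sum())
        idx[switch] = 0
        idx[~switch] = (idx[~switch] + 1) % TRAJ_LEN
    return z

for tau in (1, 100, 500):
    z_final = simulate(tau)
    print(f"tau = {tau:4d} s: mean height {z_final.mean():.3f} m, "
          f"sd {z_final.std():.3f} m")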
The spatial and temporal scales of the model were identical to the laboratory set-up.

Implementation of correlation time

In all model formulations, individuals were randomly assigned new velocities at the model iteration frequency of 4 Hz. In the uncorrelated random walk model, the assigned vertical velocity was drawn from the entirety of all observed vertical velocities. Thus, information on the associated trajectory was meaningless for the uncorrelated random walk model. In the correlated random walk model, subsequent velocities were sampled from the associated swimming trajectory in sequence of observation. New trajectories were assigned at a frequency of $1/\tau$, i.e. with probability $\Delta t/\tau$ at each time step, where $\tau$ denotes the correlation time. Thus, for correlation times greater than 0.25 seconds (i.e. longer than the iteration interval at 4 Hz), individuals sampled repeatedly and in sequence from the velocities within one empirically determined trajectory. Modeled correlation times ranged from 0 to 1800 seconds. Population size was held constant, since demographic processes were not expected to change abundance over the model duration of 1.5 hrs.

Statistical analysis

Comparisons of the vertical velocities over time and at different filming horizons were made using a two-way ANOVA. Sensitivity analyses were conducted to ensure that the path discretization parameters did not significantly change the calculated vertical velocities. Furthermore, artificial data sets were created to test the sensitivity of model predictions to deviations from normality in the frequency distribution of vertical velocities and to total sample size. Neither analysis suggested a change in conclusions. The autocorrelation coefficients of vertical velocities were calculated for each path separately, with the mean velocity subtracted. To facilitate comparison among paths, correlation coefficients were normalized. The root mean square error (RMSE) between empirically observed and model-predicted vertical population distributions was calculated to facilitate among-model comparisons. To do so, vertical population distributions from model predictions were sampled in the same order and at the same frequency as the empirical data were collected. All RMSE estimates were scaled to the maximum RMSE observed to remove the effect of sample size from the estimates. Comparisons of RMSE were made with a one-way ANOVA. Significant differences among means were assessed using a Bonferroni-corrected post-hoc test. Statistical significance was assigned at p < 0.05. All analyses and simulations were done using the software package Matlab 7.9.0.

Acknowledgments

Daniel Grünbaum is gratefully acknowledged for generous advice with the analysis and constructive review of an earlier draft.

Author Contributions

Conceived and designed the experiments: SMD. Performed the experiments: SMD. Analyzed the data: SMD. Contributed reagents/materials/analysis tools: SMD. Wrote the paper: SMD.
{"url":"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1000942?imageURI=info:doi/10.1371/journal.pcbi.1000942.g001","timestamp":"2014-04-16T06:47:47Z","content_type":null,"content_length":"171288","record_id":"<urn:uuid:b8a979bc-b05a-43cf-a07c-27629eabea63>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
Inductive reasoning

August 22nd 2009, 04:39 AM #1
Aug 2009

Inductive reasoning

Ok first off let me start by saying I am 34 years old just now headed back to school, so it has been a good while since I have done any math. The question I have to answer is: Use inductive reasoning to decide whether the conclusion for each argument is correct or incorrect. Show at least 3 specific examples as part of my inductive reasoning.

1. If a number with three or more digits is divisible by 4, then the last two digits of the number are divisible by 4.

My problem is I couldn't remember all the tips and tricks from school, so I looked online and supposedly that is correct. But I have a number, actually a couple of numbers, that would seem to make that incorrect. So this is where I am baffled. If you take 100/4 = 25, but 00/4 = 0, so it's a 3-digit number and to me that would make the statement false, but according to the things I find online the statement is correct. Any help would be appreciated. Same goes for 200/4 = 50, 00/4 = 0. Hmm. Thank you.

Last edited by Ancobaca; August 22nd 2009 at 07:18 AM.

August 22nd 2009, 08:29 AM #2

Hi Ancobaca, and good luck with your return to school and to mathematics. The number 0 is considered to be divisible by any number. In particular, 4×0 = 0, so 0 is a multiple of 4 and thus 0 is divisible by 4. Any number ending in 00 is a multiple of 4. Also, the last two digits give you the number 0, and that agrees with the rule that if a number is a multiple of 4 then its last two digits form a multiple of 4.
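For readers who want to check the divisibility rule directly (this snippet is an illustration added here, not part of the original thread), a brute-force test over all numbers with three or more digits in a small range confirms that a number is divisible by 4 exactly when the number formed by its last two digits is:

# Check: for n >= 100, n is divisible by 4  <=>  its last two digits (n % 100)
# form a number divisible by 4. This holds because n = 100*q + r with
# 100 divisible by 4, so n % 4 == r % 4.
for n in range(100, 100000):
    rule = (n % 100) % 4 == 0
    fact = n % 4 == 0
    assert rule == fact, n

# Three concrete examples of the kind the assignment asks for:
for n in (116, 300, 1348):
    print(n, "last two digits:", n % 100, "divisible by 4:", n % 4 == 0)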
{"url":"http://mathhelpforum.com/algebra/98893-inductive-reasoning.html","timestamp":"2014-04-18T06:23:07Z","content_type":null,"content_length":"33651","record_id":"<urn:uuid:6ef05c3a-2b05-4c63-b5df-84a1de905a6e>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
Converting function to Quadratic form

February 13th 2010, 01:59 PM #1
Sep 2009

Converting function to Quadratic form

Hi everyone. I'm looking for help with converting this function to quadratic form. The function is f(x1,x2) = (x2-x1)^4 + (12*x1*x2) - x1 + x2 - 3. The quadratic form I need to convert to is: f(x) = (1/2)x'Qx - x'b + h, where x is the vector [x1 x2]', ' = transpose, Q is a symmetric matrix, b is a vector, and h is the constant. Also, Q is symmetric and positive definite (Q = Q' > 0). The trouble I'm running into is that f(x1,x2) is 4th order, and the examples I have convert only 2nd order functions to the f(x) quadratic objective format above. In case you're interested, I'm looking for this info to solve a steepest descent problem, where the varying ak value is ak = g(k)'g(k) / ( g(k)'Qg(k) ), where g(k) = Qx(k) - b. Thanks in advance!

Last edited by scg4d; February 13th 2010 at 02:13 PM.

February 14th 2010, 03:09 AM #2
MHF Contributor
Apr 2005

scg4d wrote: Hi everyone. I'm looking for help with converting this function to quadratic form. The function is f(x1,x2) = (x2-x1)^4 + (12*x1*x2) - x1 + x2 - 3. The quadratic form I need to convert to is: f(x) = (1/2)x'Qx - x'b + h, where x is the vector [x1 x2]', ' = transpose, Q is a symmetric matrix, b is a vector, and h is the constant. Also, Q is symmetric and positive definite (Q = Q' > 0). The trouble I'm running into is that f(x1,x2) is 4th order, and the examples I have convert only 2nd order functions to the f(x) quadratic objective format above.

Well, yes. That's because 2nd order functions are quadratic! You cannot write a function that is NOT quadratic in a quadratic format.

scg4d wrote: In case you're interested, I'm looking for this info to solve a steepest descent problem, where the varying ak value is ak = g(k)'g(k) / ( g(k)'Qg(k) ), where g(k) = Qx(k) - b. Thanks in advance!
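Since the thread is really about steepest descent with the exact line-search step ak = g'g / (g'Qg), which only applies when the objective genuinely is quadratic, here is a small illustrative sketch. It was added for this write-up, is not from the thread, and uses a made-up quadratic objective rather than the poster's quartic one.

import numpy as np

# Quadratic objective f(x) = 0.5*x'Qx - b'x + h with symmetric positive
# definite Q; the gradient is g(x) = Qx - b, so the minimizer solves Qx = b.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
h = -3.0

def f(x):
    return 0.5 * x @ Q @ x - b @ x + h

x = np.zeros(2)
for k in range(50):
    g = Q @ x - b                     # gradient at the current iterate
    if np.linalg.norm(g) < 1e-10:
        break
    alpha = (g @ g) / (g @ Q @ g)     # exact line-search step for a quadratic
    x = x - alpha * g

print("steepest descent:", x)
print("direct solve    :", np.linalg.solve(Q, b))   # should agree closely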
{"url":"http://mathhelpforum.com/advanced-algebra/128692-converting-function-quadratic-form.html","timestamp":"2014-04-18T03:06:46Z","content_type":null,"content_length":"34916","record_id":"<urn:uuid:6394ac35-3662-4614-a2cf-778092e936be>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00097-ip-10-147-4-33.ec2.internal.warc.gz"}