content
stringlengths
86
994k
meta
stringlengths
288
619
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Use complete sentences to explain how you would use graphing technology to find the solutions of 0 = 2x2 − 3x + 0.5. What do the solutions of 0 = 2x2 − 3x + 0.5 represent on the graph of the • one year ago • one year ago Best Response You've already chosen the best response. plot y = 2x^2 − 3x + 0.5 at the points where the curve touches the x-axis 2x^2 − 3x + 0.5 = 0 the points where y = 0 represent the real factors of 2x^2 − 3x + 0.5 in this case x ≈ 0.19098 and x ≈ 1.30902 so y ≈ 2(x - 0.19098)(x - 1.30902) Best Response You've already chosen the best response. Best Response You've already chosen the best response. Set the equation to a funtion f(x) or y, such that y = 2x^2 - 3x +0.5, Let x = various values, say -2, -1, 1, 2 and maybe a few more, the more you have the more accurate your answer will be Calculate the y values, by substituting each value of x into the above equation. For example, if you let x = 1, then y = 1.5, you will plot the point (1,1.5), Connect all the dots. The graph you blot will look like a smooth upward skinny V, just to the left of the middle, or y, axis. Where the graph hits the horizontal, or x axis, are the solution to the above equation. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/512a7e2de4b098bb5fba98a1","timestamp":"2014-04-17T04:24:28Z","content_type":null,"content_length":"33128","record_id":"<urn:uuid:cb74af2b-412e-4a37-b9de-43b9d3acb1c3>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
[no subject] [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] [no subject] Get this: x1 x2 x3 For all I know, there is no linear complexity O(n) sorting Here you can see how different algorithms actually perform: For N observations and K variables, Nick suggested reshaping data to get NxK observations, sort them (perhaps on a complex key) and reshape it back. I would doubt that this strategy improves performance, because of the above considerations of nonlinearity of sorting with respect to N: "sorting K values N times" is much simplier than "sorting NxK values". Just how much easier - it depends. When K is rather small, the gain can be quite big. Consider e.g. K=2 :) -- a simple conditional swap -- linear! (number of swaps in the worst case = N, number of comparisons: also N). Reshape itself works quite slow. Like really slow. Because it is implemented as an .ado program. A commentary by the author of rowsort notes that "It may be faster to reshape, sort within blocks, and reshape again". So I take that as a sign of even worse performance. Another issue to consider is memory requirements. How much memory does reshape require? (we are all so concerned about fitting our data into 2GB/1GB/700MB/... limit, but often forget that once it is there, working with it requires memory too!) In my experience reshape requires at least two times as much: . memory Details of set memory usage overhead (pointers) 14,800 14.45% data 37,000 36.14% data + overhead 51,800 50.59% free 50,592 49.41% Total allocated 102,392 100.00% Other memory usage set maxvar usage 2,041,738 set matsize usage 1,315,200 programs, saved results, etc. 45,523 Total 3,402,461 Grand total 3,504,853 . reshape long r p, i(id) (note: j = 1 2 3 4 5 6) no room to add more observations (sorry for mentioning this taboo topic on public, have you ever estimated how much memory does an XYZ command require? Even Stata people guesstimate here). So Ashim, don't get discouraged and write your own code. It might work better for your particular case. One last advice is that sometimes the problem can be resolved by a radically different solution. E.g. if your variables are, say age of children for a particular mother in a [e.g. DHS] survey, you might find a dataset already in the long-format and start working with it. The dataset description will let you know what is the ordering there. In particular, data may be ordered with extra (not available to you) information which may be lost during sorting. E.g.: the codebook may say that children are ordered by age: 6, 8, 8, 12. You might not know whether 2nd and 3rd children are twins, or were born in the same year, but the data providers might very well do! Changing their places (remember that Stata does not guarantee sort stability for observations, unless special option is used, and I am not sure how this works for sorting rows) will change their birth ranks - quite an important characteristic, if we believe a large body of literature on the topic. Hope this helps. Anyways, I don't immediately see how -sortrows- or -rowssort- will deal with this situatuion: motherid child_id1 child_id2 child_id3 age1 age2 age3 gender1 gender2 gender3 and I might want to sort children by age keeping their ordering stable and moving their (string) ids and (numeric) gender dummies in sync. Regards, Sergiy Radyakin On 8/26/08, Nick Cox <n.j.cox@durham.ac.uk> wrote: > This generated various suggestions. > My main thought is that you should never have to write your own sort > programs, bubble sort or other, in Stata. > Even sorting rows can make use of Stata's own sort functionality -- as > in a combination of -reshape- and -sort-, as mentioned by Maarten Buis > -- or as in -rowsort or -sortrows-, as mentioned in part by Martin > Weiss. > -rowsort- is on SSC and was written for Stata 8. > -sortrows- is also on SSC and was written for Stata 9. It makes use of > Mata and in large part supersedes -rowsort-, so long as you have Stata 9 > at least. > Nick > n.j.cox@durham.ac.uk > Ashim Kapoor > I have a simple code to write. > I have 6 variables v1, v2 , v3, v4 ,v5 , v6. > I want to sort "according to each observation". That is , in the end > for each i I want , v1[i] < = v2[i] <=... v6[i]. > So I wrote a simple bubble for each row and then loop from 1 to _N. > Now the problem is that this works but it's VERY VERY slow. Any > comments anyone ? > * > * For searches and help try: > * http://www.stata.com/help.cgi?search > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-08/msg01149.html","timestamp":"2014-04-17T09:47:13Z","content_type":null,"content_length":"9634","record_id":"<urn:uuid:2e3585c7-f0d4-4dda-8b40-09ddf5fe0365>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
try to get X Re: try to get X And that's a quartic equation. They can be solved, but the method is very complex. I think I've probably taken the wrong route. That or the quartic can be factorised into something nicer. Why did the vector cross the road? It wanted to be normal.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=25927","timestamp":"2014-04-18T23:44:12Z","content_type":null,"content_length":"15977","record_id":"<urn:uuid:d17cd00e-f41c-41e5-ac10-c1a423047477>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Integrals This unit introduces the concept of the integral and presents techniques and applications of integration. The integral is presented in two ways. First, the indefinite integral of f is introduced as the antiderivative, which is a function F whose derivative is f . Next, the definite integral is introduced as a representation of the exact area under the graph of a function. Those areas are first approximated by rectangle-based partition methods known as Riemann sums. Then, an exact method for calculating these areas is given by the fundamental theorem of calculus, which links together the concepts of integration and differentiation. The unit concludes with a discussion of integration by substitution and the second fundamental theorem of calculus.
{"url":"http://www.sparknotes.com/math/calcab/introductiontointegrals/summary.html","timestamp":"2014-04-17T18:49:40Z","content_type":null,"content_length":"50800","record_id":"<urn:uuid:00680c98-c58b-465a-8ac8-f81e790963b0>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
The Sun and Solar Activity Solar Escape Velocity On the surface of any body in space (such as a planet or star), there is a certain minimum speed that must be given to an object, which when directed vertically upwards will allow the object to escape the gravitational pull of that body, and for the Earth it has a value of 11 km/sec. The sun, being much more massive than the earth, has a much larger escape velocity of around 600 km/sec. Even so, activity on the sun is often so violent that this velocity is exceeded and clouds of material are ejected into interplanetary space. The exact speed required for solar plasma to reach a planet in the solar system depends on the distance of the planet from the sun, and the height in the atmosphere from which the ejection takes place. This is given by the formula: v = sqrt { 2 G M ( 1/r1 - 1/r2 ) } where v is in units of meters per second; M is the mass of the sun (1.991x10e30 Kg) and G is the Universal Gravitation constant (6.67x10e(-11) in these units). Escape velocities from various heights are given in the table below. Heights are given in terms of the radius of the sun (denoted Rs). When observing mass ejections from the sun, there is a very simple rule - escape velocity has been achieved if the material moves 0.1 solar radius (70,000 km) in less than 2 minutes. Material prepared by John Kennewell
{"url":"http://www.ips.gov.au/Educational/2/1/4","timestamp":"2014-04-16T13:25:58Z","content_type":null,"content_length":"12831","record_id":"<urn:uuid:9154d104-b30f-4995-85dc-b0141af05db3>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
On the irregular behavior of LS estimators for Asymptotically singular designs Andrej Pazman and Luc Pronzato Statistics and Probability Letters 2005. Optimum design theory sometimes yields singular designs. An example with a linear regression model often mentioned in the literature is used to illustrate the difficulties induced by such designs. The estimation of the model parameters $\mt$, or of a function of interest $h(\mt)$, may be impossible with the singular design $\xi^*$. Depending on how $\xi^*$ is approached by the empirical measure $\xi^n$ of the design points, with $n$ the number of observations, consistency is achieved but the speed of convergence may depend on $\xi^n$ and on the value of $\mt$. Even in situations where convergence is in $1/\sqrt{n}$ and the asymptotic distribution of the estimator of $\mt$ or $h(\mt)$ is normal, the asymptotic variance may still differ from that obtained from $\xi^*$. PDF - Requires Adobe Acrobat Reader or other PDF viewer.
{"url":"http://eprints.pascal-network.org/archive/00001030/","timestamp":"2014-04-19T14:40:03Z","content_type":null,"content_length":"6877","record_id":"<urn:uuid:0c94977a-0d46-4651-94bf-bb0dde502168>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
IACR News 09:17 [Pub][ePrint] Non-Malleable Codes from Two-Source Extractors, by Stefan Dziembowski and Tomasz Kazana and Maciej Obremski We construct an efficient information-theoretically non-mall\\-eable code in the split-state model for one-bit messages. Non-malleable codes were introduced recently by Dziembowski, Pietrzak and Wichs (ICS 2010), as a general tool for storing messages securely on hardware that can be subject to tampering attacks. Informally, a code $(Enc : \\cal M \\rightarrow \\cal L \\times \\cal R, Dec : \\cal L \\times \\cal R \\rightarrow \\cal M)$ is {\\em non-malleable in the split-state model} if any adversary, by manipulating {\\em independently} $L$ and $R$ (where $(L,R)$ is an encoding of some message $M$), cannot obtain an encoding of a message $M\'$ that is not equal to $M$ but is ``related\'\' $M$ in some way. Until now it was unknown how to construct an information-theoretically secure code with such a property, even for $\\cal M = \\{0,1\\}$. Our construction solves this problem. Additionally, it is leakage-resilient, and the amount of leakage that we can tolerate can be an arbitrary fraction $\\xi < {1}/{4}$ of the length of the codeword. Our code is based on the inner-product two-source extractor, but in general it can be instantiated by any two-source extractor that has large output and has the property of being {\\em flexible}, which is a new notion that we define. We also show that the non-malleable codes for one-bit messages have an equivalent, perhaps simpler characterization, namely such codes can be defined as follows: if $M$ is chosen uniformly from $\\ {0,1\\}$ then the probability (in the experiment described above) that the output message $M\'$ is not equal to $M$ can be at most $1/2 + \\epsilon$.
{"url":"https://www.iacr.org/news/index.php?p=detail&id=2703","timestamp":"2014-04-21T09:37:24Z","content_type":null,"content_length":"24028","record_id":"<urn:uuid:3c2f0b85-924f-46ea-9760-54a87e06d091>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
High Speed Downloads Speed Health essential university physics richard wolfson solution manual - High Speed Download 4278 Kb/s essential university physics richard wolfson solution manual [ Direct Download ] 4278 Kb/s Instant Access! - essential university physics richard wolfson solution manual 413 Kb/s essential university physics richard wolfson solution manual [ Usenet Download ] 380 Kb/s essential university physics richard wolfson solution manual Full Download 413 Kb/s essential university physics richard wolfson solution manual search, download with torrent files free full cracked downloads. essential university physics richard wolfson solution manual torrent download, essential university physics richard wolfson solution manual free download with pass keys. essential university physics richard wolfson solution manual rar and zip archives files pass. Search results 50 articles (Search results 1 - 20) :
{"url":"http://www.linexdown.net/e/essential+university+physics+richard+wolfson+solution+manual","timestamp":"2014-04-20T06:17:49Z","content_type":null,"content_length":"85081","record_id":"<urn:uuid:4e6b8e5d-1658-4a00-9cd7-4fa546684c0b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
SAT grid ins practice test 10 1. A time lapse camera takes pictures once every 40 seconds. How many pictures does it take in a 24 hour period? (Assume that it takes its first picture 40 seconds after the start of the time 2. Triangle ABC is equilateral. What is the degree measure of angle y ? (Ignore the degree sign when gridding your answer) 3. If a sack of dried dog food feeds 4 dogs or 5 puppies for one week, then 5 sacks of the food will feed 15 puppies and how many dogs ? 4. The sum of three numbers is 6. Each number is increased by 20 and the new numbers are multiplied by 10. What is the sum of the resulting numbers? 5. What is the largest odd-numbered factor of 4500 ? 6. Points A and B are on the top and bottom edges of a cylindrical roll of paper of height 8 and circumference 12. A and B are diagonally opposite each other. The paper is cut along line C and opened out. How far apart are A and B on the flat surface? 7. 2 cars travel from the same point along parallel lanes of a highway for a distance of 10 miles. When car M, travelling at 60 miles an hour reaches the end of the distance, how much further will car N have to travel if it is travelling at 48 miles an hour? 8. ♣ ♥ ¥ ♠ ¤ How many different 3-symbol arrangements of the symbols above are possible if the symbol ¤ must be in the last position, and the symbol ♣ can be used in only one arrangement. The other symbols can be used more than once in an arrangement. 9. What one value for x can be correctly entered into the answer grid? 10. Family 1 comprising mother, father and son are to be seated at a table with family 2 comprising mother, father and daughter. The layout of the table is shown in the diagram. F represents one of the fathers and M represents one of the mothers. X represents any family member. If a male family member must sit opposite a female of the other family, how many different seating plans are possible?
{"url":"http://majortests.com/sat/grid-ins-test10","timestamp":"2014-04-21T04:32:33Z","content_type":null,"content_length":"13005","record_id":"<urn:uuid:029d4552-70fc-4a64-b3ed-d08cf05d3e91>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
The random vortexes model and the 2D Navier-Stokes equation Seminar Room 1, Newton Institute In two classical papers E.Caglioti, P.L.Lions, C.Marchioro, M.Pulvirenti have introduced the so called "mean-field stationary solution" to the 2D Euler equation, inspired by the vortexes model. Similarly, the random vortexes model provides an approximation of the 2D Navier-Stokes equation. By studying asymptotic features of such a model, we try to discuss the behavior of the 2D Navier-Stokes equation and the relevance of the mean-field stationary solution.
{"url":"http://www.newton.ac.uk/programmes/SPD/seminars/2010051216001.html","timestamp":"2014-04-19T13:09:06Z","content_type":null,"content_length":"3957","record_id":"<urn:uuid:da8e9407-f0d5-4656-80f8-e5075cf34a9c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
[Python-Dev] Why is nan != nan? Mark Dickinson dickinsm at gmail.com Thu Mar 25 00:31:57 CET 2010 On Wed, Mar 24, 2010 at 11:11 PM, Alexander Belopolsky <alexander.belopolsky at gmail.com> wrote: > On Wed, Mar 24, 2010 at 7:02 PM, Mark Dickinson <dickinsm at gmail.com> wrote: > .. >> So if I understand correctly, you propose that float('nan') == >> float('nan') return True. Would you also suggest extending this >> behaviour to Decimal nans? > yes Okay. So now it seems to me that there are many decisions to make: should any Decimal nan compare equal to any other? What if the two nans have different payloads or signs? How about comparing a signaling nan with either an arbitrary quiet nan, or with the exact quiet nan that corresponds to the signaling nan? How do decimal nans compare with float nans? Should Decimal.compare(Decimal('nan'), Decimal('nan')) return 0 rather than nan? If not, how do you justify the difference between == and compare? If so, how do you justify the deviation from the standard on which the decimal modulo is based? In answering all these questions, you effectively end up developing your own standard, and hoping that all the answers you chose are sensible, consistent, and useful. Alternatively, we could do what we're currently doing: make use of *existing* standards to answer these questions, and rely on the expertise of the many who've thought about this in depth. You say that you don't see any compromise: I say that there's value in adhering to (de facto and de jure) standards, and I see a compromise between standards adherence and Python pragmatics. More information about the Python-Dev mailing list
{"url":"https://mail.python.org/pipermail/python-dev/2010-March/098862.html","timestamp":"2014-04-21T14:03:46Z","content_type":null,"content_length":"4334","record_id":"<urn:uuid:4ff43b50-f668-4c2f-bdcd-d40a8b65c966>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
February 7th 2009, 10:14 AM I'm really lost with these substitution problems everytime I try to solve it it comes out with a weird number and doesn't work when double checking. Thanks for your help! Also is there anyway to do this on a graphing calculator? February 7th 2009, 10:25 AM The first step is finding a variable in one of the equations to solve for. I like how the x variable in your second equation has no coefficient, so I think I will solve for it: $x=-4y-4$ *** Now that I have x in terms of y, I'm going to plug that x value into the OTHER equation: $3( -4y-4 )+5y=2$ So now we have an equation with just one variable. This should be easy to solve if we're careful with distribution and minus signs: $3(-4y-4 )+5y=2$ $-12y -12 +5y=2$ $-7y -12 =2$ $-7y =14$ $y =-2$ Alrighty, we have a y value. Now we want the x value. If only we had x written down in terms of y someplace .. oh hey, peek up at the *** above: So our solution may be (4, -2). Let's check just to make certain we're correct: We've got it. And more importantly, we know we are right because of our check. February 7th 2009, 11:33 AM Thanks so much for your help! For step one how did you get -4y-4? Nevermind this question, I figured it out! Thanks so much for your help still! I also have another question if you don't mind how do I solve for this equation y=-3x+12 and y=-7x-36 they both have Ys and I don't know how to do with them February 7th 2009, 12:06 PM RE: For Step 1 How Did You Get -4y-4 It actually should be x=-4-4y. Take 4 y and subtract it. February 7th 2009, 02:00 PM I'm really lost with these substitution problems everytime I try to solve it it comes out with a weird number and doesn't work when double checking. Thanks for your help! Also is there anyway to do this on a graphing calculator? Linear systems with variables isolated(no co-efficient infront) should be solved with the substitution method rather then graphing or the elimination method. you have isolated y in both equations, this you can substitute it into one equation to solve for a variable(in this case x) $<br /> -7x+3x=12+36$ $\boxed{<br /> x=-12}$ Now that you have the value of x you can substitute it into one of the original equations and solve for the other variable (in this case y) To check substitute the numbers back into the equation and see if the LHS and RHS are equal to each other. You can see now that this is the best way to solve all of the systems of equations you have there!
{"url":"http://mathhelpforum.com/algebra/72316-substitution-print.html","timestamp":"2014-04-18T19:30:08Z","content_type":null,"content_length":"13556","record_id":"<urn:uuid:9905e698-e692-4c82-9f41-1c63b7a7e4bb>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: October 2003 [00312] [Date Index] [Thread Index] [Author Index] Re: What Happens to Garbage in Mathematica? • To: mathgroup at smc.vnet.net • Subject: [mg44033] Re: What Happens to Garbage in Mathematica? • From: Olaf Rogalsky <olaf.rogalsky at theorie1.physik.uni-erlangen.de> • Date: Sat, 18 Oct 2003 03:12:33 -0400 (EDT) • References: <bm84s2$fug$1@smc.vnet.net> <bmg0nf$e9t$1@smc.vnet.net> <bmj2j5$pqs$1@smc.vnet.net> • Sender: owner-wri-mathgroup at wolfram.com Disclaimer: The following statements of mine don't realy express wisdom, but my partial piece of knowledge, which is an other word for opinion. If anyone of you has deeper insight, I would be delighted if you can enlighten me, too. "Steven T. Hatton" wrote: > [...] Currently, the only way I know to change > the location of a primitive is to create a new one, and assign it to the > original variable. Yes, that is the way how functional programming languages work. In pure functional languages one can't assign a new value to an existing variable. Here, the term "variable" stands for: a specific place, where to store values. But one can have several names (sequence of characters) for a given variable. The connection between the name and the corresponding variable is called binding. E.g., if one evaluates "i=1" in a new mathematica session, a new binding between the name "i" and a newly created variable, which holds the value "1", will be established. If one then evaluates "i=2", a new binding between "i" and an *other*, *new* variable, which holds the value "2" will be Note 1: Out[1] still references the first variable, and therefore the allocated memory for the first variable is not freed. Note 2: Most functional languages are not puristic, which means, that they have some way to reassign a variable. But I don't know, if Mathematica has. Note 3: In garbage collected languages, memory allocation is very cheap, and even when taking the price for garbage collection into account, they most often outperform other memory management systems. Mathematica is reference counted, which has a not so good performance reputation today. > I'm still learning the syntax of Mathematica, so the following snippet is a > bit hard to understnad. It's from the following file: Mathematica's syntax is a pain in the ass, but I try to explain: > RotateShape[ shape_, phi_, theta_, psi_ ] := Defines a new function > Block[{rotmat = RotationMatrix3D[N[phi], N[theta], N[psi]]}, with a local binding for rotmat to the rotation matrix, > shape /. { which replaces all subexpressions in "shape", which match any of the following > poly:Polygon[_] :> Map[(rotmat . #)&, poly, {2}], This is the first rule, constisting of "pattern :> replacement". The pattern matches any occurance of Polygon[<anything>], and gives this subexpression the name "poly". Since the only valid agrument to the Polygon function is a list of coordinates, for every math "poly" will be (temporary) bound to Polygon[<list of coordinates>]. The "&" in the expression "(rotmat . #)&" denotes, that the preceding expression defines an unnamed function (a function, which has no name bound to it). In the definition of that function, the "#" denotes the argument of the function. Since "." denotes the matrix/vector product, the given function rotates any input vector by the given Euler angles. The "Map" function applies this function to any coordinate of "poly" in turn and returns a new Polygon[_] object, where the coordinates are replaced by the rotated ones. > line:Line[_] :> Map[(rotmat . #)&, line, {2}], > point:Point[_] :> Map[(rotmat . #)&, point,{1}] } > ] Two other, similar rules. > Obviously it is something of a "switch" which branches to the appropriate Not quite a switch, but similar. > function (transformation rule?). It pulls the points out of the shape and > transforms them using the rotation matrix. It would seem it places the > results in a list with a head of the same type as the shape. What it > clearly doesn't do is modify the 'members' or elements of the original Yes, it returns a new copy. Again, consider the following code snipped: "a=1; b=a; a=2". The first "=" does not assign "a" to the value "1", but bounds "a" to a newly created variable which holds "1". The second "=" bounds "b" to the same (identical) variable as "a". The third "=" establishes a new binding for "a" to a new variable, which holds "2". The old variable of "a" is unchanged and "b" still is bound to that variable. > It would seem that if I were to assign the return value of RotateShape[] to > lna like this: lna = RotateShape[ln, 2, 2, 2], I would create a dirty chunk > of memory which will require housekeeping before it can be reused. I don't understand you here. > I've learned that Java3D tends to avoid allocating new memory in ways > [...] > be cheaper than applying the matrix multiplication on each point. I have nothing new to say :-). > I guess the essence of my original question is: how expensive is this > garbage production and recycling? There is an overhead to this, but don't worry too much. Mathematica is an interpreted language, which does a lot of symbolic manipulations. E.g. in your problem, the pattern matching propably is much more expensive than the memory management. Olaf Rogalsky I Dr. rer. nat. Olaf Rogalsky Institut fuer Theoretische Physik I I Universitaet Erlangen-Nuernberg I I Tel.: 09131 8528440 Staudtstr. 7 B3 I I Fax.: 09131 8528444 D-91058 Erlangen I | rogalsky at theorie1.physik.uni-erlangen.de I
{"url":"http://forums.wolfram.com/mathgroup/archive/2003/Oct/msg00312.html","timestamp":"2014-04-17T21:26:08Z","content_type":null,"content_length":"40418","record_id":"<urn:uuid:c93a002c-3e72-45ee-8540-2a9fa10e559c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
how ~ operator works? Author how ~ operator works? Ranch Hand Joined: Mar 13, package Samples; 2012 public class Byte1 Posts: 56 { public static void main(String[] a) byte x = 3; x = (byte)~x; I expected the output to be 12, but the output is -4. I know the operator ~ provides the 1's complement. But I could not interpret the output. Can anyone help? Sheriff Ilakya Mukunth wrote:Hi, package Samples; Joined: Sep 28, public class Byte1 2004 { Posts: 18118 public static void main(String[] a) 39 byte x = 3; x = (byte)~x; I like... } I expected the output to be 12, but the output is -4. I know the operator ~ provides the 1's complement. But I could not interpret the output. Can anyone help? Can you explain to us why you expected the output to be 12? Books: Java Threads, 3rd Edition, Jini in a Nutshell, and Java Gems (contributor) Ranch Hand Joined: Mar 13, 2012 The bit pattern for 3 is 0011. The 1;s complement is 1100. The decimal value of this is 12. Posts: 56 Ilakya Mukunth wrote:The bit pattern for 3 is 0011. The 1;s complement is 1100. The decimal value of this is 12. Joined: Sep 28, Posts: 18118 39 First, a byte is 8 bits long -- not four bits. Second, Java uses twos complement to store numbers -- which I have to ask, do you know how to interpret twos complement numbers? I like... Ranch Hand Joined: Mar 13, I do not know java uses 2's complement to store the number. :-( 2012 I know how to find 2's comp for a given number. (Add 1 to LSB to the 1's complement of the given number) Posts: 56 Can you please explain how the ~ operator works? I would be great help. Sheriff Ilakya Mukunth wrote:I do not know java uses 2's complement to store the number. :-( I know how to find 2's comp for a given number. (Add 1 to LSB to the 1's complement of the given number) Joined: Sep 28, Can you please explain how the ~ operator works? I would be great help. Posts: 18118 39 Well, you explained everything already. And you are correct in that regard, so I don't know what you are confused with. First, as you already mentioned, the ~ operator negates all the bits (also known as 1's complement). So.... I like... 3 = 0000 0011 which means ~3 = 1111 1100 The resultant byte is a negative number as the sign bit is a one. There is actually a formula to see what this value is, but the easiest way is to take the twos complement of it to see what the positive version of this number is... And, as you already mentioned, to do that, negate and add one. So... X = 1111 1100 Negate X = 0000 0011 Negate X plus one = -X = 0000 0100 = 4 And since negative X equals four, X (the pre twos complement number) must have been -4. Sheriff Henry Wong wrote: There is actually a formula to see what this value is Joined: Sep 28, Posts: 18118 BTW, just in case you are interested in the formula, the bit values are, from LSB to MSB of the byte, are ... 1, 2, 4, 8, 16, 32, 64, and -128. So... 1111 1100 = 4 + 8 + 16 + 32 + 64 - 128 = -4 I like... Henry Ranch Hand Henry Wong wrote: Joined: Mar 13, 2012 Henry Wong wrote: Posts: 56 There is actually a formula to see what this value is BTW, just in case you are interested in the formula, the bit values are, from LSB to MSB of the byte, are ... 1, 2, 4, 8, 16, 32, 64, and -128. So... 1111 1100 = 4 + 8 + 16 + 32 + 64 - 128 = -4 Thank you very much for your time and patience. Thanks a lot. Have a great day. Joined: May 26, This would probably be a good time to make sure you are aware that ~ is not on the exam. Bit operations were removed from the SCJP. Posts: 29253 [Blog] [JavaRanch FAQ] [How To Ask Questions The Smart Way] [Book Promos] I like... Blogging on Certs: SCEA Part 1, Part 2 & 3, Core Spring 3, OCAJP, OCPJP beta, TOGAF part 1 and part 2 Ranch Hand Jeanne Boyarsky wrote:This would probably be a good time to make sure you are aware that ~ is not on the exam. Bit operations were removed from the SCJP. Joined: Mar 13, Posts: 56 Thanks for the info. Yes I knew its not there in OCPJP. But just curious to know Ranch Hand Jeanne Boyarsky wrote:This would probably be a good time to make sure you are aware that ~ is not on the exam. Bit operations were removed from the SCJP. Joined: Mar 13, Posts: 56 I believe even the shift operators are removed from the OCPJP. is it so? subject: how ~ operator works?
{"url":"http://www.coderanch.com/t/594661/java-programmer-SCJP/certification/operator-works","timestamp":"2014-04-20T16:36:44Z","content_type":null,"content_length":"45288","record_id":"<urn:uuid:d95e18ab-c0ed-4d70-a172-be64c02d5391>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Concentration bound for weakly dependent random variables up vote 1 down vote favorite Suppose we observe a sequence $R_1, ..., R_T$ of iid. random variables that equal $0$ with probability $p$ and with probability $1-p$ are sampled from a distribution with expected value $E(R) > 0$. Given $t \leq T$, let $X_t$ denote the mean of the $R_1, ..., R_t$ that were sampled from the distribution. What can we say about the convergence of $\sum_{t=1}^T X_t$ around its mean $T E(R)$? I would like to obtain some kind of Chernoff-Heoffding bound, but the variables $X_t$ are not independent. However, $|X_t - X_{t-1}| < O(1/S(t))$, where $s(t)$ is the number of random variables that were sampled from the distribution at time $t$. Also, note that a variable $X_t$ is independent of $X_{t-2},...,X_1$ given $X_{t-1}$. Are there any tools out there that can be used for this problem? Also, if the above problem can be solved, I would like to obtain an analogous bound on $\sum_{t=1}^T 1/(X_t)^2$ (assuming that $P(X_t = 0) = 0$). Thank you in advance! pr.probability stochastic-processes real-analysis 1 The parameter $p$ seems completely redundant in your current setting. Also, do you assume a bound on the $R_i$? In any case, try searching for "Azuma's inequality" and "concentration of measure". – Ori Gurel-Gurevich Feb 24 '12 at 22:58 It's not clear to me that $p$ is redundant, because if an $R_t$ is not "activated" with probability $p$, then it is not counted in $X_t$. Also, I don't see how Azuma's inequality would apply, because the $X_t$ are not a martingale (as far as I can tell). – Woland Feb 25 '12 at 3:40 1 Maybe I misunderstood your question. Can you give precise definition of $X_t$? is it $R_1+,\ldots,+R_t$ divided by the number of nonzeros among them? – Ori Gurel-Gurevich Feb 25 '12 at 6:39 @Ori: Why can't it be $(R_1+\cdots+R_t)/t$ as Volodymyr says? – Brendan McKay Feb 25 '12 at 7:52 1 Volodymyr, is $R$ bounded? Otherwise, where does your bound on $|X_t-X_{t-1}|$ come from? Also, are $s(t)$ and $S(t)$ the same? – Brendan McKay Feb 26 '12 at 23:02 show 7 more comments 1 Answer active oldest votes $\sum_{t=1}^T X_t$ is the sum of $t$ independent random variables, for example $\sum_{t=1}^4 X_t = \frac{25}{12}R_1 + \frac{13}{12}R_2 + \frac{7}{12}R_3 + \frac{1}{4}R_4$. To get a Hoeffding-type tail estimate you might need information about the tails of $R$. Similarly for a Berry-Esseen bound. I don't understand your comment about $s(t)$ and $S(t)$ at all. up vote 1 down vote With the OP's clarifications of the question (above), this answer is obsolete so please discard it. I don't understand your comment very well, but that's probably because my question wasn't clear in the first place. I hope my comments above made it more clear... – Woland Feb 27 '12 at add comment Not the answer you're looking for? Browse other questions tagged pr.probability stochastic-processes real-analysis or ask your own question.
{"url":"http://mathoverflow.net/questions/89450/concentration-bound-for-weakly-dependent-random-variables","timestamp":"2014-04-17T18:39:07Z","content_type":null,"content_length":"59136","record_id":"<urn:uuid:89877ac1-e4e2-40e8-9175-8c02256171a4>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic Fraction July 3rd 2013, 10:07 AM #1 Junior Member Jun 2013 Algebraic Fraction I would appreciate your help answering the following question. If Y = 3x+2/5-x, express x in terms of y. Thank you. Re: Algebraic Fraction Please clarify: do yuo mean $Y = 3x + \frac 2 5 -x$ (which is what you wrote), or do you mean $Y= \frac {3x+2}{5-x}$ ? Or perhaps $Y=\frac {3x + 2} 5 - x$? Or perhaps since you posted your question in the LaTeX help section you're looking for asistance in writing the equation as I did, using LaTeX? Re: Algebraic Fraction Y = 3X + 2 divided by 5 - x as per the second option in your post. Apologies for not using latex. Yes, I would like help, if possible, in using latex. I am using a mac - if that's relevant. Thanks for your help. Re: Algebraic Fraction The platform you use is irrelevant. LaTeX is a function of the webpage. [TEX] \frac{3x+2}{5-x} [/TEX] gives $\frac{3x+2}{5-x}$. This subforum will help you with the code. Once you begin, you quickly learn the code. If you click on the “go advanced tab” you should see $\boxed{\Sigma}$ on the tool-bar. That gives the [TEX]..[/TEX] wrap. Your LaTeX code goes between them. Here are more examples: [TEX](5+2\sqrt{6})^{x^2-3}+(5-2\sqrt{6})^{x^2-3} [/TEX] gives $(5+2\sqrt{6})^{x^2-3}+(5-2\sqrt{6})^{x^2-3}$ Re: Algebraic Fraction As for your problem, to solve y= (3x+2)/(5- x) for x, start by multiplying both sides by 5- x. Re: Algebraic Fraction Ho capito. Thanks a lot. Re: Algebraic Fraction Thanks a lot. July 3rd 2013, 10:59 AM #2 July 4th 2013, 12:45 PM #3 Junior Member Jun 2013 July 4th 2013, 01:08 PM #4 July 4th 2013, 02:39 PM #5 MHF Contributor Apr 2005 July 8th 2013, 06:14 AM #6 Junior Member Jun 2013 July 8th 2013, 06:15 AM #7 Junior Member Jun 2013
{"url":"http://mathhelpforum.com/latex-help/220338-algebraic-fraction.html","timestamp":"2014-04-18T20:48:43Z","content_type":null,"content_length":"47208","record_id":"<urn:uuid:780a7ecb-a57d-49f9-ab8a-4a63f247cd6f>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
[ lucid ] Package: r-base-core-ra (1.2.8-4) [universe] Links for r-base-core-ra Ubuntu Resources: Download Source Package r-base-core-ra: Please consider filing a bug or asking a question via Launchpad before contacting the maintainer directly. Original Maintainer (usually from Debian): It should generally not be necessary for users to contact the original maintainer. External Resources: Similar packages: 'ra' variant of GNU R core of statistical computing language and environment R is a system for statistical computation and graphics. It consists of a language plus a run-time environment with graphics, a debugger, access to certain system functions, and the ability to run programs stored in script files. The design of R has been heavily influenced by two existing languages: Becker, Chambers & Wilks' S and Sussman's Scheme. Whereas the resulting language is very similar in appearance to S, the underlying implementation and semantics are derived from Scheme. The core of R is an interpreted computer language which allows branching and looping as well as modular programming using functions. Most of the user-visible functions in R are written in R. It is possible for the user to interface to procedures written in the C, C++, or FORTRAN languages for efficiency, and many of R's core functions do so. The R distribution contains functionality for a large number of statistical procedures and underlying applied math computations. There is also a large set of functions which provide a flexible graphical environment for creating various kinds of data presentations. Additionally, over thousand extension "packages" are available from CRAN, the Comprehensive R Archive Network, many also as Debian packages, named 'r-cran-<name>'. This package provides a patched version of GNU R with 'just-in-time' compilation of R code using the 'jit' package (in r-cran-jit). See http://www.milbo.users.sonic.net/ra for details. Other Packages Related to r-base-core-ra • depends • recommends • suggests rec: r-base-core GNU R core of statistical computation and graphics system rec: r-base-dev GNU R installation of auxiliary GNU R packages rec: r-cran-jit GNU R package just-in-time compilation support
{"url":"http://packages.ubuntu.com/lucid/r-base-core-ra","timestamp":"2014-04-19T14:45:00Z","content_type":null,"content_length":"22261","record_id":"<urn:uuid:2d495aa7-809b-480e-8ae0-58ad514ceb35>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Decide convergence/divergence from n=1 to infinity (2^n+1)/(3^n-1) Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f2da956e4b0571e9cba8be9","timestamp":"2014-04-19T07:19:50Z","content_type":null,"content_length":"48949","record_id":"<urn:uuid:33201a4a-81e2-46de-ae91-92bbc9aba713>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Stock Technical Analysis: Technical Anaylsis Tutorial: Candlestick, Fibonacci and Books Fibonacci Candlestick Charting Books Fibonacci Numbers Use in Trading Trading Strategies Links (Elliot Wave Tutorials) Fibonacci numbers are the result of work by Leonardo Fibonacci in the early 1200's while studying the Great Pyramid of Gizeh. The fibonacci series is a numerical sequence comprised of adding the previous numbers together, i.e., (1,2,3,5,8,13,21,34,55,89,144,233 etc..) An interesting property of these numbers is that as the series proceeds, any given number is 1.618 times the preceding number and 0.618% of the next number. (34/55 = 55/89 = 144/233 =0.618) (55/34 =89/55 =233/144 =1.618), and 1.618 =1/0.618. This properties of the fibonacci series occur throughout nature, science and math and is the number 0.618 is often referred to as the "golden ratio" as it is the root of the following polynomial x^ 2+x-1=0 which can be rearranged to x= 1/(1+x). So that's were the fib # 0.618 comes from. The other fibs 0.382 and 0.5 commonly used in technical analysis have a less impressive background but are just as powerful in Technical analysis. and 0.5 is the mean of the two numbers. Other neat fib facts (0.618*(1+0.618)=1 and (0.382*(1+.618))=0.618. Use of Fibonacci #'s in Technical Analysis Fibonacci numbers are commonly used in Technical Analysis with or without a knowledge of Elliot wave analysis to determine potential support, resistance, and price objectives. 38.2% retracements usually imply that the prior trend will continue, 61.8% retracements imply a new trend is establishing itself. A 50% retracement implies indecision. 38.2% retracements are considered nautral retracements in a healthy trend. Price objectives for a natural retracement (38.2%) can be determined by adding (or subtracting in a downtrend) the magnitude of the previous trend to the 38.2% retracement. After the 38.2% retracement the stock should break through the previous swing point(B) on heavier volume. If the volume isn't there the magnitude of the move will usually be diminished, especially on very low A-B =C-D when B-C =38.2% of A-B 61.8% retracements are warning signs of a potential trend changes. For a more detailed explanation of Fibonacci price projections and price wave theory I highly recommend the Elliot Wave Principle links below. Confluence Confluence occurs when you take fibonacci projections off of multiple trends and get the same number and strengthens when it corresponds with other technical advents such as gaps, swing high/lows, chart indicators crossovers (MACD, RSI, Stochastics, etc.), trading congestion, etc. The more confluence, the more significant the level. I really take notice when I get two or more fib #s (say a 38.2% and 61.8%) to correspond with a gap in the chart or a swing high. Confluence is very powerful as it combines multiple technical analysis techniques to arrive at the same conclusion, and should be relied on accordingly IMHO Trading Strageties&nbsp JMHO Once a new swing point is established in an equity, a new set of fibonacci numbers should be calculated, and confluence checked to determine potential support/resistance levels and trading strategies. (let the Fibonacci Calculator do most of the work for you). For instance: If a stock is trending up, one may watch it until it forms a top then calculate the fibs. If she retraces 38.2% and turns with confluence, one could bite with an automatic stop under the 50% retracement and objective of the ABC. The Risk/Reward ratio for that trade is 0.118. (If you got stopped out 8 times and hit once you would have a 5.6% profit). If she's trending down, you could bite at the 38.2% bounce with a stop at the 50% and get the same risk/reward ratio. With both strategies it is critical for the volume to be heavier on the swing point breakout. If a position is going with you and you're looking for an exit point, calculate the 38.2% fib once a top is clear and put a stop below it. Won't get you out at the top but you may not miss that monster rally either. Think a stock is a dog but it's trading at it's high wait for a 61.8% retracement from the last trend and sell it, with the stop below the 50% retracement. Recommended Reading Suggestions/Comments/Bugs/Questions? Click here! Market Data Provided by StockWiz ©1999-2005 StockTA.com. All Rights Reserved.
{"url":"http://www.stockta.com/cgi-bin/school.pl","timestamp":"2014-04-20T18:22:41Z","content_type":null,"content_length":"22142","record_id":"<urn:uuid:641f3562-09fd-4ab2-98bf-ea23f2b5f5eb>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Washington Street, NJ Geometry Tutor Find a Washington Street, NJ Geometry Tutor ...I am currently a graduate student at Cornell University as a physics PhD student. However, I took a semester off, so I am located in the Lehigh Valley area. While at Kutztown, I spent one year as a Supplemental Instructor and one-on-one tutor for introductory physics classes. 16 Subjects: including geometry, chemistry, GRE, physics ...I have been teaching in the middle school classroom for the past two years and work with students of all ability levels. I taught CPR, First Aid, First Responder, Lifeguard Training, and Swim Lessons for nine years. I focus on identifying the way individual students learn and providing them with new ways to understand the information to increase their success. 19 Subjects: including geometry, biology, algebra 1, algebra 2 ...I have prior tutoring experience in middle school math and high school math, and I have taught at the college level and in business. I am a Certified Teacher in K-12 Mathematics. My goal is to make a difference, to help students understand the basic match concepts, and to explain how math is used in everyday life and business. 8 Subjects: including geometry, algebra 1, elementary math, Microsoft Excel ...My goal is for the student to have mastered all the concepts of Algebra 1 before moving on to Algebra 2. Algebra 2 builds upon the foundation of Algebra 1 and requires even more rigor because problems become multi-step. My goal is to explain the material in detail and then provide lots of practice problems to solidify the student's knowledge. 15 Subjects: including geometry, calculus, GRE, algebra 1 ...During the school day my duties are assisting and assessing students in all subjects. I have learned a ton this year about teaching strategies and managing the classroom. My passion is helping children learn. 31 Subjects: including geometry, English, reading, writing Related Washington Street, NJ Tutors Washington Street, NJ Accounting Tutors Washington Street, NJ ACT Tutors Washington Street, NJ Algebra Tutors Washington Street, NJ Algebra 2 Tutors Washington Street, NJ Calculus Tutors Washington Street, NJ Geometry Tutors Washington Street, NJ Math Tutors Washington Street, NJ Prealgebra Tutors Washington Street, NJ Precalculus Tutors Washington Street, NJ SAT Tutors Washington Street, NJ SAT Math Tutors Washington Street, NJ Science Tutors Washington Street, NJ Statistics Tutors Washington Street, NJ Trigonometry Tutors Nearby Cities With geometry Tutor Bridgewater, NJ geometry Tutors Broadway, NJ geometry Tutors Changewater geometry Tutors East Brunswick geometry Tutors Easton, PA geometry Tutors Hampton, NJ geometry Tutors Harmony Township, NJ geometry Tutors High Bridge, NJ geometry Tutors Hillsborough, NJ geometry Tutors North Plainfield, NJ geometry Tutors Oxford, NJ geometry Tutors Piscataway geometry Tutors Port Murray geometry Tutors Union, NJ geometry Tutors Washington, NJ geometry Tutors
{"url":"http://www.purplemath.com/Washington_Street_NJ_Geometry_tutors.php","timestamp":"2014-04-20T11:09:06Z","content_type":null,"content_length":"24650","record_id":"<urn:uuid:2bb64885-5bc2-4cba-bcdf-737340f94741>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Manhattan, IL Geometry Tutor Find a Manhattan, IL Geometry Tutor ...I have taught and tutored grades PK-6th. I enjoy making learning fun and interactive. I use my students' interests and learning styles to guide my instruction. 46 Subjects: including geometry, writing, statistics, algebra 2 ...I took a GED test myself as a teen, and since then I have worked in a technical and scientific career for more than 30 years as well as earning my BS in Science and entering graduate school for a Master of Education with a minor in Science. How things work is an area of knowledge that we will ap... 36 Subjects: including geometry, English, reading, writing ...They are experts at teaching study skills for the long-term; I have been applying these methods for the past few months, with dramatic results. These methods include: * Taking notes, and having students follow along, copying the full notes for themselves. * Frequently stopping to have students e... 21 Subjects: including geometry, chemistry, calculus, statistics ...I have been tutoring honors chemistry, AP chemistry, Intro. and regular chemistry and college level general chemistry 1 and 2 for last several years and have acquired the proficiency of making difficult aspects easy to understand for the students.There are two main aspects in learning chemistry -... 23 Subjects: including geometry, chemistry, biology, ASVAB ...I am passionate about what I teach and will help your son or daughter master the material they need to learn. I believe in understanding the concepts of biology and genetics, not just memorizing terms. Please allow me to help improve their knowledge in these fascinating subjects. 10 Subjects: including geometry, biology, algebra 1, elementary (k-6th) Related Manhattan, IL Tutors Manhattan, IL Accounting Tutors Manhattan, IL ACT Tutors Manhattan, IL Algebra Tutors Manhattan, IL Algebra 2 Tutors Manhattan, IL Calculus Tutors Manhattan, IL Geometry Tutors Manhattan, IL Math Tutors Manhattan, IL Prealgebra Tutors Manhattan, IL Precalculus Tutors Manhattan, IL SAT Tutors Manhattan, IL SAT Math Tutors Manhattan, IL Science Tutors Manhattan, IL Statistics Tutors Manhattan, IL Trigonometry Tutors
{"url":"http://www.purplemath.com/Manhattan_IL_Geometry_tutors.php","timestamp":"2014-04-18T23:31:56Z","content_type":null,"content_length":"23944","record_id":"<urn:uuid:1c8874ac-cdf3-4b17-ae8d-ec3e9c189dd7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
Number of results: 464 math riddle ALGEBRA WITH PIZZAZZ Geez! Homework help means you ask about problems, not the answers to your Pizzazz worksheet. Do your own Pizzazz worksheet! Why do you think your teacher gave it to you? Lazy much? Sunday, April 6, 2008 at 5:28pm by Jenny math riddle ALGEBRA WITH PIZZAZZ i need the answers to hexagon code with pizzazz Sunday, April 6, 2008 at 5:28pm by sheeta math riddle ALGEBRA WITH PIZZAZZ what is the answer to pizzazz number 39 , "Did you hear about" Sunday, April 6, 2008 at 5:28pm by kyle Math PIZZAZZ PIZZAZZ Q: what should you say when you meet an angel plz give me answer! Monday, October 25, 2010 at 8:40pm by Sarah math riddle ALGEBRA WITH PIZZAZZ Algebra with Pizzazz...Did you hear about... page 211 Sunday, April 6, 2008 at 5:28pm by Boca Math pizzazz worksheet answers Can anyone help me with math w/ pizzazz middle school d- 58?????? Sunday, March 8, 2009 at 4:38pm by Natalie Math PIZZAZZ Math pizzazz worksheet answers book D page 31Math pizzazz worksheet answers book D page 31Math pizzazz worksheet answers book D page 31Math pizzazz worksheet answers book D page 31Math pizzazz worksheet answers book D page 31Math pizzazz worksheet answers book D page 31Math ... Monday, October 25, 2010 at 8:40pm by Anoymouse Math PIZZAZZ i know it Monday, October 25, 2010 at 8:40pm by Anonymous Monday, September 10, 2012 at 11:19pm by lexa algebra with pizzazz Sunday, October 19, 2008 at 11:26pm by shaboobliontay algebra with pizzazz Monday, May 10, 2010 at 3:41pm by jesus Math PIZZAZZ Monday, October 25, 2010 at 8:40pm by Ryan algebra with pizzazz that is wrong Monday, May 10, 2010 at 3:41pm by Anonymous algebra with pizzazz Sunday, October 19, 2008 at 11:26pm by fsdf Math PIZZAZZ its HALO;D Monday, October 25, 2010 at 8:40pm by Anonymous Math PIZZAZZ I want The Answers D: Monday, October 25, 2010 at 8:40pm by Anonymous algebra with pizzazz Sunday, October 19, 2008 at 11:26pm by biznasty genetic throwback Monday, September 10, 2012 at 11:19pm by Steve algebra with pizzazz Sunday, October 19, 2008 at 11:26pm by alegebra they like to fascinate Monday, September 10, 2012 at 11:19pm by Brooklynn math algebra with pizzazz Wednesday, April 28, 2010 at 1:41am by keondrae Math PIZZAZZ Monday, October 25, 2010 at 8:40pm by cassidy Math PIZZAZZ Monday, October 25, 2010 at 8:40pm by monkey Sunday, April 6, 2008 at 5:28pm by RA math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by ds alfine math riddle ALGEBRA WITH PIZZAZZ thank you! Sunday, April 6, 2008 at 5:28pm by Anonymous math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by …e…e…e math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by ¡­e¡­e¡­e math riddle ALGEBRA WITH PIZZAZZ no no no no no no no no no no no no no no no no no no no no no no no no no no no no Sunday, April 6, 2008 at 5:28pm by no math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by Lindsey math riddle ALGEBRA WITH PIZZAZZ y 1-x Sunday, April 6, 2008 at 5:28pm by hilario Math riddle PIZZAZZ Worksheet Thursday, October 30, 2008 at 1:35am by Anonymous math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by Anonymous math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by pie algebra with pizzazz what is the title of this picture Sunday, October 19, 2008 at 11:26pm by ob math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by Sheila math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by Anonymous math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by jj Math pizzazz worksheet answers Sunday, March 8, 2009 at 4:38pm by Anonymous algebra with pizzazz What IS the Title of This Picture Sunday, October 19, 2008 at 11:26pm by keana math riddle ALGEBRA WITH PIZZAZZ what do you get when you Sunday, April 6, 2008 at 5:28pm by Jesica algebra with pizzazz Monday, May 10, 2010 at 3:41pm by Anonymous Math PIZZAZZ But.. but this doesn't make sense!! Monday, October 25, 2010 at 8:40pm by Louise Math PIZZAZZ Yeah it's halo Monday, October 25, 2010 at 8:40pm by Nick algebra with pizzazz THE LOAN OFFICER!! Sunday, October 19, 2008 at 11:26pm by Chris K math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by michel Pre-Algebra with PIZZAZZ Wednesday, February 1, 2012 at 6:40pm by Urmom Pre-Algebra with PIZZAZZ Wednesday, February 1, 2012 at 6:40pm by Urmom Pre-Algebra with PIZZAZZ Wednesday, February 1, 2012 at 6:40pm by Esmeralda pre algebra with pizzazz 239 Thursday, October 27, 2011 at 8:59pm by Blank math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by rigo math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by What problem did the dumb gangster have when the b math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by What problem did the dumb gangster have when the b math riddle ALGEBRA WITH PIZZAZZ I need help with this!!!!!!!!!!!! Sunday, April 6, 2008 at 5:28pm by megan math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by Anonymous math riddle ALGEBRA WITH PIZZAZZ idk ] \] Sunday, April 6, 2008 at 5:28pm by Anonymous math riddle ALGEBRA WITH PIZZAZZ thank you rosa Sunday, April 6, 2008 at 5:28pm by joseph math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by $^&*(*&^^##^&*( math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by Mikey math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by Chris math riddle ALGEBRA WITH PIZZAZZ was up Sunday, April 6, 2008 at 5:28pm by fghfh math riddle ALGEBRA WITH PIZZAZZ was up Sunday, April 6, 2008 at 5:28pm by fghfh math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by Wubster math riddle ALGEBRA WITH PIZZAZZ help me please Sunday, April 6, 2008 at 5:28pm by pie Math pizzazz worksheet answers need help well i do! Sunday, March 8, 2009 at 4:38pm by Miranda Cosgrove Math pizzazz worksheet answers this is hell! Sunday, March 8, 2009 at 4:38pm by Miranda Cosgrove math riddle ALGEBRA WITH PIZZAZZ tell me Sunday, April 6, 2008 at 5:28pm by ABC man Algebra with pizzazz Wednesday, January 27, 2010 at 9:40pm by Anonymous algebra with pizzazz he wanted a to go meal Sunday, October 19, 2008 at 11:26pm by cameron math riddle ALGEBRA WITH PIZZAZZ what is a metaphor Sunday, April 6, 2008 at 5:28pm by brooks math riddle ALGEBRA WITH PIZZAZZ what is a metaphor Sunday, April 6, 2008 at 5:28pm by Anonymous math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by bob algebra with pizzazz He wanted a meal on wheels Sunday, October 19, 2008 at 11:26pm by Dede math riddle ALGEBRA WITH PIZZAZZ Sunday, September 14, 2008 at 11:30pm by ur mom Math PIZZAZZ We don't cheat. We're teachers, not cheaters. Monday, October 25, 2010 at 8:40pm by Ms. Sue What is the answer to Algebra With Pizzazz 190? Monday, November 8, 2010 at 7:23pm by zackT Math PIZZAZZ @Ryan yea it is Halo... Monday, October 25, 2010 at 8:40pm by Courtney Math PIZZAZZ @Ryan yea it is Halo... Monday, October 25, 2010 at 8:40pm by Courtney Math pizzazz worksheet answers Sunday, March 8, 2009 at 4:38pm by Anonymous math riddle ALGEBRA WITH PIZZAZZ your mom Sunday, September 14, 2008 at 11:30pm by IDONTKNOW math riddle ALGEBRA WITH PIZZAZZ this is hard Sunday, April 6, 2008 at 5:28pm by bobby math algebra with pizzazz what is the equation for the percents? Wednesday, April 28, 2010 at 1:41am by sean algebra with pizzazz he always wants a big mack Sunday, October 19, 2008 at 11:26pm by jazmine Math pizzazz worksheet answers Sunday, March 8, 2009 at 4:38pm by sadarious math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by ur mom Math pizzazz worksheet answers Sunday, March 8, 2009 at 4:38pm by idk Math pizzazz worksheet answers Sunday, March 8, 2009 at 4:38pm by Anji Math pizzazz worksheet answers Sunday, March 8, 2009 at 4:38pm by bob math riddle ALGEBRA WITH PIZZAZZ Sunday, September 14, 2008 at 11:30pm by Vince math riddle ALGEBRA WITH PIZZAZZ OH LAY! Sunday, September 14, 2008 at 11:30pm by BONITA Math pizzazz worksheet answers HELP [c-30] Sunday, March 8, 2009 at 4:38pm by The Diego math riddle ALGEBRA WITH PIZZAZZ Sunday, April 6, 2008 at 5:28pm by Chris K Math pizzazz worksheet answers this retarted Sunday, March 8, 2009 at 4:38pm by hanen Math pizzazz worksheet answers Sunday, March 8, 2009 at 4:38pm by Anonymous math riddle ALGEBRA WITH PIZZAZZ Sunday, September 14, 2008 at 11:30pm by Anonymous Math pizzazz worksheet answers Sunday, March 8, 2009 at 4:38pm by Anonymous math riddle ALGEBRA WITH PIZZAZZ olay !!! Sunday, September 14, 2008 at 11:30pm by sofi pre algebra pizzazz Monday, October 3, 2011 at 10:17pm by jake Math PIZZAZZ it cant be halo...there are 14 answers Monday, October 25, 2010 at 8:40pm by Madi pre algebra with pizzazz 239 Thursday, October 27, 2011 at 8:59pm by Jack Pages: 1 | 2 | 3 | 4 | 5 | Next>>
{"url":"http://www.jiskha.com/search/index.cgi?query=Pizzazz","timestamp":"2014-04-20T06:07:20Z","content_type":null,"content_length":"22935","record_id":"<urn:uuid:394ef8b2-029f-4db1-923e-05d24b96fb01>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
American Mathematical Society Bulletin Notices AMS Sectional Meeting Program by Day Current as of Tuesday, April 12, 2005 15:21:41 Program | Deadlines | Registration/Housing/Etc. | Inquiries: meet@ams.org 2004 Spring Western Section Meeting Los Angeles, CA, April 3-4, 2004 Meeting #996 Associate secretaries: Michel L Lapidus , AMS Sunday April 4, 2004 • Sunday April 4, 2004, 8:00 a.m.-12:00 p.m. Meeting Registration Foyer, Mark Taper Hall of Humanities • Sunday April 4, 2004, 8:00 a.m.-10:50 p.m. Special Session on Contact and Symplectic Geometry, III Room 110, Mark Taper Hall of Humanities Dragomir Dragnev, University of Southern California dragnev@math.usc.edu Ko Honda, University of Southern California khonda@math.usc.edu Sang Seon Kim, University of Southern California kimss@math.usc.edu □ 8:00 a.m. The failure of the parametric h-principle for maps with prescribed Jacobian. Joseph L Coffey*, Courant Institute of Mathematical Sciences □ 8:30 a.m. Lagrangian intersections and the Serre spectral sequence. Jean-Francois Barraud, Universite de Lille 1 Octav Cornea*, University of Montreal □ 9:00 a.m. Branching Floer homology: the definition. Viktor Ginzburg, University of California, Santa Cruz Basak Gurel*, SUNY at Stony Brook Ely Kerman, SUNY at Stony Brook □ 9:30 a.m. Branching Floer homology and the Hofer-Zehnder capacity. Viktor L Ginzburg, UC Santa Cruz Basak Z Gurel, SUNY Stony Brook Ely Kerman*, SUNY Stony Brook □ 10:00 a.m. Instanton Floer homology with Lagrangian boundary conditions. Katrin Wehrheim*, Princeton University □ 10:30 a.m. • Sunday April 4, 2004, 8:00 a.m.-10:50 a.m. Special Session on Fluid Problems and Related Questions, III Room 208, Mark Taper Hall of Humanities Maria Schonbek, University of California Santa Cruz schonbek@math.ucsc.edu Yuxi Zheng, Pennsylvania State University yzheng@math.psu.edu. • Sunday April 4, 2004, 8:00 a.m.-10:50 a.m. Special Session on Smooth Ergodic Theory and Related Topics, III Room 116, Mark Taper Hall of Humanities Nicolai Haydn, University of Southern California nhaydn@postal.usc.edu Huyi Hu, Michigan State University hu@math.psu.edu • Sunday April 4, 2004, 8:00 a.m.-10:50 a.m. Special Session on Dynamic Equations on Time Scales: Theory and Applications, III Room 214, Mark Taper Hall of Humanities John M. Davis, Baylor University John_M_Davis@baylor.edu Johnny Henderson, Baylor University Johnny_Henderson@Baylor.edu Qin Sheng, University of Dayton Qin.Sheng@notes.udayton.edu • Sunday April 4, 2004, 8:00 a.m.-10:50 a.m. Special Session on Complex and Hyperbolic Geometry, III Room 114, Mark Taper Hall of Humanities Francis Bonahon, University of Southern California fbonahon@math.usc.edu Dragomir Saric, University of Southern California saric@math.usc.edu • Sunday April 4, 2004, 8:00 a.m.-10:40 a.m. Special Session on Partial Differential Equations, III Room 212, Mark Taper Hall of Humanities Igor Kukavica, University of Southern California kukavica@math.usc.edu Qi S. Zhang, University of California Riverside qizhang@math.ucr.edu □ 8:00 a.m. The $L^p$ Boundedness of Riesz Transforms for Second Order Elliptic Operators. Zhongwei Shen*, University of Kentucky □ 9:00 a.m. On the motion of an elastic solid inside of an incompressible viscous fluid. Steve Shkoller*, UC Davis □ 10:00 a.m. On depletion of the vortex-stretching term in the 3D Navier-Stokes equations. Zoran Grujic*, University of Virginia Anastasia Ruzmaikina, Purdue University • Sunday April 4, 2004, 8:00 a.m.-12:00 p.m. Exhibit and Book Sale Room 106, Mark Taper Hall of Humanities • Sunday April 4, 2004, 8:30 a.m.-10:50 a.m. Special Session on Nonlinear and Harmonic Analysis, III Room 210, Mark Taper Hall of Humanities Rowan Killip, University of California Los Angeles killip@math.ucla.edu Christopher Thiele, University of California Los Angeles thiele@math.ucla.edu • Sunday April 4, 2004, 8:30 a.m.-10:50 a.m. Special Session on Modern Problems of Integration: Theory and Applications, II Room 108, Mark Taper Hall of Humanities Mark Burgin, University of California Los Angeles mburgin@math.ucla.edu • Sunday April 4, 2004, 8:30 a.m.-10:50 a.m. Special Session on Financial Mathematics, III Room 213, Mark Taper Hall of Humanities Jaksa Cvitanic, University of Southern California cvitanic@math.usc.edu Janfeng Zhang, University of Southern California jfzhang@math.umn.edu • Sunday April 4, 2004, 9:00 a.m.-10:50 a.m. Special Session on Noncommutative Algebra and Algebraic Geometry, III Room 119, Mark Taper Hall of Humanities Lance W. Small, University of California San Diego lwsmall@ucsd.edu Paul Smith, University of Washington smith@math.washington.edu • Sunday April 4, 2004, 9:00 a.m.-10:40 a.m. Special Session on Arithmetic Geometry and K-Theory, III Room 112, Mark Taper Hall of Humanities Thomas Geisser, University of Southern California geisser@usc.edu Wayne Raskind, University of Southern California raskind@usc.edu □ 9:00 a.m. • Sunday April 4, 2004, 9:00 a.m.-10:50 a.m. Special Session on Recent Advances in the Mathematical Analysis of Geophysical and Hydrodynamical Models, III Room 215, Mark Taper Hall of Humanities Mohammed Ziane, University of Southern California ziane@math.usc.edu • Sunday April 4, 2004, 9:00 a.m.-10:50 a.m. Special Presentation, Part I TA development using case studies: A workshop for faculty Room 118, Mark Taper Hall of Humanities Solomon Friedberg, Boston College • Sunday April 4, 2004, 11:00 a.m.-11:50 a.m. Invited Address Fluid Equations and their asymptotic behavior. Room 101, Mark Taper Hall of Humanities Maria Elena Schonbek*, University of California Santa Cruz • Sunday April 4, 2004, 12:00 p.m.-1:50 p.m. Special Presentation, Part II TA development using case studies: A workshop for faculty Room 118, Mark Taper Hall of Humanities Solomon Friedberg, Boston College • Sunday April 4, 2004, 2:00 p.m.-2:50 p.m. Invited Address Noncommutative algebraic geometry. Room 101, Mark Taper Hall of Humanities Paul Smith*, University of Washington • Sunday April 4, 2004, 3:00 p.m.-4:20 p.m. Special Session on Noncommutative Algebra and Algebraic Geometry, IV Room 119, Mark Taper Hall of Humanities Lance W. Small, University of California San Diego lwsmall@ucsd.edu Paul Smith, University of Washington smith@math.washington.edu □ 3:00 p.m. Simplicity of noncommutative Dedekind domains. K R Goodearl*, University of California at Santa Barbara J T Stafford, University of Michigan □ 3:30 p.m. Classifying finite dimensional representations with squarefree tops. Birge Huisgen-Zimmermann*, University of California at Santa Barbara □ 4:00 p.m. Symplectic reflection algebras. Iain Gordon*, University of Glasgow (visiting UCSB) • Sunday April 4, 2004, 3:00 p.m.-5:50 p.m. Special Session on Fluid Problems and Related Questions, IV Room 208, Mark Taper Hall of Humanities Maria Schonbek, University of California Santa Cruz schonbek@math.ucsc.edu Yuxi Zheng, Pennsylvania State University yzheng@math.psu.edu. • Sunday April 4, 2004, 3:00 p.m.-3:50 p.m. Special Session on Smooth Ergodic Theory and Related Topics, IV Room 116, Mark Taper Hall of Humanities Nicolai Haydn, University of Southern California nhaydn@postal.usc.edu Huyi Hu, Michigan State University hu@math.psu.edu □ 3:00 p.m. On the Universality of Bosco's Rule. Kellie Michele Evans*, California State University, Northridge □ 3:30 p.m. Complex dynamics in learning. Thomas J. Taylor*, Arizona State University • Sunday April 4, 2004, 3:00 p.m.-4:40 p.m. Special Session on Partial Differential Equations, IV Room 212, Mark Taper Hall of Humanities Igor Kukavica, University of Southern California kukavica@math.usc.edu Qi S. Zhang, University of California Riverside qizhang@math.ucr.edu □ 3:00 p.m. Nonlinear Schrodinger regimes for nonlinear Maxwell equations in periodic media. Anatoli Babine*, University of California, Irvine □ 4:00 p.m. On the Solvability for Parabolic Equations with one Space Variable. Martín Lopez Morales*, Monterrey Institute of Technology Inquiries: meet@ams.org
{"url":"http://ams.org/meetings/sectional/2106_program_sunday.html","timestamp":"2014-04-19T13:05:01Z","content_type":null,"content_length":"64418","record_id":"<urn:uuid:f8b321bc-0a07-4f90-8003-d21b96ec44ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - geometry.pre-college.independent Discussion: geometry.pre-college.independent A discussion of secondary school geometry curricula, teaching geometry classes, new software and texts, problems for students, and supplementary materials such as videos and manipulatives. It's also the place for geometry students to talk about geometry with their peers. To subscribe, send email to majordomo@mathforum.org with only the phrase subscribe geometry-pre-college in the body of the message. To unsubscribe, send email to majordomo@mathforum.org with only the phrase unsubscribe geometry-pre-college in the body of the message. Posts to this group from the Math Forum do not disseminate to usenet's geometry.pre-college newsgroup.
{"url":"http://mathforum.org/kb/forum.jspa?forumID=128&start=405","timestamp":"2014-04-19T12:51:33Z","content_type":null,"content_length":"38844","record_id":"<urn:uuid:6b56f9f3-b312-4f62-ad33-0eec58a3e81d>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Re: Arbitrary Objects P.T.M.Rood@ph.vu.nl P.T.M.Rood at ph.vu.nl Wed Jan 30 07:27:31 EST 2002 On Januari 28 Charlie Silver wrote: > It seems that mathematicians do not to want to scrutinize exactly what >they're doing when they say "let z be...", where, even if it's not >explicitly mentioned, z is intended to be "arbitrary". They just reason >with arbitrary objects as a matter of course. And he wondered: > What >*are* those arbitrary objects that figure in so many mathematical proofs? In saying things like "let x be an integer" and the like, mathematicians frequently suggest that x was meant to be arbitrary. Kit Fine, in his well-known book, proposes to take this literally: besides ordinary objects, there are also arbitrary objects. And Fine has achieved some highly technical achivements on this point. Nonetheless, I have to admit that I find myself somewhat unconmfortable with the idea of an arbitrary object. One might wonder whether in saying "let x be an integer", x is meant to be an arbitrary object or an arbitray integer. It seems that in this respect James R. Brown touched upon an interesting point in his FOM message from january 29. Contrary to Fine, Brown seems inclined to think that we should choose for the second option. I take it for granted that mathematicians do say things like "let x be an F". How should such sentences be understood? In terms of arbirtary objects? I would like to propose to change our perspective on the aforementioned sentences a bit. More specifically, I would like to propose to change our semantic perspective on such sentences. As a consequence, I feel that the questions which seem to have arisen on behalf of them will also be put in a different light. Moreover, I think that on the basis of this change in we do not need arbitrary objects. Still, I believe that we can make perfect sense of sentences of the aformentioned type. I do not no whether and to what extent what I am going to say makes sense and whether it can be maintained if worked out in more detail. Nonethless, it seems to me worth trying it. Let me explain what I have in mind. I take it that Kit Fine holds that, in case of a sentence of the form "let x be an F" we should think of "x" as referring to an arbitrary object satisfying the "is an F". In other words, Fine interprets such sentences in terms of truth conditional semantics. However, by allowing for arbitrary objects he does so in a "nonstandard manner", so to speak. Upon closer inspection, however, it seems that Fine wants to think so because he attempts, among other things, to understand some specific inferential moves mathematicians are supposed to make in practice. For example, inferential like elimination of the universal quantifier, that is, an inference from "(Ay)(y is an F)" to "x is an F" . The latter sentence would then be the first-order counterpart of "let x be an F". I take it that Fine thinks that if we understand "x" as referring to an arbitrary object, then we can in some way illuminate or increase our of some inferential practices adopted by mathematicians. I won't go into Fine's reasons for thinking so; these can be found in his book on the matter. Let me first say that I find it hard to understand *what* inferential practices the aformentioned inference types (e.g., elimination of the universal quantifier) supposed to illuminate. Taken at face value, Fine is concerned with an semantics of a specific class of formal languages. In terms of such formal it can be rigorously specified what a proof is, and with the help of an semantics it can be exactly specified what "logical validity" means. It is not however, to what extent and in what sense such proofs increase our of the proofs resulting from mathematical practice. On this point, let me suffice to say that it seems to me that the logician's proofs are much more concerned with the systematic organization of mathematical results--that is: theorems--than to mathematical proofs. This relates to my second point. Notice that the original sentence "let x be an F" is interpreted by Fine as "x is an F", where, again, "x" is taken to refer to an object. However, this reading is far from obvious. For notice that "let x be an F" is an imperative, while "x is an F" is a declarative sentence. It is not clear that the semantic interpretation of the former should proceed along the lines Fine Indeed, it is not even clear that the semantic interpretation of "let x be an F" should be formulated in terms of classical truth conditions at all. I propose that we do not react by saying that the imperative nature of the former sentence is simply due to "misleading grammatical surface structure"; its "genuine" logical form would then be such that, in some way, it can be interpreted truth conditionally along familiar lines. Why? Wait and see. Notice, by the way, that actual mathematical proofs are often full of language: besides "let x be an integer" we also have, for example: "take x the set A", "let C be the union of A and B", "assume that \phi holds", and so If taken seriously, it seems to me that this suggests that actual mathematical may very well be thought of a some kind of programs, written in what often to come close to a kind of imperative programming language. More we could think of proofs as a kind of programs for cognition. I am not yet whether and to what extent all this makes sense. I am very well aware that leaves open a whole lot of questions, and I am surely unable to answer them Nonetheless, I cannot help but finding it a promising idea. (There have beem who have related mathematical proofs to programs. For example, Martin-Lof. Interestingly, I recently found John von Neumann, in his little book "The and the Brain", making a comparison between the language of mathematics and a higher-lever programming language running on the brain--this perhaps that proofs should indeed be interpreted cognitively). If we think of proofs as a kind of (imperative) programs, then it seems to wonder whether they should be interpreted in terms of a dynamic semantics, that is, a semantics which interprets "statements" (in the sense in which scientists use the term) in terms of pairs of states, i.e., pairs consisting of an "input state" and an "output state". Classical, "static" semantics may very well be considered inappropriate here. If we pursue this semantic line of thought then we could try for "let x be an It seems promising to interpret such a sentence in terms of a "reset change the value of the register named "x" to an object of type F. Thus we need no arbitrary objects, only the objects we are long familiar with, together with their associated types. Moreover, by interpreting this sentence terms of a kind of operation, I take it that we are on a way to taking its imperative character seriously. But what, then, is the object to which the value of the register named "x" is It seems to me that we don't need to answer this question in full detail. The is unknown to the "user" of the variable "x", and he need not know it. We might think of this value as "irrelevant implementation detail". Ron Rood More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2002-January/005149.html","timestamp":"2014-04-17T04:19:10Z","content_type":null,"content_length":"10064","record_id":"<urn:uuid:15e30f96-18b6-4173-a771-ba52b583dede>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
Application of an acoustic--trawl survey design to improve estimates of rockfish biomass. Application of an acoustic--trawl survey design to improve estimates of rockfish biomass. Article Report Subject: Striped bass (Protection and preservation) Stock assessment (Wildlife management) (Methods) Trawling (Methods) Authors: Hanselman, Dana H. Spencer, Paul D. McKelvey, Denise R. Martin, Michael H. Pub Date: 10/01/2012 Publication: Name: Fishery Bulletin Publisher: National Marine Fisheries Service Audience: Academic Format: Magazine/Journal Subject: Zoology and wildlife conservation Copyright: COPYRIGHT 2012 National Marine Fisheries Service ISSN: 0090-0656 Issue: Date: Oct, 2012 Source Volume: 110 Source Issue: 4 Geographic: Geographic Scope: United States Geographic Code: 1USA United States Accession 311184830 Full Text: Typically, surveys of resource biomass are designed around simple random sampling (SRS), stratified simple random sampling (SSRS), or systematic sampling (Thompson, 2002). One of these standard designs will perform adequately when the resource is relatively uniformly distributed or when the areas where variability in biomass is highest are static and well known. In practice, many resources, such as fish populations, exhibit highly variable and complex spatial structure, and standard survey methods lead to extremely imprecise estimates of biomass (Hanselman and Quinn, 2004). Novel sampling designs have been developed to improve abundance estimation under these circumstances. One example is adaptive cluster sampling (ACS; Thompson, 1990; Thompson and Seber, 1996), which has been explored both in the field (e.g., Lo et al., 1997; Woodby, 1998; Conners and Schwager, 2002; Hanselman et al., 2003) and in simulation studies (Christman, 1997; Brown, 1999; Christman and Pontius, 2000; Christman and Lan, 2001; Brown, 2003; Su and Quinn, 2003). Other methods have been used: double sampling, ratio, and regression estimator approaches to improve precision (Eberhardt and Simmons, 1987; Hanselman and Quinn, 2004; Fujioka et al., 2007). These approaches improve precision by relating a variable that is expensive or difficult to collect (e.g., trawl catches) to a correlated auxiliary variable of which many samples can be collected quickly or inexpensively (e.g., acoustic data). A resource for which standard survey methods have proven inadequate, Alaskan rockfishes (Sebastes spp.) are abundant and supported a valuable commercial trawl fishery with an average exvessel value of US$ 15 million between 2008 and 2010. Survey estimates of biomass for many Alaskan rockfish species exhibit large interannual variations that are not consistent with the longevity (>80 years) and relatively low productivity of these species (Hanselman et al., 2003; Fig. 1). One of the causes of imprecision in survey estimates of biomass is the high variability in the spatial distributions of rockfish populations. For example, the biomass estimate of Pacific ocean perch (Sebastes alutus) from the survey conducted in 1999 was driven by several very large catches, out of >800 trawls, that resulted in extremely imprecise estimates (Fig. 1). In addition to having variable spatial distributions, some rockfish species have an affinity for rocky habitat, school semipelagically, and use different habitat types by size class (Stanley et al., 2000; Zimmermann, 2003; Rooper et al., 2010). These factors contribute to high sampling variability and demonstrate the need for examining alternative sampling designs or other technologies to improve survey estimates of biomass (Godo, 2009). [FIGURE 1 OMITTED] The difficulty of surveying rockfish populations has been studied by using traditional survey designs like SSRS for some time (e.g., Lenarz and Adams, 1980). More recently, several attempts to improve survey precision for Alaskan rockfish species have been made by using alternative sampling designs. The utility of ACS has been examined in several studies (Hanselman et al., 2001; 2003). Many recent attempts have been made to use concurrently collected acoustic data to improve abundance estimation for demersal species (Ona et al. (1); Hanselman and Quinn, 2004; McQuinn et al., 2005; Fujioka et al., 2007). This subject also was the focus of a European-Union--funded project (combining acoustic and trawl surveys to estimate fish abundance, CATEFA; Hjellvik et al., 2007). These studies showed improvements in survey precision with the use of various measures, including accuracy and travel costs, but none of the survey designs were much more precise than that of a design that was stratified optimally for a particular species. For Pacific ocean perch (POP) in the Gulf of Alaska (GOA), Krieger et al. (2001) showed a relatively strong relationship between catch rates and raw acoustic backscatter in a small study area. Acoustic data were collected sporadically during the NMFS GOA trawl surveys between 2001 and 2004 (Hanselman and Quinn, 2004) and have been collected consistently from 2005 to the current study (2012). Several studies have correlated these acoustic data with trawl catch for rockfishes (Hanselman and Quinn, 2004; Fujioka et al., 2007) and walleye pollock (Theragra chalcogramma) (von Szalay et al., 2007). Although much of the previous research has focused on combining results from trawl surveys and acoustic surveys into a single biomass estimate by assessing their relative catchabilities, the focus of our study was to attempt to use acoustic data to improve a traditional trawl survey design. Our objective was to test the hypothesis that the use of acoustic data in real time in the field to delineate areas with higher trawl-survey catch per unit effort (CPUE) of POP, relative to other survey areas, could increase precision of biomass estimates from trawl surveys. To test this hypothesis, we employed an experimental sampling design, the Trawl and Acoustic Presence/Absence Survey (TAPAS) (Everson et al., 1996). This design is a variant of the double sampling design (Thompson, 2002) and acoustic backscatter data are used to estimate the presence and size of areas, or "patches," where CPUE may be high, compared with other survey areas, and to estimate the proportion of the total area classified as patches. Trawls are conducted at stations randomly selected before a cruise (planned stations) and in the acoustically detected highCPUE patches identified during a cruise. The rationale of this design is to reduce sampling variability by allocating more sampling effort in the areas of higher CPUE. If high-CPUE areas can be correctly identified with acoustic backscatter, it should be possible to estimate biomass more efficiently. As with other double sampling designs, a critical assumption is that the auxiliary variable (e.g., acoustic backscatter) shows a strong correlation with the primary variable (e.g., trawl CPUE). We believe our study describes the first field application of this TAPAS design. Materials and methods Field methods The study area for our 2009 field experiment was chosen because we had prior CPUE and acoustic data from the NMFS GOA trawl surveys and CPUE data from a prior ACS experiment (Hanselman et al., 2003). We confined our study area to the NMFS-delineated strata on the continental shelf break at depths of 200-500 m in the Yakutat area of the GOA (Fig. 2) because these depths contain the bulk of POP biomass. The sampling area was 7800 [km.sup.2]. [FIGURE 2 OMITTED] The vessel used for our 2009 study was the FV Sea Storm, a 38-m stern ramp trawler with 1710 continuous horsepower. Stations were sampled with a standardized Poly-Nor'eastern high-opening bottom trawl rigged with roller gear and a 27.2-m headrope. All gear was the standard gear used for the NMFS GOA trawl surveys. For further details on the vessel and gear used for our 2009 study, see the report by von Szalay et al. (2010). Acoustic backscatter was measured continuously during the day and during trawling, with a calibrated Simrad (2) (Kongsberg Maritime AS, Horten, Norway) ES60 echosounder and a hull-mounted 38-kHz transducer. A total of 48 stations were preselected randomly from among stations that were successfully trawled during previous NMFS GOA trawl surveys (Fig. 2). The use of previously trawled locations eliminated search time for new locations suitable for random trawls. Once random stations were selected, we constructed the most efficient path, or trackline, to connect these planned stations. Depending on the acoustic backscatter encountered during a survey, these planned stations were later classified as either "background stations" (with low CPUE) or "patch stations" (with high CPUE). The identification of patch stations required a simple and consistent definition for the spatial variability in acoustic backscatter along the trackline so that we could determine areas of intense backscatter that were large enough for bottom trawling. Acoustic backscatter data were examined in real time by using the Echoview live viewing module (Myriax Pty., Ltd., Hobart, Australia), and Echoview scripts were used to integrate the acoustic backscatter in cells along the seafloor. The conformal cells in this analysis had a height of 10 m (from 1.5 m to 11.5 m off the seafloor) and a length of 100 m. The lower boundary of each cell was situated 1.5 m off the seafloor to avoid errors in Echoview-derived bottom detection and to account for the "acoustic dead zone" (Simmonds and MacLennan, 2005)--the area where fishes are difficult to detect acoustically because the echo from the seafloor masks their acoustic signals. The value of 1.5 m was estimated with the equations in Ona and Mitson (1996) and a peak POP depth of ~225 m (Hanselman et al., 2001). The 10-m height of the cells examined in our study was considerably larger than the mean height (~6 m) of the nets used in NMFS bottom trawl surveys, but this difference accounts for POP swimming above a trawl net that may dive down into the net path in response to the pressure wave of the trawl. This potential for "herding" may increase the effective height of the net. In addition, Aglen (1996) found that the correlation between catch and acoustic backscatter off the seafloor was greatest for Atlantic species of Sebastes and suggested that a taller acoustic layer should be more robust for identification of areas of intense backscatter. The actual size of the acoustic layer, however, does not contribute directly to biomass estimates, which are based on CPUE data from Patch definition was determined with the use of 2 metrics: 1) the value of mean volume backscattering, [S.sub.v] (log decibels re 1 [m.sup.-1]. MacLennan et al., 2002) that defines high acoustic intensity ([S.sub.v] threshold) and 2) the proportion of cells where the [S.sub.v] threshold was exceeded. A proportion criterion was used to smooth the [S.sub.v] values across cells to avoid defining small areas with high acoustic backscatter as discrete patches. Analysis of archived data indicated that a proportion was preferable to a moving average that was sensitive to intermittent large increases in [S.sub.v]. The distance for evaluating the proportion of cells was a sampling window that spanned 31 cells for a total of 3.1 km, which is comparable to the distance needed to prepare for and conduct a bottom trawl. For our study, an area became designated as a patch when the proportion of cells in the sampling window that exceeded an [S.sub.v] of -65.6 dB was 0.39 or higher. The criteria for patch definition were determined by using the 80th percentile of values from acoustic backscatter data measured aboard the FV Sea Storm during a NMFS GOA trawl survey in 2005 in the same Yakutat area. The acoustic backscatter data were echo-integrated in Echoview, and the [S.sub.v] values were exported and analyzed with R software scripts, vers. 2.9.0 (R Development Core Team, 2009), which generated graphs showing values of [S.sub.v] that defined the start and end of patches meeting the threshold criteria (Fig. 3). In each identified patch, a location for a patch station that was at least 1 km (a single trawl length) from the edge of that patch was randomly selected, and a 10-min trawl was conducted from that random location as the starting point. [FIGURE 3 OMITTED] The CPUE data collected from these trawls were assigned to patch stations (random trawls conducted within identified patches) or background stations (trawls conducted within planned stations at which the acoustic threshold was not exceeded). It is important to note that if a planned station was found to be located within an acoustically identified patch, a trawl was conducted at a patch station that was randomly selected within that patch rather than at the preselected location. Data analysis The acoustic backscatter data were processed and then categorized according to vessel activity. Echoview software was used to correct the backscatter data for noise and erroneous seafloor tracking. Partitioning backscatter by vessel activity was necessary to accurately estimate the size of patches and the total length of the path traveled by the FV Sea Storm inside patches. Hence, to eliminate double counting, we avoided trackline segments where the boat circled around to set up trawls or searched for ground suitable for trawls. Seven vessel-activity categories were assigned to each 100-m cell: 1) transiting between stations, 2) returning to set up a trawl, 3) searching for ground suitable for trawls, 4) trawl deployment, 5) trawling (with offset for trawl distance behind the vessel), 6) trawl recovery, and 7) other transit that was not part of our study. Categories 1 and 4-6 were included in this study. Overlap, defined as anywhere the vessel path was within 50 m of the haul or earlier vessel path, was measured with ArcGIS software (ESRI, Redlands, CA, vers. 9.2). CPUE was an estimate of fish density (kg/[km.sup.2]) at each station and was calculated as the catch of a species in kilograms divided by the area sampled (i.e., the product of the net width in kilometers and the trawl trackline in kilometers). Patch length was computed with the haversine formula to calculate great-circle distances (as implemented in the R package argosfilter; R Development Core Team. 2009) between GPS coordinates for every 100-m interval. These calculations were compared with results from summing the number of cells to verify that all cells were very close to 100 m in length and that the GPS systems functioned correctly. For example, using the GPS coordinates, we checked that a 10-cell window in Echoview was ~1 km in length. We computed biomass estimates with 2 types of methods to compare magnitude and precision. With the first method, we omitted the patch stations, except when a patch station was originally a planned station, and calculated the abundance with an SRS estimator with the sample size used as if the full number of trawls had been sampled by simple random sampling (Thompson, 2002). For the second method, we used the estimator derived for the TAPAS design. The TAPAS estimator is functionally similar to an SSRS estimator, with an important exception: in the TAPAS estimator, CPUE values from patch stations are treated as multiple strata weighted by their associated patch size, but, in an SSRS estimator, only the total area estimated to be in the patch stratum is used. An SSRS estimator was not used in our study for 2 reasons: 1) each patch is a separate stratum with a sample size of one, and therefore within-strata variances cannot be computed and 2) the sampling design introduces patch length as an additional random variable that may or may not correlate with CPUE. If there is no correlation with patch length and CPUE, and the relationship between [S.sub.v] and CPUE is weak, then using the TAPAS design is similar to suboptimally allocating samples in an SSRS design. This suboptimal allocation would cause the TAPAS estimator to perform slightly worse than an SSRS estimator because of the extra random variable introduced, and the SSRS estimator would in turn be no better than an SRS estimator. The focus of the TAPAS design is to reduce the sampling variance in estimating biomass based upon the degree to which acoustic backscatter corresponds with trawl CPUE. Each of these measures shows a relationship with true fish density, and systematic biases relative to true density may exist in either measure because of processes such as fishes herding to trawl nets or responding to vessel noise. For Alaskan groundfishes, it is commonly assumed that trawl CPUE is less variable than acoustic backscatter as a measure of fish density (over the path of the trawl), although scenarios could occur where this assumption was not realistic (Freon and Misund, 1999). Information that addresses systematic biases, such as catchability and availability of fish to a sampling method, could be incorporated into the TAPAS design, although this approach would not address the central issue of the imprecision of survey estimates that result from variable spatial distributions of rockfish. For stocks with quantitative stock assessment models, the degree of systematic biases potentially can be addressed by estimating catchability and gear selectivity parameters. The stratum-wide TAPAS and SRS estimates of biomass were calculated with the following formulae based on Everson et al. (1996): [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (1) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (2) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (3) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (4) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (5) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (6) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (7) where [[??].sub.0] = the mean CPUE (kg/[km.sup.2]) of the background trawls; n = the total sample size; I = the total number of patches encountered; [[??].sub.i] = the CPUE (kg/[km.sup.2]) of trawl i; [[??].sub.1] = the mean CPUE of the patch trawls; l' = the total track length within patches; [[B.sub.0] = the estimated biomass for swept areas at background stations (kg); A = the total sampling area ([km.sup.2]); L = the total length (km) of the trackline traveled by the vessel throughout this study; [[??].sub.1] = the estimated biomass for swept areas at patch stations (kg); [??] = the TAPAS estimate of total biomass in the sampling area; [B.sub.SRS] = the SRS estimate of total biomass in the sampling area; and [I.sup.*] = the number of patches that were not planned stations. The variance derived in Equation 7 of Everson et al. (1996) left out covariance and area terms. We derived an improved estimator of the variance (Table 1) using the delta method (Quinn and Deriso 1999); this derivation is presented in the Appendix. We computed confidence intervals with the "log-Bayes" method suggested by Everson et al. (1996). Finally, we computed SRS and TAPAS confidence intervals with the bootstrap method (Efron and Tibshirani, 1993). In complex sampling designs, there are alternative ways to bootstrap confidence intervals (Rao and Wu, 1988; Smith, 1997; Christman and Pontius, 2000). For our study, we examined several bootstrap methods and found that the results among them were similar. Thus, for comparison with analytical results, bootstrapping was conducted as suggested by Everson et al. (1996): pairs of patch lengths and associated values of patch CPUE were resampled to preserve any correlation between patch length and CPUE. The number of patches selected was parametrically bootstrapped by drawing from a Poisson distribution with the realized number of patches as the mean of the distribution. An additional Poisson random variable was drawn to determine whether a patch station was included in the SRS estimator. The mean of this second Poisson distribution was the number of planned stations that occurred in a patch during the survey. This source of variability reflects the probability that any of the observed patch stations could have been located at one of our planned stations. Bootstrapping was conducted 10,000 times with the R statistical package (R Development Core Team, 2009). For the TAPAS estimators, both the CPUE values and the patch lengths were resampled with replacement, but, for the SRS estimator, only the CPUE values were resampled. Percentile confidence intervals were constructed with the bias-corrected method of Efron and Tibshirani (1993) that was used in Everson et al. (1996). This method centers intervals on the analytically estimated mean. To improve the precision of the biomass estimates obtained with our planned design, we re-analyzed the data with alternative patch definitions. First, we examined the relationships of trawl CPUE to other variables, such as the maximum [S.sub.v], variance or standard deviation of [S.sub.v], median [S.sub.v], depth, products and ratios of these quantities, and multiple regressions. These examinations were done to see if focusing on different quantitative characteristics of the acoustic backscatter could result in an improved threshold. We then chose a number of alternative patch definitions and, with the best alternative, estimated biomass and precision for comparison with our original field results. We examined the spatial structure of the [S.sub.v] along the entire trackline and fish densities from trawls using classical method of moments sample variograms (Cressie, 1993). We also re-examined the densities of POP in trawls from an ACS experiment conducted in 1998 (Hanselman et al., 2001), during which trawls were conducted at a higher spatial resolution (i.e., closer together) than they were in our 2009 study. We coarsened the spatial resolution (upscaled the support) of the acoustic data by aggregating the [S.sub.v] values so that the distance between [S.sub.v] values was 1 km, which was the sampling resolution (support) of the trawl data (Atkinson and Tate, 2000). We varied the maximum distance of spatial correlation until a clear range was identified. We then fitted different variogram models (spherical, circular, exponential, and linear) to determine the best shape of the variogram model. Field sampling occurred during daylight hours over 12 days in August 2009. A total of 59 trawls were completed, with 40 background trawls and 19 patch trawls (Fig. 2). The total weight of all species caught was 30.1 metric tons (t). POP made up 55% of the overall catch from our study, followed by walleye pollock and shortraker rockfish (S. borealis) (Table 2). Mean CPUE of POP was 42,450 kg/[km.sup.2] in patch trawls and 7,475 kg/[km.sup.2] in background trawls. The total trackline covered was 1250 km; 112 km of this total was in patches where we trawled. Overall, about 20% of the trackline (230 km) was above the threshold [S.sub.v] but was either not long enough to invoke our patch definition or deemed untrawlable by the captain of the FV Sea Storm. A return to trawl inside patch stations added an additional travel cost of about 2% beyond the cost of trawling only at planned stations. The last 2 of the 59 trawls were conducted after our planned trackline was completed, and the stations of these 2 trawls were identified with an alternative patch definition (see discussion later in this section); therefore, we did not use them in our main analysis. Before comparing [S.sub.v] measurements with trawl densities, we checked for normality of the data. The distribution of [S.sub.v] along the trackline was reasonably normal (Fig. 4), but trawl densities of POP were left-skewed and required transformation to approach normality. Hanselman and Quinn (2004) showed that power transformations were superior to the logarithm for POP survey data. Applying the Box-Cox power transformation showed that the likelihood surface at different powers was relatively flat between 0.1 and 0.3. We chose to use the fourth-root of trawl CPUE because it showed a better residual pattern and had higher correlation with [S.sub.v] than did the logarithm and lower power transformations. The relationship between [S.sub.v] and POP CPUE was relatively weak, particularly below -70 dB (Fig. 5). The relationship between POP CPUE and patch length was tenuous, with a low correlation coefficient (r= 0.08). In some cases when our patch algorithm detected a patch, schools of POP appeared to dissipate or move off the seafloor in the time it took to return to the same location and set up a trawl (Fig. 6). The resulting biomass estimates were very similar among the different types of estimators (Table 3, Fig. 7). All estimates of biomass from our study were much more precise, in terms of the coefficient of variation (CV), than estimates of biomass based on data for the same area from the NMFS GOA trawl survey conducted in 2009 (Fig. 7). The bootstrap procedure yielded similar estimates of biomass and precision between the TAPAS and SRS estimators. If we included the 2 trawls conducted opportunistically off the planned trackline, on the basis of our alternative patch definition, the TAPAS design, with much higher biomass estimates, performed better than the SRS design (CV=27% vs. CV=34%). We examined our results with respect to the variables that would have produced a better correlation with trawl CPUE. The weak relationship between [S.sub.v] and POP CPUE was obtained when comparing only for the length of the trawl trackline (offset for trawl distance behind the vessel). Higher correlations between acoustic backscatter and trawl CPUE resulted when acoustic backscatter was calculated from segments that were centered at the trawl track and 3-5 times the length of the trawl trackline than from segments that were only the length of the trawl trackline. We derived 4 new patch definitions, using a 3-trawl-length sampling window (~3 km), in addition to the patch definitions we used in the field (Table 4). We show results as if we had used the patch definition with the best relationship between [S.sub.v] and POP density in the field. [FIGURE 4 OMITTED] [FIGURE 5 OMITTED] Comparing these patch definitions, we found that the strongest predictor of POP CPUE was the one that used the 90th percentile of maximum [S.sub.v] in a 3-trawl-length sampling window, which approximated the window we used for our 2009 survey (Fig. 8). This sampling window also gave the lowest error rate in identifying areas of below-average CPUE as a patch station when they should not be (Table 5). The standard deviation of the 3-trawl-length sampling window also performed reasonably well. Alternative 5, one of the alternative patch definitions (Table 4), was attempted to combine backscatter variability and maximum [S.sub.v], but it did not perform better than maximum [S.sub.v] alone. The addition of depth as a variable to any of these alternatives in a multiple regression yielded minor, insignificant improvements to the model. As a basis for a modified patch definition, we re-analyzed the acoustic data using an [S.sub.v] criterion of -58.11 dB derived from the 90th percentile of the maximum [S.sub.v] from the original 2005 FV Sea Storm data in our 31-cell window. Only 8 of the previous 19 patch stations were located in patches under this new definition. Because of this smaller sample size, SRS estimates were less precise with this new patch definition than with the original patch definition. However, despite the smaller sample size, the new threshold for TAPAS did yield a slightly improved CV than the CV obtained with the original threshold (Table 3). Overall biomass estimates were slightly higher, and all measures of precision yielded similar results (Table 3). Variogram analysis of the [S.sub.v] measurements showed strong spatial correlation at the spatial resolution of the trawl data (Fig. 9A). Variogram analysis of the values of trawl CPUE collected during our study revealed no appreciable spatial structure, likely because the trawls were relatively far apart (146 km on average). Alternatively, we compared the [S.sub.v] measurements from our 2009 study with the values of CPUE collected during an ACS experiment conducted in 1998 (Hanselman et al. 2001); CPUE data were collected at a finer scale (27 km on average) in the ACS experiment than in our study (Fig. 9B). We fitted a spherical model to the [S.sub.v] measurements and a linear model to the trawl CPUE on the basis of visual fit and goodness of fit (coefficient of determination, [r.sup.2]). The range of correlation for the [S.sub.v] measurements was larger (~13 km) than the range of correlation for the trawl densities (~8 km). The variogram for the trawl CPUE data had a relatively larger nugget, or unexplained microscale variance, than the variogram for the [S.sub.v] data. [FIGURE 6 OMITTED] The study area, Yakutat, and target species, POP, for our field study were chosen to increase the likelihood of obtaining a strong relationship between acoustic backscatter and trawl CPUE. Several previous studies had observed relatively strong relationships for rockfishes between [S.sub.v] and trawl CPUE in the GOA (Krieger et al., 2001; Hanselman and Quinn, 2004; Fujioka et al., 2007), and the Yakutat area was known to have high rockfish abundance. Additionally, Hanselman and Quinn (2004) and Fujioka et al. (2007) showed that stratifying by acoustic backscatter or double sampling could improve precision of biomass estimates on the basis of data collected during previous ACS surveys for rockfishes and biennial NMFS GOA trawl surveys. The use of real-time processing of acoustic backscatter to determine patches was efficient, and POP were the most commonly caught species and were found in higher densities than other fishes at patch stations. However, the conditions that make the TAPAS design more efficient than random sampling, as shown in simulation studies (Spencer et al., 2012), did not materialize in the fieldwork described here. For the TAPAS design to be more effective than SRS, the categorization of patch and background areas must show a correspondence with trawl CPUE (i.e., CPUE values consistently should be higher at the patch stations than at the background stations). When this correspondence does not occur, the use of these categories does not improve the precision of biomass estimates and increases variability because the sizes of the patch and background areas are estimated. Despite the minimal requirement of classifying the acoustic data into only 2 categories, the results of our study indicate that the effectiveness of the TAPAS design remains dependent upon the strength of the relationship between [S.sub.v] and trawl CPUE. Patch size and CPUE were only weakly correlated, and the variance of the planned stations was not as high as expected. Variogram analysis of ACS data showed that the spatial correlation range for trawl CPUE may be smaller than the range for [S.sub.v] data. Previous variograms estimated for the NMFS GOA trawl surveys had indicated a range of ~4.5 km (Hanselman et al. 2001), which was also smaller than the range of the acoustic backscatter collected in our study. The larger range of the [S.sub.v] data may indicate that some of the intensity of [S.sub.v] is a result of ambient variables other than POP density. The nugget (unexplained variance) of the trawl CPUE is large, relative to the total variance for the trawl CPUE, an indication that the trawl CPUE data likely have more measurement error than the acoustic data and that the data were sparser. The trawl CPUE variogram in our study had a larger range than did the individual areas analyzed in Hanselman et al. (2001). This difference in range could have occurred because the aggregated data in our study had more pairs of trawl densities at larger lag distances than did the spatially explicit variograms with smaller sample sizes in that earlier study. [FIGURE 7 OMITTED] One source of discrepancy between the acoustic and trawl data is that multiple species contribute to the acoustic backscatter. Von Szalay et al. (2007) had success relating acoustic backscatter of walleye pollock with CPUE in the Bering Sea. However, walleye pollock make up the majority of the biomass in the Bering Sea; in contrast, POP is one of a number of abundant species in the GOA. Krieger et al. (2001) had more success relating acoustic backscatter with rockfishes using a Simrad EK500 quantitative echosounder. In their study, which was conducted in the more rugged habitat off Southeast Alaska, the catch was primarily rockfishes and contained species that were smaller in size than the larger rockfish species and walleye pollock that made up the non-POP catch in our study. Although we restricted our study area to depths where POP would be the dominant species and, indeed, where POP was the largest component of our catch, our original sampling algorithm revealed patches of acoustic backscatter that were not characteristic of rockfishes. Steadier and less intense than backscatter associated with rockfishes, these patches may have been caused by squid (Berryteuthis spp.) or eulachon (Thaleichthys pacificus). In addition, a substantial amount of walleye pollock was caught coincident with POP catches. The TAPAS design may perform better in multispecies situations because of the relatively relaxed requirement of categorizing data into 2 groups, as opposed to the more involved effort of a statistical regression required for double sampling in a regression design. [FIGURE 8 OMITTED] Differences in the portions of the water column surveyed by the 2 sampling methods also can lead to low correspondence between acoustic and trawl data. Rockfishes can be closely associated with the seafloor and, perhaps, in the acoustic dead zone, but walleye pollock and other species are typically observed higher in the water column. We also noted the ephemeral nature of fish schools (Fig. 6), which may be attributed to responses to vessel noise or to changes in the position of fishes in the water column for foraging. Diurnal and seasonal changes in the level of aggregation clearly could hinder the effectiveness of our acoustic algorithm in relation to fish CPUE. Changes of the vertical orientation of POP to the seafloor also could influence backscatter and may have affected our acoustic algorithm (Freon and Misund 1999). When the field data from our study were re-analyzed with different patch definitions, we found that CPUE was more strongly related to acoustic backscatter in a window longer than the typical trawl distance--likely a result of the extremely fine spatial structure of schools or to the behavioral reactions of fishes to the initial pass of the FV Sea Storm over the patch (Mitson and Knudsen, 2003). If the spatial structure of schools was relatively narrow, then the trawl net may not have passed through the same school that was identified by the echosounder because of currents and imperfect tracking of the original vessel path (Ona and Godo, 1990; Engas et al., 2000). Re-analysis revealed that the use of the 90th percentile of maximum [S.sub.v] was more successful in identifying stations where rockfish CPUE was high and resulted in slightly more precise biomass estimates, compared with results from the original patch definition, despite a lower sample size. As with the analysis of Hanselman and Quinn (2004) with their ACS simulations, our re-analysis of the acoustic data showed that the TAPAS estimator can be improved when a high criterion of acoustic backscatter is used for the patch definition (i.e., additional sampling is invoked only in a few, high fish-density instances) and essentially outliers are removed from the random sampling portion of the ACS and TAPAS estimators. The TAPAS design incorporates aspects of both adaptive sampling, which usually consists of a single sampling gear applied to a highly variable spatial distribution, and double sampling designs that rely on sampling primary and auxiliary variables (Thompson, 2002). The TAPAS design provides one operational method for implementing a double sampling for stratification design. The use of acoustics to stratify a survey area was generally recommended by Fujioka et al. (2007) and Hjellvik et al. (2007), with the difference that acoustic backscatter is continuously monitored rather than sampled in discrete units. Results from our study and the ACS design attempted by Hanselman et al. (2003) highlight that even when focusing specifically on the abundance of rockfishes, it is difficult to survey stocks with high spatial variability that exist on both trawlable and untrawlable grounds. In the ACS surveys of Hanselman et al. (2003) specialized tire gear was used, which made trawling on each cluster station possible, but made comparisons of CPUE impractical between those ACS surveys and surveys that used typical NMFS trawl gear. In our study, we used standard NMFS trawl gear; however, it could not be used in all observed patch stations. If POP were more abundant in some of these untrawlable patches and we had used different gear that would have allowed us to survey those patches, we may have had higher POP densities in our patch trawls. When comparing our estimates with assessments of Hanselman et al. (2003), we found that the CV on mean CPUE was lower at the planned stations in our study than in the SRS portion of the ACS study. Unlike the bimodal bootstrap distribution of the SRS estimates in Hanselman et al. (2003), a relatively Gaussian distribution resulted when bootstrapping the TAPAS and SRS estimators. Both designs have the disadvantage of having a variable sample size, but both have the advantage of completing a survey in a single pass through a study area. The TAPAS design imposed a small additional cost for travel time because our vessel had to return to trawl a random location in a patch, but the daily number of trawls conducted was not affected. The ACS and TAPAS designs are both more efficient than some of other two-stage designs that require the completion of an initial random sample before the second stage can begin. Another challenge with field studies of spatially variable species is that performance of survey designs depends highly on the fish densities encountered in a given survey. Previous attempts to improve the correspondence between acoustic backscatter and trawl CPUE have focused on partitioning the acoustic backscatter to species (Mackinson et al., 2005) and quantifying relative catchability of these 2 sampling methods (McQuinn et al., 2005). Beare et al. (3) found that using the length and species composition information from trawls to partition acoustic backscatter to species improved correlations. Mackinson et al. (2005) used a fuzzy logic approach to examine the relationship between acoustic backscatter and trawl CPUE, and they found that depth was a better predictor of trawl CPUE than was acoustic backscatter. For Alaskan groundfishes, species composition can be inferred relatively accurately by depth (Hanselman and Quinn 2004). Further work should focus on identifying specific characteristics of acoustic backscatter, such as school shape, target strength, and school density that would contrast rockfishes from co-occurring species. However, multivariate analyses have shown that distinguishing POP backscatter from walleye pollock backscatter is challenging (Spencer et al. (4)). [FIGURE 9 OMITTED] Increased precision for future applications of the TAPAS design could be attained in several ways. Improved correspondence between acoustic backscatter and trawl CPUE, for example, could be obtained from better partitioning of acoustic backscatter to species and quantifying the availability and vulnerability of a fish to these 2 sampling methods. Spencer et al. (2012) showed that the highest gains in efficiency for the TAPAS design, compared with SRS in simulations, were achieved when the spatial correlation of fish density was low and there was a large number of patches of small size. Such circumstances resulted in the TAPAS design sampling a high proportion of the total area of patches in the population. In addition, Spencer et al. (2012) showed that a modified TAPAS design, in which every third patch was sampled, resulted in higher efficiency than did an SRS design. However, the situations in Spencer et al. (2012) where there were significant gains in performance relative to SRS occurred only when there was a strong relationship between [S.sub.v] and CPUE. Everson et al. (1996) showed that precision could be most improved when patches were smallest and they were a low proportion of the total survey area such that the probability of sampling high-CPUE areas during a random survey was low. These results indicate that the TAPAS design may show greater gains in precision for biomass estimates of a stock that is even more concentrated into small areas than is For these rockfish stocks, the greatest improvement in precision of trawl-survey indices of biomass can be achieved by increasing the overall sample size in the narrow depth band where they are most abundant. The ACS and TAPAS designs are useful frameworks for efficiently adding samples in abundant areas, and they also can serve to improve the NMFS trawl index in specific high-variability strata. Clearly, these designs should be applied only in depths and areas of known high abundance and variability of a species of interest, and the design should use a high threshold for invoking additional sampling. For the TAPAS design to be applied efficiently, the specific acoustic backscatter characteristics of a target species need to be well known so that the relationships between patch definition, patch length, and CPUE are strong. Under these conditions (e.g., a patch station reliably has high CPUE), it might be beneficial to obtain an additional commercial vessel to follow the primary survey vessel, sample patch stations, and retain the catch, while the primary survey vessel continues to sample planned stations. These cost-recovery surveys (e.g., Hanselman et al. 2003) have been useful in Alaska as zero- or low-cost alternatives to the normal practice of discarding catch on purely random surveys. Even if a design that combines acoustic surveys and trawl surveys could provide superior estimates of biomass, in practice, such a design would have to be modified to a context of a multispecies groundfish survey in most situations. Such adaptation is an additional complication in the use of novel sampling designs, given the competing sampling goals and limited resources of fisheries monitoring. In a multispecies context, the TAPAS design may be a way to add more sampling effort for major species groups that occupy a similar depth or area when differentiation of backscatter is difficult (as it is for rockfishes and walleye pollock). An avenue of future research would be to examine the precision of biomass estimates determined with the TAPAS design for multiple species that produce significant acoustic backscatter. Our work shows that sampling fish populations with high spatial variability remains a challenge. To more accurately understand acoustic and spatial patterns for POP and other rockfishes, it may be necessary to consider more quantitative acoustic or geostatistical methods and to move away from the traditional paradigm of bottom trawl surveys (Godo, 2009). However, in areas that are fortunate enough to have a long time series of standardized fishery-independent surveys, it is rare and, perhaps, unwise to make changes to the sampling design or the sampling method. TAPAS and analogous designs could be used to increase sampling intensity for specific stocks, without necessarily creating a break in a biomass time series. The potential improvement in the precision of biomass estimates through the use of the TAPAS design when a strong relationship exists between [S.sub.v] and CPUE (Spencer et al., 2012) offers motivation for continuing to refine our understanding of acoustic and spatial patterns and the methods used to define high-CPUE patches. This appendix outlines the method with which we derived TAPAS variance estimators. Capital letters denote random variables; lower case letters denote realized values of random variables. This formulation redefines l'/L as an estimate of p, which is the proportion of the survey area in the patches, so that the properties of the binomial distribution can be used to capture the variability of track lengths of patches. Overbar notation refers to the mean, and hat notation refers to a sample estimate. p--Proportion of survey area in patches, a-Total area swept by bottom trawl, A-Total area of sampling area, [D.sub.0]--Mean background CPUE, [B.sub.0]--Background biomass, A[D.sub.0](1-p), t--Total track length, l'--Total track length within patches, [l.sub.i]--Length of track in patch i, I--Total number of patches encountered, L--Sum of length in patches, [L.sub.i]--Length of patch i, [[bar.D].sub.i]-Mean CPUE within patch i, [D.sub.1]--Mean patch CPUE in all patches, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] [B.sub.1]--Patch biomass, B--[B.sub.0] + [B.sub.1], B-Total biomass [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] Biomass variance After defining the variables, we derived the variance of the overall biomass estimate (V[B]): V [B] = [A.sup.2]V [[D.sub.0] (1 - p) + [D.sub.1] p]. We used the definition of the variance of a sum: V[B]=[A.sup.2](V[[D.sub.0] (1- p)]+ V[[D.sub.1]p]+ 2Cov[[D.sub.0] (1- p),[D.sub.1]p]). We applied the definition of the variance of a sum again: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] We removed constants and re-arranged the equation so that covariance terms were at the end: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] We defined parts to simplify the derivation with the delta method: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] Part P: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] Part Q: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] Part R: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] We aggregated the parts: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] We rearranged the terms (covariance between Do and [??] was assumed to be zero): [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] We estimated the biomass variance by replacing expected values with sample statistics: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] Derivation of the variance of the estimates of p, [D.sub.0], and [D.sub.1] Variance of the estimate of p: Each patch accounted for some proportion of the total length of the trackline so that [P.sub.i]=[L.sub.i]/t. We were interested in the overall proportion of the trackline that was in the patches, or p. The parameter p was considered to be a parameter of a binomial distribution. In a binomial distribution, an estimate of p is X/n, where X was the number of successes in n discrete observations. In our TAPAS application, the total of the discrete observations was [[??].sub.L] (the number of 100-m segments along the survey trackline) and X was the number of these observations that were in a patch. Our sample estimate of X/n was [??] with the binomial estimated variance: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] These [[??].sub.L] observations could have been assumed to be independent, but there was likely some spatial correlation. For our application, variogram analysis of acoustic backscatter data indicated that the range parameter was ~12 km. This range resulted in an effective sample size that was much smaller than the total number of discrete sampling units, and variance was underestimated. The value of [[??].sub.L] used in the variance equation should reflect this autoeorrelation. In our application, we divided our total trackline length (~1200 km) by the variogram range parameter (~12 km), a calculation that yielded an [[??].sub.L] ~100. Variance of the estimate of [D.sub.0]: The variance in [D.sub.0] was the straightforward random sampling estimator shown as the variance of [[??].sub.0] in Table 1. Variance of the estimate of [D.sub.1]: Recall that [D.sub.1] was estimated as [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] We expressed the [L.sub.i] in terms of p and made substitutions to obtain [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] where [z.sub.i] = [p.sub.i]/p. This expression rescaled the values of [p.sub.i] so that they summed to one. In this case, we observed a given number of "samples" of trackline from the patches, and [z.sub.i] was the proportion of all the patch trackline that was in patch i. This calculation was still a binomial distribution, except, in this case, we ignored the background category and were concerned only with the patches. We applied the delta method to this sum of products of random variables: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] The variance of [z.sub.i] could be obtained with the same method as that for the variance of p, with an adjusted [n.sub.L]. We substituted sample statistics for expected values to obtain the estimated variance of [D.sub.1]: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] In theory, the term for Cov([[bar.D].sub.i], [z.sub.j]) should be a non-zero value. For example, consider a case with 2 patches. If the proportion in one patch is large, the proportion in the other patch is small, and CPUE and patch length (the proportion) are correlated, then the CPUE would be small in the patch with the small proportion. However, as the number of patches becomes much greater than 2, the covariance between patches and density decreases as [z.sub.j][right arrow]0. We assumed this covariance was negligible: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] Covariance of [[D.sub.1] p]: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] Here, we did not substitute [z.sub.i] for [p.sub.i]/p because the set of [P.sub.i] was common to both functions. Recall that the covariance of 2 functions of random variables was [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] In our application, g(x) = [D.sub.1] and h(x) = p. We applied the delta method: [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] We used the argument above that Cov ([[bar.D].sub.i], [p.sub.j]) , where i [not equal to] j, can be ignored. This argument leaves only the Coy ([[bar.D].sub.i], [p.sub.i]), which, in the sampling design of the TAPAS, was expected to be a nonzero value (i.e., the length of a given patch is correlated with the CPUE of that patch): [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] We substituted sample statistics to obtain the covariance of [[D.sub.1], p] : [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] This work was supported in part by a grant from the North Pacific Research Board (NPRB), project 809, and we are grateful to the NPRB for support of this project. We thank the crew of the FV Sea Storm, B. VanWinkle, and C. Conrath for excellent fieldwork. We thank the 3 anonymous reviewers who helped to greatly improve the manuscript. We also thank P. Hulson and T. Quinn for helpful discussions, K. Shotwell for GIS support, and G. Fleischer and D. King for their logistical support of this project. Manuscript submitted 15 September 2011. Manuscript accepted 11 July 2012. Literature cited Aglen, A. 1996. Impact of fish distribution and species composition on the relationship between acoustic and swept-area estimates of fish density. ICES J. Mar. Sci. 53:501-505. Atkinson, P. M., and N. J. Tate. 2000. Spatial scale problems and geostatistical solutions: a review. Prof. Geogr. 52:607-623. Brown, J. A. 1999. A comparison of two adaptive sampling designs. Aust. N. Z. J. Star. 41:395-403. 2003. Designing an efficient adaptive cluster sample. Environ. Ecol. Stat. 10:95-105. Christman, M. C. 1997. Efficiency of some sampling designs for spatially clustered populations. Environmetrics 8:145-166. Christman, M. C., and F. Lan. 2001. Inverse adaptive cluster sampling. Biometrics 57:1096-1105. Christman, M. C., and J. S. Pontius. 2000. Bootstrap confidence intervals for adaptive cluster sampling. Biometrics 56:503-510. Conners, M. E., and S. J. Schwager. 2002. The use of adaptive cluster sampling for hydroacoustic surveys. ICES J. Mar. Sci. 59:1314-1325. Cressie, N. A. C. 1993. Statistics for spatial data, rev. ed., 928 p. John Wiley & Sons, Inc., New York. Eberhardt, L. L., and M. A. Simmons. 1987. Calibrating population indices by double-sampling. J. Wildl. Manage. 51:665-675. Efron, B., and R. J. Tibshirani. 1993. An introduction to the bootstrap, 436 p. Chapman and Hall, New York. Engas, A., O. R. Godo, and T. Jorgensen. 2000. A comparison between vessel and trawl tracks as observed by the ITI trawl instrumentation. Fish. Res. 45:297-301. Everson, I., M. Bravington, and C. Goss. 1996. A combined acoustic and trawl survey for efficiently estimating fish abundance. Fish. Res. 26:75-91. Freon, P., and O. A. Misund. 1999. Dynamics of pelagic fish distribution and behaviour: effects on fisheries and stock assessment, 348 p. Blackwell Science, Oxford. Fujioka, J. T., C. R. Lunsford, J. Heifetz, and D. M. Clausen. 2007. Stratification by echosounder signal to improve trawl survey precision for Pacific ocean perch. In Biology, assessment, and management of North Pacific rockfishes (J. Heifetz, J. DiCosimo, A. J. Gharrett, M. S. Love, V. M. O'Connell, and R. D. Stanley, eds.), p. 473-492. Alaska Sea Grant College Program Rep. AK-SG-07-01, Fairbanks, AK. Godo, O. R. 2009. Technology answers to the requirements set by the ecosystem approach. In The future of fisheries science in North America (R. J. Beamish and B. J. Rothschild, eds.), p. 373-403. Springer Science and Business Media, New York. Hanselman, D. H., and T. J. Quinn II. 2004. Sampling rockfish populations: adaptive sampling and hydroacoustics. In Sampling rare or elusive species (W. Thompson, ed.), p. 271-296. Island Press, Washington, D.C. Hanselman, D. H., T. J. Quinn II, C. R. Lunsford, J. Heifetz, and D. M. Clausen. 2001. Spatial inferences of adaptive cluster sampling on Gulf of Alaska rockfish. In Spatial processes and management of marine populations (G. H. Kruse, N. Bez, A. Booth, M. W. Dorn, S. Hills, R. N. Lipcius, D. Pelletier, C. Roy, S. J. Smith, and D. Witherell, eds.), p. 303-325. Alaska Sea Grant College Program Rep. AK-SG-01-02, Fairbanks, AK. Hanselman, D. H., T. J. Quinn II, C. R. Lunsford, J. Heifetz, and D. M. Clausen. 2003. Applications in adaptive cluster sampling of Gulf of Alaska rockfish. Fish. Bull. 101:501-512. Hjellvik, V., D. Tjostheim, and O. R. Godo. 2007. Can the precision of bottom trawl indices be increased by using simultaneously collected acoustic data? The Barents Sea experience. Can. J. Fish. Aquat. Sci. 64:1390-1402. Krieger, K., J. Heifetz, and D. Ito. 2001. Rockfish assessed acoustically and compared to bottom-trawl catch rates. Alaska Fish. Res. Bull. 8:71-77. Lenarz, W. H., and P. B. Adams. 1980. Some statistical considerations of the design of trawl surveys for rockfish (Scorpaenidae). Fish. Bull. 78:659-674. Lo, N., D. Griffith, and J. R. Hunter. 1997. Using a restricted adaptive cluster sampling to estimate Pacific hake larval abundance. Calif. Coop. Oceanic Fish. Invest. Rep. 38:103-113. Mackinson, S., J. van der Kooij, and S. Neville. 2005. The fuzzy relationship between trawl and acoustic surveys in the North Sea. ICES J. Mar. Sci. 62:1556-1575. MacLennan D., P. Fernandes, and J. Dalen. 2002. A consistent approach to definitions and symbols in fisheries acoustics. ICES J. Mar. Sci. 59:365-369. McQuinn, I. H., Y. Simard, T. W. F. Stroud, J-L. Beaulieu, and S. J. Walsh. 2005. An adaptive, integrated "acoustic-trawl" survey design for Atlantic cod (Gadus morhua) with estimation of the acoustic and trawl dead zones. ICES J. Mar. Sci. 62:93-106. Mitson, R. B., and H. P. Knudsen. 2003. Causes and effects of underwater noise on fish abundance estimation. Aquat. Living Resour. 16:255-263. Ona, E., and O. R. Godo. 1990. Fish reaction to trawling noise: the significance for trawl sampling. Rapp. P.-v. Reun. Con. Proc. Int. Mer 189:159-166. Ona, E., and R. B. Mitson. 1996. Acoustic sampling and signal processing near the seabed: the deadzone revisited. ICES J. Mar. Sci. 53:677-690. Quinn II, T. J., and R. B. Deriso. 1999. Quantitative fish dynamics, 542 p. Oxford Univ. Press, New York. Rao, J. N. K., and C. F. J. Wu. 1988. Resampling inference with complex survey data. J. Am. Stat. Assoc. 83:231-241. R Development Core Team. 2009. R: A language and environment for statistical computing. R Foundation for Statistical Computing Vienna, Austria. [Available from http://www.R-project.org, accessed Jul 2009.] Rooper, C. N., G. R. Hoff, and A. De Robertis. 2010. Assessing habitat utilization and rockfish (Sebastes spp.) biomass on an isolated rocky ridge using acoustics and stereo image analysis. Can. J. Fish. Aquat. Sci. 67:1658-1670. Simmonds, E. J., and D. N. MacLennan. 2005. Fisheries acoustics: theory and practice, 2nd ed., 437 p. Blackwell Science Ltd., Oxford. Smith. S. S. 1997. Bootstrap confidence limits for groundfish survey estimates of mean abundance. Can. J. Fish. Aquat. Sci. 54:616-630. Spencer, P. D., D. H. Hanselman, and D. R. McKelvey. 2012. Simulation modeling of a trawl-acoustic survey design for patchily distributed species. Fish. Res. 125-126:289-299. doi: 10.1016/j.fishres.2012.03.003. Stanley, R. D., R. Kieser, K. Cooke, A. M. Surry, and B. Mose. 2000. Estimation of a widow rockfish (Sebastes entomelas) shoal off British Columbia, Canada as a joint exercise between stock assessment staff and the fishing industry. ICES J. Mar. Sci. 57:1035-1049. Su, Z., and T. J. Quinn II. 2003. Estimator bias and efficiency for adaptive cluster sampling with order statistics and a stopping rule. Environ. Ecol. Stat. 10:17-41. Thompson, S. K. 1990. Adaptive cluster sampling. J. Am. Stat. Assoc. 412:1050-1059. 2002. Sampling. 2nd ed., 367 p. John Wiley & Sons, Inc., New York. Thompson, S. K. and G. A. F. Seber. 1996. Adaptive sampling, 265 p. John Wiley & Sons, Inc., New York. von Szalay, P. G., D. A. Somerton, and S. Kotwicki. 2007. Correlating trawl and acoustic data in the eastern Bering Sea: A first step toward improving biomass estimates of walleye pollock (Theragra chalcogramma) and Pacific cod (Gadus macrocephalus)? Fish. Res. 86:77-83. von Szalay, P. G., N. W. Raring, F. R. Shaw, M. E. Wilkins, and M. H. Martin. 2010. Data report: 2009 Gulf of Alaska bottom trawl survey. NOAA Tech. Memo. NMFS-AFSC-208,245 p. Woodby, D. 1998. Adaptive cluster sampling: efficiency, fixed sample sizes, and an application to red sea urchins Strongylocentrotus franciscanus in Southeast Alaska. In Proceedings of the North Pacific symposium on invertebrate stock assessment and management, Can. Spec. Publ. Fish Aquat. Sci. 125 (G. Jamieson and A. Campbell, eds.), p. 15-20. NRC Research Press, Ottawa, Zimmermann, M. 2003. Calculation of untrawlable areas within the boundaries of a bottom trawl survey. Can. J. Fish. Aquat. Sci. 60:657-669. (1) Ona, E., M. Pennington, and J. H. Volstad. 1991. Using acoustics to improve the precision of bottom-trawl indices of abundance. ICES Council Meeting (CM) document, 1991/D:13, 11 p. (2) Mention of trade names or commercial companies is for identification purposes only and does not imply endorsement by the National Marine Fisheries Service, NOAA. (3) Beare, D. J., D. G. Reid, T. Greig, N. Bez, V. Hjellvik, O. R. Godo, M. Bouleau, J. van der Kooij, S. Neville, and S. Mackinson. 2004. Positive relationships between bottom trawl and acoustic data. ICES CM (council meeting) document 2004/R:24, 15 p. (4) Spencer, P. D., D. H. Hanselman, and D. R. McKelvey. 2011. Evaluation of echosign data in improving trawl survey biomass estimates for patchily-distributed rockfish. North Pacific Research Board Final Report 809, 110 p. [Available from http://doc.nprb.org/web/08_prjs/809_final%20report_revised%20_2_.pdf, accessed September 2011.] Dana H. Hanselman (contact author) [1] Paul D. Spencer [2] Denise R. McKelvey [3] Michael H. Martin [3] Email address for contact author: dana.hanselman@noaa.gov [1] Marine Ecology and Stock Assessment Program Auke Bay Laboratories Ted Stevens Marine Research Institute Alaska Fisheries Science Center, NMFS, NOAA 17109 Pt. Lena Loop Road Juneau, Alaska 99801-8626 [2] Resource Ecology and Fisheries Management Division Alaska Fisheries Science Center, NMFS, NOAA 7600 Sand Point Way, NE Seattle, Washington 98115-6349 [3] Resource Assessment and Conservation Engineering Division Alaska Fisheries Science Center, NMFS, NOAA 7600 Sand Point Way, NE Seattle, Washington 98115-6349 Table 1 Biomass and variance estimators for 2 sampling designs, simple random sampling (SRS) and Trawl and Acoustic Presence/Absence Survey (TAPAS), the latter of which was evaluated as a way to reduce the variability in estimated biomass for Pacific ocean perch (Sebastes alutus). [??]=estimated biomass (kg), [??]=estimated catch per unit of effort (CPUE, kg/[km.sup.2]) in trawl i, [bar.d]=the mean CPUE, A=total sampling area ([km.sup.2]), a=the amount of A sampled ([km.sup.2]), n=total sample size, [??]=the estimated proportion of the trackline in patches, [l.sub.i]=the estimated length (km) of trackline in patch i, l'=the estimate of length (km) of total trackline in patches, [bar.l] =the mean patch length, and L=the length of the entire trackline, I=the number of patches, [I.sup.*]=the number of patches excluding those originally in the background, and n,.=an estimate of the effective number of independent samples on the trackline; the denominator of 12 was derived from the range parameter of the acoustic variogram. Estimator Biomass Variance SRS [[??].sub.SRS] = A [MATHEMATICAL [SIGMA].sup.n/ EXPRESSION NOT [I.sup.*] REPRODUCIBLE IN [[??].su.bi]/ ASCII.] TAPAS [MATHEMATICAL [??] = l'/L EXPRESSION NOT REPRODUCIBLE IN [MATHEMATICAL ASCII.] EXPRESSION NOT REPRODUCIBLE IN EXPRESSION NOT REPRODUCIBLE IN [??][[??]] = [??] [[??].sub.L] = L/12 EXPRESSION NOT REPRODUCIBLE IN Table 2 Catch (kg), number of individuals, and mean fork length (cm) of fish and associated coefficient of variation (CV) for the top species caught during our experimental rockfish acoustic-trawl survey conducted in 2009 near Yakutat, Alaska. Weight Number of Common name Scientific name (kg) individuals Pacific ocean perch Sebastes alutus 16,603 27,276 Walleye pollock Theragra chalcogramma 3110 3988 Shortraker rockfish Sebastes borealis 2173 426 Arrowtooth flounder Atheresthes stomias 1738 1506 Shortspinethornyhead Sebastolobusalascanus 1020 5131 Dover sole Microstomus pacificus 789 963 Sablefish Anoplopoma fimbria 775 292 Dusky rockfish Sebastes uariabilis 426 262 Silvergray rockfish Sebastes brevispinis 381 167 Jellyfish Chrysaora melanaster 320 187 Other 2836 8444 Mean fork length Length CV Common name (cm) (%) Pacific ocean perch 32.3 19 Walleye pollock 45.8 12 Shortraker rockfish 65.3 15 Arrowtooth flounder 37.6 32 Shortspinethornyhead 24.0 27 Dover sole 40.2 15 Sablefish 61.0 20 Dusky rockfish 46.1 5 Silvergray rockfish 54.8 13 Jellyfish -- -- Other -- -- Table 3 Parameter estimates from 2 sampling designs, Trawl and Acoustic Presence/Absence Survey (TAPAS), and simple random sampling (SRS), with the use of 2 different patch definitions. Patch definitions are based on percentiles of mean or maximum volume backscattering ([S.sub.v]) from acoustic data collected during our 2009 acoustic/ trawl survey. Rockfish densities and biomass estimates are given in metric tons per square kilometers (t/[km.sup.2]) and metric tons (t), respectively. n=total sample size, 7=the number of patches, I'= the estimate of length (km) of total trackline in patches, L=the length of the entire trackline, [D.sub.0]=the mean background CPUE, [D.sub.1]=the mean patch CPUE, [B.sub.0] =the background biomass, [B.sub.l]=the patch bio mass, B =the TAPAS estimate of total biomass (kg), [B.sub.SRS] = the SRS estimate of total biomass. SRS coefficients of variation (CVs) were calculated by using the full sample size (n). 80th percentile 90th percentile Parameter of mean [S.sub.v] of max [S.sub.v] N-I 40 41 n 57 49 I 17 8 l' 93.6 43.5 L 1251 1251 [D.sub.0] 7.48 7.43 [D.sub.1] 9.74 24.82 [B.sub.0] 53,928 55,898 [B.sub.l] 5684 6734 B 59,612 62,632 [CV.sub.B] (analytical) 34.6 34.0 [CV.sub.B] (bootstrap) 34.5 33.6 [B.sub.SRS] 68,517 68,517 [CV.sub.SRS] (analytical) 27.8 30.0 [CV.sub.SRS] (bootstrap) 30.2 31.9 Table 4 The original method used in our 2009 acoustic--trawl survey and proposed alternative methods for selection of "patches," or areas where catch per unit of effort may have been high, compared with other survey areas, on the basis of acoustic backscatter over a sampling window of 3 trawl lengths. Patch definitions were based on a threshold of mean volume backscattering ([S.sub.v]). Alternatives were created to maximize the strength of the relationship of [S.sub.v] to CPUE and improve survey precision. Original patch definition The [S.sub.v] was computed for each 100-m cell within a moving window of 31 cells or 3.1 km. A patch was defined when the proportion of these cells exceeding an [S.sub.v] value of -65.6 dB was greater than 0.39 (the 80th percentile of the backscatter data collected in the Yakutat, Alaska, area in 2005 aboard the FV Sea Storm). Alternative 1 Higher field threshold definition To account for the uniform and weak nonrockfish backscatter encountered in the field, the [S.sub.v] threshold was increased to -61.4 dB from the value used in the original method. The threshold for the moving proportion was lowered to 0.13. These values were computed from the 90th and 50th percentiles of our field data, respectively. The rationale for this definition was to detect patches when the acoustic backscatter was more variable but stronger than the backscatter detected as patches with the original patch definition. Alternative 2 Standard deviation of [S.sub.v] To capture the tight intermittent clustering of rockfish schools, we used the following threshold: the standard deviation of [S.sub.v] was above the 80th percentile. The rationale of this definition was to capture some distributional properties associated with rockfish acoustic backscatter. Alternative 3 Variance to mean ratio of [S.sub.v] To remove uniform, diffuse acoustic backscatter and account for tight intermittent clustering of rockfish schools, we used the following threshold: the variance-to-mean ratio was above the 80th percentile. The rationale of this definition was to identify a patch when the variance-to-mean ratio moved far above 1 (e.g., departing from a Poisson distribution toward a hypergeometric distribution). Alternative 4 Maximum [S.sub.v] If the survey was conducted in a depth stratum and area where the target species was abundant, it was assumed that pulses in maximum [S.sub.v] should reflect the dominant species. For this alternative, the 90th percentile of maximum [S.sub.v] was used. Alternative 5 Maximum [S.sub.v] and standard deviation of [S.sub.v] This method refined Alternative 4 by adding variability into the criterion in a multiple regression. The rationale of this definition was similar to the rationale of Alternative 2. Table 5 Comparison of 5 alternative methods of patch selection to the original design for a 3-trawl-length (-3.0 km) acoustic sampling window. Patch Selects above Selects below definition Description average CPUE average CPUE Original 80th percentile, 14 4 0.38 of the time 1 90th percentile, 7 2 0.12 of the time 2 80th percentile of 4 1 the standard deviation of [S.sub.v] 3 80th percentile of 4 1 variance to mean 4 90th percentile of 6 1 max [S.sub.v] 5 80th percentile of 4 1 1/max [S.sub.v] x SD Patch Error rate definition (%) Original 22 Gale Copyright 2012 Gale, Cengage Learning. All rights reserved.
{"url":"http://www.biomedsearch.com/article/Application-acoustic-trawl-survey-design/311184830.html","timestamp":"2014-04-18T06:01:56Z","content_type":null,"content_length":"83230","record_id":"<urn:uuid:d8925635-1037-4f40-b2cb-6f7129f8e70e>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
How do you find the average force exerted on something, using mass, velocity, and time? In physics, classical mechanics and quantum mechanics are the two major sub-fields of mechanics. Classical mechanics is concerned with the set of physical laws describing the motion of bodies under the action of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering and technology. Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets, stars, and galaxies. Besides this, many specializations within the subject deal with gases, liquids, solids, and other specific sub-topics. Classical mechanics provides extremely accurate results as long as the domain of study is restricted to large objects and the speeds involved do not approach the speed of light. When the objects being dealt with become sufficiently small, it becomes necessary to introduce the other major sub-field of mechanics, quantum mechanics, which reconciles the macroscopic laws of physics with the atomic nature of matter and handles the wave–particle duality of atoms and molecules. However, when both quantum mechanics and classical mechanics cannot apply, such as at the quantum level with many degrees of freedom, quantum field theory (QFT) becomes applicable. QFT deals with small distances and large speeds with many degrees of freedom as well as the possibility of any change in the number of particles throughout the interaction. To deal with large degrees of freedom at the macroscopic level, statistical mechanics becomes valid. Statistical mechanics explores the large number of particles and their interactions as a whole in everyday life. Statistical mechanics is mainly used in thermodynamics. In the case of high velocity objects approaching the speed of light, classical mechanics is enhanced by special relativity. General relativity unifies special relativity with Newton's law of universal gravitation, allowing physicists to handle gravitation at a deeper level. Mass in special relativity incorporates the general understandings from the concept of mass-energy equivalence. Added to this concept is an additional complication resulting from the fact that "mass" is defined in two different ways in special relativity: one way defines mass ("rest mass" or "invariant mass") as an invariant quantity which is the same for all observers in all reference frames; in the other definition, the measure of mass ("relativistic mass") is dependent on the velocity of the observer. The term mass in special relativity usually refers to the rest mass of the object, which is the Newtonian mass as measured by an observer moving along with the object. The invariant mass is another name for the rest mass of single particles. The more general invariant mass (calculated with a more complicated formula) loosely corresponds to the "rest mass" of a "system". Thus, invariant mass is a natural unit of mass used for systems which are being viewed from their center of momentum frame (COM frame), as when any closed system (for example a bottle of hot gas) is weighed, which requires that the measurement be taken in the center of momentum frame where the system has no net momentum. Under such circumstances the invariant mass is equal to the relativistic mass (discussed below), which is the total energy of the system divided by c (the speed of light) squared. In physics, escape velocity is the speed at which the kinetic energy plus the gravitational potential energy of an object is zero. It is the speed needed to "break free" from the gravitational attraction of a massive body, without further propulsion. For a spherically symmetric body, the escape velocity at a given distance is calculated by the formula Related Websites:
{"url":"http://answerparty.com/question/answer/how-do-you-find-the-average-force-exerted-on-something-using-mass-velocity-and-time","timestamp":"2014-04-18T04:13:00Z","content_type":null,"content_length":"27702","record_id":"<urn:uuid:367844a2-4349-4269-988e-2321dc17de72>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: An electronics store makes a profit of $32 for every portable DVD player sold and $64 for every DVD recorder sold. The manager’s target is to make at least $256 a day on sales of the portable DVD players and DVD recorders. Write and graph an inequality that represents the number of both kinds of DVD players that can be sold to reach or beat the sales target. Let p represent the number of portable DVD players and r represent the number of DVD recorders. • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/508039dfe4b0b569600556bd","timestamp":"2014-04-16T13:34:58Z","content_type":null,"content_length":"45831","record_id":"<urn:uuid:070ab145-e838-4451-99f4-ce510b2a89c0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
Polynomial Division and Rational Expressions Lights, Camera...Fraction! Quiz Think You Got it Down? Test your knowledge again: Polynomial Division and Rational Expressions: Lights, Camera...Fraction! Quiz Think you’ve got your head wrapped around Polynomial Division and Rational Expressions? Put your knowledge to the test. Good luck — the Stickman is counting on you! evaluated at x = -3 is Q. The constant 0 is not a function. neither a polynomial nor a rational expression. a polynomial but not a rational expression. a rational expression but not a polynomial. both a polynomial and a rational expression. Simplify the following expression as much as possible: The expression is already simplified as much as possible. Q. Which two expressions are equivalent to each other? Q.The expressions all values of x. all values of x except 0 and -4.5. all values of x except 3.5. all values of x except 0, 3.5, and 4.5. no values of x. Q. Which of the following statements is not always true? The sum of two rational expressions is a rational expression. The sum of two rational expressions is never a rational expression. The difference of two rational expressions is a rational expression. The product of two rational expressions is a rational expression. The quotient of two rational expressions is a rational expression. Q. Which of the following is a valid simplification of the complex rational expression
{"url":"http://www.shmoop.com/polynomial-division-rational-expressions/quiz-2.html","timestamp":"2014-04-18T03:29:46Z","content_type":null,"content_length":"52416","record_id":"<urn:uuid:75cb0c68-25d4-4b91-b6b7-871ddda4650e>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Rocket Science Getting Started Real spaceflight dynamics are complex. There are only a few basic forces involved, but some of them (namely air resistance) are non-conservative and tricky. In order to make a useful, accurate model of spaceflight you have to build a complex numerical approximation. However there is no need to be too complicated too quickly. A great place to start is to find an analytical solution for a rocket, solving for the height it will travel. Put everything in the 'up' axis and ignoring a ton of things will result in an easy place to start. The Simplest Case The true simplest case is a point-mass rocket that is in free space with no gravity, air-resistance and has a simplified model of propulsion. Describing this situation mathematically should result in the classic Tsiolkovsky rocket equation. Rocket in free space Consider a rocket in free space. There are no external forces. We will be nice and constrain the motion of the rocket in one direction. At some time the rocket starts to eject mass out of one end of the rocket. Through conservation of linear momentum this will propel the rocket forward. The motion of the rocket in our simple case will be determined by Newton's 2^nd law. Usually written as F = ma since we have a variable mass system we use the more general form of We can safely ignore the vector part because all motion is in one dimension. Also we can re-write p-dot as dp/dt because we are going to use some arguments about infinitesimals to finish our We can write an equation for dp from the diagram below because we know the momentums involved. From the definition of infinitesimal values The momentum at some time plus dt will be the momentum of the rocket plus the momentum of the exhaust gas In this diagram you can see we have a rocket of mass m moving at some velocity relative to an inertial reference frame v. At some time t + dt it will release an exhaust mass dm' at some velocity relative to the rocket v[e]. Many textbooks refer to v[e] as u. In a real rocket is an ideal "effective velocity" of the exit gasses and directly proportional to efficiency of the engine. It is related to the more common term I[sp]. Simplifying and throwing out higher order terms will reveal A positive exhaust mass is a mass lost from the rocket so we say that In this simple case we have no external forces. Solving for dv gives We can integrate both sides to find Simplified this becomes the Tsiolkovsky rocket equation Since it is hard to look up values of v[e] in books about engines we can use its relation to I[sp] Where g[0] is the acceleration due to gravity at the surface of the Earth and I[sp] is the specific impulse in seconds. Adding Gravity Starting with our previous result of But now having an external force of gravity, mg Solving for dv gives Or as an equation of motion Integrating over v, t and m (Where t[bo] is the burn-time of the rocket) Gives an equation for the velocity at burnout Burnout Velocity The term -gt is referred to as gravity loss. This represents the losses endured by launching in a gravity well. To maximize burnout velocity you want to minimize gravity loss, which means burning the fuel as fast as possible. This makes sense because when you spend a long time burning fuel you are wasting energy lifting unburnt fuel to a higher altitude rather than your payload. In the real world usually rocket motors are categorized by thrust and not burn time. So it becomes useful to rewrite this equation in terms of a new parameter, thrust-to-weight ratio. We define thrust-to-weight ratio, Ψ, as the thrust (which we assume is constant) divided by the weight at liftoff, m[0]g. If we realize that thrust is We can find a relation to the time it takes to burn through the fuel. If we take the burn rate to be constant (again, not a bad assumption) then the time is to burn out is However we see that m[f] - m[0] is negative! This okay because m-dot is secretly also negative. What we actually want is the fuel mass divided by a (positive) burn rate. We also want this in terms of mass ratio and thrust-to-weight ratio. So we do some clever math, first pulling out m[0] Now if we multiply by "one" we get We recognize the term Plugging this in for t in our burnout velocity equation gets Now we can also introduce the symbol μ for the mass ratio (for brevity's sake) and replace v[e] with gI[sp] We see now that we have burnout velocity that is in general a function of three parameters: μ, Ψ and I[sp] Three variables and 3 dimensions is just begging for some graphing. Here is a graph showing the burnout velocity of a vertical rocket at an I[sp] of 250 [s] for an Isp of 250. Click to You can see that there a heavy dependence on mass ratio. But also to avoid gravity loss it is beneficial to have Ψ as high as you can too. A typical thrust-to-weight ratio is somewhere between 2 and 3 for large commercial rockets. We can also talk about what effect changing I[sp] has on the burnout velocity. If we take the partial with respect to I[sp] we get For a rocket with a μ of 10 and a Ψ of 2 we see that adding an extra second of I[sp] results in of 18.2 m/s of extra velocity. Burnout Height Now that we have done some work to find the burnout velocity we can find the burnout height much the same way. Starting with our differential equation for rocket motion we double integrate to find I had to look this one up in a table of integrals but the answer should be This can be rearranged to look like Doing the same substitution of t and using μ we get Now making our I[sp] substitution Ballistic Trajectory After burnout the rocket will continue to move upwards on a ballistic trajectory. Since we are considering gravity as a constant and ignoring air resistance we can use energy arguments to easily figure out the final height the rocket will go Solving for the change in height Bringing it all together So if we really like math we can combine the ballistic and burnout equations to find the total height a rocket under constant gravity with no air resistance will travel (as a function of μ, Ψ and I [1] Turner, Martin J.L. Rocket and Spacecraft Propulsion. Springer Praxis Books. [New York]: Praxis Publishing Ltd, Chichester, UK, 2005. [2] Thomson, William Tyrrell. Introduction to Space Dynamics. New York: Wiley, 1961.
{"url":"http://psas.pdx.edu/simplerockets_1d/","timestamp":"2014-04-19T14:29:20Z","content_type":null,"content_length":"19354","record_id":"<urn:uuid:d045baca-c9e6-4593-ae8d-a791c3bbe395>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
Lesson Plan: Telling Time Students will tell time in hours and half-hours. Class: 1st Grade Duration: Two class periods, each 30-45 minutes each Key Vocabulary: clock, hour, half hour, sixty, thirty, analog, digital, vertical, horizontal Objectives: Students will use their bodies to demonstrate time in hours and half hours. Standards Met: 1.MD.3. Tell and write time in hours and half-hours using analog and digital clocks. Lesson Introduction: Depending on when you teach math during the day, it would be helpful to have a digital clock sound an alarm when math class begins. If your math class begins on the hour or the half hour, even better! Step-by Step Procedure: 1. If you know your students are shaky on time concepts, it’s best to start this lesson with a discussion of morning, afternoon, and night. When do you get up? When do you brush your teeth? When do you get on the bus for school? When do we do our reading lessons? Have students put these into the appropriate categories of morning, afternoon, and night. 2. Tell students that we are going to get a little more specific. There are special times of day that we do things, and the clock shows us when. Show them the analog clock (the toy or the classroom clock) and the digital clock. 3. Set the time on the analog clock for 3:00. First draw their attention to the digital clock. The number(s) before the : describe the hours, and the numbers after the : describe the minutes. So for 3:00, we are exactly at 3 o’clock and no extra minutes. 4. Then draw their attention to the analog clock. Tell them that this clock can also show the time. The short hand shows the same thing as the number(s) before the : on the digital clock - the 5. Show them how the long hand on the analog clock moves faster than the short hand - it is moving by minutes. When it is at 0 minutes, it will be right up at the top, by the 12. (This is hard for kids to understand.) Have students come up and make the long hand move quickly around the circle to reach the 12 and zero minutes several times. 6. Have students stand up. Have them use one arm to show where the long clock hand will be when it is at zero minutes. Their hands should be straight up above their heads. Just like they did in Step 5, have them move this hand rapidly around an imaginary circle to represent what the minute hand does. 7. Then have them imitate the 3:00 short hand. Using their unused arm, have them put this out to the side so that they are imitating the hands of the clock. Repeat with 6:00 (do the analog clock first) then 9:00, then 12:00. Both arms should be straight above their heads for 12:00. 8. Change the digital clock to be 3:30. Show what this looks like on the analog clock. Have students use their bodies to imitate 3:30, then 6:30, then 9:30. 9. For the remainder of the class period, or at the introduction of the next class period, ask for volunteers to come up to the front of the class and make a time with their bodies for other students to guess. Homework/Assessment: Have students go home and discuss with their parents the times (to the nearest hour and half hour) that they do at least three important things during the day. They should write these down on paper in the correct digital format. Parents should sign the paper indicating that they have had these discussions with their child. Evaluation: Take anecdotal notes on students as they complete Step 9 of the lesson. Those students who are still struggling with the representation of hours and half hours can receive some extra practice with another student or with you.
{"url":"http://mathlessons.about.com/od/firstgradelessons/a/Lesson-Plan-Telling-Time.htm","timestamp":"2014-04-17T15:27:12Z","content_type":null,"content_length":"41649","record_id":"<urn:uuid:6c7d9a39-e4ca-4ae1-bb50-f8f761f3bbe1>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
Stress analysis Stress analysis: recent developments in numerical and experimental methods From inside the book Your search - triangular - did not match any documents. Try this search over all volumes: triangular We haven't found any reviews in the usual places. The finitedifference approach 3 Twodimensional stress analysis and plate flexure 20 15 other sections not shown Bibliographic information
{"url":"http://books.google.com/books?id=56RRAAAAMAAJ&q=triangular&dq=related:ISBN0853347409&source=gbs_word_cloud_r&cad=6","timestamp":"2014-04-21T11:33:51Z","content_type":null,"content_length":"111244","record_id":"<urn:uuid:8278ed24-2c46-478f-8830-77b73e538488>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Abstract Algebra 3rd Edition Chapter 1.2 Solutions | Chegg.com Now we will find the order of the elements of the group Therefore, elements of the group Therefore, orders of the elements of the group Order of 1 is Order of r is Order of s is Order of Order of Order of Order of Order of Order of Order of
{"url":"http://www.chegg.com/homework-help/abstract-algebra-3rd-edition-chapter-1.2-solutions-9780471433347","timestamp":"2014-04-18T14:11:25Z","content_type":null,"content_length":"47783","record_id":"<urn:uuid:363237a5-9093-4aed-b12a-1ddb1c6026f4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
• Atomic & nuclear physics and Quantum physics & nuclear physics Topic 7: Atomic and nuclear physics & AHL Topic 13 (SL option B): Quantum physics and nuclear physics IB equations: Important Terms & Concepts: • nucleus, proton, electron, neutron, photon, ion, neutrino • isotope, nuclide, nucleon, atomic & mass number • “plum pudding” model, Rutherford model, Bohr model, Schrodinger wave model • black-body radiation, emission/absorption spectra, energy levels, Balmer lines, “electron in a box” • photoelectric effect, de Broglie hypothesis, matter waves, Heisenberg uncertainty principle • alpha, beta, gamma radiation, radioactivity, radioactive decay, half-life, decay constant, • transmutation, fission, fusion, mass-energy equivalence, mass defect, binding energy, mass spectrometer • Geiger – Marsden (Rutherford, Goldfoil) αlpha “Coulomb” scattering experiment Important Experiments: • Milikan i) Oil Drop (mass & charge of e-) and ii) Photoelectric Effect experiment (proof for Einstein) • Davisson–Germer experiment (electron as wave), electron diffraction experiments HyperPhysics Atomic-Quantum HyperPhysics Nuclear 7.1 The atom Atomic Model - Tsokos: Atomic/Nuke Core Kirk Review: Atomic & Nuke I Course Companion Ch 16 Mech Universe Video - The Atom • 7.1.1 Describe a model of the atom that features a small nucleus surrounded by electrons. (Students should be able to describe a simple model involving electrons kept in orbit around the nucleus as a result of the electrostatic attraction between the electrons and the nucleus.) • The model of the atom consists of a small central nucleus that is surrounded by electrons orbiting in circular orbits, organized in different energy levels. • The central nucleus contains most of the mass of the atom, where as the electrons provide the little mass of the negative charges. • The positive and negative charges attract each other by electrostatic attraction and are balance out to make the atom neutral. - Sweta (Tim Kirk) • 7.1.2 Outline the evidence that supports a nuclear model of the atom. (A qualitative description of the Geiger Marsden experiment and an interpretation of the results are all that is required.) • Geiger Marsden/ Rutherford/ Gold-Foil Experiment □ Positive alpha particles are “fired” through screens at a thin gold leaf □ According to the mass and velocity of the alpha particles, it was predicted that the alpha particles would pass through the gold leaf. □ They discovered that some of the alpha particle had deflected through huge angles. The deflection was caused by the alpha particle repelling off the nucleus. □ - Sweta (Tim Kirk) Thompson’s model compared to Rutherford’s model - Sweta (Tim Kirk) • 7.1.3 Outline one limitation of the simple model of the nuclear atom. □ The problem with this theory was that accelerating charges are known to lose energy. If the orbiting electrons were to lose energy they would spiral into the nucleus. The Rutherford model cannot explain to us how atoms are stable. (Kevin - __Allan Riddick__) □ Charges that are accelerating radiate energy since it’s constantly changing direction as it moves in a circular orbit. Therefore, the charges would lose energy and move towards the nucleus. In this case, atoms can’t exist, proving that there is a fallacy in the model. (Sweta- Tsokos) □ The simple model of the nuclear atom is limited because accelerated charges are known to radiate energy so orbital electrons should constantly lose energy (this changing direction means the electrons are accelerating). (prakash - kirk) • 7.1.4 Outline evidence for the existence of atomic energy levels. (Students should be familiar with emission and absorption spectra, but the details of atomic models are not required. Students should understand that light is not a continuous wave but is emitted as “packets” or “photons” of energy, each of energy hf.) • This is found in the emission and absorption spectra. □ Elements with enough energy can emit light. By using either diffraction grating or a prism, it is possible to analyze the different colors within the given light. □ - Sweta (Tim Kirk) □ - Sweta • By analyzing the frequencies of light emitted from an element, it is possible to figure out what it is. (Kevin - Kirk) □ A Continuous spectrum would mean that all frequencies of the electromagnetic spectrum are present - light from the sun contains all visible wavelengths of light. (Kevin - Kirk) □ An emission spectrum is not continuous (see Sweta’s photo) but contain certain frequencies of light relative to the discrete energy levels present in the atom. Each possible combination of drops in energy level (e.g. all the ones dropping to the 2th energy level of the hydrogen atom are all part of the Blamer series) emit different wavelengths of light enabling us to deduce what element created that light. (Kevin - Kirk) • The different distances between the lines of color indicate the distance between each energy level, and the type of color indicates the amount of energy that is released when an electron moves from a higher energy level to a lower energy level. (Ilkyu Pudong Physics) Nuclear structure • 7.1.5 Explain the terms nuclide, isotope and nucleon. □ Nuclide is a particular species of atoms whose nucleus contains a specified number of protons and a specified number of neutrons (protons and neutrons that form a nucleus) (Wamiq, Kirk) □ Isotopes are element nuclides with the same number of proton but different number of neutrons. (Wamiq, Kirk) □ Isotopes: Atoms that have the same atomic number but have a different neutron matter. This is because the number of neutrons can vary in an element. (Sweta- Rinehart Holt) □ Protons and neutrons collectively are called Nucleons (Wamiq, Kirk) • Example: 611C , 612 C , 613C, 614C Four isotopes of Carbon (Sweta- Rinehart Holt) • 7.1.6 Define nucleon number A, proton number Z and neutron number N. • ZAX- example • Nucleon number (A) - mass number, equal to number of nucleons (Wamiq, Kirk) • Proton number (Z) - atomic number, equal to number of protons in the nucleus (Wamiq, Kirk) • Neutron number (N) - Mass Number - Proton Number, equal to number of neutrons in the nucleus (Wamiq, Kirk) • To calculate the neutron number of an isotope is determined by the relationship A= Z + N (the mass number of an atom A = the mass of protons Z added to the number of neutrons N). (Sweta- Rinehart • Add the number of Protons and the number of Neutrons to get the Neucleon number (Yun Hwan) (Yun Hwan- __Allan Riddick__) • 7.1.7 Describe the interactions in a nucleus. (Students need only know about the Coulomb interaction between protons and the strong, short‑range nuclear interaction between nucleons.) • According to our knowledge of electrostatics a nucleus should not be stable. Protons are positive charges so should repel each other. There must be another force in the nucleus that overcomes the electrostatic repulsion and hold the nucleus together. This force is called the strong nuclear force. (Yun Hwan- __Allan Riddick__) • Strong nuclear forces must be very strong to overcome the electrostatic forces. They must also have a very small range as they are not observed outside of the nucleus. (Yun Hwan- __Allan • Neutrons have some involvement in strong nuclear forces. Small nuclei have equal numbers of protons and neutrons. Larger nuclei, which are harder to hold together, have a greater ratio of neutrons to protons. (Yun Hwan, Prakash- __Allan Riddick__) • The nucleons are held within the nucleus of the atom by a force called the strong nuclear force. This attractive force is stronger than the electrical force if the distance between the nucleons is small (r= 10-15m or less). Those with a greater distance are said to have a negligible nuclear force. (Sweta- Tsokos) • Nucleus consists of protons (positively charged) and neutrons (neutral). Together they are called nucleons. (Ilkyu - Pudong Physics) • Protons tend to repel against each other because of their positive electric force. (Ilkyu - Pudong Physics) • Strong nuclear force is another force that eists among the nucleons and that is what unites all the protons and neutrons together. (Ilkyu - Pudong Physics) • Strong nuclear force is greater than the electric force of protons which is why it is able to stabilize protons despite their repulsive forces. (Ilkyu - Pudong Physics) • Strong nuclear force is short-ranged, which is very strong when the distance between nucleons is less than 10^-15m apart. Beyond that distance, the force becomes zero. (Ilkyu - Pudong Physics) • Gravitational and electric forces on nucleus are long-ranged, which pushes away the approaching nucleons. (Ilkyu - Pudong Physics) • In order for strong nucleus force to apply to each other, they must be traveling at a higher speed (hot temperature) or be pushed against each other by a great force. (Ilkyu - Pudong Physics) • According to our knowledge of electrostatics a nucleus should not be stable. Protons are positive charges so should repel each other. There must be another force in the nucleus that overcomes the electrostatic repulsion and hold the nucleus together. This force is called the strong nuclear force. • Strong nuclear forces must be very strong to overcome the electrostatic forces. They must also have a very small range as they are not observed outside of the nucleus. • Neutrons have some involvement in strong nuclear forces. Small nuclei have equal numbers of protons and neutrons. Larger nuclei, which are harder to hold together, have a greater ratio of neutrons to protons. 7.2 Radioactive decay Radioactivity - Tsokos: Radioactivity/ Core Tsokos: Nuclear Reactions/ Core • 7.2.1 Describe the phenomenon of natural radioactive decay. (The inclusion of the antineutrino in β − decay is required.) • 7.2.2 Describe the properties of alpha (α)and beta (β) particles and gamma (γ) radiation. • 7.2.3 Describe the ionizing properties of alpha (α) and beta (β) particles and gamma (γ) radiation. • 7.2.4 Outline the biological effects of ionizing radiation. (Students should be familiar with the direct and indirect effects of radiation on structures within cells. A simple account of short‑term and long‑term effects of radiation on the body is required.) • 7.2.5 Explain why some nuclei are stable while others are unstable. (An explanation in terms of relative numbers of protons and neutrons and the forces involved is all that is required.) Half-life • 7.2.6 State that radioactive decay is a random and spontaneous process and that the rate of decay decreases exponentially with time. (Exponential decay need not be treated analytically. It is sufficient to know that any quantity that reduces to half its initial value in a constant time decays exponentially. The nature of the decay is independent of the initial amount.) • 7.2.7 Define the term radioactive half‑life. • 7.2.8 Determine the half-life of a nuclide from a decay curve. • 7.2.9 Solve radioactive decay problems involving integral numbers of half-lives. 7.3 Nuclear reactions, fission and fusion Nuclear reactions - Tsokos: Matter & Energy • 7.3.1 Describe and give an example of an artificial (induced) transmutation. • 7.3.2 Construct and complete nuclear equations. • 7.3.3 Define the term unified atomic mass unit. (Students must be familiar with the units MeV c-2 and GeV c-2 for mass.) • 7.3.4 Apply the Einstein mass–energy equivalence relationship. • 7.3.5 Define the concepts of mass defect, binding energy and binding energy per nucleon. • 7.3.6 Draw and annotate a graph showing the variation with nucleon number of the binding energy per nucleon. (Students should be familiar with binding energies plotted as positive quantities.) • 7.3.7 Solve problems involving mass defect and binding energy. Fission and fusion • 7.3.8 Describe the processes of nuclear fission and nuclear fusion. • 7.3.9 Apply the graph in 7.3.6 to account for the energy release in the processes of fission and fusion. • 7.3.10 State that nuclear fusion is the main source of the Sun’s energy. • 7.3.11 Solve problems involving fission and fusion reactions. 13.1 Quantum physics The quantum nature of radiation - Tsokos: Quantum & Uncertainty Kirk Review: Quantum & Nuke II Course Companion Ch 17 • 13.1.1 Describe the photoelectric effect. Under certain conditions, when light is shone onto a metal surface (such as zinc) electrons are emitted from the surface. This is the p.e effect. sanchit • Below a certain threshold frequency, no photo-electrons are emitted. sanchit • Above the threshold frequency, the maximum kinetic energy depends on the frequency of incident light.sanchit • The number of electrons emitted is directly related to intensity, the more intensity means the more number of electrons there are. sanchit • There is no noticeable delay between the arrival of the light and the emission of electrons. (Kevin - Kirk) • 13.1.2 Describe the concept of the photon, and use it to explain the photoelectric effect. (Students should be able to explain why the wave model of light is unable to account for the photoelectric effect, and be able to describe and explain the Einstein model.) • Einstein introduced a new idea of thinking of light as being made up of particles, called photons. The energy photons carried by a photon is equal to E=hf. The wave model of light does not explain the photoelectric effect, since according to the wave model radiation of any frequency would eventually bring enough energy to the metal plate to free some electrons. This, however, is not true. The absorption of light by electrons is not continuous, but discrete. There is no continuous build up of energy, but energy is received in packets (photons). (Tom - Kirk Study Guide) • It is not enough to explain the photoelectric effect based on the classical theories: the kinetic energy to emit electrons depends on the intensity of light. Einstein extended the plank’s concept of quantization and integrated it to electromagnetic waves by introducing photons (stream of particles). Each photon carries energy (E=hf) packets of light that are discrete. (Sweta - Rinehart • 13.1.3 Describe and explain an experiment to test the Einstein model. (Millikan’s experiment involving the application of a stopping potential would be suitable.) • The distance of closest approach of a particle to the nucleus can be found by using the law of conservation of energy. The initial energy of the particle is defined by Ek =12mv2 and then converted into potential energy as the charge approaches the nucleus (W = qαV). When this is equated with the kinetic energy, all the terms are known except the distance, which can therefore be found. (Kevin - WikiBooks) • Electrons at the surface need a certain minimum energy in order to escape from the surface. This minimum energy is called work function of the metal and has the symbol ϕ (Kevin - Kirk) • The Stopping Potential Experiment is what tests the electron model, the apparatus consists of a vacuum with opposing metal plates ( cathode and anode) where UV shines on the cathode and the photoelectric effect releases electrons in packets (discrete chunks, as opposed to continuous) called photoelectrons. They are accelerated across to the anode, through vacuum due to a potential difference created. This is going against the current, and the amount of voltage is steadily • The energy carried by a photon is given by □ E = hf; E- energy in joules, h- Planck’s constant: 6.626068 × 10-34 m2 kg/s (Yun Hwan- Kirk) • The UV light energy arrives in lots of photons (prakash, kirk) • The energy in each packet is fixed by the frequency of UV light that is being used, whereas the number of packets arriving per second is fixed by the intensity of the source. (prakash, kirk) • Dirrferent electrons absorb different photons. If the energy of the photons. If the enerdy of the photon is large enough, it gives the electron enough energy to leave the surface of the metal. (prakash, kirk) • Extra energy would be retained by the electron as kinetic energy. (prakash, kirk) • The threshold frequency: incoming energy of photons = energy needed to leave the surface + kinetic energy; Thus a graph of frequency against stopping potential should be a straight line of gradient e/h.(prakash, kirk) • 13.1.4 Solve problems involving the photoelectric effect. • Example: What is the maximum velocity of electrons emitted from a zinc surface (ϕ=4.2eV) when illuminated by EM radiation of wavelength 200nm? (Kevin - Kirk) The wave nature of matter Mech Universe Video - Particles & Waves • 13.1.5 Describe the de Broglie hypothesis and the concept of matter waves. (Students should also be aware of wave–particle duality (the dual nature of both radiation and matter).) The de Brogie hypothesis is that all moving particles have a “matter wave” associated with them. In other words, all moving particles exhibit a wave like nature. Both particles and radiation have a dual wave-particle nature. The de Broglie wavelength of a particle can be calculated using the following equation: =h/p=h/mv. (Tom - Kirk Study Guide) This matter wave can be thought of as a probability function associated with the moving particle. The (amplitude)^2 of the wave at any given point is a measure of the probability of finding the particle at that point. (Prakash, kirk) • 13.1.6 Outline an experiment to verify the de Broglie hypothesis. (A brief outline of the Davisson–Germer experiment will suffice.) • Davisson-Germer experiment: a beam of electron strikes a target nickel crystal. The electrons are scattered from the surface. The intensity of these scattered electrons depends on the speed of the electron and the angle. A maximum scattered intensity was recorded at an angle that quantitatively agrees with the constructive interference condition from adjacent atoms on the surface. Interference of these electrons shows that they have a wave-like nature. (Tom - Kirk Study Guide) • Electron diffraction experiment □ electron wave travels through a gap of same order as its wavelength ☆ crystals (such as powdered graphite) provides such spacing □ when a beam of electrons goes through the powdered graphite, the electrons will be diffracted according to the wavelength □ the diffraction is in the form of circles that correspond to the angles where constructive interference takes place(Erfan, Kirk) • 13.1.7 Solve problems involving matter waves. (For example, students should be able to calculate the wavelength of electrons after acceleration through a given potential difference.) Atomic spectra and atomic energy states Mech Universe Video - The Atom • 13.1.8 Outline a laboratory procedure for producing and observing atomic spectra. (Students should be able to outline procedures for both emission and absorption spectra. Details of the spectrometer are not required.) • Emission spectra □ Particular frequencies of light are produced when an element is hot enough to radiate light □ This light needs to be split into its component frequencies using spectrometer (either a prism to defract the light or a diffraction grating) this creates a line spectrum □ different frequencies are diffracted at different angles □ the angle is used to calculate wavelengths and frequencies □ (Erfan, New IB Course Companion) • Absorption Spectra • 13.1.9 Explain how atomic spectra provide evidence for the quantization of energy in atoms. (An explanation in terms of energy differences between allowed electron energy states is sufficient.) • When an electron moves between energy levels it must emit or absorb energy. The energy emitted or absorbed corresponds to the difference between the two allowed energy levels. The energy is emitted or absorbed as photons. The lines in the atomic spectra correspond with the energy levels. (prakash, kirk) • 13.1.10 Calculate wavelengths of spectral lines from energy level differences and vice versa. E=hf=hc/ (Tom) • 13.1.11 Explain the origin of atomic energy levels in terms of the “electron in a box” model. (The model assumes that, if an electron is confined to move in one dimension by a box, the de Broglie waves associated with the electron will be standing waves of wavelength 2L/n where L is the length of the box and n is a positive integer. Students should be able to show that the kinetic energy EK of the electron in the box is (n2h2)/(8meL2).) • The wavelike nature of electrons is hard to imagine in three dimensions, to simplify the situation we consider only one dimension in the electron in a box model • Different energy levels are predicted for the electron which correspond to the different possible standing waves • Ek=n2h28meL2 (Erfan, Kirk) •13.1.12 Outline the Schrodinger model of the hydrogen atom. (The model assumes that electrons in the atom may be described by wave-functions. The electron has an undefined position, but the square of the amplitude of the wave-function gives the probability of finding the electron at a particular point.) • Copenhagen interpretation is a way to give a physical meaning to the mathematics of wave mechanics. (Wamiq, Kirk) □ The description of particles is in terms of a wave function □ At any instant of time, the wave function has different values at different points in space. □ The mathematics of how this wave function develops is like the mathematics of a travelling wave. □ The probability of finding the particle at any point in space within the atom is square of the amplitude of the wave function at that point. □ Electron Wave Functions don’t have a fixed wavelength. As an electron move aways from the nucleus, it must lose kinectic energy because they have opposite charges. • The Schrödinger theory assumes as a basic principle that there is a wave associated to the electron and that wave is called the wavefunction. Schrödinger suggested that (x,t)2can be used to find the probability that an electron will be found near position(x,t). (Wamiq, Tokos) • 13.1.13 Outline the Heisenberg uncertainty principle with regard to position–momentum and time–energy. (Students should be aware that the conjugate quantities, position–momentum and time–energy, cannot be known precisely at the same time. They should know of the link between the uncertainty principle and the de Broglie hypothesis. For example, students should know that, if a particle has a uniquely defined de Broglie wavelength, then its momentum is known precisely but all knowledge of its position is lost.) The Heisenberg uncertainty principle identifies a fundamental limit to the possibly accuracy of any physical measurement. Conjugate properties, position-momentum and energy-time, cannot be known precisely at the same time. where x is the uncertainty in the measurement of position where p is the uncertainty in the measurement of momentum where E is the uncertainty in the measurement of energy where t is the uncertainty in the measurement of time (Tom - Kirk Study Guide) 13.2 Nuclear physics - Tsokos: Nuclear Physics • 13.2.1 Explain how the radii of nuclei may be estimated from charged particle scattering experiments. (Use of energy conservation for determining closest-approach distances for Coulomb scattering experiments is sufficient.) • 13.2.2 Describe how the masses of nuclei may be determined using a Bainbridge mass spectrometer. (Students should be able to draw a schematic diagram of the Bainbridge mass spectrometer, but the experimental details are not required. Students should appreciate that nuclear mass values provide evidence for the existence of isotopes.) • 13.2.3 Describe one piece of evidence for the existence of nuclear energy levels. (For example, alpha (α) particles produced by the decay of a nucleus have discrete energies; gamma‑ray (γ-ray) spectra are discrete. Students should appreciate that the nucleus, like the atom, is a quantum system and, as such, has discrete energy levels.) Radioactive decay • 13.2.4 Describe β+ decay, including the existence of the neutrino. (Students should know that β energy spectra are continuous, and that the neutrino was postulated to account for these spectra.) • 13.2.5 State the radioactive decay law as an exponential function and define the decay constant. (Students should know that the decay constant is defined as the probability of decay of a nucleus per unit time.) • 13.2.6 Derive the relationship between decay constant and half-life. • 13.2.7 Outline methods for measuring the half-life of an isotope. (Students should know the principles of measurement for both long and short half‑lives.) • 13.2.8 Solve problems involving radioactive half-life.
{"url":"http://hendricksphysics.wikispaces.com/%E2%80%A2+Atomic+%26+nuclear+physics+and+Quantum+physics+%26+nuclear+physics?responseToken=f3b7e5a772c79d33e69adf346876413a","timestamp":"2014-04-23T11:46:54Z","content_type":null,"content_length":"129750","record_id":"<urn:uuid:1a9c5c77-11d0-42a0-ab1a-fd479148ec96>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Proceedings of Symposia in Pure Mathematics 1983; 905 pp; softcover Volume: 39 Reprint/Revision History: reprinted with corrections to Part 1, 1984 ISBN-10: 0-8218-1442-7 ISBN-13: 978-0-8218-1442-0 List Price: US$114 Member Price: US$91.20 Order Code: PSPUM/39 On April 7-10, 1980, the American Mathematical Society sponsored a Symposium on the Mathematical Heritage of Henri Poincaré, held at Indiana University, Bloomington, Indiana. This volume presents the written versions of all but three of the invited talks presented at this Symposium (those by W. Browder, A. Jaffe, and J. Mather were not written up for publication). In addition, it contains two papers by invited speakers who were not able to attend, S. S. Chern and L. Nirenberg. If one traces the influence of Poincaré through the major mathematical figures of the early and midtwentieth century, it is through American mathematicians as well as French that this influence flows, through G. D. Birkhoff, Solomon Lefschetz, and Marston Morse. This continuing tradition represents one of the major strands of American as well as world mathematics, and it is as a testimony to this tradition as an opening to the future creativity of mathematics that this volume is dedicated. Part 1. Section 1, Geometry • S.-S. Chern -- Web geometry • J.-I. Igusa -- Problems on abelian functions at the time of Pincaré and some at present • J. Milnor -- Hyperbolic geometry: The first 150 years • N. Mok and S.-T. Yau -- Completeness of the Kähler-Einstein metric on bounded domains and the characterization of domains of holomorphy by curvature conditions • A. Weinstein -- Symplectic geometry Section 2, Topology • J. F. Adams -- Graeme Segal's Burnside ring conjecture • W. P. Thurston -- Three dimensional manifolds, Kleinian groups and hyperbolic geometry Section 3, Riemann surfaces, discontinuous groups and Lie groups • L. Bers -- Finite dimesnional Teichmüller spaces and generalizations • W. Schmid -- Poincaré and Lie groups • D. Sullican -- Discrete conformal groups and measurable dynamics Section 4, Several complex variables • M. Beals, C. Fefferman, and R. Grossman -- Strictly pseudoconvex domains in \(\mathbf C^n\) • P. A. Griffiths -- Poincaré and algebraic geometry • R. Penrose -- Physical space-time and nonrealizable CR-structures • R. O. Wells, Jr. -- The Cauchy-Riemann equations and differential geometry Part 2. Section 5, Topological methods in nonlinear problems • R. Bott -- Lectures on Morse theory, old and new • H. Brezis -- Periodic solutions of nonlinear vibrating strings and duality principles • F. E. Browder -- Fixed point theory and nonlinear problems • L. Nirenberg -- Variational and topological methods in nonlinear problems Section 6, Mechanics and dynamical systems • J. Leray -- The meaning of Maslov's asymptotic method: The need of Planck's constant in mathematics • D. Ruelle -- Differentiable dynamical systems and the problem of turbulence • S. Smale -- The fundamental theorem of algebra and complexity theory Section 7, Ergodic theory and recurrence • H. Furstenberg -- Poincaré recurrence and number theory • H. Furstenberg, Y. Katznelson, and D. Ornstein -- The ergodic theoretical proof of Szemerédi's theorem Section 8, Historical material • P. S. Aleksandrov -- Poincaré and topology • H. Poincaré -- Résumé analytique • J. Hadamard -- L'oeuvre mathématique de Poincaré • Lettre de M. Pierre Boutroux à M. Mittag-Leffler • Bibliography of Henri Poincaré • Books and articles about Poincaré
{"url":"http://ams.org/bookstore?fn=20&arg1=pspumseries&ikey=PSPUM-39","timestamp":"2014-04-19T20:21:20Z","content_type":null,"content_length":"17828","record_id":"<urn:uuid:f34c3438-7179-435a-9123-a49c2b430017>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Omni dtrectional drive system We are looking at using a 4 wheel omni drive system on our robot. We are looking for help or a sample code that can be used to start us out on this large but interesting task. Any help or information will be greatly appriciated. Re: Omni dtrectional drive system It is really important you understand the underlying math of an omniwheeled system. It's not terribly complex but it really makes things easier. I have a somewhat working omniwheeled robot that I am working on at the moment. It uses 3 wheels instead of the 4 you want. This is the code I have: #define MAXMOTORSPEED 50 const float sqrt3div2 = 0.866025; int round(float f) return (f>0)?(int)(f+0.5):(int)(f - 0.5); void getMotorSpeeds(int &motorSpeedA, int &motorSpeedB, int &motorSpeedC, int angle, int Vb) { float Vw1, Vw2, Vw3, norm_factor; Vw1 = Vb*cosDegrees(angle); Vw2 = Vb*(-0.5*cosDegrees(angle) + sqrt3div2*sinDegrees(angle)); Vw3 = Vb*(-0.5*cosDegrees(angle) - sqrt3div2*sinDegrees(angle)); norm_factor = 1.0; if (Vw1 > MAXMOTORSPEED) { norm_factor = MAXMOTORSPEED / Vw1; } else if (Vw2 > MAXMOTORSPEED) { norm_factor = MAXMOTORSPEED / Vw2; } else if (Vw3 > MAXMOTORSPEED) { norm_factor = MAXMOTORSPEED / Vw3; motorSpeedA = round(Vw1 * norm_factor); motorSpeedB = round(Vw2 * norm_factor); motorSpeedC = round(Vw3 * norm_factor); For any given wheel, the speed it needs to turn at is : Vw = Vb (cos Aw * cos At + sin Aw * sin At) Vw = speed of the wheel Vb = speed of the robot's body Aw = the angle of the wheel on the body At = the angle at which the body is travelling So for a robot with 3 wheels (at 0, 120 and 240 degrees around the body), like my example, you get the following formulae: Vw1 = Vb (cos 0 * cos At + sin 0 * sin At) Vw2 = Vb (cos 120 * cos At + sin 120 * sin At) Vw3 = Vb (cos 240 * cos At + sin 240 * sin At) Sometimes a term will disappear from the calculations because it's 0, sometimes it's simplified because it's 1. You need to figure out what angles your wheels are at and create the formulae for your Vw1, Vw2, Vw3 and Vw4. I am not going to give you the answers, you have all the information here for a robot with 3 and a generic formula. You need to figure out why I have the following code: if (Vw1 > MAXMOTORSPEED) { norm_factor = MAXMOTORSPEED / Vw1; } else if (Vw2 > MAXMOTORSPEED) { norm_factor = MAXMOTORSPEED / Vw2; } else if (Vw3 > MAXMOTORSPEED) { norm_factor = MAXMOTORSPEED / Vw3; Good luck, if you have more questions, don't hesitate to ask, but don't ask me to write out the formulae for the 4 wheels, you should be able to do that yourself. Professional Conduit of Reasonableness | (Title bestowed upon on the 8th day of November, 2013) | My Blog: I'd Rather Be Building Robots | ROBOTC 3rd Party Driver Suite: [ Project Page I have figured out all the codes and the 4 wheel coding and alot of the formulas drop out with the use of the wheel angles. So it is simple. I am confused where the Vb and the Angle the body is traveling (At) comes from. Are you using a gyro for input. I have developed an array to draw the sine and cosine vales from vice using the nxt to calculate them each time. This should speed up the process. I have also written a code for using joysticks to directly control the motors will this work in direct control modes. gwinter wrote: I have figured out all the codes and the 4 wheel coding and alot of the formulas drop out with the use of the wheel angles. So it is simple. I am confused where the Vb and the Angle the body is traveling (At) comes from. Vb is the velocity of the body. A four wheeled omniwheel robot is much more efficient when it comes to translating the motor's movement to that of the body. At is the angle at which the body is travelling. You should check out for more info on the physics of this. Also be sure to read up on this http://technicbricks.blogspot.com/2008/ ... ns_29.html Are you using a gyro for input. Nope, math is all I am using for now. I am planning to make it remote controllable (as well as autonomous) using the new HiTechnic IRReceiver sensor Professional Conduit of Reasonableness | (Title bestowed upon on the 8th day of November, 2013) | My Blog: I'd Rather Be Building Robots | ROBOTC 3rd Party Driver Suite: [ Project Page
{"url":"http://www.robotc.net/forums/viewtopic.php?f=52&t=1858","timestamp":"2014-04-20T23:57:14Z","content_type":null,"content_length":"35602","record_id":"<urn:uuid:cf18fce3-75e6-43e5-9626-ff9693a388b7>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Measure Length How to Measure Length and Distance Length or distance is a measurement of how far two points are away from each other. Typically we use length to describe the amount of space between one end of an object and an opposite end of that object. Distance is typically used to measure the space between two points on an object. In order to measure the length of an object we need some type of measuring device. Really anything can be used to measure the length of something. For example we can measure the length of a room using out bodies. Let's say we are in a room and in our room we have a friend named Connor. We want to find the length or distance from one side of the room to the other. Connor can lie down with his feet against one wall. Now we can put a block at his head. Connor can then get up and put his feet against that block and lie down again. We can then put another block at his head. We can repeat this over and over until we get to the opposite wall. In the picture above Connor had to lay down 4 times before he reached the opposite wall. We can say that the length of this room is 4 Connors long. While this is fun to do, it's not really super helpful because not everyone knows Connor. If we were to call up someone on the other side of the country who we didn't know and tell them the room is 4 Connors long, they wouldn't know what we are talking about. They have never met Connor so they have no idea how long 1 Connor is let alone 4! Typically we use standard lengths to measure. Units of length called feet, meters, inches, kilometers and miles are all well known by most of the world. Since these units are so well known people have made rulers which indicate these different lengths. We can use these rulers to find out how long something is. Let's look at a pencil for example. If we want to find the length of this pencil in inches, we can use a ruler. Many ruler's start with the number 0 and have a number marked off every inch. To find the length of the pencil we line up the end of the ruler to one end of the pencil and then read the number that the other end of the pencil is lined up with. In this case we can see that our pencil is 7 inches long. It is important to line up one end with the end of the ruler (or the 0 inches spot) so we get the correct measurement. It is also important that when measuring something you are looking directly at the object you are measuring and the ruler. If you are looking from the side, your eyes can play tricks on you and give you a measurement that is too large or too small.
{"url":"http://www.mathmix.com/content/Subjects/Measurement/Length/measure_length","timestamp":"2014-04-21T01:59:24Z","content_type":null,"content_length":"18001","record_id":"<urn:uuid:a43ba8e4-94ff-4b06-be87-bd2a95d7662b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
West Bridgewater Geometry Tutor Find a West Bridgewater Geometry Tutor I am a long-time classroom teacher--licensed in Massachusetts and Rhode Island--but prefer the give-and-take and personalization possible in small-group and individual instruction. Although my Bachelors and Masters degrees are in a biological science, I have considerable experience teaching physica... 15 Subjects: including geometry, English, chemistry, biology ...I primarily tutored probability and statistics, single and multi variable calculus, and college algebra. I have also done a considerable amount of private tutoring in a variety of different areas of math. I am qualified to tutor nearly all areas of high school and college math and can also ass... 14 Subjects: including geometry, calculus, GRE, algebra 1 ...I like to check my students' notebooks and give them guidelines for note-taking and studying. Here too I stress active learning, and my goal is usually to get my students to study by challenging themselves with problems rather than simply reading over their text, notes, and previous work. If you are in need of assistance for a student struggling in Physics or Math, I am the man for you. 9 Subjects: including geometry, calculus, physics, algebra 1 ...As a licensed, high-school, Spanish teacher, I know the difficulties that most English-speaking learners have as well as the tricks that make learning certain aspects of the language less difficult. If you need a tutor in Spanish, I can definitely help you, but also if you need translation, simu... 21 Subjects: including geometry, Spanish, English, ESL/ESOL ...I obtained my Bachelor's degree in Biopsychology and have extensive coursework and research experience in this subject. I emphasize the importance of study skills to retain course material. I also utilize my own experiences to challenge students to think like psychologists and bring those skills to tackle future endeavors. 10 Subjects: including geometry, chemistry, biology, algebra 1
{"url":"http://www.purplemath.com/west_bridgewater_ma_geometry_tutors.php","timestamp":"2014-04-17T13:00:44Z","content_type":null,"content_length":"24407","record_id":"<urn:uuid:69ba833a-9bfb-441e-ac34-c8dc72f160e7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
+ add to my flashcards The T-Value is the score obtained when you perform a T-Test. It represents the difference between the mean or average scores of two groups, while taking into account any variation in scores. For example, if you hypothesize that pet owners are more sociable than non-pet owners, you would find a way to measure sociability so that you would get a sociability score for all participants in each group. Let's say that the mean score of pet owners is higher than that of non-pet owners. Your question would be: is the sociability score of pet owners significantly higher than the other group? The t-value measures if the difference in scores of the two groups is big enough for you to say that pet owners are indeed more sociable than non-pet owners, or if the result was something that could have just happened by chance. How helpful is this page: Thank you for taking the time to help offer your feedback. How could we make this page more useful?
{"url":"http://www.alleydog.com/glossary/definition.php?term=T-Value","timestamp":"2014-04-19T17:01:11Z","content_type":null,"content_length":"26174","record_id":"<urn:uuid:d554328f-6d41-4e03-9f2a-44fc8a8e23e1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
Can't find delta in delta-epsilon proof of strange function October 27th 2009, 07:06 AM #1 Junior Member May 2007 Can't find delta in delta-epsilon proof of strange function f(x) = 0 when x is irrational, 1/n when x = m/n, reduced the domain of f is all reals except 0. I need to show that the limit of f(x) as x approaches c equals 0 for all c. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/calculus/110809-can-t-find-delta-delta-epsilon-proof-strange-function.html","timestamp":"2014-04-19T07:00:43Z","content_type":null,"content_length":"29222","record_id":"<urn:uuid:562173f2-4b86-4b9b-9463-04a95faf3534>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
Distribution of recombination distances between trees - poster at SMBE2010 Distribution of recombination distances between trees – poster at SMBE2010 I just came back from SMBE2010, where I presented a poster about our recombination detection software and had the chance to see awesome research other people are doing. The poster can be downloaded here (1.MB in pdf format) and I’m distributing it under the Creative Commons License. Given the great feedback I got from other participants of the meeting, I thought it might be a good opportunity to comment about the work, guided by the poster figures. Please refer to the poster to follow the figures and the explanation, I’ll try to reproduce here my presentation taking into account the commentaries I received. The motivation for the development of the model was to be able to say, for a given a mosaic structure, if the breakpoints can be explained by one recombination event or several. The recombination mosaic structure is usually inferred assuming the parental sequences (those not recombining) are known beforehand – in the figure they are the subtypes B and F – and recombination is then inferred when there is a change in the parental closest to the query sequence. Another problem is that it is common to analyze each one of the query sequences independently against the parentals – if all one wants is the “coloring”, then this might be enough. For the above figure I analyzed each query sequence against one reference sequence from the subtypes B, F and C (thus comprising a quartet for each analysis). And we know that these mosaics don’t tell the whole story. If we know the topologies for both segments separated by the recombination breakpoint, then we can say, at least in theory, the minimum number of recombination events necessary to explain the difference (the real number can be much larger since we only detect those that lead to a change in the topology…). This minimum number is the Subtree Prune-and-Regraft distance, and is related to the problem of detection of horizontal gene transfers. In our case we devised an approximation to this distance based on the disagreement between all pairs of bipartitions belonging to the topologies: at each iteration we remove the smallest number of “leaves” such that the topologies will become more similar, and our approximate “uSPR distance” will be how many times we iterate this removal. It is just an approximation, but it is better (closer to the SPR distance) than the Robinson-Foulds or the (complementary) Maximum Agreement Subtree distances, which compete in speed with our algorithm. For larger topologies it apparently works better than for smaller, but this is an artifact of the simulation – one realized SPR “neutralizes” previous ones, and it happens more often for small trees. Our Bayesian model works with a partitioning of the alignment, where recombination can only occur between segments and never within. This doesn’t pose a problem in practice since it will “shift” the recombinations to the border – the idea is that several neighboring breakpoints are equivalent to one breakpoint with a larger distance. This segments could be composed of one site each, but for computational reasons we usually set it up at five or ten base pairs. The drawback is the loss of hability to detect rate heterogeneity within the segment. Each segment will have it own toplology (represented by Tx, Ty and Tz in the figure), which will coincide for many neighboring segments since we have the distance between them as a latent variable penalizing against too many breakpoints: This is a truncated Poisson distribution, modified so that it can handle underdispersion – the parameter w will make the Poisson sharper around the mean – and each potential breakpoint has its own lambda and w. The posterior distribution will have K terms for the segments (topology likelihood and evolutionary model priors) and K-1 terms for the potential breakpoints (distances between segments), as well as the hyper-priors. I use the term “potential breakpoint” because if two consecutive segments happen to have the same topology (distance equals zero) then we don’t have an actual breakpoint. Again, considering only the recombinations that change the topology. This posterior distribution is calculated through MCMC sampling in the program called biomc2. To test the algorithm, we did simulations with eight and twelve taxa datasets, simulating one (for the eight taxa datasets) or two recombinations per breakpoint. We present the output of the program biomc2.summarise, which interprets the posterior samples for one replicate: based on the posterior distribution of distances for each potential breakpoint, we neglect the actual distances and focus on whether it is larger than or equal to zero (second figure of the panel). Based on this multimodal distribution of breakpoints we infer the regions where no recombination was detected (that we call “cold spots”), credible intervals around each mode (red bars on top) or based on all values (red dots at bottom, together with the cold spots). We also looked at the average distance for each potential breakpoint per replicate dataset, and show that indeed the software can correctly infer the location and amount of recombination for most replicates. It is worth remembering that we were generous in our simulations, in that there is still phylogenetic signal preserved on alignments with many mutations and a few recombinations. If recombination is much more frequent, then any two sites might represent distinct evolutionary histories and our method will fail. We then analyzed HIV genomic sequences with a very similar mosaic structure, as inferred by cBrother (an implementation of DualBrothers). Here it is important to say that we used cBrother only to estimate the mosaic structure of the recombinants, doing an independent analysis for each sequence against three reference parental sequences. Therefore the figure is not a direct comparison of the programs, contrary to what its unfortunate caption might induce us to think. The distinction is between analyzing all sequences at once or independently, in quartets of sequences. If we superpose the panels it might become clearer to compare them: The curve in blue shows the positions where there is a change in closest parental for the query sequence, if each query sequence is analyzed neglecting the others. In red we have our algorithm estimating recombinations between all eleven sequences (eight query and three parental sequences). We can see that: 1. all breakpoints detected by the independent analysis were also detected by our method; 2. many recombinations were detected only when all sequences were analyzed at once, indicating that they do not involve the parental sequences – de novo recombination; 3. if we look at the underlying topologies estimated by our method (figure S2 of the PLoS ONE paper), we see that those also detected by the independent analysis in fact involve the parentals while the others don’t; 4. biomc2 not only infers the location of recombination, but also its “strength” – given by the distance between topologies; Finally, we show two further developments of the software: a point estimate for the recombination mosaic, and the relevance of the chosen prior over distances. The point estimate came from the need of a more easily interpretable summary of the distribution of breakpoints: instead of looking at the whole multimodal distribution, we may want to pay attention only to the peaks, or some other similar measure. This is a common problem in bioinformatics: to represent a collection of trees by a single one or to find a protein structure that represents best an ensemble of structures. In our case we have a collection of recombination mosaics (one per sample of the posterior distribution), and we elect the one with the smallest distance from all other mosaics – we had to devise a distance for this as well… To show the importance of the prior distribution of distances, we compared it with simplified versions, like setting the penalty parameter w fixed at a low or high value. The overall behavior for all scenarios is lower resolution around breakpoints, and for weaker penalties we reconstruct the topologies better than for stronger ones, at the cost of inferring spurious breakpoints more often. We also compared the original model with a simplification where the topological distance is neglected and the prior considers only if the topologies are equal or not. This is similar to what cBrother and other programs do, and by looking at the top panel we observe that the results were also equivalent (blue lines labeled “cBrother” and “m=0″). In the same panel we plot the performance using our original (“unrestricted”) model as a gray area. I also submitted the poster to the F1000 Poster Bank, let’s see how it works… de Oliveira Martins, L., Leal, É., & Kishino, H. (2008). Phylogenetic Detection of Recombination with a Bayesian Prior on the Distance between Trees PLoS ONE, 3 (7) DOI: 10.1371/journal.pone.0002651 Oliveira Martins, L., & Kishino, H. (2009). Distribution of distances between topologies and its effect on detection of phylogenetic recombination Annals of the Institute of Statistical Mathematics, 62 (1), 145-159 DOI: 10.1007/s10463-009-0259-8 One Response to Distribution of recombination distances between trees – poster at SMBE2010 This entry was posted in New Publications, Research Blogging and tagged Bayesian, data, HIV-1, MCMC, recombination, tree. Bookmark the permalink.
{"url":"http://biomcmc.wordpress.com/2010/07/12/distribution-of-recombination-distances-between-trees-poster-at-smbe2010/","timestamp":"2014-04-18T03:22:49Z","content_type":null,"content_length":"79908","record_id":"<urn:uuid:34b7335c-6aad-4be3-8f38-6fdcb9562df9>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum - Ask Dr. Math Archives: High School Geometry - Triangles/Polygons This page: Dr. Math See also the Dr. Math FAQ: geometric formulas naming polygons and polyhedra Pythagorean theorem Internet Library: About Math basic algebra linear algebra linear equations Complex Numbers Discrete Math Fibonacci Sequence/ Golden Ratio conic sections/ coordinate plane practical geometry Negative Numbers Number Theory Square/Cube Roots Browse High School Triangles and Other Polygons Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: Area of an irregular shape. Classifying quadrilaterals. Heron's formula. Polygon diagonals. Pythagorean theorem proofs. Triangle congruence. Given median lengths 5, 6, and 7, construct a triangle. Given two angles, A and B, and the semiperimeter, construct the triangle. If the angles of a triangle are equal, does it necessarily mean that the sides are also equal? The lengths of the sides of a non-isosceles triangle, in size order, are 5, x, and 15. What are all possible integral values of x? Prove that in any triangle, the greatest side is opposite the greatest angle. How do you figure out the vertices of a triangle algebraically by using its three midpoints? How many triangles have sides whose lengths total 15 units? Maybe if two sides of a triangle are not congruent, then the angles opposite them are not congruent, and the larger angle is opposite the longer side. I'm not sure how to say this in a proof. How do I prove that a line which cuts two sides of a triangle proportionately is parallel to the third side? Let CD be an altitude of triangle ABC, and assume that angle C = 90 degrees. Let r1 and r2 be the inradii of triangle CAD and triangle CBD, respectively, and show that r+r1+r2=CD, where r is the inradius of triangle ABC. Mapping out a general method for proceeding with proofs. Let D, E lie internally on side BC of triangle ABC and consider the following conditions: 1) angle BAD = angle DAE = angle EAC 2) |BD| = |DE| = |EC| Prove that, whatever the shape of triangle ABC, 1) and 2) cannot both be true, that is, if either is true, then the other is false. A cone has a circular base radius 1, and vertex of height 3 directly above the center of the circle. A cube has four vertices in the base and four on the sloping sides. What is the length of a side of the cube? Can you draw a triangle in which the sum of any two angles - no matter which two you pick - is always less than 120 degrees? A regular 18-sided polygon is inscribed in a circle and triangles are formed by joining any three of the eighteen vertices. How many obtuse triangles are there? Proving that the six triangles constructed from the three medians of any triangle have the same area. If P is a regular n-gon, what is the number of triangles whose vertices are the vertices of P but whose sides are NOT the sides of P? How can you make a triangle with three right angles? Find the length of a fence that runs from the right angle to the hypotenuse and separates the garden into two parts of equal perimeter. In right triangle ABC, with C as the right angle... what is the length of AB (the hypotenuse)? Prove that it is impossible to have a triangle in which the trisectors of an angle also trisect the opposite side. I want to make an octagon by cutting the corners off of a square. Where do I make the cuts? How can you get 20 quadrilaterals from 9 dots? Can you help me understand a proof about perpendicular lines and congruent triangles in a kite? Write a two-column proof and give numbered statements with reasons.... Two circles intersect each other at B and C. Their common tangent touches them at P and Q. A circle is drawn through B and C cutting PQ at L and M. Prove that {PQ:LM} is harmonic. My 5th grade math teacher said that we had to draw a polygon using two straight lines. Is this possible? A boat sails 10km from a harbor H on a bearing of S30 degree E. It then sails 15 km on a bearing of N20 degree E. How far is the boat from H? What is the bearing from H? True or false: if the perimeter of a rectangle increases, the rectangle's area always also increases. Is it true that if you know the side order, side lengths, and area of a polygon, as well as whether each of its angles is obtuse or acute, you have uniquely determined it? The midpoints of the sides of a triangle have coordinates G(3,1), H (-1,2) and J (1,-3). Determine the coordinates of the vertices of the triangle. I need to construct a triangle to fit inside a triangle. ABCDEFGH is a regular octagon and AB = p and BC = q. Express AH in terms of p and q... I am looking for a Venn diagram that will accurately display the relation among trapezoids, parallelograms, kites, rhombi, rectangles, and squares. I am looking for a picture of a myriagon. A triangle, ABC, is obtuse angled at C. The bisectors of the exterior angles at A and B meet BC and AC produced at D and E respectively. If AB=AD=BE, then what does angle ABC equal? Can you explain the statement: "In an N-gon, n-3 diagonals can be drawn from one vertex"? Is the length of a rectangle the longest side, whether vertical or horizontal? Proof of Menelaus' Theorem, and discussion of its converse and Desargues' Theorem. Two circles intersect such that their centers and their points of intersection form a square with each side equal to 3. What is the total area of the sections of the square that are not shared by both circles? Page: [<prev] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 [next>]
{"url":"http://mathforum.org/library/drmath/sets/high_triangles.html?start_at=561&s_..&s_keyid=39098293&f_keyid=39098294&num_to_see=40","timestamp":"2014-04-20T14:01:26Z","content_type":null,"content_length":"25722","record_id":"<urn:uuid:0ee71e5d-cd63-4d2e-a26e-def1af32765c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Kalman Filtering Alazard, Daniel Introduction to Kalman Filtering. (2005) . (Unpublished) (Document in English) PDF (Author's version) - Requires a PDF viewer such as GSview, Xpdf or Adobe Acrobat Reader This document is an introduction to Kalman optimal Filtering applied to linear systems. It is assumed that the reader is already aware of linear servo-loop theory, frequency-domain Filtering (continuous and discrete-time) and state-space approach to represent linear systems. Generally, Filtering consists in estimating a useful information (signal) from a measurement (of this information) perturbed by a noise. Frequency-domain Filtering assumes that a frequency-domain separation exists between the frequency response of the useful signal and the frequency response of the noise. Then, frequency-domain Filtering consists in seeking a transfer function fitting a template on its magnitude response (and too much rarely, on its phase response). Kalman optimal filtering aims to estimate the state vector of a linear system (thus, this state is the useful information) and this estimate is optimal w.r.t. an index performance: the sum of estimation error variances for all state vector components. First of all, some backgrounds on random variables and signals are required then, the assumptions, the structure and the computation Kalman Filter could be introduced. In the first chapter, we remind the reader how a random signal can be characterized from a mathematical point of view. The response of a linear system to a random signal will be investigated in an additional way to the more well-known response of a linear system to a deterministic signal (impulse, step, ramp, ... responses). In the second chapter, the assumptions, the structure, the main parameters and properties of Kalman Filter will be defined. The reader who wish to learn tuning methodology of the Kalman filtering can directly start the reading at chapter 2. But the reading of chapter 1, which is more cumbersome from a theoritical point of view, is required if one wishes to learn basic principles in random signal processing, on which is based Kalman Filtering. There are many applications of Kalman Filtering in aeronautics and aerospace engineering. As Kalman filter provides an estimate of plant states from an a priori information of the plant behaviour (model) and from real measurement, Kalman Filter will be used to estimate initial conditions (ballistics), to predict vehicle position and trajectory (navigation) and also to implement control laws based on a state feedback and a state estimator (LQG: Linear Quadratic Gaussian control). The signal processing principles on which is based Kalman Filter will be also very useful to study and perform test protocols, experimental data processing and also parametric identification, that is the experimental determination of some plant dynamic parameters. Repository Staff Only: item control page
{"url":"http://oatao.univ-toulouse.fr/377/","timestamp":"2014-04-17T04:19:54Z","content_type":null,"content_length":"22559","record_id":"<urn:uuid:372d2f4e-8cd0-432c-8942-bc3c185ed3ac>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
Norma Math Tutor Find a Norma Math Tutor ...I look forward to working with you to develop a plan to achieve academic success! After graduating with my BSN in Nursing, I took the NCLEX and passed on my first try. I am able to help students develop test taking and study strategies specific to the NCELX exam in addition to targeted content review based on the student's NCLEX strengths and weaknesses. 39 Subjects: including SAT math, ACT Math, SPSS, English ...There and since I have worked with several students with ADD and ADHD both in their math content areas and with executive skills to help them succeed in all areas of their life. I have tutored test taking for many tests, including the Praxis many times. I received a perfect score on the math section of the Praxis I, and scored in the upper 170s for reading and writing. 58 Subjects: including calculus, differential equations, biology, algebra 2 ...I am currently enrolled in Calculus 2 at Rowan University, which allows me to use Algebra every day, so I am very current in the subject. I have also volunteered to tutor children at my local Library. I am very patient and will take everything step-by-step. 4 Subjects: including linear algebra, algebra 1, algebra 2, prealgebra Your search for an experienced and knowledgeable tutor ends here.I have been coaching school teams for math league contests. and have coached school literature groups in preparation of Battle of the Books contests. I enjoy teaching math and language arts. Although my specialty lies in elementary m... 15 Subjects: including algebra 2, study skills, Hindi, algebra 1 ...I apply this same technique to history and government. Everything has a reason as to why it happened or existed. We create a mental ladder between events, laws, polices, etc. 14 Subjects: including trigonometry, linear algebra, algebra 1, algebra 2 Nearby Cities With Math Tutor Bridgeton, NJ Math Tutors Buena, NJ Math Tutors Deerfield Street Math Tutors Elmer, NJ Math Tutors Fairton Math Tutors Franklinville, NJ Math Tutors Landisville, NJ Math Tutors Malaga, NJ Math Tutors Minotola Math Tutors Monroeville, NJ Math Tutors Newfield, NJ Math Tutors Newtonville, NJ Math Tutors Pittsgrove Township, NJ Math Tutors Richland, NJ Math Tutors Shiloh, NJ Math Tutors
{"url":"http://www.purplemath.com/norma_math_tutors.php","timestamp":"2014-04-19T07:19:41Z","content_type":null,"content_length":"23627","record_id":"<urn:uuid:038d81b7-4eaa-41b3-ad42-fced7c544c72>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
The basic Amman-Beenker tiling With the details presented in the earlier posts we can easily get large parts of this tiling. I have tried to choose colors, which are not too contrasting and too hard on the eyes: We see that small patterns are repeated throughout in a rather systematic but not periodic way. Thus this tiling is “quasiperiodic”. Note particularly the stars made of eight rhombs with a common corner. They are always surrounded by eight squares and eight rhombs. Further out the patterns vary. But larger parts of the tiling repeat too – we have just to search further away. The same distances appear throughout between the stars of rhombs. They lie at the corners of larger rhombs and squares. Really, the centers of these stars are the points of the same tiling, but of larger lengths: For the large rhombs and squares we see always the same pattern of smaller squares and rhombs. The pattern inside the square is not symmetric and has four different orientations. We easily get that the edges of the large tiling are larger by a factor of (3+2*sqrt(2)) than the edges of the basic tiling. Obviously, the stars of the larger tiling are again corners of the tiles of an even larger tiling, and so on. All these tilings have the same shape and thus the Amman-Beenker tiling is self-similar at all scales in the same way as some fractals, such as the Sierpinsky triangle. Thus we can make a quasiperiodic tiling with eight-fold rotational symmetry with only squares of different sizes. This entry was posted in Tilings and tagged fractal, Geometry, Math, Quasiperiodicity, Recreation, Tessellation, Tile, Tiling. Bookmark the permalink.
{"url":"http://geometricolor.wordpress.com/2012/04/15/the-basic-amman-beenker-tiling/","timestamp":"2014-04-17T18:34:03Z","content_type":null,"content_length":"54297","record_id":"<urn:uuid:26d666db-81d4-4a56-9f26-587bcc23fb34>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
Conditional Distributions This site is under construction. Please check back every few weeks for updates Conditional Distributions The concept of conditional distribution of a random variable combines the concept of distribution of a random variable and the concept of conditional probability. If we are considering more than one variable, restricting all but one^1 of the variables to certain values will give a distribution of the remaining variables. This is called a conditional For example, if we are considering random variables X and Y and 2 is a possible value of X, then we obtain the conditional distribution of Y given X = 2. This conditional distribution is often denoted by Y|(X = 2). A conditional distribution is a probability distribution, so we can talk about its mean, variance, etc. as we could for any distribution. For example, the conditional mean of the distribution Y|(X = x) is denoted by E(Y|(X = x). 1. More generally, if we restrict just some of the variables to specific values or ranges, we obtain a joint conditional distribution of the remaining variables. For example, if we consider random variables X, Y, Z and U, then restricting Z and U to specific values z and u (respectively) gives a conditional joint distribution of X and Y given Z = z and U = u.
{"url":"http://www.ma.utexas.edu/users/mks/statmistakes/conddist.html","timestamp":"2014-04-21T09:37:18Z","content_type":null,"content_length":"3133","record_id":"<urn:uuid:73c543d7-b993-455e-b65b-cd8003418c4c>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Desoto ACT Tutor Find a Desoto ACT Tutor ...Teaching has been my passion and professional goal since I was an undergraduate student. In order to gain teaching experiences with diverse groups in mechanical engineering, physics and mathematics, I started teaching and tutoring undergraduate and graduate students with engineering majors from ... 28 Subjects: including ACT Math, calculus, physics, geometry ...As a tenured professor at Oakland University, Rochester, MI, I was an advisor for over seventy undergraduate students. I was also advisor to twelve graduate students. I believe my experience as an undergraduate advisor along with fifteen years on a university faculty qualifies me to be a college counseling tutor. 55 Subjects: including ACT Math, reading, English, calculus ...I have held tutoring jobs tutoring math and the sciences during high school, college, and grad school, so I am very experienced!I have taken, and received an A in ordinary differential equations. I have also taken many other mathematical modeling courses that use differential equations. My degr... 33 Subjects: including ACT Math, chemistry, reading, calculus ...I was a National Merit Finalist based on PSAT scores and was also an AP scholar. My ACT score was a 32 and my SAT scores for Math, Verbal, and Writing were all 700-790. I am equipped to tutor in a variety of subject areas with a concentration on Science and English/Verbal/Reading. 18 Subjects: including ACT Math, English, geometry, biology ...I have tutored as an undergraduate and graduate students as well as tutoring as a professional. I have had at least 4 years of experience in private/small group tutoring. As a mathematics instructor I have been had 4 years of experience in both high school and college classrooms. 17 Subjects: including ACT Math, calculus, geometry, statistics Nearby Cities With ACT Tutor Balch Springs, TX ACT Tutors Bedford, TX ACT Tutors Cedar Hill, TX ACT Tutors Dalworthington Gardens, TX ACT Tutors Duncanville, TX ACT Tutors Euless ACT Tutors Glenn Heights, TX ACT Tutors Grand Prairie ACT Tutors Highland Park, TX ACT Tutors Lancaster, TX ACT Tutors Mansfield, TX ACT Tutors Midlothian, TX ACT Tutors Pantego, TX ACT Tutors Red Oak, TX ACT Tutors Rowlett ACT Tutors
{"url":"http://www.purplemath.com/desoto_act_tutors.php","timestamp":"2014-04-19T09:44:07Z","content_type":null,"content_length":"23588","record_id":"<urn:uuid:b3b9be09-a004-44bf-b2d0-67685dd25b41>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating standard deviation in one pass Standard deviation is a statistic parameter that helps to estimate the dispersion of data series. It's usually calculated in two passes: first, you find a mean, and second, you calculate a square deviation of values from the mean: double std_dev1(double a[], int n) { if(n == 0) return 0.0; double sum = 0; for(int i = 0; i < n; ++i) sum += a[i]; double mean = sum / n; double sq_diff_sum = 0; for(int i = 0; i < n; ++i) { double diff = a[i] - mean; sq_diff_sum += diff * diff; double variance = sq_diff_sum / n; return sqrt(variance); But you can do the same thing in one pass. Rewrite the formula in the following way: double std_dev2(double a[], int n) { if(n == 0) return 0.0; double sum = 0; double sq_sum = 0; for(int i = 0; i < n; ++i) { sum += a[i]; sq_sum += a[i] * a[i]; double mean = sum / n; double variance = sq_sum / n - mean * mean; return sqrt(variance); Unfortunately, the result will be inaccurate when the array contains large numbers (see the comments below). This trick is mentioned in an old Soviet book about programmable calculators (here is the reference for Russian readers: Финк Л. Папа, мама, я и микрокалькулятор. — М.: Радио и связь, 1988). 14 comments Ten recent comments are shown below. Show all comments Greg Wiszkowitz, I have just scanned the book in djvu format and and uploaded it to It is a great little book. Have fun. Greg W. James Mayfield, http://www.cs.berkeley.edu/~mhoemmen/cs194/Tutorials/variance.pdf suggests a one-pass calculation that avoids many of the round-off errors (I haven't tried it though). Peter Kankowski, Thank you very much, Greg. IMHO, it's one of the best Russian books on programming. James, thank you for the link. This method looks interesting, but, if I understand correctly, it requires 2 divisions at every iteration, which is very slow. It's probably makes more sense to use the two-pass calculation. Still thanks for the info. James, thank you, really very good point. Mark Hoemmen's paper has a perfect example to illustrate the danger: take only three numbers 10000, 10001 and 10002, use floats and get the wrong result. (It's probably not accidental that M. Hoemmen is PhD student on Berkeley, where Prof. W. Kahan is. And I wouldn't be surprised if James is there too. :) ) That simple example shows once again why Prof. W. Kahan designed the x87 FPU with the possibility to use 80 bits for partial results. But Microsoft intentionally designed compilers to ignore that feature (and more). Well that's a good moment to visit again Prof. Kahan's web page and also read, for example Marketing versus Mathematics "Old Kernighan-Ritchie C works better than ANSI C or Java!" "In 1980 we went to Microsoft to solicit language support for the 8087, for which a socket was built into the then imminent IBM PC. Bill Gates attended our meeting for a while and then prophesied that almost none of those sockets would ever be filled! He departed, leaving a dark cloud over the discussions. Microsoft’s languages still lack proper support for Intel’s floating-point." "In particular Bill Gates Jr., Microsoft’s language expert, disparaged the extra-wide format in 1982 with consequences that persist today in Microsoft’s languages for the PC. Sun’s Bill Joy did Floating-Point Arithmetic Besieged by “Business Decisions” The first paper is from 2000 and I don't know if Java now has the extended precision floating point for partial results. Modern CPU instructions "for multimedia" of course don't. Which makes them potentially dangerous when used on formulas like the starting one. Finally, from: "Routine use of far more precision than deemed necessary by clever but numerically naive programmers, provided it does not run too slowly, is the best way available" Arne Hormann, It does not require two divisions - only one. // for sample variance double std_dev(double a[], int n) { if(n == 0) return 0.0; int i = 0; double meanSum = a[0]; double stdDevSum = 0.0; for(i = 1; i < n; ++i) { double stepSum = a[i] - meanSum; double stepMean = ((i - 1) * stepSum) / i; meanSum += stepMean; stdDevSum += stepMean * stepSum; // for poulation variance: return sqrt(stdDevSum / elements); return sqrt(stdDevSum / (elements - 1)); Arne, thank you for your algorithm. I've already seen similar algorithms but I don't know how they are derived. I'd greatly appreciate if you would point me to some source which explains derivation of this and/or similar algorithms. According to some experiments I made for some similar calculations the algorithms like the one you presented here give significantly better results compared to the "one pass, full squares" (which can produce even completely wrong results for "inconvenient" input) but they can still be noticeably less accurate than the real two pass algorithm. Do you know of any source where such error analysis is shown? Thank you. David Jones, As documented in previous comments, the error can be large for the one-pass method. But sometimes the input to sqrt can be negative, resulting in... well, who knows what, given that you're coding in C. Try a list with 3 copies of 1.4592859018312442e+63 for example. Peter Kankowski, Thank you for the example. There seems to be a problem with very large values and little or no difference between them. std_dev1 works fine, while std_dev2 gives incorrect results. Thanks for your stimulating input!, typedef struct statistics_s { size_t k; // sample count double Mk; // mean double Qk; // standard variance } statistics_t; static void statistics_record (statistics_t *stats, double x) { if (1 == stats->k) { stats->Mk = x; stats->Qk = 0; } else { double d = x - stats->Mk; // is actually xk - M_{k-1}, // as Mk was not yet updated stats->Qk += (stats->k-1)*d*d/stats->k; stats->Mk += d/stats->k; static double statistics_stdev (statistics_t *stats) return sqrt (stats->Qk/stats->k); static size_t statistics_nsamples (statistics_t *stats) return stats->k; static double statistics_variance (statistics_t *stats) return stats->Qk / (stats->k - 1); static double statistics_stdvar (statistics_t *stats) return stats->Qk / stats->k; static double statistics_init (statistics_t *stats) stats->k = 0; Knuth actually has one pass algorithm like this for N=1 _M = x _C = 0.0; for N>1 delta = x - _M; _M += delta / N; _C += delta*(x-_M) _M gets you mean, _C / (N-1) is variance
{"url":"http://www.strchr.com/standard_deviation_in_one_pass","timestamp":"2014-04-16T04:24:29Z","content_type":null,"content_length":"18588","record_id":"<urn:uuid:733a9a4f-f09d-4b2e-9f9e-fdf64dce25cd>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: June 2000 [00107] [Date Index] [Thread Index] [Author Index] Minimize number of mulitplications • To: mathgroup at smc.vnet.net • Subject: [mg23818] Minimize number of mulitplications • From: Jihad Haddad <jihad at imada.sdu.dk> • Date: Sat, 10 Jun 2000 03:00:26 -0400 (EDT) • Organization: UNI-C • Sender: owner-wri-mathgroup at wolfram.com I am sitting in the middle of my thesis and bumped into the following problem which I hope Mathematica has a solution for. Or maybe there exists a package I can download that can solve it. I can illustrate it with a simple problem, even though my problem is of a much larger magnitude. Multiplying two first degree polynomials (ax+b) and (cx+d) yields the following 2nd degree polynomial: (ac)x^2 + (ad+bc)x + bd. So to compute the second degree polynomial given the two first degree polynomials we have to compute the three expressions (ac), (ad+bc) and (bd). Intuitively that will take four multiplications to perform, namely (a*c), (a*d), (b*c) and (b*d). My aim is to minimize the number of multiplications, and in the case above, it is possible to compute the three expressions in three multiplications. We can find (a*c), (b*d) and (a+b)*(c+d). That is three multiplications. But we still need the expression (ad+bc). But that is equal to (a+b)*(c+d)-(a*c)-(b*d) which we already have computed. Note that I do not care how many additions and subtractions I perform. That is because multiplications are a lot more time consuming than additions or Now the question is: Is there a Mathematica function that can find the minimal number of multiplications to compute a certain number of expressions, given the expressions? I will be very glad to get an answer. I am in desperate need for such an optimizing function. Thank you in advance Jihad Haddad (Denmark) • Follow-Ups:
{"url":"http://forums.wolfram.com/mathgroup/archive/2000/Jun/msg00107.html","timestamp":"2014-04-19T09:33:05Z","content_type":null,"content_length":"35927","record_id":"<urn:uuid:1508a46f-780a-49df-bfbe-3a04246bf417>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
converges uniformly converges uniformly If for every $\varepsilon>0$ there exists an integer $N$ such that for all $x\in X$ and all $n>N$, then we say that $f_{n}$converges uniformly to $f$. UniformConvergence, AbsoluteConvergence Mathematics Subject Classification no label found Sorry, I didn't mean to post this here. I don't see any way to delete it. this is the same with "uniform convergence" entry, only the title is different(which could have been added as "also defines"). Added: 2003-10-15 - 05:26
{"url":"http://planetmath.org/convergesuniformly","timestamp":"2014-04-19T09:27:56Z","content_type":null,"content_length":"44197","record_id":"<urn:uuid:c2cae3ed-0792-4e2d-bf52-4c3542bcb2ce>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
West Hills, Los Angeles, CA West Hills, CA 91307 Learn from a patient, experienced MIT math tutor ...Once I know their problem areas, I make sure they understand the concepts and provide them extra practice until they are comfortable with them. I have over 20 years of solid experience tutoring I and II, geometry, trigonometry, pre-calculus,... Offering 10+ subjects including algebra 1 and algebra 2
{"url":"http://www.wyzant.com/West_Hills_Los_Angeles_CA_algebra_tutors.aspx","timestamp":"2014-04-16T08:05:42Z","content_type":null,"content_length":"61280","record_id":"<urn:uuid:19bdaa5d-fa25-40af-b9a6-089dab6ecf8e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
Subscripting for subscripts greater than 2 June 25th 2011, 06:25 PM #1 Subscripting for subscripts greater than 2 In a recent post to MHF I tried to form the sum $r_{2}$$c_{2}$ + $r_{1}$$c_{1}$ only with the second set of subscripts equaling 7 However Latex always returned an error for all integer subscipts greater than 2 Can anyone please help Re: Subscripting for subscripts greater than 2 Hello, Bernhard! In a recent post to MHF I tried to form the sum $c_1$$+$$c_2$ with subscripts greater than 2. However Latex always returned an error. Can anyone please help? Using [tex], it will not support any expression longer than $c_1$ We can produce the terms separately: .[tex]c_5[/tex] . $\Rightarrow\;\;c_5$ . . but this is very inefficient. I recommend using [tex] tags. [tex]c_1,c_2,c_3,c_4,\hdots[/tex] . $\Rightarrow\;\;c_1,c_2,c_3,c_4,\hdots$ Re: Subscripting for subscripts greater than 2 Hello Soroban, Thanks for your help - later I will try to learn how to use [tex] tags Re: Subscripting for subscripts greater than 2 Hello Soroban, Thanks for your help - later I will try to learn how to use [tex] tags $r_{5}$ + $C_{5}$ Re: Subscripting for subscripts greater than 2 Hello Soroban, Thanks for your help - later I will try to learn how to use [tex] tags Re: Subscripting for subscripts greater than 2 Hello, Bernhard! Thanks for your help - later I will try to learn how to use [tex] tags [tex]r_{8}[/tex] . $r_8$ [tex]r_2[/tex] . $r_2$ [tex]r_3[/tex] . $r_3$ [tex]r_4[/tex] . $r_4$ [tex]r_5[/tex] . $r_5$ Interesting . . . [tex] doesn't like subscripts greater than 4. Re: Subscripting for subscripts greater than 2 It's picky with subscripted letters too: [tex]r_n[/tex] . $r_n$ [tex]r_k[/tex] . $r_k$ [tex]r_w[/tex] . $r_w$ My theory is that the [tex] compiler, in its present debilitated state, can only render formulae that it has at an earlier time (before its troubles started) stored in some library of previous June 25th 2011, 07:24 PM #2 Super Member May 2006 Lexington, MA (USA) June 25th 2011, 07:41 PM #3 June 25th 2011, 07:43 PM #4 June 25th 2011, 07:46 PM #5 June 26th 2011, 05:50 AM #6 Super Member May 2006 Lexington, MA (USA) June 26th 2011, 11:15 AM #7
{"url":"http://mathhelpforum.com/latex-help/183622-subscripting-subscripts-greater-than-2-a.html","timestamp":"2014-04-19T03:49:57Z","content_type":null,"content_length":"55499","record_id":"<urn:uuid:c15b3f0f-be7c-458a-b99a-d4a7a28500b3>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
My huge Summer Backyard Bash Review-Giveaway! I am so excited to present to you my huge Summer Backyard Bash! I have lined up 4 awesome sponsors: Puzzle Barrow 3 set of puzzle books, Noblo Beach Umbrella anchor, Doodle Roll, and the Knot Genie. All 4 of these great products can be used in some capacity during the summer months. While I have my own giveaway, you can then hop on to the great other blogs and enter to win. The grand prize winner giveaway is sponsored by TruKid and Imperial Toy. You have the chance to win a water table, wagon, kid-friendly skincare products from TruKid, and a variety or water and bubble toys from Imperial Toy. The grand prize giveaway is only open to the US. Trukid aims to encourage kids to live a healthy, well-rounded lifestyle. By combining natural skin-care with fun and education, TruKid cultivates a relationship between healthy habits and being a kid. Inperial Toy is the worldwide leader in the sale and manufacturer of Bubbles and Novelty Toys. At Imperial, the mission is to be First in Fun! Now onto my giveaway and my wonderful sponsors: Sponsor 1: The Complete Idiot’s Guide to Picture Puzzles and Puzzle Barrow’s Logic Puzzles/Sudoku: I can not tell you how much I love doing puzzles. I can spend hours trying to complete a couple of Sudoku puzzles. My favorite time of the year to do puzzles is when I am sitting at the pool sans kids and I can just enjoy myself. I start out reading a book, but then want to get my brain working, so I whip out the puzzle books. I have to tell you that I am not fantastic at doing puzzles but I am getting better. As the saying goes, practice makes perfect! I got the chance to review three awesome puzzle books: 1. The Complete Idiot’s Guide Picture Puzzles Volume 2: In this puzzle book, there are 100 puzzles for beginner through expert solvers. There are tons of color photos that you will love looking at to find the differences (these are my personal favorite), as well as 20 unscramble-the-picture and choose-which one-is-different puzzles. At the end there is a helpful answer key to see how well you did and give you hints when you get stuck. This book retails for $10.95 and has 158 exciting puzzle pages. 2. Puzzle Barow’s Logic Puzzles Volume 2: If problems get your brain churning, Puzzle Baron’s Logic Puzzles, Volume 2, will send your brain into overdrive! Inside, the book you will get -200 never-before-published grid-based logic puzzles-Information such as average completion time, other puzzlers’ high scores, and more to help you push your brain a little harder-An easy -to-use answer key so you can check your work or get a hint when you get stuck. This book retails of $14.95 and is 208 pages of puzzle fun. 3. Puzzle Barow’s Sudoku: As a sudoku enthusiast, you know how challenging and addicting-they can be. If you are new to these deceptively simple yet complicated puzzles, you’ll soon understand what all the fuss is about. If sudoku puzzles get your brain churning, the Puzzle Baron’s Sudoku will send you into overdrive! Inside you will get 400 never before published sudoku puzzles, from easy to difficult that will test the most hard-core sudoku solvers. There is also an easy-to-use answer key so you can check your work. Sponsor 2: Knot Genie ultimate detangling brush: I was so excited to receive the knot genie detangling brush. If you follow my blog, and I sure hope you do, you will know that my kid have seriously crazy amounts of hair. Especially with Hayley, brushing her hair becomes a nighttime disaster. We went through nine different brushes before we found the Knot Genie! Everytime, Hayley would freak out when I took a brush to her head. Since we started using the Knot Genie, absolutely no more tears. I love the Knot Genie and highly recommend it to everyone who has kids with unruly hair. WIth the summer coming and the sand days, you might want to purchase two to leave one in your summer bag. Trust me, it is a lifesaver! The Knot Genie Detangling Brush was created for every mom that has endured the foot stomping and screaming matches that come with brushing their child’s hair. With the Knot Genie, even the cur lies, most knotted hair practically slips through the unique bristle configuration-gently and painlessly. Try the Knot Genie on your kids, or even yourself, and you will see how easy it is so brush. This brush also stops breakage and split ends. The brush retails for $19.99 and available in fairy pink, puff of purple, blue cloud, and tinseltown silver. They are available for purchase at Knot Genie Sponsor #3: Doodle Roll-Fun with Crayons: You will not want to go to the beach without some activities this summer and Doodle Roll is one fun activity you don’t want to forget! Doodle Roll is the most versatile and portable arts and crafts activity kit. This all in one kit contains a roll or paper and crayons in a compact package, perfect for fun on the go-anytime/anywhere! The Doodle Roll comes in two sizes, large and small, based on your carrying needs. Here is a snippet video from an expert: I have to admit, this might be one of Hayley’s favorite “toys”. She asks me to take the doodle roll with us wherever we go. She is at the age where coloring is everything, and to have a product like the doodle roll makes clean-up and storage, a cinch. No more papers flying around the house… No more crayons all over. This simple activity set makes fun not only fun for children alike, but parents as well! There is over 15 feet of paper so your kids can have all fun the coloring with the new doodle roll. It really is an ingenious product. If you are interested in checking out the Doodle Roll, be sure to out where the wonderful products are sold: Doodle Roll Retailers. Sponsor #4: Noblo Beach Umbrella Anchor: How many times have you gone to the beach and brought your umbrella, only to have it fall down? What a complete hassle? I remember being with my mom last year at Newport beach when Zane was only a few months old, and bringing along our beach umbrella. After thirty minutes of fighting with the silly umbrella, we gave up and went home. Well, we definitely won’t have that problem anymore since we have a new lifesaver, the Noblo Beach Umbrella Anchor. The Noblo is Martha Stewart’s 20 must haves for an organic, natural, and eco-friendly summer! You does it work you might ask- very simple!! The noblo is a bag that you take with you in your beach bag, and when you get to the beach, fill it up with sand. Then attach the velcro strap above the umbrella spokes and you are good to go! That is all! I can assure you that this product really works. We recently got back from Hawaii where we used our Noblo. We got stuck in a wind storm one day in Maui and guess what, our umbrella stayed exactly where it was! I was amazed, as well as my husband, who originally had some doubts. The Noblo is terrific, sturdy, and a must-have for the summer! Be sure to put Noblo on your summer shopping list. For purchasing information, check out their website, Noblo. This awesome invention retails for $14.99. GIVEAWAY: Since there are a lot of sponsors on this giveaway, I decided to pick two winners. In your comment section following this post, please specify which two items you would like to receive. I can’t guarantee that you will get those two items but I will do my very best to assure you do. Please remember that this giveaway is run via rafflecopter so click continue reading to see the actual giveaway. While the comment entry is mandatory, I highly suggest you fill out the rest of the rafflecopter entries to increase your chances of winning, Good Luck! US only a Rafflecopter giveaway Now hop onto the rest of the wonderful blog and enter: *Thank you to all my wonderful sponsors for making this giveaway terrific. As always, all my opinions are my own and not swayed by outside sources. Noblo is such a great idea! OOh you have some awesome products there! The Knot Genie looks great – I could really use that with my daughter’s long hair! Definitely would love having the Noblo! wow – you have LOTS of great goodies! I would like the Knot Genie and the Doodle Roll. I would love to win the Knot Genie and the Soduko! I’d love the Noblo and the Doodle Roll! the beach umbrella stand Noblo Beach Umbrella Anchor and Puzzle Barow’s Logic Puzzles Volume 2! Thanks for the giveaway! I love that NoBlo! Such a great idea I would love the Doodle Roll. Always needs a way to keep the kids busy The Noblo and Doodle Roll!!! Thank you!! I want the knot genie and the puzzle books. The puzzles! doodle roll would be great for vacations! I would like the Puzzle Books and Doodle Roll Noblo and doodle roll :O) WOW can’t decide among so many awesome products! So…..I’ll go for the Knot Genie! Wow! What a lot of fun summer stuff!! Me, me, me Puzzle books & knot genie! Puzzle Books & Doodie Roll! I would like the Doodle Roll, I think my nephews would enjoy that & actually I would enjoy using it with them puzzle books and doodle roll! I like the knot genie and puzzle books, such great gift ideas! I really want to get those puzzle books. I would choose the noblo. The knot genie & Noblo. You are so cute! Fun giveaways, I would say doodle roll and the umbrella stand. Knot Genie and the Noblo The Doodle Roll & Knot Genie! I am definitely looking into the knot genie. My daughter has extremely curly hair and this would make my life SO much easier! (and hers too!!) the doodle roll and the knot genie. Knot genie and the doodle roll puzzle box and knot genie knot geni doodle roll The puzzle books and the doodle roll The Doodle Roll and the Knot Genie #s 1 and 4 noblo and knot genie!! I would like the Puzzle Books & Knot Genie. Definitely the noblo and the picture puzzle book!! THANKS Noblo and Umbrella. noblo and knot genie! Knot genie and doodle roll!! knot genie and the puzzle books knot genie and the puzzle books!! 1 and 3 noblo and knot genie. puzzle books and knot genie The noblo and doodle roll! Puzzle Books (3), Knot Genie Facebook Name: Leslie Galloway (www.facebook.com/GallowayLeslie) Noblo & Doodle Roll! Knot genie and Doodle ROll Knot Genie and Puzzle Books The Knot Genie and the Puzzle Books :)! I’ve been purchasing those puzzle books for years now. I love em :)! Knot Genie & Picture Puzzles Book nancyecdavis AT bellsouth DOT net knot Genie & the doodle roll i like Puzzle Barow’s Logic Puzzles Volume 2 and Puzzle Barow’s Sudoku, and the knot Genie i like the doodle roll and knot genie I love the doodle roll and noblo! noblo and the puzzle books msboatgal at aol.com Puzzle books the noblo Would love to win the Noblo and the Doodle Roll! Thanks! puzzle books Knot Genie and the Doodle Roll Doodle Roll and Noblo I would love the puzzle books and the doodle roll. knot genie Knot Genie and the Doodle Roll Puzzle books and Doodle Roll the knot genie and the doodle roll I like Knot genie and doodle roll Doodle Roll Puzzle Books & Doodie Roll Noblo & puzzle Books duddle roll or noblo looks awesome I would LOVE the puzzle books and the Doodle Roll. puzzle books and doodle roll I love the idea of the knot genie… I like the knot genie and the puzzle books the most. I’d like the puzzle books and the knot genie I would love the puzzle books and the doodle roll noblo and knot genie I would like the The Complete Idiot’s Guide Picture Puzzles Volume 2 and the Doodle Roll. The Logic Puzzle Book & the Doodle Roll puzzle books and the doodle roll tcogbill at live dot com doodle roll and puzzles books Noblo & Doodle roll! I would love the Know Genie and the Doodle roll. The girls have long hair and I have battle the knots and tears way too often. Thanks so much! I like the puzzle books and the doodle roll. Thank you! Knot Genie Your button is on my blog (under Blogs I Like) Doodle Roll & knot genie I would love to get the doodle roll and the logic puzzle book. Thank you for the giveaway! I’d like to win the puzzle books! I would love to win the puzzles! the doodle roll! my daughter loves to draw The NoBlo for sure! i’d choose the doodle roll and the knot genie The umbrella and the puzzles puzzle books and doodle roll Doodle roll, beach umbrella anchor Knot genie, Beach umbrella anchor I’d love the Doodle Roll and the puzzle books! knot genie and doodle roll I would like to win the Puzzle Books and Knot Genie puzzle books and knot genie. Knot Genie and Doodle Roll I would love the Noblo and the Doodle Roll Puzzle books and doodle roll. [...] Here: http://www.the-mommyhood-chronicles.com/2012/06/my-huge-summer-backyard-bash-review-giveaway/ This entry was posted in Blog Giveaways and tagged 6/13/12, Blog Giveaway, Kids, Rafflecopter, [...] knot genie and doodle roll I would love to win the doodle roll and knot genie! The two prizes that I would like if I would be the lucky winner are: The Knot Genie and the Noblo. They are cute, handy and creative products. Thank you! Doodle Roll, and the Knot Genie knot genie and doodle roll knot genie and doodle roll I would love the logic puzzle book and the noblo! Noblo and the puzzle book! Doodle Roll would be a great road trip/hotel item I would like the doodle roll and the knot genie!! Both would be great for my toddler… one to distract her and the other to brush her long hair! I would like to win the puzzle books and the knot genie. Noblo, Doodle Roll doodle roll and knot genie I would pick the noblo and knot genie Doodle Roll and Puzzle Books doodle roll and knot genie The noblo and knot genie! I’d love to win the Noblo & the DoodleRoll noblo and the puzzle books Knot Genie and Doodle Roll! Puzzle Books and Knot Genie The puzzle books and the Doodle Roll doodle roll and puzzle books The Doodle Roll and the Puzzle Books sound the best. Puzzle books and knot genie knot genie and doodle roll I would like the Doodle Roll and Knot Genie The Knot Genie and the Noblo (although the books look really good, I love logic problems, and the doodle roll would be a ton of fun for my granddaughter.) doodle roll and knot genie would be cool to win the knot genie and the noblo. The knot genie and the noblo! Puzzle Books and Doodle Roll knot genie and doodle roll Knot Genie and Doodle Roll I’d like the knot genie and doodle roll Puzzle Books and Doodle Roll logic puzzles and soduko Logic Puzzles and Noblo. I like them all! doodle roll and logic puzzles I would love the doodle roll and the logic puzzles! Puzzle Barow’s Sudoku and Puzzle Barow’s Logic Puzzles Volume 2 The puzzle books and the Noblo. The Knot Genie and Noblo. Thanks for the chance. Puzzle books and Knot Genie! They are all awesome but especially love the doodle roll and detangling brush I’d love a Noblo! Puzzle books and knot genie! The Knot genie and the Noblo the doodle roll and the knot genie #1 and #4! Thanks for the chance Knot genie and puzzle book. The knot genie and the puzzle books-but love everything the noblo and the doodleroll puzzle books & the doodle roll! I would like to win the puzzle books puzzle books and knot genie I would pick the puzzle books and the knot genie. I’d like to win the puzzle books and the knot genie. Noblo and Doodle Roll The knot genie and the logic puzzles Knobo and knot genie Puzzle books and knot genie! The Noblo umbrella and puzzle books. I would like the knot genie and the logic puzzle book. The knot genie & Noblo. I’d like the Knot Genie and doodle roll. The Doodle Roll-Fun with Crayons and the Noblo. The Picture Puzzle Book & the DDoodle Roll. I would like the puzzle books and the doodle roll. no need for the knot genie, I have a boy and we keep his hair short. knot genie and doodle roll Knot Genie and Doodle Roll! I love them all – even the knot genie Puzzle Barow’s Logic Puzzles Volume 2 and Doodle Roll! I’m a puzzle nut and would love to win the sudoku and the logic puzzles. The puzzle books and the knot genie would be my picks! Doodle Roll & KNOT GENIE The knot genie and doodle roll. I’d love to win the Noblo/Doodle Roll contest Doodle Roll and Puzzle Books! ptavernie at yahoo dot com The noblo and the knot genie! I would love the puzzle books and the doodle roll. Prize 3 and 4 I would love to win the Doodle Roll and the Logic Puzzles :)——- I would love to win the Puzzle Books & Knot Genie! The puzzle books for me and the kids, we love puzzles and the Knot Genie for my girls who have very long hair! the 2 things i would pick are the Noblo and the knot genie! the knot genie and noblo! sellcrystal2 (at) yahoo dot com the logic and soduku books Noblo, Doodle Roll Are what I would want most i would love the noblo! I’d love to win the Noblo, Doodle Roll! jessicaahays at hotmail dot com I’d like the puzzle books and knot genie. i would love to win the doodle roll and no blo! I’d like to win the knot genie and puzzle books. the doodle roll and the logic puzzle book I’d like to win the puzzle books and doodle roll. couponshanda (at) yahoo (dot) com I’d like to win the Noblo and Doodle Roll Puzzle Books (3), Knot Genie Puzzle Books , Knot Genie vmkids3 at msn dot com Doodle roll and knot genie. Thanks! I would like the knot genie. the Five Level Sound Indicator. I like the doodle roll and the puzzle books. logic puzzles and knot genie brush the puzzle books and the knot genie Puzzle books, and knot genie. The Noblo and Doodle Roll I’d love the Noblo and the Doodle Roll I like the Noblo and the Knot Genie best! I would love to win the puzzle books and the knot genie. i would love the noblo & the doodle roll! very cool! the Noblo and the puzzle books please and thanks! I’d love to win the Puzzle books and knot genie! I’d love to win the Puzzle books and knot genie! I’d love to own #1 and #3 Doodle roll and puzzle books. I like the Doodle roll and the Hair Genie The Knot Genie and the puzzle books! All of them but the Noblo and the Logic Puzzles would be my top 2 picks! THanks! all of the puzzle books, great time fillers. know genie or Noblo and the puzzle book puzzle book Knot genie for the kids hair, and the Noblo for my umbrella. The puzzle book and noblo! Logic puzzles and sudoku
{"url":"http://www.the-mommyhood-chronicles.com/2012/06/my-huge-summer-backyard-bash-review-giveaway/","timestamp":"2014-04-18T21:20:09Z","content_type":null,"content_length":"256502","record_id":"<urn:uuid:924ed87d-7f92-4d74-93f1-bffc08effa0f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
13. Debugging Models This chapter is intended to help MechanicalSystems users to diagnose and repair common errors in mechanism models. There is a penalty to be paid for the tremendous flexibility available within any symbolic Mathematica model, that is, an equally tremendous number of ways that a model can fail. This chapter can only scratch the surface of the full breadth of possible modeling errors that can occur in a Mech model, but the basic debugging methods presented should be applicable to all models that are so afflicted. 13.1 Model Building Errors This section deals with model building errors that prevent Mech from building the constraint or load expressions or prevent the completed model from running. 13.1.1 Bad Arguments Mech functions are written with as much type checking as is possible without restricting the user from incorporating arbitrary mathematical expressions into a model. Unfortunately, this caveat means that there can be essentially no type checking on any symbolic argument that is intended to evaluate to a number. Many arguments to Mech functions are type checked, such as point and vector arguments, but since point objects and vector objects may also contain symbolic expressions buried in their local coordinates, these items cannot be rigorously type checked until runtime. The following examples show where Mech's type checking catches bad arguments and where others slip through. This loads the Modeler2D package. A Mech body, constraint, or load function is returned unevaluated if it is called with invalid arguments. For example, the Translate2 constraint expects a pair of axis objects that have the head Axis or Line. If Translate2 recognizes that these arguments are incorrect, the entire constraint function is returned unevaluated. Here is a misspelled axis object. If a constraint function is passed the correct argument types, but the contents of these arguments are invalid, a constraint object is returned with some assumptions made about what to do with the invalid arguments. In the following example, the 3D point specification is not recognized by Modeler2D, so Point[2, {1,0,0}] is replaced with Point[2, {0,0}]. This is certainly not what the user intended, but it causes a valid constraint object to be returned. Here is a constraint with a 3D local coordinate that should be 2D. The types of errors shown above are diagnosed fairly easily. More elusive problems result when a bad value is given for arguments of type expr. When a Mech usage statement specifies that an argument of type expr is required, it means any expression that will evaluate to a number at runtime may be used. Mathematica has no way of telling in advance whether or not an expression will evaluate to a number, so these arguments cannot be type checked at all. Here is a constraint with an unchecked expression argument. The last argument to the RelativeDistance1 constraint (2 a) is supposed to evaluate to a number. If, at runtime, the symbol a evaluates to a number, then the constraint is valid. If the symbol a has no definition, or it evaluates to a List or other function at runtime, then Mech will fail. Mathematica has no way of knowing what the symbol a will do at runtime, so it cannot be type checked. Note that the first argument of a SysCon object is a list of the constraint expressions contained in the object. Here are the constraint expressions contained in cs. 13.1.2 Constraint Inspection When an expression in a Mech model is supposed to evaluate to a number but it does not, it can often be caught by the CheckSystem function. CheckSystem evaluates the constraint expressions in the current model, subject to the current initial guesses and parameters, and reports any nonnumeric expressions found. Consider the following simple model of a reciprocating slider. If CheckSystem is run before the symbols amp and freq are defined, they are identified. Here is a reciprocating slider model. If the symbols amp and freq are defined at runtime, but not defined in such a way that the expression amp Sin[freq T] evaluates to a number, CheckSystem cannot determine exactly what is wrong. Here is a bogus definition for the symbol freq. The constraint inspection function. Constraints is a Mech function that is used to return all or part of the vector of constraint expressions in the current model. Often, direct inspection of the constraints immediately gives a clue as to the cause of an error. Usually, the constraints are inspected subject to the values of the current guesses and parameters, which shows exactly what CheckSystem saw to be an error. Here is the entire current constraint vector. In all cases, input such as this should return a vector of numbers. In this case, the third constraint expression evaluates to a list of numbers, instead of a single number. Obviously this is due to the bogus definition of freq = {2, 4} in the example. We can repair the bogus definition and try again. Now the constraints evaluate to a list of numbers, as is required. The fact that each of the constraint expressions evaluates to zero simply means that all of the constraints are perfectly satisfied at the current values of the initial guesses, which is not usually the case. 13.1.3 Load Inspection Mech load expressions may be rendered invalid by the presence of nonnumeric expressions in the same way as constraint expressions. CheckSystem evaluates the current load expressions, if any exist, subject to the current values of the initial guesses, parameters, and Lagrange multipliers, and reports any nonnumeric expressions found. To demonstrate, a simple load is added to the reciprocating slider model defined in Section 13.1.2. A force applied to body 2. If the symbols k and b are defined at runtime, but not defined in such a way that the expression Exp[k t + b] evaluates to a number, CheckSystem will not be able to determine exactly what is wrong. For example, CheckSystem is limited to recognizing raw symbols only. CheckSystem does not recognize a user-defined variable of the form c[n]. The load inspection function. Loads is a Mech function that is used to return all or part of the vector of load expressions that are applied to the current model. Usually, the loads are inspected subject to the values of the current guesses, parameters, and Lagrange multipliers, which shows exactly what CheckSystem saw to be an error. Here is the current vector of applied loads. In all cases, the input shown should return a list of numbers. In this case, the second load expression, the Y component of the force applied to body 2, evaluates to a symbolic expression instead of a single number. Obviously this is due to the definition of b = c[1] given. However, c[2] does not appear in the applied loads. This is because c[2] (k) is multiplied by time T. The current initial guess for T is T -> 0, so c[2] is canceled entirely. Thus, CheckSystem does not even recognize the presence of c[2] until the initial guesses are perturbed. We can repair one of the bogus definitions, but the other still remains. CheckSystem thinks that all is well, which is definitely not the case. This is an often frustrating flaw in CheckSystem, but it cannot tell what is and is not a number until it evaluates it, and zero times anything is a number. If the value of time is perturbed, CheckSystem sees that the loads are not purely numeric. If we change the current value of time with SetGuess, CheckSystem recognizes an error. Now we repair the other bogus definition. Now the loads evaluate to a nested list of numbers, as is required. 13.2 Failure to Converge This section deals with mechanism modeling errors that prevent a Mech model from converging to a solution or cause it to converge to an incorrect solution. 13.2.1 Inconsistent Constraints The most basic error in kinematic modeling is to simply define a set of constraints that is not consistent with the mechanism that is being modeled. If a model to which no solution exists is defined, Mech must fail to converge. For example, consider a rotating crank with a single link between an eccentric point on the crank and a point on the ground. If the link is insufficiently long, it will not reach from the crank to the ground, regardless of the orientation of the crank. In this case, the link is only 1.75 units long, while the minimum possible distance between its two attachment points is 2 units. No assembled configuration of the mechanism exists, so the Newton-Rhapson solution block fails. This loads the Modeler2D package. Here is a simple crank-with-link mechanism. SolveMech attempts to find a solution, but it cannot. The diagnosis of this type of problem can be quite difficult. No specific tool to help isolate such a problem is provided by Mech because Mathematica can only know what is said, not what is meant. The first thing to try is usually to make a quick sketch of the mechanism, and make sure that its assembly is feasible. If it is clear that the problem is a modeling error, not a conceptual one, then an examination of the constraint expressions can yield some insight. Here are the current values of constraints 1 and 2. Since the expressions in constraint 1 are equal to zero at the current values of the initial guesses, they are satisfied and are probably not the cause of the problem. Constraint 2, the RelativeDistance1 constraint, is not equal to zero, therefore it is the constraint that SolveMech was unable to satisfy. While this method may prove to be helpful in isolating a problem, there is no guarantee that a constraint that is not satisfied is actually the one that is in error. For example, it is possible that the length of the link in the example is correct and is what the designer intended, and what really needs correction is the location of the center of the crank. Another tool that can prove helpful is StepMech. This function causes the iterative solver to take a single Newton-Rhapson step toward the solution and return the result, regardless of any convergence criteria. This allows the solution to be inspected at each step, so that if it starts to diverge, the user can see "which way it goes". A debugging utility. StepMech is equivalent to SolveMech with MaxIterations set to 1, and an infinitely accurate convergence criteria. 13.2.2 Bifurcation Another common modeling problem is bifurcation, a condition where more than one solution to the system of constraint equations exists, representing multiple possible assembled configurations. When the previously defined crank-with-link model is modified so that it can converge, it is able to converge to either of two solutions depending on the values of the initial guesses. Here is one possible configuration of the crank-with-link model with a reasonable value of len. And here is the other possible configuration of the crank-with-link model. Once a model is apparently functioning properly, the value of a graphic image of the model cannot be overemphasized. A rudimentary graphic image can immediately show errors that may have gone unnoticed, such as the fact that a model has converged on the wrong side of a bifurcation, or simply that the model has been defined incorrectly. Errors that are difficult to detect from the numbers alone are often easily seen in the graphic. Here is a simple graphic of the crank-with-link model at both positions. 13.2.3 Redundancy A mechanism model with a redundant constraint set can be thought of as having more than one constraint controlling the same degree of freedom. Since SetConstraints only allows models to be defined if they have equal numbers of constraints and degrees of freedom, the presence of one redundant constraint implies the presence of one unconstrained degree of freedom. CheckSystem may be used to detect such errors. Here is a model with a redundant constraint set. The constraint position numbers given in the error message generated by CheckSystem refer to the position of the offending constraints in the current constraint list. For example, the current constraint vector has three expressions, the first two from Translate2 and the last from Orthogonal1. The positions {{1, 1}, {2, 1}} given by the error message refers to the first expression in constraint 1, the Translate2, and the first and only expression in constraint 2, the Orthogonal1. CheckSystem is usually, but not always, able to detect redundancies in a constraint set. Because of the Euler generalized parameters used in the 3D constraint formulation, it is possible for CheckSystem to be unable to detect a redundancy if the current initial guesses for the Euler parameters do not constitute a valid set. Specifically, a valid set of four Euler parameters representing an arbitrary angular orientation must satisfy the following relation. If the Euler parameters in the current initial guesses do not satisfy this relationship, it is possible for CheckSystem to be unable to detect a redundant constraint set. Another weakness in the redundancy checking used by Mech is that numerical inaccuracies may cause CheckSystem to report that more constraint expressions are participating in a redundancy than really are. This is only a problem in relatively large systems where numerical errors are more pronounced. 13.3 Mathematical Anomalies In this section artifacts of the mathematical methods used by Mech are discussed. 13.3.1 Zero Unit Vectors If the concept of a zero-length unit vector seems a little strange, rest assured that Mech also finds it difficult. Unit vectors are used internally by Mech to generate an applied force vector when the magnitude of the force is specified, and the direction of the force is given as a vector based on mechanism geometry. The applied force vector, in this case, is the supplied direction vector divided by its own magnitude, times the specified force magnitude. If the magnitude of the supplied direction vector becomes zero at some point in the mechanism motion, a singularity results. The most common occurrence of this problem is in the application of frictional forces. If the negation of the velocity vector of a point is used to specify the direction of a force, the velocity of the point may not go to zero at any point in time when a solution is sought. If it does, Mathematica evaluates 0/0 and produces an error message. While this is to be expected since a zero vector makes no sense as a direction specification, it can be quite difficult to formulate a workaround. Setting the magnitude of the force equal to zero for all points in time at which the direction vector becomes zero may not work, because this still results in 0*0/0, which is no better than 0/0. The simplest workaround is to use the CutOff option for Force and Moment. A fix for problems with zero-length force vectors. The following example shows the effect that CutOff has on the force vector generated by a Mech Force function. The force is defined with a constant magnitude, but the direction vector of the force is a function of a. Clearly, the force is indeterminate if a goes to zero. Here is a force that fails if a = 0. Here is the same force using the CutOff option. The value of CutOff should be extremely small so that it has a negligible effect on the magnitude of the load in the operating range of the variable a. 13.4 Equations of Motion This section shows how to access the complete system of equations that is generated by a Mech model. 13.4.1 Internal Expression Access The following functions are used to access the symbolic expressions that are generated by MechanicalSystems. Functions to provide access to internally stored expressions. Functions that return a model's dependent variables. In all of these functions, the body number bnum can be a single positive integer, a list of integers, All, or Rest. The constraint number cnum can be a single positive integer constraint number, a list of integers, All, Euler, or any of the forms of partial constraint specifications accepted by Constraints. The complete set of equations that are solved by the Static solution block is given by Transpose[Jacobian[_,_]].Generalized[_] . The equations of motion that are solved by the Dynamic solution block are the Static equations with Centrifugal[_] + MassMatrix[_].Acceleration[_] added to the left-hand side. 13.4.2 Examples Here is a simple model to provide some values for the dependent variables. The model is run with the Dynamic solution option, and very tight convergence criteria. Here are all the kinematic constraint equations. Here are the velocity constraints. Here are the acceleration constraints. Here are the static reaction force equations. They are not satisfied because the model was solved with the Dynamic option. Here are the dynamic equations of motion.
{"url":"http://reference.wolfram.com/applications/mechsystems/DebuggingModels/Mech.13.html","timestamp":"2014-04-20T03:54:42Z","content_type":null,"content_length":"77651","record_id":"<urn:uuid:6967ba45-d6f7-454c-b9cd-e6cdf48b15cb>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Osman Güler Professor of Mathematics Department of Mathematics & Statistics University of Maryland, Baltimore County E-mail: guler 'at' umbc 'dot' edu • B.A., Yale University • M.S., The University of Chicago • Ph.D., The University of Chicago Pictures, pictures, pictures Research Interests and Publications My research interests are in mathematical programming, operations research, convex analysis, and complexity. Parent Directory
{"url":"http://www.math.umbc.edu/~guler/","timestamp":"2014-04-21T03:03:50Z","content_type":null,"content_length":"1612","record_id":"<urn:uuid:33e32d78-2f02-4d8d-9743-70760f9b3aed>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Theory LBVJVM (* Title: HOL/MicroJava/BV/LBVJVM.thy Author: Tobias Nipkow, Gerwin Klein Copyright 2000 TUM header {* \isaheader{LBV for the JVM}\label{sec:JVM} *} theory LBVJVM imports Typing_Framework_JVM type_synonym prog_cert = "cname => sig => JVMType.state list" definition check_cert :: "jvm_prog => nat => nat => nat => JVMType.state list => bool" where "check_cert G mxs mxr n cert ≡ check_types G mxs mxr cert ∧ length cert = n+1 ∧ (∀i<n. cert!i ≠ Err) ∧ cert!n = OK None" definition lbvjvm :: "jvm_prog => nat => nat => ty => exception_table => JVMType.state list => instr list => JVMType.state => JVMType.state" where "lbvjvm G maxs maxr rT et cert bs ≡ wtl_inst_list bs cert (JVMType.sup G maxs maxr) (JVMType.le G maxs maxr) Err (OK None) (exec G maxs rT et bs) 0" definition wt_lbv :: "jvm_prog => cname => ty list => ty => nat => nat => exception_table => JVMType.state list => instr list => bool" where "wt_lbv G C pTs rT mxs mxl et cert ins ≡ check_bounded ins et ∧ check_cert G mxs (1+size pTs+mxl) (length ins) cert ∧ 0 < size ins ∧ (let start = Some ([],(OK (Class C))#((map OK pTs))@(replicate mxl Err)); result = lbvjvm G mxs (1+size pTs+mxl) rT et cert ins (OK start) in result ≠ Err)" definition wt_jvm_prog_lbv :: "jvm_prog => prog_cert => bool" where "wt_jvm_prog_lbv G cert ≡ wf_prog (λG C (sig,rT,(maxs,maxl,b,et)). wt_lbv G C (snd sig) rT maxs maxl et (cert C sig) b) G" definition mk_cert :: "jvm_prog => nat => ty => exception_table => instr list => method_type => JVMType.state list" where "mk_cert G maxs rT et bs phi ≡ make_cert (exec G maxs rT et bs) (map OK phi) (OK None)" definition prg_cert :: "jvm_prog => prog_type => prog_cert" where "prg_cert G phi C sig ≡ let (C,rT,(maxs,maxl,ins,et)) = the (method (G,C) sig) in mk_cert G maxs rT et ins (phi C sig)" lemma wt_method_def2: fixes pTs and mxl and G and mxs and rT and et and bs and phi defines [simp]: "mxr ≡ 1 + length pTs + mxl" defines [simp]: "r ≡ sup_state_opt G" defines [simp]: "app0 ≡ λpc. app (bs!pc) G mxs rT pc et" defines [simp]: "step0 ≡ λpc. eff (bs!pc) G pc et" "wt_method G C pTs rT mxs mxl bs et phi = (bs ≠ [] ∧ length phi = length bs ∧ check_bounded bs et ∧ check_types G mxs mxr (map OK phi) ∧ wt_start G C pTs mxl phi ∧ wt_app_eff r app0 step0 phi)" by (auto simp add: wt_method_def wt_app_eff_def wt_instr_def lesub_def dest: check_bounded_is_bounded boundedD) lemma check_certD: "check_cert G mxs mxr n cert ==> cert_ok cert n Err (OK None) (states G mxs mxr)" apply (unfold cert_ok_def check_cert_def check_types_def) apply (auto simp add: list_all_iff) lemma wt_lbv_wt_step: assumes wf: "wf_prog wf_mb G" assumes lbv: "wt_lbv G C pTs rT mxs mxl et cert ins" assumes C: "is_class G C" assumes pTs: "set pTs ⊆ types G" defines [simp]: "mxr ≡ 1+length pTs+mxl" shows "∃ts ∈ list (size ins) (states G mxs mxr). wt_step (JVMType.le G mxs mxr) Err (exec G mxs rT et ins) ts ∧ OK (Some ([],(OK (Class C))#((map OK pTs))@(replicate mxl Err))) <=_(JVMType.le G mxs mxr) ts!0" proof - let ?step = "exec G mxs rT et ins" let ?r = "JVMType.le G mxs mxr" let ?f = "JVMType.sup G mxs mxr" let ?A = "states G mxs mxr" have "semilat (JVMType.sl G mxs mxr)" by (rule semilat_JVM_slI, rule wf_prog_ws_prog, rule wf) hence "semilat (?A, ?r, ?f)" by (unfold sl_triple_conv) have "top ?r Err" by (simp add: JVM_le_unfold) have "Err ∈ ?A" by (simp add: JVM_states_unfold) have "bottom ?r (OK None)" by (simp add: JVM_le_unfold bottom_def) have "OK None ∈ ?A" by (simp add: JVM_states_unfold) from lbv have "bounded ?step (length ins)" by (clarsimp simp add: wt_lbv_def exec_def) (intro bounded_lift check_bounded_is_bounded) from lbv have "cert_ok cert (length ins) Err (OK None) ?A" by (unfold wt_lbv_def) (auto dest: check_certD) from wf have "pres_type ?step (length ins) ?A" by (rule exec_pres_type) let ?start = "OK (Some ([],(OK (Class C))#(map OK pTs)@(replicate mxl Err)))" from lbv have "wtl_inst_list ins cert ?f ?r Err (OK None) ?step 0 ?start ≠ Err" by (simp add: wt_lbv_def lbvjvm_def) from C pTs have "?start ∈ ?A" by (unfold JVM_states_unfold) (auto intro: list_appendI, force) from lbv have "0 < length ins" by (simp add: wt_lbv_def) show ?thesis by (rule lbvs.wtl_sound_strong [OF lbvs.intro, OF lbv.intro lbvs_axioms.intro, OF Semilat.intro lbv_axioms.intro]) lemma wt_lbv_wt_method: assumes wf: "wf_prog wf_mb G" assumes lbv: "wt_lbv G C pTs rT mxs mxl et cert ins" assumes C: "is_class G C" assumes pTs: "set pTs ⊆ types G" shows "∃phi. wt_method G C pTs rT mxs mxl ins et phi" proof - let ?mxr = "1 + length pTs + mxl" let ?step = "exec G mxs rT et ins" let ?r = "JVMType.le G mxs ?mxr" let ?f = "JVMType.sup G mxs ?mxr" let ?A = "states G mxs ?mxr" let ?start = "OK (Some ([],(OK (Class C))#(map OK pTs)@(replicate mxl Err)))" from lbv have l: "ins ≠ []" by (simp add: wt_lbv_def) from wf lbv C pTs obtain phi where list: "phi ∈ list (length ins) ?A" and step: "wt_step ?r Err ?step phi" and start: "?start <=_?r phi!0" by (blast dest: wt_lbv_wt_step) from list have [simp]: "length phi = length ins" by simp have "length (map ok_val phi) = length ins" by simp from l have 0: "0 < length phi" by simp with step obtain phi0 where "phi!0 = OK phi0" by (unfold wt_step_def) blast with start 0 have "wt_start G C pTs mxl (map ok_val phi)" by (simp add: wt_start_def JVM_le_Err_conv lesub_def) from lbv have chk_bounded: "check_bounded ins et" by (simp add: wt_lbv_def) moreover { from list have "check_types G mxs ?mxr phi" by (simp add: check_types_def) also from step have [symmetric]: "map OK (map ok_val phi) = phi" by (auto intro!: nth_equalityI simp add: wt_step_def) finally have "check_types G mxs ?mxr (map OK (map ok_val phi))" . moreover { let ?app = "λpc. app (ins!pc) G mxs rT pc et" let ?eff = "λpc. eff (ins!pc) G pc et" from chk_bounded have "bounded (err_step (length ins) ?app ?eff) (length ins)" by (blast dest: check_bounded_is_bounded boundedD intro: bounded_err_stepI) from step have "wt_err_step (sup_state_opt G) ?step phi" by (simp add: wt_err_step_def JVM_le_Err_conv) have "wt_app_eff (sup_state_opt G) ?app ?eff (map ok_val phi)" by (auto intro: wt_err_imp_wt_app_eff simp add: exec_def) have "wt_method G C pTs rT mxs mxl ins et (map ok_val phi)" by - (rule wt_method_def2 [THEN iffD2], simp) thus ?thesis .. lemma wt_method_wt_lbv: assumes wf: "wf_prog wf_mb G" assumes wt: "wt_method G C pTs rT mxs mxl ins et phi" assumes C: "is_class G C" assumes pTs: "set pTs ⊆ types G" defines [simp]: "cert ≡ mk_cert G mxs rT et ins phi" shows "wt_lbv G C pTs rT mxs mxl et cert ins" proof - let ?mxr = "1 + length pTs + mxl" let ?step = "exec G mxs rT et ins" let ?app = "λpc. app (ins!pc) G mxs rT pc et" let ?eff = "λpc. eff (ins!pc) G pc et" let ?r = "JVMType.le G mxs ?mxr" let ?f = "JVMType.sup G mxs ?mxr" let ?A = "states G mxs ?mxr" let ?phi = "map OK phi" let ?cert = "make_cert ?step ?phi (OK None)" from wt have 0: "0 < length ins" and length: "length ins = length ?phi" and ck_bounded: "check_bounded ins et" and ck_types: "check_types G mxs ?mxr ?phi" and wt_start: "wt_start G C pTs mxl phi" and app_eff: "wt_app_eff (sup_state_opt G) ?app ?eff phi" by (simp_all add: wt_method_def2) have "semilat (JVMType.sl G mxs ?mxr)" by (rule semilat_JVM_slI) (rule wf_prog_ws_prog [OF wf]) hence "semilat (?A, ?r, ?f)" by (unfold sl_triple_conv) have "top ?r Err" by (simp add: JVM_le_unfold) have "Err ∈ ?A" by (simp add: JVM_states_unfold) have "bottom ?r (OK None)" by (simp add: JVM_le_unfold bottom_def) have "OK None ∈ ?A" by (simp add: JVM_states_unfold) from ck_bounded have bounded: "bounded ?step (length ins)" by (clarsimp simp add: exec_def) (intro bounded_lift check_bounded_is_bounded) with wf have "mono ?r ?step (length ins) ?A" by (rule wf_prog_ws_prog [THEN exec_mono]) hence "mono ?r ?step (length ?phi) ?A" by (simp add: length) from wf have "pres_type ?step (length ins) ?A" by (rule exec_pres_type) hence "pres_type ?step (length ?phi) ?A" by (simp add: length) from ck_types have "set ?phi ⊆ ?A" by (simp add: check_types_def) hence "∀pc. pc < length ?phi --> ?phi!pc ∈ ?A ∧ ?phi!pc ≠ Err" by auto from bounded have "bounded (exec G mxs rT et ins) (length ?phi)" by (simp add: length) have "OK None ≠ Err" by simp from bounded length app_eff have "wt_err_step (sup_state_opt G) ?step ?phi" by (auto intro: wt_app_eff_imp_wt_err simp add: exec_def) hence "wt_step ?r Err ?step ?phi" by (simp add: wt_err_step_def JVM_le_Err_conv) let ?start = "OK (Some ([],(OK (Class C))#(map OK pTs)@(replicate mxl Err)))" from 0 length have "0 < length phi" by auto hence "?phi!0 = OK (phi!0)" by simp with wt_start have "?start <=_?r ?phi!0" by (clarsimp simp add: wt_start_def lesub_def JVM_le_Err_conv) from C pTs have "?start ∈ ?A" by (unfold JVM_states_unfold) (auto intro: list_appendI, force) have "?start ≠ Err" by simp note length have "wtl_inst_list ins ?cert ?f ?r Err (OK None) ?step 0 ?start ≠ Err" by (rule lbvc.wtl_complete [OF lbvc.intro, OF lbv.intro lbvc_axioms.intro, OF Semilat.intro lbv_axioms.intro]) from 0 length have "phi ≠ []" by auto from ck_types have "check_types G mxs ?mxr ?cert" by (auto simp add: make_cert_def check_types_def JVM_states_unfold) note ck_bounded 0 length show ?thesis by (simp add: wt_lbv_def lbvjvm_def mk_cert_def check_cert_def make_cert_def nth_append) theorem jvm_lbv_correct: "wt_jvm_prog_lbv G Cert ==> ∃Phi. wt_jvm_prog G Phi" proof - let ?Phi = "λC sig. let (C,rT,(maxs,maxl,ins,et)) = the (method (G,C) sig) in SOME phi. wt_method G C (snd sig) rT maxs maxl ins et phi" assume "wt_jvm_prog_lbv G Cert" hence "wt_jvm_prog G ?Phi" apply (unfold wt_jvm_prog_def wt_jvm_prog_lbv_def) apply (erule jvm_prog_lift) apply (auto dest: wt_lbv_wt_method intro: someI) thus ?thesis by blast theorem jvm_lbv_complete: "wt_jvm_prog G Phi ==> wt_jvm_prog_lbv G (prg_cert G Phi)" apply (unfold wt_jvm_prog_def wt_jvm_prog_lbv_def) apply (erule jvm_prog_lift) apply (auto simp add: prg_cert_def intro: wt_method_wt_lbv)
{"url":"http://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/library/HOL/HOL-MicroJava/LBVJVM.html","timestamp":"2014-04-20T18:57:39Z","content_type":null,"content_length":"36327","record_id":"<urn:uuid:5afe5838-8c05-42da-94bb-935036f21bdc>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Book Excerpt: The wrong math This is an excerpt from The Five Stages of Collapse: Survivors' Toolkit . Please order your copy for shipment in May. An argument can be made that lending at any rate of interest above 0 percent eventually leads to a deflationary collapse followed by a quick but painful bout of hyperinflation thrown in at the very end. A positive interest rate requires exponential growth, and exponential growth, of anything, anywhere, can only produce one outcome: collapse. This is because it quickly outpaces any sustainable physical process in the universe, outside of a few freak cases such as a sustained nuclear explosion, where the entire universe blows up, taking all of us with it, along with all of our debts. Here is a thought experiment that illustrates this point. Suppose we solve every technical problem on Earth and go on and colonize space, found space colonies and take over the solar system, and the galaxy, and other galaxies, and the entire universe (which may not be infinite, which would give us another cause for eventual collapse, but let's ignore that for the moment). As everyone knows, space empires aren't cheap, and to get our space empire started we borrow some money, at an introductory low rate of interest (after somehow convincing the lenders that building a space empire is a low-risk proposition). Suppose we expand this empire at close to the speed of light (since it requires infinite energy to accelerate a finite mass to the speed of light). A space empire expanding even at the speed of light in all three dimensions will only grow as (time cubed). (Let's ignore the fact that initially, while taking over the solar system and the Milky Way galaxy, which are both flat, our empire will only be able to expand in two dimensions.) Meanwhile, our empire's debt will grow as (debt raised to the power of time). And here is the problem: it is a mathematical certainty that as time passes ( increases), debt grows faster than empire for any initial amount of debt. Exponential growth outpaces any physical process. D^t ≫ t^3 Suppose the empire's engineers struggle mightily with this problem and, after taking on even more debt to finance research and development, eventually invent “warp speed,” which flouts the laws of physics and allows our space empire to expand faster than the speed of light. But to their surprise, debt just keeps increasing. Eventually they discover the answer: even at “warp-10,” which is ten times the speed of light, debt is still increasing faster than the empire: D^t ≫ (10t)^3 One brilliant engineer (who happens to be a fan of the band Spinal Tap) hits on a brilliant idea and invents “warp-11.” Everyone is hopeful that this invention will give the imperial growth rate “that extra push over the cliff” and allow it to catch up with its ballooning debt. But this too is to no avail, because… D^t ≫ (11t)^3 Perplexed, the engineers wander back to their drawing boards. Then one remembers a film he saw once — The Adventures of Buckaroo Banzai Across the 8th Dimension — and is struck by a brilliant thought: what if they were to actually invent the circuitry that allowed Buckaroo to penetrate solid matter and travel across eight dimensions instead of just three? Then their space empire could expand across eight dimensions at the same time! They get to work, and quickly cobble together the Oscillation Overthruster (Buckaroo's fully automatic 12-volt cigarette lighter socket plug-in unit, not Dr. Lizardo's bulky foot-operated floor-mounted kludge). A bit more effort is required to compensate for strange hyper-relativistic effects when using the Overthruster at “warp-11” (Buckaroo's was only tested at just over Mach-1) but once they got the hang of it their space empire starts expanding across eight dimensions at eleven times the speed of light, quickly conquering and enslaving the Lectroids of Planet 10, along with billions and billions of others. Alas, it soon turns out that even this rate of growth is nowhere near fast enough to keep up with their debts, because… D^t ≫ (11t)^8 After some more fruitless exploration, the engineers decide to overcome their innate distrust of mathematicians and invite one to join the team, hoping he might be able to explain why this keeps happening to them. The mathematician asks for cocktails to be brought in and, once he sees that the engineers are sufficiently inebriated to take the edge off the bad news, he grabs a cocktail napkin and furiously scribbles out a proof (here omitted for clarity). His proof attempts to demonstrate that exponentially expanding debt will eventually outpace the rate of growth of anything finite that grows at any finite speed across any finite number of dimensions. He then goes on to say that “things get much more interesting once we move into the infinite domain,” and, even more enigmatically, “we can always renormalize it later.” Shocked, they dismiss the mathematician and, grasping at straws, hire a shaman. The shaman listens to their problem and tells them to wait until sundown for an explanation. Then he asks them to shut off electricity everywhere and to join him outside in the middle of the parking lot and form a circle around him. Once their eyes adjust to the darkness, he points to the sky and says: “Look at the darkness between the stars. See it? There is nothing there. Now look at the stars. That is all there is.” Since the engineers all happen to know that what they can see is limited by the age of the universe and the speed of light, not the size of the universe, which, for all they know, could very well be infinite, they dismiss the shaman, turn the lights back on, drink some more, then nurse their hangovers. One of them (a bit of a class clown) writes “It's The Exponential, Stupid!” on a placard and pins it up on a wall next to their cubicle farm. But then another one, a fan of The Time Machine , the 1895 novella by H. G. Wells, hits on yet another brilliant idea. What if they invent a time machine, and go back in time to settle the empire's debts? With that, the math would finally work in their favor, because they could go back in time and pay each loan off with just the principal. Being short of cash, they try taking out another large loan to finance the development of the time machine, but their creditors balk, declaring the project “too risky.” And so the empire defaulted on its debts. Shortly thereafter, it turned out that it was no longer able to secure the financing it needed to continue interplanetary oxygen shipments, and they all died of asphyxiation. Conclusion: borrowing at interest is fine provided you have plans for a time machine and enough cash on hand to actually build one, go back in time and pay off the loan with just the principal; it is not recommended otherwise. This was an excerpt from The Five Stages of Collapse: Survivors' Toolkit . Please order your copy for shipment in May. 15 comments: Hence the ancient prohibition against usury.... Of course, there is another solution, which to my understanding has never been tried: set a date, say, 50 years in the future, when all debts will be forgiven. The story - like life itself - isn't complete without... Father War! The Galactic Empire quickly runs into a mass shortage of the Unobtainium needed to power everybody's Overthrusters. There follows a cataclysmic Galactic Great War, where Translation Rays are fired left and right, rotating hundreds of populated planets into the 9th dimension - separating them from the light of their stars and all shipping traffic to die a most cruel death. The Empire's population thus remains stable, until the carnage repeats. The cycle is revisited again and again, until all of the Unobtainium-bearing planets end up in the 9th dimension, and everybody moves themselves and their stellar bodies there. Or until the heat death of the Universe. I was checking out some steady state economy website and it struck me that steady state is an unnatural state of affairs. What happens with animal populations and apparently also with economies is exponential growth corrected by periodic collapse to create a kind of sawtooth shaped wave. Zoom in close enough and you're just looking at the exponential growth portion of the curve. Zoom out and you get a saw tooth. If we look at the evolution of life on earth we also get exponential growth up to the point where the resources are maxed out. After that we're back to the old expand and collapse theme. Evolution is nature's way of trying to get around the limits of a maxed out ecosystem where every niche is occupied and everyone's growth is maxed out. Evolve and figure out how to eat something that currently isn't being eaten or figure out how to eat it more efficiently than is currently being done. To Mr. Wheeler I would say that debt forgiveness or bankruptcy or outright collapse are all perfectly acceptable responses to exponential growth. I personally endorse Mr. Orlov's prescription to collapse early and beat the rush. All they'd have to do is get the Government to mix a Vodka Gimli: guarantee the loan, ie load all the risk onto the back of the public, funnel all the profits into the pockets of the Oligarchs and Voila!: "Certainty of death, small chance of success ... What are we waiting for?" John D. Wheeler Funny guy. Of course that solution was tried. Its in the torah and you know it. The Jubilee is held every 7x7 years. Dmitry, good show. You've even managed to capture one of the most neglected, most rarely addressed major articles of modern Ferengi faith: that the market is the sole determinant of value; that everything is, can and ought to be a market good; that nothing not worth selling is worth having. Mmm... it seems to me, and it can be tried with pennies on the table, that interest by itself is not necessarily the undoing, provided it is directly returned to the economy. Compound interest, however, is a Ponzi scheme. Georgescu-Rogen called the steady state a mirage: "The crucial error consists in not seeing that not only growth, but also a zerogrowth state, nay, even a declining state which does not converge toward annihilation, cannot exist forever in a finite environment." Exponential growth leads to collapse when growth can no longer can be fueled by available resources. This law of the universe has mathematical certainty and consequently collapse is also mathematically certain. A steady state economy is possible but transitioning society to a sustainable mathematical framework would require the cooperation of people who hold power. Unfortunately people who hold power are also the same people who profit most from exponential growth. Only convincing power holders that it is in their best interest to facilitate social change to a sustainable economy offers any hope for the future. The question is how to do that considering that average thoughts cannot fathom anything beyond immediate self interest. It's a tough nut to crack and the disgusting thing is that it really shouldn't be because collapse means that everyone looses, even those who profit most from exponential growth. 152Had I not already preordered your book, Dmitry, I would do so now based solely on that Buckaroo Bonzai reference. @ John D. Wheeler I have read that a certain Rabbi was crucified 2000 years ago for having the temerity to suggest bringing back the jubilee after a few centuries during which it had been ignored. There certainly are a lot of sermons and parables attributed to him that emphasize helping the poor. I'm an atheist myself, but it seems the two major religions in the U.S. are unlimited growth, and greed. So doesn't it mean that money is the source of the problem? Do You think it is or ever will be possible to stop using it. I think for all of humanity it would be like growing up, just like a teenager matures and becomes an adult. I wouldn't worry. The system will be successfully delevered by depreciating currencies against gold. It's what always happens. Mankind has been living under this roman extortion scheme for so long now, it is nearly impossible for men to wrap their mind around anything else. There has to be class division, there has to be extortion, fraud and hoarding of wealth, there has to be war, and there has to be billions living in squalor so that self deluded psychopaths can have their opulence. The only reason money exists, in any form or type of currency, is so the owners of money can speculate upon and devalue the labor of working men. It is cleverly disguised as a means of exchange between men for goods and services. In reality it is a scheme of debt enslavement no different than the mining community company script, paid in just small enough increments that the worker can almost buy enough food from the company store to feed himself and his family. The value of everything is speculated on by mystery men in high places, using money as the basis of their speculation. In reality, nothing material, or the cash to buy it with, is worth anything. Even the value of the almighty gold is completely speculative. Some scheister somewhere decides that X amount of gold is equal to Y amount of labor using their debt notes as the medium of exchange. By gradually and continually raising the speculative price, while at the same time paying fewer debt notes for the hours of labor. Using this method, you can essentially squeeze the people that actually work into utter desperation. The money supply, the speculation, and all of the paper can disappear in an instant, but the value of the labor remains constant. Just because labor has been devalued and crapped upon by those that never worked a day in their life doesn't mean that the intellectual and physical value of working men isn't necessary and highly valuable. When it all tanks, will you, the reader, have a skillset above drinking Starbucks Lattes and pushing buttons on a keyboard? Will you be unable to waddle your way into securing yourself and your family? Or will you have the necessary ability to think critically, manage full liability and lead to rebuild. It is the downer cattle that will fight for the system as it exists with murderous intent, even if they don't pull the trigger. Here is a book that lends an interesting angle to this discussion: DeAngelis, D. L., W. M. Post, and C. C. Travis. 1986. Positive Feedback in Natural Systems. Berlin: Springer Verlag. The idea being that, similar to what you see in electronics, exponential rise due to positive feedback leads to resonance or multi-stable conditions. Curious about the architects of such an economic system, and there may be some who are smart enough to decouple themselves from the sudden end of growth.
{"url":"http://cluborlov.blogspot.com/2013/02/book-excerpt-wrong-math.html","timestamp":"2014-04-21T04:33:11Z","content_type":null,"content_length":"113193","record_id":"<urn:uuid:1d951b2b-9ab7-4a35-870d-44d3bdcbee7a>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
Alternative to tree diagrams? July 22nd 2009, 08:14 PM #1 Nov 2008 Alternative to tree diagrams? Hi i just wanted to know is there an alternative to tree diagrams? Like what happens if there is a question that requires a massive tree diagram is there a formula or some other method you could There are. However at this stage in the game, they want you to just recognize the idea behind the eventual equations you will get. For example, when you are introduced to the binomial distribution formula, if you go back and do a tree-digram for it, you will see (and understand) how and why such a formula works; instead of just thinking its a magic equation. ok could you do this hypothetical question if there are 10 dice what is the probability of rolling six 3's and four 5's. This would be next to impossible to do with a tree diagram. I have never done binomials so could you explain what you are doing I believe in the fact you are describing (it has been awhile since I took statistics), you would actually have to use a multinomial distribution as a binomial distribution assumes the "result" belongs to two outcomes (such as true/false on a quiz of 10 questions), whereas in the case of throwing a die, there are 6 possible outcomes for each of the ten throws. However, the multinomial distribution is what the call the "general" (or generic perhaps) form of the binomial, because you are plugging in "2" as the number of outcomes (and probably because it occurs so much in statistics work they wanted to give a two outcome multinomial distribution a name). I don't know how to create equations but here is a link the formula and using it. Essentially you are taking the number of "trials" (dice throws in your case), divinding that 6!4! (which corresponds to you wanting six 3's and four 5's) and multiplying that by ((1/6)^6)((1/6)^4) (which can be of course simplified but might be easier as is to appreciate: 1/6 is the probability of getting a 3 or a 5, and you are raising it to the number of 3's and 5's that you want - 6 and 4). That number, is the probability of getting six 3's and four 5's. Now, imagine a tree diagrim in your head of that problem. Not the entire tree but the general idea of it: you're going to have several arrangments of numbers right? All of the arrangments on their own have a probability of (1/6)^10 of being chosen: since 1/6 is the individual chance of getting a number on a dice, and (by the multiplication rule) you are doing that 10 times. So (1/6)^ 10 denotes the probability of getting any 10-dice combination. In the multinomial distribution equation, this probability of getting any 10-die combination of numbers, is the right side of the Now the left side, where we divide N! trials by 6!4! - that is the number of possible arrangements of 3's and 5's. Because in that tree diagram in your head - theres more than ONE way to get six 3's and four 5's right? If the goal is only to get 3's and 5's, regardless of when I get them in those 10 throws, I have many ways of going about doing that: thus, I need to know HOW MANY ways I can get six 3's and four 5's. Once I have that number, I simply multiply that by the probability of a specific 10-die combination of numbers - and viola that is the probability of getting six 3's and four 5's in 10 throws of a (fair) dice! July 23rd 2009, 08:41 AM #2 Super Member Jul 2009 July 23rd 2009, 06:56 PM #3 Nov 2008 July 24th 2009, 08:39 AM #4 Super Member Jul 2009
{"url":"http://mathhelpforum.com/statistics/95845-alternative-tree-diagrams.html","timestamp":"2014-04-18T10:44:12Z","content_type":null,"content_length":"39063","record_id":"<urn:uuid:d86e874e-8f4f-4649-bd13-7af83bc5b5b2>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex Numbers Complex Numbers resources Adding and Subtracting Complex Numbers This mobile phone video explains how complex numbers can be added or subtracted. There is an accompanying leaflet. Sigma resource Unit 4. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Division of complex numbers This mobile phone video explains how to divide complex numbers. Sigma resource Unit 7. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Imaginary numbers and quadratic equations This mobile phone video shows how the imaginary number i can be used in the solution of some quadratic equations. Sigma resource Unit 2. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Motivating the study of complex numbers This mobile phone download introduces complex numbers by explaining how it is useful to be able to formally write down the square root of a negative number. Sigma resource Unit 1. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Multiplying complex numbers This mobile phone video explains how complex numbers can be multiplied together. There is an accompanying leaflet. Sigma resource Unit 5. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. The Argand Diagram This mobile phone video explains how complex numbers can be represented pictorially using an Argand Diagram. Sigma resource Unit 8. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. The complex conjugate This mobile phone video explains what is meant by the complex conjugate of a complex number. Sigma resource Unit 6. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. The modulus and argument of a complex number This mobile phone video explains how to calculate the modulus and argument of a complex number. Sigma resource Unit 9. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. The polar form of a complex number This mobile phone video explains what is meant by the polar form of a complex number. Sigma resource Unit 10. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. What is a complex number This mobile phone video explains what is meant by a complex number, and how to find its real and imaginary parts. Sigma resource Unit 3. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Adding and Subtracting Complex Numbers This mp4 video explains how complex numbers can be added or subtracted. There is an accompanying leaflet. Sigma resource Unit 4. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Division of complex numbers This mp4 video explains how to divide complex numbers. Sigma resource Unit 7. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Imaginary numbers and quadratic equations This mp4 video shows how the imaginary number i can be used in the solution of some quadratic equations. Sigma resource Unit 2. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Motivating the study of complex numbers This mp4 video introduces complex numbers by explaining how it is useful to be able to formally write down the square root of a negative number. Sigma resource Unit 1. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Multiplying complex numbers This mp4 video explains how complex numbers can be multiplied together. There is an accompanying leaflet. Sigma resource Unit 5. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. The Argand Diagram This mp4 video explains how complex numbers can be represented pictorially using an Argand Diagram. Sigma resource Unit 8. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. The complex conjugate This mp4 video explains what is meant by the complex conjugate of a complex number. Sigma resource Unit 6. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. The modulus and argument of a complex number This mp4 video explains how to calculate the modulus and argument of a complex number. Sigma resource Unit 9. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. What is a complex number This mp4 video explains what is meant by a complex number, and how to find its real and imaginary parts. Sigma resource Unit 3. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Motivating the study of complex numbers This unit introduces complex numbers by explaining how it is useful to be able to formally write down the square root of a negative number. Sigma resource Unit 1. Multiplying complex numbers This leaflet explains how complex numbers can be multiplied together. There are accompanying videos. Sigma resource Unit 5. The Argand Diagram This leaflet explains how complex numbers can be represented pictorially using an Argand Diagram. There are accompanying videos. Sigma resource Unit 8. The complex conjugate This leaflet explains what is meant by the complex conjugate of a complex number. There are accompanying videos. Sigma resource Unit 6. What is a complex number This leaflet explains what is meant by a complex number, and how to find its real and imaginary parts. Sigma resource Unit 3. Maths EG Computer-aided assessment of maths, stats and numeracy from GCSE to undergraduate level 2. These resources have been made available under a Creative Common licence by Martin Greenhow and Abdulrahman Kamavi, Brunel University. Adding and Subtracting Complex Numbers This video explains how complex numbers can be added or subtracted. There is an accompanying leaflet. Sigma resource Unit 4. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Division of complex numbers This video explains how to divide complex numbers. Sigma resource Unit 7. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Imaginary numbers and quadratic equations This video shows how the imaginary number i can be used in the solution of some quadratic equations. Sigma resource Unit 2. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Motivating the study of complex numbers This video introduces complex numbers by explaining how it is useful to be able to formally write down the square root of a negative number. Sigma resource Unit 1. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. Multiplying complex numbers This video explains how complex numbers can be multiplied together. There is an accompanying leaflet. Sigma resource Unit 5. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. The Argand Diagram This video explains how complex numbers can be represented pictorially using an Argand Diagram. Sigma resource Unit 8. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. The complex conjugate This video explains what is meant by the complex conjugate of a complex number. There is an accompanying leaflet. Sigma resource Unit 6. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. The modulus and argument of a complex number This video explains how to calculate the modulus and argument of a complex number. Sigma resource Unit 9. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. The polar form of a complex number This video explains what is meant by the polar form of a complex number. Sigma resource Unit 10. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre. What is a complex number This video explains what is meant by a complex number, and how to find its real and imaginary parts. Sigma resource Unit 3. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by mathcentre.
{"url":"http://www.mathcentre.ac.uk/students/courses/materials/complexnumbers/","timestamp":"2014-04-17T01:26:00Z","content_type":null,"content_length":"30426","record_id":"<urn:uuid:25a44a6a-b7a5-4d81-801e-ba4e6b3a25d8>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
Woodland Park, NJ Algebra 2 Tutor Find a Woodland Park, NJ Algebra 2 Tutor ...As an educator, I recognize that the student must come first. This attitude has always served me well in patience with exploring new topics and needing to explore alternate routes of explanation. I do hold high expectations on both parties, and do understand that this is a process that evolves as a deeper relationship is formed. 9 Subjects: including algebra 2, calculus, physics, algebra 1 ...I believe that most complex topics can be simplified into lessons that are digestible and accessible to dedicated students. I have 10 years+ of experience tutoring privately and in a classroom, as well as for test preparation organizations. I excel in standardized tests and would be happy to coach you to improve above and beyond your expectations. 44 Subjects: including algebra 2, reading, English, writing ...The problem is that the tutoring program only works during the school year, and I am away during that time now because I am in college. And because I am a college student I am in need of a little money. I found this site as a great way to continue something I know I am good at and still make a little money this summer. 2 Subjects: including algebra 2, algebra 1 ...I am currently student teaching in Lawrence High School this semester. I believe in helping students learn mathematical concepts and develop relational understandings, rather than just memorize procedure. If you want you child to understand math and be able to do math, I am the right tutor for you!I am going to be a teacher in one year. 27 Subjects: including algebra 2, reading, Spanish, statistics ...I have been tutoring for the last 10 years both professionally and as a volunteer. The use of multiple approaches to learning has been integral to my success in helping my students achieve their goals. Though varied approaches customized to your learning style, I will help you reach those breakthrough moments when topics that may have given you trouble suddenly become clearly 22 Subjects: including algebra 2, chemistry, calculus, physics Related Woodland Park, NJ Tutors Woodland Park, NJ Accounting Tutors Woodland Park, NJ ACT Tutors Woodland Park, NJ Algebra Tutors Woodland Park, NJ Algebra 2 Tutors Woodland Park, NJ Calculus Tutors Woodland Park, NJ Geometry Tutors Woodland Park, NJ Math Tutors Woodland Park, NJ Prealgebra Tutors Woodland Park, NJ Precalculus Tutors Woodland Park, NJ SAT Tutors Woodland Park, NJ SAT Math Tutors Woodland Park, NJ Science Tutors Woodland Park, NJ Statistics Tutors Woodland Park, NJ Trigonometry Tutors Nearby Cities With algebra 2 Tutor Cedar Grove, NJ algebra 2 Tutors Clifton, NJ algebra 2 Tutors Elmwood Park, NJ algebra 2 Tutors Glen Rock, NJ algebra 2 Tutors Haledon algebra 2 Tutors Hawthorne, NJ algebra 2 Tutors Hillcrest, NJ algebra 2 Tutors Little Falls, NJ algebra 2 Tutors North Caldwell, NJ algebra 2 Tutors North Haledon, NJ algebra 2 Tutors Paterson, NJ algebra 2 Tutors Totowa algebra 2 Tutors Verona, NJ algebra 2 Tutors Wallington algebra 2 Tutors Wayne, NJ algebra 2 Tutors
{"url":"http://www.purplemath.com/Woodland_Park_NJ_algebra_2_tutors.php","timestamp":"2014-04-21T12:37:29Z","content_type":null,"content_length":"24578","record_id":"<urn:uuid:ee602903-f966-4e84-86ca-59115d1917ea>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 5. Generic Adaptive Mesh Refinement GPU Gems 3 GPU Gems 3 is now available for free online! Please visit our Recent Documents page to see all the latest whitepapers and conference presentations that can help you with your projects. You can also subscribe to our Developer News Feed to get notifications of new material on the site. Chapter 5. Generic Adaptive Mesh Refinement Tamy Boubekeur LaBRI–INRIA, University of Bordeaux Christophe Schlick LaBRI–INRIA, University of Bordeaux In this chapter we present a single-pass generic vertex program for performing adaptive, on-the-fly refinement of meshes with arbitrary topology. Starting from a static or animated coarse mesh, this vertex program replaces each triangle with a refined triangular patch, chosen according to the required amount of local geometry refinement, among a set of pre-tessellated patterns stored in GPU memory. By encoding these patterns in parametric space, this one-to-many, on-the-fly triangle substitution is cast as a simple barycentric interpolation of a vertex displacement function, which is either user-provided or computed from existing data. In addition to vertex displacement, the same process can further be used to interpolate any other per-vertex attribute during the refinement process. The method is totally generic in the sense that no restriction is ever made about the mesh topology, the displacement function, or the refinement level. Several illustrative applications are presented here, including full GPU implementations of (1) mesh smoothing with higher-order Bézier patches, (2) high-frequency geometry synthesis with procedural displacement functions, (3) animated free-form deformations, (4) standard displacement mapping with guaranteed shading and silhouette consistency, and (5) multilevel terrain rendering, to name only a few. But the technique is virtually applicable to any problem that uses predictable generation of geometry by refinement. 5.1 Introduction Mesh refinement is a powerful technique for representing 3D objects with complex shapes. Rather than enumerate the huge number of polygons that would be required to get an accurate discrete approximation of such a complex shape, mesh refinement techniques split the surface representation into a coarse polygonal mesh combined with a continuous displacement function. Then, at rendering time, two successive operations are basically performed on the coarse mesh: • Tessellation, for generating a refined mesh topology at the desired level of detail • Displacement, for translating each newly inserted vertex to its final position, obtained by sampling the continuous displacement function More precisely, the role of the tessellation step is to split each polygon of the coarse mesh into a (possibly huge) set of small triangles without performing any actual geometric deformation. The role of the displacement step is to add small-scale geometric details by moving the vertices of these triangles along a vector provided by the displacement function. Depending on this function, the displacement of each vertex can either be constrained along its normal vector or be performed along an arbitrary vector. While the former solution is more compact and easier to apply on an animated object, the latter allows the creation of much more complex shapes for a given coarse mesh. Popular displacement methods include bitmap textures (such as grayscale height-fields) and procedural 3D textures (such as Perlin noise). Many existing computer graphics techniques can be expressed under this paradigm, such as spline-based or wavelet-based surface representation, subdivision surfaces, hierarchical height fields, and more. However, performing a full GPU implementation of this two-stage process remains a problem with current devices. Although the traditional vertex shader allows an efficient computation of the displacement stage, the lack of geometry creation on the GPU makes the tessellation stage really tricky. Recently a geometry shader (Blythe 2006) has been designed for geometry upscale, but it suffers from a strong limitation, as it can output 1,024 floats at most, which means that only two or three levels of refinement can be applied on each triangle. If deeper refinement is required, multipass geometry shading has to be employed, which obviously reduces overall performance. On the contrary, the vertex program proposed in this chapter allows very deep adaptive, single-pass mesh refinement even on three-generations-old GPUs. Basically, it relies on barycentric coordinates to perform a consistent, crack-free adaptive tessellation. One major advantage of such an on-the-fly implementation of mesh refinement is to deal only with low-resolution meshes at the CPU level, letting the GPU adaptively generate the high-resolution displaced meshes. With our method, this target mesh is never generated on the CPU, never transmitted on the graphics bus, and even never stored on the GPU; the only remaining bottleneck is the GPU's vertex-processing horsepower. 5.2 Overview The generic adaptive mesh refinement (GAMeR) technique that we present in this chapter offers the following features: • Standard geometry representations used by common rendering APIs (polygon soups or indexed triangle sets) can be employed as-is, without any preprocessing (such as global or local parameterization) and without any of the additional data structures often required by refinement techniques (such as half-edge structure). • Only the coarse mesh is transmitted from the CPU to the GPU. The only additional data needed by GAMeR is a per-vertex scalar attribute, called depth tag, that indicates the level of detail required in the vicinity of each vertex. Note that this depth-tagging may be generated either automatically or under user supervision. • Because the mesh refinement is performed on-the-fly, on a per-frame/per-triangle basis, arbitrary level of detail can be obtained, even for animated meshes. • The whole two-stage adaptive mesh refinement (tessellation and displacement) is performed on the GPU by a single-pass vertex program, which totally frees the fragment shaders for including additional visual enrichments. The workflow architecture used by GAMeR is presented in Figure 5-1. The key idea is to precompute all the possible refinement configurations of a single triangle, for various per-vertex depth tags, and encode them using barycentric coordinates. Each possible configuration is called an adaptive refinement pattern (ARP) and is stored once for all on the GPU, as a vertex buffer object. Then, at rendering time, the attributes of each polygon of the coarse mesh, as well as the attributes of the displacement function, are uploaded to the GPU (by using uniform variables, for instance) and the adequate ARP is chosen according to the depth tags. Finally, the vertex program simultaneously interpolates the vertices of the current coarse polygon, and the displacement function, by using the barycentric coordinates stored at each node of the ARP. The first interpolation generates the position of the node on the polygon (tessellation) and the second one translates the node to its final position (displacement). 5.3 Adaptive Refinement Patterns During the initialization step of GAMeR, all possible ARPs are computed once for all and stored in a 3D matrix, called the ARP pool, as shown in Figure 5-2a. An element {i, j, k} of this matrix is the ARP corresponding to a triangle refined at depth i on its first edge, depth j on the second edge, and depth k on the last one. Since depth values are stored on a per-vertex basis, the order in which the edges are enumerated does not matter. The diagonal of the matrix corresponds to the case of uniform refinement (all edges are refined at the same depth). All other cases have to deal with adaptive refinement, because each edge may require a different depth. A simple, but not optimal, way to generate the ARP for a nonuniform depth-tag configuration is to uniformly refine the initial triangle until reaching the minimum depth of the three edges. Then, in the neighborhood of the remaining edges, the border triangles are simply split to reach the correct refinement depth for each edge. The upper pattern in Figure 5-2b has been obtained with this simple algorithm applied on the {3, 4, 5} depth-tag configuration. To get more equilateral triangles, a larger support for adaptive refinement may be employed. The lower pattern in Figure 5-2b shows an alternative topology for the same {3, 4, 5} configuration. 5.3.1 Implementation As already mentioned, each node of the ARP is encoded by using its barycentric coordinates. The very valuable benefit of this approach is that only a single pattern is required for a given depth configuration, whatever the position, orientation, and shape of any coarse triangle it will substitute during the refinement step. Note, therefore, that in the vertex program, the barycentric coordinates of the refined vertices will take the place of the usual position (gl_Vertex). Thus, the geometric attributes of the coarse triangle have to be transmitted in another way. For deep enough refinements or recent graphics devices, uniform variables can be used safely with regard to performance. The ARP is the central structure of our system. In order to achieve maximum performance at rasterization time, the ARP is encoded as an indexed vertex buffer of degenerated triangle strips, directly in the GPU memory. Moreover, because we use dyadic refinement, each refinement level is actually a superset of the previous one, so we can further reduce the global memory footprint by separating the geometry from the topology. A vertex buffer is used to encode all the geometry, as the set of barycentric coordinates for the nodes that belong to the deepest regular ARP. Then the topology of any given ARP is encoded by using an index buffer, as an indexed strip over this maximum configuration. So, at rendering time, the only action to perform is to bind the index buffer of the selected APR, while always keeping the same vertex buffer. In restricted conditions, such as PDAs with 16-bit precision, this encoding allows a maximum refinement level of 256x256 for each coarse triangle. At the other extreme, with a modern GPU, we have experienced real-time performance when using 1024x1024 refinement per coarse triangle, in the context of procedural high-frequency geometry synthesis. Even higher resolutions can easily be obtained if required, because our kernel fully runs in object space and does not depend on the screen resolution. 5.4 Rendering Workflow 5.4.1 Depth-Tagging The depth-tagging process provides an efficient and flexible solution to control the level of adaptive refinement of the input coarse mesh. At the CPU level, the application provides a per-vertex scalar attribute (a positive integer, in our implementation) that indicates the level of detail required in the vicinity of each vertex. In practice, common choices for computing the depth tag may include the camera-to-vertex distance, the local curvature, the semantic importance of the object, the saliency, or any combination of these values. Figure 5-3 presents two different mesh refinements generated by GAMeR on the same coarse mesh, by using either a distance-based tagging or a curvature-based one. Note that, in some specific cases, the depth-tagging may also be performed at the GPU level, by using a preliminary rendering pass with render-to-vertex-buffer functionalities. However, we believe that this is usually not a good choice, mainly for two reasons: • The depth-tagging is computed on the coarse mesh, which contributes little overhead at the CPU level. • The depth-tagging may depend on various criteria, most of which are not easily available at the GPU level. Once the depth-tagging has been performed, the attributes of each coarse polygon are uploaded as uniform variables, and the depth-tag configuration is used to select the adequate ARP's index buffer. Note that edges are not explicitly represented in most real-time 3D engines. Thus we compute depth tags on a per-vertex basis, and then we convert these values to per-edge depth tags simply by using the mean value of the two adjacent vertices. This ensures a crack-free transition between neighboring triangles. 5.4.2 The CPU-Level Rendering Loop At the CPU level, the application just has to maintain the per-vertex depth tags, bind the adequate index buffer from the ARP pool, and draw it, as shown in Listing 5-1. Example 5-1. Pseudocode for the Rendering Loop on the CPU GLuint ARPPool[MaxDepth][MaxDepth][MaxDepth]; . . . void render(Mesh M) { if (dynamic) for each Vertex V of M do V.tag = computeRefinementDepth (V); for each CoarseTriangle T of M do { Note that the number of bind operations can be greatly reduced by clustering coarse triangles according to their depth-tag configuration. Similarly, displacement attributes (such as standard displacement maps, parameters of procedural functions, or coefficients for spline-based or wavelet-based smoothing) are either uploaded once for all at initialization, or on a per-frame basis in the case of animated displacement. 5.4.3 The GPU-Level Refinement Process The vertex program contains three stages: (1) a tessellation stage, which simply interpolates the vertices of the coarse triangle; (2) a displacement stage, which samples and interpolates the continuous displacement function; and (3) the traditional shading stage. In Listing 5-2, we use simple linear interpolation for both stages, but higher-order interpolation can be used, with possibly different orders for each attribute (for example, linear interpolation for vertex positions, coupled with quadratic interpolation for normal vectors). Example 5-2. Pseudocode of the Refinement Vertex Program on the GPU const uniform vec3 p0, p1, p2, n0, n1, n2; // User-defined Displacement Function float dispFunc(vec3 v) {. . .} void main(void) { // Tessellation by barycentric interpolation float u = gl_Vertex.y, v = gl_Vertex.z, w = gl_Vertex.x; // w=1-u-v gl_Vertex = vec4 (p0*w + p1*u + p2*v, gl_Vertex.w); gl_Normal = n0*w + n1*u + n2*v; // User-defined Displacement Function float d = dispFunc(gl_Vertex.xyz); gl_Vertex += d * gl_Normal; // Shading and Output . . . 5.5 Results In this section, we present several examples created with GAMeR. Most of them use simple curvature-based and distance-based depth-tagging. Refinement depth ranges from 4 to 10 (that is, from 16x16 to 1024x1024 refined triangles). In Figure 5-4, a mesh smoothing is performed with triangular Bézier patches, using either curved PN triangles (Vlachos et al. 2001) or scalar tagged PN triangles (Boubekeur et al. 2005) to include additional sharp features. In this case, the displacement attributes transmitted on the graphics bus are reduced to a few Bézier parameters per coarse triangle. Figure 5-5 illustrates another interesting feature of our generic refinement method: because no conversion or preprocessing of the coarse input mesh is required, it can be animated in real time, while always being consistently refined. As shown on Figure 5-6, this flexibility also ensures a consistent adaptive refinement of a surface with arbitrary topologies. Another application of our refinement kernel is the use of procedural refinements. In this case, complex shapes can be represented as very simple meshes equipped with procedural functions. These functions may exhibit very high frequencies, requiring a deep level of tessellation for accurate sampling. Examples are shown in Figure 5-7. With the use of vertex texturing, displacement maps can also be used with our kernel. Figure 5-8a shows a terrain rendering system using our kernel for refining coarse ground in a view-dependent fashion while displacing it with a height map. Figures 5-8b and 5-8c show a displaced refinement of a scanned human face. In general, the best overall performance is obtained with the highest refined size versus coarse size ratio. The refinement can be about three orders of magnitude faster than its equivalent CPU implementation. With recent GPU unified architectures, vertex texture fetches can be performed very efficiently, which allows the use of more and more displacement maps in real-time applications. Our generic refinement technique is then a good candidate for saving CPU workload, graphics bus bandwidth, and on-board graphics memory. Table 5-1 illustrates the frame rates obtained by our implementation on an NVIDIA GeForce 8800 GTX for various models presented earlier. Table 5-1. Frame Rates Achieved Using GAMeR Model Input (CPU) (Triangles) Depth Tag Displacement Output (GPU) (Millions of Triangles) Frame Rate (FPS) Robot 1,246 Curvature + Distance Bézier (STPN) 1.1 263 Hand 546 Distance Procedural 2.1 155 Face 1,914 Curvature Displacement Map 4.0 58 Terrain 98 Distance Height Map 6.4 44 Globally, if the refinement depth is low and the input CPU mesh is large, the system is bottlenecked by the upload of coarse polygon attributes. At the other extreme, if the input CPU mesh is coarse and the refinement is deep, the system is bottlenecked only by the GPU horsepower. For instance, with a target mesh size of one million triangles, an input CPU mesh of 65,000 triangles results in an average GPU refinement depth of 2 and rendering performed at 38 frames/sec. With a CPU mesh of 4,000 triangles, the average GPU refinement depth is 4, and the rendering reaches 279 frames/sec. This makes the system particularly interesting for applications requiring a huge refinement depth, such as CAD or scientific visualization. 5.6 Conclusion and Improvements We have presented an efficient single-pass vertex shading technique for performing real-time adaptive mesh refinement. This technique is particularly interesting when the input mesh is coarse and the refinement is deep. This technique can be combined with the geometry shader by using the geometry shader for low refinements (such as a depth of 1 or 2) and then switching to our kernel for deeper refinements. The tagging system makes our method generic and allows us to integrate it in a 3D engine by just adding a per-vertex attribute. The killer application of our method is clearly the case of dynamic coarse mesh (such as an animated character face or a soft body) equipped with a displacement function, where the CPU application just has to maintain the coarse mesh while still having very high resolution objects on screen. Among the possible improvements of this technique, we can mention the use of alternative refinement patterns, with different polygons distribution, as well as the implementation of true subdivision surfaces, where the displacement function is based on their parametric form instead of their recursive definition. 5.7 References Blythe, David. 2006. "The Direct3D 10 System." In ACM Transactions on Graphics (Proceedings of SIGGRAPH 2006) 25(3), pp. 724–734. Bolz, Jeff, and Peter Schroder. 2003. "Evaluation of Subdivision Surfaces on Programmable Graphics Hardware." http://multires.caltech.edu/pubs/GPUSubD.pdf. Boubekeur, Tamy, Patrick Reuter, and Christophe Schlick. 2005. "Scalar Tagged PN Triangle." Eurographics 2005. Boubekeur, Tamy, and Christophe Schlick. 2005. "Generic Mesh Refinement on GPU." In Proceedings of the SIGGRAPH/Eurographics Workshop on Graphics Hardware 2005, pp. 99–104. Boubekeur, Tamy, and Christophe Schlick. 2007. "A Flexible Kernel for Adaptive Mesh Refinement on GPU." Computer Graphics Forum, to appear. Bunnell, Michael. 2005. "Adaptive Tessellation of Subdivision Surfaces with Displacement Mapping." In GPU Gems 2, edited by Matt Pharr, pp. 109–122. Addison-Wesley. Shiue, Le-Jeng, Ian Jones, and Jorg Peters. 2005. "A Realtime GPU Subdivision Kernel." In ACM Transactions on Graphics (Proceedings of SIGGRAPH 2005) 24(3), pp. 1010–1015. Vlachos, Alex, Jörg Peters, Chas Boyd, and Jason Michel. 2001. "Curved PN Triangles." In Proceedings of SIGGRAPH 2001 Symposium on Interactive 3D Graphics, pp. 159–166.
{"url":"http://http.developer.nvidia.com/GPUGems3/gpugems3_ch05.html","timestamp":"2014-04-17T02:02:58Z","content_type":null,"content_length":"40202","record_id":"<urn:uuid:858aca39-390d-4a24-afe3-f0d333005672>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex ballistic formula? September 21, 2011, 02:28 AM B = ballistic coefficient W = weight of bullet M = muzzle velocity H = height of sights above bore Z = zero range D = distance of bore sight Given a bullet w/ a known BC, weight, & MV, the height of sights above the bore, & the desired zero range, what is the distance at which the bore sight and the sight line coincide? Farmers Fight!
{"url":"http://www.thehighroad.org/archive/index.php/t-615822.html","timestamp":"2014-04-21T02:14:18Z","content_type":null,"content_length":"48249","record_id":"<urn:uuid:499ee35a-19e1-4e38-8a23-166f23672cc4>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Weekend miscellany 4 comments on “Weekend miscellany” 1. I used your Sicherman’s Dice link and related information in a post of my own. Thanks for the tip! 2. In high school, I spent hours trying to find two dice with the same probability as standard dice. My method was just brute force trial and error, writing sums in a 6×6 table. I never did figure it out. Much later I saw that the problem can be solved easily using the factorization of polynomials. Behold the power of math! 3. Speaking of dice, have you already seen the Mathematician’s Dice on kickstarter.com? They have: the imaginary number i; the additive identity 0, the multiplicative identity 1, the golden ratio φ (1.618…); Euler’s number e (2.718…); and the circular constant π (3.141…). 4. To be explicit, the Mathematician’s Dice project can be found on this page on kickstarter.com:
{"url":"http://www.johndcook.com/blog/2011/04/01/weekend-miscellany-82/","timestamp":"2014-04-20T20:56:30Z","content_type":null,"content_length":"30893","record_id":"<urn:uuid:96cbb816-f762-49ae-a81b-3f329eba6c5f>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
Significance of Rational Numbers Date: 01/11/2003 at 11:24:12 From: Chirag Subject: Significance of rational numbers We know that any rational number can be constructed using integers, and that an irrational number can be constructed using a sequence of rational numbers, but isn't that entirely due to the way rational numbers are defined? Why are rational numbers defined the way they are? Date: 01/12/2003 at 12:05:01 From: Doctor Kastner Subject: Re: Significance of rational numbers Hi Chirag - Why are the rationals defined the way they are? That's an interesting question. Let me address it from a formal point of view, and to do that we need to talk about abstract algebra a little. Abstract algebra is the branch of mathematics that studies number systems and their associated operations. An example will help make this clearer. A group (G) is a set of elements, together with a binary operator (.) that satisfies the following properties: 1) Closure. If a and b are in G, a.b is in G 2) Associativity: If a,b,c are in G, then a.(b.c) = (a.b).c 3) Identity. There is an identity element (I) in G such that a.I = a for every element in G 4) Inverse. For each element a in G, there is an inverse b = a^(-1) such that a.(a^(-1)) = I I should make it clear that the a^(-1) does not necessarily mean 1/a. We are dealing with generic inverses here. Groups are all around us in math. The classic example is the set of integers together with the binary operator +. It is clear that it is closed and associative, and the identity is 0, while the inverse of a is -a. But notice that the integers with the operator * do NOT form a group. Multiplication of the integers satisfies the first three properties (the identity is 1), but given an element a, the inverse element 1/a is not always an integer. Even worse, what if the number you want to find an inverse of is 0? Other topics of study in abstract algebra include (in order of increasing number of properties) rings and fields. In addition, rings and fields require two operations (+ and * if we are talking about the rationals), not just one. Groups, rings and fields are all connected, and it is common for mathematicians to start with a group and try to expand it into a ring or a field. The group of integers under + forms a group, but suppose I want to make a field using that as my base. First, I know that I need another operation, in this case *. Both operators in a field must also satisfy the group properties (except for a multiplicative inverse for the 0 element), and we also need commutivity (a+b = b+a, a*b= b*a) and the distributive property (a*(b+c) = a*b+a*c)). Those extra ones aren't bad, but I still don't have the right set of numbers for the multiplicative inverse. If I consider the integer a, my field must also contain the inverse, 1/a. So let's put those in the set. But since this should be closed, I need to include the numbers b*1/a in my field too. Now put those in. You can see where this is going. The result is the rational numbers. It turns out that the rationals are the simplest field that can be constructed using the integers as a starting point. Finally, I should mention that the set of irrational numbers does not form a field (or even a group for that matter) since it isn't closed under +. Pi and its additive inverse -pi are both in the collection, but their sum is 0 which is not an irrational number. I hope this helps. Write back if you're still stuck, or if you have other questions. - Doctor Kastner, The Math Forum Date: 01/16/2003 at 12:23:50 From: Chirag Subject: Thank you (significance of rational numbers ) Thanks so much for answering my question so promptly. It was really very satisfying to get such a comprehensive answer.
{"url":"http://mathforum.org/library/drmath/view/62019.html","timestamp":"2014-04-16T14:21:34Z","content_type":null,"content_length":"8696","record_id":"<urn:uuid:4a3bbfcb-46c0-40c3-a498-51bd40d51c84>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
Does higher order arithmetic interpret the axiom of choice? up vote 8 down vote favorite By second order arithmetic I mean the axiomatic theory $Z_2$, that is Peano arithmetic extended by second order variables with the full comprehension axiom, and not defined semantically using power set in ZF. By third order arithmetic I mean that extended by third order variables and the comprehension axiom. And so on. Does each of these have an inner model which also satisfies the axiom of choice in each order, using constructibility? If not, do such inner models exist if we also extend induction to a higher order axiom? Is there a good reference on it? lo.logic arithmetic If its helpful to know, second order arithmetics is a first order theory. The set variable are not actually second order variables. Usually the language would include a unary predicate that indicates something is a "set". – William Aug 3 '12 at 3:15 3 I'm not sure I understand the question. On the one hand, it seems to me semantically impossible to have, say, a model of seventh-order arithmetic in which the seventh-order variables were well-ordered: what would this well-ordering consist of? On the other hand, given $n$, I can take a well-founded model $M$ of $V=L$ and look at the first $n$ many powersets of $\omega$. It seems to me that this gives a model of $n$-th order arithmetic in which the first $n-1$ many sorts are well-ordered. Am I understanding the question correctly? – Noah S Aug 3 '12 at 3:25 William, yes people often say things like "axiomatic second order arithmetic $Z_2$ is a first order theory." Yet $Z_2$ is still widely called second order arithmetic. So I tried to be clear that I am asking about an axiomatic theory with stated axioms and not about what people call the full second order semantics. Noah, yes what you say is right. So my question is what does it take to get an inner model of $V=L$ in $Z_n$ without recourse to ZF. – Colin McLarty Aug 3 '12 at 12:51 add comment 1 Answer active oldest votes There is quite a bit of this in Simpson's book Subsystems of Second Order Arithmetic in the specific context of second-order arithmetic. Here are three relevant results: Corollary VII.5.11 (conservation theorems). Let $T_0$ be any one of the $L_2$-theories $\Pi^1_\infty\text{-CA}_0$, $\Pi^1_{k+1}\text{-CA}_0$, $\Delta^1_{k+2}\text{-CA}_0$, $0 ≤ k < \infty$. Let $\phi$ be any $\Pi^1_4$ sentence. Suppose that $\phi$ is provable from $T_0$ plus $\exists X \forall Y (Y ∈ L(X ))$. Then $\phi$ is provable from $T_0$ alone. Here $\Pi^1_\infty\text{-CA}_0$ has the full comprehension scheme for second order arithmetic, and hence also the full induction scheme. Theorem VII.6.16 ($\Sigma^1_{k+3}$ choice schemes). The following is provable in $\text{ATR}_0$. Assume $\exists X \forall Y (Y ∈ L(X ))$. Then: up vote 7 down 1. $\Sigma^1_{k+3}\text{-AC}_0$ is equivalent to $\Delta^1_{k+3}\text{-CA}_0$. vote accepted 2. $\Sigma^1_{k+3}\text{-DC}_0$ is equivalent to $\Delta^1_{k+3}\text{-CA}_0$ plus $\Sigma^1_{k+3}\text{-IND}$. 3. Strong $\Sigma^1_{k+3}\text{-DC}_0$ is equivalent to $\Pi^1_{k+3}\text{-CA}_0$. 4. $\Sigma^1_\infty \text{-DC}_0$ ($=\bigcup_{k < \omega} \Sigma^1_k\text{-DC}_0$ ) is equivalent to $\Pi^1_\infty\text{-CA}_0$. Corollary IX.4.12 (conservation theorem). For all $k <\omega$, $\Sigma^1_{k+3}\text{-AC}_0$ (hence also $\Delta^1_{k+3}\text{-AC}_0$ ) is conservative over $\Pi^1_{k+2}\text{-CA} _0$ for $\Pi^1_4$ sentences. 1 It looks ok to me. What looks wrong to you? – David Roberts Aug 3 '12 at 5:12 @David Roberts: the math is rendering for me now as well. Yesterday, it was not rendering, and only displayed as raw TeX. It may have been a network issue with MathJaX, since I thought my connection was slow last night in other ways. – Carl Mummert Aug 3 '12 at 10:38 add comment Not the answer you're looking for? Browse other questions tagged lo.logic arithmetic or ask your own question.
{"url":"http://mathoverflow.net/questions/103835/does-higher-order-arithmetic-interpret-the-axiom-of-choice?sort=votes","timestamp":"2014-04-19T04:33:19Z","content_type":null,"content_length":"59056","record_id":"<urn:uuid:4d4c01f3-e9d3-49b1-b2c8-2ac3b4c3c136>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
Common Investment Returns Today we shall start to look at the concept of investment returns. A relatively straightforward concept. The return is the gain or loss you experience on an investment. Pretty easy compared to our risk analysis. Or is it? While the above definition applies, investment returns are slightly more complex. There are a variety of return calculations. The importance of each depends on the individual investor’s personality and circumstances. Below we will look at three common return calculations. Total Return Total Return equals: (Sale Proceeds – Purchase Price + All Cash Flows + Reinvestment Income)/Purchase Price Net cash flows include interest and dividend income. It may include interest expense on any debt used to finance the investment. And taxes payable on any gains or income incurred. Transaction costs are sometimes factored into net cash flows. I prefer to add them to the purchase price and deduct them from the sale proceeds. Alternatively, you can track them separately as investment expenses should you so desire. Reinvestment income is the income you earn on income received. Much like in our discussion of compound returns. For example, you purchase one share of ABC for $10. You receive a dividend of $1.00. You put that cash dividend receipt into your personal savings account and earn $0.10 in interest. You then sell the share for $12. Total Return is the capital gain (sale proceeds minus purchase price), the dividend income (net cash flow), and the interest income earned from the cash dividend (reinvestment income). In this example, it is $3.10 or 31%. When calculating, investors often forget to factor in the reinvestment income. 31% sounds like an excellent return on ABC. But is it? Total Return relates to the return over a period of time. That period may be any duration. One day, one year, one century. What if I told you that you bought and sold the stock in one week. Then the return is impressive. But what if I said that you bought the stock in 1970 and sold it in 2010. On an annual basis, 31% over 40 years may not be that attractive. Or what if I offered you two investments. One provides a Total Return of 100%. The other, 10%. You would obviously be tempted by the first. However, if the holding period for option one is five years and only five weeks for option two, your decision might change. That is a big problem when people speak of Total Returns. Without any context of time, it is hard to assess the relevance of total return as a performance measurement. So when someone talks to you about returns, make sure you put it in a time context. Annual Return A simple way to calculate Annual Return is modify the Total Return calculation, Annual Return equals: (Value at End of Year – Value at Start of Year + Year’s Net Cash Flow + Year’s Reinvestment Income)/Value at Start of Year This formula acts as if you bought the investment at the start of the year and sold it at year’s end. In using this formula, you can quickly compare performance of different investments over the same time horizon. Holding Period Return You may also come across something called a Holding Period Return. HPR = (Ending Value/Beginning Value) – 1. This is like the Total Return except it does not include net cash flows nor reinvested income. If you invest in assets with significant cash flow aspects (e.g. bonds, preferred shares), you will be missing out on a material portion of actual return by ignoring cash flow and reinvested income. But if you invest in common shares of small capitalized (“small cap”) growth stocks you likely will not receive any dividend income. In this case, Holding Period Return will equal Total Return. You can calculate Holding Period Return for any combination of time periods. Just determine a beginning and end date and you are set. A Lesson to Remember There are other returns that you will see. We may consider a few more in due course. If you learn the equations, or have a decent financial calculator, calculating investment returns is not difficult. But always remember to compare apples to apples and oranges to oranges when calculating and analyzing returns. Depending on the asset and conditions, different return calculations can yield materially different results. Make certain that you use the correct ones to arrive at the best conclusions. And if someone tells you the expected or historic returns are 15% (for example), make sure you know exactly which type of return they are using. With a variety of return options, you will usually be informed of the one that is most favourable to the person telling you. And that may not be in your best interest. Of course, what a great site and informative posts, I will add backlink and bookmark this web site? Regards, Reader.
{"url":"http://personalwm.com/2010/06/16/common-investment-returns/","timestamp":"2014-04-17T10:17:09Z","content_type":null,"content_length":"42745","record_id":"<urn:uuid:a3a716b2-9f69-48d0-8da8-acf665cf14f0>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Discussion regarding Mr. Diabys algorithm Yes Sir Hofman, I agree with you, regarding what you have said above. From theory, I am 100% sure that Diaby's Algorithm is capable of outputting a solution that is:- 1.) not a TSP tour. 2.) not optimal, if it is a TSP tour Do you think you will be able to show us some computer-implementation results (hopefully by the end of this week), which will clearly show this to the world ?? Or if not by end of this week, could you please give us an estimate, of how much more time will be necessary. Your work will be very crucial, in proving to the world, that Diaby's super-massive model, has a hole. :-) Relevant Pages • [QUIZ] Some observations on Pen and Paper (#90) or to make it fast, choose your algorithm wisely ... The fast ones seem to be based on Warnsdorff's algorithm for the ... Knight's tour which was published in 1823. ... the year 1758, by a Swiss. ... I know that a Springer is the German word for a chess knight, ... • Re: matrix ... >> but only if your algorithm doesn't require computing indices. ... >> Knight's tour appears to require this. ... But if you did use them, lists would be an inefficient rep. ...
{"url":"http://coding.derkeiler.com/Archive/General/comp.theory/2006-11/msg00086.html","timestamp":"2014-04-24T13:34:06Z","content_type":null,"content_length":"10803","record_id":"<urn:uuid:c4693046-d161-496e-b02f-b4788f5675d6>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
a : array_like Input array or object that can be converted to an array. q : float in range of [0,100] (or sequence of floats) Percentile to compute which must be between 0 and 100 inclusive. axis : int, optional Axis along which the percentiles are computed. The default (None) is to compute the median along a flattened version of the array. out : ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. overwrite_input : bool, optional If True, then allow use of memory of input array a for calculations. The input array will be modified by the call to median. This will save memory when you do not need to preserve the contents of the input array. Treat the input as undefined, but it will probably be fully or partially sorted. Default is False. Note that, if overwrite_input is True and the input is not already an array, an error will be raised.
{"url":"http://docs.scipy.org/doc/numpy-1.7.0/reference/generated/numpy.percentile.html","timestamp":"2014-04-19T19:35:49Z","content_type":null,"content_length":"14427","record_id":"<urn:uuid:eb898288-b622-4a8b-bcdb-bf78e9b2c7de>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: how many monkeys can swing on 1 tree ? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50eafadde4b07cd2b6487a06","timestamp":"2014-04-20T13:59:29Z","content_type":null,"content_length":"248116","record_id":"<urn:uuid:a3c496a3-fefa-4bf9-bdf5-b108d525ea74>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
Fachbereich Mathematik 31 search hits Wavelet-based Adaptive Multiresolution Tools Applied to Speech Recognition (2006) Andreas Simon * naive examples which show drawbacks of discrete wavelet transform and windowed Fourier transform; * adaptive partition (with a 'best basis' approach) of speech-like signals by means of local trigonometric bases with orthonormal windows. * extraction of formant-like features from the cosine transform; * further proceedingings for classification of vowels or voiced speech are suggested at the end. Time-Dependent Cauchy-Navier Splines and their Application to Seismic Wave Front Propagation (2006) Paula Kammann Volker Michel In this paper a known orthonormal system of time- and space-dependent functions, that were derived out of the Cauchy-Navier equation for elastodynamic phenomena, is used to construct reproducing kernel Hilbert spaces. After choosing one of the spaces the corresponding kernel is used to define a function system that serves as a basis for a spline space. We show that under certain conditions there exists a unique interpolating or approximating, respectively, spline in this space with respect to given samples of an unknown function. The name "spline" here refers to its property of minimising a norm among all interpolating functions. Moreover, a convergence theorem and an error estimate relative to the point grid density are derived. As numerical example we investigate the propagation of seismic waves. The Semiconductor Model Hierarchy in Optimal Dopant Profiling (2006) Concetta Drago Rene Pinnau We consider optimal design problems for semiconductor devices which are simulated using the energy transport model. We develop a descent algorithm based on the adjoint calculus and present numerical results for a ballistic diode. Further, we compare the optimal doping profile with results computed on basis of the drift diffusion model. Finally, we exploit the model hierarchy and test the space mapping approach, especially the aggressive space mapping algorithm, for the design problem. This yields a significant reduction of numerical costs and programming effort. The enumeration of plane tropical curves (2006) Hannah Markwig Tropical geometry is a rather new field of algebraic geometry. The main idea is to replace algebraic varieties by certain piece-wise linear objects in R^n, which can be studied with the aid of combinatorics. There is hope that many algebraically difficult operations become easier in the tropical setting, as the structure of the objects seems to be simpler. In particular, tropical geometry shows promise for application in enumerative geometry. Enumerative geometry deals with the counting of geometric objects that are determined by certain incidence conditions. Until around 1990, not many enumerative questions had been answered and there was not much prospect of solving more. But then Kontsevich introduced the moduli space of stable maps which turned out to be a very useful concept for the study of enumerative geometry. A well-known problem of enumerative geometry is to determine the numbers N_cplx(d,g) of complex genus g plane curves of degree d passing through 3d+g-1 points in general position. Mikhalkin has defined the analogous number N_trop(d,g) for tropical curves and shown that these two numbers coincide (Mikhalkin's Correspondence Theorem). Tropical geometry supplies many new ideas and concepts that could be helpful to answer enumerative problems. However, as a rather new field, tropical geometry has to be studied more thoroughly. This thesis is concerned with the ``translation'' of well-known facts of enumerative geometry to tropical geometry. More precisely, the main results of this thesis are: - a tropical proof of the invariance of N_trop(d,g) of the position of the 3d+g-1 points, - a tropical proof for Kontsevich's recursive formula to compute N_trop(d,0) and - a tropical proof of Caporaso's and Harris' algorithm to compute N_trop(d,g). All results were derived in joint work with my advisor Andreas Gathmann. (Note that tropical research is not restricted to the translation of classically well-known facts, there are actually new results shown by means of tropical geometry that have not been known before. For example, Mikhalkin gave a tropical algorithm to compute the Welschinger invariant for real curves. This shows that tropical geometry can indeed be a tool for a better understanding of classical geometry.) The Dynamics of Viscous Fibers (2006) Satyananda Panda This work deals with the mathematical modeling and numerical simulation of the dynamics of a curved inertial viscous Newtonian fiber, which is practically applicable to the description of centrifugal spinning processes of glass wool. Neglecting surface tension and temperature dependence, the fiber flow is modeled as a three-dimensional free boundary value problem via instationary incompressible Navier-Stokes equations. From regular asymptotic expansions in powers of the slenderness parameter leading-order balance laws for mass (cross-section) and momentum are derived that combine the unrestricted motion of the fiber center-line with the inner viscous transport. The physically reasonable form of the one-dimensional fiber model results thereby from the introduction of the intrinsic velocity that characterizes the convective terms. For the numerical simulation of the derived model a finite volume code is developed. The results of the numerical scheme for high Reynolds numbers are validated by comparing them with the analytical solution of the inviscid problem. Moreover, the influence of parameters, like viscosity and rotation on the fiber dynamics are investigated. Finally, an application based on industrial data is performed. Stop Location Design in Public Transportation Networks: Covering and Accessibility Objectives (2006) Dwi Retnani Poetranto Horst. W. Hamacher Simone Horn Anita Schöbel In StopLoc we consider the location of new stops along the edges of an existing public transportation network. Examples of StopLoc include the location of bus stops along some given bus routes or of railway stations along the tracks in a railway system. In order to measure the ''convenience'' of the location decision for potential customers in given demand facilities, two objectives are proposed. In the first one, we give an upper bound on reaching a closest station from any of the demand facilities and minimize the number of stations. In the second objective, we fix the number of new stations and minimize the sum of the distances between demand facilities and stations. The resulting two problems CovStopLoc and AccessStopLoc are solved by a reduction to a classical set covering and a restricted location problem, respectively. We implement the general ideas in two different environments - the plane, where demand facilities are represented by coordinates and in networks, where they are nodes of a graph. Statistical aspects of setting up a credit rating system (2006) Beatriz Clavero Rasero The new international capital standard for credit institutions (“Basel II”) allows banks to use internal rating systems in order to determine the risk weights that are relevant for the calculation of capital charge. Therefore, it is necessary to develop a system that enfolds the main practices and methods existing in the context of credit rating. The aim of this thesis is to give a suggestion of setting up a credit rating system, where the main techniques used in practice are analyzed, presenting some alternatives and considering the problems that can arise from a statistical point of view. Finally, we will set up some guidelines on how to accomplish the challenge of credit scoring. The judgement of the quality of a credit with respect to the probability of default is called credit rating. A method based on a multi-dimensional criterion seems to be natural, due to the numerous effects that can influence this rating. However, owing to governmental rules, the tendency is that typically one-dimensional criteria will be required in the future as a measure for the credit worthiness or for the quality of a credit. The problem as described above can be resolved via transformation of a multi-dimensional data set into a one-dimensional one while keeping some monotonicity properties and also keeping the loss of information (due to the loss of dimensionality) at a minimum level. Semi-Simultaneous Flows and Binary Constrained (Integer) Linear Programs (2006) Alexander Engau Horst W. Hamacher Linear and integer programs are considered whose coefficient matrices can be partitioned into K consecutive ones matrices. Mimicking the special case of K=1 which is well-known to be equivalent to a network flow problem we show that these programs can be transformed to a generalized network flow problem which we call semi-simultaneous (se-sim) network flow problem. Feasibility conditions for se-sim flows are established and methods for finding initial feasible se-sim flows are derived. Optimal se-sim flows are characterized by a generalization of the negative cycle theorem for the minimum cost flow problem. The issue of improving a given flow is addressed both from a theoretical and practical point of view. The paper concludes with a summary and some suggestions for possible future work in this area. Quasiregular Projective Panes of Order 16 -- A Computational Approach (2006) Marc Röder This thesis discusses methods for the classification of finite projective planes via exhaustive search. In the main part the author classifies all projective planes of order 16 admitting a large quasiregular group of collineations. This is done by a complete search using the computer algebra system GAP. Computational methods for the construction of relative difference sets are discussed. These methods are implemented in a GAP-package, which is available separately. As another result --found in cooperation with U. Dempwolff-- the projective planes defined by planar monomials are classified. Furthermore the full automorphism group of the non-translation planes defined by planar monomials are classified. Pareto Navigation - Interactive multiobjective optimisation and its application in radiotherapy planning (2006) Michael Monz This thesis introduces so-called cone scalarising functions. They are by construction compatible with a partial order for the outcome space given by a cone. The quality of the parametrisations of the efficient set given by the cone scalarising functions are then investigated. Here, the focus lies on the (weak) efficiency of the generated solutions, the reachability of effiecient points and continuity of the solution set. Based on cone scalarising functions Pareto Navigation a novel, interactive, multiobjective optimisation method is proposed. It changes the ordering cone to realise bounds on partial tradeoffs. Besides, its use of an equality constraint for the changing component of the reference point is a new feature. The efficiency of its solutions, the reachability of efficient solutions and continuity is then analysed. Potential problems are demonstrated using a critical example. Furthermore, the use of Pareto Navigation in a two-phase approach and for nonconvex problems is discussed. Finally, its application for intensity-modulated radiotherapy planning is described. Thereby, its realisation in a graphical user interface is
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/15997/start/0/rows/10/yearfq/2006/sortfield/title/sortorder/desc","timestamp":"2014-04-17T10:48:50Z","content_type":null,"content_length":"48010","record_id":"<urn:uuid:35ea6760-f228-4e81-be9c-d3e5212766fd>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Percentage of Increase Date: 07/23/97 at 13:45:49 From: Lisa Zone Subject: Percent increases I need to figure out the percent increase for an article I am writing. The figure in 1995 was 50,000 and the estimated figure for 2000 is 325,000. I need to figure out how much of a percent increase that is. I know 100,000 would be a 100 percent increase, right? But then, how do I go to the 325,000? I need the figure to add piece. Can you Date: 07/24/97 at 15:40:54 From: Doctor Rob Subject: Re: Percent increases The increase is 325,000 - 50,000 = 275,000. As a fraction of the original this is 275,000/50,000 = 11/2 = 5.50 . To convert a decimal to a percentage, multiply by 100. This gives a percentage increase of 550 percent. See http://mathforum.org/dr.math/faq/faq.fractions.html for more on fractions, decimals, and percentages. -Doctor Rob, The Math Forum Check out our web site! http://mathforum.org/dr.math/ Date: 08/03/97 at 13:41:26 From: Doctor Terrel Subject: Re: Percent increases Dear Lisa, You have every right to feel a little confused on a problem like this. It has been my experience that problems involving percents greater than 100 are confusing to the majority of people. But never fear; Dr. Math is here! I would attack this problem in two stages: (1) find the "amount" of increase; then (2) find what percent that the increase is of the base (1) The amount is rather easy. It's 275,000 [325,000 - 50,000]. (2) When I wish to find out what percent one thing is of another (regardless if it is greater or less than 100 percent), I set things up this way: base value x n% = percentage value In your problem this comes out as 50,000 x n% = 275,000 n% = --------- = 5.5 = 550% You can choose two ways to express your answer now. One is to say: there will be a 550% increase by the year 2000. Or you can say: in the year 2000 the (new value) - you didn't say what the numbers represented, so I'm a little confused right here - will be about five and a half times greater than what it was in 1995. Many people don't quite grasp those phrases, especially the latter one. Instead you might wish to say it this way: in 2000 the (new value) will be 6 and a half times what it was in 1995. The difference in the wording is subtle, of course, but important. The number 6 1/2 comes from --------- = 6.5 or 6 1/2 which is NOT a percent increase situation. Either way is acceptable, however. Use whichever you feel more comfortable with, and your writing will be more clear. Good luck on your article. -Doctor Terrel, The Math Forum Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/58131.html","timestamp":"2014-04-16T19:55:06Z","content_type":null,"content_length":"7831","record_id":"<urn:uuid:3af2e677-0d10-403b-aa07-81d031c0d7cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Our research focuses on: Switching our current energy supply to renewable sources poses one of the greatest technological and social challenges of humankind. A successful transition in particular requires an intelligent upgrade of the current electric power grid. So-called 'smart grids' may provide part of the solution by enabling the transmission of demand and supply information across the grid online, thereby adapting energy production and distribution, and thus aiming to control the entire grid. However, stable operation as well as failures on large scales already today are consequences of the collective dynamics of the power grid and are often caused by non-local mechanisms. We thus urgently need to understand the intrinsic network dynamics on the large scale to complement partial solutions of control engineering and to be able to develop efficient strategies for operating the future grid. We thus develop and analyze appropriate coarse-scale models of future power grids with an emphasis on increasingly distributed demand and supply. First results show several intriguing features. For instance, the addition of new transmission lines may *destabilize* power grid operation (via Braess paradox that we identified in oscillator networks). In addition, replacing the few large power plants by many small and distributed ones may stabilize grid operation, at least in the stationary (short-time) regime. How can distributed or autonomous systems control themselves to function properly? We are developing a novel line of research: Advancing the theory of chaos control we strive to bring it to applications for making autonomous robots more versatile and more self-organized Particles as well as waves flowing through a weak disorder potential show surprising branching effects. This has most notably been observed in two-dimensional electron gases, but is a phenomenon of much wider applicability. We study the formation of the caustics responsible for the branching of the flow. Many particles interacting nonlinearly often give rise to very complex behavior. This is true even for apparently simple systems, such as those in thermal equilibrium. For instance, particles with a spin that are anti-ferromagnetically coupled may give rise to positive ground state entropy, an exception to the third law of thermodynamics. Investigating such complex ground states immediately leads us to hard enumeration problems in graph theory and computer science. Here we try to understand basic features of complex macroscopic states and in parallel develop tools for analytically and computationally addressing large system with complex ground states and related graph theoretical problems. Making sense of huge neural data sets that contain spikes as well as temporally more coarse information constitutes a challenging task of current research. This is even more so as. for instance, the number of units possible to recorded from simulataneously, increases at a rapid pace. In this project we are developing novel methods of nonlinear time series analysis to relate dynamical quantities of neural activity on different temporal and spatial scales. We currently focus on relating the often precisely timed occurrence of spikes to the temporal evolution of local field potentials and low frequency oscillations via modern phase analysis techniques. How does the interaction topology of a complex network control its dynamics? Can we infer information about how a network is connected from dynamics measurements only? We address theoretical and practical aspects of such questions using mathematical modeling studies for general network dynamical systems and neural networks in particular. Coordinated patterns of precisely timed activity is a key ingredient for neural information processing. This project investigates the theoretical fundamentals underlying the mechanisms for generating precisely timed spikes in complex neural networks. The dynamics of Bose-Einstein condensates in leaky optical lattices is studied (in the mean field limit). For some critical values of the interatomic interaction strength, the current of atoms leaving the trap exhibits avalanches that follow a power-law distribution and indicate the existence of a novel phase transition. We study the influence of disorder on the pseudo-hermitian phase of (generalized) PT-symmetric systems. How do social animals communicate? Social whales may be one of the best fit animal model to study complex vocal communication patterns. We are interested in developing an automated classification system for vocalizations of killer whales and pilot whales. Very loosely speaking one can think of this as developing a "speech"-recognition system for whale vocalizations. Another, more general aim of this project is to characterize whale vocalizations in terms of information theoretic measures and to compare them to human languages. Music generated by computers and rhythm machines sometimes sounds unnatural. One reason for this is the absence of small inaccuracies that are part of every human activity. Professional audio software therefore offers a so-called humanizing technique, by which the regularity of musical rhythms can be randomized to some extent. But what exactly is the nature of the inaccuracy in human musical rhythms? Studying this question for the first time, we found that the temporal rhythmic fluctuations exhibit scale-free long-range correlations, i.e., a small rhythmic fluctuation at some point in time does not only influence fluctuations shortly thereafter, but even after tens of seconds. While this characterization is relevant for neurophysiological mechanisms of timing, it also leads to a novel concept for humanizing musical sequences. Comparing with conventionally humanized versions listeners showed a high preference for long-range correlated humanized music over uncorrelated humanized music. If the time-evolution of the mean-squared displacement of some quantity is non-linear, the system is said to exhibit anomalous diffusion. The underlying mechanisms leading to such anomalous diffusion can be multifold. Our group focuses on processes whose anomalous behavior is due to heavy-tailed distributions of either the waiting time distribution between the displacements or of the displacements themselves. Neural networks display characteristics of critical dynamics in the neural activities as theoretically predicted. The power-law statistics for the size of avalanches of neural activity was confirmed in real neurons, where the critical behavior is re-approached even after a substantial perturbation of the parameters of the system. These findings provide evidence for the presence of self-organized criticality (SOC). We study neural network models that exhibit power-law statistics with realistic synaptic mechanisms, such as synaptic depression. The biological function of self-organized criticality is much less understood than the physical mechanisms behind this phenomenon. Critical dynamics seems beneficial to living beings and it is known to bring about optimal computational capabilities, optimal transmission and storage of information, and sensitivity to sensory stimuli. We are interested in the developmental aspects of motor behavior in animals which we study in biomorphic autonomous robots. This project divided into two subgroups: 'Aging effects in selective attention' and 'Dynamic adaptation in decoding temporal information' that both develop computational models to explain effects in cognitive psychology in a more detailed way than usual psychological theories can. These models base on knowledge about biological processes and insights via abstract modeling in computational neuroscience yielding quantitative model predictions that can directly be tested in psychological experiments.
{"url":"http://www.nld.ds.mpg.de/research/projects","timestamp":"2014-04-19T19:34:26Z","content_type":null,"content_length":"80221","record_id":"<urn:uuid:1c8c27ff-9223-41f5-8ca0-681967c0fe66>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
Subgroups of the general linear groups up vote 3 down vote favorite Dear all, I have the following question: Let $M$ be a subgroup of $GL_n(p)$ where $p$ is a prime and let $M'$ denotes the derived subgroup of $M$. Can we conclude that $|M:M'|\leq p^n-1$? Any counterexample or reference is very much appreciated. Thank you in advance. add comment 3 Answers active oldest votes REVISION: In fact, it is a consequence of the (now proved) so-called $k(GV)$-problem that the answer is indeed affirmative when $M$ has order prime to $p,$ and something rather stronger holds. There is a book about the $k(GV)$-problem by Peter Schmid. The $k(GV)$-problem is a special case of a problem of R. Brauer. The $k(GV)$-problem is to prove that if $G$ is a group of order prime to $p$ and $V$ is a faithful finite-dimensional ${\rm GF}(p)G$-module, then the semidirect product $GV$ has at most $|V|$ conjugacy classes. This has now been proved by a combination of authors, the final cases being handled by D.Gluck,K.Magaard, U.Riese and P.Schmid. There are examples when the bound is attained. Its proof does require the classification of finite simple groups. Note that one consequence of the $k(GV)$-problem is that under those hypotheses, $k(G) < |V|,$ where $k(G)$ denotes the number of conjugacy classes of $G.$ In particular, this yields $[G:G^{\prime}] < |V|,$ since $k(G)$ is also the number of complex irreducible characters of $G$, and $[G:G^{\prime}]$ is the number of complex linear characters of Hence this does imply that $[M:M^{\prime}] \leq (p^{n}-1)$ when $M$ is a subgroup of order prime to $p$ of ${\rm GL}(n,p)$ (in fact, we can even conclude that $M$ has at most $p^{n}-1$ conjugacy classes). up vote The answer ( to this MO question) may still be affirmative if you stick to completely reducible subgroups $M$ of ${\rm GL}(n,p),$ though I am nt certain of that. In this situation, these are 6 down essentially the groups with no non-identity normal $p$-subgroups- it is necessary to take a little care here, because the underlying module for $M$ may need to be replaced by the direct sum vote of its composition factors under the action of $M.$ However, if the subgroups $M$ with $O_{p}(M)$ are understood, then that covers all necessary groups at least up to isomorphism. By looking accepted at such subgroups $M,$ you eliminate counterexamples to the question such as those arising in Peter Mueller's answer. However, there are completely reducible subgroups of ${\rm GL}(n,p)$ which are not of order prime to $p,$ such as ${\rm GL}(n,p)$ itself. The completely reducible case reduces almost immediately to the case when $M$ is irreducible. After that, there is work to do: I don't see an immediate elementary argument, but Clifford theory begins to come into play( for example, it may be possibe to reduce to the case where the undelying module for $M^{\ prime}$ is a direct sum of isomorphic irreducible modules). This may well be a difficult queston when $M^{\prime}$ is non-Abelian simple, or when $M$ itself is almost simple. The authors I would check out for relevant results here would be people like : M.Aschbacher,R.Guralnick, P.Kleidman, M.Liebeck, P.Tiep. Guralnick and Tiep and others have been trying to prove variants of the $k(GV)$-problem when $G$ acts completely reducibly on $V$. The general bounds are necessarily somewhat weaker than $|V|$. I do not know at present whether they would provide an affirmative answer to this MO question in the case that $M$ is a completely reducible subgroup of ${\rm GL}(n,p)$, or whether this special case has been considered by authors such as Guralnick and Tiep. Thank you very much for your very detailed explanation. Indeed, I do have the hypothesis that $M$ is an irreducible subgroup of $GL_n(p)$ but I didn't put it in my first post. – Hung Nguyen Mar 17 '13 at 14:18 add comment No, not in general. For instance $GL_4(2)$ is isomorphic to the alternating group $A_8$, and the direct product of the two Klein four groups acting regularly on $1,2,3,4$ and $5,6,7,8$, respectively, is abelian of order $16\gt 2^4-1$. Actually, for any prime $p$, there is an abelian subgroup of order $p^4$ in $GL_4(p)$, just take the matrices of the form $\begin{pmatrix} E & A\\ 0 & E\end{pmatrix}$, where $E$ is the $2\times 2$ identity matrix, and $A$ an arbitrary $2\times 2$ matrix. up vote 9 down vote Generalizing to bigger $n$, there are more drastic counterexamples. I would expect your result to be true if $M$ is a $p'$-group. At ant rate, Maschke + Schur easily show that $\lvert M\rvert\le p^n-1$ when $M$ is an abelian $p'$-group. 4 In general ${\rm GL}_{2n}(q)$ has an abelian subgroup of order $(q-1)q^{n^2}$. – Derek Holt Mar 17 '13 at 11:42 add comment Let $U\leq GL_n(p)$ be the subgroup of upper unitriangular matrices, and let $M\leq U$ be the subgroup with zeros everywhere except in first row and last column. Then $[M\colon M']=p^ up vote 0 down {2(n-2)}$. For $n\geq 4$, this subgroup $M$ is counter-example. add comment Not the answer you're looking for? Browse other questions tagged gr.group-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/124706/subgroups-of-the-general-linear-groups/124715","timestamp":"2014-04-20T16:04:14Z","content_type":null,"content_length":"61743","record_id":"<urn:uuid:7175ae54-a97f-46c1-b45c-de1f7f5bf86c>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: What is true about the solutions of a quadratic equation when the radicand of the quadratic formula is a positive number that is not a perfect square? a No real solutions b Two identical rational solutions c Two different rational solutions d Two irrational solutions • one year ago • one year ago Best Response You've already chosen the best response. The number under the radical (the radicand) is also known as the discriminate. You can determine what kind of solutions based on the discriminate. If it is negative, you will have two complex solutions, or roots. If it is zero, you will have one real solution. If it is positive, you will have two real solutions. Based on your choices and given that the radicand, or discriminate, is positive, you have to decide if your solutions are rational or irrational. Best Response You've already chosen the best response. So now you have a 50/50 choice, c or d. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4ff6f36be4b01c7be8c9b6b3","timestamp":"2014-04-19T04:27:11Z","content_type":null,"content_length":"30540","record_id":"<urn:uuid:63a920aa-3a5d-482a-b99f-c220e00cc5f3>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
How Do You Get 1.21 GigaWatts For Your Time Machine? | Science Blogs | WIRED • By Rhett Allain • 09.16.13 | • 8:24 am | Here’s the important lines in this scene from Back to the Future. Doc: 1.21 gigawatts? 1.21 GIGAWATTS!? Great Scott! Marty: What? What the hell is a gigawatt? Note: they pronounce this as jiga-watt instead of the usual “hard g”. What the hell is a gigawatt? Let’s break it down. A watt is a unit of power. What is power? Power can be one of several things. The most often way to describe it is the change in energy in a certain amount of time. If energy is measured in units of Joules and the time interval is in seconds the power would be in Watts. So, 1 watt = 1 Joule per second. Horsepower is another unit for energy where 1 hp = 746 What about the giga? Giga is a prefix for units that typically means 10^9. This means that 1.21 gigawatts would be 1.21 x 10^9 watts. Is that a large power? Yes. Just for comparison, the nuclear power reactors in a Nimitz Class Aircraft Carrier produces 194 megawatts (1.94 x 10^8 watts). Or perhaps you would like to compare this to the flying S.H.I.E.L.D. Helicarrier. With my calculations, I get a power requirement of 317 gigawatts. What Does Doc Brown Even Mean? How much power does it take to travel through time? Well, Doc said 1.21 gigawatts. To me, this would be like asking how much power it takes to make toast. Yes, you could use a 500 watt toaster. However, you could also use a 250 watt toaster but it would take longer. Maybe there is something special about time travel such that there is both an energy requirement and it has to take place over some time interval. That’s what I am going to assume. If I want to calculate the energy required for time travel, I will need the power (given) as well as the time. How Long Does it Take to Travel in Time? That’s actually looks like a stupid question. Oh well. Let’s take a look at the actual real life footage of the time machine (from the historical archives). Time travel is possible if you get the car up to 88 mph. Is this car going 88 mph? Is there any way to tell? Oh, yes. Yes, there is. All I need to do is to look at this car (a DeLorean) and use video analysis. The clip isn’t perfect, but I think it will give a good enough estimate. I can scale the video using the wheel base of 2.413 meters. Here is a plot of the position of the DeLorean in the first time travel (with the dog in the car). The slope of this line puts the car’s speed at 56.7 m/s (127 mph). Yes, that is faster than 88 mph. I’m not sure why the one frame repeated. Also, there could be a problem with my scale since it was rather difficult to see the car. Here is the next time the car gets to a speed near 88 mph (when Marty first goes back in time). Well that’s not good. This give a speed of 29 m/s (65 mph). For this video, the car isn’t quite up to 88 mph so this seems ok. I guess I should look at the last time travel speed (when Marty goes BACK TO THE FUTURE). Oh, actually there is not a good shot to analyze there. Oh well, the second shot is close enough to 88 mph, that I will just stick with that. What about the time interval? For the first test, I looked at the time from just when the car started to shoot sparks until it “exploded”. This gives a time of 4.3 seconds. But wait! What about the case when a lightning bolt is used to power the car? For that case, the time machine is only getting power for at most 0.46 seconds. So, there are two different time intervals for two different trips through time. Time Travel Energy Now that I have the power AND the time, I can calculate the required energy. Let’s just do it (for both time interval estimates). That’s not so bad. I have an energy range with the high end just a factor of 10 higher. Now, how do you get 5 x 10^8 – 5 x 10^9 Joules? Doc Brown’s first choice was to use plutonium. Although he didn’t give too much of the details, I guess he was using Plutonium-239. Pu-239 is radioactive, but I don’t think that’s how it gave energy in this case. Instead, I guess that there was some type of fission process that broke the nucleus into smaller pieces. Since the pieces have less mass than the original, you also get energy (E = mc^2). The Wikipedia page on plutonium as the details, but let’s just say that one Plutonium atom produces 200 MeV (mega electron volts) in the fission process (3.2 x 10^-11 Joules). In a typical nuclear reactor (which probably wouldn’t use Plutonium-239), this energy is used to increase the temperature of water to make steam. The steam then turns an electric turbine to produce electricity. Clearly, that’s not happening here. I’m not sure what’s going on – but surely it’s not a 100% efficient process. I am going to say it’s 50% efficient. In order to get 5 x 10^8 Joules, I would need: Since 1 Plutonium-239 atom has a mass of 3.29 x 10^-25 kg, this would require a fuel mass of just 1.2 x 10^-5 kg. That seems possible. What about a lightning bolt? Could you get this much energy from lightning? According to Wikipedia, a single bolt of lightning can have about 5 x 10^9 Joules. That would be just perfect for the time traveling machine. But what if I think lightning and Plutonium energy sources are just plain boring? Maybe batteries would be an interesting way to power this machine. How many AA batteries would you need? From a previous post, I already know that a high quality AA battery has about 10,000 Joules of energy. In order to get 5 x 10^8 Joules, I would need 5 x 10^4 AA batteries. Of course, that assumes that I could completely drain these batteries in just half a second. Damn, those things would get hot. Clearly, there are other questions. Here are some that I can think of. • How much space would a DeLorean need to get up to 88 mph? You can look up the time for it to get from 0-60 mph and assume that it has a constant acceleration. • At the end of Back to the Future, Doc Brown replaces the Plutonium energy source with a Mr. Fusion. Estimate how much energy he could get from a banana peel. • If you watch all three movies in the Back to the Future series, there are several times that the car gets up to 88 mph. Use video analysis to check the speeds. • How long would it take current from the lightning strike to travel from the clock tower to the car? • Suppose that Marty is 1 second late on his start to get to the lightning wire. How much greater of an acceleration would he need to make it to the wire on time (assuming that over 88 mph works just as well as 88 mph)? • What if there wasn’t a known source of lightning? What other ways could Doc get energy to power the DeLorean in 1955 (or whatever the year was)? • Assume that the energy needed to time travel was directly proportional to the mass of the object. Would the S.H.I.E.L.D. Helicarrier have enough power to go back to 1957? I think there are some other interesting questions to consider, but I don’t want to give you too much homework to worry about.
{"url":"http://www.wired.com/2013/09/how-do-you-get-1-21-gigawatts-for-your-time-machine/","timestamp":"2014-04-21T15:52:47Z","content_type":null,"content_length":"109856","record_id":"<urn:uuid:b21174ca-cc88-4036-a59f-d6182559cd0b>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra Glossary: R radical: The symbol for the operation to find a square root. raised to the power of: Multiplying a base number the number of times indicated by the exponent. rational number: A quantity, positive or negative, that can be written as a fraction; its decimal equivalent terminates or repeats. real number: Any rational or irrational number. reciprocal; multiplicative inverse: 1. Two numbers whose product is always one. 2. Either one of the two numbers in a reciprocal. reduce: To divide out a common factor of the numerator and denominator of a fraction, leaving an equivalent fraction. relatively prime: The condition of two numbers that have no factors in common other than the number one. remainder: A value that is left over when one number is divided by another. repeating decimal; recurring decimal: A decimal in which, beyond a certain point, a digit or set of digits repeats indefinitely. root: A value that, multiplied by itself a number of times, results in the value or number wanted. rounding: Approximating a value to the nearest digit or decimal place.
{"url":"http://www.dummies.com/how-to/content/algebra-glossary0.navId-323223,pageCd-R.html","timestamp":"2014-04-18T16:21:27Z","content_type":null,"content_length":"38298","record_id":"<urn:uuid:77905bb1-0303-4e02-ac5b-bf63b901b233>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Independence/Dependence Agreed, but I find the proceedure I was taught when I was 12 or 13 perfectly adequate and still in perfect accord with both the Fredrik definition and the Borowski definition. So why did you ignore my question?? It was a standard question: how do you define addition, multiplication and equality of sets? And how did you define the "zero" set?? Nering p11? Kreysig p53? Griffel p89? Gupta 2.23, 1.17? Hoffman and Kunze p40? I searched for Griffel, but I couldn't find it. As for Kreyszig and Gupta, they have multiple books, so I don't know which one you mean. As for Nering and Hoffman & Kunze: A set of vectors is said to be linearly dependent if there exists a non-trivial linear relation among them. Otherwise, the set is said to be linearly independent. Hoffman and Kunze Definition. Let V be a vector space over F. A subset S of V is said to be linearly dependent (or simply, dependent) if there exist distinct vectors [itex]\alpha_1,\alpha_2,...,\alpha_n[/itex] in S and scalars [itex]c_1, c_2,...,c_n[/itex] in F, not all of which are 0, such that [tex]c_1\alpha_1+c_2\alpha_2 + ... + c_n\alpha_n=0[/tex] A set which is not linearly dependent is called linearly independent. If the set S contains only finitely many vectors [itex]\alpha_1,...,\alpha_n[/itex], we sometimes say that [itex]\alpha_1,...,\ alpha_n[/itex] are dependent (or independent) instead of saying S is dependent (or independent) . So the notion defined here is the linear independence of a set. I do not see a definition here of the linear independence of two sets or the linear independence of equations. These definitions are perfectly compatible with what Fredrik has said. So none of these books actually agree with what you are saying. No offense, but I am starting to think that you are just misunderstanding the entire concept. Don't know how to take these and other remarks. Take it how you want. I meant what I said: I am interested in finding out more of this "linear dependence of sets", but I have yet to find a reference about it.
{"url":"http://www.physicsforums.com/showthread.php?p=4182399","timestamp":"2014-04-18T18:21:30Z","content_type":null,"content_length":"83508","record_id":"<urn:uuid:3911f46e-d7fd-4283-aae1-5c290fbfd001>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
Elmhurst, NY ACT Tutor Find an Elmhurst, NY ACT Tutor ...I have tutored math for a couple of semesters back in college, and I consider myself qualified to tutor math, computer science, and the logic portion of the LSAT (a favorite). I also spent several months tutoring math and reading to classes of up to 16 kids grades 4-6 in a school in a Brooklyn.... 9 Subjects: including ACT Math, algebra 1, algebra 2, precalculus ...I also know shortcuts and tricks that can save time on the test. I scored highly on the GRE practice units, particularly on the Math portion, where I obtained 90% correct. I also found that strategies I employed for the verbal questions helped considerably, for both antonyms and analogies. 41 Subjects: including ACT Math, reading, chemistry, physics ...I scored in the 99th percentile on the GRE in Quantitative Reasoning (perfect 170,) and the 96th percentile in Verbal (166). I am a successful tutor because I have a strong proficiency in the subject material I teach and a patient and creative approach that makes any subject simple to understand... 21 Subjects: including ACT Math, calculus, statistics, geometry ...It also challenges me to gather my ideas in such a way that I am able to express them in a coherent manner to the student. Tutoring students from all different ages poses new challenges. Yes, as you progress in the mathematical world, concepts become more involved and abstract. 11 Subjects: including ACT Math, calculus, physics, geometry ...I work with kids of all ages and abilities, and I've had years of experience working with children with special needs (5 years of martial arts instruction and 2 years of camp counseling). I pride myself on being a thorough tutor, and I like to make sure that parents and, if necessary, teachers a... 39 Subjects: including ACT Math, reading, English, writing Related Elmhurst, NY Tutors Elmhurst, NY Accounting Tutors Elmhurst, NY ACT Tutors Elmhurst, NY Algebra Tutors Elmhurst, NY Algebra 2 Tutors Elmhurst, NY Calculus Tutors Elmhurst, NY Geometry Tutors Elmhurst, NY Math Tutors Elmhurst, NY Prealgebra Tutors Elmhurst, NY Precalculus Tutors Elmhurst, NY SAT Tutors Elmhurst, NY SAT Math Tutors Elmhurst, NY Science Tutors Elmhurst, NY Statistics Tutors Elmhurst, NY Trigonometry Tutors
{"url":"http://www.purplemath.com/elmhurst_ny_act_tutors.php","timestamp":"2014-04-19T02:15:38Z","content_type":null,"content_length":"23808","record_id":"<urn:uuid:5d5ae911-a7bb-4ae9-bfb7-a3cf2e28a30d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
ELECTRICITY - I :MCQs AND SHORT ANSWERS 1. Who had patented more than 1000 inventions during his life time ? A. Edison B. Volta C. Ampere D. Faraday 2. The unit of electric current is _______. A. coulomb B. volt C. ampere D. ohm 3. The unit of electric charge is _______. A. coulomb B. volt C. ampere D. ohm 4. The unit of electric current is _______. A. coulomb B. coulomb/sec C. coulomb-sec D. coulomb/volt 5. 1 ampere = _______. A. coulomb sec B. coulomb/sec C. volt/sec D. ohm/sec 6. Work done on an electric charge is stored in it as _______. A. potential energy B. kinetic energy C. thermal energy D. nuclear energy 7. The unit of electrical potential is _______. A. coulomb/joule B. joule/coulomb C. watt/coulomb D. joule/ampere 8. The unit of electrical potential is _______. A. ampere B. coulomb C. volt D. watt 9. _______ is not essential to obtain electric current. A. Free electrons B. Potential difference C. Electric circuit D. potential 10. _______ invented a simple electrochemical cell. A. Archimedes B. Edison C. Volta D. Coulomb 11. _______ energy is converted into electrical energy in Voltaic cell. A. Thermal B. Mechanical C. Kinetic D. Chemical 12. The unit of resistance is _______. A. volt/ampere B. coulomb/second C. volt/coulomb D. ampere/volt 13. The unit of resistance is _______. A. coulomb B. volt C. ohm D. ampere 14. When resistors are connected in series _______. A. voltage drop is uniform B. current is uniform C. both voltage and current are uniform D. neither of two is uniform 15. When resistors are connected in parallel _______. A. voltage drop is uniform B. current is uniform C. both voltage and current are uniform D. neither of two is uniform 16. Three resistors of 6 Ω, 12 Ω and 12 Ω are connected in parallel. Their equivalent resistance is _______. A. 6 Ω B. 12 Ω C. 3 Ω D. 1/3 Ω 17. Which of the following is not correct for electrical work ? A. W = VQ B. W = VIt C. W = I^2Rt D. W = I^2RQ 18. _______ is not a unit of energy. A. joule B. watt C. watt second D. kWh 19. 1 joule = _______. A. 1 watt-second B. 1 watt/second C. 1 coulomb/second D. 1 unit 20. Which of the following is not correct ? A. P = W/t B. P = I^2R C. P = WI D. P = VI 21. Silicon is a/an _______. A. conductor B. insulator C. semiconductor D. superconductor 22. Who discovered electron ? A. Coulomb B. Volta C. Ampere D. Thomson 23. 1 A = _______ mA A. 100 B. 10^3 C. 10^(-3) D. 10^(-6) 24. Ohm's law states that A. resistance increases as current increases B. resistance increases as voltage increases C. current increases as voltage increases D. current increases as resistamce increases 25. Equivalent resistance of resistances in parallel is _______. A. smaller than the smallest resistance B. larger than the largest resistance C. an average of all resistances D. algebraic sum of all resistances 26. 1 unit of domestic energy is _______. A. 1 joule B. 1 watt second C. 3.6 x 10^6 j D. 3.6 x 10^6 kwh 27. Three resistors of equal value are connected in parallel.The circuit current is 3 A.Then the current passing through each resistor is _______. A. 3 A B. 1 A C. 9 A D. 1/3 A 28. How much electric charge is present on 100 protons ? A. 6 x 10^-19 C B. 6 x 10^-17 C C. 1.6 x 10^-19 C D. 1.6 x 10^-17 C 29. Pure water is electrically... A. a good conductor B. a bad conductor C. a semiconductor D. a super conductor 30. The frequency of direct current in India is______ Hz. A. 0 B. 50 C. 60 D. 220 (1) A (2) C (3) A (4) B (5) B (6) A (7) B (8) C (9) D (10) C (11) D (12) A (13) C (14) B (15) A (16) C (17) D (18) B (19) A (20) C (21) C (22) D (23) B (24) C (25) A (26) C (27) B (28) D (29) B (30) 1. What type of charge does a glass rod acquire when it is rubbed with silk cloth ? ANS:A glass rod acquires positive charge when it is rubbed with silk cloth 2. what is an electric charge ? ANS:An electric charge is a fundamental property associated with protons and electrons. 3. What is the unit of electric charge ? ANS:The unit of electric charge is coulomb. 4. What is the magnitude of charge on proton ? ANS:The magnitude of charge on proton is 1.6 x 10^(-19). 5. What is the magnitude of charge on electron ? ANS:The magnitude of charge on electron is 1.6 x 10^(-19). 6. Mention some metals which contain maximum free electrons. ANS:Copper, aluminium and silver are the metals containing maximum free electrons. 7. What are conductors ? ANS:Conductors are materials which conduct electricity readily. 8. What are insulators ? ANS:Insulators are materials which do not conduct electricity. 9. Give some examples of insulators. ANS:Rubber, leather, plastic, glass, etc. are examples of insulators. 10. Define : Electric current. ANS:Electric current is the net amount of charge that passes through an area(cross-sectional area) of the conductor per unit time. 11. Who gave the concept of electron ? ANS:Sir J.J.Thomson gave the concept of electron. 12. How many electrons should flow in one second to contribute an electric current of 1 ampere ? ANS:6.25 x 10^18 electrons should flow in one second to contribute an electric current of 1 ampere. 13. What should be done to get electric current ? ANS:Energy should be provided o the free electrons in a conductor to obtain electric current. 14. Define : Electric potential. ANS:At a point in an electric field, the electrical potential energy per unit charge is called the electrical potential at that point. 15. What is the direction of actual current ? ANS:The direction of actual current is from cathode to anode. 16. What is the direction of conventional current ? ANS:The direction of conventional current is from anode to cathode. 17. Define : Electrical potential difference. ANS:The work done to move a unit electric charge from one point of a conductor to another point of that conductor is called the electrical potential difference between these two points. 18. State Ohm's law. ANS:The current passing through a conductor is directly proportional to the potential difference (voltage drop) across the conductor. 19. Define : 1 ohm resistance. ANS:If a potential difference of 1 volt between two terminals of a conductor causes a current of 1 ampere in it, then the resistance of the conductor is 1 ohm. 20. On which factors does the heat produced in a conductor depend ? ANS:The heat produced in a conductor depends on the electric current and the resistance in it. 21. A fuse wire has a low melting point.True or false ? ANS:Yes, this is a true statement. 22. Define : Electric Power. ANS:Electrical energy consumed per unit time is called electric power. 23. What is the unit of power ? ANS:The unit of power is watt (or, joule/sec). 24. How many kj equal 1 kwh ? ANS:3.6 x 10^3 kj equal 1 kwh. 25. How many joule equal 1 kwh ? ANS:3.6 x 10^6 kj equal 1 kwh. 26. Pure water does not conduct electricity. True or false ? ANS:True, pure water does not conduct electricity. 27. The unit of electric charge is faraday. True or false ? ANS:False, the unit of electric charge is coulomb. 28. What are electrolytes ? ANS:The solutions that conduct electricity are known as electrolytes. 29. To which terminal is the object connected during electroplating ? ANS:The object is connected to the negative terminal(cathode) during electroplating. 30. What is electroplating ? ANS:The process of depositing a desired metal on another mettalic object by using electricity is called electroplating. 31. A fuse works on the principle of chemical effect of electric current. True ? ANS:No, a fuse works on the principle of thermal effect of electric current. 32. How do we get flow of electrons in a conductor ? ANS:We get flow of electrons in a conductor by connecting the conductor to a source of energy(a cell or battery) which drives free electrons in a definite direction. 33. Current flows in a wire.Can we call the wire charged ? Why ? ANS:No. Because the flow of current in a wire does not add or remove any charged particles in it. 34. What causes resistance in a conductor ? ANS:The collision of free electrons moving in a conductor with the atoms or molecules of the conductor causes resistance in a conductor. 35. Write the equations for equivalent resistance R of two resistors R[1] and R[2] connected in (i)series (ii)parallel. ANS:(i)Series: R = R[1] + R[2](ii)Parallel : (1/R) = (1/R[1]) + (1/R[2]) 36. Define the unit of electrical energy. ANS:The unit of electrical energy is joule and it is defined as : the electrical energy consumed when one coulomb of charge passes through a point in a conductor having 1 volt potential 37. What type of wire should be used in a fuse ? ANS:A wire made from a low melting point metal/alloy having high resistance should be used in a fuse. 38. How much electric energy is spent if 200 units are consumed ? ANS:200 kwh or 200 x (3.6 x 10^6) = 7.2 x 10^8 j energy is spent if 200 units are consumed. 39. What is the principle of electroplating ? ANS:The principle of electroplating is that a metal can be deposited on the surface of another metal with the help of electrolysis. 40. What is electrolysis ? ANS:The process of separation of ions of a substance with the help of electric current is called electrolysis. 41. How much electric charge is present on 100 neutrons ? 42. Which effect of electric current is used in fuse ? ANS:The heating effect of electric current is used in fuse. 43. Write an expression for the amount of heat produced in a wire of resistance R and carrying current I for time t. ANS:Heat produced = I^2Rt 44. What is meant by saying that the potential difference between two points is 1 volt ? ANS:The meaning of the statement is that 1 joule work needs to be done to move 1 coulomb of charge from one point to the another. 1 comment: zelunika said... such kind of simple questions i gave these questions to to my 5 years bro to solve it really it took no time haha while solving so that iz why hats off to u sir such kind of simple questions r
{"url":"http://manojsirscience4u.blogspot.com/2006/12/electricity-i-mcqs-and-short-answers.html","timestamp":"2014-04-17T06:41:01Z","content_type":null,"content_length":"79203","record_id":"<urn:uuid:7660dfe4-8477-42d2-8a36-d2ca8ad2ae2e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
Confidence intervals August 12th 2009, 05:38 AM Confidence intervals Q. It is known that the mean length of a nail being produced is 5.69 mm. If a sample of 19 nails are measured, and is found to be normally distributed with a mean value of 5.76 mm with a standard deviation of 0.15 mm. Test to a statistical significance of 95% that these 19 specimens come from the parent population. What equation do i use for this? 5.76 -1.96(0.15)/SQRT(19) 5.76 +1.96(0.15)/SQRT(19) ?
{"url":"http://mathhelpforum.com/advanced-statistics/97802-confidence-intervals-print.html","timestamp":"2014-04-17T22:30:16Z","content_type":null,"content_length":"3551","record_id":"<urn:uuid:b7837837-d4f7-4898-a6b4-6d324761cc7d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
First Paper Finally Published My first paper has finally been published in Genetics: Asmussen et al. (2004). It is based on work I started when I was an undergrad. Selection in which fitnesses vary with the changing genetic composition of the population may facilitate the maintenance of genetic diversity in a wide range of organisms. Here, a detailed theoretical investigation is made of a frequency-dependent selection model, in which fitnesses are based on pairwise interactions between the two phenotypes at a diploid, diallelic, autosomal locus with complete dominance. The allele frequency dynamics are fully delimited analytically, along with all possible shapes of the mean fitness function in terms of where it increases or decreases as a function of the current allele frequency in the population. These results in turn allow possibly the first complete characterization of the dynamical behavior by the mean fitness through time under frequency-dependent selection. Here the mean fitness (i) monotonically increases, (ii) monotonically decreases, (iii) initially increases and then decreases, or (iv) initially decreases and then increases as equilibrium is approached. We analytically derive the exact initial and fitness conditions that produce each dynamic and how often each arises. Computer simulations with random initial conditions and fitnesses reveal that the potential decline in mean fitness is not negligible; on average a net decrease occurs 20% of the time and reduces the mean fitness by >17%. I’ll probably blog more on this in a few days. TrackBack URL for this entry: http://scit.us/cgi-bin/mt/mt-tb.fcgi/173. Comment: #135 This sounds very interesting. I look forward to reading more, particularly concerning the simulation work. Comment: #136 Supercongratulations. My first paper was published last year in Langmuir. Working on the next two now.
{"url":"http://www.dererumnatura.us/archives/2004/06/first_paper_fin.html","timestamp":"2014-04-18T13:07:23Z","content_type":null,"content_length":"20708","record_id":"<urn:uuid:0981d44e-68ef-42e7-9a6a-e51a06be7dc4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Validate for Matching Parentheses In an interview the other day, the follow problem was posed to me: Write a method which validates a string if all the opening parentheses match the closing ones. For an invalid string, the method will return false. For a valid string, it will return true. The method should behave as follows: validate("(((hello))") # => false validate("((") # => false validate(")(") # => false validate("(())") # => true After attempting a solution using a boolean value, I settled on a simple tally variable. Whenever an opening paren appears, the method increments the tally. When a closing paren appears, the method decrements the tally. Finally, it is just a matter of returning true or false depending on the final value of the tally. Here is the code: def validate(str) tally = 0 str.each_char do |char| case char when "(" tally += 1 when ")" tally -= 1 return false if tally < 0 return tally == 0 If the tally ever becomes negative, we have encountered an unmatched closing right paren and can immediately return false. Note also the implicit fall-through in the case statement which will occur when char is a non-paren. How then would we implement a validator which can check for not just parentheses, but also brackets, curly brackets, and angle brackets? Again, the method should behave as follows: smarter_validate("[(])") # => false smarter_validate("[(0)]") # => true The solution requires a stack and a hash. Whenever we encounter a left paren or member of the left bracket family, we push it onto the stack. When we encounter a closing paren or bracket, we will pop off an item off the stack, look up its expected closing mark, and then compare the expected value with the actual value. Here is a simple solution: def smarter_validate(str) stack = [] lookup = { '(' => ')', '[' => ']', '{' => '}', '<' => '>' } left = lookup.keys right = lookup.values str.each_char do |char| if left.include? char stack << char elsif right.include? char return false if stack.empty? || (lookup[stack.pop] != char) return stack.empty? First, we initialize a stack which will hold all the left parentheses and brackets. The hash gives us an easy way to associate left parentheses and brackets with their counterparts. We pull out both the keys and the values of the hash for easy reference. Next, we loop over the string adding left items to the stack and popping them off when we find a right item. If we find a right item and the stack is empty, or if the right item does not match the item sitting on top of the stack, we know we have a mismatch and return false. Finally, as long as we have taken everything off the stack, we return true. Granted this solution is hardly ready for industrial use, but it's a working solution, which is always a good first step.
{"url":"http://commandercoriander.net/blog/2013/04/18/how-to-validate-matching-parentheses/","timestamp":"2014-04-17T04:08:30Z","content_type":null,"content_length":"10564","record_id":"<urn:uuid:73529df0-c390-4ed5-8647-d9fae91f35b3>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
SSAC General Collection Modules Following are short descriptions of the available modules. To access any of them, select from the list and you will connect to cover material including learning goals, context of use, and other information. Within the cover material (under "Teaching Materials") there is a link by which you can download the student version of the module. You can also request an instructor's version by clicking on the "request" link under Teaching Materials. There are a couple of ways of searching for modules. The box below left provides a full-text search of the cover material about the module, so you can search by author, subject, or keyword. The controlled vocabularies below right allow targeted searches by three different dimensions (click on the links to see the full, hierarchical vocabularies): Math content (Quantitative Concept) (Microsoft Word 41kB Jun17 10), Context (Subject) (Microsoft Word 29kB Jul13 07), and Excel Skill (Microsoft Word 32kB Jul13 07). Results 31 - 40 of 55 matches Energy Flow through Agroecosystems (Farms) part of Spreadsheets Across the Curriculum:General Collection:Examples Spreadsheets across the Curriculum module. Students build spreadsheets that allow them to calculate the different values needed to examine energy flow through agroecosystems. Quantitative Concepts: Measurement; Data presentation and analysis; Probability:Visual display of data :Pie charts, Basic arithmetic; Number sense:Significant figures, Ratio and proportion; percentage; interpolation, Rates, Logarithms; orders of magnitude; scientific notation, Units and Dimensions:Unit Conversions Subject: Natural science:Biology, Engineering, agriculture:Agriculture and agricultural engineering Excel Skills: Basic Arithmetic:Arithmetic Functions:SUM, Graphs and Charts:Pie Chart, Basic Arithmetic:Simple Formulas Archimedes and Pi part of Spreadsheets Across the Curriculum:General Collection:Examples Spreadsheets across the Curriculum Activity. Student build spreadsheets that allow them to estimate pi using the same iterative process as Archimedes. Quantitative Concepts: Creating and manipulating tabular data, Geometry; Trigonometry :Other plane figures, Basic arithmetic; Number sense:Estimation, Number operations:Number operations: addition, subtraction, multiplication, division, Basic arithmetic; Number sense:Ratio and proportion; percentage; interpolation, Algebra; Modeling; Functions:Modeling:Forward modeling, Geometry; Trigonometry : Circles (including radians and pi), Calculus; Numerical methods:Iteration, Geometry; Trigonometry :Triangles and trigonometric ratios Subject: Social science:History, Mathematics, statistics and computers:Mathematics Excel Skills: Basic Arithmetic:Simple Formulas, Basic Arithmetic, Logic Functions, :IF Chaos in Population Dynamics -- Understanding Chaos in the Logistic Model part of Spreadsheets Across the Curriculum:General Collection:Examples Spreadsheets Across the Curriculum module. Students build spreadsheets to explore conditions that lead to chaotic behavior in logistic models of populations that grow discretely. Quantitative Concepts: Measurement; Data presentation and analysis; Probability:Descriptive statistics; trend lines :Goodness of fit (R2), Basic arithmetic; Number sense:Rates, Ratio and proportion; percentage; interpolation, Algebra; Modeling; Functions:Nonlinear functions of a single variable:Exponential function, Measurement; Data presentation and analysis; Probability:Descriptive statistics; trend lines :Line- and curve-fitting, Algebra; Modeling; Functions:Modeling:Forward modeling, Algebra; Modeling; Functions:Multivariable functions, Nonlinear functions of a single variable:Exponential (geometric) growth:Logistic function, Calculus; Numerical methods:Differential equations including difference equations, Iteration, Measurement; Data presentation and analysis; Probability:Visual display of data :XY scatter plots, Creating and manipulating tabular data Subject: Natural science:Biology, Mathematics, statistics and computers:Mathematics Excel Skills: Basic Arithmetic:Simple Formulas, Graphs and Charts:XY Scatterplot:One Plot, Multiple Plots, Basic Arithmetic:Nested Formulas Introducing Endangered Birds to Ulva, NZ -- Modeling exponential and logistic growth of the yellowhead population part of Spreadsheets Across the Curriculum:General Collection:Examples Spreadsheets Across the Curriculum module. Students build spreadsheets to model the growth in population of a species of birds introduced to an isolated island in New Zealand. Quantitative Concepts: Measurement; Data presentation and analysis; Probability:Visual display of data :Reading graphs, XY scatter plots, Creating and manipulating tabular data, Algebra; Modeling; Functions:Nonlinear functions of a single variable:Exponential function, Basic arithmetic; Number sense:Rates:Rates of Change, Algebra; Modeling; Functions:Nonlinear functions of a single variable:Exponential (geometric) growth:Logistic function, Algebra; Modeling; Functions:Modeling:Forward modeling, Calculus; Numerical methods:Finding maximum/minimum, Differential equations including difference equations Subject: Natural science:Biology Excel Skills: Graphs and Charts:XY Scatterplot:One Plot, Basic Arithmetic:Nested Formulas, Simple Formulas Computing Dosage for Infants and Children -- Calculation of Dosage by the Body Weight Method part of Spreadsheets Across the Curriculum:General Collection:Examples Spreadsheets Across the Curriculum module. Students build spreadsheets to find dosages in mL/admin from prescription in mg/day/kg and admins/day; body weight in lbs; and concentration in mg/mL. Quantitative Concepts: Basic arithmetic; Number sense:Ratio and proportion; percentage; interpolation, Measurement; Data presentation and analysis; Probability:Creating equations; text-to-math translation, Basic arithmetic; Number sense:Units and Dimensions:Unit Conversions, Algebra; Modeling; Functions:Manipulating equations, Modeling:Forward modeling, Creating and manipulating tabular Subject: Health:Nursing Excel Skills: Basic Arithmetic:Simple Formulas How Sweet Is Your Tea? -- Practical experience with solutions and concentration part of Spreadsheets Across the Curriculum:General Collection:Examples Spreadsheets Across the Curriculum module. Students build a spreadshet to calculate grams solute to add to liters solvent to produce solution of desired concentration (mol/L). Quantitative Concepts: Measurement; Data presentation and analysis; Probability:Creating equations; text-to-math translation, Algebra; Modeling; Functions:Manipulating equations, Basic arithmetic; Number sense:Ratio and proportion; percentage; interpolation, Estimation, Logarithms; orders of magnitude; scientific notation, Units and Dimensions:Unit Conversions Subject: Natural science:Chemistry Excel Skills: Basic Arithmetic:Simple Formulas Modeling Exponential Bacteria Growth on Planet Riker part of Spreadsheets Across the Curriculum:General Collection:Examples Spreadsheets across the Curriculum Module. Students use Excel to conduct data analysis and examine bacteria growth in a lake. Quantitative Concepts: Measurement; Data presentation and analysis; Probability:Visual display of data :Reading graphs, Measurement; Data presentation and analysis; Probability:Descriptive statistics; trend lines :Goodness of fit (R2), Line- and curve-fitting, Measurement; Data presentation and analysis; Probability:Visual display of data :XY scatter plots, Algebra; Modeling; Functions:Nonlinear functions of a single variable:Exponential (geometric) growth, Exponential function, Creating and manipulating tabular data, Basic arithmetic; Number sense:Rates, Algebra; Modeling; Functions:Modeling:Forward modeling, Algebra; Modeling; Functions:Nonlinear functions of a single variable:Logarithmic function, Basic arithmetic; Number sense:Logarithms; orders of magnitude; scientific notation Subject: Natural science:Biology, Mathematics, statistics and computers:Mathematics Excel Skills: Graphs and Charts:XY Scatterplot:Trendlines, Other Elementary Math Functions:LN Getting Your Fair Share -- Jelly beans, student groups, and Alexander Hamilton part of Spreadsheets Across the Curriculum:General Collection:Examples Spreadsheet Across the Curriculum module. Students develop an Excel spreadsheet to work with the quota method of apportionment designed by Alexander Hamilton. Quantitative Concepts: Basic arithmetic; Number sense:Estimation, Ratio and proportion; percentage; interpolation, Number operations:Number operations: addition, subtraction, multiplication, division Subject: Social science:History Excel Skills: Other Manipulations and Functions:ROUND, ROUNDUP, ROUNDOWN, Basic Arithmetic:Arithmetic Functions:SUM, Logic Functions:IF, Basic Arithmetic:Percent (format), Nested Formulas, Simple Formulas, Statistical Functions:One array:LARGE, SMALL, etc Carbon Sequestration in Campus Trees part of Spreadsheets Across the Curriculum:General Collection:Examples Spreadsheets Across the Curriculum module. Students use allometric relationships to calculate tree mass from trunk diameter in a stand of trees in the Pacific Northwest. Quantitative Concepts: Algebra; Modeling; Functions:Manipulating equations, Basic arithmetic; Number sense:Logarithms; orders of magnitude; scientific notation, Significant figures, Algebra; Modeling; Functions:Straight lines and linear functions:Slope, intercept; linear trends, Basic arithmetic; Number sense:Comparisons; percent difference, change, increase, Measurement; Data presentation and analysis; Probability:Statistical models; statistical inference; sampling, Algebra; Modeling; Functions:Nonlinear functions of a single variable:Power function, Logarithmic function, Exponential function, Creating and manipulating tabular data, Measurement; Data presentation and analysis; Probability:Visual display of data :XY scatter plots, Measurement; Data presentation and analysis; Probability:Descriptive statistics; trend lines :Line- and curve-fitting, Measurement; Data presentation and analysis; Probability:Visual display of data :Logarithmic scale Subject: Natural science:Earth science, Biology Excel Skills: Other Elementary Math Functions:LN, EXP, Graphs and Charts:Pie Chart, Basic Arithmetic:Nested Formulas Minimizing Cost while Meeting Nutritional Needs -- An example of linear programming part of Spreadsheets Across the Curriculum:General Collection:Examples Spreadsheets Across the Curriculum module. Students use Excel Solver to find the linear combination of servings of two specified foods that minimize cost while meeting nutritional requirements. Quantitative Concepts: Algebra; Modeling; Functions:Straight lines and linear functions:Solving simultaneous equations, Measurement; Data presentation and analysis; Probability:Visual display of data , Algebra; Modeling; Functions:Multivariable functions:Contours, Algebra; Modeling; Functions:Straight lines and linear functions:Linear programming, linear inequalities, Algebra; Modeling; Functions:Modeling:Forward modeling, Calculus; Numerical methods:Finding maximum/minimum, Creating and manipulating tabular data:Lookup function Subject: Health:Personal health, Mathematics, statistics and computers:Computers Excel Skills: Basic Arithmetic:Arithmetic Functions:SUMPRODUCT, Other Manipulations and Functions:Lookup Functions
{"url":"http://serc.carleton.edu/sp/ssac_home/general/examples.html?results_start=31&module=&vocab=","timestamp":"2014-04-20T11:13:50Z","content_type":null,"content_length":"40877","record_id":"<urn:uuid:d0c528a9-ee20-4fc2-aa61-4f921464a564>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
Puzzle Playground - Puzzles with Numbers Home / Puzzle Playground / Non-Manipulative Puzzles / Cube Dates after Martin Gardner It is said that two cubes with twelve digits on their faces are enough to produce any date of year from them. Can you discover how the digits should be placed on the cubes? Play | Download Number Fit by Ali Kılıç Put all the eleven pieces into the 5x5 board so that each row, column, and main diagonal (both diagonals are highlighted) contains different digits, 1 through 5. No piece is rotated or flipped, and no pieces overlap each other. Play | Download Circle Division by Hasan Yurtoğlu With three straight lines divide the circle into several regions with equal sums of their numbers. Lines should begin and end on the circle’s periphery. They may cross each other, but not the numbers. No empty regions are allowed. Play | Download Number Flower by Hasan Yurtoğlu Fully fill in the flower with missing numbers so that parts of the same color and each complete circle contain all numbers, 1 through 6. Play | Download 26 - 63 = 1 by Mel Stover In a simple but wrong equation change the position of only one digit so that to make it correct. Play | Download Nob's Dozen by Nob Yoshigahara Dividing the grid, adding the numbers, comparing the results. One more brilliant brainteaser coming from the great Nob Yoshigahara's puzzle legacy. Play | Download Nob's Number Tree by Nob Yoshigahara This is a puzzling number tree in which when you already think the solution is in your pocket you suddenly realize it slipped out because of a "tiny digital typo". Don't believe? Just give it a Play | Download Simple Multiplication by Henry E. Dudeney Some multiplications can be done in an unusual way – by moving a digit from one position to another. This challenge is about the multiplication and... a proper number which such a multiplication should be applied to. Play | Download Nine Digits by Henry E. Dudeney Nine digits are arranged into two groups of two numbers each. When the numbers are multiplied in each group the resulting product is the same. What is the biggest amount which can be created in such groups? Play | Download The Four Sevens by Henry E. Dudeney We can arrange four 5's to produce one hundred quite easily, using some arithmetical signs. The question is how easy it is to arrange four 7's to obtain the same result? Play | Download Digits in the Square by Henry E. Dudeney This is a 3x3 square grid where the rows are the most important. Nine different digits to be arranged into three numbers observing a special rule. Take a look at it to see the rule... Play | Download Star of Numbers by Boris Kordemsky A five-pointed star is made of circular spots held together by wire. Fill in the circles with the correct numbers of stones from 1 through 15 observing some additional rules. Play | Download From 1 Through 19 by Boris Kordemsky Just write a set of numbers in the circles so that any three of them lying on a straight line always add up to the same total. Is there any algorithm to find the proper solution? Play | Download The Dice Sum by Henry E. Dudeney A set of four dice, though not marked with spots in the ordinary way, but with digits. When put together a plenty of different four-figure numbers can be formed. How many? And what they all would add up to? Play | Download Divisible By 7 after Sam Loyd Three cubes with three numbers on them should be arranged to create a number divisible by 7. But is there a way to arrange the cubes in order to get the proper number? Play | Download Darts Count after Martin Gardner How many darts will you need to toss and which rings with numbers will you need to target in order to score exactly 100? Before scoring this number you can try to score the 50 first. Play | Download The abc Arithmetics by Sam Loyd When 2 is multiplied by 2 it produces the same result as when 2 is added to 2. It is 4 in both cases. Can you think of another pair of numbers with the same arithmetical feature? Play | Download Up to 100 by Henry E. Dudeney and Martin Gardner Digits 1 through 9 stand in a row in ascending order. Just insert a number of pluses and minuses between them and get 100. And what about the descending order? Play | Download Magic Triangle 3X The six numbers from 1 to 6 have to be placed along the sides of a triangle so that to create some magic sum along each of its sides. What magic sums can be there? Play | Download The 26 Puzzle Re_Solution after Professor Louis Hoffman A sequel to The "Twenty-Six" Puzzle. Numbers 1 through 12 and the magic sum of 26 remain, but the seven areas have changed. Will it be harder than the prequel? Play | Download Twenty4 Puzzle by Wei-Hwa Huang It is quite easy to get 24 from an 8 and a 3 - just multiply them together. But discover how incredibly hard this task can be when you have two 8s and two 3s. Play | Download The sum of four ONEs, of course, isn't TEN. That's why the real goal for this calculation is to discover which numbers are hidden behind those words. Play | Download The "Twenty-Six" Puzzle after Professor Louis Hoffman It's a kind of magic square, and you have to place twelve different numbers all around it so that seven regions with the magic sum of twenty six appear. Play | Download Send More Money by Henry E. Dudeney Of course, this isn't our appeal! We just want to propose you this great classic cryptarithm that will improve your ability to calculate. Play | Download Six Numbers by Martin Gardner Solving this puzzle you can circle with a feeling that you're one step before your goal or... one step behind. Is there any kind of joke to get exactly on it? Play | Download The Number Grid Puzzle Don't let the adjacent digits be... adjacent. The eight digits are known, a simple grid is drawn and the systematic approach is recommended. Play | Download The Eight Cards by Henry E. Dudeney A true classic gem passed through the time. Despite the fact you need just simple arithmetic skills to get to the solution, it will make you be thinking slightly out of the box. Play | Download A Simple Cryptarithm by Henry E. Dudeney A "cryptarithm" stands for a puzzle where you have to reveal the hidden numbers to make some calculations correct. Try this simple classic one. Play | Download Last Updated: January 21, 2013 top
{"url":"http://www.puzzles.com/PuzzlePlayground/Numbers.htm","timestamp":"2014-04-21T04:44:01Z","content_type":null,"content_length":"106201","record_id":"<urn:uuid:dc70cd91-8c9d-4a63-b3ba-0fc0ce1b7b0f>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
CSP Tutorial This tutorial was put together both as an introduction to CSPs and as a resource that can be referred back to. 1 What is a constraint satisfaction problem? 2 What are the practical applications of CSPs? 3 Definition & model of a CSP 4 Constraints, variables, and values 5 Search: basic 6 Search: propagating constraints 7 Search: local consistency 8 Search: other 9 Complexity 10 Symmetry 11 Optimization problems Consulted sources [1] K. R. Apt. Principles of Constraint Programming. Cambridge University Press, 2003. [2] R. Dechter. Constraint Processing. Morgan Kaufmann Publishers, 2003. [3] P. V. Hentenryck. Constraint Satisfaction in Logic Programming. The MIT Press, 1989. [4] S. Rusell and P. Norvig. Artificial Intelligence: A Modern Approach, 2nd Edition. Prentice Hall, 2003. 137-59. [5] B. M. Smith. “A Tutorial on Constraint Programming.” Research Report 95.14, School of Computer Studies, University of Leeds, April 1995. [6] P. Cheeseman, et al. "Where the REALLY Hard Problems Are." Proceedings of IJCAI-91, 331-337, 1991. [7] D. Mitchell, et al. "Hard and Easy Distributions of SAT Problems." Proceedings of AAAI-92, pages 459-465, 1992. [8] B. Smith. “Phase Transition and the Mushy Region in Constraint Satisfaction Problems.” Proceedings of ECAI-94, pages 100-104, 1994. What is a constraint satisfaction problem? As can be inferred from the name, this set of problems deal with constraints. These constraints are no different from the ones that inhabit our real-world. There are constraints all around us, such as temporal constraints (managing work and home life), or tangible constraints (making sure we don't go over budget), and we figure out ways to deal with them to varying success. Where we don't enjoy success and run into problems, especially solutions that may work, but are not exactly what we wanted, are due in no small part to our limited capacity to deal with problems involving a large amount of variables and constraints. This is where computers, and more specifically, constraint satisfaction problems (CSPs), are necessary. Like most problems in artificial intelligence (AI), CSPs are solved through search. What makes CSPs unique, however, is the structure of the problem; unlike other AI problems, there is a standard structure to CSPs that allows general search algorithms using heuristics (with knowledge about the structure of the problem and not necessarily domain-specific knowledge) to be implemented for any CSP. In addition to this, all CSPs are also commutative--they can be searched in any order and still give the same result. These special and defining characteristics makes CSPs both interesting and worthwhile to study. What are the practical applications of CSPs? The practical applications of CSPs are very straightforward. CSPs are very good for solving general temporal and combinatorial problems, among other things. The following are examples where constraint programming has been successfully applied in various other fields: - Operations Research (scheduling, timetabling) - Bioinformatics (DNA sequencing) - Electrical engineering (circuit layout-ing) - Telecommunications (CTVR @ 4C) - Hubbell telescope/Satellite scheduling Generally speaking, CSPs are a rather recent formulation. There is not extensive published literature on the subject, but they are widely studied and their applications will continue to increase. Definition of a CSP Formal definition The formal definition of a CSP involves variables and their domains, and constraints. Suppose we have a set of variables, X[1], X[2], ..., X[n], all with domains D[1], D[2], ..., D[n] such that all variables X[i] have a value in their respective domain D[i]. There is also a set of constraints, C[1], C[2], ..., C[m], such that a constraint C[i] restricts (imposes a constraint on) the possible values in the domain of some subset of the variables. A solution to a CSP is an assignment of every variable some value in its domain such that every constraint is satisfied. Therefore, each assignment (a state change or step in a search) of a value to a variable must be consistent: it must not violate any of the constraints. As in any AI search problem, there can be multiple solutions (or none). To address this, a CSP may have a preference of one solution over another using some preference constraints (as opposed to all absolute constraints), want all solutions, or the optimal solution, given by an objective function. Optimizing a CSP model will be explored later in the tutorial. Finite vs real-valued domains This explanation of constraint programming will only touch on problems that have finite domain variables. This means that the domains are a finite set of integers, as opposed to a real-valued domain that would include an infinite number of real-values between two bounds. The modeling of a real problem Consider the popular N-Queens problem used throughout AI. This problem involves placing n queens on an n x n chessboard such that no queen is attacking another queen. (According to the rules of chess, a queen is able to attack another piece--in this case, a queen--if it is in the same row, column, or diagonal from that queen.) There are, of course, many ways to formulate this problem as a CSP (think: variables, domains, and constraints). A simple model is to represent each queen as a row so that (for example) to solve the 4-queen problem, we have variables Q1, Q2, Q3, and Q4. Each of these variables has an integer domain, whose values correspond to the different columns, 1-4. An assignment consists of assigning a column to a queen, i.e. { Q1 = 2 }, which "places" a queen in row 1, column 2. The constraints on the problem restrict certain values for each variable so that all assignments are consistent. For example, after we have assigned Q1, and now want to assign Q2, we know we cannot use value 2, since this would violate a constraint: Q1 could attack Q2 and visa versa. Thus we come up with the following variables, values, and constraints to model this problem: Variables: { Q1, Q2, Q3, Q4 } Domain: { (1, 2, 3, 4), (1, 2, 3, 4), (1, 2, 3, 4), (1, 2, 3, 4) } Constraints: Alldifferent( Q1, Q2, Q3, Q4 ) and for i = 0...n and j = (i+1)...n, k = j-i, Q[i] != Q[j] + k and Q[i] != Q[j] - k. (Some more on) constraints, variables, and values As mentioned, the structure of the CSP is the most important part of it since the same algorithms can be used to search any CSP. Since we know that the structure is standard across all CSPs, we can take a look at heuristics that are able to operate on all different types of problems. However, that is not to say that all algorithms are equally tractable and efficient on all sorts of problems. Currently, the decision of the use of an algorithm for a certain problem is determined empirically. A constraint is considered n-ary if it involves n variables. So if a constraint affects just a single variable, it is considered unary. Unary constraints can be dealt with as a preprocessing step. Constraints that involve two variables are binary constraints and are of particular interest for two reasons. One is that they can be modeled as a constraint graph, where the nodes of the graph represent the variables and an edge connects two nodes if a constraint exists between the two variables. The second is that a constraint of higher arity (the number of variables involved in a constraint) can always be reduced to a set of binary constraints! (However, that doesn't mean that this is always a good idea--in some cases, the number of binary constraints for a problem can be exponential, thus creating an intractable model.) More complex constraints, with arity > 2, are called global constraints. A simple example of a global constraint is the Alldifferent constraint; this constraint forces all the variables it touches to have different values. (Note: It is easy to see how this particular global constraint could be decomposed into many "not equal" binary constraints.) There is much to be said about global constraints, however, they are beyond the scope of this tutorial. Constraint programming has close ties with integer programming (IP): one way of representing constraints is to form equations using subsets of the variables. One example of a simple constraint modeled as an equation, where X[1] and X[2] are variables, is X[1] < X[2]. A more complex constraint is one we would use in the SEND + MORE = MONEY problem. Without elaborating extensively about the problem, the basic idea is that we try to find decimals to represent each letter in the equation so that if all the letters { S, E, N, D, M, O, R, Y } are replaced by decimals (with S and M != 0), SEND + MORE = MONEY. In order to model this problem, we have to construct constraints, such as D + E = 10 * C[1] + Y. This constraint makes sure that the sum of the values of D + E on top of the equation are equal to Y and the carry value of the addition. This is a real constraint that is widely used to solve this problem, not some diluted example; the majority of the time defining constraints for a problem really is this intuitive. Variable ordering Deciding on the variables to be included in your problem's model is usually not too difficult: there are the obvious variables that must be assigned values for a solution to exist (decision variables) and variables that help make the problem more efficient or contribute to some objective function. While tricks can be used (such as was earlier when the queens were represented as rows) to increase performance, they are just that. A part of any search algorithm is choosing a variable that has not yet been instantiated and assigning it a value from its domain. There are both static and dynamic variable ordering heuristics available that decide how to choose this next variable. One such heuristic is MRV (minimum-remaining values), which comes from the fail-first principle. The MRV heuristic selects from the set of unassigned variables the next variable with the fewest remaining values in its domain. Essentially this allows us to discover a dead end sooner than we would have and as a result reduce the overall size of our search tree. This heuristic becomes much more useful when dealing with a problem with noticeable variances in the cardinality of domains, both during the preprocessing and (dynamically) as the search progresses. Different search techniques, explained later in this tutorial will delete values from the domains of variables, making MRV appealing. Another heuristic for variable ordering, often used as a tie-breaker is the degree heuristic. This heuristic attempts to choose the unassigned variable that is involved in the most constraints with other unassigned variables. This reduces the number of children of each node in the search tree by decreasing the domain sizes of other variables. Value ordering After we have a variable, we must assign it a value. The way in which we choose values, or value ordering, is also important because we want to branch as often as possible towards a solution (though value ordering is a waste of time if we are looking for all solutions). The most popular heuristic for choosing a value is LCV, or least-constraining value. The idea is to choose the value that would eliminate the fewest values in the domains of other variables and thus hopefully steer the search away from a dead end. By doing so, it leaves the most choices open for subsequent assignments to unassigned variables. Search: basic Searching a CSP involves first, choosing a variable, and then assigning a value to it. In a search tree, each node is a variable and branches leading away from that node are the different values that can be assigned to that variable. Therefore, a CSP with n variables will generate a search tree of depth n. Successors are generated for only the current variable and the state of the problem is appended each time the search branches. If we consider a simple depth-first search on a CSP, we realize that because of the constraints we have imposed, at some point during our search we may be unable to instantiate a variable because its domain is empty! In the case that we arrive at a node where the goal test returns false (there are still unassigned variables) and there are no branches leading away from that node, we must go backward. This is called backtracking and it is the most basic of searches for CSPs. A variable is assigned a value and then the consistency of that assignment is checked. If the assignment is not consistent with the state of the problem, another value is assigned. When a consistent value is found, another variable is chosen and this is repeated. If all values in the domain of a variable are inconsistent, the algorithm will backtrack to the previous assignment and assign it a new value. The figure below illustrates how a simple backtracking search would work. It is the n-queens problem that was presented earlier in this tutorial: Search: propagating constraints When our backtracking search chooses a value for a variable, it checks to see whether that assignment is consistent with the constraints on our problem. Clearly, this is not very efficient. Consider a simple graph-colouring problem. If there is an edge between two nodes, then once we assign a colour to one of these nodes, we know choosing that same colour for the other will not be consistent; therefore, we must temporarily remove the values from the domains of yet uninstantiated variables that are not consistent with the current problem state with the new assignment. Forward checking The forward checking algorithm does just this. Every time an assignment is made to a variable, all other variables connected to the variable (that is currently being instantiated) by constraints are retrieved and the values in their domains which are inconsistent with the current assignment are temporarily removed. In this way the domain of a variable can become empty and another value must be chosen for the current assignment. If there are no values left with which to assign the current variable, the search may need to backtrack, in which case those values that were temporarily removed as a result of the original assignment are reinstated to their respective domains. Forward checking is able to predict, in a sense, what assignments will lead to a failure and can act accordingly by pruning branches. It will, of course, encounter these inconsistencies much sooner than backtracking, especially when used in conjunction with the fail-first heuristic described earlier. The below diagram illustrates a search using the above forward checking algorithm on an n-queens problem: Search: local consistency Forward checking utilizes a basic notion of consistency: an assignment to a variable is consistent with other assignments given a set of constraints. k-consistency is a term that defines the extent to which constraints are propagated. By definition, a CSP is k-consistent if for any subset of k – 1 variables in the problem, a consistent assignment to one of those variables allows a consistent assignment to any kth variable. Below, popular consistencies are discussed. In addition to a problem being k-consistent, it can also be strongly k-consistent, which means it is consistent for k and all weaker consistencies less than k. The benefit of a strongly k-consistent problem is that we will never have to backtrack! As with most things CSP, determining the correct level of consistency checking for a given problem is done empirically. Node consistency This is the weakest consistency check and simply assures that each variable is consistent with itself; if a variable is assigned a value, the value must be in that variable’s domain. Arc consistency (AC) 2-consistency is the most popular consistency and can be used as both a preprocessing step, or dynamically as part of the maintaining arc consistency (MAC) algorithm. The simple definition of arc consistency is that given a constraint C[XY] between two variables X and Y, for any value of X, there is a consistent value that can be chosen for Y such that C[XY] is satisfied, and visa versa. Thus, unlike forward checking, arc consistency is directed and is checked in both directions for two connected variables. This makes it stronger than forward checking. When applied as a preprocessing step, arc consistency removes all values from domains that are inconsistent with one another. If it is applied dynamically as MAC, the same algorithm that is used to check AC for preprocessing is applied after every variable instantiation during the search. That algorithm is presented here along with a demo: Path consistency The easiest way to think about path consistency is to consider a triangle, with three points labeled a, b, and c where edge( a, c ) is not a solid line[[5]]. This represents a problem where there are constraints between a and b, and b and c. Path consistency considers triples of variables, so that while a and c are not explicitly constrained, there is a constraint induced on them through the transitive nature of their constraints involving b. Thus our triangle is an easy representation of this relationship. If the constraints are that a > b and b > c, it is clear that there is an implicit constraint between a and c, such that a > c. 3-consistency, though obviously stronger than arc consistency, is not generally used. While arc consistency checks pairs of variables for consistency, path consistency must check all triples of variables—for a large problem, it is easy to see that the number of combinations is potentially huge. In worst-case time, the complexity is O(d Search: other Local search Local search for AI problems involves making a complete assignment of variables and then switching (flipping) a value for a variable and checking to see if we have found a solution. This is no different in constraint programming. Assignments are made to all our variables, but these assignments will not be consistent with the constraints on the problem. In local search, each node in the search tree is a complete assignment, with branches involving flipping different variables within the complete assignment until a solution is found. Local search works very well for some types of CSPs. The most popular heuristic for local search in CP is min-conflicts. When min-conflicts flips one of the variables in the assignment, it will choose a value for that variable that results in the minimum number of conflicts with other assignments. This way we have some idea of progress in our local search. The min-conflicts algorithm is presented here: min-conflicts(csp, max): initial := complete assignment for 1..max do: if initial is a solution: return initial var := nextVar() value := leastConflicts( var ) var := value in initial return failure Phase transition Studying the phase transition for a type of problem allows us to locate where the fundamentally hard CSP and SAT problems are. In the context of a CSP, constraint tightness refers to the number of instances where, given a constraint between two variables X and Y, the pair of values for X and Y are inconsistent. A phase transition is a phenomenon that is observed by considering the graph of search effort against constraint tightness for many instances of a problem as the constraint tightness of the problem increases from 0 to 1. As this happens, the CSP will move from the part of the problem space that is underconstrained and where there are many solutions to a space that is overconstrained and where problems are not satisfiable. The transition between the soluble and insoluble regions is referred to as the “mushy region”—a term coined by B Smith—and is populated both with problems that have a solution and those that don't: it is in this region that the peak search effort is spent trying to find a solution that exists with low probability. Because of this, phase transitions are important in the study of NP-complete problems and can give some understanding as to whether a problem is likely to be easy or difficult. Phase transitions are not algorithm-specific. When modelling a CSP, it is common that one may encounter symmetry. Symmetry is defined as an assignment, which is equivalent to another assignment; that is to say, the assignments are interchangeable. If you consider these instances, it is easy to see that if one of these assignments is consistent, then they all are. Hence it is possible to have classes of equivalent solutions where different symmetrically equivalent assignments can be interchanged and one can be assured a solution can be found for this "different" problem. It is important to take notice of symmetries because they can be used to shorten search time (by not searching symmetrically equivalent branches). "Breaking" symmetries, as this is called, can often be the difference in whether a problem is tractable or not. In order to exploit this feature, additional constraints must be added to our model. These constraints (which are model-specific) must make sure that if one assignment does not work, all symmetrically equivalent sets of assignments are automatically ruled out. COP: Constraint Optimization Problems Problems which must be optimized for a given constraint--representing an objective function--are solved in the same way as other CSPs. When a solution is encountered, it will have associated with it some "ranking" value for the objective function. The search continues to find all solutions and then chooses the solution with the most optimal value. That is to say, for every solution found if for some objective function the value is more optimal than the previous one found, that solution is saved until all solutions have been found. This idea is also referred to as preferred constraints. Minimizing and maximizing The most common objective functions are minimize and maximize. These functions try to minimize a given constraint (or variable) or maximize it. An example of a constraint is a linear equation between two variables, x and y. The constraint may be that x + y > 100, but we want to maximize this constraint for x or y such that we find the largest value(s), which satisfy the all the constraints of the Scheduling problems are obvious places where objective functions will pop-up. Let's say you work at a grocery store that is open from 7am to 2am. You wouldn't want to work one night from 6pm-2am and then the next morning from 7am-3pm, would you? Using an objective function to create a schedule where all shifts are covered by employees, but minimizing the number of consecutive night-morning shifts would certainly be very useful. Branch and bound Branch and bound is an optimizing method for solving CSPs that are too large (whether it be in the number of variables and constraints or the complexity of the constraints) to search all the solutions. The idea borrows from that of partitioning; the first solution to the problem is found (and along with it, some evaluation) and a constraint is added on the objective function to form a "subproblem" of the original so that the subproblem will be searched for the first solution found again and repeated until some optimal solution for a minimizing/maximizing function is found or there are no more solutions left. The creation of a new subproblem from the original is the branching part of the algorithm, while the bounding is the use of the evaluation for a solution to bound the new constraint. While this approach to large problems that need to be optimized is practical for real constraints like time and space, it is also more efficient. Once you have a bound, the search can stop before finding a solution if it knows that the evaluation will not be as low as one previously found. Please take some time to experiment with some of the games provided on this site to get a sense of constraint programming!
{"url":"http://4c.ucc.ie/web/outreach/tutorial.html","timestamp":"2014-04-19T01:48:44Z","content_type":null,"content_length":"27879","record_id":"<urn:uuid:07544522-c77f-455b-9607-b1879acf7d51>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
Confirmatory Factor Analysis - Statistics Solutions Confirmatory factor analysis (CFA) is a multivariate statistical procedure that is used to test how well the measured variables represent the number of constructs. Confirmatory factor analysis (CFA) and exploratory factor analysis (EFA) are similar techniques, but in exploratory factor analysis (EFA), data is simply explored and provides information about the numbers of factors required to represent the data. In exploratory factor analysis, all measured variables are related to every latent variable. But in confirmatory factor analysis (CFA), researchers can specify the number of factors required in the data and which measured variable is related to which latent variable. Confirmatory factor analysis (CFA) is a tool that is used to confirm or reject the measurement theory. General Purpose – Procedure 1. Defining individual construct: First, we have to define the individual constructs. The first step involves the procedure that defines constructs theoretically. This involves a pretest to evaluate the construct items, and a confirmatory test of the measurement model that is conducted using confirmatory factor analysis (CFA), etc. 2. Developing the overall measurement model theory: In confirmatory factor analysis (CFA), we should consider the concept of unidimensionality between construct error variance and within construct error variance. At least four constructs and three items per constructs should be present in the research. 3. Designing a study to produce the empirical results: The measurement model must be specified. Most commonly, the value of one loading estimate should be one per construct. Two methods are available for identification; the first is rank condition, and the second is order condition. 4. Assessing the measurement model validity: Assessing the measurement model validity occurs when the theoretical measurement model is compared with the reality model to see how well the data fits. To check the measurement model validity, the number of the indicator helps us. For example, the factor loading latent variable should be greater than 0.7. Chi-square test and other goodness of fit statistics like RMR, GFI, NFI, RMSEA, SIC, BIC, etc., are some key indicators that help in measuring the model validity. Questions a CFA answers From my 20 question instrument, are the five factors clearly identifiable constructs as measured by the 4 questions that they are comprised of? Do my four survey questions accurately measure one factor? The assumptions of a CFA include multivariate normality, a sufficient sample size (n >200), the correct a priori model specification, and data must come from a random sample. Key Terms: • Theory: A systematic set of causal relationships that provide the comprehensive explanation of a phenomenon. • Model: A specified set of dependent relationships that can be used to test the theory. • Path analysis: Used to test structural equations. • Path diagram: Shows the graphical representation of cause and effect relationships of the theory. • Endogenous variable: The resulting variables that are a causal relationship. • Exogenous variable: The predictor variables. • Confirmatory analysis: Used to test the pre-specified relationship. • Cronbach’s alpha: Used to measure the reliability of two or more construct indicators. • Identification: Used to test whether or not there are a sufficient number of equations to solve the unknown coefficient. Identifications are of three types: (1) under-identified, (2) exact identified, and (3) over-identified. • Goodness of fit: The degree to which the observed input matrix is predicted by the estimated model. • Latent variables: Variables that are inferred, not directly observed, from other variables that are observed. Confirmatory factor analysis (CFA) and statistical software: Usually, statistical software like AMOS, LISREL, EQS and SAS are used for confirmatory factor analysis. In AMOS, visual paths are manually drawn on the graphic window and analysis is performed. In LISREL, confirmatory factor analysis can be performed graphically as well as from the menu. In SAS, confirmatory factor analysis can be performed by using the programming languages. Related Pages: To Reference This Page: Statistics Solutions. (2013). Confirmatory Factor Analysis [WWW Document]. Retrieved from http://www.statisticssolutions.com/academic-solutions/resources/directory-of-statistical-analyses/
{"url":"http://www.statisticssolutions.com/academic-solutions/resources/directory-of-statistical-analyses/confirmatory-factor-analysis/","timestamp":"2014-04-20T18:24:17Z","content_type":null,"content_length":"60205","record_id":"<urn:uuid:9f6a15d6-5962-4cf2-982c-b5162e4846be>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
Counting Intersections of Diagonals in Polygons Date: 03/08/2000 at 11:19:55 From: Dominic Dingena Subject: Intersections of Diagonals in Polygons I tried to find an equation for the maximum number of intersections of the diagonals in a polygon (because a polygon can have different shapes, its diagonals may have different numbers of intersections, but this is about the maximum number of intersections). My equation is: I = ----------- where D = number of diagonals, S = number of sides, and I = number of NOTE: This formula does not work for squares, triangles and pentagons. What we have so far is: Sides Diag.s Int.s These numbers are in this number pattern: My math teacher and I would really appreciate it if you would reply Date: 03/08/2000 at 16:46:29 From: Doctor Peterson Subject: Re: Intersections of Diagonals in Polygons Hi, Dominic. This is not an easy problem; I'd like to know more about how you are approaching it. I get the impression you are just counting, making a table, and looking for an apparent pattern; have you been looking for what might be happening in the background that would produce whatever patterns you see? That's where real mathematical thinking comes in. I started by just looking for an orderly way to count, which gave me a way to see patterns as they formed, and eventually found a formula that will work for all polygons whose diagonals don't intersect more than two at a time. You might like to practice finding formulas by looking for the formula for the number of diagonals in a polygon, if you haven't already done so. There's a simple formula that you can prove by simple logical reasoning, starting with the question, "How many ways can I make a diagonal that ends at a given vertex?" This will give you D in terms of S. I checked out your table, and you must have made some errors in copying it; for example, in line 3 you seem to have included the vertices in the number of intersections, which should be 15, not 21. For line 5 I get 70 intersections, and the rest are off too. In fact, though your numbers are all in Pascal's triangle as you said, they somehow switch from one line to another, hiding the real pattern. I'd like to know how you went about counting these; were you careful not to allow multiple intersections? As for your formula, I can't see how it works; maybe I'm misreading it. This doesn't seem to agree with your numbers at all, and in fact in most cases the number of diagonals D is not a square, so the formula doesn't even give a whole number. On your second line, for example, S and D are both 5, and you would get sqrt(5)*25/5 = 11.18, not 5. Can you explain where this formula came from, and how you are using it? Your observation that the numbers are in Pascal's triangle is good, though you haven't said anything about where it is in the triangle or why it should be so. Having found the correct numbers and seen that they follow a particular diagonal of the triangle, I was able to find a formula for I in terms of S alone, which gave the correct numbers. Our FAQs on Pascal's triangle and on combinations may help you with Pascal's Triangle Permutations and Combinations Having seen a formula involving combinations, I was able to see a reason why it should be true, without which I would really have no grounds for claiming to have found a formula at all. (You can also use the idea of combinations to find the number of diagonals.) If you think about what it takes to form an intersection of diagonals (namely, four endpoints to determine the two diagonals), you can rather easily come up with my formula. But it took me a while to see how easy it was. I was surprised that I didn't find anything about this problem anywhere I looked. The closest I came was in The On-Line Encyclopedia of Integer Sequences which gives a sequence of the number of actual intersections of a REGULAR polygon, a much harder problem: Name: Number of intersections of diagonals of regular n-gon. Sequence: 1, 5, 13, 35, 49, 126, 161, 330, 301, 715, 757, 1365, 1377, 2380, 1837, 3876, 3841, 5985, 5941, 8855, 7297, 12650, 12481, 17550, 17249, 23751, 16801, 31465, 30913, 40920, 40257, 52360, 46981, 66045, 64981, 82251, 80881, 101270 There is an attempt in our archives to solve this problem, which comes close to the solution of your version of the problem, but doesn't recognize the connection with Pascal and combinations: Lines Intersecting within a Polygon Keep working on this, and let me know when you come up with the formula and an explanation for it. I'll be here if you need more help. - Doctor Peterson, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/55217.html","timestamp":"2014-04-19T17:24:57Z","content_type":null,"content_length":"10725","record_id":"<urn:uuid:719c6c3b-4102-42a3-a21c-ce472ba86f63>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
Elmhurst, NY ACT Tutor Find an Elmhurst, NY ACT Tutor ...I have tutored math for a couple of semesters back in college, and I consider myself qualified to tutor math, computer science, and the logic portion of the LSAT (a favorite). I also spent several months tutoring math and reading to classes of up to 16 kids grades 4-6 in a school in a Brooklyn.... 9 Subjects: including ACT Math, algebra 1, algebra 2, precalculus ...I also know shortcuts and tricks that can save time on the test. I scored highly on the GRE practice units, particularly on the Math portion, where I obtained 90% correct. I also found that strategies I employed for the verbal questions helped considerably, for both antonyms and analogies. 41 Subjects: including ACT Math, reading, chemistry, physics ...I scored in the 99th percentile on the GRE in Quantitative Reasoning (perfect 170,) and the 96th percentile in Verbal (166). I am a successful tutor because I have a strong proficiency in the subject material I teach and a patient and creative approach that makes any subject simple to understand... 21 Subjects: including ACT Math, calculus, statistics, geometry ...It also challenges me to gather my ideas in such a way that I am able to express them in a coherent manner to the student. Tutoring students from all different ages poses new challenges. Yes, as you progress in the mathematical world, concepts become more involved and abstract. 11 Subjects: including ACT Math, calculus, physics, geometry ...I work with kids of all ages and abilities, and I've had years of experience working with children with special needs (5 years of martial arts instruction and 2 years of camp counseling). I pride myself on being a thorough tutor, and I like to make sure that parents and, if necessary, teachers a... 39 Subjects: including ACT Math, reading, English, writing Related Elmhurst, NY Tutors Elmhurst, NY Accounting Tutors Elmhurst, NY ACT Tutors Elmhurst, NY Algebra Tutors Elmhurst, NY Algebra 2 Tutors Elmhurst, NY Calculus Tutors Elmhurst, NY Geometry Tutors Elmhurst, NY Math Tutors Elmhurst, NY Prealgebra Tutors Elmhurst, NY Precalculus Tutors Elmhurst, NY SAT Tutors Elmhurst, NY SAT Math Tutors Elmhurst, NY Science Tutors Elmhurst, NY Statistics Tutors Elmhurst, NY Trigonometry Tutors
{"url":"http://www.purplemath.com/elmhurst_ny_act_tutors.php","timestamp":"2014-04-19T02:15:38Z","content_type":null,"content_length":"23808","record_id":"<urn:uuid:5d5ae911-a7bb-4ae9-bfb7-a3cf2e28a30d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
1 hour, 30 minutes, 21 seconds Exchange Protocol Test Suite Architecture Simon Xiao, Software Developer Engineer in Test II, introduces the Exchange Protocol Test Suite Architecture. The talk also includes a demo presented by Xianming Xu (Chinasoft), Test Suite Developer. This talk was part of the January 2011 Exchange RPC Protocol Documentation Plugfest and is related…
{"url":"https://channel9.msdn.com/Tags/interoperability?page=2","timestamp":"2014-04-20T16:53:51Z","content_type":null,"content_length":"51570","record_id":"<urn:uuid:40edf75d-a698-448f-8e07-a0bcf857bed9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
Edgewood, Atlanta, GA Decatur, GA 30033 Nation's top ranked AP calculus/AP physics teacher by College Board Hello, my name is Eric I've taught AP and AP Physics ( and trig based) for 18 years. Here are a few of my other qualifications: 1. After high school, I spent 6 years in the Navy as a nuclear reactor operator aboard a nuclear submarine I then attended... Offering 2 subjects including calculus
{"url":"http://www.wyzant.com/Edgewood_Atlanta_GA_calculus_tutors.aspx","timestamp":"2014-04-20T01:52:43Z","content_type":null,"content_length":"60873","record_id":"<urn:uuid:1178ab1d-5f99-4a7d-bcdd-d094e94a07dc>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
[Python-Dev] PEP239 (Rational Numbers) Reference Implementation and new issues Andrew Koenig ark at research.att.com Thu Oct 3 03:10:52 CEST 2002 Guido> [Andrew Koenig] >> Much as I like APL, I'd rather use Scheme's numeric model. Guido> I've heard that before, but I've also heard criticism of Guido> Scheme's numeric model. "It works in Scheme" doesn't give me Guido> the warm fuzzy feeling that it's been tried in real life. ...and "It works in APL" does? More seriously, there aren't that many languages with infinite-precision rationals, which means there aren't all that many precedents. I find the partial ordering among Python's types interesting. If we use "<" to mean "is a strict subset of", then int < long < rational (except perhaps on machines with 64-bit int, which opens a different can of worms entirely) int < float < rational float < complex Excluding complex, then, adding rational to the numeric types makes the numeric types a lattice. We could make all of the numeric types a lattice by adding a "complex rational" type: complex rational | \___ | \ rational complex / \ ____/ / \ / long float \ __/ \ / \ / What's nice about a lattice is that for any two types T1 and T2, there is a unique minimum type T of which T1 and T2 are both subsets (not necessarily proper subsets, because T1 could be a subset of T2 or vice More information about the Python-list mailing list
{"url":"https://mail.python.org/pipermail/python-list/2002-October/134455.html","timestamp":"2014-04-20T11:57:05Z","content_type":null,"content_length":"4364","record_id":"<urn:uuid:ba4cb004-7d40-42d5-ade1-72f3f4c1818c>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: solve for the indicated variable 3x - 8y = 24 ; for y • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/506da649e4b088f3c14d6790","timestamp":"2014-04-20T10:48:51Z","content_type":null,"content_length":"56103","record_id":"<urn:uuid:7a1cdf85-1bdb-4d83-a3ed-033a28b0e76e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Magazine - April 2012 Our first two colorful articles show the benefits of playing with toys and games. Ben Coleman and Kevin Hartshorn describe the mathematics of the game of SET, and Peter Hilton and Jean Pedersen use a large Magz kit to find some remarkable polyhedra inside "Pascal's Tetrahedron." The Notes include a treatment of the cycles formed by Fibonacci sequences, when they are reduced mod p.—Walter Stromquist, Editor Game, Set, Math Ben Coleman and Kevin Hartshorn We describe the card game SET, and discuss interesting mathematical properties of the game that illustrate ideas from group theory, linear algebra, discrete geometry, and computational complexity. We then suggest a criteria to identify when two card collections are similar to one another and appeal to Pólya’s Theorem to determine the number of structurally distinct collections. For example, we find there are 41,407 collections of 12 cards, the layout most commonly seen in gameplay. Mathematics, Models, and Magz, Part I: Patterns in Pascal’s Triangle and Tetrahedron Peter Hilton and Jean Pedersen Illustrations by Sylvie Donmoyer and Photographs by Chris Pedersen This paper describes how the authors used a set of magnetic toys to discover analogues in 3 dimensions of well-known theorems about binomial coefficients. In particular, they looked at the Star of David theorem involving the six nearest neighbors to a binomial coefficient Picturing Irrationality Steven J. Miller and David Montague In the 1950s, Tennenbaum gave a wonderful geometric proof of the irrationality of the square root of two. We show how to generalize his arguments to prove the irrationality of other numbers, and invite the reader to explore how far these arguments can go. Gauss’s Lemma and the Irrationality of Roots, Revisited David Gilat An idea of T. Estermann (1975) for demonstrating the irrationality of Minimizing Areas and Volumes and a Generalized AM–GM Inequality Walden Freedman Solving optimization problems via Lagrange Multipliers leads us to a generalized AM-GM inequality. We give several related optimization problems, suitable as projects for calculus students, with answers provided at the end. Proof Without Words: Grant Cairns A Generalization of the Identity cos Erik Packard and Markus Reitenbach Using Euler’s Theorem and the Geometric Sum Formula, we prove trigonometric identities for alternating sums of sines and cosines. A Class of Matrices with Zero Determinant André L. Yandl and Carl Swenson Let a[1], a[2], . . . , a[n] , b[1], b[2], . . . , b[n] be real numbers and the nxn matrix C be defined with entries k is a positive integer. If n > k + 1, then det(C) = 0, and if n= k + 1, then det( C) is a product involving two Vandermonde determinants. Splitting Fields and Periods of Fibonacci Sequences Modulo Primes Sanjai Gupta, Parousia Rockstroh, and Francis Edward Su We consider the period of a Fibonacci sequence modulo a prime and provide an accessible, motivated treatment of this classical topic using only ideas from linear and abstract algebra. Our methods extend to general recurrences with prime moduli and provide some new insights. And our treatment highlights a nice application of the use of splitting fields that might be suitable to present in an undergraduate course in abstract algebra or Galois theory. A Short Proof of the Chain Rule for Differentiable Mappings in R^n Raymond Mortini Based on the notion of M-differentiability, we present a short proof of the differentiability of composite functions in the finite dimensional setting. The Surprising Predictability of Long Runs Mark F. Schilling When data arise from a situation that can be modeled as a collection of n independent Bernoulli trials with success probability p, a simple rule of thumb predicts the approximate length that the longest run of successes will have, often with remarkable accuracy. The distribution of this longest run is well approximated by an extreme value distribution. In some cases, we can practically guarantee the length that the longest run will have. Applications to coin and die tossing, roulette, state lotteries and the digits of π are given.
{"url":"http://www.maa.org/publications/periodicals/mathematics-magazine/mathematics-magazine-april-2012","timestamp":"2014-04-17T01:56:13Z","content_type":null,"content_length":"98007","record_id":"<urn:uuid:0ad208da-06bf-421e-957b-a28f660b62e4>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: How to add Q stats from -wntestq- to -esttab- table (hopefully w Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: How to add Q stats from -wntestq- to -esttab- table (hopefully with -estadd-) From Stas Kolenikov <skolenik@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: How to add Q stats from -wntestq- to -esttab- table (hopefully with -estadd-) Date Fri, 16 Sep 2011 09:06:31 -0500 I don't see much advantage of -eststo- over the official -estimates store-, with which you may not be familiar. -estimates- are as close as Stata gets to R objects, with -e(whatever)- being essentially results$whatever. -estadd-, on the other hand, is immensely helpful. In your R to Stata work, you would also find that you would need to declare temporary variables explicitly with -tempvar-, and use local macro names for these. (It actually provides a better locality control than R objects that can seep through the program interfaces and produce unpredictable results. In Stata, variables in the data set are ALWAYS global, and the macros in a program are ALWAYS local.) Hence your programmatic solution would look program define arima_plus_wntestq version 11 syntax varlist , estname( name ) [ * ] quietly { arima `varlist', `options' tempvar res predict `res', residuals wntestq `res' estadd scalar Q = r(stat) estadd scalar df_Q = r(df) estadd scalar p_Q = r(p) estimates store `estname' end // of arima_plus_wntestq Another thing about R to Stata change is that you can rely on the official Stata commands doing good job with backwards compatibility, but you should rarely expect such of the user-written command. In other words, in your own programs, you would want to use as little user-written stuff as needed for them to work. (There is no built-in mechanisms to resolve dependencies, but you as the author of the program may build rudimentary tools for that. More typically, I put whatever -ssc install- packages I need at the top of my do-file, rather than in ado-files and programs themselves.) On Fri, Sep 16, 2011 at 8:39 AM, Richard Herron <richard.c.herron@gmail.com> wrote: > I found a hackish solution. > * ----- begin code ----- > * combine regressions with -eststo- and add -wntestq- with -estadd- > webuse friedman2, clear > generate ln_m1 = ln(m1) > eststo clear > eststo: quietly arima DS4.ln_m1, ar(1) ma(2) > quietly predict res_1, residuals > quietly wntestq res_1 > estadd scalar Q = r(stat) > eststo: quietly arima DS4.ln_m1, ar(1) ma(1/2) > quietly predict res_2, residuals > quietly wntestq res_2 > estadd scalar Q = r(stat) > eststo: quietly arima DS4.ln_m1, ar(1/2) ma(2) > quietly predict res_3, residuals > quietly wntestq res_3 > estadd scalar Q = r(stat) > eststo: quietly arima DS4.ln_m1, ar(1/2) ma(1/2) > quietly predict res_4, residuals > quietly wntestq res_4 > estadd scalar Q = r(stat) > * and create tables with -esttab- > esttab, stats(aic bic Q) noobslast nomtitles > * ----- end code ----- > Programming is the next task in my R-to-Stata switch. I will update > when I learn how to code a -estadd_wntestq- solution. > On Wed, Sep 14, 2011 at 20:32, Richard Herron > <richard.c.herron@gmail.com> wrote: >> I have an -esttab- table with multiple -arima- models to which I would >> like to add Q stats made with -wntestq-. I can't use -estadd scalar- >> because Q stats are not part of -arima- objects. Is there a way that I >> can chain together -predict, residuals- and -wntestq- to add Q stats >> to my -esttab- table? Thanks! >> Here is some code (including how I would find Q stats "manually"): >> * ----- begin code ----- >> * I know how to combine regressions with -eststo- >> webuse friedman2, clear >> generate ln_m1 = ln(m1) >> eststo clear >> eststo: quietly arima DS4.ln_m1, ar(1) ma(2) >> eststo: quietly arima DS4.ln_m1, ar(1) ma(1/2) >> eststo: quietly arima DS4.ln_m1, ar(1/2) ma(2) >> eststo: quietly arima DS4.ln_m1, ar(1/2) ma(1/2) >> * and create tables with -esttab- >> esttab, aic bic noobslast nomtitles >> * I would like to add Q stats to each model with -estadd-, but I can't >> figure out how >> * here's how I find Q stats "manually" >> quietly arima DS4.ln_m1, ar(1/2) ma(1/2) >> predict res, residuals >> wntestq res, lags(8) >> * ----- end code ----- Stas Kolenikov, also found at http://stas.kolenikov.name Small print: I use this email account for mailing lists only. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2011-09/msg00670.html","timestamp":"2014-04-18T01:26:10Z","content_type":null,"content_length":"12712","record_id":"<urn:uuid:05eb04e2-6515-4f55-b688-eee78c476b7f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
the definition of triangulation noun Surveying, Navigation. a technique for establishing the distance between any two points, or the relative position of two or more points, by using such points as of a triangle or series of triangles, such that each triangle has a side of known or measurable length (base or base line) that permits the size of the angles of the triangle and the length of its other two sides to be established by observations taken either upon or from the two ends of the base line. the triangles thus formed and measured. Origin: 1810–20; Medieval Latin triangulātiōn- (stem of ) the making of triangles. See
{"url":"http://dictionary.reference.com/browse/triangulation?qsrc=2446","timestamp":"2014-04-19T07:29:19Z","content_type":null,"content_length":"100149","record_id":"<urn:uuid:79dd40b1-cc8f-48f4-8c34-f84045bbd5e2>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
article about long range artillery Does anyone know where to find information about this subject? For Example articles written about the subject. As a brief note, you can use the substitution which will make your range formula slightly easier to understand. Regarding articles about long range artillery: I recall a reading a bunch of stuff about Gerard Bull (who was working on the Iraqi supergun) and gun-barrel based space launch , which is, I believe the most extreme example of long range artillery. For your article it wouldn't hurt to point out that long range artillery trajectories are essentially orbits. Regarding dealing with height offsets: I'm sure that there are endless charts and formulae for calculating the angles for hitting things. I remember working out the general problem with [itex]h\neq0[/itex] in a second year calculus class once. In retrospect, it might be easier to try to work out the angle [itex]\theta[/itex] off the horizontal to go a distance [itex]x[/itex] along a slope of angle [itex]\phi[/itex], rather than dealing with a fixed height offset. (Finding the optimum angle on a slope is a standard physics problem.) However, I haven't really looked at it, so that's just a guess. Regarding your assignment: Showing that even simplistic attempts to do more realistic artillery calculations are hard isn't necessarily a bad paper. You can also do work to determine how large the error from various sources would be on a long-range artillery shot rather than trying to account for them in your calculation.
{"url":"http://www.physicsforums.com/showthread.php?t=56083","timestamp":"2014-04-20T21:32:24Z","content_type":null,"content_length":"55922","record_id":"<urn:uuid:48a702e2-92e3-4ee3-9487-3fd7c5a495a7>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
Competitive auctions Results 1 - 10 of 68 , 2002 "... We describe mechanisms for auctions that are simultaneously truthful (alternately known as strategy-proof or incentive-compatible) and guarantee high "net" profit. We make use of appropriate variants of competitive analysis of algorithms in designing and analyzing our mechanisms. Thus, we do not req ..." Cited by 89 (19 self) Add to MetaCart We describe mechanisms for auctions that are simultaneously truthful (alternately known as strategy-proof or incentive-compatible) and guarantee high "net" profit. We make use of appropriate variants of competitive analysis of algorithms in designing and analyzing our mechanisms. Thus, we do not require any probabilistic assumptions on bids. We present - In Proceedings of the 7th ACM Conference on Electronic Commerce , 2005 "... We study a multi-unit auction with multiple bidders, each of whom has a private valuation and a budget. The truthful mechanisms of such an auction are characterized, in the sense that, under standard assumptions, we prove that it is impossible to design a non-trivial truthful auction which allocates ..." Cited by 74 (10 self) Add to MetaCart We study a multi-unit auction with multiple bidders, each of whom has a private valuation and a budget. The truthful mechanisms of such an auction are characterized, in the sense that, under standard assumptions, we prove that it is impossible to design a non-trivial truthful auction which allocates all units, while we provide the design of an asymptotically revenue-maximizing truthful mechanism which may allocate only some of the units. Our asymptotic parameter is a budget dominance parameter which measures the size of the budget of a single agent relative to the maximum revenue. We discuss the relevance of these results for the design of Internet ad auctions. - In Proceedings of the ACM Conference on Electronic Commerce (EC , 2007 "... For allocation problems with one or more items, the wellknown Vickrey-Clarke-Groves (VCG) mechanism is efficient, strategy-proof, individually rational, and does not incur a deficit. However, the VCG mechanism is not (strongly) budget balanced: generally, the agents ’ payments will sum to more than ..." Cited by 48 (15 self) Add to MetaCart For allocation problems with one or more items, the wellknown Vickrey-Clarke-Groves (VCG) mechanism is efficient, strategy-proof, individually rational, and does not incur a deficit. However, the VCG mechanism is not (strongly) budget balanced: generally, the agents ’ payments will sum to more than 0. If there is an auctioneer who is selling the items, this may be desirable, because the surplus payment corresponds to revenue for the auctioneer. However, if the items do not have an owner and the agents are merely interested in allocating the items efficiently among themselves, any surplus payment is undesirable, because it will have to flow out of the system of agents. In 2006, Cavallo [3] proposed a mechanism that redistributes some of the VCG payment back to the agents, while maintaining efficiency, strategy-proofness, individual rationality, and the , 2004 "... in Proceedings of the 3rd ACM Conference on Electronic Commerce, Tampa FL, October 2001. This work was supported by the DoD University Research Initiative (URI) program administered by the Oce of Naval Research under Grant N00014-01-1-0795. ..." Cited by 43 (4 self) Add to MetaCart in Proceedings of the 3rd ACM Conference on Electronic Commerce, Tampa FL, October 2001. This work was supported by the DoD University Research Initiative (URI) program administered by the Oce of Naval Research under Grant N00014-01-1-0795. "... The monopolist’s theory of optimal single-item auctions for agents with independent private values can be summarized by two statements. The first is from Myerson [8]: the optimal auction is Vickrey with a reserve price. The second is from Bulow and Klemperer [1]: it is better to recruit one more bid ..." Cited by 40 (14 self) Add to MetaCart The monopolist’s theory of optimal single-item auctions for agents with independent private values can be summarized by two statements. The first is from Myerson [8]: the optimal auction is Vickrey with a reserve price. The second is from Bulow and Klemperer [1]: it is better to recruit one more bidder and run the Vickrey auction than to run the optimal auction. These results hold for single-item auctions under the assumption that the agents ’ valuations are independently and identically drawn from a distribution that satisfies a natural (and prevalent) regularity condition. These fundamental guarantees for the Vickrey auction fail to hold in general single-parameter agent mechanism design problems. We give precise (and weak) conditions under which approximate analogs of these two results hold, thereby demonstrating that simple mechanisms remain almost optimal in quite general single-parameter agent settings. - STOC ’08 , 2008 "... Mechanism design is now a standard tool in computer science for aligning the incentives of self-interested agents with the objectives of a system designer. There is, however, a fundamental disconnect between the traditional application domains of mechanism design (such as auctions) and those arising ..." Cited by 37 (12 self) Add to MetaCart Mechanism design is now a standard tool in computer science for aligning the incentives of self-interested agents with the objectives of a system designer. There is, however, a fundamental disconnect between the traditional application domains of mechanism design (such as auctions) and those arising in computer science (such as networks): while monetary transfers (i.e., payments) are essential for most of the known positive results in mechanism design, they are undesirable or even technologically infeasible in many computer systems. Classical impossibility results imply that the reach of mechanisms without transfers is severely limited. Computer systems typically do have the ability to reduce service quality—routing systems can drop or delay traffic, scheduling protocols can delay the release of jobs, and computational payment schemes can require computational payments from users (e.g., in spam-fighting systems). Service degradation is tantamount to requiring that users burn money, and such “payments ” can be used to influence the preferences of the agents at a cost of degrading the social surplus. We develop a framework for the design and analysis of money-burning mechanisms to maximize the residual surplus— the total value of the chosen outcome minus the payments required. Our primary contributions are the following. • We define a general template for prior-free optimal mechanism design that explicitly connects Bayesian optimal mechanism design, the dominant paradigm in economics, with worst-case analysis. In particular, we establish a general and principled way to identify appropriate performance benchmarks in prior-free mechanism design. • For general single-parameter agent settings, we char- - In Proc. of the 49th Annual Symposium on Foundations of Computer Science (FOCS , 2008 "... We study multi-unit auctions where the bidders have a budget constraint, a situation very common in practice that has received very little attention in the auction theory literature. Our main result is an impossibility: there are no incentive-compatible auctions that always produce a Pareto-optimal ..." Cited by 35 (5 self) Add to MetaCart We study multi-unit auctions where the bidders have a budget constraint, a situation very common in practice that has received very little attention in the auction theory literature. Our main result is an impossibility: there are no incentive-compatible auctions that always produce a Pareto-optimal allocation. We also obtain some surprising positive results for certain special cases. 1 , 2006 "... We present a distribution-free model of incomplete-information games, both with and without private information, in which the players use a robust optimization approach to contend with payoff uncertainty. Our “robust game” model relaxes the assumptions of Harsanyi’s Bayesian game model, and provides ..." Cited by 33 (0 self) Add to MetaCart We present a distribution-free model of incomplete-information games, both with and without private information, in which the players use a robust optimization approach to contend with payoff uncertainty. Our “robust game” model relaxes the assumptions of Harsanyi’s Bayesian game model, and provides an alternative distribution-free equilibrium concept, which we call “robust-optimization equilibrium, ” to that of the ex post equilibrium. We prove that the robust-optimization equilibria of an incomplete-information game subsume the ex post equilibria of the game and are, unlike the latter, guaranteed to exist when the game is finite and has bounded payoff uncertainty set. For arbitrary robust finite games with bounded polyhedral payoff uncertainty sets, we show that we can compute a robust-optimization equilibrium by methods analogous to those for identifying a Nash equilibrium of a finite game with complete information. In addition, we present computational results. , 2007 "... Call a Vickrey-Clarke-Groves (VCG) mechanism to assign p identical objects among n agents, feasible if cash transfers yield no deficit. The efficiency loss of such a mechanism is the worst (largest) ratio of the budget surplus to the efficient surplus, over all profiles of non negative valuations. T ..." Cited by 24 (3 self) Add to MetaCart Call a Vickrey-Clarke-Groves (VCG) mechanism to assign p identical objects among n agents, feasible if cash transfers yield no deficit. The efficiency loss of such a mechanism is the worst (largest) ratio of the budget surplus to the efficient surplus, over all profiles of non negative valuations. The optimal (smallest) efficiency loss � L(n, p) satisfies is strictly smaller or strictly �L(n, p) ≤ �L(n, { n 4 , 2002 "... In this paper we consider the problem of designing a mechanism for double auctions where bidders each bid to buy or sell one unit of a single commodity. We assume that each bidder's utility value for the item is private to them and we focus on truthful mechanisms, ones were the bidders' optimal stra ..." Cited by 23 (8 self) Add to MetaCart In this paper we consider the problem of designing a mechanism for double auctions where bidders each bid to buy or sell one unit of a single commodity. We assume that each bidder's utility value for the item is private to them and we focus on truthful mechanisms, ones were the bidders' optimal strategy is to bid their true utility. The profit of the auctioneer is the difference between the total payments from buyers and total to the sellers. We aim to maximize this profit. We extend the competitive analysis framework of basic auctions [9] and give an upper bound on the profit of any truthful double auction. We then reduce the competitive double auction problem to basic auctions by showing that any competitive basic auction can be converted into a competitive double auction with a competitive ratio of twice that of the basic auction.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=132806","timestamp":"2014-04-17T14:21:38Z","content_type":null,"content_length":"37491","record_id":"<urn:uuid:d18f7007-1f25-4b3d-a19b-ee9d7e7a25a6>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Lure of the Labyrinth: Employee Cafeteria Lesson Ideas Educator Resources for Lure of the Labyrinth: Employee Cafeteria In this math game, students enter the cafeteria of the Labyrinth, where hungry monsters come to feed. Players must figure out what the monsters want to eat by using their knowledge of proportions, ratios, multiplication, division, and prime numbers. In this lesson plan, which is adaptable for grade 6, grade 7, and grade 8, students use BrainPOP resources to explore mathematical concepts. They will describe, represent, and apply numbers using mental strategies, paper/pencil and technology through an online game. Students will also practice analyzing ratios, proportions, and percentages. This lesson plan is aligned to Common Core State Standards. See more »
{"url":"http://www.brainpop.com/educators/community/bp-game/lure-of-the-labyrinth-employee-cafeteria/","timestamp":"2014-04-18T21:39:54Z","content_type":null,"content_length":"59511","record_id":"<urn:uuid:e6662758-822e-49f7-ba19-34c0af958f86>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra, Problems For Fun (11) May 22nd 2009, 05:46 AM #1 MHF Contributor May 2008 Algebra, Problems For Fun (11) This one is a well-known and important result: Let $G$ be a group and $H eq G$ a subgroup of $G.$ Prove that if $[G:H] < \infty,$ then $\bigcup_{g \in G} gHg^{-1} eq G.$ I love these problems. It is well known that for any subgroup $H$ of finite index $n$ in $G,$ there exists a normal subgroup $N$ of $G$ which is contained in $H.$ This is proved by considering the set $S=\{aH:a\in G\}$ of all left cosets of $H$ in $G;$ any element $g\in G$ induces a permutation $\pi_g$ of $S$ by $g\mapsto gaH.$ Then the mapping $g\to\pi_g$ from $G$ to the symmetric group of $S$ is a homomorphism with kernel $N=\bigcap_{a\,\in\,G}aHa^{-1}\subseteq H.$ Consider first the case when $G$ is finite. The number of conjugates of $H$ in $G$ is equal to $|G:N_G(H)|,$ the index of the normalizer of $H$ in $G.$ As $|H|\le|N_G(H)|,$ we have $|G:N_G(H)|\le |G:H|=n$ – and as $H$ is proper, $n>1.$ Hence $\left|\bigcup_{a\,\in\,G}aHa^{-1}\right|\color{white}.$$\le\ 1+|G:N_G(H)|\left(|H|-1\right)\ \le\ 1+|G:H|\left(|H|-1\right)\ =\ 1+|G|-n\ <\ |G|$ This proves the result for the finite case. For $G$ infinite, consider the normal subgroup $N$ above. The quotient group $G/N$ is a finite group since it is isomorphic to a subgroup of the symmetric group of degree $n.$ So is $H/N$ as it is a subgroup of $G/N.$ Thus $H/N$ is a proper subgroup of finite index of the finite group $G/N$ and so the union of its conjugates in $G/N$ is proper subset of $G/N.$ The result follows since the mapping $aHa^{-1}\mapsto aN(H/N)a^{-1}N$ is a bijection from the set of all conjugates of $H$ in $G$ to the set of all conjugates of $H/N$ in $G/N.$ May 23rd 2009, 01:04 PM #2
{"url":"http://mathhelpforum.com/advanced-algebra/90059-algebra-problems-fun-11-a.html","timestamp":"2014-04-16T15:02:55Z","content_type":null,"content_length":"44044","record_id":"<urn:uuid:c0300c24-23d6-495f-a601-27c4a639ca8c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
DOCUMENTA MATHEMATICA, Vol. Extra Volume: Andrei A. Suslin's Sixtieth Birthday (2010), 525-594 DOCUMENTA MATHEMATICA , Vol. Extra Volume: Andrei A. Suslin's Sixtieth Birthday (2010), 525-594 S. Saito and K. Sato A $p$-adic Regulator Map and Finiteness Results for Arithmetic Schemes A main theme of the paper is a conjecture of Bloch-Kato on the image of $p$-adic regulator maps for a proper smooth variety $X$ over an algebraic number field $k$. The conjecture for a regulator map of particular degree and weight is related to finiteness of two arithmetic objects: One is the $p$-primary torsion part of the Chow group in codimension $2$ of $X$. Another is an unramified cohomology group of $X$. As an application, for a regular model ${\mathscr X}$ of $X$ over the integer ring of $k$, we prove an injectivity result on the torsion cycle class map of codimension $2$ with values in a new $p$-adic cohomology of ${\mathscr X}$ introduced by the second author, which is a candidate of the conjectural étale motivic cohomology with finite coefficients of 2010 Mathematics Subject Classification: Primary 14C25, 14G40; Secondary 14F30, 19F27, 11G25. Keywords and Phrases: $p$-adic regulator, unramified cohomology, Chow groups, $p$-adic étale Tate twists Full text: dvi.gz 117 k, dvi 323 k, ps.gz 1385 k, pdf 565 k. Home Page of DOCUMENTA MATHEMATICA
{"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/DMJDMV/vol-suslin/saito_sato.html","timestamp":"2014-04-18T10:37:18Z","content_type":null,"content_length":"2217","record_id":"<urn:uuid:8d6b95d3-c0e7-48fb-ac71-e606bc950b7b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00121-ip-10-147-4-33.ec2.internal.warc.gz"}