Convergence of metrics

Question: Take a Riemannian manifold $M$ with a family of smooth metrics $g(t)$, $t\in[0,T)$. Call $D_0$ the Levi-Civita connection of $g(0)$ and assume that for every $m\geq 0$, $\int_0^T \sup_M|D_0^m \frac{\partial}{\partial t} g(t)|_{g(0)}\, dt< \infty$. Why does $g(t)$ converge in $C^{\infty}$ to a smooth tensor?

Tags: ca.analysis-and-odes, ap.analysis-of-pdes, dg.differential-geometry

Answer (accepted, score 2): Sounds like a homework problem? Note that $$g(T)=\lim_{t\to T^-}g(t)=g(0)+\int\limits_0^T\tfrac{\partial}{\partial t}g\,dt.$$ Then you get $$|D_0^m g(T)|\le \mathrm{Const}(m)$$ and $$\sup_M|D_0^m[g(T)-g(t_0)]|=\sup_M\left|\int\limits_{t_0}^T D_0^m\!\left[\tfrac{\partial}{\partial t}g\right]dt\right|\to 0\ \ \text{as}\ \ t_0\to T^-.$$ One can cover $M$ by charts with bounded $g(0)$-Christoffel symbols in each. Then the above inequalities imply that $g(T)$ is $C^\infty$-smooth and $g(t)\to g(T)$ in the $C^\infty$-topology as $t\to T^-$.

Comment: @Anton Petrunin I don't catch your argument, so the point is: what do you mean by convergence in $C^{\infty}$? – ukn1 May 4 '11 at 16:47

Comment: Right, I only showed that there is a $C^0$-limit and that it is $C^\infty$-smooth. I will make some corrections. [I am sorry --- I did not read your question carefully.] – Anton Petrunin May 4 '11 at 16:52
{"url":"http://mathoverflow.net/questions/63916/convergence-of-metrics?answertab=oldest","timestamp":"2014-04-19T04:47:40Z","content_type":null,"content_length":"53887","record_id":"<urn:uuid:31eef8cb-3a81-4d7a-b084-3c654bc37173>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
How do you find the argument of 8i?

Question (September 30th 2009, 01:40 PM, #1): The modulus is $\sqrt {x^2 + y^2}$, which here is just $\sqrt{64} = 8$. And as the argument is defined by $\tan^{-1}\left|\frac{y}{x}\right|$, and $x$ in this case is zero, you would basically be calculating the inverse tan of $\frac{8}{0}$, and it's impossible to divide by zero! How, then, is it possible to find the argument of $8i$?

Reply #2 (01:43 PM): What you're actually (kind of) trying to do is find the inverse tan of infinity. What is the value of $\tan{\frac{\pi}{2}}$? Also, an easier way to see it is to just draw it on an Argand diagram. $8i$ is on the imaginary axis; what angle does the imaginary axis make with the positive real axis?

Reply #3 (01:48 PM): The number $8i$ is located on the positive imaginary axis. That makes a right angle with the positive real axis. Therefore $\arg(8i)=\frac{\pi}{2}$.

Reply #4 (01:56 PM, original poster): I drew $8i$ on the Argand diagram and plotted the value of 8 on the imaginary axis, but then that's the thing - there is no value to plot on the real axis because it is the only term. The value of $\tan{\frac{\pi}{2}}$ is... an error, according to my calculator. Does this mean infinity, then? Obviously, this can't be found on a calculator, or (to me) on an Argand diagram. Thanks for the help so far. It's just probably one of many simple things in mathematics that I haven't learned yet, and this example seems difficult. Is there any way you might be able to explain further? EDIT: I see it now! It's just clicked. Thanks very much to you and Plato for this.

Reply #5 (02:08 PM): You're welcome. Glad we could help.
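The geometric answer above ($\arg(8i)=\frac{\pi}{2}$, since $8i$ lies on the positive imaginary axis) can be checked numerically. A hedged sketch in Python: `cmath.polar` uses `atan2(y, x)` under the hood, which handles the $x = 0$ case that a plain $\tan^{-1}(y/x)$ cannot:

```python
import cmath
import math

z = 8j                     # the complex number 8i
r, theta = cmath.polar(z)  # modulus and argument in one call

print(r)                    # 8.0
print(theta)                # 1.5707963267948966, i.e. pi/2
print(math.degrees(theta))  # 90.0
```

Note that `atan2(8, 0)` is well defined even though `8/0` is not, which is exactly why the naive inverse-tan formula breaks down for numbers on the imaginary axis.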
{"url":"http://mathhelpforum.com/calculus/105270-how-do-you-find-argument-8i.html","timestamp":"2014-04-16T07:57:05Z","content_type":null,"content_length":"47435","record_id":"<urn:uuid:ed8f5331-fff9-4981-b6ff-d54039e2b21e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Yahoo Groups - Re: Goldbach Restatement

--- In primenumbers@y..., Jud McCranie <jud.mccranie@m...> wrote:
> If GC is shown to be undecidable, then the opposite of GC is NOT consistent
> with the axioms - it is contradictory. The fact that it is undecidable
> says that there is no finite proof in number theory for either it or its
> opposite. But if the GC is false, that itself states that there is a
> finite proof that it is false (i.e. a counterexample that can be found in a
> finite number of steps). So if GC is undecidable, a false GC would be a
> contradiction. So if GC is shown to be undecidable, it can't be false.

The nature of GC implies that it may be undecidable, but cannot be *proven* to be undecidable. Follow this reasoning:

(1) GC is either decidable or undecidable; also, GC is either true or false.
(2) If GC is false, it is decidable, since a finite counterexample exists.
(3) The contrapositive of (2): If GC is undecidable, it is true.
(4) Extending (3): If GC can be proven undecidable, it can be proven to be true.

Statement (4) yields a contradiction: if GC is proven to be undecidable, it is proven to be true; and if it can be proven true, it is not undecidable. So if GC is truly undecidable, you'll never prove it :-)

At 02:44 AM 10/31/2001 -0500, Jack Brennen wrote:
> Then you agree that a proof that GC is undecidable is by its nature
> also a proof that GC is true. If GC is in fact undecidable, such a
> proof cannot be allowed to exist - it would decide GC to be true.

IF (big "if") GC is shown to be undecidable in standard NT, then that would mean that it is true, but it would also mean that there is no proof in NT that it is true. It would be like Godel's construction of a statement that is true but can't be proven in the system. If GC is false then it is decidable (by showing the counterexample in a finite number of steps). So if GC is shown to be undecidable in NT then it can't be false.

To look at it a slightly different way: if X is shown to be undecidable in a system, that says two things: (1) there is no proof of X in the system, and (2) there is no proof of ~X in the system. So if GC is shown to be undecidable, it can't be false, since undecidability says that there is no counterexample, whereas a false GC says that there is a counterexample.

> I don't disagree with that. It conflicts with what you said though.

To be undecidable, it is only necessary that at least one of X or ~X is undecidable. At most one of them can be decidable if the problem is to be undecidable (this is called partial decidability for the one that is decidable.)

-- Jud McCranie | Programming Achieved with Structure, Clarity, And Logic
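The thread's central point - that a false GC is witnessed by a finite computation - can be made concrete. A minimal, stdlib-only Python sketch (the function names are mine, not from the thread):

```python
def is_prime(n):
    """Trial division; fine for the small values used here."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_witness(n):
    """Return primes (p, q) with p + q == n, or None if no pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Verifying GC up to any bound is a finite computation; a single even
# n > 2 with no witness would be a finite, mechanically checkable
# counterexample -- which is exactly why a false GC would be decidable.
for n in range(4, 1000, 2):
    assert goldbach_witness(n) is not None

print(goldbach_witness(100))  # (3, 97)
```

Of course, no finite search can prove GC true; the sketch only illustrates the asymmetry the posters rely on: falsity is finitely refutable, truth is not finitely verifiable.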
{"url":"https://groups.yahoo.com/neo/groups/primenumbers/conversations/topics/3666?viscount=-30&l=1","timestamp":"2014-04-16T22:07:26Z","content_type":null,"content_length":"44531","record_id":"<urn:uuid:56a31ba5-9749-4c7c-9b0a-4b5bad8fe01e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Normal and Sampling Distribution #2

September 11th 2009, 06:01 PM, #1 (member since Apr 2008)

Please help! I've got no idea what to do!

4. The average and standard deviation of the amount of Goods and Services Tax remitted by all performance artists in a twelve-month period were $6.69 thousand and $1.67 thousand respectively. If a sample of 116 artists was taken, find the average value (in thousands of dollars) below which only 1% of taxes would lie.

5. The length of time of long-distance phone calls has a variance of 38.4 minutes squared. It is also known that 7% of calls last longer than 42.4 minutes. Find the mean, assuming the distribution closely approximates a normal distribution.

Your help would be much appreciated!

Last edited by mr fantastic; September 11th 2009 at 06:42 PM. Reason: Questions moved from original thread.
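Both questions reduce to the normal inverse CDF. A hedged sketch using Python's stdlib `statistics.NormalDist` (reading question 4 as a sampling-distribution-of-the-mean problem is my assumption, not stated in the post):

```python
import math
from statistics import NormalDist

std_normal = NormalDist()  # mean 0, standard deviation 1

# Q4: sampling distribution of the mean for a sample of n = 116 artists.
mu, sigma, n = 6.69, 1.67, 116
se = sigma / math.sqrt(n)                  # standard error of the mean
cutoff = NormalDist(mu, se).inv_cdf(0.01)  # 1st percentile of the mean
print(round(cutoff, 3))                    # ~6.329 (thousand dollars)

# Q5: P(X > 42.4) = 0.07 with variance 38.4; solve for the mean.
sd = math.sqrt(38.4)
z = std_normal.inv_cdf(1 - 0.07)  # z-score with 7% in the upper tail
mean = 42.4 - z * sd
print(round(mean, 2))             # ~33.25 minutes
```

The same two calls (`inv_cdf` on a standard normal, then shifting and scaling by the mean and standard deviation) cover most textbook problems of this shape.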
{"url":"http://mathhelpforum.com/statistics/101759-normal-sampling-distribution-2-a.html","timestamp":"2014-04-17T16:20:51Z","content_type":null,"content_length":"30029","record_id":"<urn:uuid:ebbf86e3-f3a4-451f-b0f6-0c55d771170e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
compressed air flow calculation

Montemayor (Chemical), 8 Mar 04 10:03

zdas04 has stated the case in an excellent manner. The basic dilemma is that we don't know whether you have established choked-flow conditions or not. We lack the basic data (flow rate, temperature, pressures), so we can't make the determination. But you can. Normally, if the pressure ratio is 1.8 to 2.0, you expect and obtain choked flow. However, we can't tell if the 2 ft of tubing consumes the pressure down to where the outlet pressure ratio is less than 1.8. Therefore, what zdas04 recommends is the practical step to take. I suspect choked flow, but that has to be proven first.

Art Montemayor
Spring, TX
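The "1.8 to 2.0" rule of thumb above comes from the critical pressure ratio for an ideal gas. A hedged sketch, not from the original thread, assuming an isentropic model with k = cp/cv = 1.4 for air:

```python
def critical_pressure_ratio(k=1.4):
    """Upstream/downstream absolute pressure ratio above which flow
    chokes (isentropic ideal gas; k = cp/cv, 1.4 for air)."""
    return ((k + 1) / 2) ** (k / (k - 1))

def is_choked(p_upstream, p_downstream, k=1.4):
    """Both pressures absolute, in the same units."""
    return p_upstream / p_downstream >= critical_pressure_ratio(k)

print(round(critical_pressure_ratio(), 3))  # ~1.893 for air
print(is_choked(100.0, 45.0))               # True: ratio 2.22 > 1.893
print(is_choked(100.0, 80.0))               # False: ratio 1.25 < 1.893
```

This is only the first screening step the posters describe; one still has to check whether line losses upstream of the outlet pull the ratio below critical before applying a choked-flow equation.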
{"url":"http://www.eng-tips.com/viewthread.cfm?qid=89248","timestamp":"2014-04-19T01:52:50Z","content_type":null,"content_length":"35520","record_id":"<urn:uuid:bbfb05d6-5fa1-46fa-b99a-f7b21781a04a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus Tutors

Berkeley, CA 94702 - Engineering M.S. Tutoring Math, Physics, ESL and Spanish

...At UC Berkeley I taught CE100, an introductory fluid mechanics course, for which I obtained outstanding student reviews. In the past I have also independently tutored engineering graduate students in physics, water chemistry, and fluid mechanics. Other...

Offering 10+ subjects including calculus
{"url":"http://www.wyzant.com/Berkeley_CA_Calculus_tutors.aspx","timestamp":"2014-04-23T23:36:40Z","content_type":null,"content_length":"63020","record_id":"<urn:uuid:5a281517-86bf-40ce-964a-cac0d52ce7ad>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Shane — Total # Posts: 296

oh never mind, yes the LR is O2

COULD YOU PLEASE EXPLAIN IT TO ME. Consider the fermentation reaction of glucose: C6H12O6 ----> 2C2H5OH + 2CO2. A 1.00-mol sample of C6H12O6 was placed in a vat with excess yeast. If 35.1 g of C2H5OH was obtained, what was the percent yield of C2H5OH?

An observer on top of a building 66 metres high finds the angle of elevation to the top of a taller building to be 34 degrees. The angle of depression to the foot of the same building is 51 degrees. If the buildings are on the same ground level, find the height of the taller building.

Find the area of an equilateral triangle whose base is 10 cm.

Use the thermodynamic values to (a) deduce the enthalpy change of each reaction, and (b) state if the reaction is exothermic or endothermic, and also how much energy is released or absorbed, for the reactions: (1) NH4NO3 (s) ------> N2 (g) + 2H2O (l) (2) 4Zn (s) + 9HNO3 (aq) ----...

From a point on a lighthouse 76 metres above sea level, the angle of depression of a fishing boat is 28 degrees. How far is the boat from the foot of the lighthouse?

A surveyor finds two points A and B on a hillside to be 114.6 metres apart (measured along the line of sight) and finds the angle of elevation from A to B to be 21 degrees. On his plans these points must be shown at their horizontal distance apart. Find this distance.

The top of a T.V. tower casts a shadow 35 metres long when the angle of elevation of the sun is 57 degrees 40 minutes. How high is the top of the tower?

From an aircraft flying at 8000 metres above sea level, the angle of depression of a ship is 36 degrees 15 minutes. How far is the ship from the plane in a straight line?

A tourist viewing Sydney Harbour from a building 180 metres above sea level observes a ferry which is 850 metres from the base of the building. Find the angle of depression of his line of sight.
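Several of the posts above are the same right-triangle setup. A hedged sketch for the last one (the Sydney Harbour ferry), assuming the 850 m is the horizontal distance from the base of the building:

```python
import math

# Tourist 180 m above sea level; ferry 850 m from the building's base.
# The angle of depression equals the angle between the horizontal line
# of sight and the line to the ferry: tan(theta) = opposite / adjacent.
height, distance = 180.0, 850.0
theta = math.degrees(math.atan(height / distance))
print(round(theta, 1))  # ~12.0 degrees
```

The other lighthouse/tower/aircraft questions swap which side of the triangle is unknown, but the `tan`/`atan` relation is the same.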
Two walkers set off at the same time from a crossroad and walk along flat straight roads inclined to each other at 68 degrees. If they both walk at a speed of 6 km/h, find their distance apart 10 minutes later.

Describe one benefit of the transpiration stream of a plant.

A ship sailing on a course bearing 036 degrees is 5500 metres due south of a lighthouse. If the ship continues on this course, what is the closest distance the ship will come to the lighthouse?

Mature phloem is a live tissue, whereas xylem is dead when mature. Explain why it is necessary for phloem to be alive to be functional, whereas xylem can function as dead tissue.

Explain why a plant needs to move food around, particularly from the leaves to other regions.

Explain what is meant by source-to-sink flow in phloem transport.

Two patrol boats M3 and M7 leave port at the same time. M3 heads due west and M7 on a bearing 227 degrees. After 30 minutes M7 has travelled 18 nautical miles and observes M3 in a direction due south. (a) How far is M3 from M7? (b) How far has M3 travelled?

A ship sails on a steady course bearing 106 degrees from A to B. If B is 76 nautical miles further east than A, find, to the nearest nautical mile, how far the ship has sailed.

After walking due west, turning and walking due south, a man is 900 metres from his starting point and bearing 205 degrees from it. How far did he walk (a) westward? (b) southward?

Describe two features of internal anatomy that distinguish monocot and dicot roots.
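The first post above is a direct law-of-cosines computation; a hedged sketch:

```python
import math

# Each walker covers 6 km/h * (10/60) h = 1 km along roads 68 deg apart.
# Law of cosines: c^2 = a^2 + b^2 - 2*a*b*cos(C).
a = b = 6.0 * (10.0 / 60.0)
c = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(math.radians(68)))
print(round(c, 3))  # ~1.118 km apart
```

The bearing problems in the same list reduce to the same identity (or to right triangles when the included angle is 90 degrees).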
In triangle ABC, angle A = angle B = 32 degrees and AB = 108 mm. Find the perpendicular height from C to the base AB.

Explain what is meant by source-to-sink flow in phloem transport.

Explain why it is necessary for phloem to be alive to be functional, whereas xylem can function as dead tissue.

Name three environmental conditions that increase the rate of transpiration in plants, explaining how they operate.

A light aircraft takes off flying due north, then turns and flies 11 000 metres due west. The plane then has a bearing of 340 degrees from its starting point. For what distance did it fly due north?

The system H2 (g) + 3N2 (g) = NH3 (s) is at equilibrium. Use Le Chatelier's principle to predict the direction in which the equilibrium will shift if the ammonia is withdrawn from the reaction.

If the temperature of the system PCl5 (g) = PCl3 (s) + Cl2 (g) + 92.5 kJ is increased, predict the direction of the Le Chatelier shift.

The volume occupied by the reaction SiF4 (g) + 2H2O (g) = SiO2 (s) + 4HF (g) is reduced. Predict the direction of shift in the position of equilibrium. Will the shift be in the direction of the more gaseous molecules, or fewer?

Describe two differences between xylem and phloem.

What happens if water is lost from the leaves faster than it is taken up by the roots?

Does all the water which rises up a plant escape through the leaves?

A lighthouse is 9.6 nautical miles from a ship which bears 156 degrees from the lighthouse. How far is the ship east of the lighthouse? Give the answer correct to one-tenth of a nautical mile.

The vertical angle of a cone is 110 degrees and the diameter of its base is 186 mm. What is its height?

A chord subtends an angle of 68 degrees at the centre of a circle of radius 200 mm. Find the length of the chord.

The angle between two tangents from a point to a circle is 82 degrees. What is the length of one of these tangents if the radius of the circle is 80 mm?

A rectangle has sides 170 mm and 130 mm. What is the angle between the diagonals?
The diagonals of a rhombus are 320 mm and 240 mm in length. Find the angles of the rhombus.

If cos A = 3/5 and A is acute, find sin A, tan A, and sec A.

Invent a unit to measure how fast you spend your money.

List the types of energy according to: (a) content/particle arrangement; (b) energy changes involved.

What is the unit for measuring the speed at which a motor spins?

The sum of twice a number and 4 less than the number is the same as the difference between -36 and the number. What is the number?

What is the apparent weight of a lead brick 2.0 in x 2.0 in x 8.0 in if placed in oil with density p = 0.93 g/cm^3 (p_Pb = 11.4 g/cm^3)? (1 in. = 2.54 cm)

Why does the volume of an air bubble increase as it rises toward the surface?

For the party, 105 red and blue balloons were inflated. There were 3 red balloons for every 2 blue balloons. How many blue balloons are there?

An organization sends a random sample of surveys to 100,000 people and received 4,781 responses. A) Give a 95% confidence interval for the true proportion of those from their entire mailing list who may donate.

What is the molarity of a sulfuric acid solution if 30.00 mL of H2SO4 is required to neutralize 0.840 g of sodium hydrogen carbonate? H2SO4(aq) + NaHCO3(aq) → Na2SO4(aq) + H2O(l) + CO2(aq)

Why is fermentation an important process for some organisms?

Two resistors of equal resistance are connected in series with each other and are connected to a battery that produces a potential difference of 8 V. If the current is 0.2 A, what is the value of each resistor?

State the x-intercepts of the graph of y = 1 - x^2.

Point P(-8,-3) and Q(6,9) are joined by a straight line. Find the equation of the line joining the midpoint of PQ to the origin.

Solve for x, leaving your answer in surd form: 2/(x+2) + 3/(x+1) = 5.

Given the quadratic function y = 3x^2 + 10x - 8: (i) Find the y-intercept. (ii) Find the x-intercepts. (iii) Calculate the axis of symmetry. (iv) Find the coordinates of the vertex. (v) Sketch the graph of the above function.
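The quadratic-function post above (parts i-v) can be worked mechanically from the coefficients; a hedged sketch:

```python
import math

# y = 3x^2 + 10x - 8: intercepts, axis of symmetry, vertex.
a, b, c = 3.0, 10.0, -8.0

y_intercept = c                      # set x = 0  ->  y = -8
disc = b**2 - 4 * a * c              # discriminant: 100 + 96 = 196
x_roots = sorted([(-b - math.sqrt(disc)) / (2 * a),
                  (-b + math.sqrt(disc)) / (2 * a)])  # [-4, 2/3]
axis = -b / (2 * a)                  # x = -5/3
vertex = (axis, a * axis**2 + b * axis + c)           # (-5/3, -49/3)

print(y_intercept, x_roots, axis, vertex)
```

Since the discriminant is a perfect square (196 = 14^2), the roots are exact rationals: x = -4 and x = 2/3, with the vertex at (-5/3, -49/3).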
Complete and balance the following equations, identifying the type of each reaction: (a) Cr + O2 → Cr2O3; (b) Na2CO3 + Mg3(PO4)2; (c) H3BCO + H2O → B(OH)3 + CO + H2.

Represent this reaction with a balanced chemical equation: solid aluminium hydride is formed by a combination reaction of its two elements.

Complete and balance this equation: Na2CO3 + Mg3(PO4)2.

What are the names of the ions that are formed when the following compounds dissolve in water? (a) BaBr2 (b) (NH4)2CrO4 (c) HBr

Why is there a difference in the quality of poultry products?

Represent this reaction with a balanced chemical equation: disilane gas (Si2H6) undergoes combustion to form solid silicon dioxide and water.

A solution of silver nitrate is mixed with a solution of potassium sulphide. Write the molecular, total ionic, and net ionic equations illustrating the reaction.

Based on CaCO3 + 2HCl --> H2O + CO2 + CaCl2: how many mL of 1.00 M NaOH should be used in the titration? 25 mL of HCl and 750 mg of Tums are combined to simulate an upset stomach.

In the auditorium there are 24 rows of seats. The first and second rows have 8 chairs and the third and fourth rows have 10 chairs. If this pattern continues, how many chairs are in the auditorium?

Math Word Problem: Ron is tiling a countertop. He needs to place 54 square tiles in each of 8 rows to cover the counter. He wants to randomly place 8 groups of four blue tiles each and have the rest of the tiles in white. How many white tiles will Ron need?

A large grinding wheel in the shape of a solid cylinder of radius 0.330 m is free to rotate on a frictionless, vertical axle. A constant tangential force of 260 N applied to its edge causes the wheel to have an angular acceleration of 0.795 rad/s^2. (a) What is the moment of in...
English 8R - Homework Check (please read): Hmm, it's rather good; a bit rushed, with minor grammar issues.

7th grade science ASAP please!: Your answers are right.

Which of the following represents a decimal system of measurement? 1 quart = 16 fluid ounces; 1 mile = 5,280 feet; 1 kilometer = 1,000 meters; 1 pound = 16 ounces

Jasmine has 8 packs of candle wax to make scented candles. Each pack contains 14 ounces of wax. Jasmine uses 7 ounces of wax to make one candle. How many candles can she make?

AP Physics C: A stone is catapulted at time t = 0, with an initial velocity of magnitude 21.5 m/s and at an angle of 35.3° above the horizontal. What are the magnitudes of the (a) horizontal and (b) vertical components of its displacement from the catapult site at t = 1.06 s? Repeat for...

Each of the natural numbers 2 through 100, inclusive, is factored into its prime factorization. How many factors of 5 are in the collection of factorizations?

A cantilever beam member made from steel with hollow circular cross section experiences an axial load of 300 kN (F1), and a vertical load of 200 kN (F2) as shown in the following figure. The outside diameter of the beam is 50 mm and the uniform wall thickness of the cross secti...

statics and strengths: A cantilever beam member made from steel with hollow circular cross section experiences an axial load of 300 kN (F1), and a vertical load of 200 kN (F2) as shown in the following figure. The beam is 300 mm long and fixed on one end.

Your teacher has a jar of chocolate candies. For every completed assessment, she gives herself three pieces of chocolate. Candies eaten after 67 completed assessments?

If 20 mL of a 2.0 M solution of HCl is diluted to 500 mL, what is its new molarity?

How do I work this out: if 240 mL of O2 gas increases pressure from 0.40 atm to 0.80 atm, what is the new volume? Show all work.

A saturated Ni(OH)2 solution can be prepared by dissolving 0.239 mg of Ni(OH)2 in water to make 500.0 mL of solution. What is the Ksp for Ni(OH)2?
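The AP Physics projectile question above is constant-acceleration kinematics; a hedged sketch (taking g = 9.8 m/s² and no air resistance, which the post does not state):

```python
import math

# Projectile components at t = 1.06 s for v0 = 21.5 m/s at 35.3 degrees.
v0, angle, t, g = 21.5, math.radians(35.3), 1.06, 9.8

x = v0 * math.cos(angle) * t                    # horizontal displacement
y = v0 * math.sin(angle) * t - 0.5 * g * t**2   # vertical displacement

print(round(x, 2), round(y, 2))  # ~18.60 m, ~7.66 m
```

The "(a)" horizontal component has no acceleration term; only the vertical component picks up the -½gt² correction.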
If f(x) = √(e^(2x) + 1), then f'(0) equals what?

For |x| < 1, the derivative of y = ln √(1 - x^2) is...?

A carpenter cut the top section of a window frame with a 32 degree angle on each end. The side pieces have an angle of 49 degrees at the top. Will the side pieces be parallel?

A(4,1), B(6,3)

Abraham Lincoln: So far, I need help with making an ABC book with letters of the alphabet with the most important people or moments. Please help!

U.S. History Civics (The American President Movie): How was the system of checks and balances presented in The American President movie?

Find the general solution using the method of undetermined coefficients: y''' - 10y'' + 25y' = e^(-x) cos x + e^(5x) + x.

10.6 t/s

business communication: You are in a group that feels very cohesive, so cohesive that it seems to agree readily at every step of a project, without any lengthy discussion or conflict. Sometimes you don't agree with decisions, but you do not say anything. Which of the following is the MOST probable sit...

business communication: What do you think of D?

business communication: That it's a group of international business executives, so they would have to gather information about this investment if it would be a good choice.

business communication: Would the sentence "Time is Money!" be a good opening line for a letter urging a group of international business executives to come to an immediate decision about an investment opportunity? a. yes, because then they would realize that a delay could hurt them financia...

Business Communication: Thank you.

Business Communication: "Please submit your end-of-year report by December 31 so we can include your accomplishments in our report to the Corporate Office due January 20." Is it appropriate to give the report due date in this sentence? a. no, because the reader will not believe that the write ca...
Business Communication: Thank you.

Business Communication: You ask for feedback on a proposal and are told "I don't believe your claims. Your facts must be wrong." You are confident that your facts are indeed correct. You should: a. write a memo to the proofreader's supervisor before sending the final proposal b. mak...
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Shane&page=2","timestamp":"2014-04-18T11:31:38Z","content_type":null,"content_length":"26344","record_id":"<urn:uuid:4491c9bd-ace2-4c50-a756-51c2c82e6a43>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Recurrence Times of Stochastic Processes (also Hitting, Waiting, and First-Passage Times)

21 Aug 2012 15:48

The recurrence time of a state or a finite trajectory is simply how long one must wait to revisit the state, or re-traverse that trajectory. One can learn a lot about a stochastic process by understanding its recurrence times. For instance, Mark Kac proved a very beautiful theorem which says that, for a stationary, discrete-valued stochastic process, the expected recurrence time of a finite trajectory is just the reciprocal of the probability of encountering the trajectory in the first place. This suggests a very simple way to estimate the probability distribution of trajectories. Similarly, one can use the recurrence times to estimate the entropy rate.

Vague, Kac-inspired question: Clearly, if you had a function which gave you the expected recurrence time of an arbitrary finite trajectory, you'd have a function which also gave you all the finite-dimensional marginal distributions of the generating process. But how does one express the higher moments of the recurrence times (the variance, for starters), and might there be some way of trading off knowing more about the higher moments of shorter trajectories for knowing more about the first moment of longer trajectories? (Getting first moments of long trajectories from first moments of short ones would seem to imply some sort of [conditional] independence.)

See also: Ergodic Theory; Estimating Entropies and Informations; Information Theory; Friedrich Nietzsche; Stochastic Processes; Time Series

• Mark Kac, "On the Notion of Recurrence in Discrete Stochastic Processes", Bulletin of the American Mathematical Society 53 (1947): 1002--1010 [Reprinted in Kac's Probability, Number Theory, and Statistical Physics: Selected Papers, pp.
231--239]
• Benjamin Weiss, Single Orbit Dynamics [Includes a really excellent chapter on recurrence times, the asymptotic equipartition property of entropy (Shannon-MacMillan-Breiman theorem) and data compression]

To read:
• Miguel Abadi, Nicolas Vergne, "Sharp error terms for return time statistics under mixing conditions", arxiv:0812.1016
• Eduardo G. Altmann and Holger Kantz, "Recurrence time analysis, long-term correlations, and extreme events", PRE 71 (2005): 056106
• Roberto Artuso, Cesar Manchein, "Instability statistics and mixing rates", arxiv:0906.0791
• Frank Aurzada, Hanna Doering, Marcel Ortgiese, Michael Scheutzow, "Moments of recurrence times for Markov chains", arxiv:1104.1884
• Luis Baez-Duarte, "On the spatial mean of the Poincare cycle", math.PR/0505625 ["Let $X$ be a measure space and $T:X\to X$ a measurable transformation. For any measurable $E\subseteq X$ and $x\in E$, the possibly infinite return time is $n_E(x):=\inf\{n>0: T^n x\in E\}$. If $T$ is an ergodic transformation of the probability space $X$, and $\mu(E)>0$, then a theorem of M. Kac states that $\int_E n_E \,d\mu=1$. We generalize this to any invertible measure preserving transformation $T$ on a finite measure space $X$, by proving independently, and nearly trivially, that for any measurable $E\subseteq X$ one has $\int_E n_E \,d\mu=\mu(I_E)$, where $I_E$ is the smallest invariant set containing $E$. In particular this also provides a simpler proof of Poincar\'{e}'s recurrence theorem."]
• M. S. Baptista, E. J. Ngamga, Paulo R. F. Pinto, Margarida Brito, J. Kurths, "Kolmogorov-Sinai entropy from recurrence times", arxiv:0908.3401
• J.-R. Chazottes and F. Redig, "Testing the irreversibility of a Gibbsian process via hitting and return times", math-ph/0503071
• J.-R. Chazottes and E. Uglade, "Entropy estimation and fluctuations of Hitting and Recurrence Times for Gibbsian sources", math.DS/0401093
• Victor H. De La Pena and Ming Yang, "Bounding the first passage time on an average", Statistics and Probability Letters 67 (2004): 1--7
• Peter G. Doyle, Jean Steiner, "Commuting time geometry of ergodic Markov chains", arxiv:1107.2612
• Nikos Frantzikinakis, Randall McCutcheon, "Ergodic Theory: Recurrence", arxiv:0705.0033
• Stefano Galatolo and Dong Han Kim, "The dynamical Borel-Cantelli lemma and the waiting time problems", math.DS/0610213
• N. Hadyn, J. Luevano, G. Mantica and S. Vaienti, "Multifractal properties of return time statistics," nlin.CD/0108050
• Oliver Johnson, "A Central Limit Theorem for non-overlapping return times", math.PR/0506165
• Raphael Lefevere, Mauro Mariani, Lorenzo Zambotti, "Large deviations for renewal processes", arxiv:1009.2659
• Yuval Peres and Paul Shields, "Two new Markov order estimators", math.ST/0506080 [One estimator is based on recurrence times]
• Paulo R. F. Pinto, M. S. Baptista, Isabel S. Labouriau, "Density of first Poincaré returns, periodic orbits, and Kolmogorov-Sinai entropy", arxiv:0908.4575
• Amr Sadek and Nikolaos Limnios, "Nonparametric estimation of reliability and survival function for continuous-time finite Markov processes", Journal of Statistical Planning and Inference 133 (2005): 1--21
• Sidney Redner, A Guide to First-Passage Processes [Author's website, with full text of reviews and errata]
• B. Saussol, S. Troubetzkoy and S. Vaienti, "Recurrence, dimensions and Lyapunov exponents," math.DS/0109197
• Abraham J. Wyner, "More on recurrence and waiting times", Annals of Applied Probability 9 (1999): 780--796
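Kac's theorem from the opening paragraph (mean recurrence time of a state equals the reciprocal of its stationary probability) is easy to check by simulation. A hedged sketch for a biased i.i.d. binary source, the simplest stationary ergodic process; all names here are my own choosing:

```python
import random

def mean_recurrence_time(p_state, n_steps, state=1, seed=0):
    """Average gap between successive visits to `state` in an i.i.d.
    Bernoulli(p_state) sequence; Kac's theorem predicts 1 / p_state."""
    rng = random.Random(seed)
    last_visit, gaps = None, []
    for t in range(n_steps):
        x = 1 if rng.random() < p_state else 0
        if x == state:
            if last_visit is not None:
                gaps.append(t - last_visit)
            last_visit = t
    return sum(gaps) / len(gaps)

est = mean_recurrence_time(0.2, 200_000)
print(round(est, 2))  # close to 1 / 0.2 = 5.0
```

Inverting the same estimate (probability ≈ 1 / mean recurrence time) is exactly the "very simple way to estimate the probability distribution of trajectories" the note alludes to, applied here to single symbols rather than blocks.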
{"url":"http://vserver1.cscs.lsa.umich.edu/~crshalizi/notabene/recurrence-times.html","timestamp":"2014-04-24T10:34:58Z","content_type":null,"content_length":"8908","record_id":"<urn:uuid:2148c1f5-b025-4f93-bc0b-0d0b5798f939>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Equations and Mass Relationships in Biology / Ecological (Biological) Stoichiometry

Ecological (Biological) Stoichiometry

A new approach to studying the relationships in ecological or biological systems has been developed by R. W. Sterner, J. J. Elser, and others^[1]^[2]. It is called ecological or biological stoichiometry because it focuses on the chemical requirements of each trophic level, in addition to energy requirements. Proponents say that "Ecological stoichiometry recognizes that organisms themselves are outcomes of chemical reactions and thus their growth and reproduction can be constrained by supplies of key chemical elements [especially carbon (C), nitrogen (N) and phosphorus (P)]". For example, by writing a chemical equation for photosynthesis in oceanic algae, we can predict which nutrients (nitrogen as nitrate, NO[3]^-, phosphorus as hydrogen phosphate, HPO[4]^2-, etc.) are required for algae growth, and what products result from algal respiration.

106 CO[2](g) + 16 NO[3]^-(aq) + HPO[4]^2-(aq) + 122 H[2]O(l) + 18 H^+(aq) ↔ C[106]H[263]O[110]N[16]P[1](s) + 138 O[2](g)    (1)

The equation above does not include some minerals, like potassium ion (K^+), which may be both consumed and produced in the process, and the "formula" for algae (C[106]H[263]O[110]N[16]P[1]) does not represent a single molecule, but just the overall composition of the algae (one might call it an "average" molecular formula)^[4]. It illustrates the elemental homeostasis exhibited by plants and animals--their elemental composition is fixed, regardless of the composition of their environment, so if the environment does not provide the correct elements in the correct ratios, the plants or animals do not thrive. The double arrow indicates that the reaction may produce the products on the right from the reactants on the left during photosynthesis, or may proceed in the reverse direction during respiration.
The abbreviations in parentheses indicate the state of the species; (g) indicates a gas, (l) a liquid, (s) a solid, and species which are dissolved in aqueous (water) solutions are denoted (aq). This balanced chemical equation not only tells how many molecules of each kind are involved in a reaction, it also indicates the amount of each substance that is involved. Equation (1) says that 106 CO[2] molecules can react with 122 H[2]O molecules, 16 NO[3]^- ions, 1 HPO[4]^2- ion and 18 H^+ ions to give 1 C[106]H[263]O[110]N[16]P[1] "molecule" and 138 O[2] molecules. It also says that 106 mol of CO[2] molecules can react with 122 mol of H[2]O molecules, 16 mol of NO[3]^- ions, 1 mol of HPO[4]^2- ions and 18 mol of H^+ ions to give 1 mol of C[106]H[263]O[110]N[16]P[1] "molecules" and 138 mol of O[2] molecules. The balanced equation does more than this, though. It also tells us that 2 × 16 = 32 mol NO[3]^- will react with 2 × 1 = 2 mol HPO[4]^2-, and that ½ × 16 = 8 mol NO[3]^- requires only ½ × 1 = ½ mol HPO[4]^2-. In other words, the equation indicates that exactly 16 mol NO[3]^- must react for every 1 mol HPO[4]^2- consumed. For the purpose of calculating how much HPO[4]^2- is required to react with a certain amount of NO[3]^-, therefore, the significant information contained in Eq. (1) is the ratio

$\frac{\text{1 mol HPO}_{\text{4}}^{\text{2-}}}{\text{16 mol NO}_{\text{3}}^{\text{-}}}$

We shall call such a ratio derived from a balanced chemical equation a stoichiometric ratio and give it the symbol S. Thus, for Eq. (1),

$\text{S}\left( \frac{\text{HPO}_{\text{4}}^{\text{2-}}}{\text{NO}_{\text{3}}^{\text{-}}} \right)=\frac{\text{1 mol HPO}_{\text{4}}^{\text{2-}}}{\text{16 mol NO}_{\text{3}}^{\text{-}}}~~~~\text{(2)}$

Thus for algae to grow, 1 HPO[4]^2- ion must be present for every 16 NO[3]^- ions. Often, this amount of HPO[4]^2- is not present, so algae cannot grow.
But if a good source of HPO[4]^2- is added (for example, as modern detergent runoff), the stoichiometric ratio is established and an algae bloom results. The word stoichiometric comes from the Greek words stoicheion, “element,“ and metron, “measure.“ Hence the stoichiometric ratio measures one element (or compound) against another.

EXAMPLE 1 Derive all possible stoichiometric ratios from Eq. (1)

Solution Any ratio of amounts of substance given by coefficients in the equation may be used:

$\text{S}\left( \frac{\text{NO}_{\text{3}}^{\text{-}}}{\text{O}_{\text{2}}} \right)=\frac{\text{16 mol NO}_{\text{3}}^{\text{-}}}{\text{138 mol O}_{\text{2}}}$   $\text{S}\left( \frac{\text{O}_{\text{2}}}{\text{NO}_{\text{3}}^{\text{-}}} \right)=\frac{\text{138 mol O}_{\text{2}}}{\text{16 mol NO}_{\text{3}}^{\text{-}}}$

$\text{S}\left( \frac{\text{CO}_{\text{2}}}{\text{NO}_{\text{3}}^{\text{-}}} \right)=\frac{\text{106 mol CO}_{\text{2}}}{\text{16 mol NO}_{\text{3}}^{\text{-}}}$   $\text{S}\left( \frac{\text{O}_{\text{2}}}{\text{H}_{\text{2}}\text{O}} \right)=\frac{\text{138 mol O}_{\text{2}}}{\text{122 mol H}_{\text{2}}\text{O}}$

$\text{S}\left( \frac{\text{CO}_{\text{2}}}{\text{H}_{\text{2}}\text{O}} \right)=\frac{\text{106 mol CO}_{\text{2}}}{\text{122 mol H}_{\text{2}}\text{O}}$   $\text{S}\left( \frac{\text{NO}_{\text{3}}^{\text{-}}}{\text{H}_{\text{2}}\text{O}} \right)=\frac{\text{16 mol NO}_{\text{3}}^{\text{-}}}{\text{122 mol H}_{\text{2}}\text{O}}$

There are many more stoichiometric ratios, each of which relates any two of the reactants or products. Eq. (2) gives one of them. When any chemical reaction occurs, the amounts of substances consumed or produced are related by the appropriate stoichiometric ratios. Using Eq.
(1) as an example, this means that the ratio of the amount of CO[2] consumed to the amount of NO[3]^- consumed must be the stoichiometric ratio S(CO[2]/NO[3]^-):

$\frac{n_{\text{CO}_{\text{2}}\text{ consumed}}}{n_{\text{NO}_{\text{3}}^{\text{-}}\text{ consumed}}}=\text{S}\left( \frac{\text{CO}_{2}}{\text{NO}_{\text{3}}^{\text{-}}} \right)=\frac{\text{106 mol CO}_{2}}{\text{16 mol NO}_{\text{3}}^{\text{-}}}$

Similarly, the ratio of the amount of O[2] produced to the amount of CO[2] consumed must be

$\frac{n_{\text{O}_{\text{2}}\text{ produced}}}{n_{\text{CO}_{\text{2}}\text{ consumed}}}=\text{S}\left( \frac{\text{O}_{\text{2}}}{\text{CO}_{\text{2}}} \right)=\frac{\text{138 mol O}_{\text{2}}}{\text{106 mol CO}_{2}}$

In general we can say that

$\text{Stoichiometric ratio }\left( \frac{\text{X}}{\text{Y}} \right)=\frac{\text{amount of X consumed or produced}}{\text{amount of Y consumed or produced}}\text{ (3a)}$

or, in symbols,

$\text{S}\left( \frac{\text{X}}{\text{Y}} \right)=\frac{n_{\text{X consumed or produced}}}{n_{\text{Y consumed or produced}}}\text{ (3b)}$

Note that in the word Eq. (3a) and the symbolic Eq. (3b), X and Y may represent any reactant or any product in the balanced chemical equation from which the stoichiometric ratio was derived. No matter how much of each reactant we have, the amounts of reactants consumed and the amounts of products produced will be in appropriate stoichiometric ratios.

EXAMPLE 2 Find the amount of oxygen produced when 3.68 mol CO[2] is consumed according to Eq. (1).
Solution The amount of oxygen produced must be in the stoichiometric ratio S(O[2]/CO[2]) to the amount of carbon dioxide consumed:

$\text{S}\left( \frac{\text{O}_{\text{2}}}{\text{CO}_{\text{2}}} \right)=\frac{n_{\text{O}_{\text{2}}\text{ produced}}}{n_{\text{CO}_{\text{2}}\text{ consumed}}}$

Multiplying both sides by $n_{\text{CO}_{\text{2}}\text{ consumed}}$, we have

$n_{\text{O}_{\text{2}}\text{ produced}}=n_{\text{CO}_{\text{2}}\text{ consumed}}\times \text{S}\left( \frac{\text{O}_{\text{2}}}{\text{CO}_{\text{2}}} \right)=\text{3}\text{.68 mol CO}_{\text{2}}\times \frac{\text{138 mol O}_{\text{2}}}{\text{106 mol CO}_{\text{2}}}=\text{4}\text{.79 mol O}_{\text{2}}$

This is a typical illustration of the use of a stoichiometric ratio as a conversion factor. Example 2 is analogous to Examples 1 and 2 from Conversion Factors and Functions, where density was employed as a conversion factor between mass and volume. Example 2 is also analogous to Examples 2.4 and 2.6, in which the Avogadro constant and molar mass were used as conversion factors. As in these previous cases, there is no need to memorize or do algebraic manipulations with Eq. (3) when using the stoichiometric ratio. Simply remember that the coefficients in a balanced chemical equation give stoichiometric ratios, and that the proper choice results in cancellation of units. In road-map form

$\text{amount of X consumed or produced}\overset{\begin{smallmatrix} \text{stoichiometric} \\ \text{ ratio X/Y} \end{smallmatrix}}{\longleftrightarrow}\text{amount of Y consumed or produced}$

or symbolically,

$n_{\text{X consumed or produced}}\text{ }\overset{S\text{(X/Y)}}{\longleftrightarrow}\text{ }n_{\text{Y consumed or produced}}$

When using stoichiometric ratios, be sure you always indicate moles of what. You can only cancel moles of the same substance.
In other words, 1 mol CO[2] cancels 1 mol CO[2] but does not cancel 1 mol O[2]. The next example shows that stoichiometric ratios are also useful in problems involving the mass of a reactant or product.

EXAMPLE 3 Equation (1) may be used to calculate the ratio of nitrate to phosphate in a fertilizer that is optimal for algal growth. Phosphate fertilizers may be in liquid form, containing phosphoric acid^[5], while nitrate fertilizers may contain solid KNO[3]. If 100 g of KNO[3] is applied to a small pond, what is the stoichiometric mass of H[3]PO[4] required for optimal algae growth?

Solution The problem asks that we calculate the mass of H[3]PO[4] from a mass of KNO[3]. As we learned in Example 2 of The Molar Mass, the molar mass can be used to convert from the mass of KNO[3] to the amount of KNO[3]. The amount of NO[3]^- will be the same (there is 1 mol of NO[3]^- in 1 mol of KNO[3]). We can then use a stoichiometric ratio to calculate the amount of HPO[4]^2- that would stoichiometrically react with the given amount of NO[3]^-. This is the same as the amount of H[3]PO[4], and we can use the molar mass to convert the amount to the mass.
This requires the stoichiometric ratio

$\text{S}\left( \frac{\text{HPO}_{\text{4}}^{\text{2-}}}{\text{NO}_{\text{3}}^{\text{-}}} \right)=\frac{\text{1 mol HPO}_{\text{4}}^{\text{2-}}}{\text{16 mol NO}_{\text{3}}^{\text{-}}}$

The amount of KNO[3] in 100 g is

n (mol) = m (g) / M (g/mol) = 100 g / 101.1 g/mol = 0.989 mol

The amount of H[3]PO[4] required is then

$n_{\text{HPO}_{\text{4}}^{\text{2-}}}=n_{\text{NO}_{\text{3}}^{\text{-}}\text{ consumed}}\times \text{conversion factor}=\text{0}\text{.989 mol NO}_{\text{3}}^{\text{-}}\times \frac{\text{1 mol HPO}_{\text{4}}^{\text{2-}}}{\text{16 mol NO}_{\text{3}}^{\text{-}}}=\text{0}\text{.0618 mol HPO}_{\text{4}}^{\text{2-}}$

The molar mass of H[3]PO[4] (98.0 g/mol) is used to calculate the mass:

m (g) = n (mol) x M (g/mol) = 0.0618 mol x 98.0 g/mol = 6.06 g.

Very little phosphate is required compared to nitrate, so the phosphate in discharged washwater may well cause the eutrophication shown in the figure above when (as is often the case) plenty of nitrate is available. With practice this kind of problem can be solved in one step by concentrating on the units. The appropriate stoichiometric ratio will convert moles of NO[3]^- to moles of HPO[4]^2-, which equals moles of H[3]PO[4], and the molar mass will convert moles of H[3]PO[4] to grams. A schematic road map for the calculation can be written as

$m_{\text{KNO}_{\text{3}}}\text{ }\xrightarrow{M_{\text{KNO}_{\text{3}}}}\text{ }n_{\text{KNO}_{\text{3}}}\text{ }\xrightarrow{S\text{(HPO}_{\text{4}}^{\text{2-}}\text{/NO}_{\text{3}}^{\text{-}}\text{)}}\text{ }n_{\text{HPO}_{\text{4}}^{\text{2-}}}\text{ }\xrightarrow{M_{\text{H}_{\text{3}}\text{PO}_{\text{4}}}}\text{ }m_{\text{H}_{\text{3}}\text{PO}_{\text{4}}}$
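The road map above translates directly into a few lines of Python. This is a sketch of Example 3 only; the variable names are illustrative, and the molar masses are the rounded values used in the example.

```python
# mass KNO3 -> mol NO3- -> mol HPO4^2- (ratio 1/16 from Eq. 1) -> mass H3PO4
M_KNO3 = 101.1        # g/mol, molar mass of KNO3
M_H3PO4 = 98.0        # g/mol, molar mass of H3PO4
S_P_per_N = 1 / 16    # S(HPO4^2-/NO3-) from the balanced equation

mass_KNO3 = 100.0               # g applied to the pond
n_NO3 = mass_KNO3 / M_KNO3      # mol NO3- (1 mol NO3- per mol KNO3)
n_HPO4 = n_NO3 * S_P_per_N      # mol HPO4^2-, which equals mol H3PO4
mass_H3PO4 = n_HPO4 * M_H3PO4   # g of H3PO4 required

print(round(mass_H3PO4, 2))  # 6.06
```

Each step is one conversion factor, so the units cancel exactly as in the schematic road map.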
NAG Library Routine Document
E04USF/E04USA

Note: this routine uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default settings for all of the optional parameters, you need only read Sections 1 to 9 of this document. If, however, you wish to reset some or all of the settings please refer to Section 10 for a detailed description of the algorithm, to Section 11 for a detailed description of the specification of the optional parameters and to Section 12 for a detailed description of the monitoring information produced by the routine.

1 Purpose

E04USF/E04USA is designed to minimize an arbitrary smooth sum of squares function subject to constraints (which may include simple bounds on the variables, linear constraints and smooth nonlinear constraints) using a sequential quadratic programming (SQP) method. As many first derivatives as possible should be supplied by you; any unspecified derivatives are approximated by finite differences. See the description of the optional parameter Derivative Level, in Section 11.1. It is not intended for large sparse problems.

E04USF/E04USA may also be used for unconstrained, bound-constrained and linearly constrained optimization.

E04USA is a version of E04USF that has additional parameters in order to make it safe for use in multithreaded applications (see Section 5). The initialization routine E04WBF must have been called before calling E04USA.
2 Specification

2.1 Specification for E04USF

SUBROUTINE E04USF ( M, N, NCLIN, NCNLN, LDA, LDCJ, LDFJ, LDR, A, BL, BU, Y, CONFUN, OBJFUN, ITER, ISTATE, C, CJAC, F, FJAC, CLAMDA, OBJF, R, X, IWORK, LIWORK, WORK, LWORK, IUSER, RUSER, IFAIL)
INTEGER M, N, NCLIN, NCNLN, LDA, LDCJ, LDFJ, LDR, ITER, ISTATE(N+NCLIN+NCNLN), IWORK(LIWORK), LIWORK, LWORK, IUSER(*), IFAIL
REAL (KIND=nag_wp) A(LDA,*), BL(N+NCLIN+NCNLN), BU(N+NCLIN+NCNLN), Y(M), C(max(1,NCNLN)), CJAC(LDCJ,*), F(M), FJAC(LDFJ,N), CLAMDA(N+NCLIN+NCNLN), OBJF, R(LDR,N), X(N), WORK(LWORK), RUSER(*)
EXTERNAL CONFUN, OBJFUN

2.2 Specification for E04USA

SUBROUTINE E04USA ( M, N, NCLIN, NCNLN, LDA, LDCJ, LDFJ, LDR, A, BL, BU, Y, CONFUN, OBJFUN, ITER, ISTATE, C, CJAC, F, FJAC, CLAMDA, OBJF, R, X, IWORK, LIWORK, WORK, LWORK, IUSER, RUSER, LWSAV, IWSAV, RWSAV, IFAIL)
INTEGER M, N, NCLIN, NCNLN, LDA, LDCJ, LDFJ, LDR, ITER, ISTATE(N+NCLIN+NCNLN), IWORK(LIWORK), LIWORK, LWORK, IUSER(*), IWSAV(610), IFAIL
REAL (KIND=nag_wp) A(LDA,*), BL(N+NCLIN+NCNLN), BU(N+NCLIN+NCNLN), Y(M), C(max(1,NCNLN)), CJAC(LDCJ,*), F(M), FJAC(LDFJ,N), CLAMDA(N+NCLIN+NCNLN), OBJF, R(LDR,N), X(N), WORK(LWORK), RUSER(*), RWSAV(475)
LOGICAL LWSAV(120)
EXTERNAL CONFUN, OBJFUN

Before calling E04USA, or either of the option setting routines, E04WBF must be called.
The specification for E04WBF is:

SUBROUTINE E04WBF ( RNAME, CWSAV, LCWSAV, LWSAV, LLWSAV, IWSAV, LIWSAV, RWSAV, LRWSAV, IFAIL)
INTEGER LCWSAV, LLWSAV, IWSAV(LIWSAV), LIWSAV, LRWSAV, IFAIL
REAL (KIND=nag_wp) RWSAV(LRWSAV)
LOGICAL LWSAV(LLWSAV)
CHARACTER(*) RNAME
CHARACTER(80) CWSAV(LCWSAV)

E04WBF should be called with ${\mathbf{RNAME}}=\text{'E04USA'}$. LCWSAV, LLWSAV, LIWSAV and LRWSAV, the declared lengths of CWSAV, LWSAV, IWSAV and RWSAV respectively, must satisfy:
• ${\mathbf{LCWSAV}}\ge 1$
• ${\mathbf{LLWSAV}}\ge 120$
• ${\mathbf{LIWSAV}}\ge 610$
• ${\mathbf{LRWSAV}}\ge 475$
The contents of the arrays CWSAV, LWSAV, IWSAV and RWSAV must not be altered between the call to E04WBF and the call to E04USA.

3 Description

E04USF/E04USA is designed to solve the nonlinear least squares programming problem – the minimization of a smooth nonlinear sum of squares function subject to a set of constraints on the variables. The problem is assumed to be stated in the following form:
$\underset{x\in {R}^{n}}{\mathrm{minimize}}\phantom{\rule{0.5em}{0ex}}F\left(x\right)=\frac{1}{2}\sum _{i=1}^{m}{\left({y}_{i}-{f}_{i}\left(x\right)\right)}^{2}\phantom{\rule{1em}{0ex}}\text{subject to}\phantom{\rule{1em}{0ex}}l\le \left(\begin{array}{c}x\\ {A}_{L}x\\ c\left(x\right)\end{array}\right)\le u,$ (1)
where $F\left(x\right)$ (the objective function) is a nonlinear function which can be represented as the sum of squares of $m$ subfunctions $\left({y}_{1}-{f}_{1}\left(x\right)\right),\left({y}_{2}-{f}_{2}\left(x\right)\right),\dots ,\left({y}_{m}-{f}_{m}\left(x\right)\right)$, the ${y}_{i}$ are constant, ${A}_{L}$ is an ${n}_{L}$ by $n$ constant matrix, and $c\left(x\right)$ is an ${n}_{N}$-element vector of nonlinear constraint functions. (The matrix ${A}_{L}$ and the vector $c\left(x\right)$ may be empty.) The objective function and the constraint functions are assumed to be smooth, i.e., at least twice-continuously differentiable. (The method of E04USF/E04USA will usually solve (1) if any isolated discontinuities are away from the solution.) Note that although the bounds on the variables could be included in the definition of the linear constraints, we prefer to distinguish between them for reasons of computational efficiency. For the same reason, the linear constraints should not be included in the definition of the nonlinear constraints. Upper and lower bounds are specified for all the variables and for all the constraints. An equality constraint can be specified by setting ${l}_{j}={u}_{j}$.
If certain bounds are not present, the associated elements of $l$ or $u$ can be set to special values that will be treated as $-\infty$ or $+\infty$. (See the description of the optional parameter Infinite Bound Size.)

You must supply an initial estimate of the solution to (1), together with subroutines that define $f\left(x\right)={\left({f}_{1}\left(x\right),{f}_{2}\left(x\right),\dots ,{f}_{m}\left(x\right)\right)}^{\mathrm{T}}$ and $c\left(x\right)$ and as many first partial derivatives as possible; unspecified derivatives are approximated by finite differences. The subfunctions are defined by the array Y and OBJFUN, and the nonlinear constraints are defined by CONFUN. On every call, these subroutines must return appropriate values of the subfunctions and constraints. You should also provide the available partial derivatives. Any unspecified derivatives are approximated by finite differences; see Section 11.1 for a discussion of the optional parameter Derivative Level. Note that if there are any nonlinear constraints, then the first call to CONFUN will precede the first call to OBJFUN.

For maximum reliability, it is preferable for you to provide all partial derivatives (see Chapter 8 of Gill et al. (1981) for a detailed discussion). If all gradients cannot be provided, it is similarly advisable to provide as many as possible. While developing OBJFUN and CONFUN, the optional parameter Verify should be used to check the calculation of any known gradients.

4 References

Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press
Hock W and Schittkowski K (1981) Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems 187 Springer–Verlag

5 Parameters

1: M – INTEGER Input
On entry: $m$, the number of subfunctions associated with $F\left(x\right)$.
Constraint: ${\mathbf{M}}>0$.
2: N – INTEGER Input
On entry: $n$, the number of variables.
Constraint: ${\mathbf{N}}>0$.
3: NCLIN – INTEGER Input
On entry: ${n}_{L}$, the number of general linear constraints.
Constraint: ${\mathbf{NCLIN}}\ge 0$.
4: NCNLN – INTEGER Input
On entry: ${n}_{N}$, the number of nonlinear constraints.
Constraint: ${\mathbf{NCNLN}}\ge 0$.
5: LDA – INTEGER Input
On entry: the first dimension of the array A as declared in the (sub)program from which E04USF/E04USA is called.
Constraint: ${\mathbf{LDA}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{NCLIN}}\right)$.
6: LDCJ – INTEGER Input
On entry: the first dimension of the array CJAC as declared in the (sub)program from which E04USF/E04USA is called.
Constraint: ${\mathbf{LDCJ}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{NCNLN}}\right)$.
7: LDFJ – INTEGER Input
On entry: the first dimension of the array FJAC as declared in the (sub)program from which E04USF/E04USA is called.
Constraint: ${\mathbf{LDFJ}}\ge {\mathbf{M}}$.
8: LDR – INTEGER Input
On entry: the first dimension of the array R as declared in the (sub)program from which E04USF/E04USA is called.
Constraint: ${\mathbf{LDR}}\ge {\mathbf{N}}$.
9: A(LDA,$*$) – REAL (KIND=nag_wp) array Input
Note: the second dimension of the array A must be at least ${\mathbf{N}}$ if ${\mathbf{NCLIN}}>0$, and at least $1$ otherwise.
On entry: the $\mathit{i}$th row of A contains the $\mathit{i}$th row of the matrix ${A}_{L}$ of general linear constraints in (1). That is, the $\mathit{i}$th row contains the coefficients of the $\mathit{i}$th general linear constraint, for $\mathit{i}=1,2,\dots ,{\mathbf{NCLIN}}$. If ${\mathbf{NCLIN}}=0$, the array A is not referenced.
10: BL(${\mathbf{N}}+{\mathbf{NCLIN}}+{\mathbf{NCNLN}}$) – REAL (KIND=nag_wp) array Input
11: BU(${\mathbf{N}}+{\mathbf{NCLIN}}+{\mathbf{NCNLN}}$) – REAL (KIND=nag_wp) array Input
On entry: BL must contain the lower bounds and BU the upper bounds, for all the constraints in the following order. The first ${\mathbf{N}}$ elements of each array must contain the bounds on the variables, the next ${\mathbf{NCLIN}}$ elements the bounds for the general linear constraints (if any) and the next ${\mathbf{NCNLN}}$ elements the bounds for the general nonlinear constraints (if any).
To specify a nonexistent lower bound (i.e., ${l}_{j}=-\infty$), set ${\mathbf{BL}}\left(j\right)\le -\mathit{bigbnd}$, and to specify a nonexistent upper bound (i.e., ${u}_{j}=+\infty$), set ${\mathbf{BU}}\left(j\right)\ge \mathit{bigbnd}$; the default value of $\mathit{bigbnd}$ is ${10}^{20}$, but this may be changed by the optional parameter Infinite Bound Size. To specify the $j$th constraint as an equality, set ${\mathbf{BL}}\left(j\right)={\mathbf{BU}}\left(j\right)=\beta$, say, where $\left|\beta \right|<\mathit{bigbnd}$.
Constraints:
□ ${\mathbf{BL}}\left(\mathit{j}\right)\le {\mathbf{BU}}\left(\mathit{j}\right)$, for $\mathit{j}=1,2,\dots ,{\mathbf{N}}+{\mathbf{NCLIN}}+{\mathbf{NCNLN}}$;
□ if ${\mathbf{BL}}\left(j\right)={\mathbf{BU}}\left(j\right)=\beta$, $\left|\beta \right|<\mathit{bigbnd}$.
12: Y(M) – REAL (KIND=nag_wp) array Input
On entry: the coefficients of the constant vector $y$ of the objective function.
13: CONFUN – SUBROUTINE, supplied by the NAG Library or the user. External Procedure
CONFUN must calculate the vector $c\left(x\right)$ of nonlinear constraint functions and (optionally) its Jacobian ($\text{}=\frac{\partial c}{\partial x}$) for a specified $n$-element vector $x$. If there are no nonlinear constraints (i.e., ${\mathbf{NCNLN}}=0$), CONFUN will never be called by E04USF/E04USA and CONFUN may be the dummy routine E04UDM. (E04UDM is included in the NAG Library.) If there are nonlinear constraints, the first call to CONFUN will occur before the first call to OBJFUN.
The specification of CONFUN is:
SUBROUTINE CONFUN ( MODE, NCNLN, N, LDCJ, NEEDC, X, C, CJAC, NSTATE, IUSER, RUSER)
INTEGER MODE, NCNLN, N, LDCJ, NEEDC(NCNLN), NSTATE, IUSER(*)
REAL (KIND=nag_wp) X(N), C(NCNLN), CJAC(LDCJ,N), RUSER(*)
1: MODE – INTEGER Input/Output
On entry: indicates which values must be assigned during each call of CONFUN. Only the following values need be assigned, for each value of $i$ such that ${\mathbf{NEEDC}}\left(i\right)>0$:
${\mathbf{MODE}}=0$
${\mathbf{C}}\left(i\right)$.
${\mathbf{MODE}}=1$
All available elements in the $i$th row of CJAC.
${\mathbf{MODE}}=2$
${\mathbf{C}}\left(i\right)$ and all available elements in the $i$th row of CJAC.
On exit: MODE may be set to a negative value if you wish to terminate the solution to the current problem, and in this case E04USF/E04USA will terminate with IFAIL set to MODE.
2: NCNLN – INTEGER Input
On entry: ${n}_{N}$, the number of nonlinear constraints.
3: N – INTEGER Input
On entry: $n$, the number of variables.
4: LDCJ – INTEGER Input
On entry: the first dimension of the array CJAC as declared in the (sub)program from which E04USF/E04USA is called.
5: NEEDC(NCNLN) – INTEGER array Input
On entry: the indices of the elements of C and/or CJAC that must be evaluated by CONFUN. If ${\mathbf{NEEDC}}\left(i\right)>0$, then the $i$th element of C and/or the available elements of the $i$th row of CJAC (see parameter MODE) must be evaluated at $x$.
6: X(N) – REAL (KIND=nag_wp) array Input
On entry: $x$, the vector of variables at which the constraint functions and/or all available elements of the constraint Jacobian are to be evaluated.
7: C(NCNLN) – REAL (KIND=nag_wp) array Output
On exit: if ${\mathbf{NEEDC}}\left(i\right)>0$ and ${\mathbf{MODE}}=0$ or $2$, ${\mathbf{C}}\left(i\right)$ must contain the value of the $i$th constraint at $x$. The remaining elements of C, corresponding to the non-positive elements of NEEDC, are ignored.
8: CJAC(LDCJ,N) – REAL (KIND=nag_wp) array Input/Output
On entry: the elements of CJAC are set to a special value.
On exit: if ${\mathbf{NEEDC}}\left(i\right)>0$ and ${\mathbf{MODE}}=1$ or $2$, the $i$th row of CJAC must contain the available elements of the vector $\nabla {c}_{i}$ given by
$\nabla {c}_{i}={\left(\frac{\partial {c}_{i}}{\partial {x}_{1}},\frac{\partial {c}_{i}}{\partial {x}_{2}},\dots ,\frac{\partial {c}_{i}}{\partial {x}_{n}}\right)}^{\mathrm{T}},$
where $\frac{\partial {c}_{i}}{\partial {x}_{j}}$ is the partial derivative of the $i$th constraint with respect to the $j$th variable, evaluated at the point $x$. See also the parameter NSTATE. The remaining rows of CJAC, corresponding to non-positive elements of NEEDC, are ignored.
If all elements of the constraint Jacobian are known (i.e., ${\mathbf{Derivative Level}}=2$ or $3$), any constant elements may be assigned to CJAC one time only at the start of the optimization. An element of CJAC that is not subsequently assigned in CONFUN will retain its initial value throughout. Constant elements may be loaded into CJAC either before the call to E04USF/E04USA or during the first call to CONFUN (signalled by the value ${\mathbf{NSTATE}}=1$).
The ability to preload constants is useful when many Jacobian elements are identically zero, in which case CJAC may be initialized to zero and nonzero elements may be reset by CONFUN.
Note that constant nonzero elements do affect the values of the constraints. Thus, if ${\mathbf{CJAC}}\left(i,j\right)$ is set to a constant value, it need not be reset in subsequent calls to CONFUN, but the value must nonetheless be added to ${\mathbf{C}}\left(i\right)$. For example, if ${\mathbf{CJAC}}\left(1,1\right)=2$, then the term $2{x}_{1}$ must be included in the definition of ${\mathbf{C}}\left(1\right)$.
It must be emphasized that, if ${\mathbf{Derivative Level}}=0$ or $1$, unassigned elements of CJAC are not treated as constant; they are estimated by finite differences, at nontrivial expense. If you do not supply a value for the optional parameter Difference Interval, an interval for each element of $x$ is computed automatically at the start of the optimization. The automatic procedure can usually identify constant elements of CJAC, which are then computed once only by finite differences.
9: NSTATE – INTEGER Input
On entry: if ${\mathbf{NSTATE}}=1$ then E04USF/E04USA is calling CONFUN for the first time. This parameter setting allows you to save computation time if certain data must be read or calculated only once.
10: IUSER($*$) – INTEGER array User Workspace
11: RUSER($*$) – REAL (KIND=nag_wp) array User Workspace
CONFUN is called with the parameters IUSER and RUSER as supplied to E04USF/E04USA. You are free to use the arrays IUSER and RUSER to supply information to CONFUN as an alternative to using COMMON global variables.
CONFUN must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which E04USF/E04USA is called. Parameters denoted as Input must not be changed by this procedure.
Note: CONFUN should be tested separately before being used in conjunction with E04USF/E04USA.
See also the description of the optional parameter Verify.
14: OBJFUN – SUBROUTINE, supplied by the user. External Procedure
OBJFUN must calculate either the $i$th element of the vector $f\left(x\right)={\left({f}_{1}\left(x\right),{f}_{2}\left(x\right),\dots ,{f}_{m}\left(x\right)\right)}^{\mathrm{T}}$ or all elements of $f\left(x\right)$ and (optionally) its Jacobian ($\text{}=\frac{\partial f}{\partial x}$) for a specified $n$-element vector $x$.
The specification of OBJFUN is:
SUBROUTINE OBJFUN ( MODE, M, N, LDFJ, NEEDFI, X, F, FJAC, NSTATE, IUSER, RUSER)
INTEGER MODE, M, N, LDFJ, NEEDFI, NSTATE, IUSER(*)
REAL (KIND=nag_wp) X(N), F(M), FJAC(LDFJ,N), RUSER(*)
1: MODE – INTEGER Input/Output
On entry: indicates which values must be assigned during each call of OBJFUN. Only the following values need be assigned:
${\mathbf{MODE}}=0$ and ${\mathbf{NEEDFI}}=i$, where $i>0$
${\mathbf{F}}\left(i\right)$.
${\mathbf{MODE}}=0$ and ${\mathbf{NEEDFI}}<0$
F.
${\mathbf{MODE}}=1$ and ${\mathbf{NEEDFI}}<0$
All available elements of FJAC.
${\mathbf{MODE}}=2$ and ${\mathbf{NEEDFI}}<0$
F and all available elements of FJAC.
On exit: MODE may be set to a negative value if you wish to terminate the solution to the current problem, and in this case E04USF/E04USA will terminate with IFAIL set to MODE.
2: M – INTEGER Input
On entry: $m$, the number of subfunctions.
3: N – INTEGER Input
On entry: $n$, the number of variables.
4: LDFJ – INTEGER Input
On entry: the first dimension of the array FJAC as declared in the (sub)program from which E04USF/E04USA is called.
5: NEEDFI – INTEGER Input
On entry: if ${\mathbf{NEEDFI}}=i>0$, only the $i$th element of $f\left(x\right)$ needs to be evaluated at $x$; the remaining elements need not be set. This can result in significant computational savings when $m\gg n$.
6: X(N) – REAL (KIND=nag_wp) array Input
On entry: $x$, the vector of variables at which $f\left(x\right)$ and/or all available elements of its Jacobian are to be evaluated.
7: F(M) – REAL (KIND=nag_wp) array Output
On exit: if ${\mathbf{MODE}}=0$ and ${\mathbf{NEEDFI}}=i>0$, ${\mathbf{F}}\left(i\right)$ must contain the value of ${f}_{i}$ at $x$.
If ${\mathbf{MODE}}=0$ or $2$ and ${\mathbf{NEEDFI}}<0$, ${\mathbf{F}}\left(\mathit{i}\right)$ must contain the value of ${f}_{\mathit{i}}$ at $x$, for $\mathit{i}=1,2,\dots ,m$.
8: FJAC(LDFJ,N) – REAL (KIND=nag_wp) array Input/Output
On entry: the elements of FJAC are set to a special value.
On exit: if ${\mathbf{MODE}}=1$ or $2$ and ${\mathbf{NEEDFI}}<0$, the $i$th row of FJAC must contain the available elements of the vector $\nabla {f}_{i}$ given by
$\nabla {f}_{i}={\left(\frac{\partial {f}_{i}}{\partial {x}_{1}},\frac{\partial {f}_{i}}{\partial {x}_{2}},\dots ,\frac{\partial {f}_{i}}{\partial {x}_{n}}\right)}^{\mathrm{T}},$
evaluated at the point $x$. See also the parameter NSTATE.
9: NSTATE – INTEGER Input
On entry: if ${\mathbf{NSTATE}}=1$ then E04USF/E04USA is calling OBJFUN for the first time. This parameter setting allows you to save computation time if certain data must be read or calculated only once.
10: IUSER($*$) – INTEGER array User Workspace
11: RUSER($*$) – REAL (KIND=nag_wp) array User Workspace
OBJFUN is called with the parameters IUSER and RUSER as supplied to E04USF/E04USA. You are free to use the arrays IUSER and RUSER to supply information to OBJFUN as an alternative to using COMMON global variables.
OBJFUN must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which E04USF/E04USA is called. Parameters denoted as Input must not be changed by this procedure.
Note: OBJFUN should be tested separately before being used in conjunction with E04USF/E04USA.
See also the description of the optional parameter Verify.
15: ITER – INTEGER Output
On exit: the number of major iterations performed.
16: ISTATE(${\mathbf{N}}+{\mathbf{NCLIN}}+{\mathbf{NCNLN}}$) – INTEGER array Input/Output
On entry: need not be set if the (default) optional parameter Cold Start is used. If the optional parameter Warm Start has been chosen, the elements of ISTATE corresponding to the bounds and linear constraints define the initial working set for the procedure that finds a feasible point for the linear constraints and bounds. The active set at the conclusion of this procedure and the elements of ISTATE corresponding to nonlinear constraints then define the initial working set for the first QP subproblem.
More precisely, the first ${\mathbf{N}}$ elements of ISTATE refer to the upper and lower bounds on the variables, the next ${\mathbf{NCLIN}}$ elements refer to the upper and lower bounds on ${A}_{L}x$, and the next ${\mathbf{NCNLN}}$ elements refer to the upper and lower bounds on $c\left(x\right)$. Possible values for ${\mathbf{ISTATE}}\left(j\right)$ are as follows:
${\mathbf{ISTATE}}\left(j\right)$ Meaning
0 The corresponding constraint is not in the initial QP working set.
1 This inequality constraint should be in the working set at its lower bound.
2 This inequality constraint should be in the working set at its upper bound.
3 This equality constraint should be in the initial working set. This value must not be specified unless ${\mathbf{BL}}\left(j\right)={\mathbf{BU}}\left(j\right)$.
The values $-2$, $-1$ and $4$ are also acceptable but will be modified by the routine. If E04USF/E04USA has been called previously with the same values of N, NCLIN and NCNLN, ISTATE already contains satisfactory information. (See also the description of the optional parameter Warm Start.) The routine also adjusts (if necessary) the values supplied in X to be consistent with ISTATE.
Constraint: $-2\le {\mathbf{ISTATE}}\left(\mathit{j}\right)\le 4$, for $\mathit{j}=1,2,\dots ,{\mathbf{N}}+{\mathbf{NCLIN}}+{\mathbf{NCNLN}}$.
On exit: the status of the constraints in the QP working set at the point returned in X. The significance of each possible value of ${\mathbf{ISTATE}}\left(j\right)$ is as follows:
${\mathbf{ISTATE}}\left(j\right)$ Meaning
$-2$ This constraint violates its lower bound by more than the appropriate feasibility tolerance (see the optional parameters Linear Feasibility Tolerance and Nonlinear Feasibility Tolerance). This value can occur only when no feasible point can be found for a QP subproblem.
$-1$ This constraint violates its upper bound by more than the appropriate feasibility tolerance (see the optional parameters Linear Feasibility Tolerance and Nonlinear Feasibility Tolerance). This value can occur only when no feasible point can be found for a QP subproblem.
$\phantom{-}0$ The constraint is satisfied to within the feasibility tolerance, but is not in the QP working set.
$\phantom{-}1$ This inequality constraint is included in the QP working set at its lower bound.
$\phantom{-}2$ This inequality constraint is included in the QP working set at its upper bound.
$\phantom{-}3$ This constraint is included in the QP working set as an equality. This value of ISTATE can occur only when ${\mathbf{BL}}\left(j\right)={\mathbf{BU}}\left(j\right)$.
17: C($\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{NCNLN}}\right)$) – REAL (KIND=nag_wp) array Output
On exit: if ${\mathbf{NCNLN}}>0$, ${\mathbf{C}}\left(\mathit{i}\right)$ contains the value of the $\mathit{i}$th nonlinear constraint function at the final iterate, for $\mathit{i}=1,2,\dots ,{\mathbf{NCNLN}}$. If ${\mathbf{NCNLN}}=0$, the array C is not referenced.
18: CJAC(LDCJ,$*$) – REAL (KIND=nag_wp) array Input/Output
Note: the second dimension of the array CJAC must be at least ${\mathbf{N}}$ if ${\mathbf{NCNLN}}>0$, and at least $1$ otherwise.
On entry: in general, CJAC need not be initialized before the call to E04USF/E04USA. However, if ${\mathbf{Derivative Level}}=3$, you may optionally set the constant elements of CJAC (see parameter NSTATE in the description of CONFUN). Such constant elements need not be re-assigned on subsequent calls to CONFUN.
On exit: if ${\mathbf{NCNLN}}>0$, CJAC contains the Jacobian matrix of the nonlinear constraint functions at the final iterate, i.e., ${\mathbf{CJAC}}\left(\mathit{i},\mathit{j}\right)$ contains the partial derivative of the $\mathit{i}$th constraint function with respect to the $\mathit{j}$th variable, for $\mathit{i}=1,2,\dots ,{\mathbf{NCNLN}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{N}}$. (See the discussion of parameter CJAC in the description of CONFUN.) If ${\mathbf{NCNLN}}=0$, the array CJAC is not referenced.
19: F(M) – REAL (KIND=nag_wp) array Output
On exit: ${\mathbf{F}}\left(\mathit{i}\right)$ contains the value of the $\mathit{i}$th function ${f}_{\mathit{i}}$ at the final iterate, for $\mathit{i}=1,2,\dots ,{\mathbf{M}}$.
20: FJAC(LDFJ,N) – REAL (KIND=nag_wp) array Input/Output
On entry: in general, FJAC need not be initialized before the call to E04USF/E04USA. However, if ${\mathbf{Derivative Level}}=3$, you may optionally set the constant elements of FJAC (see parameter NSTATE in the description of OBJFUN).
Such constant elements need not be re-assigned on subsequent calls to OBJFUN.
On exit: the Jacobian matrix of the functions ${f}_{1},{f}_{2},\dots ,{f}_{m}$ at the final iterate, i.e., ${\mathbf{FJAC}}\left(\mathit{i},\mathit{j}\right)$ contains the partial derivative of the $\mathit{i}$th function with respect to the $\mathit{j}$th variable, for $\mathit{i}=1,2,\dots ,{\mathbf{M}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{N}}$. (See also the discussion of parameter FJAC in the description of OBJFUN.)
21: CLAMDA(${\mathbf{N}}+{\mathbf{NCLIN}}+{\mathbf{NCNLN}}$) – REAL (KIND=nag_wp) array Input/Output
On entry: need not be set if the (default) optional parameter Cold Start is used. If the optional parameter Warm Start has been chosen, ${\mathbf{CLAMDA}}\left(\mathit{j}\right)$ must contain a multiplier estimate for each nonlinear constraint with a sign that matches the status of the constraint specified by the ISTATE array, for $\mathit{j}={\mathbf{N}}+{\mathbf{NCLIN}}+1,\dots ,{\mathbf{N}}+{\mathbf{NCLIN}}+{\mathbf{NCNLN}}$. The remaining elements need not be set. Note that if the $j$th constraint is defined as ‘inactive’ by the initial value of the ISTATE array (i.e., ${\mathbf{ISTATE}}\left(j\right)=0$), ${\mathbf{CLAMDA}}\left(j\right)$ should be zero; if the $j$th constraint is an inequality active at its lower bound (i.e., ${\mathbf{ISTATE}}\left(j\right)=1$), ${\mathbf{CLAMDA}}\left(j\right)$ should be non-negative; if the $j$th constraint is an inequality active at its upper bound (i.e., ${\mathbf{ISTATE}}\left(j\right)=2$), ${\mathbf{CLAMDA}}\left(j\right)$ should be non-positive. If necessary, the routine will modify CLAMDA to match these rules.
On exit: the values of the QP multipliers from the last QP subproblem. ${\mathbf{CLAMDA}}\left(j\right)$ should be non-negative if ${\mathbf{ISTATE}}\left(j\right)=1$ and non-positive if ${\mathbf{ISTATE}}\left(j\right)=2$.
22: OBJF – REAL (KIND=nag_wp) Output
On exit: the value of the objective function at the final iterate.
23: R(LDR,N) – REAL (KIND=nag_wp) array Input/Output
On entry: need not be initialized if the (default) optional parameter Cold Start is used. If the optional parameter Warm Start has been chosen, R must contain the upper triangular Cholesky factor $R$ of the initial approximation of the Hessian of the Lagrangian function, with the variables in the natural order.
Elements not in the upper triangular part of are assumed to be zero and need not be assigned. On exit : if contains the upper triangular Cholesky factor , an estimate of the transformed and reordered Hessian of the Lagrangian at in E04UFF/E04UFA). If contains the upper triangular Cholesky factor , the approximate (untransformed) Hessian of the Lagrangian, with the variables in the natural order. 24: X(N) – REAL (KIND=nag_wp) arrayInput/Output On entry: an initial estimate of the solution. On exit: the final estimate of the solution. 25: IWORK(LIWORK) – INTEGER arrayWorkspace 26: LIWORK – INTEGERInput On entry : the dimension of the array as declared in the (sub)program from which E04USF/E04USA is called. Constraint: ${\mathbf{LIWORK}}\ge 3×{\mathbf{N}}+{\mathbf{NCLIN}}+2×{\mathbf{NCNLN}}$. 27: WORK(LWORK) – REAL (KIND=nag_wp) arrayWorkspace 28: LWORK – INTEGERInput On entry : the dimension of the array as declared in the (sub)program from which E04USF/E04USA is called. □ if ${\mathbf{NCNLN}}=0$ and ${\mathbf{NCLIN}}=0$, ${\mathbf{LWORK}}\ge 20×{\mathbf{N}}+{\mathbf{M}}×\left({\mathbf{N}}+3\right)$; □ if ${\mathbf{NCNLN}}=0$ and ${\mathbf{NCLIN}}>0$, ${\mathbf{LWORK}}\ge 2×{{\mathbf{N}}}^{2}+20×{\mathbf{N}}+11×{\mathbf{NCLIN}}+\phantom{\rule{0ex}{0ex}}{\mathbf{M}}×\left({\mathbf{N}}+3\ □ if ${\mathbf{NCNLN}}>0$ and ${\mathbf{NCLIN}}\ge 0$, ${\mathbf{LWORK}}\ge 2×{{\mathbf{N}}}^{2}+{\mathbf{N}}×{\mathbf{NCLIN}}+2×{\mathbf{N}}×\phantom{\rule{0ex}{0ex}}{\mathbf{NCNLN}}+20×{\ The amounts of workspace provided and required are (by default) output on the current advisory message unit (as defined by ). As an alternative to computing from the formulas given above, you may prefer to obtain appropriate values from the output of a preliminary run with set to . 
(E04USF/E04USA will then terminate with an error exit.) 29: IUSER($*$) – INTEGER arrayUser Workspace 30: RUSER($*$) – REAL (KIND=nag_wp) arrayUser Workspace IUSER and RUSER are not used by E04USF/E04USA, but are passed directly to the user-supplied subroutines and may be used to pass information to these routines as an alternative to using COMMON global variables. 31: IFAIL – INTEGERInput/Output Note: for E04USA, IFAIL does not occur in this position in the parameter list. See the additional parameters described below. On entry: IFAIL must be set to $-1\text{ or }1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, because for this routine the values of the output parameters may be useful even if ${\mathbf{IFAIL}}\ne {\mathbf{0}}$ on exit, the recommended value is $-1$. When the value $-\mathbf{1}\text{ or }1$ is used it is essential to test the value of IFAIL on exit.
On exit: unless the routine detects an error or a warning has been flagged (see Section 6), ${\mathbf{IFAIL}}={\mathbf{0}}$. E04USF/E04USA returns with ${\mathbf{IFAIL}}={\mathbf{0}}$ if the iterates have converged to a point $x$ that satisfies the first-order Kuhn–Tucker conditions (see Section 10.1 in E04UFF/E04UFA) to the accuracy requested by the optional parameter Optimality Tolerance ($\text{default value}={\epsilon }_{r}^{0.8}$, where ${\epsilon }_{r}$ is the value of the optional parameter Function Precision ($\text{default value}={\epsilon }^{0.9}$, where $\epsilon$ is the machine precision)), i.e., the projected gradient and active constraint residuals are negligible at $x$. You should check whether the following four conditions are satisfied: (i) the final value of Norm Gz (see Section 8.1) is significantly less than that at the starting point; (ii) during the final major iterations, the values of Step and Mnr (see Section 8.1) are both one; (iii) the last few values of both Norm Gz and Violtn (see Section 8.1) become small at a fast linear rate; and (iv) Cond Hz (see Section 8.1) is small. If all these conditions hold, $x$ is almost certainly a local minimum. Note: the following are additional parameters for specific use with E04USA. Users of E04USF therefore need not read the remainder of this description. 31: LWSAV($120$) – LOGICAL arrayCommunication Array 32: IWSAV($610$) – INTEGER arrayCommunication Array 33: RWSAV($475$) – REAL (KIND=nag_wp) arrayCommunication Array The arrays LWSAV, IWSAV and RWSAV must not be altered between calls to any of the routines E04USA and its associated option-setting routines. 34: IFAIL – INTEGERInput/Output Note: see the parameter description for IFAIL above. 6 Error Indicators and Warnings If the routine detects an error on entry, explanatory error messages are output on the current error message unit. Note: E04USF/E04USA may return useful information for one or more of the following detected errors or warnings. Errors or warnings detected by the routine: ${\mathbf{IFAIL}}<0$ A negative value of IFAIL indicates an exit from E04USF/E04USA because you set ${\mathbf{MODE}}<0$ in a user-supplied subroutine. The value of IFAIL will be the same as your setting of MODE. ${\mathbf{IFAIL}}={\mathbf{1}}$ The final iterate $x$ satisfies the first-order Kuhn–Tucker conditions (see Section 10.1 in E04UFF/E04UFA) to the accuracy requested, but the sequence of iterates has not yet converged. E04USF/E04USA was terminated because no further improvement could be made in the merit function (see Section 8.1). This value of IFAIL may occur in several circumstances. The most common situation is that you ask for a solution with accuracy that is not attainable with the given precision of the problem (as specified by the optional parameter Function Precision ($\text{default value}={\epsilon }^{0.9}$, where $\epsilon$ is the machine precision)). This condition will also occur if, by chance, an iterate is an ‘exact’ Kuhn–Tucker point, but the change in the variables was significant at the previous iteration. (This situation often happens when minimizing very simple functions, such as quadratics.) If the four conditions listed in Section 5 are satisfied, $x$ is likely to be a solution even if ${\mathbf{IFAIL}}={\mathbf{1}}$. ${\mathbf{IFAIL}}={\mathbf{2}}$ E04USF/E04USA has terminated without finding a feasible point for the linear constraints and bounds, which means that either no feasible point exists for the given value of the optional parameter Linear Feasibility Tolerance ($\text{default value}=\sqrt{\epsilon }$, where $\epsilon$ is the machine precision), or no feasible point could be found in the number of iterations specified by the optional parameter Minor Iteration Limit ($\text{default value}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(50,3\left(n+{n}_{L}+{n}_{N}\right)\right)$). You should check that there are no constraint redundancies. If the data for the constraints are accurate only to an absolute precision $\sigma$, you should ensure that the value of the optional parameter Linear Feasibility Tolerance is greater than $\sigma$.
For example, if all elements of ${a}_{j}$ are of order unity and are accurate to only three decimal places, Linear Feasibility Tolerance should be at least ${10}^{-3}$. ${\mathbf{IFAIL}}={\mathbf{3}}$ No feasible point could be found for the nonlinear constraints. The problem may have no feasible solution. This means that there has been a sequence of QP subproblems for which no feasible point could be found (indicated by I at the end of each line of intermediate printout produced by the major iterations; see Section 8.1). This behaviour will occur if there is no feasible point for the nonlinear constraints. (However, there is no general test that can determine whether a feasible point exists for a set of nonlinear constraints.) If the infeasible subproblems occur from the very first major iteration, it is highly likely that no feasible point exists. If infeasibilities occur when earlier subproblems have been feasible, small constraint inconsistencies may be present. You should check the validity of constraints with negative values of ISTATE. If you are convinced that a feasible point does exist, E04USF/E04USA should be restarted at a different starting point. ${\mathbf{IFAIL}}={\mathbf{4}}$ The limiting number of iterations (as determined by the optional parameter Major Iteration Limit ($\text{default value}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(50,3\left(n+{n}_{L}\right)+10{n}_{N}\right)$)) has been reached. If the algorithm appears to be making satisfactory progress, then Major Iteration Limit may be too small. If so, either increase its value and rerun E04USF/E04USA or, alternatively, rerun E04USF/E04USA using the optional parameter Warm Start. If the algorithm seems to be making little or no progress however, then you should check for incorrect gradients or ill-conditioning as described under ${\mathbf{IFAIL}}={\mathbf{6}}$. Note that ill-conditioning in the working set is sometimes resolved automatically by the algorithm, in which case performing additional iterations may be helpful.
However, ill-conditioning in the Hessian approximation tends to persist once it has begun, so that allowing additional iterations without altering R is usually inadvisable. If the quasi-Newton update of the Hessian approximation was reset during the latter major iterations (i.e., an R occurs at the end of each line of intermediate printout; see Section 8.1), it may be worthwhile to try a Warm Start at the final point as suggested above. ${\mathbf{IFAIL}}={\mathbf{5}}$ Not used by this routine. ${\mathbf{IFAIL}}={\mathbf{6}}$ $x$ does not satisfy the first-order Kuhn–Tucker conditions (see Section 10.1 in E04UFF/E04UFA), and no improved point for the merit function (see Section 8.1) could be found during the final linesearch. This sometimes occurs because an overly stringent accuracy has been requested, i.e., the value of the optional parameter Optimality Tolerance ($\text{default value}={\epsilon }_{r}^{0.8}$, where ${\epsilon }_{r}$ is the value of the optional parameter Function Precision ($\text{default value}={\epsilon }^{0.9}$, where $\epsilon$ is the machine precision)) is too small. In this case you should apply the four tests described under ${\mathbf{IFAIL}}={\mathbf{1}}$ to determine whether or not the final solution is acceptable (see Gill et al. (1981), for a discussion of the attainable accuracy). If many iterations have occurred in which essentially no progress has been made and E04USF/E04USA has failed completely to move from the initial point then the user-supplied subroutines may be incorrect. You should refer to the comments under ${\mathbf{IFAIL}}={\mathbf{7}}$ and check the gradients using the optional parameter Verify ($\text{default value}=0$). Unfortunately, there may be small errors in the objective and constraint gradients that cannot be detected by the verification process. Finite difference approximations to first derivatives are catastrophically affected by even small inaccuracies. An indication of this situation is a dramatic alteration in the iterates if the finite difference interval is altered.
One might also suspect this type of error if a switch is made to central differences even when Norm Gz and Violtn (see Section 8.1) are large. Another possibility is that the search direction has become inaccurate because of ill-conditioning in the Hessian approximation or the matrix of constraints in the working set; either form of ill-conditioning tends to be reflected in large values of Mnr (the number of iterations required to solve each QP subproblem; see Section 8.1). If the condition estimate of the projected Hessian (Cond Hz; see Section 12) is extremely large, it may be worthwhile rerunning E04USF/E04USA from the final point with the optional parameter Warm Start. In this situation, ISTATE and CLAMDA should be left unaltered and R should be reset to the identity matrix. If the matrix of constraints in the working set is ill-conditioned (i.e., Cond T is extremely large; see Section 12), it may be helpful to run E04USF/E04USA with a relaxed value of the optional parameter Feasibility Tolerance ($\text{default value}=\sqrt{\epsilon }$, where $\epsilon$ is the machine precision). (Constraint dependencies are often indicated by wide variations in size in the diagonal elements of the matrix $T$, whose diagonals will be printed if ${\mathbf{Major Print Level}}\ge 30$.) ${\mathbf{IFAIL}}={\mathbf{7}}$ The user-supplied derivatives of the subfunctions and/or nonlinear constraints appear to be incorrect. Large errors were found in the derivatives of the subfunctions and/or nonlinear constraints. This value of IFAIL will occur if the verification process indicated that at least one Jacobian element had no correct figures. You should refer to the printed output to determine which elements are suspected to be in error. As a first step, you should check that the code for the subfunction and constraint values is correct – for example, by computing the subfunctions at a point where the correct value of $F\left(x\right)$ is known. However, care should be taken that the chosen point fully tests the evaluation of the subfunctions. It is remarkable how often the values $x=0$ or $x=1$ are used to test function evaluation procedures, and how often the special properties of these numbers make the test meaningless. Special care should be used in this test if computation of the subfunctions involves subsidiary data communicated in COMMON storage. Although the first evaluation of the subfunctions may be correct, subsequent calculations may be in error because some of the subsidiary data has accidentally been overwritten. Gradient checking will be ineffective if the objective function uses information computed by the constraints, since they are not necessarily computed before each function evaluation. Errors in programming the subfunctions may be quite subtle in that the subfunction values are ‘almost’ correct. For example, a subfunction may not be accurate to full precision because of the inaccurate calculation of a subsidiary quantity, or the limited accuracy of data upon which the subfunction depends. A common error on machines where numerical calculations are usually performed in double precision is to include even one single precision constant in the calculation of the subfunction; since some compilers do not convert such constants to double precision, half the correct figures may be lost by such a seemingly trivial error. ${\mathbf{IFAIL}}={\mathbf{8}}$ Not used by this routine. ${\mathbf{IFAIL}}={\mathbf{9}}$ An input parameter is invalid. Overflow If overflow occurs then either an element of $C$ is very large, or the singular values or singular vectors have been incorrectly supplied. 7 Accuracy If ${\mathbf{IFAIL}}={\mathbf{0}}$ on exit, then the vector returned in the array X is an estimate of the solution to an accuracy of approximately Optimality Tolerance ($\text{default value}={\epsilon }^{0.8}$, where $\epsilon$ is the machine precision). 8.1 Description of the Printed Output This section describes the intermediate printout and final printout produced by E04USF/E04USA. The intermediate printout is a subset of the monitoring information produced by the routine at every iteration (see Section 12).
You can control the level of printed output (see the description of the optional parameter Major Print Level). Note that the intermediate printout and final printout are produced only if ${\mathbf{Major Print Level}}\ge 10$ (the default for E04USF; by default no output is produced by E04USA). The following line of summary output is produced at every major iteration. In all cases, the values of the quantities printed are those in effect on completion of the given iteration. Maj is the major iteration count. Mnr is the number of minor iterations required by the feasibility and optimality phases of the QP subproblem. Generally, Mnr will be $1$ in the later iterations, since theoretical analysis predicts that the correct active set will be identified near the solution (see Section 10 in E04UFF/E04UFA). Note that Mnr may be greater than the optional parameter Minor Iteration Limit if some iterations are required for the feasibility phase. Step is the step ${\alpha }_{k}$ taken along the computed search direction. On reasonably well-behaved problems, the unit step (i.e., ${\alpha }_{k}=1$) will be taken as the solution is approached. Merit Function is the value of the augmented Lagrangian merit function (see (12) in E04UFF/E04UFA) at the current iterate. This function will decrease at each iteration unless it was necessary to increase the penalty parameters (see Section 10.3 in E04UFF/E04UFA). As the solution is approached, Merit Function will converge to the value of the objective function at the solution. If the QP subproblem does not have a feasible point (signified by I at the end of the current output line) then the merit function is a large multiple of the constraint violations, weighted by the penalty parameters. During a sequence of major iterations with infeasible subproblems, the sequence of Merit Function values will decrease monotonically until either a feasible subproblem is obtained or E04USF/E04USA terminates with ${\mathbf{IFAIL}}={\mathbf{3}}$ (no feasible point could be found for the nonlinear constraints). If there are no nonlinear constraints present (i.e., ${\mathbf{NCNLN}}=0$) then this entry contains Objective, the value of the objective function $F\left(x\right)$. The objective function will decrease monotonically to its optimal value when there are no nonlinear constraints. Norm Gz is $‖{Z}^{\mathrm{T}}{g}_{\mathrm{FR}}‖$, the Euclidean norm of the projected gradient (see Section 10.2 in E04UFF/E04UFA). Norm Gz will be approximately zero in the neighbourhood of a solution. Violtn is the Euclidean norm of the residuals of constraints that are violated or in the predicted active set (not printed if NCNLN is zero). Violtn will be approximately zero in the neighbourhood of a solution. Cond Hz is a lower bound on the condition number of the projected Hessian approximation ${H}_{Z}$ (${H}_{Z}={Z}^{\mathrm{T}}{H}_{\mathrm{FR}}Z={R}_{Z}^{\mathrm{T}}{R}_{Z}$; see (6) and (11) in E04UFF/E04UFA). The larger this number, the more difficult the problem. M is printed if the quasi-Newton update has been modified to ensure that the Hessian approximation is positive definite (see Section 10.4 in E04UFF/E04UFA). I is printed if the QP subproblem has no feasible point. C is printed if central differences have been used to compute the unspecified objective and constraint gradients. If the value of Step is zero then the switch to central differences was made because no lower point could be found in the linesearch. (In this case, the QP subproblem is resolved with the central difference gradient and Jacobian.)
If the value of Step is nonzero then central differences were computed because Norm Gz and Violtn imply that $x$ is close to a Kuhn–Tucker point (see Section 10.1 in E04UFF/E04UFA). L is printed if the linesearch has produced a relative change in $x$ greater than the value defined by the optional parameter Step Limit. If this output occurs frequently during later iterations of the run, optional parameter Step Limit should be set to a larger value. R is printed if the approximate Hessian has been refactorized. If the diagonal condition estimator of $R$ indicates that the approximate Hessian is badly conditioned then the approximate Hessian is refactorized using column interchanges. If necessary, $R$ is modified so that its diagonal condition estimator is bounded. The final printout includes a listing of the status of every variable and constraint. The following describes the printout for each variable. A full stop (.) is printed for any numerical value that is zero. Varbl gives the name (V) and index $\mathit{j}$, for $\mathit{j}=1,2,\dots ,n$, of the variable. State gives the state of the variable (FR if neither bound is in the working set, EQ if a fixed variable, LL if on its lower bound, UL if on its upper bound, TF if temporarily fixed at its current value). If Value lies outside the upper or lower bounds by more than the Feasibility Tolerance, State will be ++ or -- respectively. A key is sometimes printed before State. A Alternative optimum possible. The variable is active at one of its bounds, but its Lagrange multiplier is essentially zero. This means that if the variable were allowed to start moving away from its bound then there would be no change to the objective function. The values of the other free variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case the values of the Lagrange multipliers might also change. D Degenerate. The variable is free, but it is equal to (or very close to) one of its bounds. I Infeasible. The variable is currently violating one of its bounds by more than the Feasibility Tolerance. Value is the value of the variable at the final iteration. Lower is the lower bound specified for the variable. None indicates that ${\mathbf{BL}}\left(j\right)\le -\mathit{bigbnd}$. Upper is the upper bound specified for the variable. None indicates that ${\mathbf{BU}}\left(j\right)\ge \mathit{bigbnd}$. Lagr Mult is the Lagrange multiplier for the associated bound. This will be zero if State is FR unless ${\mathbf{BL}}\left(j\right)\le -\mathit{bigbnd}$ and ${\mathbf{BU}}\left(j\right)\ge \mathit{bigbnd}$, in which case the entry will be blank. If $x$ is optimal, the multiplier should be non-negative if State is LL and non-positive if State is UL. Slack is the difference between the variable Value and the nearer of its (finite) bounds ${\mathbf{BL}}\left(j\right)$ and ${\mathbf{BU}}\left(j\right)$. A blank entry indicates that the associated variable is not bounded (i.e., ${\mathbf{BL}}\left(j\right)\le -\mathit{bigbnd}$ and ${\mathbf{BU}}\left(j\right)\ge \mathit{bigbnd}$). The meaning of the printout for linear and nonlinear constraints is the same as that given above for variables, with ‘variable’ replaced by ‘constraint’ and with the bound entries referring to the corresponding constraint bounds, and with the following changes in the heading: L Con gives the name (L) and index $\mathit{j}$, for $\mathit{j}=1,2,\dots ,{n}_{L}$, of the linear constraint. N Con gives the name (N) and index ($\mathit{j}-{n}_{L}$), for $\mathit{j}={n}_{L}+1,\dots ,{n}_{L}+{n}_{N}$, of the nonlinear constraint. Note that movement off a constraint (as opposed to a variable moving away from its bound) can be interpreted as allowing the entry in the Slack column to become positive.
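The quantity Norm Gz printed in the summary output is simply the Euclidean norm of the gradient projected into the null space of the active constraint normals. The sketch below (plain Python with NumPy; the matrix $Z$ and gradient $g$ are made-up illustrative data, not produced by the library) forms $‖{Z}^{\mathrm{T}}g‖$ for a trivial active set.

```python
import numpy as np

# Suppose the single active constraint at x has normal a = (1, 0),
# i.e., the variable x1 is held at its bound.  A basis for the null
# space of a^T is then Z = (0, 1)^T, and the projected gradient is Z^T g.
Z = np.array([[0.0],
              [1.0]])        # null-space basis of the active set (illustrative)
g = np.array([3.0, 4.0])     # objective gradient at x (illustrative)

norm_gz = np.linalg.norm(Z.T @ g)   # Euclidean norm of the projected gradient
```

Near a solution the component of $g$ orthogonal to the active constraint normals vanishes, so Norm Gz tends to zero even though $‖g‖$ itself need not.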
Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision. 9 Example This example is based on Problem 57 in Hock and Schittkowski (1981) and involves the minimization of the sum of squares function $F\left(x\right)=\frac{1}{2}\sum _{i=1}^{44}{\left({y}_{i}-{f}_{i}\left(x\right)\right)}^{2}$, where ${f}_{i}\left(x\right)={x}_{1}+\left(0.49-{x}_{1}\right){e}^{-{x}_{2}\left({a}_{i}-8\right)}$ and the data are:

 i   y_i   a_i     i   y_i   a_i
 1   0.49    8    23   0.41   22
 2   0.49    8    24   0.40   22
 3   0.48   10    25   0.42   24
 4   0.47   10    26   0.40   24
 5   0.48   10    27   0.40   24
 6   0.47   10    28   0.41   26
 7   0.46   12    29   0.40   26
 8   0.46   12    30   0.41   26
 9   0.45   12    31   0.41   28
10   0.43   12    32   0.40   28
11   0.45   14    33   0.40   30
12   0.43   14    34   0.40   30
13   0.43   14    35   0.38   30
14   0.44   16    36   0.41   32
15   0.43   16    37   0.40   32
16   0.43   16    38   0.40   34
17   0.46   18    39   0.41   36
18   0.45   18    40   0.38   36
19   0.42   20    41   0.40   38
20   0.42   20    42   0.40   38
21   0.43   20    43   0.39   40
22   0.41   22    44   0.39   42

The minimization is subject to simple bounds on the variables ${x}_{1}$ and ${x}_{2}$, to a general linear constraint and to a nonlinear constraint; the constraint data are supplied in the example program. The initial point, which is infeasible, and the optimal solution (to five figures) are shown in the program results. The nonlinear constraint is active at the solution. An accompanying document includes an example program that solves the same problem using some of the optional parameters described in Section 11. 9.1 Program Text Note: the following programs illustrate the use of E04USF and E04USA. 9.2 Program Data 9.3 Program Results Note: the remainder of this document is intended for more advanced users. Section 11 describes the optional parameters which may be set by calls to E04UQF/E04UQA and/or E04URF/E04URA. Section 12 describes the quantities which can be requested to monitor the course of the computation. 10 Algorithmic Details E04USF/E04USA implements a sequential quadratic programming (SQP) method incorporating an augmented Lagrangian merit function and a BFGS (Broyden–Fletcher–Goldfarb–Shanno) quasi-Newton approximation to the Hessian of the Lagrangian. The documents for E04UFF/E04UFA should be consulted for details of the method.
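The sum of squares objective defined in the example of Section 9 is straightforward to evaluate directly. The following plain-Python sketch (it does not call the NAG routine; for brevity only the first four $\left({y}_{i},{a}_{i}\right)$ pairs from the data table are used) computes ${f}_{i}\left(x\right)$ and $F\left(x\right)$.

```python
import math

def f(x, a):
    """Model function f_i(x) = x1 + (0.49 - x1) * exp(-x2 * (a_i - 8))."""
    x1, x2 = x
    return x1 + (0.49 - x1) * math.exp(-x2 * (a - 8.0))

def objective(x, data):
    """F(x) = (1/2) * sum over i of (y_i - f_i(x))^2."""
    return 0.5 * sum((y - f(x, a)) ** 2 for y, a in data)

# First four (y_i, a_i) pairs from the data table above.
data = [(0.49, 8.0), (0.49, 8.0), (0.48, 10.0), (0.47, 10.0)]
F = objective((0.5, 1.3), data)   # x = (0.5, 1.3) is an arbitrary trial point
```

Note that ${f}_{i}\left(x\right)=0.49$ whenever ${a}_{i}=8$, independently of $x$, since the exponential factor is then one; this gives a convenient spot check of the model.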
11 Optional Parameters Several optional parameters in E04USF/E04USA define choices in the problem specification or the algorithm logic. In order to reduce the number of formal parameters of E04USF/E04USA these optional parameters have associated default values that are appropriate for most problems. Therefore you need only specify those optional parameters whose values are to be different from their default values. The remainder of this section can be skipped if you wish to use the default values for all optional parameters. The following is a list of the optional parameters available. A full description of each optional parameter is provided in Section 11.1. Optional parameters may be specified by calling one, or both, of the routines E04UQF/E04UQA and E04URF/E04URA before a call to E04USF/E04USA. E04UQF/E04UQA reads options from an external options file, with Begin as the first and End as the last lines respectively and each intermediate line defining a single optional parameter. For example, Begin Print level = 1 End The call CALL E04UQF (IOPTNS, INFORM) can then be used to read the file on unit IOPTNS. INFORM will be zero on successful exit. E04UQF/E04UQA should be consulted for a full description of this method of supplying optional parameters. E04URF/E04URA can be called to supply options directly, one call being necessary for each optional parameter. For example, CALL E04URF ('Print Level = 1') E04URF/E04URA should be consulted for a full description of this method of supplying optional parameters. All optional parameters not specified by you are set to their default values. Optional parameters specified by you are unaltered by E04USF/E04USA (unless they define invalid values) and so remain in effect for subsequent calls to E04USF/E04USA, unless altered by you. 11.1 Description of the Optional Parameters For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains: • the keywords, where the minimum abbreviation of each keyword is underlined (if no characters of an optional qualifier are underlined, the qualifier may be omitted); • a parameter value, where the letters $a$, $i\text{ and }r$ denote options that take character, integer and real values respectively; • the default value, where the symbol $\epsilon$ is a generic notation for machine precision (see X02AJF), and ${\epsilon }_{r}$ denotes the relative precision of the objective function (the optional parameter Function Precision). Keywords and character values are case and white space insensitive. Further details of other quantities not explicitly defined in this section may be found by consulting the document for E04UFF/E04UFA. Central Difference Interval $r$ Default values are computed If the algorithm switches to central differences because the forward-difference approximation is not sufficiently accurate, the value of $r$ is used as the difference interval for every element of $x$. The switch to central differences is indicated by C at the end of each line of intermediate printout produced by the major iterations (see Section 8.1). The use of finite differences is discussed further under the optional parameter Difference Interval. If you supply a value for this optional parameter, a small value between $0.0$ and $1.0$ is appropriate. Cold Start Default Warm Start This option controls the specification of the initial working set in both the procedure for finding a feasible point for the linear constraints and bounds, and in the first QP subproblem thereafter. With a Cold Start, the first working set is chosen by E04USF/E04USA based on the values of the variables and constraints at the initial point.
Broadly speaking, the initial working set will include equality constraints and bounds or inequality constraints that violate or ‘nearly’ satisfy their bounds (to within the value of the optional parameter Crash Tolerance). With a Warm Start, you must set the ISTATE array and define CLAMDA and R as discussed in Section 5. ISTATE values associated with bounds and linear constraints determine the initial working set of the procedure to find a feasible point with respect to the bounds and linear constraints. ISTATE values associated with nonlinear constraints determine the initial working set of the first QP subproblem after such a feasible point has been found. E04USF/E04USA will override your specification of ISTATE if necessary, so that a poor choice of the working set will not cause a fatal error. For instance, any elements of ISTATE which are set to $-1\text{ or }4$ will be reset to zero, as will any elements which are set to $3$ when the corresponding elements of BL and BU are not equal. A Warm Start will be advantageous if a good estimate of the initial working set is available – for example, when E04USF/E04USA is called repeatedly to solve related problems. Crash Tolerance $r$ Default $\text{}=0.01$ This value is used in conjunction with the optional parameter Cold Start (the default value) when E04USF/E04USA selects an initial working set. If $0\le r\le 1$, the initial working set will include (if possible) bounds or general inequality constraints that lie within $r$ of their bounds. In particular, a constraint of the form ${a}_{j}^{\mathrm{T}}x\ge l$ will be included in the initial working set if $\left|{a}_{j}^{\mathrm{T}}x-l\right|\le r\left(1+\left|l\right|\right)$. If $r<0$ or $r>1$, the default value is used. Defaults This special keyword may be used to reset all optional parameters to their default values. Derivative Level $i$ Default $\text{}=3$ This parameter indicates which derivatives are provided in the user-supplied subroutines. The possible choices for $i$ are the following. $i$ Meaning 3 All elements of the objective Jacobian and the constraint Jacobian are provided by you.
2 All elements of the constraint Jacobian are provided, but some elements of the objective Jacobian are not specified by you. 1 All elements of the objective Jacobian are provided, but some elements of the constraint Jacobian are not specified by you. 0 Some elements of both the objective Jacobian and the constraint Jacobian are not specified by you. The value $i=3$ should be used whenever possible, since E04USF/E04USA is more reliable (and will usually be more efficient) when all derivatives are exact. If $i=0\text{ or }2$, E04USF/E04USA will approximate unspecified elements of the objective Jacobian, using finite differences. The computation of finite difference approximations usually increases the total run-time, since a call to the objective subroutine is required for each unspecified element. Furthermore, less accuracy can be attained in the solution (see Chapter 8 of Gill et al. (1981), for a discussion of limiting accuracy). If $i=0\text{ or }1$, E04USF/E04USA will approximate unspecified elements of the constraint Jacobian. One call to the constraint subroutine is needed for each variable for which partial derivatives are not available. For example, if the constraint Jacobian has the form $\left(\begin{array}{ccccc}*& *& *& *& *\\ *& ?& ?& *& *\\ *& *& ?& *& *\end{array}\right)$ where ‘$*$’ indicates an element provided by you and ‘?’ indicates an unspecified element, E04USF/E04USA will call the constraint subroutine twice: once to estimate the missing element in column $2$, and again to estimate the two missing elements in column $3$. (Since columns $1$, $4$ and $5$ are known, they require no calls.) At times, central differences are used rather than forward differences, in which case twice as many calls are needed. (The switch to central differences is not under your control.) If $i<0$ or $i>3$, the default value is used. Difference Interval $r$ Default values are computed This option defines an interval used to estimate derivatives by finite differences in the following circumstances: (a) For verifying the objective and/or constraint gradients (see the description of the optional parameter Verify).
(b) For estimating unspecified elements of the objective and/or constraint Jacobian matrix. In general, a derivative with respect to the $\mathit{j}$th variable is approximated using the interval ${\delta }_{j}$, where ${\delta }_{j}=r\left(1+\left|{\stackrel{^}{x}}_{j}\right|\right)$, with $\stackrel{^}{x}$ the first point feasible with respect to the bounds and linear constraints. If the functions are well scaled, the resulting derivative approximation should be correspondingly accurate. See Gill et al. (1981) for a discussion of the accuracy in finite difference approximations. If a difference interval is not specified, a finite difference interval will be computed automatically for each variable by a procedure that requires up to six calls of the problem functions for each element. This option is recommended if the function is badly scaled or you wish to have E04USF/E04USA determine constant elements in the objective and constraint gradients (see the descriptions of the user-supplied subroutines in Section 5). If you supply a value for this optional parameter, a small value between $0.0$ and $1.0$ is appropriate. Feasibility Tolerance $r$ Default $\text{}=\sqrt{\epsilon }$ The scalar $r$ defines the maximum acceptable violations in linear and nonlinear constraints at a ‘feasible’ point; i.e., a constraint is considered satisfied if its violation does not exceed $r$. If $r\ge 1$, the default value is used. Using this keyword sets both optional parameters Linear Feasibility Tolerance and Nonlinear Feasibility Tolerance to $r$, if $\epsilon \le r<1$. (Additional details are given under the descriptions of these optional parameters.) Function Precision $r$ Default $\text{}={\epsilon }^{0.9}$ This parameter defines ${\epsilon }_{r}$, which is intended to be a measure of the accuracy with which the problem functions $F\left(x\right)$ and $c\left(x\right)$ can be computed. If $r<\epsilon$ or $r\ge 1$, the default value is used.
The value of ${\epsilon }_{r}$ should reflect the relative precision of $F\left(x\right)$; i.e., ${\epsilon }_{r}$ acts as a relative precision when $\left|F\right|$ is large and as an absolute precision when $\left|F\right|$ is small. For example, if $F\left(x\right)$ is typically of order $1000$ and the first six significant digits are known to be correct, an appropriate value for ${\epsilon }_{r}$ would be ${10}^{-6}$. In contrast, if $F\left(x\right)$ is typically of order ${10}^{-4}$ and the first six significant digits are known to be correct, an appropriate value for ${\epsilon }_{r}$ would be ${10}^{-10}$. The choice of ${\epsilon }_{r}$ can be quite complicated for badly scaled problems; see Chapter 8 of Gill et al. (1981) for a discussion of scaling techniques. The default value is appropriate for most simple functions that are computed with full accuracy. However, when the accuracy of the computed function values is known to be significantly worse than full precision, the value of ${\epsilon }_{r}$ should be large enough so that E04USF/E04USA will not attempt to distinguish between function values that differ by less than the error inherent in the calculation.

Hessian No Default $\text{}=\mathrm{NO}$

This option controls the contents of the upper triangular matrix $R$ (see Section 5). E04USF/E04USA works exclusively with the transformed and reordered Hessian ${H}_{Q}$, and hence extra computation is required to form the Hessian itself. If ${\mathbf{Hessian}}=\mathrm{NO}$, $R$ contains the Cholesky factor of the transformed and reordered Hessian. If ${\mathbf{Hessian}}=\mathrm{YES}$, the Cholesky factor of the approximate Hessian itself is formed and stored in $R$. You should select ${\mathbf{Hessian}}=\mathrm{YES}$ if a Warm Start will be used for the next call to E04USF/E04USA.

Infinite Bound Size $r$ Default $\text{}={10}^{20}$

If $r>0$, $r$ defines the ‘infinite’ bound $\mathit{bigbnd}$ in the definition of the problem constraints. Any upper bound greater than or equal to $\mathit{bigbnd}$ will be regarded as $+\infty$ (and similarly any lower bound less than or equal to $-\mathit{bigbnd}$ will be regarded as $-\infty$). If $r<0$, the default value is used.
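The difference-interval rule quoted earlier (a forward difference over the interval ${\delta }_{j}=r\left(1+\left|{x}_{j}\right|\right)$ for the $j$th variable) is easy to sketch. The following is an illustrative Python sketch, not NAG code; the function name and the test function are invented for the example:

```python
def forward_diff_gradient(f, x, r):
    """Approximate the gradient of f at x by forward differences,
    using the interval delta_j = r * (1 + |x_j|) for the j-th variable,
    as in the Difference Interval rule described above."""
    fx = f(x)
    grad = []
    for j in range(len(x)):
        delta = r * (1.0 + abs(x[j]))
        xp = list(x)
        xp[j] += delta          # one extra function call per variable
        grad.append((f(xp) - fx) / delta)
    return grad

# Example: f(x) = x0^2 + 3*x1, whose exact gradient is (2*x0, 3).
f = lambda x: x[0] ** 2 + 3.0 * x[1]
g = forward_diff_gradient(f, [2.0, -1.0], r=1e-7)
# g is approximately [4.0, 3.0]; for a well-scaled function the error
# of the forward difference is roughly O(r), as the text notes.
```

Note how the interval scales with $\left|{x}_{j}\right|$, so the approximation behaves sensibly for both large and small variable values.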
Infinite Step Size $r$ Default $\text{}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(\mathit{bigbnd},{10}^{20}\right)$

If $r>0$, $r$ specifies the magnitude of the change in variables that is treated as a step to an unbounded solution. If the change in $x$ during an iteration would exceed the value of $r$, the objective function is considered to be unbounded below in the feasible region. If $r\le 0$, the default value is used.

JTJ Initial Hessian Default

This option controls the initial value of the upper triangular matrix $R$. If $J$ denotes the objective Jacobian matrix $\nabla f\left(x\right)$, then ${J}^{\mathrm{T}}J$ is often a good approximation to the objective Hessian matrix ${\nabla }^{2}F\left(x\right)$ (see also optional parameter Reset Frequency).

Line Search Tolerance $r$ Default $\text{}=0.9$

The value $r$ ($0\le r<1$) controls the accuracy with which the step $\alpha$ taken during each iteration approximates a minimum of the merit function along the search direction (the smaller the value of $r$, the more accurate the linesearch). The default value $r=0.9$ requests an inaccurate search and is appropriate for most problems, particularly those with any nonlinear constraints. If there are no nonlinear constraints, a more accurate search may be appropriate when it is desirable to reduce the number of major iterations – for example, if the objective function is cheap to evaluate, or if a substantial number of derivatives are unspecified. If $r<0$ or $r\ge 1$, the default value is used.

Linear Feasibility Tolerance ${r}_{1}$ Default $\text{}=\sqrt{\epsilon }$

Nonlinear Feasibility Tolerance ${r}_{2}$ Default $\text{}={\epsilon }^{0.33}$ or $\sqrt{\epsilon }$

The default value of ${r}_{2}$ is ${\epsilon }^{0.33}$ if ${\mathbf{Derivative Level}}=0$ or $1$, and $\sqrt{\epsilon }$ otherwise.
The scalars ${r}_{1}$ and ${r}_{2}$ define the maximum acceptable absolute violations in linear and nonlinear constraints at a ‘feasible’ point; i.e., a linear constraint is considered satisfied if its violation does not exceed ${r}_{1}$, and similarly for a nonlinear constraint and ${r}_{2}$. If ${r}_{m}<\epsilon$ or ${r}_{\mathit{m}}\ge 1$, the default value is used, for $\mathit{m}=1,2$.

On entry to E04USF/E04USA, an iterative procedure is executed in order to find a point that satisfies the linear constraints and bounds on the variables to within the tolerance ${r}_{1}$. All subsequent iterates will satisfy the linear constraints to within the same tolerance (unless ${r}_{1}$ is comparable to the finite difference interval).

For nonlinear constraints, the feasibility tolerance defines the largest constraint violation that is acceptable at an optimal point. Since nonlinear constraints are generally not satisfied until the final iterate, the value of optional parameter Nonlinear Feasibility Tolerance acts as a partial termination criterion for the iterative sequence generated by E04USF/E04USA (see also optional parameter Optimality Tolerance).

These tolerances should reflect the precision of the corresponding constraints. For example, if the variables and the coefficients in the linear constraints are of order unity, and the latter are correct to about $6$ decimal digits, it would be appropriate to specify ${r}_{1}$ as ${10}^{-6}$.

List Default for $\mathrm{E04USF}={\mathbf{List}}$

Nolist Default for $\mathrm{E04USA}={\mathbf{Nolist}}$

Normally each optional parameter specification is printed as it is supplied. Optional parameter Nolist may be used to suppress the printing and optional parameter List may be used to restore printing.

Major Iteration Limit $i$ Default $\text{}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(50,3\left(n+{n}_{L}\right)+10{n}_{N}\right)$

The value of $i$ specifies the maximum number of major iterations allowed before termination.
Setting $i=0$ and ${\mathbf{Major Print Level}}>0$ means that the workspace needed will be computed and printed, but no iterations will be performed. If $i<0$, the default value is used.

Major Print Level $i$ Default for E04USF $\text{}=10$

Print Level Default for E04USA $\text{}=0$

The value of $i$ controls the amount of printout produced by the major iterations of E04USF/E04USA, as indicated below. A detailed description of the printed output is given in Section 8.1 (summary output at each major iteration and the final solution) and Section 12 (monitoring information at each major iteration). (See also the description of the optional parameter Minor Print Level.)

The following printout is sent to the current advisory message unit (as defined by X04ABF):

$i$ Output

$0$ No output.

$1$ The final solution only.

$5$ One line of summary output ($\text{}<80$ characters; see Section 8.1) for each major iteration (no printout of the final solution).

$\text{}\ge 10$ The final solution and one line of summary output for each major iteration.

The following printout is sent to the logical unit number defined by the optional parameter Monitoring File:

$i$ Output

$\text{}<5$ No output.

$\text{}\ge 5$ One long line of output ($\text{}>80$ characters; see Section 12) for each major iteration (no printout of the final solution).

$\text{}\ge 20$ At each major iteration, the objective function, the Euclidean norm of the nonlinear constraint violations, the values of the nonlinear constraints (the vector $c$), the values of the linear constraints (the vector ${A}_{L}x$), and the current values of the variables (the vector $x$).

$\text{}\ge 30$ At each major iteration, the diagonal elements of the matrix $T$ associated with the $TQ$ factorization (see (5) in E04UFF/E04UFA) of the QP working set, and the diagonal elements of $R$, the triangular factor of the transformed and reordered Hessian (see (6) in E04UFF/E04UFA).
If ${\mathbf{Major Print Level}}\ge 5$ and the unit number defined by the optional parameter Monitoring File is the same as that defined by X04ABF, then the summary output for each major iteration is suppressed.

Minor Iteration Limit $i$ Default $\text{}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(50,3\left(n+{n}_{L}+{n}_{N}\right)\right)$

The value of $i$ specifies the maximum number of iterations for finding a feasible point with respect to the bounds and linear constraints (if any). The value of $i$ also specifies the maximum number of minor iterations for the optimality phase of each QP subproblem. If $i\le 0$, the default value is used.

Minor Print Level $i$ Default $\text{}=0$

The value of $i$ controls the amount of printout produced by the minor iterations of E04USF/E04USA (i.e., the iterations of the quadratic programming algorithm), as indicated below. A detailed description of the printed output is given in Section 8.1 (summary output at each minor iteration and the final QP solution) and Section 12 (monitoring information at each minor iteration). (See also the description of the optional parameter Major Print Level.)

The following printout is sent to the current advisory message unit (as defined by X04ABF):

$i$ Output

$0$ No output.

$1$ The final QP solution only.

$5$ One line of summary output ($\text{}<80$ characters; see Section 8.1) for each minor iteration (no printout of the final QP solution).

$\text{}\ge 10$ The final QP solution and one line of summary output for each minor iteration.

The following printout is sent to the logical unit number defined by the optional parameter Monitoring File:

$i$ Output

$\text{}<5$ No output.

$\text{}\ge 5$ One long line of output ($\text{}>80$ characters; see Section 12) for each minor iteration (no printout of the final QP solution).
$\text{}\ge 20$ At each minor iteration, the current estimates of the QP multipliers, the current estimate of the QP search direction, the QP constraint values, and the status of each QP constraint.

$\text{}\ge 30$ At each minor iteration, the diagonal elements of the matrix $T$ associated with the $TQ$ factorization (see (5) in E04UFF/E04UFA) of the QP working set, and the diagonal elements of the Cholesky factor $R$ of the transformed Hessian (see (6) in E04UFF/E04UFA).

If ${\mathbf{Minor Print Level}}\ge 5$ and the unit number defined by the optional parameter Monitoring File is the same as that defined by X04ABF, then the summary output for each minor iteration is suppressed.

Monitoring File $i$ Default $\text{}=-1$

If $i\ge 0$ and ${\mathbf{Major Print Level}}\ge 5$ or $i\ge 0$ and ${\mathbf{Minor Print Level}}\ge 5$, monitoring information produced by E04USF/E04USA at every iteration is sent to a file with logical unit number $i$. If $i<0$ and/or ${\mathbf{Major Print Level}}<5$ and ${\mathbf{Minor Print Level}}<5$, no monitoring information is produced.

Optimality Tolerance $r$ Default $\text{}={\epsilon }_{R}^{0.8}$

The parameter $r$ (${\epsilon }_{R}\le r<1$) specifies the accuracy to which you wish the final iterate to approximate a solution of the problem. Broadly speaking, $r$ indicates the number of correct figures desired in the objective function at the solution. For example, if $r$ is ${10}^{-6}$ and E04USF/E04USA terminates successfully, the final value of $F$ should have approximately six correct figures. If $r<{\epsilon }_{R}$ or $r\ge 1$, the default value is used.

E04USF/E04USA will terminate successfully if the iterative sequence of values is judged to have converged and the final point satisfies the first-order Kuhn–Tucker conditions (see Section 10.1 in E04UFF/E04UFA). The sequence of iterates is considered to have converged at $x$ if

$\alpha \left\|p\right\|\le \sqrt{r}\left(1+\left\|x\right\|\right),$ (2)

where $p$ is the search direction and $\alpha$ the step length.
An iterate is considered to satisfy the first-order conditions for a minimum if

$\left\|{Z}^{\mathrm{T}}{g}_{\mathrm{FR}}\right\|\le \sqrt{r}\left(1+\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1+\left|F\left(x\right)\right|,\left\|{g}_{\mathrm{FR}}\right\|\right)\right)$ (3)

and

$\left|{\mathit{res}}_{j}\right|\le \mathit{ftol}\text{ for all }j,$ (4)

where ${Z}^{\mathrm{T}}{g}_{\mathrm{FR}}$ is the projected gradient, ${g}_{\mathrm{FR}}$ is the gradient of $F\left(x\right)$ with respect to the free variables, ${\mathit{res}}_{j}$ is the violation of the $j$th active nonlinear constraint, and $\mathit{ftol}$ is the Nonlinear Feasibility Tolerance.

Reset Frequency $i$ Default $\text{}=2$

If $i>0$, this parameter allows you to reset the approximate Hessian matrix to ${J}^{\mathrm{T}}J$ every $i$ iterations, where $J$ is the objective Jacobian matrix $\nabla f\left(x\right)$ (see also the description of the optional parameter JTJ Initial Hessian).

At any point where there are no nonlinear constraints active and the values of $f$ are small in magnitude compared to the norm of $J$, ${J}^{\mathrm{T}}J$ will be a good approximation to the objective Hessian ${\nabla }^{2}F\left(x\right)$. Under these circumstances, frequent resetting can significantly improve the convergence rate of E04USF/E04USA.

Resetting is suppressed at any iteration during which there are nonlinear constraints active.

If $i\le 0$, the default value is used.

Start Objective Check At Variable ${i}_{1}$ Default $\text{}=1$

Stop Objective Check At Variable ${i}_{2}$ Default $\text{}=n$

Start Constraint Check At Variable ${i}_{3}$ Default $\text{}=1$

Stop Constraint Check At Variable ${i}_{4}$ Default $\text{}=n$

These keywords take effect only if ${\mathbf{Verify Level}}>0$. They may be used to control the verification of Jacobian elements computed by the user-supplied subroutines OBJFUN and CONFUN. For example, if the first $30$ columns of the objective Jacobian appeared to be correct in an earlier run, so that only column $31$ remains questionable, it is reasonable to specify ${\mathbf{Start Objective Check At Variable}}=31$. If the first $30$ variables appear linearly in the subfunctions, so that the corresponding Jacobian elements are constant, the above choice would also be appropriate.
If ${i}_{2\mathit{m}-1}\le 0$ or ${i}_{2\mathit{m}-1}>\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(n,{i}_{2\mathit{m}}\right)$, the default value is used, for $\mathit{m}=1,2$. If ${i}_{2\mathit{m}}\le 0$ or ${i}_{2\mathit{m}}>n$, the default value is used, for $\mathit{m}=1,2$.

Step Limit $r$ Default $\text{}=2.0$

The value $r$ specifies the maximum change in variables at the first step of the linesearch. In some cases, such as $F\left(x\right)=a{e}^{bx}$ or $F\left(x\right)=a{x}^{b}$, even a moderate change in the elements of $x$ can lead to floating point overflow. The parameter $r$ is therefore used to encourage evaluation of the problem functions at meaningful points. Given any major iterate $x$, the first point $\stackrel{~}{x}$ at which $F$ and $c$ are evaluated during the linesearch is restricted so that

$\left\|\stackrel{~}{x}-x\right\|\le r\left(1+\left\|x\right\|\right).$

The linesearch may go on and evaluate $F$ and $c$ at points further from $x$ if this will result in a lower value of the merit function (indicated by L at the end of each line of output produced by the major iterations; see Section 8.1). If L is printed for most of the iterations, $r$ should be set to a larger value.

Wherever possible, upper and lower bounds on $x$ should be used to prevent evaluation of nonlinear functions at wild values. The default value ${\mathbf{Step Limit}}=2.0$ should not affect progress on well-behaved functions, but values such as $0.1\text{ or }0.01$ may be helpful when rapidly varying functions are present. If a small value of Step Limit is selected, a good starting point may be required. An important application is to the class of nonlinear least squares problems. If $r\le 0$, the default value is used.

Verify Level $i$ Default $\text{}=0$

Verify Constraint Gradients

Verify Objective Gradients

These keywords refer to finite difference checks on the gradient elements computed by OBJFUN and CONFUN. (Unspecified gradient elements are not checked.) The possible choices for $i$ are the following:

$i$ Meaning

$-1$ No checks are performed.

$\phantom{-}0$ Only a ‘cheap’ test will be performed, requiring one call to OBJFUN.
$\phantom{-}1$ Individual gradient elements will also be checked using a reliable (but more expensive) test.

For example, the nonlinear objective gradient (if any) will be verified if either ${\mathbf{Verify Objective Gradients}}$ or ${\mathbf{Verify Level}}=1$ is specified. Similarly, the objective and the constraint gradients will be verified if ${\mathbf{Verify Level}}=3$ is specified.

If $i=-1$, no checking will be performed. If $0\le i\le 3$, gradients will be verified at the first point that satisfies the linear constraints and bounds. If $i=0$, only a ‘cheap’ test will be performed, requiring one call to OBJFUN and (if appropriate) one call to CONFUN. If $1\le i\le 3$, a more reliable (but more expensive) check will be made on individual gradient elements, within the ranges specified by the Start Objective Check At Variable and Stop Objective Check At Variable keywords. A result of the form OK or BAD? is printed by E04USF/E04USA to indicate whether or not each element appears to be correct.

If $10\le i\le 13$, the action is the same as for $i-10$, except that it will take place at the user-specified initial value of $x$.

If $i<-1$ or $4\le i\le 9$ or $i>13$, the default value is used.

We suggest that ${\mathbf{Verify Level}}=3$ be used whenever a new function routine is being developed.

12 Description of Monitoring Information

This section describes the long line of output ($\text{}>80$ characters) which forms part of the monitoring information produced by E04USF/E04USA. (See also the description of the optional parameters Major Print Level, Minor Print Level and Monitoring File.) You can control the level of printed output.

When ${\mathbf{Major Print Level}}\ge 5$ and ${\mathbf{Monitoring File}}\ge 0$, the following line of output is produced at every major iteration of E04USF/E04USA on the unit number specified by optional parameter Monitoring File. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.

Maj is the major iteration count.
Mnr is the number of minor iterations required by the feasibility and optimality phases of the QP subproblem. Generally, Mnr will be $1$ in the later iterations, since theoretical analysis predicts that the correct active set will be identified near the solution (see Section 10 in E04UFF/E04UFA). Note that Mnr may be greater than the optional parameter Minor Iteration Limit if some iterations are required for the feasibility phase.

Step is the step ${\alpha }_{k}$ taken along the computed search direction. On reasonably well-behaved problems, the unit step (i.e., ${\alpha }_{k}=1$) will be taken as the solution is approached.

Nfun is the cumulative number of evaluations of the objective function needed for the linesearch. Evaluations needed for the estimation of the gradients by finite differences are not included. Nfun is printed as a guide to the amount of work required for the linesearch.

Merit Function is the value of the augmented Lagrangian merit function (see (12) in E04UFF/E04UFA) at the current iterate. This function will decrease at each iteration unless it was necessary to increase the penalty parameters (see Section 10.3 in E04UFF/E04UFA). As the solution is approached, Merit Function will converge to the value of the objective function at the solution. If the QP subproblem does not have a feasible point (signified by I at the end of the current output line) then the merit function is a large multiple of the constraint violations, weighted by the penalty parameters. During a sequence of major iterations with infeasible subproblems, the sequence of Merit Function values will decrease monotonically until either a feasible subproblem is obtained or E04USF/E04USA terminates with ${\mathbf{IFAIL}}={\mathbf{3}}$ (no feasible point could be found for the nonlinear constraints). If there are no nonlinear constraints present (i.e., ${\mathbf{NCNLN}}=0$) then this entry contains Objective, the value of the objective function $F\left(x\right)$.
The objective function will decrease monotonically to its optimal value when there are no nonlinear constraints.

Norm Gz is $‖{Z}^{\mathrm{T}}{g}_{\mathrm{FR}}‖$, the Euclidean norm of the projected gradient (see Section 10.2 in E04UFF/E04UFA). Norm Gz will be approximately zero in the neighbourhood of a solution.

Violtn is the Euclidean norm of the residuals of constraints that are violated or in the predicted active set (not printed if NCNLN is zero). Violtn will be approximately zero in the neighbourhood of a solution.

Nz is the number of columns of $Z$ (see Section 10.2 in E04UFF/E04UFA). The value of Nz is the number of variables minus the number of constraints in the predicted active set; i.e., $\mathtt{Nz}=n-\left(\mathtt{Bnd}+\mathtt{Lin}+\mathtt{Nln}\right)$.

Bnd is the number of simple bound constraints in the current working set.

Lin is the number of general linear constraints in the current working set.

Nln is the number of nonlinear constraints in the predicted active set (not printed if NCNLN is zero).

Penalty is the Euclidean norm of the vector of penalty parameters used in the augmented Lagrangian merit function (not printed if NCNLN is zero).

Cond H is a lower bound on the condition number of the Hessian approximation $H$.

Cond Hz is a lower bound on the condition number of the projected Hessian approximation ${H}_{Z}$ (${H}_{Z}={Z}^{\mathrm{T}}{H}_{\mathrm{FR}}Z={R}_{Z}^{\mathrm{T}}{R}_{Z}$; see (6) and (11) in E04UFF/E04UFA). The larger this number, the more difficult the problem.

Cond T is a lower bound on the condition number of the matrix of predicted active constraints.

Conv is a three-letter indication of the status of the three convergence tests (2)–(4) defined in the description of the optional parameter Optimality Tolerance. Each letter is T if the test is satisfied and F otherwise.
The three tests indicate whether: (i) the sequence of iterates has converged; (ii) the projected gradient (Norm Gz) is sufficiently small; and (iii) the norm of the residuals of constraints in the predicted active set (Violtn) is small enough. If any of these indicators is F when E04USF/E04USA terminates with ${\mathbf{IFAIL}}={\mathbf{0}}$, you should check the solution carefully.

M is printed if the quasi-Newton update has been modified to ensure that the Hessian approximation is positive definite (see Section 10.4 in E04UFF/E04UFA).

I is printed if the QP subproblem has no feasible point.

C is printed if central differences have been used to compute the unspecified objective and constraint gradients. If the value of Step is zero then the switch to central differences was made because no lower point could be found in the linesearch. (In this case, the QP subproblem is re-solved with the central difference gradient and Jacobian.) If the value of Step is nonzero then central differences were computed because Norm Gz and Violtn imply that $x$ is close to a Kuhn–Tucker point (see Section 10.1 in E04UFF/E04UFA).

L is printed if the linesearch has produced a relative change in $x$ greater than the value defined by the optional parameter Step Limit. If this output occurs frequently during later iterations of the run, optional parameter Step Limit should be set to a larger value.

R is printed if the approximate Hessian has been refactorized. If the diagonal condition estimator of $R$ indicates that the approximate Hessian is badly conditioned then the approximate Hessian is refactorized using column interchanges. If necessary, $R$ is modified so that its diagonal condition estimator is bounded.
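The Step Limit restriction described above amounts to capping the length of the first linesearch trial step so that it stays within $r\left(1+\left\|x\right\|\right)$ of the current iterate. A small illustrative Python sketch of that cap (not NAG code; the function name is invented for the example):

```python
def max_first_step(x, p, r=2.0):
    """Largest step length alpha such that the first linesearch trial
    point x + alpha*p satisfies ||alpha*p|| <= r * (1 + ||x||),
    mirroring the Step Limit rule described above (default r = 2.0)."""
    norm_x = sum(v * v for v in x) ** 0.5
    norm_p = sum(v * v for v in p) ** 0.5
    if norm_p == 0.0:
        return float("inf")  # zero search direction: no restriction needed
    return r * (1.0 + norm_x) / norm_p

# With the default Step Limit of 2.0, the unit step alpha = 1 is always
# permitted whenever ||p|| <= 2*(1 + ||x||), which is why the default
# rarely affects progress on well-behaved, well-scaled functions.
alpha_max = max_first_step([3.0, 4.0], [0.0, 12.0])  # 2.0*(1+5)/12 = 1.0
```

Shrinking $r$ to $0.1$ or $0.01$ tightens this cap proportionally, which is the behaviour the text recommends for rapidly varying functions.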
The Geomblog

I did an impromptu lunch presentation with my research group on expanders to celebrate the Godel Prize, and in the process dug up some useful reading. There's nothing here that's new to folks familiar with the main results, but if you want a reading list to help with grasping the significance of the results, read on.

There are some aspects of this story that are worth drawing out. First of all, it's actually possible (I think) to get the SL = L result without using the zig zag product directly. In fact, a stray comment on a post by Lance suggests as much, and with the new analysis of the replacement product this might be even more likely. This doesn't mean that the zig-zag product is useless of course. In fact, there's a wonderful 'return to start' story here, which I'll attempt to convey. Essentially, as Luca describes, many early expander constructions proceeded via taking some special non-Abelian group and constructing its Cayley graph, which was then shown to be an expander. The zig-zag product is described "combinatorially" as a construction that takes two graphs and makes a third out of them, and one advantage of this representation is that it gives more explicit expander constructions. But it turned out that at the core of this hides a group operator ! In a certain sense, the zig-zag product of the Cayley graphs of two groups is the Cayley graph of the semidirect product of the groups ! This is a result of Alon, Lubotsky and Wigderson.

The ACM site isn't updated yet (here's the press release), but Michael M informs us that Vadhan, Reingold and Wigderson just won the Godel prize for their paper on the zig-zag graph product and for Reingold's proof that SL = L. These were inspiration for Dinur's combinatorial PCP theorem. Congratulations !

I was reading Noam Nisan's exposition of correlated equilibria.
It's interesting to note that finding a correlated equilibrium is easy: you have to solve a linear program in the relevant variables of the underlying distribution. This is in contrast to Nash equilibria.

What struck me is that this is a good example of where a joint distribution isn't that bad. In machine learning especially, trying to "learn" a joint distribution is hard, firstly because the number of variables blows up, and secondly because of nasty correlations that one has to account for. In fact, the area of variational methods often uses the trick of replacing a joint distribution by the "closest" product distribution, and optimizing over that instead. Here though, the "replacing by a product distribution" trick takes you from a correlated equilibrium problem to a Nash equilibrium problem, and now the individual probabilities actually multiply, yielding a nonlinear problem that's PPAD-Complete.

A new physics review site. via the baconmeister:

Physicists are drowning in a flood of research papers in their own fields and coping with an even larger deluge in other areas of physics. The Physical Review journals alone published over 18,000 papers last year. How can an active researcher stay informed about the most important developments in physics?

Physics highlights exceptional papers from the Physical Review journals. To accomplish this, Physics features expert commentaries written by active researchers who are asked to explain the results to physicists in other subfields. These commissioned articles are edited for clarity and readability across fields and are accompanied by explanatory illustrations.

Each week, editors from each of the Physical Review journals choose papers that merit this treatment, aided by referee comments and internal discussion. We select commentary authors from around the world who are known for their expertise and communication skills and we devote much effort to editing these commentaries for broad accessibility.
Physics features three kinds of articles: Viewpoints are essays of approximately 1000–1500 words that focus on a single Physical Review paper or PRL letter and put this work into broader context. Trends are concise review articles (3000–4000 words in length) that survey a particular area and look for interesting developments in that field. Synopses (200 words) are staff-written distillations of interesting and important papers each week. In addition, we intend to publish selected Letters to the Editor to allow readers a chance to comment on the commentaries. Physics provides a much-needed guide to the best in physics, and we welcome your comments (physics@aps.org).

What an excellent idea !

Much well-deserved drooling over the Kindle DX. As Michael Trick pointed out, the earlier Kindles didn't do too well with math support (the native Kindle document format is not PDF), so the killer app is native PDF support. I could easily see myself using the DX at conferences, rather than lugging around my laptop, or (worse) printing out copies of papers I wanted. The Kindle has a nice feature whereby you can email documents to a specific address and have them synced automatically to the device, but it comes at a price ($0.10/document). If you use a direct transfer over USB though, it's free of course.

What I don't understand is why this has taken so long to happen: it seems to me that the academic market is the killer market for the Kindle: can you imagine transferring ALL your PDF papers to the Kindle for reading ? not to mention books ?

p.s for those of you who will no doubt point out that other readers exist that can read PDF, and are puzzled by all the hype over the Kindle, I leave you to your Archos MP3 players and Opera

p.p.s I spotted my first Kindle in the wild a month ago at a conference. It was rather cute looking. The DX will be much larger, of course.
La Jolla Trigonometry Tutor

Find a La Jolla Trigonometry Tutor

...I can tutor a variety of subjects, from basic elementary math to calculus and from basic natural sciences to upper-division chemistry, as well as up to Semester 4 of university Japanese. I started out majoring in Chemistry at Harvey Mudd College, where I was taught not only a wide breadth of subjects in math...

13 Subjects: including trigonometry, chemistry, statistics, calculus

...I learned that every student is different, and I learned several methods by which to attack their weaknesses and promote confidence in their abilities. Later, after transferring to North Carolina, I began more formal education training and continued tutoring. Again, I saw students with different backgrounds and varying abilities.

26 Subjects: including trigonometry, chemistry, physics, geometry

...I also took numerous chemistry courses, receiving A's in all. My tutoring experience includes study sessions with fellow classmates who were struggling with the subject. So although it is not certified tutoring experience, my fellow classmates' consistent appreciation and gratitude for my help makes me confident in my ability to help those who are struggling.

10 Subjects: including trigonometry, chemistry, calculus, algebra 1

...I began tutoring math in high school, volunteering to assist an Algebra 1 class for 4 hours per week. Today I have hundreds of hours of experience, with the majority in Algebra and Statistics, and I would be comfortable well into college math. During the learning process, small knowledge gaps from past courses tend to reappear as roadblocks down the line.

14 Subjects: including trigonometry, physics, calculus, geometry

...Surprisingly, I enjoyed these experiences and would like to continue them. I majored in both Chemistry and Microbiology with a minor equivalent in Mathematics.
The math was required for application to chemistry-related problems, so it was useful to know, but at the same time I took extra math classes as a way to boost my GPA.

23 Subjects: including trigonometry, chemistry, physics, calculus
Here's the question you clicked on:

A car salesperson sells 7 cars in a week. The sum of the purchases is $365,000. If the dealership pays their salesperson a commission of 1.5% and $10.00 an hour, how much money did this salesperson make for 35 hours of work?
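A quick sketch of the arithmetic, assuming the 1.5% commission applies to the full $365,000 in sales and is paid on top of the hourly wage:

```python
# Weekly pay = commission on total sales + hourly wage.
total_sales = 365_000
commission_rate = 0.015     # 1.5%
hourly_wage = 10.00
hours = 35

commission = commission_rate * total_sales   # 0.015 * 365000 = 5475
wages = hourly_wage * hours                  # 10 * 35 = 350
total_pay = commission + wages               # 5475 + 350 = 5825

print(total_pay)
```

So under that reading of the problem, the salesperson made $5,825 for the week.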
{"url":"http://openstudy.com/updates/5158e954e4b07077e0c0b491","timestamp":"2014-04-19T17:39:28Z","content_type":null,"content_length":"58407","record_id":"<urn:uuid:743e65cb-0362-4a3b-8a84-15b8ff5a3082>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
General Balanced Trees

An implementation of ordered sets using Prof. Arne Andersson's General Balanced Trees. This can be much more efficient than using ordered lists, for larger sets, but depends on the application.

Complexity note

The complexity on set operations is bounded by either O(|S|) or O(|T| * log(|S|)), where S is the largest given set, depending on which is fastest for any particular function call. For operating on sets of almost equal size, this implementation is about 3 times slower than using ordered-list sets directly. For sets of very different sizes, however, this solution can be arbitrarily much faster; in practical cases, often between 10 and 100 times. This implementation is particularly suited for accumulating elements a few at a time, building up a large set (more than 100-200 elements), and repeatedly testing for membership in the current set.

As with normal tree structures, lookup (membership testing), insertion and deletion have logarithmic complexity.

All of the following functions in this module also exist and do the same thing in the sets and ordsets modules. That is, by only changing the module name for each call, you can try out different set representations.

• add_element/2
• del_element/2
• filter/2
• fold/3
• from_list/1
• intersection/1
• intersection/2
• is_element/2
• is_set/1
• is_subset/2
• new/0
• size/1
• subtract/2
• to_list/1
• union/1
• union/2

add(Element, Set1) -> Set2
add_element(Element, Set1) -> Set2

  Element = term()
  Set1 = Set2 = gb_set()

Returns a new gb_set formed from Set1 with Element inserted. If Element is already an element in Set1, nothing is changed.

balance(Set1) -> Set2

  Set1 = Set2 = gb_set()

Rebalances the tree representation of Set1. Note that this is rarely necessary, but may be motivated when a large number of elements have been deleted from the tree without further insertions. Rebalancing could then be forced in order to minimise lookup times, since deletion only does not rebalance the tree.
delete(Element, Set1) -> Set2

  Element = term()
  Set1 = Set2 = gb_set()

Returns a new gb_set formed from Set1 with Element removed. Assumes that Element is present in Set1.

delete_any(Element, Set1) -> Set2
del_element(Element, Set1) -> Set2

  Element = term()
  Set1 = Set2 = gb_set()

Returns a new gb_set formed from Set1 with Element removed. If Element is not an element in Set1, nothing is changed.

difference(Set1, Set2) -> Set3
subtract(Set1, Set2) -> Set3

  Set1 = Set2 = Set3 = gb_set()

Returns only the elements of Set1 which are not also elements of Set2.

empty() -> Set
new() -> Set

  Set = gb_set()

Returns a new empty gb_set.

filter(Pred, Set1) -> Set2

  Pred = fun (E) -> bool()
   E = term()
  Set1 = Set2 = gb_set()

Filters elements in Set1 using predicate function Pred.

fold(Function, Acc0, Set) -> Acc1

  Function = fun (E, AccIn) -> AccOut
   Acc0 = Acc1 = AccIn = AccOut = term()
   E = term()
  Set = gb_set()

Folds Function over every element in Set returning the final value of the accumulator.

from_list(List) -> Set

  List = [term()]
  Set = gb_set()

Returns a gb_set of the elements in List, where List may be unordered and contain duplicates.

from_ordset(List) -> Set

  List = [term()]
  Set = gb_set()

Turns an ordered-set list List into a gb_set. The list must not contain duplicates.

insert(Element, Set1) -> Set2

  Element = term()
  Set1 = Set2 = gb_set()

Returns a new gb_set formed from Set1 with Element inserted. Assumes that Element is not present in Set1.

intersection(Set1, Set2) -> Set3

  Set1 = Set2 = Set3 = gb_set()

Returns the intersection of Set1 and Set2.

intersection(SetList) -> Set

  SetList = [gb_set()]
  Set = gb_set()

Returns the intersection of the non-empty list of gb_sets.

is_empty(Set) -> bool()

  Set = gb_set()

Returns true if Set is an empty set, and false otherwise.

is_member(Element, Set) -> bool()
is_element(Element, Set) -> bool()

  Element = term()
  Set = gb_set()

Returns true if Element is an element of Set, otherwise false.

is_set(Term) -> bool()

  Term = term()

Returns true if Term appears to be a gb_set, otherwise false.

is_subset(Set1, Set2) -> bool()

  Set1 = Set2 = gb_set()

Returns true when every element of Set1 is also a member of Set2, otherwise false.

iterator(Set) -> Iter

  Set = gb_set()
  Iter = term()

Returns an iterator that can be used for traversing the entries of Set; see next/1.
The implementation of this is very efficient; traversing the whole set using next/1 is only slightly slower than getting the list of all elements using to_list/1 and traversing that. The main advantage of the iterator approach is that it does not require the complete list of all elements to be built in memory at one time.

largest(Set) -> term()

  Set = gb_set()

Returns the largest element in Set. Assumes that Set is nonempty.

next(Iter1) -> {Element, Iter2} | none

  Iter1 = Iter2 = Element = term()

Returns {Element, Iter2} where Element is the smallest element referred to by the iterator Iter1, and Iter2 is the new iterator to be used for traversing the remaining elements, or the atom none if no elements remain.

singleton(Element) -> gb_set()

  Element = term()

Returns a gb_set containing only the element Element.

size(Set) -> int()

  Set = gb_set()

Returns the number of elements in Set.

smallest(Set) -> term()

  Set = gb_set()

Returns the smallest element in Set. Assumes that Set is nonempty.

take_largest(Set1) -> {Element, Set2}

  Set1 = Set2 = gb_set()
  Element = term()

Returns {Element, Set2}, where Element is the largest element in Set1, and Set2 is this set with Element deleted. Assumes that Set1 is nonempty.

take_smallest(Set1) -> {Element, Set2}

  Set1 = Set2 = gb_set()
  Element = term()

Returns {Element, Set2}, where Element is the smallest element in Set1, and Set2 is this set with Element deleted. Assumes that Set1 is nonempty.

to_list(Set) -> List

  Set = gb_set()
  List = [term()]

Returns the elements of Set as a list.

union(Set1, Set2) -> Set3

  Set1 = Set2 = Set3 = gb_set()

Returns the merged (union) gb_set of Set1 and Set2.

union(SetList) -> Set

  SetList = [gb_set()]
  Set = gb_set()

Returns the merged (union) gb_set of the list of gb_sets.

stdlib 1.16
Copyright © 1991-2009 Ericsson AB
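The documented semantics above can be checked without an Erlang shell. The sketch below is a tiny executable model of the interface, written in Python purely for illustration: the real module is Erlang, the real implementation is a balanced tree (not a sorted list), and the class and method names here are my own.

```python
# A minimal model of the gb_sets interface described above.  Each operation
# returns a new value, mirroring the functional (persistent) Erlang API.
import bisect

class GbSetModel:
    def __init__(self, elems=()):
        self._xs = sorted(set(elems))        # like from_list/1: dedupe + order

    def add(self, e):                        # like add_element/2: no-op if present
        s = GbSetModel()
        s._xs = list(self._xs)
        i = bisect.bisect_left(s._xs, e)
        if i == len(s._xs) or s._xs[i] != e:
            s._xs.insert(i, e)
        return s

    def is_member(self, e):                  # like is_element/2
        i = bisect.bisect_left(self._xs, e)
        return i < len(self._xs) and self._xs[i] == e

    def to_list(self):                       # like to_list/1: ordered, no duplicates
        return list(self._xs)

    def iterator(self):                      # like iterator/1 + next/1:
        return iter(self._xs)                # yields the smallest element first

s = GbSetModel([2, 1, 3, 2])
s2 = s.add(5).add(3)
print(s2.to_list())          # [1, 2, 3, 5]
print(s2.is_member(4))       # False
print(next(s2.iterator()))   # 1
```

The model also shows why next/1 is attractive: the iterator walks elements one at a time instead of materialising the whole list up front.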
{"url":"http://www.erlang.org/documentation/doc-5.7/lib/stdlib-1.16/doc/html/gb_sets.html","timestamp":"2014-04-17T09:42:21Z","content_type":null,"content_length":"18964","record_id":"<urn:uuid:28e2bce4-c8f1-4e27-9285-656b95d36e98>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
San Marcos, CA Algebra 2 Tutor Find a San Marcos, CA Algebra 2 Tutor ...I have over twenty hours of in-class tutoring experience at Montgomery Middle School. Along with my studies I have also learned guitar and how to cook as a way of keeping myself well rounded and regularly compose music with members within the San Diego, and more specifically UCSD, community. I used to hold events where I would cook healthy meals for up to fifty people every other 10 Subjects: including algebra 2, calculus, geometry, algebra 1 ...I taught a unit in coordinate geometry at a SDUSD high school and passed the three math CSET tests for teacher credentialing. These are just a few of my accomplishments in pursuit of exploring my own goal to teach. I have spent many years as an engineer in telecommunications, and my academic ba... 11 Subjects: including algebra 2, Spanish, algebra 1, trigonometry ...Students, and I know this personally, might be afraid to ask questions to someone who intimidates them; therefore, I don't intimidate. I am a young tutor and I am sure I have the experience to help any student from a sophomore in college to a kindergartner. On to my area of expertise. 29 Subjects: including algebra 2, reading, English, ASVAB My name is Ana. I am currently 18 years old and a high school graduate (Class of 2013). I love school and education. I aspire to be a mathematics major. 13 Subjects: including algebra 2, Spanish, chemistry, calculus ...I took Algebra 1 in 7th grade, but should be able to learn any obscure topics fairly quickly. I am pretty comfortable with all mathematics. I took this class in 8th grade, but should be able to learn any obscure topics fairly quickly. 
12 Subjects: including algebra 2, physics, geometry, algebra 1
{"url":"http://www.purplemath.com/San_Marcos_CA_algebra_2_tutors.php","timestamp":"2014-04-19T07:04:38Z","content_type":null,"content_length":"24120","record_id":"<urn:uuid:cca92758-fbe1-4109-b813-5e0b07827284>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there a "motivic Gromov-Witten invariant"?

I recently attended an interesting seminar, where the concept of motivic Donaldson-Thomas invariants was explained (0909.5088). Very roughly, the DT invariant is a generating function $\sum q^k e(M_k)$ of a numerical invariant $e(\cdot)$ of a sequence of moduli spaces $M_k$. The motivic DT invariant is obtained by considering $\sum q^k [M_k]$ where $[M_k]$ is the image in $K(Var)$. This contains more info than the ordinary DT invariant. Can this idea be applied for, say, the GW invariant of Calabi-Yau 3-folds, to get a finer invariant? (Sorry for my vague question.)

Tags: ag.algebraic-geometry, gromov-witten-theory

Accepted answer:

Let me give an answer from a slightly different point of view. Let $M_k$ be a moduli space as in your question; say it's a (compact) moduli space of sheaves on some (compact) Calabi-Yau threefold. In general, $M_k$ is going to be very singular. However, it carries a so-called perfect deformation-obstruction theory of dimension zero. This gives a virtual fundamental class on $M_k$, and the technical definition of the numerical invariant $e(M_k)$ is that it is the degree of this virtual fundamental class. In the case of sheaves on CY3s, the deformation-obstruction theory has a duality property: it is a symmetric obstruction theory. In this case, according to a result of Kai Behrend, $e(M_k)$ can also be expressed as an Euler characteristic, albeit a weighted one: $e(M_k)=\chi(M_k, \nu_{M_k})$, where $\nu_{M_k}$ is the Behrend function of the singular space $M_k$. In other words, one computes an Euler characteristic, but weighted with a numerical measure of how bad the singularities are.

One can hope that this Euler characteristic definition can now be turned into something motivic. What one needs is a way to attach a motivic weight to points of $M_k$.
In some specific moduli problems, such as for Hilbert schemes of points where at least locally the moduli space can be expressed as a critical locus of a function on a smooth variety, this can be done using the tool of the motivic vanishing cycle; indeed, this is what our work does in the paper you cite. The general theory of how one attaches motivic weights is discussed in a (partially conjectural) paper of Kontsevich and Soibelman. The issue with Gromov-Witten theory on a CY3 is that the deformation-obstruction theory in that case, while it is of dimension zero, is not fully symmetric. It is symmetric on the open part corresponding to stable maps which are immersions from a smooth curve, but (as an expert assures me) not on the whole moduli space. add comment I don't know anything about motivic Donaldson-Thomas invariants, but it is possible to define the notion of GW invariants on the level of motives. You should check out Behrend and Manin, "Stacks of stable maps and Gromov-Witten invariants" sections 8 and 9, as well as Toën, "On motives for Deligne-Mumford stacks". Namely, instead of considering GW invariants as a collection of maps $$ I^X_{g,n,\beta} : H^\ast (X^n) \to H^\ast(\overline M_{g,n})$$ you can define them as morphisms between the Chow motives associated to $X^n$ and $\overline M_{g,n}$. The notion of up vote a motive associated to a DM-stack is explained in Toën's article. In the case of $\overline M_{g,n}$ it is easy to say explicitly what this means: there is a finite cover $f \colon M \to \ 9 down overline M_{g,n}$ by a smooth projective scheme M, and we take as the motive associated to $\overline M_{g,n}$ the motive associated to the scheme M and the projector $\frac{1}{\deg f} f^\ast vote f_\ast$. The correspondence inducing the morphism is given by the pushforward of the virtual fundamental class of $\overline M_{g,n}(X,\beta)$ to $A^\ast(X^n \times \overline M_{g,n})$ along the product of the evaluation maps and the forgetful map. 
2 See this great talk by Manin: dailymotion.com/video/… – Peter Arndt Apr 4 '11 at 21:48 add comment Let me just add a bit to what Balazs said. The fact that the moduli spaces of sheaves on a CY3 have this symmetric obstruction theory is a reflection of properties of the category they live in, namely the derived category of coherent sheaves on the CY3. Indeed, Kontsevich and Soibelman's general construction of motivic DT invariants applies to a general class of CY3 categories and the motivic invariants live in a Hall algebra associated to this category. up vote My point is that the natural category associated to Gromov-Witten invariants is the Fukaya category and so I would expect any sort of "motivic" version of GW theory to live in something 4 down associated to the Fukaya category. In particular, I don't think these invariants will live in the Grothendieck group of varieties the way the motivic DT invariants do. GW theory (A-model) is vote inherently analytic (or symplectic) whereas DT theory (B-model) is inherently algebraic. It would be really interesting to figure out what the analog of the Hall algebra is in the Fukaya category. Maybe the symplectic geometers already know it? Jim - I'm not sure I follow the reasoning formally. The Fukaya category of a compact symplectic 3-fold is a CY3 category, indistinguishable intrinsically from coherent sheaves on a (smooth proper) CY3. It has a moduli (derived higher) stack of objects, and one can speak of a Hall algebra of motivic functions on it. Perhaps you're suggesting that one uses something more (like knowledge of a particular class of objects) in defining the motivic DT invariants? or that the extraction of GW invariants from the Fukaya category is not fully analogous to the categorical extraction of DT invariants? – David Ben-Zvi Apr 6 '11 at 0:44 The extraction of the invariants from the categories are definitely not analogous. The DT invariants come directly from the moduli of objects (i.e. 
sheaves) in the CY category. In 1 contrast, the GW invariants come from moduli of stable maps, which are \em{not} the moduli of objects in the Fukaya category (those would be special Lagrangians). I guess this discussion does make my reasoning seem faulty --- one should not use the A-model / B-model equivalence to understand the DT/GW correspondence (it is not mirror symmetry -- the correspondence takes place on the same CY3 after all). – Jim Bryan Apr 6 '11 at 5:15 I'd like to thank everybody for the answers. My most naive expectation was that the DT/GW correspondence would be extended to the motivic DT/motivic GW correspondence, but that idea doesn't sound promising, judging from the answers... – Yuji Tachikawa Apr 10 '11 at 18:06 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry gromov-witten-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/60569/is-there-a-motivic-gromov-witten-invariant?sort=votes","timestamp":"2014-04-21T07:56:45Z","content_type":null,"content_length":"65385","record_id":"<urn:uuid:02609ec9-3fdf-4704-b072-157245008b3c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
Factoring Trinomials, a = 1 - Problem 5

You can think of factoring in terms of using the area of a rectangle to find the side lengths. If the area is a trinomial, start with a 2 x 2 box, and we know the first term must go in the upper left, and the third term must go in the bottom right. From there, you'll need to guess and check the outside values that would let your remaining diagonal boxes sum to the "b" term. The length and width that you discover represent the factors of the trinomial. This requires a lot of persistence and you'll get better with practice; don't quit!

Transcript Coming Soon!
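As a concrete instance of the box method described above (my own example, not taken from the video): for a monic trinomial x^2 + bx + c, the two "outside" values are numbers p and q with p * q = c (the bottom-right box) and p + q = b (the two diagonal boxes). A guess-and-check search in code:

```python
# Find p, q with p*q == c and p + q == b for x^2 + bx + c (a = 1).
# Assumes c != 0; this is the same trial process the area model walks through.
def factor_monic(b, c):
    """Return (p, q) so that x^2 + bx + c = (x + p)(x + q), or None."""
    for p in range(-abs(c), abs(c) + 1):
        if p != 0 and c % p == 0:
            q = c // p
            if p + q == b:
                return (p, q)
    return None

# x^2 + 5x + 6  ->  (x + 2)(x + 3)
print(factor_monic(5, 6))     # (2, 3)
# x^2 - x - 12  ->  (x - 4)(x + 3)
print(factor_monic(-1, -12))  # (-4, 3)
```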
{"url":"https://www.brightstorm.com/math/algebra/factoring-2/factoring-trinomials-a-equals-1-problem-5/","timestamp":"2014-04-18T23:20:28Z","content_type":null,"content_length":"60994","record_id":"<urn:uuid:65b0a89a-2821-407d-81e2-bee4566fe52f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
Conservative Systems and Quantum Chaos

A co-publication of the AMS and Fields Institute.
Fields Institute Communications, Volume 8

This volume presents new research in classical Hamiltonian and quantum systems from the Workshop on Conservative Systems and Quantum Chaos held during The Fields Institute Program Year on Dynamical Systems and Bifurcation Theory in October 1992 (Waterloo, Canada). The workshop was organized so that there were presentations that formed a bridge between classical and quantum mechanical systems. Four of these papers appear in this collection, with the remaining six papers concentrating on classical Hamiltonian dynamics.

Feature:
• Includes applications ranging from numerical exploration of a coupled mechanical system to a Hamiltonian approach to the problem of spatio-temporal chaos in biological models from population genetics and ecology.

1996; 176 pp; hardcover; ISBN: 0-8218-0254-2; List Price: US$91; Member Price: US$72.80; Order Code: FIC/8

Titles in this series are co-published with the Fields Institute for Research in Mathematical Sciences (Toronto, Ontario, Canada).

Readership: Advanced graduate students and mathematics professionals interested in Hamiltonian dynamics and quantum chaos.

Table of Contents:
• M. S. Alber and J. E. Marsden -- Semiclassical monodromy and the spherical pendulum as a complex Hamiltonian system
• T. J. Bridges -- Hamiltonian structure of plane-wave instabilities
• M. P. Kummer -- Anharmonic oscillators in classical and quantum mechanics with applications to the perturbed Kepler problem
• E. A. Lacomba and J. G. Reyes -- On the global flow in the Coulombian restricted isosceles 3-body problem
• E. M. Lerman and R. Sjamaar -- Reductive group actions on Kahler manifolds
• K. R. Meyer, J. B. Delos, and J.-M. Mao -- Atomic spectroscopy, periodic orbits and generic two-parameter bifurcations
• D. C. Offin and H. Yu -- Homoclinic orbits in the forced pendulum system
• G. W. Patrick -- Dynamics near stable relative equilibria at nongeneric momenta: A numerical investigation
• B. D. Sleeman -- The dynamics of reversible systems and spatio-temporal complexity in biology
• J.-C. Van Der Meer -- Degenerate Hamiltonian Hopf bifurcations
{"url":"http://ams.org/bookstore?fn=20&arg1=ficseries&ikey=FIC-8","timestamp":"2014-04-20T02:23:40Z","content_type":null,"content_length":"16042","record_id":"<urn:uuid:f70bc161-c3ae-456e-9341-ec539dbc1ef2>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Paraschiv Andrei Technical Blog

Update: you can find the entire source code at https://github.com/paratechnical/RadixTree

This is my implementation of a radix tree data structure. This is a data structure very similar to a trie. Basically instead of having each one of your nodes hold a char you have each one of your nodes store a string or, as wikipedia puts it, it is a trie where "each node with only one child is merged with its child". As I understand it the names radix tree and crit bit tree are only applied to trees storing integers and Patricia trie is retained for more general inputs. As usual I am outputting the code for each class. First, the node: Then the actual tree And the test program:

5 comments:

1. Hello Andrei,
Thanks for sharing your solutions. Sorry to ask this but, what do you intend to do on line 055?
054 if ((matches == 0) || (curNode == _root) ||
055 ((matches > 0) && (matches <>= curNode.Label.Length)))
Using the operator <>=
Also on line 074 the quoted code (the matches < ... condition and the commonRoot / branchPreviousLabel / branchNewLabel assignments) comes out garbled here. Did I miss the definition of commonroot?
Also on line 101: have you finished the cycle, or you left it for us to finish it by ourselves?

1. Hi,
Sorry about that. It's the syntax highlighter messing up the code.
Anyway this is the proper code for line 74:

if ((matches == 0) || (curNode == _root) ||
    ((matches > 0) && (matches < wordPart.Length) && (matches >= curNode.Label.Length)))

and this one for 101:

//go through the two streams
for (int i = 0; i < minLength; i++)
{
    //if two characters at the same position have the same value we have one more match
    if (word[i] == curNode.Label[i])
        matches++;
    else
        //if at any position the two strings have different characters break the cycle
        //and return the current number of matches
        break;
}
return matches;

If anyone can suggest a better syntax highlighter please do so. This one has lots of bugs. I promise I'll upload all these codes to github or something like that as soon as I find the time.

2. Hi this is somashekhar i want to ask in the internal node of particia tree what it contains plz reply me???

1. Grab the code from here: https://github.com/paratechnical/RadixTree
I will try to update the code don't know why nothing is visible anymore

2. Done
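The C# listings from the post did not survive here, so as an illustration of the structure the post describes (nodes store strings, and a node is split at the common-prefix point when a new word only partially matches its label), here is a small Python sketch. It is not the author's code; the class and method names are my own.

```python
# Minimal radix tree: insert with node splitting, plus membership lookup.
class RadixNode:
    def __init__(self, label=""):
        self.label = label       # the string stored on the edge into this node
        self.children = {}       # first character of child label -> RadixNode
        self.is_word = False     # True if a stored word ends exactly here

def common_prefix_len(a, b):
    n = min(len(a), len(b))
    i = 0
    while i < n and a[i] == b[i]:
        i += 1
    return i

class RadixTree:
    def __init__(self):
        self.root = RadixNode()

    def insert(self, word):
        node = self.root
        while True:
            if not word:                       # consumed the whole word here
                node.is_word = True
                return
            child = node.children.get(word[0])
            if child is None:                  # no child shares a first char
                leaf = RadixNode(word)
                leaf.is_word = True
                node.children[word[0]] = leaf
                return
            k = common_prefix_len(word, child.label)
            if k == len(child.label):          # child's label fully matched
                word = word[k:]                # descend with the remainder
                node = child
            else:                              # split child at the mismatch
                split = RadixNode(child.label[:k])
                child.label = child.label[k:]
                split.children[child.label[0]] = child
                node.children[split.label[0]] = split
                rest = word[k:]
                if rest:
                    leaf = RadixNode(rest)
                    leaf.is_word = True
                    split.children[rest[0]] = leaf
                else:
                    split.is_word = True
                return

    def contains(self, word):
        node = self.root
        while word:
            child = node.children.get(word[0])
            if child is None or not word.startswith(child.label):
                return False
            word = word[len(child.label):]
            node = child
        return node.is_word

t = RadixTree()
for w in ["tester", "test", "team"]:
    t.insert(w)
print(t.contains("test"), t.contains("tea"))   # True False
```

Inserting "test" after "tester" splits the "tester" node into "test" + "er", which is exactly the "merge single-child nodes" picture from the post run in reverse.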
{"url":"http://paratechnical.blogspot.com/2011/03/radix-tree-implementation-in-c.html","timestamp":"2014-04-17T01:42:00Z","content_type":null,"content_length":"76659","record_id":"<urn:uuid:7f8b0f33-6747-44eb-999b-d56fcbbd40bf>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
Jordan Normal Form Question
January 17th 2011, 11:22 AM #1

So far I've found the characteristic equation: $(t-1)^2(t+2)$. So the eigenvalues are $1, 1, -2$.
And I know the Jordan matrix is along these lines (below):
$J= \begin{bmatrix}1&0&0\\0&1&0\\0&0&-2\end{bmatrix}$
But I literally have NO clue on where the extra 1's should be. I know a singular eigenvector towards T for sure, but I'm confused on -2, as the answer is totally different in my mark scheme :S!
$e^1 = \begin{pmatrix}1\\1\\1\end{pmatrix}$
$e^{-2} = \begin{pmatrix}2\\1\\1\end{pmatrix}$
For the last one I'm trying...
$(A-I)^2\begin{bmatrix}a&b&c\end{bmatrix}=\begin{bmatrix}1&1&1\end{bmatrix}$
$T= \begin{bmatrix}1&2&e^1\\1&1&e^1\\1&1&e^1\end{bmatrix}$
I also know $A^5 = TJ^5T^{-1}$.
I know this is a bit scrappy but could anybody help me on the bits I'm struggling with:
1) Finding a second eigenvector for t=1, making sure my t=-2 eigenvector is correct.
2) Explaining how you decide how many, and where to put, the 1's in a Jordan matrix.
Thank you

Oh god, actually finding the representation for the Jordan canonical form is hellacious! One question before I help, is this for a graded assignment?

Nope it's a mock exam question from a few years back. Got my real exam tomorrow.... It's always a question in that format with different numbers but I just can't suss it with my notes or by the mark scheme... =.=

Ok, to start to find out how many "extra" one's there are you need to recall that the geometric multiplicity of the eigenvalue $\lambda$ is $\text{null}\left(A-\lambda I\right)$. So, what's $\text{null}\left(A-I\right)$?

Well i'm guessing we look at the eigenvalue 1, as it's the one in the brackets, and due to the bracket being to the power of two, is the null(A-I) equal to 2, so the geometric multiplicity is 2? Sorry if i'm way off here my head is just scrambled

I get that $\text{rk}\left(A-I\right)=2$ and so by the Rank-Nullity theorem we have that $\text{null}\left(A-I\right)=1$. It follows that $J=\begin{pmatrix}-2 & 0 & 0\\ 0 & 1 & 1\\ 0 & 0 & 1\end{pmatrix}$. So how about finding those T's?

If null(A-I) = 2, does that have 2 1's above the diagonal?? And yes let's move onto the T's. Looking at the solutions, I get the correct answer to t = 1, with (1,1,1). However when I put in -2, I get (2,1,1), whereas in the solutions it says (1,2,2). For the third t, where t=1 again, I have written the method I have been trying:
$(A-I)^2\begin{bmatrix}a&b&c\end{bmatrix}=\begin{bmatrix}1&1&1\end{bmatrix}$
Is this correct? I do it but still get a different answer to the solutions again..
Since you have determined that 1 is a double root of the characteristic equation (has "algebraic multiplicity" 2) you know that the "Jordan Normal Form" is either
$\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2\end{bmatrix}$ (if eigenvalue 1 has "geometric multiplicity" 2: two independent eigenvectors corresponding to eigenvalue 1) or
$\begin{bmatrix}1 & 1 & 0 \\ 0 & 1 & 0\\ 0 & 0 & -2\end{bmatrix}$ (if eigenvalue 1 has "geometric multiplicity" 1: only one independent eigenvector corresponding to eigenvalue 1).
The "extra ones" only go above the multiple eigenvalue so there is only one place it could be.

The point is this: every matrix satisfies its own characteristic equation so for this matrix, A, it must be true that $(A- I)^2(A+ 2I)v= 0$ where v is any vector. Obviously if $(A+ 2I)v= 0$ (which is the same as saying that $Av= -2v$) this is true. Also if $(A- I)v= 0$ so that $Av= v$, it is true. That is, if v is an eigenvector of A with eigenvalue $-2$ or 1, that is true.

If A has two independent eigenvectors with eigenvalue 1 (1 has geometric multiplicity 2) then, since an eigenvector corresponding to eigenvalue $-2$ would have to be independent of them, there are 3 independent eigenvectors and the matrix T having those eigenvectors as columns makes $T^{-1}AT$ diagonal.

If there are NOT two independent eigenvectors for eigenvalue 1 (it has geometric multiplicity 1) it still must be true that $(A- I)^2(A+ 2I)v= 0$ for every vector. What that means is there must be a vector, independent of the eigenvector for eigenvalue 1, such that $(A- I)v \ne 0$ but such that $(A- I)^2v= 0$. We can write that as $(A- I)((A- I)v)= 0$ so that if v is not an eigenvector, $(A- I)v$ must be an eigenvector!
Since you have found that one eigenvector for eigenvalue 1 is $\begin{bmatrix}1 \\ 1 \\ 1\end{bmatrix}$, you want to find $\begin{bmatrix}x \\ y \\ z\end{bmatrix}$ such that
$\begin{bmatrix}3 & 1 & -4 \\ 6 & 1 & -7\\ 6 & 1 & -7\end{bmatrix}\begin{bmatrix}x \\ y \\ z\end{bmatrix}= \begin{bmatrix}1 \\ 1 \\ 1\end{bmatrix}$
Such a vector will be the second column you need in T. (I once made a total hash of a problem like this and Plato corrected me so I am redeeming myself.)
Last edited by HallsofIvy; January 17th 2011 at 02:17 PM.

Thank you for this response HallsofIvy, I'd nearly resigned myself to being doomed come tomorrow morning, but this is a great explanation. (Except the latex errors =P) I now understand this problem much better than I did before. Just as a question, would I proceed to make the Matrix $(A-I)^2$ rREF, to make the equations easier to solve? And also, does nullity(A-I) = geometric multiplicity? I totally know what you mean by making a hash of these problems! The numbers are so easy to mix up with your minus signs etc. I'm just going to have to take my sweet time.

I have corrected the LaTeX (had to go fix dinner just as I was typing that!). You can solve the equations $Av= v$, $Av= -2v$, and $(A- I)v= \begin{bmatrix}1 \\ 1 \\ 1\end{bmatrix}$ however you like: row reducing the augmented matrix or solving the three equations simultaneously. Yes, the "geometric multiplicity" of an eigenvalue, $\lambda$, is the number of independent vectors, v, that satisfy $Av= \lambda v$, which is, of course, the same as $(A- \lambda I)v= 0$, and so is the "nullity" of $A- \lambda I$.
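The claims in the thread can be checked numerically. The last posts only ever display $A - I$ as the matrix with rows $(3,1,-4)$, $(6,1,-7)$, $(6,1,-7)$, so reconstructing $A$ from it is an assumption of mine; with that assumption, every check below passes (in particular the mark scheme's $(1,2,2)$ really is the eigenvector for $-2$, and the poster's $(2,1,1)$ is not).

```python
# Verification of the thread's linear algebra, assuming A - I is the matrix
# quoted in the last two posts, i.e. A = [[4,1,-4],[6,2,-7],[6,1,-6]].
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

A = [[4, 1, -4],
     [6, 2, -7],
     [6, 1, -6]]

# Eigenvector for eigenvalue 1:
assert matvec(A, [1, 1, 1]) == [1, 1, 1]
# The mark scheme's eigenvector for eigenvalue -2 (A v = -2 v):
assert matvec(A, [1, 2, 2]) == [-2, -4, -4]
# The student's candidate (2, 1, 1) is not an eigenvector for -2:
assert matvec(A, [2, 1, 1]) != [-4, -2, -2]
# A generalized eigenvector w with (A - I) w = (1, 1, 1), e.g. w = (0, 1, 0):
w = [0, 1, 0]
assert matvec(A, w) == [w[0] + 1, w[1] + 1, w[2] + 1]
print("all checks pass")
```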
{"url":"http://mathhelpforum.com/advanced-algebra/168613-jordan-normal-form-question.html","timestamp":"2014-04-16T13:37:47Z","content_type":null,"content_length":"73586","record_id":"<urn:uuid:6ea85e6c-c394-47af-8643-b8456a8ae440>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 47 for: Brünnler and Alwen Fernanto Tiu, "A local system for classical logic"

- ACM Transactions on Computational Logic, 2004. Cited by 87 (15 self):
This paper introduces a logical system, called BV, which extends multiplicative linear logic by a non-commutative self-dual logical operator. This extension is particularly challenging for the sequent calculus, and so far it is not achieved therein. It becomes very natural in a new formalism, called the calculus of structures, which is the main contribution of this work. Structures are formulae subject to certain equational laws typical of sequents. The calculus of structures is obtained by generalising the sequent calculus in such a way that a new top-down symmetry of derivations is observed, and it employs inference rules that rewrite inside structures at any depth. These properties, in addition to allowing the design of BV, yield a modular proof of cut elimination.

- 2000. Cited by 31 (13 self):
We obtain two results about the proof complexity of deep inference: 1) deep-inference proof systems are as powerful as Frege ones, even when both are extended with the Tseitin extension rule or with the substitution rule; 2) there are analytic deep-inference proof systems that exhibit an exponential speed-up over analytic Gentzen proof systems that they polynomially simulate.
- of Lecture Notes in Computer Science , 2002 "... System SKS is a set of rules for classical propositional logic presented in the calculus of structures. Like sequent systems and unlike natural deduction systems, it has an explicit cut rule, which is admissible. ..." Cited by 29 (5 self) Add to MetaCart System SKS is a set of rules for classical propositional logic presented in the calculus of structures. Like sequent systems and unlike natural deduction systems, it has an explicit cut rule, which is admissible. - Theory and Applications of Categories, 18:473–535 , 2006 "... Abstract. The Medial rule was first devised as a deduction rule in the Calculus of Structures. In this paper we explore it from the point of view of category theory, as additional structure on a ∗-autonomous category. This gives us some insights on the denotational semantics of classical proposition ..." Cited by 28 (3 self) Add to MetaCart Abstract. The Medial rule was first devised as a deduction rule in the Calculus of Structures. In this paper we explore it from the point of view of category theory, as additional structure on a ∗-autonomous category. This gives us some insights on the denotational semantics of classical propositional logic, and allows us to construct new models for it, based on suitable generalizations of the theory of coherence spaces. 1. "... We see a systematic set of cut-free axiomatisations for all the basic normal modal logics formed by some combination the axioms d,t,b,4, 5. They employ a form of deep inference but otherwise stay very close to Gentzen’s sequent calculus, in particular they enjoy a subformula property in the litera ..." Cited by 28 (4 self) Add to MetaCart We see a systematic set of cut-free axiomatisations for all the basic normal modal logics formed by some combination the axioms d,t,b,4, 5. 
They employ a form of deep inference but otherwise stay very close to Gentzen’s sequent calculus, in particular they enjoy a subformula property in the literal sense. No semantic notions are used inside the proof systems, in particular there is no use of labels. All their rules are invertible and the rules cut, weakening and contraction are admissible. All systems admit a straightforward terminating proof search procedure as well as a syntactic cut elimination procedure. , 2002 "... In this paper I will present a deductive system for linear logic, in which all rules are local. In particular, the contraction rule is reduced to an atomic version, and there is no global promotion rule. In order to achieve this, it is necessary to depart from the sequent calculus and use the calcul ..." Cited by 27 (5 self) Add to MetaCart In this paper I will present a deductive system for linear logic, in which all rules are local. In particular, the contraction rule is reduced to an atomic version, and there is no global promotion rule. In order to achieve this, it is necessary to depart from the sequent calculus and use the calculus of structures, which is a generalization of the one-sided sequent calculus. In a rule, premise and conclusion are not sequents, but structures, which are expressions that share properties of formulae and sequents. , 2002 "... We extend multiplicative exponential linear logic (MELL) by a non-commutative, self-dual logical operator. The extended system, called NEL, is defined in the formalism of the calculus of structures, which is a generalisation of the sequent calculus and provides a more refined analysis of proofs. We ..." Cited by 27 (12 self) Add to MetaCart We extend multiplicative exponential linear logic (MELL) by a non-commutative, self-dual logical operator. The extended system, called NEL, is defined in the formalism of the calculus of structures, which is a generalisation of the sequent calculus and provides a more refined analysis of proofs. 
We should then be able to extend the range of applications of MELL, by modelling a broad notion of sequentiality and providing new properties of proofs. We show some proof theoretical results: decomposition and cut elimination. The new operator represents a significant challenge: to get our results we use here for the first time some novel techniques, which constitute a uniform and modular approach to cut elimination, contrary to what is possible in the sequent calculus. , 2008 "... Abstract. We introduce ‘atomic flows’: they are graphs obtained from derivations by tracing atom occurrences and forgetting the logical structure. We study simple manipulations of atomic flows that correspond to complex reductions on derivations. This allows us to prove, for propositional logic, a n ..." Cited by 23 (11 self) Add to MetaCart Abstract. We introduce ‘atomic flows’: they are graphs obtained from derivations by tracing atom occurrences and forgetting the logical structure. We study simple manipulations of atomic flows that correspond to complex reductions on derivations. This allows us to prove, for propositional logic, a new and very general normalisation theorem, which contains cut elimination as a special case. We operate in deep inference, which is more general than other syntactic paradigms, and where normalisation is more difficult to control. We argue that atomic flows are a significant technical advance for normalisation theory, because 1) the technique they support is largely independent of syntax; 2) indeed, it is largely independent of logical inference rules; 3) they constitute a powerful geometric formalism, which is more intuitive than syntax. 1. - Logical Methods in Computer Science, 2(4:3):1–44, 2006. Available from: http://arxiv.org/abs/cs/0605054. [McK05] Richard McKinley. Classical categories and deep inference. In Structures and Deduction 2005 (Satellite Workshop of ICALP’05 , 2005 "... Vol. 2 (4:3) 2006, pp. 1–44 www.lmcs-online.org ..." , 2005 "... 
The calculus of structures is a proof theoretical formalism which generalizes the sequent calculus with the feature of deep inference: in contrast to the sequent calculus, inference rules can be applied at any depth inside a formula, bringing shorter proofs than all other formalisms supporting a ..." Cited by 16 (5 self) Add to MetaCart The calculus of structures is a proof theoretical formalism which generalizes the sequent calculus with the feature of deep inference: in contrast to the sequent calculus, inference rules can be applied at any depth inside a formula, bringing shorter proofs than all other formalisms supporting analytical proofs. However, deep applicability of inference rules causes greater nondeterminism than in the sequent calculus regarding proof search. In this paper, we introduce a new technique which reduces nondeterminism without breaking proof theoretical properties, and provides a more immediate access to shorter proofs. We present our technique on system BV, the smallest technically non-trivial system in the calculus of structures, extending multiplicative linear logic with the rules mix, nullary mix and a self dual, non-commutative logical operator. Since our technique exploits a scheme common to all the systems in the calculus of structures, we argue that it generalizes to these systems for classical logic, linear logic and modal logics.
Why a hard drive has less storage space than promised? It has happened to most of us. We buy a new hard drive (or maybe a flash drive) with mind-boggling storage capacity only to find that it has less space than what was mentioned on the box. Angered, we start cursing the manufacturer and our dealer for false marketing, thinking that they should be sued for doing this. Hey, but have you ever wondered how they continue to do this again and again without getting into legal trouble? The answer is that they are not marketing it falsely at all. Surprised? I'll explain. A manufacturer considers 1 Megabyte to be 1000 Kilobytes, 1 Gigabyte to be 1000 Megabytes, 1 Terabyte to be 1000 Gigabytes and so on. This is correct considering that kilo means 1000 and mega means 1000000 (10^6). However, computers calculate on base 2 and to them, 1 MB is actually 1024 kilobytes, 1GB is 1024MB and 1 TB is 1024GB. This difference in the method of computation is responsible for this "missing space." Let's take an example of a 500 GB hard disk. From a manufacturer's point of view, the 500GB will have 500*1000*1000*1000 = 500000000000 bytes. From a computer's point of view, 500GB is actually 500*1024*1024*1024 = 536870912000 bytes. So, a hard drive that promises to have 500 GB storage space will actually display 536870912000 - 500000000000 = 36870912000 bytes less storage space when connected to a computer.
│Space Promised │Displayed on a computer │Difference│
│100GB          │93.13GB                 │6.87GB    │
│250GB          │232.83GB                │17.17GB   │
│500GB          │465.66GB                │34.34GB   │
│1TB            │931.32GB                │68.68GB   │
│2TB            │1862.64GB               │137.36GB  │
Take a look at the table given above to see how much space is "lost" due to computers working on a base 2 system. As you can see, with the increase in capacity of the storage device, there is an increase in the missing space. Reader Comments Innocent Sm said... Informative stuff. anjan karki said...
i understand that the manufacturer takes 1000 mb as 1 gb. then why is the hard disk with 1 tb space not 1000gb and instead it is 931 gb. i understand the calculations done up there but why are u taking the difference between base 2 system and manufacturer specifications. as u said if the manufacturer takes 1000 gb as 1 tb then when they sell 1 tb hard disk it should have 1000gb not 931 gb. why take the difference?????? Akhilesh said... That is because every GB of the 1TB hard drive will have 1000MB instead of 1024MB. Every MB here will too have 1000 KB instead of 1024KB. The same is true for each KB. Hence, a hard drive advertised as 1TB will have 1,000,000,000,000 bytes which is ~931GB in terms of base 2 system. Pongwut Maensamut said... I think it has lost too much on 2TB. 185.36GB! Abhishek Garg said... Thanks for posting this. Great info. goldsalltime said... But to my surprise I use more than one of each size yet to notice that they are different in reduced size eg 100gb sometimes will be 92,93,89,94 why? At first, I just thought maybe it's reserved space in case of overdose. Rodolofo Rubens said... I still think that this is false advertising. Anonymous said... Why the hell will the manufacturers consider it like metric system and not in computer sense. They are not making or selling grocery items. They are selling computer parts and therefore should measure in the same way the computer understands it!!! This is simply deliberate fraud. It's like I am driving a car, my speedometer is showing 60 miles per hour but my car manufacturer tells me that they consider 1 mile = 4000ft instead of 5280ft. Hence the actual speed I am getting is (60*4000)/5280 = 45.45 miles per hour. Anonymous said... @Anonymous: Why the hell would you use your silly measurements like inch, yard, feet, elbows, fists, stones, and miles when you got a perfectly good measurement scale like mm, cm, dm, m, and km. However you would answer: Well because it makes sense to me.
Same thing goes for the manufacturers, they give you a rounded-down number which is easier to remember, easier to put on the box, and easier for all sorts of people who are not interested in, nor know anything about, computers at all.
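The arithmetic above can be sketched in a few lines of Python (the function name is ours; "GiB" is the binary unit the post calls a computer's GB):

```python
# Sketch of the arithmetic above: convert a manufacturer's decimal-GB
# figure to the binary gigabytes (GiB) an operating system reports.

def advertised_to_displayed_gib(advertised_gb):
    """Decimal GB (10^9 bytes) -> binary GB/GiB (2^30 bytes)."""
    total_bytes = advertised_gb * 1000**3  # manufacturer: 1 GB = 10^9 bytes
    return total_bytes / 1024**3           # computer: 1 GiB = 2^30 bytes

for size_gb in (100, 250, 500, 1000, 2000):
    shown = advertised_to_displayed_gib(size_gb)
    print(f"{size_gb:>5} GB advertised -> {shown:8.2f} GB displayed "
          f"({size_gb - shown:.2f} GB 'missing')")
```

Running it reproduces the table: a 500 GB drive shows about 465.66 GB, and the gap grows with capacity.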
Molecular Expressions Microscopy Primer: Anatomy of the Microscope - Modulation Transfer Function

Basic Concepts

The resolution and performance of an optical microscope can be characterized by a quantity known as the modulation transfer function (MTF), which is a measurement of the microscope's ability to transfer contrast from the specimen to the intermediate image plane at a specific resolution. Computation of the modulation transfer function is a mechanism that is often utilized by optical manufacturers to incorporate resolution and contrast data into a single specification. The modulation transfer function is useful for characterizing not only traditional optical systems, but also photonic systems such as analog and digital video cameras, image intensifiers, and film scanners. This concept is derived from standard conventions utilized in electrical engineering that relate the degree of modulation of an output signal to a function of the signal frequency. In optical microscopy, signal frequency can be equated to a periodicity observed in the specimen, ranging from a metal line grating evaporated onto a microscope slide or repeating structures in a diatom frustule to subcellular particles observed in living tissue culture cells. The number of spacings per unit interval in a specimen is referred to as the spatial frequency, which is usually expressed in quantitative terms of the periodic spacings (spatial period) found in the specimen. A common reference unit for spatial frequency is the number of line pairs per millimeter. As an example, a continuous series of black and white line pairs with a spatial period measuring 1 micrometer per pair would repeat 1000 times every millimeter and therefore have a corresponding spatial frequency of 1000 lines per millimeter.
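The period-to-frequency conversion in this example is just a reciprocal; a quick sketch (the helper name is illustrative):

```python
# Quick check of the example above: spatial frequency is simply
# the reciprocal of the spatial period.

def spatial_frequency_lp_per_mm(period_um):
    """Line pairs per millimeter for a grating whose period
    (one dark plus one bright line) is given in micrometers."""
    return 1000.0 / period_um  # 1000 micrometers per millimeter

print(spatial_frequency_lp_per_mm(1.0))  # 1 um period   -> 1000 lp/mm
print(spatial_frequency_lp_per_mm(0.5))  # 0.5 um period -> 2000 lp/mm
```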
Another important concept is the optical transfer function (OTF), which represents the ratio of image contrast to specimen contrast when plotted as a function of spatial frequency, taking into account the phase shift between positions occupied by the actual and ideal image. In general terms, the optical transfer function can be described as:

OTF(ν) = MTF(ν) · e^(iφ(ν))

in which the imaginary exponent contains the phase transfer function (PTF), φ(ν), the change in phase position as a function of spatial frequency ν. Therefore, the optical transfer function is a spatial frequency-dependent complex variable whose modulus is the modulation transfer function, and whose phase is described by the phase transfer function. If the phase transfer function is linear with frequency, it represents a simple lateral displacement of the image as would be observed with an aberration such as geometric distortion. However, if the phase transfer function is nonlinear, it can adversely affect image quality. In the most dramatic example, a phase shift of 180 degrees produces a reversal of image contrast, where light and dark patterns are inverted. A perfect optical system would have a modulation transfer function of unity at all spatial frequencies, while simultaneously having a phase transfer factor of zero. In cases where the image produced by the microscope (or other optical system) is sinusoidal and there is no significant phase shift, the modulus of the optical transfer function reverts to the modulation transfer function. In situations where the specimen is a periodic line grating composed of alternating black and white lines of equal width (square waves), a graph relating the percentage of specimen contrast transferred to the image is known as the contrast transfer function (CTF). Most specimens display a composition of sinusoidally varying intensities having differing spatial frequencies instead of distinct sharp profiles in the form of square waves.
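As a numerical sketch of this decomposition (the 0.6 contrast value is illustrative, not from the article), the contrast reversal at a 180-degree phase shift can be seen directly:

```python
import cmath

# Sketch of the decomposition above: the OTF is a complex number
# whose modulus is the MTF and whose argument is the PTF.

def otf(mtf, ptf_radians):
    """Optical transfer function from its modulus (MTF) and phase (PTF)."""
    return mtf * cmath.exp(1j * ptf_radians)

print(otf(0.6, 0.0))            # zero phase: OTF reduces to the real MTF, (0.6+0j)
print(otf(0.6, cmath.pi).real)  # 180-degree shift: -0.6, i.e. contrast reversal
```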
For such sinusoidal specimens, a graph relating output as a fraction of input intensity versus signal (spatial) frequency is analogous to the modulation transfer function. As the spatial frequency approaches very large values, the square wave response resembles that of a sinusoid, yielding graphs of the contrast transfer function and the modulation transfer function that are virtually identical.

[Interactive Java tutorial: Diffraction Effects on Image Contrast - observe how image contrast decreases as a function of increasing spatial frequency in specimen details with diffraction-limited optical systems.]

The effect of increasing spatial frequency on image contrast in a diffraction-limited optical microscope is illustrated in Figure 1. A periodic line grating consisting of alternating white and black rectangular bars (representing 100 percent contrast) is presented at two spatial frequencies on the left-hand side of the figure. The resulting image produced in the microscope is shown on the right side of each objective, and appears as a sinusoidal intensity that has reduced contrast, which is plotted in the graph below the image in terms of a relative percentage of the object contrast. One hundred percent contrast represents regular white and black repeating bars, while zero percent contrast is manifested by gray bars that blend into a gray background of the same intensity. After the contrast value reaches zero, the image becomes a uniform shade of gray, and remains as such for all higher spatial frequencies. When the input is a high contrast square wave, such as the periodic grating target illustrated in Figure 1, transfer of contrast is determined by the contrast transfer function. A majority of specimens observed in the microscope, however, do not display such a regular periodicity and consist of "square waves" that are sinusoidal to varying degrees at the sub-micron level.
In this case, the modulation transfer function is utilized to calculate transfer of contrast from the specimen to the image produced by the microscope. Modulation of the output signal, the intensity of light waves forming an image of the specimen, corresponds to the formation of image contrast in microscopy. Therefore, a measurement of the MTF for a particular optical microscope can be obtained from the contrast generated by periodic lines or spacings present in a specimen that result from sinusoidal intensities in the image that vary as a function of spatial frequency. If a specimen having a spatial period of 1 micron (the distance between alternating absorbing and transparent line pairs) is imaged at high numerical aperture (1.40) with a matched objective/condenser pair using immersion oil, the individual line pairs would be clearly resolved in the microscope. The image would not be a faithful reproduction of the line pair pattern, but would instead have a moderate degree of contrast between the dark and light bars (Figure 1). Decreasing the distance between the line pairs to a spatial period of 0.5 microns (spatial frequency equal to 2000 lines per millimeter) would further reduce contrast in the final image, but increasing the spatial period to 2 microns (spatial frequency equal to 500 lines per millimeter) would produce a corresponding increase in image contrast. The limit of resolution with an optical microscope is reached when the spatial frequency approaches 5000 lines per millimeter (spatial period equal to 0.2 microns), using an illumination wavelength of 500 nanometers at high numerical aperture (1.4). At this point, contrast would be barely detectable and the image would appear a neutral shade of gray. 
In real specimens, the amount of contrast observed in a microscope depends upon the size, brightness, and color of the image, but the human eye ceases to detect periodicity at contrast levels below about three to five percent for closely spaced stripes and may not reach the 0.2-micron limit of resolution. When a specimen is observed in an optical microscope, the resulting image will be somewhat degraded due to aberrations and diffraction phenomena, in addition to minute assembly and alignment errors in the optics. In the image, bright highlights will not appear as bright as they do in the specimen, and dark or shadowed areas will not be as black as those observed in the original patterns. The specimen contrast or modulation can be defined as:

Modulation (M) = (I(max) - I(min))/(I(max) + I(min))

where I(max) is the maximum intensity displayed by a repeating structure and I(min) is the minimum intensity found in the same specimen. By convention, the modulation transfer function is normalized to unity at zero spatial frequency. Modulation is typically less in the image than in the specimen and there is often a slight phase displacement of the image relative to the specimen. By comparing several specimens having differing spatial frequencies, it can be determined that both image modulation and phase shifts will vary as a function of spatial frequency. By definition, the modulation transfer function (MTF) is described by the equation:

MTF = Image Modulation/Object Modulation

This quantity, as discussed above, is an expression of the contrast alteration observed in the image of a sinusoidal object as a function of spatial frequency. In addition, there is a position or phase shift of the sinusoid that is dependent upon spatial frequency in both the horizontal and vertical coordinates.
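Both definitions can be sketched directly in code; the intensity traces below are made-up illustrations, not measured data:

```python
# Sketch of the two formulas above, applied to sampled intensity traces.

def modulation(intensities):
    """M = (Imax - Imin) / (Imax + Imin) for a repeating structure."""
    i_max, i_min = max(intensities), min(intensities)
    return (i_max - i_min) / (i_max + i_min)

def mtf_at_one_frequency(image_trace, object_trace):
    """MTF = image modulation / object modulation."""
    return modulation(image_trace) / modulation(object_trace)

specimen = [10, 90, 10, 90]  # high-contrast grating: M = 80/100 = 0.8
image = [30, 70, 30, 70]     # blurred image of it:   M = 40/100 = 0.4

print(modulation(specimen))                   # 0.8
print(mtf_at_one_frequency(image, specimen))  # 0.5
```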
A good example occurs in video microscopy where the raster scanning process produces slightly different responses resulting in a variation between the horizontal and vertical modulation transfer functions. The phase response from an ideal imaging system demonstrates a linear dependence on spatial frequency, with a position shift that is independent of the frequency and normalized to zero at zero spatial frequency. In the ideal system, all sinusoidal image components are displaced by the same amount, resulting in a net position shift for the image without degradation of image quality. When the phase response deviates from ideal linear behavior, then some components will be shifted to a greater degree than others resulting in image degradation. This is especially critical in electronic video systems, which often possess less than ideal phase characteristics that can lead to noticeable loss of image quality. Fortunately, an ideal aberration-free optical system having a circular aperture and a centered optical axis (such as a high-performance microscope) will produce a phase transfer function that has a value of zero for all spatial frequencies in all directions. In this case, phase shifts occur exclusively for off-axis rays and only the modulation transfer function need be considered. A perfect aberration-free optical system is termed diffraction limited, because the effects of light diffraction at the pupils limit the spatial frequency response and establish the limits of resolution. Presented in Figure 2 is a graph relating the modulation transfer function of a repeating specimen imaged with incoherent illumination by visible light with several different diffraction-limited microscope objectives having a circular pupil. In this case, objective quality affects the modulation response as a function of spatial frequency. 
Higher quality objectives (red line in Figure 2) exhibit greater performance than those of a lower quality (yellow line), and are able to transfer contrast more effectively at higher spatial frequencies. The objective represented by the yellow curve has the highest performance at low spatial frequencies, but falls short of the high numerical aperture objective at larger frequencies. Beneath the graph is a representation of relative feature size versus spatial frequency with respect to the Rayleigh criterion and Sparrow limit. Also presented is a series of sine waves representing a specimen (object) and the resulting image produced in a typical microscope as the frequency of the sinusoid increases. When there are no significant aberrations present in an optical system, the modulation transfer function is related to the size of the diffraction pattern, which is a function of the system numerical aperture and wavelength of illumination. In quantitative terms, the modulation transfer function for an optical system with a uniformly illuminated circular aperture can be expressed as:

MTF(ν) = 2(φ - cos(φ)·sin(φ))/π

where

φ = cos^-1(λν/2NA)

In these equations, ν is the spatial frequency in cycles per millimeter, λ is the wavelength of illumination, and NA is the numerical aperture. At low spatial frequencies, image contrast is the highest, but falls to zero as the spatial frequency is increased beyond a certain point (drawn in Figure 2 as a reduction in amplitude produced in the image). The cutoff (f(c)) is the spatial frequency at which contrast reaches zero and can be expressed by the equation:

f(c) = 2NA/λ

It is interesting to note that this equation expresses (in terms of spatial frequency) the fact that resolution increases with higher numerical aperture and shorter wavelengths.
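A minimal numeric sketch of the diffraction-limited formula MTF = 2(φ - cos(φ)·sin(φ))/π with φ = cos^-1(λν/2NA); the wavelength and aperture follow the article's 500-nanometer, 1.4-NA example:

```python
import math

def diffraction_limited_mtf(nu_lp_per_mm, wavelength_mm, na):
    """Incoherent, aberration-free MTF of a circular pupil:
    MTF = 2*(phi - cos(phi)*sin(phi))/pi, phi = acos(lambda*nu/(2*NA)).
    Returns 0 at and beyond the cutoff frequency f_c = 2*NA/lambda."""
    cutoff = 2 * na / wavelength_mm
    if nu_lp_per_mm >= cutoff:
        return 0.0
    phi = math.acos(wavelength_mm * nu_lp_per_mm / (2 * na))
    return 2 * (phi - math.cos(phi) * math.sin(phi)) / math.pi

wavelength = 500e-6  # 500 nanometers expressed in millimeters
na = 1.4             # high numerical aperture oil-immersion objective

print(2 * na / wavelength)                            # cutoff, roughly 5600 lp/mm
print(diffraction_limited_mtf(0, wavelength, na))     # unity contrast at zero frequency
print(diffraction_limited_mtf(2800, wavelength, na))  # reduced contrast at mid-band
```

Note that contrast is already well below one at half the cutoff, consistent with the gradual roll-off plotted in Figure 2.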
The modulation transfer function is also related to the point spread function, which is the image of a point source of light (commonly referred to as the Airy disk) from the specimen projected by the microscope objective onto the intermediate image plane. Optical aberrations and numerical aperture variations affect the distribution of light intensity observed at the image plane, and thus influence the shape of the point spread function. Also note that the sum of the point spread functions produced by a specimen in a diffraction-limited microscope comprises the diffraction pattern produced at the image plane. The highest spatial frequencies that can be imaged by a microscope objective are proportional to the numerical aperture and are based on the distribution size of the point spread function. Objectives with low numerical apertures produce point spread functions that have a wider intensity distribution at the image plane than those formed by objectives with higher numerical apertures. At the limit of resolution, adjacent Airy disks or point spread functions start to overlap, obscuring the ability to distinguish between individual intensities. Narrower intensity distributions (at higher numerical apertures) can approach each other more closely and still be resolved by the microscope. This implies that a narrow point spread function corresponds to a high spatial frequency. In fact, the optical transfer function, a measure of spatial frequency response for an optical system, is the mathematical Fourier transform of the point spread function. The relationship between the modulation transfer function and the point spread function for a diffraction-limited optical microscope is illustrated in Figure 3. As discussed above, the limiting cutoff frequency (f(c)) of the modulation transfer function is directly proportional to the objective numerical aperture and inversely proportional to the illumination wavelength. 
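The transform relationship can be illustrated with a toy one-dimensional calculation: a Gaussian stands in for the point spread function (a simplification; real PSFs are Airy patterns), and a narrower PSF retains more contrast at high spatial frequencies:

```python
import math

# Toy 1-D illustration: the MTF as the (normalized) magnitude of the
# discrete Fourier transform of a sampled point spread function.

def gaussian_psf(width, n=64):
    """Sampled Gaussian 'PSF' centered in an n-point window."""
    return [math.exp(-((i - n / 2) / width) ** 2) for i in range(n)]

def mtf_from_psf(psf, k):
    """|DFT(psf)| at frequency bin k, normalized to 1 at k = 0."""
    n = len(psf)
    def dft_mag(freq):
        re = sum(p * math.cos(2 * math.pi * freq * i / n) for i, p in enumerate(psf))
        im = sum(p * math.sin(2 * math.pi * freq * i / n) for i, p in enumerate(psf))
        return math.hypot(re, im)
    return dft_mag(k) / dft_mag(0)

narrow, wide = gaussian_psf(2.0), gaussian_psf(6.0)
k = 8  # an arbitrary mid-range frequency bin
print(mtf_from_psf(narrow, k))  # the narrow PSF keeps more contrast here...
print(mtf_from_psf(wide, k))    # ...than the wide PSF, at the same frequency
```

The inverse width relationship of Fourier pairs shows up directly: widening the PSF by a factor of three collapses the high-frequency response.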
The radius of the first dark concentric ring surrounding the central intensity peak of a point spread function (or Airy disk) is expressed by the equation:

r = 0.61λ/NA

which is more commonly referred to as the Rayleigh criterion, or the resolution limit of the microscope. Because r is inversely proportional to numerical aperture and directly proportional to the illuminating wavelength, it follows that r and f(c) are also inversely proportional, a fundamental property of Fourier transforms (the width of a function is inversely proportional to the width of its transform). Individual objectives in a microscope display a specific modulation transfer function (or optical transfer function) that depends on numerical aperture, objective design, illumination wavelength, and the mode of contrast generation. When the numerical aperture of the condenser is equal to or greater than that of the objective, the spatial frequency cutoff value decreases with decreasing objective numerical aperture (Figure 4(a)). Holding the objective numerical aperture value constant and varying the condenser numerical aperture results in progressively lower cutoff values with decreasing condenser numerical aperture (Figure 4(b)). Utilization of contrast enhancing techniques such as phase contrast and differential interference contrast (DIC) results in unique modulation transfer functions that display curves markedly different from those observed in brightfield illumination using the objective's full numerical aperture (Figure 5). For example, the narrow illumination produced by phase rings in phase contrast microscopy produces a modulation transfer function curve that oscillates above and below the brightfield curve, while the curves generated by DIC objectives vary with the angle between the specimen period and the shear direction of the Wollaston or Nomarski prisms. Also illustrated in Figure 5 is the curve produced by a single-sideband edge enhancement microscope (developed by Dr. Gordon W.
Ellis), which yields images of superior contrast at high spatial frequencies. In practice, the performance of a microscope objective or other lens system is often determined by tracing a large number of light rays emitted by a point source in a uniformly distributed array over the vignetted entrance pupil of the objective. After passing through the exit pupil and being distributed over the image plane, the ray intersections are used to plot a spot diagram of the light points at the image plane. In most cases, several hundred rays are utilized to construct a spot diagram, which may take into account optical aberrations if the spacings of light rays are so adjusted. The resulting spot diagram is then regarded as a point spread function and is converted into a graph of the modulation transfer function versus spatial frequency by means of a Fourier transform. Direct measurements of the modulation transfer function are conducted by utilizing specific test pattern targets consisting of high-contrast periodic line gratings having a series of spacings that usually range from one or several millimeters down to 0.1 micrometer, as illustrated in Figure 6. These targets allow evaluation of microscope objective diffraction patterns, both in and out of focus, in a variety of contrast enhancing modes. Detector arrays are utilized to measure the distribution of light in the image plane by summation of the point spread functions, and a Fourier transform algorithm applied to the data to determine the modulation transfer function. The target presented in Figure 6(a) is designed specifically for testing the horizontal modulation transfer function of a macro imaging system such as a telescope, binoculars, video system, camera, or digital video recorder. It is composed of sinusoidal patterns having a spatial frequency range between 0.2 and 80 line pairs per millimeter with a grayscale optical density range varying between 0.2 and 1.2 and an 80 percent modulation of the sine waves.
This type of target relays image quality information over a wide range of frequencies and contains on-target references for denoting the contrast levels of the sinusoidal frequencies. In video microscopy, microscopic sinusoidal test targets are not readily available, so the contrast transfer function of a video detector coupled to the microscope is often determined rather than the modulation transfer function. In systems that have a circular aperture (such as an optical microscope), the modulation and/or contrast transfer function is often computed or measured with star and bar targets similar to the one illustrated in Figure 6(b). Targets of this type have both radial and tangential patterns that are orthogonal to each other and are also useful for detecting focus errors and aberrations such as astigmatism. Variations of the basic star target design contain paired lines and dots that allow determination of objective diffraction patterns both in and out of focus and are useful for measurements conducted in brightfield, reflection contrast, or epifluorescence illumination modes. The wedge and bar spacing period ranges from 0.1 micrometer to tens of microns with spatial frequencies between 0.2 and 25 line pairs per millimeter. Radial modulation transfer targets are ideal for high-resolution measurements using photographic film or analog sensors, but the horizontal and vertical pixelated nature of CCD detectors benefits from analysis utilizing targets that are geometrically consistent with the pixel rows and columns of the imaging device. A typical intensity scan made from a star target measured with a high numerical aperture apochromatic objective operating in transmitted light mode is presented in Figure 7(a). Intensity values were averaged over the dimension parallel to the target grating lines.
When these types of data are collected for a variety of objectives at varying numerical aperture and plotted as percent contrast versus spatial frequency, a graph similar to that illustrated in Figure 7(b) is obtained. Contrast transfer approaches 100 percent at very low spatial frequencies (wide spacing periods) and gradually drops with increasing spatial frequency. As spatial frequencies reach the Abbe limit (the imaging wavelength divided by twice the objective numerical aperture), contrast values are generally too low to detect individual spacings in the line grating. In some instances, the modulation transfer function of an optical microscope can actually be less than zero. This occurs in an otherwise functional system when performance is degraded due to defocus, aberrations, and/or manufacturing errors. Often, the modulation transfer function will oscillate above and below zero as the microscope is racked through the point of best focus on a specimen having features with high spatial frequency. When the transfer function dips below zero, the image undergoes a phase reversal in which dark features become bright and vice versa. This phenomenon is illustrated in Figure 8(a) for the periodic knobs imaged from the curved surface of a diatom frustule. As the microscope focus is changed, the knobs undergo inversion of contrast, producing a ripple effect in the relative modulation (compare knobs (1) through (5) in Figure 8(a)). Increasing the degree of defocus will produce a corresponding increase in the oscillations observed with a modulation transfer function plot, with contrast reversals affecting increasingly larger features in the image. As the specimen plane is defocused, contrast drops rapidly for microscopic features having high spatial frequencies and more slowly for those with low frequencies. It is often useful to measure contrast at a particular spatial frequency and then follow contrast as a function of distance on either side of the image plane.
This analysis is sometimes termed the through-focus transfer function and is a measure of the depth of focus for a particular objective. The relationship between spatial frequency and the modulation transfer function for the diatom is illustrated in Figure 8(b). The graph represents a series of varying focus levels where the measured MTF is plotted against spatial frequency (number of sinusoidal features per unit distance). A drop in relative modulation values with defocus at fixed spatial frequencies is obvious in the figure, as well as the contrast reversal at focus levels 4 and 5 where the reduced spatial frequency drops into negative values of the MTF. Curve number 1 represents the diatom frustule in focus, and curves 2 through 5 present the results with successively increasing levels of defocus. The dotted line corresponds to the approximate spatial frequency of the knobs illustrated in Figure 8(a). Contrast is at a minimum where the dotted line crosses curve 4 and is reversed where curve 5 dips below zero on the y-axis. All optical systems and supporting components including microscopes, digital and analog video systems, video capture boards, cables, computer monitors, photographic film emulsions, and the human eye each have a characteristic modulation transfer function. In the case of analog and digital electronic imaging detectors, the reciprocal relationship discussed above between spatial resolution and frequency response is valid. In this case, however, the point spread function is replaced by the time response to a very short electrical impulse, and the optical transfer function is replaced by the imaging system's response to the sinusoidal electrical signal with respect to amplitude and phase. Electronic systems lack the symmetry of optical systems, which introduces non-linear phase effects into the function. 
Regardless of these differences, the underlying concepts are similar between electronic and optical systems, and this allows optical microscopes coupled to digital (or analog) imaging equipment to be analyzed within a common framework. The modulation transfer function of an optical system that contains a cascading series of components (microscope, digital video camera, video capture board, computer monitor, etc.) can be calculated by multiplying the individual MTF's of each component. By conducting a careful analysis of the combined system modulation transfer functions, a prediction about performance of the system can be obtained. In the same manner, the system phase response can be obtained by adding the phase transfer functions of individual components (Note: phase transfer functions are summed while modulation transfer functions are multiplied). Together, the modulation and phase transfer functions define the optical transfer function of the system. It is important to point out that the contrast transfer function does not have the same mathematical properties as the modulation transfer function and cannot be obtained simply by multiplying the CTF of individual components. In a cascaded series of devices that work together to produce an image, contrast is lost in certain frequency regions at each step, generally at the higher end of the spatial frequency range. In this regard, each detector or image processing function can also be used to cut off or boost the modulation transfer function at certain frequencies. At each stage, noise introduced by image transfer and processing is also a function of spatial frequency. Therefore, fine-tuning the response for optimum image contrast and system performance is dependent not only upon the type of image information desired, but also the frequency dependence of noise levels in the image. 
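The cascade rule just stated (multiply the component MTFs, sum the component phase transfers) is simple enough to write down directly; the component values below are made-up illustrations, not measured data:

```python
def system_mtf(component_mtfs):
    """System MTF at one spatial frequency: product of the component MTFs."""
    out = 1.0
    for m in component_mtfs:
        out *= m
    return out

def system_phase(component_phases):
    """System phase transfer at one spatial frequency: sum of the component phases."""
    return sum(component_phases)

# Illustrative values at a single spatial frequency: microscope, camera, monitor
combined = system_mtf([0.9, 0.8, 0.7])   # 0.504, worse than any single stage
```

Note that the combined MTF is always lower than the weakest component, which is why each added device in the imaging chain erodes high-frequency contrast.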
In addition, because the modulation transfer function of a detector is wavelength-dependent, it must be determined under carefully defined conditions of illumination. The modulation transfer function has not yet been established for several contrast enhancing modes commonly utilized in optical microscopy (such as polarized light), which await more highly perfected theories of image formation and appropriate test patterns (or specimens) to determine, by experiment, the MTF values.

Contributing Authors
Kenneth R. Spring - Scientific Consultant, Lusby, Maryland, 20657.
Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.

© 1998-2013 by Michael W. Davidson and The Florida State University. All Rights Reserved.
Last modification: Friday, Aug 01, 2003 at 11:43 AM
Kevin Drum wrote a good post a couple of weeks ago about statistical illiteracy in the media, viz. the widespread tendency to characterize election poll results in which one candidate’s percentage point lead is equal to or less than the poll’s statistical margin of error (MOE) as a “statistical tie” or “dead heat.” Kevin notes: …probability isn’t a cutoff, it’s a continuum: the bigger the lead, the more likely that someone is ahead and that the result isn’t just a polling fluke. So instead of lazily reporting any result within the MOE as a “tie,” which is statistically wrong anyway, it would be more informative to just go ahead and tell us how probable it is that a candidate is really ahead. Here’s a table that gives you the answer to within a point or two: As Kevin notes, if Obama is up by three points, and the MOE is three points, it’s 84% likely that he’ll be the next President of the United States. That’s very different than 50% likely, i.e. an actual tie. This is directly relevant to education because most states use precisely the same statistical techniques when deciding whether a school has made Adequate Yearly Progress (AYP) under No Child Left Behind. If, say, 65% of students need to pass the test in order to make AYP, and only 62% pass, but the state determines an MOE of 4 percentage points, then the school makes AYP because the score was “within the margin of error.” This is silly for two reasons. First, unlike opinion polls, NCLB doesn’t test a sample of students. It tests all students. The only way states can even justify using MOEs in the first place is with the strange assertion that the entire population of a school is a sample, of some larger universe of imaginary children who could have taken the test, theoretically. 
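Kevin's table can be reproduced with a back-of-the-envelope normal approximation. One common shortcut, which is my assumption rather than anything spelled out in the post, treats the reported 95% MOE as roughly the standard error of the candidate's lead (a fair approximation near an even split), so the probability the leader is genuinely ahead is the standard normal CDF evaluated at lead/MOE:

```python
from math import erf, sqrt

def prob_really_ahead(lead_pts, moe_pts):
    """Approximate probability that the leader's true margin is positive,
    using the rough rule that the margin's standard error equals the MOE."""
    z = lead_pts / moe_pts
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF at z

p = prob_really_ahead(3, 3)   # lead equal to the MOE: about 0.84
```

This recovers the 84% figure quoted above, and a tied poll (lead of zero) comes out to exactly 50%.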
In other words, the message to parents is “Yes, it is true that your children didn’t learn very much this year, but we’re pretty sure, statistically speaking, that had we instead been teaching another group of children who do not actually exist, they’d have done fine. So there’s nothing to worry about.”

Second, per Kevin’s chart above, the idea that scores that fall below the cutoff but within the margin of error are statistically indistinguishable from actual passing scores is incorrect. This is particularly true given that, while opinion polls almost always use a 95% confidence interval to establish their MOEs, most states use a 99% confidence interval for NCLB, which results in substantially larger margins of error around the passing score. But states do it anyway, because many of them basically see NCLB accountability as a malevolent force emanating from Washington, DC from which schools need to be shielded by any means necessary.

Think of it this way: let’s say your child is sick and you bring him to the doctor. After the diagnosis is complete, you and the doctor have the following conversation:

Doctor: My diagnosis is that your son has pneumonia and needs to be hospitalized.
You: That’s terrible! Are you sure?
Doctor: Well, there are few absolute certainties in medicine. It’s possible that he only has bronchitis. But I’m pretty sure it’s pneumonia.
You: How sure?
Doctor: 84% sure.

What would you do? Would you (A) Check your son into the hospital? Or would you (B) Say “Hey, there’s a 16 percent chance this whole thing will work itself out with bedrest and chicken soup. Let’s go that way.”

States implementing NCLB nearly always choose option (B). That’s because they see the law as a process for making the lives of educators worse, not what it actually is: a process for making the lives of students better.

6 Comments

Ian August 24, 2008 at 9:35 pm
If you think students need 80% correct to be proficient, set the cut score at 75% to account for error.
But having done so, don't then proceed to tell parents that schools have met AMOs under NCLB when in fact they have not. So are you saying that the schools have already done this, or that they should do this? It's true that if you have a 100% sample of test scores you don't need a confidence interval. You have sampled your entire population. That said, no one is interested in the population of test scores - they are interested in what the students know. You have only sampled what it is that students know. Ignoring the systematic error associated with the instrument, we still have a random error associated with the measurement. Different versions of a test may be identical on average, but individuals will perform differently on different versions of a given test. Even with a given test, individuals will perform differently at different times. You might be hungry one day, sleepy another, and in peak mental condition the third. One could, of course, fudge the standards up front, as you suggested, and build in a "margin of error". But why build junk like that into the system? Why not use something more reliable - like a margin of error? You know - something that you can actually calculate from the data...

Corey Bunje Bower August 22, 2008 at 11:58 am
Regardless of what they call it, it's still the right idea. Imagine that the cut-off for AYP is that 50% of students in a school are able to pass a test. If you test everybody in the school ten times, you might have 45% pass one time and 55% pass another. But the school is only tested once. So if 45% of students in the school pass, we can't really be sure that the school didn't live up to expectations -- it might have just been a testing day that was at the bottom of the distribution.

Anonymous August 22, 2008 at 11:13 am
I found this post amusing given your past attempts at statistical wizardry. Let's re-tell the story, applying the Carey 51% action rule. You: That's terrible! Are you sure?
Dr: Well, there are few absolute certainties in medicine...But I'm pretty sure it's pneumonia. You: How sure? Dr: 51% sure. What would you do? Hey, what matters is your kid is sick! If there's a 51% chance he needs to be treated for pneumonia, then we begin treatment--now! It sure beats the status quo. Another lesson learned from Q&E. When it comes to statistics--kids, don't try this at home.

Anonymous August 22, 2008 at 6:07 am
As a statistician, I refer you to the very important Todd Snider song, "Statisticians Blues". It's Friday after all:
Live with band: http://www.youtube.com/watch?v=3d_VU2XUP-E&feature=related
Live by himself: http://www.youtube.com/watch?v=BMQdtyot38s&feature=related

Kevin Carey August 21, 2008 at 9:10 am
When you ask states themselves why they use confidence intervals, they refer to sampling error, not testing error. It's true that tests only assess a subset of what students need to know, and test results are subject to measurement error. But the proper way to account for measurement error is in setting cut scores on the test. No state requires students to get 100% correct to pass. So when policymakers decide what cut score is good enough, that's the place to make allowances for the imprecision of the instrument. If you think students need 80% correct to be proficient, set the cut score at 75% to account for error. But having done so, don't then proceed to tell parents that schools have met AMOs under NCLB when in fact they have not.

Corey Bunje Bower August 21, 2008 at 8:33 am
It wouldn't be 84% likely that Obama is the next president, it would be 84% likely that more people in the population currently plan on voting for him.
And the MOE around test results doesn't reflect the fact that not every kid was tested, it reflects the fact that that one test is only a sample of their ability -- if 100 different tests were given, the scores would differ each time -- and recognizes that a student's score would vary each time they took a test based on how they're feeling and what questions happen to be on the test.
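The post's point about 99% versus 95% confidence intervals is easy to make concrete: for a pass rate treated as a sample proportion, the margin of error scales with the critical value z (about 1.96 at 95%, about 2.58 at 99%), so the 99% band is roughly a third wider. The school size and pass rate below are hypothetical:

```python
from math import sqrt

def proportion_moe(p, n, z):
    """Margin of error for a sample proportion p with n observations at critical value z."""
    return z * sqrt(p * (1.0 - p) / n)

# Hypothetical school: 62% of 400 tested students pass
moe95 = proportion_moe(0.62, 400, 1.96)    # 95% margin
moe99 = proportion_moe(0.62, 400, 2.576)   # 99% margin, about 31% wider
```

A wider band around the cut score means more below-target schools get counted as having made AYP, which is the shielding effect the post describes.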
Two exponent problems.

February 2nd 2009, 02:45 PM #1
Ok, here are two more problems involving exponents. I'm not the best mathematician, so my answers may or may not be correct. If they are incorrect, could you please show how one can reach the correct solution? Thank you. Here is the first problem:
$(7x^6y^3)(-8x^2y^6)$
And my solution to same: $-56x^8y^9$
Note that in this second problem, the "|" represents a bracket that is around the whole problem, NOT the contained numbers' absolute value: "simplify:"
$\left[\frac{-5a^3 b^4 c^0}{3a^4 b^6 c^3}\right]^{-4}$
and here are my steps to reaching a solution:
(-5a^-12 b^-16 1^-4) / (3a^-16 b^-24 c^-12)
(-5 b^-16 1^-4) / (3a^-4 b^-24 c^-12)
3a^-4 b^-40 c^-12

February 2nd 2009, 04:11 PM #2 (Super Member, joined May 2006, Lexington, MA (USA))
Hello, blastedcody!

Simplify: . $(7x^6y^3)(-8x^2y^6)$
And my solution: . $-56x^8y^9$ . . . . Good!

Simplify: . $\left[\frac{\text{-}5a^3 b^4 c^0}{3a^4 b^6 c^3}\right]^{-4}$
and here are my steps to reaching a solution: . $\frac{\text{-}5a^{-12}b^{-16}1^{-4}} {3a^{-16} b^{-24} c^{-12}}$
You forgot about the coefficients . . .

I'd reduce "inside" first: . $\left[\frac{\text{-}5}{3ab^2c^3}\right]^{-4} \;=\;\frac{(\text{-}5)^{-4}}{(3)^{-4}(a)^{-4}(b^2)^{-4}(c^3)^{-4}}$
. . $= \;\frac{3^4}{(\text{-}5)^4a^{-4}b^{-8}c^{-12}} \;=\;\frac{81a^4b^8c^{12}}{625}$

February 2nd 2009, 05:31 PM #3
Thank you, that was very helpful.

February 15th 2009, 06:33 AM #4
Your first answer is very correct.
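The final result above, 81a⁴b⁸c¹²/625, can be spot-checked numerically by plugging arbitrary nonzero values into both sides (a quick sanity check of my own, not part of the original thread):

```python
# Spot-check: [(-5 a^3 b^4 c^0)/(3 a^4 b^6 c^3)]^(-4)  ==  81 a^4 b^8 c^12 / 625
a, b, c = 2.0, 3.0, 1.5

lhs = ((-5 * a**3 * b**4 * c**0) / (3 * a**4 * b**6 * c**3)) ** (-4)
rhs = 81 * a**4 * b**8 * c**12 / 625
```

Both sides reduce to (3ab²c³/5)⁴, so they agree up to floating-point rounding for any choice of nonzero a, b, c.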
Here's the question you clicked on:

okay, so what is the answer to (sqrt10xthirdroot10)^8 ? • one year ago
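No answer appears in the thread. Assuming the question means (√10 · ∛10)^8 (my reading of "sqrt10xthirdroot10"), the exponents simply add: (10^(1/2) · 10^(1/3))^8 = 10^((5/6)·8) = 10^(20/3). A quick numeric check:

```python
value = (10 ** 0.5 * 10 ** (1 / 3)) ** 8   # (sqrt(10) * cbrt(10)) ** 8
exact = 10 ** (20 / 3)                     # exponent rule: (1/2 + 1/3) * 8 = 20/3
```

So the answer is 10^(20/3), roughly 4.64 million.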
Compare from-to, from-by and by animateTransform rotate (around zero) with values animateTransform. The main indication for a failed test is the appearance of red.

SMIL 2 specifies how from-to, from-by and by animations have to be converted into values animations; therefore they have to behave the same as the related values animation. The conversion is as follows:

used → converted
from="a" to="b" → values="a;b"
from="a" by="b" → values="a;a+b"
by="a" → values="0;a" additive="sum"

(by and from-by animations only have a meaning if values can be added somehow. '0' is used as a general symbol for the neutral element of addition for the related attribute; this means 0 + a = a + 0 = a. And '0' is not equal to the symbol '1' as the basic unit of the related attribute; '0' is a predecessor of '1' in the related attribute space. For animateTransform the '0' is the same as the zero matrix, not the unity or identity matrix. For the rotate type this is a skew with an angle of 0.) Deviating from SMIL 2, in SVG it is specified that for animateTransform the animation effect has to be postmultiplied to the underlying value if the animation is additive. Note that for two additive rotate angles a, b the resulting angle is not a+b but atan(tan(a)+tan(b)).

The from-to, from-by and by animations are applied to animateTransform of the rotate type on different blue stroked paths and are compared with the related values animations, including additive and cumulative behaviour, for underlying red paths. Additionally, underlying dark red paths simulate the same behaviour with animateMotion, always using the defaults additive="replace" and accumulate="replace". The blue paths cover all red paths; therefore, if something red becomes visible, an error has occurred. Because fill is not set and therefore defaults to "remove", the final value is the value at 2s given with a simple values animateTransform, which is not very interesting for the test.
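The conversion table above can be phrased as a small helper. This is an illustrative sketch of my own (not part of the test itself), with '+' kept symbolic because addition is attribute-specific; for the rotate type it is the postmultiplication rule described above, not numeric addition of angles:

```python
def to_values(from_=None, to=None, by=None):
    """Convert a SMIL from-to / from-by / by animation into the equivalent
    'values' form, following the conversion table above. Operands are kept
    as strings; 'x+y' denotes the attribute-specific addition of x and y."""
    if from_ is not None and to is not None:
        return {"values": f"{from_};{to}"}
    if from_ is not None and by is not None:
        return {"values": f"{from_};{from_}+{by}"}
    if by is not None:
        return {"values": f"0;{by}", "additive": "sum"}
    raise ValueError("need from/to, from/by, or by")
```

For example, `to_values(by="a")` yields the `values="0;a" additive="sum"` form, mirroring the third row of the table.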
file pwts4.f
for generalized gaussian quadratures for exp(a*x), complex a
by N. Yarvin and V. Rokhlin

file wts500.f
for generalized gaussian quadratures for exp(a*x), real a
by N. Yarvin and V. Rokhlin

file swts500.f
for generalized gaussian quadratures for I0(mu*x)exp(a*x)
by N. Yarvin and V. Rokhlin

file vwts.f
for generalized gaussian quadratures for J0(r*x)*exp(z*x)
by N. Yarvin and V. Rokhlin
130 lbs in stone and pounds

You asked: 130 lbs in stone and pounds
the mass 9 stone and 4 pounds
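The conversion behind the answer is plain arithmetic: one stone is 14 pounds, so divide by 14 and keep the remainder:

```python
def lbs_to_stone_and_pounds(pounds):
    """Convert a weight in pounds to (stone, pounds); 1 stone = 14 lb."""
    return divmod(pounds, 14)

stone, lbs = lbs_to_stone_and_pounds(130)   # (9, 4): 9 stone 4 pounds
```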
Solution-adaptive moving mesh solver for geophysical flows / Doctoral Thesis, Ludwig-Maximilians-Universität München, 2011.

… interaction of various scales of motion. The accurate numerical representation of such flows is limited by the available number of mesh points covering the domain of interest. Numerical simulations applying uniformly distributed grid cells waste mesh points in regions of large motion scales whereas coexisting small-scale processes cannot be adequately resolved. The current thesis offers the design, implementation, and application of an adaptive moving mesh algorithm for dynamically variable spatial resolution to the numerical simulation of nonlinear geophysical flows. For this purpose, the established geophysical flow solver EULAG was modified and extended. The non-hydrostatic, anelastic equations of EULAG are rigorously implemented in time-dependent generalised coordinates. This setting enables moving mesh adaptation by solving the equations in a straightforward approach developed in this thesis. The methodological development of the new adaptive solver is divided into three tasks: (i) The flux-form Eulerian advection scheme MPDATA employed in EULAG was extended. For transport equations in conservative form, a mass conservation law enters naturally and implies a unique compatibility condition for the solution algorithm. Here, extensions of the Eulerian MPDATA integration were developed, implemented and tested to provide full compatibility with the generalised anelastic mass conservation law (GMCL) under adaptive moving meshes. (ii) A machinery performing the numerical generation of an adaptive moving curvilinear mesh was designed and implemented in EULAG. For this purpose, an auxiliary set of parabolic moving mesh partial differential equations (MMPDEs) was employed to redistribute the existing mesh cells temporally. The solutions of the MMPDEs provide the mesh coordinates and the adaptation properties of the generated moving mesh (e.g.
local mesh density) are controlled by a monitor function that varies horizontally and temporally. The form of the monitor function depends, inter alia, on the flow state. (iii) An efficient coding of the mesh adaptation machinery was successfully incorporated into the computational framework of EULAG. For this task, the approximation of the advective contravariant mass flux in MPDATA was developed and implemented in EULAG so as to minimise errors from incompatibility with the GMCL. The developed adaptive moving mesh solver was thoroughly investigated by simulating a number of relevant atmospheric problems. The advection of a passive tracer in a two-dimensional shear flow demonstrated the capability of the solver to automatically adapt the local resolution to the evolving small-scale filamentary structures. For this flow, the expected advantage of the mesh adaptation was achieved: the computing time (and the error) was reduced significantly by a factor of 26 (by 20%) compared to high-resolution uniform mesh computations. Another advantage of adaptive simulations is the appearance of new physical phenomena. Here, instabilities occurring at the interface of an idealised rising thermal with the ambient air could be simulated in much greater detail. The representation of the associated mixing processes is of direct relevance for simulating cumulus convection in realistic atmospheric flows. There, the process of fine-scale mixing, i.e. entrainment and detrainment, between the cloudy and the ambient air could be better resolved by mesh adaptation.
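The role of the monitor function can be illustrated with its simplest static analogue: one-dimensional equidistribution, in which mesh points are placed so that every cell carries an equal share of the integrated monitor function. The thesis's MMPDEs evolve the mesh in time and in two horizontal dimensions; the stand-alone sketch below, with an arbitrary made-up monitor, only demonstrates the density-control principle:

```python
def equidistribute(monitor, n_cells, a=0.0, b=1.0, n_quad=2000):
    """Place n_cells+1 mesh points on [a, b] so every cell holds an equal
    share of the integral of the monitor function (midpoint-rule quadrature
    on a fine background grid, then inversion of the cumulative integral)."""
    h = (b - a) / n_quad
    xs = [a + (i + 0.5) * h for i in range(n_quad)]
    cum = [0.0]
    for x in xs:                         # cumulative integral of the monitor
        cum.append(cum[-1] + monitor(x) * h)
    total = cum[-1]
    mesh, j = [a], 0
    for k in range(1, n_cells):          # invert at equal increments
        target = total * k / n_cells
        while cum[j + 1] < target:
            j += 1
        frac = (target - cum[j]) / (cum[j + 1] - cum[j])
        mesh.append(a + (j + frac) * h)  # linear interpolation in cell j
    mesh.append(b)
    return mesh

# A uniform monitor yields a uniform mesh; a monitor peaked near x = 0.5
# concentrates mesh points there, at the expense of the quiet regions.
uniform_mesh = equidistribute(lambda x: 1.0, 10)
peaked_mesh = equidistribute(lambda x: 1.0 + 50.0 * max(0.0, 0.1 - abs(x - 0.5)), 20)
```

In the adaptive solver this redistribution happens continuously in time, driven by the evolving flow state rather than a fixed monitor.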
Focussing the adaptation to the developing frontal zone indicated the excitation of internal gravity waves which were nearly absent in simulations applying a uniform mesh with the same number of mesh points. As before, significant savings in computing time (at least a factor of 2) compared to equivalent results of a high-resolution uniform mesh computation were achieved for the three-dimensional simulations. A cumbersome side-effect of the successful and efficient numerical simulations was the extremely time-consuming tuning of the adaptation parameters, especially of the monitor function. So far, only a very limited number of monitor functions were tested. Systematic research will yield improved specifications of the monitor function for distinct atmospheric flows. In summary, the results obtained in this thesis show the capability and potential of adaptive moving mesh methods to simulate multiscale atmospheric flows with higher numerical accuracy and a broader coverage of motion scales. However, the adaptive moving mesh method adds substantial user complexity to the modelling system EULAG.
1. Introduction
2. Methodology
2.1. System Studied/Monte Carlo Algorithm
2.2. Analysis of Local Structure, Voronoi Cell and Characteristic Crystallographic Element Norm
3. Results and Discussion
3.1. Phase Transition of Athermal Systems: Effect of Chain Connectivity
3.2. Crystal Morphologies in Ordered Packings of Athermal Chains
3.3. Evolution of the Voronoi Cell during Hard-Sphere Crystallization
3.4. Entropic Origins of Crystallization in Hard-Sphere Chain Packings
4. Conclusions
Acknowledgments
References
Figures

Crystallization and phase transitions in general play a key role in many processes related, among others, to material engineering, physics, chemistry and biology. Advances in crystallography, mainly through X-ray diffraction measurements, have provided significant information on crystal structures. However, how such crystals nucleate and grow and how processing history further affects the corresponding ordered morphologies remain open topics of intense scientific debate. While experimental, theoretical and modeling advances constantly enrich our fundamental understanding of the phenomenon in a wide range of physical systems [1–7], a plethora of key aspects remain unknown, especially with respect to the microscopic origins of crystallization. Computer simulations can greatly aid in this direction through systematic studies on ideal atomic and molecular systems under controlled conditions, which remain unattainable in conventional approaches. Such an “in silico” modeling approach, subject to obvious advantages and disadvantages compared to experiments, has been shown to be an invaluable research tool in the analysis of the highly complex process of crystallization. Deep in the heart of numerical simulations lies the molecular model, which determines the level of detail and the corresponding approximations with respect to the way atoms and molecules are represented.
Atomistic models incorporate highly detailed force fields to describe interactions between atoms, while coarse-grained ones sacrifice detailed information in favor of computational efficiency. In one of the simplest possible representations, atoms, either as monomeric entities or as part of molecular species, are treated as non-overlapping hard spheres. The hard-sphere model is obviously void of any kind of chemical information. However, because of its simplicity it stands as an invaluable simulation tool: It is accessible to analytical approaches, requires minimal computational resources and time, and can thus be employed under conditions which remain inaccessible to more detailed molecular models. Furthermore, the hard-sphere model allows us to discriminate and accurately identify the different governing factors (for example density or entropy) that affect various phenomena and physical processes. Ideally the knowledge gleaned from such simplified models could shed light onto the fundamental role of analogous mechanisms in much more complex physical and biological applications. Thus, it is not surprising that during the last decades an ever-growing body of simulations has successfully employed the hard-sphere model in studies of systems that range from colloids, microgels and granular materials to synthetic and biological polymers. The study of how objects of different shapes and sizes arrange in a multidimensional space and of the corresponding packing morphologies has been in the spotlight of research since early historical times. During the last decades pioneering scientific contributions have been achieved in general packing with modeling studies having greatly benefited by the continuous advances in computer hardware and software. 
Almost four centuries ago, Kepler conjectured that in three dimensional space the densest hard-sphere packing is that of a face centered cubic (fcc) lattice, a long-standing geometrical problem that has been addressed only recently in a series of papers by Hales and coworkers [8–10]. Equally interesting and perhaps more complex in its mathematical and physical formulation is the analogous problem of random packing: What is the maximum achievable density in the absence of order? Which are the salient characteristics of this state that could serve as a fingerprint for its identification? Under which conditions does an assembly of spheres transit between the amorphous and ordered phases? Many key aspects of random close (densest) packing of spheres were revealed by the pioneering experimental studies of Bernal and collaborators [11–14] while a rigorous definition of the maximally random jammed (MRJ) state has been provided more recently [15–18]. Over the years significant advances have provided a wealth of information about the state of jamming in a wide range of model physical systems [19–35]. Regarding phase transition in athermal packings, Onsager was the first to predict an anisotropic-nematic transition in hard rods as a result of the increase in entropy caused by ordering [36]. Crystallization in hard-sphere systems was initially reported by the independent simulations of Alder and Wainwright [37] and Wood and Jacobson [38]. Frenkel and collaborators studied and analyzed in detail the entropic mechanism behind phase transition in colloidal systems and model athermal packings at various conditions [39–51]. For packings of monomeric hard spheres of uniform size the calculated phase diagram identifies the freezing and melting points at ϕ[F] = 0.494 and ϕ[M] = 0.545, respectively [52]. It is now well established that above the melting point and given sufficient time an initially random packing of monodisperse hard spheres transits to the crystal phase [53–56]. 
Furthermore, crystallization is strongly affected by, among other factors, size polydispersity [39,57–61], microgravity [62,63], shear stress [64,65] and the presence of interfaces [66–69]. With respect to the morphology of the crystal phase, free energy calculations have shown that the face centered cubic (fcc) is thermodynamically more stable than the hexagonal close packed (hcp) lattice [45,70], albeit by a small margin [71]. However, in accordance with Ostwald’s rule of stages [72], the formation of a random hexagonal close packing (rhcp) is regularly observed in experiments on colloidal systems [63,73–75] and in simulations of monomeric hard spheres [55,56,76] since such stacking is configurationally closer to the random arrangement than the pure fcc and hcp lattices. The presence of defects in the crystal boundaries, the very small differences in the free energy of the competing crystal structures and the very slow dynamics at such high densities hinder the formation of a perfect crystallite. From the modeling perspective these kinetic hindrances are aggravated by computational limitations in system size and simulation time [56]. Random hard-sphere packings are further characterized by short-range order in the form of polytetrahedral structures with fivefold symmetry [35,56,77–79]. In crystal phases such structural morphologies are strongly correlated with multiply twinned planes at crystallite boundaries [55,56,76]. The evolution of fivefold local symmetry during crystal nucleation and growth in dense packings of monomeric hard spheres has been studied extensively through event-driven Molecular Dynamics (edMD) [56] and Monte Carlo (MC) simulations [1]. Based on these modeling results, a microscopic interpretation of hard-sphere crystallization proposes a competition between short-range order and crystallization in dense packings, where the formation of long-lived fivefold structures could frustrate the growth of crystallites [1,55,56]. 
Recent advances in experimental and simulation techniques have contributed to the detailed analysis and characterization of the phase behavior and self-assembly in packings of objects with highly complex shapes [32,80–86]. Among these systems lie the athermal polymer packings, consisting of chains of hard-sphere monomers. Macromolecules are characterized by a wide spectrum of characteristic time and length scales, which render their study very challenging from the modeling perspective, especially in atomistic detail. Furthermore, holonomic constraints applied to constituent monomers by chain connectivity, and inter-chain topological constraints in the form of entanglements, render the dynamical, conformational, mechanical and rheological properties of polymers distinctly different from those of monomeric analogs. A pertinent question with respect to dense packings of athermal polymers is how chain connectivity affects the maximally random jammed (MRJ) state and the disorder–order transition (crystallization) of hard spheres at high volume fractions. During the last years a growing body of research work has focused on such open topics related to the intrinsic features of athermal polymer packing and to the self-assembled morphologies of associated systems [33,87–97]. In the present manuscript we review our latest results from extensive Monte Carlo (MC) simulations on the crystallization in dense packings of freely-jointed chains of tangent hard spheres of uniform size [98–100]. In particular, we describe in detail the modeling methodology for the creation of athermal polymer configurations and the metrics adopted to characterize local order. Particular emphasis is placed on identifying the entropic origins of the phase transition and on comparing with the corresponding trends in monomeric analogs [55,56].
The paper is organized as follows: Section 2 describes the systems studied, the MC algorithm and the novel descriptors introduced to quantify the local environment of each site. Section 3 presents the main results on the disorder-order phase transition, the self-assembly of crystallites and the corresponding structural differences with respect to the random phase. The manuscript concludes with Section 4, where the key findings are summarized along with a description of potential applications to the self-assembly of crystals in more complex systems. Athermal polymer packings consist of freely jointed, linear chains of tangent hard spheres. Monomers are treated as non-overlapping spheres of diameter σ. Tangency implies that the bond length l is equal to the sphere diameter. This condition is numerically imposed within a tolerance in bond length. Comparisons carried out by allowing the bond length to fluctuate in the interval l ∈ [σ, σ + 10^−4] [91] and in the interval l ∈ [σ, σ + 10^−8] showed no difference in the crystal growth and nucleation as well as in the self-assembly of the ordered morphologies. The freely-jointed model allows for full flexibility in the conformations as there are no constraints in bond bending and torsion (dihedral) angles. However, it has been shown that, due to strong excluded-volume interactions, bond bending angles and torsion angles tend to adopt specific geometric arrangements, which become increasingly more favorable as packing density increases [88,89,91–93]. Such a conformational tendency in bonded geometry leads to major changes in the long-range characteristics of chains: Their size shrinks significantly once the marginal scaling regime is reached [89,93,94]. As a consequence, in the vicinity of the MRJ state (concentrated regime), polymer dimensions become so collapsed that a significant fraction of chains form closed loops (cyclic long-range conformations) [88].
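The tangency and excluded-volume conditions described above can be made concrete with a short validity check. The sketch below is purely illustrative (the names SIGMA and BOND_TOL are ours, with the tolerance set to the stricter of the two intervals quoted above); it is not the code used in the original simulations:

```python
import math

SIGMA = 1.0          # hard-sphere diameter (sets the unit of length)
BOND_TOL = 1.0e-8    # tangency tolerance: bond length l in [sigma, sigma + tol]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def chain_is_valid(chain):
    """Check the freely-jointed tangent hard-sphere constraints for one chain."""
    # Tangency: every bond length l must satisfy sigma <= l <= sigma + tol.
    for i in range(len(chain) - 1):
        l = dist(chain[i], chain[i + 1])
        if not (SIGMA <= l <= SIGMA + BOND_TOL):
            return False
    # Excluded volume: no two non-bonded monomers may overlap (bonded
    # pairs are tangent, so they pass this check automatically).
    for i in range(len(chain)):
        for j in range(i + 2, len(chain)):
            if dist(chain[i], chain[j]) < SIGMA:
                return False
    return True
```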
The employed Monte Carlo scheme consisted of the following mix of moves: (i) reptation (10%); (ii) end-mer rotation (10%); (iii) configurational bias (20%); (iv) inter-chain reptation (25%); (v) internal libration (34.98%); (vi) simplified end-bridging (sEB, 0.1%) and (vii) simplified intramolecular end-bridging (sIEB, 0.1%), where the percentages in parenthesis denote the attempt probabilities of each move. All local moves (i–v) are executed in a configurational bias pattern [101–103], according to which multiple trial positions, whose number increases with volume fraction, are attempted for each displaced site. This algorithm significantly increases the average computational time per MC step, but, in contrast to the conventional MC, it guarantees short-range equilibration of chains even at packing densities well above the melting point [91]. Long-range equilibration is achieved by the pair of chain-connectivity altering moves sEB and sIEB [91], which are based on the original end-bridging (EB) move [104] for atomistic polymer systems. Based on the tangency condition, sEB and sIEB proceed by deleting and forming bonds between properly selected pairs of spheres instead of displacing trimers [91,105]. Through this rapid re-arrangement long-range equilibration is achieved within modest computational time even in the close vicinity of the MRJ state; in fact the acceptance rate and accordingly the performance of the chain-connectivity altering moves increase with concentration [90,91]. In addition, the sEB and sIEB moves allow for polydispersity in chain lengths to be considered, which is controlled by casting the simulations in the n[at]n[ch]VTμ ensemble, where n[at] is the total number of spheres, n[ch] is the number of chains, V is the volume of the simulation cell, T is temperature and μ is the spectrum of relative chemical potentials of all chain species except two which are singled out as reference species [91,104]. 
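For hard spheres every non-overlapping configuration carries equal Boltzmann weight, so the configurational-bias selection step in moves (i)–(v) reduces to a uniform choice among the overlap-free trial positions, with the Rosenbluth weight given by the fraction of valid trials. A minimal sketch (helper names are ours, not the authors' implementation):

```python
import random

def cbmc_pick(trial_positions, overlaps):
    """Configurational-bias selection for an athermal (hard-sphere) system.

    Every non-overlapping trial has equal Boltzmann weight, so the step
    reduces to choosing uniformly among the valid trials.  Returns the
    chosen position and the Rosenbluth weight W = (# valid trials) / k.
    """
    valid = [p for p in trial_positions if not overlaps(p)]
    if not valid:
        return None, 0.0          # dead end: the move is rejected outright
    return random.choice(valid), len(valid) / len(trial_positions)
```

In a full configurational-bias scheme [101–103] the assembled move is then accepted with probability min(1, W_new/W_old), where the weights are products of the per-site factors above.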
In our simulations two different chain length distributions were implemented: A uniform one in the closed interval [N[av] (1 − Δ), N[av] (1 + Δ)], where N[av] is the average chain length and Δ is the reduced half width of the distribution divided by N[av], and a most probable (Flory) one with the shortest allowed chain length set at N[min]. All simulations were executed in cubic cells with periodic boundary conditions applied in all dimensions. Three different polymer systems were modeled, each one containing a total of 1200 hard spheres: (i) N[av] = 12, Δ = 0.5; (ii) N[av] = 12, N[min] = 3 and (iii) N[av] = 24, Δ = 0.5. Additional simulations conducted with simulation cells of 3000 sites to investigate the effect of system size on crystallization and on the formation of ordered morphologies showed no appreciable qualitative and quantitative differences. Initial configurations were generated at very low volume fractions using fully equilibrated, atomistic polyethylene structures [106–110] as templates by performing short equilibration steps to ensure the absence of overlaps between hard spheres. These dilute cells, filled with overlap-free athermal polymer chains, were then used as initial structures in MC simulations with isotropic shrinkages of the cell dimensions being attempted at frequent intervals until a target volume fraction was reached. Such cell volume reductions were accompanied by an affine repositioning of chains based on the relative position of their end with respect to the box origin and the amplitude of the attempted box shrinkage. Configurations of the N[av] = 12 system were further generated at selected packing densities by splitting all chains of a N[av] = 24 configuration in half to guarantee that the structural characteristics of the initial random packings and the phase transition were not affected by the generation protocol of the modeling procedure. 
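The two chain-length distributions can be sketched as simple samplers. The sketch below is illustrative only; in the actual n[at]n[ch]VTμ simulations the distribution is controlled through the spectrum of relative chemical potentials rather than by direct sampling, and the rejection step below slightly raises the mean of the truncated Flory distribution:

```python
import random

def sample_uniform_length(n_av, delta, rng=random):
    # Uniform distribution on the closed interval [n_av*(1-delta), n_av*(1+delta)].
    lo = round(n_av * (1.0 - delta))
    hi = round(n_av * (1.0 + delta))
    return rng.randint(lo, hi)

def sample_flory_length(n_av, n_min, rng=random):
    # Most-probable (Flory) distribution: geometric with mean n_av,
    # resampled until the minimum allowed chain length n_min is respected.
    p = 1.0 - 1.0 / n_av   # chain-continuation probability
    while True:
        n = 1
        while rng.random() < p:
            n += 1
        if n >= n_min:
            return n
```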
In production simulations system snapshots and ensemble statistics were recorded every 2 × 10^5 MC steps, while the total simulation time exceeded 1 × 10^10 steps at the higher densities. Due to the very long runs required to observe crystallization at volume fractions in the vicinity of the MRJ state, modeling studies were necessarily limited to packing densities of ϕ = 0.56, 0.58, 0.60 and 0.61 above the melting point. More details on the algorithm and the procedure to generate and equilibrate random packings of athermal polymer chains can be found in [92]. For comparison purposes parallel sets of simulations for analogous monomeric systems were carried out by event-driven Molecular Dynamics (edMD). The edMD algorithm used was a minor modification of the conventional edMD technique, which proceeds on a simple collision-by-collision basis until a preset number of collisions is reached [111]. Initial configurations of monomeric hard spheres were generated by deleting all bonds from random chain packings and by subsequently performing an edMD equilibration. Because the observation of crystallization in monomers requires much shorter simulations than in chain systems, a larger set of eight statistically uncorrelated MD trajectories of monomeric samples was produced at each packing density. Once a large number of system configurations (frames) is collected, the analysis proceeds by a detailed characterization of the local environment around each site. An accurate and highly discriminating descriptor is required to quantitatively describe the degree of randomness as well as the appearance and propagation of ordered nuclei corresponding to specific crystal structures. Existing descriptors of local structure include the widely used pair radial distribution function, g(r) [111], and a set of rotationally invariant measures, which are defined as combinations of spherical harmonics [112].
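The elementary operation of an edMD scheme is the prediction of the next collision for a pair of spheres. The sketch below shows this standard pairwise event calculation [111] in our own formulation (illustrative, not necessarily the code used in the study):

```python
import math

def collision_time(r1, v1, r2, v2, sigma=1.0):
    """Time until two hard spheres of diameter sigma collide, or None if
    they never do: the smaller positive root of |dr + t*dv| = sigma."""
    dr = [a - b for a, b in zip(r1, r2)]
    dv = [a - b for a, b in zip(v1, v2)]
    b = sum(x * y for x, y in zip(dr, dv))
    if b >= 0.0:                 # centers are not approaching
        return None
    dv2 = sum(x * x for x in dv)
    dr2 = sum(x * x for x in dr)
    disc = b * b - dv2 * (dr2 - sigma * sigma)
    if disc < 0.0:               # the spheres miss each other
        return None
    return (-b - math.sqrt(disc)) / dv2
```

An event-driven loop simply advances all spheres ballistically to the earliest such event, resolves the elastic collision, and repeats until the preset number of collisions is reached.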
g(r) provides detailed information on the radial characteristics of the atomic or particulate system under study, while rotationally invariant measures detect orientational deviations with respect to perfect local order. Recently, we qualitatively and quantitatively analyzed the local structure of athermal packings through a novel scheme that consists of two main steps: (i) Identification of the local environment around each sphere through a Voronoi tessellation and by measuring the shape and size of the corresponding Voronoi cell, and (ii) application of a novel structural descriptor based on the concept of the characteristic crystallographic element (CCE), as used in crystallography [113,114]. In structural characterization via Voronoi tessellation the set of neighbors closer to a reference site than to any other site is identified. This task was performed with the qhull algorithm [115,116], which yields full information about the vertices, edges and faces of the Voronoi polyhedron around every site. Once the tessellation is completed, the corresponding Voronoi cells are constructed. In the simplest approach the local density around each hard sphere is calculated as the inverse of the volume of the corresponding Voronoi polyhedron [117]. A more detailed topological analysis can be performed with respect to the shape and size of each Voronoi cell through the calculation of the mass moment of inertia tensor I, with all vertices being treated as equivalent point unit masses:

$$\mathbf{I} = \frac{1}{n_{ver}} \sum_{i=1}^{n_{ver}} \left( r_i^2 \, \boldsymbol{\delta} - \mathbf{r}_i \mathbf{r}_i \right) \qquad (1)$$

where n[ver] is the number of vertices of the polyhedron, r[i] is the position vector of vertex i with respect to the center of mass of the polyhedron and δ is the unit second order tensor. Equation 1 is written in dimensionless form. The mass moment of inertia tensor provides a quantitative description of the shape of a rigid body and of the spatial distribution of its mass [89,118–120].
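Given the vertices of a Voronoi cell (obtained, for example, from a qhull-based tessellation), the dimensionless inertia tensor of Equation 1 reduces to a short loop. A minimal pure-Python sketch (illustrative):

```python
def inertia_tensor(vertices):
    """Dimensionless mass moment of inertia tensor of a Voronoi cell:
    vertices treated as equal unit point masses, with positions taken
    relative to the cell's center of mass."""
    n = len(vertices)
    cm = [sum(v[k] for v in vertices) / n for k in range(3)]
    I = [[0.0] * 3 for _ in range(3)]
    for v in vertices:
        r = [v[k] - cm[k] for k in range(3)]
        r2 = sum(x * x for x in r)
        for a in range(3):
            for b in range(3):
                # (r^2 * delta_ab - r_a * r_b), averaged over vertices
                I[a][b] += ((r2 if a == b else 0.0) - r[a] * r[b]) / n
    return I
```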
The internal, co-moving principal axes system of the Voronoi polyhedron is defined by the normalized eigenvectors (e[1], e[2], e[3]). Once the Voronoi tessellation is completed, the internal principal axes system is determined for each Voronoi cell from the coordinates of its vertices. The three real eigenvalues of the inertia tensor, I[1], I[2] and I[3] (I[1] ≥ I[2] ≥ I[3]), correspond to the principal moments of inertia. The inertia tensor provides useful information on the shape and size of the Voronoi cell and accordingly on the local environment around each hard sphere. Based on the eigenvalues, a coarse-grained ellipsoid can be constructed with the lengths of the semiaxes being calculated as:

$$L_1 = \sqrt{\frac{5}{2} \left( I_2 + I_3 - I_1 \right)} \qquad (2)$$

with semiaxis lengths L[2] and L[3] being calculated in an analogous fashion as in Equation 2 under cyclic permutation of the indices. As global shape measures of the coarse-grained ellipsoid the following were computed: asphericity

$$b = \frac{1}{2} \left( I_1 + I_2 \right) - I_3 \qquad (3)$$

acylindricity

$$c = I_1 - I_2 \qquad (4)$$

and relative shape anisotropy

$$k^2 = 4 \left( 1 - 3 \, \frac{I_1 I_2 + I_2 I_3 + I_3 I_1}{\left( I_1 + I_2 + I_3 \right)^2} \right) \qquad (5)$$

These measures are defined so that the lower the values of b, c and k^2, the closer the resemblance to spherical, cylindrical and isotropic shapes, respectively. Once a system configuration was recorded in the course of the MC or edMD simulations, a Voronoi tessellation was performed to identify the characteristics of the corresponding polyhedra, including a shape analysis based on asphericity, acylindricity and relative shape anisotropy. These global shape measures for each individual Voronoi cell can be directly compared with the analogous measures for the trapezo-rhombic dodecahedron and the rhombic dodecahedron, which are the characteristic Voronoi polyhedra arising from the tessellation of hcp and fcc lattices, respectively.
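From the sorted eigenvalues of the inertia tensor, the semiaxes and global shape measures follow directly. A sketch (illustrative; the eigenvalue decomposition itself is assumed done, e.g., by a standard linear-algebra routine):

```python
import math

def semiaxes(I1, I2, I3):
    """Semiaxis lengths of the coarse-grained ellipsoid from the sorted
    principal moments I1 >= I2 >= I3 (Equation 2 and its cyclic permutations)."""
    L1 = math.sqrt(2.5 * (I2 + I3 - I1))
    L2 = math.sqrt(2.5 * (I3 + I1 - I2))
    L3 = math.sqrt(2.5 * (I1 + I2 - I3))
    return L1, L2, L3

def shape_measures(I1, I2, I3):
    """Asphericity b, acylindricity c and relative shape anisotropy k^2;
    all three vanish for a perfectly isotropic (spherical) cell."""
    b = 0.5 * (I1 + I2) - I3
    c = I1 - I2
    s = I1 + I2 + I3
    k2 = 4.0 * (1.0 - 3.0 * (I1 * I2 + I2 * I3 + I3 * I1) / (s * s))
    return b, c, k2
```

For a solid unit sphere (I1 = I2 = I3 = 2/5) all semiaxes equal 1 and all three shape measures vanish; for a thin rod (I1 = I2, I3 = 0) the relative shape anisotropy reaches its maximum value of 1.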
Thus, a systematic analysis of the shape measures of the Voronoi cells at each instance provides a reliable estimate of the current state of the athermal packing as well as of a possible phase transition (crystallization). In addition, a change in the local environment around each sphere, quantified by the Voronoi global shape measures, can be directly related to changes in translational entropy, quantified in turn by sphere mobility. The second descriptor of local structure, the characteristic crystallographic element (CCE) norm for a given configuration of point-like atoms around a reference atom j, defined by the corresponding position vectors, quantifies both the orientational and radial similarity of this set of sites with respect to a specific ordered structure. This reference crystal structure is characterized by a unique, and thus distinguishing, set of crystallographic elements each of which, in turn, consists of a set of distinct elements of the corresponding point symmetry group. The complete mathematical formulation of the CCE norm of site j with respect to a reference crystal structure X, denoted as ɛ[j]^X, can be found in [89,98,100]. Algorithmically, the method proceeds by identifying the set of orientation axes in the internal coordinate system that minimizes the value of ɛ[j]^X. For an ordered site j of perfect X crystal structure ɛ[j]^X = 0; any deviation will lead to CCE values greater than zero. By construction and due to the highly discriminating nature of the CCE norm, a site with high similarity to a given ordered structure X (ɛ[j]^X → 0) will necessarily possess a high norm value with respect to any alternative Y crystal structure (ɛ[j]^Y ≫ 0).
Once the minimum CCE norm is calculated for each site in the system, an order parameter with respect to perfect order X can be calculated as

$$S^X = \int_0^{\varepsilon^{thres}} P\left( \varepsilon^X \right) d\varepsilon^X \qquad (6)$$

where P(ɛ^X) is the probability distribution function of the CCE norm ɛ^X and ɛ^thres is a threshold value below which a site is considered to possess X-like order. Trial tests suggest that a value of ɛ^thres = 0.245 is adequately small to discriminate between different crystal types but also large enough to correctly identify the disorder-order transition in initially random packings and the emergence of specific crystal morphologies. The hcp and fcc crystals are the two competing structures that arise when dense hard-sphere packings crystallize. Thus, the CCE norms (ɛ^hcp, ɛ^fcc) and the corresponding order parameters (S^hcp, S^fcc) were calculated with respect to these ordered structures. As the CCE-based analysis is highly discriminating between different crystal lattices, the degree of ordering τ^c can be estimated as the total fraction of sites with either hcp or fcc structural similarity (τ^c = S^hcp + S^fcc). Additional measurements were conducted to detect sites with fivefold local symmetry, a structural motif which is favored at high packing densities and constitutes an alternative local arrangement to hcp and fcc crystals [55,56]. We should note that while the CCE-based descriptor is used here to compare with the hcp, fcc and fivefold symmetries, by incorporating the proper distinguishing set of crystallographic elements and operations, it can be used to identify any emerging crystal structure. As in the case of the Voronoi shape measure characterization, application of the CCE norm allows for an accurate description of the local environment around each site and for a precise identification of a potential disorder-order phase transition at high volume fractions.
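Numerically, the order parameter defined above is just the fraction of sites whose CCE norm falls below the threshold, and the degree of ordering follows by summing the hcp and fcc contributions. A sketch (illustrative helper names):

```python
EPS_THRES = 0.245   # CCE norm threshold adopted in the study

def order_parameter(cce_norms, eps_thres=EPS_THRES):
    """Discrete estimate of S^X: the fraction of sites whose CCE norm
    with respect to reference structure X lies below the threshold."""
    return sum(1 for e in cce_norms if e < eps_thres) / len(cce_norms)

def crystallinity(eps_hcp, eps_fcc, eps_thres=EPS_THRES):
    """Degree of ordering tau^c = S^hcp + S^fcc.  The CCE norm is highly
    discriminating, so no site contributes to both terms at once."""
    return (order_parameter(eps_hcp, eps_thres)
            + order_parameter(eps_fcc, eps_thres))
```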
The analysis of local structure based on the concepts of the Voronoi cells and of the CCE-based norm was performed at equally spaced frames of the long MC (or edMD) trajectories over all computer-generated samples and at all packing densities. Figures 1 and 2 present the CCE norm distribution of sites for the N = 12 chain system at ϕ = 0.56 and 0.61, respectively. The CCE norm was calculated with respect to the hcp, fcc and fivefold symmetries and is presented here for two different frames, one very close to the beginning (left panel) and one at the end (right panel) of the MC simulation. The vertical dotted lines denote the CCE-based threshold (ɛ^thres = 0.245) below which a site is assigned to one of the reference structures (hcp, fcc or fivefold). According to Equation 6, the fraction of sites with specific X local order corresponds to the part of the CCE-based distribution (P(ɛ^X)) that lies below the threshold value. According to the data reported in Figure 1, at a volume fraction of ϕ = 0.56 there exist no appreciable differences in norm distributions for all three different reference structures (hcp, fcc and fivefold). Furthermore, the fraction of sites with highly ordered local structure, in other words the fraction of sites with hcp- or fcc-norms below the threshold value, is very small and does not change throughout the MC simulation. Thus, it can be safely concluded that the athermal chain packing (N = 12) shows no signs of phase transition and remains in the original amorphous (random) state. The situation is quite different at the higher density (ϕ = 0.61). While the shapes of the initial CCE distributions are similar to those at ϕ = 0.56 for all three structures, the same is not true at later stages. A very clear shift of the hcp and fcc CCE distributions to much lower values is evident, with many sites possessing CCE norms well below the critical threshold. In parallel, the shape of the corresponding CCE distributions is significantly altered.
With respect to the fivefold local structure, the distribution becomes narrower and the average shifts to higher values, which implies that in the final state for the N = 12 polymer system there exist no sites with fivefold local symmetry. In parallel, the distributions of the hcp and fcc norms adopt a bimodal shape with peaks at low and high values stemming from the discriminating nature of the CCE norm: By construction, an hcp-like site (low hcp CCE norm) is characterized by a high fcc CCE norm and vice versa [56,98,100]. For this specific sample (N = 12, ϕ = 0.61), the fractions of sites with hcp- and fcc-like local structures in the final state are very similar. By calculating the corresponding fractions it can be concluded that crystallization occurs for the athermal polymer packing at ϕ = 0.61. Information obtained from the CCE-norm distribution also allows the calculation of crystallinity (degree of ordering), τ^c, as a function of steps (or number of collisions) from MC and edMD simulations on athermal chains and on hard-sphere monomers, respectively. Panels (a) and (b) in Figure 3 present the evolution of crystallinity as a function of MC steps (number of collisions for edMD). We should note that while one could transform both measures (steps and collisions) into a common reference framework of CPU time, such mapping would only provide technical information about the computational cost of each method. The application of stochastic, non-physical (but highly efficient) MC algorithms for polymer chains prevents the extraction of any kind of dynamical information related to chain motion and to the kinetics of phase transition. While crystallinity results are presented here for the N = 12 system with uniform distribution of lengths, the phase behavior of athermal polymer chains remains the same also for the other two systems (N = 24 with uniform distribution and N = 12 with Flory distribution of chain lengths).
As seen in Figure 3 (left panel), at the volume fraction of ϕ = 0.56, which lies just above the melting point (ϕ[M] = 0.545), hard-sphere monomers of uniform size show a clear disorder-order transition while the corresponding athermal chains remain in the original amorphous state throughout the simulation. Evidently, at this packing density, chain connectivity suppresses crystallization. However, as concentration increases, polymer packings spontaneously evolve into a stable crystal phase. This trend is clearly shown in Figure 3 (right panel): Initially the degree of crystallinity, as quantified by the CCE norm, remains low (τ^c = 0.05) for both chains and monomers, as expected for random (amorphous) packings. However, as the MC (MD) simulations evolve, a sharp ordering transition occurs as crystallinity adopts high values (τ^c = 0.83), which are very similar for chain and monomeric packings. In the final stable crystal phase, the majority of sites adopt a highly ordered structure of either hcp or fcc character. Figure 4 shows the dependence of crystallinity on packing density for hard-sphere chains and monomers. In a range of volume fractions near and above the melting point, hard-sphere monomers of uniform size crystallize while the corresponding polymer systems remain amorphous. As packing density increases, the difference in crystallinity between chains and monomers progressively gets smaller. At the highest studied concentration (ϕ = 0.61) the CCE-based crystallinity is, within statistical error, the same between athermal chains and monomers. Thus, it can be safely concluded that at high volume fractions chain connectivity has no appreciable effect on the ability of hard spheres to crystallize given adequate simulation time. However, according to the present simulations near the melting point, connectivity frustrates the phase transition of chain packings, which, in sharp contrast to monomeric analogs, remain predominantly amorphous.
From the simulation results presented in Section 3.1 it is evident that once a critical volume fraction is reached, which lies at higher concentrations compared to monomers, athermal packings of freely-jointed chains of hard spheres transit from the initial amorphous (random) to the final crystal (ordered) phase. In the present section, we study in detail the structural features of the characteristic ordered morphologies that arise during athermal polymer crystallization. Figure 5 shows some representative snapshots as obtained for the N = 12 system with uniform distribution of chain lengths at ϕ = 0.56 (upper panel) and 0.61 (lower panel) at the start (left side) and at the end (right side) of the MC simulations. In the system snapshots of Figure 5, spheres are colored according to the index of the chain to which they belong. While no significant ordering is observed in the structures at ϕ = 0.56, a visual inspection at ϕ = 0.61 confirms the ordering of the spheres and the formation of layers, clearly visible in the final configuration. The loss of positional and radial randomness and the formation of well-ordered morphologies during crystallization are depicted more vividly once we adopt a visualization scheme based on the information obtained by the CCE analysis. In Figure 6, spheres are color-coded according to the corresponding CCE norm for hcp, fcc and fivefold symmetries for the same chain configurations as in Figure 5. At ϕ = 0.56 no particular change is observed in the population of the ordered sites (hcp or fcc symmetry), which remains at low levels: At ϕ = 0.56 packings of freely-jointed chains of tangent hard spheres remain amorphous. At ϕ = 0.61, as expected, the initial fraction of sites with ordered local environment in the random phase is significantly higher than at ϕ = 0.56, and the same conclusion can be drawn regarding the sites with fivefold symmetry.
The latter finding is in full agreement with corresponding results on random packings of monomeric hard spheres above the freezing transition, where the fraction of fivefold sites increases linearly with packing density [55,56]. In the final stable crystal phase at ϕ = 0.61, alternating layers of almost exclusively hcp- or fcc-type are formed. These stack-faulted ordered morphologies possess a unique stacking direction and are further accompanied by an absence of fivefold sites. Analogous snapshots of the final crystal phases as obtained from MC simulations on the N = 12 polymer system, where chain lengths obey the Flory (most probable) distribution, are shown in Figure 7 at packing densities of ϕ = 0.58, 0.60 and 0.61. As in the case of the polymer packing characterized by the same average chain length but by a uniform chain-length distribution, stable crystal structures correspond to layered morphologies of alternating hcp and fcc character with a unique stacking direction. Furthermore, sites with fivefold local symmetry are absent in the ordered state. According to the results presented here, crystal morphologies of dense assemblies of athermal polymers correspond to randomly stacked hexagonal close packings (rhcp), and no ordered structures were found of exclusive fcc (or hcp) character. As mentioned in the introduction, this trend is in accordance with the Ostwald rule of stages [72], as the rhcp morphology is structurally and thermodynamically closer to the random phase than the pure fcc crystal. Accordingly, in a second step, a transition is expected from the (metastable) rhcp to the (stable) fcc phase. However, in all present MC simulations on athermal polymer packings, no such transition occurs even if the total simulation time in some cases exceeds by orders of magnitude the time required for crystallization (disorder-order transition), which is not unexpected in view of the tiny entropic difference between rhcp and fcc structures.
Similar conclusions have been drawn for corresponding packings of monodisperse hard-sphere monomers as the rhcp structure is shown to be the prevailing morphology [55,56]. In parallel, crystallization processes of athermal packings consisting of chains and monomers show appreciable differences. For chains, crystal morphologies are free of defects such as twinning which appear in ordered phases of monomeric hard spheres. Furthermore, while even for monomers the rhcp is the prevailing crystal structure in the majority of samples, ordered morphologies still exist which are characterized by a clear prevalence of fcc (or hcp) sites. Studies are in progress to investigate in a systematic fashion the effect of chain connectivity on the established crystal morphologies and on the short-range order in the form of fivefold local symmetry. In Section 2 we proposed a second method to identify crystallization by analyzing the changes in the local environment through a systematic study of the shape and size of the Voronoi cell around each sphere site. This approach is presented through a series of illustrations starting in Figure 8 with the parent hard-sphere configuration where all nearest neighbors of a reference site are identified through the Voronoi tessellation (Figure 8a,b). The enclosing Voronoi cell of the reference site is shown in Figure 8c. Once the Voronoi cell is constructed, the moment of inertia tensor I is calculated according to Equation 1. In the next step the Voronoi cell is mapped to a coarse-grained ellipsoid with semiaxes given by Equation 2. Finally, global shape measures of the simplified representation can be readily calculated through Equations 3–5. An estimate of the local density around each sphere site can be obtained as the reciprocal of the volume of the enclosing Voronoi polyhedron.
Since the volume of the simulation cell remains constant during the simulation and the Voronoi tessellation is a space-filling geometrical procedure, the average local density of the system does not change during the whole simulation time and consequently during the phase transition. However, significant qualitative and quantitative changes occur in the shape of the Voronoi cells as the chain assembly crystallizes spontaneously and ordered morphologies are formed. Figures 9–11 show the parent sphere configuration along with the corresponding Voronoi cell for sites which possess amorphous and well-ordered hcp-like and fcc-like local structures, respectively. Visual comparison of the shapes of the Voronoi cells depicted in Figures 9–11 clearly shows that the local environment around each site undergoes significant changes as the packing evolves from the amorphous to the crystal phase. The Voronoi polyhedra corresponding to well-ordered crystal structures have more symmetrical and more spherical shapes. Thus, while there is no appreciable difference in the average local density, as quantified by the Voronoi volume, between the amorphous and crystal phases, the shape of the local environment around each site is profoundly altered during the phase transition [98,99]. Such structural changes during the phase transition can be made quantitative by plotting the global shape measures (asphericity, acylindricity and relative shape anisotropy) as a function of MC steps (Figure 12). All shape measures of deviation from isotropy, averaged over all Voronoi cells for each system configuration, decrease monotonically as the MC simulation advances. According to the data shown in Figure 3, for the N = 12 system (uniform chain length distribution) a sharp phase transition (crystallization) occurs at around 20 × 10^10 MC steps.
It is exactly the same regime where the values of asphericity, acylindricity and relative shape anisotropy of the Voronoi polyhedra show a precipitous decline. In the final stable crystal phase all values of the shape measures are significantly lower than the initial ones of the random phase. Based on the above, it can be safely concluded that during crystallization of athermal polymer packings, on average, the local environment around each sphere site becomes more symmetric and more spherical. Thus, a detailed geometrical analysis of the Voronoi polyhedra can shed light on the structural changes that occur during the phase transition. Such a methodological approach could be complementary to more refined structural descriptors like the CCE-based norm. In addition, such shape transformations of the local environment around each sphere site can be directly connected with local dynamics and consequently with translational entropy. The present analysis based on the Voronoi polyhedra can further serve as the basis for a descriptor which would potentially identify shape similarities with respect to specific Voronoi cells of reference crystal structures.

In isolated athermal systems, a phase transition can only be driven by an increase in entropy. Accordingly, in athermal packings of chain molecules, entropy is the driving force for crystal nucleation and growth. The conformational contribution to entropy is actually reduced, as a result of sphere arrangements adopting specific conformations in both bonded and non-bonded terms. This trend is easily identified by comparing the pair radial distribution function, g(r), in the initial random and final ordered phases, as seen in Figure 13. Characteristic peaks appear in the ordered phase, especially near contact. At the same time, long-range correlation features in the pair distribution unambiguously point towards the emergence of the ordered phase.
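A minimal histogram estimator of g(r) for a cubic periodic box, written from the generic textbook definition rather than from the authors' code (the cubic box, bin count and minimum-image convention are the usual assumptions):

```python
import math
from itertools import combinations

def min_image_distance(p, q, L):
    """Distance between two points in a cubic box of side L,
    under the minimum-image convention."""
    d2 = 0.0
    for a, b in zip(p, q):
        d = a - b
        d -= L * round(d / L)   # wrap to the nearest periodic image
        d2 += d * d
    return math.sqrt(d2)

def radial_distribution(points, L, nbins=50):
    """Histogram estimate of g(r) up to r = L/2, normalized by the
    ideal-gas shell population rho * 4/3 pi (r_out^3 - r_in^3)."""
    n = len(points)
    rho = n / L**3
    dr = (L / 2) / nbins
    hist = [0] * nbins
    for p, q in combinations(points, 2):
        r = min_image_distance(p, q, L)
        if r < L / 2:
            hist[int(r / dr)] += 1
    g = []
    for i, h in enumerate(hist):
        shell = (4.0 / 3.0) * math.pi * (((i + 1) * dr)**3 - (i * dr)**3)
        # each pair is counted once, hence the factor 2/n per particle
        g.append(2.0 * h / (n * rho * shell))
    return g
```

For hard spheres the sharp contact peak and the long-range oscillations of the crystal phase show up directly in such a histogram, which is the qualitative comparison made in Figure 13.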
Although a more refined measure of pair correlation would be required to capture the anisotropic features of layered crystals, g(r) points to a clear loss of conformational entropy during crystallization. Similar conclusions can be drawn for the orientational contribution of entire chains to entropy. While the population of oriented chains is quite limited, so that the change in the average orientation vector is very small, orientational entropy is nevertheless reduced in polymer crystals compared to random chain packings [98]. The two previous sources of entropy loss must be more than compensated for by an independent mechanism of entropy gain. In Section 3.3 we have reported that the local environment around each hard sphere becomes more isotropic as the crystal phase appears. In order to establish a connection between the shape transformation of local structure and the entropy increase, we first studied local sphere dynamics. In this direction, and given that MC simulations provide no dynamical information, we resort to the concept of the "flipper", originally employed to identify the jamming transition in polymer packings [92,94]. "Flipper" is a term used to denote a sphere which can perform a flip-like move that obeys the holonomic chain constraints and does not lead to overlaps with any other sphere of the system. Here, we employ the concept of the flipper to study how the ability of hard spheres to move locally is affected by the shape transformations of the local environment which take place during crystallization. Figure 14 shows the fraction of sites (flippers) which can perform a flip-like move of specific amplitude dφ in both directions (clockwise and counter-clockwise) as a function of MC steps, corresponding to the same simulation trajectory as Figures 3 and 12. As the simulation progresses, the population of sites which can move freely in their local vicinity increases.
This trend is especially apparent around 20 × 10^10 MC steps, a regime which marks the disorder-order transition. Thus, as the local environment becomes more spherical and more symmetric, monomers are able to explore more efficiently the free volume that surrounds them. Consequently, the translational entropy of the athermal polymer system increases during crystallization. This increase in translational entropy is large enough to compensate for the losses in conformational and orientational entropy, and is thus responsible for crystallization. In order to better understand the strong correlation between the shape transformation of the local environment and the increase in translational entropy during crystallization, Figure 15 summarizes the evolution of asphericity (averaged over all Voronoi polyhedra), fraction of flippers (of amplitude dφ = 1.00°) and crystallinity as a function of MC steps. The data shown confirm that during the phase transition (indicated by the sharp increase of crystallinity), as the local environment becomes more spherical (indicated by the sharp decline of asphericity), translational entropy increases (indicated by the sharp increase of the flipper population). The large increase in translational entropy is thus the driving force for athermal polymer crystallization, as in the case of monomeric hard spheres.

We have reviewed recent studies on the phase transition and self-assembly of crystal morphologies from extensive simulations of packings of freely-jointed chains of tangent hard spheres of uniform size. The key finding is that once a critical packing density is reached, athermal polymer chains crystallize, just as monomers do, in spite of the additional constraint set by chain connectivity. However, at volume fractions very close to the melting point, chain connectivity does indeed frustrate crystallization; for the systems studied at ϕ = 0.61, polymer packings remain amorphous while monomeric analogs show a clear phase transition.
The exact origins of the frustration, along with the extent of the effect of connectivity on crystallization, are under investigation. Above a critical packing density, which is higher than for monomeric systems, and given sufficient simulation time, polymer configurations self-assemble into well-characterized ordered morphologies of predominantly rhcp character. Such crystals consist of stack-faulted alternating layers of hcp or fcc type with a single stacking direction and never show twinning. For all chain systems studied so far, no transition to a pure fcc (or hcp) crystal was observed during the allowed simulation time. A detailed comparison between the crystal morphologies of polymers and of monomeric hard spheres is currently in progress. We have also described in detail two new descriptors of local structure, the characteristic crystallographic element norm and a geometric analysis based on the Voronoi cells. It is established that during crystallization the local environment around each site becomes more spherical and more symmetric. In turn, this shape transformation allows the sphere sites more freedom to move locally. Thus, the entropy of the system increases, and this increase is the driving force for the crystallization of chain packings. Present efforts include the modeling study of the phase transition in athermal polymer packings under varied conditions of confinement, mainly through the presence of a hard wall. The proposed simulation approach is further generalized to treat polymer packings with a finite degree of chain stiffness.
Grosse Pointe Park, MI Math Tutor

Find a Grosse Pointe Park, MI Math Tutor

...Also, I am involved in my firm's recruiting, candidate interviews, and hiring decisions, which require assessing the candidate's interview performance, resume, and skills against the firm's needs. I was involved in candidate assessments at my previous employer too. I minored in mathematics in colleg...
12 Subjects: including algebra 1, algebra 2, calculus, physics

...In high school I attended an advanced math and science center and excelled in Algebra II. I also was a TA at Wayne State University and specialize in tutoring high school math of all subjects. I have helped several students in high school math understand, apply the knowledge and advance to higher levels of mathematics.
46 Subjects: including discrete math, ACT Math, algebra 2, probability

...Vocabulary is an important part of every subject in which I work with students. In Language arts, I incorporate vocabulary words for both speaking and writing. As part of a reading or study skills session, I preview material with the student using vocabulary words that they will encounter in the text.
13 Subjects: including algebra 1, prealgebra, reading, English

...My passion is teaching students ranging in grades K-8. I have thousands of hours in tutoring students and have had much success in bringing understanding to the elementary mind. I am pleasant, kind, and caring.
22 Subjects: including SAT math, algebra 1, algebra 2, ACT Math

...I have my Master's Degree in Civil and Structural Engineering and have a deep understanding of all math subjects from pre-Algebra to Calculus. I also know how to apply Calculus in Physics and Engineering in general. I excel in tutoring.
10 Subjects: including algebra 1, algebra 2, calculus, physics
Compute roots of sum_i c_i/(a_i + b_i x)^p

How to compute the (real) roots of $$\sum_{i=1}^n \frac{c_i}{(a_i + b_i \cdot x)^p}$$ for given reals $a_i, b_i, c_i$ and positive integers $n, p$? The cases $p=1, ..., 5$ and $n=6, ..., 20$ would already be very useful for me. I actually just need any root in a given interval. Multiplying by the denominators, this task can be reduced to finding roots of a polynomial, but this only works for very small $n$, whereas even for $n=8$ the coefficients in the polynomial are numerically unstable. The only other method I could think of is using binary search (aka the bisection method). But this is too slow. Is there a faster method that is numerically stable?

Tags: root-systems, stability

If $p$ is even, I don't think you'll find many real roots! – Barry Cipra Feb 1 '13 at 20:40
Sorry, I had set $c_i = 1$ to simplify the question. I have now put it back in, so there may be roots even for even $p$. – Emanuele Viola Feb 1 '13 at 20:48
With a $c_i$ in the numerator, you can simplify the denominator to just $(x-d_i)^p$. – Barry Cipra Feb 1 '13 at 20:52
Did you try Newton-Raphson? – Suvrit Feb 1 '13 at 20:53
Thanks. I have not tried it because I was not sure what starting point to choose/whether the method would work in general. Do you see it? – Emanuele Viola Feb 1 '13 at 21:49

1 Answer

This recent master's thesis by Leonardo Robol treats the case $p=1$ in a numerically sound way. I think they are going to release some code soon, so you might want to contact the author.

Thanks for the useful pointer. I wonder if something like that can be done for $p > 1$? – Emanuele Viola Feb 19 '13 at 14:48
How to combine SUMPRODUCT with COUNTIF? I need to count up the number of occurrences of the letter "Q" in Column A, only if the string in Column B = "Claims." That is, how do I combine . . . I have tried various approaches and different formulas, but nothing seems to work. Any help at all would be appreciated. Thanks in advance.
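Since the original formulas aren't shown, here is the counting logic prototyped in Python (column contents as plain lists, which is an assumption about the layout):

```python
def count_q_where_claims(col_a, col_b, letter="Q", flag="Claims"):
    """Count occurrences of `letter` inside the col_a entries whose
    matching col_b entry equals `flag` (case-sensitive here, unlike Excel)."""
    return sum(a.count(letter) for a, b in zip(col_a, col_b) if b == flag)
```

On the Excel side, one common pattern drops COUNTIF entirely and lets SUMPRODUCT do both jobs, counting every "Q" inside the Column A text via the LEN/SUBSTITUTE trick: `=SUMPRODUCT((B1:B100="Claims")*(LEN(A1:A100)-LEN(SUBSTITUTE(A1:A100,"Q",""))))`; the ranges B1:B100/A1:A100 are placeholders, and SUBSTITUTE is case-sensitive.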
Timoshenko's Beam Equations

The characteristic polynomial equation, from (3.10) with the system matrices given above, in the case of constant coefficients, relates the frequency ω and the spatial wavenumber β. There are two pairs of solutions to this equation, and it is simple to show that, in contrast with the Euler-Bernoulli beam, the group velocities will be bounded. In particular, the first of these relations is similar to that which describes longitudinal wave propagation in a bar, and the second corresponds to shear vibration [77]. For the full varying-coefficient problem, the maximum group velocity, as defined in (3.13), follows from these two bounds.

Stefan Bilbao 2002-01-22
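The display equations on the original page did not survive extraction. For reference, the constant-coefficient Timoshenko dispersion relation in one common notation (E Young's modulus, G shear modulus, κ the shear correction factor, ρ density, A cross-sectional area, I the second moment of area; the thesis's symbols and scalings may differ) reads:

```latex
EI\,\beta^{4}
  - \rho A\,\omega^{2}
  - \rho I\left(1 + \frac{E}{\kappa G}\right)\omega^{2}\beta^{2}
  + \frac{\rho^{2} I}{\kappa G}\,\omega^{4} = 0 ,
\qquad
\lim_{\beta\to\infty}\frac{\partial\omega}{\partial\beta}
  \in \left\{ \sqrt{E/\rho},\; \sqrt{\kappa G/\rho} \right\} .
```

Substituting ω = cβ and keeping the leading β⁴ terms factors the quartic as (ρc² − E)(ρc² − κG) = 0, so one branch approaches the longitudinal bar speed √(E/ρ) and the other the shear speed √(κG/ρ), consistent with the bounded group velocities noted above.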
Integrating to find distance and time - difficult!

I have been looking for problems to work out for practice and came across one I have no idea how to even start on. Say a box is dropped from a plane, with a parachute attached to it, and we are given two equations for velocity in terms of time, such as Vx(t)=Vo cos(θ) e^(-pt) and Vy(t)=Vo sin(θ) - ge^(-z(t+6)) -16. Here p is the drag coefficient in the x direction, z is the drag coefficient in the y direction, and θ would be the angle between the ground and the box.

Would θ not be a constant though, since the angle will always be constant, even if the box follows a parabolic path? And say the plane was flying level when the box was dropped... wouldn't this mean that there is only an initial velocity in the x direction, at whatever speed the plane was flying?

For practice, I made up p=0.061 and z=0.213. My whole dilemma is how I would find the time it takes for the box to hit the ground. I'm fairly sure I'd have to take the integral, which would give me distance. Plugging in zero for Vo in the y direction would cancel out the first part of the equation. θ would remain constant, which would take out the sin and cos part of each equation and make it a constant. I guess from there I am lost. My above assumptions could be wrong, however. Please let me know!

No rush on answering this as it's just for my own personal practice and "fun".
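For anyone else practicing on this: whatever v_y(t) you settle on, the fall time can be found numerically by integrating the speed for distance and then bisecting on time. Everything below is a generic sketch with made-up inputs (just like the coefficients in the post), not a solution to the physics.

```python
import math

def fallen_distance(vy, T, steps=2000):
    """Trapezoidal integral of the downward speed |vy(t)| from 0 to T."""
    h = T / steps
    total = 0.5 * (abs(vy(0.0)) + abs(vy(T)))
    for i in range(1, steps):
        total += abs(vy(i * h))
    return total * h

def time_to_fall(vy, height, t_hi=600.0, tol=1e-6):
    """Bisect on T until fallen_distance(vy, T) == height.
    Valid because the fallen distance is monotone in T."""
    lo, hi = 0.0, t_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fallen_distance(vy, mid) < height:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With the post's made-up model one could try, say, `time_to_fall(lambda t: -9.81 * math.exp(-0.213 * (t + 6)) - 16, 300.0)` for an assumed 300 m drop, units permitting.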
Summary: The Linear Chromatic Number of a Sperner Family

Reza Akhtar
Dept. of Mathematics, Miami University, Oxford, OH 45056, USA

Maxwell Forlini
Dept. of Mathematics, Miami University, Oxford, OH 45056, USA

July 29, 2010
Mathematics Subject Classification: 05D05

Let S be a finite set and S a complete Sperner family on S, i.e. a Sperner family such that every x ∈ S is contained in some member of S. The linear chromatic number of S, defined by Civan, is the smallest integer n with the property that there exists a function f : S → {1, . . . , n} such that if f(x) = f(y), then every set in S which contains x also contains y, or every set in S which contains y also contains x. We give an explicit formula for the number of complete Sperner families on S of linear chromatic number 2. We also prove tight bounds on the number of elements in a Sperner family of given linear chromatic number.
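The definition translates directly into a brute-force check, practical only for tiny ground sets; this follows the quoted definition itself, not any method from the paper:

```python
from itertools import product

def is_linear_coloring(S, family, colour):
    """Check the defining condition: for every same-coloured pair x, y,
    every family set containing x must contain y, or vice versa."""
    for i, x in enumerate(S):
        for y in S[i + 1:]:
            if colour[x] != colour[y]:
                continue
            x_in_y = all(y in A for A in family if x in A)
            y_in_x = all(x in A for A in family if y in A)
            if not (x_in_y or y_in_x):
                return False
    return True

def linear_chromatic_number(S, family):
    """Smallest n admitting a valid colouring f: S -> {1..n} (brute force).
    Always terminates: all-distinct colours make the condition vacuous."""
    S = list(S)
    for n in range(1, len(S) + 1):
        for colours in product(range(n), repeat=len(S)):
            if is_linear_coloring(S, family, dict(zip(S, colours))):
                return n
    return len(S)
```

For example, on S = {1, 2, 3} the complete Sperner family {{1,2}, {2,3}} has linear chromatic number 2 (colour 1 and 2 alike, since every set containing 1 also contains 2), while the antichain of singletons forces all colours to be distinct.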
The Benefits of Carrots : Tips & Tricks

Moderator: Moderators

Carrots contain a lot of vitamin A. It is needed by the body's tissues for many different functions, one of which is building the pigment needed for good vision at night. That said, eating carrots will not save you from wearing glasses; but a lack of vitamin A will damage your eyesight. Besides carrots, green and red vegetables are also rich in vitamin A.

Orang Baru
Posts: 236
Joined: Sun Aug 22, 2004 11:07 pm

mr.lado wrote: :D Why are carrots good for your eyes? Carrots contain a lot of vitamin A. It is needed by the body's tissues for many different functions, one of which is building the pigment needed for good vision at night. That said, eating carrots will not save you from wearing glasses; but a lack of vitamin A will damage your eyesight. Besides carrots, green and red vegetables are also rich in vitamin A.

thank you

~~You can complain because roses have thorns, or you can rejoice because thorns have roses~~

Veteran SF
Posts: 15299
Joined: Sat Oct 30, 2010 2:15 pm
Location: Here and there

I thought they could make you slim

If you judge people, you have no time to love them.
Orang Kuat SF
Posts: 4876
Joined: Mon Feb 23, 2009 10:12 am
Location: still looking for a place to shelter

ladybugs wrote: I thought they could make you slim

They make your eyes bright, so they say

~~You can complain because roses have thorns, or you can rejoice because thorns have roses~~

Veteran SF
Posts: 15299
Joined: Sat Oct 30, 2010 2:15 pm
Location: Here and there

elle wrote:
ladybugs wrote: I thought they could make you slim
They make your eyes bright, so they say

No way am I eating too many of these carrots... afraid my eyes will get so bright they'll see right through things

If you judge people, you have no time to love them.
For example, pushing or pulling the handle of a wrench connected to a nut or bolt produces a torque (turning force) that loosens or tightens the nut or bolt. The terminology for this concept is not straightforward: In the US, in physics it is usually called "torque" and in mechanical engineering it is called "moment".[2] However outside the US this varies. In the UK for instance, most physicists will use the term "moment". In mechanical engineering, the term "torque" means something different,[3] described below. In this article the word "torque" is always used to mean the same as "moment". The symbol for torque is typically τ, the Greek letter tau. When it is called moment, it is commonly denoted M. The magnitude of torque depends on three quantities: the force applied, the length of the lever arm[4] connecting the axis to the point of force application, and the angle between the force vector and the lever arm. In symbols: \boldsymbol \tau = \mathbf{r}\times \mathbf{F}\,\! \tau = rF\sin \theta\,\! τ is the torque vector and τ is the magnitude of the torque, r is the displacement vector (a vector from the point from which torque is measured to the point where force is applied), and r is the length (or magnitude) of the lever arm vector, F is the force vector, and F is the magnitude of the force, × denotes the cross product, θ is the angle between the force vector and the lever arm vector. The length of the lever arm is particularly important; choosing this length appropriately lies behind the operation of levers, pulleys, gears, and most other simple machines involving a mechanical The SI unit for torque is the newton metre (N·m). For more on the units of torque, see below. 
Terminology

See also: Couple (mechanics)

In mechanical engineering (unlike physics), the terms "torque" and "moment" are not interchangeable. "Moment" is the general term for the tendency of one or more applied forces to rotate an object about an axis (the concept which in physics is called torque).[3] "Torque" is a special case of this: if the applied force vectors add to zero (i.e., their "resultant" is zero), then the forces are called a "couple" and their moment is called a "torque".[3] For example, a rotational force down a shaft, such as a turning screw-driver, forms a couple, so the resulting moment is called a "torque". By contrast, a lateral force on a beam produces a moment (called a bending moment), but since the net force is nonzero, this bending moment is not called a "torque". This article follows physics terminology by calling all moments by the term "torque", whether or not they are associated with a couple.

History

The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The rotational analogues of force, mass, and acceleration are torque, moment of inertia, and angular acceleration, respectively.

Definition and relation to angular momentum

A particle is located at position r relative to its axis of rotation. When a force F is applied to the particle, only the perpendicular component F⊥ produces a torque. This torque τ = r × F has magnitude τ = |r||F⊥| = |r||F|sinθ and is directed outward from the page.
Torque is defined about a point, not necessarily about an axis, as noted in several books. A force applied at a right angle to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. A force of three newtons applied two metres from the fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. The direction of the torque can be determined by using the right-hand grip rule: if the fingers of the right hand curl in the direction of rotation and the thumb points along the axis of rotation, then the thumb also points in the direction of the torque.[5]

More generally, the torque on a particle (which has the position r in some reference frame) can be defined as the cross product:

$$\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F},$$

where r is the particle's position vector relative to the fulcrum, and F is the force acting on the particle. The magnitude τ of the torque is given by

$$\tau = rF\sin\theta,$$

where r is the distance from the axis of rotation to the particle, F is the magnitude of the force applied, and θ is the angle between the position and force vectors. Alternatively,

$$\tau = rF_{\perp},$$

where F⊥ is the amount of force directed perpendicularly to the position of the particle. Any force directed parallel to the particle's position vector does not produce a torque.[6]

It follows from the properties of the cross product that the torque vector is perpendicular to both the position and force vectors. It points along the axis of rotation, and its direction is determined by the right-hand rule.[6]

The unbalanced torque on a body along the axis of rotation determines the rate of change of the body's angular momentum,

$$\boldsymbol{\tau} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t},$$

where L is the angular momentum vector and t is time.
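The vector and magnitude formulas agree numerically, which is easy to check with a throwaway cross-product routine (the lever arm and force values are illustrative, not from the article):

```python
import math

def cross(r, F):
    """Cross product r x F for 3-vectors given as tuples."""
    return (r[1] * F[2] - r[2] * F[1],
            r[2] * F[0] - r[0] * F[2],
            r[0] * F[1] - r[1] * F[0])

def norm(v):
    return math.sqrt(sum(c * c for c in v))

# a 0.5 m lever arm along x, a 10 N force at 30 degrees to it, in the xy-plane
r = (0.5, 0.0, 0.0)
theta = math.radians(30.0)
F = (10.0 * math.cos(theta), 10.0 * math.sin(theta), 0.0)

tau = cross(r, F)                              # vector form: tau = r x F
tau_scalar = norm(r) * 10.0 * math.sin(theta)  # magnitude form: r F sin(theta)
```

Here `tau` points along +z (out of the plane, as the right-hand rule predicts) with magnitude 2.5 N·m, matching `tau_scalar`.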
If multiple torques are acting on the body, it is instead the net torque which determines the rate of change of the angular momentum:

$$\boldsymbol{\tau}_1 + \cdots + \boldsymbol{\tau}_n = \boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}.$$

For rotation about a fixed axis,

$$\mathbf{L} = I\boldsymbol{\omega},$$

where I is the moment of inertia and ω is the angular velocity. It follows that

$$\boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t} = \frac{\mathrm{d}(I\boldsymbol{\omega})}{\mathrm{d}t} = I\frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t} = I\boldsymbol{\alpha},$$

where α is the angular acceleration of the body, measured in rad·s−2. This equation has the limitation that the torque equation is to be written only about the instantaneous axis of rotation or the center of mass, for any type of motion (pure translation, pure rotation or mixed motion). Here I is the moment of inertia about the point about which the torque is written (either the instantaneous axis of rotation or the center of mass only). If the body is in translational equilibrium, then the torque equation is the same about all points in the plane of motion.

Proof of the equivalence of definitions for a fixed instantaneous centre of rotation

The definition of angular momentum for a single particle is:

$$\mathbf{L} = \mathbf{r} \times \mathbf{p},$$

where "×" indicates the vector cross product and p is the particle's linear momentum. The time-derivative of this is:

$$\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \frac{d\mathbf{p}}{dt} + \frac{d\mathbf{r}}{dt} \times \mathbf{p}.$$

This result can easily be proven by splitting the vectors into components and applying the product rule. Now using the definitions of velocity v = dr/dt, acceleration a = dv/dt and linear momentum p = mv (if mass is constant),

$$\frac{d\mathbf{L}}{dt} = \mathbf{r} \times m \frac{d\mathbf{v}}{dt} + \mathbf{v} \times m\mathbf{v}.$$

The cross product of any vector with itself is zero, so the second term vanishes.
Hence with the definition of force F = ma (Newton's 2nd law),

$$\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}.$$

Then by definition, torque τ = r × F. If multiple forces are applied, Newton's second law instead reads Fnet = ma, and it follows that

$$\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}_{\mathrm{net}} = \boldsymbol{\tau}_{\mathrm{net}}.$$

The proof relies on the assumption that mass is constant; this is valid only in non-relativistic systems in which no mass is being ejected.

Units

Torque has dimensions of force times distance. Official SI literature suggests using the unit newton metre (N·m) or the unit joule per radian.[7] The unit newton metre is properly denoted N·m or N m.[8] This avoids ambiguity with mN, millinewtons. The joule, which is the SI unit for energy or work, is dimensionally equivalent to a newton metre, but it is not used for torque. Energy and torque are entirely different concepts, so the practice of using different unit names for them helps avoid mistakes and misunderstandings.[7] The dimensional equivalence of these units, of course, is not simply a coincidence: a torque of 1 N·m applied through a full revolution will require an energy of exactly 2π joules. Mathematically,

$$E = \tau \theta,$$

where E is the energy, τ is the magnitude of the torque, and θ is the angle moved (in radians). This equation motivates the alternate unit name joules per radian.[7]

In British units, "pound-force-feet" (lbf·ft), "foot-pounds-force", "inch-pounds-force" and "ounce-force-inches" (oz·in) are used, and other non-SI units of torque include "metre-kilograms-force". For all these units, the word "force" is often left out,[9] for example abbreviating "pound-force-foot" to simply "pound-foot" (in this case, it would be implicit that the "pound" is pound-force and not pound-mass). Sometimes one may see torque given in units that don't dimensionally make sense, for example g·cm.
In these units, g should be understood as the force given by the weight of 1 gram at the surface of the earth. The surface of the earth is understood to have an average acceleration of gravity (approx. 9.80665 m/s²).

Special cases and other facts

Moment arm formula

Moment arm diagram

A very useful special case, often given as the definition of torque in fields other than physics, is as follows:

$$|\tau| = (\textrm{moment\ arm})\,(\textrm{force}).$$

The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and the torque will be a maximum for the given force. The equation for the magnitude of a torque arising from a perpendicular force:

$$|\tau| = (\textrm{distance\ to\ centre})\,(\textrm{force}).$$

For example, if a person places a force of 10 N on a spanner (wrench) which is 0.5 m long, the torque will be 5 N·m, assuming that the person pulls the spanner by applying force perpendicular to the spanner.
For example, the torque on a current-carrying loop in a uniform magnetic field is the same regardless of the point of reference. If the net force \mathbf{F} is not zero, and \boldsymbol{\tau}_1 is the torque measured from \mathbf{r}_1, then the torque measured from \mathbf{r}_2 is

\boldsymbol{\tau}_2 = \boldsymbol{\tau}_1 + (\mathbf{r}_1 - \mathbf{r}_2) \times \mathbf{F}.

Machine torque

(Figure: torque curve of a motorcycle, "BMW K 1200 R 2005". The horizontal axis is the speed (in rpm) at which the crankshaft is turning, and the vertical axis is the torque (in newton metres) that the engine is capable of providing at that speed.)

Torque is part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by the rotational speed of its axis. Internal-combustion engines produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). The varying torque output over that range can be measured with a dynamometer and shown as a torque curve. The peak of that torque curve occurs somewhat below the overall power peak; the torque peak cannot, by definition, appear at a higher rpm than the power peak. Understanding the relationship between torque, power and engine speed is vital in automotive engineering, concerned as it is with transmitting power from the engine through the drive train to the wheels. Power is a function of torque and engine speed. The gearing of the drive train must be chosen appropriately to make the most of the motor's torque characteristics. Power at the drive wheels is equal to engine power less mechanical losses, regardless of any gearing between the engine and the drive wheels. Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints). Reciprocating steam engines can therefore start heavy loads from zero rpm without a clutch.
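The vector relations above — the magnitude formula |τ| = rF sin θ from the spanner example, and the reference-point identity τ₂ = τ₁ + (r₁ − r₂) × F — can be checked numerically. A minimal sketch in plain Python (the helper functions are illustrative, not from any library):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

# Spanner example: a 10 N force applied perpendicular to a 0.5 m lever arm.
r = (0.5, 0.0, 0.0)      # lever arm along x (metres)
F = (0.0, 10.0, 0.0)     # force along y (newtons), i.e. theta = 90 degrees
tau = cross(r, F)
print(tau)               # (0.0, 0.0, 5.0) -> magnitude 5 N·m, as in the text

# Reference-point identity: with a nonzero net force, the torque about r2
# equals the torque about r1 plus (r1 - r2) x F.
p  = (2.0, 1.0, 0.0)     # point where the force is applied
r1 = (0.0, 0.0, 0.0)
r2 = (1.0, -1.0, 0.0)
tau1 = cross(sub(p, r1), F)
tau2 = cross(sub(p, r2), F)
assert tau2 == add(tau1, cross(sub(r1, r2), F))
```

Note that the torque vector comes out along z, perpendicular to both r and F, as the right-hand rule requires.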
Relationship between torque, power and energy

If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through a rotational distance, it is doing work. Mathematically, for rotation about a fixed axis through the center of mass,

W = \int_{\theta_1}^{\theta_2} \tau\ \mathrm{d}\theta,

where W is work, τ is torque, and θ1 and θ2 represent (respectively) the initial and final angular positions of the body.[10] It follows from the work-energy theorem that W also represents the change in the rotational kinetic energy Krot of the body, given by

K_{\mathrm{rot}} = \tfrac{1}{2}I\omega^2,

where I is the moment of inertia of the body and ω is its angular speed.[10]

Power is the work per unit time, given by

P = \boldsymbol{\tau} \cdot \boldsymbol{\omega},

where P is power, τ is torque, ω is the angular velocity, and · represents the scalar product. Mathematically, the equation may be rearranged to compute torque for a given power output. Note that the power injected by the torque depends only on the instantaneous angular speed – not on whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case, where the power injected by a force depends only on the instantaneous speed – not on the resulting acceleration, if any).
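The work and power relations above are easy to sanity-check with a few lines. A sketch assuming a constant torque about a fixed axis (the numbers, and the rpm figure, are illustrative):

```python
import math

tau = 1.0                      # constant torque, N·m
theta = 2 * math.pi            # one full revolution, rad

# W = integral of tau d(theta); for constant torque this is tau * theta,
# so 1 N·m through a full revolution does exactly 2*pi joules of work.
work = tau * theta
print(work)                    # 6.283185307179586

# P = tau * omega (scalar form, with the torque parallel to the angular
# velocity). Angular speed must be in rad/s; converting from rpm means
# multiplying by 2*pi/60.
rpm = 3000.0
omega = rpm * 2 * math.pi / 60
power = tau * omega
print(power)                   # ≈ 314.16 W for 1 N·m at 3000 rpm
```

The same rpm-to-rad/s factor of 2π/60 is what produces the unit-conversion constants (60000, 33000) discussed in the text.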
However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned to a scalar. Conversion to other units A conversion factor may be necessary when using different units of power, torque, or angular speed. For example, if rotational speed (revolutions per time) is used in place of angular speed (radians per time), we multiply by a factor of 2π radians per revolution. \mbox{power} = \mbox{torque} \times 2 \pi \times \mbox{rotational speed} Adding units: \mbox{power (W)} = \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rps)} Dividing on the left by 60 seconds per minute and by 1000 watts per kilowatt gives us the following. \mbox{power (kW)} = \frac{ \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rpm)}} {60000} where rotational speed is in revolutions per minute (rpm). Some people (e.g. American automotive engineers) use horsepower (imperial mechanical) for power, foot-pounds (lbf·ft) for torque and rpm for rotational speed. This results in the formula changing to: \mbox{power (hp)} = \frac{ \mbox{torque(lbf}\cdot\mbox{ft)} \times 2 \pi \times \mbox{rotational speed (rpm)} }{33000}. The constant below in, ft·lbf/min, changes with the definition of the horsepower; for example, using metric horsepower, it becomes ~32,550. Use of other units (e.g. BTU/h for power) would require a different custom conversion factor. For a rotating object, the linear distance covered at the circumference in a radian of rotation is the product of the radius with the angular speed. That is: linear speed = radius × angular speed. By definition, linear distance=linear speed × time=radius × angular speed × time. By the definition of torque: torque=force × radius. We can rearrange this to determine force=torque ÷ radius. 
These two values can be substituted into the definition of power: \mbox{power} = \frac{\mbox{force} \times \mbox{linear distance}}{\mbox{time}}=\frac{\left(\frac{\mbox{torque}}{\displaystyle{r}}\right) \times (r \times \mbox{angular speed} \times t)} {t} = \mbox {torque} \times \mbox{angular speed}. The radius r and time t have dropped out of the equation. However angular speed must be in radians, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation to give: \mbox{power}=\mbox{torque} \times 2 \pi \times \mbox{rotational speed}. \, If torque is in lbf·ft and rotational speed in revolutions per minute, the above equation gives power in ft·lbf/min. The horsepower form of the equation is then derived by applying the conversion factor 33000 ft·lbf/min per horsepower: \mbox{power} = \mbox{torque } \times\ 2 \pi\ \times \mbox{ rotational speed} \cdot \frac{\mbox{ft}\cdot\mbox{lbf}}{\mbox{min}} \times \frac{\mbox{horsepower}}{33000 \cdot \frac{\mbox{ft }\cdot\mbox{ lbf}}{\mbox{min}} } \approx \frac {\mbox{torque} \times \mbox{RPM}}{5252} because 5252.113122... = \frac {33000} {2 \pi}. \, Principle of moments The Principle of Moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name) states that the sum of torques due to several forces applied to a single point is equal to the torque due to the sum (resultant) of the forces. Mathematically, this follows from: (\mathbf{r}\times\mathbf{F}_1) + (\mathbf{r}\times\mathbf{F}_2) + \cdots = \mathbf{r}\times(\mathbf{F}_1+\mathbf{F}_2 + \cdots). Torque multiplier A torque multiplier is a gear box, which works on the principle of epicyclic gearing. 
The torque applied at the input gets multiplied according to the gear ratio and transmitted to the output, thereby achieving a greater torque with minimal effort.

Source: Wikipedia. Guntap... if you go by this explanation, your head will spin trying to figure it out... put simply, torque is the strength of the engine to pull the body of the car along.
τ is the torque vector and τ is the magnitude of the torque, r is the displacement vector (a vector from the point from which torque is measured to the point where force is applied), and r is the length (or magnitude) of the lever arm vector, F is the force vector, and F is the magnitude of the force, × denotes the cross product, θ is the angle between the force vector and the lever arm vector. The length of the lever arm is particularly important; choosing this length appropriately lies behind the operation of levers, pulleys, gears, and most other simple machines involving a mechanical The SI unit for torque is the newton metre (N·m). For more on the units of torque, see below. 1 Terminology 2 History 3 Definition and relation to angular momentum 3.1 Proof of the equivalence of definitions for a fixed instantaneous centre of rotation 4 Units 5 Special cases and other facts 5.1 Moment arm formula 5.2 Static equilibrium 5.3 Net force versus torque 6 Machine torque 7 Relationship between torque, power and energy 7.1 Conversion to other units 7.2 Derivation 8 Principle of moments 9 Torque multiplier 10 See also 11 References 12 External links See also: Couple (mechanics) In mechanical engineering (unlike physics), the terms "torque" and "moment" are not interchangeable. "Moment" is the general term for the tendency of one or more applied forces to rotate an object about an axis (the concept which in physics is called torque).[3] "Torque" is a special case of this: If the applied force vectors add to zero (i.e., their "resultant" is zero), then the forces are called a "couple" and their moment is called a "torque".[3] For example, a rotational force down a shaft, such as a turning screw-driver, forms a couple, so the resulting moment is called a "torque". By contrast, a lateral force on a beam produces a moment (called a bending moment), but since the net force is nonzero, this bending moment is not called a "torque". 
This article follows physics terminology by calling all moments by the term "torque", whether or not they are associated with a couple. The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The rotational analogues of force, mass, and acceleration are torque, moment of inertia, and angular acceleration, respectively. Definition and relation to angular momentum A particle is located at position r relative to its axis of rotation. When a force F is applied to the particle, only the perpendicular component F⊥ produces a torque. This torque τ = r × F has magnitude τ = |r||F⊥| = |r||F|sinθ and is directed outward from the page. Torque is defined about a point not specifically about axis as mentioned in several books. A force applied at a right angle to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. A force of three newtons applied two metres from the fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. The direction of the torque can be determined by using the right hand grip rule: if the fingers of the right hand curl in the direction of rotation and the thumb points along the axis of rotation, then the thumb also points in the direction of the torque.[5] More generally, the torque on a particle (which has the position r in some reference frame) can be defined as the cross product: \boldsymbol{\tau} = \mathbf{r} \times \mathbf{F}, where r is the particle's position vector relative to the fulcrum, and F is the force acting on the particle. The magnitude τ of the torque is given by \tau = rF\sin\theta,\! where r is the distance from the axis of rotation to the particle, F is the magnitude of the force applied, and θ is the angle between the position and force vectors. Alternatively, \tau = rF_{\perp}, where F⊥ is the amount of force directed perpendicularly to the position of the particle. 
Any force directed parallel to the particle's position vector does not produce a torque.[6] It follows from the properties of the cross product that the torque vector is perpendicular to both the position and force vectors. It points along the axis of rotation, and its direction is determined by the right-hand rule.[6] The unbalanced torque on a body along axis of rotation determines the rate of change of the body's angular momentum, \boldsymbol{\tau} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t} where L is the angular momentum vector and t is time. If multiple torques are acting on the body, it is instead the net torque which determines the rate of change of the angular momentum: \boldsymbol{\tau}_1 + \cdots + \boldsymbol{\tau}_n = \boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}. For rotation about a fixed axis, \mathbf{L} = I\boldsymbol{\omega}, where I is the moment of inertia and ω is the angular velocity. It follows that \boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t} = \frac{\mathrm{d}(I\boldsymbol{\omega})}{\mathrm{d}t} = I\frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t} = I\boldsymbol where α is the angular acceleration of the body, measured in rad·s−2.This equation has limitation that torque equation is to be only written about instantaneous axis of rotation or center of mass for any type of motion-either motion is pure translation,pure rotation or mixed motion.I=Moment of inertia about point about which torque is written(either about instantaneous axis of rotation or center of mass only). If body is in translatory equilibrium then torque equation is same about all points in plane of motion. Proof of the equivalence of definitions for a fixed instantaneous centre of rotation The definition of angular momentum for a single particle is: \mathbf{L} = \mathbf{r} \times \mathbf{p} where "×" indicates the vector cross product and p is the particle's linear momentum. 
The time-derivative of this is: \frac{d\mathbf{L}}{dt} = \mathbf{r} \times \frac{d\mathbf{p}}{dt} + \frac{d\mathbf{r}}{dt} \times \mathbf{p}. This result can easily be proven by splitting the vectors into components and applying the product rule. Now using the definitions of velocity v = dr/dt, acceleration a = dv/dt and linear momentum p = mv (if mass is constant), \frac{d\mathbf{L}}{dt} = \mathbf{r} \times m \frac{d\mathbf{v}}{dt} + \mathbf{v} \times m\mathbf{v}. The cross product of any vector with itself is zero, so the second term vanishes. Hence with the definition of force F = ma (Newton's 2nd law), \frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}. Then by definition, torque τ = r × F. If multiple forces are applied, Newton's second law instead reads Fnet = ma, and it follows that \frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}_{\mathrm{net}} = \boldsymbol{\tau}_{\mathrm{net}}. The proof relies on the assumption that mass is constant; this is valid only in non-relativistic systems in which no mass is being ejected. Torque has dimensions of force times distance. Official SI literature suggests using the unit newton metre (N·m) or the unit joule per radian.[7] The unit newton metre is properly denoted N·m or N m. [8] This avoids ambiguity with mN, millinewtons. The joule, which is the SI unit for energy or work, is dimensionally equivalent to a newton metre, but it is not used for torque. Energy and torque are entirely different concepts, so the practice of using different unit names for them helps avoid mistakes and misunderstandings.[7] The dimensional equivalence of these units, of course, is not simply a coincidence: A torque of 1 N·m applied through a full revolution will require an energy of exactly 2π joules. Mathematically, E= \tau \theta\ where E is the energy, τ is magnitude of the torque, and θ is the angle moved (in radians). 
This equation motivates the alternate unit name joules per radian.[7] In British unit, "pound-force-feet" (lbf x ft), "foot-pounds-force", "inch-pounds-force", "ounce-force-inches" (oz x in) are used, and other non-SI units of torque includes "metre-kilograms-force". For all these units, the word "force" is often left out,[9] for example abbreviating "pound-force-foot" to simply "pound-foot" (in this case, it would be implicit that the "pound" is pound-force and not pound-mass). Sometimes one may see torque given units that don't dimensionally make sense. For example: g x cm . In these units, g should be understood as the force given by the weight of 1 gram at the surface of the earth. The surface of the earth is understood to have an average acceleration of gravity (approx. 9.80665 m/sec2). Special cases and other facts Moment arm formula Moment arm diagram A very useful special case, often given as the definition of torque in fields other than physics, is as follows: |\tau| = (\textrm{moment\ arm}) (\textrm{force}). The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque, arising from a perpendicular force: |\tau| = (\textrm{distance\ to\ centre}) (\textrm{force}). For example, if a person places a force of 10 N on a spanner (wrench) which is 0.5 m long, the torque will be 5 N m, assuming that the person pulls the spanner by applying force perpendicular to the The torque caused by the two opposing forces Fg and −Fg causes a change in the angular momentum L in the direction of that torque. This causes the top to precess. 
Static equilibrium For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and vertical forces, the sum of the forces requirement is two equations: ΣH = 0 and ΣV = 0, and the torque a third equation: Στ = 0. That is, to solve statically determinate equilibrium problems in two-dimensions, we use three equations. Net force versus torque When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same regardless of your point of reference. If the net force \mathbf{F} is not zero, and \mathbf{\tau}_1 is the torque measured from \mathbf{r}_1, then the torque measured from \mathbf{r}_2 is \mathbf{\ tau}_2 = \mathbf{\tau}_1 + (\mathbf{r}_1 - \mathbf{r}_2) \times \mathbf{F} Machine torque Torque curve of a motorcycle ("BMW K 1200 R 2005"). The horizontal axis is the speed (in rpm) that the crankshaft is turning, and the vertical axis is the torque (in Newton metres) that the engine is capable of providing at that speed. Torque is part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by its rotational speed of the axis. Internal-combustion engines produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). The varying torque output over that range can be measured with a dynamometer, and shown as a torque curve. The peak of that torque curve occurs somewhat below the overall power peak. The torque peak cannot, by definition, appear at higher rpm than the power peak. Understanding the relationship between torque, power and engine speed is vital in automotive engineering, concerned as it is with transmitting power from the engine through the drive train to the wheels. 
Power is a function of torque and engine speed. The gearing of the drive train must be chosen appropriately to make the most of the motor's torque characteristics. Power at the drive wheels is equal to engine power less mechanical losses regardless of any gearing between the engine and drive wheels. Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints). Reciprocating steam engines can start heavy loads from zero RPM without a clutch. Relationship between torque, power and energy If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through a rotational distance, it is doing work. Mathematically, for rotation about a fixed axis through the center of mass, W = \int_{\theta_1}^{\theta_2} \tau\ \mathrm{d}\theta, where W is work, τ is torque, and θ1 and θ2 represent (respectively) the initial and final angular positions of the body.[10] It follows from the work-energy theorem that W also represents the change in the rotational kinetic energy Krot of the body, given by K_{\mathrm{rot}} = \tfrac{1}{2}I\omega^2, where I is the moment of inertia of the body and ω is its angular speed.[10] Power is the work per unit time, given by P = \boldsymbol{\tau} \cdot \boldsymbol{\omega}, where P is power, τ is torque, ω is the angular velocity, and · represents the scalar product. Mathematically, the equation may be rearranged to compute torque for a given power output. Note that the power injected by the torque depends only on the instantaneous angular speed – not on whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on the instantaneous speed – not on the resulting acceleration, if any). 
In practice, this relationship can be observed in power stations which are connected to a large electrical power grid. In such an arrangement, the generator's angular speed is fixed by the grid's frequency, and the power output of the plant is determined by the torque applied to the generator's axis of rotation. Consistent units must be used. For metric SI units power is watts, torque is newton metres and angular speed is radians per second (not rpm and not revolutions per second). Also, the unit newton metre is dimensionally equivalent to the joule, which is the unit of energy. However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned to a scalar. Conversion to other units A conversion factor may be necessary when using different units of power, torque, or angular speed. For example, if rotational speed (revolutions per time) is used in place of angular speed (radians per time), we multiply by a factor of 2π radians per revolution. \mbox{power} = \mbox{torque} \times 2 \pi \times \mbox{rotational speed} Adding units: \mbox{power (W)} = \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rps)} Dividing on the left by 60 seconds per minute and by 1000 watts per kilowatt gives us the following. \mbox{power (kW)} = \frac{ \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rpm)}} {60000} where rotational speed is in revolutions per minute (rpm). Some people (e.g. American automotive engineers) use horsepower (imperial mechanical) for power, foot-pounds (lbf·ft) for torque and rpm for rotational speed. This results in the formula changing to: \mbox{power (hp)} = \frac{ \mbox{torque(lbf}\cdot\mbox{ft)} \times 2 \pi \times \mbox{rotational speed (rpm)} }{33000}. The constant below in, ft·lbf/min, changes with the definition of the horsepower; for example, using metric horsepower, it becomes ~32,550. Use of other units (e.g. 
BTU/h for power) would require a different custom conversion factor. For a rotating object, the linear distance covered at the circumference in a radian of rotation is the product of the radius with the angular speed. That is: linear speed = radius × angular speed. By definition, linear distance=linear speed × time=radius × angular speed × time. By the definition of torque: torque=force × radius. We can rearrange this to determine force=torque ÷ radius. These two values can be substituted into the definition of power: \mbox{power} = \frac{\mbox{force} \times \mbox{linear distance}}{\mbox{time}}=\frac{\left(\frac{\mbox{torque}}{\displaystyle{r}}\right) \times (r \times \mbox{angular speed} \times t)} {t} = \mbox {torque} \times \mbox{angular speed}. The radius r and time t have dropped out of the equation. However angular speed must be in radians, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation to give: \mbox{power}=\mbox{torque} \times 2 \pi \times \mbox{rotational speed}. \, If torque is in lbf·ft and rotational speed in revolutions per minute, the above equation gives power in ft·lbf/min. The horsepower form of the equation is then derived by applying the conversion factor 33000 ft·lbf/min per horsepower: \mbox{power} = \mbox{torque } \times\ 2 \pi\ \times \mbox{ rotational speed} \cdot \frac{\mbox{ft}\cdot\mbox{lbf}}{\mbox{min}} \times \frac{\mbox{horsepower}}{33000 \cdot \frac{\mbox{ft }\cdot\mbox{ lbf}}{\mbox{min}} } \approx \frac {\mbox{torque} \times \mbox{RPM}}{5252} because 5252.113122... = \frac {33000} {2 \pi}. 
\, Principle of moments The Principle of Moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name) states that the sum of torques due to several forces applied to a single point is equal to the torque due to the sum (resultant) of the forces. Mathematically, this follows from: (\mathbf{r}\times\mathbf{F}_1) + (\mathbf{r}\times\mathbf{F}_2) + \cdots = \mathbf{r}\times(\mathbf{F}_1+\mathbf{F}_2 + \cdots). Torque multiplier A torque multiplier is a gear box, which works on the principle of epicyclic gearing. The given load at the input gets multiplied as per the multiplication factor and transmitted to the output, thereby achieving greater load with minimal effort. sumber wikipediaGuntap... kalau ikut ni penerangan pening jg tu kepala mo pikir...sinang cerita torque kekuatan tu injin tarik tu badan kereta Torque, also called moment or moment of force (see the terminology below), is the tendency of a force to rotate an object about an axis,[1] fulcrum, or pivot. Just as a force is a push or a pull, a torque can be thought of as a twist. Loosely speaking, torque is a measure of the turning force on an object such as a bolt or a flywheel. For example, pushing or pulling the handle of a wrench connected to a nut or bolt produces a torque (turning force) that loosens or tightens the nut or bolt. The terminology for this concept is not straightforward: In the US, in physics it is usually called "torque" and in mechanical engineering it is called "moment".[2] However outside the US this varies. In the UK for instance, most physicists will use the term "moment". In mechanical engineering, the term "torque" means something different,[3] described below. In this article the word "torque" is always used to mean the same as "moment". The symbol for torque is typically τ, the Greek letter tau. When it is called moment, it is commonly denoted M. 
The magnitude of torque depends on three quantities: the force applied, the length of the lever arm[4] connecting the axis to the point of force application, and the angle between the force vector and the lever arm. In symbols: \boldsymbol \tau = \mathbf{r}\times \mathbf{F}\,\! \tau = rF\sin \theta\,\! τ is the torque vector and τ is the magnitude of the torque, r is the displacement vector (a vector from the point from which torque is measured to the point where force is applied), and r is the length (or magnitude) of the lever arm vector, F is the force vector, and F is the magnitude of the force, × denotes the cross product, θ is the angle between the force vector and the lever arm vector. The length of the lever arm is particularly important; choosing this length appropriately lies behind the operation of levers, pulleys, gears, and most other simple machines involving a mechanical The SI unit for torque is the newton metre (N·m). For more on the units of torque, see below. 1 Terminology 2 History 3 Definition and relation to angular momentum 3.1 Proof of the equivalence of definitions for a fixed instantaneous centre of rotation 4 Units 5 Special cases and other facts 5.1 Moment arm formula 5.2 Static equilibrium 5.3 Net force versus torque 6 Machine torque 7 Relationship between torque, power and energy 7.1 Conversion to other units 7.2 Derivation 8 Principle of moments 9 Torque multiplier 10 See also 11 References 12 External links See also: Couple (mechanics) In mechanical engineering (unlike physics), the terms "torque" and "moment" are not interchangeable. 
"Moment" is the general term for the tendency of one or more applied forces to rotate an object about an axis (the concept which in physics is called torque).[3] "Torque" is a special case of this: If the applied force vectors add to zero (i.e., their "resultant" is zero), then the forces are called a "couple" and their moment is called a "torque".[3] For example, a rotational force down a shaft, such as a turning screw-driver, forms a couple, so the resulting moment is called a "torque". By contrast, a lateral force on a beam produces a moment (called a bending moment), but since the net force is nonzero, this bending moment is not called a "torque". This article follows physics terminology by calling all moments by the term "torque", whether or not they are associated with a couple. The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The rotational analogues of force, mass, and acceleration are torque, moment of inertia, and angular acceleration, respectively. Definition and relation to angular momentum A particle is located at position r relative to its axis of rotation. When a force F is applied to the particle, only the perpendicular component F⊥ produces a torque. This torque τ = r × F has magnitude τ = |r||F⊥| = |r||F|sinθ and is directed outward from the page. Torque is defined about a point not specifically about axis as mentioned in several books. A force applied at a right angle to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. A force of three newtons applied two metres from the fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. 
The direction of the torque can be determined by using the right hand grip rule: if the fingers of the right hand curl in the direction of rotation and the thumb points along the axis of rotation, then the thumb also points in the direction of the torque.[5] More generally, the torque on a particle (which has the position r in some reference frame) can be defined as the cross product: \boldsymbol{\tau} = \mathbf{r} \times \mathbf{F}, where r is the particle's position vector relative to the fulcrum, and F is the force acting on the particle. The magnitude τ of the torque is given by \tau = rF\sin\theta,\! where r is the distance from the axis of rotation to the particle, F is the magnitude of the force applied, and θ is the angle between the position and force vectors. Alternatively, \tau = rF_{\perp}, where F⊥ is the amount of force directed perpendicularly to the position of the particle. Any force directed parallel to the particle's position vector does not produce a torque.[6] It follows from the properties of the cross product that the torque vector is perpendicular to both the position and force vectors. It points along the axis of rotation, and its direction is determined by the right-hand rule.[6] The unbalanced torque on a body along axis of rotation determines the rate of change of the body's angular momentum, \boldsymbol{\tau} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t} where L is the angular momentum vector and t is time. If multiple torques are acting on the body, it is instead the net torque which determines the rate of change of the angular momentum: \boldsymbol{\tau}_1 + \cdots + \boldsymbol{\tau}_n = \boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}. For rotation about a fixed axis, \mathbf{L} = I\boldsymbol{\omega}, where I is the moment of inertia and ω is the angular velocity. 
It follows that

\boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t} = \frac{\mathrm{d}(I\boldsymbol{\omega})}{\mathrm{d}t} = I\frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t} = I\boldsymbol{\alpha},

where α is the angular acceleration of the body, measured in rad·s−2. This equation has the limitation that the torque equation may only be written about the instantaneous axis of rotation or the centre of mass, for any type of motion (pure translation, pure rotation, or mixed motion). Here I is the moment of inertia about the point about which the torque is written (either the instantaneous axis of rotation or the centre of mass only). If the body is in translational equilibrium, then the torque equation is the same about all points in the plane of motion.

Proof of the equivalence of definitions for a fixed instantaneous centre of rotation

The definition of angular momentum for a single particle is:

\mathbf{L} = \mathbf{r} \times \mathbf{p},

where "×" indicates the vector cross product and p is the particle's linear momentum. The time-derivative of this is:

\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \frac{d\mathbf{p}}{dt} + \frac{d\mathbf{r}}{dt} \times \mathbf{p}.

This result can easily be proven by splitting the vectors into components and applying the product rule. Now using the definitions of velocity v = dr/dt, acceleration a = dv/dt and linear momentum p = mv (if mass is constant),

\frac{d\mathbf{L}}{dt} = \mathbf{r} \times m \frac{d\mathbf{v}}{dt} + \mathbf{v} \times m\mathbf{v}.

The cross product of any vector with itself is zero, so the second term vanishes. Hence, with the definition of force F = ma (Newton's second law),

\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}.

Then by definition, torque τ = r × F. If multiple forces are applied, Newton's second law instead reads Fnet = ma, and it follows that

\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}_{\mathrm{net}} = \boldsymbol{\tau}_{\mathrm{net}}.
The proof relies on the assumption that mass is constant; this is valid only in non-relativistic systems in which no mass is being ejected.

Torque has dimensions of force times distance. Official SI literature suggests using the unit newton metre (N·m) or the unit joule per radian.[7] The unit newton metre is properly denoted N·m or N m,[8] which avoids ambiguity with mN, millinewtons.

The joule, which is the SI unit for energy or work, is dimensionally equivalent to a newton metre, but it is not used for torque. Energy and torque are entirely different concepts, so the practice of using different unit names for them helps avoid mistakes and misunderstandings.[7] The dimensional equivalence of these units, of course, is not simply a coincidence: a torque of 1 N·m applied through a full revolution will require an energy of exactly 2π joules. Mathematically,

E = \tau \theta,

where E is the energy, τ is the magnitude of the torque, and θ is the angle moved (in radians). This equation motivates the alternate unit name joule per radian.[7]

In British units, "pound-force-feet" (lbf·ft), "foot-pounds-force", "inch-pounds-force" and "ounce-force-inches" (ozf·in) are used, and other non-SI units of torque include "metre-kilograms-force". For all these units, the word "force" is often left out,[9] for example abbreviating "pound-force-foot" to simply "pound-foot" (in this case, it would be implicit that the "pound" is pound-force and not pound-mass).

Sometimes one may see torque given in units that don't dimensionally make sense, for example g·cm. In these units, g should be understood as the force given by the weight of 1 gram at the surface of the earth, where the average acceleration of gravity is approximately 9.80665 m/s².
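The "exactly 2π joules" claim follows directly from E = τθ. A tiny Python check (the function name is my own):

```python
import math

def work_from_torque(torque_nm, angle_rad):
    """Energy in joules from a constant torque acting through angle_rad radians."""
    return torque_nm * angle_rad

# 1 N·m applied through one full revolution (2*pi radians):
E = work_from_torque(1.0, 2.0 * math.pi)
print(E)   # 6.283185307179586, i.e. exactly 2*pi joules
```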
Special cases and other facts

Moment arm formula

[Figure: moment arm diagram]

A very useful special case, often given as the definition of torque in fields other than physics, is as follows:

|\tau| = (\textrm{moment arm})(\textrm{force}).

The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it gives only the magnitude of the torque and not its direction, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and the torque will be a maximum for the given force. The equation for the magnitude of a torque arising from a perpendicular force:

|\tau| = (\textrm{distance to centre})(\textrm{force}).

For example, if a person places a force of 10 N on a spanner (wrench) which is 0.5 m long, the torque will be 5 N·m, assuming that the person pulls the spanner by applying force perpendicular to the spanner.

[Figure: the torque caused by the two opposing forces Fg and −Fg causes a change in the angular momentum L in the direction of that torque. This causes the top to precess.]

Static equilibrium

For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and vertical forces, the sum-of-forces requirement gives two equations, ΣH = 0 and ΣV = 0, and the torque requirement a third equation, Στ = 0. That is, to solve statically determinate equilibrium problems in two dimensions, we use three equations.

Net force versus torque

When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same regardless of your point of reference.
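The spanner arithmetic and the three equilibrium conditions above can both be checked with a short Python sketch. The beam scenario below is an assumed example of mine (a symmetric simply-supported beam), chosen only to show ΣH = 0, ΣV = 0, Στ = 0 holding simultaneously:

```python
# Perpendicular force on a spanner: |tau| = distance_to_centre * force
spanner_torque = 0.5 * 10.0
print(spanner_torque)   # 5.0 N·m, as in the text

# 2-D static equilibrium check: a 4 m beam on two end supports with a
# 100 N load at its midpoint. Each force is (horizontal, vertical, x-position).
forces = [
    (0.0,   50.0, 0.0),   # left support reaction
    (0.0,   50.0, 4.0),   # right support reaction
    (0.0, -100.0, 2.0),   # load at the midpoint
]

sum_h = sum(f[0] for f in forces)
sum_v = sum(f[1] for f in forces)
# Moments about x = 0, counter-clockwise positive. Only vertical components
# contribute here, so each torque is x * vertical force.
sum_tau = sum(f[2] * f[1] for f in forces)

print(sum_h, sum_v, sum_tau)   # 0.0 0.0 0.0 -> static equilibrium
```

Because the net force is zero, taking moments about any other point would give the same zero torque, which is exactly the "net force versus torque" remark above.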
If the net force \mathbf{F} is not zero, and \boldsymbol{\tau}_1 is the torque measured from \mathbf{r}_1, then the torque measured from \mathbf{r}_2 is

\boldsymbol{\tau}_2 = \boldsymbol{\tau}_1 + (\mathbf{r}_1 - \mathbf{r}_2) \times \mathbf{F}.

Machine torque

[Figure: torque curve of a motorcycle (BMW K 1200 R, 2005). The horizontal axis is the speed (in rpm) at which the crankshaft is turning, and the vertical axis is the torque (in newton metres) that the engine is capable of providing at that speed.]

Torque is part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by the rotational speed of its axis. Internal-combustion engines produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). The varying torque output over that range can be measured with a dynamometer and shown as a torque curve. The peak of that torque curve occurs somewhat below the overall power peak. The torque peak cannot, by definition, appear at higher rpm than the power peak.

Understanding the relationship between torque, power and engine speed is vital in automotive engineering, concerned as it is with transmitting power from the engine through the drive train to the wheels. Power is a function of torque and engine speed. The gearing of the drive train must be chosen appropriately to make the most of the motor's torque characteristics. Power at the drive wheels is equal to engine power less mechanical losses, regardless of any gearing between the engine and drive wheels.

Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints). Reciprocating steam engines can start heavy loads from zero rpm without a clutch.

Relationship between torque, power and energy

If a force is allowed to act through a distance, it is doing mechanical work.
Similarly, if torque is allowed to act through a rotational distance, it is doing work. Mathematically, for rotation about a fixed axis through the center of mass,

W = \int_{\theta_1}^{\theta_2} \tau\ \mathrm{d}\theta,

where W is work, τ is torque, and θ1 and θ2 represent (respectively) the initial and final angular positions of the body.[10] It follows from the work-energy theorem that W also represents the change in the rotational kinetic energy Krot of the body, given by

K_{\mathrm{rot}} = \tfrac{1}{2}I\omega^2,

where I is the moment of inertia of the body and ω is its angular speed.[10]

Power is the work per unit time, given by

P = \boldsymbol{\tau} \cdot \boldsymbol{\omega},

where P is power, τ is torque, ω is the angular velocity, and · represents the scalar product. Mathematically, the equation may be rearranged to compute torque for a given power output. Note that the power injected by the torque depends only on the instantaneous angular speed, not on whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on the instantaneous speed, not on the resulting acceleration, if any).

In practice, this relationship can be observed in power stations which are connected to a large electrical power grid. In such an arrangement, the generator's angular speed is fixed by the grid's frequency, and the power output of the plant is determined by the torque applied to the generator's axis of rotation.

Consistent units must be used. For metric SI units, power is in watts, torque in newton metres and angular speed in radians per second (not rpm and not revolutions per second). Also, the unit newton metre is dimensionally equivalent to the joule, which is the unit of energy. However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned to a scalar.
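For a fixed axis, the scalar form of P = τ·ω is just torque times angular speed in rad/s. A small Python sketch of that calculation (the function name and the 200 N·m / 3000 rpm figures are my own illustrative choices):

```python
import math

def power_watts(torque_nm, omega_rad_per_s):
    """Mechanical power for a torque applied at angular speed omega (rad/s)."""
    return torque_nm * omega_rad_per_s

# Example: 200 N·m of torque at 3000 rpm. The rpm value must first be
# converted to radians per second, as the text requires for SI units.
omega = 3000 * 2.0 * math.pi / 60.0     # rpm -> rad/s (about 314.16 rad/s)
P = power_watts(200.0, omega)
print(P)                                # ~62832 W, i.e. about 62.8 kW
```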
Conversion to other units

A conversion factor may be necessary when using different units of power, torque, or angular speed. For example, if rotational speed (revolutions per time) is used in place of angular speed (radians per time), we multiply by a factor of 2π radians per revolution:

\mbox{power} = \mbox{torque} \times 2 \pi \times \mbox{rotational speed}.

Adding units:

\mbox{power (W)} = \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rps)}.

Dividing on the left by 60 seconds per minute and by 1000 watts per kilowatt gives the following,

\mbox{power (kW)} = \frac{ \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rpm)}} {60000},

where rotational speed is in revolutions per minute (rpm).

Some people (e.g. American automotive engineers) use horsepower (imperial mechanical) for power, foot-pounds (lbf·ft) for torque and rpm for rotational speed. This results in the formula changing to:

\mbox{power (hp)} = \frac{ \mbox{torque (lbf}\cdot\mbox{ft)} \times 2 \pi \times \mbox{rotational speed (rpm)} }{33000}.

The constant of 33,000 ft·lbf/min changes with the definition of the horsepower; for example, using metric horsepower, it becomes approximately 32,550. Use of other units (e.g. BTU/h for power) would require a different custom conversion factor.

For a rotating object, the linear distance covered at the circumference in a radian of rotation is the product of the radius with the angular speed. That is: linear speed = radius × angular speed. By definition, linear distance = linear speed × time = radius × angular speed × time. By the definition of torque: torque = force × radius. We can rearrange this to determine force = torque ÷ radius. These two values can be substituted into the definition of power:

\mbox{power} = \frac{\mbox{force} \times \mbox{linear distance}}{\mbox{time}} = \frac{\left(\frac{\mbox{torque}}{\displaystyle{r}}\right) \times (r \times \mbox{angular speed} \times t)}{t} = \mbox{torque} \times \mbox{angular speed}.
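Both conversion formulas above translate directly into code. The sketch below (function names mine) also illustrates a well-known consequence of the 33,000 constant: torque in lbf·ft and power in hp are numerically equal at 33,000/2π ≈ 5252 rpm, which is why dynamometer torque and horsepower curves cross there:

```python
import math

def power_kw(torque_nm, rpm):
    # power (kW) = torque (N·m) * 2*pi * rpm / 60000
    return torque_nm * 2.0 * math.pi * rpm / 60000.0

def power_hp(torque_lbf_ft, rpm):
    # power (hp) = torque (lbf·ft) * 2*pi * rpm / 33000
    return torque_lbf_ft * 2.0 * math.pi * rpm / 33000.0

print(power_kw(200.0, 3000))   # ~62.83 kW
print(power_hp(300.0, 5252))   # ~300 hp: the hp and lbf·ft curves cross near 5252 rpm
```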
The radius r and time t have dropped out of the equation. However, the angular speed must be in radians, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation.

gampalid wrote: As for the benefits of white radish: it can help clear out the kidneys, and if there are kidney stones it will surely break them up. The way to prepare it is to juice the white radish, but do not add water or ice; keep it pure. Anyone suffering from kidney trouble should try making a habit of drinking this white radish juice.

tingau wrote (quoting gampalid's post above): I want to try this; my lower back and kidneys have been aching lately.

gampalid wrote (replying to tingau): Go ahead and try it. There was someone whose urine had blood in it; he made a habit of drinking this radish juice and says he was fine straight away. He says even the kidney stones broke up and passed out with his urine.

Another member (from Labuan) wrote: Acai berry can also help treat the eyes... and it has many other benefits...
Clifton, NJ Algebra 1 Tutors

Find a Clifton, NJ Algebra 1 Tutor

...I want my students to be comfortable with always asking questions! I have been a professional tutor throughout college and I have a Level II Advanced Tutor certificate in CRLA International Training. My skills as a tutor have always been derived from my love of learning and I hope to instill that in my students.
38 Subjects: including algebra 1, English, reading, ESL/ESOL

...Math is my favorite subject because it's problem solving. I enjoy a good challenge and a chance to solve problems. I took honors Algebra when I was in middle school, as well as in high school.
8 Subjects: including algebra 1, geometry, elementary math, chess

...Experienced tutoring advanced subjects that rely on algebra as a foundation. I'm a college graduate in Physics: 1 year of calculus in high school, 2 years of calculus/analysis in university. In-depth familiarity with all aspects of the subject and an intuitive feel for it that I do my best to transmit to students.
17 Subjects: including algebra 1, chemistry, Spanish, calculus

...I am glad to say my student went on to be the salutatorian in her 8th grade class. I have also tutored high school students and even some college students, mostly in math (algebra 1 and 2, geometry, precalculus, calculus 1 and 2), biology, physics, and English. I have experience tutoring different age groups and I am able to connect with students and help them understand the course material.
16 Subjects: including algebra 1, English, physics, writing

...My position was first base for 10 years. I played on a traveling team for 3 years, the NJ Raisers in Jersey City. I've also umpired games in the past, both behind the plate and around the bases.
16 Subjects: including algebra 1, statistics, precalculus, elementary math
Projective dimension of cohomology over regular rings

MathOverflow is a question and answer site for professional mathematicians.

Question (tags: ac.commutative-algebra, homological-algebra): Suppose $R$ is a regular ring and $F^{\bullet}: 0\to F^0 \to F^1 \to \dots \to F^d \to 0$ is a complex of finite rank free $R$-modules. Is it true that $\mathrm{projdim}\,H^i(F^{\bullet}) \leq d$ for all $i$?

Accepted answer (Fernando Muro): No if $R$ has dimension $>1$. Take a finitely presented module $M$ of projective dimension $>1$. Choose a finite presentation of $M$, i.e. a complex $0\rightarrow F^0\rightarrow F^1\rightarrow 0$ as in your question with $H^1(F^\bullet)=M$. Then you get a counterexample.

Follow-up answer by the asker: Since this is false in an embarrassingly obvious way, let me point out my motivation for asking. Suppose $(R,\mathfrak{m})$ is a local Cohen-Macaulay ring and $F^{\bullet}$ is as in the question. Then one can show that every minimal prime in the $R$-support of $H^{\ast}(F^{\bullet})$ has height $\leq d$. This would follow from the stronger but false statement which I queried. In the situation of Fernando Muro's counterexample, this theorem says that if $R^{m}\overset{f}{\to} R^{n} \to M \to 0$ is a presentation, then $M$ and $\ker f$ can't both have dimension $<d-1$.
Java.lang.StrictMath Class

The java.lang.StrictMath class contains methods for performing basic numeric operations such as the elementary exponential, logarithm, square root, and trigonometric functions.

Class declaration

Following is the declaration for the java.lang.StrictMath class:

public final class StrictMath extends Object

Following are the fields for the java.lang.StrictMath class:

• static double E -- This is the double value that is closer than any other to e, the base of the natural logarithms.
• static double PI -- This is the double value that is closer than any other to pi, the ratio of the circumference of a circle to its diameter.

Class methods

1. static double abs(double a) -- This method returns the absolute value of a double value.
2. static float abs(float a) -- This method returns the absolute value of a float value.
3. static int abs(int a) -- This method returns the absolute value of an int value.
4. static long abs(long a) -- This method returns the absolute value of a long value.
5. static double acos(double a) -- This method returns the arc cosine of a value; the returned angle is in the range 0.0 through pi.
6. static double asin(double a) -- This method returns the arc sine of a value; the returned angle is in the range -pi/2 through pi/2.
7. static double atan(double a) -- This method returns the arc tangent of a value; the returned angle is in the range -pi/2 through pi/2.
8. static double atan2(double y, double x) -- This method returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta).
9. static double cbrt(double a) -- This method returns the cube root of a double value.
10. static double ceil(double a) -- This method returns the smallest (closest to negative infinity) double value that is greater than or equal to the argument and is equal to a mathematical integer.
11. static double copySign(double magnitude, double sign) -- This method returns the first floating-point argument with the sign of the second floating-point argument.
12. static float copySign(float magnitude, float sign) -- This method returns the first floating-point argument with the sign of the second floating-point argument.
13. static double cos(double a) -- This method returns the trigonometric cosine of an angle.
14. static double cosh(double x) -- This method returns the hyperbolic cosine of a double value.
15. static double exp(double a) -- This method returns Euler's number e raised to the power of a double value.
16. static double expm1(double x) -- This method returns e^x - 1.
17. static double floor(double a) -- This method returns the largest (closest to positive infinity) double value that is less than or equal to the argument and is equal to a mathematical integer.
18. static int getExponent(double d) -- This method returns the unbiased exponent used in the representation of a double.
19. static int getExponent(float f) -- This method returns the unbiased exponent used in the representation of a float.
20. static double hypot(double x, double y) -- This method returns sqrt(x^2 + y^2) without intermediate overflow or underflow.
21. static double IEEEremainder(double f1, double f2) -- This method computes the remainder operation on the two arguments as prescribed by the IEEE 754 standard.
22. static double log(double a) -- This method returns the natural logarithm (base e) of a double value.
23. static double log10(double a) -- This method returns the base 10 logarithm of a double value.
24. static double log1p(double x) -- This method returns the natural logarithm of the sum of the argument and 1.
25. static double max(double a, double b) -- This method returns the greater of two double values.
26. static float max(float a, float b) -- This method returns the greater of two float values.
27. static int max(int a, int b) -- This method returns the greater of two int values.
28. static long max(long a, long b) -- This method returns the greater of two long values.
29. static double min(double a, double b) -- This method returns the smaller of two double values.
30. static float min(float a, float b) -- This method returns the smaller of two float values.
31. static int min(int a, int b) -- This method returns the smaller of two int values.
32. static long min(long a, long b) -- This method returns the smaller of two long values.
33. static double nextAfter(double start, double direction) -- This method returns the floating-point number adjacent to the first argument in the direction of the second argument.
34. static float nextAfter(float start, double direction) -- This method returns the floating-point number adjacent to the first argument in the direction of the second argument.
35. static double nextUp(double d) -- This method returns the floating-point value adjacent to d in the direction of positive infinity.
36. static float nextUp(float f) -- This method returns the floating-point value adjacent to f in the direction of positive infinity.
37. static double pow(double a, double b) -- This method returns the value of the first argument raised to the power of the second argument.
38. static double random() -- This method returns a double value with a positive sign, greater than or equal to 0.0 and less than 1.0.
39. static double rint(double a) -- This method returns the double value that is closest in value to the argument and is equal to a mathematical integer.
40. static long round(double a) -- This method returns the closest long to the argument.
41. static int round(float a) -- This method returns the closest int to the argument.
42. static double scalb(double d, int scaleFactor) -- This method returns d × 2^scaleFactor rounded as if performed by a single correctly rounded floating-point multiply to a member of the double value set.
43. static float scalb(float f, int scaleFactor) -- This method returns f × 2^scaleFactor rounded as if performed by a single correctly rounded floating-point multiply to a member of the float value set.
44. static double signum(double d) -- This method returns the signum function of the argument; zero if the argument is zero, 1.0 if the argument is greater than zero, -1.0 if the argument is less than zero.
45. static float signum(float f) -- This method returns the signum function of the argument; zero if the argument is zero, 1.0f if the argument is greater than zero, -1.0f if the argument is less than zero.
46. static double sin(double a) -- This method returns the trigonometric sine of an angle.
47. static double sinh(double x) -- This method returns the hyperbolic sine of a double value.
48. static double sqrt(double a) -- This method returns the correctly rounded positive square root of a double value.
49. static double tan(double a) -- This method returns the trigonometric tangent of an angle.
50. static double tanh(double x) -- This method returns the hyperbolic tangent of a double value.
51. static double toDegrees(double angrad) -- This method converts an angle measured in radians to an approximately equivalent angle measured in degrees.
52. static double toRadians(double angdeg) -- This method converts an angle measured in degrees to an approximately equivalent angle measured in radians.
53. static double ulp(double d) -- This method returns the size of an ulp of the argument.
54. static float ulp(float f) -- This method returns the size of an ulp of the argument.

Methods inherited

This class inherits methods from the following classes:
[Haskell-cafe] Overlapping instance problem

Robert Dockins robdockins at fastmail.fm
Mon Feb 13 15:48:54 EST 2006

On Feb 13, 2006, at 2:26 PM, Jeff.Harper at handheld.com wrote:

> Hi,
> I've posted a couple messages to the Haskell Cafe in the last few
> months. I'm new to Haskell. But, I've set out to implement my own
> vectors, matrices, complex numbers, etc.
> One goal I have, is to overload operators to work with my new
> types. The pursuit of this goal, has pushed me to learn a lot about the
> Haskell type system. When I get stuck from time-to-time, the kind
> folks on this list have pointed me in the right direction.
> I'm stuck now. One thing I want to avoid is adding new
> multiplication operators to handle multiplication of dissimilar
> types. For instance, I'd like to be able to have an expression
> like k * m where k is a Double and m is a Matrix. This doesn't
> work with the prelude's (*) operator because the prelude's (*) has
> signature:
> (*) :: (Num a) => a -> a -> a.
> To get around this, I wrote my own versions of a Multiply class
> that allows dissimilar types to be multiplied. You can see my
> Multiply class in the module at the end of this Email.

[snip error message]

> I don't understand how m1 * m2 can match the scalar multiplication
> instances. For instance, the scalar * matrix instance has signature:
> instance (Multiply a b c, Num a, Num b, Num c)
>     => Multiply a (Matrix b) (Matrix c)
> where
> m1 in my expression would correspond to the 'a' type variable.
> But, 'a' is constrained to be a Num. However, I never made my
> Matrix type an instance of Num.
> Is there a work around for this? In my first implementation, I did
> not have the Num constraints in the matrix Multiply instances. I
> added the Num constraints specifically, to remove the ambiguity of
> the overlapping instance. Why didn't this work?

I'm pretty sure this is due to a misfeature of the way class instance selection works.
Essentially, the typechecker IGNORES the instance context (everything before the =>) when looking for matches, and it only checks the context after it has irrevocably selected an instance. Thus, rather than backtracking and trying to find another instance, the typechecker just gives you errors about unsatisfied constraints or overlapping instance errors. Often this isn't what you want (as in this case). To be fair, doing better than this (in general) seems pretty difficult. The typechecker sometimes needs to have more information than it can currently gather. I think the following extension proposal might address this problem (if it's ever implemented...)

As of now, the typechecker can't be absolutely certain that 'Matrix' isn't (and will never be) an instance of 'Num'. Just because you haven't made it a member of 'Num' doesn't mean someone else couldn't! For it to do what you want, the typechecker needs to be able to prove that, given any legal collection of instances, the instance declarations in question will not overlap. It can't do that.

As to workarounds... that becomes more difficult. Essentially you need to replace the bare type variable 'a' in your instance declarations with something that can guide the typechecker to select the 'correct' instance. Two options come to mind:

1) Create a 'newtype' for scalars. Now you have to wrap and unwrap your scalars, which is a bit of a pain, but it is a fully general solution. Judicious use of newtype deriving may eliminate some of this pain.

2) Create separate 'Multiply' instances for each type of scalar you want to use. Eliminates the ugly wrapping/unwrapping, but limits the types of scalars you can use.

Rob Dockins

Speak softly and drive a Sherman tank.
Laugh hard; it's a long way to the bank.
    -- TMBG
Fuzzy math? School District to review how subject is taught SEATTLE — When it comes to teaching math, a growing number of parents and educators want Seattle schools to check their numbers. Critics say the current system is failing kids, and this summer the district starts considering alternatives. Damon Ellingston, a university professor who teaches college math, physics and astronomy, says Seattle grads aren’t ready for college math. “I had students coming from the Seattle public school system who had not really been exposed to the basic algorithms of math that you and I might have been exposed to when we went to high school a long time ago,” Ellingston said. The problem, he said, is the textbooks. Students in kindergarten through 5th grade use one called ‘Everyday Math.’ Some call it ‘fuzzy math,’ with colorful pictures and stories taking the place of basic computation, repetition and drills. It’s “intended to teach students how to think about mathematical ideas and how to formulate thoughts. I think it`s actually a laudable intention, but I think it’s pretty clear those textbooks have gone woefully wrong in the execution.” Six years ago, then-Seattle School Board President Michael DeBell voted against ‘fuzzy math.’ DeBell, who is still a board member, said Seattle’s diverse student body requires an approach that all kids understand, but he agrees that ‘Everyday Math’ may have backfired. “The number one complaint is that it`s confusing and it`s difficult to offer the help at home and help students achieve,” DeBell said. The School District this summer will begin reviewing its math curriculum, and a more traditional — and effective — course of study could be in place by fall of 2014. “I think we need to move back in the direction of more direct instruction, of clear ways to solve problems, agreeing there is one best way to do it and everybody learns it,” DeBell said. State Schools Superintendent Randy Dorn said Washington needs to raise its standards. 
Local tech companies are hiring from other states or even other countries to fill engineering jobs here because local kids aren't proficient in math. Dorn said the students must be more competitive. "You literally have to memorize," he said. "I know people say, oh no, memorize? But adding, subtracting, multiplying, dividing, you have to memorize those things, have them in your mind. Every day, in math, in my work and percentages and looking at concepts, I can easily do it in my head, because I have that foundational part."

Ellingston has two daughters who will enter the public school system soon. If Seattle math is still 'fuzzy,' he said, they will be going somewhere else. "They are not actually teaching them mathematics or how to think mathematically; in an attempt to sort of leap frog through the math, they threw the baby out with the bath water," he said.

DeBell said the school board didn't get enough parent feedback last time around, so that's going to change. One thing that won't is middle school math. Reviewing those lessons isn't in the budget.

14 Comments to "Fuzzy math? School District to review how subject is taught"

I would love to hear Ellingston's view on the math program, Investigations. It is far more "fuzzy" than Everyday Math.

First of all, there is no perfect textbook with all the answers. That's the reason we have teachers and school districts to determine which resources (textbooks included) will be used to enhance and promote learning. The interesting complaint on the other side is that our students don't know how to problem solve and can only handle regurgitated problems. Developing mathematical thinking is more than just knowing the basic facts, which I do believe are very important as well. Just because I'm a spelling and grammar expert doesn't mean I can write an award winning novel. It's also interesting that we ask college professors about teaching, as very few actually know how to teach.
Our society's understanding of teaching is: if I'm a genius and I speak my knowledge, then I'm teaching. Teaching isn't what the teacher knows, it's what the student can learn. I am a math teacher and over my career we have used different types of texts, and we always have to supplement with what we as teaching professionals know is needed. Whether that be more drill and kill with the supposed 'fuzzy' math or more problem solving and real world problems with the 'traditional' math. If there were a perfect textbook that always worked for every kid, every parent, every college professor and every career, then everyone would be using it.

Spelling and grammar are the building blocks of writing a great novel, or any novel - without the fundamentals no amount of creativity will make your work readable, let alone genius. Same goes for math. It's important to learn to think and problem solve, but the fundamentals need to be there. Math is a language, and you need to learn the rules and patterns and vocabulary in order to take it to the next step, just like any language. And sadly some curriculums leave the fundamentals out.

School districts across the country have already been leaving Everyday Math. Seattle is behind the curve. It doesn't work. True, no one textbook works for everyone, but it has already been shown that students who use Everyday Math are learning math less well. The lack of a perfect textbook is no excuse for using one that is inadequate.

I'm surprised that Seattle doesn't think Everyday Math teaches memorization. When my son was in the program, he brought home Fact Triangles, which is a "next generation" flash card. We practiced with those regularly. I thought they were better than the flash cards I used as a kid because the Fact Triangles had complementary operations on each triangle. Smart! My son is now in Honors Math classes in a STEM academy in high school and is doing well.
Interesting how the books shown are not EM books, but Connected Math Project for Middle School. Poorly researched story. EM students do well when the curriculum is presented the right way.

It has been long known that teaching basic skills is essential before conceptual understanding of the material can occur. If Einstein had been born in the Stone Age, his genius might have allowed him to invent basic arithmetic. But being born at the end of the 19th century allowed him to use all of the techniques of advanced physics. Building on these techniques he created the theory of relativity.

Why do educators ask students to re-invent basic math (at best) or skip over it entirely (at worst) and then expect them to move on to higher levels of math? It can't be done. There is no foundation. Why do people wonder "Why can't Johnny do math?" Because it was never taught!

Let's do some fun math: EM in Seattle is a K-5 program. Seattle has been using EM for 6 years. A college professor says that students entering college are weak in math and it is because of EM. 5 plus 6 is 11. Therefore, the students who have been identified as weak in math did not learn any math via EM, but EM is blamed anyway??? On the other hand, my daughter at another district had EM K-6. Fast forward 5 years – she just finished 11th grade and scored a 5 out of 5 on the AP Calc exam. Maybe EM isn't so bad! Don't be so quick to blame the books, especially when the whole premise of an argument is fatally flawed. Now that is a story worth writing about in the newspaper…

I was wondering if anyone was going to catch this. If a 5th grader started EM 6 years ago, they still would be in high school. So how can the argument be made that EM is not preparing Seattle students for college math?? There can't be data for this yet. Come on Marni Hughes and Q13Fox. Think before you start spouting your one-sided conservative propaganda. Maybe you need to wait a couple years and see that EM students in fact are getting a good math education.
Actually the couple parents who commented here that have EM students give more credence to EM being a good math program. This whole story is based on a false premise and ideological agenda that the right has been unsuccessfully pushing for the last few years. The research continually shows that the kind of rote, procedural learning that traditional textbooks promote doesn't help one learn or think mathematically. Everyday Math is used in high school too. But there's no money to change the high school curriculum right now.

News flash: Everyday Math certainly doesn't teach students to "learn or think mathematically." Far from it: EM encourages the use of calculators and doesn't even teach basic math facts. It moves constantly from one subject to the next ("trust the spiral!"), never allowing students to master any one concept. This program was created not by mathematicians but by "educators" at the University of Chicago, and true math experts have long since denounced this math curriculum as complete nonsense. If you think wanting elementary school kids to learn real math is "conservative propaganda," so be it, but I just call it good education.

If they are using the materials shown, that is not Everyday Math. That is Connected Mathematics, which is totally different. It's really sad to see that the wrong program is getting bashed.

Everyday Math is an absolute joke. My kids suffered through that nonsense for years until I finally yanked them out and put them in a private school that uses a real math program. Seattle may have been using EM for only a few years, but this dismal program has been a staple of most other public school systems around the country since the 1990s, and you can see where that's gotten us: at the bottom of international math rankings for any first-world country, colleges now having to offer remedial math for the majority of freshmen, STEM careers going to foreigners, etc.
In what setting does one usually define mixed sheaves and weights for them?

In BBD mixed sheaves and weights for them were only defined for ($\overline{\mathbb{Q}_l}$-)sheaves over a variety $X_0$ defined over a finite field $F$. Weights start to behave better when one extends coefficients from $F$ to its algebraic closure, i.e. passes from $X_0$ to $X$. Now, BBD was published in 1982. Are any significant improvements and/or generalizations known in this field now? There is a paper by Huber and a book by Jannsen where mixed sheaves and weights for them are mentioned. Yet these authors were not able to generalize the results of BBD (in fact, an example of Jannsen shows that this is probably impossible). They also didn't extend scalars. So, are there any other papers on this subject?

Upd. There seem to be two basic ways to define weights (for sheaves) explicitly. The first method uses weights of Hodge structures. It seems that this method can work only for something like the category of mixed Hodge modules. Possibly, I will study these categories in the future. Yet at the moment I study motives, and it seems that 'motivic' people usually do not understand mixed Hodge modules (and so did not relate them with motives). So, I am currently interested in the second method. It uses the eigenvalues of the Frobenius action. So, was anything interesting done using THIS approach after 1982?

perverse-sheaves weights ag.algebraic-geometry

Dear Mikhail, If you are not sure about the role of eigenvalues of Frobenius and their relations to weights, and also the relations to Hodge theory, I suggest you go back before 1982 and read Deligne's ICM address, Theorie de Hodge I. Mixed Hodge modules are intimately related to motives. The idea of motives and mixed sheaves is that there is an underlying category of mixed motivic sheaves (here "mixed" means "not necessarily pure", i.e.
not necessarily arising as the cohomology of a smooth proper family over the space where the sheaf lives). – Emerton Jul 9 '10 at 23:45

(Note: in the above, the sentence beginning "Mixed Hodge modules ... " was supposed to be the start of a new thought, and a new paragraph; but unfortunately comments don't seem to allow paragraphs. Anyway, continuing ... ) There are then realization functors to the various cohomology theories: $\ell$-adic realization takes mixed motivic sheaves to perverse sheaves as in BBD, and the Hodge theoretic realization takes values in mixed Hodge modules. A final remark: you might want to ask a more specific question than "was anything interesting done ...", since you are asking about a ... – Emerton Jul 9 '10 at 23:49

... fundamental technique and topic in algebraic geometry, arithmetic geometry, and number theory. People use these tools all the time, and are always trying to improve them (by trying to prove everything that has been conjectured, e.g. the weight-monodromy conjecture). The problem of constructing the category of mixed motivic sheaves is a fundamental and difficult one, but much thought has been given to that too, and the philosophy it suggests is used frequently as a guide to the truth. – Emerton Jul 9 '10 at 23:52

Dear Emerton, thank you! I am in the following situation: I know something about the motivic side of the story (that depends on several conjectures), and want to know more about sheaf-theoretic and non-conjectural things. – Mikhail Bondarko Jul 10 '10 at 0:57

These are very good answers. I have nothing to add except perhaps to suggest also looking at Deligne's 1974 ICM talk "Poids dans la cohomologie des variétés algébriques". – Donu Arapura Jul 10 '10 at 1:09

2 Answers

Admittedly, I'm almost completely ignorant about the $\ell$-adic setting.
But, in case the following at least gets at the spirit of your question: in the de Rham setting, I found an old (1990s) preprint of Saito ("On the formalism of mixed sheaves," now TeXed up and available on the arxiv) useful in understanding the formal structure of (the system of categories of) mixed sheaves. The book by Peters and Steenbrink ("Mixed Hodge Structures") also has a nice section (14.1, "An axiomatic introduction") in the chapter (14) on mixed Hodge modules that explains the picture well.

Thank you; I will have a look. – Mikhail Bondarko Jul 9 '10 at 18:58

These references are very interesting; thank you!! Yet it seems that all explicit constructions of weights in these books use weights of Hodge modules. Is this true? See the upd. of my question. – Mikhail Bondarko Jul 9 '10 at 22:11

I'm not sure if this is the kind of answer that you're looking for. This is an extension of Tom Nevin's answer. Saito's theory of mixed Hodge modules is modeled, to some extent, on BBD. The results are quite similar, but the constructions and proofs are entirely different (and unfortunately rather opaque). In a sense, this follows a general pattern. In the $\ell$-adic world, the weight filtration is determined naturally from the Galois action. In Hodge theory, one usually has to guess what it is based on analogy with it.

Perhaps I can say a few more words to make the formal structure of Saito's theory a little clearer. A mixed Hodge module consists of a filtered perverse sheaf $(K,W)$ over $\mathbb{Q}$, and a bifiltered regular holonomic $D$-module $(M,W,F)$ such that $(M,W)$ corresponds to $(K,W)$ under Riemann-Hilbert. One then needs to impose some axioms in order to get a good theory. The first is that, over a point, this datum defines a mixed Hodge structure. Before getting to the second, recall that one knows from BBD (and G I guess) that mixed perverse sheaves are stable under vanishing cycles. Saito takes this as an axiom.
However, making that last statement precise takes over 100 pages.

Thank you! Yet I am rather interested in weights defined in terms of eigenvalues of Frobenius actions; see the upd. to my question. – Mikhail Bondarko Jul 9 '10 at 22:14
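For readers less familiar with the second approach the question asks about, the Frobenius-eigenvalue definition of weights (set up in Deligne's Weil II and used throughout BBD) can be stated compactly:

```latex
% F_0 a lisse \overline{\mathbb{Q}}_\ell-sheaf on X_0 over \mathbb{F}_q.
% F_0 is "pure of weight w" if, for every closed point x of X_0 and
% every eigenvalue \alpha of the geometric Frobenius F_x on the stalk,
|\iota(\alpha)| \;=\; \bigl(\#k(x)\bigr)^{w/2}
\qquad \text{for every embedding } \iota\colon \overline{\mathbb{Q}}_\ell \hookrightarrow \mathbb{C}.
```

$\mathcal{F}_0$ is then called mixed if it admits a finite increasing (weight) filtration whose successive quotients are pure; the weights of $\mathcal{F}_0$ are the weights of those quotients.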
Application Programmers Manual: Performing Inference with a Network

DSL SMILE includes functions for several popular Bayesian network inference algorithms, including the clustering algorithm, and several approximate stochastic sampling algorithms. It is up to the programmer to define the default algorithm that will be used when updating the network. This can be accomplished by using the SetDefaultBNAlgorithm() and SetDefaultIDAlgorithm() network methods. The following algorithms, as described in the listed papers, are supported:

• Huang, Cecil and Darwiche, Adnan. (1996). Inference in Belief Networks: A Procedural Guide. International Journal of Approximate Reasoning, 15, 225-263.
• Henrion, Max. (1988). Propagating Uncertainty in Bayesian Networks by Probabilistic Logic Sampling. In Lemmer, J.F. and Kanal, L.N. (Eds.) Uncertainty in Artificial Intelligence, 2. North Holland. 149-163.
• Pearl, Judea. (1986). Fusion, Propagation, and Structuring in Belief Networks. Artificial Intelligence, 29(3), 241-288.
• Fung, Robert and Chang, Kuo-Chu. (1990). Weighing and Integrating Evidence for Stochastic Simulation in Bayesian Networks. In M. Henrion and R.D. Shachter and L.N. Kanal and J.F. Lemmer (Eds.) Uncertainty in Artificial Intelligence, 5. North Holland. 209-219. (Link is to the workshop version.)
• Shachter, Ross D. and Peot, Mark A. (1990). Simulation Approaches to General Probabilistic Inference on Belief Networks. In M. Henrion and R.D. Shachter and L.N. Kanal and J.F. Lemmer (Eds.) Uncertainty in Artificial Intelligence, 5. North Holland. 221-231. (Link is to the workshop version.)
• Fung, Robert and Favero, Brendan Del. (1994). Backward Simulation in Bayesian Networks. Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, 227-234. San Francisco, CA.
• Cooper, G. F. (1988). A Method for Using Belief Networks as Influence Diagrams. Proceedings of the 1988 AAAI Workshop on Uncertainty in Artificial Intelligence, Minneapolis MN.
• Shachter, Ross D. and Peot, Mark A. (1992). Decision Making Using Probabilistic Inference Methods. Proceedings of the Eighth Annual Conference on Uncertainty in Artificial Intelligence, 227-234. Stanford University, California.
• Cheng, Jian and Druzdzel, Marek J. (2000). AIS-BN: An Adaptive Importance Sampling Algorithm for Evidential Reasoning in Large Bayesian Networks. Journal of Artificial Intelligence Research 13 (2000) 155-188. Pittsburgh, PA.
• Yuan, Changhe and Druzdzel, Marek J. (2003). An Importance Sampling Algorithm Based on Evidence Pre-propagation. In Kjærulff, U. and Meek, C. (Eds.) Uncertainty in Artificial Intelligence, 19. Acapulco, Mexico. 624-631.
• Pearl, Judea. (1982). "Reverend Bayes on inference engines: A distributed hierarchical approach". Proceedings of the Second National Conference on Artificial Intelligence. AAAI-82: Pittsburgh, PA.

To update the network, the UpdateBeliefs() method should be called. You can check if the value of a given node was updated by calling the IsValueValid() method of the DSL_nodeValue class. To learn how to update arbitrary parts of a network, read the section on relevance reasoning.
Macro to move row to last row

This might help. It will tell you what the last row is, then you can simply copy the data from the current place to the new place:

Function iLastRowFilledInColumn(ws As Excel.Worksheet, iColumn As Long) As Long
    Dim iLastRow As Long
    iLastRow = ws.Cells(ws.Rows.Count, iColumn).End(xlUp).Row
    ' If the entire column is empty, Excel still returns 1 even though
    ' there is no data in row 1. Therefore, check to see if the
    ' cell is empty; if it is, return 0 instead of 1.
    If iLastRow = 1& Then
        If Trim(ws.Cells(iLastRow, iColumn)) = "" Then
            iLastRow = 0&
        End If
    End If
    iLastRowFilledInColumn = iLastRow
End Function
lcm of 2 exponents: lcm of (3^19, 3^257)

Hi, I was wondering how to find the lcm of (3^19, 3^257). I know you have to get the smallest multiple of both. I believe the answer is 3, as this appears to be a trick problem.

Re: lcm of 2 exponents: lcm of (3^19, 3^257)

Are you finding the LCM of the numbers 3^19 and 3^257, of 19 and 257, or of 3 and 3?
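For what it's worth, if the question is about the numbers 3^19 and 3^257: both are powers of the same prime, so the prime-factorization rule gives lcm(3^19, 3^257) = 3^max(19, 257) = 3^257 and gcd(3^19, 3^257) = 3^min(19, 257) = 3^19 — the answer is not 3. A quick check in Python (whose integers handle values this large exactly):

```python
from math import gcd

a, b = 3**19, 3**257

# lcm(x, y) = x * y // gcd(x, y)
lcm = a * b // gcd(a, b)

assert gcd(a, b) == 3**19   # gcd keeps the smaller exponent
assert lcm == 3**257        # lcm keeps the larger exponent
```

In particular, since 3^19 divides 3^257, the lcm is just 3^257 itself.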
HELP! :( My assignment in maths is hard :(

October 29th 2013, 03:22 AM

Okay, so here is my problem:

1) Pinky measures that place is 100 feet wide, and the distance from the place of the tribune is 20 feet. Tribune angle he estimates to 45 degrees. He then draws a shape with a triangle ABC where A is 90 degrees. Then he draws a line from C to a point D on AB such that the length of AC is equal to the length of AD, then a line from C to a point E on DB so that DE has length 20 and EB has length 100. He calls the length of the segment CD x. He notes that the angles ADC and DCA are both 45◦. Finally he lets ECB be called φ and ACE be called θ.

a) Draw Pinky's figure. Show that tan(θ + φ) = 1 + (120√2)/x and tan θ = 1 + (20√2)/x
b) Find φ expressed as a function φ(x) of x for x ∈ (0, ∞)
c) Find lim φ(x) as x → 0⁺ and lim φ(x) as x → ∞
d) Find φ′(x) for x ∈ (0, ∞), and determine where φ(x) grows and declines.
e) Assume that the king sat so that the viewing angle φ(x) onto the space was maximal. How many feet should the pirates go up the tribune to get to where the king sat?

October 29th 2013, 05:13 AM
Re: HELP! :( My assignment in maths is hard :(

Part 1 appears to be translated from another language, making it difficult to read. What is "that place"? How does it relate to the triangle ABC? What does "the distance from the place of the tribune" mean? What is a tribune angle? Anyway, from the rest of the problem, to get you started: you know that triangle ADC is a 45-45-90 triangle. Since its hypotenuse has length x, the sides $|\overline{AC}| = |\overline{AD}| = \frac{x}{\sqrt{2}}$. Then, $\tan\theta = \dfrac{\text{opp}}{\text{adj}} = \dfrac{|\overline{AE}|}{|\overline{AC}|} = \dfrac{\left(\dfrac{x}{\sqrt{2}} + 20\right)}{\left(\dfrac{x}{\sqrt{2}}\right)} = 1 + \dfrac{20\sqrt{2}}{x}$. Can you do the rest?

October 29th 2013, 07:49 AM
Re: HELP!
:( My assignment in maths is hard :(

Thank you so much, it really helped :D And sorry for the translation, I'm not the best English writer. :P
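The two identities from part a) can be spot-checked numerically with an arbitrary positive x:

```python
import math

x = 7.0                        # arbitrary positive test value
AC = AD = x / math.sqrt(2)     # legs of the 45-45-90 triangle ADC (hypotenuse x)
AE = AD + 20                   # DE has length 20
AB = AD + 120                  # DE + EB = 20 + 100

tan_theta = AE / AC            # theta = angle ACE
tan_theta_plus_phi = AB / AC   # theta + phi = angle ACB

assert math.isclose(tan_theta, 1 + 20 * math.sqrt(2) / x)
assert math.isclose(tan_theta_plus_phi, 1 + 120 * math.sqrt(2) / x)
```

The same check passes for any x > 0, since both identities just restate AE/AC and AB/AC after dividing through by x/√2.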
Clifton, NJ Algebra 1 Tutor

Find a Clifton, NJ Algebra 1 Tutor

...I want my students to be comfortable with always asking questions! I have been a professional tutor throughout college and I have a Level II Advanced Tutor certificate in CRLA International Training. My skills as a tutor have always been derived from my love of learning and I hope to instill that in my students.
38 Subjects: including algebra 1, English, reading, ESL/ESOL

...Math is my favorite subject because it's problem solving. I enjoy a good challenge and a chance to solve problems. I've taken honors Algebra when I was in middle school, as well as high
8 Subjects: including algebra 1, geometry, elementary math, chess

...Experienced tutoring advanced subjects that rely on algebra as a foundation. I'm a college graduate in Physics. 1 year of calculus in high school, 2 years of calculus/analysis in university. In-depth familiarity with all aspects of the subject and an intuitive feel for it that I do my best to transmit to students.
17 Subjects: including algebra 1, chemistry, Spanish, calculus

...I am glad to say my student went on to be the salutatorian in her 8th grade class. I have also tutored high school students and even some college students, mostly in math (algebra 1 and 2, geometry, precalculus, calculus 1 and 2), biology, physics, and English. I have experience tutoring different age groups and I am able to connect with students and help them understand the course
16 Subjects: including algebra 1, English, physics, writing

...My position was first base for 10 years. I played on a traveling team for 3 years, the NJ Raisers in Jersey City. I've also umpired games in the past, both behind the plate and around the
I've also umpired games in the past, both behind the plate and around the 16 Subjects: including algebra 1, statistics, precalculus, elementary math Related Clifton, NJ Tutors Clifton, NJ Accounting Tutors Clifton, NJ ACT Tutors Clifton, NJ Algebra Tutors Clifton, NJ Algebra 2 Tutors Clifton, NJ Calculus Tutors Clifton, NJ Geometry Tutors Clifton, NJ Math Tutors Clifton, NJ Prealgebra Tutors Clifton, NJ Precalculus Tutors Clifton, NJ SAT Tutors Clifton, NJ SAT Math Tutors Clifton, NJ Science Tutors Clifton, NJ Statistics Tutors Clifton, NJ Trigonometry Tutors Nearby Cities With algebra 1 Tutor Bloomfield, NJ algebra 1 Tutors East Orange algebra 1 Tutors Elmwood Park, NJ algebra 1 Tutors Garfield, NJ algebra 1 Tutors Montclair, NJ algebra 1 Tutors Nutley algebra 1 Tutors Passaic algebra 1 Tutors Passaic Park, NJ algebra 1 Tutors Paterson, NJ algebra 1 Tutors Rutherford, NJ algebra 1 Tutors Union City, NJ algebra 1 Tutors Wallington algebra 1 Tutors Wayne, NJ algebra 1 Tutors Weehawken algebra 1 Tutors Woodland Park, NJ algebra 1 Tutors
MAT 123-SPRING 2007
MAP 103-Spring 2012

Course Description: This is a course in the algebra necessary for future study of calculus or mathematical thinking. Topics include factoring, lines, and functions.

Course Coordinator: Bill Bernhard (bill@math.sunysb.edu)

The meeting time of all lectures and recitations with instructor names can be found under MAT 123 at http://www.math.sunysb.edu/schedules/currentsem.html.

Syllabus: Your lecturer will determine what is covered in each lecture, but here is a tentative schedule.

Office Hours: Your lecturer is here to assist you. Hours will be posted here when they become available!

Required materials: Intermediate Algebra, third edition, by Miller. *Calculators will not be permitted on exams this semester.

Grading:
Exam #1 (25%) on Tuesday, 2/28/12 at 8:30PM. The exam will cover 4.2, 4.3, 4.5, 4.6, and 4.8 from Intermediate Algebra.
Exam #2 (25%) on Tuesday, 3/20/12 at 8:30PM. The exam will cover 5.3, 5.4, and 5.5 from Intermediate Algebra.
Final Exam (25%) on Tuesday, 5/8/12 at 8:15AM (MORNING). The exam will be cumulative! New material will cover 2.2, 2.3, 2.6, 7.5, and 8.1 from Intermediate Algebra.
Homework and Attendance (20%): Attending class with your homework is considered mandatory! Homework will be assigned by your lecturer.
Interview (5%): A graduate student in our Mathematics Education program will interview you at some point during the semester.

DSS advisory: If you have a physical, psychiatric, medical, or learning disability that could adversely affect your ability to carry out assigned course work, we urge you to contact the Disabled Student Services office (DSS), Educational Communications Center (ECC) Building, room 128, (631) 632-6748. They will determine with you what accommodations are necessary and appropriate. All information and documentation is confidential. Students requiring emergency evacuation are encouraged to discuss their needs with their professors and Disability Support Services.
For procedures and information, go to the following web site: http://www.stonybrook.edu/ehs/fire/disabilities.shtml

Disruptive Behavior: Stony Brook University expects students to maintain standards of personal integrity that are in harmony with the educational goals of the institution; to observe national, state, and local laws and University regulations; and to respect the rights, privileges, and property of other people. Faculty are required to report to the Office of Judicial Affairs any disruptive behavior that interrupts their ability to teach, compromises the safety of the learning environment, and/or inhibits students' ability to learn.
Continuous Branches of log z

October 6th 2008, 05:00 PM #1

Hey everyone. I am trying to define a continuous branch of log z in the complex plane slit along the positive imaginary axis: C\{[0, i*infinity]}. I understand that we need to make a branch cut because log z will not be continuous on all of C. When the cut is the negative real axis I think I understand. And I also realize that the cut can be any ray, by my intuition. But I don't understand how to put it down as a function in this case.

Define $\log z = \ln |z| + i \arg (z)$ where $\theta = \arg z$ is the angle defined on $-\tfrac{3\pi}{2} < \theta \leq \tfrac{\pi}{2}$.

I see, it is more straightforward than I thought. Thanks a lot. How about this one: I have to show that |cosh z|^2 = cos^2(y) + sinh^2(x). I have tried writing out cosh z in terms of e^x and e^y so that I can take the modulus the old-fashioned way, (x^2+y^2)^(1/2) (from the e-definition of cosh z and using that e^z = e^x cos(y) + i e^x sin(y)), but the resulting mess is overwhelming and I can't seem to split up the real and imaginary parts neatly.
If I did not mess up the identity it should be,
$\cosh (x+iy) = \cosh (x) \cosh (iy) + \sinh (x) \sinh (iy) = \cosh x \cos y + i\sinh x \sin y$

The identity is correct. So |cosh z|^2 = cosh^2(x) cos^2(y) + sinh^2(x) sin^2(y). How do I get rid of the cosh and the sin terms?

Edit: Solved: substitute 1 - cos^2(y) for sin^2(y) (and use cosh^2(x) - sinh^2(x) = 1) and it follows pretty easily.

Last edited by robeuler; October 6th 2008 at 08:59 PM.

Here is one more I hope you might get to: If f(z) is an analytic branch of cosh^(-1)(z), find f'(z). Can I simply take the derivative of cosh^(-1)(z) = ln(z + (z-1)^(1/2) * (z+1)^(1/2))? In the end this equals 1/((z-1)^(1/2) * (z+1)^(1/2)).
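Both results in this thread can be spot-checked numerically with Python's cmath, for a z chosen away from the branch cut of the principal acosh:

```python
import cmath
import math

z = 1.3 - 0.7j
x, y = z.real, z.imag

# Check |cosh z|^2 = cos^2(y) + sinh^2(x)
lhs = abs(cmath.cosh(z)) ** 2
rhs = math.cos(y) ** 2 + math.sinh(x) ** 2
assert math.isclose(lhs, rhs)

# Check d/dz acosh(z) = 1 / ((z-1)^(1/2) * (z+1)^(1/2))
# via a central difference quotient on the principal branch
h = 1e-6
deriv = (cmath.acosh(z + h) - cmath.acosh(z - h)) / (2 * h)
assert abs(deriv - 1 / (cmath.sqrt(z - 1) * cmath.sqrt(z + 1))) < 1e-6
```

This is only a numerical sanity check, of course; the algebraic argument (substituting 1 - cos^2(y) and cosh^2(x) - sinh^2(x) = 1, and differentiating the logarithm) is what actually proves the identities.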
Re: st: How to implement Wald estimator (for IV that is a ratio of coefficients)?
From: Austin Nichols <austinnichols@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: How to implement Wald estimator (for IV that is a ratio of coefficients)?
Date: Sun, 11 Oct 2009 09:22:57 -0400

If both equations are estimated in the same data (i.e. you are not using a two-sample IV procedure), you should use -ivreg- or -ivregress- or -ivreg2- (on SSC) instead. The approximate standard error formula for the two separate estimations on one dataset is not a good substitute for the one given by an IV estimator:

use http://pped.org/card.dta
reg lwage exper nearc4, nohe r
loc b1=_b[nearc4]
loc s1=_se[nearc4]
reg educ exper nearc4, nohe r
loc b2=_b[nearc4]
loc s2=_se[nearc4]
ivreg lwage exper (educ=nearc4), nohe r
di `b1'/`b2'
di `b1'/`b2'*sqrt((`s2'/`b2')^2+(`s1'/`b1')^2)
qui reg lwage exper nearc4
est sto r1
qui reg educ exper nearc4, nohe
est sto r2
suest r1 r2
mat v=e(V)
matrix cov=v["r1_mean:nearc4","r2_mean:nearc4"]
loc c=cov[1,1]
di `b1'/`b2'*sqrt((`s2'/`b2')^2+(`s1'/`b1')^2-2*`c'/`b1'/`b2')

On Sat, Oct 10, 2009 at 3:44 PM, Misha Spisok <misha.spisok@gmail.com> wrote:
> Stas,
> Many thanks (thank you very much), not just for solving this problem but
> introducing me to another command in Stata.
> Misha
> On Fri, Oct 9, 2009 at 9:39 PM, Stas Kolenikov <skolenik@gmail.com> wrote:
>> See if you can get your standard error via -nlcom- after -reg3-. I
>> would guess that's the most appropriate estimation method, and -nlcom-
>> is certainly the most appropriate method to deal with the
>> delta-method, Stata way.
>> On Fri, Oct 9, 2009 at 8:21 PM, Misha Spisok <misha.spisok@gmail.com> wrote:
>>> Hello, Statalist!
>>> In short, does -ivregress- (or -reg3-) include what I think is called
>>> the Wald estimator?
If so, how can I implement it for a problem like >>> the one below? I've searched for a command for the Wald estimator, >>> but can only find references to Wald _tests_. >>> I am considering a model similar to Ashenfelter and Greenstone (2004) >>> with two reduced-form equations, the estimates of which are used to >>> find an instrumental variable estimator in a third equation, the one >>> of primary interest. >>> My question is, how can I do this in Stata in one fell swoop? >>> The two equations are >>> F = lambda_F*VMT + PI_F*1(65mph limit in force) + epsilon >>> H = lambda_H*VMT + PI_H*1(65mph limit in force) + epsilon' >>> where 1(.) is an indicator variable which I'll call "65mph" below. >>> The equation of interest is >>> H = beta*VMT + theta*F + nu >>> The parameter of interest is theta. From the estimate of the reduced >>> form equations the IV for theta, theta_IV, is >>> theta_IV = (PI_H)/(PI_F) >>> Given estimates of PI_H and PI_F (as presented in the paper), one can >>> form the corresponding theta_IV. It seems that the authors use a >>> formula for the standard error of theta_IV like the following: >>> se_theta = theta_IV*sqrt((se_PI_H/PI_H)^2 + (se_PI_F/PI_F)^2) >>> I tried doing this in the following ways, but the results are not the >>> same. I wouldn't expect them to be, but I can't find a reference for >>> Wald estimator in Stata, so I thought I'd try it. >>> Method 1: >>> . reg F VMT 65mph >>> . reg H VMT 65mph >>> Calculate theta_IV from coefficients on 65mph in the above equations. >>> Method 2: >>> . ivregress 2sls H VMT (F 65mph) >>> Hope that theta_IV would be the coefficient on F. >>> Method 3: >>> . reg3 (F VMT 65mph) (H VMT F) >>> Hope that theta_IV would be the coefficient on F in the equation for H. >>> What is the correct way to get this ratio of coefficients (theta_IV = >>> (PI_H)/(PI_F)) and its standard error all at once in Stata? 
>>> Thanks, >>> Misha * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
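For readers following along outside Stata, the ratio (Wald / indirect least squares) estimate and its delta-method standard error from the thread can be sketched in Python. The numbers below are invented reduced-form results, not output from card.dta:

```python
import math

# Hypothetical reduced-form results (NOT from the Card data): coefficient on
# the instrument in the outcome equation and in the first stage, with SEs.
pi_h, se_h = 0.042, 0.018    # outcome (e.g. lwage) on the instrument
pi_f, se_f = 0.320, 0.090    # endogenous regressor (e.g. educ) on the instrument
cov_hf = 0.0004              # covariance between the two coefficient estimates

# Wald estimator: ratio of the reduced-form coefficients.
theta = pi_h / pi_f

# Delta-method SE of a ratio, with the covariance term Austin's last -di- line adds.
se_theta = abs(theta) * math.sqrt(
    (se_h / pi_h) ** 2 + (se_f / pi_f) ** 2 - 2 * cov_hf / (pi_h * pi_f)
)
print(round(theta, 5), round(se_theta, 4))  # 0.13125 0.0592
```

As the thread notes, when both equations come from the same sample the IV command itself is the better route; this sketch only mirrors the arithmetic of the approximate formula.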
Zentralblatt MATH Publications of (and about) Paul Erdös

Zbl.No: 508.05043
Autor: Erdös, Paul; Simonovits, M.
Title: Compactness results in extremal graph theory. (In English)
Source: Combinatorica 2, 275-288 (1982).

Review: (From the authors' abstract:) ``Let L be a given family of ... 'prohibited graphs'. Let ex(n,L) denote the maximum number of edges a simple graph of order n can have without containing subgraphs from L. A typical extremal graph problem is to determine ex(n,L), or, at least, to find good bounds on it. Results asserting that, for a given L, there exists a 'much smaller' L^* \subset L for which ex(n,L) \approx ex(n,L^*) will be called compactness results. The main purpose of this paper is to prove some compactness results for the case when L consists of cycles. One of our main tools will be finding lower bounds on the number of paths P^{k+1} in a graph on n vertices and E edges ... a `supersaturated' version of a well known theorem of Erdös and Gallai.''

Among the theorems proved, presented in the context of open conjectures, are:

Theorem 1: Let k be a natural number. Then ex(n,{C^3,...,C^{2k},C^{2k+1}}) \leq (n/2)^{1+1/k} + 2^k (n/2)^{1-1/k}.

Theorem 2: ex(n,{C^4,C^5}) = (n/2)^{3/2} + O(n).

Theorem 3*: Let T be a tree with a fixed 2-colouring. A graph L is obtained from T by joining a new vertex to each vertex of one colour class by disjoint paths, each k edges long. Then, if ex(n,L) \geq c n^{1+1/k}, there is a t for which lim_{n\to\infty} ex(n,{L,C^3,C^5,...}) / ex(n,{L,C^3,C^5,...,C^{2t-1}}) = 1.

Theorem 5: If f(n,d) is the minimum number of walks W^{k+1} a graph G^n can have with average degree d, then every graph of order n and average degree d contains at least (1/2) f(n,d) - o(f(n,d)) paths P^{k+1}, as d \to \infty.
Reviewer: W.G.Brown
Classif.: 05C35 Extremal problems (graph theory); 05C38 Paths and cycles; 05C15 Chromatic theory of graphs and maps
Keywords: extremal graph; compactness; supersaturated; disjoint paths
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
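As a quick numerical illustration (not part of the review itself), the Theorem 1 bound grows like n^{1+1/k}, far below the roughly n^2/4 edges a graph can have when only triangles are forbidden:

```python
# Evaluate the Theorem 1 upper bound for a few (n, k); illustration only.
def theorem1_bound(n: int, k: int) -> float:
    """(n/2)^(1 + 1/k) + 2^k * (n/2)^(1 - 1/k), the bound from Theorem 1."""
    return (n / 2) ** (1 + 1 / k) + 2 ** k * (n / 2) ** (1 - 1 / k)

for k in (2, 3):
    for n in (100, 1000, 10000):
        mantel = n * n / 4  # max edges in a triangle-free graph (Mantel's theorem)
        print(f"k={k} n={n}: bound={theorem1_bound(n, k):,.0f}  vs  n^2/4={mantel:,.0f}")
```

For k = 2 and n = 100 the bound is about 382 edges against a triangle-free maximum of 2500, which is the point of forbidding the longer cycles as well.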
OpenCL Kernel Memory Optimization - Local vs. Global Memory
07-31-2012, 04:33 PM #1 Junior Member Join Date Jul 2012

I'm new to OpenCL and I'm considering using it for some graphics computation where using an OpenGL shader doesn't seem natural. Before I actually do so, I thought I'd try how much of a performance improvement I could get using OpenCL on my Nvidia GTX 460 over my CPU. For this reason, I implemented a simple skeleton skinning algorithm, once on the CPU, without multithreading but using the Eigen library, which provides SSE-optimized vector and matrix types, and once in an OpenCL kernel executing on the GPU. The vertices, bone matrices etc. are generated randomly on application start. I repeat the whole skinning several times so that it executes long enough to get meaningful timing results.

First I simply tried a kernel where I have as many work-items as I have vertices, each one generating one output vertex. I quickly saw that this is not a good idea because performance was even worse than on the CPU. I figured this was in essence a problem of too many memory accesses, mainly to the bone matrices, which are an array of float16 vectors that is addressed four times in each work-item. Then I changed the algorithm so that each work-item handles multiple output vertices, one after the other, so that I have fewer work-items. In each work-group I create a copy of the bone matrices in local memory, and further accesses to these matrices come from local memory.
The interesting part of my C++ code looks like this:

Code :
#define NUM_BONES 30
#define NUM_VERTICES 30000
#define NUM_VERTICES_PER_WORK_ITEM 100
#define NUM_ANIM_REPEAT 1000

uint64_t PerformOpenCLSkeletalAnimation(Matrix4* boneMats, Vector4* vertices, float* weights, uint32_t* indices, Vector4* resVertices)
{
    File kernelFile("/home/alemariusnexus/test/skelanim.cl");

    char opts[256];
    sprintf(opts, "-D NUM_VERTICES=%u -D NUM_REPEAT=%u -D NUM_BONES=%u -D NUM_VERTICES_PER_WORK_ITEM=%u", NUM_VERTICES, NUM_ANIM_REPEAT, NUM_BONES, NUM_VERTICES_PER_WORK_ITEM);

    cl_program prog = BuildOpenCLProgram(kernelFile, opts);
    cl_kernel kernel = clCreateKernel(prog, "skelanim", NULL);

    cl_mem boneMatBuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, NUM_BONES*sizeof(Matrix4), boneMats, NULL);
    cl_mem vertexBuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, NUM_VERTICES*sizeof(Vector4), vertices, NULL);
    cl_mem weightBuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, NUM_VERTICES*4*sizeof(float), weights, NULL);
    cl_mem indexBuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, NUM_VERTICES*4*sizeof(uint32_t), indices, NULL);
    cl_mem resVertexBuf = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, NUM_VERTICES*sizeof(Vector4), NULL, NULL);

    uint64_t s, e;
    s = GetTickcount();

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &boneMatBuf);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &vertexBuf);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &weightBuf);
    clSetKernelArg(kernel, 3, sizeof(cl_mem), &indexBuf);
    clSetKernelArg(kernel, 4, sizeof(cl_mem), &resVertexBuf);

    size_t globalWorkSize[] = { NUM_VERTICES / NUM_VERTICES_PER_WORK_ITEM };
    size_t localWorkSize[] = { NUM_BONES };

    for (size_t i = 0 ; i < NUM_ANIM_REPEAT ; i++) {
        clEnqueueNDRangeKernel(cq, kernel, 1, NULL, globalWorkSize, localWorkSize, 0, NULL, NULL);
    }
    clEnqueueReadBuffer(cq, resVertexBuf, CL_TRUE, 0, NUM_VERTICES*sizeof(Vector4), resVertices, 0, NULL, NULL);

    e = GetTickcount();
    return e-s;
}

The
associated program/kernel looks like this:

Code :
inline float4 MultiplyMatrixVector(float16 m, float4 v)
{
    return (float4) (
        dot(m.s048C, v),
        dot(m.s159D, v),
        dot(m.s26AE, v),
        dot(m.s37BF, v)
    );
}

kernel void skelanim(global const float16* boneMats, global const float4* vertices, global const float4* weights, global const uint4* indices, global float4* resVertices)
{
    int gid = get_global_id(0);
    int lid = get_local_id(0);

    local float16 lBoneMats[NUM_BONES];
    lBoneMats[lid] = boneMats[lid];

    for (int i = 0 ; i < NUM_VERTICES_PER_WORK_ITEM ; i++) {
        int vidx = gid*NUM_VERTICES_PER_WORK_ITEM + i;

        float4 vertex = vertices[vidx];
        float4 w = weights[vidx];
        uint4 idx = indices[vidx];

        resVertices[vidx] = (MultiplyMatrixVector(lBoneMats[idx.x], vertex * w.x)
                + MultiplyMatrixVector(lBoneMats[idx.y], vertex * w.y)
                + MultiplyMatrixVector(lBoneMats[idx.z], vertex * w.z)
                + MultiplyMatrixVector(lBoneMats[idx.w], vertex * w.w));
    }
}

Now, per work-item I have only one access to the global boneMats, when I create the local copy, and there are even a lot fewer work-items executing altogether. Then I have NUM_VERTICES_PER_WORK_ITEM*4 accesses to the local array afterwards. As I understand, local memory should be way faster than global memory, so I thought this would greatly improve performance. Well, the opposite is the case: when I let lBoneMats alias to the global boneMats instead, I actually get better performance than with the kernel listed above. What did I get wrong here? Thanks in advance!
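For context, the payoff the poster expected can be sketched with simple counting, using the constants from the post. This is illustrative arithmetic only; it ignores the GPU's caches and access coalescing, which is presumably where the surprising measurement comes from:

```python
# Back-of-the-envelope access counts for the two kernel variants in the post.
NUM_BONES = 30
NUM_VERTICES = 30000
NUM_VERTICES_PER_WORK_ITEM = 100

work_items = NUM_VERTICES // NUM_VERTICES_PER_WORK_ITEM   # one item per 100 vertices
work_groups = work_items // NUM_BONES                     # local size == NUM_BONES

# Variant 1: every vertex triggers 4 bone-matrix reads straight from global memory.
global_reads_direct = NUM_VERTICES * 4

# Variant 2: each work-group stages the 30 matrices into local memory once,
# after which all 4-per-vertex lookups hit local memory instead.
global_reads_staged = work_groups * NUM_BONES
local_reads_staged = NUM_VERTICES * 4

print(work_items, work_groups)                   # 300 10
print(global_reads_direct, global_reads_staged)  # 120000 300
```

On this naive count the staged variant issues 400x fewer global reads of the matrices, so the measured slowdown has to come from effects the count leaves out.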
I Used Y(t)=dx(t)/dt And Ended Up With Rect(t/2)-delta(t-1)-delta(t+1)... | Chegg.com

I used y(t)=dx(t)/dt and ended up with rect(t/2)-delta(t-1)-delta(t+1), and then took the Fourier transform of that, then used the integral property. However, the answer in the book is much different. Could anyone solve this problem using the integral property? btw, my answer is 1/jw*[2sinc-2cos(2w)]

Electrical Engineering
Selling Gold Jewelry: The Magic Question

When selling gold jewelry or any other form of gold you might own, there is one magic question you must ask that will shift the odds of getting top dollar for your gold: What percentage do you pay for gold?

If your buyer will answer this question, and you can confirm the answer is accurate, or even if your buyer will not answer the question but you can figure out the answer, you will understand the process of selling gold jewelry. No matter what the price of gold or the purity of the gold, anyone that offers to buy gold from you can answer this question, but they hate doing it.

Why is that, you might ask? Because if you are smart enough to ask the question, or to figure out the answer based upon any price quote they give you, they know it is easy for you to shop around and get the best price for your gold. This means they lose the sale or make less money buying from you.

How can that be? No matter what the price of gold, or the purity, a refinery will quote a buying price based on a percentage of the purity of the gold, and so can any other gold buyer that chooses to do the same. This is the little-known fact that takes the mystery out of selling gold jewelry.

Most of the gold you sell will ultimately end up being sold to a refinery, a company that will extract the gold content from your gold items and condense it into gold ingots that they will resell to banks or manufacturers. Unless you are selling gold ingots or coins, refineries are the top of the heap in the gold buying food chain.

The percentage never changes based on the fluctuating gold price or the purity of the gold. Sometimes there is a surcharge for smaller amounts of gold under 1 troy ounce, but the percentage does not fluctuate with the price of gold. Sometimes you are paid a higher percentage for a larger quantity of gold, say over 5 ounces or over 25 ounces, but the percentage does not fluctuate with the price of gold.
For a minute, forget about the daily fluctuation of gold, or the different purities of gold, or grams or pennyweights or troy ounces; it's just all gold. So I ask you: Would you rather sell your gold for 25% or 50% of the value of the gold content? Would you rather sell your gold for 60% or 80% of the value of the gold content? In the end, it's just that simple. So how do you figure out what percentage you are being paid for your gold? Simple arithmetic will show you the way.

Understanding this simple calculation is the easiest and best way to compare prices when selling gold jewelry or any other form of gold.

Gram price offer calculations

First figure out the intrinsic value of the gold:
Multiply the price of gold X purity
Divide the answer by 31.1
The result is the intrinsic price of gold per gram
Then divide the offer price by the intrinsic price

Example: You are selling 22 grams of 14K gold
Gold is $1100 toz (troy ounce)
You have been offered $13.00 a gram
What percentage of the price of gold have you been offered?
$1100 X .583 = $641.30
$641.30 / 31.1 = $20.62
Offer price is $13.00
Percentage offer is $13.00 / $20.62 = 63%

Pennyweight price offer calculations

First figure out the intrinsic value of the gold:
Multiply the price of gold X purity
Divide the answer by 20
The result is the intrinsic price of gold per pennyweight
Then divide the offer price by the intrinsic price

Example: You are selling 22 grams of 14K gold
22 grams = 14.14 pennyweights (dwt) (calculation: 22 / 1.555), so you are now selling 14.14 pennyweights
Gold is $1100 toz (troy ounce)
You have been offered $17.00 a pennyweight
What percentage of the price of gold have you been offered?
$1100 X .583 = $641.30
$641.30 / 20 = $32.07 (intrinsic value of 1 pennyweight of gold at $1100 an oz)
Offer price is $17.00 a pennyweight
Percentage offer is $17.00 / $32.07 = 53%
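The worked examples above reduce to a three-line calculation. Here is a sketch of it in Python; the 31.1 g/toz constant and the gram-example figures ($1100 spot, 14K purity of .583, $13.00/gram offer) come from the article:

```python
GRAMS_PER_TROY_OUNCE = 31.1

def offer_percentage(spot_per_toz: float, purity: float, offer_per_gram: float) -> float:
    """Fraction of the intrinsic (melt) value that a per-gram offer represents."""
    intrinsic_per_gram = spot_per_toz * purity / GRAMS_PER_TROY_OUNCE
    return offer_per_gram / intrinsic_per_gram

# Gram example from the article: $1100/toz spot, 14K (.583), $13.00/gram offer.
pct = offer_percentage(1100.0, 0.583, 13.00)
print(f"{pct:.0%}")  # 63%
```

For a pennyweight quote, first convert it to a per-gram price (divide by 1.555 g/dwt) and the same function applies.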
How to Apply Basic Operations to Matrices

When you apply basic operations to matrices, it works a lot like operating on multiple terms within parentheses; you just have more terms in the "parentheses" to work with. Just like with operations on numbers, a certain order is involved with operating on matrices. Multiplication comes before addition and/or subtraction. When multiplying by a scalar, a constant that multiplies a quantity (which changes its size, or scale), each and every element of the matrix gets multiplied.

When adding or subtracting matrices, you just add or subtract their corresponding terms. It's as simple as that. This figure shows how to add and subtract two matrices. [Figure: Addition and subtraction of matrices.]

Note, however, that you can add or subtract matrices only if their dimensions are exactly the same. To add or subtract matrices, you add or subtract their corresponding terms; if the dimensions aren't exactly the same, then the terms won't line up. And obviously, you can't add or subtract terms that aren't there!

When you multiply a matrix by a scalar, you're just multiplying by a constant. To do that, you multiply each term inside the matrix by the constant on the outside. Using the same matrix A from the previous example, you can find 3A by multiplying each term of matrix A by 3. This example is shown here: [Figure: Multiplying matrix A by 3.]

Suppose a problem asks you to combine operations. You simply multiply each matrix by the scalar separately and then add or subtract them. For example, consider the following two matrices: [Figure: the matrices A and B.]

Find 3A – 2B as follows:

1. Insert the matrices into the problem.
2. Multiply the scalars into the matrices.
3. Complete the problem by adding or subtracting the matrices.

After subtracting, here is your final answer: [Figure: the resulting matrix.]
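The 3A - 2B procedure can be checked numerically. A sketch with made-up matrices, since the article's A and B only appear in figures that did not survive extraction:

```python
import numpy as np

# Hypothetical stand-ins for the article's A and B (the originals are in figures).
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [5, -2]])

# Scalar multiplication hits every element; subtraction is then term by term.
result = 3 * A - 2 * B
print(result)
# [[ 3  4]
#  [-1 16]]
```

The shapes must match exactly for the subtraction step, mirroring the article's warning that mismatched dimensions leave terms with nothing to line up against.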
Introduction to Acceleration with Prius Brake Slamming Example Problem (10:53)

This is an introduction to the concept of acceleration. There is also an example problem showing applying the brakes while driving a car in order to avoid hitting a basketball. Also included are common mistakes students make while solving a simple problem like this. It is important to see what those mistakes are because it helps students avoid them in the future.

Content Times:
0:19 The Equation for Acceleration
1:06 The Dimensions for Acceleration
2:18 Acceleration has both Magnitude and Direction
3:00 Reading the Problem
3:15 Video of the Problem
4:29 Translating the Problem to Physics
5:03 Starting to solve the Problem (with mistakes)
5:37 Explaining two mistakes
7:34 Explaining another mistake
10:00 Outtakes (including a basketball dribbling montage)
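The relation the video opens with, a = Δv/Δt, in a tiny sketch with invented numbers (not the values used in the video's Prius problem):

```python
# Braking to avoid the basketball: a = (v_final - v_initial) / Δt.
# All numbers below are assumed for illustration.
v_initial = 24.6   # m/s, speed before braking
v_final = 0.0      # m/s, car stops in time
dt = 3.0           # s, assumed braking duration

a = (v_final - v_initial) / dt
print(a)  # negative: the acceleration points opposite the direction of motion
```

The sign is the point the video stresses: acceleration has direction as well as magnitude, so a braking car has a negative acceleration when the motion is taken as positive.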
Posts by kate
Total # Posts: 1,648

Use average bond energies to calculate delta Hrxn for C(s)+2H2O(g)-->2H2(g)+CO2(g)

like what are some specific ideas as to what he did to improve them

question How would you think a writer would hope to see themselves as a writer in the future? What should they still need to learn to strengthen it to make it better?

English 101 uncanny- to be too strange marvelous- extremely good or great

Chemistry Kinetics well i guess I need the negative since it's the reactant -d[H2 + I2]/dt = k[H2]^x[I2]^y is this correct?

Chemistry Kinetics If I had the equation H2 + I2 --> 2HI could I write the rate of reaction as d[H2 + I2]/dt = k[H2]^x[I2]^y where x is the order of reaction with respect to x and y is just the order of reaction with respect to y thanks

why the formula for the area of a trapezoid this A=1/2 x h x(a+b)

Amount after time t: A(t) = y0*b^t, where y0 = starting amount, b = growth constant (percent remaining), t = number of time units. In 1978 the population of blue whales in the southern hemisphere was thought to number 5000. Since whaling has been outlawed, and an abundant food s...

A 7.2 cm tall cylinder floats in water with its axis perpendicular to the surface. The length of the cylinder above water is 1.4 cm. What is the cylinder's mass density?

What is the apparent weight of a 36 kg block of aluminum (density 2700 kg/m3) completely submerged in a fluid with density 1300 kg/m3?

A single conservative force F = (7.0x - 11)i N, where x is in meters, acts on a particle moving along an x axis. The potential energy U associated with this force is assigned a value of 29 J at x = 0. (a) What is the maximum positive potential energy? At what (b) negative valu...

A stone with a weight of 5.38 N is launched vertically from ground level with an initial speed of 25.0 m/s, and the air drag on it is 0.269 N throughout the flight.
What are (a) the maximum height reached by the stone and (b) its speed just before it hits the ground

Find the pH of a buffer that consists of 1.6 M sodium phenolate (C6H5ONa) and 1.0 M phenol (C6H5OH). (pKa of phenol=10.00)

can you explain how you figured out Heather's portion of the bill

What is the volume of 325 mL of a sample of gas at 51.0°C and 710. mm Hg if the conditions are adjusted to STP? Assume the amount of gas is held constant.

NH4NO3(s) --> H2O(g) + N2(g) + O2(g) Consider the unbalanced equation above. What will be the total volume of gas produced at 720. mm Hg and 570.0°C when 2.50 g of NH4NO3 completely decomposes? Use molar masses with at least as many significant figures as the data given.

How many kilojoules of heat is required to completely convert 60.0 grams of water at 32.0°C to steam at 100°C? For water: s = 4.179 J/g °C, Hfusion = 6.01 kJ/mol, Hvap = 40.7 kJ/mol

Direct- and indirect-object pronouns combined 8-16 Dos amigas hablan. Completa las oraciones. MARÍA: Juanita, ¿quién ______ mandó las flores? JUANITA: Alejandro ____, ____ mandó. MARÍA: ¿Vas a mostrar ____ a tus papás? JU...
The gas expands to a volume of 3.1 L as the result of a change in both temperature and pressure. I have the find the density of the gas at these new conditions? Please let me know if I figured this out correctly ... 2t + 4 >/= t -t -t 1t + 4 >/= 0 - 4 -4 1t >/= -4 /1 /1 t >/= -4 Calculate the H3O+ concentration and pH of a carbonated beverage that is .10 M in dissolved CO2- (Essentially all of the H3O+ comes from the first stage of dissociation, CO2 + 2H2O = H3O^+ + HCO3-, for which K1 = 4.4 x 10^-7) Thank you! :) What do noble gases have in common? PHYSICS ! Yes thank you so much! PHYSICS ! If a ball is sliding down a ramp that is on a slant, so that the ball sliding down falls off. Is the speed of the ball constant(meaning staying at the same speed)? or Speeding up? If so, how? i found out its Stratified Squamous -___- anyone know ? Is the lining of the esophagus either : 1) Simple Squamous? OR 2) Stratified Squamous? -Please help . I know it has to be either 1 or 2. 110 X 90/100=99 What is 90% of 110? Teacher Aide can you please check my answers to the same test below? it would mean the world to me : ) 1)D 2)D 3)C 4)C 5)B 6)C 7)A 8)C 9)A 10)C 11)D 12)A 13)C 14)D 15)B 16)A 17)B 18)B 19)C 20)C Calculate the mass of ethanol required to heat 200mL of water from 21 degrees celcius to 45 degress celcius what is the game for instance halo about what are some american made video games that are violent besides Grand Theft Auto? What does it mean if a line in R^3 is parallel to the xy-plane but not to any of the axes. I really don't know what this means in terms of how the parametric and symmetric equations of the line should look. Please help. What does it mean if a line in R^3 is parallel to the xy-plane but not to any of the axes. I really don't know what this means in terms of how the parametric and symmetric equations of the line should look. Please help. Please help with this assignment!Thanks. 
I picked "Justice" I want to write about What is the moral aspect of Justice. Tell how can I write like the following below. Have to write like this and don't know how to go about it.Thanks! Definition Read the following ... A 3.0 m long rigid beam with a mass of 100 kg is supported at each end. An 60 kg student stands 2.0 m from support 1. How much upward force does each support exert on the beam? support 1- support 2- Need help with topic "Justice" Need 2 question: is what is the moral aspect of Justice? Research plan: Have investigate my question. I am kind stuck on this topic, please help me to write 3 pages. find the zeros of the function: f(x)=x^3+2x^2-109x-110 I think I need to factor but everytime I do, it doesn't look right. I'm doing a vector problem which involves two forces with an angle of 30 degrees between them acting on an object. I think that I should draw my vector diagram with the vectors coming out of the object while my friend thinks that the vectors should point into the object. W... algebra help algebra help i think so but the answer suppose to be in the form of x and y algebra help so is the answer 50 algebra help Two motorcyclists start at the same point and travel in opposite directions. One travel 3 mph faster than the other. In two hours they are 206 miles apart. How fast is each traveling? The speed of the slower motorcyclist is ___ mph. Two children are balancing on a teeter-totter. One child has a mass of 30.0 kg and is sitting 1.3 meters from the pivot. The second child is sitting 0.8 meters from the pivot. What is the mass of the second child Two children are balancing on a teeter-totter. One child has a mass of 30.0 kg and is sitting 1.3 meters from the pivot. The second child is sitting 0.8 meters from the pivot. What is the mass of the second child An athlete at the gym holds a 1.2 kg steel ball in his hand. His arm is 64 cm long and has a mass of 4.0 kg. 
What is the magnitude of the torque about his shoulder if he holds his arm in each of the following ways? 1. straight but 30 degrees below horizonal? An athlete at the gym holds a 1.2 kg steel ball in his hand. His arm is 64 cm long and has a mass of 4.0 kg. What is the magnitude of the torque about his shoulder if he holds his arm in each of the following ways? 1. straight but 30 degrees below horizonal? To simplify the expression of 4r(3r-7) I came up with 12r^-28r. I cannot take this any further because they are not like terms, is that correct? A 6.9-kg box is being lifted by means of a light rope that is threaded through a single, light, frictionless pulley that is attached to the ceiling.Lifted at a constant speed of 2 m/s. If the box is lifted, at constant acceleration, from rest on the floor to a height of 1.5 m ... how do i answer these question donde esta la universidad? como es la universidad? cuando es la clase de espanol? spanish help so for it the answer would be el or ella spanish help What is the spanish equivalent to the following subject pronouns it and we If a point on the surface of a sphere with its center at the origin has coordinates (9,4,6) how do you show the radius connecting the origin to this point as a position vector. Is it simply [9,4,6] or is it more complicated than that. Thanks. I think it's correct. The negative sign in front of the parenthesis was throwing me off. Just need to make sure I'm understanding it correctly. How about this one? Simplify: -(21k - 3 + 4) - 17k -21 + 3 + -4 - 17k -38k + -1 Just so I know I am understanding this, can you please let me know if I am doing this right? Simplify: 24u - 6(8 - 4u) + 52 24u -48 + 24u + 52 48u + 4 Thanks!!!! If f(x)=x^2-x, find the value if x=(2)-2. TIP: x^2 means x squared I need help on a homework problem. "Determine a vector equation for the line perpendicular to 4x-3y=17 and through point (-2,4). 
The answer in the book is given as [x,y]=[-2,4]+t[-4,3].Isn't the normal to the scalar equation [4,-3] and wouldn't this be the directi... Can someone explain to me what the calorimetry equation - q=(m)(ΔT)(Cp) means exactly, in your own words, and how identifying an unknown metal relates to every day life? thanks y = 2.5 ·4^x as the equation for the blue line shown in the graph, rather than the equation given in your text. Note that your line will now be tangent to the graph at the point ( 0, 2.5 ). Enter the value of c here The population of the town of Missed Chances has, since January 1, 1972, been described by the function P = 30000 ( 1.01 )^t where P is the population t years after the start of 1972. At what rate was the population changing on January 1, 1995? people per year Calculate the molarity of 500. mL of solution containing 21.1 g of potassium iodide A gas sample is connected to a closed-end manometer. The difference in mercury levels in the 2 arms is 115 mm. What is the gas pressure in torr if the atmospheric pressure is 750. torr? A lunch box is 30cm long, 14cm wide and 12cm high. The formula for the volume of a box is v=lwh. Find the volume of the lunch box in cubic centimeters. Can you help me to set this up? Thanks!!! find an equation of the line containing the given pair of points. Use function notation to write the equation. (5/7,10/14) and (-1/7,13/14) f(x) how do I solve the equation (y+6) divided by 3- (y-3) divided by2=2 At 25 degrees celsius, the dimensions of a rectangular block of gold are 4.42 cm by 3.07 cm by 2.63 cm. The mass of the gold is 689g. I found that the density is 19.3g/cm^3 at 25 degrees but how will the density change if the temperature is increased to 125 degrees? How can you make 7 by only using the digits 1, 2, 3, and 4? You can put two numbers together (ie 12 or 34). You can use the operations: addition, subtraction, multiplication, (NO DIVISION!!), exponents, and parentheses any way you want!! Please help!!!! 
I also need 13, 17, and... yes and yes x=6 so i believe the answer would be 18+12+12 which equals 42 im not for sure though but thats how i think you would do it How can you make 34 by only using the digits 1, 2, 3, and 4? You can put two numbers together (ie 12 or 34). You can use the operations: addition, subtraction, multiplication, (NO DIVISION!!), exponents, and parentheses any way you want!! Please help!!!! I think the formula would be 30=2.25x 6r-3=15 -3 -3 __________ 6r=12 then do 12 divided by 6 and the answer is simple!!!!! Teachers Aide/Early Childhood Education and After- never mind! omg! i made a huge mistake! don't follow my answers! these are the answers to test #02604200 Teachers Aide/Early Childhood Education and After- okay, you've got a lot wrong. 1) pg. 59 2) pg. 28,33 3)A 4) pg. 57 5) pg. 16 6) pg. 4-5 7) pg. 26 8) pg. 30-31 9) A 10) pg. 60 11) pg. 2 12) B 13) pg. 8 14) pg. 13 15) D 16) C 17) B 18) pg. 58 19) pg. 36-37 20) pg. 64 Just checking to make sure I understand this. Absolute value bars are grouping symbols. Use the order of operations to evaluate the expression -3 + /-x + 2/ when x=12 -3 + /-12 + 2/ = 11 Thanks! A water solution containing 25.0% sucrose by mass, has a density of 1.25g/mL. What volume of this solution in liters, must be in an application requiring 2.50 KG of sucrose? Biology Chemistry Part I'm trying to calculated the rate of reaction In this lab I took a 50 mL beaker and poured in 10 ml of cold distilled water into it, I then poured in 30 ml of original enzyme solution (I believed we used cold potato juice or something), I then took a piece of paper and pla... Biology Chemistry Part I'm trying to calculated the rate of reaction In this lab I took a 50 mL beaker and poured in 10 ml of cold distilled water into it, I then poured in 30 ml of original enzyme solution (I believed we used cold potato juice or something), I then took a piece of paper and pla... 
thank you 3x+y=7 for y -x+y=13 for y f=5gh for h choose a metal you know and list three uses that illustrate three properties of metals solve the equation and check -(9x-4)-(2x-6)+7=-8(x-9)-(6x+2)+4 Use the following function values to calculate H '(1). f (1) f '(1) g(1) g'(1) 3 -4 8 -1 H(x) = x/(g(x) f(x)) How would this problem be done... i found H(x) to be 1/24 but don't know how to find the derivative of h'(1) solve each equation 4(n+3)=2(6+2n) Simplify the expression 4(3z^3-6)+12(2-z^3)= Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=kate&page=9","timestamp":"2014-04-19T00:34:20Z","content_type":null,"content_length":"27613","record_id":"<urn:uuid:162c8b17-dfa1-4684-bdc4-88dcb2274ee6>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
Surface Temperature Reconstructions for the Last 2,000 Years

9 Statistical Background

The standard proxy reconstructions based on linear regression are generally reasonable statistical methods for estimating past temperatures but may be associated with substantial uncertainty. There is a need for more rigorous statistical error characterization for proxy reconstructions of temperature that includes accounting for temporal correlation and the choice of principal components. The variability of proxy-reconstructed temperatures will be less than the variability of the actual temperatures and may not reproduce the actual temperature pattern at particular timescales. Examining the prediction of the reconstruction in a validation period is important, but the length of this period sets limits on a statistical appraisal of the uncertainty in the reconstruction. Most critically, the relatively short instrumental temperature record provides very few degrees of freedom[1] for verifying the low-frequency content of a reconstruction. The differences among a collection of proxy reconstructions that have not been deliberately created as a representative statistical sample may not reveal the full uncertainty in any one of them.

[1] "Degrees of freedom" measures the amount of information for estimating a variance; specifically, it is the equivalent number of independent observations.

The process of reconstructing climate records from most proxy data is essentially a statistical one, and all efforts to estimate regional or global climate history from multiple
The process of reconstructing climate records from most proxy data is essentially a statistical one, and all efforts to estimate regional or global climate history from multiple 1 “Degrees of freedom” measures the amount of information for estimating a variance; specifically, it is the equivalent number of independent observations. OCR for page 83 Surface Temperature Reconstructions for the last 2,000 Years proxies require statistical analyses. The first step is typically to separate the period of instrumental measurements into two segments: a calibration period and a validation period. The statistical relationship between proxy data (e.g., tree ring width or a multiproxy assemblage) and instrumental measurements of a climate variable (e.g., surface temperature) is determined over the calibration period. Past variations in the climate variable, including those during the validation period, are then reconstructed by using this statistical relationship to predict the variable from the proxy data. Before the proxy reconstruction is accepted as valid, the relationship between the reconstruction and the instrumental measurements during the validation period is examined to test the accuracy of the reconstruction. In a complete statistical analysis, the validation step should also include the calculation of measures of uncertainty, which gives an idea of the confidence one should place in the reconstructed record. This chapter outlines and discusses some key elements of the statistical process described in the preceding paragraph and alluded to in other chapters of this report. Viewing the statistical analysis from a more fundamental level will help to clarify some of the methodologies used in surface temperature reconstruction and highlight the different types of uncertainties associated with these various methods. 
Resolving the numerous methodological differences and criticisms of proxy reconstruction is beyond the scope of this chapter, but we will address some key issues related to temporal correlation, the use of principal components, and the interpretation of validation statistics. As a concrete example, the chapter focuses on the Northern Hemisphere annual mean surface temperature reconstructed from annually resolved proxies such as tree rings. However, the basic principles can be generalized to other climate proxies and other meteorological variables. Spatially resolved reconstructions can also be produced using these methods, but a discussion of this application is not possible within the length of this chapter.

LINEAR REGRESSION AND PROXY RECONSTRUCTION

The most common form of proxy reconstruction depends on the use of a multivariate linear regression. This methodology requires two key assumptions:

Linearity: There is a linear statistical relationship between the proxies and the expected value of the climate variable.

Stationarity: The statistical relationship between the proxies and the climate variable is the same throughout the calibration period, validation period, and reconstruction period.

Note that the stationarity of the relationship does not require stationarity of the series themselves, which would imply constant means, constant variances, and time-homogeneous correlations. These two assumptions have precise mathematical formulations and address the two key questions concerning climate reconstructions: (1) How is the proxy related to the climate variable? (2) Is this relationship consistent across both the instrumental period and at earlier times? In statistical terminology, these assumptions comprise a statistical model because they define a statistical relationship among the data.

An Illustration

Figure 9-1 is a simple illustration using a single proxy to predict temperature.
Plotted are 100 pairs of points that may be thought of as a hypothetical annual series of proxy data and corresponding instrumental surface temperature measurements over a 100-year calibration period. The solid black line is the linear fit to these data, or the calibration, which forms the basis for predictions of temperatures during other time periods. Here the prediction of temperature based on a proxy with value A is TA and the proxy with value B predicts the temperature TB. The curved blue lines in Figure 9-1 present the calibration error, or the uncertainty in predictions based on the calibration (technically the 95 percent prediction interval, which has probability 0.95 of covering the unknown temperature), which is a standard component of a regression analysis. In this illustration, the uncertainty associated with temperature predictions based on proxy data is greater at point A than it is at point B. This is because the calibration errors are magnified for predictions based on proxy values that are outside the range of proxy data used to estimate the linear relationship.

FIGURE 9-1 An illustration of using linear regression to predict temperature from proxy values. Plotted are 100 pairs of points corresponding to a hypothetical dataset of proxy observations and temperature measurements. The solid black line is the least-squares fitted line and the blue lines indicate 95 percent prediction intervals for temperature using this linear relationship. The dashed line and the red line indicate possible departures from a linear relationship between the proxy data and the temperature data. The figure also illustrates predictions made at proxy values A and B and the corresponding prediction intervals (wide blue lines) for the temperature.
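The widening of the intervals away from the bulk of the calibration data (point A versus point B) follows from the standard prediction-standard-error formula for simple linear regression; here is a minimal numeric check on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration data: proxy x and temperature y, linearly related with noise.
n = 100
x = rng.normal(0.0, 1.0, n)
y = 1.5 * x + rng.normal(0.0, 0.5, n)

b, a = np.polyfit(x, y, 1)                    # slope, intercept
resid = y - (a + b * x)
s2 = np.sum(resid ** 2) / (n - 2)             # residual variance estimate

def pred_se(x0):
    """Standard error of a new prediction at proxy value x0."""
    xbar = x.mean()
    sxx = np.sum((x - xbar) ** 2)
    return np.sqrt(s2 * (1.0 + 1.0 / n + (x0 - xbar) ** 2 / sxx))

# The interval is wider far from the calibration mean (extrapolation, like
# point A) than near it (interpolation, like point B).
se_near, se_far = pred_se(x.mean()), pred_se(x.mean() + 4.0)
```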
In the case of multiple proxies used to predict temperature, it is not possible to use a two-dimensional graph to illustrate the fitted statistical model and the uncertainties. However, as in the single-proxy case, the prediction errors using several proxies will increase as the values deviate from those observed in the calibration period.

Variability in the Regression Predictions

Strictly speaking, assumption 1 posits a straight-line relationship between the average value of the climate variable, given the proxy, and the value of the proxy. This detail has the practical significance of potentially reducing the variability in the reconstructed series, which can also be illustrated using Figure 9-1. For example, note that there is some variability in the instrumental temperature measurements at the proxy value B (i.e., near point B, there are multiple temperature readings, most of which do not fall on the calibration line). However, estimates of past temperatures using proxy data near B will always yield the same temperature, namely TB, rather than a corresponding scatter of temperatures. This difference is entirely appropriate because TB is the most likely temperature value for each proxy measurement that yields B. In general, the predictions from regression will have less variability than the actual values, so time series of reconstructed temperatures will not fully reproduce the variability observed in the instrumental record. One way to assess methods of reconstructing temperatures is to apply them to a synthetic history in which all temperatures are known. Zorita and von Storch (2005) and von Storch et al. (2004) carried out such an exercise using temperature output from the ECHO-G climate model. These authors constructed pseudoproxies by sampling the temperature field at locations used by Mann et al. (1998) and corrupted them with increasing levels of white noise.
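A schematic version of such a pseudoproxy experiment takes only a few lines (this is an illustrative stand-in, not the ECHO-G setup): the regression-based reconstruction recovers less variance than the true series, and the attenuation grows with the proxy noise level.

```python
import numpy as np

rng = np.random.default_rng(2)

# "True" temperature series; pseudoproxies are the truth plus white noise.
n = 1000
temp = rng.normal(0.0, 1.0, n)

def reconstruction_variance(noise_sd):
    proxy = temp + rng.normal(0.0, noise_sd, n)
    slope, intercept = np.polyfit(proxy, temp, 1)
    recon = slope * proxy + intercept          # regression-based reconstruction
    return recon.var()

# More proxy noise -> weaker regression -> more attenuated reconstruction.
var_low_noise = reconstruction_variance(0.5)
var_high_noise = reconstruction_variance(2.0)
```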
They then reconstructed the Northern Hemisphere average temperature using both regression methods and the related Mann et al. method and found that in both cases the variance of the reconstruction was attenuated relative to the "true" temperature time series, with the attenuation increasing as the noise variance was increased. This phenomenon, identified by Zorita and von Storch (2005) and others, is not unexpected. Within the calibration period, the fraction of variance of temperature that is explained by the proxies naturally decreases as the noise level of the proxies increases. If the regression equation is then used to reconstruct temperatures for another period during which the proxies are statistically similar to those in the calibration period, it would be expected to capture a similar fraction of the variance. Some other approaches to reconstruction (e.g., Moberg et al. 2005b) yield a reconstructed series that has variability similar to the observed temperature series. These approaches include alternatives to ordinary regression methods, such as inverse regression and total least-squares regression (Hegerl 2006), that are not subject to attenuation. Such methods may avoid the misleading impression that a graph of an attenuated temperature signal might give, but they do so at a price: Direct regression gives the most precise reconstruction, in the sense of mean squared error, so these other methods give up accuracy. Referring back to the example in Figure 9-1, using the straight-line relationship is the best prediction to make from these data, and any inflation of the variability will degrade the accuracy of the reconstruction.

Departures from the Assumptions

The dashed line in Figure 9-1 represents a hypothetical departure from the strict linear relationship between the proxy data and temperature.
This illustrates a violation of the linearity assumption because, for lower values of the proxy, the relationship is not the same as given by the (straight) least-squares best-fit line. If the dashed line describes a more accurate representation of the relationship between the proxy values and temperature measurements at lower proxy values, then using the dashed line will result in different reconstructed temperature series. The linear relationship among the temperature and proxy variables can also be influenced by whether the variables are detrended. If a temperature and a proxy time series share a common trend but are uncorrelated once the trends are removed, the regression analysis can give markedly different results. The regression performed without first removing a trend could exhibit a strong relationship, while the detrended regression could be weak. Whether to include a trend or not should be based on scientific understanding of the similarities or differences of the relationship over longer and shorter timescales. A departure from the stationarity assumption is illustrated by the red line in Figure 9-1. Suppose that in a period other than the calibration period, the proxy and the temperature are related on average by the red line, that is, a different linear relationship from the one in the calibration period. For an accurate reconstruction, one would want to use this red line and the estimate for a temperature at the proxy value A is indicated by TA* in the figure. Both the linearity and stationarity assumptions may be checked using the training and validation periods of the instrumental record. If the relationship is not linear over the training period, a variety of objective statistical approaches can be used to describe a more complicated relationship than a linear one. Moreover, one can contrast the effect of using detrended versus raw variables. 
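The detrended-versus-raw contrast mentioned above is easy to demonstrate with synthetic series that share a linear trend but are otherwise independent: the raw correlation is strong, while the detrended correlation is near zero.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 200
t = np.arange(n, dtype=float)
trend = 0.02 * t

# Shared trend, independent noise: no relationship once the trend is removed.
temp = trend + rng.normal(0.0, 0.5, n)
proxy = trend + rng.normal(0.0, 0.5, n)

def detrend(z):
    """Remove the least-squares linear trend from a series."""
    b, a = np.polyfit(t, z, 1)
    return z - (a + b * t)

r_raw = np.corrcoef(proxy, temp)[0, 1]
r_detrended = np.corrcoef(detrend(proxy), detrend(temp))[0, 1]
```

Which of the two correlations is the relevant one depends, as the text notes, on whether the proxy-temperature relationship is believed to hold at long as well as short timescales.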
Stationarity can also be tested for the validation period, although in most cases the use of the proxy relationship will involve extrapolation beyond the range of observed values, such as in the case of point A in the illustration given above. In cases such as this, the extrapolation must also rely on the scientific context for its validity; that is, one must provide a scientific basis for the assumed relationship. The distinction between the assumptions used to reconstruct temperatures and the additional assumptions required to generate statistical measures of the uncertainty of such reconstructions is critical. For example, the error bounds in Figure 9-1 are based on statistical assumptions on how the temperature departs from an exact linear relationship. These assumptions can also be checked using the training and calibration periods, and often more complicated regression methods can be used to adjust for particular features in the data that violate the assumptions. One example is temporal correlation among data points, which is discussed in the next section.

Inverse Regression and Statistical Calibration

Reconstructing temperature or another climate variable from a proxy such as a tree ring parameter has a formal resemblance to the statistical calibration of a measurement instrument. A statistical calibration exercise consists of a sequence of experiments in which a single factor (e.g., the temperature) is set to precise, known levels,
This approach is known as inverse regression (Eisenhart 1939) because the roles of the response and factor are reversed from the more direct prediction illustrated in Figure 9-1. Attaching an uncertainty to the result is nontrivial, but conservative approximations are known (Fieller 1954). There remains some debate in the statistical literature concerning the circumstances when inverse or direct methods are better (Osborne 1991). The temperature reconstruction problem does not fit into this framework because both temperature and proxy values are not controlled. A more useful model is to consider the proxy and the target climate variable as a bivariate observation on a complex system. Now the statistical solution to the reconstruction problem is to state the conditional distribution of the unobserved part of the pair, temperature, given the value of the observed part, the proxy. This is also termed the random calibration problem by Brown (1982). If the bivariate distribution is Gaussian, then the conditional distribution is itself Gaussian, with a mean that is a linear function of the proxy and a constant variance. From a sample of completely observed pairs, the regression methods outlined above give unbiased estimates of the intercept and slope in that linear function. In reality, the bivariate distribution is not expected to follow the Gaussian exactly. In this case the linear function is only an approximation; however, the adequacy of these approximations can be checked based on the data using standard regression diagnostic methods. With multiple proxies, the dimension of the joint distribution increases, but the calculation of the conditional distribution is a direct generalization from the bivariate (single-proxy) case. Regression with Correlated Data In most cases, calibrations are based on proxy and temperature data that are sequential in time. 
However, geophysical data are often autocorrelated, which has the effect of reducing the effective sample size of the data. This reduction in sample size decreases the accuracy of the estimated regression coefficients and causes the standard error to be underestimated during the calibration period. To avoid these problems and form a credible measure of uncertainty, the autocorrelation of the input data must be taken into account. The statistical strategy for accommodating correlation in the data used in a regression model is two-pronged. The first part is to specify a model for the correlation structure and to use modified regression estimates (generalized least squares) that achieve better precision. The correctness of the specification can be tested using, for example, the Durbin-Watson statistic (Durbin and Watson 1950, 1951, 1971). The second part of the strategy is to recognize that correlation structure is usually too complex to be captured with parsimonious models. This structure may be revealed by a significant Durbin-Watson statistic or some other test, or it may be suspected on other grounds. In this case, the model-based standard errors of estimated coefficients may be replaced by more robust versions, discussed for instance by Hinkley and Wang (1991). For time series data, Andrews (1991) describes estimates of standard errors that are consistent in the presence of autocorrelated errors with changing variances. For time series data, the correlations are usually modeled as stationary; parsimonious models for stationary time series, such as ARMA, were popularized by Box and Jenkins (Box et al. 1994). Note that this approach does not require either the temperature or the proxy to be stationary, only the errors in the regression equation.
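The Durbin-Watson statistic is straightforward to compute from residuals: values near 2 indicate little lag-1 autocorrelation, and values well below 2 indicate positive autocorrelation. A small sketch with synthetic white-noise and AR(1) "residual" series:

```python
import numpy as np

rng = np.random.default_rng(4)

def durbin_watson(resid):
    """d = sum((e_t - e_{t-1})^2) / sum(e_t^2); roughly d = 2 * (1 - r1)."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

n = 500
white = rng.normal(0.0, 1.0, n)

# AR(1) errors with coefficient 0.7: strongly positively autocorrelated.
ar1 = np.empty(n)
ar1[0] = white[0]
for i in range(1, n):
    ar1[i] = 0.7 * ar1[i - 1] + white[i]

d_white = durbin_watson(white)   # near 2: no evidence of autocorrelation
d_ar1 = durbin_watson(ar1)       # well below 2: positive autocorrelation
```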
Reconstruction Uncertainty and Temporal Correlation

An indication of the uncertainty of a reconstruction is an important part of any display of the reconstruction itself. Usually this is in the form

reconstruction ± 2 standard errors,

and the standard error is given by conventional regression calculations. The prediction mean squared error is the square of the standard error and is the sum of two terms. One is the variance of the errors in the regression equation, which is estimated from calibration data, and may be modified in the light of differences between the calibration errors and the validation errors. This term is the same for all reconstruction dates. The other term is the variance of the estimation error in the regression parameters, and this varies in magnitude depending on the values of the proxies and also the degree of autocorrelation in the errors. This second term is usually small for a date when the proxies are well within the range represented by the calibration data, but may become large when the equation is used to extrapolate to proxy values outside that range.

Smoothed Reconstructions

Reconstructions are often shown in a smoothed form, both because the main features are brought out by smoothing and because the reconstruction of low-frequency features may be more precise than short-term behavior. The two parts of the prediction variance are both affected by smoothing but in different ways. The effect on the first depends on the correlation structure of the errors, which may require some further modeling, but is always a reduction in size. The second term depends on the smoothed values of the proxies and may become either larger or smaller but typically becomes a more important part of the resulting standard error, especially when extrapolating.
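In ordinary least-squares notation, the two-term prediction variance described above can be written as follows for a prediction at proxy vector $x_0$ (our notation; this is a standard regression identity, not a formula given in the text):

```latex
\operatorname{MSE}\bigl(\hat{y}(x_0)\bigr)
  \;=\;
  \underbrace{\sigma^2}_{\text{regression error variance}}
  \;+\;
  \underbrace{x_0^{\top}\operatorname{Var}(\hat{\beta})\,x_0}_{\text{parameter estimation error}}
```

The first term is the same for every reconstruction date; the second grows as $x_0$ moves away from the proxy values represented in the calibration period.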
PRINCIPAL COMPONENT REGRESSION

The basic idea behind principal component regression is to replace the predictors (i.e., individual proxies) with a smaller number of objectively determined variables that are linear combinations of the original proxies. The new variables are designed to contain as much information as possible from the original proxies. As the number of principal components becomes large, the principal component regression becomes close to the regression on the full set of proxies. However, in practice the number of principal components is usually kept small, to avoid overfitting and the consequent loss of prediction skill. No known statistical theory suggests that limiting the number of principal components used in regression leads to good predictions, although this practice has been found to work well in many applications. Fritts et al. (1971) introduced the idea to dendroclimatology, and it was discussed by Briffa and Cook (1990).
Spurious Principal Components McIntyre and McKitrick (2003) demonstrated that under some conditions the leading principal component can exhibit a spurious trendlike appearance, which could then lead to a spurious trend in the proxy-based reconstruction. To see how this can happen, suppose that instead of proxy climate data, one simply used a random sample of autocorrelated time series that did not contain a coherent signal. If these simulated proxies are standardized as anomalies with respect to a calibration period and used to form principal components, the first component tends to exhibit a trend, even though the proxies themselves have no common trend. Essentially, the first component tends to capture those proxies that, by chance, show different values between the calibration period and the remainder of the data. If this component is used by itself or in conjunction with a small number of unaffected components to perform reconstruction, the resulting temperature reconstruction may exhibit a trend, even though the individual proxies do not. Figure 9-2 shows the result of a simple simulation along the lines of McIntyre and McKitrick (2003) (the computer code appears in Appendix B). In each simulation, 50 autocorrelated time series of length 600 were constructed, with no coherent signal. Each was centered at the mean of its last 100 values, and the first principal component was found. The figure shows the first components from five such simulations overlaid. Principal components have an arbitrary sign, which was chosen here to make the last 100 values higher on average than the remainder. Principal components of sample data reflect the shape of the corresponding eigenvectors of the population covariance matrix. The first eigenvector of the covariance matrix for this simulation is the red curve in Figure 9-2, showing the precise form of the spurious trend that the principal component would introduce into the fitted model in this case. 
This exercise demonstrates that the baseline with respect to which anomalies are calculated can influence principal components in unanticipated ways. Huybers (2005), commenting on McIntyre and McKitrick (2005a), points out that normalization also affects results, a point that is reinforced by McIntyre and McKitrick (2005b) in their response to Huybers. Principal component calculations are often carried out on a correlation matrix obtained by normalizing each variable by its sample standard deviation. Variables in different physical units clearly require some kind of normalization to bring them to a common scale, but even variables that are physically equivalent or OCR for page 83 Surface Temperature Reconstructions for the last 2,000 Years FIGURE 9-2 Five simulated principal components and the corresponding population eigenvector. See text for details. normalized to a common scale may have widely different variances. Huybers comments on tree ring densities, which have much lower variances than widths, even after conversion to dimensionless “standardized” form. In this case, an argument can be made for using the variables without further normalization. However, the higher-variance variables tend to make correspondingly higher contributions to the principal components, so the decision whether to equalize variances or not should be based on the scientific considerations of the climate information represented in each of the proxies. Each principal component is a weighted combination of the individual proxy series. When those series consist of a common signal plus incoherent noise, the best estimate of the common signal has weights proportional to the sensitivity of the proxy divided by its noise variance. These weights in general are not the same as the weights in the principal component, as calculated from either raw or standardized proxies, either of which is therefore suboptimal. 
In any case, the principal components should be constructed to achieve a low-dimensional representation of the entire set of proxy variables that incorporates most of the climate information contained therein. OCR for page 83 Surface Temperature Reconstructions for the last 2,000 Years VALIDATION AND THE PREDICTION SKILL OF THE PROXY RECONSTRUCTION The role of a validation period is to provide an independent assessment of the accuracy of the reconstruction method. As discussed above, it is possible to overfit the statistical model during the calibration period, which has the effect of underestimating the prediction error. Reserving a subset of the data for validation is a natural way to offset this problem. If the validation period is independent of the calibration period, any skill measures used to assess the quality of the reconstruction will not be biased by the potential overfit during the calibration period. An inherent difficulty in validating a climate reconstruction is that the validation period is limited to the historical instrumental record, so it is not possible to obtain a direct estimate of the reconstruction skill at earlier periods. Because of the autocorrelation in most geophysical time series, a validation period adjacent to the calibration period cannot be truly independent; if the autocorrelation is short term, the lack of independence does not seriously bias the validation results. Measures of Prediction Skill Some common measures used to assess the accuracy of statistical predictions are the mean squared error (MSE), reduction of error (RE), coefficient of efficiency (CE), and the squared correlation (r2). The mathematical definitions of these quantities are given in Box 9.1. MSE is a measure of how close a set of predictions are to the actual values and is widely used throughout the geosciences and statistics. It is usually normalized and presented in the form of either the RE statistic (Fritts 1976) or the CE statistic (Cook et al. 1994). 
The RE statistic compares the MSE of the reconstruction to the MSE of a reconstruction that is constant in time with a value equivalent to the sample mean for the calibration data. If the reconstruction has any predictive value, one would expect it to do better than just the sample average over the calibration period; that is, one would expect RE to be greater than zero. The CE, on the other hand, compares the MSE to the performance of a reconstruction that is constant in time with a value equivalent to the sample mean for the validation data. This second constant reconstruction depends on the validation data, which are withheld from the calibration process, and therefore presents a more demanding comparison. In fact, CE will always be less than RE, and the difference increases as the difference between the sample means for the validation and the calibration periods increases. If the calibration has any predictive value, one would expect it to do better than just the sample average over the validation period and, for this reason, CE is a particularly useful measure. The squared correlation statistic, denoted as r², is usually adopted as a measure of association between two variables. Specifically, r² measures the strength of a linear relationship between two variables when the linear fit is determined by regression. For example, the correlation between the variables in Figure 9-1 is 0.88, which means that the regression line explains 100 × 0.88² = 77.4 percent of the variability in the temperature values. However, r² measures how well some linear function of the predictions matches the data, not how well the predictions themselves perform. The coefficients in that linear function cannot be calculated without knowing the values being predicted, so it is not in itself a useful indication of merit.
A high CE value, however, will always have a high r², and this is another justification for considering the CE.

BOX 9.1 Measures of Reconstruction Accuracy

Let $y_t$ denote a temperature at time $t$ and $\hat{y}_t$ the prediction based on a proxy reconstruction.

Mean squared error (MSE):
$$\mathrm{MSE} = \frac{1}{N}\sum_t \left(y_t - \hat{y}_t\right)^2,$$
where the sum on the right-hand side of the equation is over times of interest (either the calibration or validation period) and N is the number of time points.

Reduction of error statistic (RE):
$$\mathrm{RE} = 1 - \frac{\mathrm{MSE}}{\mathrm{MSE}_C},$$
where $\mathrm{MSE}_C$ is the mean squared error of using the sample average temperature over the calibration period (a constant, $\bar{y}_C$) to predict temperatures during the period of interest:
$$\mathrm{MSE}_C = \frac{1}{N}\sum_t \left(y_t - \bar{y}_C\right)^2.$$

Coefficient of efficiency (CE):
$$\mathrm{CE} = 1 - \frac{\mathrm{MSE}}{\mathrm{MSE}_V},$$
where $\mathrm{MSE}_V$ is the mean squared error of using the sample average over the period of interest (a constant, $\bar{y}_V$) as a predictor of temperatures during the period of interest:
$$\mathrm{MSE}_V = \frac{1}{N}\sum_t \left(y_t - \bar{y}_V\right)^2.$$

Squared correlation (r²): the squared correlation between the actual values $y_t$ and the predictions $\hat{y}_t$ over the period of interest,
$$r^2 = \left[\frac{\sum_t (y_t-\bar{y})(\hat{y}_t-\bar{\hat{y}})}{\sqrt{\sum_t (y_t-\bar{y})^2}\,\sqrt{\sum_t (\hat{y}_t-\bar{\hat{y}})^2}}\right]^2.$$

If $\hat{y}_t$ are the predictions from a linear regression of $y_t$ on the proxies, and the period of interest is the calibration period, then RE, CE, and r² are all equal. Otherwise, CE is less than both RE and r².

Illustration of CE and r²

Figure 9-3 gives some examples of a hypothetical temperature series and several reconstruction series, where the black line is the actual temperatures and the colored lines are various reconstructions. The red line has an r² of 1 but a CE of –18.9 and is an example of a perfectly correlated reconstruction with no skill at prediction.

[FIGURE 9-3: A hypothetical temperature series (black line) and four possible reconstructions.]

The dashed blue line is level at the mean temperature and has an r² and a CE that are both zero. The blue and green reconstructed lines both have a CE of 0.50. For either of these reconstructions to be better than just the mean, they must exhibit some degree of correlation with the temperatures.
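The quantities in Box 9.1 translate directly into code; a minimal sketch (the function name and the toy series are invented for illustration, not taken from the report):

```python
import numpy as np

def skill_scores(y, y_hat, calib_mean):
    """MSE, RE, CE, and r^2 for a reconstruction y_hat of the temperatures y
    over a period of interest, given the calibration-period sample mean."""
    mse = np.mean((y - y_hat) ** 2)
    mse_c = np.mean((y - calib_mean) ** 2)  # constant predictor: calibration mean
    mse_v = np.mean((y - np.mean(y)) ** 2)  # constant predictor: mean of this period
    re = 1.0 - mse / mse_c
    ce = 1.0 - mse / mse_v
    r2 = np.corrcoef(y, y_hat)[0, 1] ** 2
    return mse, re, ce, r2

# A perfect reconstruction scores RE = CE = r^2 = 1.
y = np.array([0.10, -0.20, 0.30, 0.00, 0.25])
perfect = skill_scores(y, y, calib_mean=-0.05)

# For an imperfect reconstruction, CE never exceeds RE: the period-of-interest
# mean is the best constant predictor, so mse_v <= mse_c.
noisy = skill_scores(y, y + np.array([0.1, -0.1, 0.0, 0.1, -0.1]), calib_mean=-0.05)
```

The inequality CE ≤ RE noted in the box follows immediately from the last comment in the code.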
In this case, r² is 0.68 for the blue line and 0.51 for the green line. Despite a common CE, these two reconstructions match the temperature series in different ways. The blue curve is more highly correlated with the short-term fluctuations, and the green curve tracks the longer term variations of the temperature series. The difference between the blue and green lines illustrates that the CE statistic alone does not contain all the useful information about the reconstruction error.

Distinguishing Between RE and CE and the Validation Period

The combination of a high RE and a low CE or r² means that the reconstruction identified the change in mean levels between the calibration and validation periods reasonably well but failed to track the variations within the validation period. One way that this discrepancy can occur is for the proxies and the temperatures to be related by a common trend in the calibration period. When the trend is large this can result in a high RE. If the validation period does not have as strong a trend and the proxies are not skillful at predicting shorter timescale fluctuations in temperature, then the CE can be substantially lower. For example, the reconstructions may only do as well as the mean level in the validation period, in which case CE will be close to zero. An ideal validation procedure would measure skill at different timescales, or in different frequency bands using wavelet or power spectrum calculations. Unfortunately, the paucity of validation data places severe limits on their sensitivity. For instance, a focus on variations of decadal or longer timescales with the 45 years of validation data used by Mann et al. (1998) would give statistics with just (2 × 45 ÷ 10) = 9 degrees of freedom, too few to adequately quantify skill.
This discussion also motivates the choice of a validation period that exhibits the same kind of variability as the calibration period. Simply using the earliest part of the instrumental series may not be the best choice for validation.

Determining Uncertainty and Selecting Among Statistical Methods

Besides supplying an unbiased appraisal of the accuracy of the reconstruction, the validation period can also be used to adjust the uncertainty measures for the reconstruction. For example, the MSE calculated for the validation period provides a useful measure of the accuracy of the reconstruction; the square root of MSE can be used as an estimate of the reconstruction standard error. Reconstructions that have poor validation statistics (i.e., low CE) will have correspondingly wide uncertainty bounds, and so can be seen to be unreliable in an objective way. Moreover, a CE statistic close to zero or negative suggests that the reconstruction is no better than the mean, and so its skill for time averages shorter than the validation period will be low. Some recent results reported in Table 1S of Wahl and Ammann (in press) indicate that their reconstruction, which uses the same procedure and full set of proxies used by Mann et al. (1999), gives CE values ranging from 0.103 to –0.215, depending on how far back in time the reconstruction is carried. Although some debate has focused on when a validation statistic, such as CE or RE, is significant, a more meaningful approach may be to concentrate on the implied prediction intervals for a given reconstruction. Even a low CE value may still provide prediction intervals that are useful for drawing particular scientific conclusions. The work of Bürger and Cubasch (2005) considers different variations on the reconstruction method to arrive at 64 different analyses.
Although they do not report CE, examination of Figure 1 in their paper suggests that many of the variant reconstructions will have low CE and that selecting a reconstruction based on its CE value could be a useful way to winnow the choices for the reconstruction. Using CE to judge the merits of a reconstruction is known as cross-validation and is a common statistical technique for selecting among competing models and subsets of data. When the validation period is independent of the calibration period, cross-validation avoids many of the issues of overfitting if models were simply selected on the basis of RE.

QUANTIFYING THE FULL UNCERTAINTY OF A RECONSTRUCTION

The statistical framework based on regression provides a basis for attaching uncertainty estimates to the reconstructions. It should be emphasized, however, that this is only the statistical uncertainty and that other sources of error need to be addressed from a scientific perspective. These sources of error are specific to each proxy and are discussed in detail in Chapters 3–8 of this report. The quantification of statistical uncertainty depends on the stationarity and linearity assumptions cited above, the adjustment for temporal correlation in the proxy calibration, and the sensible use of principal components or other methods for data reduction. On the basis of these assumptions and an approximate Gaussian distribution for the noise in the relationship between temperature and the proxies, one can derive prediction intervals for the reconstructed temperatures using standard techniques (see, e.g., Draper and Smith 1981). This calculation will also provide a theoretical MSE for the validation period, which can be compared to the actual mean squared validation error as a check on the method.
One useful adjustment is to inflate the estimated prediction standard error (but not the reconstruction itself) in the predictions so that they agree with the observed CE or other measures of skill during the validation period. This will account for the additional uncertainty in the predictions that cannot be deduced directly from the statistical model. Another adjustment is to use Monte Carlo simulation techniques to account for uncertainty in the choice of principal components. Often, 10-, 30-, or 50-year running means are applied to temperature reconstructions to estimate long-term temperature averages. A slightly more elaborate computation, but still a standard technique in regression analysis, would be to derive a covariance matrix of the uncertainties in the reconstructions over a sequence of years. This would make it possible to provide a statistically rigorous standard error when proxy-based reconstructions are smoothed.

Interpreting Confidence Intervals

A common way of reporting the uncertainty in a reconstruction is graphing the reconstructed temperature for a given year with the upper and lower limits of a 95 percent confidence interval to quantify the uncertainty. Usually, the reconstructed curve is plotted with the confidence intervals forming a band about the estimate. The fraction of variance that is not explained by the proxies is associated with the residuals, and their variance is one part of the mean squared prediction error, which determines the width of the error band. Although this way of illustrating uncertainty ranges is correct, it can easily be misinterpreted. The confusion arises because the uncertainty for the reconstruction is shown as a curve, rather than as a collection of points. For example, the 95 percent confidence intervals, when combined over the time of the reconstruction, do not form an envelope that has 95 percent probability of containing the true temperature series.
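The difference between year-by-year coverage and coverage of the whole series can be illustrated with a small Monte Carlo sketch; the white-noise error model and the sizes below are invented for illustration and are much simpler than a real reconstruction's error structure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_sims = 50, 4000
se, z = 0.25, 1.96  # per-year standard error and the 95% normal quantile

# Draw reconstruction errors for each simulated series (independent white noise
# here; real reconstruction errors are autocorrelated, which changes the numbers).
errors = rng.normal(0.0, se, size=(n_sims, n_years))

inside = np.abs(errors) <= z * se           # truth inside the interval, per year
pointwise_cov = inside.mean()               # close to 0.95 for any single year
envelope_cov = inside.all(axis=1).mean()    # whole series inside the band

print(pointwise_cov, envelope_cov)
```

With 50 independent years the chance that every year falls inside its own 95 percent interval is roughly 0.95 to the 50th power, under 10 percent, which is why the band cannot be read as an envelope for the whole series.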
To form such an envelope, the intervals would have to be inflated further with a factor computed from a statistical model for the autocorrelation, typically using Monte Carlo techniques. Such inflated intervals would be a valid description of the uncertainty in the maximum or minimum of the reconstructed temperature series. Other issues also arise in interpreting the shape of a temperature reconstruction curve. Most temperature reconstructions exhibit a characteristic variability over time. However, the characteristics of the unknown temperature series may be quite different from those of the reconstruction, which must always be borne in mind when interpreting a reconstruction. For example, one might observe some decadal variability in the reconstruction and attribute similar variability to the real temperature series. However, this inference is not justified without further statistical assumptions, such as the probability of a particular pattern appearing due to chance in a temporally correlated series. Given the attenuation in variability associated with the regression method and the temporal correlations within the proxy record that may not be related to temperature, quantifying how the shape of the reconstructed temperature curve is related to the actual temperature series is difficult.

Ensembles of Reconstructions

One approach to depicting the uncertainty in the reconstructed temperature series is already done informally by considering a sample, or ensemble, of possible reconstructions. By graphing different approaches or variants of a reconstruction on the same axes, such as Figure S-1 of this report, differences in variability and trends can be appreciated. The problem with this approach is that the collection of curves cannot be interpreted as a representative sample of some population of reconstructions. This is also true of the 64 variants in Bürger and Cubasch (2005).
The differences in methodology and datasets supporting these reconstructions make them distinct, but whether they represent a deliberate sample from the range of possible temperature reconstructions is not clear. As an alternative, statistical methods exist for generating an ensemble of temperature reconstructions that can be interpreted in the more traditional way as a random sample. Although this requires additional statistical assumptions on the joint distribution of the proxies and temperatures, it simplifies the interpretation of the reconstruction. For example, to draw inferences about the maximum values in past temperatures, one would just form a histogram of the maxima in the different ensemble members. The spread in the histogram is a rigorous way to quantify the uncertainty in the maximum of a temperature reconstruction.
Question: A triangular prism has a volume of 10 cm³. If each dimension of the prism is multiplied by four, what is the new volume of the prism? 320 cm³, 80 cm³, 160 cm³, 640 cm³

Helper: Give me a sec to solve.

Asker: Thank you, I'm redoing them to boost my grade.

Helper: The answer is 640 cm³. How? Solution: V = (1/2)·l·w·h, and we are given V = 10 cm³. With each dimension multiplied by four, the new volume is (1/2)·(4l)·(4w)·(4h) = 64·(1/2)·l·w·h = 64·V = 64·10 = 640 cm³.

Asker: Wow, you're good. Can you help me again? Each dimension of a rectangular prism is increased by 30%. What percent of the original volume is the volume of the new prism? 9 percent, 169 percent, 2.7 percent, 219.7 percent

Helper: OK, wait, let me solve. The answer is 219.7%. Let's prove it: V = l·w·h. If we use h = 2, w = 3, l = 4, then V = 24 units³. Increasing each by 30% gives h = 2.6, w = 3.9, l = 5.2, so V ≈ 52.73 units³, which is 219.7% of the original volume: 24 × 219.7% = 52.728. Hence proved.
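Both answers follow from the same fact: scaling every linear dimension by k multiplies the volume by k cubed, whatever the base shape. A quick check (the function name is mine):

```python
# Scaling every linear dimension of a solid by k multiplies its volume by k**3,
# regardless of base shape: the shape-dependent factor cancels in the ratio.
def scaled_volume(v, k):
    return v * k ** 3

print(scaled_volume(10, 4))    # 640 cm^3 for the triangular prism
print(scaled_volume(24, 1.3))  # about 52.728 units^3 for the rectangular prism
print(1.3 ** 3)                # about 2.197, i.e. 219.7 percent of the original
```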
Properties of the product topology

February 10th 2010, 11:04 PM

Hello friends. I am doing an independent study course, and it is a bit of the Moore method style. So, right now I am studying product topology and have come up with some conjectures. I have "proof" for all of them but would appreciate (no need for proof if you don't want) if someone could validate whether or not they are true.

1. Let $\left\{X_j\right\}_{j\in\mathcal{J}}$ be a class of topological spaces and let $\mathfrak{B}_\ell$ be an open base for $X_\ell$ for each $\ell\in\mathcal{J}$. Then, $\mathfrak{B} = \prod_{j\in\mathcal{J}}\mathfrak{B}_j$ is an open base for $X = \prod_{j\in\mathcal{J}}X_j$ under the product topology.

2. Consequently, if $X_1,\cdots,X_n$ are a finite collection of second countable topological spaces then $X_1\times\cdots\times X_n$ is separable.

3. If $\left\{X_j\right\}_{j\in\mathcal{J}}$ is a collection of topological spaces and $X = \prod_{j\in\mathcal{J}}X_j$ is the product space of these spaces, then for any $x\in X$ we have that if $N$ is a neighborhood of $x$ then $\pi_j(N)$ is a neighborhood of $\pi_j(x)$ for each $j\in\mathcal{J}$.

4. The converse is true if $\mathcal{J}$ is finite.

5. Using this we can show that if $\left\{X_j\right\}_{j\in\mathcal{J}}$ is a collection of topological spaces, and $\mathcal{D}_\ell$ is dense in $X_\ell$ for each $\ell\in\mathcal{J}$, then $\prod_{j\in\mathcal{J}}\mathcal{D}_j$ is dense in $X = \prod_{j\in\mathcal{J}}X_j$ with the product topology.

6. Consequently, if $X_1,\cdots,X_n$ are separable topological spaces then $X = X_1\times\cdots\times X_n$ is separable.

That's it for now. Any input would be incredibly appreciated. Also, I feel as though I should point out that even though I said this is Moore method like...this is just for my own learning. There is no attempt at foul play here.

February 11th 2010, 06:41 AM
I'll see if I manage to look at the rest, but I need more time :p 1. False. First off, the product cannot be written like you did. You're taking the product of the elements of the bases, not of the bases themselves. That aside, you need to restrict the product to only finitely many indices (the rest being the whole space). 2. A countable (need not be finite) product of second countable spaces is second countable, and every second countable space is separable. 4. Isn't it always true, independently of $\mathcal J$ being finite or not? If you pick an open neighbourhood $N$ of $\pi_j (x)$ in $X_j$, the inverse image of $N$ is open since $\pi_j$ is continuous, and it obviously contains $x$. February 11th 2010, 10:43 AM 3) Is false unless something else is suposed. As Nyrix said continuity of projections guarantee the validity of 4) in ANY case. Maybe 3 is valid only if J is finite ? IF the product is finite the product topology and "box"-topology are the same, under the box topology i think 3) is true by construction. February 11th 2010, 10:49 AM Well, here are 3 I'm pretty sure of. I'll see if I manage to look at the rest, but I need more time :p 1. False. First off, the product cannot be written like you did. You're taking the product of the elements of the bases, not of the bases themselves. That aside, you need to restrict the product to only finitely many indices (the rest being the whole space). My mistake, I should have said that $\mathcal{J}$ is finite. Otherwise, as you pointed out there is no reason for the base elements to even be open under the product topology. Also, I had a bit of a notational faux pas. What I meant by the product was this. The set $\mathfrak{B}=\left\{B_1\times \cdots\times B_n:B_k\in\mathfrak{B}_k,\text{ }1\leqslant k\leqslant n\right\}$ is an open 2. A countable (need not be finite) product of second countable spaces is second countable, and every second countable space is separable. 
I see, but from what I have proved in the above (with the addendum that $\mathcal{J}$ is finite) it follows relatively easily that the finite case is true. I will look more closely at the countable case. 4. Isn't it always true, independently of $\mathcal J$ being finite or not? If you pick an open neighbourhood $N$ of $\pi_j (x)$ in $X_j$, the inverse image of $N$ is open since $\pi_j$ is continuous, and it obviously contains $x$. This was just stupidity on my part. Of course you are correct, I wasn't thinking! February 11th 2010, 10:58 AM 3) Is false unless something else is suposed. As Nyrix said continuity of projections guarantee the validity of 4) in ANY case. Maybe 3 is valid only if J is finite ? IF the product is finite the product topology and "box"-topology are the same, under the box topology i think 3) is true by construction. 3. If $\left\{X_j\right\}_{j\in\mathcal{J}}$ is a collection of topological spaces and $X = \prod_{j\in\mathcal{J}}X_j$ is the product space of these spaces, then for any $x\in X$ we have that if $N$ is a neighborhood of $X$ then $\pi_j(N)$ is a neighborhood of $\pi_j(x)$ for each $j\in\mathcal{J}$ It is actually true in general. I had it validated by someone who is algebraic topologist on AOPS! Here is the proof 3. If $\left\{X_j\right\}_{j\in\mathcal{J}}$ is a collection of topological spaces and $X = \prod_{j\in\mathcal{J}}X_j$ is the product space of these spaces, then for any $x\in X$ we have that if $N$ is a neighborhood of $X$ then $\pi_j(N)$ is a neighborhood of $\pi_j(x)$ for each $j\in\mathcal{J}$ Proof: Let $N$ be a neighborhood of $x$ in $X$. Then, $N=\bigcup_{k\in\mathcal{K}}O_k$ where $O_k$ is an open basic set in the product topology. Then, $\pi_j\left(N\right)=\pi_j\left(\bigcup_{k\ in\mathc al{K}}O_k\right)=\bigcup_{k\in\mathcal{K}}\pi_j\le ft(O_k\right)$. 
Now regardless of what $O_k$ we know that $\pi_j\left(O_k\right)$ is open (either it is the full space $X_j$ or some open set $G_j$) and so each $\pi_j\left(O_k\right)$ is an open set in $X_j$ and so $\pi_j\left(N\right)=\bigcup_{k\in\mathcal{K}}\pi_j \left(O_k\right)$ is the arbitrary union of open sets in $X_j$ and thus open in $X_j$. And of course lastly noting that $x\in N\implies \pi_j(x)\in\pi_j\left(N\right)$ finishes the argument. February 11th 2010, 11:01 AM I think 5) can be proved "directly". Let $A$ be an open set in $X$, so A is a finite intersection of open sets, say $A_i_1,...,A_i_n$ in $X_i_1,...,X_i_n$ respectively. Every $D_j$ is dense so theres an $d_{i_j} \in A_{i_j} \cap D_{i_j}$ for each $j=1,...,n$ Let $x \in X$ the element which $\pi_{i_j} (x)=d_{i_j}$ for $j=1,...,n$ and $\pi_i(x)= a_i$ where $p_i$ is any element (AXIOS OF CHOICE) of $D_i.<br />$ Then $x \in A \cap D$. so D is dense in X. February 11th 2010, 11:09 AM the proof of 3) you posted is good! you are right, thank you =) February 11th 2010, 11:25 AM In case anyone is interested here are my proofs: 1. Let $\left\{X_j\right\}_{j\in\mathcal{J}}$ be a class of topological spaces and let $\mathfrak{B}_\ell$ be an open base for $X_\ell$ for each $\ell\in\mathcal{J}$. Then, $\mathfrak{B} = \left\ {B_1\times\cdots\times B_n:B_k\in\mathfrak{B}_k,\text{ }1\leqslant k\leqslant n\right\}$ is an open base for $X = \prod_{j\in\mathcal{J}}X_j$ under the product topology. If $\mathcal{J}$is finite Proof: It is clear that since the box topology coincides with the product topology for the product of a finite number of spaces that the elements of $\mathfrak{B}$ are open, so it just remains to show that they form a base. So, let $O$ be open in $X$. Since the sets of the form $G=G_1\times\cdots\times G_n$ are an open base with $G_k$ open in $X_k$ it follows that for any $o\in O$ there exists some $G$ such that $o\ in G\subseteq O$. But considering 3. 
(already proved) we see that $\pi_\ell(G),\text{ }1\leqslant \ell\leqslant n$ is an open set containing $\pi_\ell(o)$ and so there exists some $E_\ell\in\ mathfrak{B}_\ell$ such that $\pi_\ell(o)\in E_\ell\subseteq\pi_\ell(G)$. Clearly then since each $E_1,\cdots,E_n$ is open we have that $E_1\times\cdots\times E_n$ is an open set which is a subset of $G_1\times\cdots\times G_n$ and which contains $o$. The conclusion follows. 2. Consequently, if $X_1,\cdots,X_n$ are a finite collection of second countable topological spaces then $X_1\times\cdots\times X_n$ is second countable. Proof: Since $X_1,\cdots,X_n$ are second countable there exists countable bases $\mathfrak{B}_1,\cdots,\mathfrak{B}_n$ for each. As stated though, the set $\mathfrak{B}=\left\{B_1\times\cdots\ times B_n:B_k\in\mathfrak{B}_k,\text{ }1\leqslant k\leqslant n\right\}$ is an open base for $X$. So lastly noting that the mapping $\eta:\prod_{j=1}^{n}\mathfrak{B}_j\mapsto\mathfrak {B}$ given by $\left(B_1,\cdots,B_n\right)\mapsto B_1\times\cdots\times B_n$ is a bijection finishes the argument since the $\prod_{j=1}^{n}\mathfrak{B}_j$ is the finite product of countable sets, and thus 3. If $\left\{X_j\right\}_{j\in\mathcal{J}}$ is a collection of topological spaces and $X = \prod_{j\in\mathcal{J}}X_j$ is the product space of these spaces, then for any $x\in X$ we have that if $N$ is a neighborhood of $X$ then $\pi_j(N)$ is a neighborhood of $\pi_j(x)$ for each $j\in\mathcal{J}$ Already proved. 4. The converse is true if $\mathcal{J}$ is finite. NOT TRUE! 5. Using this we can show that if $\left\{X_j\right\}_{j\in\mathcal{J}}$ is a collection of topological spaces, and $\mathcal{D}_\ell$ is dense in $X_\ell$ for each $\ell\in \mathcal{J}$ then $\ prod_{j\in\mathcal{J}}\mathcal{D}_j$ is dense in $X = \prod_{j\in\mathcal{J}}X_j$ with the product topology. Proof: Let $x\in X$ be arbitrary and let $N$ be a neighborhood of $x$. By 3. 
we know that $\pi_j\left(N\right)$ is a neighborhood of $\pi_j(x)$ for each $j\in\mathcal{J}$ and since $\mathfrak{D} _j$ is dense in $X_j$ we know there must exists some $d_j\in\mathfrak{D}_j$ such that $d_j\in\pi_j\left(N\right)$. Doing this for each $j\in\mathcal{J}$ we see that $\prod_{j\in\mathcal{J}}\{d_j \}\in\prod_{j\in\mathc al{J}}\mathcal{D}_j$ is in $N$. The conclusion follows. Alternatively, I believe it is correct that $\prod_{j\in\mathcal{J}}\overline{E_j}=\overline{\p rod_{j\in\mathcal{J}}E_j}$ from where it would follow directly since $\overline{\prod_{j\in\mathcal {J}}\mathcal{D}_j}=\p rod_{j\in\mathcal{J}}\overline{\mathcal{D}_j}=\pro d_{j\in\mathcal{J}}X_j=X$ 6. Consequently, if $X_1,\cdots,X_n$ are separable topological spaces then $X = X_1\times\cdots\times X_n$ is separable. Proof: Since $X_1,\cdots,X_n$ are separable there exists countable dense subsets $\mathcal{D}_1,\cdots,\mathcal{D}_n$ which are countable and since $\mathcal{D}_1\times\cdots\times\mathcal{D}_n$ is dense in $X_1\times\cdots\times X_n$ and the product of finitely many countable sets is countable the conclusion follows. February 11th 2010, 12:06 PM how come 4) is not true? February 11th 2010, 12:13 PM February 11th 2010, 03:31 PM I think you can extend this to say If $\left\{X_n\right\}_{n\in\mathbb{N}}$ is a countable collection of second countable spaces, then $\prod_{j=1}^{\infty}X_j$ is second countable Proof: Let $\mathfrak{B}_\ell$ be corresponding countable base for $X_\ell$ and define $\mathcal{B}_n=\left\{E:\pi_{k}(E)\in\mathfrak{B}_k ,\text{ }1\leqslant k\leqslant n,\text{ and }\pi_k(E)= X_k,\text{ }k>n\right\}$. It is cleary by definition that each element of $\mathcal{B}_n$ is open in $X$ Next, we let $\eta:\mathcal{B}_n\mapsto\mathfrak{B}_1\times\cdot s\times\mathfrak{B}_n$ by $\left(B_1,\cdots,B_n\right)\mapsto B_1\times\cdots\times B_n$. 
It is clear that $\eta$ is a bijection, and since $\mathfrak{B}_1\times\cdots\times\mathfrak{B}_n$ is the finite product of countable sets we see that $\mathcal{B}_n$ is countable. So, let $\mathfrak{B}=\bigcup_{j=1}^{\infty}\mathcal{B}_j$ and let $O\subseteq X$ be an arbitrary open set. Since the set $\mathcal{S}$, which consists of all sets $E\subseteq X$ such that $\pi_{k}(E)$ is open for all $k$ and equals $X_k$ for all but finitely many $k$, is the defining open base for $X$, we know that given any $o\in O$ there exists some $S\in \mathcal{S}$ such that $o\in S\subseteq O$. Let $k_1,\cdots,k_n$ be the set of values such that $\pi_{k_\ell}(S)\neq X_{k_{\ell}}$ and let $m=\max\{k_1,\cdots,k_n\}$. Since $\pi_{r}(S)$ is a neighborhood of $\pi_r(o)$ for $1\leqslant r\leqslant m$ and $\mathfrak{B}_r$ is an open base for $X_r$, there exists some $B_r\in\mathfrak{B}_r$ such that $\pi_r(o)\in B_r\subseteq\pi_r(S)$. So, let $B\subseteq X$ be such that $B=\prod_{z=1}^{\infty}G_z$ where $G_z=B_z,\text{ }1\leqslant z\leqslant m$ and $G_z=X_z,\text{ }z>m$. Clearly $o\in B\subseteq S\subseteq O$ and $B\in\mathfrak{B}$. It follows that $\mathfrak{B}$ is an open base for $X$ and since it's the countable union of countable sets it's countable. Therefore, $X$ is second countable.
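A standard counterexample showing why conjecture 4 fails even for two factors (this example is mine, not from the thread):

```latex
% Counterexample to conjecture 4 (the converse of 3), even with |J| = 2.
\text{Let } X=\mathbb{R}^2,\quad x=(0,0),\quad
N=\{(a,b)\in\mathbb{R}^2 : ab=0\} \text{ (the union of the two coordinate axes).}
\text{Then } \pi_1(N)=\pi_2(N)=\mathbb{R}, \text{ which are neighborhoods of }
\pi_1(x)=\pi_2(x)=0,
\text{ but } N \text{ has empty interior in } \mathbb{R}^2,
\text{ so } N \text{ is not a neighborhood of } x.
```

So knowing every projection of a set is a neighborhood says nothing about the set itself being one, because the projections forget how the coordinates are coupled.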
Summary: AN ANT COLONY SYSTEM APPROACH FOR SOLVING THE AT-LEAST VERSION OF THE GENERALIZED MINIMUM SPANNING TREE PROBLEM

Arindam K. Das, University of Washington
Payman Arabshahi, Andrew Gray, Jet Propulsion Laboratory

We consider the "at least" version of the Generalized Minimum Spanning tree problem (L-GMST). Unlike the MST, the L-GMST is known to be NP-hard. In this paper, we propose an ant colony system based solution approach for the L-GMST. A key feature of our algorithm is its use of ants of different behavioral characteristics, which are adapted over time. Computational results on datasets used in earlier literature indicate that our algorithm provides similar or better results for most of them. Given an undirected graph G = (N, E), where N is the set of nodes and E the set of edges...
Plato Center Trigonometry Tutor Find a Plato Center Trigonometry Tutor ...I also have years of experience tutoring and teaching online. Online tutoring is available via email, Skype, or chatroom. It can include answering questions and checking work. 24 Subjects: including trigonometry, calculus, geometry, GRE ...Academically, I have studied at the Center of Applied Linguistics in Besancon, France. The CLA is arguably the most prestigious language school in France. Over the course of a year, these intensive studies ranged across all aspects of second language acquisition, including grammar, syntax, and ... 57 Subjects: including trigonometry, chemistry, calculus, French ...I graduated from the University of California, San Diego with a degree in Biochemistry in 2012. Since graduation, I've worked as a vision therapist for children ages 6-18, combining my love of optometry with one-on-one tutoring. I believe that students are able to learn anything with the right instruction and I know my passion for science and math will contribute greatly to this 25 Subjects: including trigonometry, chemistry, calculus, physics ...While I teach, I emphasize understanding and grasping the subjects along with solving problems. I challenge my students to think and reason. This way they become problem solver themselves. 8 Subjects: including trigonometry, physics, algebra 2, SAT math ...Consequently, I started the undergraduate calculus sequence with a significant advantage over my classmates. I completed the entire undergraduate calculus sequence through differential equations with straight A's. I have taught business calculus at Harper College. 67 Subjects: including trigonometry, chemistry, Spanish, English
{"url":"http://www.purplemath.com/plato_center_trigonometry_tutors.php","timestamp":"2014-04-16T19:23:46Z","content_type":null,"content_length":"24165","record_id":"<urn:uuid:51c1a91a-e034-42f7-8186-2d64e3d6fa8a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: dropped topics Replies: 0 dropped topics Posted: Jun 20, 1995 9:51 PM Why do we need to teach multiplication and division of This was the question from sross@whale.st.usm.edu. Decimals are fractions. Do you mean to ask about fractions written with numerators and denominators or all fractions? Our base ten system makes working with decimals easier because of the place value. Not all fractions are base 10, however. In fact, there are an infinite number of fractions that are not base 10. Not teaching operations using fractions written as 4/7 (numerator and denominator format) eliminates a lot of fractions. Why not eliminate all units digits and only deal with tens? (Or eliminate all pennies -- which makes economic sense.) This means we can no longer take 1/3 or 1/7 of anything. Does this make sense? Are we going to restrict our world to fractions that convert to terminating decimals? Eileen Schoaff Buff State (NOT a nudist school)
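The post's point that not every fraction is "base 10" can be made precise: a reduced fraction has a terminating decimal expansion exactly when its denominator factors into 2's and 5's, which is why 1/3 and 1/7 have no finite decimal form. A quick sketch (the function name is my own, not from the post):

```python
from fractions import Fraction

def terminates_in_base_10(frac):
    # A reduced fraction terminates in decimal iff its denominator's
    # only prime factors are 2 and 5 (the prime factors of 10).
    d = frac.denominator
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

assert terminates_in_base_10(Fraction(3, 8))       # 0.375
assert not terminates_in_base_10(Fraction(1, 3))   # 0.333...
assert not terminates_in_base_10(Fraction(1, 7))   # 0.142857...
```

So restricting arithmetic to decimals really does exclude infinitely many fractions, as the post argues.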
{"url":"http://mathforum.org/kb/thread.jspa?threadID=481584","timestamp":"2014-04-18T23:27:40Z","content_type":null,"content_length":"14345","record_id":"<urn:uuid:e842da5b-22d7-4863-99e2-2d2676c796cc>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: generating random permutations on the fly
Replies: 6   Last Post: Sep 18, 2006 9:56 AM

Re: generating random permutations on the fly
Posted: Sep 14, 2006 8:01 AM

Han de Bruijn wrote:
> regis lebrun wrote:
> > [ ... ] but I can't use it if I want to generate a really huge
> > permutation (let's say n=10^9).
> Instead of asking more than seven wise men can answer, wouldn't you try
> to consider the possibility of sizing down your problem a little bit ..
> Han de Bruijn

First of all, thanks for this quick answer. Of course I could tell the users to perform their Monte Carlo-like simulations with fewer samples, but I don't really think it is the right answer ;-). You know, if someone is able to compute some probability with pure Monte Carlo by throwing the dice 10^9 times and evaluating a cheap analytical function over this huge sample (it takes less than 1 mn in Matlab on a standard PC), he will try to do the same with LHS, and then, BOOM -> out of memory.

By the way, I imagine that the underlying algorithmic problem is interesting: how much information (i.e. how much memory) do I need to store to be able to write down a permutation of [1, ..., n] without storing it explicitly?

Best regards,
Regis LEBRUN

Date / Subject / Author
9/14/06  generating random permutations on the fly  (regis lebrun)
9/14/06  Re: generating random permutations on the fly  (Han de Bruijn)
9/14/06  Re: generating random permutations on the fly  (regis lebrun)
9/14/06  Re: generating random permutations on the fly  (Robert Israel)
9/15/06  Re: generating random permutations on the fly  (regis lebrun)
9/15/06  Re: generating random permutations on the fly  (Herman Rubin)
9/18/06  Re: generating random permutations on the fly  (regis lebrun)
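One standard technique that addresses Regis's closing question: instead of storing the permutation, compute it. A keyed pseudorandom bijection evaluates the i-th element of a fixed permutation of [0, n) in O(1) memory, so only the key needs storing. Below is a sketch of the general idea using a small Feistel network plus cycle-walking; this is not code from the thread, and the round function and parameters are my own choices:

```python
import hashlib

def _round_fn(half, key, r):
    # Pseudorandom round function: a hash of (key, round index, half-block).
    digest = hashlib.sha256(f"{key}:{r}:{half}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def _feistel(i, key, bits, rounds=4):
    # A balanced Feistel network is a bijection on [0, 2**bits)
    # no matter what the round function is.
    half = bits // 2
    mask = (1 << half) - 1
    left, right = i >> half, i & mask
    for r in range(rounds):
        left, right = right, left ^ (_round_fn(right, key, r) & mask)
    return (left << half) | right

def perm(i, n, key):
    # i-th element of a fixed pseudorandom permutation of [0, n), computed
    # with O(1) memory via cycle-walking: keep applying the bijection
    # until the image lands back inside [0, n).
    bits = max(2, (n - 1).bit_length())
    bits += bits % 2            # balanced Feistel needs an even width
    j = _feistel(i, key, bits)
    while j >= n:
        j = _feistel(j, key, bits)
    return j

# Sanity check on a small n: every value in [0, n) appears exactly once.
n = 1000
assert sorted(perm(i, n, key=42) for i in range(n)) == list(range(n))
```

For n = 10^9 each lookup costs a handful of hashes, and nothing but the key is ever stored; the trade-off is that the permutation is pseudorandom rather than uniformly random over all n! permutations.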
{"url":"http://mathforum.org/kb/thread.jspa?threadID=1449373&messageID=5128924","timestamp":"2014-04-20T09:05:29Z","content_type":null,"content_length":"24306","record_id":"<urn:uuid:0f1c57cc-5d03-40e9-b523-9dd8842fb9df>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
Brief Summary of Each Supplement

Progress of Theoretical Physics Supplement No. 149
Chiral Restoration in Nuclear Medium
Edited by Teiji Kunihiro, Atsushi Hosaka and Hajime Shimizu

This volume is a collection of papers from the invited talks presented at the YITP-RCNP workshop on Chiral Restoration in Nuclear Medium (Chiral02), October 7-9, 2002, at the Yukawa Institute for Theoretical Physics, Kyoto University. Exploring the nonperturbative structure of the QCD vacuum and its possible changes in hot and/or dense matter has constituted the central body of subatomic physics since the 1980s. The main focus of the present volume is whether heavy nuclei are already dense enough that one can describe some phenomena inside them in terms of possible changes of the underlying QCD vacuum. This volume is a thorough survey of recent theoretical and experimental developments in this field. The topics covered are rather wide:
(1) Chiral symmetry in nuclear medium
(2) Chiral fluctuation in the σ channel in nuclei and chiral symmetry
(3) Vector mesons in hot and/or dense matter and chiral symmetry
(4) Pionic and kaonic nuclei and chiral symmetry
(5) Chiral symmetry and baryon resonances in nuclear medium
(6) Heavy quark systems in nuclear medium and chiral symmetry
(7) Chiral effective field theories and chiral properties of hot and/or dense medium
(8) Lattice QCD at finite baryon density
(9) Past experiments (CHAOS, TAPS, GSI, KEK, LNS, SPring8, etc.) on hadrons in nuclei
(10) On-going and future experimental projects on hadrons in nuclear medium.
The experimental data and theoretical achievements presented and discussed in the workshop are very exciting and suggest the opening of a new era of nuclear physics; they seem to suggest that heavy nuclei are dense enough that chiral symmetry is partially restored.
In many papers included in this volume, the speakers' presentations are nicely reproduced with more detail and some introductory remarks. In particular, it is appreciated that the volume contains a rather extensive overview of this field by Prof. W. Weise. Furthermore, some theoretical approaches are also discussed by several authors, including lattice QCD (with the Maximum Entropy Method), QCD sum rules, chiral unitary approaches, hidden local symmetry, and so on. Therefore, this volume will serve as a useful introduction for non-experts as well as students.
Copyright © 2008 Progress of Theoretical Physics
{"url":"http://www2.yukawa.kyoto-u.ac.jp/~web.ptp/supple/sup.149-eng.html","timestamp":"2014-04-18T13:07:25Z","content_type":null,"content_length":"5554","record_id":"<urn:uuid:428b8490-d4d0-4aec-a956-35e32fe7baa5>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
Common assumptions about the concept of dimension seem to be violated by the existence both of fractals and of space-filling curves. Here we use a continuum of iterated function systems (IFS's with a time parameter) to produce families of fractal curves that in the limit become a space-filling curve. This procedure is especially useful in viewing a space-filling curve not as the limit of a discrete sequence of polygonal, one-dimensional curves (as is usually done), but rather as the limit of curves that continuously increase in dimension (or in area) to that of the space-filled region. This leads naturally to the visually elucidating production of animations showing the construction of space-filling curves. After some brief comments on the history of space-filling curves and fractals, we discuss the definition of dimension. Also, we give a brief definition of iterated function system (IFS). Then we show how to get a family of curves including the (fractal) von Koch snowflake curve that in the limit give the (space-filling) Sierpinski-Knopp curve. In this family the dimensions increase smoothly from 1 through log[3]4 (the dimension of the standard snowflake curve, about 1.26186) to 2. Then we consider the Hilbert curve, introducing the idea of connecting segments to show how a variation of IFS's can give us a description of the classical (Hilbert's original) approximating curves. We produce a family of curves that end at the Hilbert curve with dimension increasing from 1 to 2. Additionally, we look at the Cantor set (the original fractal) and discuss it in this context. We give a family of IFS's that produces Cantor sets of smoothly increasing dimension from 0 through log[3]2 (the dimension of the standard Cantor set, about 0.63093) to 1. Further, Peano's original space-filling curve is given this same treatment, as is a space-filling curve due to Sierpinski.
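The IFS machinery the article describes can be sketched numerically. Below, the attractor of the four standard affine contractions for the von Koch curve is approximated with the "chaos game" (repeatedly apply a randomly chosen map); the map coefficients are the usual textbook ones, not taken from this article, and no time parameter or animation is attempted here:

```python
import math, random

S3 = math.sqrt(3) / 6   # height of the Koch curve's middle peak

# Four affine contractions whose attractor is the von Koch curve over [0, 1]:
# shrink by 1/3, with the two middle copies rotated by +/- 60 degrees.
MAPS = [
    lambda x, y: (x/3,               y/3),
    lambda x, y: (x/6 - S3*y + 1/3,  S3*x + y/6),
    lambda x, y: (x/6 + S3*y + 1/2, -S3*x + y/6 + S3),
    lambda x, y: (x/3 + 2/3,         y/3),
]

def chaos_game(maps, n_points=20000, seed=0):
    # Approximate the IFS attractor: start anywhere, repeatedly apply a
    # randomly chosen map, and keep the points once transients die out.
    rng = random.Random(seed)
    x, y = 0.0, 0.0             # (0, 0) is a fixed point of MAPS[0]
    pts = []
    for k in range(n_points + 100):
        x, y = rng.choice(maps)(x, y)
        if k >= 100:
            pts.append((x, y))
    return pts

pts = chaos_game(MAPS)
# Every generated point lies (up to round-off) in the curve's bounding box.
assert all(-1e-9 <= x <= 1 + 1e-9 and -1e-9 <= y <= S3 + 1e-9 for x, y in pts)
```

Plotting the returned points reproduces the familiar snowflake edge; varying the contraction ratios and offsets continuously is, in spirit, what the article's time-parameterized families of IFS's do.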
As a final example of fractals converging to planar space-filling curves, we come back to the Sierpinski-Knopp curve. Instead of viewing it as a limit of snowflakes, we use IFS's with attractors and connecting segments giving converging curves. Then we briefly look at two other ways of viewing space-filling curves: using area-increasing curves and three-dimensional visualizations, and discuss the drawing algorithms used to produce our pictures. Also, we show some generalizations to curves that fill in 3-dimensional space. Finally, some philosophical questions involved in representing fractals are discussed. All of these are demonstrated by showing several frames of "movies": figures that are animated gif-format images. By analogy with the artist Calder's stabiles and mobiles, perhaps our figures should be differentiated as stafigs (which appear in this level of this electronic document) and mofigs (which are accessed by clicking on links given in the stafig descriptions, and require the browser back button to return). To best view with a browser, so that the complete figures are visible, one may need to hide the toolbar or to scroll to center the movie. There are also java versions that allow the user to control the speed of the animation. We include links to source codes and to software sites.
The author would like to acknowledge Jenny Harrison and William D. Withers, with whom discussions helped motivate some of these ideas. Also, thanks are due to Thomas J. Sanders, who prepared the java code that was adapted to show speed-controlled animation.
Brief History
Table of contents
Communications in Visual Mathematics, vol 1, no 1, August 1998. Copyright © 1998, The Mathematical Association of America. All rights reserved.
Created: 18 Aug 1998 --- Last modified: 18 Aug 1998 23:59:59
Comments to: CVM@maa.org
{"url":"http://www.maa.org/external_archive/CVM/1998/01/vsfcf/article/sect1/intro.html","timestamp":"2014-04-20T00:13:37Z","content_type":null,"content_length":"7836","record_id":"<urn:uuid:2ca443c8-4a93-438d-a56b-b968f7bebf2f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] how would a physicist know that we are not living in a Skolem hull? Vaughan Pratt pratt at cs.stanford.edu Fri Feb 8 22:20:38 EST 2013 On 2/7/2013 4:17 PM, Dmytro Taranovsky wrote: > At this time, we do not know whether at Planck scale, space is > continuous or discrete (or even whether points in space are physically > meaningful at Planck scale). Heisenberg uncertainty requires that measuring the position of a particle to within distance precision x requires an uncertainty of h/x in its momentum (to be precise, 1/4pi of that). Personally I interpret this as meaning that the "size" in SI units of the portion of the universe we can observe is about 1/h, or in Planck units 1/2pi. In either case this is a finite quantity. That interpretation aside, our ability to measure positions in space to arbitrary accuracy is limited by our ability to cope with arbitrarily large uncertainty in momentum. Any particle whose momentum you cannot bound from above is a particle you want to stay well clear of, as you have no guarantee that it cannot destroy arbitrarily much of your Hence the idea of being able to decide even whether space contains a countable infinity of points is already pretty far fetched. Aleph_1 is Cantorianly beyond that, while c = Beth_1 is Cohenly further beyond that. Physicists have no more insight into how to design an experiment to distinguish between Aleph_1 and Beth_1 than logicians have into the relevance of Heisenberg uncertainty to the granularity of space and time. Any conference on this topic would simply have the two sides talking at cross purposes, much as at the Universal Algebra and Category Theory conference held at MSRI in July 1993 only even more so. Vaughan Pratt More information about the FOM mailing list
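The bound the post invokes, an uncertainty of at least h/(4*pi*x) in momentum when position is pinned down to precision x, is easy to put numbers on. Evaluating it at the Planck length (a CODATA value; choosing that scale for illustration is my addition, not the post's) shows the minimum momentum spread is already macroscopic:

```python
import math

h = 6.62607015e-34            # Planck constant, J*s (exact in SI since 2019)
planck_length = 1.616255e-35  # metres (CODATA 2018 value)

# Heisenberg: delta_p >= h / (4 * pi * delta_x)
dp = h / (4 * math.pi * planck_length)
print(dp)   # roughly 3.26 kg*m/s, for a single particle
assert 3.0 < dp < 3.6
```

A kilogram-meter-per-second-scale momentum uncertainty on one particle is exactly the "stay well clear of it" situation the post describes.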
{"url":"http://www.cs.nyu.edu/pipermail/fom/2013-February/016989.html","timestamp":"2014-04-17T10:20:28Z","content_type":null,"content_length":"4508","record_id":"<urn:uuid:777bf337-d90b-4f58-899e-70674109fff6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Trying to Give Coins For Change

January 27th, 2011, 11:14 AM   #1

I am working on a program where you have to give change in coins. The program already gives the change amount; I'm just trying to figure out how to give the change in coins. This is what I have so far:

Code java:

    // A cash register totals up sales and computes change due.
    package classes;

    public class CashRegister
    {
        // add two attributes (variables) purchase and payment
        private double purchase;
        private double payment;

        // Constructor that constructs a cash register with no money in it.
        public CashRegister()
        {
            // initialize the purchase and payment to 0.0
            double purchase = 0.0;
            double payment = 0.0;
        }

        // Records the sale of an item.
        // @param amount the price of the item
        public void recordPurchase(double amount)
        {
            // add the amount of purchase to purchase
            purchase = purchase + amount;
        }

        // Enters the payment received from the customer; should be called
        // once for each coin type.
        // @param coinCount the number of coins
        // @param coinType the type of the coins in the payment
        public void enterPayment(int coinCount, Coin coinType)
        {
            // calculate the payment
            payment = payment + coinCount * coinType.getValue();
        }

        // Computes the change due and resets the machine for the next customer.
        // @return the change due to the customer
        public double giveChange()
        {
            double change = purchase - payment;
            purchase = 0;
            payment = 0;
            return change;
        }

        // calculate change; change is the difference between payment
        // and purchase; return change
        public double giveCoins()
        {
            double change = purchase - payment;
            purchase = 0;
            payment = 0;
            double dollars;
            double quarters;
            double nickels;
            double dimes;
            double pennies;
            { dimes = change / .10; }
            { nickels = change / .05; }
            { pennies = change / .01; }
            return dollars + quarters + dimes + nickels + pennies;
        }
    }

As you can see, I'm trying to make it so that, say, you have 5.55 in change due. The program will look at 5.50; it is greater than 1, so it will do 5.50/1 to get the dollar amount.
Of course, I need to find a way to round the 5.5 that will result to just 5. The program will then subtract 5 from 5.50 and continue to quarters and so forth.

January 27th, 2011, 11:17 AM

Re: Trying to Give Coins For Change

Hint: What happens when you convert a double value into an int?

Or if you don't like that hint, you can always go the hard way: Math (Java Platform SE 6)

Edit- Why are the values for the counts of each of your coins doubles instead of ints? For example, how can you have 1.5 quarters?

January 27th, 2011, 02:02 PM

Re: Trying to Give Coins For Change

Okay, so I went back and had the doubles converted to ints. Lol, however, I had to convert them back to double so the equations would work. I'm error free; however, when I call the method it gives me 0.0. I have the method return 5 values, so I should at least be getting 0.0, 0.0, 0.0, 0.0, 0.0... Any help is appreciated; I'm not sure what the problem could be now.

Code java:

    public double giveCoins()
    {
        double change = purchase - payment;
        purchase = 0;
        payment = 0;
        double dollars = 0;
        double quarters = 0;
        double nickels = 0;
        double dimes = 0;
        double pennies = 0;
        int dollarsint = (int) dollars;
        double dollarsafter = (double) dollarsint;
        int quartersint = (int) quarters;
        double quartersafter = (double) quartersint;
        { dimes = change / .10; }
        int dimesint = (int) dimes;
        double dimesafter = (double) dimesint;
        { nickels = change / .05; }
        int nickelsint = (int) nickels;
        double nickelsafter = (double) nickelsint;
        { pennies = change / .01; }
        int penniesint = (int) pennies;
        double penniesafter = (double) penniesint;
        return dollars + quarters + dimes + nickels + pennies;
    }

January 27th, 2011, 02:06 PM

Re: Trying to Give Coins For Change

I'm not sure what your code is supposed to do. What exactly are you supposed to return? But I see one major problem: you're setting both purchase and payment to 0 at the top of the method, so of course everything is going to be 0.0. You have a few other logic errors, but that's a good place to start.
January 27th, 2011, 02:14 PM

Re: Trying to Give Coins For Change

So I put the code in the giveChange() method instead, and it gave me the result of 25. I found out this result is the total amount of coins to give back.

    return dollarsint + quartersint + dimesint + nickelsint + penniesint;

How can I make it so the return statement separates the values instead of adding them together? I was trying to get a System.out.println() to work on the return, but I couldn't get it :(

January 27th, 2011, 02:31 PM

Re: Trying to Give Coins For Change

Okay, so I'm making progress... I set the purchase at .25 and pay with 1 penny, 1 dime, 8 quarters, and 1 nickel ($2.16). So I should get back 1.91.

Here is the output:

    The Change Due is: 1.9099999999999997
    Dollars: 1
    Quarters: 3
    Dimes: 1
    Nickels: 1
    Pennies: 0

Lol, it is so close. There is something wrong with how the change is calculating. Any ideas?

January 27th, 2011, 03:21 PM

Re: Trying to Give Coins For Change

That's happening because floating point arithmetic is not exact when talking about very small numbers. There are good explanations (try googling "what every computer scientist should know about floating-point arithmetic", but does anybody have the original link?), but the gist of it is this: you can't represent 1.91 exactly in binary, so the actual double value will be something very close to it but not quite exact. For this reason, anything dealing with things like currency should be handled as ints. Instead of 1.91 dollars, you would represent that as 191 cents, as an int. Displaying it how you want is up to you, but there are classes in the API specifically for that (look for DecimalFormat).
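The last reply's advice, representing money as integer cents, makes the whole thread's bug disappear. Here is a sketch in Python for brevity (the Java version is the same divmod loop over ints); the function name and dict layout are my own. Note it yields 1 penny for the thread's test case, where the floating-point version printed "Pennies: 0":

```python
def make_change(purchase_cents, payment_cents):
    # Work entirely in integer cents, so there is no
    # 1.9099999999999997-style drift to truncate away.
    change = payment_cents - purchase_cents
    coins = {}
    for name, value in [("dollars", 100), ("quarters", 25),
                        ("dimes", 10), ("nickels", 5), ("pennies", 1)]:
        coins[name], change = divmod(change, value)
    return coins

# The thread's test case: purchase of 25 cents, payment of
# 1 penny + 1 dime + 8 quarters + 1 nickel = 216 cents.
print(make_change(25, 216))
# -> {'dollars': 1, 'quarters': 3, 'dimes': 1, 'nickels': 1, 'pennies': 1}
```

Only the display layer ever converts back to dollars-and-cents text, which in Java is where DecimalFormat comes in.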
{"url":"http://www.javaprogrammingforums.com/%20whats-wrong-my-code/7070-trying-give-coins-change-printingthethread.html","timestamp":"2014-04-19T00:27:30Z","content_type":null,"content_length":"28078","record_id":"<urn:uuid:7e192a68-494b-428e-a96e-9fe855966ca0>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
Resolution of battle 14.1: Resolution of battle Each kind of fighter has an attack and defense rating: attack defense missile peasant | 1 1 worker | 1 1 sailor | 1 1 soldier | 5 5 pikeman | 5 30 swordsman | 15 15 pirate | 5 (15) 5 (15) knight | 45 (20) 45 (20) elite guard | 90 (65) 90 (65) crossbowman | 1 1 25 archer | 5 5 50 elite archer | 10 10 75 • Numbers in parenthesis are for shipboard combat. • Nobles have strong innate heroic abilities, and so even an unarmed, untrained noble is a formidable opponent on the battlefield. • A noble is rated (attack=80, defense=80, missile=0). • A blessed soldier fights with the same attack and defense values as a regular soldier, but has a 50% chance of surviving a hit. • A pirate fights (15,15,0) on a ship, but only (5,5,0) on land. • Knights and elite guard receive a -25 penalty to both their attack and defense ratings when fighting on a ship or in a swamp province. • During combat, a random man is chosen to attack. If the attack succeeds, the target is removed from battle. This continues until the battle is over. More specifically: 1. A man is picked at random to attack. This man may be from either side in the battle. For example, if there are 10 men on side A, and 20 men on side B, there is a 1/3 chance that one of A's men will be chosen, and a 2/3 chance that one of B's men will be chosen. 2. The attacker chooses an enemy target at random. The target may be any of the men in the enemy's front units. A ship or building housing the enemy also counts as a potential target. In other words, if there are five men in a tower, each man has 1/6 chance of being selected as a target, and the tower also has 1/6 chance of being paired with an attacker. 3. The chance that the attacker will score a hit against the target is: A = attacker's attack rating B = target's defense rating A / (A + B) For example: A(attack=90) vs B(defense=45) A has a 2/3 chance of killing B A(attack=90) vs B(defense=90) A has a 1/2 chance of killing B 4. 
If the attacker scores a hit against a noble, the noble will receive a random wound of 1-100 health points. Note that there is a 1% chance that a perfectly healthy noble will be instantly killed, and a greater chance that a previously wounded noble will die (see Health). Wounded nobles do not continue fighting, even if their wounds are minor. If a hit is scored against a fighter (soldiers, pikemen, archers, etc.), the fighter is killed. However, a blessed soldier has a 50% chance of surviving a hit. A man successfully attacking a building or a ship will cause one point of damage to the structure. A siege engine attacking a building will cause 5-10 points of damage. 5. Repeats until a side breaks. A side breaks when its total offensive plus defensive value falls by 50%. For instance, a noble plus two pikemen has an offensive value of (80+80) + (5+30) + (5+30) = 230 with a break point of 115. A noble plus two knights has an offensive value of (80+80) + (45+45) + (45+45) = 340 with a break point of 170. Therefore, a noble with two pikemen will continue to fight if the pikemen are killed. However, a noble with two knights will be declared the loser if both knights are killed. The offensive value used is either the attack or the effective missile rating, whichever is higher. Note that in step 4, if the target is not hit, the target does not immediately retaliate against the attacker. Only when the target is randomly selected to make an attack will it have a chance to score a hit. The winner in combat will attempt to take prisoners. The chance that a given defeated unit will be taken prisoner is proportional to the sizes of the remaining forces. (Taking prisoners in battle and claiming loot requires many soldiers to run after the fleeing enemy. Thus, the chance of success is based on numerical advantage rather than combat skill.) 
1:1 25% 2:1 50% 3:1 75% If the winning side outnumbers the defeated force by 2:1, there is a 50% chance that a given defeated unit will be captured. Defeated units which are not captured retreat from battle. If they are occupying a building or are located in a city, they may flee into the outlying province. The victor always has at least a 25% chance of taking a unit prisoner, but no better than a 75% chance. The victor will claim the defender's position in the location list if it is better, or will move into the defender's structure, ejecting the losing force. (The attacker may specify a flag to the attack order to inhibit this behavior.) Prisoners are stripped of their belongings by the victor, including any men accompanying the prisoners, such as workers or peasants. The stack leader of the winning force receives all of the loot from the battle. Shadow Island | Olympia PBEM | Arena PBEM | Dice server | PBM archive
{"url":"http://www.pbm.com/oly/g2rules/Resolution_of_battle.html","timestamp":"2014-04-18T08:05:52Z","content_type":null,"content_length":"6756","record_id":"<urn:uuid:6c8fa116-3cca-414c-86db-f6eedaebd478>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
Author Message 45% (medium) Question Stats: (03:37) correct 40% (01:31) Joined: 03 Apr 2007 Posts: 1378 based on 175 sessions Followers: 2 At Daifu university, 40% of all students are both members of a student organization and want to reduce their tuition costs. 20% of those students who want to reduce tuition Kudos [?]: 130 [0], are not members of the student organization. What percentage of all Daifu students want to reduce tuition? given: 10 (A) 20% (B) 30% (C) 40% (D) 50% (E) 60% Source: GMAT Club Tests - hardest GMAT questions REVISED VERSION OF THIS QUESTION IS HERE: m01-q15-67289.html#p1103452 Spoiler: OA Kaplan GMAT Prep Discount Codes Knewton GMAT Discount Codes Veritas Prep GMAT Discount Codes This post received Joined: 10 Sep 2007 KUDOS Posts: 951 This is good example of grid question. Followers: 7 Please find the attached file for the answer. Kudos [?]: 172 [3] , given: 0 Senior Manager Agree with Abhijit Joined: 23 May 2006 Posts: 329 Followers: 2 Kudos [?]: 44 [0], given: Joined: 03 Apr 2007 How do we need decide when to use sets and when to use grid? Posts: 1378 Followers: 2 Kudos [?]: 130 [0], given: 10 This post received my 2 cents, practise venn diagram and always use it .... its more visual than grid and it makes me more confident about the right answer. for this question Joined: 27 May 2008 we need to find out (40+x) Posts: 552 and its given that (40+x) * 20/100 = x Followers: 5 x = 10 Kudos [?]: 143 [7] , given: 0 answer 40+10 = 50 venn1.JPG [ 8.19 KiB | Viewed 7881 times ] True , for 2 sets I think Venn is the best way to go. For more than 2 grid makes things relatively simpler. Joined: 27 Jun 2008 Posts: 160 Followers: 2 Kudos [?]: 16 [0], given: durgesh79 wrote: my 2 cents, practise venn diagram and always use it .... its more visual than grid and it makes me more confident about the right answer. 
for this question we need to find out (40+x) Joined: 20 May 2008 and its given that (40+x) * 20/100 = x Posts: 57 x = 10 Followers: 1 answer 40+10 = 50 Kudos [?]: 4 [0], given: 1 Attachment: durgesh for PRESIDENT.... bigoyal seems like question has changed a little. Director Anyway, posting the latest question: Joined: 03 Jun 2009 At Daifu university, 40% of all students are members of both a chess club and a swim team. If 20% of members of the swim team are not members of the chess club, what percentage of all Daifu students are members of the swim team? Posts: 805 The solution provided assumes that the students are either member of chess club or swim team or both. But no where in the question this has been mentioned. Location: New Delhi I found it difficult to solve, since I didn't know how many students are member of neither of these clubs WE 1: 5.5 yrs in IT Followers: 56 ISB 2011-12 thread | Ask ISB Alumni @ ThinkISB Kudos [?]: 333 [0], All information related to Indian candidates and B-schools | Indian B-schools accepting GMAT scores given: 56 Self evaluation for Why MBA? irajeevsingh wrote: True , for 2 sets I think Venn is the best way to go. For more than 2 grid makes things relatively simpler. Joined: 31 Aug 2009 Is there a question on the Posts: 5 GMATClub tests Followers: 0 that uses 3 or more sets so I can practice the larger grid? So far I've always used Venn diagrams for solving these kinds of problems. Kudos [?]: 2 [0], given: Can anyone explain the answer for this ? Snowingreen At Daifu university, 40% of all students are members of both a chess club and a swim team. If 20% of members of the swim team are not members of the chess club, what percentage of all Daifu students are members of the swim team? 20 % Joined: 22 Aug 2009 30 % Posts: 100 40 % Location: UK 50 % Schools: LBS,Oxford SAID,INSEAD,NUS 60 % Followers: 2 _________________ Kudos [?]: 12 [0], given: FEB 15 2010 !! well I would not disturb you after the D-day ..so please !!! 
It Will Snow In Green , One Day !!!! Snowingreen wrote: Can anyone explain the answer for this ? At Daifu university, 40% of all students are members of both a chess club and a swim team. If 20% of members of the swim team are not members of the chess club, what percentage of all Daifu students are members of the swim team? iPinnacle 20 % Manager 30 % Joined: 06 Oct 2009 40 % Posts: 70 50 % Followers: 2 60 % Kudos [?]: 26 [0], given: Hi Snowingreen, The Answer here also is same D as this is a similar question as above. Just replace "Members of student organization" with "members of chess club" and "students who want to reduce tuition fees" with "Members of swim club". Then refer Durgesh79's explanation above. +1 kudos me if this is of any help... IMO D. We can easily rule out the first 3 options..I did not use the grid but it seems a viable option.. Joined: 28 Feb 2011 Posts: 91 Fight till you succeed like a gladiator..or doom your life which can be invincible Followers: 0 Kudos [?]: 30 [0], given: The updated question is much clearer. Senior Manager D it is.. Joined: 20 Dec 2010 % S (who only Swim) = (20/100) * ( .4 + S) Posts: 259 only S = 10. Schools: UNC Duke Kellogg Total S = 10 + 40 = 50 Followers: 3 Kudos [?]: 25 [0], given: Senior Manager oops.. I chose 60. Guess it's a common mistake. Joined: 19 Oct 2010 Posts: 272 Location: India GMAT 1: 560 Q36 V31 GPA: 3 Followers: 6 Kudos [?]: 25 [0], given: siddhans bigoyal wrote: Senior Manager seems like question has changed a little. Joined: 29 Jan 2011 Anyway, posting the latest question: At Daifu university, 40% of all students are members of both a chess club and a swim team. If 20% of members of the swim team are not members of the chess club, what Posts: 390 percentage of all Daifu students are members of the swim team? Followers: 0 The solution provided assumes that the students are either member of chess club or swim team or both. But no where in the question this has been mentioned. 
I found it difficult to solve, since I didn't know how many students are member of neither of these clubs Kudos [?]: 13 [0], given: 87 How to solve this using 2 x 2 grid? Joined: 04 Aug 2011 if 20 % of swim club are not the member of chess then 80 % are member of chess. Posts: 45 Assume swim club has x members - then .8x = 40 solve for x you get x = 50 which makes the percentage of members overall in swim club as 50 % Location: United States IMO D Technology, Leadership GMAT 1: 570 Q45 V25 GPA: 4 WE: Information Technology (Computer Followers: 0 Kudos [?]: 11 [0], given: This post received Expert's post goalsnr wrote: At Daifu university, 40% of all students are both members of a student organization and want to reduce their tuition costs. 20% of those students who want to reduce tuition are not members of the student organization. What percentage of all Daifu students want to reduce tuition? (A) 20% (B) 30% (C) 40% (D) 50% (E) 60% Source: GMAT Club Tests - hardest GMAT questions Can Someone please explain how to solve the above Question? I could'nt getmuch help from the OE. Below is revised version of this question.At Daifu university, 40% of all students are members of both a chess club and a swim team. If 20% of members of the swim team are not members of the chess club, what percentage of all Daifu students are members of the swim team? Bunuel A. 20% Math Expert B. 30% Joined: 02 Sep 2009 C. 40% Posts: 17283 D. 50% Followers: 2868 E. 60% Kudos [?]: 18333 [2] , Assume there are total of 100 students. 40 students are members of both clubs. We are told that: "20% of members given: 2345 of the swim team are not members of the chess club", thus if S is a # of members of the swim team then 0.2S is # of members of ONLY the swim teem: 40+0.2S=S --> S=50. Answer: D. 
Or another way: since "20% of members of the swim team are not members of the chess club" then the rest 80% of members of the swim team (S) ARE members of the chess club, so members of both clubs: 0.8*S=40 --> S=50. Answer: D. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests I will try creating a grid here Student org No Student Org Total Intern Reduce Cost 40 x (40+x) Joined: 11 Jun 2012 No Reduce Cost Posts: 6 Total 100 Followers: 0 Now, says 20% of those who want to reduce cost are not part of student org So, 20%( 40+x)=x => 80+2x =10x => x=10 Kudos [?]: 1 [0], given: 5 so total number of students who want to reduce tuition is 40+x = 40+10=50 Therefore answer is D- 50% Followed the grid method. Set up the grid correctly but used two variables instead of one. That led to taking more time. 
Good question with percentages on grid. Concluded with D but since messed up the variables took abt a min more. Joined: 14 Jun 2012 Posts: 66 My attempt to capture my B-School Journey in a Blog : tranquilnomadgmat.blogspot.com Followers: 0 There are no shortcuts to any place worth going. Kudos [?]: 8 [0], given: bunuel- u rock... Joined: 14 Dec 2012 wat a simple expl. Posts: 87 Location: United States Followers: 0 Kudos [?]: 13 [0], given:
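The arithmetic in the two approaches above is easy to sanity-check with a couple of lines (a quick sketch, not from any post in the thread):

```python
both = 40  # percent of all students in both the chess club and the swim team

# Method 1: 80% of the swim team is the overlap, so 0.8 * S = 40
S = both / 0.8

# Method 2 (the 2x2 grid): swim-only x satisfies 0.2 * (40 + x) = x
x = 0.2 * both / (1 - 0.2)

# Both routes give a swim team of 50% of all students
print(round(S, 9), round(both + x, 9))
```

Both methods agree with answer choice D.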
{"url":"http://gmatclub.com/forum/m01-q15-67289.html","timestamp":"2014-04-16T20:22:41Z","content_type":null,"content_length":"218000","record_id":"<urn:uuid:3311c6f1-3230-47e1-9b09-d1cbc1d24d93>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
2012 Schedule

See our guide for selecting Summer Program courses during the first 4-week session.

Three- to Five-Day Workshops

Unless otherwise indicated, workshops are held in Ann Arbor, Michigan, on the University of Michigan campus. Workshops usually meet daily from 9:00 am to 5:00 pm. NOTE: please check back to the course listing for possible further developments.

May 29-June 1: Network Analysis: An Introduction
June 4-8: Network Analysis: A Second Course
June 4-8: Spatial Econometrics: Statistical Models of Interdependence Among Observations
June 4-6: Analyzing Developmental Trajectories (Amherst, MA)
June 11-15: Hierarchical Linear Models I: Introduction (Amherst, MA)
June 11-15: Monte Carlo Simulation and Resampling Methods (Chapel Hill, NC)
June 11-15: Structural Equation Models and Latent Variables: An Introduction
June 11-15: Doing Bayesian Data Analysis: An Introduction
June 11-15: Models for Categorical Outcomes Using Stata: Specification, Estimation, and Interpretation
June 18-22: Analysis of Large-Scale Networks
June 18-22: Introduction to Spatial Regression Analysis (Chapel Hill, NC)
June 18-22: Health Disparities Research and Minority Populations: Exploring ICPSR Data Sources
June 25-27: Mixed Methods: Approaches for Combining Qualitative and Quantitative Research Strategies (Chapel Hill, NC)
June 25-29: Longitudinal Data Analysis, including Categorical Outcomes
July 9-13: Causal Inference in the Social Sciences: Matching, Propensity Scores, and Other Strategies (Berkeley, CA)
July 9-13: Time Series Analysis: An Introduction for Social Scientists
July 9-13: Panel Data Analysis Using Stata
July 9-13: Applied Multilevel Models Using SAS and SPSS (Boulder, CO)
July 9-13: Item Response Theory (Boulder, CO)
July 16-18: Early Care and Education: Drawing Lessons from Weighted Samples
July 16-20: Social Network Analysis: An Introduction (Chapel Hill, NC)
July 16-20: Time Series Analysis: A Second Course
July 23-25: Applied Power Analysis for the Social and Behavioral Sciences
July 23-26: Missing Data: An Introduction to the Analysis of Incomplete Data Sets (Bloomington, IN)
July 23-27: Military Nursing Research: Fundamentals of Survey Methodology
July 23-27: Applied Data Science: Managing Research Data for Re-Use
July 23-28: Analysis of Longitudinal Study of American Youth (LSAY) Data
July 30-August 3: The National Black Election/Politics Studies: Use and Analysis
July 30-August 3: Assessing and Mitigating Disclosure Risk: Essentials for Social Scientists
July 30-August 3: Designing, Conducting, and Analyzing Field Experiments
August 6-8: Hierarchical Linear Models for Longitudinal Data (Boulder, CO)
August 6-10: Latent Growth Curve Models (LGCM): A Structural Equation Modeling Approach (Chapel Hill, NC)
August 6-9: The R Statistical Computing Environment: The Basics and Beyond (Berkeley, CA)
August 6-10: Network Analysis: Theory and Methods (Bloomington, IN)
August 6-10: Providing Social Science Data Services: Strategies for Design and Operation
August 13-15: Growth Mixture Models: A Structural Equation Modeling Approach (Chapel Hill, NC)
August 13-15: Analyzing Multilevel and Mixed Models Using Stata

Note: The ICPSR Summer Program in Quantitative Methods of Social Research makes every effort to provide an up-to-date course schedule, as well as fully accurate course descriptions. Occasionally, however, unforeseen circumstances may require changes in course content, instructors, timing, or location. Fortunately, such events are very rare. But when they do occur, we reserve the right to make any changes that are necessary to maintain the Program. We will post corrected information to the Summer Program website and inform participants who are affected by such changes as quickly as possible.
{"url":"http://www.icpsr.umich.edu/icpsrweb/content/sumprog/2012/schedule.html","timestamp":"2014-04-19T13:25:54Z","content_type":null,"content_length":"20793","record_id":"<urn:uuid:4911c674-0c2d-4622-8715-255521d41a9a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
ratios of parts

Similar Triangles - ratios of parts

In two similar triangles, their perimeters and corresponding sides, medians and altitudes will all be in the same ratio.

(Interactive figure: two similar triangles, one with draggable vertices P, Q, R; the ratio of two corresponding sides and the ratio of the corresponding medians are displayed.)

In two similar triangles:
• The perimeters of the two triangles are in the same ratio as the sides.
• The corresponding sides, medians and altitudes will all be in this same ratio.

This is illustrated by the two similar triangles in the figure above, where one of the medians of each triangle is shown. As you resize the triangle PQR, you can see that the ratio of the sides is always equal to the ratio of the medians. In the same way, the perimeters will be in the same ratio, and the altitudes will also be in the same ratio. There are ten items altogether that are in the same ratio (the perimeter, three medians, three altitudes, and three sides). To avoid cluttering up the diagram above, the ratios of just one side and one median are shown, but the idea applies to all ten items.

Remember that a triangle has three sides, three altitudes and three medians. Be sure to compare the corresponding part in each triangle. For example, in the figure below the two altitudes are corresponding altitudes because they are drawn from the same vertex in each triangle, and so the ratio of their lengths is valid. Be especially vigilant where one triangle is rotated and/or a mirror image of the other. In the figure at the top, dragging the point P down below Q rotates the triangle 180°, but the triangles are still similar and the ratios still hold.

See also: Similar Triangles - ratios of areas
Related topics: Similar Polygons; Similar Polygons defined

(C) 2009 Copyright Math Open Reference. All rights reserved
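The claim that medians and perimeters scale by the same ratio as the sides can be spot-checked numerically (a quick sketch; the triangle coordinates and the scale factor are arbitrary choices, not from the article):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def median_from_first(a, b, c):
    # the median from vertex a runs to the midpoint of the opposite side bc
    mid = ((b[0] + c[0]) / 2, (b[1] + c[1]) / 2)
    return dist(a, mid)

def perimeter(a, b, c):
    return dist(a, b) + dist(b, c) + dist(c, a)

# a triangle and a similar copy, scaled about the origin by k = 2.5
a, b, c = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
k = 2.5
A, B, C = [(k * x, k * y) for (x, y) in (a, b, c)]

side_ratio = dist(A, B) / dist(a, b)
median_ratio = median_from_first(A, B, C) / median_from_first(a, b, c)
perimeter_ratio = perimeter(A, B, C) / perimeter(a, b, c)

# all three ratios come out equal to k, as the article states
print(side_ratio, median_ratio, perimeter_ratio)
```

The same check works for altitudes, since every length in the figure scales by the same factor.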
{"url":"http://www.mathopenref.com/similartrianglesparts.html","timestamp":"2014-04-17T06:40:49Z","content_type":null,"content_length":"9818","record_id":"<urn:uuid:0622a276-25d9-4ce7-bbae-aa481d8757b7>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Jntu Kakinada B.Tech 2-1 (R10) 1st Mid Online Bits 2013 Free Download

Welcome to Jntu9.in. JNTU Kakinada recently released the notification for the B.Tech 2-1 (R10) 1st mid examinations for the ECE, EEE, CSE, IT and Mechanical branches. These mid examinations are conducted both online and offline. In the online examination, a total of 20 multiple-choice bits are asked. The model online bits for the 2-1 1st mid are given below, subject-wise. To download them, click on the download links below.

B.Tech 2-1 (R10) 1st Mid Online Bits, subject-wise:
- Electrical & Electronics Engineering
- Electronic Devices and Circuits
- Fluid Mechanics & Hydraulic Machinery
- Managerial Economics and Financial Analysis
- Mathematical Foundations of Computer Science and Engineering
- Probability Theory & Stochastic Processes

30 Comments

1. managerial economics& financial analysis means MEFA only dat was given above
2. we need EMF R10 bits toooo…
3. we need T.D,EM,EE for mech students
   □ if u r a mech student then it's better to go for electrical theory bites.
4. mechanical subjects bits pls…….
5. plz mention the exact 10 bits
6. are these alll for this december mid??
7. engineering mechanics bits……….
8. themodynamics bits toooo…………
9. pleas post civil engineering online bits
10. mechanics of materials online bits
11. plz mention civil engineering bits alsooooooooooooooooooooooooo
    □ plz need civl online bits….
13. Please upload engineering mechanics online bits
14. Plz post civil engineering bits alsooo
15. we need mechanical online bits of E.m,E.E,T.D
16. plss post bits for civil subjects alsoooo
17. need online bits for civil r10
18. please provide online bits for civil branch for r10 regulation
19. plzz plzz post civil 2-1 online bits y cant u post civil onlinel bits is deir any reason behind???
20. pleas post online bits for civil
21. plz post emf online bits and eca online bits
22. Sir plsssssssssssss post 2-1 online bits on signals and systems
23. plzz post em-1 for eee students (electic machines)
26. sir emf 2-1 online bits not available plz post
27. all branches bits are present except civil pls post civil bits
{"url":"http://jntu9.in/jntu-kakinada-b-tech-2-1-r10-1st-mid-online-bits-2013-free-download-of-eceeeecseitmechanical/","timestamp":"2014-04-16T21:51:23Z","content_type":null,"content_length":"72084","record_id":"<urn:uuid:4403d976-a242-4920-91e7-bbea4605e19e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
sine curve in a sentence

Example sentences for sine curve

Thus the general size vs sorting trend is a distorted sine curve of two cycles.
The sine curve may be descending, but there are loopholes in it nonetheless.
There on the screen, a continuous sine curve pulsed, and a new era began.
Population trends are not straight line, but closer to a sine curve.
In this example, the function plot is used to draw a function: the sine curve.
In the open-loop algorithm, power output was programmed to change according to a sine curve tied to day length.
Wind speed is computed from the average of the magnitudes of the positive and negative peaks of the quasi-sine curve.
{"url":"http://www.reference.com/example-sentences/sine%20curve","timestamp":"2014-04-21T04:49:03Z","content_type":null,"content_length":"20652","record_id":"<urn:uuid:d3e2e2c9-e71f-4345-a750-62f5edca2e63>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Is there a way to do comment with just one keystroke at the start as
Date: Nov 14, 2012 1:50 AM
Author: Murray Eisenberg
Subject: Re: Is there a way to do comment with just one keystroke at the start as

On Nov 13, 2012, at 12:05 AM, Eric Rasmusen <erasmuse61@gmail.com> wrote:

> The standard way to do comments in Mathematica is like this
> (*Here is a comment*)
> In Latex, a short comment only takes one symbol, at the start of the line:
> %Here is a latex comment
> Is there a way to comment in Mathematica without the closing symbols? Is there any easy way to make a macro to imitate latex comments?

Sorry, but that's the way it is. Of course you could use Text cells instead of embedding comments inside an Input cell. That won't work, of course, if you need to insert comments within (too?)-long code in an Input cell. In the latter case, you could try the following:

Comment[blah___] := Null

For example:

Comment["first we do this"]
1 + 1
x + 2

The _three_ underscores in the argument pattern blah___ for Comment are to allow the final usage just shown, with no argument, so as to avoid seeing a literal Comment[] returned from it.

The trouble is that the function Comment also requires a closing bracket. At least that's one less symbol than the two in *) closing an ordinary comment. You could avoid even that by using prefix notation, provided you ensure the argument is a single symbol or a string:

Comment@"first we do this"

Unfortunately, that requires a close-quote, so it's not better than the Comment[…] construction.

Murray Eisenberg
murray@math.umass.edu
Mathematics & Statistics Dept.
Lederle Graduate Research Tower
University of Massachusetts
710 North Pleasant Street
Amherst, MA 01003-9305
phone 413 549-1020 (H), 413 545-2838 (W)
fax 413 545-1801
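The Comment trick above can be hardened a little by giving the symbol the HoldAllComplete attribute, so that whatever sits between the brackets is never evaluated at all. This refinement is my own sketch, not part of Eisenberg's post:

```mathematica
ClearAll[Comment]
SetAttributes[Comment, HoldAllComplete]  (* leave the "comment" untouched by the evaluator *)
Comment[___] := Null

(* Now even unquoted expressions are safe to use as comments: *)
Comment[next we invert the matrix, which is the expensive step]
Comment@"prefix form still works for a single string"
```

With HoldAllComplete, a comment can mention undefined symbols or expensive function calls without triggering evaluation or side effects.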
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7922961","timestamp":"2014-04-20T01:28:57Z","content_type":null,"content_length":"2965","record_id":"<urn:uuid:6495d299-2c0b-46d1-9e9a-adef207d410b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
History of Lossless Data Compression Algorithms From GHN (Difference between revisions) ← Older edit Newer edit → Line 400: Line 400: Matti Jakobsson published the LZJ&nbsp;algorithm in 1985[33] and it is one of the only LZ78 algorithms Matti Jakobsson published the LZJ&nbsp;algorithm in 1985[33] and it is one of the only LZ78 that deviates from LZW. The algorithm works by storing every unique string in the previous text up to algorithms that deviates from LZW. The algorithm works by storing every unique string in an arbitrary maximum length in the dictionary and assigning codes to each. When the dictionary is full, the previous text up to an arbitrary maximum length in the dictionary and assigning codes all entries that occurred only once are removed [12]. to each. When the dictionary is full, all entries that occurred only once are removed [12]. − [DEL::DEL] − [DEL:==== LZFG ====:DEL] − [DEL::DEL] − [DEL:blah :DEL] === Non-dictionary Algorithms<br> === === Non-dictionary Algorithms<br> === Line 417: Line 413: ==== PAQ ==== ==== PAQ ==== − blah + blah − [DEL:Citations:DEL] + [DEL:<ref>Wolfram, Stephen. A New Kind of Science. Champaign, IL: Wolfram Media, 2002. 1069. Print.</ ref><br>:DEL][1] Wolfram, Stephen. A New Kind of Science. Champaign, IL: Wolfram Media, 2002. 1069. [1] Wolfram, Stephen. A New Kind of Science. Champaign, IL: Wolfram Media, 2002. 1069. Print. <br>[2] Ken Huffman. Profile: David A. Huffman, Scientific American, September 1991, pp. 54–58 Print. <br>[2] Ken Huffman. Profile: David A. Huffman, Scientific American, September 1991, <br>[3] [DEL:LZ77:DEL]<br>[4] [DEL:LZ78:DEL]<br>[5] USPTO Patent #4814746 http://www.[DEL:google:DEL]. pp. 
54–58<br>[3] <br>[4] <br>[5] USPTO Patent #4814746http://www../ http:// [DEL:com:DEL]/[DEL:patents?hl=en&amp;lr=&amp;vid=USPAT4814746&amp;id=Z2oWAAAAEBAJ&amp;oi=fnd&amp;dq= www.theregister.co.uk/1999/09/01/unisys_demands_5k_licence_fee/<br>[6] http:// lempel+ziv+miller+wegman&amp;printsec=abstract#v=onepage&amp;q=lempel%20ziv%20miller%20wegman&amp;f= stephane.lesimple.fr/wiki/blog/ false<br>[5] :DEL]http://www.theregister.co.uk/1999/09/01/unisys_demands_5k_licence_fee/<br>[6] http:// lzop_vs_compress_vs_gzip_vs_bzip2_vs_lzma_vs_lzma2-xz_benchmark_reloaded <br>[7] http:// stephane.lesimple.fr/wiki/blog/lzop_vs_compress_vs_gzip_vs_bzip2_vs_lzma_vs_lzma2-xz_benchmark_reloaded www.msversus.org/archive/stac.html <br>[8] http://www.cs.tau.ac.il/~dcor/Graphics/ − <br>[7] http://www.msversus.org/archive/stac.html<br>[8] http://www.cs.tau.ac.il/~dcor/Graphics/ + adv-slides/entropy.pdf <br>[9] Shannon, C.E. (July 1948). "A Mathematical Theory of adv-slides/entropy.pdf<br>[9] Shannon, C.E. (July 1948). "A Mathematical Theory of Communication". Bell Communication". 
Bell System Technical Journal 27: 379–423.<br>[10] Huffman Coding<br>[11] System Technical Journal 27: 379–423.<br>[10] Huffman Coding<br>[11] Arithmetic Coding<br>[12] Modeling Arithmetic Coding<br>[12] Modeling for compression<br>[13] http://www.faqs.org/faqs/ for compression<br>[13] [DEL:comp.compression Frequently Asked Questions<br>:DEL]http://www.faqs.org/ compression-faq/part1/section-7.html<br>[14] 5051745<br>[15] http://www.bbsdocumentary.com/ faqs/compression-faq/part1/section-7.html<br>[14] [DEL:ZIP patentUS patent :DEL]5051745<br>[15] [DEL: library/CONTROVERSY/LAWSUITS/SEA/katzbio.txt <br>[16] http://www.esva.net/~thom/ Katz death :DEL]http://www.bbsdocumentary.com/library/CONTROVERSY/LAWSUITS/SEA/katzbio.txt<br>[16] philkatz.html <br>[17] LZP<br>[18] LZR<br>[19] http://www.binaryessence.com/dct/apc/ [DEL:ARC :DEL]http://www.esva.net/~thom/philkatz.html<br>[17] LZP<br>[18] LZR<br>[19] [DEL:DEFLATE64 en000263.htm <br>[20] compression via substring enumeration<br>[21] LZSS<br>[22] http:// benchmark :DEL]http://www.binaryessence.com/dct/apc/en000263.htm<br>[20] compression via substring www.ross.net/compression/ <br>[23] LZS enumeration<br>[21] LZSS<br>[22] [DEL:LZRW* :DEL]http://www.ross.net/compression/<br>[23] LZS − [24] [DEL:LZX sold to MSFT :DEL]http://www.linkedin.com/pub/jonathan-forbes/3/70a/a4b<br> + [24] http://www.linkedin.com/pub/jonathan-forbes/3/70a/a4b <br> − [25] [DEL:LZO&nbsp;:DEL]http://www.oberhumer.com/opensource/lzo/<br> + [25] http://www.oberhumer.com/opensource/lzo/ <br> − [26] LZMA<br>[27] [DEL:LZMA2 date :DEL]http://www.7-zip.org/history.txt<br> + [26] LZMA<br>[27] http://www.7-zip.org/history.txt <br> [28] Statistical LZ [28] Statistical LZ Revision as of 10:30, 12 December 2011 There are two major categories of compression algorithms: lossy and lossless. Lossy compression algorithms involve the reduction of a file’s size usually by removing small details that require a large amount of data to store at full fidelity. 
In lossy compression, it is impossible to restore the original file due to the removal of essential data. Lossy compression is most commonly used to store image and audio data, and while it can achieve very high compression ratios through data removal, it is not covered in this article. Lossless data compression is the size reduction of a file, such that a decompression function can restore the original file exactly with no loss of data. Lossless data compression is used ubiquitously in computing, from saving space on your personal computer to sending data over the web, communicating over a secure shell, or viewing a PNG or GIF image. The basic principle that lossless compression algorithms work on is that any non-random file will contain duplicated information that can be condensed using statistical modeling techniques that determine the probability of a character or phrase appearing. These statistical models can then be used to generate codes for specific characters or phrases based on their probability of occurring, and assigning the shortest codes to the most common data. Such techniques include entropy encoding, run-length encoding, and compression using a dictionary. Using these techniques and others, an 8-bit character or a string of such characters could be represented with just a few bits resulting in a large amount of redundant data being removed. Data compression has only played a significant role in computing since the 1970s, when the Internet was becoming more popular and the Lempel-Ziv algorithms were invented, but it has a much longer history outside of computing. Morse code, invented in 1838, is the earliest instance of data compression in that the most common letters in the English language such as “e” and “t” are given shorter Morse codes. Later, as mainframe computers were starting to take hold in 1949, Claude Shannon and Robert Fano invented Shannon-Fano coding. 
Their algorithm assigns codes to symbols in a given block of data based on the probability of the symbol occuring. The probability is of a symbol occuring is inversely proportional to the length of the code, resulting in a shorter way to represent the data Two years later, David Huffman was studying information theory at MIT and had a class with Robert Fano. Fano gave the class the choice of writing a term paper or taking a final exam. Huffman chose the term paper, which was to be on finding the most efficient method of binary coding. After working for months and failing to come up with anything, Huffman was about to throw away all his work and start studying for the final exam in lieu of the paper. It was at that point that he had an epiphany, figuring out a very similar yet more efficient technique to Shannon-Fano coding. The key difference between Shannon-Fano coding and Huffman coding is that in the former the probability tree is built bottom-up, creating a suboptimal result, and in the latter it is built top-down [2]. Early implementations of Shannon-Fano and Huffman coding were done using hardware and hardcoded codes. It was not until the 1970s and the advent of the Internet and online storage that software compression was implemented that Huffman codes were dynamically generated based on the input data [1]. Later, in 1977, Abraham Lempel and Jacob Ziv published their groundbreaking LZ77 algorithm, the first algorithm to use a dictionary to compress data. More specifically, LZ77 used a dynamic dictionary oftentimes called a sliding window [3]. In 1978, the same duo published their LZ78 algorithm which also uses a dictionary; unlike LZ77, this algorithm parses the input data and generates a static dictionary rather than generating it dynamically [4] Legal Issues Both the LZ77 and LZ78 algorithms grew rapidly in popularity, spawning many variants shown in the diagram to the right. 
Most of these algorithms have died off since their invention, with just a handful seeing widespread use today including DEFLATE, LZMA, and LZX. Most of the commonly used algorithms are derived from the LZ77 algorithm. This is not due to technical superiority, but because LZ78 algorithms became patent-encumbered after Unisys patented the derivative LZW algorithm in 1984 and began suing software vendors, server admins, and even end users for using the GIF format without a license [5][6]. At the time, the UNIX compress utility used a very slight modification of the LZW algorithm called LZC and was later discontinued due to patent issues. Other UNIX developers also began to deviate from using the LZW algorithm in favor or open source ones. This led the UNIX community to adopt the DEFLATE-based gzip and the Burrows-Wheeler Transform-based bzip2 formats. In the long run, this was a benefit for the UNIX community because both the gzip and bzip2 formats nearly always achieve significantly higher compression ratios than the LZW format [6]. The patent issues surrounding LZW have since subsided, as the patent on the LZW algorithm expired in 2003 [5]. Despite this, the LZW algorithm has largely been replaced and is only commonly used in GIF compression. There have also been some LZW derivatives since then but they do not enjoy widespread use either and LZ77 algorithms remain dominant. Another legal battle erupted in 1993 regarding the LZS algorithm. LZS was developed by Stac Electronics for use in disk compression software such as Stacker. Microsoft used the LZS algorithm in developing disk compression software that was released with MS-DOS 6.0 that purported to double the capacity of a hard drive. When Stac Electronics found out its intellectual property was being used, it filed suit against Microsoft. 
Microsoft was later found guilty of patent infringement and ordered to pay Stac Electronics $120 million in damages minus $13.6 million awarded in a countersuit finding that Microsoft’s infringement was not willful [7]. Although Stac Electronics v. Microsoft had a large judgment, it did not impede the development of Lempel-Ziv algorithms as the LZW patent dispute did. The only consequence seems to be that LZS has not been forked into any new algorithms. The Rise of DEFLATE Corporations and other large entities have used data compression since the Lempel-Ziv algorithms were published as they have ever-increasing storage needs and data compression allows them to meet those needs. However, data compression did not see widespread use until the Internet began to take off toward the late 1980s when a need for data compression emerged. Bandwidth was either limited, expensive, or both, and data compression helped to alleviate these bottlenecks. Compression became especially important when the World Wide Web was developed as people began to share more images and other formats which are considerably larger than text. To meet the demand, several new file formats were developed incorporating compression including ZIP, GIF, and PNG. Thom Henderson released the first commercially successful archive format called ARC in 1985 through his company, System Enhancement Associates. ARC was especially popular in the BBS community, since it was one of the first programs capable of both bundling and compressing files and it was also made open source. The ARC format uses a modification to the LZW algorithm to compress data. A man named Phil Katz noticed ARC's popularity and decided to improve it by writing the compression and decompression routines in assembly language. He released his PKARC program as shareware in 1987 and was later sued by Henderson for copyright infringement. He was found guilty and forced to pay royalties and other penalties as part of a cross-licensing agreement. 
He was found guilty because PKARC was a blatant copy of ARC; in some instances even the typos in the comments were identical [16]. Phil Katz could no longer sell PKARC after 1988 due to the cross-licensing agreement, so in 1989 he created a tweaked version of PKARC that is now known as the ZIP format. As a result of its use of LZW, it was considered patent encumbered and Katz later chose to switch to the new IMPLODE algorithm. The format was again updated in 1993, when Katz released PKZIP 2.0, which implemented the DEFLATE algorithm as well as other features like split volumes. This version of the ZIP format is found ubiquitously today, as almost all .zip files follow the PKZIP 2.0 format despite its great age. The GIF, or Graphics Interchange Format, was developed by CompuServe in 1987 to allow bitmaps to be shared without data loss (although the format is limited to 256 colors per frame), while substantially reducing the file size to allow transmission over the Internet. However, like the ZIP format, GIF is also based on the LZW algorithm. Despite being patent encumbered, Unisys was unable to enforce their patents adequately enough to stop the format from spreading. Even now, over 20 years later, the GIF remains in use especially for its capability of being animated. Although GIF could not be stopped, CompuServe sought a format unencumbered by patents and in 1994 introduced the Portable Network Graphics (PNG) format. Like ZIP, the PNG standard uses the DEFLATE algorithm to perform compression. Although DEFLATE was patented by Katz [14] the patent was never enforced and thus PNG and other DEFLATE-based formats avoid infringing on patents. Although LZW took off in the early days of compression, due to Unisys's litigious nature it has more or less died off in favor of the faster and more efficient DEFLATE algorithm. DEFLATE is currently the most used data compression algorithm since it is a bit like the Swiss Army knife of compression. 
Beyond its use in the PNG and ZIP formats, DEFLATE is also used very frequently elsewhere in computing. For example, the gzip (.gz) file format uses DEFLATE since it is essentially an open source version of ZIP. Other uses of DEFLATE include HTTP, SSL, and other technologies designed for efficient data compression over a network. Sadly, Phil Katz did not live long enough to see his DEFLATE algorithm take over the computing world. He suffered from alcoholism for several years and his life began to fall apart in the late 1990s, having been arrested several times for drunk driving and other violations. Katz was found dead in a hotel room on April 14, 2000, at the age of 37. The cause of death was found to be acute pancreatic bleeding from his alcoholism, brought on by the many empty bottles of liquor found near his body [15]. Current Archival Software The ZIP format and other DEFLATE-based formats were king up until the mid 1990s when new and improved formats began to emerge. In 1993, Eugene Roshal released his archiver known as WinRAR which utilizes the proprietary RAR format. The latest version of RAR uses a combination of the PPM and LZSS algorithms, but not much is known about earlier implementations. RAR has become a standard format for sharing files over the Internet, specifically in the distribution of pirated media. An open-source implementation of the Burrows-Wheeler Transform called bzip2 was introduced in 1996 and rapidly grew in popularity on the UNIX platform against the DEFLATE-based gzip format. Another open-source compression program was released in 1999 as the 7-Zip or .7z format. 7-Zip could be the first format to challenge the dominance of ZIP and RAR due to its generally high compression ratio and the format's modularity and openness. This format is not limited to using one compression algorithm, but can instead choose between bzip2, LZMA, LZMA2, and PPMd algorithms among others. 
Finally, on the bleeding edge of archival software are the PAQ* formats. The first PAQ format was released by Matt Mahoney in 2002, called PAQ1. PAQ substantially improves on the PPM algorithm by using a technique known as context mixing, which combines two or more statistical models to generate a better prediction of the next symbol than either of the models on their own.

Future Developments

The future is never certain, but based on current trends some predictions can be made as to what may happen in the future of data compression. Context mixing algorithms such as PAQ and its variants have started to attract popularity, and they tend to achieve the highest compression ratios but are usually slow. With the exponential increase in hardware speed following Moore's Law, context mixing algorithms will likely flourish as the speed penalties are overcome through faster hardware, due to their high compression ratio. The algorithm that PAQ sought to improve, called Prediction by Partial Matching (PPM), may also see new variants. Finally, the Lempel-Ziv Markov chain Algorithm (LZMA) has consistently shown itself to have an excellent compromise between speed and high compression ratio and will likely spawn more variants in the future. LZMA may even be the "winner" as it is further developed, having already been adopted in numerous competing compression formats since it was introduced with the 7-Zip format. Another potential development is the use of compression via substring enumeration (CSE), an up-and-coming compression technique that has not seen many software implementations yet. In its naive form it performs similarly to bzip2 and PPM, and researchers have been working to improve its efficiency [20].

Compression Techniques

Many different techniques are used to compress data. Most compression techniques cannot stand on their own, but must be combined together to form a compression algorithm.
Those that can stand alone are often more effective when joined together with other compression techniques. Most of these techniques fall under the category of entropy coders, but there are others such as Run-Length Encoding and the Burrows-Wheeler Transform that are also commonly used.

Run-Length Encoding

Run-Length Encoding is a very simple compression technique that replaces runs of two or more of the same character with a number which represents the length of the run, followed by the original character; single characters are coded as runs of 1. RLE is useful for highly redundant data, indexed images with many pixels of the same color in a row, or in combination with other compression techniques like the Burrows-Wheeler Transform. Here is a quick example of RLE:

Input: AAABBCCCCDEEEEEE, followed by a run of 38 A's
Output: 3A2B4C1D6E38A

Burrows-Wheeler Transform

The Burrows-Wheeler Transform is a compression technique that aims to reversibly transform a block of input data such that the amount of runs of identical characters is maximized. The BWT itself does not perform any compression operations; it simply transforms the input such that it can be more efficiently coded by a Run-Length Encoder or other secondary compression technique.

The algorithm for a BWT is simple:

1. Create a string array.
2. Generate all possible rotations of the input string, storing each in the array.
3. Sort the array alphabetically.
4. Return the last column of the array (the last character of each string in the array, concatenated) [29].

BWT usually works best on long inputs with many alternating identical characters. Here is an example of the algorithm being run on an ideal input.
Note that & is an End of File character:

Input: HAHAHA&

│ Rotations │ Alpha-Sorted Rotations │
│ HAHAHA&   │ AHAHA&H                │
│ &HAHAHA   │ AHA&HAH                │
│ A&HAHAH   │ A&HAHAH                │
│ HA&HAHA   │ HAHAHA&                │
│ AHA&HAH   │ HAHA&HA                │
│ HAHA&HA   │ HA&HAHA                │
│ AHAHA&H   │ &HAHAHA                │

Output (the last character of each sorted rotation): HHH&AAA

Because of its alternating identical characters, performing the BWT on this input generates an optimal result that another algorithm could further compress, such as RLE which would yield "3H&3A". While this example generated an optimal result, it does not generate optimal results on real-world data.

Entropy Encoding

Entropy in data compression means the smallest number of bits needed, on average, to represent a symbol or literal. A basic entropy coder combines a statistical model and a coder. The input file is parsed and used to generate a statistical model that consists of the probabilities of a given symbol appearing. Then, the coder will use the statistical model to determine what bit- or bytecodes to assign to each symbol such that the most common symbols have the shortest codes and the least common symbols have the longest codes [8].

Shannon-Fano Coding

This is one of the earliest compression techniques, invented in 1949 by Claude Shannon and Robert Fano. This technique involves generating a binary tree to represent the probabilities of each symbol occurring. The symbols are ordered such that the most frequent symbols appear at the top of the tree and the least likely symbols appear at the bottom. The code for a given symbol is obtained by searching for it in the Shannon-Fano tree, and appending to the code a value of 0 or 1 for each left or right branch taken, respectively.
For example, if "A" is two branches to the left and one to the right, its code would be "001". Shannon-Fano coding does not always produce optimal codes due to the way it builds the binary tree from the top down. For this reason, Huffman coding is used instead, as it generates an optimal code for any given input.

The algorithm to generate Shannon-Fano codes is fairly simple:

1. Parse the input, counting the occurrence of each symbol.
2. Determine the probability of each symbol using the symbol count.
3. Sort the symbols by probability, with the most probable first.
4. Generate leaf nodes for each symbol.
5. Divide the list in two while keeping the probability of the left branch roughly equal to that of the right branch.
6. Prepend 0 and 1 to the left and right nodes' codes, respectively.
7. Recursively apply steps 5 and 6 to the left and right subtrees until each node is a leaf in the tree. [9]

Huffman Coding

Huffman Coding is another variant of entropy coding that works in a very similar manner to Shannon-Fano Coding, but the binary tree is built from the bottom up to generate an optimal result. The algorithm to generate Huffman codes shares its first steps with Shannon-Fano:

1. Parse the input, counting the occurrence of each symbol.
2. Determine the probability of each symbol using the symbol count.
3. Sort the symbols by probability, with the most probable first.
4. Generate leaf nodes for each symbol, including their probability P, and add them to a queue.
5. While (Nodes in Queue > 1):
   • Remove the two lowest probability nodes from the queue.
   • Prepend 0 and 1 to the left and right nodes' codes, respectively.
   • Create a new node with value equal to the sum of the nodes' probabilities.
   • Assign the first node to the left branch and the second node to the right branch.
   • Add the node to the queue.
6. The last node remaining in the queue is the root of the Huffman tree.
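The Huffman procedure above maps naturally onto a priority queue. Here is a minimal Python sketch; it uses raw symbol counts as weights (which produces the same tree as using probabilities), and the dict-per-subtree representation is just a convenience for this illustration:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Steps 1-2: count symbols; counts serve directly as weights.
    freq = Counter(text)
    # Step 4: one leaf per symbol. The integer tiebreak keeps heap
    # comparisons from ever reaching the (unorderable) dict.
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    # Step 5: repeatedly merge the two lowest-weight nodes.
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        # Prepend 0 to every code in the left subtree, 1 to the right.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    # Step 6: the last node holds the finished code table.
    return heap[0][2]

codes = huffman_codes("this is an example of a huffman tree")
# Frequent symbols (like ' ') get shorter codes than rare ones (like 'x').
print(sorted(codes.items(), key=lambda kv: len(kv[1])))
```

The resulting code table is prefix-free, so a decoder can read the bit stream unambiguously.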
[10]

Arithmetic Coding

Arithmetic coding is arguably the most optimal entropy coding technique if the objective is the best compression ratio, since it usually achieves better results than Huffman Coding. It is, however, quite complicated compared to the other coding techniques. Rather than splitting the probabilities of symbols into a tree, arithmetic coding transforms the input data into a single rational number between 0 and 1 by changing the base and assigning a single value to each unique symbol from 0 up to the base. Then, it is further transformed into a fixed-point binary number which is the encoded result. The value can be decoded into the original output by changing the base from binary back to the original base and replacing the values with the symbols they correspond to.

A general algorithm to compute the arithmetic code is:

1. Calculate the number of unique symbols in the input. This number represents the base b (e.g. base 2 is binary) of the arithmetic code.
2. Assign values from 0 to b to each unique symbol in the order they appear.
3. Using the values from step 2, replace the symbols in the input with their codes.
4. Convert the result from step 3 from base b to a sufficiently long fixed-point binary number to preserve precision.
5. Record the length of the input string somewhere in the result as it is needed for decoding [11].

Here is an example of an encode operation, given the input "ABCDAABD":

1. Found 4 unique symbols in input, therefore base = 4. Length = 8.
2. Assigned values to symbols: A=0, B=1, C=2, D=3.
3. Replaced input with codes: "0.01230013[4]", where the leading 0 is not a symbol.
4. Convert "0.01230013[4]" from base 4 to base 2: "0.01101100000111[2]".
5. Result found. Note in result that input length is 8.

Assuming 7-bit ASCII codes, the input is 56 bits long, while its arithmetic coding is just 14 bits long, resulting in a very high compression ratio of 25%.
This example demonstrates how arithmetic coding compresses well when given a limited character set.

Compression Algorithms

Sliding Window Algorithms

Published in 1977, LZ77 is the algorithm that started it all. It introduced the concept of a 'sliding window' for the first time, which brought about significant improvements in compression ratio over more primitive algorithms. LZ77 maintains a dictionary using triples representing offset, run length, and a deviating character. The offset is how far from the start of the file a given phrase starts at, and the run length is how many characters past the offset are part of the phrase. The deviating character is just an indication that a new phrase was found, and that phrase is equal to the phrase from offset to offset+length plus the deviating character. The dictionary used changes dynamically based on the sliding window as the file is parsed. For example, the sliding window could be 64MB, which means that the dictionary will contain entries for the past 64MB of the input data.

Given an input "abbadabba" the output would look something like "abb(0,1,'d')(0,3,'a')" as in the example below:

│ Position │ Symbol │ Output      │
│ 0        │ a      │ a           │
│ 1        │ b      │ b           │
│ 2        │ b      │ b           │
│ 3        │ a      │ (0, 1, 'd') │
│ 4        │ d      │             │
│ 5        │ a      │ (0, 3, 'a') │
│ 6        │ b      │             │
│ 7        │ b      │             │
│ 8        │ a      │             │

While this substitution is slightly larger than the input, it generally achieves a smaller result given longer input data [3].

LZR is a modification of LZ77 invented by Michael Rodeh in 1981. The algorithm aims to be a linear time alternative to LZ77. However, the encoded pointers can point to any offset in the file, which means LZR consumes a considerable amount of memory. Combined with its poor compression ratio (LZ77 is often superior), it is an unfeasible variant [12][18].

DEFLATE was invented by Phil Katz in 1993 and is the basis for the majority of compression tasks today.
It simply combines an LZ77 or LZSS preprocessor with Huffman coding on the backend to achieve moderately compressed results in a short time.

DEFLATE64 is a proprietary extension of the DEFLATE algorithm which increases the dictionary size to 64kB (hence the name) and allows greater distance in the sliding window. It increases both performance and compression ratio compared to DEFLATE [19]. However, the proprietary nature of DEFLATE64 and its modest improvements over DEFLATE have led to limited adoption of the format. Open source algorithms such as LZMA are generally used instead.

The LZSS, or Lempel-Ziv-Storer-Szymanski algorithm was first published in 1982 by James Storer and Thomas Szymanski. LZSS improves on LZ77 in that it can detect whether a substitution will decrease the filesize or not. If no size reduction will be achieved, the input is left as a literal in the output. Otherwise, the section of the input is replaced with an (offset, length) pair where the offset is how many bytes from the start of the input and the length is how many characters to read from that position [21]. Another improvement over LZ77 comes from the elimination of the "next character", using just an offset-length pair.

Here is a brief example given the input " these theses", which yields " these(0,6)s" — a saving of just one byte, though considerably more is saved on larger inputs.

│ Index       │ 0 │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ 7 │ 8 │ 9 │ 10 │ 11 │ 12 │
│ Symbol      │   │ t │ h │ e │ s │ e │   │ t │ h │ e │ s  │ e  │ s  │
│ Substituted │   │ t │ h │ e │ s │ e │ ( │ 0 │ , │ 6 │ )  │ s  │    │

LZSS is still used in many popular archive formats, the best known of which is RAR. It is also sometimes used for network data compression.

LZH was developed in 1987 and it stands for "Lempel-Ziv Huffman." It is a variant of LZSS that utilizes Huffman coding to compress the pointers, resulting in slightly better compression. However, the improvements gained using Huffman coding are negligible and the compression is not worth the performance hit of using Huffman codes [12].
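The (offset, length) substitution used by LZSS above is easy to invert. Here is a minimal decoder sketch; the list-of-tokens input format is an assumption for illustration (a real LZSS stream bit-packs a literal/pair flag, the offset, and the length):

```python
def lzss_decode(tokens):
    """Decode a mix of literal characters and (offset, length) pairs,
    where offset is measured from the start of the output, as in the
    article's " these(0,6)s" example."""
    out = []
    for tok in tokens:
        if isinstance(tok, tuple):
            offset, length = tok
            # Copy `length` characters starting at `offset` in the output.
            for i in range(length):
                out.append(out[offset + i])
        else:
            out.append(tok)
    return "".join(out)

# " these" + a copy of 6 chars from position 0 + "s" -> " these theses"
print(repr(lzss_decode([" ", "t", "h", "e", "s", "e", (0, 6), "s"])))
```

Copying character by character (rather than slicing) also handles overlapping matches, where a pair refers to output it is itself producing.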
LZB was also developed in 1987, by Timothy Bell et al, as a variant of LZSS. Like LZH, LZB also aims to reduce the compressed file size by encoding the LZSS pointers more efficiently. The way it does this is by gradually increasing the size of the pointers as the sliding window grows larger. It can achieve higher compression than LZSS and LZH, but it is still rather slow compared to LZSS due to the extra encoding step for the pointers [12].

ROLZ stands for "Reduced Offset Lempel-Ziv" and its goal is to improve LZ77 compression by restricting the offset length to reduce the amount of data required to encode the offset-length pair. This derivative of LZ77 was first seen in 1991 in Ross Williams' LZRW4 algorithm. Other implementations include BALZ, QUAD, and RZM. Highly optimized ROLZ can achieve nearly the same compression ratios as LZMA; however, ROLZ suffers from a lack of popularity.

LZP stands for "Lempel-Ziv + Prediction." It is a special case of the ROLZ algorithm where the offset is reduced to 1. There are several variations using different techniques to achieve either faster operation or better compression ratios. LZP4 implements an arithmetic encoder to achieve the best compression ratio at the cost of speed [17].

Ross Williams created the LZRW1 algorithm in 1991, introducing the concept of Reduced-Offset Lempel-Ziv compression for the first time. LZRW1 can achieve high compression ratios while remaining very fast and efficient. Ross Williams also created several variants that improve on LZRW1, such as LZRW1-A, 2, 3, 3-A, and 4 [22].

Jeff Bonwick created his Lempel-Ziv Jeff Bonwick (LZJB) algorithm in 1998 for use in the Solaris Z File System (ZFS). It is considered a variant of the LZRW algorithm, specifically the LZRW1 variant, which is aimed at maximum compression speed. Since it is used in a file system, speed is especially important to ensure that disk operations are not bottlenecked by the compression algorithm.
The Lempel-Ziv-Stac algorithm was developed by Stac Electronics in 1994 for use in disk compression software. It is a modification to LZ77 which distinguishes between literal symbols in the output and offset-length pairs, in addition to removing the next encountered symbol. The LZS algorithm is functionally most similar to the LZSS algorithm [23].

The LZX algorithm was developed in 1995 by Jonathan Forbes and Tomi Poutanen for the Amiga computer. The X in LZX has no special meaning. Forbes sold the algorithm to Microsoft in 1996 and went to work for them, where it was further improved upon for use in Microsoft's cabinet (.CAB) format. This algorithm is also employed by Microsoft to compress Compressed HTML Help (CHM) files, Windows Imaging Format (WIM) files, and Xbox Live Avatars [24].

LZO was developed by Markus Oberhumer in 1996, whose development goal was fast compression and decompression. It allows for adjustable compression levels and requires only 64kB of additional memory for the highest compression level, while decompression only requires the input and output buffers. LZO functions very similarly to the LZSS algorithm but is optimized for speed rather than compression ratio [25].

The Lempel-Ziv Markov chain Algorithm was first published in 1998 with the release of the 7-Zip archiver for use in the .7z file format. It achieves better compression than bzip2, DEFLATE, and other algorithms in most cases. LZMA uses a chain of compression techniques to achieve its output. First, a modified LZ77 algorithm, which operates at a bitwise level rather than the traditional bytewise level, is used to parse the data. Then, the output of the LZ77 algorithm undergoes arithmetic coding. More techniques can be applied depending on the specific LZMA implementation. The result is considerably improved compression ratios over most other LZ variants, mainly due to the bitwise method of compression rather than bytewise [26].
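LZMA is just as easy to try out: Python's standard-library `lzma` module exposes it, which makes quick comparisons against zlib/DEFLATE on your own data straightforward; the sample input here is arbitrary:

```python
import lzma
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 200

lz = lzma.compress(data)
df = zlib.compress(data, level=9)

assert lzma.decompress(lz) == data  # lossless round trip
print("original:", len(data), " DEFLATE:", len(df), " LZMA:", len(lz))
```

On tiny inputs LZMA's container overhead can dominate, so meaningful comparisons need inputs of at least a few kilobytes.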
LZMA2 is an incremental improvement to the original LZMA algorithm, first introduced in 2009 [27] in an update to the 7-Zip archive software. LZMA2 improves the multithreading capabilities and thus performance of the LZMA algorithm, as well as better handling of incompressible data, resulting in slightly better compression.

Statistical Lempel-Ziv

Statistical Lempel-Ziv was a concept created by Dr. Sam Kwong and Yu Fan Ho in 2001. The basic principle it operates on is that a statistical analysis of the data can be combined with an LZ77-variant algorithm to further optimize what codes are stored in the dictionary [28].

Dictionary Algorithms

LZ78 was created by Lempel and Ziv in 1978, hence the abbreviation. Rather than using a sliding window to generate the dictionary, the input data is parsed entirely to generate a static, unchanging dictionary [4]. While parsing the file, the LZ78 algorithm adds each newly encountered character or string of characters to the dictionary. When it finds the same combination again, it appends the next character found to the entry and creates a new dictionary entry with the result. The output is stored in the format (dictionary_index, next_character), where an index of 0 indicates a new symbol. If the index is greater than zero, then it means that entry is equal to dictionary entry i plus the next character n.

An input such as "abbadabba" would generate the output "(0,a)(0,b)(2,a)(0,d)(1,b)(2,a)". You can see how this was derived in the following example:

│ Input            │ a     │ b     │ ba    │ d     │ ab    │ ba    │
│ Dictionary Index │ 1     │ 2     │ 3     │ 4     │ 5     │ 6     │
│ Output           │ (0,a) │ (0,b) │ (2,a) │ (0,d) │ (1,b) │ (2,a) │

LZW is the Lempel-Ziv-Welch algorithm, created in 1984 by Terry Welch. It is the most commonly used derivative of the LZ78 family, despite being heavily patent-encumbered. LZW improves on LZ78 in a similar way to LZSS; it removes redundant characters in the output and makes the output entirely out of pointers.
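Going back to LZ78, the parse shown in the table above can be reproduced with a short greedy sketch. One honest caveat: for the trailing "ba" a standard greedy parser emits (3, '') — since "ba" is already dictionary entry 3 — whereas the table re-emits (2,a); both decode to the same string.

```python
def lz78_encode(s):
    """Greedy LZ78 parse: emit (dictionary_index, next_char) pairs,
    with index 0 meaning 'new symbol'. A trailing phrase that fully
    matches a dictionary entry is emitted as (index, '')."""
    dictionary = {}          # phrase -> 1-based index
    output = []
    current = ""
    for c in s:
        if current + c in dictionary:
            current += c     # keep extending the longest known phrase
        else:
            output.append((dictionary.get(current, 0), c))
            dictionary[current + c] = len(dictionary) + 1
            current = ""
    if current:              # input ended mid-match
        output.append((dictionary[current], ""))
    return output

print(lz78_encode("abbadabba"))
# -> [(0, 'a'), (0, 'b'), (2, 'a'), (0, 'd'), (1, 'b'), (3, '')]
```

LZW, described next, keeps this same parse but pre-loads the dictionary so the output needs no explicit next-character field.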
LZW also includes every character in the dictionary before starting compression, and employs other tricks to improve compression, such as encoding the last character of every new phrase as the first character of the next phrase. LZW is commonly found in the Graphics Interchange Format, as well as in the early specifications of the ZIP format and other specialized applications. LZW is very fast, but achieves poor compression compared to most newer algorithms, and some algorithms are both faster and achieve better compression [12].

LZC, or Lempel-Ziv Compress, is a slight modification to the LZW algorithm used in the UNIX compress utility. The main difference between LZC and LZW is that LZC monitors the compression ratio of the output. Once the ratio crosses a certain threshold, the dictionary is discarded and rebuilt [12].

Lempel-Ziv Tischer (LZT) is a modification of LZC that, when the dictionary is full, deletes the least recently used phrase and replaces it with a new entry. There are some other incremental improvements, but neither LZC nor LZT is commonly used today [12].

Invented in 1984 by Victor Miller and Mark Wegman, the LZMW algorithm is quite similar to LZT in that it employs the least recently used phrase substitution strategy. However, rather than joining together similar entries in the dictionary, LZMW joins together the last two phrases encoded and stores the result as a new entry. As a result, the size of the dictionary can expand quite rapidly and LRUs must be discarded more frequently. LZMW generally achieves better compression than LZT; however, it is yet another algorithm that does not see much modern use [12].

LZAP was created in 1988 by James Storer as a modification to the LZMW algorithm. The AP stands for "all prefixes," in that rather than storing a single phrase in the dictionary each iteration, the dictionary stores every prefix.
For example, if the last phrase was "last" and the current phrase is "next", the dictionary would store "lastn", "lastne", "lastnex", and "lastnext" [31].

LZWL is a modification to the LZW algorithm created in 2006 that works with syllables rather than single characters. LZWL is designed to work better with certain datasets with many commonly occurring syllables, such as XML data. This type of algorithm is usually used with a preprocessor that decomposes the input data into syllables [30].

Matti Jakobsson published the LZJ algorithm in 1985 [33] and it is one of the only LZ78 algorithms that deviates from LZW. The algorithm works by storing every unique string in the previous text up to an arbitrary maximum length in the dictionary and assigning codes to each. When the dictionary is full, all entries that occurred only once are removed [12].

Non-dictionary Algorithms

bzip2 is an open source implementation of the Burrows-Wheeler Transform. Its operating principles are simple, yet they achieve a very good compromise between speed and compression ratio that makes the bzip2 format very popular in UNIX environments. First, a Run-Length Encoder is applied to the data. Next, the Burrows-Wheeler Transform is applied. Then, a move-to-front transform is applied with the intent of creating a large amount of identical symbols forming runs for use in yet another Run-Length Encoder. Finally, the result is Huffman coded and wrapped with a header. [32]

References

[1] Wolfram, Stephen. A New Kind of Science. Champaign, IL: Wolfram Media, 2002. 1069. Print.
[2] Ken Huffman. Profile: David A. Huffman, Scientific American, September 1991, pp. 54–58
[3] Ziv J., Lempel A., "A Universal Algorithm for Sequential Data Compression", IEEE Transactions on Information Theory, Vol. 23, No. 3.
[4] Ziv J., Lempel A., "Compression of Individual Sequences via Variable-Rate Coding", IEEE Transactions on Information Theory, Vol. 24, No. 5.
[5] USPTO Patent #4814746
[5] http://www.theregister.co.uk/1999/09/01/unisys_demands_5k_licence_fee/
[6] http://stephane.lesimple.fr/wiki/blog/lzop_vs_compress_vs_gzip_vs_bzip2_vs_lzma_vs_lzma2-xz_benchmark_reloaded
[7] http://www.msversus.org/archive/stac.html
[8] http://www.cs.tau.ac.il/~dcor/Graphics/adv-slides/entropy.pdf
[9] Shannon, C.E. (July 1948). "A Mathematical Theory of Communication". Bell System Technical Journal 27: 379–423.
[10] Huffman Coding
[11] Arithmetic Coding
[12] Modeling for compression
[13] comp.compression FAQ
[14] USPTO Patent #5051745
[15] Phil Katz' Death
[16] ARC Info
[17] LZP
[18] LZR
[19] DEFLATE64 benchmarks
[20] compression via substring enumeration
[21] LZSS
[22] http://www.ross.net/compression/
[23] LZS
[24] LZX Sold to Microsoft
[25] LZO Info
[26] LZMA
[27] LZMA2 Release Date
[28] Statistical LZ
[29] Burrows M and Wheeler D (1994), A block sorting lossless data compression algorithm, Technical Report 124, Digital Equipment Corporation
[30] LZWL
[31] David Salomon, Data Compression – The complete reference, 4th ed., page 212
[32] bzip2 manual
[33] LZJ
Failure of the GCH

What is the (currently known) consistency strength of the global failure of the GCH? I do not have access to the exact statement of the original Foreman-Woodin result. My searches seem to indicate that they used an assumption in the region of a supercompact, although I have seen comments stating that the result has been improved to require something in the region of a hypermeasurable. Is this correct? What is this exact upper bound? Thanks a lot.

set-theory gch

– Can't you just use Easton's theorem to have $2^\kappa = \kappa^{+++}$ or something like that? – Asaf Karagila Nov 3 '11 at 13:50
– Easton's theorem only works for regular cardinals $\kappa$. But the question is about global GCH. – Stefan Hoffelner Nov 3 '11 at 14:33
– Closely related. – Andres Caicedo Aug 3 '13 at 3:14

4 Answers

Answer 1 (accepted):

The following quotations are taken from Matthew Foreman and W. Hugh Woodin, "The generalized continuum hypothesis can fail everywhere," Ann. Math. 133 (1991), 1–35.

THEOREM. Let $\kappa$ be a supercompact cardinal with infinitely many inaccessible cardinals above $\kappa$. Then there is a partial ordering $\mathbf P$ such that in $V^{\mathbf P}$, $V_\kappa \models ZFC + \forall \lambda: 2^\lambda > \lambda^+$. In fact we can arrange by our choice of partial orderings that $V^{\mathbf P}\models$ $\kappa$ is $\beth_n(\kappa)$-supercompact.

Solovay has shown that if $\kappa$ is supercompact then $2^{\beth_\omega(\kappa)} = \beth_\omega(\kappa)^+$; hence this is near best possible. Woodin extended this result to get:

THEOREM (Woodin). If there is a supercompact cardinal then there is a model of ZFC in which $2^\kappa = \kappa^{++}$ for each cardinal $\kappa$.

The last sentence of the paper reads:

The second author has also reduced the consistency strength of "$ZFC + \forall\kappa: 2^\kappa > \kappa^+$" and "$ZFC + \forall\kappa: 2^\kappa = \kappa^{++}$" to that of a ${\mathscr P}^2(\kappa)$-hypermeasurable.

It's not clear to me if the proofs of the two theorems attributed to Woodin have ever been published.

– Timothy, the unpublished proofs are by now standard. You can get $ZFC+\forall\kappa(2^\kappa=\kappa^{+n})$ for any fixed natural $n>0$ from ${\mathcal P}^n(\kappa)$-hypermeasurables. The arguments in, for example, J. Cummings' paper (on violating SCH at all limits) or the many papers by Gitik and his students (including his very nice Handbook article) should explain how to fill in any missing details. – Andres Caicedo Nov 3 '11 at 19:12
– Thanks a lot for the answers and the references. So, by Andres' comment, we know the exact large cardinal assumption (hypermeasurability) leading to a full control of the degree of failure of GCH. Now, do we also know what kind of large cardinals can survive in these models where GCH fails? Moreover, what if we are generous enough to assume more than what is actually needed for global failure of GCH, e.g., say a supercompact. Can we then get larger cardinals together with GCH failure? – eiths Nov 3 '11 at 22:06
– SCH must hold above a strongly compact cardinal, so there can be no strong compactness in a model of global failure of GCH. – Monroe Eskew Nov 4 '11 at 20:19

Answer 2:

By work of Gitik-Mitchell a $(\kappa+2)$-strong cardinal $\kappa$ is required, and by work of Merimovich a $(\kappa+3)$-strong cardinal $\kappa$ (in fact a cardinal $\kappa$ with $o(\kappa)=\kappa^{++}+\kappa^{+}$) is enough. Gitik and Merimovich have a project to get the total failure of $GCH$ from optimal hypotheses. It is as yet incomplete. If I remember it correctly, it says something like this:

Theorem. The following are equiconsistent:
1. For any $\alpha$, there are stationarily many cardinals $\kappa$ with $o(\kappa)=\kappa^{++}+\alpha$;
2. GCH fails everywhere;
3. $\forall \lambda,\ 2^{\lambda}=\lambda^{++}$.

Answer 3:

It looks like the other answers only deal with upper bounds, so I thought I'd point out that by a result of Gitik, http://www.sciencedirect.com/science/article/pii/016800729190016F, $\exists \kappa\; o(\kappa) = \kappa^{++}$ is a lower bound for $\neg$SCH and therefore also for the global failure of GCH. (But we still haven't answered the question of the exact consistency strength of the latter. Is there a better lower bound out there?)

Answer 4:

See here: http://dx.doi.org/10.2178/jsl/1185803615
General Oceanography

This page is designed to provide a guide to a planned implementation of The Math You Need, When You Need It. It will change as the implementation proceeds at this institution. Please check back regularly for updates and more information.

General Oceanography at SUNY College at Oneonta

Implementor(s): Todd Ellis
Enrollment: 50-75
Anticipated Start Date: August 24, 2010 (Semester)

About my institution

SUNY Oneonta is a primarily undergraduate institution (PUI) located in the Central-Leatherstocking Region of New York State, nearly halfway between Binghamton and Albany. It serves roughly 6,000 students, many of whom are planning to become teachers (as a former normal school). SUNY Oneonta also has a strong Earth and Atmospheric Sciences department, with 100-200 majors at any one time.

Challenges to using math in this course

I believe that the challenges of offering mathematics within the General Oceanography course at SUNY Oneonta are as follows:

1. Too many students are taking general oceanography with the expectation that it will be a mathematically easier path, and as such, the students often are weaker in math skills;
2. Because my oceanography course is a large lecture course, it does not have time or space for labs that allow students to work on extended in-class exercises that utilize math skills. As a result, I tend toward qualitative discussions that lend themselves to "clicker questions", especially because I don't have an opportunity to learn where every student is with their math skills. I feel like this lack of quantitative skills up until now is a huge deficiency in the way I have structured the course.
3. I do not have a TA, so grading large numbers of homework exercises by hand is something I am trying to avoid (desperately!).
4. There is an enormous spread in the mathematical abilities in the students who take this course (see below).
More about your geoscience course

General Oceanography is the only Oceanography course currently offered at SUNY Oneonta. It has a prerequisite of Intro Meteorology, Intro Geology, or Intro Earth Science, each of which has different levels of mathematical skill development. It is a required course for nearly all of the Earth Science majors (maybe 30%) and the Earth Science education majors (70%) (adolescence education and elementary education). The education majors tend to have the least math in their background, while the geology and meteorology majors tend to have quite strong math backgrounds. Therefore, there is a large spread in mathematical skills, with limited opportunities to remediate students who lack certain skills or confidence in those skills.

General Oceanography is a 50-75 person class. The current pedagogical approach utilizes Angel as a CMS to disseminate lecture content, Twitter and Facebook to facilitate questions from students during and after class, and the use of Clickers to allow for formative assessment during class time itself (and to check attendance!). Think-Pair-Share exercises are planned for the coming semester as well.

Inclusion of quantitative content pre-TMYN

The course prior to TMYN does not explicitly use or train on mathematical skills. Some equations and many graphs do appear in the material, but I have not explicitly addressed skill development related to using these materials. It is my suspicion that my not explicitly addressing these skills may be putting some students at a disadvantage. I have not explicitly addressed mathematical questions during exams in the past either. It is fair to say that prior to TMYN, quantitative skills have simply not been addressed.

Which Math You Need Modules will/do you use in your course?
• Rates
• Hypsometric Diagrams
• Density
• Slopes
• Topographic Profile
• Rearranging Equations

Strategies for successfully implementing The Math You Need

My implementation strategy will be generally as follows:

1. Students will be assigned a module and asked to complete an assessment quiz on that material. If there is more than one module assigned, the assessment will include questions from all modules.
2. In the next lecture after the due date, there will be a Think-Pair-Share question or two on the topics from the preceding module(s). The answer will be provided via a multiple choice clicker question.
3. I will announce after the TPS exercise that, if students are still confused, they are welcome to review the module again and contact the instructor with further questions.

I plan to implement TMYN following this timeline:

• First day of class: Following the syllabus review and course overview, introduce TMYN by reviewing the Rates module as a class and by completing a two-question quiz with the entire class that shows how WAMAP works. Students will be expected to complete a pre-test on their own within 48 hours. Students will get full credit if they attempt the test, and zero if they do not take it. Rates will come up throughout the semester, but first during the plate tectonics section in Week 3.
• Week 1: Complete the Hypsometric Diagram module prior to the third lecture. I will introduce this figure in class, and it will come up primarily during the first few weeks of the semester.
• Week 4: Complete the Density module. These skills will be relevant to discussing the chemistry of water that week, and will then come up in the meteorology section and when talking about water masses and the general circulation.
• Week 7: Complete the Slopes and Topographic Profile modules prior to the meteorology unit.
I will also refer students back to places where we have talked about these topics before (bathymetry, plate tectonics, layers), and will continue to discuss them whenever contouring comes up.
• Week 9: Complete the Rearranging Equations module prior to discussing waves, where we manipulate equations to go between wavelength and wave speed and such.
• Midterm I, Midterm II, and the Final Exam will include a selection of questions using these skills (roughly 10% of the questions will likely be designed to test these mathematical skills).
• A post-test, identical in content to the pre-test, will be required during the last week of classes.

The assessments will be counted as 5% of the overall grade. The modules will all be 10-point assignments, and the pre- and post-test will be a 20-point assignment. The pre- and post-tests will be single-attempt quizzes, and the modules will allow up to 5 attempts per question.

Reflections and Results (after implementing)
Quantum Spacetime Points & General Covariance

I'm not arguing with you, I just don't completely understand the bolded sentence in your post and would like a little clarification as to what you are saying. Are you referring to the so-called Hole Argument? Are you saying that according to the (often cited) Hole Argument, diffeo invariance implies that spacetime points don't have physical existence?

It's settled already. Wiki says: "It [the hole argument] is incorrectly interpreted by some philosophers as an argument against manifold substantialism, a doctrine that the manifold of events in spacetime are a 'substance' which exists independently of the matter within it. Physicists disagree with this interpretation, and view the argument as a confusion about gauge invariance and gauge fixing instead."

PeterDonis said: "In other words, the hole argument does not show that general covariance is inconsistent with spacetime being a 'real thing'. All it shows is that GR is a gauge theory."

So it doesn't have to do with the Hole Argument. Unless you want to say it does? What is your comment on the claim that "general covariance and diffeomorphism invariance require that the spacetime points are not composed of any substance or something that can be tracked in time"? Do you agree or disagree, and why? Thanks.
C3 identity issue!

December 11th 2012, 12:36 PM #1
Can someone please direct me in the correct identities to use here, really stuck! Given that $5\sec^2 x + 3\tan^2 x = 9$, find the possible values of $\sin x$.

December 11th 2012, 01:07 PM #2
Re: C3 identity issue!
Multiply through by $\cos^2(x)$ then apply the Pythagorean identity to get a quadratic in $\sin^2(x)$.

December 11th 2012, 01:12 PM #3
Re: C3 identity issue!
Knowing that $\tan^2 x + 1 = \sec^2 x$, $\sec^2 x = 1/\cos^2 x$ and also $\sin^2 x + \cos^2 x = 1$ is what is required to answer this question. More specifically, begin by getting everything in terms of $\sin x$ and $\cos x$ and hopefully it will become much easier to manipulate.
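Putting the hints above together, one worked route (a sketch completing the replies, not part of the original thread): multiply through by $\cos^2 x$, so that $5\sec^2 x + 3\tan^2 x = 9$ becomes

$$5 + 3\sin^2 x = 9\cos^2 x = 9(1 - \sin^2 x)$$

Collecting terms gives $12\sin^2 x = 4$, so $\sin^2 x = \frac{1}{3}$, and the possible values are $\sin x = \pm\frac{1}{\sqrt{3}} = \pm\frac{\sqrt{3}}{3}$.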
Changing Gears 2012: maths are creative, maths are not arithmetic

I am really tired of schools chasing students away from mathematics. And I am really tired of schools confusing arithmetic - a mechanical grammar of numbers - with the field of mathematics. We need kids to get interested in maths. Mathematics is something essential in our society, and in our future, and we just cannot afford to continue to chase the bulk of kids away from the possibilities which come with math skills, interests, and capabilities. We cannot continue either to allow the assumption that there are "math kids" and "non-math kids" (as if math is a magical gift), or to separate students into "creative types" and "math types." Because the "next world" - the jobs of the future which are, as we speak, constructing the world we will live in tomorrow - is being built by the creative mathematicians of the world, and our students will either be part of that, or they won't. They will be able to develop their own solutions and have power over their world, or they will be helpless consumers locked into their "App Store Education" (a Will Richardson must-read) and "App Store Existence." They will be participants or bystanders. And largely, that is for us, as educators, to decide.

Let's begin here, with a quote from the end of the clip above: "Adding in clock notation, all of computer science begins when you say 1+1 = 0. It's not that you were wrong when you said 1+1 = 2, it's just a different way of seeing it."

If this is over your head in some way, and you teach math in school, we need to talk. We need to talk now, because step eight of Changing Gears 2012 is re-understanding what mathematics are, and how we bring kids and mathematics together. Two primary issues lead to the bigger ones, no matter what age kids you teach. First, maths are creative, they are imaginative, they are powerful, and they are fascinating.
Second, arithmetic cannot continue to be your gateway, your filter, blocking children from mathematics.

For the first, what have you done with Fibonacci lately, just as a first question? How does a maths idea shape how students perceive the spaces they are in? For the second, well, let's go back a month to a post I wrote:

"In a math lesson a day later I watched a seventh grader, a kid who really struggled to divide 64 by 2 in his head, or 32 by 2, or, for that matter, 16 by 2, work diligently to explain to his disbelieving teacher how he knew - and he knew instantly - how many games are in the NCAA basketball tournament. He knew, because math is about rules and logic, and his logic was perfect and his understanding of the rules I had described was perfect, and because math is not arithmetic, no matter how much our poorly educated national and state leaders think it is. He and his classmates also understood, almost instantly, that the question - no calculators or paper or Google allowed - 'If the temperature in Detroit, Michigan is 50 degrees, what is the temperature likely to be in Windsor, Ontario?' was about (a) culture, and then (b) understanding comparable scales, and then (c) order of operations."
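That seventh grader's answer falls right out of the rules: in single elimination, every game eliminates exactly one team, so a field of 64 needs exactly 63 games. A few lines of code make the logic visible (a sketch assuming the classic 64-team bracket; the code is mine, not the student's):

```python
def games_needed(teams):
    """Single elimination: each game eliminates exactly one team,
    so play continues until a single champion remains."""
    games = 0
    while teams > 1:
        games += teams // 2   # every round, half the field plays...
        teams -= teams // 2   # ...and the losing half goes home
    return games

print(games_needed(64))  # 63: one game per eliminated team
```

The loop also quietly handles odd-sized fields with byes, which is the kind of rule tweak the rest of this post is about.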
"For a cook, 2 apples + 2 apples might well accurately equal 5 apples if those 4 apples are larger than normal. The mathematician would argue that 2 large apples + 2 large apples must equal 4 large apples. Correct. That’s the mathematical axiom Jon Richfield is talking about. The trouble is, in reality no apple is the same size as another, so the mathematician’s axiom is limited somewhat to arithmetic theory. So why should mathematicians get the final say? The cook’s application is commonsensical and thus more accurate and fair, so in real life 2 + 2 doesn’t always equal 4. Using the equation 2 + 2 = 5, the apple pie turns out normally, as intended. Nothing meaningless about that." 10 candies? Can these be evenly divided in half? Or are these all completely different things? And here we have established the arithmetic conundrum which pulls kids away from mathematics. It should never be taught in a reductionist form which removes the possibility of creativity. Every child knows that not every apple, every piece of cake (even if the same size you have those differences in frostings), every student, is the same (a fact those who work in quantitative educational statistics have been trained to forget). Thus, the question about two apples plus two apples, as suggested above, becomes one we can argue and debate, even with five-year-olds. That is not a path to nowhere, it is, rather, the path to understanding, and to bringing students into mathematics. We have to help them learn that mathematics is a set of systems which we can apply when helpful, or rethink and re-imagine if not helpful. A long time ago I wrote a piece called "Real World Math" and one of the things I talked about was why I loved sport statistics in school maths. You cannot compute a batting average in baseball without knowing the rules about what an "At Bat" is and how that differs from a "Plate Appearance." You need to know the difference, in football, between a "Shot" and a "Shot on Goal." 
You need to know, in American football, how a quarterback "sack" is counted in "run yards" even if that quarterback was tackled while running forward. So these statistics do not just connect maths to a kid's interests, they explain how mathematical systems work, and how a slight change in the rules which govern that system, would change the answer. At the grocery, sometimes 2+2=4, sometimes not... Here we go... A particularly vexing problem is comparing players from different eras. One complicating factor is that the baseball rule book has changed every year since the first rule book for the National League was issued in 1877. For example, did you know that prior to the 1930 American League season, and prior to the 1931 National League season, fly balls that bounced over or through the outfield fence were home runs! All batted balls that cleared or went through the fence on the fly or that were hit more than 250 feet in the air and cleared or went through the fence after a bounce in fair territory were counted as home runs. After the rule change the batter was awarded second base and these were called "automatic doubles" (ground-rule doubles are ballpark-specific rules) and are covered by rule 6.09(d)-(h) in the MLB Rule Book." Change the rules, change the results. Could you add fruits as 2+2? Or just the same kind of fruit? Three clouds; three pebbles; three goats; three thoughts; three olives; when you take away clouds, pebbles, goats, thoughts, olives, then what do you have left? The concept of threeness! Each such ....ness is an integer..." But an "integer" is an idea, it is a "construct," which students should learn to decide is either useful or not useful. Do we count "the number of people on the earth" (US Census is at odds with other counters) or measure the cumulative carbon footprint? (and what system of maths do we use to do that?). Toss this into the mix... "three clouds"? the sky is full of water vapor, where does one cloud start and another stop? 
Is a three day old pygmy goat the same as an adult mountain goat? This "integer" idea, "threeness," what does it mean and how can we use it? New York's Polo Grounds, an interesting field made for interesting stats. Now, how many home runs did Babe Ruth hit? How many home runs did Lou Gehrig hit? But wait, With the exception of a couple of months at the start of the 1920 season, from 1906 to 1930 the foul lines were "infinitely long": A fly ball over the fence had to land in fair territory (as determined by the infinitely long foul lines), or be fair when last seen by the umpire, in order to be a home run. In other words, a fly ball that went over the fence in fair territory but "hooked" around the foul pole (if there was a foul pole) was ruled a foul ball." How many home runs did Babe Ruth hit? How many home runs did Lou Gehrig hit? So, the rules matter, and the rules are changeable - assuming you can make the right argument. And this is creative magic which infiltrates everything, from the music you listen to to how that classroom window frames the world beyond. Years ago I taught an Intro to Architecture course at Pratt Institute . I'd take my students to the corner of 53rd Street and Park Avenue in Manhattan, and we'd look. To the southwest was Charles McKim 's 1916 Racquet and Tennis Club , to the northwest the 1952 Gordon Bunshaft Lever House , to the southeast Mies van der Rohe 's incredible 1958 Seagram's Building - three absolute architectural masterpieces. The fourth corner, the northwest, is occupied by " 399 Park Avenue ," a 1961 structure by Carson Lundin, Kahn and Jacobs. It is an awful building, by just about anyone's standards. We'd spend a long time standing on that corner trying to figure this out, and eventually, we'd get to maths and ratios and Fibonacci and the Golden Mean. 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144... 
Because it is that concept of ratio - embraced in three of those structures, ignored in the fourth - that so much of human comfort with proportion occurs Europeans called it, " the divine proportion ." Why? well, here you would seem to have a year long project which might carry your students anywhere and everywhere in mathematics. Architects, artists, Wall Street traders, even, yeah, that student ID card or credit card in your pocket... What makes this "the divine proportion"? OK, that's one route. Another, hinted at near the top, is Coding. Coding is not just that mix of logic and creativity which is essential to maths, but it has a "real" feel. You don't get things "right" or "wrong," they work or they don't work. Bring coding into your classrooms. Here's one simple free tool called which we have on our MITS Freedom Sticks . But better, take a look at student coding efforts around the world, from Mozilla's to Ireland's Scratch for Kinect which are all bringing kids into this in a big way. Stephen Howell (above) says, this is very different than working in that Steve Jobs iOS world where you buy the solutions you need in life. This is using the heart of mathematics to build your own world. Starting simply, kids get interested, they gain competence, they dig behind the curtains - something Jobs and Apple have never permitted - and they move deeper and deeper into what, eventually, begins to look like a much more engaging version of our curriculum. Eventually those Scratch programming kids will be building their own Lego blocks, teaching each other how to do it, challenging each other, and, yes, becoming the builders of that "next world." So please, take the way you currently teach mathematics apart. Become a mathematics educator instead of a curriculum teacher. It might make all the difference in the future of your students. - Ira Socol 2 comments: pamobriensblog said... Great post as always Ira. It has given me plenty to think about. 
I have been using coding in my Maths class over the last while with great success and have found that it has helped my Computer Science students connect with Math in a way that I haven't seen before. Miss Shuganah said... Math was something for me to get past instead of appreciating. If I had been taught math in a way that wasn't drudgery maybe today I would have been a physicist as well as a poet. I hate it when schools implicitly or explicitly categorize kids as one kind of kid or another. In high school I took a go at your own pace Algebra class. The Math teachers were actually good, ie, they knew their stuff. But not one of them took me aside and said, "whoa, this isn't a race," or maybe I would have better than a "D." I realize now, in teaching our younger daughter math that I really can do problems in my head. I do not want her feeling the same way I did. Math as a chore. When I went to college, my family pressured me to be a Business major. I took Business Calculus Much to my surprise, I got a C. Why then would I get a D in Algebra and a C in Calculus? That, to me, pardon the pun, never added up. I do think that K-12 teachers owe it to students like me to do a better job at engaging us. Who knows? Maybe I would have studied optics, which is something that enthralled me when I was in high school instead of being yet another English major.
{"url":"http://speedchange.blogspot.com/2012/01/changing-gears-2012-maths-are-creative.html","timestamp":"2014-04-20T15:53:42Z","content_type":null,"content_length":"253530","record_id":"<urn:uuid:b230e2df-8f38-4317-af61-0788023f1d4f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
General Terms

Please read carefully before you start trading.

Annex A General Terms

1 Client's Responsibility

1.1 It is the Client's responsibility to verify that all Transactions and Services received by him are not contradictory to any applicable law and to undertake any other legal duty emanating from the use of the Site. The Client holds sole liability for all Transactions in his Account, including all card Transactions or other means of deposit and withdrawal Transactions (as stated below). The Client is responsible for securing his Username and Password for his Account. The Client holds sole responsibility for any damage caused due to any act or omission of the Client causing inappropriate or aberrant use of his Account.

1.2 It is manifestly stated and agreed by the Client that the Client bears sole responsibility for any decision made and/or to be made by the Client relying on the content of the Site, and no claim and/or suit of any kind will arise to that effect against the Company and/or its directors and/or employees and/or functionaries and/or agents ("the Company and/or its Agents"). The Company and/or its Agents will hold no responsibility for loss of profits due to and/or related to the Site, Transactions carried out by the Client, Services and the Terms of Use, or any other damages, including special damages and/or indirect damages or circumstantial damages caused, except in the event of malicious acts made by the Company.

1.3 Without limitation of the aforesaid, and only in the event of a peremptory judgment by a court or other authorized legal institution adjudicating that the Company and/or its Agents hold liability towards the Client or a third party, the Company's liability, in any event, will be limited to the amount of money deposited and/or transferred by the Client to the Account in respect of the Transaction which caused the liability of the Company and/or its Agents (if such was caused).
2 Risk Disclosure

2.1 Trading in financial markets in general, and purchasing binary options in particular, is speculative and involves an extremely high risk. It is manifestly stated by the Client that he fully understands that minor differences in market prices may occur in ultra-short time periods and may cause high profits or losses on the binary options acquired by the Client, as high as total loss of the Client's entire investment, all in a short time period. Furthermore, it is stated by the Client that the Client fully understands that there is no existing method that can assure profits from Transactions in financial markets, including binary options. Binary options trading should therefore be carried out only with "risk capital", defined as funds that are not necessary to the Client's survival or well-being. It is the Client's responsibility to consider whether binary options trading is suitable to the Client's financial position and investment objectives. If the Client does not thoroughly understand the risks involved in binary options trading or anyoption's various trading rules and policies, the Client is hereby instructed not to utilize, or to stop utilizing, anyoption's services on the Site.

2.2 By registering to the Site and carrying out each Transaction, the Client hereby approves reading, understanding and being fully aware of the following:

2.2.1 The type of Transactions offered on the Site may be considered special risk transactions, and carrying them out might involve a high level of risk.

2.2.2 The Client has full information and knowledge regarding options trading, including binary options trading, and the risks involved in options Transactions in general and binary options Transactions in particular.
Carrying out Transactions is at the Client's sole discretion; the Client hereby undertakes the risks involved in such Transactions and has the financial capability to finance them.

2.2.3 In the event of purchasing binary options, the Client might expose himself to considerable loss of the invested money, or even to total loss of the Client's entire investment.

2.2.4 The Client has read the terms of trading and purchasing binary options prior to such trading and purchasing and fully understands the consequences and results of success or failure.

2.2.5 The Client knows that an incorrect investment may cause considerable loss.

2.2.6 The Client knows that a binary option's lifetime may be as short as a few minutes.

2.3 The use of the Site and anyoption's services is solely designated for sophisticated users with the ability to sustain swift losses, up to total loss of the entire invested money. The Client is solely responsible for carefully considering whether such Transactions suit him and his purposes, taking into consideration his resources and personal circumstances, and for understanding the implications of Transactions carried out by him. It is highly recommended that the Client consult with tax experts and legal advisors before trading binary options and carrying out any Transactions.

3 Financial Information

3.1 The manner of calculating the Transactions' expiration rates of the indexes, stocks, currencies and commodities offered on the Site is detailed in Annex D to this Agreement, as updated from time to time. The offered assets and the manner of calculation may change from time to time at the Company's sole discretion. The Client undertakes to stay continuously updated on the assets and the aforesaid manner of calculation.

3.2 It is clear that the financial information on the Site (e.g.
exchange rates, indexes and commodities) might be inaccurate, not updated and erroneous, and the Company will hold no responsibility for losses and/or possible loss of income caused to the Client or to a third party due to reliance on the aforesaid information.

3.3 The Company is not obliged to continuously update the financial information on the Site and may remove the data from the Site from time to time, with no time limit and without advance notice.

3.4 It is the Client's duty to verify the reliability of the information on the Site and its appropriateness to his needs in comparison with other reliable sources of information. The Company will hold no responsibility for any alleged claim, cost, loss or damage of any kind resulting from information presented on the Site or from information the Site referred to.

3.5 The Client approves and agrees that any oral information given to him, if given, in respect of his Account might be incomplete and unconfirmed, and any reliance on the aforesaid information is at the Client's sole risk and responsibility. The Company does not give any warranty, explicit or general, that pricing or other information supplied by the Company through Internet trading software, through phone or through any other form is correct, or that it reflects current market conditions.

4 Trading Rescission

Trading on the Site, or on part of it, may be rescinded with no advance notice. The Client will have no claim or right of indemnification for damages allegedly caused by trading rescission, whether for Transactions carried out or for Transactions allegedly intended to be carried out.

5 Limited Liability

5.1 The Company undertakes to supply steady Services on the Site.
Notwithstanding the above, the Company does not guarantee that Services will not be interrupted, will be supplied steadily with no rescission, will be safe and error-free, and will be immune from unauthorized access to the Site's servers or from damages, malfunctions or failures in hardware, software, communication lines and systems, in the suppliers of continuous trading data, in the Company, in the Client's computers or in the Company's suppliers or its agents.

5.2 Supplying Services by the Company depends, inter alia, on third parties, and the Company bears no responsibility for any actions or omissions of third parties and bears no responsibility for any damage and/or loss and/or expense caused to the Client and/or a third party as a result of and/or in relation to any aforesaid action or omission.

5.3 The Company will bear no responsibility for any damage of any kind allegedly caused to the Client which involves force majeure or any other external event beyond the Company's control which has influenced the Services and trading on the Site.

5.4 Under no circumstances will the Company or its Agents hold responsibility for direct or indirect damage, punitive damages, incidental, special or consequential damage and/or any other damage of any kind including, without limitation of the aforesaid, damages due to loss of use, loss of data or loss of profits, arising from or related in any way to the carrying out of Transactions or the use of Services, or for delay in the use of Services or incapability of carrying out Transactions or using Services, or for unprovided Services or any information, software, product, Service and additional graphics obtained through the Services, or arising from any other manner of use of the Services, whether in contract or in tort, whether by absolute liability and/or any other cause, even if the Company or its Agents had been notified of the possibility of the aforesaid damages.
6 Company's Privileges in Client's Accounts

6.1 The Client agrees that (a) if he breaches any of his obligations according to the Terms of Use and/or the Agreement; (b) if he is insolvent or bankrupt or in a procedure of bankruptcy, reorganization, insolvency or any equivalent procedure; or (c) if the Company, at its sole discretion, finds it necessary in order to defend itself, the Company may, at any time and with no prior notice to the Client: (1) terminate, cancel and/or close all or part of the Transactions between the Company and the Client and calculate the damages caused to the Company as it sees fit; (2) pledge, transfer or sell the balance and/or securities in the Client's account(s); or (3) perform any action which the Company, at its sole discretion, sees fit to cure the breach. The Company may perform the aforesaid actions without prior notice.

6.2 The Client confirms and agrees that the Company might impose restrictions on trade, payments, Services or any other restriction on the Account if required to by law, including, without limitation, by court order, tax authority, enforcement authorities and any other official authority requirement. The Client agrees that the Company might be required to pay off or block amounts of money existing in the Client's Account to fulfill requirements of the aforesaid authorities, and the Client will have no right, claim or demand against the Company in respect of losses caused in his Account due to any such action, and the Client undertakes to indemnify the Company upon the Company's first notice for any damage caused by the Company's aforesaid action.
6.4 The Company cannot accept request to modify or cancel an order received from the Client and the Client will be bound to the original order and its consequences 6.5 Client will not assume that a specific order was executed unless Client received an official report from the Company which approves the execution of the specific order. It is Client's responsibility to verify the status of the pending orders prior to carrying out other orders. Client will bear responsibility if an order(s) is identical to a pending order(s) also in the event that the additional order(s) is causing negative balance in his Account. 6.6 Client is responsible for reading order confirmation and Account reports delivered by the Company either by email or mail or in any other form, immediately following receipt by the Client. It is recommended to use the 'print' option available on the Site. Company shall consider the reports accurate unless the Client objected within three (3) business days. In the event of such objection, the Company has the right to determine the validity of the objection. 7 Various Means of Deposits and Withdrawals. 7.1 According to the requirements of anti-money laundering laws and regulations, when performing a deposit by way of bank transfer, Client is required to use a single bank account located in Client's state of domicile and registered on Client's name. Client has to deliver an official confirmation of transfer stating full details of the transferring bank account and verify that the deposit order is carried out according to the Company's requirements as to the identification details of the Client and his Account. Lack of aforesaid confirmation or incompatibility between the account details and the Client's details in his Account on the Site might cause a rejection or loss of the request, transfer to a wrong account or recall of the deposit amount to the transferring bank and eventually to the annulment of the deposit order. 
Any withdrawal from the Account on the Site, carried out (if at all) by bank transfer, will be transferred solely to the bank account the deposit was originally received from. 7.2 If and as far as the Company allows payment by other means of payment (e.g. internet transfer services): when carrying out deposits by means which are not credit cards and/or bank accounts, the Client hereby agrees and confirms he is obliged and committed to the regulations and rules applying to such services, as determined in their terms of use (if any) and in applicable law, which may include, inter alia, commissions and other restrictions. The Company, at its sole discretion, might carry out withdrawal orders by alternative means to those which were used in the original deposit order, subject to anti-money laundering laws and regulations. 7.3 The Company will pay the Client profits from his Account on the Site by way of bank wire or transfer to the Client's credit card account that was used when the deposit was made, following Client's withdrawal order according to the instructions presented on the Site. The Company will make its best effort to pay the Client his/her profits in accordance with Client's chosen means. Without prejudice to the aforesaid, the Company reserves the right to pay the Client his/her profits by different means in accordance with Client's type of credit card and/or the Company's internal regulations. Clients who executed the deposit with a 'Visa' card will be allowed to carry out profit withdrawals solely to the credit card account and not by way of checks. Credit card withdrawals shall be performed at times and according to the procedures of the credit card company. 7.4 While a withdrawal request is pending (meaning the Client has not received withdrawal confirmation from the Company), Client may order a stop withdrawal, according to the instructions presented on the Site, and recall the money to his Account on the Site.
Executing a stop withdrawal will make the amounts meant to be withdrawn immediately available in the Account on the Site. Client agrees and confirms that if the withdrawal request is completed (meaning the Client has received withdrawal confirmation from the Company), a stop withdrawal request is not possible. 7.5 Subject to the aforesaid, if the Client had requested to carry out two or more withdrawals and then requested to carry out a stop withdrawal, he may first cancel the former withdrawal request. After stopping one withdrawal, Client may stop the following one and so on. 7.6 If a Client requests to withdraw funds from his account but does not complete the Withdrawal Process within 7 days, the requested withdrawal amount will be refunded back to the Client's account. The Client will be notified by email 30 days before the funds are refunded back to the account. 8 SPECIAL OFFERS, BENEFITS AND BONUSES 8.1 Bonuses and benefits shall be credited to the client's account subject to compliance with the terms of the offer made to the client, e.g. making minimum deposits and/or purchasing a minimum amount of options within a specified time period. 8.2 Unless stated otherwise in the terms of the offer, a precondition for making withdrawals after using the bonus/benefit is to buy options of 15 times the amount of the bonus/benefit. 8.3 The Company urges its clients to take part in the offers, but to refrain from abusing them. Abusing any of the offers could lead to cancellation of the bonus/benefit and closure of the client's account on the Company's website. 8.4 The bonuses/benefits must be used within the period defined in the details of the special offer. Should the bonus/benefit fail to be used within this time frame, the bonus/benefit will be withdrawn from the client's account. 8.5 Once receiving the bonuses/benefits, the client has 3 (three) calendar months to complete the total amount of investments required as a precondition for withdrawal.
If the required total amount of investments is not reached during this time period, the Company will withdraw the bonus/benefit funds from the client's account at its discretion. 8.6 The Company reserves the right to revoke the bonus/benefit should the special offer be abused and/or should the offer's terms fail to be met. The Company's decision - should this be the case - shall be final. The Company reserves the right to revoke or change the offers at any time without prior notice. 9 ROLL FORWARD 9.1 From time to time the Company will offer to selected customers, at its sole discretion, the possibility of postponing the expiry time of a purchased option to the nearest available expiry time following the original expiry time. This opportunity will be offered for a limited time at the Company's discretion. 9.2 A customer who is offered a Roll Forward opportunity can postpone the expiry time of his purchased binary option to the nearest available expiry time following the original expiry time. This is in exchange for a premium set and known in advance and displayed on the site. 9.3 The premium for Roll Forward will be debited immediately from the customer's account. 9.4 Apart from the change in expiry time, all the other option terms will remain the same, including the cost of the option, the type of option (Call or Put) and the return rate - all will not change. 10 TAKE PROFIT 10.1 From time to time the Company will offer customers who meet certain criteria, as set in advance by the Company, the opportunity to immediately Take Profit on options. This opportunity will be offered for a limited time at the Company's discretion. 10.2 A customer who is offered a Take Profit opportunity can take, before the original expiry time of the option, the guaranteed return rate on the option (Call or Put) which was originally purchased. This is in exchange for a premium set and known in advance and displayed on the site.
10.3 A customer who buys the Take Profit offer will immediately receive the guaranteed return rate of the option (Call or Put) in his account, regardless of the expiry level of the option at its expiry time. 10.4 The premium for Take Profit will be debited immediately from the customer's account. The customer will then be credited immediately for the option's return (Call or Put), before the option's original expiry time. 11 Other derivatives 11.1 OPTION+ 11.1.1 OPTION+ is a unique option trading arena within our platform. Unlike regular binary options, on the OPTION+ page a Client can buy options with a fixed return and expiry time, BUT at any given moment sell them back to anyoption™, whether they are in the money or out of the money (a real-time visualization of the option's distance from the market will assist them). Alternatively, a Client can keep the option until its original expiry time, and then it is settled like our regular binary options. Clients can buy and sell as many options as they want, as long as their balance is sufficient. This is how it works: once an option has been purchased, the Client can at any stage (until the "waiting for expiry" status appears) request a quote from anyoption™ to buy the option back from them. After clicking "Get quote", the price we show the Client will be valid for four seconds. Expiry times and expiry level formulas may be different from our regular binary options. Please read the terms carefully. The commission charged for purchasing an OPTION+ is a flat USD/EUR/GBP 0.5. The commission will automatically be added to the purchase. 11.1.2 By clicking "Sell", the funds will be transferred immediately to the client's balance. If the quote is not attractive enough, the client can waive the proposal and ask for an updated quote by clicking "Get quote" time and time again, or wait for the original expiry.
11.1.3 When a client trades OPTION+, 50% of the investments' turnover will be counted toward the standing terms of bonuses or other benefits. By way of example - if the Client invests $2,000 trading OPTION+, then for meeting the criteria of bonuses or other benefits, the Client will be credited with $1,000. 11.1.4 The bonuses of "Risk free option", "Increased refund", "Increased return", and "Increased refund and return" cannot be used on OPTION+. 11.2 One Touch 11.2.1 The following general conditions for One Touch options supplement the general conditions of the site and do not override them. 11.2.2 Option description (general): If, according to the sampling rate specified below, the price of the underlying asset reaches the predetermined level in the period between the date of the purchase of the option and its expiration date, the client becomes eligible to receive the promised payout at the time of the expiration. It is emphasized that in order for the client to receive the payout, the price of the underlying asset according to the sampling rate needs to reach or surpass the predetermined level only once during the lifetime of the option. In the event that the price of the underlying asset does not reach the predetermined level, the client will not receive any refund and will lose the entire amount of his investment. Therefore, the amount of profit or loss in this option is preset and known ahead of time. The option may be purchased in units only, at unit prices specified on the site. The options will be sampled over a period of five days, Monday through Friday, once a day in accordance with the method for calculating the determining price specified in the paragraphs below. Whenever sampling prices are not published five times during the week, the number of samplings will be reduced accordingly. 11.2.3 Sampling rate USD/JPY: the rate quoted by Reuters once a day at 16:00 (GMT).
Reuters quote source: JPYH= 11.2.4 Sampling rate EUR/USD: The daily sampling rate published by the European Central Bank (ECB) once a day. Reuters quote source: ECBREF. 11.2.5 Sampling rate Dow Jones: The closing level quoted by Reuters once a day. Reuters quote source: .DJI TRDPRC_1 (Closing level). 11.2.6 Sampling rate Oil: The NYMEX (New York Mercantile Exchange) price quoted by Reuters once a day at 19:00 (GMT). Reuters quote source: Clv1 TRDPRC_1 (Last Value). 11.2.7 Sampling rate Gold: The CME (Chicago Mercantile Exchange) price quoted by Reuters once a day at 19:00 (GMT). Reuters quote source: GCv1 TRDPRC_1 (Last Value). 11.2.8 Sampling rate FTSE 100: The closing level quoted by Reuters once a day. Reuters quote source: .FTSE TRDPRC_1 (Closing level). 11.2.9 Sampling rate Intel: The closing level quoted by Reuters once a day. Reuters quote source: INTC.O TRDPRC_1 (Closing level). 11.2.10 Sampling rate DAX: The closing level quoted by Reuters once a day. Reuters quote source: .GDAXI TRDPRC_1 (Closing level). 11.2.11 Sampling rate Walt Disney: The closing level quoted by Reuters once a day. Reuters quote source: DIS TRDPRC_1 (Closing level). 11.2.12 Sampling rate Microsoft: The closing level quoted by Reuters once a day. Reuters quote source: MSFT.O TRDPRC_1 (Closing level). 11.2.13 Sampling rate BoA (Bank Of America): The closing level quoted by Reuters once a day. Reuters quote source: BAC TRDPRC_1 (Closing level). 11.2.14 Sampling rate IBEX35: The closing level quoted by Reuters once a day. Reuters quote source: .IBEX TRDPRC_1 (Closing level). 11.2.15 Sampling rate Facebook: The closing level quoted by Reuters once a day. Reuters quote source: FB.O TRDPRC_1 (Closing level). 11.2.16 The promised payout will be transferred to the client's account on the expiration date of the option, even if the terms of the options have been realized before the end of the period. 
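As a non-binding illustration (not part of the Agreement), the One Touch settlement logic in clause 11.2.2 can be sketched as follows: the option becomes eligible for the promised payout if any daily sample reaches or surpasses the predetermined level at least once during the option's lifetime, and otherwise the entire investment is lost. The function name and all figures below are hypothetical:

```python
# One Touch settlement sketch (clause 11.2.2): the option pays out if any
# daily sampled price reaches or surpasses the predetermined level at least
# once during the option's lifetime; otherwise nothing is credited and the
# investment is lost. Hypothetical sketch only, not the Company's code.

def one_touch_settlement(samples, level, payout_per_unit, units, touch_from_below=True):
    """Amount credited at expiry for a One Touch position.

    samples: the daily sampling prices over the option's lifetime
    level:   the predetermined touch level
    touch_from_below: True if the level must be reached or surpassed upward
    """
    if touch_from_below:
        touched = any(price >= level for price in samples)
    else:
        touched = any(price <= level for price in samples)
    return payout_per_unit * units if touched else 0

# Example: five daily samples, touch level 110.50, one unit paying 380.
samples = [109.80, 110.10, 110.55, 110.20, 109.90]
print(one_touch_settlement(samples, 110.50, 380, 1))  # 380: level touched once
```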
11.2.17 Maximum investment amount: The maximum investment amount per client per option is 1000 USD or the equivalent in other currencies. 11.2.18 Limitations: anyoption™ reserves the right to limit the amount of investment in each option or the amount of units available for purchase, change the price of options and the offered profits, or cease the sale of the options at any time. 11.2.19 Terms of cancellation: the options may not be canceled at any time after purchase. 11.3 Binary 0-100 11.3.1 Binary 0-100 is a unique option trading arena within our platform. On the Binary 0-100 trading arena a client can predict whether an event will occur or not. An event could be:
A. Will the expiry price at a certain hour be higher than a certain level? If the client believes the event will occur, he can buy options. If the client believes the event will not occur (the expiry level will be identical or lower), he sells options.
B. Will the expiry price at a certain hour be lower than a certain level? If the client believes the event will occur, he can buy options. If the client believes the event will not occur (the expiry level will be identical or higher), he sells options.
11.3.2 The investments and payouts are according to a client's account currency. For example, if a client has a USD account, the buy, sell and return currencies will be in USD. 11.3.3 The Company offers the client a number of options to buy or sell (the same quote for each buy option or sell option). This is the risk price and it changes constantly, according to the asset's fluctuations and on the Company's terms. The client can choose the amount of options from a list offered by the Company. If the client buys options, he pays the amount offered, multiplied by the amount of options.
If the client sells an option, we will deduct from his balance the difference between the selling price offered and 100, multiplied by the amount of options (this is calculated as if the client receives the price for each option sold to the Company, and the Company deducts 100 for each option). For each option expiring "in-the-money", the Company will pay the client 100 (of the client's account currency), which will be transferred immediately to the client's account. 11.3.4 If the event's expiry price is identical to the level (the level at the time the event had started), the sold options are in-the-money and the bought options are out-of-the-money, for both types of events. This is since a client buying options predicts the event will occur (for an "A" type event the price is higher, for a "B" type event the price is lower), and is incorrect. A client selling options predicts the event will not occur and is correct, and therefore will receive from the Company 100 (of the client's account currency) for each sold option expiring with the event. 11.3.5 After the client trades options, the open options appear in a separate table. The client can sell his bought options or buy back his sold options directly from the table, at a quote offered by the Company. If the client sells his bought option, the price offered will be transferred immediately to the client's balance (multiplied by the amount of options). If the client buys back his sold option, the difference between the selling price and 100 will be transferred immediately to the client's balance (multiplied by the amount of options). 11.3.6 Options bought or sold from the main area while having open options in the opposite direction on this event will offset. Options will offset in "first in, first out" order - options traded first will offset first.
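As a non-binding illustration (not part of the Agreement), the Binary 0-100 debits and credits described in clauses 11.3.3 and 11.3.8 can be sketched as follows: buying debits the quoted price per option, selling debits the difference between the quoted price and 100 per option, and each in-the-money option settles at 100. The function names and figures are hypothetical:

```python
# Binary 0-100 cash-flow sketch (clauses 11.3.3 and 11.3.8). All amounts are
# in the client's account currency; each option settles at 100 if in-the-money
# and at 0 otherwise. Hypothetical sketch only, not the Company's code.

PAYOUT = 100  # settlement value of one in-the-money option

def buy_debit(quote, units):
    """Amount deducted from the balance when buying options (11.3.3)."""
    return quote * units

def sell_debit(quote, units):
    """Amount deducted when selling options: (100 - quote) per unit (11.3.3)."""
    return (PAYOUT - quote) * units

def settlement_credit(in_the_money_units):
    """Credit at expiry: 100 per in-the-money option (11.3.3, 11.3.8)."""
    return PAYOUT * in_the_money_units

# Example: buy 5 options quoted at 38, which later expire in-the-money.
cost = buy_debit(38, 5)          # 190 debited from the balance
credit = settlement_credit(5)    # 500 credited at expiry
print(cost, credit, credit - cost)  # 190 500 310
```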
Should the client trade more options than the open options he has in the opposite direction, all open options will close and the rest of the newly traded options will appear in the table. 11.3.7 The client can buy and sell options at any given time up to five (5) minutes before the option expires. 11.3.8 The client can choose not to buy or sell his options and wait for the expiry time. For each option expiring "in-the-money", the Company will pay the client 100 (of the client's account currency), which will be transferred immediately to the client's account. The promised payout will be transferred to the client's account when the options expire. 11.3.9 When a client trades Binary 0-100, 50% of the investments' turnover will be counted toward the standing terms of bonuses or other benefits. By way of example - if the Client invests $2,000 trading Binary 0-100, then for meeting the criteria of bonuses or other benefits, the Client will be credited with $1,000. 11.3.10 The bonuses of "Risk free option", "Increased refund", "Increased return", and "Increased refund and return" cannot be used on Binary 0-100. 11.3.11 The Company reserves the right to limit the quote for each option or the amount of options available for purchase, change the price of options and the offered profits, or cease trading options at any time. 11.3.12 Terms of cancellation: the options may not be canceled at any time after purchase. 12 Law and Jurisdiction The laws of the state of Cyprus shall govern the use of the Site and all its consequences, including the Terms of Use. The competent court in Larnaca, Cyprus shall have sole jurisdiction over any matter involving use of the Site. 13 Copyright 13.1 Copyrights and Intellectual Property (IP) on the Site and Services are the property of the Company or of third parties which have authorized the Company to use such IP on the Site and Services.
Without prejudice to the aforesaid, the Company is the sole owner of the names, trademarks, patents and designs on the Site, whether registered or not, of trade secrets concerned with the Site's operation and Services, of the Site's design, and of technical data concerned with the Site, including without limitation software, applications, graphic files and other files, computer codes, texts and/or any other material included in it, excluding Clients' Information as defined below (the "Information"). It is forbidden to copy, distribute, duplicate, present in public, or deliver the Information, in whole or in part, to third parties. It is forbidden to alter, advertise, broadcast, transfer, sell, distribute or make any commercial use of the Information, in whole or in part, except with duly signed prior permission from the Company. 13.2 Unless explicitly stated otherwise, any material and/or message, including without limitation any idea, knowledge, technique, marketing plan, information, question, answer, suggestion, email or comment ("Clients' Information") delivered to the Company shall not be considered Client's confidential information or proprietary right. Consent to the Terms of Use will be considered as authorization to the Company to use the entire Clients' Information (excluding Clients' Information designated for personal identification), including development of such Information according to and in favor of the Company's needs, including public relations and promotion of the Site in the media, at the absolute and sole discretion of the Company, including on the Internet, in the press and/or on television, and all without requirement of any additional permission from the Client and/or the payment of any compensation due to such use. 13.3 Client undertakes that any notice, message or any other material supplied by the Client shall be appropriate and shall not harm other persons, including their proprietary rights.
Client shall not upload to the Site and/or send through the Site any material which is illegal and/or harmful and/or disturbing to other Clients and users of the Site, and/or any advertising and commercial material, and shall avoid any action which might damage the Company. 13.4 Nasdaq and Nasdaq 100 Fut base assets (together, the "Products") are not sponsored, endorsed, sold or promoted by The NASDAQ OMX Group, Inc. or its affiliates (NASDAQ OMX, with its affiliates, are referred to as the "Corporations"). The Corporations have not passed on the legality or suitability of, or the accuracy or adequacy of descriptions and disclosures relating to, the Product(s). The Corporations make no representation or warranty, express or implied, to the owners of the Product(s) or any member of the public regarding the advisability of investing in securities generally or in the Product(s) particularly, or the ability of the Nasdaq Composite Index to track general stock market performance. The Corporations' only relationship to anyoption Holdings Ltd. ("Licensee") is in the licensing of the Nasdaq, Nasdaq Composite, and Nasdaq Composite Index registered trademarks, and certain trade names of the Corporations, and the use of the Nasdaq Composite Index, which is determined, composed and calculated by NASDAQ OMX without regard to Licensee or the Product(s). NASDAQ OMX has no obligation to take the needs of the Licensee or the owners of the Product(s) into consideration in determining, composing or calculating the Nasdaq Composite Index. The Corporations are not responsible for and have not participated in the determination of the timing of, prices at, or quantities of the Product(s) to be issued or in the determination or calculation of the equation by which the Product(s) is to be converted into cash. The Corporations have no liability in connection with the administration, marketing or trading of the Product(s).
The Corporations do not guarantee the accuracy and/or uninterrupted calculation of the Nasdaq Composite Index or any data included therein. The Corporations make no warranty, express or implied, as to results to be obtained by Licensee, owners of the Product(s), or any other person or entity from the use of the Nasdaq Composite Index or any data included therein. The Corporations make no express or implied warranties, and expressly disclaim all warranties of merchantability or fitness for a particular purpose or use with respect to the Nasdaq Composite Index or any data included therein. Without limiting any of the foregoing, in no event shall the Corporations have any liability for any lost profits or special, incidental, punitive, indirect, or consequential damages, even if notified of the possibility of such damages. 14 Content and Third parties' websites 14.1 The Site might include general information, news, comments, quotes and other information related to financial markets. Some of such information is supplied to the Site by unaffiliated companies ("Third Parties' Content"). 14.2 Although Third Parties' Content, if such exists, is presented on the Site in frames or through links, the Company does not prepare, edit or promote the Third Parties' Content. The Company does not vouch for the credibility, accuracy or completeness of the Third Parties' Content and holds no responsibility for content, advertisements, products or any other material existing on third parties' websites. 15 Indemnification The Client will indemnify the Company and its agents, employees, directors, successors and their assignees ("Indemnified Persons") against any and all liabilities, losses, damages, costs, and expenses (including reasonable attorney's fees) incurred by the Indemnified Persons and arising out of Client's failure to fully and timely comply with its obligations set forth in this Agreement and/or out of the Company's need to enforce such liabilities.
16 Entire Agreement & Amendments 16.1 This Agreement, including all Annexes, constitutes the entire Agreement among the parties hereto and supersedes any and all prior agreements or understandings among the parties, if any, with respect to the subject matter of this Agreement. 16.2 The Company may amend the Terms of Use from time to time. Client is responsible for checking whether the Terms of Use were amended. Any amendment shall come into force as of the day it was published on the Site. 16.3 The Company shall not be bound by any waiver unless made by a duly signed written instrument by the Company, and no waiver or amendment of this Agreement may be implied from any course of dealings between the parties or from any failure by the Company to enforce its rights hereunder. 17 Assignment The Company is entitled to assign its rights and/or liabilities according to the Terms of Use by a notice to the Client. Client has no right to assign his rights and/or his liabilities unless a prior written consent to that effect was given by the Company, and whether or not such consent shall be given is at the Company's sole discretion. 18 Term and Termination 18.1 The term of the Agreement shall be unlimited; however, the Company will be allowed to terminate this Agreement at any time by notice to the Client ("Termination"). 18.2 As of Termination, Client shall not be able to carry out new Transactions. 19 Severability If any provision in the Terms of Use or its implementation towards any person or in any circumstance shall be invalid, illegal or unenforceable, the remainder of the Terms of Use and its implementation shall not be affected and will be enforceable in any manner allowed by law. 20 Adjustments to the price of an option relating to stock as the base asset, in case of a split or a reverse split made in the base asset
If, during the term between the purchase and the expiration of a binary option relating to stock as the base asset, the stock has been split or reverse split, then the binary option price will be adjusted according to the adjustments made to the stock price in the relevant market where it is traded due to the aforesaid split or reverse split. 21 Communications and delivery of notices; Advertising Material 21.1 Reports and any Notice hereunder may be sent to Client at the address set forth in this Agreement, or such other address notified by Client in writing to the Company from time to time. All communications sent to Client shall be deemed delivered at the time of delivery if sent by email, facsimile, by hand delivery or notified through the Internet Trading Platform, or within 3 (three) business days if posted by mail. Communications by Client shall be deemed delivered only when actually received by the Company. 21.2 Client's details which were provided and/or will be provided by the Client during his/her activity on the site may be used by the Company for sending the Company's advertising content to the Client, unless the Client removes the mark approving the Company to do so. Such removal can be done (i) when opening an account, or (ii) when receiving such advertising content, or (iii) by logging in and going to My Account > Personal Details. The Client may also send to the Company, at any time, an e-mail to Support@anyoption.com asking the Company to cease from sending such advertising content. The aforesaid mark removal and/or email receipt by the Company will oblige the Company to cease sending advertisement content to the Client within seven business days. 22 Interpretation All terms used in this Agreement and in the General Terms and not defined herein shall have the meaning assigned to them in the Glossary of Terms.
Other terms not defined herein shall have the meaning assigned to them in the customary practice of the type of Transactions and Services on the Site. For the avoidance of doubt and unless noted otherwise, words in the singular in the General Terms will refer to the plural and vice versa; words in the masculine gender will refer to the feminine gender and vice versa; words referring to a person will refer to a corporation and vice versa. The headlines in the General Terms are designated for convenience only and will not be used in the interpretation of the General Terms. Annex B Client's Declaration As of the date of registration and upon each use of the Site, Client hereby represents and warrants to the Company that: 1. Client is of sound mind and legal competence and has full right and authority to execute Transactions in Binary Options and any other Transactions allowed by the Company and performed by Client in his Account. 2. Client (if not a natural person) is duly organized and validly existing under the laws of the jurisdiction of its organization and has received any and all resolutions required under its documents of incorporation and applicable law to execute this Agreement and any Transaction made pursuant thereto, and each person executing and delivering this Agreement or any Transaction on Client's behalf is authorized to do so. 3. Execution and delivery by Client of this Agreement or any Transaction will not violate any law, regulation, by-law, agreement, obligation, judgment, or policy applying to Client. Without prejudice to the above, Client is not an employee of any exchange, any corporation in which any exchange owns a majority of the capital stock, any member of any exchange and/or firm registered on any exchange, or any bank, trust, or insurance company that trades the same instruments as those offered by the Company. 4.
Client is the full and ultimate beneficial owner of the funds deposited in the Account and no other person has or will have an interest in the Account. Client cannot and will not grant any right in the Account to any other person. 5. All details and declarations provided by Client to the Company are full and correct in all respects and Client shall immediately notify the Company of any change in such details or declarations. 6. Client has carefully read and understood this Agreement. 7. All funds deposited in the Account originate from legal sources and do not derive from drugs, abduction or any criminal activity. 8. Client is obliged to carry out only those Transactions which he is legally authorized to carry out, including without limitation not to carry out Transactions which involve the use of inside information or involve insider trading pursuant to any applicable law. 9. Client understands and acknowledges that without prejudice to the provisions of the Agreement (i) while the Internet and the World Wide Web are generally reliable, technical problems or other conditions may delay or prevent Client from accessing the Company's internet trading software, and (ii) the Company does not represent, commit or vouch that the Client will be able to have access to the trading software or use it at the times and in the places the Client chooses, or that the Company will have adequate capacity for the trading software in general or in any specific geographical location, or that the trading software will supply nonstop and error-free Services. Annex C Representative Annex anyoption Ltd (the Company). Re: Nomination of Representative.
We, the undersigned, the Client as defined in the above Agreement, hereby appoint _______________ (the Representative), whose signature appears below, as a plenipotentiary on our behalf and grant the Representative full powers of subrogation and power of attorney to act on our behalf in relation to our Account(s) and to perform in our name and on our behalf any and all Transactions and actions we are entitled to perform pursuant to the Agreement. Accordingly, the undersigned authorize you to execute the instructions of our Representative. We fully understand and acknowledge that by appointing the Representative, the Representative shall be entitled to operate in the Account. We hereby irrevocably and unconditionally ratify and confirm all actions and Transactions performed by the Representative. We hereby agree to indemnify and hold the Company and its affiliates, employees, directors, successors and assignees ("Indemnified Persons") harmless from and against any and all liabilities, losses, damages, costs and expenses (including reasonable attorney's fees) incurred by Indemnified Persons and arising out of the nomination of the Representative and the performance of any Transactions in the Account or any other acts or omissions by the Representative. This power of attorney and authorization shall remain in full force and effect and shall bind the undersigned towards you, your successors and assigns, until revoked by the undersigned in a written notice to you to that effect duly signed by the undersigned. We hereby consent to the performance of any Transactions and the existence of any commercial relationship between the Representative and the Company and we acknowledge that the Company does not and shall not owe us any fiduciary, care or other duty in relation to the Representative and the performance of any Transactions upon the Representative's instructions in relation to the Account or to any other matter.
Annex D Expiry Levels Calculations

For each of the following assets, the expiry level is the value that appears in the listed Reuters field (Last Value) at the expiry time; the same formula applies to Hourly expiries and to End of Day, Week and Month expiries:
- .DJI
- .MDAX
- .IXIC
- S&P 500: .SPX
- CAC 40: .FCHI
- .GDAXI
- .DFMGI
- FTSE 100: .FTSE
- .FTMIB
- FTSE IT All-Share: .FTITLMS
- IBEX35 (Spain): .IBEX
- IPC (Mexico): .MXX
- BOMBAY 30: .BSESN
- .HSI
- SSE 180 (China): .SSE180

For each of the following assets, the expiry level is equal to the sum of the LAST value, ASK value and BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. The quote for calculating the expiry level may differ on certain days. The same formula applies to Hourly expiries and to End of Day, Week and Month expiries:
- RIRTS
- NASDAQ 100 FUTURE: NQ
- S&P 500 FUTURE: .ES
- CAC Future: FCE
- .FDX
- KLSE Future: FKLI
End of Day, Week, Month .SSE180 Last Value Tec DAX Expiry Type Reuters Field Expiry Formula Hourly .TECDAX Last Value The expiry level is the value that appears in Reuters Field at the expiry Time. End of Day, Week, Month .TECDAX Last Value Tel Aviv 25 Expiry Type Reuters Field Expiry Formula Hourly .TA25 Last Value The expiry level is the value that appears in Reuters Field at the expiry Time. End of Day, Week, Month .TA25 Last Value Expiry Type Reuters Field Expiry Formula Hourly .PSI20 Last Value The expiry level is the value that appears in Reuters Field at the expiry Time. End of Day, Week, Month .PSI20 Last Value Expiry Type Reuters Field Expiry Formula Hourly .KS11 Last Value The expiry level is the value that appears in Reuters Field at the expiry Time. End of Day, Week, Month .KS11 Last Value S&P/ASX 200 Expiry Type Reuters Field Expiry Formula Hourly .AXJO Last Value The expiry level is the value that appears in Reuters Field at the expiry Time. End of Day, Week, Month .AXJO Last Value NIKKEI 225 Expiry Type Reuters Field Expiry Formula Hourly .N225 Last Value The expiry level is the value that appears in Reuters Field at the expiry Time. End of Day, Week, Month .N225 Last Value TOPIX 500 Expiry Type Reuters Field Expiry Formula Hourly .TOPX500 Last Value The expiry level is the value that appears in Reuters Field at the expiry Time. End of Day, Week, Month .TOPX500 Last Value France Telecom Expiry Type Reuters Expiry Formula Hourly ORAN.PA Last The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the Value 4th decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower. End of Day, ORAN.PA Last The expiry level is the value that appears in Reuters Field at the expiry Time. 
Week, Month Value Expiry Type Reuters Expiry Formula Hourly EDF.PA Last The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the Value 4th decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower. End of Day, EDF.PA Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly RENA.PA Last The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the Value 4th decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower. End of Day, RENA.PA Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Societe Generale Expiry Type Reuters Expiry Formula Hourly SOGN.PA Last The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the Value 4th decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower. End of Day, SOGN.PA Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly AUDH= The expiry level is equal to the sum of the ASK value and the BID value, divided by two [(ASK+BID)/2]. The result is rounded up in the event that the last decimal digit is End of Day, AUDH= 5 or bigger, and rounded down in the event that the last decimal digit is 4 or lower. Week, Month Expiry Bitstamp field Expiry Formula Hourly bitcoin Last Value The expiry level is (BID 25% weight)+(ASK 25% w.)+(LAST 50% w.), unless LAST>ASK, then: (BID 20% w.)+(ASK 70% w.)+(LAST 10% w.) 
or unless LAST<BID, then: (BID 70% w.)+ End of bitcoin Last (ASK 20% w.)+(LAST 10% w.). Day Value Expiry Type Reuters Expiry Formula Hourly EURGBPH= The expiry level is equal to the sum of the ASK value and the BID value, divided by two [(ASK+BID)/2]. The result is rounded up in the event that the last decimal digit is End of Day, EURGBPH= 5 or bigger, and rounded down in the event that the last decimal digit is 4 or lower. Week, Month Expiry Type Reuters Expiry Formula Hourly EURJPYH= The expiry level is equal to the sum of the ASK value and the BID value, divided by two [(ASK+BID)/2]. The result is rounded up in the event that the last decimal digit is End of Day, EURJPYH= 5 or bigger, and rounded down in the event that the last decimal digit is 4 or lower. Week, Month Expiry Type Reuters Expiry Formula Hourly EURH= The expiry level is equal to the sum of the ASK value and the BID value, divided by two [(ASK+BID)/2]. The result is rounded up in the event that the last decimal digit is End of Day, EURH= 5 or bigger, and rounded down in the event that the last decimal digit is 4 or lower. Week, Month Expiry Type Reuters Expiry Formula Hourly GBPJPYH= The expiry level is equal to the sum of the ASK value and the BID value, divided by two [(ASK+BID)/2]. The result is rounded up in the event that the last decimal digit is End of Day, GBPJPYH= 5 or bigger, and rounded down in the event that the last decimal digit is 4 or lower. Week, Month Expiry Type Reuters Expiry Formula Hourly NZDH= The expiry level is equal to the sum of the ASK value and the BID value, divided by two [(ASK+BID)/2]. The result is rounded up in the event that the last decimal digit is End of Day, NZDH= 5 or bigger, and rounded down in the event that the last decimal digit is 4 or lower. Week, Month Expiry Type Reuters Expiry Formula Hourly CHFH= The expiry level is equal to the sum of the ASK value and the BID value, divided by two [(ASK+BID)/2]. 
The result is rounded up in the event that the last decimal digit is End of Day, CHFH= 5 or bigger, and rounded down in the event that the last decimal digit is 4 or lower. Week, Month Expiry Type Reuters Expiry Formula Hourly GBPH= The expiry level is equal to the sum of the ASK value and the BID value, divided by two [(ASK+BID)/2]. The result is rounded up in the event that the last decimal digit is End of Day, GBPH= 5 or bigger, and rounded down in the event that the last decimal digit is 4 or lower. Week, Month Expiry Type Reuters Expiry Formula Hourly JPYH= The expiry level is equal to the sum of the ASK value and the BID value, divided by two [(ASK+BID)/2]. The result is rounded up in the event that the last decimal digit is End of Day, JPYH= 5 or bigger, and rounded down in the event that the last decimal digit is 4 or lower. Week, Month Expiry Type Reuters Expiry Formula Hourly KRW= The expiry level is equal to the sum of the ASK value and the BID value, divided by two [(ASK+BID)/2]. The result is rounded up in the event that the last decimal digit is End of Day, KRW= 3 or bigger, and rounded down in the event that the last decimal digit is 2 or lower. Week, Month Expiry Type Reuters Field Expiry Formula Hourly ILS__H=RR The expiry level is the value that appears in Reuters Field at the expiry time. End of Day, Week, Month ILS__H=RR Expiry Type Reuters Field Expiry Formula Hourly CLv1 Last Value The expiry level is the value that appears in Reuters Field at the expiry Time. End of Day CLv1 Last Value The quote for calculating the expiry level may differ on certain days. Expiry Reuters Expiry Formula Type Field Hourly HGv1 The expiry level is equal to the sum of the ASK value and the BID value, divided by two [(ASK+BID)/2]. The result is rounded up in the event that the (NUMBER. E.g. 5th) decimal End of HGv1 digit is 5 or bigger, and rounded down in the event that the last decimal digit is 4 or lower. 
The quote for calculating the expiry level may differ on certain days. Expiry Reuters Expiry Formula Type Field Hourly GCv1 The expiry level is equal to the sum of the ASK value and the BID value, divided by two [(ASK+BID)/2]. The result is rounded up in the event that the (NUMBER. E.g. 5th) decimal End of GCv1 digit is 5 or bigger, and rounded down in the event that the last decimal digit is 4 or lower. The quote for calculating the expiry level may differ on certain days. Expiry Reuters Expiry Formula Type Field Hourly SIv1 The expiry level is equal to the sum of the ASK value and the BID value, divided by two [(ASK+BID)/2]. The result is rounded up in the event that the (NUMBER. E.g. 5th) decimal End of SIv1 digit is 5 or bigger, and rounded down in the event that the last decimal digit is 4 or lower. The quote for calculating the expiry level may differ on certain days. America Movil Expiry Type Reuters Expiry Formula Hourly AMX The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, AMX Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly AAPL.O The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, AAPL.O Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly AIG The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. 
The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, AIG Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly AMZN.O The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, AMZN.O Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Bank of America Expiry Type Reuters Expiry Formula Hourly BAC The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, BAC Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Cisco Systems Expiry Type Reuters Expiry Formula Hourly CSCO.O The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, CSCO.O Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly GG The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. 
End of Day, GG Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly PFE The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, PFE Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Silver Wheaton Expiry Type Reuters Expiry Formula Hourly SLW The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, SLW Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Goldman Sachs Expiry Type Reuters Expiry Formula Hourly GS The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, GS Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Las Vegas Sands Expiry Type Reuters Expiry Formula Hourly LVS The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, LVS Last The expiry level is the value that appears in Reuters Field at the expiry Time. 
Week, Month Value Walt Disney Expiry Type Reuters Expiry Formula Hourly DIS The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, DIS Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly DAIGn.DE The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 4th decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower. End of Day, DAIGn.DE The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Last Value Expiry Type Reuters Field Expiry Formula Hourly TSCO.L The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 4th decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower. End of Day, TSCO.L The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Closing Level Teva Pharmaceutical Expiry Type Reuters Expiry Formula Hourly TEVA.N The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, TEVA.N Last The expiry level is the value that appears in Reuters Field at the expiry Time. 
Week, Month Value Expiry Type Reuters Expiry Formula Hourly TWTR The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, TWTR Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly CHKP.O The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 4th decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower. End of Day, CHKP.O Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly C The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, C Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Deutsche Bank AG Expiry Type Reuters Expiry Formula Hourly DBKGn.DE The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 4th decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower. End of Day, DBKGn.DE The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Last Value Expiry Type Reuters Expiry Formula Hourly GASI.MI The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. 
The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, GASI.MI Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly CRDI.MI The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 5th decimal digit is 5 or bigger, and rounded down in the event that the 5th decimal digit is 4 or lower. End of Day, CRDI.MI Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Intesa Sanpaolo Expiry Type Reuters Expiry Formula The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 4th Hourly ISP.MI decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower.The result is rounded up in the event that the 2nd decimal digit is 5 or bigger, and rounded down in the event that the 2nd decimal digit is 4 or lower. End of Day, ISP.MI Week, Month Last The expiry level is the value that appears in Reuters Field at the expiry Time. Telecom Italia Expiry Type Reuters Expiry Formula Hourly TLIT.MI The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 5th decimal digit is 5 or bigger, and rounded down in the event that the 5th decimal digit is 4 or lower. End of Day, TLIT.MI Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly GOOG.OQ The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. 
The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, GOOG.OQ Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value JPMorgan Chase Expiry Type Reuters Expiry Formula Hourly JPM The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, JPM Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly MCD The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, MCD Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly MSFT.O The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, MSFT.O Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly FB.O The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. 
End of Day, FB.O Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly MS The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 3rd decimal digit is 5 or bigger, and rounded down in the event that the 3rd decimal digit is 4 or lower. End of Day, MS Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Field Expiry Formula Hourly BARC.L The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 4th decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower. End of Day, BARC.L The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Closing Level Expiry Type Reuters Expiry Formula The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 4th Hourly TEF.MC decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower.The result is rounded up in the event that the 2nd decimal digit is 5 or bigger, and rounded down in the event that the 2nd decimal digit is 4 or lower. End of Day, TEF .MC Week, Month Last The expiry level is the value that appears in Reuters Field at the expiry Time. Banco Santander Expiry Type Reuters Expiry Formula The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. 
The result is rounded up in the event that the 4th Hourly SAN.MC decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower.The result is rounded up in the event that the 2nd decimal digit is 5 or bigger, and rounded down in the event that the 2nd decimal digit is 4 or lower. End of Day, SAN.MC Week, Month Last The expiry level is the value that appears in Reuters Field at the expiry Time. Expiry Type Reuters Expiry Formula Hourly BBVA.MC The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 4th decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower. End of Day, BBVA.MC Last The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Value Expiry Type Reuters Expiry Formula Hourly TCELL.IS The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. The result is rounded up in the event that the 4th decimal digit is 5 or bigger, and rounded down in the event that the 4th decimal digit is 4 or lower. End of Day, TCELL.IS The expiry level is the value that appears in Reuters Field at the expiry Time. Week, Month Last Value Akbank Turk Anonim Sirketi Expiry Type Reuters Field Expiry Formula End of Day, Week, Month AKBNK.IS Last Value The expiry level is the value that appears in Reuters Field at the expiry Time. ISBANK Turkiye Is Bankasi Expiry Type Reuters Field Expiry Formula End of Day, Week, Month ISCTR.IS Last Value The expiry level is the value that appears in Reuters Field at the expiry Time. Expiry Type Reuters Expiry Formula Hourly VOD.L The expiry level is equal to the sum of the LAST value, ASK value and the BID value, divided by three [(LAST+ASK+BID)/3]. 
Hourly (VOD.L): the expiry level is equal to the sum of the LAST, ASK and BID values, divided by three [(LAST+ASK+BID)/3], rounded up if the 4th decimal digit is 5 or bigger and rounded down if it is 4 or lower.
End of Day, Week, Month (VOD.L, Reuters field: Closing): the expiry level is the value that appears in the Reuters field at the expiry time.

British Petroleum (LSE)
Hourly (BP.L): expiry level = (LAST + ASK + BID) / 3, rounded up if the 4th decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (BP.L, Reuters field: Closing Level): expiry level = the value in the Reuters field at the expiry time.

ALVG.DE
Hourly (ALVG.DE): expiry level = (LAST + ASK + BID) / 3, rounded up if the 4th decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (ALVG.DE, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

EONGn.DE
Hourly (EONGn.DE): expiry level = (LAST + ASK + BID) / 3, rounded up if the 4th decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (EONGn.DE, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

Noble Energy
Hourly (NBL): expiry level = (LAST + ASK + BID) / 3, rounded up if the 3rd decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (NBL, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

GAZPS.MM
Hourly (GAZPS.MM): expiry level = (LAST + ASK + BID) / 3, rounded up if the 4th decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (GAZPS.MM, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

ROSNS.MM
Hourly (ROSNS.MM): expiry level = (LAST + ASK + BID) / 3, rounded up if the 4th decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (ROSNS.MM, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

SBERS.MM
Hourly (SBERS.MM): expiry level = (LAST + ASK + BID) / 3, rounded up if the 4th decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (SBERS.MM, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

Exxon Mobil
Hourly (XOM): expiry level = (LAST + ASK + BID) / 3, rounded up if the (NUMBER. E.g. 5th) decimal digit is 5 or bigger, rounded down if the last decimal digit is 4 or lower.
End of Day, Week, Month (XOM, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

INTC.O
Hourly (INTC.O): expiry level = (LAST + ASK + BID) / 3, rounded up if the (NUMBER. E.g. 5th) decimal digit is 5 or bigger, rounded down if the last decimal digit is 4 or lower.
End of Day, Week, Month (INTC.O, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

BHP Billiton Limited
Hourly (BHP.AX): expiry level = (LAST + ASK + BID) / 3, rounded up if the 4th decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (BHP.AX, Reuters field: Last): expiry level = the value in the Reuters field at the expiry time.

ANZ bank
Hourly (ANZ.AX): expiry level = (LAST + ASK + BID) / 3, rounded up if the 4th decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (ANZ.AX, Reuters field: Last): expiry level = the value in the Reuters field at the expiry time.

Mitsubishi corp
End of Day, Week, Month (8058.T, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

7203.T
End of Day, Week, Month (7203.T, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

TAMO.BO
Hourly (TAMO.BO): expiry level = (LAST + ASK + BID) / 3, rounded up if the 4th decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (TAMO.BO, Reuters field: Last): expiry level = the value in the Reuters field at the expiry time.

DI Corporation
End of Day, Week, Month (003160.KS, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

Hyundai HCN
End of Day, Week, Month (126560.KS, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

LG Corp
End of Day, Week, Month (003550.KS, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

Samsung electronics
End of Day, Week, Month (005930.KS, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

Wooridul Life
End of Day, Week, Month (118000.KS, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

FB.O
Hourly (FB.O): expiry level = the last value that appears in the Reuters field at the expiry time (Price).

APPL.O
Hourly (APPL.O): expiry level = the last value that appears in the Reuters field at the expiry time (Price).

GOOG.O
Hourly (GOOG.O): expiry level = the value that appears in the Reuters field at the expiry time.

EUR=
Hourly (EUR=): expiry level = (ASK + BID) / 2.

S&P Future
Hourly (ESc1, Reuters field: Last Value): expiry level = (LAST + ASK + BID) / 3, rounded up if the 3rd decimal digit is 5 or bigger, rounded down if it is 4 or lower. The quote for calculating the expiry level may differ on certain days.

CLv1
Hourly (CLv1): expiry level = the last value that appears in the Reuters field at the expiry time (Price).

DAX Future
Hourly (FDXc1, Reuters field: Last Value): expiry level = (LAST + ASK + BID) / 3, rounded up if the 3rd decimal digit is 5 or bigger, rounded down if it is 4 or lower. The quote for calculating the expiry level may differ on certain days.

JPY=
Hourly (JPY=): expiry level = the last value that appears in the Reuters field at the expiry time (Price).

AUD=
Hourly (AUD=): expiry level = (ASK + BID) / 2.

GCv1
Hourly (GCv1): expiry level = (ASK + BID) / 2.

.FTMIB
Hourly (.FTMIB, Reuters field: Last Value): expiry level = the last value that appears in the Reuters field at the expiry time (Price).

GBP=
Hourly (GBP=): expiry level = (ASK + BID) / 2.

Industrial Bank
Hourly (601166.SS): expiry level = (LAST + ASK + BID) / 3, rounded up if the 4th decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (601166.SS, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

CPIC Group
Hourly (601601.SS): expiry level = (LAST + ASK + BID) / 3, rounded up if the 4th decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (601601.SS, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

Ping An Insurance
Hourly (601318.SS): expiry level = (LAST + ASK + BID) / 3, rounded up if the 4th decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (601318.SS, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

Air China Limited
Hourly (601111.SS): expiry level = (LAST + ASK + BID) / 3, rounded up if the (NUMBER. E.g. 5th) decimal digit is 5 or bigger, rounded down if the last decimal digit is 4 or lower.
End of Day, Week, Month (601111.SS, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

BIDU.O
Hourly (BIDU.O): expiry level = (LAST + ASK + BID) / 3, rounded up if the 4th decimal digit is 5 or bigger, rounded down if it is 4 or lower.
End of Day, Week, Month (BIDU.O, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

SINA Corporation
Hourly (SINA.O): expiry level = (LAST + ASK + BID) / 3, rounded up if the (NUMBER. E.g. 5th) decimal digit is 5 or bigger, rounded down if the last decimal digit is 4 or lower.
End of Day, Week, Month (SINA.O, Reuters field: Last Value): expiry level = the value in the Reuters field at the expiry time.

Binary 0-100

.GDAXI
Hourly (.GDAXI): expiry level = the value that appears in the Reuters field at the expiry time.

DAX Future
Hourly (FDXc1, Reuters field: Last Value): expiry level = the last value that appears in the Reuters field at the expiry time (Price). The quote for calculating the expiry level may differ on certain days.

.DJI
Hourly (.DJI, Reuters field: Last Value): expiry level = the last value that appears in the Reuters field at the expiry time (Price). The quote for calculating the expiry level may differ on certain days.

EURJPY=
Hourly (EURJPY=): expiry level = (ASK + BID) / 2.

EUR=
Hourly (EUR=): expiry level = (ASK + BID) / 2.

GBP=
Hourly (GBP=): expiry level = (ASK + BID) / 2.

GCv1
Hourly (GCv1): expiry level = (ASK + BID) / 2.

CLv1
Hourly (CLv1): expiry level = the last value that appears in the Reuters field at the expiry time (Price).
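The two formulas that recur throughout these tables, the three-way average with digit-based rounding for equities and the bid/ask midpoint for currencies, are easy to state in code. This is an illustrative sketch of my own, not anyoption's implementation; Python's Decimal is used so the "Nth decimal digit" rounding rule behaves exactly as written.

```python
from decimal import Decimal, ROUND_HALF_UP

def equity_expiry(last, ask, bid, decimals=3):
    """(LAST + ASK + BID) / 3, rounded up when the next decimal digit is
    5 or bigger and down when it is 4 or lower (i.e. round-half-up)."""
    mean = (Decimal(last) + Decimal(ask) + Decimal(bid)) / 3
    return mean.quantize(Decimal("0.1") ** decimals, rounding=ROUND_HALF_UP)

def fx_expiry(ask, bid):
    """(ASK + BID) / 2, the currency-pair rule."""
    return (Decimal(ask) + Decimal(bid)) / 2

# The 4th decimal digit of the mean is 5, so the 3rd decimal place rounds up:
print(equity_expiry("100.1234", "100.1236", "100.1235"))  # 100.124
print(fx_expiry("1.3702", "1.3700"))                      # 1.3701
```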
{"url":"http://www.anyoption.com/general-terms-options-trading","timestamp":"2014-04-18T08:04:05Z","content_type":null,"content_length":"233519","record_id":"<urn:uuid:62d8977a-8e92-4002-9f44-ad85c193dd5b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
Esa's Google Maps API examples

Simple circle function
• A radius given in kilometers seems to be often asked for; pixel or degree radii are easier to implement.
• Two tricks simplify the circle function:
  • The API's .distanceFrom() method is used for finding the km/degree ratios for lat and lng.
  • A 360-step for-loop simplifies the calculation.
• Demo controls: radius (km) and quality (segments).
• The circle function is at the end of the page source.
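The recipe above translates directly into code. This Python sketch stands in for the page's JavaScript: the km/degree ratios, which the page obtains from the API's distanceFrom method, are approximated here with the standard value of roughly 111.32 km per degree, and the 360-step loop emits one vertex per degree.

```python
import math

def circle_points(lat, lng, radius_km, segments=360):
    """Vertices of an (approximate) circle of radius_km around (lat, lng)."""
    km_per_deg_lat = 111.32                                # rough global value
    km_per_deg_lng = 111.32 * math.cos(math.radians(lat))  # shrinks toward the poles
    pts = []
    for i in range(segments):        # the 360-step loop: one vertex per degree
        a = math.radians(i * 360 / segments)
        pts.append((lat + (radius_km / km_per_deg_lat) * math.cos(a),
                    lng + (radius_km / km_per_deg_lng) * math.sin(a)))
    return pts
```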
{"url":"https://d4c99048-a-62cb3a1a-s-sites.googlegroups.com/site/esailmari/circle.htm?attachauth=ANoY7cppbIR5cS2fVXOd-aeISbg3fBz1IlWSZ5VAR1E9tpHzCTxcmzK9jpGOMShTYX3lrYc3EH7BVucSgl4DvSWCX8R_pyFpHrhohL7iVgKCN1t2CYpt_HoKWrzT3EqhMDRXGRWx9_UxBcq-S8J7pYIv4D0hbIps3AKu5jWJ0ZGE1_Mr6Az7-9833JXXPkYYL-weI0ibtrk-GmYYcRer7F_XGEqr3WY1HA%3D%3D&attredirects=1","timestamp":"2014-04-23T06:38:17Z","content_type":null,"content_length":"6817","record_id":"<urn:uuid:d8d5dc35-f178-4146-9ea1-6575027eef01>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Compute Limit Using Limit Properties

#1, July 16th 2013, 08:12 PM (member since Oct 2010)

Let $\{a_n\}$ be a convergent sequence with limit $L \in \mathbb{R}$. Use properties of limits to compute $\lim_{n \rightarrow \infty} \frac{a_n^2 - 1}{a_n^2 + 1}$.

1. "Distribute" the limit to the numerator and denominator (allowed since $\{a_n\}$ is convergent, so $\{a_n\}^2 - 1$ and $\{a_n\}^2 + 1$ are convergent).
2. "Distribute" the limit to $\{a_n\}^2$ and $-1$ in the numerator, and likewise to $\{a_n\}^2$ and $1$ in the denominator.
3. Since $\{a_n\}$ converges to $L$, we know $\{a_n\}^2$ converges even faster to $L$. Also, we know $\lim 1$ is $1$ and $\lim -1$ is $-1$.
4. Thus, we have $(L-1)/(L+1)$.

Is this an acceptable solution? I'm not sure if it's okay to have the answer in terms of "L". Any help would be appreciated.

#2, July 16th 2013, 09:41 PM (MHF Contributor, member since Sep 2012)

Re: Compute Limit Using Limit Properties

Hey divinelogos. Shouldn't $a_n^2$ converge to $L^2$ if $a_n$ converges to $L$?

#3, July 16th 2013, 09:55 PM (member since Oct 2010)

Re: Compute Limit Using Limit Properties

That makes sense, yes. So we would have $= (L+1)(L-1)/(L+1) = L-1$. Is this the correct approach in general, though? That is, from step 1 where I "distribute" the limit?

#4, July 16th 2013, 10:03 PM (MHF Contributor, member since Sep 2012)

Re: Compute Limit Using Limit Properties

Because it is convergent, and because it's not of an indeterminate form, this should work. The distribution should work since you don't get 0/0 (i.e. an indeterminate form).

#5, July 16th 2013, 10:19 PM (member since Oct 2010)

Re: Compute Limit Using Limit Properties

Ok, is there a way to get a "cleaner" limit though? I'm not sure if you can have the answer depend on L. Also, how do we know we don't get something messed up in the denominator with the distribution, or an indeterminate form? For example, what if L = -1? Then the denominator would equal 0. Is this okay since $a_n^2$ converges to $L^2$? Also, to take care of the indeterminate forms, is it okay because the only two indeterminate forms are $\pm\infty/\pm\infty$ and $0/0$, and we know 0's won't pop up because of the $+1$ and $-1$, and infinities won't because we know the sequence converges and does not diverge?

#6, July 16th 2013, 10:46 PM (MHF Contributor, member since Sep 2012)

Re: Compute Limit Using Limit Properties

What do you mean by cleaner? The $[L^2 - 1]/[L^2 + 1]$ is pretty clean. (Also, you have to make sure $L^2 \ne -1$ if you have complex numbers.)
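The algebra in the replies (distribute the limit, then use $a_n^2 \to L^2$) can be sanity-checked numerically. A sketch with a sample sequence of my own choosing, $a_n = 2 + 1/n$, so $L = 2$ and the limit should be $(L^2 - 1)/(L^2 + 1) = 3/5$:

```python
def ratio(a):
    return (a * a - 1) / (a * a + 1)

L = 2
target = (L * L - 1) / (L * L + 1)   # 0.6

for n in (10, 1000, 10**6):
    a_n = 2 + 1 / n                  # sample sequence with limit L = 2
    print(n, ratio(a_n), target)     # ratio(a_n) approaches 0.6 as n grows
```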
{"url":"http://mathhelpforum.com/advanced-applied-math/220638-compute-limit-using-limit-properties.html","timestamp":"2014-04-16T16:04:41Z","content_type":null,"content_length":"45580","record_id":"<urn:uuid:33955baa-b443-4ee7-8a0d-827c1e499ea0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.

Topic: Multiplicative congruential pseudo random numbers question. Replies: 1. Last Post: Sep 19, 1996 10:47 AM

Multiplicative congruential pseudo random numbers question.
Posted: Sep 17, 1996 9:12 AM, by Michael McGrath

I'm a first-year student who's just begun Discrete Mathematics, and I have a question for someone in the know. Given the random number series generated by using x(0) = 3387, x(i+1) = (3387·x(i)) mod 65536, why can it only generate 25% of the possible 2^16 numbers? I realise they can only be odd, but that's only 50%. What's the story? Please respond by e-mail because I'm not able to read the newsgroups too frequently. Thanks in advance to anyone who can help.

9/19/96: Re: Multiplicative congruential pseudo random numbers question. Christopher J. Truffer, et. al.
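The 25% figure can be verified directly, and elementary number theory explains it: modulo 2^16 the odd residues form a multiplicative group of order 2^15, and a multiplier congruent to 3 or 5 (mod 8), here 3387 ≡ 3 (mod 8), attains the maximum possible multiplicative order 2^(16-2) = 16384, exactly one quarter of 65536. A quick check (my own sketch, not from the thread):

```python
def orbit_size(multiplier=3387, modulus=2**16, seed=3387):
    """Count the distinct values the generator visits before repeating."""
    seen, x = set(), seed
    while x not in seen:
        seen.add(x)
        x = (multiplier * x) % modulus
    return len(seen)

print(orbit_size(), orbit_size() / 2**16)   # 16384 0.25
```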
{"url":"http://mathforum.org/kb/thread.jspa?threadID=179357","timestamp":"2014-04-16T14:03:25Z","content_type":null,"content_length":"17560","record_id":"<urn:uuid:5b9fa7c6-8bc3-44f3-9ad7-e955974d850b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
convergence in measure

#1, December 20th 2009, 11:34 PM (member since Dec 2008)

Let $(X, F, \mu)$ be a measure space with $\mu(X) < \infty$. Prove that $f_n \rightarrow f$ in measure $\Longleftrightarrow \int_X \frac{|f_n - f|}{1 + |f_n - f|}\, d\mu \rightarrow 0$.

See here: http://www.mathhelpforum.com/math-he...-1-yn-0-a.html (the integral is like the expectation, and measure = probability)

I understand the first half, but I have a question on the second half. Letting $Y_n = |f_n - f|$, $Z_n = \frac{Y_n}{1 + Y_n}$ and mimicking the steps, I have

$\int Z_n\, d\mu = \int Z_n 1_{\{Z_n \leq \epsilon\}}\, d\mu + \int Z_n 1_{\{Z_n > \epsilon\}}\, d\mu$

$\int Z_n 1_{\{Z_n \leq \epsilon\}}\, d\mu \leq \int \epsilon\, 1_{\{Z_n \leq \epsilon\}}\, d\mu = \epsilon \int 1_{\{Z_n \leq \epsilon\}}\, d\mu$

But how do I show that $\epsilon \int 1_{\{Z_n \leq \epsilon\}}\, d\mu \leq \epsilon$ without knowing that $\mu$ is a probability measure? Is there any way I can estimate the integral $\epsilon \int 1_{\{Z_n \leq \epsilon\}}\, d\mu$? Please help me.

#2, December 21st 2009, 01:34 AM

It's not that difficult: think about a hypothesis you've not been using.

#3, December 21st 2009, 04:43 PM (member since Dec 2008)

#4, December 22nd 2009, 01:55 AM
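For what it's worth, the hint in the reply points at the unused hypothesis $\mu(X) < \infty$. A sketch of how it closes the gap (my own completion, not from the thread):

```latex
\int_X Z_n \, d\mu
  \;=\; \int_{\{Z_n \le \epsilon\}} Z_n \, d\mu \;+\; \int_{\{Z_n > \epsilon\}} Z_n \, d\mu
  \;\le\; \epsilon\,\mu(X) \;+\; \mu\{Z_n > \epsilon\},
```

using $\int 1_{\{Z_n \le \epsilon\}}\, d\mu \le \mu(X)$ for the first term and $Z_n \le 1$ for the second. Convergence in measure of $f_n$ to $f$ gives $\mu\{Z_n > \epsilon\} \to 0$ for each fixed $\epsilon \in (0,1)$, since $\{Z_n > \epsilon\} = \{Y_n > \frac{\epsilon}{1-\epsilon}\}$. Hence $\limsup_n \int_X Z_n\, d\mu \le \epsilon\,\mu(X)$, and letting $\epsilon \downarrow 0$ gives the limit $0$.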
{"url":"http://mathhelpforum.com/differential-geometry/121239-convergence-measure.html","timestamp":"2014-04-16T08:06:38Z","content_type":null,"content_length":"43104","record_id":"<urn:uuid:a592bef3-4704-4e79-8d77-c1362d6476a2>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Partial conditional independence

Christopher Hautmann posted on Wednesday, December 03, 2008 - 8:12 am

In the Mplus user's guide, example 7.16 describes a latent class analysis with partial local independence. Is the method described comparable to the term "direct effect" which is used by Hagenaars (1988) or Magidson and Vermunt (2004)? Can I use the Mplus method to account for a lack of conditional independence between a continuous indicator and a nominal indicator?

Hagenaars, J. A. (1988). Latent structure models with direct effects between indicators: Local dependence models. Sociological Methods and Research, 16, 379-405.
Magidson, J., & Vermunt, J. K. (2004). Latent class models. In D. Kaplan (Ed.), The Sage handbook of quantitative methodology series for the social sciences (pp. 175-198). Thousand Oaks, CA: Sage.

Linda K. Muthen posted on Wednesday, December 03, 2008 - 11:41 am

I believe this is what they call a direct effect. You can do this with one nominal and one continuous.

Lewina Lee posted on Thursday, February 16, 2012 - 2:45 pm

Drs. Muthen, In Mplus User's Guide Version 5, example 7.16 describes LCA with partial conditional independence in one of the classes such that a latent factor (f) loads on 2 continuous indicators (u2, u3). Example 7.22 models correlations among continuous indicators y1-y4. Are specifying latent factors (f) and correlations among indicators both valid ways of testing conditional dependence among indicators? If not, how are they different?

My indicators are subscale scores on different measures (e.g., intelligence subscales A, B, C; personality subscales I, II, III). I want to test for residual covariance among subscales from the same construct. Is there a preferred way to do this? For instance:

F1 by A B C;
F2 by I II III;
A with B-C; B with C;
I with II-III; II with III;

Thank you for your help,

Linda K. Muthen posted on Friday, February 17, 2012 - 1:49 pm

Residual covariances cannot be specified using WITH when numerical integration is involved. This is because each residual covariance requires one dimension of integration. Otherwise, WITH can be used. In your case, I believe WITH is allowed.

Your BY statements are not correct. Each residual covariance requires one BY statement, for example,

f BY u1@1 u2;
f@1;
[f@0];

where the residual covariance is found in the factor loading for u2.

Lewina Lee posted on Tuesday, February 21, 2012 - 1:20 pm

Dear Linda, Thank you for your reply. I have 21 continuous indicators, each representing a subscale score under 4 separate measures (e.g., intelligence, personality). I first attempted to run latent profile analyses with k = 1 through k = 8 and conditional independence. As a second step, I attempted to test for conditional dependence among indicators of the same measure. Per your earlier response, I specified conditional dependence via:

A1 with A2-A4; A2 with A3-A4; A3 with A4;
B1 with B2-B7; B2 with B3-B7; (and so forth)

I estimated a 1-class solution and a 2-class solution. I received error messages regarding "AN ILL-CONDITIONED FISHER INFORMATION MATRIX", "A NON-POSITIVE DEFINITE FISHER INFORMATION MATRIX", "NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX" and "SINGULARITY OF INFORMATION MATRIX." Parameters involving residual covariances were referenced. What can I do? Also, how can I show that there is little or no conditional dependence across subscales of the same measure? Thank you for your insight.

Bengt O. Muthen posted on Tuesday, February 21, 2012 - 6:26 pm

I think your approach should work. Please send to support to more clearly see what is going on.

Back to top
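The BY trick in the reply (a factor with variance fixed at 1, mean fixed at 0, one loading fixed at 1 and one free loading) reproduces a residual covariance because Cov(e1 + f, e2 + λf) = λ·Var(f) = λ. A quick simulation outside Mplus (illustrative only; the sample size and λ value are my own, and the sample covariance only approximates λ):

```python
import random

random.seed(0)
N, lam = 200_000, 0.5
f = [random.gauss(0, 1) for _ in range(N)]        # latent factor, Var(f) = 1
u1 = [random.gauss(0, 1) + fi for fi in f]        # loading fixed at 1
u2 = [random.gauss(0, 1) + lam * fi for fi in f]  # free loading lambda

m1, m2 = sum(u1) / N, sum(u2) / N
cov = sum((a - m1) * (b - m2) for a, b in zip(u1, u2)) / N
print(cov)  # close to lam = 0.5
```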
{"url":"http://www.statmodel.com/discussion/messages/13/3787.html?1329877584","timestamp":"2014-04-19T02:24:09Z","content_type":null,"content_length":"24942","record_id":"<urn:uuid:13113dd2-4904-4b67-9534-887ff03d7d4b>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Fluid statics
From Exampleproblems

Fluid statics (also called hydrostatics) is the science of fluids at rest, and is a sub-field within fluid mechanics. The term usually refers to the mathematical treatment of the subject. It embraces the study of the conditions under which fluids are at rest in stable equilibrium. The use of fluids to do work is called hydraulics, and the science of fluids in motion is fluid dynamics.

Static pressure in fluids

Due to an inability to resist deformation, fluids exert pressure normal to any contacting surface. In addition, when the fluid is at rest (static), that pressure is isotropic, i.e. it acts with equal magnitude in all directions. This characteristic allows fluids to transmit force through the length of pipes or tubes: a force applied to a fluid in a pipe is transmitted, via the fluid, to the other end of the pipe. If the forces are unequal, the fluid will move in the direction of the resulting net force. This concept was first formulated, in a slightly extended form, by the French mathematician and philosopher Blaise Pascal in 1647, and would later be known as Pascal's Law. This law has many important applications in hydraulics. Galileo Galilei was also a founding figure of hydrostatics.

Hydrostatic pressure

Considering a small cube of water at rest below a free surface, the weight of the water above must be balanced by a pressure in this small cube. For an infinitely small cube, this weight, or equivalent pressure, can be expressed as

$P = \rho g h$

where, using SI units, P is the hydrostatic pressure (in pascals), ρ is the water density (in kilograms per cubic meter), g is gravitational acceleration (in meters per second squared), and h is the height of fluid above (in meters).

Atmospheric pressure

The Maxwell-Boltzmann distribution predicts that, for a gas at constant temperature T, its density ρ will vary with height h as

ρ(h) = ρ(0) e^{−mgh/kT},

where m is the mass of a gas molecule, k is Boltzmann's constant, and g is the acceleration due to gravity.
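Both formulas give quick order-of-magnitude numbers. A sketch (the densities and temperature are illustrative values of my own, and the barometric exponent is written with the molecular mass m included):

```python
rho_water = 1000.0   # kg/m^3 (assumed)
g = 9.81             # m/s^2

# Hydrostatic pressure 10 m below a water surface: P = rho * g * h
P = rho_water * g * 10.0      # about 98 kPa, roughly one atmosphere

# Isothermal scale height kT/(m*g) implied by rho(h) = rho(0) * exp(-m*g*h / (k*T))
k = 1.380649e-23              # Boltzmann constant, J/K
m = 0.029 / 6.022e23          # average mass of an air molecule, kg (assumed)
T = 288.0                     # K (assumed)
H = k * T / (m * g)           # about 8.4 km: density falls by a factor e every H
```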
Buoyancy

A solid body immersed in a fluid will have an upward buoyant force acting on it equal to the weight of the displaced fluid. This is due to the hydrostatic pressure in the fluid. In the case of a container ship, for instance, its weight force is balanced by a buoyant force from the displaced water, allowing it to float. If more cargo is loaded onto the ship, it will sit lower in the water, displacing more water and thus receiving a larger buoyant force to balance the increased weight force. Discovery of the principle of buoyancy is attributed to Archimedes.

Stability

A floating object is stable if it tends to restore itself to an equilibrium position after a small displacement. For example, floating objects will generally have vertical stability: if the object is pushed down slightly, this will create a greater buoyant force which, unbalanced against the weight force, will push the object back up. Rotational stability is of great importance to floating vessels. Given a small angular displacement, the vessel may return to its original position (stable), move away from its original position (unstable), or remain where it is (neutral). Rotational stability depends on the relative lines of action of forces on an object. The upward buoyant force on an object acts through the centre of buoyancy, the centroid of the displaced volume of fluid. The weight force on the object acts through its centre of gravity. An object will be stable if an angular displacement moves the lines of action of these forces to set up a 'righting moment'.
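Archimedes' principle reduces to one line for a floating body: at equilibrium the weight rho_object * V * g equals the buoyant force rho_fluid * V_submerged * g, so the submerged volume fraction is just the density ratio. A sketch (the densities are illustrative round numbers):

```python
def submerged_fraction(rho_object, rho_fluid):
    """Fraction of a floating body's volume below the fluid surface."""
    if rho_object >= rho_fluid:
        return 1.0          # denser than the fluid: it sinks, fully submerged
    return rho_object / rho_fluid

print(submerged_fraction(917.0, 1000.0))   # ice in water: 0.917
```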
Surface tension effects Capillary action When liquids are constrained in vessels whose dimensions are small, compared to the relevant length scales, surface tension effects become important leading to the formation of a meniscus through capillary action. Without surface tension, drops would not be able to form. The dimensions and stability of drops are determined by surface tension. See also de:Hydrostatik es:Hidrostática fr:Hydrostatique ko:유체 정역학 it:Idrostatica nl:Hydrostatica pl:Hydrostatyka pt:Hidrostática ru:Гидростатика sl:Hidrostatika
{"url":"http://exampleproblems.com/wiki/index.php/Fluid_statics","timestamp":"2014-04-16T15:59:49Z","content_type":null,"content_length":"28198","record_id":"<urn:uuid:cd7ff92d-f3b6-4e43-bf23-0c297c06dc8a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
The main goals of the Common Core State Standards are to establish the knowledge and skills necessary for college and career readiness among high school graduates, and to continually develop these skill sets at each grade level. The objectives of the Common Core State Standards for Mathematics are to improve achievement in mathematics, to present math concepts in a more unified and coherent way, and to develop increased mathematical understanding. The "Standards for Mathematical Practice" are unifying principles for approaching mathematical problems that apply to all grade levels. In addition, the high school mathematics standards are grouped into six concept areas (Number and Quantity, Algebra, Functions, Modeling, Geometry, and Statistics and Probability) which can be combined in various ways according to different content goals. The Common Core State Standards strive to make the progression of math by grade level more logical, to cover fewer subjects in greater depth, and to encourage students to apply mathematical thinking to real-world challenges.
{"url":"http://www.saylor.org/majors/math/","timestamp":"2014-04-20T01:15:50Z","content_type":null,"content_length":"22638","record_id":"<urn:uuid:1b347598-cac9-4545-9037-c4015946b5d8>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Circle in Triangle

#1, October 27th 2009, 11:08 AM (member since Apr 2009)

Hi, I have attached the question as a Word file. I have worked through it, and need help with the last bit: I do not understand how to show the relationship between A and B in (ii). The first time I did the question, I tried to use the relationship to evaluate the areas asked for next, which ended up being confusing. However, this time, I just worked with the kite and sector of the circle to find A. I didn't use B at all.

Since B was given right before the question asking for the final two areas, I am presuming that it can be used to work out the answers, or else there would be no reason to include it. So basically, I don't understand the relationship. Also, can anyone show how they would do the last part using this relationship between areas A and B?

#2, October 28th 2009, 07:20 AM

Hello Aquafina

The function $A(\theta)$ gives the area between any two tangents and the circle, expressed as a function of $\theta$, the angle between the tangents. So if we denote the angle between the tangents from $R$ to the circle by $\phi$, then the area between these tangents and the circle is $A(\phi)$. But this area is given by $B(\theta)$. Hence $A(\phi) = B(\theta)$.

But, from $\triangle ORP$, $\theta = \frac{\pi}{2}-\phi$. Therefore $A(\phi) = B(\frac{\pi}{2}-\phi)$. We can replace $\phi$ by any other parameter; in particular by $\theta$. So $A(\theta) = B(\frac{\pi}{2}-\theta)$.

For part (iii) I should do as you have done. The calculation is very straightforward for $\theta = \frac{\pi}{3}$. Why complicate it?

#3, October 29th 2009, 12:47 AM (member since Apr 2009)

Thanks Grandad. I was told I had to mention something like: that the reflections would not change the areas and reverse the angles...

Also, I wanted to try the relationship with B to work out the area A, just to experiment. But this didn't work out. I did A(pi/3) = B(pi/6) = 2*AreaQCR - Area segment = root3 - 5pi/12, rather than the pi/3... I get pi/3 if I take the angle ORP to be pi/3, not pi/6.

#4, October 29th 2009, 11:20 AM

Hello Aquafina

I'm not quite sure what you mean here. The straightforward proof that $A(\frac{\pi}{3})= \sqrt3-\frac{\pi}{3}$ is as follows:

If $C$ is the centre of the circle, then $\angle CPQ = \frac{\pi}{6}$
$\Rightarrow PQ = \frac{1}{\tan(\tfrac{\pi}{6})} = \sqrt3$
$\Rightarrow$ area of $\triangle CPQ = \tfrac12\cdot1\cdot\sqrt3 = \frac{\sqrt3}{2}$
$\Rightarrow$ area of kite $= 2\times\triangle CPQ = \sqrt3$

The angle at the centre of the circle is $\pi - \theta = \frac{2\pi}{3}$
$\Rightarrow$ area of sector $= \tfrac12\times1^2\times\frac{2\pi}{3}= \frac{\pi}{3}$
$\Rightarrow A(\frac{\pi}{3}) =$ area of kite $-$ area of sector $= \sqrt3 - \frac{\pi}{3}$
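The kite-minus-sector computation above can be checked numerically; a sketch assuming a unit-radius circle, as in the problem:

```python
import math

theta = math.pi / 3
t = 1 / math.tan(theta / 2)               # tangent length PQ = sqrt(3)
kite = 2 * (0.5 * 1 * t)                  # two congruent right triangles CPQ
sector = 0.5 * 1**2 * (math.pi - theta)   # sector angle at the centre is pi - theta
A = kite - sector

print(A, math.sqrt(3) - math.pi / 3)      # both about 0.685
```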
{"url":"http://mathhelpforum.com/trigonometry/110843-circle-triangle.html","timestamp":"2014-04-16T05:24:25Z","content_type":null,"content_length":"50997","record_id":"<urn:uuid:c5978cdf-1fd5-42a4-b43e-18707057cfbf>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Is logic part of mathematics - or is mathematics part of logic?
Replies: 32   Last Post: Jul 7, 2013 9:03 PM

Re: Is logic part of mathematics - or is mathematics part of logic?
Posted: Jul 5, 2013 12:04 PM

Clyde Greeno says:
# Likewise, you take them to where they can experience modus ponens, and say "This is called 'modus ponens'." And the easiest and most meaningful place for learners to encounter modus ponens is within their own routine conversation.

So this has devolved into a *literal* discussion of "modus ponens" in isolation, because I used the term in context to stand in for the subject "informal logic". I shouldn't be surprised! It's quite obvious to me that the majority of people are not very skilled at sustaining a line of logical discourse in routine conversation. That's the whole point of training in logic, which was well known thousands of years ago.

Joe N
Journal of the Optical Society of America A

The T-matrix equations are revisited with an eye to computing surface fields. Electromagnetic scattering by cylinders is considered for both surface- and volume-type scatterers. Elliptical and other smooth surfaces are examined, as well as cylinders with edges, and the usefulness of various surface field representations is shown. An absorption matrix is defined, and discarding the skew-symmetric part of the T matrix, i.e., enforcing reciprocity, in the course of numerical computations, is found to better satisfy energy constraints both with and without losses present. For thin penetrable cylinders, extended Rayleigh formulas are found for the case kr_max ≪ 1, |k′r_max| arbitrary, where k and k′ are, respectively, the free-space and interior propagation constants. In contrast, existing methods require that both quantities be small compared with unity. © 1999 Optical Society of America

OCIS Codes
(000.4430) General : Numerical approximation and analysis
(260.1960) Physical optics : Diffraction theory
(260.2110) Physical optics : Electromagnetic optics
(260.2160) Physical optics : Energy transfer
(290.3770) Scattering : Long-wave scattering

P. C. Waterman, "Surface fields and the T matrix," J. Opt. Soc. Am. A 16, 2968-2977 (1999)
Calculate Motor Size

July 2004

Based on the basic formulas for the physics of motion we can calculate the theoretical energy needed to move a vehicle. There are three major forces at work which resist a vehicle from moving:

1. Rolling resistance
2. Air resistance
3. The force of gravity as a vehicle moves up a hill

Rolling Resistance

The force of the rolling resistance is the weight of the vehicle multiplied by a coefficient of rolling resistance. This force is mostly independent of car speed.

    F_roll = u_roll · M · a_gravity

u_roll - the rolling resistance coefficient; typical values range from 0.010 to 0.015
M - the mass of the vehicle (kg)
a_gravity - the acceleration due to gravity (9.8 m/s^2)

Air Resistance

The force of the air resistance is proportional to the square of the speed, the density of the air, the silhouette area of the car, and the drag coefficient of the vehicle.

    F_air = (1/2) · C_d · A · p · v^2

C_d - the drag coefficient (dimensionless)
A - the silhouette (frontal) area of the car (m^2)
p - the density of the air (1.2 kg/m^3 at sea level at normal temperatures)
v - the speed of the vehicle (m/s)

Here is a table of common drag coefficients for some vehicles:

    Vehicle            C_d
    VW New Beetle      0.38
    Porsche Carrera    0.38

Force required to go uphill

The force required to lift a car uphill is a function of the grade of the hill and the force of gravity:

    F_hill = (% grade of hill) · M · a_gravity

M - the mass of the vehicle (kg)
a_gravity - the acceleration due to gravity (9.8 m/s^2)

Power required for Forward Motion

Power is a measurement of work per unit time, and work is a measurement of a force moved through some distance. Therefore the power required to move a vehicle at a certain speed is simply the total of all forces to overcome multiplied by the speed:

    P = (F_roll + F_air + F_hill) · v

Power required for Acceleration

To calculate the power required to accelerate a vehicle, first determine the amount of energy required to accelerate the vehicle from 0 to a speed v:

    E_k = (1/2) · M · v^2

M - the mass of the vehicle (kg)
v - the final speed of the vehicle (m/s)

Then to calculate the power, divide the energy required by the time it takes to accelerate the vehicle:

    P = E_k / t

E_k - the energy required to accelerate the vehicle to the speed (J)
t - the time it takes to accelerate the vehicle to the speed (s)

Putting it all together

The Drive Power spreadsheet puts all these formulas together to calculate the energy/power needs for an electric vehicle.

Updated Sept 6, 2004!
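The formula chain above can be sketched in a few lines of code (a rough sketch; the vehicle numbers in the example are illustrative and are not taken from the Drive Power spreadsheet):

```python
def drive_power(mass_kg, v_ms, c_d, area_m2, grade=0.0,
                u_roll=0.012, air_density=1.2, g=9.8):
    """Power (W) needed to hold a steady speed v_ms (m/s)."""
    f_roll = u_roll * mass_kg * g                          # rolling resistance
    f_air = 0.5 * c_d * air_density * area_m2 * v_ms ** 2  # aerodynamic drag
    f_hill = grade * mass_kg * g                           # grade as a fraction, e.g. 0.05 for 5%
    return (f_roll + f_air + f_hill) * v_ms

def accel_power(mass_kg, v_ms, t_s):
    """Average power (W) to accelerate from rest to v_ms in t_s seconds."""
    return 0.5 * mass_kg * v_ms ** 2 / t_s

# Example: 1000 kg car, 25 m/s (90 km/h), C_d 0.38, 2.0 m^2 frontal area
print(round(drive_power(1000, 25, 0.38, 2.0)))  # ~10065 W on level ground
print(round(accel_power(1000, 25, 10)))         # 31250 W to reach 25 m/s in 10 s
```

Note that this is the steady-state and average-acceleration power at the wheels; a real motor would be sized with drivetrain losses and a safety margin on top.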