content
stringlengths
86
994k
meta
stringlengths
288
619
Simmechanics Inverse and Forward Dynamics Hello everyone, I beg your help, because I've been fighting this problem for more than a week without any particular result... So, I have a Simulink block-diagram representing 5-links with 5 revolute joints robot-manipulator. First I solve Inverse Dynamics, motion is given to the model as a Nx3 vector with motion, velocity and acceleration, N - is number of steps. Normally I take 1 sec time period with 100 steps. In a simpliest case motion is rotation of 1st joint for pi/2 radians with acceleration within first 0.1 sec, constant motion within interval [0.1, 0.9], and decelearation within last 0.1 sec. The result of Inverse Dynamics is a portion of computed torques on every joint. This part of simulation goes smoothly. Next I want to perform Forward Dynamics with torques I recieved in previous step, just to check if model works well. And here I have a problem. Results of Forward Dynamics do not match initial motion I set in the beginning, although if I put a sensor on a joint drawing torques, the torques on output match torques on input. So I quess that everything should work... But it doesn't. Please someone help me. Many thanks in advance. PS: Here is the link for the model (for a reduced model actually - only 3 joints) File Initial_motion.mat contains Nx3 vector of motion for the first joint. Load this file first. File robomy_q123 contains simulink model for Inverse Dynamics. After running it, run file Q_fnc.m, which converts torques in function of time. Than run simulink file robomy_q123_torq, which solves Forward dynamics. Maybe it would help.
{"url":"http://www.societyofrobots.com/robotforum/index.php?topic=13385.0","timestamp":"2014-04-17T19:01:53Z","content_type":null,"content_length":"41517","record_id":"<urn:uuid:83d6505f-5bbf-4727-92b7-ab236ab12204>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
graphical method? October 9th 2008, 01:26 PM graphical method? hey i cant think of a simple answer to this. im looking for an obvious anwswer t=switching time V= applied voltage the formula: how can I extract the constants A and B by a simple graphical method from the data October 9th 2008, 02:06 PM I can't give you an "obvious" answer ... but I do have an "answer". solving for $V^2$ ... $V^2 = A\left(\frac{1}{t}\right) + B$ if you graph $\frac{1}{t}$ vs. $V^2$ as x vs. y, you should get a linear graph. A will be the slope of the graph B will be the y-intercept.
{"url":"http://mathhelpforum.com/pre-calculus/52872-graphical-method-print.html","timestamp":"2014-04-19T12:59:38Z","content_type":null,"content_length":"4536","record_id":"<urn:uuid:bccefe91-3065-4468-86f0-1d80ac98aee2>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
DMTCS Proceedings A coupon collector's problem with bonuses Toshio Nakata, Izumi Kubo In this article, we study a variant of the coupon collector's problem introducing a notion of a bonus. Suppose that there are c different types of coupons made up of bonus coupons and ordinary coupons, and that a collector gets every coupon with probability 1/c each day. Moreover suppose that every time he gets a bonus coupon he immediately obtains one more coupon. Under this setting, we consider the number of days he needs to collect in order to have at least one of each type. We then give not only the expectation but also the exact distribution represented by a gamma distribution. Moreover we investigate their limits as the Gumbel (double exponential) distribution and the Gauss (normal) distribution. Full Text:
{"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/proceedings/article/viewArticle/dmAG0114","timestamp":"2014-04-17T06:47:05Z","content_type":null,"content_length":"9988","record_id":"<urn:uuid:7dd6a2a2-05d5-4823-8178-2d35c3aae682>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Artificial Intelligence: Problem Set 3 Assigned: Feb. 27 Due: Mar. 20 Consider a domain whose entities are people, books, and volumes. (A "volume" is a particular physical object, which is a copy of an abstract "book", like Moby Dick). Let L be the first-order language containing the following predicates: o(P,V) --- Predicate. Person P owns volume V c(V,B) --- Predicate. Volume V is a copy of book B. a(P,B) --- Predicate. Person P is the author of book B. i(P,B) --- Predicate. Person P is the illustrator of book B. h --- Constant. Howard Pyle. s --- Constant. Sam. j --- Constant. Joe. Problem 1 Express the following statements in L: (Note correction to sentence d.) • a. Sam owns a copy of every book that Howard Pyle illustrated. Answer: forall(B) i(h,B) => exists(V) o(s,V) ^ c(V,B). • b. Joe owns a copy of a book that Howard Pyle wrote. Answer: exists(V,B) o(j,V) ^ c(V,B) ^ a(h,B). • c. Howard Pyle illustrated every book that he wrote. Answer: forall(B) a(h,B) => i(h,B). • d. Sam owns only illustrated books. Interpret this in the form "If Sam owns volume V and V is a copy of book B, then B has been illustrated by someone." Answer: forall(V,B) o(s,V) ^ c(V,B) => exists(P) i(P,B) • e. None of the books that Joe has written have been illustrated by anyone. Answer: not exists(B,P) a(j,B) ^ i(P,B). • f. Sam does not own a copy of any book that Joe has written. Answer: forall(B) a(j,B) => not exists(V) c(V,B) ^ o(s,B). • g. There is a book B such that both Sam and Joe own a copy of B. Answer: exists(B,V1,V2) c(V1,B) ^ o(s,V1) ^ c(V2,B) ^ o(j,V2). Problem 2 Using resolution, show that (g) can be proven from (a-c) and that (f) can be proven from (d,e). You must show the Skolemized form of each of the axioms and of the negated goals. You need not show the intermediate steps of Skolemization. Answer: Skolemizing (a-c) gives the following clauses. a1: -i(h,B) V o(s,sk1(B)). a2: -i(h,B) V c(sk1(B),B) b1: o(j,sk2). b2: c(sk2,sk3) b3: a(h,sk3) c1: -a(h,B) V i(h,B). The Skolemized form of the negation of (g) is g1: -c(V1,B) V -o(s,V1) V - c(V2,B) V -o(j,V2). One resolution proof proceeds as follows: h: -c(V1,B) V -o(s,V1) V -c(sk2,B). (From g1 and b1, V2 -> sk2) i: -c(V1,sk3) V -o(s,V1). (From h and b2, B -> sk3) j: i(h,sk3) (From b3 and c1, B -> sk3) k: o(s,sk1(sk3)). (From j and a1, B -> sk3) l: c(sk1(sk3),sk3). (From j and a2, B-> sk3). m: -o(s,sk1(sk3)). (From l and c, V1 -> sk1(sk3)) n: empty (From m and k) The Skolemized forms of (d,e) are d1: -o(S,V) V -c(V,B) V i(sk4(V,B),B) e1: -a(j,B) V -i(P,B) (Alternative answer for d1: d can be rewritten in the logically equivalent form forall(B) [exists(V) o(s,V) ^ c(V,B)] => exists(P) i(P,B). This form reveals that P does not actually depend on V. Applying the Skolemization procedure to this form gives d1': -o(S,V) V c(V,B) V i(sk4(B),B) In terms of finding resolution proofs, it makes no difference which of these is used.) The Skolemized form of the negation of f is f1: a(j,sk5) f2: c(sk6,sk5) f3: o(s,sk6) One resolution proof proceeds as follows: p: -c(sk6,B) V i(sk4(sk6,B),B) (From f3 and d1, V -> sk6) q: i(sk4(sk6,sk5),sk5) (From f2 and p, B -> sk5). r: -a(j,sk5). (From q and e1, B -> sk5, P -> sk4(sk6,sk5)) s: empty (From r and f1).
{"url":"http://cs.nyu.edu/courses/spring01/G22.2560-001/sol3.html","timestamp":"2014-04-16T19:55:50Z","content_type":null,"content_length":"4219","record_id":"<urn:uuid:55237911-afa0-4f3a-85a7-06ace9d407ce>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Isbell duality Higher algebra Algebraic theories Algebras and modules Higher algebras Model category presentations Geometry on formal duals of algebras Higher geometry In QFT and String theory higher geometry $\leftarrow$Isbell duality $\rightarrow$higher algebra A general abstract adjunction $(\mathcal{O} \dashv Spec) : CoPresheaves \stackrel{\overset{\mathcal{O}}{\leftarrow}}{\underset{Spec}{\to}} Presheaves$ relates (higher) presheaves with (higher) copresheaves on a given (higher) category $C$: this is called Isbell conjugation or Isbell duality (after John Isbell). To the extent that this adjunction descends to presheaves that are (higher) sheaves and copresheaves that are (higher) algebras this duality relates higher geometry with higher algebra. Objects preserved by the monad of this adjunction are called Isbell self-dual. Let $\mathcal{V}$ be a good enriching category (a cosmos, i.e. a complete and cocomplete closed symmetric monoidal category). Let $\mathcal{C}$ be a small $\mathcal{V}$-enriched category. Write $[\mathcal{C}^{op}, \mathcal{V}]$ and $[\mathcal{C}, \mathcal{V}]$ for the enriched functor categories. There is a $V$-adjunction $(\mathcal{O} \dashv Spec) : [C, \mathcal{V}]^{op} \stackrel{\overset{\mathcal{O}}{\leftarrow}}{\underset{Spec}{\to}} [C^{op}, \mathcal{V}]$ $\mathcal{O}(X) : c \mapsto [C^{op}, \mathcal{V}](X, \mathcal{V}(-,c)) \,,$ $Spec(A) : c \mapsto [C, \mathcal{V}]^{op}(\mathcal{V}(c,-),A) \,.$ The proof is mostly a tautology after the notation is unwinded. The mechanism of the proof may still be of interest and be relevant for generalizations and for less tautological variations of the setup. We therefore spell out several proofs. Proof A Use the end-expression for the hom-objects of the enriched functor categories to compute \begin{aligned} [C,\mathcal{V}]^{op}(\mathcal{O}(X), A) & := \int_{c \in C} \mathcal{V}(A(c), \mathcal{O}(X)(c)) \\ & := \int_{c \in C} \mathcal{V}(A(c), [C^{op}, \mathcal{V}](X, \mathcal{V}(-,c))) \ \ & := \int_{c \in C} \int_{d \in C} \mathcal{V}(A(c), \mathcal{V}(X(d), \mathcal{V}(d,c))) \\ & \simeq \int_{d \in C} \int_{c \in C} \mathcal{V}(X(d), \mathcal{V}(A(c), \mathcal{V}(d,c))) \\ & =: \ int_{d \in C} \mathcal{V}(X(d), [C,\mathcal{V}]^{op}(\mathcal{V}(d,-),A)) \\ & =: \int_{d \in C} \mathcal{V}(X(d), Spec(A)(d)) \\ & =: [C^{op}, \mathcal{V}](X, Spec(A)) \end{aligned} \,. The following proof does not use ends and needs instead slightly more preparation, but has then the advantage that its structure goes through also in great generality in higher category theory. Proof B Notice that Lemma 1: $Spec(\mathcal{V}(c,-)) \simeq \mathcal{V}(-,c)$ because we have a natural isomorphism \begin{aligned} Spec(\mathcal{V}(c,-))(d) & := [C,\mathcal{V}](\mathcal{V}(c,-), \mathcal{V}(d,-)) \\ & \simeq \mathcal{V}(d,c) \end{aligned} by the Yoneda lemma. From this we get Lemma 2: $[C^{op}, \mathcal{V}](Spec \mathcal{V}(c,-), Spec A) \simeq [C,\mathcal{V}](A, \mathcal{V}(c,-))$ by the sequence of natural isomorphisms \begin{aligned} [C^{op}, \mathcal{V}](Spec \mathcal{V}(c,-), Spec A) & \simeq [C^{op}, \mathcal{V}](\mathcal{V}(-,c), Spec A) \\ & \simeq (Spec A)(c) \\ & := [C, \mathcal{V}](A, \mathcal{V}(c,-)) \ end{aligned} \,, where the first is Lemma 1 and the second the Yoneda lemma. Since (by what is sometimes called the co-Yoneda lemma) every object $X \in [C^{op}, \mathcal{V}]$ may be written as a colimit $X \simeq {\lim_\to}_i \mathcal{V}(-,c_i)$ over representables $\mathcal{V}(-,c_i)$ we have $X \simeq {\lim_\to}_i Spec(\mathcal{V}(c_i,-)) \,.$ In terms of the same diagram of representables it then follows that Lemma 3: $\mathcal{O}(X) \simeq {\lim_{\leftarrow}}_i \mathcal{V}(c_i,-)$ because using the above colimit representation and the Yoneda lemma we have natural isomorphisms \begin{aligned} \mathcal{O}(X)(d) &= [C^{op}, \mathcal{V}](X, \mathcal{V}(-,c)) \\ & \simeq [C^{op}, \mathcal{V}]({\lim_\to}_i \mathcal{V}(-,c_i), \mathcal{V}(-,c)) \\ & \simeq {\lim_\leftarrow}_i [C ^{op}, \mathcal{V}](\mathcal{V}(-,c_i), \mathcal{V}(-,c)) \\ & \simeq {\lim_\leftarrow}_i \mathcal{V}(c_i,c) \end{aligned} \,. Using all this we can finally obtain the adjunction in question by the following sequence of natural isomorphisms \begin{aligned} [C,\mathcal{V}]^{op}(\mathcal{O}(X), A) & \simeq [C,\mathcal{V}](A, {\lim_\leftarrow}_i \mathcal{V}(c_i,-), ) \\ & \simeq {\lim_{\leftarrow}}_i [C, \mathcal{V}](A, \mathcal{V}(c_i,-)) \\ & \simeq {\lim_{\leftarrow}}_i [C^{op}, \mathcal{V}](Spec \mathcal{V}(c_i,-), Spec A) \\ & \simeq [C^{op}, \mathcal{V}]({\lim_{\to}}_i Spec \mathcal{V}(c_i,-), Spec A) \\ & \simeq [C^{op}, \ mathcal{V}](X, Spec A) \end{aligned} \,. The pattern of this proof has the advantage that it goes through in great generality also on higher category theory without reference to a higher notion of enriched category theory. An object $X$ or $A$ is Isbell-self-dual if • $A \stackrel{}{\to} \mathcal{O} Spec(A)$ is an isomorphism in $[C,\mathcal{V}]$; • $X \to Spec \mathcal{O} X$ is an isomorphism in $[C^{op}, \mathcal{V}]$, respectively. Respect for limits Choose any class $L$ of limits in $C$ and write $[C,\mathcal{V}]_\times \subset [C,\mathcal{V}]$ for the full subcategory consisting of those functors preserving these limits. The $(\mathcal{O} \dashv Spec)$-adjunction does descend to this inclusion, in that we have an adjunction $(\mathcal{O} \dashv Spec) : [C, \mathcal{V}]_{\times}^{op} \stackrel{\overset{\mathcal{O}}{\leftarrow}}{\underset{Spec}{\to}} [C^{op}, \mathcal{V}]$ Because the hom-functors preserves all limits: \begin{aligned} \mathcal{O}(X)({\lim_{\leftarrow}}_j c_j) & := [C^{op}, \mathcal{V}](X,\mathcal{V}(-,{\lim_{\leftarrow}}_j c_j)) \\ & \simeq [C^{op}, \mathcal{V}](X,{\lim_{\leftarrow}}_j \mathcal{V} (-,c_j)) \\ & \simeq {\lim_{\leftarrow}}_j [C^{op}, \mathcal{V}](X,\mathcal{V}(-,c_j)) \\ & =: {\lim_{\leftarrow}}_j \mathcal{O}(X)(c_j) \end{aligned} \,. Isbell self-dual objects By Proof B , lemma 1 we have a natural isomorphisms in $c \in C$ $Spec(\mathcal{V}(c,-)) \simeq \mathcal{V}(-,c) \,.$ Therefore we have also the natural isomorphism \begin{aligned} \mathcal{O} Spec \mathcal{V}(c,-)(d) & \simeq \mathcal{O} \mathcal{V}(-,c) (d) \\ & := [C^{op}, \mathcal{V}](\mathcal{V}(-,c), \mathcal{V}(-,d)) \\ & \simeq \mathcal{V}(c,d) \end {aligned} \,, where the second step is the Yoneda lemma. Similarly the other way round. Isbell envelope See Isbell envelope. Examples and similar dualities Isbell duality is a template for many other space/algebra-dualities in mathematics. Function $T$-Algebras on presheaves Let $\mathcal{V}$ be any cartesian closed category. Let $C := T$ be the syntactic category of a $\mathcal{V}$-enriched Lawvere theory, that is a $\mathcal{V}$-category with finite products such that all objects are generated under products from a single object $1$. Then write $T Alg := [C,\mathcal{V}]_\times$ for category of product-preserving functors: the category of $T$-algebras. This comes with the canonical forgetful functor $U_T : T Alg \to \mathcal{V} : A \mapsto A(1)$ $F_T : T^{op} \hookrightarrow T Alg$ for the Yoneda embedding. $\mathbb{A}_T := Spec(F_T(1)) \in [C^{op}, \mathcal{V}]$ the $T$-line object. For all $X \in [C^{op}, \mathcal{V}]$ we have $\mathcal{O}(X) \simeq [C^{op}, \mathcal{V}](X, Spec(F_T(-))) \,.$ In particular $U_T(\mathcal{O}(X)) \simeq [C^{op}, \mathcal{V}](X,\mathbb{A}_T) \,.$ We have isomorphisms natural in $k \in T$ \begin{aligned} [C^{op}, \mathcal{V}](X, Spec(F_T(k))) & \simeq T Alg(F_T(k), \mathcal{O}(X)) \\ & \simeq \mathcal{O}(X)(k) \end{aligned} by the above adjunction and then by the Yoneda lemma. All this generalizes to the following case: instead of setting $C := T$ let more generally $T \subset C \subset T Alg^{op}$ be a small full subcategory of $T$-algebras, containing all the free $T$-algebras. Then the original construction of $\mathcal{O} \dashv Spec$ no longer makes sense, but that in terms of the line object still does $Spec A : B \mapsto T Alg(A,B)$ $\mathcal{O}(X) : k \mapsto [C^{op}, \mathcal{V}](X, Spec(F_T(k))) \,.$ Then we still have an adjunction $(\mathcal{O} \dashv Spec) : T Alg^{op} \stackrel{\overset{\mathcal{O}}{\leftarrow}}{\underset{Spec}{\to}} [C^{op}, \mathcal{V}] \,.$ \begin{aligned} T Alg^{op}(\mathcal{O}(X), A) & := \int_{k \in T} \mathcal{V}( A(k), \mathcal{O}(X)(k) ) \\ & := \int_{k \in T} \mathcal{V}( A(k), [C^{op}, \mathcal{V}](X, Spec(F_T(k))) ) \\ & := \ int_{k \in T} \int_{B \in C} \mathcal{V}(A(k), \mathcal{V}(X(B), T Alg(F_T(k), B) )) \\ & \simeq \int_{k \in T} \int_{B \in C} \mathcal{V}(A(k), \mathcal{V}(X(B), B(k) )) \\ & \simeq \int_{k \in T} \ int_{B \in C} \mathcal{V}(X(B), \mathcal{V}(A(k), B(k) )) \\ & =: \int_{B \in C} \mathcal{V}(X(B), T Alg(A,B)) \\ & =: \int_{B \in C} \mathcal{V}(X(B), Spec(A)(B)) \\ & =: [C^{op}, Set](X,Spec(A)) \ end{aligned} \,. The first step that is not a definition is the Yoneda lemma. The step after that is the symmetric-closed-monoidal structure of $\mathcal{V}$. Function $k$-algebras on derived $\infty$-stacks The structure of our Proof B above goes through in higher category theory. Formulated in terms of derived stacks over the (∞,1)-category of dg-algebras, this is essentially the argument appearing on page 23 of (Ben-ZviNadler). Function $T$-algebras on $\infty$-stacks for the moment see at function algebras on ∞-stacks. Function 2-algebras on algebraic stacks see Tannaka duality for geometric stacks Gelfand duality Gelfand duality is the equivalence of categories between (nonunital) commutative C-star algebras and (locally) compact topological spaces. See there for more details. Serre-Swan theorem The Serre-Swan theorem says that suitable modules over an commutative C-star algebra are equivalently modules of sections of vector bundles over the Gelfand-dual topological space. duality between algebra and geometry in physics: • Michael Barr, John Kennison, R. Raphael, Isbell Duality, PDF Isbell conjugation is reviewed on page 17 of Isbell conjugacy for (∞,1)-presheaves over the (∞,1)-category of duals of dg-algebras is discussed around page 32 of Isbell self-dual ∞-stacks over duals of commutative associative algebras are called affine stacks . They are characterized as those objects that are small in a sense and local with respect to the cohomology with coefficients in the canonical line object. A generalization of this latter to $\infty$-stacks over duals of algebras over arbitrary abelian Lawvere theories is the content of • Herman Stel, $\infty$-Stacks and their function algebras – with applications to $\infty$-Lie theory , master thesis (2010) (web) See also
{"url":"http://ncatlab.org/nlab/show/Isbell%20duality","timestamp":"2014-04-21T02:01:46Z","content_type":null,"content_length":"173242","record_id":"<urn:uuid:f02b824a-8cee-4736-9d90-5552faa0c265>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
Special relativity In physics, special relativity (SR, also known as the special theory of relativity or STR) is the accepted physical theory regarding the relationship between space and time. It is based on two postulates: (1) that the laws of physics are invariant (i.e., identical) in all inertial systems (non-accelerating frames of reference); and (2) that the speed of light in a vacuum is the same for all observers, regardless of the motion of the light source. It was originally proposed in 1905 by Albert Einstein in the paper "On the Electrodynamics of Moving Bodies".^[1] The inconsistency of classical mechanics with Maxwell’s equations of electromagnetism led to the development of special relativity, which corrects classical mechanics to handle situations involving motions nearing the speed of light. As of today, special relativity is the most accurate model of motion at any speed. Even so, classical mechanics is still useful (due to its sheer simplicity and high accuracy) as an approximation at small velocities relative to the speed of light. Special relativity implies a wide range of consequences, which have been experimentally verified,^[2] including length contraction, time dilation, relativistic mass, mass–energy equivalence, a universal speed limit, and relativity of simultaneity. It has replaced the conventional notion of an absolute universal time with the notion of a time that is dependent on reference frame and spatial position. Rather than an invariant time interval between two events, there is an invariant spacetime interval. Combined with other laws of physics, the two postulates of special relativity predict the equivalence of mass and energy, as expressed in the mass–energy equivalence formula E = mc^2, where c is the speed of light in vacuum.^[3]^[4] A defining feature of special relativity is the replacement of the Galilean transformations of classical mechanics with the Lorentz transformations. Time and space cannot be defined separately from one another. Rather space and time are interwoven into a single continuum known as spacetime. Events that occur at the same time for one observer could occur at different times for another. The theory is called "special" because it applied the principle of relativity only to the special case of inertial reference frames. Einstein later published a paper on general relativity in 1915 to apply the principle in the general case, that is, to any frame so as to handle general coordinate transformations, and gravitational effects. As Galilean relativity is now considered an approximation of special relativity valid for low speeds, special relativity is considered an approximation of the theory of general relativity valid for weak gravitational fields. The presence of gravity becomes undetectable at sufficiently small-scale, free-falling conditions. General relativity incorporates noneuclidean geometry, so that the gravitational effects are represented by the geometric curvature of spacetime. Contrarily, special relativity is restricted to flat spacetime. The geometry of spacetime in special relativity is called Minkowski space. A locally Lorentz invariant frame that abides by Special relativity can be defined at sufficiently small scales, even in curved spacetime. Galileo Galilei had already postulated that there is no absolute and well-defined state of rest (no privileged reference frames), a principle now called Galileo's principle of relativity. Einstein extended this principle so that it accounted for the constant speed of light,^[5] a phenomenon that had been recently observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics, including both the laws of mechanics and of electrodynamics.^[6] Reflections of this type made it clear to me as long ago as shortly after 1900, i.e., shortly after Planck's trailblazing work, that neither mechanics nor electrodynamics could (except in “ limiting cases) claim exact validity. Gradually I despaired of the possibility of discovering the true laws by means of constructive efforts based on known facts. The longer and the more ” desperately I tried, the more I came to the conviction that only the discovery of a universal formal principle could lead us to assured results... How, then, could such a universal principle be —Albert Einstein: Autobiographical Notes^[7] Einstein discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of the (then) known laws of either mechanics or electrodynamics. These propositions were the constancy of the speed of light and the independence of physical laws (especially the constancy of the speed of light) from the choice of inertial system. In his initial presentation of special relativity in 1905 he expressed these postulates as:^[1] • The Principle of Relativity – The laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems in uniform translatory motion relative to each other.^[1] • The Principle of Invariant Light Speed – "... light is always propagated in empty space with a definite velocity [speed] c which is independent of the state of motion of the emitting body." (from the preface).^[1] That is, light in vacuum propagates with the speed c (a fixed constant, independent of direction) in at least one system of inertial coordinates (the "stationary system"), regardless of the state of motion of the light source. The derivation of special relativity depends not only on these two explicit postulates, but also on several tacit assumptions (made in almost all theories of physics), including the isotropy and homogeneity of space and the independence of measuring rods and clocks from their past history.^[8] Following Einstein's original presentation of special relativity in 1905, many different sets of postulates have been proposed in various alternative derivations.^[9] However, the most common set of postulates remains those employed by Einstein in his original paper. A more mathematical statement of the Principle of Relativity made later by Einstein, which introduces the concept of simplicity not mentioned above is: Special principle of relativity: If a system of coordinates K is chosen so that, in relation to it, physical laws hold good in their simplest form, the same laws hold good in relation to any other system of coordinates K' moving in uniform translation relatively to K.^[10] Henri Poincaré provided the mathematical framework for relativity theory by proving that Lorentz transformations are a subset of his Poincaré group of symmetry transformations. Einstein later derived these transformations from his axioms. Many of Einstein's papers present derivations of the Lorentz transformation based upon these two principles.^[11] Einstein consistently based the derivation of Lorentz invariance (the essential core of special relativity) on just the two basic principles of relativity and light-speed invariance. He wrote: The insight fundamental for the special theory of relativity is this: The assumptions relativity and light speed invariance are compatible if relations of a new type ("Lorentz transformation") are postulated for the conversion of coordinates and times of events... The universal principle of the special theory of relativity is contained in the postulate: The laws of physics are invariant with respect to Lorentz transformations (for the transition from one inertial system to any other arbitrarily chosen inertial system). This is a restricting principle for natural Thus many modern treatments of special relativity base it on the single postulate of universal Lorentz covariance, or, equivalently, on the single postulate of Minkowski spacetime.^[12]^[13] From the principle of relativity alone without assuming the constancy of the speed of light (i.e. using the isotropy of space and the symmetry implied by the principle of special relativity) one can show that the spacetime transformations between inertial frames are either Euclidean, Galilean, or Lorentzian. In the Lorentzian case, one can then obtain relativistic interval conservation and a certain finite limiting speed. Experiments suggest that this speed is the speed of light in vacuum.^[14]^[15] The constancy of the speed of light was motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous ether. There is conflicting evidence on the extent to which Einstein was influenced by the null result of the Michelson–Morley experiment.^[16]^[17] In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance. Lack of an absolute reference frame[edit] The principle of relativity, which states that there is no preferred inertial reference frame, dates back to Galileo, and was incorporated into Newtonian physics. However, in the late 19th century, the existence of electromagnetic waves led physicists to suggest that the universe was filled with a substance known as "aether", which would act as the medium through which these waves, or vibrations travelled. The aether was thought to constitute an absolute reference frame against which speeds could be measured, and could be considered fixed and motionless. Aether supposedly possessed some wonderful properties: it was sufficiently elastic to support electromagnetic waves, and those waves could interact with matter, yet it offered no resistance to bodies passing through it. The results of various experiments, including the Michelson–Morley experiment, indicated that the Earth was always 'stationary' relative to the aether – something that was difficult to explain, since the Earth is in orbit around the Sun. Einstein's solution was to discard the notion of an aether and the absolute state of rest. In relativity, any reference frame moving with uniform motion will observe the same laws of physics. In particular, the speed of light in vacuum is always measured to be c, even when measured by multiple systems that are moving at different (but constant) Reference frames, coordinates and the Lorentz transformation[edit] Relativity theory depends on "reference frames". The term reference frame as used here is an observational perspective in space which is not undergoing any change in motion (acceleration), from which a position can be measured along 3 spatial axes. In addition, a reference frame has the ability to determine measurements of the time of events using a 'clock' (any reference device with uniform An event is an occurrence that can be assigned a single unique time and location in space relative to a reference frame: it is a "point" in spacetime. Since the speed of light is constant in relativity in each and every reference frame, pulses of light can be used to unambiguously measure distances and refer back the times that events occurred to the clock, even though light takes time to reach the clock after the event has transpired. For example, the explosion of a firecracker may be considered to be an "event". We can completely specify an event by its four spacetime coordinates: The time of occurrence and its 3-dimensional spatial location define a reference point. Let's call this reference frame S. In relativity theory we often want to calculate the position of a point from a different reference point. Suppose we have a second reference frame S′, whose spatial axes and clock exactly coincide with that of S at time zero, but it is moving at a constant velocity v with respect to S along the x-axis. Since there is no absolute reference frame in relativity theory, a concept of 'moving' doesn't strictly exist, as everything is always moving with respect to some other reference frame. Instead, any two frames that move at the same speed in the same direction are said to be comoving. Therefore S and S′ are not comoving. Define the event to have spacetime coordinates (t,x,y,z) in system S and (t′,x′,y′,z′) in S′. Then the Lorentz transformation specifies that these coordinates are related in the following way: \begin{align} t' &= \gamma \ (t - vx/c^2) \\ x' &= \gamma \ (x - v t) \\ y' &= y \\ z' &= z , \end{align} $\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$ is the Lorentz factor and c is the speed of light in vacuum, and the velocity v of S′ is parallel to the x-axis. The y and z coordinates are unaffected; only the x and t coordinates are transformed. These Lorentz transformations form a one-parameter group of linear mappings, that parameter being called rapidity. There is nothing special about the x-axis, the transformation can apply to the y or z axes, or indeed in any direction, which can be done by directions parallel to the motion (which are warped by the γ factor) and perpendicular; see main article for details. A quantity invariant under Lorentz transformations is known as a Lorentz scalar. Writing the Lorentz transformation and its inverse in terms of coordinate differences, where for instance one event has coordinates (x[1], t[1]) and (x′[1], t′[1]), another event has coordinates (x [2], t[2]) and (x′[2], t′[2]), and the differences are defined as $\begin{array}{ll} \Delta x' = x'_2-x'_1 \ , & \Delta x = x_2-x_1 \ , \\ \Delta t' = t'_2-t'_1 \ , & \Delta t = t_2-t_1 \ , \\ \end{array}$ we get $\begin{array}{ll} \Delta x' = \gamma \ (\Delta x - v \,\Delta t) \ , & \Delta x = \gamma \ (\Delta x' + v \,\Delta t') \ , \\ \Delta t' = \gamma \ \left(\Delta t - \dfrac{v \,\Delta x}{c^{2}} \ right) \ , & \Delta t = \gamma \ \left(\Delta t' + \dfrac{v \,\Delta x'}{c^{2}} \right) \ . \\ \end{array}$ These effects are not merely appearances; they are explicitly related to our way of measuring time intervals between events which occur at the same place in a given coordinate system (called "co-local" events). These time intervals will be different in another coordinate system moving with respect to the first, unless the events are also simultaneous. Similarly, these effects also relate to our measured distances between separated but simultaneous events in a given coordinate system of choice. If these events are not co-local, but are separated by distance (space), they will not occur at the same spatial distance from each other when seen from another moving coordinate system. However, the spacetime interval will be the same for all observers. The underlying reality remains the same. Only our perspective changes. Consequences derived from the Lorentz transformation[edit] The consequences of special relativity can be derived from the Lorentz transformation equations.^[18] These transformations, and hence special relativity, lead to different physical predictions than those of Newtonian mechanics when relative velocities become comparable to the speed of light. The speed of light is so much larger than anything humans encounter that some of the effects predicted by relativity are initially counterintuitive. Relativity of simultaneity[edit] Two events happening in two different locations that occur simultaneously in the reference frame of one inertial observer, may occur non-simultaneously in the reference frame of another inertial observer (lack of absolute simultaneity). From the first equation of the Lorentz transformation in terms of coordinate differences $\Delta t' = \gamma \left(\Delta t - \frac{v \,\Delta x}{c^{2}} \right)$ it is clear that two events that are simultaneous in frame S (satisfying Δt = 0), are not necessarily simultaneous in another inertial frame S′ (satisfying Δt′ = 0). Only if these events are co-local in frame S (satisfying Δx = 0), will they be simultaneous in another frame S′. Time dilation[edit] The time lapse between two events is not invariant from one observer to another, but is dependent on the relative speeds of the observers' reference frames (e.g., the twin paradox which concerns a twin who flies off in a spaceship traveling near the speed of light and returns to discover that his or her twin sibling has aged much more). Suppose a clock is at rest in the unprimed system S. Two different ticks of this clock are then characterized by Δx = 0. To find the relation between the times between these ticks as measured in both systems, the first equation can be used to find: $\Delta t' = \gamma\, \Delta t$ for events satisfying $\Delta x = 0 \ .$ This shows that the time (Δt') between the two ticks as seen in the frame in which the clock is moving (S′), is longer than the time (Δt) between these ticks as measured in the rest frame of the clock (S). Time dilation explains a number of physical phenomena; for example, the decay rate of muons produced by cosmic rays impinging on the Earth's atmosphere.^[19] Length contraction[edit] The dimensions (e.g., length) of an object as measured by one observer may be smaller than the results of measurements of the same object made by another observer (e.g., the ladder paradox involves a long ladder traveling near the speed of light and being contained within a smaller garage). Similarly, suppose a measuring rod is at rest and aligned along the x-axis in the unprimed system S. In this system, the length of this rod is written as Δx. To measure the length of this rod in the system S′, in which the clock is moving, the distances x′ to the end points of the rod must be measured simultaneously in that system S′. In other words, the measurement is characterized by Δt′ = 0, which can be combined with the fourth equation to find the relation between the lengths Δx and Δx′: $\Delta x' = \frac{\Delta x}{\gamma}$ for events satisfying $\Delta t' = 0 \ .$ This shows that the length (Δx′) of the rod as measured in the frame in which it is moving (S′), is shorter than its length (Δx) in its own rest frame (S). Composition of velocities[edit] Velocities (speeds) do not simply add. If the observer in S measures an object moving along the x axis at velocity u, then the observer in the S′ system, a frame of reference moving at velocity v in the x direction with respect to S, will measure the object moving with velocity u′ where (from the Lorentz transformations above): $u'=\frac{dx'}{dt'}=\frac{\gamma \ (dx-v dt)}{\gamma \ (dt-v dx/c^2)}=\frac{(dx/dt)-v}{1-(v/c^2)(dx/dt)}=\frac{u-v}{1-uv/c^2} \ .$ The other frame S will measure: $u=\frac{dx}{dt}=\frac{\gamma \ (dx'+v dt')}{\gamma \ (dt'+v dx'/c^2)}=\frac{(dx'/dt')+v}{1+(v/c^2)(dx'/dt')}=\frac{u'+v}{1+u'v/c^2} \ .$ Notice that if the object were moving at the speed of light in the S system (i.e. u = c), then it would also be moving at the speed of light in the S′ system. Also, if both u and v are small with respect to the speed of light, we will recover the intuitive Galilean transformation of velocities $u' \approx u-v \ .$ The usual example given is that of a train (frame S′ above) traveling due east with a velocity v with respect to the tracks (frame S). A child inside the train throws a baseball due east with a velocity u′ with respect to the train. In classical physics, an observer at rest on the tracks will measure the velocity of the baseball (due east) as u = u′ + v, while in special relativity this is no longer true; instead the velocity of the baseball (due east) is given by the second equation: u = (u′ + v)/(1 + u′v/c^2). Again, there is nothing special about the x or east directions. This formalism applies to any direction by considering parallel and perpendicular motion to the direction of relative velocity v, see main article for details. Einstein's addition of colinear velocities is consistent with the Fizeau experiment which determined the speed of light in a fluid moving parallel to the light, but no experiment has ever tested the formula for the general case of non-parallel velocities.^[citation needed] Other consequences[edit] Thomas rotation[edit] The orientation of an object (i.e. the alignment of its axes with the observer's axes) may be different for different observers. Unlike other relativistic effects, this effect becomes quite significant at fairly low velocities as can be seen in the spin of moving particles. Equivalence of mass and energy[edit] As an object's speed approaches the speed of light from an observer's point of view, its relativistic mass increases thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The energy content of an object at rest with mass m equals mc^2. Conservation of energy implies that, in any reaction, a decrease of the sum of the masses of particles must be accompanied by an increase in kinetic energies of the particles after the reaction. Similarly, the mass of an object can be increased by taking in kinetic energies. In addition to the papers referenced above—which give derivations of the Lorentz transformation and describe the foundations of special relativity—Einstein also wrote at least four papers giving heuristic arguments for the equivalence (and transmutability) of mass and energy, for E = mc^2. Mass–energy equivalence is a consequence of special relativity. The energy and momentum, which are separate in Newtonian mechanics, form a four-vector in relativity, and this relates the time component (the energy) to the space components (the momentum) in a nontrivial way. For an object at rest, the energy–momentum four-vector is (E, 0, 0, 0): it has a time component which is the energy, and three space components which are zero. By changing frames with a Lorentz transformation in the x direction with a small value of the velocity v, the energy momentum four-vector becomes (E, Ev/c^ 2, 0, 0). The momentum is equal to the energy multiplied by the velocity divided by c^2. As such, the Newtonian mass of an object, which is the ratio of the momentum to the velocity for slow velocities, is equal to E/c^2. The energy and momentum are properties of matter and radiation, and it is impossible to deduce that they form a four-vector just from the two basic postulates of special relativity by themselves, because these don't talk about matter or radiation, they only talk about space and time. The derivation therefore requires some additional physical reasoning. In his 1905 paper, Einstein used the additional principles that Newtonian mechanics should hold for slow velocities, so that there is one energy scalar and one three-vector momentum at slow velocities, and that the conservation law for energy and momentum is exactly true in relativity. Furthermore, he assumed that the energy of light is transformed by the same Doppler-shift factor as its frequency, which he had previously shown to be true based on Maxwell's equations.^[1] The first of Einstein's papers on this subject was "Does the Inertia of a Body Depend upon its Energy Content?" in 1905.^[20] Although Einstein's argument in this paper is nearly universally accepted by physicists as correct, even self-evident, many authors over the years have suggested that it is wrong.^[21] Other authors suggest that the argument was merely inconclusive because it relied on some implicit assumptions.^[22] Einstein acknowledged the controversy over his derivation in his 1907 survey paper on special relativity. There he notes that it is problematic to rely on Maxwell's equations for the heuristic mass–energy argument. The argument in his 1905 paper can be carried out with the emission of any massless particles, but the Maxwell equations are implicitly used to make it obvious that the emission of light in particular can be achieved only by doing work. To emit electromagnetic waves, all you have to do is shake a charged particle, and this is clearly doing work, so that the emission is of How far can one travel from the Earth?[edit] Since one can not travel faster than light, one might conclude that a human can never travel further from Earth than 40 light years if the traveler is active between the age of 20 and 60. One would easily think that a traveler would never be able to reach more than the very few solar systems which exist within the limit of 20–40 light years from the earth. But that would be a mistaken conclusion. Because of time dilation, a hypothetical spaceship can travel thousands of light years during the pilot's 40 active years. If a spaceship could be built that accelerates at a constant 1g, it will after a little less than a year be traveling at almost the speed of light as seen from Earth. Time dilation will increase his life span as seen from the reference system of the Earth, but his lifespan measured by a clock traveling with him will not thereby change. During his journey, people on Earth will experience more time than he does. A 5 year round trip for him will take 6½ Earth years and cover a distance of over 6 light-years. A 20 year round trip for him (5 years accelerating, 5 decelerating, twice each) will land him back on Earth having traveled for 335 Earth years and a distance of 331 light years.^[25] A full 40 year trip at 1 g will appear on Earth to last 58,000 years and cover a distance of 55,000 light years. A 40 year trip at 1.1 g will take 148,000 Earth years and cover about 140,000 light years. A one-way 28 year (14 years accelerating, 14 decelerating as measured with the cosmonaut's clock) trip at 1 g acceleration could reach 2,000,000 light-years to the Andromeda Galaxy.^[26] This same time dilation is why a muon traveling close to c is observed to travel much further than c times its half-life (when at rest).^[27] Causality and prohibition of motion faster than light[edit] In diagram 2 the interval AB is 'time-like'; i.e., there is a frame of reference in which events A and B occur at the same location in space, separated only by occurring at different times. If A precedes B in that frame, then A precedes B in all frames. It is hypothetically possible for matter (or information) to travel from A to B, so there can be a causal relationship (with A the cause and B the effect). The interval AC in the diagram is 'space-like'; i.e., there is a frame of reference in which events A and C occur simultaneously, separated only in space. There are also frames in which A precedes C (as shown) and frames in which C precedes A. If it were possible for a cause-and-effect relationship to exist between events A and C, then paradoxes of causality would result. For example, if A was the cause, and C the effect, then there would be frames of reference in which the effect preceded the cause. Although this in itself won't give rise to a paradox, one can show^[28]^[29] that faster than light signals can be sent back into one's own past. A causal paradox can then be constructed by sending the signal if and only if no signal was received previously. Therefore, if causality is to be preserved, one of the consequences of special relativity is that no information signal or material object can travel faster than light in vacuum. However, some "things" can still move faster than light. For example, the location where the beam of a search light hits the bottom of a cloud can move faster than light when the search light is turned rapidly.^[ Even without considerations of causality, there are other strong reasons why faster-than-light travel is forbidden by special relativity. For example, if a constant force is applied to an object for a limitless amount of time, then integrating F = dp/dt gives a momentum that grows without bound, but this is simply because $p = m \gamma v$ approaches infinity as $v$ approaches c. To an observer who is not accelerating, it appears as though the object's inertia is increasing, so as to produce a smaller acceleration in response to the same force. This behavior is observed in particle accelerators, where each charged particle is accelerated by the electromagnetic force. Theoretical and experimental tunneling studies carried out by Günter Nimtz and Petrissa Eckle claimed that under special conditions signals may travel faster than light.^[31]^[32]^[33]^[34] It was measured that fiber digital signals were traveling up to 5 times c and a zero-time tunneling electron carried the information that the atom is ionized, with photons, phonons and electrons spending zero time in the tunneling barrier. According to Nimtz and Eckle, in this superluminal process only the Einstein causality and the special relativity but not the primitive causality are violated: Superluminal propagation does not result in any kind of time travel.^[35]^[36] Several scientists have stated not only that Nimtz' interpretations were erroneous, but also that the experiment actually provided a trivial experimental confirmation of the special relativity theory.^[37]^[38]^[39] Geometry of spacetime[edit] Comparison between flat Euclidean space and Minkowski space[edit] Special relativity uses a 'flat' 4-dimensional Minkowski space – an example of a spacetime. Minkowski spacetime appears to be very similar to the standard 3-dimensional Euclidean space, but there is a crucial difference with respect to time. In 3D space, the differential of distance (line element) ds is defined by $ds^2 = d\mathbf{x} \cdot d\mathbf{x} = dx_1^2 + dx_2^2 + dx_3^2,$ where dx = (dx[1], dx[2], dx[3]) are the differentials of the three spatial dimensions. In Minkowski geometry, there is an extra dimension with coordinate X^0 derived from time, such that the distance differential fulfills $ds^2 = -dX_0^2 + dX_1^2 + dX_2^2 + dX_3^2,$ where dX = (dX[0], dX[1], dX[2], dX[3]) are the differentials of the four spacetime dimensions. This suggests a deep theoretical insight: special relativity is simply a rotational symmetry of our spacetime, analogous to the rotational symmetry of Euclidean space (see image right).^[41] Just as Euclidean space uses a Euclidean metric, so spacetime uses a Minkowski metric. Basically, special relativity can be stated as the invariance of any spacetime interval (that is the 4D distance between any two events) when viewed from any inertial reference frame. All equations and effects of special relativity can be derived from this rotational symmetry (the Poincaré group) of Minkowski spacetime. The actual form of ds above depends on the metric and on the choices for the X^0 coordinate. To make the time coordinate look like the space coordinates, it can be treated as imaginary: X[0] = ict (this is called a Wick rotation). According to Misner, Thorne and Wheeler (1971, §2.3), ultimately the deeper understanding of both special and general relativity will come from the study of the Minkowski metric (described below) and to take X^0 = ct, rather than a "disguised" Euclidean metric using ict as the time coordinate. Some authors use X^0 = t, with factors of c elsewhere to compensate; for instance, spatial coordinates are divided by c or factors of c^±2 are included in the metric tensor.^[42] These numerous conventions can be superseded by using natural units where c = 1. Then space and time have equivalent units, and no factors of c appear anywhere. 3D spacetime[edit] If we reduce the spatial dimensions to 2, so that we can represent the physics in a 3D space $ds^2 = dx_1^2 + dx_2^2 - c^2 dt^2,$ we see that the null geodesics lie along a dual-cone (see image right) defined by the equation; $ds^2 = 0 = dx_1^2 + dx_2^2 - c^2 dt^2$ or simply $dx_1^2 + dx_2^2 = c^2 dt^2,$ which is the equation of a circle of radius cdt. 4D spacetime[edit] If we extend this to three spatial dimensions, the null geodesics are the 4-dimensional cone: $ds^2 = 0 = dx_1^2 + dx_2^2 + dx_3^2 - c^2 dt^2$ $dx_1^2 + dx_2^2 + dx_3^2 = c^2 dt^2.$ This null dual-cone represents the "line of sight" of a point in space. That is, when we look at the stars and say "The light from that star which I am receiving is X years old", we are looking down this line of sight: a null geodesic. We are looking at an event a distance $d = \sqrt{x_1^2+x_2^2+x_3^2}$ away and a time d/c in the past. For this reason the null dual cone is also known as the 'light cone'. (The point in the lower left of the picture below represents the star, the origin represents the observer, and the line represents the null geodesic "line of sight".) The cone in the −t region is the information that the point is 'receiving', while the cone in the +t section is the information that the point is 'sending'. The geometry of Minkowski space can be depicted using Minkowski diagrams, which are useful also in understanding many of the thought-experiments in special relativity. Note that, in 4d spacetime, the concept of the center of mass becomes more complicated, see center of mass (relativistic). Physics in spacetime[edit] Transformations of physical quantities between reference frames[edit] Above, the Lorentz transformation for the time coordinate and three space coordinates illustrates that they are intertwined. This is true more generally: certain pairs of "timelike" and "spacelike" quantities naturally combine on equal footing under the same Lorentz transformation. The Lorentz transformation in standard configuration above, i.e. for a boost in the x direction, can be recast into matrix form as follows: $\begin{pmatrix} ct'\\ x'\\ y'\\ z' \end{pmatrix} = \begin{pmatrix} \gamma & -\beta\gamma & 0 & 0\\ -\beta\gamma & \gamma & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} ct\ \ x\\ y\\ z \end{pmatrix} = \begin{pmatrix} \gamma ct- \gamma\beta x\\ \gamma x - \beta \gamma ct \\ y\\ z \end{pmatrix}.$ In Newtonian mechanics, quantities which have magnitude and direction are mathematically described as 3d vectors in Euclidean space, and in general they are parametrized by time. In special relativity, this notion is extended by adding the appropriate timelike quantity to a spacelike vector quantity, and we have 4d vectors, or "four vectors", in Minkowski spacetime. The components of vectors are written using tensor index notation, as this has numerous advantages. The notation makes it clear the equations are manifestly covariant under the Poincaré group, thus bypassing the tedious calculations to check this fact. In constructing such equations, we often find that equations previously thought to be unrelated are, in fact, closely connected being part of the same tensor equation. Recognizing other physical quantities as tensors simplifies their transformation laws. Throughout, upper indices (superscripts) are contravariant indices rather than exponents except when they indicate a square (this is should be clear from the context), and lower indices (subscripts) are covariant indices. For simplicity and consistency with the earlier equations, Cartesian coordinates will be used. The simplest example of a four-vector is the position of an event in spacetime, which constitutes a timelike component ct and spacelike component x = (x, y, z), in a contravariant position four vector with components: $X^u = (X^0, X^1, X^2, X^3)= (ct, x, y, z).$ where we define X^0 = ct so that the time coordinate has the same dimension of distance as the other spatial dimensions; so that space and time are treated equally.^[43]^[44]^[45] Now the transformation of the contravariant components of the position 4-vector can be compactly written as: $X^{\mu'}=\Lambda^{\mu'}{}_u X^u$ where there is an implied summation on ν from 0 to 3, and $\Lambda^{\mu'}{}_{u}$ is a matrix. More generally, all contravariant components of a four-vector $T^u$ transform from one frame to another frame by a Lorentz transformation: $T^{\mu'} = \Lambda^{\mu'}{}_{u} T^u$ Examples of other 4-vectors include the four-velocity U^μ, defined as the derivative of the position 4-vector with respect to proper time: $U^\mu = \frac{dX^\mu}{d\tau} = \gamma(v)( c , v_x , v_y, v_z ) .$ where the Lorentz factor is: $\gamma(v)= \frac{1}{\sqrt{1- (v/c)^2}} \,,\quad v^2 = v_x^2 + v_y^2 + v_z^2 \,.$ The relativistic energy $E = \gamma(v)mc^2$ and relativistic momentum $\mathbf{p} = \gamma(v)m \mathbf{v}$ of an object are respectively the timelike and spacelike components of a covariant four momentum vector: $P_u = m U_u = m\gamma(v)(c,v_x,v_y,v_z)= (E/c,p_x,p_y,p_z).$ where m is the invariant mass. The four-acceleration is the proper time derivative of 4-velocity: $A^\mu = \frac{d U^\mu}{d\tau} \,.$ The transformation rules for three-dimensional velocities and accelerations are very awkward; even above in standard configuration the velocity equations are quite complicated owing to their non-linearity. On the other hand, the transformation of four-velocity and four-acceleration are simpler by means of the Lorentz transformation matrix. The four-gradient of a scalar field φ transforms covariantly rather than contravariantly: $\begin{pmatrix} \frac{1}{c}\frac{\partial \phi}{\partial t'} & \frac{\partial \phi}{\partial x'} & \frac{\partial \phi}{\partial y'} & \frac{\partial \phi}{\partial z'}\end{pmatrix} = \begin {pmatrix} \frac{1}{c}\frac{\partial \phi}{\partial t} & \frac{\partial \phi}{\partial x} & \frac{\partial \phi}{\partial y} & \frac{\partial \phi}{\partial z}\end{pmatrix}\begin{pmatrix} \gamma & -\beta\gamma & 0 & 0\\ -\beta\gamma & \gamma & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix} \,.$ that is: $(\partial_{\mu'} \phi) = \Lambda_{\mu'}{}^{u} (\partial_u \phi)\,,\quad \partial_{\mu} \equiv \frac{\partial}{\partial x^{\mu}}\,.$ only in Cartesian coordinates. It's the covariant derivative which transforms in manifest covariance, in Cartesian coordinates this happens to reduce to the partial derivatives, but not in other More generally, the covariant components of a 4-vector transform according to the inverse Lorentz transformation: $\Lambda_{\mu'}{}^{u} T^{\mu'} = T^u$ where $\Lambda_{\mu'}{}^{u}$ is the reciprocal matrix of $\Lambda^{\mu'}{}_{u}$. The postulates of special relativity constrain the exact form the Lorentz transformation matrices take. More generally, most physical quantities are best described as (components of) tensors. So to transform from one frame to another, we use the well-known tensor transformation law^[46] $T^{\alpha' \beta' \cdots \zeta'}_{\theta' \iota' \cdots \kappa'} = \Lambda^{\alpha'}{}_{\mu} \Lambda^{\beta'}{}_{u} \cdots \Lambda^{\zeta'}{}_{\rho} \Lambda_{\theta'}{}^{\sigma} \Lambda_{\iota'} {}^{\upsilon} \cdots \Lambda_{\kappa'}{}^{\phi} T^{\mu u \cdots \rho}_{\sigma \upsilon \cdots \phi}$ where $\Lambda_{\chi'}{}^{\psi}$ is the reciprocal matrix of $\Lambda^{\chi'}{}_{\psi}$. All tensors transform by this rule. An example of a four dimensional second order antisymmetric tensor is the relativistic angular momentum, which has six components: three are the classical angular momentum, and the other three are related to the boost of the center of mass of the system. The derivative of the relativistic angular momentum with respect to proper time is the relativistic torque, also second order antisymmetric The electromagnetic field tensor is another second order antisymmetric tensor field, with six components: three for the electric field and another three for the magnetic field. There is also the stress–energy tensor for the electromagnetic field, namely the electromagnetic stress–energy tensor. The metric tensor allows one to define the inner product of two vectors, which in turn allows one to assign a magnitude to the vector. Given the four-dimensional nature of spacetime the Minkowski metric η has components (valid in any inertial reference frame) which can be arranged in a 4 × 4 matrix: $\eta_{\alpha\beta} = \begin{pmatrix} -1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}$ which is equal to its reciprocal, $\eta^{\alpha\beta}$, in those frames. Throughout we use the signs as above, different authors use different conventions – see Minkowski metric alternative signs. The Poincaré group is the most general group of transformations which preserves the Minkowski metric: $\eta_{\alpha\beta} = \eta_{\mu'u'} \Lambda^{\mu'}{}_\alpha \Lambda^{u'}{}_\beta \!$ and this is the physical symmetry underlying special relativity. The metric can be used for raising and lowering indices on vectors and tensors. Invariants can be constructed using the metric, the inner product of a 4-vector T with another 4-vector S is: $T^{\alpha}S_{\alpha}=T^{\alpha}\eta_{\alpha\beta}S^{\beta} = T_{\alpha}\eta^{\alpha\beta}S_{\beta} = \text{invariant scalar}$ Invariant means that it takes the same value in all inertial frames, because it is a scalar (0 rank tensor), and so no Λ appears in its trivial transformation. The magnitude of the 4-vector T is the positive square root of the inner product with itself: $|\mathbf{T}| = \sqrt{T^{\alpha}T_{\alpha}}$ One can extend this idea to tensors of higher order, for a second order tensor we can form the invariants: $T^{\alpha}{}_{\alpha}\,,T^{\alpha}{}_{\beta}T^{\beta}{}_{\alpha}\,,T^{\alpha}{}_{\beta}T^{\beta}{}_{\gamma}T^{\gamma}{}_{\alpha} = \text{invariant scalars}\,,$ similarly for higher order tensors. Invariant expressions, particularly inner products of 4-vectors with themselves, provide equations that are useful for calculations, because one doesn't need to perform Lorentz transformations to determine the invariants. Relativistic kinematics and invariance[edit] The coordinate differentials transform also contravariantly: $dX^{\mu'}=\Lambda^{\mu'}{}_u dX^u$ so the squared length of the differential of the position four-vector dX^μ constructed using $d\mathbf{X}^2 = dX^\mu \,dX_\mu = \eta_{\muu}\,dX^\mu \,dX^u = -(c dt)^2+(dx)^2+(dy)^2+(dz)^2\,$ is an invariant. Notice that when the line element dX^2 is negative that √−dX^2 is the differential of proper time, while when dX^2 is positive, √dX^2 is differential of the proper distance. The 4-velocity U^μ has an invariant form: ${\mathbf U}^2 = \eta_{u\mu} U^u U^\mu = -c^2 \,,$ which means all velocity four-vectors have a magnitude of c. This is an expression of the fact that there is no such thing as being at coordinate rest in relativity: at the least, you are always moving forward through time. Differentiating the above equation by τ produces: $2\eta_{\muu}A^\mu U^u = 0.$ So in special relativity, the acceleration four-vector and the velocity four-vector are orthogonal. Relativistic dynamics and invariance[edit] The invariant magnitude of the momentum 4-vector generates the energy–momentum relation: $\mathbf{P}^2 = \eta^{\muu}P_\mu P_u = -(E/c)^2 + p^2 .$ We can work out what this invariant is by first arguing that, since it is a scalar, it doesn't matter which reference frame we calculate it, and then by transforming to a frame where the total momentum is zero. $\mathbf{P}^2 = - (E_\mathrm{rest}/c)^2 = - (m c)^2 .$ We see that the rest energy is an independent invariant. A rest energy can be calculated even for particles and systems in motion, by translating to a frame in which momentum is zero. The rest energy is related to the mass according to the celebrated equation discussed above: $E_\mathrm{rest} = m c^2.$ Note that the mass of systems measured in their center of momentum frame (where total momentum is zero) is given by the total energy of the system in this frame. It may not be equal to the sum of individual system masses measured in other frames. To use Newton's third law of motion, both forces must be defined as the rate of change of momentum with respect to the same time coordinate. That is, it requires the 3D force defined above. Unfortunately, there is no tensor in 4D which contains the components of the 3D force vector among its components. If a particle is not traveling at c, one can transform the 3D force from the particle's co-moving reference frame into the observer's reference frame. This yields a 4-vector called the four-force. It is the rate of change of the above energy momentum four-vector with respect to proper time. The covariant version of the four-force is: $F_u = \frac{d P_{u}}{d \tau} = m A_u$ In the rest frame of the object, the time component of the four force is zero unless the "invariant mass" of the object is changing (this requires a non-closed system in which energy/mass is being directly added or removed from the object) in which case it is the negative of that rate of change of mass, times c. In general, though, the components of the four force are not equal to the components of the three-force, because the three force is defined by the rate of change of momentum with respect to coordinate time, i.e. dp/dt while the four force is defined by the rate of change of momentum with respect to proper time, i.e. dp/dτ. In a continuous medium, the 3D density of force combines with the density of power to form a covariant 4-vector. The spatial part is the result of dividing the force on a small cell (in 3-space) by the volume of that cell. The time component is −1/c times the power transferred to that cell divided by the volume of the cell. This will be used below in the section on electromagnetism. Relativity and unifying electromagnetism[edit] Theoretical investigation in classical electromagnetism led to the discovery of wave propagation. Equations generalizing the electromagnetic effects found that finite propagation speed of the E and B fields required certain behaviors on charged particles. The general study of moving charges forms the Liénard–Wiechert potential, which is a step towards special relativity. The Lorentz transformation of the electric field of a moving charge into a non-moving observer's reference frame results in the appearance of a mathematical term commonly called the magnetic field. Conversely, the magnetic field generated by a moving charge disappears and becomes a purely electrostatic field in a comoving frame of reference. Maxwell's equations are thus simply an empirical fit to special relativistic effects in a classical model of the Universe. As electric and magnetic fields are reference frame dependent and thus intertwined, one speaks of electromagnetic fields. Special relativity provides the transformation rules for how an electromagnetic field in one inertial frame appears in another inertial frame. Maxwell's equations in the 3D form are already consistent with the physical content of special relativity, although they are easier to manipulate in a manifestly covariant form, i.e. in the language of tensor calculus.^[47] See main links for more detail. Special relativity in its Minkowski spacetime is accurate only when the absolute value of the gravitational potential is much less than c^2 in the region of interest.^[48] In a strong gravitational field, one must use general relativity. General relativity becomes special relativity at the limit of weak field. At very small scales, such as at the Planck length and below, quantum effects must be taken into consideration resulting in quantum gravity. However, at macroscopic scales and in the absence of strong gravitational fields, special relativity is experimentally tested to extremely high degree of accuracy (10^−20)^[49] and thus accepted by the physics community. Experimental results which appear to contradict it are not reproducible and are thus widely believed to be due to experimental errors. Special relativity is mathematically self-consistent, and it is an organic part of all modern physical theories, most notably quantum field theory, string theory, and general relativity (in the limiting case of negligible gravitational fields). Newtonian mechanics mathematically follows from special relativity at small velocities (compared to the speed of light) – thus Newtonian mechanics can be considered as a special relativity of slow moving bodies. See classical mechanics for a more detailed discussion. Several experiments predating Einstein's 1905 paper are now interpreted as evidence for relativity. Of these it is known Einstein was aware of the Fizeau experiment before 1905,^[50] and historians have concluded that Einstein was at least aware of the Michelson–Morley experiment as early as 1899 despite claims he made in his later years that it played no role in his development of the theory.^ • The Fizeau experiment (1851, repeated by Michelson and Morley in 1886) measured the speed of light in moving media, with results that are consistent with relativistic addition of colinear • The famous Michelson–Morley experiment (1881, 1887) gave further support to the postulate that detecting an absolute reference velocity was not achievable. It should be stated here that, contrary to many alternative claims, it said little about the invariance of the speed of light with respect to the source and observer's velocity, as both source and observer were travelling together at the same velocity at all times. • The Trouton–Noble experiment (1903) showed that the torque on a capacitor is independent of position and inertial reference frame. • The Experiments of Rayleigh and Brace (1902, 1904) showed that length contraction doesn't lead to birefringence for a co-moving observer, in accordance with the relativity principle. Particle accelerators routinely accelerate and measure the properties of particles moving at near the speed of light, where their behavior is completely consistent with relativity theory and inconsistent with the earlier Newtonian mechanics. These machines would simply not work if they were not engineered according to relativistic principles. In addition, a considerable number of modern experiments have been conducted to test special relativity. Some examples: Theories of relativity and quantum mechanics[edit] Special relativity can be combined with quantum mechanics to form relativistic quantum mechanics. It is an unsolved problem in physics how general relativity and quantum mechanics can be unified; quantum gravity and a "theory of everything", which require such a unification, are active and ongoing areas in theoretical research. The early Bohr–Sommerfeld atomic model explained the fine structure of alkali metal atoms using both special relativity and the preliminary knowledge on quantum mechanics of the time.^[51] In 1928, Paul Dirac constructed an influential relativistic wave equation, now known as the Dirac equation in his honour,^[52] that is fully compatible both with special relativity and with the final version of quantum theory existing after 1926. This equation explained not only the intrinsic angular momentum of the electrons called spin, it also led to the prediction of the antiparticle of the electron (the positron),^[52]^[53] and fine structure could only be fully explained with special relativity. It was the first foundation of relativistic quantum mechanics. In non-relativistic quantum mechanics, spin is phenomenological and cannot be explained. On the other hand, the existence of antiparticles leads to the conclusion that relativistic quantum mechanics is not enough for a more accurate and complete theory of particle interactions. Instead, a theory of particles interpreted as quantized fields, called quantum field theory, becomes necessary; in which particles can be created and destroyed throughout space and time. See also[edit] • Einstein, Albert (1920). Relativity: The Special and General Theory. • Einstein, Albert (1996). The Meaning of Relativity. Fine Communications. ISBN 1-56731-136-9 • Logunov, Anatoly A. (2005) Henri Poincaré and the Relativity Theory (transl. from Russian by G. Pontocorvo and V. O. Soleviev, edited by V. A. Petrov) Nauka, Moscow. • Charles Misner, Kip Thorne, and John Archibald Wheeler (1971) Gravitation. W. H. Freeman & Co. ISBN 0-7167-0334-3 • Post, E.J., 1997 (1962) Formal Structure of Electromagnetics: General Covariance and Electromagnetics. Dover Publications. • Wolfgang Rindler (1991). Introduction to Special Relativity (2nd ed.), Oxford University Press. ISBN 978-0-19-853952-0; ISBN 0-19-853952-5 • Harvey R. Brown (2005). Physical relativity: space–time structure from a dynamical perspective, Oxford University Press, ISBN 0-19-927583-1; ISBN 978-0-19-927583-0 • Qadir, Asghar (1989). Relativity: An Introduction to the Special Theory. Singapore: World Scientific Publications. p. 128. ISBN 9971-5-0612-2. • Silberstein, Ludwik (1914) The Theory of Relativity. • Lawrence Sklar (1977). Space, Time and Spacetime. University of California Press. ISBN 0-520-03174-1. • Lawrence Sklar (1992). Philosophy of Physics. Westview Press. ISBN 0-8133-0625-6. • Taylor, Edwin, and John Archibald Wheeler (1992) Spacetime Physics (2nd ed.). W.H. Freeman & Co. ISBN 0-7167-2327-1 • Tipler, Paul, and Llewellyn, Ralph (2002). Modern Physics (4th ed.). W. H. Freeman & Co. ISBN 0-7167-4345-0 Journal articles[edit] • Alvager, et al.; Farley, F. J. M.; Kjellman, J.; Wallin, L. (1964). "Test of the Second Postulate of Special Relativity in the GeV region". Physics Letters 12 (3): 260. Bibcode: 1964PhL....12..260A. doi:10.1016/0031-9163(64)91095-9. • Darrigol, Olivier (2004). "The Mystery of the Poincaré–Einstein Connection". Isis 95 (4): 614–26. doi:10.1086/430652. PMID 16011297. • Wolf, Peter; Petit, Gerard (1997). "Satellite test of Special Relativity using the Global Positioning System". Physical Review A 56 (6): 4405–09. Bibcode:1997PhRvA..56.4405W. doi:10.1103/ External links[edit] Original works[edit] Special relativity for a general audience (no mathematical knowledge required)[edit] Special relativity explained (using simple or more advanced mathematics)[edit]
{"url":"http://blekko.com/wiki/Special_relativity?source=672620ff","timestamp":"2014-04-18T12:02:11Z","content_type":null,"content_length":"251498","record_id":"<urn:uuid:21c64844-c9b1-4fd8-85f8-901ff86610c7>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Experiment of the Month A Lunar Eclipse and the Distance to the Moon The duration of the totality phase (when the Moon is red) can be used to estimate the distance from the Earth to the Moon. For the estimate described here, the important part is that you can do it yourself rather than the level of precision in the estimate. The major assumption in the estimate is that the Earth's dark shadow (technically called the "umbra") is the same diameter as the Earth, when the shadow falls on the Moon to cause the eclipse. Then we know that when the edge of the Moon travels from the right side of the shadow to the left, it has traveled a distance equal to the diameter of the Earth. In the figure the distance S is the distance along the Moon's orbit that lies in the Earth's shadow. In our approximation, S = (diameter of Earth) = 13,000km. (See Measuring Earth's Radius Along I-95 .) The ratio of S to the circumference of the Moon's orbit is S/(2 p R) where R is the radius of the Moon's orbit. This ratio is the same as the ratio of the time spent in Earth's shadow to the time to complete one orbit. (time for leading edge to cross shadow)/(28 days x 24hours/day) = S/(2 p R) We can solve for R: R=(13,000 km) x (28 days x 24hours/day)/(2px(time for leading edge to cross shadow)) This result is about twice as large as the measured distance to the Moon. The Sun is a disk, and its edges shine around the sides of the Earth, making the shadow smaller than in the drawing. The fully dark shadow of the Earth is only about half the Earth's diameter. Still, the idea that you can measure the radius of the Earth for yourself, and estimate the distance to the Moon for yourself is important. Science is democratic in that way. Truth is what you can observe and deduce for yourself; not what an authority tells you.
{"url":"http://www.millersville.edu/physics/experiments/102/index.php","timestamp":"2014-04-16T16:28:23Z","content_type":null,"content_length":"19696","record_id":"<urn:uuid:e8017459-3a90-4cd3-8570-420a6e26c13e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
Mccullom Lake, IL ACT Tutor Find a Mccullom Lake, IL ACT Tutor ...My name is Carolyn, and I taught math and physics in Wisconsin for ten years. Currently, I am certified to teach math in both Wisconsin and Illinois, and I do substitute teaching at about ten different schools in both Illinois and Wisconsin. I hold a Bachelor of Arts degree in Math education from Alfred University In New York. 22 Subjects: including ACT Math, physics, statistics, geometry ...In 2011, I graduated from the University of Michigan (Ann Arbor) with a BA in Classical Civilization. As a part of my coursework, I have taken the equivalent of seven semesters of Latin, and I have taken six semesters of Classical Greek. Additionally, I gained a broad array of Latin teaching skills by taking a Latin teaching practicum class. 20 Subjects: including ACT Math, English, reading, writing ...A little bit about me: -Bachelor of Arts, Human Development from Prescott College, High Honors, 4.0 GPA -Associate in Arts, Associate in General Studies, College of DuPage, High Honors, 4.0 GPA -Qualities: patient, understanding, flexible, kind, easygoing, calm, helpful What do you need help with? Please send me a message. Talk to you soon! 26 Subjects: including ACT Math, Spanish, chemistry, geometry ...I have served as a volunteer assistant coach for my former high school girls' varsity basketball team for the past five years and have competed on intramural basketball teams at both the U of I and the U of M. All that being said, I would love the opportunity to work individually with students i... 13 Subjects: including ACT Math, English, algebra 1, algebra 2 ...Qualifications include a B.S. from the University of Wisconsin-Parkside in the Health Sciences and Physics minor, five year's experience in the health care field, and current Illinois Substitute Teaching Certification. My passion for helping students succeed has driven me into the classroom time... 24 Subjects: including ACT Math, chemistry, calculus, piano
{"url":"http://www.purplemath.com/Mccullom_Lake_IL_ACT_tutors.php","timestamp":"2014-04-20T09:12:24Z","content_type":null,"content_length":"24170","record_id":"<urn:uuid:24a27fda-cfe4-4b17-9ec5-f642ae4e37b3>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Relation between two different definitions of deficiency of a graph. up vote 1 down vote favorite From Lovasz's Matching Theory, Let $G$ be a bipartite graph with bipartition $(A, B)$. For $X \subset > A$, define $def(X) :=|X|-|\Gamma(X)|$, where $\Gamma(Χ)$ denote all points in $V(G)$ which are adjacent to at least one point of $X$. $def(G) := \max_{X \subset A} > def(X)$ will be called the $Α$-deficiency of $G$. If $A$ is understood, we shall simply call this number the deficiency of G. Let $G$ be a graph not necessarily bipartite. Define $def'(X) :=odd(G-X) - |X|$, and the deficiency of the graph is defined as $def'(G) := \max_{X \subset V(G)} def'(X)$. In a bipartite graph, these are two different concepts unfortunately with the same name in the book. From Tutte-Berge Formula, and THEOREM 1.3.1 in the book, which states The matching number of a bipartite graph G is $|A|-def(G)$. I derive that $def'(G) = |B| - |A| + 2 def(G).$ I wonder how to prove/explain this relation between the two deficiencies more directly based on their definitions? Is there some definite relation between $def(X)$ and $def'(X)$ for a subset $X$ of $A$? add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged graph-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/112058/relation-between-two-different-definitions-of-deficiency-of-a-graph","timestamp":"2014-04-17T08:00:19Z","content_type":null,"content_length":"45897","record_id":"<urn:uuid:0004cfd9-eee3-45ab-9aa7-bf58907f354c>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Klemperer Rosettes 48 earths, however, is not stable. 48 worlds of 2/3 the mass of earth each would be stable. (Here you see the worlds orbiting like normal, I'm not doing that one-display-per-orbit strobe effect.) It takes I think eight orbits to fall apart. I added some error in the z-direction below the precision of the orbit placements, making this not exactly symmetric nor planar. After this has fallen apart, try to imagine living on one of these worlds. A trip around the sun takes a year. The newspaper headlines would be really exciting, no?
{"url":"http://burtleburtle.net/bob/physics/kempler.html","timestamp":"2014-04-18T08:02:06Z","content_type":null,"content_length":"30885","record_id":"<urn:uuid:24793d97-d037-42b9-b2c0-749dc6d7fc58>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/xmoses1/medals","timestamp":"2014-04-21T08:01:16Z","content_type":null,"content_length":"128593","record_id":"<urn:uuid:1843f147-76db-47b3-aa97-8ad80065445b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Process Control: An Introduction to Statistics The heart and soul of statistical process control is the use of statistical analysis to characterize processes and properties. This characterization requires two fundamental measurements: a measure of central tendency and a measure of variability. Central Tendency First, we need to know where our values are centered so that we can make comparisons. The most commonly used measure of central tendency is the mean, or average, value. If we have n values, the average is defined as shown in Equation 1, where xave is the population mean, and n is the number of values in the population. Simply stated, we add up all the values in our population and divide by the number of values. Another measure of central tendency is the median value, or the value at the halfway point of our data. If we have 101 data points, and we rank them from lowest to highest, the median value is the 51st value. Fifty values will be lower, and fifty will be higher. If we have an even number of values, 100 for example, the median value is the average of the middle two values (50 and 51). In a true normal distribution, the mean and median will be equal. However, in an actual assemblage of data, the numbers will be similar, but not identical. As the number of values increases, the difference will decrease. The mode or modal value is the most commonly occurring value in the data group. It is the peak value in the frequency distribution of the data. Variability or Dispersion Dispersion is the measure of how our data are distributed about their center. We will generally have scatter or variability. If we have a lot of data, and we plot the frequency of occurrence of this data about the mean, we will get a frequency distribution. The most commonly occurring distribution in real life is the normal or Gaussian probability distribution, which is shown in Figure 1. In this distribution, the data are uniformly or symmetrically distributed about the mean, which is also the most common value. Be aware that not all distributions we will encounter in manufacturing are of this type. Mechanical strength data, for example, are often not symmetric but are skewed to one side. The graph shown in Figure 1 also allows us to introduce the statistical measures of dispersion, the prime one being standard deviation, i.e., a numerical measure of the spread or dispersion in our data. This deviation, which is often referred to as the “bell-shaped curve,” is defined as the point on either side where the curve changes from convex (looking from above) to concave. These points are indicated by the s and -s symbols in the figure. Two different types of standard deviation are the population standard deviation (s) and the sample standard deviation (S). Any scientific calculator or computer spreadsheet will calculate the standard deviation values very accurately in microseconds. Another measure of dispersion or variability is the range, defined as the highest value minus the lowest value. This is most useful for small data sets, where the calculated standard deviation is less meaningful from a mathematical standpoint. For large data sets, however, the range can be misleading, since unusually low or high values can give misleading information. A Simple Example Let’s suppose you want to compare the mechanical strength of two ceramic compositions processed through your production facility. We have produced a large number of samples of each composition and measured the strength of 25 samples randomly selected from each lot. By adding up all the values for composition A and dividing by 25, we find that it has an average strength of 35,300 psi (243 MPa). The average strength of composition B is found to be 40,100 psi (276 MPa). We have found that composition B is, on average, stronger than composition A. Let’s further suppose that we use a calculator to determine the sample standard deviation for each composition and get a value of 2118 psi (14.6 MPa) for A and 4411 psi (30.4 MPa) for B. We can now see that although composition B is stronger, it has quite a bit more variability. We can make some predictions if we assume the data follow the normal distribution shown in Figure 1 (we would want to test for this assumption). In Figure 1, we see some fractional numbers just above the horizontal axis. These give the proportion of values we would expect to find in each segment of a normally distributed database. For example, we would expect 34.1% of the data to found between the average value and the average value plus one standard deviation (+ 1s). We would expect 68.2% of our data to be between the average and ±1s. In our example, we would expect 68.2% of our strength values for composition A to fall in the range of 35,300 ± 2118 psi, or between 33,182 and 37,418. Similarly, we would expect 68.2% of the values for B to fall in the range of 40,100 ± 4411 psi, or between 35,689 and 44,511 psi. We can take this one step further and look at the proportion of data we would expect to find within ± 3s of the average. We can add up the fractional values for the 1s, 2s, and 3s segments on both sides of the average and predict that 99.87% of our data should fall within ±3s of the average value (the numbers in Figure 1 lack enough decimal places to get this number). For composition A, virtually all of our values should be between 35,300 ± 6354 psi (3 x 2118), or between 28,946 and 41,654 psi. For B, our values can be expected to lie within the range of 40,100 ± 13,233 psi, or between 26,867 and 53,333 psi. On the low strength end of each group, we can actually expect a few of the B composition samples to have lower strengths than the A group, and we could actually predict how many. If we had a critical application where a few lower strength pieces might be catastrophic, we might actually pick composition A over B even though it has lower average strength. Remember, we tested only 25 samples (it is a destructive test), but if we had produced 1000 pieces of each, we would predict only one value in each group to be outside the 3s ranges given above. These predictions are based on the assumption that we have normal distributions, which can be difficult to determine using only 25 pieces in each group. However, these analyses can still provide us with quite a bit of information about our compositions. Please be forewarned that statistical analysis in its fullest is complicated, abstract and fraught with many avenues of misapplication and misanalysis. Our series of articles will not make you enough of expert to apply statistical tools to the complex ceramic manufacturing processes; however, they should give you a basis for understanding this useful tool.
{"url":"http://www.ceramicindustry.com/articles/process-control-an-introduction-to-statistics","timestamp":"2014-04-19T02:04:15Z","content_type":null,"content_length":"61754","record_id":"<urn:uuid:2883e986-1957-4dd2-b2af-c137352a605a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
Asymptotic Relative Efficiency October 22nd 2009, 11:35 AM #1 Asymptotic Relative Efficiency Show that the asymptotic relative efficiency of the sign or Wilcoxon signed-rank test relative to the t-test is unaltered by a linear change of variables - i.e., the ARE for a density $f(x)$ is the same as that for a density $g(x) = af(ax + b)$ where $a > 0$. Not sure where to start here... Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/advanced-statistics/109700-asymptotic-relative-efficiency.html","timestamp":"2014-04-17T10:39:02Z","content_type":null,"content_length":"30114","record_id":"<urn:uuid:9df3d441-c61c-420a-8bdd-2654eb1b4826>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
pushmedia1: GRE It's done! I scored 780 on the math section and... well, actually I was so excited I forgot my verbal score. Dumb ass. I think it was in the low 600's, 620, 640 or something. Anyway, I'll know for sure in a couple weeks when send the score report. What a long, hard slog that was. The sneaky bastards snuck in an extra verbal section. I think the extra section doesn't count in your score. ETS uses it to test new questions. The bad news is you don't know which of the two sections was the 'real' one. I repeat, bastards! Next step, complete my math and econ courses. From the program guide "Applicants must have knowledge of multivariate calculus, basic matrix algebra, and differential equations; completion of a two-year math sequence, which emphasizes proofs and derivations, should provide adequate preparation. All applicants are expected to have completed intermediate math-based economic theory courses. Further education in economics and economic theory is helpful, but not required. Finally, some knowledge of statistics and elementary probability is highly desirable." This quarter, I'm busy getting an A in Math 1b at DeAnza College . (Cocky, you think?) I plan to finish the 1 series of Calculus (which includes Multivariable Calculus) this academic year. They offer linear algebra and differential Calculus as separate classes but I'm not sure if the community college will have an emphasis on "proofs and derivation". I'll need to decide if I should take that course at a college or a . Additionally, I'm not sure about what Economics courses I need. Of course, now that I'm not studying for the GRE, I'll begin reading more Economics books and such (in addition to the weekly ritual). I've Amazoned . It sits upon my pile. In case that pile is looking dangerously small (not likely), Mike Moffatt has a list of books that are useful for folks like me (i.e. the sort that wanna go to grad school for econ). And there's Post a Comment << Home Links to this post:
{"url":"http://www.ambrosini.us/blog/2003/11/gre.html","timestamp":"2014-04-21T11:05:30Z","content_type":null,"content_length":"18303","record_id":"<urn:uuid:d56be37a-816c-4a4b-a8b3-31182d2cc991>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
If a student is selected at random from those with a passing grade .... June 5th 2010, 01:05 PM #1 Jan 2008 If a student is selected at random from those with a passing grade .... If a student is selected at random from those with a passing grade, what is the Probability that the student will receive a B or higher? A 6/25 B 6/22 C 9/25 D 9/22 E 8/25 Last edited by mr fantastic; June 5th 2010 at 02:32 PM. Reason: Re-titled. How many students passed? What's a passing grade? Are all grades equally distributed (ie: did the same number of people get an A as those who got a B)? June 5th 2010, 01:07 PM #2
{"url":"http://mathhelpforum.com/statistics/147874-if-student-selected-random-those-passing-grade.html","timestamp":"2014-04-17T22:18:40Z","content_type":null,"content_length":"34252","record_id":"<urn:uuid:db8e8193-2304-4853-9be9-12634862a3d4>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
Author Publications Number of items: 12. Kersten, P.H.M. and Krasil'shchik, I.S. and Verbovetsky, A.M. and Vitolo, R. (2010) Integrability of Kupershmidt Deformations. Acta Applicandae Mathematicae, 109 (1). pp. 75-86. ISSN 0167-8019 Golovko, V and Kersten, P.H.M. and Krasil'shchik, I. and Verbovetsky, A.V. (2008) On integrability of the Camassa–Holm equation and its invariants: A geometrical approach. Acta Applicandae Mathematicae, 101 (1-3). pp. 59-83. ISSN 0167-8019 Kersten, P.H.M. and Krasil'shchik, I. (2006) The Cartan covering and complete integrability of the KdV-mKdV system. In: Constructive Algebra and Systems Theory, November 2000, Amsterdam (pp. pp. Kersten, P.H.M. and Krasil'shchik, I. and Verbovetsky, A.V. (2006) A geometric study of the dispersionless Boussinesq type equation. Acta Applicandae Mathematicae, 90 (1-2). pp. 143-178. ISSN Verbovetsky, A.V. and Kersten, P.H.M. and Krasil'shchik, I. (2005) The D-Boussinesq equation: Hamiltonian and symplectic structures; Noether and inverse Noether operators. [Report] Kersten, P. and Krasil'shchik, I. and Verbovetsky, A. (2004) On the Integrability Conditions for Some Structures Related to Evolution Differential Equations. Acta Applicandae Mathematicae, 83 (1-2). pp. 167-173. ISSN 0167-8019 Kersten, P. and Krasil'shchik, I. and Verbovetsky, A. (2004) Hamiltonian operators and ℓ*-coverings. Journal of Geometry and Physics, 50 (1-4). pp. 273-302. ISSN 0393-0440 Kersten, P.H.M. and Krasil'shchik, I. and Verbovetsky, A.V. (2004) The Monge-Ampère equation: Hamiltonian and symplectic structures, recursions, and hierarchies. [Report] Igonin, S. and Kersten, P.H.M. and Krasil'shchik, I. (2003) On symmetries and cohomological invariants of equations possessing flat representations. Differential Geometry and its Applications, 19 (3). pp. 319-342. ISSN 0926-2245 Igonin, S. and Kersten, P.H.M. and Krasil'shchik, I. (2002) On symmetries and cohomological invariants of equations possessing flat representations. [Report] Kersten, P.H.M. and Krasil'shchik, I. (2002) From recursion operators to Hamiltonian structures. The factorization method. [Report] Kersten, P.H.M. and Krasil'shchik, I. and Verbovetsky, A.V. (2002) An extensive study of the [Report] This list was generated on Sun Apr 20 05:36:43 2014 CEST.
{"url":"http://doc.utwente.nl/view/author/102969302.html","timestamp":"2014-04-20T14:09:50Z","content_type":null,"content_length":"26721","record_id":"<urn:uuid:5289db47-d18d-47bf-9a91-b76c96bba0c8>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
Fraunhofer (ITWM) 18 search hits On the frame-invariant description of the phase space of the Folgar-Tucker equation (2003) J. Linn The Folgar-Tucker equation is used in flow simulations of fiber suspensions to predict fiber orientation depending on the local flow. In this paper, a complete, frame-invariant description of the phase space of this differential equation is presented for the first time. A Multi-Objective Evolutionary Algorithm for Scheduling and Inspection Planning in Software Development Projects (2003) T. Hanne S. Nickel In this article, we consider the problem of planning inspections and other tasks within a software development (SD) project with respect to the objectives quality (no. of defects), project duration, and costs. Based on a discrete-event simulation model of SD processes comprising the phases coding, inspection, test, and rework, we present a simplified formulation of the problem as a multiobjective optimization problem. For solving the problem (i.e. finding an approximation of the efficient set) we develop a multiobjective evolutionary algorithm. Details of the algorithm are discussed as well as results of its application to sample problems. Intensity-Modulated Radiotherapy - A Large Scale Multi-Criteria Programming Problem (2003) T. Bortfeld K-H. Küfer M. Monz A. Scherrer C. Thieke H. Trinkhaus Radiation therapy planning is always a tight rope walk between dangerous insufficient dose in the target volume and life threatening overdosing of organs at risk. Finding ideal balances between these inherently contradictory goals challenges dosimetrists and physicians in their daily practice. Today’s planning systems are typically based on a single evaluation function that measures the quality of a radiation treatment plan. Unfortunately, such a one dimensional approach cannot satisfactorily map the different backgrounds of physicians and the patient dependent necessities. So, too often a time consuming iteration process between evaluation of dose distribution and redefinition of the evaluation function is needed. In this paper we propose a generic multi-criteria approach based on Pareto’s solution concept. For each entity of interest - target volume or organ at risk a structure dependent evaluation function is defined measuring deviations from ideal doses that are calculated from statistical functions. A reasonable bunch of clinically meaningful Pareto optimal solutions are stored in a data base, which can be interactively searched by physicians. The system guarantees dynamical planning as well as the discussion of tradeoffs between different entities. Mathematically, we model the upcoming inverse problem as a multi-criteria linear programming problem. Because of the large scale nature of the problem it is not possible to solve the problem in a 3D-setting without adaptive reduction by appropriate approximation schemes. Our approach is twofold: First, the discretization of the continuous problem is based on an adaptive hierarchical clustering process which is used for a local refinement of constraints during the optimization procedure. Second, the set of Pareto optimal solutions is approximated by an adaptive grid of representatives that are found by a hybrid process of calculating extreme compromises and interpolation methods. Overview of Symbolic Methods in Industrial Analog Circuit Design (2003) T. Haffmann T. Wichmann Industrial analog circuits are usually designed using numerical simulation tools. To obtain a deeper circuit understanding, symbolic analysis techniques can additionally be applied. Approximation methods which reduce the complexity of symbolic expressions are needed in order to handle industrial-sized problems. This paper will give an overview to the field of symbolic analog circuit analysis. Starting with a motivation, the state-of-the-art simplification algorithms for linear as well as for nonlinear circuits are presented. The basic ideas behind the different techniques are described, whereas the technical details can be found in the cited references. Finally, the application of linear and nonlinear symbolic analysis will be shown on two example circuits. Asymptotic Homogenisation in Strength and Fatigue Durability Analysis of Composites (2003) S. E. Mikhailov J. Orlik Asymptotic homogenisation technique and two-scale convergence is used for analysis of macro-strength and fatigue durability of composites with a periodic structure under cyclic loading. The linear damage accumulation rule is employed in the phenomenological micro-durability conditions (for each component of the composite) under varying cyclic loading. Both local and non-local strength and durability conditions are analysed. The strong convergence of the strength and fatigue damage measure as the structure period tends to zero is proved and their limiting values are Heuristic Procedures for Solving the Discrete Ordered Median Problem (2003) P. Dominguez-Marín P. Hansen N. Mladenovic S. Nickel We present two heuristic methods for solving the Discrete Ordered Median Problem (DOMP), for which no such approaches have been developed so far. The DOMP generalizes classical discrete facility location problems, such as the p-median, p-center and Uncapacitated Facility Location problems. The first procedure proposed in this paper is based on a genetic algorithm developed by Moreno Vega [MV96] for p-median and p-center problems. Additionally, a second heuristic approach based on the Variable Neighborhood Search metaheuristic (VNS) proposed by Hansen & Mladenovic [HM97] for the p-median problem is described. An extensive numerical study is presented to show the efficiency of both heuristics and compare them. Exact Procedures for Solving the Discrete Ordered Median Problem (2003) N. Boland Dominguez-Marín P. S. Nickel J. Puerto The Discrete Ordered Median Problem (DOMP) generalizes classical discrete location problems, such as the N-median, N-center and Uncapacitated Facility Location problems. It was introduced by Nickel [16], who formulated it as both a nonlinear and a linear integer program. We propose an alternative integer linear programming formulation for the DOMP, discuss relationships between both integer linear programming formulations, and show how properties of optimal solutions can be used to strengthen these formulations. Moreover, we present a specific branch and bound procedure to solve the DOMP more efficiently. We test the integer linear programming formulations and this branch and bound method computationally on randomly generated test problems. Padé-like reduction of stable discrete linear systems preserving their stability (2003) S. Feldmann P. Lang A new stability preserving model reduction algorithm for discrete linear SISO-systems based on their impulse response is proposed. Similar to the Padé approximation, an equation system for the Markov parameters involving the Hankel matrix is considered, that here however is chosen to be of very high dimension. Although this equation system therefore in general cannot be solved exactly, it is proved that the approximate solution, computed via the Moore-Penrose inverse, gives rise to a stability preserving reduction scheme, a property that cannot be guaranteed for the Padé approach. Furthermore, the proposed algorithm is compared to another stability preserving reduction approach, namely the balanced truncation method, showing comparable performance of the reduced systems. The balanced truncation method however starts from a state space description of the systems and in general is expected to be more computational demanding. A Polynomial Case of the Batch Presorting Problem (2003) J. Kallrath S. Nickel This paper presents new theoretical results for a special case of the batch presorting problem (BPSP). We will show tht this case can be solved in polynomial time. Offline and online algorithms are presented for solving the BPSP. Competetive analysis is used for comparing the algorithms. knowCube for MCDM – Visual and Interactive Support for Multicriteria Decision Making (2003) T. Hanne Trinkhaus H.L. In this paper, we present a novel multicriteria decision support system (MCDSS), called knowCube, consisting of components for knowledge organization, generation, and navigation. Knowledge organization rests upon a database for managing qualitative and quantitative criteria, together with add-on information. Knowledge generation serves filling the database via e.g. identification, optimization, classification or simulation. For “finding needles in haycocks”, the knowledge navigation component supports graphical database retrieval and interactive, goal-oriented problem solving. Navigation “helpers” are, for instance, cascading criteria aggregations, modifiable metrics, ergonomic interfaces, and customizable visualizations. Examples from real-life projects, e.g. in industrial engineering and in the life sciences, illustrate the application of our MCDSS.
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/16196/start/0/rows/10/yearfq/2003/sortfield/year/sortorder/desc","timestamp":"2014-04-19T18:28:02Z","content_type":null,"content_length":"46020","record_id":"<urn:uuid:4845a2ad-7a77-4816-b2dd-0861a0b780dc>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Does the distribution of Touchdowns scored fit the Poisson Distribution? For Super Bowl props, I have been using the poisson distribution to help describe the distribution of touchdowns. For more on applying the poisson distribution to sports betting, get Sharp Sports Betting by Stanford Wong...note he is also my publisher, but believe me, I wouldn't recommend his book unless I thought it was good. Here is an example of how I used the poisson distribution. In Las Vegas, one of the sportsbooks had a contest prop with multiple possibilities. It was on the exact number fo TD passes that Tom Brady would throw in the Super Bowl. They also had the same prop (with different prices) for Eli Manning. For both players, I found there to be positive-EV in betting that they would throw exactly zero TDs. I bet Brady to throw zero TDs at 25 to 1 and Manning to throw zero TD passes at 4 to 1. I used two methods to value these props. The first was a simulation method which simulated the results of drives for the game for each team. I ran the simulation 10,000 times. The second method was using the Poisson Distribution. Both methods needed an accurate expected number of TD passes for the two QBs as the main input. Assuming I was accurate on that mean (if I wasn't, all results would be off), I found both 25 to 1 on Brady and 4 to 1 on Manning to be positive-EV bets. Today I started thinking: does the poisson distribution really describe Touchdowns in the NFL well? Or probably better phrased: is the distribution of touchdowns in NFL games similar to a poisson distribution? (remember, I am not a statistician, just a gambler who tries to use techniques to get better values....so I apologize if this is not the correct technical way to say it) That's a tough question to answer. First I would need to know the true mean of TD passes for the QBs. But with such small sample sizes (just 16 regular season games in a year), and other factors (quality of opponent's defenses for example), it is really difficult to peg it down too closely. So instead, I decided to throw a big net on the NFL and look at all games and see if the distribution of the number of TDs matched the poisson distribution. I think it does. Here are the results: I took all games from 1989 to the end of the regular season of 2007 (including all playoff games except this year's playoff games as I had not inputed them yet). I have the number of rushing TDs, passing TDs and defensive/special teams TDs in all games during that span. I lumped defensive TDs with special teams TDs in one category. Here are the averages both both teams combined. I did not separate out to individual teams. Rushing TD: 1.61 Passing TD: 2.59 Def/ST TD: 0.44 Next, I added up the number of games with exactly 0 rushing TDs, exactly 1 rushing TDs, exactly 2 rushing TDs, etc. etc. and repeated it for the other two ways to score TDs. Next, I plugged in the mean for each way to score a TD and had the poisson distribution spit out the expected number for each exact number of TD. These two methods (the actual exact number of TDs in real games and the expected exact number of TDs using the poisson distribution) were very similar. Here are the results. The first number is the exact number of TDs, the second is the percentage of games that actually had that exact number of TDs, the third is the poisson distribution's prediction of the expected percentage of games that had that exact number of TDs. Rushing TDs 0 19.6% 20.1% 1 32.6% 32.2% 2 25.8% 25.9% 3 14.6% 13.9% 4 4.8% 5.6% 5 1.8% 1.8% 6 0.5% 0.5% 7 0.1% 0.1% Passing TDs 0 7.9% 7.5% 1 20.3% 19.4% 2 24.0% 25.1% 3 20.4% 21.7% 4 14.7% 14.1% 5 7.7% 7.3% 6 3.2% 3.2% 7 1.2% 1.2% 8 0.3% 0.4% 9 0.2% 0.1% Defensive and Special Teams TDs 0 64.8% 64.3% 1 27.6% 28.4% 2 6.3% 6.3% 3 1.2% 0.9% 4 0.1% 0.1% As you can see, the percentage numbers match up very closely. As a first pass, my opinion is that the possion distribution describes the distribution of TDs quite well, but one needs to be correct with the expected mean numbers. For example, it would be incorrect to assume that Brady's distribution of TD passes is the same as Manning's. Brady has a much higher average. So the crucial step is still estimating the average number of TD passes they will throw.
{"url":"http://weighingtheodds.blogspot.com/2008/02/does-distribution-of-touchdowns-scored.html","timestamp":"2014-04-17T06:47:18Z","content_type":null,"content_length":"35645","record_id":"<urn:uuid:70b86175-37c4-4537-9b9e-4a89279d9708>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] predicting values based on (linear) models josef.pktd@gmai... josef.pktd@gmai... Wed Jan 14 23:50:56 CST 2009 On Wed, Jan 14, 2009 at 11:24 PM, Pierre GM <pgmdevlist@gmail.com> wrote: > On Jan 14, 2009, at 10:15 PM, josef.pktd@gmail.com wrote: >> The function in stats, that I tested or rewrote, are usually identical >> to around 1e-15, but in some cases R has a more accurate test >> distribution for small samples (option "exact" in R), while in >> scipy.stats we only have the asymptotic distribution. > We could try to reimplement part of it in C,. In any case, it might > be worth to output a warning (or at least be very explicit in the doc) > that the results may not hold for samples smaller than 10-20. I am not a "C" person and I never went much beyond HelloWorld in C. I just checked some of the doc strings, and I am usually mention that we use the asymptotic distribution, but there are still pretty vague statements in some of the doc strings, such as "The p-values are not entirely reliable but are probably reasonable for datasets larger than 500 or so." >> Also, not all >> existing functions in scipy.stats are tested (yet). > We should also try to make sure missing data are properly supported > (not always possible) and that the results are consistent between the > masked and non-masked versions. I added a ticket so we don't forget to check this. > IMHO, the readiness to incorporate user feedback is here. The feedback > is not, or at least not as much as we'd like. That depends on the subpackage, some problems in stats have been reported and known for quite some time and the expected lifetime of a ticket can be pretty long. I was looking at different python packages that use statistics, and many of them are reluctant to use scipy while numpy looks very well established. But, I suppose this will improve with time and the user base will increase, especially with the recent improvements in the build/distribution and the documentation. More information about the SciPy-user mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2009-January/019404.html","timestamp":"2014-04-16T07:16:47Z","content_type":null,"content_length":"4920","record_id":"<urn:uuid:ab3f1f97-410a-4445-94c4-bc8751298191>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
Chord Length Calculator - Colorado Home Loan Mortgage Length Calculator eBook, Complex Number Calculator Precision 45, Complex Calculator Precision 36 - and Others Mobile Partner Huawei E1550 For Windows 7 Free Download Iron Man Pc Game Facility Planning Software Fantasy Name Genera Video Editor Mpeg Ip Address Management Software Freeware Samsung R430 Digital Livecam Quantum Cam Driver Qhm500 8lm Unlock Sony X8 Sim Unlock Online Resize Gif Thomas Train Configurations Prim And Precise Demon Tools Prevodilac Tema Naruto Old Hindi Movie Dunyadari Chord Length Calculator 1. Colorado Home Loan Mortgage Length Calculator eBook - Home & Personal/E-books & Information Databases ... Colorado Home Loan Mortgage Length Calculator eBook v4.4 is a GREAT Resource to determine your savings if you make larger monthly payments on your home loan. Although this eBook was designed for Colorado you can still benefit by using it for other states within the USA. ... Price: $0.00 (Demo) Size: 1.2 MB Site: (AMERICAN FINANCING) 2. Complex Number Calculator Precision 45 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform calculations with complex functions. Complex Number Calculator Precision 45 is programmed in C#. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for storing often used numbers. Prebuilt Common Costants list with fundamental constants. Unlimited User ... Tags: Beta Function - Complex Functions - Complex Number Calculator - Gamma Function - Hyperbolic Functions - Inverse Functions - Trigonometric Functions Price: $60.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 3. Complex Calculator Precision 36 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform calculations with comlex functions. Complex Number Calculator Precision 36 is programmed in C#. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for storing often used numbers. Prebuilt Common Costants list with fundamental constants. Unlimited User ... Tags: Complex Functions - Complex Number Calculator - Gamma Function - Hyperbolic Functions - Inverse Functions - Trigonometric Functions Price: $45.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 4. CCTV Design Lens Calculator - Multimedia & Design/Video ... Advanced CCTV Lens Calculator for CCTV designers. It offers standard functions of Lens Calculators - calculation of the field-of-view size depending on the distance and the lens focal Length as well as several new, more effective tools. The CCTV Lens Calculator will help you: Select lenses, resolutions and camera positions for proper design of CCTV systems. Calculate projections of the field of view in order to draw them on the location plan. The projections are calculated in 3D coordinate ... Tags: 3d Cctv - Cctv Design - Cctv Lens Calculator - Field Of View - Focal Length - Ip Camera - Lens - Lens Calculator - Megapixel Camera - Megapixel Resolution Price: $0.00 (Freeware) Size: 8.4 MB Site: cctvcad.com (CCTVCAD Software) 5. Length Area Converter Software - Internet/Tools & Utilities ... Data Doctor area Length converter and lands price Calculator is user friendly software provides calculation of properties in various units. Download completely free software which assists you in dealing the customer, speed up your business related to properties and real estate. Area Length converter price Calculator utility provides conversion of any property related unit to and from Acres, Ares, Dunams, Hectares, Perches, Ping, Pyong, Roods, Sections, Square Centimeters, Square Chains, Square ... Tags: Acre - Area - Calculate - Commercial - Convert - Converter - Download - Feet - Flat - House Price: $0.00 (Freeware) Size: 757.0 KB Site: (REAL ESTATE NOIDA) 6. Complex Calculator Precision 27 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform calculations with comlex functions. Complex Calculator Precision 27 is programmed in C#. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for storing often used numbers. Prebuilt Common Costants list with fundamental constants. Unlimited User ... Tags: Complex Calculator - Complex Functions - Hyperbolic Functions - Inverse Functions - Trigonometric Functions Price: $30.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 7. Complex Calculator Precision 18 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform calculations with complex functions. Complex Number Calculator Precision 18 is programmed in C#. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for storing often used numbers. Prebuilt Common Costants list with fundamental constants. Unlimited User ... Tags: Complex Functions - Complex Number Calculator - Hyperbolic Functions - Inverse Functions - Trigonometric Functions Price: $20.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 8. College Scientific Calculator 27 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform complex mathematical calculations Scientific Calculator Precision 27 is programmed in C#. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for storing often used numbers. Precision of calculations is 27 digits. Trigonometric, hyperbolic, inverse and ... Tags: Decimal - High Precision - Hyperbolic Functions - Inverse Functions - Scientific Calculator - Trigonometric Functions Price: $11.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 9. Trades Math Calculator - Business & Productivity Tools/Calculators & Converters ... Solve common machine shop and other trades trigonometry and math problems at a price every trades person can afford! As a machinist or CNC programmer, you often have to use trigonometry to calculate hole positions, chamfers, sine bar stacks, dovetail measurements, bolt circles, etc. You often have to leaf through reference books, drill charts, speed and feed tables, thread wire charts and so on to find the information you need. On the other hand, you can use the Trades Math Calculator fast, easy ... Tags: Ball Nose Cutter - Bolt Circle - Calculators - Chord Geometry - Cutting Speed - Drill Charts - Machining Math - Machinist Calculator - Machinist Helper - Milling Speed Price: $14.99 (Shareware) Size: 5.5 MB Site: tradesmathcalculator.com (HiLo Enterprises) 10. RA Chord Hunter - Educational/Reference Tools ... RA Chord Hunter is a handy program designed to display Chord diagrams based on the data provided by the user (Chord root and Chord type), slash chords (e.g. Am7/G, Bb/D, Dm/F, etc...) are supported. RA Chord Hunter also includes a guitar tuner, transposer (both visual and as a grid), and many other features. With RA Chord Hunter you can listen to the chords just by clicking on them in the lyrics, easily transpose them, copy and save the song text. Small size No installation needed Writes ... Tags: Chord - Fretboard - Guitar - Midi - Music - Songbook - Tab - Tabulature - Transposer - Transpositor Price: $0.00 (Freeware) Size: 1.3 MB Site: rasoftware.at.ua (RA Software) 11. Area Conversion and Price Calculator - Business & Productivity Tools/Calculators & Converters ... Area converter and price Calculator software is available free of cost. Tool installation wizard makes easy to install the software on your computer on various version of windows operating system like 98, ME, NT, 2000, 2003 server, XP, VISTA. Software can figure out cost of shop, office, homes, plot, lands and variety of other real estate properties. Area converter application converts land area, property, plot and other units of your choice. Tool can change assorted land , area, property Length ... Tags: Application - Area - Calculate - Calculator - Change - Converter - Converts - Cost - Efficient - Estate Price: $0.00 (Freeware) Size: 759.0 KB Site: (REAL ESTATE NOIDA) 12. Scientific Calculator Precision 54 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform complex mathematical calculations. Scientific Calculator Precision 54 is programmed in C#. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for storing often used numbers. Precision of calculations is 54 digits. Trigonometric, hyperbolic and ... Tags: Decimal - High Precision - Hyperbolic Functions - Inverse Functions - Scientific Calculator - Trigonometric Functions Price: $20.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 13. College Scientific Calculator 45 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform complex mathematical calculations. Scientific Calculator Precision 45 is programmed in C#. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for storing often used numbers. Precision of calculations is 45 digits. Trigonometric, hyperbolic, inverse and ... Tags: Decimal - High Precision - Hyperbolic Functions - Inverse Functions - Scientific Calculator - Trigonometric Functions Price: $17.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 14. Scientific Calculator Precision 72 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform complex mathematical calculations Scientific Calculator Precision 72 is programmed in C#. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for storing often used numbers. Prebult Common Costants list with fundamental constants. Unlimited User ... Tags: Gamma Function - Hyperbolic Functions - Inverse Functions - Lower Incomplete Gamma Function - Scientific Calculator - Trigonometric Functions - Upper Incomplete Gamma Function Price: $30.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 15. College Scientific Calculator 36 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform complex mathematical calculations. Scientific Calculator Precision 36 is programmed in C#. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for storing often used numbers. Precision of calculations is 36 digits. Trigonometric, hyperbolic, inverse and ... Tags: Decimal - High Precision - Hyperbolic Functions - Inverse Functions - Scientific Calculator - Trigonometric Functions Price: $14.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 16. Scientific Calculator Precision 63 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform complex mathematical calculations. Scientific Calculator Precision 63 is programmed in C#. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for storing often used numbers. Prebuilt Common Costants list with fundamental constants. Unlimited User ... Tags: Decimal - High Precision - Hyperbolic Functions - Inverse Functions - Scientific Calculator - Trigonometric Functions Price: $25.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 17. Scientific Calculator Precision 81 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform complex mathematical calculations Scientific Calculator Precision 81 is programmed in C#. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for storing often used numbers. Prebuilt Common Costants list with fundamental constants. Unlimited User ... Tags: Gamma Function - Hyperbolic Functions - Inverse Functions - Lower Incomplete Gamma Function - Scientific Calculator - Trigonometric Functions - Upper Incomplete Gamma Function Price: $35.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 18. Scientific Calculator Precision 90 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform complex mathematical calculations. Scientific Calculator Precision 90 is programmed in C#. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for storing often used numbers. Prebuilt Common Costants list with fundamental constants. Unlimited User ... Tags: Beta Function - Gamma Function - Hyperbolic Functions - Incomplete Beta Function - Inverse Functions - Lower Incomplete Gamma Function - Scientific Calculator - Sine Integral Function - Trigonometric Functions - Upper Incomplete Gamma Function Price: $40.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 19. Modern Calculator - Business & Productivity Tools/Other Related Tools ... Modern Calculator is a powerful instrument for making various types of calculations. The program can be used as a standard Calculator for making a simple and scientific calculations. All calculations are automatically saved to a file and displayed in the log window. Except a standard Calculator the program includes some useful instruments that will allow you to make specific types of calculations: Functions: 1. Equation solver - allows to calculate roots of three types of equations: linear, ... Tags: Advanced Calculator - Arithmetic Mean - Beautiful Calculator - Expression Calculator - Grapher - History Of Calculations - Log - Modern Calculator - Plotter - Scientific Price: $9.99 (Shareware) Size: 1.4 MB Site: am-softs.com (Amazing Software Solutions) 20. Multipurpose Calculator - MultiplexCalc - Educational/Mathematics ... MultiplexCalc is a multipurpose and comprehensive desktop Calculator for Windows. It can be used as an enhanced elementary, scientific, financial or expression Calculator. It embodies prime numbers, rational numbers, generic floating-point routines, hyperbolic and transcendental routines. MultiplexCalc contains more than 100 mathematical functions and constants to satisfy your needs to solve problems ranging from simple elementary algebra to complex equations. Its underling implementation ... Tags: Calculator - Desktop - Desktop Calculator - Enhanced - Expression - Multiplexcalc - Multivariable - Scientific - Scientific Calculator Price: $15.00 (Shareware) Size: 1.9 MB Site: math-solutions.org (Institute of Mathematics and Statistics) 21. Registry Fix Review - Utilities ... A Free Calculator For Converting Miles, Kilometers, Meters, Yards, Feet. Here's all you do. Leave all the boxes blank except the one you want to covert. Enter a number in the box you want to convert, and click the "Click Here to Convert" button. All the other boxes will be filled with the 100% accurate measurement! To do another, click the reset button and go again. ... Tags: Comvert Length Measurements - Convert Miles - Feet - Kilometers - Length Calculator - Meters - Yards Price: $0.00 (Freeware) Size: 415.5 KB Site: registryfixreview.com (Registry Fix Review) 22. Compact Scientific Calculator 36 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform mathematical calculations. The Calculator was designed with purpose to fit Netbooks and Notebooks with small display. Of course, the Calculator can be used on laptop and desktop computers as well. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for ... Tags: Compact - Decimal - High Precision - Hyperbolic Functions - Inverse Functions - Scientific Calculator - Trigonometric Functions Price: $14.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 23. Compact Scientific Calculator 45 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform mathematical calculations. The Calculator was designed with purpose to fit Netbooks and Notebooks with small display. Of course, the Calculator can be used on laptop and desktop computers as well. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for ... Tags: Compact - Decimal - High Precision - Hyperbolic Functions - Inverse Functions - Scientific Calculator - Trigonometric Functions Price: $17.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 24. Compact Scientific Calculator 54 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform mathematical calculations. The Calculator was designed with purpose to fit Netbooks and Notebooks with small display. Of course, the Calculator can be used on laptop and desktop computers as well. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for ... Tags: Decimal - High Precision - Hyperbolic Functions - Inverse Functions - Scientific Calculator - Trigonometric Functions Price: $20.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 25. Compact Scientific Calculator 27 - Educational/Mathematics ... A handy, fast, reliable, precise tool if you need to perform mathematical calculations. The Calculator was designed with purpose to fit Netbooks and Notebooks with small display. Of course, the Calculator can be used on laptop and desktop computers as well. All calculations are done in proprietary data type. The Calculator handles mathematical formulas of any Length and complexity. Calculation history can be stored into text file or printed. There are ten variables or constants available for ... Tags: Compact - Decimal - High Precision - Hyperbolic Functions - Inverse Functions - Scientific Calculator - Trigonometric Functions Price: $11.00 (Shareware) Size: 2.6 MB Site: tvalx.com (Tvalx) 26. Desktop Calculator - DesktopCalc - Educational/Mathematics ... DesktopCalc is an enhanced, easy-to-use and powerful scientific Calculator with an expression editor, printing operation, result history list and integrated help. Desktop Calculator gives students, teachers, scientists and engineers the power to find values for even the most complex equation set. DesktopCalc uses Advanced DAL (Dynamic Algebraic Logic) mechanism to perform all its operation with the built-in 38-digit precision math emulator for high precision. DesktopCalc combines fast ... Tags: Algebraic - Calc - Calculator - Desktop - Desktop Calculator - Desktopcalc - Education - Equation - Expression - Scientific Price: $15.00 (Shareware) Size: 1.9 MB Site: math-solutions.org (Institute of Mathematics and Statistics) 27. Xmart Calculator - Multimedia & Design/Other Related Tools ... Xmart Calculator is appreciated as an intelligent, programmable and expandable Calculator based on text expression. Users can define personal functions and all calculations are step-by-step traceable. Xmart Converter inbuilt containing most common conversions for base, PC memory, speed, Length, area, volume, weight and temperature. It is now free for windows! TE Calculate the whole expression input by user. TE Show all the calculation history. TE Multi-functional calculation. TE Calculation ... Tags: Calculator - Extendable Calculator - Function Calculator - Mathematic - Scientific - Scientific Calculator Price: $0.00 (Freeware) Size: 594.0 KB Site: xmartcalc.com (Xmartsoft) 28. Inition StereoBrain Calculator - Multimedia & Design/Other Related Tools ... Inition StereoBrain Calculator is a reliable software designed for stereoscopic 3D work. It has been developed with ease and speed of use in real scenarios in mind.It is particularly useful in pre-production for planning which rigs are suitable for particular scenarios.StereoBrain has two modes of operation:· Interaxial finder: Will find interaxial (aka stereo-base or camera separation) distance given focal Length, desired parallax and subject distance. ... Tags: 3d Viewer - Calculator - Inition Stereobrain Calculator - Shooting - Stereo Shooting - Stereoscopic - Stereoscopic Calculator Price: $0.00 (Freeware) Site: inition.co.uk (Inition Ltd.) 29. Scientific Calculator - ScienCalc - Educational/Mathematics ... ScienCalc is a convenient and powerful scientific Calculator. ScienCalc calculates mathematical expression. It supports the common arithmetic operations (+, -, *, /) and parentheses. The program contains high-performance arithmetic, trigonometric, hyperbolic and transcendental calculation routines. All the function routines therein map directly to Intel 80387 FPU floating-point machine instructions. Find values for your equations in seconds: Scientific Calculator gives students, ... Tags: Application - Arithmetic - Calculation - Calculator - Complex - Equation - Exponential - Hyperbolic - Linear - Logarithmic Price: $15.00 (Shareware) Size: 1.9 MB Site: math-solutions.org (Institute of Mathematics and Statistics) 30. Innovative Calculator - InnoCalculator - Educational/Mathematics ... InnoCalculator is a multipurpose and comprehensive desktop Calculator for Windows. Its underling implementation encompasses high precision, sturdiness and multi-functionality. With the brilliant designs and powerful features of InnoCalculator, you can bring spectacular results to your calculating routines. It contains more than hundred mathematical functions and physical constants to satisfy your needs to solve problems ranging from simple elementary algebra to complex equations.InnoCalculator ... Tags: Analysis - Arithmetic - Build - Buttons - Calculator - Complex - Cut - Desktop - Display - Documentation Price: $15.00 (Shareware) Size: 3.3 MB Site: math-solutions.org (Institute of Mathematics and Statistics) Chord Length Calculator software tags:
{"url":"http://shareme.com/showtop/chord-length-calculator.html","timestamp":"2014-04-19T07:19:29Z","content_type":null,"content_length":"86676","record_id":"<urn:uuid:ab344c22-37e7-437c-b2e3-041e8144415a>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
Does this “flipping lexicographic” ordering have a standard name? up vote 8 down vote favorite I’ve run into the following straightforward variant of lexicographic ordering, and am wondering if it has a standard name. I’ve been calling it the flipping lexicographic ordering, for evident reasons. I could also imagine it getting called the parity lexicographic ordering, but a brief search suggests that that’s used for some slightly different orderings. $\newcommand{\x}{\mathbf{x}} \ newcommand{\y}{\mathbf{y}} \newcommand{\N}{\mathbb{N}} \newcommand{\fl}{\mathrm{fl}} \newcommand{\lfl}{\;\sqsubset^\fl\;}$ For sets $\x, \y \in \binom{\N}{m+1}$, write $\x = \{x_0 < \ldots < x_m\}$, $\y = \{y_0 < \ldots < y_m\}$. Definition. $\x \lfl \y$ if $\x$ and $\y$ differ first in the $i$th place, and • $i$ is even, and $x_i < y_i$; or • $i$ is odd, and $y_i < x_i$. (This is the flip!) As for ordinary lex, there’s also a nice inductive characterisation: Write $\x = \{x_0\} \cup \x^{\geq 1}$, and $\y = \{y_0\} \cup \y^{\geq 1}$, similarly to above. Then $\x \lfl \y$ if and only if either $x_0 < y_0$, or $x_0 = y_0$ and $\y^{\geq 1} \lfl \x^{\geq 1}$. (Again, note the flip.) Does this ring any bells with anybody? (Of course, $\lfl$ has obvious generalisations beyond $\binom{\N}{m+1}$; I’m sticking to that case here partly for definiteness, mainly since that’s the specific case I’m interested in.) Background: I’ve been playing around with implementing the algorithms from Ross Street’s “The Algebra of Oriented Simplices” (and related papers) in Haskell/Agda, and this ordering turns out to make a computationally convenient stand-in for his $\lhd$ order, in places. co.combinatorics order-theory simplicial-stuff 4 Reminds me of boustrophedon. – Stephen S Feb 5 '11 at 9:16 2 +1 for 'boustrophedon order' – ndkrempel Feb 5 '11 at 13:03 1 The first 4-letter word in the boustrophedonic dictionary is "waxy", since the only things that would beat it are $\{a < x < y < z\}$ and $\{a < w < x < z\}$, neither of which has any anagrams. – Tracy Hall Feb 5 '11 at 21:04 add comment 1 Answer active oldest votes I found an example in the mathematical literature where the same ordering on words, and more specifically continued fractions, is called "alternating lexicographic" order. I guess that there are other examples too, and that this is name can be considered standard. The term "boustrophedonic order" also appears in the mathematical literature, but it seems to mean up vote 6 something different. The boustrophedonic order on the English alphabet is AZBYCXDW... . In my opinion, calling your ordering boustrophedonic is clever, but I think that "alternating down vote lexicographic" is more consistent as well as more standard, since it is an alternating combination of the lexicographic and colexicographic (or lex and colex) orderings. 3 Maybe «boustrophedonic» is too clever :) – Mariano Suárez-Alvarez♦ Feb 5 '11 at 20:53 Ah, thankyou very much — indeed, googling "alternating lexicographic" turns up quite a number of results, which (from a small sample) seem to describe the right ordering. This looks like the answer I was after. Although it now seems so pedestrian compared to ‘flipping’ and ‘boustrophedonic’… – Peter LeFanu Lumsdaine Feb 5 '11 at 22:39 add comment Not the answer you're looking for? Browse other questions tagged co.combinatorics order-theory simplicial-stuff or ask your own question.
{"url":"http://mathoverflow.net/questions/54377/does-this-flipping-lexicographic-ordering-have-a-standard-name/54464","timestamp":"2014-04-19T07:17:08Z","content_type":null,"content_length":"58702","record_id":"<urn:uuid:ca72bbd4-004a-4472-98ab-732ac2d337a8>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
XX International Linac Conference MOE13 (Poster) Presenter: Christopher Allen (LANL) email: ckallen@lanl.gov Status: Complete FullText: ps.gz or pdf Investigation of Halo Formation in Continuous Beams Using Weighted Polynomial Expansion and Perturbational Analysis* Christopher K. Allen (LANL) Perturbation analysis, along with a weighted polynomial expansion of the self-fields, is used to construct approximate halo particle trajectories. The analysis is based on the particle-core model of halo formation. Here, the self-fields of the core are expanded in a polynomial where the coefficients are chosen to minimize the least-square error from the true fields. The error is weighted against the core distribution so that the expanded fields are an laveragedn representation of the fields felt by halo test particles. Keeping the nonlinear terms in this expansion retains the stability of the halo particles in the model, which is not possible with Hillms equation used previously in particle-core studies. From this model, we employ perturbation analysis to construct approximate solutions for the halo particle trajectories. These solutions can provide quantitative information on the halo formation, such as amplitude and time constants, accurate when the relative beam mismatch is small. *Work supported by US Department of Energy Linac2000 Author Index Linac2000 Menu Comments or Questions to linac2000@slac.stanford.edu
{"url":"http://www.slac.stanford.edu/econf/C000821/MOE13.shtml","timestamp":"2014-04-19T02:53:35Z","content_type":null,"content_length":"2633","record_id":"<urn:uuid:b97ecb9e-0adf-430d-938f-40575109e8e3>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
The Roommates Problem Ethan Bernstein and Sridhar Rajagaopalan EECS Department University of California, Berkeley Technical Report No. UCB/CSD-93-757 We consider a version of on-line maximum matching where an edge may be matched when either endpoint arrives. In contrast with previous work, we work with general undirected graphs with arbitrary edge weights. Our algorithms are inspired by the maxim "A bird in the hand is worth two in the bush". For the weighted case, we give a deterministic algorithm with a worst-case performance ratio of ${1 \ over 4}$. We prove an upper bound on the worst case performance ratio of any deterministic on-line algorithm of ${1 \over 3}$. For the unweighted case, we prove a tight bound of ${2 \over 3}$ on the possible worst-case performance ratio of any deterministic on-line algorithm. All running times are small polynomials ($O(\size{V}\cdot\size{E})$) using naive implementations and no complicated data structures. As justified by previous work, problems of this nature are of practical importance. BibTeX citation: Author = {Bernstein, Ethan and Rajagaopalan, Sridhar}, Title = {The Roommates Problem}, Institution = {EECS Department, University of California, Berkeley}, Year = {1993}, URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/1993/6282.html}, Number = {UCB/CSD-93-757}, Abstract = {We consider a version of on-line maximum matching where an edge may be matched when either endpoint arrives. In contrast with previous work, we work with general undirected graphs with arbitrary edge weights. Our algorithms are inspired by the maxim "A bird in the hand is worth two in the bush". For the weighted case, we give a deterministic algorithm with a worst-case performance ratio of ${1 \over 4}$. We prove an upper bound on the worst case performance ratio of any deterministic on-line algorithm of ${1 \over 3}$. For the unweighted case, we prove a tight bound of ${2 \over 3}$ on the possible worst-case performance ratio of any deterministic on-line algorithm. All running times are small polynomials ($O(\size{V}\cdot\size{E})$) using naive implementations and no complicated data structures. As justified by previous work, problems of this nature are of practical importance.} EndNote citation: %0 Report %A Bernstein, Ethan %A Rajagaopalan, Sridhar %T The Roommates Problem %I EECS Department, University of California, Berkeley %D 1993 %@ UCB/CSD-93-757 %U http://www.eecs.berkeley.edu/Pubs/TechRpts/1993/6282.html %F Bernstein:CSD-93-757
{"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/1993/6282.html","timestamp":"2014-04-20T08:16:45Z","content_type":null,"content_length":"6265","record_id":"<urn:uuid:3cb8a891-dc18-4296-9e8e-0d7852643c0f>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
Definitions for quotientˈkwoʊ ʃənt This page provides all possible meanings and translations of the word quotient Random House Webster's College Dictionary quo•tientˈkwoʊ ʃənt(n.) 1. the result of division; the number of times one quantity is contained in another. Category: Math Origin of quotient: 1400–50; late ME quocient, quociens < L quotiēns (adv.) how many times Princeton's WordNet 1. quotient(noun) the ratio of two quantities to be divided 2. quotient(noun) the number obtained by division 1. quotient(Noun) The number resulting from the division of one number by another. The quotient of 12 divided by 4 is 3. 2. quotient(Noun) By analogy, the result of any process that is the inverse of multiplication as defined for any mathematical entities other than numbers. 3. quotient(Noun) A quotum or quota. Origin: From quotiens, from quoties Webster Dictionary 1. Quotient(noun) the number resulting from the division of one number by another, and showing how often a less number is contained in a greater; thus, the quotient of twelve divided by four is three 2. Quotient(noun) the result of any process inverse to multiplication. See the Note under Multiplication 1. Quotient In mathematics, a quotient is the result of division. For example, when dividing 6 by 3, the quotient is 2, while 6 is called the dividend, and 3 the divisor. The quotient further is expressed as the number of times the divisor divides into the dividend, e.g. 3 divides 2 times into 6. A quotient can also mean just the integer part of the result of dividing two integers. For example, the quotient of 13 and 5 would be 2 while the remainder would be 3. For more, see the Euclidean division. In more abstract branches of mathematics, the word quotient is often used to describe sets, spaces, or algebraic structures whose elements are the equivalence classes of some equivalence relation on another set, space, or algebraic structure. See: ⁕quotient set ⁕quotient group ⁕quotient ring ⁕quotient module ⁕quotient space ⁕quotient space of a topological space ⁕quotient object ⁕quotient category ⁕right quotient and left quotient The quotient rule is a method for finding derivatives in calculus. The New Hacker's Dictionary 1. quotient See coefficient of X. Find a translation for the quotient definition in other languages: Use the citation below to add this definition to your bibliography: Are we missing a good definition for quotient?
{"url":"http://www.definitions.net/definition/quotient","timestamp":"2014-04-21T07:27:54Z","content_type":null,"content_length":"26851","record_id":"<urn:uuid:f747bb04-d4cb-4f95-8c20-5912bf74db0f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Peas Galore Puzzle The Puzzle: At a school fete people were asked to guess how many peas there were in a jar. No one guessed correctly, but the nearest guesses were 163, 169, 178 and 182. One of the numbers was one out, one was three out, one was ten out and the other sixteen out. How many peas were there in the jar? Do you have the answer? Check against our solution!
{"url":"http://www.mathsisfun.com/puzzles/peas-galore.html","timestamp":"2014-04-16T15:59:52Z","content_type":null,"content_length":"5526","record_id":"<urn:uuid:32364a16-9a28-427c-b87e-44bae626445d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Who Is Responsible for Student Achievement? Replies: 4 Last Post: Feb 18, 2012 2:09 AM Messages: [ Previous | Next ] Re: Who Is Responsible for Student Achievement? Posted: Feb 17, 2012 9:18 AM Let's leave aside for the moment the issue of whether teachers understand the relationship of memory to mathematics and/or reflect that relationship poorly to their students. Students ARE NOT passing tests by memory. Indeed, students are doing very poorly on tests, which this thread owes its existence to. But some students do pass tests and some of those even do well. Thus, if there is any truth to the idea of a "memory poser" they would have to be in that smaller set of students that pass tests. I have found no indication of this. In fact, one of the reasons that students fail tests is not that they memorize but that they only memorize and even then they memorize poorly. I just wanted to clarify that if you go into an authentic mathematics test and memory is all you got then fail you will. Regarding concepts. You don't teach concepts, you experience them. Concepts (and theory) are simply how we share those thought experiences. Some concepts embody those experiences in logic more nicely than others but if the student and teacher are not having those same experiences (the epiphanetic moments) then the student is failing. Regarding word problems involving apples and trains. If the student cannot do math in that freakedly simple context then the student is seriously failing. Not much more needs to be said about that. Memory + Inference + Interest Bidda Badda Boom Bob Hansen On Feb 17, 2012, at 4:56 AM, john <jthiret1982@gmail.com> wrote: > When looking at the problem we see the both sides blaming each other, when it actually takes both of them for education to work. Now as to whose place is it to actually inspire children to learn. The obvious choice is the educators, after all why spend that much money and that much of your life devoted to education, when you can't even encourage learning. If all parents were qualified educators, there would be no need for the public school system or teachers at all. > As for the parents, their role should not be supplemental, but should be a compliment to the teachers role. The parents should not take up for teachers "slack", but rather should have more communication with teachers and see what and how they are teaching the children. > The parents will then be in a position help reinforce those ideas. This works because with effective communication the parents and teachers will not be contradicting each other and confusing the > I use the word slack not because teachers are not doing their jobs, but simply because educators are now forced to give exam prep not education. It is not solely the fault of educators, but after all the years of "experience" in educating most still don't know the impact of the memorization and regurgitation style of teaching. > Most educators equate memorizing a bunch of formulas and methods to understanding the material. Just because a student can memorize what a teacher wants them to and recall it at test time does not mean the student "understands" the material. > I have a great memory, and when I was in middle school I constantly tested my memory to see what I could remember. I memorized the bill of rights for a U.S. history class I could recite the bill of rights in reverse order or recall a specific amendment on command. > However, it was not until long after I graduated high school until I truly understood its meaning or how important it really was. By this time I had forgotten most of the bill of rights, making the memorized knowledge of it useless to me. When in reality that is the most important document in U.S. history. > Now with mathematics we have a similar problem. Students, some with good memory and some without, all are taught that they need to memorize all these rules and formulas, and regurgitate them on tests. They do get to use these concepts in their homework, which educators think that helps them understand it. It does not In some cases it actually reinforces the students hatred or anxiety for the subject. We attempt to help them understand with word problems such as: > [Sally sells apple for extra money for the summer....... OR train a is going 55 mph and train b is...] > Most student do not sell apples and will never need to know which train is going to arrive first. The people at the train station can tell them that. I feel very lucky to have gotten a great educator in high school mathematics. In addition to teaching us the concepts formulas and rules of mathematics she listened to us in the hallways and in class and even in the lunchrooms., and picked up on some problems we experienced in our daily lives. She would then utilize that in class, and mathematically model some of the problems we experienced. In doing this she even manged to slip in some math that was not on the exams that stuck with me for the past 12 years. > Most teachers like to feel they are doing this, and some feel they just simply don't have the time in their day to do that. Well this is where we can actually shift some blame back to the education system in the U.S. We spend so much more time on the "easier" subjects then we should. > I remember in high school we were required to take a class on economics. That could easily be integrated into a math class, giving more time teaching mathematics. There are more examples of "required" classes that could be {"-mathematized-"} > So to wrap up a "huge" change is needed on all sides to ensure future generations are more educated than the ones before them. Date Subject Author 2/16/12 Who Is Responsible for Student Achievement? Jerry P. Becker 2/17/12 Re: Who Is Responsible for Student Achievement? john 2/17/12 Re: Who Is Responsible for Student Achievement? Robert Hansen 2/17/12 Re: Who Is Responsible for Student Achievement? Wayne Bishop 2/18/12 Re: Who Is Responsible for Student Achievement? Louis Talman
{"url":"http://mathforum.org/kb/message.jspa?messageID=7670555","timestamp":"2014-04-19T17:46:17Z","content_type":null,"content_length":"26495","record_id":"<urn:uuid:5a198df0-40d9-47f1-9324-c2afbc501adc>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph theory minors question! 1. The problem statement, all variables and given/known data Prove that, if G is a simple graph with no K5-minor and |V(G)| Does not = 0, then G has a vertex at most 5. 2. Relevant equations |E(G)| <= 3|V(G)| - 6 for |V(G)| >= 3 (Proved by earlier part of problem set) Handshake theorem (I don't believe we are allowed to use hadwiger's conjecture 3. The attempt at a solution Well I first used the handshake theorem to show that the sum of the degrees in G are equal to 2 times the edges in G. Sum(Deg (v)) for all v in G = 2 |E(G)| use V average and replace |E(G)| with 3|V(G)| - 6 so: Vaverage |V(G)| <= 2(3|V(G)| - 6) = 6|V(G)| - 12 divide by |V(G)| = 6 - 12/|V(G)| now for any V(G) less than 3, we can see it intuitively. However...I can't have |V(G)| > 12.....Help please! I've been a little frustrated. Thanks for your time!
{"url":"http://www.physicsforums.com/showthread.php?t=403099","timestamp":"2014-04-19T02:09:34Z","content_type":null,"content_length":"20193","record_id":"<urn:uuid:f9977279-c805-407c-b5ab-99da10e895d2>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from December 2010 on Just Rakudo It There’s been some discussion on reddit today about whether my @fib := 1, 1, *+* ...^ * >= 100; is unreadable gibberish or not, with the following Haskell suggested as an easier-to-understand version. fib = 1 : 1 : zipWith (+) fib (tail fib) (I’ve “corrected” both versions so they start the sequence with 1, 1.) The first thing to observe here is that this are not the same at all! The Perl 6 version is the Fibonacci numbers less than 100, while the Haskell version lazily generates the entire infinite sequence. If we simplify the Perl 6 to also be the (lazy) infinite Fibonacci sequence, we get the noticeably simpler my @fib := 1, 1, *+* ... *; To my (admittedly used to Perl 6) eye, this sequence is about as clean and straightforward as it is possible to get. We have the first two elements of the sequence: 1, 1 We have the operation to apply repeatedly to get the further elements of the sequence: And we are told the sequence will go on forever: ... * The *+* construct may be unfamiliar to people who aren’t Perl 6 programmers, but I hardly think it is more conceptually difficult than referring to two recursive copies of the sequence you are building, as the Haskell version does. Instead, it directly represents the simple understanding of how to get the next element in the Fibonacci sequence in source code. Of course, this being Perl, there is more than one way to do it. Here’s a direct translation of the Haskell version into idiomatic Perl 6: my @fib := 1, 1, (@fib Z+ @fib[1..*]); Well, allowing the use of operators and metaoperators, that is, as zipWith (+) becomes Z+ and tail fib becomes @fib[1..*]. To the best of my knowledge no current Perl 6 implementation actually supports this. I’d be surprised if any Perl 6 programmer would prefer this version, but it is out there. If you’re insistent on writing function calls instead of operators, you could also say it my @fib := 1, 1, zipwith(&[+], @fib, tail(@fib)); presuming you write a tail function, but that’s an easy one-liner.
{"url":"http://justrakudoit.wordpress.com/2010/12/","timestamp":"2014-04-20T13:19:15Z","content_type":null,"content_length":"25923","record_id":"<urn:uuid:520f284c-bd04-4bf7-81ae-53d2118151b0>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Overview for JoshuaZ - Less Wrong What do you mean by half the prisoners? Let's start there. How about I choose a prisoner at random from among all the prisoners in the problem. What is the probability that the prisoner I have chosen has correctly stated the color of the hat on his head? In particular, is that probability more than, less than, or equal to 0.5? While we are in the neighborhood, if there is a prisoner who is more likely to get the answer correctly than not, if you could tell me what ihis step by step process of forming his answer is, in detail similar to "if he is prisoner n, he guesses his hat color is the opposite of that of prisoner n^2+1" or some such recipe that a Turing machine or a non-mathematician human could follow. Thanks in advance How about I choose a prisoner at random from among all the prisoners in the problem. What is the probability that the prisoner I have chosen has correctly stated the color of the hat on his head? So what do you mean to choose a prisoner at random when one has infinitely many prisoners? In the winning strategy, do fewer than half the prisoners guess wrong? Do more prisoners guess correctly than incorrectly? I'm trying to get a handle on whether it is worth my while to try to penetrate the jargon in the "correct solutions." What do you mean by half the prisoners? Let's start there. In response to Is my view contrarian? Maybe the easy answer is to turn "contrarian" into a two place predicate. In response to comment by on Is my view contrarian? What would the two places be? In response to How much wealth is produced by high IQ people? One notable fact in this regard is that even though the population of Israel is nearly half Ashkenazi jews, the most intelligent ethnic group on Earth, scoring .5 to 1 standard deviation above Europeans, Israel is not incredibly rich but actually poorer than many European countriespercapita). Even though Israel of course is a rather special case - e.g. it has had to fight a number of wars - this casts some doubt on the notion that one can infer from the fact that intelligent people in the US are generally wealthy to the notion that intelligent people are the ones who create wealth. I do think, though, that intelligent people do in fact contribute enormously to progress and wealth (a related argument to this effect by me can be found here). However, I don't think that wealth is a very strong proxy for productivity or contribution to progress. Consider John von Neumann, for instance, who contributed spectacularly both to science, to the American war efforts in the 2nd world war and the cold war, and to the economy via, for instance, his contributions to the development of the computer. If people would have been awarded in accordance with their contributions, he would have died one of the wealthiest men on Earth. That said, I do think that people who contribute more generally are better paid, but the relationship is not as strong as some people seem to believe. The economists' notion of "productivity" according to which higher-earning people are by definition more productive is highly misleading in this regard, since people tend to conflate this notion of productivity with the ordinary language notion of a productive person as someone who contributes a lot to wealth-creation. In response to comment by on How much wealth is produced by high IQ people? There are other complicating factors in the Israel example. About 10% of Israel is ultra-orthodox (charedi) (source) and a large fraction of them are on essentially perpetual welfare with the men staying inside yeshivot all day studying and not generating any economic productivity. Also note that by some intelligence metrics other ethnic groups outscore Ashkenazim (especially Han Chinese). This is however a minor quibble and your essential points seem correct. In response to comment by on Open Thread: March 4 - 10 My best take on the thing is that, historically, most great physics discoveries were made by generalist, wide-branching natural philosophers. Granted, "natural philosophy" is arguably the direct ancestor of physics from which spawned the bastards of "chemistry" and "biology", but even regardless, the key point is that they were generalists and that, if we were going to solve the current problem simply by throwing more specialized physicists and gamma ray guns at it, this is not the evidence I'd expect to see. Given historical base rates of generalists vs specialists in physics, and the ratio of Great Discoveries made by the former rather than the latter, it feels as if generalists have a net advantage in "consolidating" recent research into a Great Discovery. I do have to agree, though, that all of them came from physicists, if not necessarily formally trained, although in most cases they were. Good knowledge of physics is necessary, that I won't argue. But what I'll point out is that I've personally met many more game developers and programmers with a much better grasp of (basic) physics (i.e. first volume of Feynman's Lectures) than college physics department members, on a purely absolute count. It doesn't seem that far-fetched, to me, to assume there's a comparable difference in base rates of people within and outside physics departments with a solid enough grasp of physics for the Next Great Discovery, whatever that threshold may be (and obviously, the lower the actual threshold, the more likely it is that it will come from outside Physics Departments). In response to comment by on Open Thread: March 4 - 10 To expand on shminux's point about what has happened in the last 100 years that's different: There's a serious lack of low-hanging fruit. Ideas are more complicated and the simple ideas that a generalist has any chance to find have to a large extent already been discovered. Note also that in fact it is well before 100 years ago that this trend already started. Darwin, Maxwell, Faraday and many other 19th century researchers were already specialists by most notions of the term. So really this trend has been going on for almost 200 years. In response to Open Thread: March 4 - 10 A new paper by Lenny Susskind discusses the black hole firewall problem and suggests that the computations necessary to actually create the standard paradoxical situation are computationally intractable. Paper here, discussion by Scott Aaronson here. In response to Open thread for January 1-7, 2014 A new paper gives a much better algorithm for approximating max flow in undirected graphs. Paper is here. Article for general readers is here. Although the new algorithm is asymptotically better, it remains to be seen if it is substantially better in the practical range. However, this is an example of discovering a substantially more efficient algorithm where one might not have guessed that substantial improvements were possible. In response to comment by on Handshakes, Hi, and What's New: What's Going On With Small Talk? For instance, I often greet people with "How do you do?". Most people of my generation don't really know how to react to this, and it makes them stop, think, and give a more "real" answer than if I asked "What's up?" or "How's it going?". This might backfire, though - at least in our English class, we were taught that the only acceptable response to being asked "How do you do" is to repeat "How do you do" back. In response to comment by on Handshakes, Hi, and What's New: What's Going On With Small Talk? I hope you didn't take that instruction too strictly or did you have another protocol for getting out of apparent infinite loops? In response to comment by on Doubt, Science, and Magical Creatures - a Child's Perspective But the hypothesis where the TF's knowledge is more closely linked to the parents' is less natural; to me it feels like making excuses for a bad hypothesis. In response to comment by on Doubt, Science, and Magical Creatures - a Child's Perspective Does it? Suppose for example that that the Tooth Fairy has every house with little children bugged and so hears verbal statements about loose teeth. In response to comment by on One Sided Policy Debate - The Science of Literature The most useful aspect of this service would be to prevent people from writing things that people don't want to read. Effectively you are saying that you want to censor unpopular stuff and this seems to be an effective way of doing so. Often times society advances precisely because someone writes something that people don't want to hear. With technology you have to be careful what you wish for, because it measures what you tell it to measure and optimizes towards that goal. In response to comment by on One Sided Policy Debate - The Science of Literature Often times society advances precisely because someone writes something that people don't want to hear. Often these are things that some people in power don't want to hear, not what people in general don't want to hear.
{"url":"http://lesswrong.com/user/JoshuaZ/overview/","timestamp":"2014-04-20T03:14:52Z","content_type":null,"content_length":"80421","record_id":"<urn:uuid:4cfd46f8-19f8-484c-adb2-eda9a1609d69>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Algebraic Topology: Fundamental group of Klein bottle April 25th 2010, 02:05 PM #1 Junior Member Apr 2010 [SOLVED] Algebraic Topology: Fundamental group of Klein bottle Prove that the fundamental group of a Klein bottle is $G = \{ a^{m}b^{2n+\epsilon} \ ;\ m \in \mathbb{Z}, \ n \in \mathbb{Z} \ \epsilon = 0 \ or\ 1, \ ba=a^{-1}b\}$, i.e. G is the group on two generators $a ,\ b$ with one relation $ba=a^{-1}b$ Hey bro. Consider the Klein bottle as the unit square with opposite edges identified (with twists). FIx as basepoint b the origin. There is a natural covering map $p<img src=$\mathbb{R}^2,0)\to(\mathrm {Klein\ bottle},0)" alt="p Draw a picture! A moment's thought will convince you that in fact, the fibre $p^{-1}(0)$ consists of the diamond pattern $\Phi := \{ (x,x+2k) : x\in \mathbb{Z}, k\in\mathbb{Z}\}$. For every path in the Klein bottle, we can lift that path to the covering space, the plane, and then find a unique homotopic path which transverses only the edges of squares. In this way, paths in the Klein bottle are in one to one corospondence with "taxicab" paths in the plane that start at the origin and end at a point in $\Phi$; call these paths good. Let a mean "move right by 1 unit" and b mean "move up by 1 unit". If you think of starting at the origin, then a word $a^mb^{2n+\epsilon}$ defines a good path, ie a path that ends up at a point of $\Phi$. In addition, the identification we used above (a corner of a sqaure is identified with the corner opposite it) means that if we move up by 1 and then move right by 1, it is the same as if we move left by 1 and then up by 1. This is the relation $ba = a^{-1}b$. Its easy to see that this gives all the proper identifications ie it identifies all the points in $\Phi$. (Some other relations that work are $ab^{-1} = a^{-1}b$ and $a^{-1}b=a^{-1}b^{-1}$; this is easy to see algebraically, what are their interpretation geometrically?). Hopefully you can tighten everything up into a rigorous argument, or at least get started :] The main point will be to show that the map from good path to words in a and b is a isomorphism. Last edited by maddas; April 25th 2010 at 03:34 PM. Reason: word choice Use Van Kampen's theorem. Let a Klein bottle be K such that $K = U \cup V$. I'll omit the base point for clarity. You may need to include base points and their transforms for the more rigourous proof. The choice for U and V for K for Van Kampen can be: U: K-{y}, where the point y is the center point of the square. V: the image of the interior of the square under identification. Since V is simply connected, we apply the following theorem. Theorem. Assume V is simply connected. Then, $\psi_1:\pi_1(U) \rightarrow \pi_1(K)$ is an epimorphism, and its kernel is the smallest normal subgroup of $\pi_1(U)$ containing the image of $\phi_1 (\pi_1(U \cap V))$, where $\phi_1$ is a group homomorphism $\phi_1:\pi_1(U \cap V) \rightarrow \pi_1(U)$. It is easily seen that $\psi_1$ is an epimorphism. We know that $\pi_1(U)$ is a free group with two generators, let's say a, b (deformation retracts to figure 8 space). Since $\pi_1(U \cap V)$ is the infinite cyclic group generated by, let's say r, then $\phi_1(r)=aba^{-1}b$. Then the kernel of $\psi_1$ is the the group generated by $aba^{-1}b$ by the above theorem. By the first isomorphism theorem, we see that $\pi_1(K)= <a, b|aba^{-1}b>$. April 25th 2010, 03:18 PM #2 Senior Member Feb 2010 April 25th 2010, 05:57 PM #3 Senior Member Nov 2008 April 25th 2010, 06:04 PM #4 Senior Member Feb 2010
{"url":"http://mathhelpforum.com/differential-geometry/141328-solved-algebraic-topology-fundamental-group-klein-bottle.html","timestamp":"2014-04-16T11:56:30Z","content_type":null,"content_length":"47956","record_id":"<urn:uuid:926cc791-c334-43ad-bf43-543652e297b6>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
Help me have some fun... With all due respect to a lovely lady I know who works a problem like this one every week for real, here's my "submission." Mistresses Petra, Wendy, Noelle, Domina, and Estrella work in a local sessions house. Their availability is subject to the following constraints. Wendy cannot work on Monday or Thursday. Domina cannot work on Wednesday. Estrella cannot work on Monday or Friday. Noelle can work at any time. Petra cannot work evenings. Wendy can only work evenings. Petra will not work on Wednesday if Noelle works on Thursday, and Noelle works on Thursday if Petra cannot work on Wednesday. At any given time there are always three women available for sessions. 1. At which one of the following times can Wendy, Domina, and Estrella all be working? (A) Monday morning (B) Friday evening (C) Tuesday evening (D) Friday morning (E) Wednesday morning 2. For which day will another lady need to be hired? (A) Monday (B) Tuesday (C) Wednesday (D) Thursday (E) Friday 3. Which one of the following must be false? (A) Domina does not work on Tuesday. (B) Estrella does not work on Tuesday morning. (C) Petra works every day of the week except Wednesday. (D) Noelle works every day of the week except Wednesday. (E) Domina works every day of the week except Wednesday. 4. If Noelle does not work on Thursday, then which one of the following must be true? (A) Petra works Tuesday morning. (B) Domina works Tuesday morning. (C) Estrella works on Tuesday. (D) Petra works on Wednesday. (E) Wendy works on Tuesday morning. These are real (modeled after real GRE questions)! Scroll down to see the answers ONLY after you've worked them out yourself! 1) All three can work on Tuesday night. The answer is (C). 2) Domina and Noelle are the only people who can work Monday evenings, and three women are always available for sessions, so extra help will be needed for Monday evenings. The answer is (A). 3) The condition "Petra will not work on Wednesday if Noelle works on Thursday, and Noelle works on Thursday if Petra cannot work on Wednesday" can be symbolized as (P_W)<-->(N=TH). Now, if Noelle works every day of the week, except Wednesday, then in particular she works Thursday. So from the condition (P_W)<-->(N=TH), we know that Petra cannot work on Wednesday. But this leaves only Noelle and Estrella to work Wednesday mornings. Hence the answer is (D). (4) If you remember to think of an if-and-only-if statement as an equality, then this will be an easy problem. Negating both sides of the condition(P_W)<-->(N=TH) gives (P=W)<-->(N_TH). This tells us that Petra must work on Wednesday if Noelle does not work on Thursday. The answer, therefore, is (D).
{"url":"http://forum.myredbook.com/cgi-bin/dcforum2/dcboard.pl?az=show_thread&om=46&forum=DCForumID30&viewmode=all","timestamp":"2014-04-19T17:45:32Z","content_type":null,"content_length":"53354","record_id":"<urn:uuid:f246e7a8-6669-4dba-8570-23f0afb654e1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
Theory of Computation F. Theory of Computation • F.0 GENERAL • F.1 COMPUTATION BY ABSTRACT DEVICES • F.2 ANALYSIS OF ALGORITHMS AND PROBLEM COMPLEXITY (see also B.6, B.7, F.1.3) □ F.2.0 General □ F.2.1 Numerical Algorithms and Problems (see also G.1, G.4, I.1) ☆ Computation of transforms (e.g., fast Fourier transform) ☆ Computations in finite fields ☆ Computations on matrices ☆ Computations on polynomials ☆ Number-theoretic computations (e.g., factoring, primality testing) □ F.2.2 Nonnumerical Algorithms and Problems (see also E.2, E.3, E.4, E.5, G.2, H.2, H.3) ☆ Complexity of proof procedures ☆ Computations on discrete structures ☆ Geometrical problems and computations ☆ Pattern matching ☆ Routing and layout ☆ Sequencing and scheduling ☆ Sorting and searching □ F.2.3 Tradeoffs between Complexity Measures (see also F.1.3) □ F.2.m Miscellaneous • F.3 LOGICS AND MEANINGS OF PROGRAMS □ F.3.0 General □ F.3.1 Specifying and Verifying and Reasoning about Programs (see also D.2.1, D.2.4, D.3.1, E.1) ☆ Assertions ☆ Invariants ☆ Logics of programs ☆ Mechanical verification ☆ Pre- and post-conditions ☆ Specification techniques □ F.3.2 Semantics of Programming Languages (see also D.3.1) ☆ Algebraic approaches to semantics ☆ Denotational semantics ☆ Operational semantics ☆ Partial evaluation (new) ☆ Process models (new) ☆ Program analysis (new) □ F.3.3 Studies of Program Constructs (see also D.3.2, D.3.3) ☆ Control primitives ☆ Functional constructs ☆ Object-oriented constructs (new) ☆ Program and recursion schemes ☆ Type structure □ F.3.m Miscellaneous • F.4 MATHEMATICAL LOGIC AND FORMAL LANGUAGES □ F.4.0 General □ F.4.1 Mathematical Logic (see also F.1.1, I.2.2, I.2.3, I.2.4) ☆ Computability theory ☆ Computational logic ☆ Lambda calculus and related systems ☆ Logic and constraint programming (revised 1998) ☆ Mechanical theorem proving ☆ Modal logic (new) ☆ Model theory ☆ Proof theory ☆ Recursive function theory ☆ Set theory (new) ☆ Temporal logic (new) □ F.4.2 Grammars and Other Rewriting Systems (see also D.3.1) ☆ Decision problems ☆ Grammar types (e.g., context-free, context-sensitive) ☆ Parallel rewriting systems (e.g., developmental systems, L-systems) ☆ Parsing ☆ Thue systems □ F.4.3 Formal Languages (see also D.3.1) ☆ Algebraic language theory ☆ Classes defined by grammars or automata (e.g., context-free languages, regular sets, recursive sets) ☆ Classes defined by resource-bounded automata (retired since 1998) ☆ Decision problems ☆ Operations on languages □ F.4.m Miscellaneous • F.m MISCELLANEOUS
{"url":"http://www2.informatik.uni-stuttgart.de/zd/buecherei/cr_f.html","timestamp":"2014-04-17T21:23:16Z","content_type":null,"content_length":"7118","record_id":"<urn:uuid:8620f256-3051-453c-9419-7a8baa148ee7>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
2012 DERPA Challenge From FamiLAB Wiki • If it's got wheels, they may not be circular NOR can their axis of rotation be near their center. • If it's got legs, the minimum number of leg lengths for N number of legs is (N/2)+3 • If it's in the category of planicopters, then it cannot have an fixed wings or static arms linking to its props. • If it's a dirigible, it must have multiple buoyancy devices, separated by the maximum width of any of the balloons, and the smallest must not be smaller than half the size of the largest Judging Criteria Possible Acronyms • Dont Expect Really Practical Applications • Deliberately Engineering Rather Problematic Applications • Derpy Engineers for Reaching Profound Altitutdes • Daring Entropy Research Project Agency
{"url":"http://familab.org/wiki/index.php?title=2012_DERPA_Challenge&oldid=983","timestamp":"2014-04-20T00:42:18Z","content_type":null,"content_length":"13891","record_id":"<urn:uuid:4ca4d40c-be74-4bda-a33d-0999e3f3092a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Estimating a Basic Space From A Set of Issue Scales American Journal of Political Science, 42 (July 1998), pp. 954-993. This paper develops a scaling procedure for estimating the latent/unobservable dimensions underlying a set of manifest/observable variables. The scaling procedure performs, in effect, a singular value decomposition of a rectangular matrix of real elements with missing entries. In contrast to existing techniques such as factor analysis that work with a correlation or covariance matrix computed from the data matrix, the scaling procedure shown here analyzes the data matrix directly. The scaling procedure is a general-purpose tool that can be used not only to estimate latent/unobservable dimensions but also to estimate an Eckart-Young lower-rank approximation matrix of a matrix with missing entries. Monte Carlo tests show that the procedure reliably estimates the latent dimensions and reproduces the missing elements of a matrix even at high levels of error and missing data. The Model Let x[ij ]be the i^th individual’s (i=1, ..., n) reported position on the j^th issue (j = 1, ..., m) and let X[0]be the n by m matrix of observed data where the "0" subscript indicates that elements are missing from the matrix -- not all individuals report their positions on all issues. Let y[ik ]be the i^th individual’s position on the k^th (k = 1, ..., s) basic dimension. The model estimated X[0 ] = [Y W' + J[n]c'][0] + E[0] where Y is the n by s matrix of coordinates of the individuals on the basic dimensions, W is an m by s matrix of weights, c is a vector of constants of length m, J[n] is an n length vector of ones, and E[0] is a n by m matrix of error terms. W and c map the individuals from the basic space onto the issue dimensions. The elements of E[0] are assumed to be random draws from a symmetric distribution with zero mean. The decomposition is accomplished by a simple alternating least least squares procedure coupled with some long established techniques for extracting eigenvectors. The estimation procedure is covered in great detail in the AJPS article. The paper How to Use the Black Box (Updated, 4 August 1998) is in Adobe Acrobat (*.pdf) format and explains how to use the software used in the AJPS article. (If you do not have an Adobe Acrobat reader, you may obtain one for free at http://www.adobe.com.) The files below contain the FORTRAN programs, input files, and executables that perform the analyses shown in the AJPS article. These files are documented in the "How To Use the Black Box" paper Programs and Input Files From AJPS Article (.6 meg LHA file) Programs and Input Files From AJPS Article (.58 meg ZIP file) Common Space Scores Congresses 75 - 111 (15 February 2010) The Format of the Common Space Scores Common Space Scores (Text File) Common Space Scores (Excel File) Common Space Scores (Stata 8 File) Common Space Scores (Stata 7 File) Spatial Maps of Common Space Scores 1937 - 2002 Each token represents a member of Congress. D is a Northern (Non-Southern) Democrat, S is a Southern Democrat (11 States of the Confederacy plus Kentucky and Oklahoma), R is a Republican, and P is the President. Below is a plot of the first (Liberal-Conservative) dimension of the Common Space Scores. The histograms are for Democrats and Republicans in the two Chambers. Senator Kerry (D-MA) is located at -.364 which is to the left of the mean of the Senate Democrats, -.243 (standard deviation, .187). Senator Edwards (D-NC) is located at -.238 which is almost exactly on the mean. Senator Kennedy (D -MA) is located at -.474. President Bush is located at .399 which is to the right of the means of both the Senate Republicans, .271 (standard deviation, .187), and the House Republicans, .300 (standard deviation, .158). Vice President Cheney is located at .509 and former Senate Helms (R-NC) is located at .648. In the animation below the Houses and Senates that Senator Kerry (D-MA) has served in are shown in one picture. The flashing K indicates Senator Kerry's location in the two dimensional map, the flashing E is Senator Edwards' location, the flashing B is President Bush's location, and the flashing C is Vice-President Cheney's location. House and Senate Images of Common Space Scores 75^th to 107^th Congresses VOTEVIEW Blog NOMINATE Data, Roll Call Data, and Software Course Web Pages: UGA (2010 - ) Course Web Pages: UC San Diego (2004 - 2010) University of San Diego Law School (2005) Course Web Pages: University of Houston (2000 - 2005) Course Web Pages: Carnegie-Mellon University (1997 - 2000) Spatial Models of Parliamentary Voting Recent Working Papers Analyses of Recent Politics About This Website K7MOA Log Books: 1960 - 2012 Bio of Keith T. Poole Related Links
{"url":"http://voteview.com/basic.htm","timestamp":"2014-04-17T06:41:12Z","content_type":null,"content_length":"9221","record_id":"<urn:uuid:e10b3dfa-1394-4a3a-b2d6-a19d04093f5b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Adding and Subtracting Mixed Fractions No. There aren't any. The page is nicely made. Since the operations of Multiplication and Division would also involve the same method, that is, conversion of a mixed fraction into an improper fraction, this could've been mentioned at the end of the page. Character is who you are when no one is looking.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=107925","timestamp":"2014-04-17T01:04:14Z","content_type":null,"content_length":"14514","record_id":"<urn:uuid:094b1821-1de6-47b2-bf36-8026a320de41>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/jone133/asked","timestamp":"2014-04-18T00:40:17Z","content_type":null,"content_length":"75466","record_id":"<urn:uuid:ad68f5b9-b2bc-4748-9eda-39d5ad2223b2>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
GRE Practice Tests 1. Evaluate the Square Root of 120409. a. 234 b. 347 c. 141 d. 102 e. 122 2. Evaluate the Square Root of 0.000064. a. 0.8 b. 0.08 c. 0.00008 d. 0.008 e. 0.0008 This question is based upon the figure shown below 3. Evaluate the given expression. a. 4.13 b. 6.00 c. 1.04 d. 6.66 e. 1.10 4. Find the SquareRoot of 0.00064 upto 3 places of decimal. a. 0.08 b. 0.008 c. 0.025 d. 0.0008 e. 8 This question is based upon the figure shown below 5. If SquareRoot of 12 = 3.464, find the value of the given expression. a. 0.154 b. 1.154 c. 1.173 d. 3.464 e. None of the above This question is based upon the figure shown below 6. If SquareRoot of 14 = 3.741, find the value of the given expression. a. 0.86 b. 3.74 c. 1.87 d. 1.44 e. 0.44 This question is based upon the figure shown below 7. If SquareRoot of 9 = 3, find the value of the given expression. a. 3 b. 0.732 c. 9 d. 2.012 e. 1.732 8. Find the least square numbers, which is exactly divisible by 4,5, and 10. a. 500 b. 700 c. 400 d. 200 e. 100 9. Find the smallest number, that must be added to 6319 to make it a perfect square. a. 90 b. 50 c. 64 d. 81 e. 76 This question is based upon the figure shown below 10. Find the value of given expression. a. 0.244 b. 1.452 c. 1.826 d. 2.414 e. 0.366 This question is based upon the figure shown below 11. Find the value of given expression. a. 2.414 b. 0.309 c. 0.114 d. 1.114 e. 1.309 This question is based upon the figure shown below 12. Find the value of given expression. a. 2.414 b. 0.3 c. 0.5 d. 1.114 e. 1.309 What others think about GRE Practice Test - Math - Square Root Problems Test Questions By: Erin on 4/20/2014 it was a fun and interseting quiz! By: Dave on 4/19/2014 I really enjoyed this quiz. I will take a lot of information away with me. Great quiz By: Nancy on 4/18/2014 I would like to see a complete page of horror movie quizzes for the horror genre fans! By: Roger on 4/17/2014 I love Quiz Games. QuizMoz is an excellent Quiz site By: Hannah on 4/16/2014 Enjoyed it, and learned a lot about general knowledge By: Laura on 4/15/2014 I appreciate the time and effort that the quiz maker put into the quiz By: Lil Wayne Fan on 4/14/2014 Give me more quiz questions about LilWayne By: Tallitha on 4/13/2014 By: Quiz Game Player on 4/12/2014 One day I will crack all the Impossible quizzes in the world By: Alice on 4/11/2014 Its a very good Quiz. It rocks my socks!!!!! By: Sridhar on 4/10/2014 Its a really good way to brush up things and informative also. By: Free quiz seeker on 4/9/2014 I am searching for an impossible quiz By: Tommy on 4/8/2014 Great site. Good learning and fun. By: Hot girl on 4/7/2014 I love answering Quiz Questions By: Kayla on 4/6/2014 I think this is a great quiz full of knowlodge and information. By: Bobby Kalsi on 4/5/2014 Try to make it easier to search for a QUIZ Category.. (It should be easier to seach for a quiz category...) By: Roger on 4/4/2014 I love Quiz Games. QuizMoz is an excellent Quiz site By: QuizMoz Fan on 4/3/2014 I love all the QuizMoz quizzes. The general knowledge quiz is my favorite By: Tom on 4/2/2014 I want more quiz questions By: Teresa on 4/1/2014 I took the quiz. It let me know that I failed. But I wasn't able to see what the correct answers. It would be great to see what the answers are so I can learn.
{"url":"http://www.quizmoz.com/quizzes/GRE-Practice-Tests/g/GRE-Practice-Test-Math-Square-Root-Problems-Test-Questions.asp","timestamp":"2014-04-20T21:15:21Z","content_type":null,"content_length":"164082","record_id":"<urn:uuid:23e13b2c-7ccc-4e8a-a5d6-f7c5787be9ea>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Fluids Of Viscosities 1 =0.1 N.s/ms And 2 ... | Chegg.com Fluids of viscosities 1 =0.1 N.s/ms and 2 =0.15 N.s/m2 are contained between two plates (each plate 1 m2 area). The thicknesses are h1=.5 mm and h2=0.3 mm, respectively. Find the force F to make the upper plate move at a speed of 1 m/s. What is the fluid velocity at the interface between the two fluids? Mechanical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/fluids-viscosities-1-01-ns-ms-2-015-ns-m2-contained-two-plates-plate-1-m2-area--thicknesse-q1247028","timestamp":"2014-04-19T13:49:06Z","content_type":null,"content_length":"21400","record_id":"<urn:uuid:6dbf6cd3-38af-4597-8aa1-980eea955bd7>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
[Search][Subject Index][MathMap][Tour][Help!] Analysis of Numerical Topics [Return to start of tour] [Up to Mathematical Analysis] Here we consider fields of mathematics which address the issues of how to carry out --- numerically or even in principle --- those computations and algorithms which are treated formally or abstractly in other branches of analysis. These fields have shown enormous growth in recent decades in response to demands for effective, robust solutions to demanding problems from science, engineering, and other quantitative applications. • 65: Numerical analysis involves the study of methods of computing numerical data. In many problems this implies producing a sequence of approximations; thus the questions involve the rate of convergence, the accuracy (or even validity) of the answer, and the completeness of the response. (With many problems it is difficult to decide from a program's termination whether other solutions exist.) Since many problems across mathematics can be reduced to linear algebra, this too is studied numerically; here there are significant problems with the amount of time necessary to process the initial data. Numerical solutions to differential equations require the determination not of a few numbers but of an entire function; in particular, convergence must be judged by some global criterion. Other topics include numerical simulation, optimization, and graphical analysis, and the development of robust working code. • 41: Approximations and expansions primarily concern the approximation of classes of real functions by functions of special types. This includes approximations by linear functions, polynomials (not just the Taylor polynomials), rational functions, and so on; approximations by trigonometric polynomials is separated into the separate field of Fourier analysis. Topics include criteria for goodness of fit, error bounds, stability upon change of approximating family, and preservation of functional characteristics (e.g. differentiability) under approximation. Effective techniques for specific kinds of approximation are also prized. This is also the area covering interpolation and splines. • 90: Operations research may be loosely described as the study of optimal resource allocation. Mathematically, this is the study of optimization. Depending on the options and constraints in the setting, this may involve linear programming, or quadratic-, convex-, integer-, or boolean-programming. For the more abstract theory of algorithms or information flow, jump to the Computer Sciences part of the tour. If you've been clicking on topics "in order", you've now visited all the general areas of analysis. If you missed any, or if you'd like to continue the tour of other areas besides analysis, you can do so by returning now to the analysis page. Otherwise, you may wish to proceed to the next portion of the tour: Probability and Statistics You can reach this page through http://www.math-atlas.org/welcome.html Last modified 2000/01/26 by Dave Rusin. Mail:
{"url":"http://www.math.niu.edu/~rusin/known-math/index/tour_na.html","timestamp":"2014-04-18T15:38:47Z","content_type":null,"content_length":"4420","record_id":"<urn:uuid:4a60dd54-7f0b-42a2-b57f-c18f11b2f09b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
Normal Distribution April 6th 2007, 09:28 PM #1 Junior Member Mar 2007 Normal Distribution A random sample of the duration of 50 telphone calls handled by a local telephone company had a mean of 11.6 min and a standard devation of 3.8 Minutes. Find a 95% confidence interval for the true mean duration of calls handled by the company. The random variable: t = (SampleMean - PopulationMean)/(SampleSD/sqrt(n)) where n is the sample size, has a t distribution With n=50, nu the number of degrees of freedom is 49, with this number of degrees of freedom a 95% interval for t is ~=(-2.01, 2.01). So the 95% confidence interval for the populaion mean is: (SampleMean - 2.01*SampleSD/sqrt(n), SampleMean + 2.01*SampleSD/sqrt(n)) or: (10.52, 12.68). If you used the normal approximation here the 2.01 would be replaced by 1.95 and the interval would be (10.55, 12.65) April 6th 2007, 10:01 PM #2 Grand Panjandrum Nov 2005
{"url":"http://mathhelpforum.com/advanced-statistics/13418-normal-distribution.html","timestamp":"2014-04-24T17:03:00Z","content_type":null,"content_length":"33815","record_id":"<urn:uuid:5f575ca7-e572-40bc-8bec-87dccf5b8b26>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by Raf Total # Posts: 75 what is a computer? Travel tourism describe tourism Write a pseudocode to input 10 temperatures per day for 20 days and output the average per day and the average for the 20 days. Computer Studies Write a pseudocode to input 10 temperatures per day for 20 days and output the average per day and average for the 20 days. Linear Algebra 4 points A,B,C and D are situated in a 3-dimensional space. In a certain spot, the coordinates of A, C and D are known: A(1,1,1) C(2,0,1) D(0,3,0) The coordinates of the barycentre of {A,B,C} are: (4 /3, 1/3, 4/3). a) Determine the coordinates of point B in the used spot. Linear Algebra 4 points A,B,C and D are situated in a 3-dimensional space. In a certain spot, the coordinates of A, C and D are known: A(1,1,1) C(2,0,1) D(0,3,0) The coordinates of the barycentre of {A,B,C} are: (4 /3, 1/3, 4/3). a) Determine the coordinates of point B in the used spot. A crane is holding a bloc of metal of 10 000kg. If the crane's arm weights 1000kg and is 10,0m long, what is the tension in cable a and how big is the force exerted on the pivot of the cranes arm. the cable passes through the top of the cranes arm and then is vertical whil... Physics - Question thanks bob! Physics - Question I have this number where they give me the coordinates of 3 particles and the mass of each one of them. Afterwards, there's a force applied on 2 of those particles and they ask me to calculate the systems total moment of inertia. But then they ask me this: If the system goe... Calculus AB tnx for the correction Steve, my bad Sarah Calculus AB What you do with ln(xy) is just separate it into 2 pieces: ln(x) + ln(y) ln(x) + ln(y) = 1 + y' 1/x + y'/y = 1 + y' 1/x - 1 = y' (1 - y) you have: m1 = 60 kg v1 = 5 m/s m2 = 40 kg v2 = 0 m/s (I presume she's not moving because there's no mention of her velocity) so m1v1 = (m1 + m2)V V = m1v1 / (m1 + m2) solve for v once you find v, you plug it into your kinetic energy formula: K = 1/2mv^2 where you'... Rise is the change in the y coordinate, if you think of the graph, you are going up or down. Run is the change in the x coordinate, left to right. To find the change, we take the second coordinate, and subtract the first coordinate. So, what does that give us? y2 - (y1) = y ch... il/elle acheter will be il/elle VA acheter je acheterai will be written j'acheterai Kinetic energy is defined with K = 1/2*m*v^2 so you have Kf - Ki [1/2(m)(vf)^2] - [1/2(m)(vi)^2] = m = mass v = velocity; f for final, i for initial your answer will be in kg * m/s^2 which is the developed way of writing Newtons because 1 kg * m/s^2 = 1 N the science that deals with the collection, classification, analysis, and interpretation of numerical facts or data, and that, by use of mathematical theories of probability, imposes order and regularity on aggregates of more or less disparate elements. same goes for salts Electrical conductivity is a physical property. A chemical property involves a change that occurs. However, when electricity passes through an object or substance, no change occurs. Physics (2) I have this number where they give me the coordinates of 3 particles and the mass of each one of them. Afterwards, there's a force applied on 2 of those particles and they ask me to calculate the systems total moment of inertia. But then they ask me this: If the system goe... A crane is holding a bloc of metal of 10 000kg. If the crane's arm weights 1000kg and is 10,0m long, what is the tension in cable a and how big is the force exerted on the pivot of the cranes arm. the cable passes through the top of the cranes arm and then is vertical whil... A crane is holding a bloc of metal of 10 000kg. If the crane's arm weights 1000kg and is 10,0m long, what is the tension in cable a and how big is the force exerted on the pivot of the cranes arm. the cable passes through the top of the cranes arm and then is vertical whil... I can't figure this problem out: A bar that's initially held horizontally, weights 500g and is 2m. long. It's maintained by a pivot at its extremity. We then let it fall until it's vertical. We neglect air resistance. What will be the angular velocity at that m... thank you If I have a unit vector of (2.06 j)m/s(I have no i vector because it's 0 on a horizontal impulsion, in accordance with my problem,how do I convert that into polar notation? Will I be doing rcos0, with r being 2.06? . But then again I have no angle, just the angle of before... Physics - Question but the angle will still have no effect right? Physics - Question thank you very much! Physics - Question A ball of 250g hits the floor at a velocity of 2,50 m/s at an angle of 70* relative to the vertical. The vertical force in function with time between the floor and the ball is: from 0 to 50 N : from 0 to 1 sec. from 50 N to 100 N : from 1 to 2 sec. constant 100 N : from 2 to 3... I know that I have to use 1/2 mv^2 for the velocity and mgh for gravitational energy, but I'm lost from here, especially when they ask to put it in unit vectors. Thank you A ball of 250g hits the floor at a velocity of 2,50 m/s at an angle of 70* relative to the vertical. The vertical force in function with time between the floor and the ball is: from 0 to 50 N : from 0 to 1 sec. from 50 N to 100 N : from 1 to 2 sec. constant 100 N : from 2 to 3... Physics - Question When I'm asked to find work done by air resistance against a skier on a slope, will I have to do: Wr = -Fr*d or is it affected by an angle: Wr = -Fr*d*cosTHETA I was thinking that it isn't because air resistance isn't a specific force acting at one spot but everywh... Physics - Verification oh and I see what you mean, it's 95 116 J, thanks again! Physics - Verification and drag and friction work has to be higher at the end than gravity in order to stop the skier at the bottom, right? Physics - Verification what is the mistake? And thank you for the information! Physics - Verification A skier (75 kg) goes down a 15* slope. The coefficient of kinetic friction between the skis and the snow is 0.185. The length of the slope is of 500m. a) What is the work done by gravity? Answer) Wmg = mgd = 75kg(9.8N/kg)500m(sin15) = 354 977 J Is it cos or sin? And is Work do... same thing with kinetic friction. I have a question where they ask me to find the work done by friction between the skis and the snow. my answer is: Wf = -Fc * d, but I have a feeling that I need to add the angle to it, or is that only with gravity? This is a theory question. Let's say I go down a slope on my skis. When I want to calculate the Work done by air resistance, do I take the angle of the slope in consideration? For example, will it be: Wr = -Fr * d * sinTHETA or will it only be Wr = -Fr * d Thank you A ball of 250g hits the floor at a velocity of 2,50 m/s at an angle of 70* relative to the vertical. The vertical force in function with time between the floor and the ball is: from 0 to 50 N : from 0 to 1 sec. from 50 N to 100 N : from 1 to 2 sec. constant 100 N : from 2 to 3... how do you find this? 1500*0+5000V=6500V' Thank you for your time A truck of 5000kg hits a car of 1500kg at an intersection. After the collision, both vehicles remained glued together and slid over a distance of 30 meters before coming to a halt. Knowing that the coefficient of kinetic friction between both vehicles and the street is 0.700, ... A truck of 5000kg hits a car of 1500kg at an intersection. After the collision, both vehicles remained glued together and slid over a distance of 30 meters before coming to a halt. Knowing that the coefficient of kinetic friction between both vehicles and the street is 0.700, ... Physics - Verification okay cool, thanks a lot Bob Physics - Verification nevermind my first question, my mistake hehe Physics - Verification and why isn't it Wg = Wspring - Wfriction as friction is going to other way? Physics - Verification ohhh I see now, but when you say Work done on spring, you take the Energy formula which is 1/2 k x^2 , wouldn't the work on a spring be mu*N*x? Physics - Verification where Kf is = 0 Physics - Verification made a mistake when I wrote Ef = Kf + Ugf instead it's --> Ef = Kf + Ugf + Usf Physics - Verification Hello, just wanted to verify is what I did is good. I posted this question yesterday and Bob helped me, so just wanted to be sure it's good. The problem is: A bloc of 10 kg is put at the top of an inclined plan of 45 degrees (to the left), attached to a spring which has a ... Okay so from what I understand, I find the elongation from the total Work done on the spring and of course by isolating it in my equation. Sorry if my terms aren't good, I do it in french and try to translate it as good as possible and what does mu stand for in x*mg*cosTheta*mu ? Thank you for your time! is it always sinTheta for a force down a plane? Or does it depend on the inclination (left, right) A bloc of 10 kg is put at the top of an inclined plan of 45 degrees (to the left), attached to a spring which has a spring constant of 250 N/m. The coefficient of kinetic friction between the bloc and the surface of the inclined plan is of 0,300. What is the maximum elongation... A bloc of 10 kg is put at the top of an inclined plan of 45 degrees (to the left), attached to a spring which has a spring constant of 250 N/m. The coefficient of kinetic friction between the bloc and the surface of the inclined plan is of 0,300. What is the maximum elongation... I don't understand the question either, that's why I posted it here and that's exactly how the problem is written word by word, so I have really no idea :(. From what I can interpret, is that we're looking for what sum of two numbers m + n would give a minimal ... There are many pairs of numbers (positive and negative) of which the sum is worth the unit. Of those, find the 2 numbers whose sum, double the square of the first number and the square of the 2nd number would give a minimal value. There are many pairs of numbers (positive and negative) of which the sum is worth the unit. Of those, find the 2 numbers whose sum, double the square of the first number and the square of the 2nd number would give a minimal value. There are many pairs of numbers (positive and negative) of which the sum is worth the unit. Of those, find the 2 numbers whose sum, double the square of the first number and the square of the 2nd number would give a minimal value. The universal law of gravitation determines the amount of force exerted by a constant mass (M) on another constant mass (m), separated by a distance (r), is given by the expression F = ( -GmM )/r^2 a) What mathematical expression characterises the instant rate of change of the... 1. Carlos jogs in a straight line at a constant speed of 1.5 m/s. He passes by Victoria, who, 10 seconds after Carlos had passed her, starts accelerating at a constant rate of 0.50 m/s^2 a) How much time after the passing of Carlos does it take Victoria to catch up to him? b) ... 1. Carlos jogs in a straight line at a constant speed of 1.5 m/s. He passes by Victoria, who, 10 seconds after Carlos had passed her, starts accelerating at a constant rate of 0.50 m/s^2 a) How much time after the passing of Carlos does it take Victoria to catch up to him? b) ... tnx Steve! Cheers The universal law of gravitation determines the amount of force exerted by a constant mass (M) on another constant mass (m), separated by a distance (r), is given by the expression F = ( -GmM )/r^2 a) What mathematical expression characterises the instant rate of change of the... Evaluate: (without using Hopital's rule) d) lim (6-3x)/(((5x+6)^1/2)-4) x->2 the question didn't submit well, here's the problem: lim (6-3x)/(((5x+6)^1/2)-4) x->2 Evaluate: (without using Hopital's rule) d) lim (6-3x)/(((5x+6)^1/2)-4) x->2 ^ | square root a car brakes to a halt in 5 seconds. The position of the car versus time is given by: x = 10t - t^2 a) What is the average speed between 0 s. and 4 s.? Thank you MATH - Calculus and it's the factorization that I can't seem to come up with, thanks for your quick answer too MATH - Calculus yes I know, it is supposed to give 0/0 and they ask us to manipulate the limit algebraically in order to be able to factorize MATH - Calculus Evaluate the following limit: e) lim f(x)-f(a) / x - a x->a if f(x) = x^2 + 5 MATH - Calculus (2) if lim f(x) = a^3 x->a and if lim g(x) = a^2 x->a calculate the following limit: lim f(x)*(x-a)/(x^3-a^3)*g(x) = x->a MATH - Calculus Evaluate the following limit: e) lim f(x)-f(a)/ x - a x-> a if f(x) = x^2 + 5 speed = km/h Car speed = 3 km/3hours = 1 km/1hour = 1 km/h Calculate the solubility of silver chloride in 10.0 M ammonia given the following information: Ksp (AgCl) = 1.6 x 10^ 10 Ag+ + NH3--->AgNH3+ K=2.1x10^3 AgNH3+ + NH3-----> Ag(NH3)2+ K=8.2x10^3 Answer : 0.48 M Calculate the concentration of NH3 in the final equilibriu... Calculate the solubility of AgCl in: Ksp = 1.6x10^-10 a) 100 ml of 4.00 x 10^-3 M Calcium chloride b) 100 ml of 4.00 x 10^-3 M Calcium Nitrate all i know is 1.6x10^-10=[Ag+][Cl-] which is 1.6x10^-10= x^2 x=1.3x10^-5, giving the M of Ag and Cl. Don't know how to go further Consider the equation A(aq) + 2B(aq) 3C(aq) + 2D(aq). In one experiment, 45.0 mL of 0.050 M A is mixed with 25.0 mL 0.100 M B. At equilibrium the concentration of C is 0.0410 M. Calculate K. a) 7.3 b) 0.34 c) 0.040 d) 0.14 e) none of these the answer for this is suppose to be ... operations management Can someone please post the answer for these questions?
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Raf","timestamp":"2014-04-18T14:32:10Z","content_type":null,"content_length":"24017","record_id":"<urn:uuid:4ccff4b2-cae5-4a46-a005-eb3edb3e4364>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Base Conversion In PHP Using Built-In Functions The PHP programming language has many built-in functions for converting numbers from one base to another. In fact, it has so many functions that it can be hard to know which to use. Some functions have similar capabilities, and some work with parameters of different types. We’ll sort through the differences in this article, and explain the proper context in which to use each function. As a guide to our discussion, our mission will be to write programs that take, as input, an integer in a given base, and produce, as output, the equivalent of that integer in another base. Both the input and output will be strings — sequences of characters with encodings like ASCII or EBCDIC. That is the form numbers take when read in from and written out to a user. Contrast this with numbers that can be operated on arithmetically inside a computer, numbers represented with binary integers or floating-point binary. I call numbers in that form numeric binary. Why the distinction between string and numeric binary? Because both types are used in PHP’s conversion functions. Knowing which parameters are which type is key to using the functions properly. Summary of PHP Conversion Functions PHP has built-in functions that can convert, or help convert, integers between string representations in various number bases. We will use them to convert between decimal (base 10), binary (base 2), hexadecimal (base 16), and octal (base 8). These are number bases that anyone versed in binary should know. There’s an important point I need to make before continuing. In PHP, and just about anywhere else, functions described as converting to or from decimal really convert to or from numeric binary! If you take only one thing from this article, let that be it. Here is a summary of the PHP conversion functions used in this article, with a description of how they’re used, and the maximum integer they can safely convert: PHP Functions for Base Conversion │ Function │ Type of Conversion │ Max Integer │ │bindec() │Binary string to numeric binary │2^53 │ │hexdec() │Hex string to numeric binary │2^53 │ │octdec() │Octal string to numeric binary │2^53 │ │intval() │Base 2 to 36 string to numeric binary │2^31 – 1 │ │sscanf() │Decimal, hex, or octal string to numeric binary │2^31 – 1 │ │decbin() │Numeric binary to binary string │2^32 – 1 │ │dechex() │Numeric binary to hex string │2^32 – 1 │ │decoct() │Numeric binary to octal string │2^32 – 1 │ │strval() │Numeric binary to decimal string │Between 2^39 and 2^40│ │sprintf() │Numeric binary to decimal, binary, hex, or octal string │2^53 │ │base_convert()│Base 2 to 36 string to base 2 to 36 string │2^53 † │ († In the base_convert() documentation, there is this warning: “base_convert() may lose precision on large numbers due to properties related to the internal ‘double’ or ‘float’ type used.” If that leaves you wanting, it did me too.) Conversion Code Examples In the following sections, I give examples of conversions between specific base pairs. Look for the specific conversion in which you’re interested. In the code, I use separate variables for the input string, the intermediate numeric binary value, and the output string. The separate string variables represent the I/O of a program, keeping the code independent of any particular I/O mechanism (HTML form I/O, echo, printf, etc.). The separate variables also make clearer which parameters are which type. The examples which use intval and sscanf limit the maximum integer that can be converted — to 2^31 – 1. This is the case, for example, in the code that converts from decimal to binary, even though decbin supports integers up to 2^32 – 1. A similar thing happens when composing the `*dec’ and `dec*’ functions. For example, hexdec followed by decbin is limited to 2^32 – 1 by decbin. The code does not label input and output strings with their base (for example, with prefixes like 0b, 0x, or 0o). The base is implied with context. Converting Between Decimal and Binary Decimal to Binary Here are three ways to convert a decimal string to a binary string using built-in functions: • Use intval to convert the decimal string to numeric binary, and then use decbin to convert the numeric binary value to a binary string: $decString = "42"; $binNumeric = intval($decString); $binString = decbin($binNumeric); // = "101010" • Use sscanf to convert the decimal string to numeric binary, and then use sprintf to convert the numeric binary value to a binary string: $decString = "32"; $binString = sprintf ("%b",$binNumeric); // = "100000" Note: support of the %b format specifier is nonstandard. • Use base_convert to convert the decimal string directly to a binary string: $decString = "26"; $binString = base_convert($decString,10,2); // = "11010" Binary to Decimal Here are three ways to convert a binary string to a decimal string using built-in functions: • Use bindec to convert the binary string to numeric binary, and then use sprintf to convert the numeric binary value to a decimal string: $binString = "11011110"; $binNumeric = bindec($binString); $decString = sprintf("%.0f",$binNumeric); // = "222" • Use intval to convert the binary string to numeric binary, and then use strval to convert the numeric binary value to a decimal string: $binString = "10100"; $binNumeric = intval($binString,2); $decString = strval($binNumeric); // = "20" • Use base_convert to convert the binary string directly to a decimal string: $binString = "111000111001"; $decString = base_convert($binString,2,10); // = "3641" Converting Between Decimal and Hexadecimal Decimal to Hex Here are three ways to convert a decimal string to a hexadecimal string using built-in functions: • Use intval to convert the decimal string to numeric binary, and then use dechex to convert the numeric binary value to a hexadecimal string: $decString = "42"; $binNumeric = intval($decString); $hexString = dechex($binNumeric); // = "2a" • Use sscanf to convert the decimal string to numeric binary, and then use sprintf to convert the numeric binary value to a hexadecimal string: $decString = "112"; $hexString = sprintf ("%x",$binNumeric); // = "70" • Use base_convert to convert the decimal string directly to a hexadecimal string: $decString = "25"; $hexString = base_convert($decString,10,16); // = "19" Hex to Decimal Here are four ways to convert a hexadecimal string to a decimal string using built-in functions: • Use hexdec to convert the hexadecimal string to numeric binary, and then use sprintf to convert the numeric binary value to a decimal string: $hexString = "de"; $binNumeric = hexdec($hexString); $decString = sprintf("%.0f",$binNumeric); // = "222" • Use intval to convert the hexadecimal string to numeric binary, and then use strval to convert the numeric binary value to a decimal string: $hexString = "14"; $binNumeric = intval($hexString,16); $decString = strval($binNumeric); // = "20" • Use sscanf to convert the hexadecimal string to numeric binary, and then use strval to convert the numeric binary value to a decimal string: $hexString = "27"; $decString = strval($binNumeric); // = "39" • Use base_convert to convert the hexadecimal string directly to a decimal string: $hexString = "25"; $decString = base_convert($hexString,16,10); // = "37" Converting Between Decimal and Octal Decimal to Octal Here are three ways to convert a decimal string to an octal string using built-in functions: • Use intval to convert the decimal string to numeric binary, and then use decoct to convert the numeric binary value to an octal string: $decString = "42"; $binNumeric = intval($decString); $octString = decoct($binNumeric); // = "52" • Use sscanf to convert the decimal string to numeric binary, and then use sprintf to convert the numeric binary value to an octal string: $decString = "9"; $octString = sprintf ("%o",$binNumeric); // = "11" • Use base_convert to convert the decimal string directly to an octal string: $decString = "25"; $octString = base_convert($decString,10,8); // = "31" Octal to Decimal Here are four ways to convert an octal string to a decimal string using built-in functions: • Use octdec to convert the octal string to numeric binary, and then use sprintf to convert the numeric binary value to a decimal string: $octString = "77"; $binNumeric = octdec($octString); $decString = sprintf("%.0f",$binNumeric); // = "63" • Use intval to convert the octal string to numeric binary, and then use strval to convert the numeric binary value to a decimal string: $octString = "14"; $binNumeric = intval($octString ,8); $decString = strval($binNumeric); // = "12" • Use sscanf to convert the octal string to numeric binary, and then use strval to convert the numeric binary value to a decimal string: $octString = "14"; sscanf($octString ,"%o",&$binNumeric); $decString = strval($binNumeric); // = "12" • Use base_convert to convert the octal string directly to a decimal string: $octString = "61"; $decString = base_convert($octString,8,10); // = "49" Converting Between Power of Two Bases You can use the functions above to convert between bases 2, 8, and 16 without going through decimal strings. One approach is to use base_convert; another is to compose the `*dec’ and `dec*’ Hex to Binary Here are two ways to convert a hexadecimal string to a binary string using built-in functions: • Use hexdec to convert the hexadecimal string to numeric binary, and then use decbin to convert the numeric binary value to a binary string: $hexString = "1f"; $binNumeric = hexdec($hexString); $binString = decbin($binNumeric); // = "11111" • Use base_convert to convert the hexadecimal string directly to a binary string: $hexString = "ff"; $binString = base_convert($hexString,16,2); // = "11111111" Binary to Hex Here are two ways to convert a binary string to a hexadecimal string using built-in functions: • Use bindec to convert the binary string to numeric binary, and then use dechex to convert the numeric binary value to a hexadecimal string: $binString = "10011"; $binNumeric = bindec($binString); $hexString = dechex($binNumeric); // = "13" • Use base_convert to convert the binary string directly to a hexadecimal string: $binString = "1111"; $hexString = base_convert($binString,2,16); // = "f" Octal to Binary Here are two ways to convert an octal string to a binary string using built-in functions: • Use octdec to convert the octal string to numeric binary, and then use decbin to convert the numeric binary value to a binary string: $octString = "77"; $binNumeric = octdec($octString); $binString = decbin($binNumeric); // = "111111" • Use base_convert to convert the octal string directly to a binary string: $octString = "71"; $binString = base_convert($octString,8,2); // = "111001" Binary to Octal Here are two ways to convert a binary string to an octal string using built-in functions: • Use bindec to convert the binary string to numeric binary, and then use decoct to convert the numeric binary value to an octal string: $binString = "1010"; $binNumeric = bindec($binString); $octString = decoct($binNumeric); // = "12" • Use base_convert to convert the binary string directly to an octal string: $binString = "11011"; $octString = base_convert($binString,2,8); // = "33" Octal to Hex Here are two ways to convert an octal string to a hexadecimal string using built-in functions: • Use octdec to convert the octal string to numeric binary, and then use dechex to convert the numeric binary value to a hexadecimal string: $octString = "77"; $binNumeric = octdec($octString); $hexString = dechex($binNumeric); // = "3f" • Use base_convert to convert the octal string directly to a hexadecimal string: $octString = "123"; $hexString = base_convert($octString,8,16); // = "53" Hex to Octal Here are two ways to convert a hexadecimal string to an octal string using built-in functions: • Use hexdec to convert the hexadecimal string to numeric binary, and then use decoct to convert the numeric binary value to an octal string: $hexString = "7d8"; $binNumeric = hexdec($hexString); $octString = decoct($binNumeric); // = "3730" • Use base_convert to convert the hexadecimal string directly to an octal string: $hexString = "f0"; $octString = base_convert($hexString,16,8); // = "360" For string to string conversion, which was our mission in this article, use base_convert(). It converts between strings directly, and up to the maximum integer PHP supports. (In case you were curious, the implementation of base_convert() uses an intermediate numeric binary value.) For conversion only between string and numeric binary, use the other functions. For example, a variable assigned a decimal or hexadecimal constant (0x prefix) can be converted to a binary string using decbin. Leave a Comment (To reduce spam, cookies must be enabled)
{"url":"http://www.exploringbinary.com/base-conversion-in-php-using-built-in-functions/","timestamp":"2014-04-18T20:47:48Z","content_type":null,"content_length":"51665","record_id":"<urn:uuid:938cbc32-6597-4a83-86cb-d38b0e93cc78>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
multiples project (three five) project euler Author multiples project (three five) project euler for some reason i cant get my code to execute with the right answer. I get a continuous output of the same digit Joined: Feb my purpose: 24, 2011 1.get the variables Posts: 26 2.get highest possible multiple for each number. divide max possible with five and three 3.As the values are higher than 0 it should keep executing the if statement 4.The product of three add up to totalSum and the products of five add up to totalSum 4.the highest possible multiple of each (f, t) should decrease each by one. 5.repeat the if statement as it will always stay true til zero 6.By the time they are both zero, there are no more products to add and make the if statement false then executes else. 7.Else statement should then get the totalSum then output that value How do you know your code provides the wrong answer? Joined: Oct 24, 2010 Learning Java using Eclipse on OpenSUSE 11.2 Posts: 557 Linux user#: 501795 Joined: Feb "I get a continuous output of the same digit" 24, 2011 Posts: 26 the code should stop after it hits the else and it is only a 3 digit number. I am suppose to get all the products of three and five and add them together Ranch Hand Tim Hoang wrote:the code should stop after it hits the else Joined: Jun 04, 2007 Posts: 331 Then introduce a break statement like this: Joined: Oct Is '=+' the same as the operator '+='? (No it's not.) 24, 2010 Posts: 557 Did you mean '+='? Ranch Hand Joined: Jun 04, 2007 Posts: 331 whereas, i.e in "=+" is not a single Java operator. You should read it as assignment of a positive number. like how int a=-5; would mean assigning a negtive number Joined: Feb I looked it over countless times but for some reason I have a little extra over 7000 thats not the answer. 24, 2011 240600 != 233168 (answer) Posts: 26 Obviously, you're frustrated. We've all been there. Don't give up. Joined: Oct 24, 2010 Since you're getting the wrong answer, there's a good chance that the solution you've coded is not correct. Please state the problem exactly as it was given to you. Also, let us know Posts: 557 how you know the "right" answer. Sometimes the answer book is wrong. baba I assume you are doing Project Euler problem 1: Joined: Oct 02, 2003 If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Posts: 10909 Find the sum of all the multiples of 3 or 5 below 1000. I like... There are a couple problems with your algorithm. you are setting f to 200 and t to 333. then you decrement both, so eventually, you will get to f at 0, and t at 133. your condition will fail, and so you will have missed many of the multiples of 3. But additionally, you are also adding in some factors twice. 600, for example, is a multiple of BOTH 3 and 5, and your current method will add it in twice. One thing I often do when working on these problems (and i've done the first 60 or so) is to test my algorithm against the smaller case. You know what the answer should be for using 10 instead of 1000. Change your code on lines 14 and 15 to use 10, and see if you get the right answer. Another thing you can do is to print the values your adding in each time - so after line 22 print out totalFive and after line 24 print totalThree. By setting your original limit to 10, you can see if a) you are getting everything you should, and b) that you are not counting any values twice. There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors subject: multiples project (three five) project euler
{"url":"http://www.coderanch.com/t/539708/java/java/multiples-project-project-euler","timestamp":"2014-04-17T15:50:23Z","content_type":null,"content_length":"39310","record_id":"<urn:uuid:23e33486-cf13-4807-aefb-da0f7d353f96>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Domain and Range of f(x,y) October 1st 2008, 08:31 PM #1 Senior Member Jul 2007 Domain and Range of f(x,y) I'm having trouble understanding how to get these from these types of problems. What should I be looking for and how. Any hints, tips , or methods? Some examples of course would be 1. e^sqrt(z - x^2 - y^2) 2. ln(25 - x^2 - y^2 - z^2) 3. sqrt( 1 + x - y^2) More than just answering these problems i'm more interested in knowing how to derive the domains and ranges since i'm really gonna need to know it. Thanks in advance guys 1. Identify that the square root is nonnegative and that the expression under the square root must be nonnegative. So the domain is all real x, y, z such that $z - x^2 - y^2 \geq 0$, and the range is $f(x,y,z) \geq 1$, since the smallest value for the exponent is 0, and there is no largest value for the exponent. 2. The natural logarithm only takes positive arguments. Hence the domain is all real x, y, z such that $x^2 + y^2 + z^2 < 25$. The range is $0 < f(x,y,z) \leq \ln 25$. 3. Again, the square root is nonnegative and the expression under the square root must be nonnegative. So the domain is all real x, y such that $1 + x - y^2 \geq 0$, and the range is $0 \leq f(x, y)$, since $1 + x - y^2$ is unbounded from above. mMmm... ok Uhmmm I see from what you said it's pretty easy but say I didn't give you any of those problems. Can you explain what i'm supposed to look for, how I should be interpreting the functions given and so on? Basically, you need to recognize when a function would restrict domain. Some functions restrict domain and some don't. If you have a square root function or a logarithm function, it will have a restricted domain, as you have seen. Also, if a function is of the form $\frac{f(x,y,z)}{g(x,y,z)}$, then $g(x,y,z)$ is not allowed to be zero. The domain of a polynomial function is all real numbers. As for determining the range: recognize that any terms that are squared (or raised to any even power) must be positive numbers; also recognize that exponential functions must produce positive numbers. There are several more rules for functions in general but basically you need to find absolute minima and absolute maxima if they exist, and for a lot of multivariable functions that requires multivariable calculus. Thanks alot~ I got it October 1st 2008, 08:50 PM #2 MHF Contributor Apr 2008 October 1st 2008, 09:00 PM #3 Senior Member Jul 2007 October 1st 2008, 09:11 PM #4 MHF Contributor Apr 2008 October 1st 2008, 09:20 PM #5 Senior Member Jul 2007
{"url":"http://mathhelpforum.com/calculus/51635-domain-range-f-x-y.html","timestamp":"2014-04-17T06:47:22Z","content_type":null,"content_length":"41154","record_id":"<urn:uuid:28f2d301-d639-4ef6-ae9f-ae2fd23bf42b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: To use your calculator to find cot^-1(x)-π/2tan^-1(x). Best Response You've already chosen the best response. are you graphing this? otherwise you need a x-value Best Response You've already chosen the best response. cot^-1(x) = tan^-1(1/x) if that helps Best Response You've already chosen the best response. If u want to find "x" then you use cot^-1(x)=1/cot(x) and tan^-1(x)=1/tan(x) cot^-1(x) =Π/2 tan^-1(x) tan(x)=Π/2*[1/tan(x)] tan^2(x)=Π/2 tan(x)=√(Π/2) X=tan^-1[√(Π/2)] Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f0ffe6ce4b04f0f8a91ec33","timestamp":"2014-04-23T09:17:45Z","content_type":null,"content_length":"32500","record_id":"<urn:uuid:bf9611de-9852-4375-8b87-5163b7e70d2f>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Vedic mathematics is pure mathematics based on certain Sanskrit sutrās or formulas. It is simple and easy to learn. Speed and computational skills are the plus points. Vedās are the treasure house of knowledge. Basic principles of modern science like physics, chemistry, biology, medicine and astronomy as also mathematics are hidden between the Vedic hymns composed in crisp Sanskrit language. Decimal number system, lines, angles, geometrical shapes like circle, triangle, rectangle, parallelogram, trapezium, rhombus and its area, solids and its volume, Pythagoras theorem sprang from the Vedic literature. But by the ravages of time and many other reason we lost the legacy and original Vedic practices vanished. Some portions of astronomy, astrology, Ayurveda, yoga are still in practice. In the recent past a great seek by name Swami Bharati Krishna Thirtha took up the studies in Vedās and Sasthras and happened to come across some sutrās (formulas), for which there was no commentaries, or left out as childish or non sense. But swamiji a scholar in 8 disciplines including Sanskrit and mathematics could analyze the inner meaning and there use in pure mathematics. Swamiji wrote a book by name “Ancient Vedic mathematics” illustrating the meaning of Sanskrit sutrās and the modus operandi. Based on that we have developed some areas which will be useful for students in general. Secondary school students can easily follow our lessons and master the subject. For normal calculation up to 8 digits one can replace a calculator within 20 to 25 hrs of learning.
{"url":"http://www.vedic-maths.org/","timestamp":"2014-04-18T10:36:10Z","content_type":null,"content_length":"5124","record_id":"<urn:uuid:58786b75-1c62-40dc-bbd9-7a2ab50d21ac>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
Algorithms for modern circuit simulation. Archiv für Elektronik und Übertragungstechnik (AEÜ - IEEE DAC , 2001 "... We present a new method for mismatch analysis and automatic yield optimization of analog integrated circuits with respect to global, local and operational tolerances. Effectiveness and efficiency of yield estimation and optimization are guaranteed by consideration of feasibility regions and by perfo ..." Cited by 21 (1 self) Add to MetaCart We present a new method for mismatch analysis and automatic yield optimization of analog integrated circuits with respect to global, local and operational tolerances. Effectiveness and efficiency of yield estimation and optimization are guaranteed by consideration of feasibility regions and by performance linearization at worst-case points. The proposed methods were successfully applied to two example circuits for an industrial fabrication process. - SIAM J. Sci. Stat. Comput , 1996 "... . In electric circuit simulation the charge oriented modified nodal analysis may lead to highly nonlinear DAEs with low smoothness properties. They may have index 2 but they do not belong to the class of Hessenberg form systems that are well understood. In the present paper, on the background of a d ..." Cited by 6 (2 self) Add to MetaCart . In electric circuit simulation the charge oriented modified nodal analysis may lead to highly nonlinear DAEs with low smoothness properties. They may have index 2 but they do not belong to the class of Hessenberg form systems that are well understood. In the present paper, on the background of a detailed analysis of the resulting structure, it is shown that charge oriented modified nodal analysis yields the same index as the classical modified nodal analysis. Moreover, for index 2 DAEs in the charge oriented case, a further careful analysis with respect to solvability, linearization and numerical integration is given. Key words. Differential-algebraic equations, index 2, circuit simulation, IVP, numerical integration, BDF, defect correction. AMS subject classifications. 65L10 1. Introduction. In modern circuit simulation, the so-called charge oriented modified nodal analysis is preferred for different reasons ([4], [7]). The resulting DAEs have low smoothness properties. They may h... - IN PROCEEDINGS DESIGN, AUTOMATION AND TEST IN EUROPE CONFERENCE AND EXHIBITION 2000 , 2000 "... In this paper, a new method for analog circuit sizing with respect to manufacturing and operating tolerances is presented. Two types of robustness objectives are presented, i.e. parameter distances for the nominal design and worstcase distances for the design centering. Moreover, the generalized bou ..." Cited by 3 (1 self) Add to MetaCart In this paper, a new method for analog circuit sizing with respect to manufacturing and operating tolerances is presented. Two types of robustness objectives are presented, i.e. parameter distances for the nominal design and worstcase distances for the design centering. Moreover, the generalized boundary curve is presented as a method to determine a parameter correction within an iterative trust region algorithm. Results show that a significant reduction in computational costs is achieved using the presented robustness objectives and generalized boundary curve. - Surv. Math. Ind , 1999 "... this paper we concentrate on the second step, especially the network approach for the automatic generation of the mathematical model. We achieve a descriptor formulation which is characterized by a differential--algebraic system (DAE). Their numerical solution creates new difficulties, which are cha ..." Cited by 1 (0 self) Add to MetaCart this paper we concentrate on the second step, especially the network approach for the automatic generation of the mathematical model. We achieve a descriptor formulation which is characterized by a differential--algebraic system (DAE). Their numerical solution creates new difficulties, which are characterized by the index concept. , 2002 "... In this paper, a method for nominal design of analog integrated circuits is presented that includes process variations and operating ranges by worst-case parameter sets. These sets are calculated adaptively during the sizing process based on sensitivity analyses. The method leads to robust designs w ..." Cited by 1 (0 self) Add to MetaCart In this paper, a method for nominal design of analog integrated circuits is presented that includes process variations and operating ranges by worst-case parameter sets. These sets are calculated adaptively during the sizing process based on sensitivity analyses. The method leads to robust designs with high parametric yield, while being much more efficient than design centering methods. "... Recently proposed methods for ordering sparse symmetric matrices are discussed and their performance is compared with that of the Minimum Degree and the Minimum Local Fill algorithms. It is shown that these methods applied to symmetrized modified nodal analysis matrices yield orderings significantly ..." Cited by 1 (1 self) Add to MetaCart Recently proposed methods for ordering sparse symmetric matrices are discussed and their performance is compared with that of the Minimum Degree and the Minimum Local Fill algorithms. It is shown that these methods applied to symmetrized modified nodal analysis matrices yield orderings significantly better than those obtained from the Minimum Degree and Minimum Local Fill algorithms, in some cases at virtually no extra computational cost. 1. "... For a solution ϕ(t; x0) with ϕ(0; x0) = x0 of the given implicit DAE-system F ( ˙x(t), x(t), t) = 0, a consistent initial value x0 must be found such that the periodicity condition x(0) − x(τ) = 0, (1) where τ is the period of the input and output signal is fulfilled. Similar to multiple shootin ..." Add to MetaCart For a solution ϕ(t; x0) with ϕ(0; x0) = x0 of the given implicit DAE-system F ( ˙x(t), x(t), t) = 0, a consistent initial value x0 must be found such that the periodicity condition x(0) − x(τ) = 0, (1) where τ is the period of the input and output signal is fulfilled. Similar to multiple shooting methods for high dimensional ODE systems, the nonlinear equation for the unknown initial state x0 is formulated as the minimization problem min x0∈IR n |ϕ(τ; x0) − x0 | 2 2. For the initial state of a periodical solution the residual vanishes. A Gauss–Newton method is used to solve the minimization problem. It consists of a sequence of linear approximations of the residual function to be minimized. For the actual approximation of the initial state x i 0, a correction ∆x i is determined as the solution of the linearization of the argument of the minimization problem: and ∆x i is the solution of min ∆x i ∈IR x i+1 0 = xi 0 + ∆x i, i = 0, 1,... "... Local algorithms for obtaining a pivot ordering for sparse symmetric coefficient matrices are reviewed together with their mathematical background and appropriate data structures. Recently proposed heuristics as well as improvements to them are discussed, and their performance, mainly in terms of th ..." Add to MetaCart Local algorithms for obtaining a pivot ordering for sparse symmetric coefficient matrices are reviewed together with their mathematical background and appropriate data structures. Recently proposed heuristics as well as improvements to them are discussed, and their performance, mainly in terms of the resulting number of factorization operations, is compared with that of the Minimum Degree and the Minimum Local Fill algorithms. It is demonstrated that a combination of Markowitz ’ algorithm with these symmetric methods applied to the unsymmetric matrices arising in circuit simulation is capable of accelerating the simulation significantly. 1.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1438544","timestamp":"2014-04-18T17:41:46Z","content_type":null,"content_length":"31033","record_id":"<urn:uuid:0087fbc0-7571-426f-a6d9-f14e13ac8241>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
please help-urgent what is the greatest number of acute angle that a triangle can contian? Well, since there are three angles in a triangle, it has to be 1,2, or 3. Let's start with 3. An equilateral triangle has equal side lengths and angles. Since the sum of the angles of a triangle must equal 180, then each angle is 60 deg, which is acute. So it's possible to have 3 angles in a triangle that are acute.
{"url":"http://mathhelpforum.com/geometry/3814-please-help-urgent-print.html","timestamp":"2014-04-18T17:04:48Z","content_type":null,"content_length":"3560","record_id":"<urn:uuid:f9eb91a9-825f-49a5-b47c-09d1b5e54679>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics for Chemistry/Further reading From Wikibooks, open books for an open world Further reading[edit] Online resources[edit] There is much useful free material relevant to this book, including downloadable DVDs, funded by the HEFCE Fund for the Development of Teaching & Learning and the Gatsby Technical Education Project in association with the Higher Education Academy at Math Tutor. Discover Maths for Chemists from the Royal Society of Chemistry is a a one-stop site designed by chemists for chemists. This new free-to-use site brings together all the best resources to help you combine maths and chemistry. Maths for Chemistry is an online resource providing interactive context-based resources which explain how various aspects of maths can be applied to chemistry. There are quizzes and downloadable files to check understanding.
{"url":"http://en.wikibooks.org/wiki/Mathematics_for_Chemistry/Further_reading","timestamp":"2014-04-18T20:56:17Z","content_type":null,"content_length":"27601","record_id":"<urn:uuid:be6eeadd-4683-4dff-a43b-3e1515fb2dce>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Santa Cruz, CA Precalculus Tutor Find a Santa Cruz, CA Precalculus Tutor ...I feel that positive relationship to mathematics was of great benefit to me. It inspired me to go beyond what I was learning in grade school and pursue more advanced math courses. I had the confidence that I could rise to the challenge. 8 Subjects: including precalculus, calculus, geometry, algebra 1 ...I have a very strong foundation in K12 math, but I also have a broad range of other academics in English, history, science, economics, foreign languages and the arts. Before I was a Math major at UCSC, I was an Art Education major at Humboldt State, so I can sympathize for those who struggle in ... 12 Subjects: including precalculus, calculus, geometry, statistics ...I recently graduated from Mills College with my degree in Mathematics. In my years at school, I studied abroad in both Germany and Hungary and through that have learned many different teaching styles. At college, I worked as a peer tutor for more than seven different mathematics courses and also was a teaching assistant and taught my own workshops of 8-12 students in Calculus twice a 28 Subjects: including precalculus, English, reading, calculus ...I worked as TA and RA at UCLA, while attending classes. I tutored my classmates as well as middle school, high school and college students outside UCLA, in math (prealgebra to calculus at all levels, including multivariable, and more), and also physics at all levels including college and univers... 9 Subjects: including precalculus, physics, calculus, geometry I tutored all lower division math classes at the Math Learning Center at Cabrillo Community College for 2 years. I assisted in the selection and training of tutors. I have taught algebra, trigonometry, precalculus, geometry, linear algebra, and business math at various community colleges and a state university for 4 years. 11 Subjects: including precalculus, calculus, statistics, geometry
{"url":"http://www.purplemath.com/santa_cruz_ca_precalculus_tutors.php","timestamp":"2014-04-17T07:46:44Z","content_type":null,"content_length":"24396","record_id":"<urn:uuid:4757a544-b7ba-467c-a1d4-12d8ee1b3a68>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Cepstrum computation [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: Cepstrum computation Laszlo Toth wrote: > if(||FFT(x)||==0) { y=-BigNumber} > else {y=log(||FFT(x)||) > Where "BigNumber" is some big constant. This is not correct, since you would like to preserve monotonicity, i.e. the relation that smaller ||FFT(x)|| means smaller y (or at least not larger y). So it would have to be: if (||FFT(x)|| <= exp(-BigNumber)) { y=-BigNumber} else {y=log(||FFT(x)||) But note that adding a small number makes the function differentiable: y = log (||FFT(x)|| + epsilon) Bill Hartmann wrote: > You can always add a little broadband noise to keep the power spectrum > finite at all frequencies. The "epsilon" trick is equivalent to adding white noise. Perhaps we should instead add a noise whose spectrum is equivalent to the zero-phon curve? Paul Boersma Institute of Phonetic Sciences, University of Amsterdam Herengracht 338, 1016CG Amsterdam, The Netherlands
{"url":"http://www.auditory.org/mhonarc/1999/msg00187.html","timestamp":"2014-04-16T16:30:08Z","content_type":null,"content_length":"4587","record_id":"<urn:uuid:27a7732f-9b96-421a-8f5d-6cd76779c869>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Equation predicts ponytail shape String theory: Hair has vexed scientists and artists for centuries. (Credit: Tim Hornyak) Does your coif suffer from orientational disorder? Have you checked the gravitational effects on your locks lately? Can you solve the differential equation in your beehive? Well, scientists now can. Pioneering British researchers have succeeded in formulating an equation that unravels the deep physics mysteries of that great frontier of science, human ponytails. In a study that screams Ig Nobel Prize, the researchers from the University of Cambridge, the University of Warwick, and Unilever published a hirsute equation that for the first time describes how hairs hang together and predicts the form of a ponytail. "We identify the balance of forces in various regions of the ponytail, extract a remarkably simple equation of state from laboratory measurements of human ponytails, and relate the pressure to the measured random curvatures of individual hairs," Raymond Goldstein, Robin Ball, and Patrick Warren write in Physic Physicists have come up with an equation that predicts the shape of a ponytail. Mon 13 Feb 12 from BBC News (PhysOrg.com) -- New research provides the first mathematical understanding of the shape of a ponytail and could have implications for the textile industry, computer animation and personal care ... Mon 13 Feb 12 from Phys.org Rapunzel, Leonardo and the physics of the ponytail, Tue 14 Feb 12 from Science Blog CURLY PROBLEM: British scientists may have cracked a problem that has perplexed humanity since Leonardo da Vinci pondered it 500 years ago. Sun 12 Feb 12 from ABC Science Equation explains why hair grows differently on each person Sat 11 Feb 12 from Science Now Physicists in the U.K. have derived a simple math formula that explains the shapes of ponytails, accounting for the balance between gravity and hair springiness. Tue 14 Feb 12 from Livescience A new theoretical model of hair explains how the curliness and elasticity of hair fibers produce the characteristic shape of a ponytail. Published Mon Feb 13, 2012 Mon 13 Feb 12 from APS Physics String theory: Hair has vexed scientists and artists for centuries. (Credit: Tim Hornyak) Does your coif suffer from orientational disorder? Have you checked the gravitational effects on your ... Tue 14 Feb 12 from CNET Crave Mon 13 Feb 12 from Physicsworld Blog British scientists reported Friday they have cracked the mathematical conundrum behind the shape of hair that has perplexed humanity since Leonardo da Vinci first pondered it some 500 years ... Mon 13 Feb 12 from RedOrbit
{"url":"http://www.physnews.com/nano-materials-news/cluster216961911/","timestamp":"2014-04-20T22:17:18Z","content_type":null,"content_length":"9919","record_id":"<urn:uuid:86a2b17a-ecd0-4931-8e1e-e93cbb989b15>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Multi Level Verification of Microprocessor-Based Systems Results 1 - 10 of 26 , 1996 "... Traditional ROBDD-based methods of automated verification suffer from the drawback that they require a binary representation of the circuit. To overcome this limitation we propose a broader class of decision graphs, called Multiway Decision Graphs (MDGs), of which ROBDDs are a special case. With MDG ..." Cited by 77 (14 self) Add to MetaCart Traditional ROBDD-based methods of automated verification suffer from the drawback that they require a binary representation of the circuit. To overcome this limitation we propose a broader class of decision graphs, called Multiway Decision Graphs (MDGs), of which ROBDDs are a special case. With MDGs, a data value is represented by a single variable of abstract type, rather than by 32 or 64 boolean variables, and a data operation is represented by an uninterpreted function symbol. MDGs are thus much more compact than ROBDDs, and this greatly increases the range of circuits that can be verified. We give algorithms for MDG manipulation, and for implicit state enumeration using MDGs. We have implemented an MDG package and provide experimental results. - Formal Aspects of Computing , 1992 "... In this paper we present a formal model of asynchronous communication as a function in the Boyer-Moore logic. The function transforms the signal stream generated by one processor into the signal stream consumed by an independently clocked processor. This transformation "blurs" edges and "dilates" ti ..." Cited by 36 (5 self) Add to MetaCart In this paper we present a formal model of asynchronous communication as a function in the Boyer-Moore logic. The function transforms the signal stream generated by one processor into the signal stream consumed by an independently clocked processor. This transformation "blurs" edges and "dilates" time due to differences in the phases and rates of the two clocks and the communications delay. The model can be used quantitatively to derive concrete performance bounds on asynchronous communications at ISO protocol level 1 (physical level). We develop part of the reusable formal theory that permits the convenient application of the model. We use the theory to show that a biphase mark protocol can be used to send messages of arbitrary length between two asynchronous processors. We study two versions of the protocol, a conventional one which uses cells of size 32 cycles and an unconventional one which uses cells of size 18. Our proof of the former protocol requires the ratio of the clock rates of the two processors to be within 3% of unity. The unconventional biphase mark protocol permits the ratio to vary by 5%. At nominal clock rates of 20MHz, the unconventional protocol allows transmissions at a burst rate of slightly over 1MHz. These claims are formally stated in terms of our model of asynchrony; the proofs of the claims have been mechanically checked with the Boyer-Moore theorem prover, NQTHM. We conjecture that the protocol can be proved to work under our model for smaller cell sizes and more divergent clock rates but the proofs would be harder. Known inadequacies of our model include that (a) distortion due to the presence of an edge is limited to the time span of the cycle during which the edge was written, (b) both clocks are assumed to be linear functions of time (i.... , 1994 "... Formal verification is becoming a useful means of validating designs. We have developed a methodology for formally verifying dataintensive circuits (e.g., processors) with sophisticated timing (e.g., pipelining) against high-level declarative specifications. Previously, formally verifying a micropro ..." Cited by 25 (4 self) Add to MetaCart Formal verification is becoming a useful means of validating designs. We have developed a methodology for formally verifying dataintensive circuits (e.g., processors) with sophisticated timing (e.g., pipelining) against high-level declarative specifications. Previously, formally verifying a microprocessor required the use of an automatic theorem prover, but our technique requires little more than a symbolic simulator. We have formally verified a pre-existing 16-bit CISC microprocessor circuit extracted from the fabricated layout. Introduction Previously, symbolic switch-level simulation has been used to verify some small or simple data-intensive circuits (RAMs, stacks, register files, ALUs, and simple pipelines) [2, 3]. In doing so, the necessary simulation patterns were developed by hand or by using ad-hoc techniques, and it was then argued that the patterns were sufficient, and that their generation could be automated. We have developed sufficient theory to fully support such - Formal Methods in System Design , 1993 "... . In this article we present a structured approach to formal hardware verification by modelling circuits at the register-transfer level using a restricted form of higher-order logic. This restricted form of higher-order logic is sufficient for obtaining succinct descriptions of hierarchically design ..." Cited by 20 (7 self) Add to MetaCart . In this article we present a structured approach to formal hardware verification by modelling circuits at the register-transfer level using a restricted form of higher-order logic. This restricted form of higher-order logic is sufficient for obtaining succinct descriptions of hierarchically designed register-transfer circuits. By exploiting the structure of the underlying hardware proofs and limiting the form of descriptions used, we have attained nearly complete automation in proving the equivalences of the specifications and implementations. A hardware-specific tool called MEPHISTO converts the original goal into a set of simpler subgoals, which are then automatically solved by a general-purpose, first-order prover called FAUST. Furthermore, the complete verification framework is being integrated within a commercial VLSI CAD framework. Keywords: hardware verification, higher-order logic 1 Introduction The past decade has witnessed the spiralling of interest within the academic com... , 1997 "... Formal verification uses a set of languages, tools, and techniques to mathematically reason about the correctness of a hardware system. The form of mathematical reasoning is dependent upon the hardware system. This thesis concentrates on hardware systems that have a simple deterministic high-level s ..." Cited by 19 (1 self) Add to MetaCart Formal verification uses a set of languages, tools, and techniques to mathematically reason about the correctness of a hardware system. The form of mathematical reasoning is dependent upon the hardware system. This thesis concentrates on hardware systems that have a simple deterministic high-level specification but have implementations that exhibit highly nondeterministic behaviors. A typical example of such hardware systems are processors. At the high level, the sequencing model inherent in processors is the sequential execution model. The underlying implementation, however, uses features such as nondeterministic interface protocols, instruction pipelines, and multiple instruction issue which leads to nondeterministic behaviors. The goal is to develop a methodology with which a designer can show that a circuit fulfills the abstract specification of the desired system behavior. The abstract specification describes the highlevel behavior of the system independent of any timing or implem... , 1994 "... . In this paper a methodology for verifying RISC cores is presented. This methodology is based on a hierarchical model of interpreters. This model allows us to define formal specifications at each level of abstraction and successively prove the correctness between the neighbouring abstraction levels ..." Cited by 15 (7 self) Add to MetaCart . In this paper a methodology for verifying RISC cores is presented. This methodology is based on a hierarchical model of interpreters. This model allows us to define formal specifications at each level of abstraction and successively prove the correctness between the neighbouring abstraction levels, so that the overall specification is correct with respect to its hardware implementation. The correctness proofs have been split into two steps so that the parallelism in the execution due to the pipelining of instructions, is accounted for. The first step shows that the instructions are correctly processed by the pipeline and the second step shows that the semantic of each instruction is correct. We have implemented the specification of the entire model and performed parts of the proofs in HOL. 1 Introduction Completely automating the verification of general complex systems is practically impossible. Hence appropriate heuristics for specific classes of circuits such as finite state machi... , 1996 "... We present a new approach to automating the verification of hardware designs based on planning techniques. A database of methods is developed that combines tactics, which construct proofs, using specifications of their behaviour. Given a verification problem, a planner uses the method database to ..." Cited by 13 (6 self) Add to MetaCart We present a new approach to automating the verification of hardware designs based on planning techniques. A database of methods is developed that combines tactics, which construct proofs, using specifications of their behaviour. Given a verification problem, a planner uses the method database to build automatically a specialised tactic to solve the given problem. User interaction is limited to specifying circuits and their properties and, in some cases, suggesting lemmas. We have implemented our work in an extension of the Clam proof planning system. We report on this and its application to verifying a variety of combinational and synchronous sequential circuits including a parameterised multiplier design and a simple computer microprocessor. , 1993 "... We present an abstract theory of interpreters. Interpreters are models of computation that are specifically designed for use as templates in computer system specification and verification. The generic interpreter theory contains an abstract representation which serves as an interface to the theory a ..." Cited by 13 (3 self) Add to MetaCart We present an abstract theory of interpreters. Interpreters are models of computation that are specifically designed for use as templates in computer system specification and verification. The generic interpreter theory contains an abstract representation which serves as an interface to the theory and as a guide to specification. A set of theory obligations ensure that the theory is being used correctly and provide a guide to system verification. The generic interpreter theory provides a methodology for deriving important definitions and lemmas that were previously obtained in a largely ad hoc fashion. Many of the complex data and temporal abstractions are done in the abstract theory and need not be redone when the theory is used. , 1995 "... Traditional OBDD-based methods of automated verification suffer from the drawback that they require a binary representation of the circuit. Multiway Decision Graphs (MDGs) [5] combine the advantages of OBDD techniques with those of abstract types. RTL designs can be compactly described by MDGs usin ..." Cited by 9 (7 self) Add to MetaCart Traditional OBDD-based methods of automated verification suffer from the drawback that they require a binary representation of the circuit. Multiway Decision Graphs (MDGs) [5] combine the advantages of OBDD techniques with those of abstract types. RTL designs can be compactly described by MDGs using abstract data values and uninterpreted function symbols. We have developed MDGbased techniques for combinational verification, reachability analysis, verification of behavioral equivalence, and verification of a microprocessor against its instruction set architecture. We report on the results of several verification experiments using our MDG package. I. Introduction Bryant's Reduced and Ordered Binary Decision Diagrams (OBDDs) [1] have proved to be a powerful tool for automated hardware verification [2, 6, 12]. OBDDs, however, have a drawback: they require a binary representation of the circuit even if the design is given at the Register Transfer Level. Every individual bit of every data ... , 1995 "... In this paper a practical methodology for formally verifying RISC cores is presented. This methodology is based on a hierarchical model of interpreters which reflects the abstraction levels used by a designer in the implementation of RISC cores, namely the architecture level, the pipeline stage leve ..." Cited by 9 (0 self) Add to MetaCart In this paper a practical methodology for formally verifying RISC cores is presented. This methodology is based on a hierarchical model of interpreters which reflects the abstraction levels used by a designer in the implementation of RISC cores, namely the architecture level, the pipeline stage level, the clock phase level and the hardware implementation. The use of this model allows us to successively prove the correctness between two neighbouring levels of abstractions, so that the verification process is simplified. The parallelism in the execution of the instructions, resulting from the pipelined architecture of RISCs is handled by splitting the proof into two independent steps. The first step shows that each architectural instruction is implemented correctly by the sequential execution of its pipeline stages. The second step shows that the instructions are correctly processed by the pipeline in that we prove that under certain constraints from the actual architecture, no conflic...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=666996","timestamp":"2014-04-16T23:18:39Z","content_type":null,"content_length":"39962","record_id":"<urn:uuid:5b69b262-a599-4780-88fb-2d7526ea0ada>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Math Problem July 28th 2009, 02:26 PM #1 Jul 2009 Basic Math Problem Hello everyone. Can anyone help me solve the following problem or give me an idea where to begin. It seems very basic but I can't seem to work it out. Thank you. "Mr and Mrs Matrix set out one Sunday to jog round the same course. Mr Matrix runs 4 km/h faster than his wife. He completes the course in 2 hours and she takes 3 hours. Calculate how fast each person runs and how long the course is." Hello everyone. Can anyone help me solve the following problem or give me an idea where to begin. It seems very basic but I can't seem to work it out. Thank you. "Mr and Mrs Matrix set out one Sunday to jog round the same course. Mr Matrix runs 4 km/h faster than his wife. He completes the course in 2 hours and she takes 3 hours. Calculate how fast each person runs and how long the course is." Let Dr Matrix run at $m$ km/he, then his wife runs at $(m-4)$ km/h. Let the course length be $l$ km, then: Now solve. It's not really basic, because you have to figure out what is being said here. To do that lets look at what information we have. We are told that $\mbox{"Mr. Matrix runs} \ (x+4)\frac{km}{h} \ \mbox{faster than his wife"}.$ We can then allow $x\frac{km}{h}$ to equal the speed of the wife. We are also told that Mr. Matrix takes 2 hours to complete the course, and the wife takes 3 hours to complete the course. If they are both running the SAME course, then there is an equality here we can exploit using $(distance)=(rate)(time)$. solve $\frac{x+4}{2}=\frac{x}{3}$ You could say that in 2 hours mr matrix would accumulate an 8 km gap where he reaches the target. and mrs matrix would need an extra hour to cover 8km. So she travels at 8km/h and he travels 4km/h faster than this. 12km/h Pickslides your equation gives -12km/h. i didn't know mr matrix was running backwards. lol joke Got it, thank you all. July 28th 2009, 02:44 PM #2 Grand Panjandrum Nov 2005 July 28th 2009, 02:46 PM #3 Super Member Jul 2009 July 28th 2009, 02:47 PM #4 July 28th 2009, 03:32 PM #5 Jul 2009 July 28th 2009, 03:44 PM #6 July 28th 2009, 03:47 PM #7 Jul 2009
{"url":"http://mathhelpforum.com/algebra/96336-basic-math-problem.html","timestamp":"2014-04-17T11:49:02Z","content_type":null,"content_length":"50432","record_id":"<urn:uuid:068dd564-b6a9-4da7-9d41-412d143add32>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
what is the difference between potential difference and potential? In this context, you could say that potential is simply potential difference from a point to the reference point or ground, i.e., the point in your circuit where you define the potential to be zero. Potential difference is the difference in potential between two given points, neither one of which is the reference point Explaining this with an analogy using gravity, things would go something like this. Let's say we are standing at ground level and so we declare the ground to have zero potential energy. Ground is our reference, any mass that we raise above ground level will acquire a potential energy of mgh. mgh is your potential. If you have two of the same objects at difference heights, the potential difference would be mgh1 - mgh2, if h2 is lower than h1, then you have a positive potential difference...this could drive a "current" from point at h1 to point in h2....in other words, if you take a table and raise one end to a height of 24 inches and the other end to a height of just 12 inches...and you put a ball on the higher end...will it flow to the lower (potential) end? ...what if you put the ball in the lower end, will it climb to the higher end all by itself? so, yes, current flows from the high potential to the low potential. And not just current, also heat flows from high temperature to low temperature. And, so, I simply keep a short statement in my head to remind me of this: Power flows downhill.
{"url":"http://www.physicsforums.com/showthread.php?p=3754329","timestamp":"2014-04-19T12:43:09Z","content_type":null,"content_length":"26462","record_id":"<urn:uuid:082356af-312e-43ce-b09c-2a5c5c22bd43>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
pre algebra homework check Posted by amy on Wednesday, May 30, 2012 at 3:11pm. Find the product and simplify y(4y + 12) • pre algebra homework check - Pendergast, Wednesday, May 30, 2012 at 3:15pm • pre algebra homework check - amy, Wednesday, May 30, 2012 at 3:16pm Thank you very much!!! Related Questions pre algebra check - Find the product and simplify. m^2(3m^2) =3m^4???? Pre-Algebra - Simplify 8Y^2 - 3Y + 3Y^2 - 4Y + 12 Algebra - I'm not sure how to solve these problems. Simplify (3xsquare - 2x)... pre cal - (y^2)^-4y^8 simplify algebra - find the product. (4x+4y)^2 Math - Help Please: Please Help 1.Essay:Show all work,Find the Product. (x^2-2)(... Algebra1 - i think i have to simplify.... but the question just says: (2y-3)^2 ... Algebra1 - i think i have to simplify.... but the question just says: (2y-3)^2 a... pre algebra homework check - Find the sum or difference. 4s^2-11s+s^2-6s+21 Is ... algebra - simplify 14a^8y^3-7a^4y^5+28a^12y^2/7a^4y
{"url":"http://www.jiskha.com/display.cgi?id=1338405113","timestamp":"2014-04-19T17:46:14Z","content_type":null,"content_length":"8270","record_id":"<urn:uuid:fb160ec4-ee60-4ca4-bb15-288bd78aea67>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
Finite Model Theory From Wikibooks, open books for an open world Finite Model Theory (FMT) is a subarea of Model Theory (MT). MT is the branch of mathematical logic which deals with the relation between a formal language (syntax) and its interpretations (semantics). FMT is a restriction of MT to finite structures, such as finite graphs or strings. Since many central theorems of MT do not hold when restricted to finite structures, FMT is quite different from MT in methods and application areas. FMT has become an "unusually effective" instrument in computer science, for example in database theory, model checking or for gaining new perspectives on computational complexity. The three main areas of FMT are presented here: Expressive Power of Languages, Descriptive Complexity, and Random Structures. But first the results fundamental for all areas are introduced on the level of first order languages. Why? (Motivation) What is it? (Definition and Background) Why is it special? (... compared to 'common' Model Theory) What is it about? (Typical Logics and Structures studied) What is required? (Preliminaries) What to start with? (Basic Concepts) Ehrenfeucht-Fraisse-Games for FO The Problem (Expressibility of Properties) The Idea I (Fraisse's Theorem) The Idea II (Ehrenfeucht's Games) The Tool (Ehrenfeucht-Fraisse-Method) Some Utilities (Localities) Some Solutions (for some Structures) Expressive Power of Languages Typical problem: Given a finite graph, can the property of being an acyclic be expressed in a first order language? Descriptive Complexity Random Structures
{"url":"http://en.wikibooks.org/wiki/Finite_Model_Theory","timestamp":"2014-04-19T04:29:39Z","content_type":null,"content_length":"31963","record_id":"<urn:uuid:5aa2a217-70f8-40c1-85c2-b737108ccad5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/504f88d1e4b03fffcec2892a","timestamp":"2014-04-18T16:51:43Z","content_type":null,"content_length":"38235","record_id":"<urn:uuid:53387049-2b4e-4c9a-bd69-98d95ec4bdcb>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
no calculators, no problem! you should know $\tan \frac {\pi}3$ by heart, or know how to get is from the sine and cosine of pi/3 then, by reference angles, $\tan \frac {k \pi}3 = \pm \tan \frac {\pi} 3$ the sign depends on what quadrant the angle $\frac {k \pi}3$ is. $\frac {10 \pi}3$ is in the third quadrant, so we take the plus sign
{"url":"http://mathhelpforum.com/trigonometry/28096-tan-10pi-3-a-print.html","timestamp":"2014-04-20T17:19:35Z","content_type":null,"content_length":"7347","record_id":"<urn:uuid:ca5cff6b-f939-4014-88d7-ccef19a6d772>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Published in Q. Jl R. astr. Soc. (1996), 37, 519-563 PRACTICAL STATISTICS FOR ASTRONOMERS: II. CORRELATION, DATA-MODELLING AND SAMPLE COMPARISON J. V. Wall Royal Greenwich Observatory, Madingley Road, Cambridge CB3 OEZ, UK SUMMARY. Correlation is discussed in various guises in which astronomers may stumble across it, beginning with the pitfalls of searching for correlations between two variables and the tests, both parametric and non-parametric, for such correlations. This leads to the subject of regression analysis, a particular form of data modelling. Some general aspects and procedures in data modelling and parameter estimation are then described, including least squares, maximum-likelihood, Bayesian techniques and minimum chi-square. The final topic is sample comparison, an area of hypothesis testing: here some seven tests in all are described, three for the comparison of single samples with prediction and four for inter-sample comparison. Non-parametric methods are emphasized throughout as being of most use to the astronomer, who frequently faces (i) very small samples and (ii) lack of control over the Universe needed either to rerun the experiments or to understand frequency distributions from which the small samples were drawn. Table of Contents
{"url":"http://ned.ipac.caltech.edu/level5/Wall2/Wal_contents.html","timestamp":"2014-04-18T05:54:12Z","content_type":null,"content_length":"4725","record_id":"<urn:uuid:51599e45-14b9-4a6a-9fe7-ac8414b8e13f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
Induction with waveguide walls This is a question I've been trying to figure out. I'll try my best to formulate it, so apologies if it's a bit ill defined! Suppose you construct a finite parallel plate waveguide of PEC(perfect electrical conductors) and PMC(perfect magnetic conductors) so that the top/bottom plates are PEC, and the left/right are PMC. For instance, something like this but instead of air on the sides, it's a PMC so H = 0. 1) Then what happens if you propagate a TE wave? More specifically, will there be any sort of induction with the walls? 2) If not, could you construct some internal structure in the waveguide which couples to the walls of it? This is what I have for 1) so far: There can't be any induction with the PMC walls, because along the surface of the PMC, the magnetic field is 0 in all directions. We have then have BC's that: [itex]\vec{E}(x=0) = \vec{E}(x=d) = \vec{0}[/itex] And then from maxwell's equations, the induced surface current is: [itex]J_s = \pm \hat{x} \times \vec{H} = \pm (\hat{y} H_z - \hat{z} H_y)[/itex] I'm a bit rusty on how to calculate any sort of behaviors here via induction onwards. I'm guessing that since this is TE, H only has y and z components. Therefore, there is no induced electric field since the magnetic flux from one plate to another is 0. That would therefore imply, to produce inductive effects with some internal structure (2), we could construct a wire loop slanted at 45°
{"url":"http://www.physicsforums.com/showthread.php?p=4218016","timestamp":"2014-04-17T12:40:53Z","content_type":null,"content_length":"20819","record_id":"<urn:uuid:c1e69901-eea9-4dce-a013-8bcb83f78e0e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Solve: −5(x − 2) = 15 1 −3.4 −5 −1 -------------------------------------------------------------------------------- Question 2 (Multiple Choice Worth 1 points) What is the first step in solving the equation −4x − 16 = 20? Multiply both sides of the equation by 4. Divide both sides of the equation by −16. Subtract 16 from both sides of the equation. Add 16 to both sides of the equation. • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/509ac52fe4b058d4974341ec","timestamp":"2014-04-17T04:00:18Z","content_type":null,"content_length":"47250","record_id":"<urn:uuid:a1c78f78-5432-497f-b24e-12f1ec1d88bc>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
Best Approaches for AI in Game Development When the term artificial intelligence (AI) is mentioned, it is widely believed that the computers really have the ability to think. However, the truth is a little bit different - computers just execute complex mathematical models and algorithms that are created to simulate thinking and decision making. In this article, we will see the most popular artificial intelligence algorithms used in game development. Minimax Algorithm The Minimax algorithm is one of the most widely used algorithms applied in two player games (e.g. Tic-tack-toe or chess). Its purpose is to determine which move is best for the AI (computer player) to make. The algorithm is based on Minimax theorem, a decision rule used in game theory, which says that Player 1's strategy is to maximize its minimum gain, and that the Player 2's strategy is to minimize its maximum loss (note that negative values are also allowed for gain and loss). In game development, the theorem is usually used in combination with the game tree. The game tree is generated from the current game position to the final game position, from Player 1's point of view. So, each node in the tree represents how effective the move is. Node values are filled from bottom to top. After generating the game tree, the Minimax algorithm is applied. MinMax (GamePosition game) { return MaxMove (game); MaxMove (GamePosition game) { if (GameEnded(game)) { return EvalGameState(game); else { best_move < - {}; moves <- GenerateMoves(game); ForEach moves { move <- MinMove(ApplyMove(game)); if (Value(move) > Value(best_move)) { best_move < - move; return best_move; MinMove (GamePosition game) { best_move <- {}; moves <- GenerateMoves(game); ForEach moves { move <- MaxMove(ApplyMove(game)); if (Value(move) > Value(best_move)) { best_move < - move; return best_move; A* Algorithm A* algorithm offers a solution to a common problem in more complex games - path finding or making characters know where they can - and where they should - move. It is widely used because of its performance and accuracy. There are two things you will need in order to make A* work: 1. A graph of your terrain 2. A method to estimate cost to reach between points A* then traverses the graph, using best-first search to find the path that has lowest expected total cost. The cost function is usually a sum of two functions: 1. Past path-cost function, which is the known distance from the starting node to the current node (G score) 2. Future path-cost function, which is a cost estimate from the current node to the goal (H score) Sample A* code: function A*(start,goal) closedset := the empty set // The set of nodes already evaluated. openset := {start} // The set of tentative nodes to be evaluated, initially containing the start node came_from := the empty map // The map of navigated nodes. g_score[start] := 0 // Cost from start along best known path. // Estimated total cost from start to goal through y. f_score[start] := g_score[start] + heuristic_cost_estimate(start, goal) while openset is not empty current := the node in openset having the lowest f_score[] value if current = goal return reconstruct_path(came_from, goal) remove current from openset add current to closedset for each neighbor in neighbor_nodes(current) tentative_g_score := g_score[current] + dist_between(current,neighbor) tentative_f_score := tentative_g_score + heuristic_cost_estimate(neighbor, goal) if neighbor in closedset and tentative_f_score >= f_score[neighbor] if neighbor not in openset or tentative_f_score < f_score[neighbor] came_from[neighbor] := current g_score[neighbor] := tentative_g_score f_score[neighbor] := tentative_f_score if neighbor not in openset add neighbor to openset return failure State Machine The finite-state machine is a mathematical model that perceives an object as an abstract machine that can be in a finite number of states. The machine can be only in one state at a time. It can also change states. This is called a transition and can be triggered by an event or condition. State machine is used to determine the behavior of AI, i.e. which actions to take at a certain time. Here is a simple example: switch( npc[i].state ) case state_idle: // First, see if I should attack. target = TryToFindAttackTarget(npc[i]); if( target ) npc[i].SetTarget( target ); npc[i].state = state_attack; // Next, see if I should heal myself // Check if I should heal neighbors /* other idle tasks */ case state_flee: case state_attack: // Determine if the target is still valid. Maybe it died. if( !npc[i].GetTarget() ) // Try to find a new target // If I still don't have a target, go back to idle if( !npc[i].GetTarget() ) npc[i].state = state_idle; case state_dead: Artificial Neural Networks An artificial neural network (or ANN) is an algorithm used in artificial intelligence to simulate human thinking. It is called a neural network because it works similarly to human brain - it receives input, the neurons communicate with each other and produce a "smart" output. Artificial neural network has three layers of neurons: 1. Input layer 2. Hidden layer 3. Output layer Every neuron is connected to all neurons in the next layer. The neurons solve the problem by adjusting output (i.e. weight coefficients between them). The process of adjusting weight coefficients is called training. So, you will need to prepare sample data in order to make the neural network "learn" your problem. The better the training data is, the better the output will be. Neural networks are widely used in game development because once trained they have good performance, and because they enable the game AI to learn over time. They can also be applied in resolving recognition problems, such as deciding whether the seen person is a friend or a foe. This article explains the basic algorithms used in game development and simplifies them for easier understanding. In reality, game development is much more complex. You will be working with a lot of data, which makes these algorithms expensive in terms of CPU and RAM usage. That means that you will have to consider optimization techniques, which were not discussed in this article. (click to add your comment)
{"url":"http://www.devx.com/enterprise/best-approaches-for-ai-in-game-development.html","timestamp":"2014-04-19T19:39:30Z","content_type":null,"content_length":"78953","record_id":"<urn:uuid:2527d8df-4713-4bd8-abee-703f2cf424e2>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Hyattsville Algebra 2 Tutor ...I have tutored math for eight years. I have worked with middle school and early high school students in prealgebra. I am very comfortable working on conceptual issues, as well as personal challenges in learning math. 11 Subjects: including algebra 2, writing, algebra 1, public speaking I am a graduate from university with Bachelor of Science in math and Master of Art in methods of teaching math. My 30 years of services in the system of education includes teaching high school and college math, developing elementary and middle school grade texts, and tutoring students of different... 7 Subjects: including algebra 2, geometry, algebra 1, SAT math ...I've worked with students to set goals, timelines, or even simple routines to aid them in achieving their study and homework goals. I also incorporate these skills into my own life - that's what has enabled me to complete research for my master's degree in engineering, and continues to aide me i... 17 Subjects: including algebra 2, reading, writing, biology ...I have gotten tremendous satisfaction from seeing my students' grades improve and from hearing positive feedback from them (including constructive suggestions). If you are interested in finding a well rounded tutor with a passion for teaching students and learning from them, then I'd be glad to h... 40 Subjects: including algebra 2, English, reading, chemistry ...I have tutored Chinese to more than 5 students in the past 2 years. I graduated with a Bachelor of Science in Computer Science from the George Washington University in May 2012. I had more than 3 years' intense training in programming, especially in C and Java, both of which have been widely used in my daily job. 27 Subjects: including algebra 2, chemistry, calculus, physics
{"url":"http://www.purplemath.com/hyattsville_md_algebra_2_tutors.php","timestamp":"2014-04-19T20:25:20Z","content_type":null,"content_length":"24251","record_id":"<urn:uuid:08e3c1fe-f536-4e2c-835a-be38a3e63996>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
Pulsar Properties For additional information about pulsars, see the books Pulsar Astronomy by Andrew Lyne and Francis Graham-Smith and Handbook of Pulsar Astronomy by Duncan Lorimer and Michael Kramer. Known radio pulsars appear to emit short pulses of radio radiation with pulse periods between 1.4 ms and 8.5 seconds. Even though the word pulsar is a combination of "pulse" and "star," pulsars are not pulsating stars. Their radio emission is actually continuous but beamed, so any one observer sees a pulse of radiation each time the beam sweeps across his line-of-sight. Since the pulse periods equal the rotation periods of spinning neutron stars, they are quite stable. Even though the radio emission mechanism is not well understood, radio observations of pulsars have yielded a number of important results because: (1) Neutron stars are physics laboratories providing extreme conditions (deep gravitational potentials, densities exceeding nuclear densities, magnetic field strengths as high as $B \sim 10^{14}$ or even $10^{15}$ gauss) not available on Earth. (2) Pulse periods can be measured with accuracies approaching 1 part in $10^{16}$, permitting exquisitely sensitive measurements of small quantities such as the power of gravitational radiation emitted by a binary pulsar system or the gravitational perturbations from planetary-mass objects orbiting a pulsar. The radical proposal that neutron stars exist was made with trepidation by Baade & Zwicky in 1934: "With all reserve we advance the view that a supernova represents the transition of an ordinary star into a new form of star, the neutron star, which would be the end point of stellar evolution. Such a star may possess a very small radius and an extremely high density." Pulsars provided the first evidence that neutron stars really do exist. They tell us about the strong nuclear force and the nuclear equation of state in new ranges of pressure and density, test general relativity and alternative theories of gravitation in both shallow and relativisitically deep ($GM/(rc^2) \gg 0$) potentials, and led to the discovery of the first extrasolar planets. Discovery and Basic Properties Pulsars were discovered serendipidously in 1967 on chart-recorder records obtained during a low-frequency ($\nu = 81$ MHz) survey of extragalactic radio sources that scintillate in the interplanetary plasma, just as stars twinkle in the Earth's atmosphere. This important discovery remains a warning against overprocessing data before looking at them, ignoring unexpected signals, and failing to explore observational "parameter space" (here the relevant parameter being time). As radio instrumentation and data-taking computer programs become more sophisticated, signals are "cleaned up" before they reach the astronomer and optimal "matched filtering" tends to suppress the unexpected. Thus clipping circuits are used to remove the strong impulses that are usually caused by terrestrial interference, and integrators smooth out fluctuations shorter than the integration time. Pulsar signals "had been recorded but not recognized" several years earlier with the 250-foot Jodrell Bank telescope. Most pulses seen by radio astronomers are just artificial interference from radar, electric cattle fences, etc., and short pulses from sources at astronomical distances imply unexpectedly high brightness temperatures $T_{\rm b} \sim 10^{25}$–$10^{30} {\rm ~K} \gg 10^{12}$ K, the upper limit for incoherent electron-synchrotron radiation set by inverse-Compton scattering. However, Cambridge University graduate student Jocelyn Bell noticed pulsars in her scintillation survey data because the pulses appeared earlier by about 4 minutes every solar day, so they appeared exactly once per sidereal day and thus came from outside the solar system. Figure 1. "High-speed" chart recording of the first known pulsar, CP1919. This confirmation observation showed that the "scruffy" signals observed previously were periodic. The sources and emission mechanism were originally unknown, and even intelligent transmissions by LGM ("little green men") were seriously suggested as explanations for pulsars. Astronomers were used to slowly varying or pulsating emission from stars, but the natural period of a radially pulsating star depends on its mean density $\rho$ and is typically days, not seconds. Likewise there is a lower limit to the rotation period $P$ of a gravitationally bound star, set by the requirement that the centrifugal acceleration at its equator not exceed the gravitational acceleration. If a star of mass $M$ and radius $R$ rotates with angular velocity $\Omega = 2 \pi / P$, $$\Omega^2 R < {G M \over R^2}$$ $${4 \pi^2 R^3 \over P^2} < GM$$ $$P^2 > \biggl( { 4 \pi R^3 \over 3} \biggr) {3 \pi \over G M}$$ In terms of the mean density $$\rho = M \biggl( { 4 \pi R^3 \over 3} \biggr)^{-1}~,$$ $$P > \biggl( { 3 \pi \over G \rho} \biggr)^{1/2}$$ $$\bbox[border:3px blue solid,7pt]{\rho > { 3 \pi \over G P^2}}\rlap{\quad \rm {(6A1)}}$$ This is actually a very conservative lower limit to $\rho$ because a rapidly spinning star becomes oblate, increasing the centrifugal acceleration and decreasing the gravitational acceleration at its Example: The first pulsar discovered (CP 1919+21, where the "CP" stands for Cambridge pulsar) has a period $P = 1.3$ s. What is its minimum mean density? $$\rho > { 3 \pi \over G P^2} = { 3 \pi \over 6.67 \times 10^{-8} {\rm ~dyne~cm}^2 {\rm ~gm}^{-2} (1.3 {\rm ~s})^2 } \approx 10^8 {\rm ~g~cm}^{-3}$$ This density limit is just consistent with the densities of white-dwarf stars. But soon the faster ($P = 0.033$ s) pulsar in the Crab Nebula was discovered, and its period implied a density too high for any stable white dwarf. The Crab nebula is the remnant of a known supernova recorded by ancient Chinese astronomers as a "guest star" in 1054 AD, so the discovery of this pulsar also confirmed the Baade & Zwicky suggestion that neutron stars are the compact remnants of supernovae. The fastest known pulsar has $P = 1.4\times10^{-3}$ s implying $\rho > 10^{14}$ g cm$^{-3}$, the density of nuclear matter. For a star of mass greater than the Chandrasekhar mass $$M_{\rm Ch} \approx \biggl( {h c \over 2 \pi G} \biggr)^{3/2} {1 \over m_{\rm p}^2} \approx 1.4 M_\odot$$ (compact stars less massive than this are stable as white dwarfs), the maximum radius is $$R < \left( { 3 M \over 4 \pi \rho } \right)^{1/3}$$ In the case of the $P = 1.4 \times 10^{-3}$ pulsar with $\rho > 10^{14}$ g cm$^{-3}$, $$ R < \left( { 3 \times 1.4 \times 2.0 \times 10^{33} {\rm ~g} \over 4 \pi \times 10^{14} {\rm ~g~cm}^{-3} } \right)^{1/3} \approx 2 \times 10^6 {\rm ~cm} \approx 20 {\rm ~km}$$ The canonical neutron star has $M \approx 1.4 M_\odot$ and $R \approx 10$ km, depending on the equation-of-state of extremely dense matter composed of neutrons, quarks, etc. The extreme density and pressure turns most of the star into a neutron superfluid that is a superconductor up to temperatures $T \sim 10^9$ K. Any star of significantly higher mass ($M \sim 3 M_\odot$ in standard models) must collapse and become a black hole. The masses of several neutron stars have been measured with varying degrees of accuracy, and all turn out to be very close to $1.4 M_\odot$. The Sun and many other stars are known to possess roughly dipolar magnetic fields. Stellar interiors are mostly ionized gas and hence good electrical conductors. Charged particles are constrained to move along magnetic field lines and, conversely, field lines are tied to the particle mass distribution. When the core of a star collapses from a size $\sim 10^{11}$ cm to $\sim 10^6$ cm, its magnetic flux $\Phi \equiv \int \vec{B} \cdot \vec{n}\, da$ is conserved and the initial magnetic field strength is multiplied by $\sim 10^{10}$, the factor by which the cross-sectional area $a$ falls. An initial magnetic field strength of $B \sim 100$ G becomes $B \sim 10^{12}$ G after collapse, so young neutron stars should have very strong dipolar fields. The best models of the core-collapse process show that a dynamo effect may generate an even larger magnetic field. Such dynamos are thought to be able to produce the $10^{14}-10^{15}$G fields in magnetars, neutron stars having such strong magnetic fields that their radiation is powered by magnetic field decay. Conservation of angular momentum during collapse increases the rotation rate by about the same factor, $10^ {10}$, yielding initial periods in the millisecond range. Thus young neutron stars should drive rapidly rotating magnetic dipoles. Figure 2: A Pulsar. (left or top): A diagram of the traditional magnetic dipole model of a pulsar. (right or bottom) Diagram of a simple dipole magnetic field near the polar caps. The inset figure shows a schematic of the electon-positron cascade which is required by many models of coherent pulsar radio emission (Both figures are from the Handbook of Pulsar Astronomy by Lorimer and Kramer). If the magnetic dipole is inclined by some angle $\alpha > 0$ from the rotation axis, it emits low-frequency electromagnetic radiation. Recall the Larmor formula for radiation from a rotating electric dipole: $$P_{\rm rad} = { 2 q^2 \dot{v}^2 \over 3 c^3} = {2 \over 3} { (q \ddot{r} \sin \alpha)^2 \over c^3} = {2 \over 3} { ( \ddot{p}_\bot )^2 \over c^3}~,$$ where $p_\bot$ is the perpendicular component of the electric dipole moment. By analogy, the power of the magnetic dipole radiation from an inclined magnetic dipole is $$\bbox[border:3px blue solid,7pt]{P_{\rm rad} = {2 \over 3} { ( \ddot{m}_\bot )^2 \over c^3}}\rlap{\quad \rm {(6A2)}}$$ where $m_\bot$ is the perpendicular component of the magnetic dipole moment. For a uniformly magnetized sphere with radius $R$ and surface magnetic field strength $B$, the magnetic dipole moment is (see Jackson's Classical Electrodynamics) $$m = B R^3~.$$ If the inclined magnetic dipole rotates with angular velocity $\Omega$, $$m = m_0 \exp ( - i \Omega t)$$ $$ \dot{m} = - i \Omega m_0 \exp ( - i \Omega t)$$ $$ \ddot{m} = \Omega^2 m_0 \exp ( -i \Omega t) = \Omega^2 m$$ $$P_{\rm rad} = {2 \over 3} { m_\bot^2 \Omega^4 \over c^3} = {2 m_\bot^2 \over 3 c^3} \biggl( {2 \pi \over P} \biggr)^4 = {2 \over 3 c^3} ( B R^3 \sin \alpha)^2 \biggl( {2 \pi \over P} \biggr)^4~,$$ where $P$ is the pulsar period. This electromagnetic radiation will appear at the very low frequency $\nu = P^{-1} < 1$ kHz, so low that it cannot be observed, or even propagate through the ionized ISM. The huge power radiated is responsible for pulsar slowdown as it extracts rotational kinetic energy from the neutron star. The absorbed radiation can also light up a surrounding nebula, the Crab nebula for example. The rotational kinetic energy $E_{\rm rot}$ is related to the moment of inertia $I$ by $$E_{\rm rot} = \frac{1}{2}I \Omega^2 = {2 \pi^2 I \over P^2}~.$$ Example: The moment of inertia of the "canonical" neutron star (uniform-density sphere with $M \approx 1.4 M_\odot$ and $R \approx 10$ km) is $$I = \frac{2}{5} M R^2 \approx {2 \cdot 1.4 \cdot 2.0 \times 10^{33} {\rm ~g} \cdot (10^6 {\rm ~cm})^2 \over 5 } \approx 10^{45} {\rm ~gm~cm}^2$$ Therefore the rotational energy of the Crab pulsar ($P = 0.033$ s) is $$ E_{\rm rot} = { 2 \pi^2 I \over P^2} \approx {2 \pi^2 \cdot 10^{45} {\rm ~g~cm}^2 \over (0.033 {\rm ~s})^2 } \approx 1.8 \times 10^{49} {\rm ~ergs}$$ Pulsars are observed to slow down gradually: $$\dot{P} \equiv {d P \over d t} > 0$$ Note that $\dot{P}$ is dimensionless (e.g., seconds per second). From the observed period $P$ and period derivative $\dot{P}$ we can estimate the rate at which the rotational energy is decreasing. $${d E_{\rm rot} \over d t} = {d \over d t} \left( \frac{1}{2} I \Omega^2 \right) = I \Omega \dot{\Omega}$$ $$\Omega = {2 \pi \over P} \qquad {\rm so} \qquad \dot{\Omega} = 2 \pi (-P^{-2} \dot{P})$$ $${d E_{\rm rot} \over d t} = I \Omega \dot{\Omega} = I {2 \pi \over P} { 2 \pi ( - \dot{P}) \over P^2 }$$ $$\bbox[border:3px blue solid,7pt]{{d E_{\rm rot} \over d t} = {-4 \pi^2 I \dot{P} \over P^3}}\rlap{\quad \rm {(6A3)}}$$ Example: The Crab pulsar has $P = 0.033$ s and $\dot{P} = 10^{-12.4}$. Its rotational energy is changing at the rate $${ d E_{\rm rot} \over d t} = { - 4 \pi^2 I \dot{P} \over P^3} = { - 4 \pi^2 \cdot 10^{45} {\rm ~g~cm}^2 \cdot 10^{-12.4} {\rm ~s~s}^{-1} \over (0.033 {\rm ~s})^3 } \approx -4 \times 10^{38} {\rm Thus the low-frequency (30 Hz) magnetic dipole radiation from the Crab pulsar radiates a huge power $P_{\rm rad} \approx - d E_{\rm rot} / d t \approx 10^5 L_\odot$, comparable with the entire radio output of our Galaxy. It exceeds the Eddington limit, but that is not a problem because the energy source is not accretion. It greatly exceeds the average radio pulse luminosity of the Crab pulsar, $ \sim 10^{30}$ erg s$^{-1}$. The long-wavelength magnetic dipole radiation energy is absorbed by and powers the Crab nebula (a "megawave oven"). Figure 3: Composite image of the Crab nebula. Blue indicates X-rays (from Chandra), green is optical (from the HST), and red is radio (from the VLA). Image credit If we use $- d E_{\rm rot} / d t$ to estimate $P_{\rm rad}$, we can invert Larmor's formula for magnetic dipole radiation to find $B_\bot = B \sin \alpha$ and get a lower limit to the surface magnetic field strength $B > B \sin \alpha$, since we don't generally know the inclination angle $\alpha$. $$P_{\rm rad} = - {d E_{\rm rot} \over d t}$$ $${2 \over 3 c^3} (B R^3 \sin \alpha)^2 \biggl( { 4 \pi^2 \over P^2} \biggr)^2 = {4 \pi^2 I \dot{P} \over P^3}$$ $$B^2 = { 3 c^3 I P \dot{P} \over 2 \cdot 4 \pi^2 R^6 \sin^2\alpha }$$ $$ B > \biggl( { 3 c^3 I \over 8 \pi^2 R^6} \biggr)^{1/2} (P \dot{P})^{1/2}$$ Evaluating the constants for the canonical pulsar in cgs units, we get $$ \biggl[ { 3 \cdot (3 \times 10^{10} {\rm ~cm~s}^{-1})^3 \cdot 10^{45} {\rm ~g~cm}^2 \over 8 \pi^2 (10^6 {\rm ~cm})^6 } \biggr]^{1/2} \approx 3.2 \times 10^{19}$$ so the minimum magnetic field strength at the pulsar surface is $$\bbox[border:3px blue solid,7pt]{\biggl( { B \over {\rm Gauss}} \biggr) > 3.2 \times 10^{19} \biggl( { P \dot{P} \over {\rm s} } \biggr)^{1/2}}\rlap{\quad \rm {(6A4)}}$$ Example: What is the minimum magnetic field strength of the Crab pulsar ($P = 0.033$ s, $\dot{P} = 10^{-12.4}$)? $$ \biggl( { B \over {\rm Gauss} } \biggr) > 3.2 \times 10^{19} \biggl( { 0.033 {\rm ~s} \cdot 10^{-12.4} \over {\rm s} } \biggr) = 4 \times 10^{12}$$ This is an amazingly strong magnetic field. Its energy density is $$U_{\rm B} = { B^2 \over 8 \pi} > 5 \times 10^{23} {\rm ~erg~cm}^{-3}$$ Just one cm$^3$ of this magnetic field contains over $5 \times 10^{16} {\rm ~J} = 5 \times 10^{16} {\rm ~W~s} = 1.6 \times 10^9 {\rm ~W~yr}$ of energy, the annual output of a large nuclear power plant. A cubic meter contains more energy than has ever been generated by mankind. If $(B \sin\alpha)$ doesn't change significantly with time, we can estimate a pulsar's age $\tau$ from $P \dot{P}$ by assuming that the pulsar's initial period $P_0$ was much shorter than the current period. Starting with $$B^2 = { 3 c^3 I P \dot{P} \over 8 \pi^2 R^6 \sin^2 \alpha}$$ we find that $$P \dot{P} = {8 \pi^2 R^6 (B \sin \alpha)^2 \over 3 c^3 I }$$ doesn't change with time. Rewriting the identity $ P \dot{P} = P \dot{P}$ as $ P dP = P \dot{P} d t$ and integrating over the pulsar's lifetime $\tau$ gives $$\int_{P_0}^P P d P = \int_0^\tau (P \dot{P})\, d t = P \dot{P} \int_0^\tau d t $$ since $P \dot{P}$ is assumed to be constant over time. $${P^2 - P_0^2 \over 2} = P \dot{P} \tau$$ If $P_0^2 \ll P^2$, the characteristic age of the pulsar is $$\bbox[border:3px blue solid,7pt]{\tau \equiv { P \over 2 \dot{P}}}\rlap{\quad \rm {(6A5)}}$$ Note that the characteristic age is not affected by uncertainties in the radius $R$, moment of inertia $I$, or $B \sin \alpha$; the only assumptions in its derivation are that $P_0 \ll P$ and that $P \dot{P}$ (i.e. $B$) is constant. Example: What is the characteristic age of the Crab pulsar ($P = 0.033$ s, $\dot{P} = 10^{-12.4}$)? $$ \tau = {P \over 2 \dot{P} } = {0.033 {\rm ~s} \over 2 \cdot 10^{-12.4}} \approx 4.1 \times 10^{10} {\rm ~s} \approx {4.1 \times 10^{10} {\rm ~s} \over 10^{7.5} {\rm ~s~yr}^{-1} } \approx 1300 {\rm Its actual age is about 950 years. Figure 4: P-Pdot Diagram. The $P \dot{P}$ diagram is useful for following the lives of pulsars, playing a role similar to the Hertzsprung-Russell diagram for ordinary stars. It encodes a tremendous amount of information about the pulsar population and its properties, as determined and estimated from two of the primary observables, $P$ and $\dot P$. Using those parameters, one can estimate the pulsar age, magnetic field strength $B$, and spin-down power $\dot E$. (From the Handbook of Pulsar Astronomy, by Lorimer and Kramer) The Lives of Pulsars Pulsars are born in supernovae and appear in the upper left corner of the $P \dot{P}$ diagram. If $B$ is conserved and they age as described above, they gradually move to the right and down, along lines of constant $B$ and crossing lines of constant characteristic age. Pulsars with characteristic ages $ < 10^5$ yr are often found in or near recognizable supernova remnants. Older pulsars are not, either because their SNRs have faded to invisibility or because the supernova explosions expelled the pulsars with enough speed that they have since escaped from their parent SNRs. The bulk of the pulsar population is older than $10^5$ yr but much younger than the Galaxy ($\sim 10^{10}$ yr). The observed distribution of pulsars in the $P \dot{P}$ diagram indicates that something changes as pulsars age. One controversial possibility is that the magnetic fields of old pulsars must decay on time scales $\sim 10^7$ yr, causing old pulsars to move almost straight down in the $P \dot{P}$ diagram until they fall below into the graveyard below the death line and cease radiating radio pulses. Almost all short-period pulsars below the spin-up line near $\log [\dot{P}/P({\rm sec})] \approx -16$ are in binary systems, as evidenced by periodic (i.e. orbital) variations in their observed pulse periods. These recycled pulsars have been spun up by accreting mass and angular momentum from their companions, to the point that they emit radio pulses despite their relatively low magnetic field strengths $B \sim 10^8$ G (the accretion causes a substantial reduction in the magnetic field strength). The magnetic fields of neutron stars funnel ionized accreting material onto the magnetic polar caps, which become so hot that they emit X-rays. As the neutron stars rotate, the polar caps appear and disappear from view, causing periodic fluctuations in X-ray flux; many are detectable as X-ray Millisecond pulsars (MSPs) with low-mass ($M \sim 0.1-1 M_\odot$) white-dwarf companions typically have orbits with small eccentricities. Pulsars with extremely eccentric orbits usually have neutron-star companions, indicating that these companions also exploded as supernovae and nearly disrupted the binary system. Stellar interactions in globular clusters cause a much higher fraction of recycled pulsars per unit mass than in the Galactic disk. These interactions can result in very strange systems such as pulsar–main-sequence-star binaries and MSPs in highly eccentric orbits. In both cases, the original low-mass companion star that recycled the pulsar was ejected in an interaction and replaced by another star. (The eccentricity $e$ of an elliptical orbit is defined as the ratio of the separation of the foci to the length of the major axis. It ranges between $e =0$ for a circular orbit and $e = 1$ for a parabolic orbit.) A few millisecond pulsars are isolated. They were probably recycled via the standard scenario in binary systems, but the energetic millisecond pulsars eventually ablated their companions away. Figure 5: Examples of Doppler variations observed in binary systems containing pulsars. (left or top) The Doppler variations of the globular cluster MSP J1748$-$2446N in Terzan 5. This pulsar is in a nearly circular orbit (eccentricity $e = 4.6\times10^{-5}$) with a companion of minimum mass 0.47 M$_\odot$. The difference between the semi-major and semi-minor axes for this orbit is only 51$\pm$4 cm! The thick red lines show the periods as measured during GBT observations. (right or bottom) Similar Doppler variations from the highly eccentric binary MSP J0514$-$4002A in the globular cluster NGC 1851. This pulsar has one of the most eccentric orbits known ($e = 0.888$) and a massive white dwarf or neutron-star companion. Emission Mechanisms The radio pulses originate in the pulsar magnetosphere. Because the neutron star is a spinning magnetic dipole, it acts as a unipolar generator. The total Lorentz force acting on a charged particle $$\vec{F} = q \biggl(\vec{E} + {\vec{v} \times \vec{B} \over c} \biggr)~.$$ Charges in the magnetic equatorial region redistribute themselves by moving along closed field lines until they build up an electrostatic field large enough to cancel the magnetic force and give $\ vert\vec{F}\vert = 0$. The voltage induced is about $10^{16}$ V in MKS units. However, the co-rotating field lines emerging from the polar caps cross the light cylinder (the cylinder centered on the pulsar and aligned with the rotation axis at whose radius the co-rotating speed equals the speed of light) and these field lines cannot close. Electrons in the polar cap are magnetically accelerated to very high energies along the open but curved field lines, where the acceleration resulting from the curvature causes them to emit curvature radiation that is strongly polarized in the plane of curvature. As the radio beam sweeps across the line-of-sight, the plane of polarization is observed to rotate by up to 180 degrees, a purely geometrical effect. High-energy photons produced by curvature radiation interact with the magnetic field and lower-energy photons to produce electron-positron pairs that radiate more high-energy photons. The final results of this cascade process are bunches of charged particles that emit at radio wavelengths. The death line in the $P \dot{P}$ diagram corresponds to neutron stars with sufficiently low $B$ and high $P$ that the curvature radiation near the polar surface is no longer capable of generating particle cascades. The extremely high brightness temperatures are explained by coherent radiation. The electrons do not radiate as independent charges $e$; instead bunches of $N$ electrons in volumes whose dimensions are less than a wavelength emit in phase as charges $Ne$. Since Larmor's formula indicates that the power radiated by a chage $q$ is proportional to $q^2$, the radiation intensity can be $N^2$ times brighter than incoherent radiation from the same total number $N$ of electrons. Because the coherent volume is smaller at shorter wavelengths, most pulsars have extremely steep radio spectra. Typical (negative) pulsar spectral indices are $\alpha \sim$1.7 ($S \propto\nu^{-1.7}$), although some can be much steeper ($\alpha >3$) and a handful are almost flat ($\alpha<0.5$). Pulsars and the Interstellar Medium (Note: the following closely follows the discussion in the Handbook of Pulsar Astronomy by Lorimer and Kramer) With their sharp and short-duration pulse profiles and very high brightness temperatures, pulsars are unique probes of the interstellar medium (ISM). The electrons in the ISM make up a cold plasma having a refractive index $$\mu = \biggl[{1 - \left(\frac{\nu_{\rm p}}{\nu}\right)^2}\biggr]^{1/2}~,$$ where $\nu$ is the frequency of the radio waves, $\nu_{\rm p}$ is the plasma frequency $$\bbox[border:3px blue solid,7pt]{\nu_{\rm p} = \biggl({e^2 n_{\rm e} \over \pi m_{\rm e}}\biggr)^{1/2} \approx 8.97 {\rm ~kHz} \times\biggl({n_{\rm e} \over {\rm cm}^{-3}}\biggr)^{1/2}}\rlap{\quad \rm {(6A6)}}$$ and $n_{\rm e}$ is the electron number density. For a typical ISM value $n_{\rm e} \sim 0.03$ cm$^{-3}$, $\nu_{\rm p}\sim1.5$ kHz. If $\nu < \nu_{\rm p}$ then $\mu$ is imaginary and radio waves cannot propagate through the plasma. For propagating radio waves, $\mu < 1$ and the group velocity $v_{\rm g} = \mu c$ of pulses is less than the vacuum speed of light. For most radio observations $\nu_{\rm p} \ll \nu$ so $$\bbox[border:3px blue solid,7pt]{v_{\rm g}\approx c\biggl(1 - \frac{\nu_{\rm p}^2}{2\nu^2}\biggr)}\rlap{\quad \rm {(6A7)}}$$ A broadband pulse moves through a plasma more slowly at lower frequencies than at higher frequencies. If the distance to the source is $d$, the dispersion delay $t$ at frequency $\nu$ is $$t = \int_0^d v_{\rm g}^{-1} dl - \frac{d}{c} = \frac{1}{c}\int_0^d \biggl(1 + \frac{\nu_p^2}{2\nu^2}\biggr) dl - \frac{d}{c}$$ $$ = \frac{e^2}{2\pi m_{\rm e} c} \frac{\int_0^dn_{\rm e} dl}{\nu^2}.$$ In astronomically convenient units this becomes $$\bbox[border:3px blue solid,7pt]{\biggl({t \over {\rm sec}}\biggr) \approx 4.149\times10^3 \biggl({{\rm DM} \over {\rm pc~cm}^{-3}}\biggr) \biggl({\nu \over {\rm MHz}}\biggr)^{-2}}\rlap{\quad \rm $$\bbox[border:3px blue solid,7pt]{{\rm DM} \equiv \int_0^dn_{\rm e} dl}\rlap{\quad \rm {(6A9)}}$$ in units of pc cm$^{-3}$ is called the dispersion measure. Figure 6: Pulsar dispersion. Uncorrected dispersive delays for a pulsar observation over a bandwidth of 288 MHz (96 channels of 3 MHz width each), centered at 1380 MHz. The delays wrap since the data are folded (i.e. averaged) modulo the pulse period. (From the Handbook of Pulsar Astronomy, by Lorimer and Kramer) Measurements of the dispersion measure can provide distance estimates to pulsars. Crude estimates can be made for pulsars near the Galactic plane assuming that $n_{\rm e} \sim 0.03$ cm$^{-3}$. However, several sophisticated models of the Galactic electron-density distribution now exist (e.g. NE2001; Cordes & Lazio 2002, astro-ph/0207156) that can provide much better ($\Delta d / d \sim 30\ %$) distance estimates. Since pulsar observations almost always cover a wide bandwidth, uncorrected differential delays across the band will cause dispersive smearing of the pulse profile. For pulsar searches, the DM is unknown and becomes a search parameter much like the pulsar spin frequency. This extra search dimension is one of the primary reasons that pulsar searches are computationally intensive. Besides directly determining the integrated electron density along the line of site, observations of pulsars can be used to probe the ISM via absorption by spectral lines of HI or molecules (which can be used to estimate the pulsar distance as well), scintillation (allowing estimates of the pulsar transverse velocity), and pulse broadening. Figure 7: Pulsar HI Absorption Measurement. With a model for the Galactic rotation, such absorption measurements can provide pulsar distance estimates or constraints. (From the Handbook of Pulsar Astronomy, by Lorimer and Kramer) Figure 8: Thin Screen Diffraction/Scattering model. Inhomogeneities in the ISM cause small-angle deviations in the paths of the radio waves. These deviations result in time (and therefore phase) delays that can interfere to create a diffraction pattern, broaden the pulses in time, and make a larger image of the pulsar on the sky. (From the Handbook of Pulsar Astronomy, by Lorimer and Kramer) Figure 9: Pulse broadening caused by scattering. Scattering of the pulsed signal by ISM inhomogeneities results in delays that cause a scattering tail. This scatter-broadening can greatly decrease both the observational sensitivity and the timing precision for such pulsars. (From the Handbook of Pulsar Astronomy, by Lorimer and Kramer) Figure 10: Diffractive Scintillation of a Pulsar. The top plots show dynamic spectra of the bright pulsar B0355+54 taken on two consecutive days with the GBT. The bottom plots show the so-called secondary spectra (the Fourier transforms of the dynamic spectra) and the so-called scintillation arcs (and moving arclets). (Figure provided by Dan Stinebring)
{"url":"http://www.cv.nrao.edu/course/astr534/Pulsars.html","timestamp":"2014-04-19T11:56:56Z","content_type":null,"content_length":"38455","record_id":"<urn:uuid:86eff717-d4a9-4d17-8bab-ed177d7f860a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 25. Boost.TR1 The TR1 library provides an implementation of the C++ Technical Report on Standard Library Extensions. This library does not itself implement the TR1 components, rather it's a thin wrapper that will include your standard library's TR1 implementation (if it has one), otherwise it will include the Boost Library equivalents, and import them into namespace std::tr1.
{"url":"http://www.boost.org/doc/libs/1_47_0/doc/html/boost_tr1.html","timestamp":"2014-04-21T10:14:27Z","content_type":null,"content_length":"10703","record_id":"<urn:uuid:6932640c-78ee-48e7-bca4-d1e14fc0dc8e>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistics 635 > Peter Craigmile > Notes > 12_nonlinear.pdf | StudyBlue Simply amazing. The flashcards are smooth, there are many different types of studying tools, and there is a great search engine. I praise you on the awesomeness. - Dennis I have been getting MUCH better grades on all my tests for school. Flash cards, notes, and quizzes are great on here. Thanks! - Kathy I was destroying whole rain forests with my flashcard production, but YOU, StudyBlue, have saved the ozone layer. The earth thanks you. - Lindsey This is the greatest app on my phone!! Thanks so much for making it easier to study. This has helped me a lot! - Tyson
{"url":"http://www.studyblue.com/notes/note/n/12_nonlinearpdf/file/381119","timestamp":"2014-04-17T00:51:33Z","content_type":null,"content_length":"39384","record_id":"<urn:uuid:195ee13c-7d5e-47f8-8d2d-dd467959dabc>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
Jones Calculus The Jones matrix calculus is a matrix formulation of polarized light that consists of 2 × 1 Jones vectors to describe the field components and 2 × 2 Jones matrices to describe polarizing components. While a 2 × 2 formulation is "simpler" than the Mueller matrix formulation the Jones formulation is limited to treating only completely polarized light; it cannot describe unpolarized or partially polarized light. The Jones formulation is used when treating interference phenomena or in problems where field amplitudes must be superposed. A polarized beam propagating through a polarizing element is shown below. The 2 × 1 Jones column matrix or vector for the field is where E[0x] and E[0y ]are the amplitudes, δ[x] and δ[y] are the phases, and i = √-1. The components E[x] and E[y] are complex quantities. An important operation in the Jones calculus is to determine the intensity I: The row matrix is the complex transpose † of the column matrix, so I can be written formally as It is customary to normalize I to 1. The Jones vectors for the degenerate polarization states are: The Jones vectors are orthonormal and satisfy the relation E[i]† · E[j] = δ[ij], where δ[ij](I = j ,1, I ≠ j,0) is the Kronecker delta. The superposition of two orthogonal Jones vectors leads to another Jones vector. For example, which, aside from the normalizing factor of 1/√2 , is L+45P light. Similarly, the superposition of RCP and LCP yields which, again, aside from the normalizing factor is seen to be LHP light. Finally, in its most general form, LHP and LVP light are Superposing E[LHP] and E[LVP] yields This shows that two orthogonal oscillations of arbitrary amplitude and phase can yield elliptically polarized light. A polarizing element is represented by a 2 × 2 Jones matrix It is related to the 2 × 1 output and input Jones vectors by E' = J · E. For a linear polarizer the Jones matrix is For an ideal linear horizontal and linear vertical polarizer the Jones matrices take the form, respectively, The Jones matrices for a wave plate (E[0x] = E[0y] = 1) with a phase shift of φ/2 along the x-axis (fast) and φ/2 along the y-axis (slow) are (i = √-1 ) The Jones matrices for a QWP φ = π/2 and HWP φ = π are, respectively, For an incident beam that is L-45P the output beam from a QWP aside from a normalizing factor is which is the Jones vector for RCP light. Finally, the Jones matrix for a rotator is For a rotated polarizing element the Jones matrix is given by The Jones matrix for a rotated ideal LHP is Similarly, the Jones matrix for a rotated wave plate is For a HWP φ = π the matrix reduces to The matrix is almost identical to the matrix for a rotator except that the presence of the negative sign with cosθ rather than with sinθ along with the factor of 2 shows that the matrix is a pseudo-rotator; a rotating HWP reverses the polarization ellipse and doubles the rotation angle. An application of the Jones matrix calculus is to determine the intensity of an output beam when a rotating polarizer is placed between two crossed polarizers. The Jones vector for the output beam is E' = J · E and the Jones matrix for the three polarizer configuration is For input LHP light the intensity of the output beam is I = E'† · E' = [1 - cos(4θ)]/8. An important optical device is an optical isolator. The Jones matrix equation and its expansion is Thus, no light is returned to the optical source and the circular polarizer acts as an ideal optical isolator.
{"url":"http://spie.org/x32380.xml","timestamp":"2014-04-18T15:39:36Z","content_type":null,"content_length":"32667","record_id":"<urn:uuid:c62e2466-653c-43cc-9c58-2fc88941565c>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Metric Mapping Evolution Through a Matrix Group Because Matrix Groups are manifolds, they can be parametrized by a subspace of $$\mathbb{R}^n$$. We will investigate a subgroup of $$GL_2$$, the square 2x2 invertible matrices. In the left panel, we see a velocity field, representing the initial velocity of a path through this subgroup. The input parameters control parameters of this initial velocity. On the right, a circle is deformed according to the path through GL2 traced out according to the initial velocity. Play around with the parameters and take note of how differing the parameters results in a different final image! Try keeping the tmax fixed at various times and playing with the other four x0: , y0: , tmax:
{"url":"http://cis.jhu.edu/education/introPatternTheory/chapters/metric/metric3.html","timestamp":"2014-04-18T23:15:04Z","content_type":null,"content_length":"4147","record_id":"<urn:uuid:832bfca7-527a-4b7f-8c87-3268ffecf260>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
Recommended Download Links Average Speed 268.94 KBVector Mechanics Engineers 8th Ed - Solution ManualSolution Manual Vector Mechanics For Engineers Dynamics 8th E.zip http://engr-e-book.blogspot.com/search/label/solution manuals 14.81 KB Reply to "Untitled" to accompany Vector mechanics for engineers dynamics 54.07 MB Beer & Johnston Vector Mechanics for Engineers Beer & Johnston Vector Mechanics for Engineers 54.07 MB Beer & Johnston Vector Mechanics for Engineers Beer & Johnston Vector Mechanics for Engineers 54.07 MB Beer & Johnston Vector Mechanics for Engineers 22.54 KB E study guide for vector mechanics for engineers dynamics by beer isbn 9780073212203 cram101 te.pdf 42.15 KB E Study Guide For Vector Mechanics For Engineers Dynamics By Ferdinand Beer Isbn 9780077295493 Cram101 Textbook Reviews.pdf 22.55 KB E study guide for vector mechanics for engineers dynamics by ferdinand beer isbn 9780077295493.pdf 54.07 MB Beer & Johnston Vector Mechanics for Engineers Beer & Johnston Vector Mechanics for Engineers 102.73 MBVector Mechanics for Engineers - Statics and Vector Mechanics for Engineers - Statics and 102.73 MBVector Mechanics for Engineers - Statics and Vector mechanics for engineers - statics and
{"url":"http://www.fileshut.biz/v/vector-mechanics-for-engineers-dynamics-9th-solution-manual","timestamp":"2014-04-17T18:57:12Z","content_type":null,"content_length":"31299","record_id":"<urn:uuid:dfc1dc2b-a62d-4b01-a729-4fef18f54732>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
Student Support Forum: 'Vertices of a polygon defined by inequalities' topic Author Comment/Response Bill Simpson Turn each of your inequalities into an equality. Thus x+y<7 becomes x+y==7. For every pair of equalities use Solve to find the unique intersection. Thus {x,y}/.Solve[{3x+y==7, 2x-y==2}, {x,y}][[1]] The Outer[] function might help you do this. Use ConvexHull on the resulting set of points. You might try doing this one small step at a time, while verifying that each step gives a correct result, before trying everything at once. URL: ,
{"url":"http://forums.wolfram.com/student-support/topics/267710/","timestamp":"2014-04-21T10:00:32Z","content_type":null,"content_length":"25940","record_id":"<urn:uuid:155787ab-e5e5-43bc-8d5d-6fcb7b15e599>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Pesquisa Operacional Services on Demand Related links Print version ISSN 0101-7438 Pesqui. Oper. vol.28 no.1 Rio de Janeiro Jan./Apr. 2008 Ranking graph edges by the weight of their spanning arborescences or trees Paulo Oswaldo Boaventura-Netto^* Program of Production Engineering / COPPE - Federal University of Rio de Janeiro (UFRJ) - Rio de Janeiro RJ, Brazil boaventu@pep.ufrj.br A result based on a classic theorem of graph theory is generalized for edge-valued graphs, allowing determination of the total value of the spanning arborescences with a given root and containing a given arc in a directed valued graph. A corresponding result for undirected valued graphs is also presented. In both cases, the technique allows for a ranking of the graph edges by importance under this criterion. This ranking is proposed as a tool to determine the relative importance of the edges of a graph in network vulnerability studies. Some examples of application are presented. Keywords: graphs; matrix-tree theorem; vulnerability. Um resultado baseado em um teorema clássico da teoria dos grafos é aqui generalizado para grafos valorados, permitindo a determinação do valor total das arborescências parciais com raiz dada que contenham um arco dado, em um grafo orientado valorado. Um resultado correspondente para grafos não orientados valorados é também apresentado. Em ambos os casos, a técnica descrita permite uma hierarquização por importância das ligações do grafo, sob este critério. Esta hierarquização é proposta como uma ferramenta para determinar a importância relativa das ligações de um grafo em estudos sobre vulnerabilidade de redes. Alguns exemplos aplicados são apresentados. Palavras-chave: grafos; teorema "matrix-tree"; vulnerabilidade. 1. Introduction The study of maximal acyclic structures in directed and undirected graphs, especially those with spanning arborescences and trees, is a classical subject of graph theory, which we follow in this work in order to propose a theoretical tool for network vulnerability analysis. The basic notions and results of graph theory used in the theoretical developments can be found in Harary (1971), Berge (1973), Bondy & Murty (1976) and Gross & Yellen (2005). We will use the notation and concepts as in Berge. Since the publication by Tutte of the celebrated matrix-tree theorem (see for instance [Be73] ), the theoretical interest of the theme, especially in its relationship with algebraic graph theory, brought to the literature a number of important works such as those of Kelmans (1997), who early in 1965 had already published in the field. More recently, the interest contributed by applications such as communication networks and structural design has inspired a number of studies dedicated to the building of these structures, subject to various constraints such as total length of shortest paths, Wu et al. (2000), radius bounding, Serdjukov (2001), leaf degree, Kaneko (2001) and minimum diameter, Hassin & Tamir (1995). The structures of fullerene graphs, of interest in organic chemistry, discussed by Brown et al. (1996), also enhanced the interest in research on spanning trees. The formulation of algorithms for finding graph parameters related to spanning trees and arborescences resulted in works concerning the problem of the k most vital edges, Liang (2001), the unranking of arborescences, Colburn et al. (1996), the counting of minimum-weight spanning trees, Broder & Mayr (1997) and the reliability improvement of a network through a single edge addition, Fard & Lee (2001). A work on digraphs and graphs concerning the effect of link addition and removal on the number of spanning arborescences and trees is found in Boaventura-Netto (1984). Some questions concerning edge-weighted graphs, where the arborescence weight is the product of the weights of its arcs, were first presented by Bott & Mayberry (1954) and more recently discussed by Chung & Langlands (1996). This work proposes a criterion for ranking the edges of a directed (undirected) graph based on the value of the spanning arborescences (trees) to which it belongs. To that end, we develop a technique using the approach of Boaventura-Netto (1984) and Colburn et al. (1996) to deal with edge-weighted graphs where the substructure weight is given by the sum of edge values. It is fitting to observe that the total weight of the spanning arborescences (trees) of a graph, calculated for each one as the sum of its edges, could be obtained as shown in Chung & Langlands (1996), substituting exponentials e^w(i,j) for the link weights w(i,j) and taking logarithms at the end. While this calculation is theoretically very simple, it would easily cause overflow problems, unless both graph order and edge values were very small. On the other hand, the technique described in Boaventura-Netto (1984) and Colburn et al. (1996) is limited to equal values for every edge. 2. Some theoretical background When considering directed graphs, they will always have a root and we will call arcs their directed edges. The calculations will be based on a given vertex r which we call a root, in the sense of the definition used with directed graphs. Throughout the text, we will use the shorthand r-SPA for r-rooted spanning arborescence(s) and SPT for spanning tree(s). Def. 2.1: The Laplacian Q(G) = [q[ij]] of a directed graph G = (X,U) is the matrix given by q[ij] = d^-(j) i = j q[ij] = -a[ij] i ¹ j, where A(G) = [a[ij]] is the adjacency matrix of G and d^-(j) is the indegree of vertex j. With undirected graphs, we substitute the degree d(j) for the indegree d^-(j). Def. 2.2: The r^th-vertex reduced Laplacian of G, Q[r](G), is the matrix obtained from Q(G) by the removal of its r^th line and column. Theorem 2.1 (Matrix-tree theorem): Given a directed (undirected) graph G and its Laplacian Q(G), the cardinality of the set H[r](G) of r-SPA (of SPT) is the value of the determinant of Q[r](G) for a given r Î X (for any r Î X). Proof. Berge (1973). Def. 2.3: A subset R[ij](r) Í H[r](G) of r-SPA from G is said to be associated with an ordered vertex pair (i,j) if we consider this set: • to be generated by the addition of (i,j) to G, if the arc (i,j) Ï U; • to be eliminated by the removal of (i,j) from G, if the arc (i,j) Î U. The following theorem is an independent result of both Boaventura-Netto (1984) and Colburn et al. (1996). Theorem 2.2: Let G = (X,U) be a directed strongly-connected graph. Let (s,t), s, t Î X be a given ordered vertex pair and let G[1] = G (s,t) and G[2] = G È (s,t) be the graphs obtained from G respectively by the removal of arc (s,t) if (s,t) Î G, or by the addition of arc (s,t) if (s,t) Ï G. Then the cardinality of R[st](r) as in Def. 2.3, is given by a matrix B[r] = [ where the Q[r](G) or, more precisely, the elements of the transposed adjoint matrix Proof. Two graphs G and G* (where G* = G[1] or G* = G[2]) differing one from another by a single arc (s,t) will have their Laplacians differing by two elements on their t^th columns, that is, q[st] (equal to 0 or 1) and q[tt] (whose value will change by one unity). The corresponding reduced Laplacians will differ from one another, either by two elements if s ¹ r or by one element if s = r as in this case the line and the column r will not be present. The number of r-SPA in R[ij](r) will be the modulus of difference between the determinants of the two reduced Laplacians, Q[r](G) and Q[r](G*). We will then expand |Q[r](G)| and |Q[r](G*)| by their cofactors associated with the elements of their t^th columns. Let us define K[st] = original graph G) to take into account the sign of the expansion terms concerned with (s,t). For the sake of clarity, we will isolate from the general sum the elements which change from one expansion to the other. We can correct the value of q[tt] in the new matrix by adding K[st] to it: if (s,t) doesn t exist then K[st] = +1, else K[st] = 1. For the new value of q[st] we will have to subtract K[st ], given the negative sign of q[st ]. Then we have, with X[r] = X {r}, The difference between (4a-b) and (3), affected by the sign of K[st], corresponds to the thesis. Def. 2.4: With G = (X,U) directed (undirected), we will call B[r] = [Î X, the r^th-spanning arborescence pertinence matrix, r-APM (B = [b[ij]], for any r Î X, the spanning tree pertinence matrix, Example 2.1: Let G be the graph represented in Fig. 2.1, where the alphabetic ordering of the arc labels corresponds to the lexicographic ordering of the corresponding pairs: We have where the underlined bold entries are associated with void pairs (non-existent arcs in G). The addition of w = (1,4) to the graph, for instance, creates six new 1-SPA s, which can be identified as acw, abw, bfw, agw, cfw and gfw. On the other hand (4,2) that is, f is in the graph but does not belong to any 1-SPA. The expressions (3) and (4a-b) can be applied to undirected graphs, by considering their symmetric adjacency matrices, to obtain the TPM. But we have two differences: • there is only one TPM B for a given graph, regardless of the choice of the root; • the choice of a root implies a directed structure: we thus have to consider pairs of symmetric elements in order to take all tree-forming possibilities into account. In the example below we have, for instance, b[43] = 1 and b[34] = 4 but we will want to find in both entries the sum corresponding to the sole structure containing (4,3) and the four structures containing (3,4). If we then consider the symmetric graph associated with the graph of Fig. 2.1, the matrix obtained from (1a-c) will be B but our final answer will be B = B + (B )^T. Both matrices are shown below, with the entries referred to in bold underlined characters: It is easy to calculate a reduced Laplacian determinant of the original undirected graph, whose value is 8. According to B, the addition of the edge (1,4) corresponds to 8 new trees, summing up to 16. But now we have a complete graph, for which we have the already-known result of 4^4 -2 = 16 trees, Harary (1971). 3. The generalization for valued graphs Now let G = (X,U) be an arc-valued directed graph with value matrix V = [v[ij]]. Def. 3.1: The valued Laplacian Q(G) = [q[ij]] of a valued graph G = (X,U) is the matrix given by A known result (see for instance Chung & Langlands (1996)) states that, if H[r](G) is the set of r-SPA on G, then the determinant of the valued reduced Laplacian verifies that where the weight of any arborescence is defined as the product of the values of its arcs. (If v[ij] = 1 for every (i,j) Î U we have the result already given by Theorem 2.1). We want to define the weight of a given arborescence as the sum of its arc values, since it should be suitable for many applications and thus avoid going through exponentials as already discussed. As a consequence of Def. 2.3, we can observe that the exclusion (inclusion) of the arc (s,t) from (in) G implies the exclusion (inclusion) of [ij](r) Ì H[r](G). Considering the entries of B[r] which are associated with arcs of G, the arc counting for the entire set of spanning arborescences will be and we have, for the corresponding total value, Now let us consider the valued graph G[st] = G (s,t) (G[st] = G È (s,t)) and its adjacency matrix, whose reduced Laplacian matrix will be To define r-rooted spanning arborescence value matrices (r-APVM) , S[r] = [Î X, of the r-SPA disconnected by the removal (created by the addition) of the arc (s,t), we write where in the case of (s,t) removal (addition) we will evidently have [st] has the same meaning as in the proof of Theorem 2.2. In a similar way we can build a spanning tree value matrix (TPVM). The counting matrices [st ], according to (1a-c) and (2), It follows immediately that if they exist: if for K[st] = -1 (edge removal) and for given r, s, t we have in Q[r]| in (8a,b), every r-SPA will contain (s,t), then | We have already obtained B[r ]. As Q[r] and ^-1 and their adjoints. To that end, we consider a transform matrix E[st] such that Let us express both inverses through their column vectors, On the other hand, E[st] can be written as where [e[r]][i] (i ¹ r) is the i^th unit column vector without the r[th] component and the vector y being given by To calculate h, we observe that the changes brought to Q[r] by the addition or the removal of an arc (s,t) can be written as Then, premultiplying (15a,b) by ^th column, According to (14), the first member of these equations is equal to y; then, from (16a,b) we have Then, with (14) we obtain and finally, from (10) and (12), from which we obtain the corresponding adjoints. These expressions cannot be used in the removal case for any arc (s,t) such that Q[r]|, as already discussed: in this case we will have, from (6), For undirected graphs, the change in the original Laplacian affects the columns s and t: we thus have to use the proper indexing to apply the procedure once more. The complexity The most time-consuming operations in the process are those associated with matrix inversions and products, which are at most O(n^3). Each root r in a directed graph requires at most O(n^3) operations for its r-APM to be obtained. (The same is required for the TPM of an undirected graph). These exigencies apply to each vertex pair (i,j) such that i ¹ j and j ¹ r for the determination of the r-APVM s and, as a consequence, of the TPVM. Therefore the complexity in this last case is O(mn^3). 4. Some examples Example 4.1: Let us consider the following value matrices for the graph of Fig. 2.1 (which we will now denote G, since it is now arc-valued) and for the corresponding undirected graph G : Let s = r = 1. To find S[1] entries, we will first work by inspection. The counting of 1-SPA given by Theorem 2.2 gives us |Q[1]| = 3. The three arborescences, with the values corresponding to V(G) and their total weights, are shown in Fig. 4.1 below: We have, from (5) and (6), with B[1 ]as from Example 2.1, We can see that this value is the total value of the nine arcs in Fig. 4.1, as 11 + 10 + 12 = 33. The arcs (1,2) and (2,4) belong to every 1-SPA, so their removal values are equal to this last result, It is easy to find the remaining removal values, : we can see that each one of these arcs belongs to a single arborescence. We will next examine the case of (s,t) = (1,4) Ï U addition to G, now working with the algebraic technique already discussed. We will be assigning the weight 1 to the new arcs. Here s = r = 1: with [q[1]][4] = [-1, 0, 1 ] and K[14]= +1 we apply (19a) to obtain h = [-1/3, 0, 1/3 ]. Thus with t = 4, we have From (8a) and (9a), | We already have B[1 ]from Example 2.2. Applying (7) with = 0 we obtain for (i, j) Î U, a total weight of = 51. It is easy to verify that this value corresponds to the 6 new arborescences in Fig. 4.2, where (1,4) is dashed and the arc orientations are to be taken from the root (in white) on. For the addition of (3,2) to G we obtain one new arborescence, ((1,3),(3,2),(2,4)) with value 8 and for that of (3,4) (also to the original G), three new arborescences are created, with total value 9 + 7 + 10 = 26. The complete 1-APVM for this graph is and the corresponding TPVM, obtained as already discussed, is Example 4.2: We will consider a subset of the central region of the city of Rio de Janeiro, represented by a directed graph where we might be interested in studying the impact on traffic of blocking off a given street, for public works or as a consequence of an accident. The vertices will be associated with the corners and the arcs with street segments between two corners, with the direction corresponding to the traffic flow, as in Fig. 4.3 below: The figure shows the graph and a table with weights given to different streets according to their importance to traffic flow. Pedestrian-only streets were suppressed. Table 4.1 shows the percentual values of the participation of some arcs in sets of arborescences with given roots. It is interesting to note that the arcs are chiefly the same in the examples shown, but the percentual values show significant changes from one root to another. This table does not include arcs (e.g. (1,5)) which are the unique exit for a vertex as they obviously belong to every partial arborescence. Example 4.3: This last example deals with the French commutation network TRANSPAC, Bulteau (1997). The vertices correspond to French cities. In Fig. 4.4 below, the position of vertices in the plane is approximately that of the corresponding cities. The more important edges have values equal to 2 or 3; the remaining ones have value 1. Here we are interested both in the effect of an eventual edge failure and in the gain one could obtain by building a new connection. The evaluative measure in both cases will be the number of spanning trees associated with the concerned edge, which belongs to the network in the first case and is projected to be built in the second case. Tables 4.2 and 4.3 below show, respectively, the number of spanning trees associated with fifty vertex pairs, both in the first and in the second case. We can see that a number of the pairs giving a strong increase in the number of spanning trees contain 2-degree vertices, which can be easily associated with the level of connectivity. From these it is interesting to observe that 1 and 3 are geographically near one another. Thus the building of an edge (1,3) should therefore have a good cost-benefit ratio, since it would increase the number of spanning trees in the graph by a factor of 2.82. On the other hand, the most critical edge in terms of failure is (10,22), which has value 3 and connects two high-degree points (10 is Paris). The participation of vertices in pairs which are important both for building new edges and in terms of edge failures is given by Table 4.4. 5. Conclusions In this work, we have presented a technique for evaluating the relative importance of the edges in a directed (undirected) edge-weighted graph by using as a criterion the weights of the spanning arborescences (spanning trees) associated with each pair of vertices, as defined in the text. We consider this criterion to be a useful tool for ranking both existent edges according to the importance of their eventual failure in a network, and non-existent edges according to their positive influence within the planning of network improvements. When dealing with this subject, we took into account the needs of some applications related to the fields of communication and transportation, as we found them expressed by authors working in spanning arborescences and trees. For the support for this work, we are indebted to Laboratoire PRiSM of Versailles University in St. Quentin-on-Yvelines, particularly to Professor Catherine Roucairol. We are also indebted to CNPq (Brazilian Council for Scientific and Technological Development) for their financial support of our visit to PRiSM. (1) Berge, C. (1973). Graphes et hypergraphes. Dunod, Paris. [ Links ] (2) Berman, K.A. (1980). A proof of Tutte s trinity theorem and a new determinant formula. SIAM Journal on Algebraic and Discrete Methods, 1(1), 64-69, Society for Industrial and Applied Mathematics. [ Links ] (3) Bott, R. & Mayberry, J. (1954). Matrices and trees. In: Economic activity analysis [edited by O. Morgenstern], John Wiley and Sons, New York, 391-340. [ Links ] (4) Brown, T.J.N.; Mallion, R.B.; Pollak, P. & Roth, A. (1996). Some methods for counting the spanning trees in labelled molecular graphs, examined in relation to certain fullerenes. Discrete Applied Mathematics, 67, 51-66. [ Links ] (5) Broder, A.Z. & Mayr, E.W. (1997). Counting minimum weight spanning trees. Journal of Algorithms, 24, 171-176. [ Links ] (6) Boaventura-Netto, P.O. (1984). La différence d un arc et le nombre d arborescences partielles d un graphe. Pesquisa Operacional, 4(2), 12-20. [ Links ] (7) Bulteau, S. (1997). Étude topologique des réseaux de communication: fiabilité et vulnerabilité. D.Sc. Thesis, University of Rennes 1, France. [ Links ] (8) Chung, F.R.K. & Langlands, R.P. (1996). A combinatorial Laplacian with vertex weights. Journal of Combinatorial Theory (A), 75, 316-327. [ Links ] (9) Colburn, C.J.; Myrwold, W.J. & Neufeld, E. (1996). Two algorithms for unranking arborescences. Journal of Algorithms, 20, 268-281. [ Links ] (10) Fard, N. & Lee, T.H. (2001). Spanning tree approach in all-terminal network reliability expansion. Computer Communications, 24(13), 1348-1353. [ Links ] (11) Gross, J.L. & Yellen, J. (2005). Graph theory and its applications. 2^nd ed., Chapman & Hall / CRC. [ Links ] (12) Hadley, G. (1965). Linear Algebra. Addison-Wesley, Reading, New York, London. [ Links ] (13) Harary, F. (1971). Graph Theory. Addison-Wesley, Reading, New York, London. [ Links ] (14) Hassin, R. & Tamir, A. (1995). On the minimum diameter spanning tree problem. Information Processing Letters, 53, 109-111. [ Links ] (15) Kaneko, A. (2001). Spanning trees with constraints on the leaf degree. Discrete Applied Mathematics, 115, 73-76. [ Links ] (16) Kelmans, A.K. (1997). Transformations of a graph increasing its Laplacian polynomial and number of spanning trees. European Journal of Combinatorics, 18, 35-48. [ Links ] (17) Liang, W. (2001). Finding the k most vital edges with respect to minimum spanning trees for fixed k. Discrete Applied Mathematics, 113, 319-327. [ Links ] (18) Serdjukov, A.I. (2001). On finding a maximum spanning tree of bounded radius. Discrete Applied Mathematics, 114, 249-253. [ Links ] (19) Wu, B.Y.; Chao, K.M. & Tang, C.Y. (2000). Approximation algorithms for the shortest total path length spanning tree problem. Discrete Applied Mathematics, 105, 273-289. [ Links ] Recebido em 07/2006; aceito em 11/2007 Received July 2006; accepted November 2007 * Corresponding author / autor para quem as correspondências devem ser encaminhadas
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0101-74382008000100004&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-20T12:15:24Z","content_type":null,"content_length":"60691","record_id":"<urn:uuid:6cc26298-d9e4-49e4-96f5-2a29114c5027>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
Lawndale, CA Math Tutor Find a Lawndale, CA Math Tutor ...I want my services to be viewed as a long-term investment, not a short-term expense. For that reason, I will only charge for my services if you are completely satisfied. All I expect is a hard-working student who will put in the time necessary for the level of success they want to reach. 22 Subjects: including algebra 1, vocabulary, grammar, Microsoft Excel ...As a civil engineer, I have had extensive working knowledge of geometry concepts which I can use to help bring real-world application to the problems presented in this branch of math. The objective of pre-algebra is to prepare the student for the study of algebra. I have helped develop study skills for elementary school children as a mentor. 5 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I have tutored algebra at Los Angeles City College, and I have also tutored children in 8th grade math at Thomas Starr King Middle School in Los Angeles, CA. I have done private tutoring for the comprehensive high school exit exam as well. I have tutored precalculus at both the high school and college level. 65 Subjects: including algebra 1, chemistry, logic, differential equations ...This past year I taught both College Writing and Introduction to Poetry (literature) to undergraduates. I have tutoring experience in math, writing, English, Spanish, etc, for elementary, middle, and high school-aged students. I scored above the 90th percentile in all my standardized tests (inc... 13 Subjects: including SAT math, Spanish, English, SAT writing ...I've been tutoring Calculus ever since I graduated high school. Perhaps the best example of my experience here is when I helped one student get his yearly grade from an F to a B from the end of one semester to the end of the next. I took AP Physics for the first time in 8th grade, as a student of the only middle school in the country to offer the course, at least at the time. 6 Subjects: including calculus, physics, precalculus, trigonometry Related Lawndale, CA Tutors Lawndale, CA Accounting Tutors Lawndale, CA ACT Tutors Lawndale, CA Algebra Tutors Lawndale, CA Algebra 2 Tutors Lawndale, CA Calculus Tutors Lawndale, CA Geometry Tutors Lawndale, CA Math Tutors Lawndale, CA Prealgebra Tutors Lawndale, CA Precalculus Tutors Lawndale, CA SAT Tutors Lawndale, CA SAT Math Tutors Lawndale, CA Science Tutors Lawndale, CA Statistics Tutors Lawndale, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/lawndale_ca_math_tutors.php","timestamp":"2014-04-21T07:49:31Z","content_type":null,"content_length":"24169","record_id":"<urn:uuid:6fe6c485-639c-47c7-9421-1b786e358a10>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: what is the slope of the line passing through the points (2,7) and (-1,3)? a.1/7 b.3/4 c.4/3 d.1/3 • 6 months ago • 6 months ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/525852ebe4b0ede9e447fd45","timestamp":"2014-04-16T16:41:37Z","content_type":null,"content_length":"58585","record_id":"<urn:uuid:ee1527b6-d578-45ac-97f8-27ed65cbbd3f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
IntroductionBackground ModelEstimation of Unmixing MatrixImage PreprocessingMeasure of GaussianityParameter EstimationThe Background Gaussianity and Independent Component SeparabilityMotion DetectionExperiments and ResultsImplementation IssuesExperimental ModelResults and DiscussionsConclusionsReferencesA Motion Detection and PCA approachFigures and Tables Sensors Sensors 1424-8220 Molecular Diversity Preservation International (MDPI) 10.3390/s100606092 sensors-10-06092 Article Background Subtraction Approach based on Independent Component Analysis Jiménez-HernándezHugo Centro de Investigación en Ciencia Aplicada y Tecnología Aplicada Cerro Blanco No. 141. Col. Colinas del Cimatario, Santiago de Querétaro, Querétaro, Mexico Author to whom correspondence should be addressed; E-Mail: hugojh@gmail.com; Tel.: +52-1-442-229-0804; +52-1-442-211-9800 Ext. 1330; Fax: +52-1-442-211-9839. 2010 18 6 2010 10 6 6092 6114 26 4 2010 16 5 2010 28 5 2010 © 2010 by the authors; licensee MDPI, Basel, Switzerland. 2010 This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/). In this work, a new approach to background subtraction based on independent component analysis is presented. This approach assumes that background and foreground information are mixed in a given sequence of images. Then, foreground and background components are identified, if their probability density functions are separable from a mixed space. Afterwards, the components estimation process consists in calculating an unmixed matrix. The estimation of an unmixed matrix is based on a fast ICA algorithm, which is estimated as a Newton-Raphson maximization approach. Next, the motion components are represented by the mid-significant eigenvalues from the unmixed matrix. Finally, the results show the approach capabilities to detect efficiently motion in outdoors and indoors scenarios. The results show that the approach is robust to luminance conditions changes at scene. background subtraction independent component analysis motion detection One of fundamental steps, in several computer vision systems, is the motion detection process. The moving objects represent the main features used to analyze the motion dynamics at scene. The background and foreground commonly represent fixed areas and moving areas respectively. The process of identifying foreground and background objects is a tough task. The background subtraction approach consists on label moving objects and fixed regions. However, there are several factors, like luminance, reflections, shadows, or even camera shaking that make this process difficult [1, 2]. The foreground labeling process can be considered as a general classification problem, where using a model M, we try to estimate a set of parameters P = {p[1], p[2], . . .}, which correctly label/ classify background and foreground objects. The parameter estimation uses previous knowledge about scenario and object properties. The capabilities of classification depend on both the raw data and the model M. It is usually pretended to build a single binary classifier [2–4] (if it only identifies fixed and moving zones) or multiple classifiers [1, 5] (if it wishes to model several types of moving objects and fixed layers). The success of motion detection for a particular model depends on the scene constraints, the data dynamics on both temporal and spatial conditions, and the separability of data under the current model M. In the literature, one of the most accepted approaches is the work of Stauffer and Grimson [3]. They proposed an approach based on the assumption that the foreground and the background can be modeled as a mixture of Gaussians (MOG), where the Gaussians with low probability represent moving objects. This approach is computationally efficient; moreover, the convergence velocity and the spatial object consistency are not considered. Furthermore, in their work, Toyama et al. [1] proposed a background approach capable of discarding periodic movements in background as the waves of the sea or the up and down of electric stairs. Moreover, this approach is limited to scenarios with the fixed luminance conditions. Next, Elgammal et al. [6] proposed an approach based on a pixel probability density function (pdf) approximation using last immediate frames, which are smoothed with a Gaussian kernel. Also, Horprasert et al. [2] proposed a method based on object segmentation that supports shadows and changing luminance conditions, against an increase of the computational complexity. The same form, Oliver et al. [7] proposed a background approach based on a temporal correlation of pixel values. Next, the Principal Component Analysis (PCA) [8] is applied to eliminate those components, which do not provide information to the model. This approach was improved to support small luminance changes by Rymel et al. [9]. More recently, Han et al. [10] presented an approach to estimate a set of models of pixel intensities based on a combination of mean-shift pdf’s estimators and a propagated Gaussian kernel. Following the trend, Tang and Miao [11] presented an extension to the MOG approach, which supports shadows. Other relevant work is proposed by Zhou et al. [12], where they show a different background approach, based on the analysis of image texture using Gabor filters. Finally, at a recent time, in their work [13], Du-Ming and Shiah-Chin proposed a way to detect motion based on Independent Component Analysis (ICA), which is used to estimate the background information. This approach is based on the Particle Swarm Optimization (PSO) algorithm to search the best unmixing matrix. The background process consists on estimate the unmixing matrix using a background estimation and the current image. Afterwards, the approach, for consecutive frames, separates the background and the foreground objects as a single threshold classification task, using the same estimated unmixed matrix. Moreover, this approach is limited to scenarios where the background dependences between foreground and background never change. As a consequence, in outdoor scenarios this approach is not suitable. This work presents a novel background subtraction approach based on Independent Component Analysis [14, 15]. This approach exploits the property of separability of the pdf in several components; in which one of them represents the background, and the rest of the components represent foreground areas and noise effects. This approach considers the task as a cocktail problem, where background and foreground information are mixed spatially and temporally. The process consists on identifying the pdf of the background from the rest, separating the different components as a linear independent component. The separation process is performed with an unmixing matrix. The parameters of the unmixing matrix are estimated as a maximization of non-Gaussian problem, which is solved via Newton-Raphson[16]. Background and foreground zones are detected by extracting the most and middle significant components from the unmixed matrix. To compensate the dynamic changes of the scenario, the approach is continuously estimating the unmixed matrix and the estimation of background information. As a consequence, this approach is adaptable to different luminance conditions. The paper is organized as follows. In §2, a background model based on ICA is presented. The parameter estimation for the unmixing matrix is discussed in §3. Next, the motion detection from the unmixed matrix is presented in §4. Afterwards, in §5 the approach is tested, comparing foreground detection with the MOG approach [3]. Both approaches are tested in several image sequences took from a PETS database [17, 18] and outdoors/indoors scenarios (gardens and vehicular intersections). Using the PETS database we can develop a background bench scheme to quantify the accuracy of tested approaches. Finally, the conclusion is shown. Given a set of consecutive images ϒ = {I[1], I[2], . . . , I[n]}, the information of moving objects and fixed objects are mixed. Each particular image pixel position x = [x[1],x[2]] is indexed as I [i](x) for k × l image dimensions. To start with, an E(ϒ) operator is defined. This operator mixes the images in ϒ and estimates the dominant color pixel value for each position x. Table 1 shows some common operators used for this purpose, even though, there are many others operators; the complexity and computational resources could increase. The estimation of the color of the pixel depends on the temporal window size used. For reducing the computational complexity, the estimation is approximated with recursive algorithms based on time differences, i.e., Kalman recursive filters [19], or parameter estimation using Expectation Maximization algorithm [20]. This work assumes that motion objects and background areas can be represented as independent image components U[1],U[2], . . .. The most significant component represents the background image, and the rest of the components represent moving objects, and the less significant may represent noise motion. But, these images are unfortunately, unknown. Using E(ϒ) and a set of images ϒ, the estimation of the matrix W is performed. This matrix unmixes into a set of images U[1],U[2], . . ., which represent the independent components of fixed and moving objects. Namely, for a particular position x we have, Φ ( x ) = W Ω ( x )where the W matrix separates the mixed color components in Ω(x) = [I[1](x), I[2](x), . . . , I[n](x),E(ϒ(x))]; Φ(x) represents the unmixed independent components Φ(x) = [U[1] (x),U[2](x), . . . ,U[n](x),U[n][+1](x)]. The estimation of W needs to express each image in ϒ as a vector. Then, this transformation is performed with v(I[i]) transformation. This transformation catches spatial information of texture and luminance conditions, expressing any image I[i] as a vector form I[i]. The transformation v(I[i]) is useful if we need to reduce the computer complexity when images are too big, or when there are some restrictions of zones scene. Additionally, the transformation v(I[i]) implies that the inverse exists v^−1(I[i]), that is, the images encoded as vectors I[i] can be mapped into original images again, preserving the same structure than the original images encoded I[i]. For the rest of the document, the bold letter images refer to the vector version resulted from applying v transformation. Next, the parameter estimation of the matrix W is needed for detecting moving objects from an image sequence. Both processes are discussed in the following sections. The matrix W is estimated assuming that the pdf of each U[i](x) are both separable and independent from the mixed matrix Ω = [I[1], . . . , I[n], I[n][+1]] for all I[i] ∈ ϒ for n = 1, . . . ,n and I [n][+1] = E(ϒ), where E(ϒ) represents the image estimator and ϒ raw data, i.e., the joint probability of all pdfs are factored as follows p ( U 1 , … , U n + 1 ) = p ( U 1 ) p ( U 2 ) … , p ( U n + 1 )where p(U[i]) represents the pdf of unmixed images expressed as vector and the independent components are represented in matrix form as Φ = [U[1], . . . ,U[n],U[n][+1]]. The parameter estimation is performed by identifying data directions that decrease the Gaussianity of the mixed distributions. In this sense, we can assume that the pdf of the moving objects u[i] has a non-Gaussian distribution. The background distribution behaves mainly as a Gaussian, and it does not affect the component separation process, whenever the other components become non-Gaussian. This assumption establishes the minimum criterion to separate the independent components from the mixed data. In case that all pdfs have a Gaussians distribution, the mixed distribution is symmetrical and is not possible to separate it, i.e., if objects with motion have the same pdf as background, they could have not been identified. In a preprocessing step, all mixed images I[i] are centered subtracting the mean value m(I[i]) so as to make a zero-mean variable. After, each I[i] is uncorrelated with a linear transformation for which its components are equal to one, as follows, I ˜ i = S Σ − 1 / 2 S T I iwhere Ĩ[i] denotes uncorrelated version of I[i]; Σ and S result from factorizing covariance matrix E { I i I i T } = S Σ S T. Finally, Σ^−1/2 is computed by a simple component-wised operation. Then, since data are uncorrelated, the weight parameters of W = [w[1], w[2], . . . , w[n]]^T are estimated for each row w[i] such that the projection w[i]^TĨ maximizes the non-Gaussianity. In this sense, the non-Gaussianity degree is measured with an approximation of neg-entropy [14]. The advantages to use neg-entropy instead of kurtosis, for instance, are that is well justified in statistical theory and in some sense, neg-entropy is the optimal estimator of non-Gaussianity as far as statistical properties are concerned. Therefore, a simpler approximation of neg-entropy could be estimated as follows, J ( y ) ∝ [ E { G ( y ) } − E { G ( v ) } ] 2where G is commonly a non-quadratic function. The election of any approximation of G depends of the scene conditions and the behavior of raw data; however, there are some common approximations of neg-entropy that are shown in Table 2. At this point, the task to estimate the W weight values, is considered as an optimization problem. In this sense, using an approach based on Newton-Raphson like [16], an optimization procedure based on fast ICA [14] is performed to estimate each of the w[i] weight vector parameters in W as follows: For each w[i] in W Choose initial (i.e., random) values weight for w[i]. Let w i + = E { I ˜ i g ( w i T I ˜ i ) ) } − E { g ′ ( w i T I ˜ i ) } w. Let w i + = w i + ‖ I ˜ i + ‖. If convergence is not achieved, go back to 1 (a). Continue with the next w[i] Each internal iteration only estimates one w[i] vector. Then, the algorithm is repeated n+1 times, once by each image analyzed in Ω. In [21] is shown a parallel approach to this algorithm, with the advantage of time reduction, against of a considerable reduction in the precision level [22]. The matrix W is a linear transformation that separates each non-Gaussian pdf contained in an I[i](x) image. But, Stauffer and Grimson [3] have proved experimentally that the background can be modeled as a Gaussian distribution. This implies that there is always one component that is Gaussian. The symmetrical morphology of a Gaussian distribution may affect the optimization steps 1(b) and 1(c), which assume non-Gaussian distribution over each component. In the practice, this constraint can be relaxed, considering the effects that this implies, i.e., if we had all the components distributed as a Gaussian, these could be represented as a hipper ball; which we have not gotten enough geometrical information to separate each independent component (because they are totally symmetrical and they do not have any maximum). But when one or some components are Gaussians, the non-Gaussian components add vertexes (i.e., maximums), which are used to estimate each independent component. Figure 1 illustrates the space conformed by three different pdfs; whenever they follow a Gaussian distribution (Figure 1a) there is not enough information for unmixing, instead, when only one has a Gaussian distribution, the others provide geometrical information for unmixing the space. As it is appreciated, the background Gaussianity does not affect the unmixed process at the parameter estimation process, whenever the pdfs of moving objects have non-Gaussian distribution. This situation may be appreciated as a limitation, but in practice, the majority of moving objects, in short time stamps, does not follow a Gaussian distribution, thus being the only background component considered having Gaussian distribution. The data in Equations 1 and 2 are uncorrelated and normalized, being impossible to rank or define an order over all the estimated components. However, the values contained at W provide information about the amount of information over the estimated components. The matrix W represents the linear transformation that separates the mixed images as independent components, the inverse of the transformation W^−1 mixes up again the sources U(x). The linearity of each component expressed in W is used to define the importance of each component and, as a consequence, the detection of background, foreground and noise components. The difference between using a PCA [8] approach instead of an ICA, consists in the extra constraint added to the component estimation; i.e., the components must be orthogonal and uncorrelated at same time. Next, from analysis of the singular values of W^−1, the most significant singular value component represents the background data, under the assumption that fixed areas are proportionally greater than moving areas in scene. The next most significant singular values (except for the first one) correspond to moving components, and the last components, represent noise motion. Then, to separate the first (background component) and last components (noise components) on W^−1 matrix, the singular value decomposition is applied erasing the eigenvalues that correspond to the most significant component and the less significant components. Rebuilding the matrix W^−1 without the first one and the last eigenvalues, the components Ω* are estimated as follows Ω i * = W * − 1 Φ = ( S Σ * D T ) Φwhere W^−1 = SΣD^T; Σ* = Σ without the most significant and the less significant components; i.e., ∑ 11 * = 0 and Σ[ii] = 0 for {i, i+1, . . . ,n}, which preserves the same information except for the background and noise information; and W*^−1 = SΣ*D^T. The data, in the reconstructed image Ω*, contain only data corresponding to the moving objects of each independent component U[i] contained in Φ. Next, it is needed to binarize each one of I i * = v − 1 ( I i * ) that conforms Ω * = [ I 1 * T , I 2 * T , … , I n * T , I n + 1 * T ] T for finding out moving Each particular image I i * does not contain background data and noise data; i.e., the effect of putting Σ[11] = 0 and some Σ[ii] = 0 consists in that the information over each component has been discarded, resulting in spurious data around zero value. In fact, the background corresponds to the majority of the area, and it is represented by the pdf global maximum, and the moving objects by the rest of the pdfs local maximum. The values, that belong to the biggest maximum, are labeled with zero value (which is located around zero value), and rest to one. The resulting binary map represents moving objects as follows B ( x ) = { 0 if I i * ( x ) ∈ Background 1 other casewhere the labeling process is achieved via a k-means’s variation [23], which group all the pixels in two categories; background and foreground. As an example, using the estimated image E(ϒ) and the current image I[t], Figure 2a shows the distribution of independent unmixed components U[1] and U[2], after estimating the unmix matrix W. The main component represents the background information (vertical component), and lower components represent the moving objects information (horizontal component). Then, the reconstructed I t * is estimated from the mixing matrix W*^−1. The pdf of I t * is presented in Figure 2b. The data that surround the zero value correspond to the background; the rest corresponds to the moving components. Finally, the classification task consists on grouping all the elements around zero labeling them as background, and the rest as moving objects. The motion detection process is invariant to the global luminance variations and diffuse lights. The global luminance variations are tolerated, in sense, they represent the displacement of the pdf along its range. As a consequence, the uncorrelated process always centers and normalizes raw data, being not affected thus by the displacements on range. The diffuse lights affect locally the raw data, but do not affect considerably the pdf distribution of the pixel. Then, this does not have a significant effect in the independent component separation process. The luminance invariants make feasible that this approach supports blur shadows, whenever the pdf morphology is not affected (experimental evidence is presented in §5). Moreover, motion detection may be degenerated when the parameter estimation does not converge, avoiding the correct detection. This happens when both foreground and background have Gaussian distribution, or foreground raw data has not enough evidence to, numerically, estimate its pdf. As a complement, the component estimation of moving objects could be affected with noise. An additional step consists in applying a connectivity analysis. It can be performed with a morphological operator P applied to B(x) [24], where an useful morphological filter is the opening by reconstruction, which removes low connected areas. The motion areas may be deformed, because, these operators assume that the structural element is not affected by the current image camera projection; i.e., the background is fronto-parallel to the camera view. The opening by reconstruction filter removes elements low-connected than the structural element λ, resulting in a better connected map B. Then, the morphological filtering process of the binary map is denoted as follows B ˜ = γ ˜ ( B ) = lim n → ∞ δ B n ( ε μ ( B ) )The filtered motion map B̃ enhances the motion zones discarding additive noise that could affect the process. In this section we present an experimental model to test our approach and some implementation issues. The experimental model matches the performance of motion detection between MOG [3] and our approach, against a ground truth based on PETS database [17, 18]. The implementation issues point out some useful remarks to reduce the complexity of motion detection. To reduce the computational complexity of the algorithm, we should take the following modification to the approach. The E(ϒ) operator should be based on a recursive efficient estimator to compute the expected value [19, 20]. But, we must consider that a simple model could affect the accuracy of the E(ϒ) estimator. Additionally, the v(I[i]) function should sample each image, for avoiding the use of Gaussians filters, which in certain conditions are equivalent [25], but the sampling process reduces the computational resources needed. The number of independent components to be estimated in Equation (1) should be reduced to only the image estimator E(ϒ) and the last image acquired. This makes that the unmixed matrix W becomes to a 2 × 2 dimension. Now, the second eigenvalue of Σ* in Equation (5) represents the amount of information of moving objects and noise motion, being a necessary thresholding step over a second eigenvalue to detect object motion and noise motion. A quantitative process to measure the reliability of a background approach is not an easy task. In background model context, some works like [26] and [27] propose methods for quantifying the degree of success of different background approaches. Moreover, many times, background models are a dependable application, making these proposals not completely general. In this sense, we introduce a quantitative analysis based on the comparison of a well accepted background method as a reference and a set of sequences of images as motion ground truth. Both provided a relative point of comparison to evaluate the effectiveness and accuracy of motion detection in different scenarios. A performance quantitative measure of our background model consists in comparing motion detection efficiency between our approach and a reference model. The MOG approach [3] is used as a reference model. This approach is a well accepted method and it generally offers good results. To measure the algorithm performance, a motion ground truth is introduced, which is conformed by several sequences of images taken from PETS database [17, 18]. PETS databases are used as a reference for evaluating surveillance algorithms. The sequences of images used are taken from 2001 and 2007 PETS databases, which include indoors/outdoors scenarios. The motion ground truth is performed by hand-selecting motion zones. Only the objects displaced are considered as motion zones, discarding reflections, noise and shadows. We use, as ground truth, three different sequences (see Figure 3). Table 3 summarizes the principal information of sequences used as ground truth. We introduce three different error measures to compare the two approaches against ground truth data. These errors are expressed in pixels. The first one quantifies the degree of modeling between the foreground estimated map and its ground truth map. This error is well known as BIAS error [28], which, in our context, consists in the arithmetical summation of a binary image difference between motion detection performed by the background approach and its ground truth. This measure provides information about the under/over modeled of motion scenario. Formally, this measure is defined as ε b = ∑ x ∈ B B ( x ) − I gt ( x )where B(x) is the foreground estimated map and I[gt](x) is the ground truth map. The second measure, represents the number of pixels labeled as motion zones that correspond to free movement zones. It is commonly named as a false-positive error [29] and it is defined as the complement of a I[gt](x) map multiplied point to point with a current motion map B(x). The error measures near to zero, meaning that the background approach discards efficiently the free movement zones, avoiding thus, the addition of motion noise at B(x) map. Formally, this error is denoted as follows, ε fp = ∑ x ∈ B B ( x ) ¯ * I gt ( x )where B ( x ) ¯ is the complement of the map of motion detected B(x) and the operator * is the single multiplication. The third measure represents the number of pixels labeled as free motion zones that correspond to motion zones. This error is named as a false-negative error and represents the capabilities of our approach to model correctly motion zones. Error measures near to zero, mean that all objects with motion have been adequately detected. Consequently, error measures greater than zero, mean that the background approach does not detect efficiently the motion areas. This error measure is defined as follows ε fn = ∑ x ∈ B B ( x ) * I gt ( x ) ¯where I gt ( x ) ¯ is the complement of ground truth map and * operator is single multiplication. Next, the approach is tested with a set of sequences taken from different outdoor/indoor scenarios. These scenarios show different common situations in surveillance systems like luminance disturbance, shadows, reflection, etc. Then, the work is compared with the PCA approach (see Appendix A) for segmenting the background from the foreground. Finally, it is shown the results for applying a morphological analysis to improve the quality of the motion map. The proposal and MOG approach are tested as we described above. The proposal was implemented with only two components (the estimator of images E(ϒ) and the last frame acquired), the function v(I) is defined as the concatenation of odd columns conformed by odd pixel positions. The threshold to identify moving objects and noise is defined to 0.05. The operator E(ϒ) is defined as an average operator to estimate background information with a window of 100 frames. The MOG approach was implemented using a ρ = 0.005, and 3σ as belonging criterion. The background initialization was performed with 100 frames. All tests are performed without applying any connectivity criterion. The results of testing the proposal and the MOG approach are showed in Figures 4–6. Additionally, Figures 3 and 7 show some frame motion detected with both approaches. As it is appreciated, in Figure 4, the MOG approach is generally more sensitive to changing light conditions in the scene; on the other hand, the proposed approach becomes more stable in tested scenes. The bias error is greater with a MOG approach than the proposal, i.e., the proposal detects better objects motion in the sequences of images. The false-positive error quantifies when the approach adds noise from motion to the motion detection map. In this sense, we appreciate in Figure 5 that the MOG approach detects false motion zones. The false motion zones are difficult to deal with, and several times, it would be confusing with the motion objects, degrading the post analysis stages in vision systems. Mainly, the noise is caused by the sudden reflections and the shadows caused by the objects moving, like people walking down (Figure 7a), in MOG approach; but in the proposal, the error measure is small, behaving more efficiently than the MOG approach (Figure 3a). In outdoor scene, MOG error remains constant and usually greater than the proposed approach. Thus, the MOG approach produces binary maps of motion affected by noise, whereas the proposal produces binary maps of motion more clearly, without applying any connectivity analysis. The sudden noise peaks are caused particularly when objects are too small (a deep discussion is made in the following paragraphs). On the other hand, the false-negative error quantifies the accuracy to detect foreground areas. In this sense, we appreciate that the MOG approach usually detects correctly objects with motion. However, when foreground has a high similarity degree with background or it is constituted by big flat surfaces, the MOG approach only detects foreground contours. This fact causes a high false-negative error with the MOG approach (see Figure 6a–c). The proposal estimates and separates the mixed pdfs that conforms background and foreground zones. Consequently, it produces a better motion segmentation, and a small false-negative error. In Figure 6a both approaches behave too similar. The small variations are caused when skin zones belonging to people walking down are too similar with background. In Figure 6b, the camera perspective and the reflections difficult the process of foreground detections, causing that people walking down are not detected. Finally, in outdoor scenarios, the small objects are detected efficiently with MOG; however, big objects with slow motion are broken (Figure 7c). But our approach segment with fewer noise degree, and the slow movement objects are better segmented (Figure 3c). Consequently, the error level is similar in both cases, except for the end of the sequence that corresponds to a vehicle with slow motion. Visually, the results can be appreciated in Figures 3 and 7. Additionally, the proposal is tested with different image sequences from distinct scenarios. The sequences of images correspond to outdoor scenarios, where the luminance conditions are changeable. The sequences of images were taken from gardens with people walking down and at a vehicular intersection. Figure 8 shows frames belonging to different outdoor sequences. The results show that the proposal is capable to identify foreground objects efficiently. Figure 8a,b shows people walking down in outdoor gardens. Both experiments use the median operator as estimator E(ϒ) and 1 a 1 log cosh a[1]u with a[1] = 1 to estimate neg-entropy. The images present few noise effects caused by the reflectance and luminance conditions of different objects and materials. In Figure 8c, the object motion corresponds to vehicles at intersections. The scene is affected by vehicle reflections and cloud shadows. The foreground zones are detected efficiently with a weight-average estimator for E(ϒ) and a Gaussian for neg-entropy. Finally, in Figure 9, it is presented a scenario with sudden luminance changes. Figure 9a shows some frames used to illustrate the environment conditions. The floor and the wall are affected by soft shadows caused by a woman walking down and the sudden light turned on. Figure 9b shows the results with the MOG approach. The soft shadows, reflections and sudden luminance changes affect negatively the foreground detection. However, it is appreciated that the approach is capable to discard the majority of the soft shadows and reflections. As an example, when light is turned on, foreground detection is performed adequately. The ICA approach can be considered as an extension of the PCA approach (see Appendix A), in this sense, an approximation of the motion detection process could be performed using PCA approach. The results are similar when the second order independence is equivalent; in other cases, they are different. This is, when in the PCA approach, the principal components are both, independent and uncorrelated, the PCA is equivalent to the ICA. In the practice, both are equivalent when the pixels suffer global effects.But, when scenario have several luminance sources, there are some considerable differences. To illustrate it, Figure 10 shows the differences of I* and the second principal component estimated with the PCA approach, using the the sequence of the intersection. The images are in pseudo-color where dark zones represent small differences, and red color, considerable differences. The main differences are detected in zones with shadows and reflecting materials. The proposal identifies the objects better as the local distribution of pixels values must be independent and uncorrelated of the estimation E(ϒ); in contrary, the PCA approach is enough that data are independent. In the practice, the differences depend of the optimization process to estimate W and the approximation of neg-entropy used. But, the results are always better in the proposal, given that it adds the constraint of uncorrelation that the PCA does not consider. As additional results, the binary motion map B(x) is improved with a opening by reconstruction filter. This filter helps out to clarify the motion zones and eliminates noise classification regions. Figure 11 shows some frames acquired from a public garden, where luminance conditions are changeable. The motion detection is performed, but it is affected with noise. The morphological filter eliminates those regions low-connected. The resulting image offers a better definition of the motion objects. However the filters must be used carefully, because they require more computational resources and affects the frame rate specially in real time systems. The proposal method works efficiently to detect foreground objects when data information that represents the foreground is separable from the background. The proposal method offers a numerical approximation, and in practice, it needs to consider some points that could affect the foreground detection. The level of accuracy depends of the convergence criterion of the algorithm showed in §3.3, the number of components that are considered as noise data in Σ*, and the amount of information to estimate each independent component; consequently, the small foreground zones are difficult to be detected. As an example, we can see in Figures 3 and 6 where a person walking down is not completely detected. In this work, we presented a background subtraction approach based on an independent component analysis. This approach exploits the separability and the non-Gaussianity to estimate each of the pdfs. The motion is efficiently detected when raw data are separable. The results show a robust approach at several scenarios. The moving objects are detected with high resolution even when the scenario presents luminance changes or shadows. The implementation issues have been discussed and this approach can be implemented in a real time monitoring system. Finally, this approach has a better accuracy than the MOG approach in the tested scenarios and is superior in scenarios with changing environmental conditions. We give thanks to the CIDESI Research Center for the support and material provided to develop this research. ToyamaK.KrummJ.BrumittB.MeyersB.Wallflower: Principles and Practice of Background ManteinanceProceedings of IEEE Conference on Computer Vision and Pattern RecognitionFt. Collins, CO, USAJune 23–25, 19991255261 HorprasertT.HarwoodD.DavisL.S.A Robust Background Subtraction and Shadow DetectionProceedings of Asian Conference on Computer VisionTaipei, TaiwanFebruary 27–March 3, 2000 StaufferC.GrimsonW.Adaptive Background Mixture Models for Real-Time TrackingProceedings of IEEE Conference on Computer Vision and Pattern RecognitionFt. Collins, CO, USAJune 23–25, 19992252259 CheungS.C.S.KamathC.Robust Background Subtraction with Foreground Validation for Urban Traffic Video200520052330234010.1155/ASP.2005.2330 KaewTraKulPongP.BowdenR.An Improved Adaptive Background Mixture Model for Real-time Tracking with Shadow DetectionProceedings of the 2nd European Workshop on Advanced Video Based Surveillance SystemsLondon, UKSeptember 4, 2001149158 ElgammalA.HarwoodD.DavisL.S.Non-parametric Model for Background SubtractionProceedings of IEEE International Conference on Computer VisionKerkyra, Corfu, GreeceSeptember 20–25, 1999 OliverN.M.RosarioB.PentlandA.P.A Bayesian Computer Vision System for Modeling Human Interactions20002283184310.1109/34.868684 ShawP.J.A.Hodder-ArnoldLondon, UK2003 RymelJ.RennoJ.GreenhillD.OrwellJ.JonesG.Adaptive Eigen-Backgrounds for Object DetectionProceedings of IEEE International Conference on Image ProcessingSingaporeOctober 24–27, 2004318471850 HanB.ComaniciuD.DavisL.Sequential Kernel Density Approximation through Mode Propagation: Applications to Background ModelingProceedings of Asian Conference on Computer VisionJeju Island, KoreaJanuary 2004 TangZ.MiaoZ.Fast Background Subtraction and Shadow Elimination Using Improved Gaussian Mixture ModelProceedings of the 6th IEEE International Workshop on Haptic, Audio and Visual Environments and GamesOttawa, ON, CanadaOctober 12–14, 20073841 ZhouD.ZhangH.RayN.Texture based Background SubtractionProceedings of International Conference on Information and AutomationZhangjiajie, Hunan, ChinaJune 20–23, 2008601605 TsaiD.M.LaiS.C.Independent Component Analysis-Based Background Subtraction for Indoor Surveillance20091815816710.1109/TIP.2008.200755819095527 HyvarinenA.Survey on Independent Component Analysis1999294128 ComonP.Independent Component Analysis—A New Concept?19943628731410.1016/0165-1684(94)90029-9 SuliE.MayersD.Cambridge University PressCambridge, UK2003 The University of Reading. Performance Evaluation of Tracking and Surveillance (PETS)Internet, ftp://ftp.pets.rdg.ac.uk/pub/PETS2001/, 2001. The University of Reading, UK. ThirdeD.FerrymanJ.CrowleyJ.L.Performance Evaluation of Tracking and Surveillance (PETS)Available online: http://pets2006.net/ (accessed on 5 February 2010). KalmanR.A New Approach to Linear Liltering and Prediction Problems196082354510.1115/1.3662552 DempsterA.LairdN.RubinD.Maximum Likelihood from Incomplete Data via the EM Algorithm197739138 HyvainenA.Fast and Robust Fixed-Point Algorithms for Independent Component Analysis19991062663410.1109/72.761722 HarryL.PapadimitrouC.Prentice HallUpper Saddle River, NJ, USA1988 RussellS.NorvigP.Prentice HallUpper Saddle River, NJ, USA2003 SalembierP.SerraJ.Flat Zones Filtering, Connected Operators and Filters by Reconstruction199541153116010.1109/83.40342218292010 BaoP.ZhangL.Noise Reduction for Multi-scale Resonance Images via Adaptive Multiscale Products Thresholding2003210891099 PlessR.LarsonJ.SiebersS.WestoverB.Evaluation of Local Models of Dynamic BackgroundsProceedings of IEEE Computer Society Conference on Computer Vision and Pattern RecognitionMadison, WI, USAJune 16–22, 200327378 HerreroS.BescósJ.Background Subtraction Techniques: Systematic Evaluation and Comparative AnalysisSpringer VerlagBerlin, Germany2009 PapoulisA.McGraw-HillNew York, NY, USA1991 DudaR.O.HartP.E.StorkD.G.Wiley-InterscienceMalden, MA, USA2000 The PCA approach is used as a subspace projection technique. In PCA, the basis vectors are obtained by solving the algebraic eigenvalue system R^T (XX^T)R = Σ, where X is the centered data, R is a matrix of eigenvectors, and Σ is the corresponding diagonal matrix of eigenvalues. The projection of the data C n = R n T X, from the original p dimensional space to a subspace spanned by n principal eigenvectors is optimal under the mean squared error; i.e., the projection of C[n] back into the p dimensional space, has a minimum reconstruction error. In fact, if n is large enough to include all the eigenvectors with non-zero eigenvalues, the projection is lossless. Thus, the goal in the PCA approach is to minimize the projection error from compressed data, the goal of the ICA approach is to minimize the statistical dependence between the basis vectors; i.e., this can be written as WX^T = U, where the ICA approach searches for a linear transformation W that minimizes the statistical dependence between the rows of U, given a training set X. Unlike PCA, the basis vectors in ICA are neither orthogonal nor ranked in order. Also, there is no closed form expression to find W. This work is based on the FastICA approach [21], which use the neg-entropy to estimate W, given that it is invariant to linear transformation [15]. This is, the estimation of W minimizes the mutual information, resulting roughly equivalent to find the directions in which the neg-entropy is maximized. This formulation shows explicitly the connection between the ICA and projection pursuit. In fact, finding a single direction that maximizes neg-entropy is a form of projection pursuit, and could also be interpreted as estimation of a single independent component. Thus ICA is an extension of PCA approach with the adds of the imposing independence up to the second order and, the definition of directions that are orthogonal as discussed in [15]. The Motion Detection process uses the middle-significant eigenvalues of the mixing matrix W^−1; i.e., there is a dimensionality reduction. However, the reduction is performed in the unmixed data U. Then, the dimensionality reduction on this matrix could result equivalent to PCA, when the second order independence are equivalent to the obtained with the PCA estimation. Data distribution for 3 variables (a) when data have a Gaussian Distribution there is not enough information to estimate each independent component (the mixed distribution is totally symmetrical); (b) when data have mixture of non-Gaussian or some of the independent components are Gaussian, it is possible to identify other non-Gaussian components (there are some vortex that define maximums). Foreground detection from independent components using the estimation E(ϒ) and current image I[t]. In (a) the unmixed space as orthogonal components resulted from the pdfs of U[1] and U[2]; the main component represents background information (vertical component) and the second component represents foreground objects; In (b) the pdf of estimated I i * after removing background data. The global maxima correspond to background data, and second maxima (right side) correspond to objects with motion. Foreground detection from image sequences of the PETS database [17, 18] using the proposal. The sequences in (a) and (b) correspond to cameras monitoring a train stop; the sequence in (c) corresponds to a car parking. The foreground detection is performed without spatial filter. The level of resolution is higher and soft shadows are discarded automatically. Bias error resulted to compare ground truth against our approach and the MOG approach. In (a) it is shown the bias error of the first scenario that corresponds to a train station. In (b), the same train station is monitored from other perspective, where the reflection, the shadows, and the perspective make more complicated the motion detection. In (c) the error bias at an outdoors scenario; the noise in the sequence affects negatively the motion detection. The proposed approach generally offers a smaller bias error than the MOG approach. False-positive errors resulted after comparing ground truth against our approach and the MOG approach. In (a), the first train station scenario, where the MOG approach is more sensitive to introduce noise; on the other hand, the proposal adds less error of motion. In (b) the second scenario of the train station at a different perspective, the behavior is similar, except for the end, our proposal has a significative major error bias. In outdoor scenarios (c), the MOG is more sensitive to luminance variations than our proposal. False-negative errors resulted after comparing ground truth against the proposal and the MOG approach. In the first train station sequence, (a) both errors are similar except for the end of sequence, where our approach has better accuracy for segmenting moving objects. In second train station sequence (b), the MOG approach is more sensitive to people shadows and reflection. Instead, the proposal always has better accuracy for detecting object moving. Finally, in (c) both measures are similar, except at the end of sequence, where our approach offers better accuracy than the MOG approach. Foreground detection from images sequences of the PETS database [17, 18] using the MOG approach[3]. The sequences in (a) and (b) correspond to cameras monitoring a train stop; the sequence in (c) corresponds to a car parking. The objects detected are not well defined, and in several frames, some objects are discarded (especially in thin object structures). The motion detection is sensitive to shadows and images are pruned with noise. Foreground detection from (a) a camera monitoring a public garden, (b) a camera monitoring a people lane, (c) a camera monitoring an intersection. The first two images show several leafs moving caused by wind and the scene is continuously changing. The third one, shows cloud shadows causing changes of luminance conditions. Foreground detection at a scene with sudden luminance change. In (a), we present a room, where light is turning on, the floor shows reflections and the wall shows soft shadows. (b) When the MOG approach is applied, the reflections and the shadows are detected as moving objects. (c) Instead, the proposal is capable to detect better the human silhouette without the introduction of moving noise, even the light condition has changed. Images differences amongst the components estimated with the proposal and the PCA estimation. The main differences correspond to the areas affected by several lights, as the roof of the vehicles and the buildings, and, in minor sense, the global lights effects over the scenario. The enhancement of binary motion maps. The motion objects are better defined, and the noise effects are discarded. In (a) the original frames; in (b) the estimation of B(x), and in (c) the enhancement map B̃(x). List of some common function estimator E(ϒ) for pixel values. No. Estimator 1 E(ϒ) = Ĩ(x) where each I ˜ ( x ) = 1 ‖ ϒ ‖ ∑ i ∈ ϒ I ( x ) 2 E(ϒ) = Ĩ(x) where each Ĩ(x) = median {I[i][∈ϒ](x)} 3 E(ϒ) = μ[x] where each Ĩ(x) ∼ G(μ[x], σ[x]) Common estimator functions G for a neg-entropy approximation. No. Function 1 G ( u ) = 1 a 1 log cosh a 1 u for 0 ≤ a 1 ≤ 1 2 G ( u ) = − e − u 2 2 3 G(u) = u^3 List of sequences of images used as ground truth. These images present complex scenarios. The first two sequences represent scenarios with shadows caused by different light sources and both color and texture of skin in some frames, that are too similar to background scenario. The third one, represents a scenario with high noise degree, the reflections are caused by the windows and the moving objects are represented by small zones. No. Place Source Num. Img.^1 bf Num. Train. ^2 1 Train Station PETS 2007 300 100 2 Train Station PETS 2007 300 100 3 Outdoors Park PETS 2001 500 100 Total number of images used. Number of images used to train background model approach.
{"url":"http://www.mdpi.com/1424-8220/10/6/6092/xml","timestamp":"2014-04-18T11:31:10Z","content_type":null,"content_length":"95343","record_id":"<urn:uuid:9ebc87de-2bda-4eda-9caa-96e87bfe5957>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistician Jokes Funny Jokes A famous statistician would never travel by airplane, because he had studied air travel and estimated the probability of there being a bomb on any given flight was 1 in a million, and he was not prepared to accept these odds. One day a colleague met him at a conference far from home. "How did you get here, by train?" "No, I flew" "What about your the possibiltiy of a bomb?" Well, I began thinking that if the odds of one bomb are 1:million, then the odds of TWO bombs are (1/1,000,000) x (1/1,000,000). This is a very, very small probability, which I can accept. So, now I bring my own bomb along!" I asked a statistician for her phone number... and she gave me an estimate. A biologist, a statistician, a mathematician and a computer scientist are on a photo-safari in Africa. They drive out on the savanna in their jeep, stop and scout the horizon with their binoculars. The biologist: "Look! There is a herd of zebras! And there, in the middle, a white zebra! It is fantastic! There are white zebras! We will be famous!" The statistician: "It is not significant. We only know there is one white The mathematician: "Actually, we only know there exists a zebra, which is white on one side." The computer scientist: "Oh, no! A special case!" A mathematician, a physicist, and a statistician go out on a duck hunt. They have only one gun. The come across some geese and the mathematician quickly calculates the distance, the velocity, the angle, etc. and shoots. Well, he misses by a foot to the LEFT! They come across geese again, and this time the physicist takes the gun. After calculating all the angles, flight paths, velocities, etc. the physicist also takes into consideration the gravity, air frictions, and such things... and fires! Well, s/he misses by a foot to the RIGHT! The statistician jumps up and down, yelling, "We got'em! We got'em!" A mathematician, applied mathematician and a statistician all apply for the same job. At the interview they are asked the question, what is 1+1. The mathematician replies, "I can prove that it exists but not that it is unique." The applied mathematician after some thought replies, "the answer is approximately 1.99 with an error in the region of 0.01." The statistician steps outside the room, mulls it over for several minutes, and eventually in desparation returns and inquires, "so what do you want it to be?" Recent Activity
{"url":"http://www.jokebuddha.com/Statistician","timestamp":"2014-04-17T18:42:30Z","content_type":null,"content_length":"22741","record_id":"<urn:uuid:e0887b6b-5b18-4aa6-ae69-5c3b00332c35>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Iselin Math Tutor Find a Iselin Math Tutor ...I am a very frequent guest teacher in Middle School, teaching Pre-Algebra, Algebra, and Geometry. I have studied and reviewed many sample SAT exams, and I have developed pertinent and essential strategies for navigating the test quickly and confidently. I am a highly qualified Certified Teacher... 41 Subjects: including calculus, reading, ACT Math, statistics ...Patience is a top priority since frustration is a main side effect. Science is a large field of study and of great importance which should not be taken lightly, whether it's starting your children off on the right foot or having a study partner for the next test. Do not procrastinate.I love math, no real idea why. 13 Subjects: including geometry, prealgebra, algebra 2, algebra 1 ...Hello,I've been tutoring all aspects of the ASVAB for over 2 years. I have found my knowledge of advanced mathematics, English and other standardized tests can be directly applied to help potential students achieve their goals in this test. I break down the exam into efficient and effective tes... 55 Subjects: including calculus, discrete math, differential equations, career development ...The writing involves understanding the subject that the student must address; formulating a point of view; identifying the main ideas supporting that point of view; supplying the supporting evidence; and capping the essay with a concluding sentence—all in less than a half-hour. If you practice i... 23 Subjects: including algebra 1, prealgebra, SAT math, reading ...But that's not true!! All you need, is someone with a bit of perspective to take a look at how you do math, find a few small mistakes, and explain what to do in PLAIN ENGLISH, once that happens, all I have to do is sit by and watch for a few small details as my students solve equations that terri... 15 Subjects: including algebra 1, algebra 2, American history, European history
{"url":"http://www.purplemath.com/Iselin_Math_tutors.php","timestamp":"2014-04-19T17:18:56Z","content_type":null,"content_length":"23611","record_id":"<urn:uuid:e80b0413-f3a3-4d9e-a4eb-5da2caf3aa47>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Plandome, NY Algebra 2 Tutor Find a Plandome, NY Algebra 2 Tutor ...Using his or her individual learning style I ensure that our learning sessions are not only educationally productive, but also enjoyable. I am very patient and always use the approach that best suits my client’s needs. Let me assist you with gaining a better understanding of math while removing the fear and frustration that sometimes accompanies learning something new. 8 Subjects: including algebra 2, geometry, algebra 1, SAT math ...For 7 years, I worked as a case manager, outreach worker, and supervisor at the Hetrick-Martin Institute, providing services to runaway, homeless, and at-risk youth Currently, I am privately tutoring a special needs child in piano. I am extremely patient and empathetic, and get along great wi... 29 Subjects: including algebra 2, reading, biology, piano ...It is easy for me to get along with people and help them with any problems they have, in education and even personally if necessary. A little about me, I grew up in New York with great family and friends. I am responsible, hardworking, caring, and a great listener. 29 Subjects: including algebra 2, English, chemistry, geometry ...I receive excellent reviews from my students. Some of my students have ended up switching their majors because of their improved mastery of the Math and Computer Science concepts. I have also served as as a mentor to many that I have taught over the years. 27 Subjects: including algebra 2, calculus, statistics, algebra 1 ...I have extensive tutoring experience tutoring Chemistry and for the SAT exam, AP exam and university level courses. Received A's in calculus, multivariable calculus, linear algebra, differential equations. Was a tutor for Calculus 1 in Cornell University's Department of Mathematics. 17 Subjects: including algebra 2, chemistry, algebra 1, MCAT
{"url":"http://www.purplemath.com/plandome_ny_algebra_2_tutors.php","timestamp":"2014-04-17T16:19:19Z","content_type":null,"content_length":"24275","record_id":"<urn:uuid:5e74d9b3-4f3b-4f72-b6f9-25e8ea32f7d4>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Mathematical Surveys and Monographs 2001; 181 pp; softcover Volume: 89 Reprint/Revision History: reprinted 2005 ISBN-10: 0-8218-3792-3 ISBN-13: 978-0-8218-3792-4 List Price: US$68 Member Price: US$54.40 Order Code: SURV/89.S It was undoubtedly a necessary task to collect all the results on the concentration of measure during the past years in a monograph. The author did this very successfully and the book is an important contribution to the topic. It will surely influence further research in this area considerably. The book is very well written, and it was a great pleasure for the reviewer to read it. --Mathematical Reviews The observation of the concentration of measure phenomenon is inspired by isoperimetric inequalities. A familiar example is the way the uniform measure on the standard sphere \(S^n\) becomes concentrated around the equator as the dimension gets large. This property may be interpreted in terms of functions on the sphere with small oscillations, an idea going back to Lévy. The phenomenon also occurs in probability, as a version of the law of large numbers, due to Emile Borel. This book offers the basic techniques and examples of the concentration of measure phenomenon. The concentration of measure phenomenon was put forward in the early seventies by V. Milman in the asymptotic geometry of Banach spaces. It is of powerful interest in applications in various areas, such as geometry, functional analysis and infinite-dimensional integration, discrete mathematics and complexity theory, and probability theory. Particular emphasis is on geometric, functional, and probabilistic tools to reach and describe measure concentration in a number of settings. The book presents concentration functions and inequalities, isoperimetric and functional examples, spectrum and topological applications, product measures, entropic and transportation methods, as well as aspects of M. Talagrand's deep investigation of concentration in product spaces and its application in discrete mathematics and probability theory, supremum of Gaussian and empirical processes, spin glass, random matrices, etc. Prerequisites are a basic background in measure theory, functional analysis, and probability theory. Graduate students and research mathematicians interested in measure and integration, functional analysis, convex and discrete geometry, and probability theory and stochastic processes. • Concentration functions and inequalities • Isoperimetric and functional examples • Concentration and geometry • Concentration in product spaces • Entropy and concentration • Transportation cost inequalities • Sharp bounds of Gaussian and empirical processes • Selected applications • References • Index
{"url":"http://ams.org/bookstore?fn=20&arg1=survseries&ikey=SURV-89.S","timestamp":"2014-04-19T07:06:12Z","content_type":null,"content_length":"16447","record_id":"<urn:uuid:42220dec-c723-4870-b54c-c2e277b430e4>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
Evaluation of Electronic Effects in the Solvolyses of Journal of Chemistry Volume 2013 (2013), Article ID 248534, 9 pages Research Article Evaluation of Electronic Effects in the Solvolyses of p-Methylphenyl and p-Chlorophenyl Chlorothionoformate Esters ^1Department of Chemistry, Wesley College, 120 N. State Street, Dover, DE 19901-3875, USA ^2Department of Chemistry and Biochemistry, Northern IL University, DeKalb, IL 60115-2862, USA Received 15 June 2012; Accepted 27 June 2012 Academic Editor: Theocharis C. Stamatatos Copyright © 2013 Malcolm J. D’Souza et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The solvolyses of p-tolyl chlorothionoformate and p-chlorophenyl chlorothionoformate are studied in a variety of organic mixtures of widely varying nucleophilicity and ionizing power values. This solvolytic data is accumulated at 25.0°C using the titration method. An analysis of the rate data using the extended (two-term) Grunwald-Winstein equation and the concept of similarity of substrates based on their l/m ratios shows the occurrence of simultaneous side-by-side addition-elimination and unimolecular 1 mechanisms. 1. Introduction Chlorothionoformate esters are useful as derivatizing agents [1] and common organic building blocks in the synthesis of commercial thiocarbonate esters, nitriles, and isonitriles [2, 3]. High fungicidal activity was demonstrated [4] for the p-substituted aryl thiocarbamate analogs which are also cytotoxic [5]. McKinnon and Queen [6] affirmed that phenyl chlorothionoformate (PhOCSCl, 1, Figure 1), and alkyl chlorothionoformates (ROCSCl) in general hydrolyze rapidly to yield hydrochloric acid, carbonyl sulfide, and the corresponding alcohol. In water at 4.96°C, they obtained rates of s^−1, s^−1, and s^−1, for PhOCSCl, methyl chlorothionoformate (MeOCSCl), and ethyl chlorothionoformate (EtOCSCl), respectively. The increase in reactivity observed with the increase in the electron-donating ability of the alkyl group, coupled with the positive entropy of activation, and a solvent deuterium isotope effect, k(D[2]O)/k(H[2]O) of 0.78 for the hydrolysis of MeOCSCl, made them conclude [6] that such aryl and alkyl chlorothionoformate esters hydrolyze by unimolecular 1 mechanisms. Lee at al. [7–9] expanded on Queen’s MeOCSCl and PhOCSCl mechanistic work to include in their study the substrates ethanolyses, methanolyses, and solvolyses in water, aqueous ethanol, and aqueous acetone. They then proposed that MeSCOCl [7, 8] had 1 character in the water-rich solvents and a greater 2 character in the more organic mixtures. For PhOCSCl, Koo and others [9] also suggested a general base catalyzed 2 mechanism in the aqueous binary mixtures of ethanol (EtOH), methanol (MeOH), and acetone. On the basis of large negative cross-interaction coefficients obtained for the aminolysis of substituted aryl chlorothionoformates with substituted anilines in acetonitrile, Oh et al. [10] proposed a concerted mechanism with a four-membered hydrogen-bonded cyclic transition state. On the other hand, Castro and coworkers [11–16] proposed that the aminolysis of chlorothionoformates using pyridine, alicyclic, and bicyclic amines is a stepwise process with the formation of a zwitterionic tetrahedral intermediate while their phenolysis [17] is concerted. In a recently summarized [18] and ongoing solvolytic mechanistic study of nucleophilic substitution in chloroformate (ROCOCl), chlorothioformate (RSCOCl), chlorothionoformate (ROCSCl), and chlorodithioformate (RSCSCl) esters, we have successfully correlated their solvolytic rate coefficients in a series of binary aqueous organic mixtures of varying solvent nucleophilicity () [19, 20] and ionizing power () [21–23] values using the extended (two-term) Grunwald-Winstein (G-W) equation (1) [24]: In (1), and are the specific rates of solvolysis of a substrate in a given solvent and the standard solvent (80% ethanol), respectively, l is the sensitivity to changes in solvent nucleophilicity (N ), m represents the sensitivity that controls the importance of the solvent ionizing power value (Y), and c is a constant (residual) term. A thorough Grunwald-Winstein analysis in 49 solvents yielded an l value of 1.66, and an m value of 0.56 for phenyl chloroformate (PhOCOCl, 2) [25, 26]. The l/m ratio of 2.96 obtained was advanced as being characteristic for a stepwise addition-elimination (A-E) process that is associated with the formation of a rate-determining tetrahedral intermediate [26]. For phenyl chlorodithionoformate (PhSCSCl, 3), we obtained , , and and proposed a unimolecular 1 ionization with strong rear-side nucleophilic solvation of the developing dithioacylium cation [26, 27]. We further recommended [18] that the l and m values for 2 and 3 can be taken as typical values for bimolecular addition-elimination (A-E) and unimolecular ionization (S[N]1) mechanisms occurring in alkyl and aryl ROCOCl substrates, including those where sulfur is substituted for one or both oxygens. The substitution of one sulfur for the ether oxygen in PhOCOCl yields phenyl chlorothioformate (PhSCOCl, 4). For 4 in the more nucleophilic solvents, we obtained [28] , , , and , , and in the highly ionizing fluoroalcohol mixtures. On the other hand, for PhOCSCl (1), an l value of 1.88, an m value of 0.56, and an l/m ratio of 3.36 were obtained [26, 27] in nucleophilic solvents favoring the stepwise addition-elimination pathway (Scheme 1), and an l value of 0.34, an m value of 0.93, and an l/m ratio of 0.37 were obtained in the highly ionizing solvents suggesting a dissociative S[N]1 mechanism (Scheme 2) with moderate rear-side nucleophilic solvation of the developing carbocation. These results and others recently obtained [18, 25–43] clearly demonstrate that the introduction of one sulfur in ROCOCl substrates does induce a variety of superimposed mechanisms and the ranges of dominance are dependent on the R group, the presence of one or two sulfurs, and the types of solvent studied (i.e., on the and values). Drawing upon extensive literature data and using (1) for benzoyl chloride (PhCOCl), we obtained [44] , , and in the less ionizing solvents, and , , and in the highly ionizing aqueous-organic mixtures. Recently, Bentley and Koo, Bentley and Harris [45, 46] provided convincing evidence that concurrent interchange mechanisms involving one dissociative and one an associative pathway do indeed occur in the solvolyses of p-substituted benzoyl chlorides. Bentley [47] also calculated the Gaussian 03 (G3) values at 298K for the differences in heterolytic bond dissociation energies (HBDEs) differences and arrived at values of PhOCOCl = +4.7kcal/mol, PhSCSCl = −18.4kcal/mol, PhOCSCl = −13.1kcal/mol, PhSCOCl = −8.5kcal/mol, and PhCOCl = −11.7kcal/mol. The calculations involve comparisons with acetyl chloride and the acetyl cation [47] and support the possible operation of a heterolytic fission of the covalent carbon chlorine bond in PhSCSCl, PhOCSCl, PhSCOCl, and PhCOCl, to generate a positively charged acylium (or thioacylium) ion. Like many other ROCOCl substrates, benzyl chloroformate (PhCH[2]OCOCl) proceeds through a stepwise A-E process in all of the typical aqueous organic solvents except in the aqueous fluoroalcohols where a solvolysis-decomposition type mechanism with loss carbon dioxide is dominant [48]. Choi et al. [49] showed that phenyl fluorothionoformate (5) solvolyses by a bimolecular pathway in all solvents (including the fluoroalcohols) studied with the addition step of the addition-elimination reaction being rate-determining. They obtained an l value of 1.32, an m value of 0.39, an l/m ratio of 3.38, a solvent deuterium isotope effect value for methanolysis (/) of 2.11, and entropies of activation in the range of −26.2 to −21.0calmol^−1K^−1. An analysis of the solvolytic data for p-fluorophenyl chlorothionoformate (8) using (1) recently confirmed dual mechanistic channels (Schemes 1 and 2) occurring in the fifteen binary aqueous organic solvents studied at 35.0°C and these pathways were shown to be highly dependent on the solvents ionizing ability [40]. We now present the first-order specific rate constants at 25.0°C for the solvolyses of p-tolyl (p-methylphenyl) chlorothionoformate (6) and p-chlorophenyl chlorothionoformate(7) in ethanol and methanol and binary mixtures of aqueous ethanol (EtOH), aqueous methanol (MeOH), aqueous acetone, aqueous 2,2,2-trifluoroethanol (TFE), and aqueous 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP). We compare the experimental rate data of 6 and 7 and their correlation results obtained using the two-term Grunwald-Winstein equation (1) to those previously published [9, 25–28, 40, 49–51] for compounds 1–5 and 8. 2. Results and Discussion The first-order specific rates of solvolysis of 6 and 7 at 25.0°C in pure and aqueous organic mixtures of widely varying nucleophilicity () and ionizing power values () are reported in Table 1. In 6 and 7, there is a gradual rate increase that coincides with the increase of water content in the aqueous-organic mixtures (with increasing values). On the other hand, in the strongly hydrogen-bonding aqueous-HFIP mixtures, the first-order specific rates for 6 and 7 decrease with increasing water content (and increasing and decreasing values). One can also observe that 7 is approximately 10-fold faster than 6 in the aqueous ethanol solvents and 2- to 3-fold faster in the aqueous methanol and acetone mixtures, but this situation is reversed with 6 being the faster in the aqueous fluoroalcohol (TFE and HFIP) mixtures. In 100% EtOH at 25.0°C, the rates of solvolysis for substrates 1-8 are s^−1 [27], s^−1 [25], s^−1 [27], s^−1 [28], s^−1 (estimated from specific rates at lower temperature) [49], s^−1, s^−1, and s^−1 [40], respectively. For compounds 1–5 and 8, a tetrahedral A-E transition state is proposed for their ethanolyses [25, 27, 28, 40, 49]. The rate order in pure ethanol of reveals that the rate of the dithioester is the slowest. This suggests that the inductive capacity of the thiophenoxy group in 3 is very inefficient. A comparison of just the p-substituted aryl chlorothionoformate ethanolysis rate exhibits a rate order of . These observations are consistent with the Hammett values of +0.23, +0.06, and −0.17 [52] for para-Cl, para-F, and para-CH[3], respectively, with increased electron-withdrawing ability for the substituent favoring the rate-determining addition of a solvent molecule at the carbonyl carbon. An addition-elimination mechanism with the addition step rate-determining is favored for phenyl fluorothionoformate (5) in all of the solvents studied at 10.0°C [49]. Using the similarity model concept [29, 53], we first compared the log (k/) values for p-tolyl chlorothionoformate (6) to those obtained for 5. This plot is shown in Figure 2 for the seventeen common binary solvents studied. This xy-graph results in an abominable correlation coefficient (R) of 0.169, slope of , intercept (c) of , and F-test of 0.4. A review of the plot shows significant deviation for the 90 HFIP, 70HFIP, and 90 TFE values and a noticeable divergence from the line-of-best fit for the four TFE-EtOH mixtures, especially for the 80T-20E point. Removal of these seven data points does indeed improve the correlation significantly resulting in a R value of 0.973, slope of , and . This illustrates that, like 5, the A-E mechanism is the dominant pathway for 6 in the ten remaining ethanol, methanol, and binary mixtures of aqueous ethanol, methanol, and acetone. The slope of indicates that in these ten solvents there is a much later transition state for addition to 6 when compared to that seen for the solvolyses of 5. Using the extended Grunwald-Winstein equation (1) for all of the twenty specific rates of solvolysis of 6 listed in Table 1 leads to a very inferior correlation coefficient (R) of 0.505, , , , and a very low F-test value of 2.9. For the two-term G-W analyses, these are unacceptable correlation and F-test values and this could reflect the presence of concurrent mechanisms. In Table 2, we report the relevant G-W analyses for substrates 1-8. Figure 2 clearly shows that the highly ionizing common solvents (90 HFIP, 70HFIP, 90 TFE, and 80T-20E) deviate the most in the plot that is presented for log versus log . Removal of the three HFIP (97, 90, and 70) values, the three TFE (97, 90, and 80) values, and the 80T-20E rate value, in the G-W analyses of 6, results in a marginal , , , , and a F-test of 17. However, deletion of any additional TFE-EtOH points does not improve the correlation coefficient but the P value (probability value indicating that the results are statistically insignificant) for the l term rises and the F-test value decreases substantially. The resulting l/m ratio of 3.54 observed is in line with values observed in the aryl chlorothionoformate substrates 1 () and 8 () for solvents governed by a dissociative A-E mechanism shown in Scheme 1. For 6, in the seven strongly hydrogen-bonding solvents of 80T-20E, aqueous TFE, and aqueous HFIP, we get an l value of , an m value of , a c value of , an R value of 0.986, and a F-test value of 69, all derived from a G-W analyses using (1) (and listed in Table 2). The l value has an associated P value of 0.03, indicating that the result is statistically significant [54]. A large negative c value is observed because the experimental value is the one applying to the other reaction channel. For 6 in the ionizing fluoroalchohol mixtures, the l and the m values and the l/m ratio of 0.42 are in the range previously observed for ionization reactions (Scheme 2) for 1 () and 8 (). A plot of log against 1.63 + 0.46 in the twenty pure and binary solvents studied is shown in Figure 3. The seven fluoroalcohol-containing mixtures (80T-20E; 97, 90, 70, HFIP; 97, 90, 80 TFE) were excluded in the correlation analysis but are added on the plot to show their extent of deviation from the correlation line. In Figure 3, if one carefully scrutinizes the positioning of the 80T-20E data point, one can discern that there may well be some contribution from the A-E pathway in this solvent mixture. Using the equation log = 1.63 + 0.46 + 0.30, one can estimate the addition-elimination pathway specific rate for 6 in 80T-20E to be s^−1. This would suggest that in 80T-20E there is a 22% contribution from the addition-elimination pathway. In Figure 4, we show a plot of log for 4-chlorophenyl chlorothionoformate (7) against log for phenyl fluorothionoformate (5) in the seventeen common pure and binary solvents studied. This linear regression results in an inadequate correlation coefficient on 0.812, slope of , intercept of , and an F-test value of 29. It is apparent from Figure 4 that the 90 HFIP and 90 TFE values digress considerably from the correlation line. Excluding these two values leads to a significantly improved , slope of , intercept of , and an F-test value of 160. This analysis promotes the possibility that for 7 in the remaining fifteen solvents a similar bimolecular addition-elimination pathway is operative. For 7 using (1) for all of the nineteen solvents listed in Table 1 results in a low correlation coefficient of 0.679 to values with high standard errors associated of , , , and a dismal F-test value of 7. Observing that the 90 HFIP and 90 TFE values deviated significantly in Figure 4, it would be expected that if specific rates for solvolysis of 5 had been available the deviations for 97 HFIP and 97 TFE would have been even greater. Excluding the 97, and 90 HFIP, and 97, and 90 TFE data points in the G-W analyses using (1), we obtain improved values of , , , , and an F-test value of 41. Further omission of the three aqueous HFIP (97, 90, and 50) and the three aqueous TFE (97, 90, and 50) values leads to a much improved R=0.966, , , , and F-test value = 69 (reported in Table 2). The sensitivities l and m obtained are typical for substrates undergoing overall nucleophilic substitution (A-E mechanism) involving rate-determining formation of a tetrahedral intermediate (shown in Scheme 1). The l/m ratio of 3.98 observed is a little higher than those observed for solvolyses of aryl chlorothionoformate esters 1 (), 6 (), and 8 (). We have used the l/m ratio to suggest earlier and later transition states within otherwise very similar mechanisms and as a useful indicator for the presence of general base catalysis [55, 56] in solvolytic reactions of this type [29]. The l/m ratios for p-chlorobenzoyl chloride, p-nitrobenzoyl chloride, p-nitrophenyl chloroformate, and p-nitrobenzyl chloroformate of 3.19, 3.29, 3.67, and 3.50 [29, 44] are of similar values to those obtained for the aryl chlorothionoformate 1 and 6–8 when they are reacting by the addition-elimination channel. A plot of log for 4-chlorophenyl chlorothionoformate (7) against 1.79 + 0.45 in the nineteen pure and binary solvents studied is shown in Figure 5. The six data points for 97, 90, and 50 HFIP and 97, 90, and 50 TFE were excluded from the G-W analyses using (1) but were added to the plot to show their positive deviations from the correlation line. Examination of Figure 5 indicates that the 50 TFE point is quite adjacent to the correlation line. Hence for this solvent there is the possibility of having a concurrent contribution from the A-E pathway. Using the equation log (k/k[o])[7] = 1.79 + 0.45 – 0.05, we have estimated A-E rates of s^−1, s^−1, s^−1, s^−1, and s^−1, for 97 TFE, 90 TFE, 50 TFE, 80T-20E, and 50 HFIP, respectively. These calculations correspond to 2%, 5%, 40%, 53%, and 7% contributions from the A-E pathway for the solvolyses in these solvents. After subtracting out the A-E component in the rates of reaction of 7 that were indicated to be occurring in the seven fluoroalcohol mixtures (97-50 HFIP, 97-50 TFE, and 80T-20E), we can then carry out a correlation of the estimated specific rates remaining to get , , , (large negative value because is for the A-E pathway), and a F-test value of 10. The l/m ratio of 0.52 is typical for 1 mechanisms seen in acyl halides and of the type shown in Scheme 2. For the aryl chlorothionoformate esters 1, 6, 7, and 8, the evidence for a change in mechanism from a bimolecular A-E pathway to an ionization (1) mechanism in the highly ionizing fluoroalcohol mixtures is compelling and occurs even in substrates (7 and 8) that contain electron-withdrawing halogen substituents in the para position. These observations are consistent with Bentley’s G3 calculations that a C=S bond strongly stabilizes the developing carbocation [47]. This formation of a cationic transition-state, favored in the highly ionizing solvent mixtures, is in all probability due to sulfur’s ability to modify its electron cloud and therefore to be highly polarizable. 3. Conclusions The p-tolyl chlorothionoformate (6) and the p-chlorophenyl chlorothionoformate (7) are shown to solvolyze by the generation of concurrent bimolecular stepwise addition-elimination and unimolecular ionization (1) mechanisms. The exact delineation of the change in mechanism is identified utilizing the concept of substrate similarity based on l/m ratios, and statistical results obtained through the application of the two-term extended Grunwald-Winstein equation (1). For 6 in the more nucleophilic solvents, we obtain an l value of 1.63, an m value of 0.46, and an l/m ratio of 3.54. For 7 in a similar set of solvents, we obtain an l value of 1.79, an m value of 0.45, and an l/m ratio of 3.98. It is now proposed that in such nucleophilic solvents 6 and 7 undergo an addition-elimination (association-dissociation) process with the addition-step being rate In the strongly hydrogen-bonding aqueous HFIP, aqueous TFE, and 80T-20E mixtures, we obtain an l value of 0.45, an m value of 1.07, and an l/m ratio of 0.42 for 6, and an l value of 0.43, an m value of 0.82, and an l/m ratio of 0.52 for 7. The sensitivities for l and m obtained (for 6 and 7) are befitting the proposal of an ionization component with an appreciable nucleophilic solvation of the developing cationic transition state. We also found that for solvolyses of 6 there is evidence for a superimposed addition-elimination component of 22% in 80T-20E, and for 7 there are contributions from the A-E pathway of 2%, 5%, 40%, 53%, and 7% in 97 TFE, 90 TFE, 50 TFE, 80T-20E, and 50 HFIP, respectively. 4. Experimental Section The p-tolyl chlorothionoformate (97%, Sigma-Aldrich), and the p-chlorophenyl chlorothionoformate (98%, Sigma-Aldrich) were used as received. Solvents were purified and the kinetic runs carried out as described previously [19]. A substrate concentration of approximately 0.005 M in a variety of solvents was employed. For some of the runs, calculation of the specific rates of solvolysis (first-order rate coefficients) was carried out by a process [57] in which the conventional Guggenheim treatment was modified so as to give an estimate of the infinity titer, which was then used to calculate for each run a series of integrated rate coefficients. The specific rates and associated standard deviations, as presented in Table 1, were obtained by averaging all of the values from, at least, duplicate runs. Multiple regression analyses were carried out using the Excel 2010 package from the Microsoft Corporation, and the SigmaPlot 9.0 software version from Systat Software, Inc., San Jose, CA, was used for the Guggenheim treatments. Authors’ Contribution O. N. Hampton and B. M. Sansbury completed this research under the direction of M. J. D’Souza as undergraduate research assistants in the DE-INBRE/DE-EPSCR sponsored Wesley College Directed Research Program in Chemistry. D. N. Kevill is a collaborator on this project. The undergraduate research is supported by grants (DE-INBRE and DE-EPSCoR programs) from the National Center for Research Resources (NCRR) (5P20RR016472-12) and the National Institute of General Medical Sciences (NIGMS) (8 P20 GM103446-12) from the National Institutes of Health (NIH); a National Science Foundation (NSF) Delaware EPSCoR Grant EPS-0814251; an NSF MRI grant 0520492; an NSF ARI-R2 Grant 0960503. The authors would also like to thank T. W. Bentley for helpful discussions. 1. D. H. R. Barton, P. Blundell, J. Dorchak, D. O. Jang, and J. C. Jaszberenyi, “The invention of radical reactions. Part XXI. Simple methods for the radical deoxygenation of primary alcohols,” Tetrahedron, vol. 47, no. 43, pp. 8969–8984, 1991. View at Publisher · View at Google Scholar · View at Scopus 2. D. Subhas Bose and P. Ravinder Goud, “Aryl chlorothionoformate: a new versatile reagent for the preparation of nitriles and isonitriles under mild conditions,” Tetrahedron Letters, vol. 40, no. 4, pp. 747–748, 1999. View at Publisher · View at Google Scholar · View at Scopus 3. S. M. Rahmathullah, J. E. Hall, B. C. Bender, D. R. McCurdy, R. R. Tidwell, and D. W. Boykin, “Prodrugs for amidines: synthesis and anti-Pneumocystis carinii activity of carbamates of 2,5-bis (4-amidinophenyl)furan,” Journal of Medicinal Chemistry, vol. 42, no. 19, pp. 3994–4000, 1999. View at Publisher · View at Google Scholar · View at Scopus 4. M. Albores-Velasco, J. Thorne, and R. L. Wain, “Fungicidal activity of phenyl N-(4-substituted-phenyl)thionocarbamates,” Journal of Agricultural and Food Chemistry, vol. 43, no. 8, pp. 2260–2261, 1995. View at Scopus 5. M. A. H. Zahran, T. A. R. Salem, R. M. Samaka, H. S. Agwa, and A. R. Awad, “Design, synthesis and antitumor evaluation of novel thalidomide dithiocarbamate and dithioate analogs against Ehrlich ascites carcinoma-induced solid tumor in Swiss albino mice,” Bioorganic and Medicinal Chemistry, vol. 16, no. 22, pp. 9708–9718, 2008. View at Publisher · View at Google Scholar · View at Scopus 6. D. M. McKinnon and A. Queen, “Kinetics and mechanism for the hydrolysis of chlorothionoformates and chlorodithioformate esters in water and aqueous acetone,” Canadian Journal of Chemistry, vol. 50, pp. 1401–1406, 1972. 7. S. La, K. S. Koh, and I. Lee, “Nucleophilic substitution at a carbonyl carbon atom (XI). Solvolysis of methyl chloroformate and its thioanalogues in methanol, ethanol and ethanol-water mixtures,” Journal of the Korean Chemical Society, vol. 24, no. 1, pp. 1–7, 1980. 8. S. La, K. S. Koh, and I. Lee, “Nucleophilic substitution at a carbonyl carbon atom (XII). Solvolysis of methyl chloroformate and its thioanalogues in CH[3]CN-H[2]O and CH[3]COCH[3]-H[2]O mixtures,” Journal of the Korean Chemical Society, vol. 24, no. 1, pp. 8–14, 1980. 9. I. S. Koo, K. Yang, D. H. Kang, H. J. Park, K. Kang, and I. Lee, “Transition-state variation in the solvolyses of phenyl chlorothionoformate in alcohol-water mixtures,” Bulletin of the Korean Chemical Society, vol. 20, no. 5, pp. 577–580, 1999. View at Scopus 10. H. K. Oh, J. S. Ha, D. D. Sung, and I. Lee, “Aminolysis of aryl chlorothionoformates with anilines in acetonitrile: effects of amine nature and solvent on the mechanism,” Journal of Organic Chemistry, vol. 69, no. 24, pp. 8219–8223, 2004. View at Publisher · View at Google Scholar · View at Scopus 11. E. A. Castro, M. Cubillos, and J. G. Santos, “Kinetics and mechanism of the aminolysis of phenyl and 2-nitrophenyl chlorothionoformates,” Journal of Organic Chemistry, vol. 62, no. 13, pp. 4395–4397, 1997. View at Scopus 12. E. A. Castro, “Kinetics and mechanisms of reactions of thiol, thiono, and dithio analogues of carboxylic esters with nucleophilies,” Chemical Reviews, vol. 99, no. 12, pp. 3505–3524, 1999. View at Scopus 13. E. A. Castro, M. Cubillos, and J. G. Santos, “Kinetics and mechanisms of the pyridinolysis of phenyl and 4-nitrophenyl chlorothionoformates formation and hydrolysis of 1-(aryloxythiocarbonyl) pyridinium cations,” Journal of Organic Chemistry, vol. 69, no. 14, pp. 4802–4807, 2004. View at Publisher · View at Google Scholar · View at Scopus 14. E. A. Castro, M. Aliaga, P. R. Campodonico, J. R. Leis, L. García-Río, and J. G. Santos, “Reactions of aryl chlorothionoformates with quinuclidines. A kinetic study,” Journal of Physical Organic Chemistry, vol. 21, no. 2, pp. 102–107, 2008. View at Publisher · View at Google Scholar · View at Scopus 15. E. A. Castro, “Kinetics and mechanisms of reactions of thiol, thiono and dithio analogues of carboxylic esters with nucleophiles. An update,” Journal of Sulfur Chemistry, vol. 28, no. 4, pp. 407–435, 2007. View at Publisher · View at Google Scholar · View at Scopus 16. E. A. Castro, M. Gazitúa, and J. G. Santos, “Kinetics and mechanism of the reactions of aryl chlorodithioformates with pyridines and secondary alicyclic amines,” Journal of Physical Organic Chemistry, vol. 22, no. 11, pp. 1030–1037, 2009. View at Publisher · View at Google Scholar · View at Scopus 17. E. A. Castromaria Cubillos and J. G. Santos, “Concerted mechanisms of the reactions of phenyl and 4-nitrophenyl chlorothionoformates with substituted phenoxide ions,” Journal of Organic Chemistry , vol. 63, no. 20, pp. 6820–6823, 1998. View at Scopus 18. D. N. Kevill and M. J. D'Souza, “Sixty years of the Grunwald-Winstein equation: development and recent applications,” Journal of Chemical Research, no. 2, pp. 61–66, 2008. View at Publisher · View at Google Scholar · View at Scopus 19. D. N. Kevill and S. W. Anderson, “An improved scale of solvent nucleophilicity based on the solvolysis of the S-methyldibenzothiophenium ion,” Journal of Organic Chemistry, vol. 56, no. 5, pp. 1845–1850, 1991. View at Scopus 20. D. N. Kevill, “Development and uses of scales of solvent nucleophilicity,” in Advances in Quantitative Structure-Property Relationships, M. Charton, Ed., vol. 1, pp. 81–115, JAI Press, Greenwich, Conn, USA, 1996. 21. T. W. Bentley and G. E. Carter, “The ${\text{S}}_{\text{N}}$2-${\text{S}}_{\text{N}}$1 spectrum. 4. The ${\text{S}}_{\text{N}}$2 (intermediate) mechanism for solvolyses of tert-butyl chloride: a revised Y scale of solvent ionizing power based on solvolyses of 1-adamantyl chloride,” Journal of the American Chemical Society, vol. 104, no. 21, pp. 5741–5747, 1982. View at Scopus 22. T. W. Bentley and G. Llewellyn, “$Y\text{x}$ scales of solvent ionizing power,” Progress Physical Organic Chemistry, vol. 17, pp. 121–158, 1990. 23. D. N. Kevill and M. J. D’Souza, “Additional ${Y}_{\text{Cl}}$ values and the correlation of the specific rates of solvolysis of tert-butyl chloride in terms of ${N}_{\text{T}}$ and ${Y}_{\text {Cl}}$ scales,” Journal of Chemical Research, no. 5, pp. 174–175, 1993. 24. S. Winstein, E. Grunwald, and H. Walter Jones, “The correlation of solvolysis rates and the classification of solvolysis reactions into mechanistic categories,” Journal of the American Chemical Society, vol. 73, no. 6, pp. 2700–2707, 1951. View at Scopus 25. D. N. Kevill and M. J. D'Souza, “Correlation of the rates of solvolysis of phenyl chloroformate,” Journal of the Chemical Society, no. 9, pp. 1721–1724, 1997. View at Scopus 26. D. N. Kevill, F. Koyoshi, and M. J. D'Souza, “Correlations of the specific rates of solvolysis of aromatic carbamoyl chlorides, chloroformates, chlorothionoformates, and chlorodithioformates revisited,” International Journal of Molecular Sciences, vol. 8, no. 4, pp. 346–362, 2007. View at Publisher · View at Google Scholar · View at Scopus 27. D. N. Kevill and M. J. D'Souza, “Correlation of the rates of solvolysis of phenyl chlorothionoformate and phenyl chlorodithioformate,” Canadian Journal of Chemistry, vol. 77, no. 5-6, pp. 1118–1122, 1999. View at Scopus 28. D. N. Kevill, M. W. Bond, and M. J. D'Souza, “Dual pathways in the solvolyses of phenyl chlorothioformate,” Journal of Organic Chemistry, vol. 62, no. 22, pp. 7869–7871, 1997. View at Scopus 29. M. J. D'Souza, K. E. Shuman, S. E. Carter, and D. N. Kevill, “Extended Grunwald-Winstein analysis—LFER used to gauge solvent effects in p-nitrophenyl chloroformate solvolysis,” International Journal of Molecular Sciences, vol. 9, no. 11, pp. 2231–2242, 2008. View at Publisher · View at Google Scholar · View at Scopus 30. M. J. D'Souza, D. N. Reed, K. J. Erdman, J. B. Kyong, and D. N. Kevill, “Grunwald-winstein analysis—isopropyl chloroformate solvolysis revisited,” International Journal of Molecular Sciences, vol. 10, no. 3, pp. 862–879, 2009. View at Publisher · View at Google Scholar · View at Scopus 31. M. J. D'Souza, S. M. Hailey, and D. N. Kevil, “Use of empirical correlations to determine solvent effects in the solvolysis of S-methyl chlorothioformate,” International Journal of Molecular Sciences, vol. 11, no. 5, pp. 2253–2266, 2010. View at Publisher · View at Google Scholar · View at Scopus 32. H. J. Koh, S. J. Kang, and D. N. Kevill, “Kinetic studies of the solvolyses of 2,2,2-trichloro-1,1-dimethylethyl chloroformate,” Bulletin of the Korean Chemical Society, vol. 31, no. 4, pp. 835–839, 2010. View at Publisher · View at Google Scholar · View at Scopus 33. M. J. D'Souza, B. P. Mahon, and D. N. Kevill, “Analysis of the nucleophilic solvation effects in isopropyl chlorothioformate solvolysis,” International Journal of Molecular Sciences, vol. 11, no. 7, pp. 2597–2611, 2010. View at Publisher · View at Google Scholar · View at Scopus 34. M. J. D'Souza, S. E. Carter, and D. N. Kevill, “Correlation of the rates of solvolysis of neopentyl chloroformate-A recommended protecting agent,” International Journal of Molecular Sciences, vol. 12, no. 2, pp. 1161–1174, 2011. View at Publisher · View at Google Scholar · View at Scopus 35. D. H. Moon, M. H. Seong, J. B. Kyong, Y. Lee, and Y. W. Lee, “Correlation of the rates of solvolysis of 1- and 2-Naphthyl chloroformates using the extended grunwald-winstein equation,” Bulletin of the Korean Chemical Society, vol. 32, no. 7, pp. 2413–2417, 2011. View at Publisher · View at Google Scholar · View at Scopus 36. M. J. D’Souza, A. M. Darrington, and D. N. Kevill, “A study of solvent effects in the solvolysis of propargyl chloroformate,” ISRN Organic Chemistry, vol. 2011, Article ID 7671411, 6 pages, 2011. View at Publisher · View at Google Scholar 37. H. J. Koh and S. J. Kang, “A kinetic study on solvolysis of 9-fluorenylmethyl chloroformate,” Bulletin of the Korean Chemical Society, vol. 32, no. 10, pp. 3799–3801, 2011. 38. M. J. D'Souza, M. J. McAneny, D. N. Kevill, J. B. Kyong, and S. H. Choi, “Kinetic evaluation of the solvolysis of isobutyl chloro-and chlorothioformate esters,” Beilstein Journal of Organic Chemistry, vol. 7, pp. 543–552, 2011. View at Publisher · View at Google Scholar · View at Scopus 39. M. J. D’Souza, K. E. Shuman, A. O. Omondi, and D. N. Kevill, “Detailed analysis for the solvolysis of isopropenyl chloroformate,” European Journal of Chemistry, vol. 2, no. 2, pp. 130–135, 2011. 40. M. J. D’Souza, S. M. Hailey, B. P. Mahon, and D. N. Kevill, “Understanding solvent effects in the solvolysis of 4-fluorophenyl chlorothionoformate,” Chemical Sciences Journal, vol. CSJ-35, pp. 1–9, 2011. 41. H. J. Koh and S. J. Kang, “Correlation of the rates on solvolysis of 2,2,2-trichloroethyl chloroformate using the extended grunwald-winstein equation,” Bulletin of the Korean Chemical Society, vol. 33, no. 5, pp. 1729–1733, 2012. View at Publisher · View at Google Scholar · View at Scopus 42. M. J. D'Souza, J. A. Knapp, G. A. Fernandez-Bueno, and D. N. Kevill, “Use of linear free energy relationships (LFERS) to elucidate the mechanisms of reaction of a γ-methyl-β-alkynyl and an ortho -infstituted aryl chloroformate ester,” International Journal of Molecular Sciences, vol. 13, no. 1, pp. 665–682, 2012. View at Publisher · View at Google Scholar · View at Scopus 43. J. B. Kyong, Y. Lee, M. J. D’Souza, B. P. Mahon, and D. N. Kevill, “Correlation of the rates of solvolysis of tert-butyl chlorothioformate and observations concerning the reaction mechanism,” European Journal of Chemistry. In press. 44. D. N. Kevill and M. J. D'Souza, “Correlation of the rates of solvolysis of benzoyl chloride and derivatives using extended forms of the Grunwald-Winstein equation,” Journal of Physical Organic Chemistry, vol. 15, no. 12, pp. 881–888, 2002. View at Publisher · View at Google Scholar · View at Scopus 45. T. W. Bentley and I. S. Koo, “Concurrent pathways to explain solvent and substituent effects for solvolyses of benzoyl chlorides in ethanol-trifluoroethanol mixtures,” Arkivoc, vol. 2012, no. 7, pp. 25–34, 2012. View at Scopus 46. T. W. Bentley and H. C. Harris, “Solvolyses of benzoyl chlorides in weakly nucleophilic media,” International Journal of Molecular Sciences, vol. 12, no. 8, pp. 4805–4818, 2011. View at Publisher · View at Google Scholar · View at Scopus 47. T. W. Bentley, “Structural effects on the solvolytic reactivity of carboxylic and sulfonic acid chlorides. Comparisons with gas-phase data for cation formation,” Journal of Organic Chemistry, vol. 73, no. 16, pp. 6251–6257, 2008. View at Publisher · View at Google Scholar · View at Scopus 48. J. B. Kyong, B. C. Park, C. B. Kim, and D. N. Kevill, “Rate and product studies with benzyl and p-nitrobenzyl chloroformates under solvolytic conditions,” Journal of Organic Chemistry, vol. 65, no. 23, pp. 8051–8058, 2000. View at Publisher · View at Google Scholar · View at Scopus 49. S. H. Choi, M. H. Seong, Y. W. Lee, J. B. Kyong, and D. N. Kevill, “Correlation of the rates of solvolysis of phenyl fluorothionoformate,” Bulletin of the Korean Chemical Society, vol. 32, no. 4, pp. 1268–1272, 2011. View at Publisher · View at Google Scholar · View at Scopus 50. K. H. Yew, H. J. Koh, H. W. Lee, and I. Lee, “Nucleophilic substitution reactions of phenyl chloroformates,” Journal of the Chemical Society, no. 12, pp. 2263–2268, 1995. View at Scopus 51. S. K. An, J. S. Yang, J. M. Cho et al., “Correlation of the rates of solvolysis of phenyl chlorodithioformate,” Bulletin of the Korean Chemical Society, vol. 23, no. 10, pp. 1445–1450, 2002. View at Scopus 52. C. Hansch and A. Leo, Substituent Constants for Correlation Analysis in Chemistry and Biology, Wiley-Interscience, New York, NY, USA, 1979. 53. T. W. Bentley and M. S. Garley, “Correlations and predictions of solvent effects on reactivity: some limitations of multi-parameter equations and comparisons with similarity models based on one solvent parameter,” Journal of Physical Organic Chemistry, vol. 19, no. 6, pp. 341–349, 2006. View at Publisher · View at Google Scholar · View at Scopus 54. M. D. Markel, “The power of a statistical test. What does insignificance mean?” Veterinary Surgery, vol. 20, no. 3, pp. 209–214, 1991. View at Scopus 55. T. W. Bentley and H. C. Harris, “Separation of mass law and solvent effects in kinetics of solvolyses of p-nitrobenzoyl chloride in aqueous binary mixtures,” Journal of Organic Chemistry, vol. 53, no. 4, pp. 724–728, 1988. View at Scopus 56. T. W. Bentley, H. C. Harris, Z. H. Ryu, G. T. Lim, D. D. Sung, and S. R. Szajda, “Mechanisms of solvolyses of acid chlorides and chloroformates. Chloroacetyl and phenylacetyl chloride as similarity models,” Journal of Organic Chemistry, vol. 70, no. 22, pp. 8963–8970, 2005. View at Publisher · View at Google Scholar · View at Scopus 57. D. N. Kevill and M. H. Abduljaber, “Correlation of the rates of solvolysis of cyclopropylcarbinyl and cyclobutyl bromides using the extended Grunwald-Winstein equation,” Journal of Organic Chemistry, vol. 65, no. 8, pp. 2548–2554, 2000. View at Publisher · View at Google Scholar · View at Scopus
{"url":"http://www.hindawi.com/journals/jchem/2013/248534/","timestamp":"2014-04-19T15:04:57Z","content_type":null,"content_length":"187284","record_id":"<urn:uuid:8183aaf1-0b47-49bb-aeaa-e847ac6400cb>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
September 2011 current issue of Behavioral and Brain Sciences may be of interest to empirically-minded Bayesian M-Phi'ers. The target article is 'Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition', by Matt Jones and Bradley C. Love. Here is the abstract: The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology – namely, Behaviorism and evolutionary psychology – that set aside mechanistic explanations or make use of optimality assumptions. Through these comparisons, we identify a number of challenges that limit the rational program's potential contribution to psychological theory. Specifically, rational Bayesian models are significantly unconstrained, both because they are uninformed by a wide range of process-level data and because their assumptions about the environment are generally not grounded in empirical measurement. The psychological implications of most Bayesian models are also unclear. Bayesian inference itself is conceptually trivial, but strong assumptions are often embedded in the hypothesis sets and the approximation algorithms used to derive model predictions, without a clear delineation between psychological commitments and implementational details. Comparing multiple Bayesian models of the same task is rare, as is the realization that many Bayesian models recapitulate existing (mechanistic level) theories. Despite the expressive power of current Bayesian models, we argue they must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition. We lay out several means for such an integration, which take into account the representations on which Bayesian inference operates, as well as the algorithms and heuristics that carry it out. We argue this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls that have plagued previous theoretical movements. Among the commentators (BBS works with the formula of one target article and a number of peer-commentaries per issue), some familiar names such as Lawrence Barsalou, Nick Chater, Mike Oaksford, Clark Glymour, and my esteemed colleague Jan-Willem Romeijn. I haven't had the chance to check it out yet, but it looks promising! UPDATE October 1st: Nelson withdraws his claim. Readers probably recall the 'Voevodsky affair' of a few months ago (as reported here and in subsequent posts), prompted by Voevodsky's claim (or suggestion?) that the consistency of PA is an open problem. This week, an even more daring claim has circulated at the FOM list: PA is downright inconsistent. Its author is Edward Nelson, professor of mathematics at Princeton, known for his work on Internal Set Theory and Robinson arithmetic. In his words: I am writing up a proof that Peano arithmetic (P), and even a small fragment of primitive-recursive arithmetic (PRA), are inconsistent. He refers to a draft of the book he is working on, available here, and to a short outline of the book, available here. I've skimmed through the outline, which focuses mostly on a critique of finitism for making tacit infinitary assumptions, and towards the end there are some interesting considerations on the methodology he has been working with. In particular, he has devised a automated theorem-checker, qea: Qea. If this were normal science, the proof that P is inconsistent could be written up rather quickly. But since this work calls for a paradigm shift in mathematics, it is essential that all details be developed fully. At present, I have written just over 100 pages beginning this. The current version is posted as a work in progress at http://www.math.princeton.edu/~nelson/books.html, and the book will be updated from time to time. The proofs are automatically checked by a program I devised called qea (for quod est absurdum, since all the proofs are indirect). Most proof checkers require one to trust that the program is correct, something that is notoriously diffi cult to verify. But qea, from a very concise input, prints out full proofs that a mathematician can quickly check simply by inspection. To date there are 733 axioms, de nitions, and theorems, and qea checked the work in 93 seconds of user time, writing to les 23 megabytes of full proofs that are available from hyperlinks in the book. At this point, I really do not know what to think of Nelson's claims, and I doubt that I would be able to make much sense of his proofs anyway. So for now I'm just acting as a 'reporter', but I'd be curious to hear what others think! UPDATE: Over at The n-Category Cafe', John Baez has a much more detailed post on Nelson's claims, including a suggestion by Terry Tao on a G+ thread as to what seems to be wrong with the proof. (Yay for G+!) Check also Tao's comment at 5:29. ANOTHER UPDATE: Edward Nelson has replied at FOM to some of the queries that had been put forward. You can read his message here. He replies in particular to Tao's observations: So far as I know, the concept of the "Kolmogorov complexity of a theory", as opposed to the Kolmogorov complexity of a number, is undefined. Certainly it does not occur in Chaitin's theorem or the Kritchman-Raz proof. I work in a fixed theory Q_0^*. As Tao remarks, this theory cannot prove its own consistency, by the second incompleteness theorem. But this is not necessary. The virtue of the Kritchman-Raz proof of that theorem is that one needs only consider proofs of fixed rank and level, and finitary reasoning leads to a contradiction. UPDATE AGAIN: I really encourage everybody to go check the comments at the The n-Category Cafe' post; the explanations of what is wrong with Nelson's purported proof are really very clear and accessible. I'm wondering if it would be worth writing a separate post on this? (Anyone?) At any rate, as an anonymous commentator says below, there's much to be commended in a purported proof whose loop-hole(s) can be identified fairly quickly; at least it was well formulated to start with. In the context of my new research project The Roots of Deduction, two positions have just been advertised, one for a PhD student and one for a post-doc. Details on the two positions can be found here . The PhD candidate will reassess the literature on psychology of reasoning and mathematical cognition from the point of view of the dialogical, multi-agent conception of deduction underpinning the project, and the post-doc will work on the historical development of the deductive method in ancient Greece, again from a dialogical point of view. Please help me spread the word, and do get in touch if you think you know of suitable candidates. Thanks! It's in Richard Pettigrew's latest entry in the Stanford Encyclopedia of Philosophy. (It occurred to me that a couple of friends and M-PHIers in Munich will find this much more useful than my awkward attempts to meet their queries a few nights ago ;-) (Cross-posted at NewAPPS.) So far, I have not been following developments in formal epistemology very closely, even though the general project has always been in the back of my mind as a possible case-study for my ideas on the methodology of using formal tools in philosophy (and elsewhere). Well, last week I attended two terrific talks in formal epistemology, one by Branden Fitelson (joint work with Kenny Easwaran) in Munich, and one by Jeanne Peijnenburg (joint work with David Atkinson) in Amsterdam. (Full disclosure: Branden is a good friend, and Jeanne is my boss in Groningen! But I’m sure everybody will agree they are among the very best people working on formal epistemology these days.) These two talks illustrate two different ways in which the application of formal methods can be illuminating for the analysis of epistemological concepts and theories, and thus confirmed my hunch that formal epistemology can be a good case study for a more general reflection on formal methodology. Let me start with Branden’s talk, 'An 'evidentialist' worry about Joyce's argument for probabilism'. The starting point was the preface paradox, and how (in its ‘bad’ versions) it seems to represent a conflict between evidential norms and coherence/accuracy norms. We all seem to agree that both coherence/accuracy norms and evidential norms have a normative grip over our concept of knowledge, but if they are in conflict with one another (as made patent by preface-like cases), then it looks like we are in trouble: either our notion of knowledge is somewhat incoherent, or there can’t be such thing as knowledge satisfying these different, conflicting constraints. Now, according to Branden (and Kenny), Jim Joyce’s move towards a probabilistic account of knowledge is to a large extent motivated by the belief that the probabilistic framework allows for the dissolution of the tension/conflict between the different kinds of epistemic norms, and thus restores peace in the kingdom. However, through an ingenious but not particularly complicated argument (relying on some ‘toy examples’), Branden and Kenny show that, while Joyce’s accuracy-dominance approach to grounding a probabilistic coherence norm for credences is able to resist the old ‘evidentialist’ threats of the preface-kind, new evidentialist challenges can be formulated within the Joycian framework itself. (I refer the reader to the paper and the handout of the presentation for details.) At Q&A, I mentioned to Branden that this looks a lot like what we’ve had with respect to the Liar paradox in recent decades: as is well known, with classical logic and a naïve theory of truth, paradox is just around the corner, which has motivated a number of people to develop ‘fancy’ formal frameworks in which paradox could be avoided (Kripke’s gappy approach, Priest’s glutty approach, supervaluationism, what have you). But then, virtually all of these frameworks then see the emergence of new and even more deadly forms of paradox – what is referred to as the ‘revenge’ phenomenon. What Branden and Kenny’s work seemed to be illustrating is that the Joycean probabilistic framework is not immune to revenge-like phenomena; the preface paradox strikes again, in new clothes. Branden seemed to agree with my assessment of the situation, and concluded that one of the upshots of these results is that there seems to be something fishy with how the different kinds of epistemic norms interact on a conceptual level, which cannot be addressed simply by switching to a clever, fancy formalism. In other words, probabilism is great, but it will not make this very problem go away. This might seem like a negative conclusion with respect to the fruitfulness of applying formal methods in epistemology, but in fact the main thing to notice is that Branden and Kenny’s results emerge precisely from the formal machinery they deploy. Indeed, one of the most fascinating features of formal methods generally speaking is that they seem to be able to probe and explore their own limitations: Gödel’s incompleteness results, Arrow’s impossibility theorem, and so many other revealing examples. It is precisely by deploying these formal methods that Branden and Kenny can then conclude that more conceptual discussion on how the different kinds of epistemic norms interact is required. Three days later, I attended Jeanne’s talk at the DIP-colloquium in Amsterdam (the colloquium I used to run when I was still working there). The title of the talk is great, ‘Turtle epistemology’, which of course refers to the famous anecdote ‘it’s turtles all the way down!’. Jeanne and her co-author David are interested in all kinds of regress phenomena in epistemology, in particular in the foundationalist claim that infinite regress makes any justification impossible. I quote from the abstract: The regress problem in epistemology traditionally takes the form of a one-dimensional epistemic chain, in which (a belief in) a proposition p1 is epistemically justified by (a belief in) p2, which in turn is justified by (a belief in) p3, and so on. Because the chain does not have a final link from which the justification springs, it seems that there can be no justification for p1 at all. In this talk we will explain that the problem can be solved if we take seriously what is nowadays routinely assumed, namely that epistemic justification is probabilistic in character. In probabilistic epistemology, turtles can go all the way down. They start with a formulation of justification in probabilistic terms, more specifically in terms conditional probabilities: proposition E[n+1 ]probabilistically supports E[n] if and only if E[n] is more probable if E[n+1] is true than if it is false. P (E[n] | E[n+1]) > P (E[n] | ~E[n+1]) The rule of total probability then becomes: P (E[n]) = P (E[n] | E[n+1]) P (E[n+1]) + P (E[n] | ~E[n+1]) P (~E[n+1]) Again through an ingenious and very elegant argument, Jeanne and David then formulate infinite chains of conditional probabilities, but show that it is simply not true that they do not yield a determinate probability to the proposition in question. This is because, the longer the chain, and thus the further away the ‘ur-proposition’ is (the one we cannot get to because the chain is infinite), the smaller its influence on the total probability of E[0]. At the limit, it gets cancelled out, as it is multiplied by a number that tends to 0 (for details, check their paper here, which appeared in the Notre Dame Journal of Formal Logic). The moral I drew from their results is that, contrary to the classic, foundational axiomatic conception of knowledge and science, the firmness of our beliefs is in fact not primarily grounded in the very basic beliefs all the way down in the chain, i.e. the ‘first truths’ (Aristotle’s Arché). Rather, their influence becomes smaller and smaller as we go up the chain. At this point, there seem to be two basic options: either we must accept that the classical foundationalist picture is wrong, or we reject the probabilistic analysis of justification as in fact capturing our fundamental concept of knowledge. Either way, this particular formal analysis was able to unpack the consequences of adopting a probabilistic framework, and to show not only that in this setting, infinite regress need not be an insurmountable problem, but also that the epistemic weight of ‘basic truths’ may be much less significant than is usually thought. In a sense, this seems to me to be an example of Carnapian explication, where the deployment of formal methods can in fact unravel aspects of our concept of knowledge that we were not aware of. Thus, these two talks seemed to me to illustrate the strength of formal methodologies at their best: in investigating their own limits, and in unpacking features of some of our concepts that are nevertheless ‘hidden’, buried under some of their more superficial layers. I guess I’m starting to like formal epistemology… (Apologies for shameless self-promotion!) In several of my posts, I mentioned the book on formal languages that I've been working on for the last few years. I now have a draft of the book ready for (moderate!) public consumption, which is now available here. The two final chapters are still missing, but the draft is already something of a coherent whole, or so I hope. Many people have kindly expressed their interest in checking out the material, hence my decision to make it available online at this point, despite the fact that it is still a somewhat rough draft (references are still a mess). Needless to say, comments are always welcome :) OK, so not so much a puzzle as a question this time. I am currently co-teaching a graduate seminar on philosophy of mathematics this semester (structuralism versus logicism, to be more specific). We did a pretty good job of advertising the seminar, and as a result have a number of mathematicians sitting in the class (both faculty and graduate students). The issue is this: As we talk about the philosophical questions and their possible solutions (for example, last week we read Benecerraf's "What Sets Could Not Be" and "Mathematical Truth", since these set up the issues at issue between modal structuralism and Scottish logicism quite nicely), the mathematicians kept coming back to the fact that none of these issues seem to have any bearing on what mathematicians actually do. At one level I agree with this - when actually doing mathematics, mathematicians need not, and probably ought not, be thinking about whether their quantifiers range over abstract objects or something else. Rather, they should be worrying about what follows from what (to put it in an overly simplistic way). There might be an exception to the above paragraph in moments of mathematical crisis - for example, if one were a nineteenth-century mathematician working in real analysis. But in general the point seems, on a certain level, right. On the other hand, however, it seems obvious to me that mathematicians will benefit from thinking about philosophical issues (and benefit qua mathematician). But it is somewhat difficult to articulate why they would benefit. So, any thoughts? In short, what should we say to mathematicians regarding why they ought to care about what philosophers say? I came across an interesting short note about the largest existing mathematical proof, composed of 15.000 pages; it involved more than 100 mathematicians in its formulation. I wonder if there's an entry for it in the Guinness Book of Records? There should be! The Enormous Theorem concerns groups, which in mathematics can refer to a collection of symmetries, such as the rotations of a square that produce the original shape. Some groups can be built from others but, rather like prime numbers or the chemical elements, "finite simple" groups are elemental. There are an infinite number of finite simple groups but a finite number of families to which they belong. Mathematicians have been studying groups since the 19th century, but the Enormous Theorem wasn't proposed until around 1971, when mathematician Daniel Gorenstein of Rutgers University in New Jersey devised a plan to identify all the finite simple groups, divide them into families and prove that no others could exist. It'd better not escape notice that two M-PHIers, Hannes Leitgeb and Richard Pettigrew, made it to the top ten 2010 papers in philosophy with "An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy," Philosophy of Science, 77 (2010), 236-272. Congratulations, Hannes and Richard! (Cross-posted at NewAPPS.) Johan van Benthem is one of my favorite philosophers of logic (and not just because I'm ultimately an Amsterdam child!). He is completely idiosyncratic as a philosopher of logic, as he refuses to 'waste his time' with classical topics such as truth, consequence, paradoxes etc. But this is exactly what I like about what he has to say: he looks at the practices of logicians (being one himself!) and tries to make sense of what it is that we are doing when we 'do logic' in the currently established ways -- at times, adopting a rather critical stance as well. True enough, his observations are very much connected with his own research agenda, and yet they are also surprisingly general. One of the concepts he's been talking about -- not so much in his 'official' papers, but mostly at talks, personal communication and interviews -- is the concept of system-imprisonment. (It is, however, mentioned in his 1999 'Wider Still and Wider: resetting the bounds of logic', in A. Varzi, ed., The European Review of Philosophy, CSLI Publications, Stanford, 21–44.) Here are some interesting passages: But how good is the model of natural language provided by first-order logic? There is always a danger of substituting a model for the original reality, because of the former’s neatness and simplicity. I have written several papers over the years pointing at the insidious attractions and mind-forming habits of logical systems. Let me just mention one. The standard emphasis in formal logical systems is ‘bottom up’. We need to design a fully specified vocabulary and set of construction rules, and then produce complete constructions of formulas, their evaluation, and inferential behavior. This feature makes for explicitness and rigor, but it also leads to system imprisonment. The notions that we define are relative to formal systems. This is one of the reasons why outsiders have so much difficulty grasping logical results: there is usually some parameter relativizing the statement to some formal system, whether first-order logic or some other system. But mathematicians want results about ‘arithmetic’, not about the first-order Peano system for arithmetic, and linguists want results about ‘language’, not about formal systems that model (I can't disclose the source for this quotation for now, as it is from a paper for a project I'm involved with which must remain a secret for a few more months... Anyway, the remark on mathematicians wanting results about 'arithmetic' also reminds me of the series of posts on Voevodsky and the incompleteness of arithmetic that we had a while ago.) Nevertheless, I am worried by what I call the ‘system imprisonment’ of modern logic. It clutters up the philosophy of logic and mathematics, replacing real issues by system-generated ones, and it isolates us from the surrounding world. I do think that formal languages and formal systems are important, and at some extreme level, they are also useful, e.g., in using computers for theorem proving or natural language processing. But I think there is a whole further area that we need to understand, viz. the interaction between formal systems and natural practice. (This is from an interview at the occasion of the Chinese translation of one of his books.) I submit that the notions of system imprisonment and system generated problems must be taken seriously when we are using formal methods to investigate a given external target phenomenon. Oftentimes, a whole cottage industry becomes established to tackle what is taken to be a real issue, which is in fact an issue emerging from the formalism being used, not an issue pertaining to the target phenomenon itself. My favorite example here is the issue of 'free variables' in de re modal sentences, which then became seen as a real, deep metaphysical issue. In truth, it is simply an upshot of the formalism used, in particular the role of variables and the notions of bound or free variables. By adopting a different framework (as I did in a paper on Ockham's modal logic of many years ago, in the LOGICA Yearbook 2003 - pre-print version here) which does not treat quantification by means of variables, the 'issue' simply vanishes. More generally, system imprisonment points in the direction of the epistemic limits of formal methods. Ultimately, what we prove is always relative to a given formal system, and the result lives or perishes with the epistemic reliability of the formal system itself. This does not mean that we should resign ourselves to some form of skepticism and/or relativism (Johan clearly does not!), but simply that we must bear in mind that the formal models are exactly that: models, not the real thing. All too often, the choice of a benchmark of rational agency critically affects the consequences drawn from the empirical study of behavior in cognitive science. Fascinating examples arise from classical studies in the psychology of reasoning (here's a older post touching upon this). A recent Cognition paper by Daniel Bartels and David Pizarro provides challenging evidence concerning normative standards of moral judgment. The abstract goes as follows: "Researchers have recently argued that utilitarianism is the appropriate framework by which to evaluate moral judgment, and that individuals who endorse non-utilitarian solutions to moral dilemmas (involving active vs. passive harm) are committing an error. We report a study in which participants responded to a battery of personality assessments and a set of dilemmas that pit utilitarian and non-utilitarian options against each other. Participants who indicated greater endorsement of utilitarian solutions had higher scores on measures of Psychopathy, machiavellianism, and life meaninglessness. These results question the widely-used methods by which lay moral judgments are evaluated, as these approaches lead to the counterintuitive conclusion that those individuals who are least prone to moral errors also possess a set of psychological characteristics that many would consider prototypically immoral." (Bartels, D. & Pizarro, D., "The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas", Cognition, 121, 2011, pp. 154-161.) (I owe the hint to Thoughts on Thoughts.) Yesterday the announcement of a new book was sent around at the FOM list, and as it looks like a very interesting book, I thought I'd put a notice of it here at M-Phi too. It is edited by Juliette Kennedy and Roman Kossak, and the (somewhat vague) title is Set Theory, Arithmetic and Foundations of Mathematics. Here is the table of contents: 1. Introduction - Juliette Kennedy and Roman Kossak; 2. Historical remarks on Suslin's problem - Akihiro Kanamori; 3. The continuum hypothesis, the generic-multiverse of sets, and the Ω conjecture - W. Hugh Woodin; 4. ω-Models of finite set theory - Ali Enayat, James H. Schmerl and Albert Visser; 5. Tennenbaum's theorem for models of arithmetic - Richard Kaye 6. Hierarchies of subsystems of weak arithmetic - Shahram Mohsenipour; 7. Diophantine correct open induction - Sidney Raffer; 8. Tennenbaum's theorem and recursive reducts - James H. Schmerl; 9. History of constructivism in the 20th century - A. S. Troelstra; 10. A very short history of ultrafinitism - Rose M. Cherubin and Mirco A. Mannucci; 11. Sue Toledo's notes of her conversations with Gödel in 1972–1975 - Sue Toledo; 12. Stanley Tennenbaum's Socrates - Curtis Franks; 13. Tennenbaum's proof of the irrationality of √2. I'm not sure what the idea is behind grouping this particular collection of papers (I have not had the chance to check it out, there's probably something on this at the introduction), but it does look like many of these papers are a must-read. I'm particularly interested in the papers concerning non-standard models of arithmetic and Tennenbaum's theorem (full disclosure: Juliette Kennedy and I had a very interesting correspondence on the topic a few years ago), but the set-theory section is also high-power stuff for sure!
{"url":"http://m-phi.blogspot.com/2011_09_01_archive.html","timestamp":"2014-04-19T02:14:45Z","content_type":null,"content_length":"165288","record_id":"<urn:uuid:18ec82fd-0e94-4871-aa80-fe65ecf75b9f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
North Chelmsford Find a North Chelmsford ACT Tutor ...One of my pleasures in life is helping people improve their English reading, speaking, and writing. I like to adjust how I do this to help with the applications that most interest each student. Building from such a base, greater skill quickly spreads into other aspects of English communications. 55 Subjects: including ACT Math, reading, English, writing ...Specifically, one of my students earned a perfect score on the SAT and I've helped coach several other students to perfect section scores. I teach students both strategies for how to tackle each section of the test and the content they'll need to know in order to excel. I also work with students on timing strategies so they neither run out of time nor rush during a section. 26 Subjects: including ACT Math, English, linear algebra, algebra 1 Hi, my name is Karen. I specialize in Language Arts and Math. I am ACT, SAT, ISEE and SSAT test preparation trained and experienced. 42 Subjects: including ACT Math, reading, English, writing ...I read avidly in my spare time and have a BA in philosophy. My vocabulary skills enable me to score in the 99th percentile on standardized tests. I enjoy helping others to master vocabulary, whether through games or discussing the different roots, suffixes, and prefixes in a word. 29 Subjects: including ACT Math, English, reading, writing I am a high school math teacher working on my second Master's Degree. I went to Lesley University for my undergraduate degree in education and mathematics. I went back to Lesley for my first Master's degree in education to be a Reading Specialist. 14 Subjects: including ACT Math, reading, geometry, algebra 1
{"url":"http://www.purplemath.com/north_chelmsford_act_tutors.php","timestamp":"2014-04-16T10:37:07Z","content_type":null,"content_length":"23866","record_id":"<urn:uuid:f94324b8-2893-4e03-8b2f-b9bb95ca469c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Iselin Math Tutor Find a Iselin Math Tutor ...I am a very frequent guest teacher in Middle School, teaching Pre-Algebra, Algebra, and Geometry. I have studied and reviewed many sample SAT exams, and I have developed pertinent and essential strategies for navigating the test quickly and confidently. I am a highly qualified Certified Teacher... 41 Subjects: including calculus, reading, ACT Math, statistics ...Patience is a top priority since frustration is a main side effect. Science is a large field of study and of great importance which should not be taken lightly, whether it's starting your children off on the right foot or having a study partner for the next test. Do not procrastinate.I love math, no real idea why. 13 Subjects: including geometry, prealgebra, algebra 2, algebra 1 ...Hello,I've been tutoring all aspects of the ASVAB for over 2 years. I have found my knowledge of advanced mathematics, English and other standardized tests can be directly applied to help potential students achieve their goals in this test. I break down the exam into efficient and effective tes... 55 Subjects: including calculus, discrete math, differential equations, career development ...The writing involves understanding the subject that the student must address; formulating a point of view; identifying the main ideas supporting that point of view; supplying the supporting evidence; and capping the essay with a concluding sentence—all in less than a half-hour. If you practice i... 23 Subjects: including algebra 1, prealgebra, SAT math, reading ...But that's not true!! All you need, is someone with a bit of perspective to take a look at how you do math, find a few small mistakes, and explain what to do in PLAIN ENGLISH, once that happens, all I have to do is sit by and watch for a few small details as my students solve equations that terri... 15 Subjects: including algebra 1, algebra 2, American history, European history
{"url":"http://www.purplemath.com/Iselin_Math_tutors.php","timestamp":"2014-04-19T17:18:56Z","content_type":null,"content_length":"23611","record_id":"<urn:uuid:e80b0413-f3a3-4d9e-a4eb-5da2caf3aa47>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Jay Wolberg • March VIX Futures Expiration Date And A Suppressed VIX 4 comments Mar 8, 2013 1:07 PM | about stocks: It's one of those unusual months for VIX where there are 35 days between monthly options expiration day instead of 28 as March options expire on the 15th and April options expire on the 19th. This means that March VIX futures don't expire until the 20th this month, since VIX futures always expire 30 days before the next month's options expire. If you don't like counting you can always find a calendar for VIX futures expiration here. This extended month also causes some issues with the value of VIX during the next week when calculating its constant 30-day volatility. VIX is calculated using weighted values of near-term (VIN) and next-term (VIF) options of the S&P 500, with near-term options having at least one week to expiration (values can be found on the CBOE's page). Once options expiration is less than a week away VIX rolls to use the next term's values as the new near-term value, weighting the new VIN with a value of 1 and the new VIF with a value of 0. As the month progresses the value of these weights shift toward VIF until the next expiration when VIF has the weight of 1 and VIN has the weight of 0. For example, if you look at the value of March and April values on the CBOE's page you will see that current values are 10.47 and 12.71, respectively. Looking at VIX today you will notice that it is essentially equal to the April VIX value (this should not be confused with April VIX futures). Since options expire next Friday, VIX must use April and May as VIN and VIF values. However, since both April and May expiration are more than 30 days away, the near-term weight is greater than 1 and the next-term weight will be negative. Specifically, on Monday April will have a weight of 1.25 and May will have a weight of -0.25. From there the roll continues as usual with weights shifting a little bit each day (on Tuesday the weights are 1.2 and -0.2, then 1.15 and -0.15 on Wednesday, etc.). As an example using current values of April and May of 12.71 and 13.76, come Monday VIX will be calculated by (1.25)*(12.71) + (-0.25)*(13.76) = 15.89 - 3.44 = 12.45. You'll notice that the VIX here is actually less than both VIN and VIF, which seems abnormal but is not unusual during these weeks. Sometimes this results in a temporarily suppressed VIX, especially when the next-term month is priced substantially higher than the near-term month. For more information on how the VIX is calculated see the CBOE's VIX white paper. Back To Jay Wolberg's Instablog HomePage » Instablogs are blogs which are instantly set up and networked within the Seeking Alpha community. Instablog posts are not selected, edited or screened by Seeking Alpha editors, in contrast to contributors' articles. □ will this affect option calls on vxx? 9 Mar 2013, 09:50 PM Reply Like □ Afraid I don't have the answer to that one, Carl. I haven't studied options on VXX enough. 12 Mar 2013, 09:26 PM Reply Like □ Great article! Wondering about trading method you use and what instrument. Arbitrage on the derivatives because of contango? Will share article. Thanks. 12 Mar 2013, 05:13 AM Reply Like □ Thanks. The conditions over the past few years have been ideal for a short VXX/long XIV trade due to the roll yield from the contango term structure. At this point, with VIX and futures at multi-year lows and the term structure relatively flat I find it to be a good time to scale back positions or exit completely and wait for a vix spike to enter a new trade. For more info on the topic you can visit my blog at http://bit.ly/14UiLo8. 12 Mar 2013, 09:25 PM Reply Like Instablogs are Seeking Alpha's free blogging platform customized for finance, with instant set up and exposure to millions of readers interested in the financial markets. Publish your own instablog in minutes.
{"url":"http://seekingalpha.com/instablog/7382421-jay-wolberg/1630051-march-vix-futures-expiration-date-and-a-suppressed-vix","timestamp":"2014-04-17T06:52:48Z","content_type":null,"content_length":"79281","record_id":"<urn:uuid:38251979-58fe-44e1-b97c-9903b779fdcf>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Mosaic Puzzles Software • Advertisement Makes instant worksheets, puzzles, and quizzes for practicing multipication, division, subtraction, and addition skills. Makes bingo cards, math crosswords, flash cards, quizzes, color-in worksheets, and more based on the skill levels you choose. □ Shareware ($15.00) □ 3.03 Mb • Don't Just Improve Math Skills! EQUALS math puzzle games is proven to be a very effective practice tool because while students are improving their math skills, EQUALS is also developing their understanding of how math works. This technique makes math. ... □ Shareware ($12.95) □ 3.25 Mb □ Win95, Win98, WinME, WinNT 3.x, WinNT 4.x, Windows2000, WinXP, Windows2003 • Create skill-building worksheets, puzzles and activities in minutes! Covers addition, subtraction, multiplication, and division! Math ActivityMaker: Skills is an invaluable worksheet creation program for teachers, parents, or home school instructors. ... □ Shareware ($15.00) □ 3.13 Mb □ Windows Vista, XP, 2000, 98, Me, NT • Merry Motors Games collection includes 27 edutainment games that will help to your kids in training of memory, logic, math skills, spatial imagination and creative thinking. The program can be useful for preschool children at the age 5+. □ Shareware ($19.99) □ 14.71 Mb □ Win95, Win98, WinME, WinNT 3.x, WinNT 4.x, Windows2000, WinXP, Windows2003 • KidRocket is a FREE Kidsafe Web Browser with new Kids Email, TimeLock time limiter, Password-protected fullscreen lockdown mode to protect your desktop from curious and click happy children. Your child can save and email their artwork from the Art. ... □ KidRocket Kid Safe Web Browser □ Windows Vista, XP, 2000, 98, Me • Math Games, Math Puzzles, and Mathematical Recreations. Puzzles and math brain teasers online, dynamic and interactive. Check out our new math Puzzle Library. □ Win 3.1x,Win95,Win98,WinME,WinXP,WinNT 3.x,WinNT 4.x,Windows2000,Windows2003,MAC 68k,Mac PPC,Mac OS X,Mac Other • Math Games, Math Puzzles, and Mathematical Recr Math Games, Math Puzzles, and Mathematical Recreations. Puzzles and math brain teasers online, dynamic and interactive. □ Win 3.1x,Win95,Win98,WinME,WinXP,WinNT 3.x,WinNT 4.x,Windows2000,Windows2003,MAC 68k,Mac PPC,Mac OS X,Mac Other • Over 500 million puzzles in a fun multimedia game ensure unlimited playing time. Sharpen your problem solving by tackling math problems that are presented at precisely your level. Feedback tells how far you've progressed. □ Shareware ($14.99) □ 1.09 Mb □ Win95, Win98, WinME, WinNT 3.x, WinNT 4.x, WinXP, Windows2000 • PIQE: Chain of Puzzles, the newest logic game by Albymedia, plunges you into the world of various mesmerizing mind-bending puzzles and mind-riddles. The three types of problems - logic, math and spatial - will challenge your intellectual skills. □ Shareware ($14.95) □ 7.55 Mb □ Win95, Win98, WinME, WinNT 3.x, WinNT 4.x, WinXP, Windows2000, Windows2003, Windows Vista • License Plate Math is a fun game you can play on the road as well as on the computer. There is a virtually unlimited number of math puzzles, with 3 ways to play. The 11 levels of difficulty ensure continuing challenge. For ages 9 and up. □ Windows 95, 98, Me, NT, 2000 • Math games and Puzzles. Math Games. Speed Math Deluxe - Use addition, subtraction, multiplication and division to solve an equation as quickly as possible! Free educational elementary and preschool math games and online lessons. Free online math acti. ... • 3X Mosaic 2.0 offers you an interesting game in which you can use different shape and size of tiles, depending on a difficulty level and your personal preferences. As in reality, you can see reduced copy of the picture.You can close some puzzle packs. ... □ Commercial ($14.95) □ 265 Kb Related: Math Mosaic Puzzles - Play Mosaic Puzzles - Games Mosaic Puzzles - Chinese Mosaic Puzzles - Mosaic Math Lessons
{"url":"http://www.winsite.com/math/math+mosaic+puzzles/","timestamp":"2014-04-19T23:12:31Z","content_type":null,"content_length":"30979","record_id":"<urn:uuid:81988ffe-1d6f-4dcf-aeb7-e1292cbd4eb3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
Beverly Hills, CA Geometry Tutor Find a Beverly Hills, CA Geometry Tutor ...I have a strong background in writing (BA in American Studies from Stanford). College Consulting services include: -helping students develop a list of colleges they wish to apply to -organizing a student's college application "to do" list -helping students tailor their application to make the be... 49 Subjects: including geometry, reading, writing, English ...I took and passed trigonometry in High School with an A. I love math and continue to use Trigonometry greatly in my physics research. I enjoy working with others and can think on the spot about Trigonometry problems. 13 Subjects: including geometry, calculus, physics, logic ...I have read a lot of books on the game and traded strategies with many strong players. My regular game is No Limit Hold 'Em at casinos ($100 buy in). I can teach the following: - Dealing - Basic rules and game flow - Choosing good starting hands - How to calculate pot odds and hand odds - Ho... 24 Subjects: including geometry, reading, Microsoft Excel, study skills ...During my legal career, I worked with a number of fortune 500 companies who needed help protecting their intellectual property. After practicing law for four years, I finally realized that I had to make a change. Patent law wasn't my end all be all. 16 Subjects: including geometry, English, writing, calculus ...I teach my students where formulas come from instead of just memorizing them. Application of Algebra concepts to solve Geometry problems are reviewed. Taking admission tests requires not only mastering the subject but also the test taking process. 20 Subjects: including geometry, Spanish, calculus, physics Related Beverly Hills, CA Tutors Beverly Hills, CA Accounting Tutors Beverly Hills, CA ACT Tutors Beverly Hills, CA Algebra Tutors Beverly Hills, CA Algebra 2 Tutors Beverly Hills, CA Calculus Tutors Beverly Hills, CA Geometry Tutors Beverly Hills, CA Math Tutors Beverly Hills, CA Prealgebra Tutors Beverly Hills, CA Precalculus Tutors Beverly Hills, CA SAT Tutors Beverly Hills, CA SAT Math Tutors Beverly Hills, CA Science Tutors Beverly Hills, CA Statistics Tutors Beverly Hills, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/Beverly_Hills_CA_geometry_tutors.php","timestamp":"2014-04-17T04:12:52Z","content_type":null,"content_length":"24286","record_id":"<urn:uuid:4de8dd5b-9ebf-4005-9cc9-9d70192b4dcd>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
pls. help division by zero Author pls. help division by zero Ranch Hand Joined: May 06, 2000 it's clearly mentioned in jaworaski (chap 3)that Posts: 396 an integer by zero an ArithmeticException. positive floating-point value by zero POSITIVE_INFINITY. negative floating point value by zero NEGATIVE_INFINITY. Note that when the sign of zero is negative, such as -0, the sign of the result is reversed. so i write a small program but it is not behaving expectedly class DivisionTest public static void main(String m[]) int i =10; float j = 10.0f; according to my understanding the o/p should be but the out put is DivisionTest.java:7: Arithmetic exception. DivisionTest.java:8: Arithmetic exception. DivisionTest.java:9: Arithmetic exception. 3 errors can anybody pls. throw some light why this is happening? i'm using WinNT /jdk1.2.1 thanx in advance Hmmm.. that's interesting. Joined: Mar 17, I am running JDK 1.2.2 and it prints correct results for me ie., 2000 Infinity Posts: 5782 -Infinity May be you can run a quick search on Sun's bug parade to see if this inconsistency was reported in JDK 1.2.1 and fixed later. Open Group Certified Distinguished IT Architect. Open Group Certified Master IT Architect. Sun Certified Architect (SCEA). Ranch Hand Joined: May 10, I think its got something to do with the java compiler,if i use 2000 JDK 1.2.2 then it compiles fine,but if i use JDK1.1.6 then it gives the compile time errors.I think the same is the problem with Posts: 98 JDK1.2.1.Thanks. [This message has been edited by Surya B (edited July 31, 2000).] Joined: Jul 13, ---------------------------------------------- 2000 an integer by zero an ArithmeticException. Posts: 29 positive floating-point value by zero POSITIVE_INFINITY. negative floating point value by zero NEGATIVE_INFINITY. Note that when the sign of zero is negative, such as -0, the sign of the result is reversed. so i write a small program but it is not behaving expectedly class DivisionTest public static void main(String m[]) int i =10; float j = 10.0f; I tried my 1.2.2 compiler, the result is same as above, but when you say" if sign of zero is negative, the sign of result is reserve??? The last two statements print all the same " -Infinity", why? please advise. Joined: Jun 18, I am using JDK 1.2 & WIN NT and it prints correct results for me ie., 2000 Infinity Posts: 15 -Infinity [This message has been edited by Netla Reddy (edited July 31, 2000).] Ranch Hand To answer Helen's qn: Joined: May 26, The primitive integer types do not differentiate between +0 and 2002 -0. So when the integral operand is promoted to a floating-point type, it simply becomes 0.0. Posts: 79 10.0/0 will give Infinity 10.0/-0 will also give Infinity But the floating-point types differentiate between +0.0 and -0.0. 10.0/0.0 will give Infinity 10.0/-0.0 will give -Infinity subject: pls. help division by zero
{"url":"http://www.coderanch.com/t/192769/java-programmer-SCJP/certification/pls-division","timestamp":"2014-04-20T22:10:41Z","content_type":null,"content_length":"29944","record_id":"<urn:uuid:021a8aee-b9d0-42f8-8529-0c90121d4f7e>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
Roswell, GA Find a Roswell, GA SAT Tutor I was a National Merit Scholar and graduated magna cum laude from Georgia Tech in chemical engineering. I can tutor in precalculus, advanced high school mathematics, trigonometry, geometry, algebra, prealgebra, chemistry, grammar, phonics, SAT math, reading, and writing. I have been tutoring profe... 20 Subjects: including SAT reading, SAT math, SAT writing, reading ...Finished my undergrad as the Assistant for Teaching Assistant Development for the entire campus. After undergrad I scored in the 99th percentile on the GMAT and successfully taught GMAT courses for the next 3 years. Helped individuals with the math sections of the GRE, SAT, and ACT. 28 Subjects: including SAT math, physics, calculus, statistics ...WyzAnt's platform consists of video, audio, chat and whiteboard and is a very effective and convenient learning and tutoring tool. My reduced hourly rate for online tutoring is $65. If you would like to try it, let me know and we will schedule a 'tour'. 19 Subjects: including SAT math, SAT writing, SAT reading, physics ...I enjoy tutoring one on one or in small groups. I have a strong knowledge base in math and sciences, effective test-taking strategies, great analytical skills which help me understand where students are 'held up' and good ability to relate material in ways which students understand because of pr... 11 Subjects: including SAT math, chemistry, physics, biology ...I try to find activities that are both fun and practical.I have taught ESL for the past four years, for 12 hours per week, to students at all levels of proficiency. I have completed the Virginia Adult Educator Certification Program (ESOL) through Level 2, Session 1. At the time I left my ESL job, no other levels were yet available. 27 Subjects: including SAT reading, SAT math, Spanish, reading
{"url":"http://www.purplemath.com/Roswell_GA_SAT_tutors.php","timestamp":"2014-04-18T08:23:52Z","content_type":null,"content_length":"23742","record_id":"<urn:uuid:a046cc32-acd4-4c76-b320-5fdd29be6e40>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Franconia, VA Algebra 1 Tutor Find a Franconia, VA Algebra 1 Tutor ...I took a general music reading course in college that gave me the material and knowledge to break music down to where I can teach it to others. I also have 7 plus years in piano lessons, 2 years in violin, and 3 plus years in choir and sight reading. I love music and I love teaching it and sharing it with others. 56 Subjects: including algebra 1, reading, English, biology ...I tutor part-time, and am available in the evenings and on Saturdays. I look forward to working with you or your child!I've tutored Calculus for about 5 years now. Most of my calculus students have been high school seniors, but some are high-achieving sophomores and juniors. 21 Subjects: including algebra 1, Spanish, physics, reading ...I've used C++ in several classes, including data structures and parallel programming. Although I do not use C++ in my day to day work, I am knowledgeable in OpenMP, MPI, and the workings of the STL. I have tutored one student in C++ previously. 14 Subjects: including algebra 1, biology, algebra 2, Java ...I also grew up speaking French, as Maine is chock full of French Canadians. Science Marine biology has always been a hobby of mine. While I've been stationed around the country, I've taken the opportunity to work at the Boothbay Aquarium in Maine and California's Monterey Bay Aquarium, where I worked inside the penguin exhibit. 51 Subjects: including algebra 1, reading, chemistry, writing ...My name is Jenna and I'm an enthusiastic and well-rounded professional who loves to teach and help others. I have a BA in International Studies and spent the last year and a half in a village in Tanzania (East Africa) teaching in- and out-of-school youth a variety of topics including English, he... 35 Subjects: including algebra 1, Spanish, English, ESL/ESOL Related Franconia, VA Tutors Franconia, VA Accounting Tutors Franconia, VA ACT Tutors Franconia, VA Algebra Tutors Franconia, VA Algebra 2 Tutors Franconia, VA Calculus Tutors Franconia, VA Geometry Tutors Franconia, VA Math Tutors Franconia, VA Prealgebra Tutors Franconia, VA Precalculus Tutors Franconia, VA SAT Tutors Franconia, VA SAT Math Tutors Franconia, VA Science Tutors Franconia, VA Statistics Tutors Franconia, VA Trigonometry Tutors Nearby Cities With algebra 1 Tutor Baileys Crossroads, VA algebra 1 Tutors Cameron Station, VA algebra 1 Tutors Camp Springs, MD algebra 1 Tutors Dale City, VA algebra 1 Tutors Fort Hunt, VA algebra 1 Tutors Jefferson Manor, VA algebra 1 Tutors Kingstowne, VA algebra 1 Tutors Lake Ridge, VA algebra 1 Tutors Lincolnia, VA algebra 1 Tutors North Bethesda, MD algebra 1 Tutors North Springfield, VA algebra 1 Tutors Oak Hill, VA algebra 1 Tutors Saint Charles, MD algebra 1 Tutors Springfield, VA algebra 1 Tutors West Springfield, VA algebra 1 Tutors
{"url":"http://www.purplemath.com/Franconia_VA_algebra_1_tutors.php","timestamp":"2014-04-20T21:03:45Z","content_type":null,"content_length":"24355","record_id":"<urn:uuid:36708790-4929-4c0e-921a-5876140bc753>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
Type class extensions In Haskell 98, contexts consist of class constraints on type variables applied to zero or more types, as in f :: (Functor f, Num (f Int)) => f String -> f Int -> f Int In class and instance declarations only type variables may be constrained. With the option, any type may be constrained by a class, as in g :: (C [a], D (a -> b)) => [a] -> b Classes are not limited to a single argument either (see Section 6.2.4 In Haskell 98, instances may only be declared for a data or newtype type constructor applied to type variables. With the -98 option, any type may be made an instance: instance Monoid (a -> a) where ... instance Show (Tree Int) where ... instance MyClass a where ... instance C String where This relaxation, together with the relaxation of contexts mentioned above, makes the checking of constraints undecidable in general (because you can now code arbitrary Prolog programs using instances). To ensure that type checking terminates, Hugs imposes a limit on the depth of constraints it will check, and type checking fails if this limit is reached. You can raise the limit with the option, but such a failure usually indicates that the type checker wasn't going to terminate for the particular constraint problem you set it. Note that GHC implements a different solution, placing syntactic restrictions on instances to ensure termination, though you can also turn these off, in which case a depth limit like that in Hugs is With the relaxation on the form of instances discussed in the previous section, it seems we could write class C a where c :: a instance C (Bool,a) where ... instance C (a,Char) where ... but then in the expression c :: (Bool,Char) , either instance could be chosen. For this reason, overlapping instances are forbidden: ERROR "Test.hs":4 - Overlapping instances for class "C" *** This instance : C (a,Char) *** Overlaps with : C (Bool,a) *** Common instance : C (Bool,Char) However if the option is set, they are permitted when one of the types is a substitution instance of the other (but not equivalent to it), as in class C a where toString :: a -> String instance C [Char] where ... instance C a => C [a] where ... Now for the type , the first instance is used; for any type , where is a type distinct from , the second instance is used. Note that the context plays no part in the acceptability of the instances, or in the choice of which to use. The above analysis omitted one case, where the type t is a type variable, as in f :: C a => [a] -> String f xs = toString xs We cannot decide which instance to choose, so Hugs rejects this definition. However if the option is set, this declaration is accepted, and the more general instance is selected, even though this will be the wrong choice if is later applied to a list of Hugs used to have a +m option (for multi-instance resolution, if Hugs was compiled with MULTI_INST set), which accepted more overlapping instances by deferring the choice between them, but it is currently broken. Sometimes one can avoid overlapping instances. The particular example discussed above is similar to the situation described by the Show class in the Prelude. However there overlapping instances are avoided by adding the method showList to the class. In Haskell 98, type classes have a single parameter; they may be thought of as sets of types. In Hugs, they may have one or more parameters, corresponding to relations between types, e.g. class Isomorphic a b where from :: a -> b to :: b -> a Multiple parameter type classes often lead to ambiguity. Functional dependencies (inspired by relational databases) provide a partial solution, and were introduced in Type Classes with Functional Dependencies, Mark P. Jones, In Proceedings of the 9th European Symposium on Programming, LNCS vol. 1782, Springer 2000. Functional dependencies are introduced by a vertical bar: class MyClass a b c | a -> b where This says that the parameter is determined by the parameter; there cannot be two instances of with the same first parameter and different second parameters. The type inference system then uses this information to resolve many ambiguities. You can have several dependencies: class MyClass a b c | a -> b, a -> c where This example could also be written class MyClass a b c | a -> b c where Similarly more than one type parameter may appear to the left of the arrow: class MyClass a b c | a b -> c where This says that the parameter is determined by the parameters together; there cannot be two instances of with the same first and second parameters, but different third parameters.
{"url":"http://cvs.haskell.org/Hugs/pages/users_guide/class-extensions.html","timestamp":"2014-04-21T09:37:18Z","content_type":null,"content_length":"9387","record_id":"<urn:uuid:f1b5a96f-bf14-49fb-a720-09c2cb2357db>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
Is this a terminologic issue? Jerzy Karczmarczuk karczma at info.unicaen.fr Fri Sep 19 11:44:24 EDT 2003 [[Sent to Haskell-café, and to comp.lang.fun]] If you look at some Web sites (Mathworld, the site of John Baez - a known spec. in algebraic methods in physics), or into some books on differential geometry, you might easily find something which is called pullback or Actually, it is the construction of a dual, whose meaning can be distilled and implemented in Haskell as follows. The stuff is very old, and very well Suppose you have two domains X and Y. A function F : X -> Y. The form (F x) gives some y. You have also a functor which constructs the dual spaces, X* and Y* - spaces of functionals over X or Y. A function g belongs to Y* if g : Y -> Z (some Z, let's keep one space like this). Now, I can easily construct a dual to F, the function F* : Y* -> X* by (F* g) x = g (F x) and this mapping is called pullback... While there is nothing wrong with that, and in Haskell one may easily write the 'star' generator (star f) g x = g (f x) star = flip (.) ... I have absolutely no clue why this is called a pullback. Moreover, in the incriminated diff. geom. books, its inverse is *not* called pushout, but push-forward. Anyway, I cannot draw any pullback diagram from that. The closest thing I found is the construction in Asperti & Longo, where a F in C[A,B] induces F* : C!B -> C!A where the exclam. sign is \downarrow, the "category over ...". The diagram is there, a 9-edge prism, but - in my eyes - is quite different from what one can get from this "contravariant composition" above. But my eyes are not much better than my ears, so... I sent this question to a few gurus, and the answers are not conclusive, although it seems that this *is* a terminologic confusion. Vincent Danos <Vincent.Danos at pps.jussieu.fr> wrote: > it really doesn't look like a categorical pullback > and it might well be a "pull-back" only in the sense > that if if F:A->B is a linear map say and f is a linear form on B, then F*(f) > is a linear form on A > defined as F*(f)(a)=f(b=F(a)) so one can "pull back" (linearly of course!) > linear forms on B to linear forms on A > "back" refers to the direction of F, i'd say. Does anybody have a different (or any!) idea about that? Thank you in advance for helping me to solve my homework. Jerzy Karczmarczuk Caen, France More information about the Haskell-Cafe mailing list
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2003-September/005135.html","timestamp":"2014-04-17T08:28:23Z","content_type":null,"content_length":"4878","record_id":"<urn:uuid:fde65083-4cd9-4338-a454-f7b01363d578>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
[Physics FAQ] - [Copyright] To be updated fairly soon, as some of the comments below about the Hubble constant are now out of date. Original by Michael Weiss. If the universe is expanding, does that mean atoms are getting bigger? Is the Solar System expanding? Mrs Felix: Why don't you do your homework? Allen Felix: The Universe is expanding. Everything will fall apart, and we'll all die. What's the point? Mrs Felix: We live in Brooklyn. Brooklyn is not expanding! Go do your homework. (from Annie Hall by Woody Allen) Mrs Felix is right. Neither Brooklyn, nor its atoms, nor the solar system, nor even the galaxy, is expanding. The Universe expands (according to standard cosmological models) only when averaged over a very large scale. The phrase "expansion of the Universe" refers both to experimental observation and to theoretical cosmological models. Let's look at them one at a time, starting with the observations. The observation is Hubble's redshift law. In 1929, Hubble reported that the light from distant galaxies is redshifted. If you interpret this redshift as a Doppler shift, then the galaxies are receding according to the law: (velocity of recession) = H * (distance from Earth) H is called Hubble's constant; Hubble's original value for H was 550 kilometres per second per megaparsec (km/s/Mpc). Current estimates range from 40 to 100 km/s/Mpc. (Measuring redshift is easy; estimating distance is hard. Roughly speaking, astronomers fall into two "camps", some favouring an H around 80 km/s/Mpc, others an H around 40—55). Hubble's redshift formula does not imply that the Earth is in particularly bad odour in the universe. The familiar model of the universe as an expanding balloon speckled with galaxies shows that Hubble's alter ego on any other galaxy would make the same observation. But astronomical objects in our neck of the woods -- our solar system, our galaxy, nearby galaxies -- show no such Hubble redshifts. Nearby stars and galaxies do show motion with respect to the Earth (known as "peculiar velocities"), but this does not look like the "Hubble flow" that is seen for distant galaxies. For example, the Andromeda galaxy shows blueshift instead of redshift. So the verdict of observation is: our galaxy is not expanding. The theoretical models are, typically, Friedmann-Robertson-Walker (FRW) spacetimes. Cosmologists model the universe using "spacetimes", that is to say, solutions to the field equations of Einstein's theory of general relativity. The Russian mathematician Alexander Friedmann discovered an important class of global solutions in 1923. The familiar image of the universe as an expanding balloon speckled with galaxies is a "movie version" of one of Friedmann's solutions. Robertson and Walker later extended Friedmann's work, so you'll find references to "Friedmann-Robertson-Walker" (FRW) spacetimes in the literature. FRW spacetimes come in a great variety of styles -- expanding, contracting, flat, curved, open, closed, . . . The "expanding balloon" picture corresponds to just a few of these. A concept called the metric plays a starring role in general relativity. The metric encodes a lot of information; the part we care about (for this FAQ entry) is distances between objects. In an FRW expanding universe, the distance between any two "points on the balloon" does increase over time. However, the FRW model is NOT meant to describe OUR spacetime accurately on a small scale -- where "small" is interpreted pretty liberally! You can picture this in a couple of ways. You may want to think of the "continuum approximation" in fluid dynamics -- by averaging the motion of individual molecules over a large enough scale, you obtain a continuous flow. (Droplets can condense even as a gas expands.) Similarly, it is generally believed that if we average the actual metric of the universe over a large enough scale, we'll get an FRW spacetime. Or you may want to alter your picture of the "expanding balloon". The galaxies are not just painted on, but form part of the substance of the balloon (poetically speaking), and locally affect its The FRW spacetimes ignore these small-scale variations. Think of a uniformly elastic balloon, with the galaxies modelled as mere points. "Points on the balloon" correspond to a mathematical concept known as a comoving geodesic. Any two comoving geodesics drift apart over time, in an expanding FRW spacetime. At the scale of the Solar System, we get a pretty good approximation to the spacetime metric by using another solution to Einstein's equations, known as the Schwarzschild metric. Using evocative but dubious terminology, we can say this models the gravitational field of the Sun. (Dubious because what does "gravitational field" mean in GR, if it's not just a synonym for "metric"?) The geodesics in the Schwarzschild metric do NOT display the "drifting apart" behaviour typical of the FRW comoving geodesics -- or in more familiar terms, the Earth is not drifting away from the Sun. By the way, Hubble's constant, is not, in spite of its name, constant in time. In fact, it is decreasing. Imagine a galaxy D light-years from the Earth, receding at a velocity V = H*D. D is always increasing because of the recession. But does V increase? No. In fact, V is decreasing. (If you are fond of Newtonian analogies, you could say that "gravitational attraction" is causing this deceleration. But be warned: some general relativists would object strenuously to this way of speaking.) So H is going down over time. But it is constant over space, i.e., it is the same number for all distant objects as we observe them today. The "true metric" of the universe is, of course, fantastically complicated; you can't expect idealized simple solutions (like the FRW and Schwarzschild metrics) to capture all the complexity. Our knowledge of the large-scale structure of the universe is fragmentary and imprecise. In newtonian terms, one says that the Solar System is "gravitationally bound" (ditto the galaxy, the local group). So the Solar System is not expanding. The case for Brooklyn is even clearer: it is bound by atomic forces, and its atoms do not typically follow geodesics. So Brooklyn is not expanding. Now go do your homework. (My thanks to Jarle Brinchmann, who helped with this list.) Misner, Thorne, and Wheeler, Gravitation, chapters 27 and 29. Page 719 discusses this very question; Box 29.4 outlines the "cosmic distance ladder" and the difficulty of measuring cosmic distances; Box 29.5 presents Hubble's work. MTW refer to Noerdlinger and Petrosian, Ap. J., 168 1—9 (1971), for an exact mathematical treatment of gravitationally bound systems in an expanding universe. M.V.Berry, Principles of Cosmology and Gravitation. Chapter 2 discusses the cosmic distance ladder; chapters 6 and 7 explain FRW spacetimes. Steven Weinberg, The First Three Minutes, chapter 2. A non-technical treatment. Hubble's original paper: A Relation Between Distance And Radial Velocity Among Extra-Galactic Nebulae, Proc. Natl. Acad. Sci. 15, No. 3, 168—173, March 1929. Sidney van den Bergh, The cosmic distance scale, Astronomy & Astrophysics Review 1989 (1) 111—139. M. Rowan-Robinson, The Cosmological Distance Ladder, Freeman. A new method has been devised recently to estimate Hubble's constant, using gravitational lensing. The method is described in: O Gron and Sjur Refsdal, Gravitational Lenses and the age of the universe, Eur. J. Phys. 13, 1992 178—183. S. Refsdal & J. Surdej, Rep. Prog. Phys. 56, 117—185 (1994) and H is estimated with this method in: H.Dahle, S.J. Maddox, P.B. Lilje, to appear in ApJ Letters. Two books may be consulted for what is known (or believed) about the large-scale structure of the universe: P.J.E. Peebles, An Introduction to Physical Cosmology. T. Padmanabhan, Structure Formation in the Universe.
{"url":"http://johanw.home.xs4all.nl/PhysFAQ/Relativity/GR/expanding_universe.html","timestamp":"2014-04-20T23:30:48Z","content_type":null,"content_length":"9569","record_id":"<urn:uuid:2ca9b5e8-4c2f-418d-82e4-3f9f8cd43c23>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help with cylindrical shells February 17th 2011, 08:48 PM #1 Mar 2010 Use the method of cylindrical shells to find the volume of the solid obtained by rotating the region bounded by the given curves about the x-axis. y = 4x^2, 2x + y = 6 I have found out that those two equations intersect at (-1.5, 9) and (1, 4). Since this is bounded by the x-axis, I use the y values (9 and 4) as the limits for the integral. However this is where a problem comes in. I don't know how to figure out this problem. I have transformed both equations so that now it is: x= sqrt(x/4) and x = 3 - (y/2) Seeing that x = 3 - (y/2) is higher than x= sqrt(x/4) I set up the integral problem as this: Integral from 4 to 9 (2pi)(y)(3 - (y/2) - sqrt(x/4)) But when I integrate that I get some crazy numbers that does not equal the answer at the back of the book. The book has (250pi)/3 as the answer. Please help me with this frustrating problem! Thank you so much in advance. This is why I like to do ALL such problems both ways. Not only does it provide more practice and more challenging problems, but I get a result verification for free. Well, that does support the book. You may wish to recognize that when solving for 'x', there are TWO branches that you might need to consider. Somewhere in that square root, we might need the negative value. Keep your eyes open for it. $\int_{0}^{4}2\cdot\pi\cdot y\cdot (\sqrt{\frac{y}{4}}-(-\sqrt{\frac{y}{4}}))\;dy$ There's the first piece. What do you get for the rest? $\int_{4}^{9}2\cdot\pi\cdot y\cdot (What(y)-WhatElse(y))\;dy$ This is why I like to do ALL such problems both ways. Not only does it provide more practice and more challenging problems, but I get a result verification for free. Well, that does support the book. You may wish to recognize that when solving for 'x', there are TWO branches that you might need to consider. Somewhere in that square root, we might need the negative value. Keep your eyes open for it. $\int_{0}^{4}2\cdot\pi\cdot y\cdot (\sqrt{\frac{y}{4}}-(-\sqrt{\frac{y}{4}}))\;dy$ There's the first piece. What do you get for the rest? $\int_{4}^{9}2\cdot\pi\cdot y\cdot (What(y)-WhatElse(y))\;dy$ Hm I don't really get what you are saying in the second part. Would it be 2*pi*y*([(y-6)/2]-(-[(y-6)/2])? No good. Nice try, but you have used the line on both sides. Draw some horizontal lines at y = 6 or y = 7 or y = 8 and see that the height of the segment is the line (as you have it) less the negative branch of the parabola (as it is in the first piece). [0,4] -- Positive Branch less Negative Branch [4,9] -- Line less Negative Branch Give it another go. February 17th 2011, 09:18 PM #2 MHF Contributor Aug 2007 February 17th 2011, 09:40 PM #3 Mar 2010 February 18th 2011, 04:20 AM #4 MHF Contributor Aug 2007
{"url":"http://mathhelpforum.com/calculus/171692-need-help-cylindrical-shells.html","timestamp":"2014-04-16T20:48:59Z","content_type":null,"content_length":"41695","record_id":"<urn:uuid:5d4cad70-d56b-404e-aa8f-76f2ea74e7ed>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
The Effects of International F/X Markets on Domestic Currencies Using Wavelet Networks: Evidence from Emerging Markets Cifter, Atilla and Ozun, Alper (2007): The Effects of International F/X Markets on Domestic Currencies Using Wavelet Networks: Evidence from Emerging Markets. Download (818Kb) | Preview This paper proposes a powerful methodology wavelet networks to investigate the effects of international F/X markets on emerging markets currencies. We used EUR/USD parity as input indicator (international F/X markets) and three emerging markets currencies as Brazilian Real, Turkish Lira and Russian Ruble as output indicator (emerging markets currency). We test if the effects of international F/X markets change across different timescale. Using wavelet networks, we showed that the effects of international F/X markets increase with higher timescale. This evidence shows that the causality of international F/X markets on emerging markets should be tested based on 64-128 days effect. We also find that the effects of EUR/USD parity on Turkish Lira is higher on 17-32 days and 65-128 days scales and this evidence shows that Turkish lira is less stable compare to other emerging markets currencies as international F/X markets effects Turkish lira on shorten time scale. Item Type: MPRA Paper Institution: Marmara University Original The Effects of International F/X Markets on Domestic Currencies Using Wavelet Networks: Evidence from Emerging Markets Language: English Keywords: F/X Markets; Emerging markets; Wavelet networks; Wavelets; Neural networks C - Mathematical and Quantitative Methods > C4 - Econometric and Statistical Methods: Special Topics > C45 - Neural Networks and Related Topics Subjects: F - International Economics > F3 - International Finance > F31 - Foreign Exchange G - Financial Economics > G1 - General Financial Markets > G15 - International Financial Markets Item ID: 2482 Depositing Atilla Cifter Date 03. Apr 2007 Last 12. Feb 2013 07:53 Alsberg, B.K., A.M. Woodward, and D.B. Kell, 1997. An Introduction to Wavelet Transforms for Chemometricians: A Time-Frequency Approach, Chemometrics and Intelligent Laboratory Systems, 37, 215-239. Campbell, C., 1997, Constructive Learning Techniques for Designing Neural Network Systems, in (ed CT Leondes) Neural Network Systems Technologies and Applications. San Diego: Academic Carney J.G., Cunningham P., 1999. The NeuralBAG algorithm: Optimizing generalization performance in bagged neural networks, to be presented at 7th European Symposium on Artificial Neural Networks, Bruges (Belgium), 21-23, April 1999. Daugman, J., 1988. Complete discrete 2D Gabor transform by neural networks for image analysis and compression, IEEE Trans. Acoustics, Speech, and Signal Processing, 36, 1169-1179. Dickey, D.A., Fuller,W.A., 1981. Likelihood ratio statistics for autoregressive time series with a unit root. Econometrica 49, 1057–1072. Echauz J. and Vachtsevanos G., 1996, Elliptic and radial wavelet neural networks in Proceding of Second World Automation Congress, Montpellier, France, 5, 173-179. Engle, R.F., Granger, C.W.J., 1987. Co-integration and error correction: Representation, estimation, and testing. Econometrica 55, 251–276. Gallegati, M., 2005. A Wavelet Analysis of MENA Stock Markets, Mimeo, Universita Politecnica Delle Marche, Ancona, Italy Gencay, R., 1999. Linear, non-linear and essential foreign exchange rate prediction with some simple technical trading rules. Journal of International Economics Vol. 47, pp. 91–107 Gencay, R., Selcuk, F., and Whitcher, B., 2002. An Introduction to Wavelets and Other Filtering Methods in Finance and Economics, Academic Press Gilbert, E.W., Krishnaswamy, C.R., Pashley, M.M., 2000. Neural Network Applications in Finance: A Practical Introduction. Johansen, Søren, 1991.Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models. Econometrica 59, 1551–1580. Johansen, Søren, 1995. Likelihood-based Inference in Cointegrated Vector Autoregressive Models. Oxford University Press. Hertz, J. Anders Krogh, and Richard G. Palmer, 1991. Introduction to the Theory of Neural Computing. Addison-Wesley. Hu, M. Y., G. Q. Zhang, C. Z. Jiang, & B. E. Patuwo, 1999. A cross-validation analysis of neural network out-of-sample performance in exchange rate forecasting, Decision Sciences, 30/1, Jamal, A.M.M. and C. Sundar, 1997. Modeling Exchange Rate Changes with Neural Networks. Journal of Applied Business Research 14 /1, 1-5. Muzy, J.-F., D. Sornette, J. Delour and A. Arneodo, 2001, Multifractal returns and Hierarchical Portfolio Theory, Quantitative Finance Vol. 1/1, pp. 131-148. Müller, U. A, M.M. Dacorogna, R.B. Olsen, O.V. Pictet, and J.E. von Weizsacker, 1995. Volatilities of Different Time Resolutions - Analyzing the Dynamics of Market Components, The First International Conference on High Frequency Data in Finance, Zurich, March. Ozun, A., 2006. Theoretical Importance of Artificial Neural Networks For The Efficiency of Financial Markets, Proceeding of 5th International Finance Sypmosium: Integration in the Financial Markets, Vienna University and Marmara University, 613-622, 25-26 May, 2006, Istanbul,. Percival, D.B., and Walden, A.T., 2000. Wavelet Methods for Time Series Analysis, Cambridge University Press Procházka, A., V. Sýs, 1994, Time Series Prediction Using Genetically Trained Wavelet Networks. In Neural Networks for Signal Processing 3 - Proceedings of the 1994 IEEE Workshop, Ermioni, Greece, 1994. IEEE SP Society. Ramer, A. and V. Kreinovich, 1994. Maximum entropy approach to fuzzy control. Information Sciences, 81/3, 235-260. Rumelhart, D. and McClelland, J., 1986. Parallel Distributed Processing, MIT Press, Cambridge, MA. Sin., T. and Han, I., 2000, A Hybrid System Using Multiple Cyclic Decomposition Methods and Neural Network Techniques for Point Forecast, Decision Making, Proceedings of the 33rd Hawaii International Conference on System Sciences Szu, H., Telfer, B., Kadambe, S., 1992. Neural network adaptive wavelets for signal representation and classification, Opt. Engineering, 31, 1907-1916. Tkacz, G., 2001, Estimating the Fractional Order of Integration of Interest Rates Using a Wavelet OLS Estimator”, Studies in Nonlinear Dynamics&Econometrics, Vol. 5, Issue 1, ss. 19-32 Tang, Z., C. Almeida, and P.A. Fishwick, 1991, Time Series Forecasting Using Neural Networks vs Box-Jenkins Methodology, Simulation, 57/5, 303-310 Yao, J.T., H.-L. Poh, T. Jasic, 1996. Foreign exchange rates forecasting with neural networks, Proceedings of the International Conference on Neural Information Processing, Hong Kong, September 1996, 754-759. Zhang M., 1992. Study of the Inference Engine and User Interface of the Tool for Building Expert Systems with Object-Oriented Technology, Masters Thesis, Artificial Intelligence Research Centre, the Agricultural University of Hebei, Baoding, China. URI: http://mpra.ub.uni-muenchen.de/id/eprint/2482
{"url":"http://mpra.ub.uni-muenchen.de/2482/","timestamp":"2014-04-18T08:21:28Z","content_type":null,"content_length":"29495","record_id":"<urn:uuid:cb9d1a5d-e232-40db-9544-9ce0c2ada68e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
Lauderhill, FL Calculus Tutor Find a Lauderhill, FL Calculus Tutor ...I have many hours of experience tutoring Discrete Mathematics subjects on a one-on-one basis, including Logic, Set Theory, Probability, Combinatorics, Order Theory, Relations, Functions, and more. I try to make learning Discrete Mathematics fun by showing students how logic can be learned throug... 27 Subjects: including calculus, English, reading, algebra 1 Computer Administrator with over 15 years IT experience. University Professor teaching computer-related courses including Microsoft Office, database systems, computer security, Unix, computer networking, programming and spreadsheet modeling. Also hold a Bachelor's degree in Electrical Engineering ... 16 Subjects: including calculus, geometry, algebra 1, algebra 2 ...I have been using Matlab for 10+ years. First as a tool to model physical processes for my Ph.D. thesis, and later in my academic and commercial work. I have worked not only with Matlab but also with Matlab clones such as Octave. 27 Subjects: including calculus, chemistry, physics, geometry ...The student learns to develop skills in problem solving dealing with rates of change and develops skills to use differential calculus with integral calculus to attack differential equations of various types. Also, differentials become the basis for some fundamental equations used in every day mathematics. Chemistry involves more than just boring theories and difficult lab experiments. 23 Subjects: including calculus, English, chemistry, physics ...I have tutored and taught successfully Algebra 2. I have very unique ways to teach Algebra in a story-like manner. I have tutored this subject for about 4-5 years with positive results. 18 Subjects: including calculus, chemistry, biochemistry, cooking Related Lauderhill, FL Tutors Lauderhill, FL Accounting Tutors Lauderhill, FL ACT Tutors Lauderhill, FL Algebra Tutors Lauderhill, FL Algebra 2 Tutors Lauderhill, FL Calculus Tutors Lauderhill, FL Geometry Tutors Lauderhill, FL Math Tutors Lauderhill, FL Prealgebra Tutors Lauderhill, FL Precalculus Tutors Lauderhill, FL SAT Tutors Lauderhill, FL SAT Math Tutors Lauderhill, FL Science Tutors Lauderhill, FL Statistics Tutors Lauderhill, FL Trigonometry Tutors Nearby Cities With calculus Tutor Cooper City, FL calculus Tutors Coral Springs, FL calculus Tutors Dania calculus Tutors Dania Beach, FL calculus Tutors Davie, FL calculus Tutors Fort Lauderdale calculus Tutors Lauderdale Lakes, FL calculus Tutors Margate, FL calculus Tutors North Lauderdale, FL calculus Tutors Oakland Park, FL calculus Tutors Plantation, FL calculus Tutors Pompano Beach calculus Tutors Sunrise, FL calculus Tutors Tamarac, FL calculus Tutors Wilton Manors, FL calculus Tutors
{"url":"http://www.purplemath.com/Lauderhill_FL_calculus_tutors.php","timestamp":"2014-04-20T19:59:10Z","content_type":null,"content_length":"24287","record_id":"<urn:uuid:7f912f80-ecd5-4002-a468-66bd22949dce>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Boylston Algebra Tutor Find a Boylston Algebra Tutor ...This is the part of history that has had more impact on society than most others. Physics classes were part of my curriculum for my science degree. As a high school science teacher, 50% of the year is teaching physics. 22 Subjects: including algebra 1, physics, writing, study skills I have tremendous experience teaching Business Studies courses including Business Communication, Organizational Behavior, Management, Marketing and Consumer Behavior. I have trained my students to give effective presentations, public speaking and group discussions. I have also experience in Business Report Writing and Resume development. 7 Subjects: including algebra 1, reading, English, writing I can help you excel in your math or physical science course. I have experience teaching, lecturing, and tutoring undergraduate level math and physics courses for both scientists and non-scientists, and am enthusiastic about tutoring at the high school level. I am currently a research associate in... 16 Subjects: including algebra 1, algebra 2, calculus, physics ...My goal within the education field is to challenge students to think critically. For students to develop into independent critical thinkers, they must be challenged yet nurtured. Focusing on the subject matter, I would strive to have students understand the fundamentals of the subject, with the inclusion of real world (and personal) experience and discussion to maximize their focus. 19 Subjects: including algebra 1, algebra 2, calculus, geometry ...I have also worked with students with similar issues in my private tutoring. I have successfully helped many students taking the ISEE to get scores required for their choice of private schools in my 15 years as an independent tutor. I am qualified in SSAT, a similar test. 34 Subjects: including algebra 1, algebra 2, reading, English
{"url":"http://www.purplemath.com/Boylston_Algebra_tutors.php","timestamp":"2014-04-20T08:55:09Z","content_type":null,"content_length":"23910","record_id":"<urn:uuid:503fd0f0-fd48-48c0-8bd0-a3bf548228f4>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Created: Tue 06 Oct 2009 Last modified: Assigned: Wed 07 Oct 2009 Due: Wed 14 Oct 2009 1. Please review the course syllabus and make sure that you understand the course policies for grading, late homework, and academic honesty. 2. On the first page of your solution write-up, you must make explicit which problems are to be graded for "regular credit", which problems are to be graded for "extra credit", and which problems you did not attempt. Please use a table something like the following │Problem │01│02│03│04│05│06│07│08│09│... │ │ Credit │RC│RC│RC│EC│RC│RC│NA│RC│RC│... │ where "RC" is "regular credit", "EC" is "extra credit", and "NA" is "not applicable" (not attempted). Failure to do so will result in an arbitrary set of problems being graded for regular credit, no problems being graded for extra credit, and a five percent penalty assessment. 3. You must also write down with whom you worked on the assignment. If this changes from problem to problem, then you should write down this information separately with each problem. Required: Do all four of the five problems below. Problem 1 is an exercise in the book but counts as a problem here. Points: 20 points per problem. Unless otherwise indicated, exercises and problems are from Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein. The edition (2^nd or 3^rd) will be indicated if the numbering differs. 1. Exercise 16.2-5 2. Nubert is a high-level manager in a software firm and is managing n software projects. He is asked to assign m of the programmers in the firm among these n projects. Assume that all of the programmers are equally (in)competent. After some careful thought, Nubert has figured out how much benefit i programmers will bring to project j. View this benefit as a number. Formally put, for each project j, he has computed an array A[j][0..m] where A[j][i] is the benefit obtained by assigning i programmers to project j. Assume that A[j][i] is nondecreasing with increasing i. Further make the economically-seemingly-sound assumption that the marginal benefit obtained by assigning an ith programmer to a project is nonincreasing as i increases. Thus, for all j and i ≥ 1, A[j][i+1] - A[j] [i] ≤ A[j][i] - A[j][i-1]. Help Nubert design a greedy algorithm to determine how many programmers to assign to each project such that the total benefit obtained over all projects is maximized. Justify the correctness of the algorithm and analyze its running time. 3. Consider a set of m rectangular paving stones where the ith stone is l[i] units long and w[i] units wide, l[i] ≥ w[i] ≥ 1. Assume that paving stone i can be stacked on top of paving stone j iff l [i] ≤ l[j] and w[i] ≤ w[j]. Give an efficient dynamic programming algorithm for computing the maximum number of paving stones that can be stacked together. Briefly justify its correctness and analyze the asymptotic running time of your algorithm. 4. Consider a version of the activity selection problem, in which each activity has a weight, in addition to the start and finish times. (For example, the weight may signify the importance of the activity.) The goal is to select a maximum-weight set of mutually compatible activities, where the weight of a set of activities is the sum of the weights of the activities in the set. □ (a) Give a counterexample to show that the greedy choice made for the activity selection problem will not work for the weighted activity selection problem. □ (b) Use dynamic programming to solve the weighted activity selection problem. Briefly justify its correctness and analyze the running time of your algorithm. 5. Prof. Curly is planning a cross-country road-trip from Boston to Seattle on Interstate 90, and he needs to rent a car. His first inclination was to call up the various car rental agencies to find the best price for renting a vehicle from Boston to Seattle, but he has learned, much to his dismay, that this may not be an optimal strategy. Due to the plethora of car rental agencies and the various price wars among them, it might actually be cheaper to rent one car from Boston to Cleveland with Hertz, followed by a second car from Cleveland to Chicago with Avis}\, and so on, than to rent any single car from Boston to Seattle. Prof. Curly is not opposed to stopping in a major city along Interstate 90 to change rental cars; however, he does not wish to backtrack, due to time constraints. (In other words, a trip from Boston to Chicago, Chicago to Cleveland, and Cleveland to Seattle is out of the question.) Prof. Curly has selected n major cities along Interstate 90 and ordered them from East to West, where City 1 is Boston and City n is Seattle. He has constructed a table T[i,j] which for all i < j contains the cost of the cheapest single rental car from City i to City j. Prof. Curly wants to travel as cheaply as possible. Devise an algorithm which solves this problem, argue that your algorithm is correct, and analyze its running time and space requirements. Your algorithm or algorithms should output both the total cost of the trip and the various cities at which rental cars must be dropped off and/or picked up. Switch to: Harriet Fell College of Computer Science, Northeastern University 360 Huntington Avenue #340 WVH, Boston, MA 02115 Email: fell@ccs.neu.edu Phone: (617) 373-2198 / Fax: (617) 373-5121 The URL for this document is: http://www.ccs.neu.edu/home/fell/CS5800/F09/Homeworks/hw.05.html
{"url":"http://www.ccs.neu.edu/home/fell/CS5800/F09/Homework/hw.05.html","timestamp":"2014-04-19T09:27:36Z","content_type":null,"content_length":"7858","record_id":"<urn:uuid:9c219446-17a8-4ac8-b9eb-85b54558a0b1>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
Astronomy and Mathematics Astrology (Jyothissaasthram) was popular in Kerala even in ancient times, and their deep knowledge in that branch of science is well-known. A number of great treatises (Granthhams) were written by several eminent scholars (most of them Namboothiri Braahmanans) of the area at different times. It is difficult to date some of the very ancient ones such as "Devakeralam", "Sukrakeralam" (also known as "Bhrigukeralam", "Kerala Rahasyam" or "Keraleeyam" and has 10 chapters), "Vararuchi Keralam" (or "Jaathaka Rahasyam" or "Kerala Nirnayam" - quite possibly authored by Vaakyam expert, Vararuchi), and "Keraleeya Soothram". It is said that a Kerala Braahmanan by name Achyuthan performed "Thapas" to Brihaspathi, who appeared and asked what favours he wanted. He asked for Brihaspathi's condensed version of the 2,000 "Jaathaka Skandhham" portion of the Jyothissaasthram written by Sree Naaraayanan and comprising of four lakh Granthhams. He then prayed and pleased both Sukran and Sree Parameswaran and obtained their 1,000 and 2,000 Granthhams respectively, and taught the Guru Matham, Sukra Matham and Saambasiva Matham to his disciples. The 7th century (AD) witnessed tremendous development in Jyothissaasthram in Kerala. Siksha, Vyaakaranam, Niruktham, Jyothisham, Kalpam and Chhandovichithi are the six "limbs" (Shadaangams) of Vedam. Jyothisham in those days was used for determining auspicious times for various Vaidika Karmams (religious rituals). Jyothissaasthram has three Skandhhams (branches) - Ganitham, Samhitha and Hora. In addition, it is seen to have six Angams (parts) - Jaathakam, Golam, Nimitham, Prasnam, Muhoortham and Ganitham. Of these, Golam and Ganitham are in Ganitha Skandhham, Jaathakam, Prasnam and Muhoortham in Hora Skandhham, and Nimitham in Samhitha Skandhham. Nimitham is partly covered in Hora also. Some consider Jyothissaasthram to consist of two parts - Pramaanam and Phhalam with Ganitha Skandhham discussing the Pramaanam part and the other two Skandhhams, the Phhalam part. The former includes Soorya and Chandra Grahanams (solar and lunar eclipses), the Mouddhyam of Grahams (stars and planets), Chandra Sringonnathi (lunar cycles), and the Gathi Bhedams (changes in motion) of planets, and the methods of their prediction, and also descriptions of Bhoogola Khagolams (earth, planets and stars). Whereas, Jaathakam, Prasnam, Bhootha Sakunaadi Lakshanams (omens, etc.), Muhoorthams (auspicious days/ times), etc. are included in the Phhalam part. Of these, Jaathakam and Prasnam are extremely important. Jaathakam (horoscope) involves predicting the good and the bad events during the entire life of a person based on the position of the planets and the stars at the precise time of his / her birth. Prasnam predicts the good and bad results for the subject again based on the planetary and star positions at the time of some special events/ tests proposed to be undertaken by the subject, usually the learned and the pious. CONTRIBUTIONS OF NAMBOOTHIRIS TO ASTRONOMY AND MATHEMATICS The contributions of the Namboothiris in Astrology, Astronomy and Mathematics have been immense. They had a capacity for unmistakable and sharp observations on the natural phenomena and accurate ability of deducting complicated theoretical formulae. The works of about 20 prominent ones among them during a long period of about a millenium between the seventh and the eighteenth century (AD) are enumerated here. 1. Bhaaskaraachaaryan - I (early 6th century AD) Formost among Ganithajnans (astrologer / mathematician) in the entire Bhaaratham (India), Bhaskaran-I, hailed from Kerala, according to experts. In 522 AD he wrote "Mahaa Bhaaskareeyam", also known as "Karma Nibandhham". A Vyaakhyaanam (explanations and discussions) on Aaryabhateeyam as well as a condensed version - "Laghu Bhaaskareeyam" - of Aaryabhateeyam, have also come down to us. (Bhaaskaraachaaryan-II who wrote "Leelaavathy" lived in the 11th century). 2. Haridathan (650 - 750 AD) Though the Aarybhata system had been followed in calculating the planetary positions, Namboothiri scholars recognised variations between the computed and observed values of longitudes of the planets. A new system called "Parahitham" was proposed by Haridathan through his famous works "Graha-Chakra-Nibandhhana" and "Mahaa-Maarga-Nibandhhana". In 683 AD, this system was accepted throughout Kerala on the occasion of the 12-yearly Mahaamaagha festival at Thirunavaya, and is recorded in many later works. Haridathan introduced many improvements over Aarybhata system, like using the more elegant Katapayaadi (Click here) system of notation in preference to the more complicated Aarybhataa's notation. Haridathan introduced the unique system of enunciating graded tables of the sines of arcs of anomaly (Manda-jya) and of conjugation (Seeghra-jya) at intervals of 3° 45' to facilitate the computation of the true positions of the planets. One of the corrections introduced by Haridathan to make the Aarybhata's results more accurate, is the "Sakaabda Samskaaram". 3. Aadi Sankaran (788 - 820 AD) Sree Sankaran was born in Kalady in Central Kerala (nearly 50 km north east of Kochi) on the banks of river Periyar as the son of Kaippilly Sivaguru Namboothiri and Arya Antharjanam (Melpazhur Mana). Scientific concepts naturally evolved from this highly logical and rational intellect. It is believed that Sree Sankaran was the first mathematician to moot the concept of Number Line. [Ref: "Sankara Bhaashyam" (4-4-25) of the "Brihadaaranyaka Upanishad"]. It was Sree Sankaran who first expounded the idea of assigning a set of natural numbers to a straight line. As the number of elements in a set of natural numbers is infinite, it requires a symbol of infinity to represent them. A straight line can be considered to be infinitely long. Sankaran adopted a straight line as a symbol of infinity. A straight line can be divided to infinite number of parts and each of these parts can be assigned the value of a particular number. This is called number line. Though his concept lacks the perfection of modern number line theory, Sree Sankaran exhibited his intellectual ingenuity in conceiving such a novel idea. Yet another example for Sree Sankaran's unbiased and pure scientific pursuit of knowledge could be seen in the second "Slokam" of "Soundarya Lahari" [a collection of 100 Slokams in praise of Goddess Durga written by Sree Sankaran]. In the Slokam "Thaneeyaamsam paamsum thava charana pankeruhabhavam", we can see a hint to the theory of inter-convertibility of mass and energy. Famous scientist Albert Einstein put forward this theory much later. Einstein said mass can be converted to energy and vice-versa according to the equation E = MC², where E = Energy released, M = Mass of the substance, and C = Velocity of light = 3 x 10¹º cm/sec. In another context, Sree Sankaran postulated that the diameter of Sun is 1 lakh "Yojanas". Later the modern scientific community calculated the diameter which agreed very closely with (just 3% error) the value provided by Sankaran. 4. Sankaranarayanan (9th century) This scholar from "Kollapuri" (Kollam) in Kerala has written a commentary (Vyaakhhyaanam) of the "Laghu Bhaaskareeyam" of Bhaaskaraachaaryan-I, titled "Sankaranaaraayaneeyam". The Granthham is dated 869 AD (ME 44). 5. Sreepathy (around 1039 AD) Sreepathy (Kaasyapa Gothram) has described methods for calculating the "Shadbalam" of the planets and stars. Prescribing of consequences should be based on these "Balams". His works include "Aarybhateeya Vyaakhhyaanams" such as "Ganitha Thilakam", "Jaathaka Karma Padhhathi" and "Jyothisha Rathna Maala". 6. Thalakkulathu Bhattathiri (1237 - 1295 AD) This Govindan Bhattathiri is believed to have been born in ME 412 in Thalakkulam of Aalathur Graamam, about three kilometer south of Tirur. The Illam does not exist anymore. His mother was apparently from Paazhoor. He is said to have left Keralam (to Paradesam, possibly Tamil Nadu) and studied the "Ulgranthhams" in Jyothisham under a scholar by name Kaanchanoor Aazhvaar, returned and prayed for a dozen years to Vadakkunnathan at Thrissur. Bhattathiri's major work is the renowned Jyothisha Granthham "Dasaadhhyaayi". It is a majestic "Vyaakhyaanam" of the first ten chapters of the famous 26-chapter "Brihajjaathakam" in the field of Jyothissaasthram, written by Varaahamihiran of Avanthi, a sixth century scholar. Bhattathiri felt that the "Aachaaryan" had not covered anything significantly more in the rest of the chapters and therefore, left them altogether. There are also other works like "Muhoortha Rathnam" to his credit. 7. Sooryadevan This Namboothiri (Somayaaji) scholar is better known as Sooryadeva Yajwaavu. "Jaathakaalankaaram" is Sooryadevan's Vyaakhyaanam for Sreepathy's (No. 5, above) "Jaathaka Karma Padhhathi". His other works include a "Laghu Vyaakhhyaanam" (simple explanation) of Aaryabhateeyam, called "Bhataprakaasam", as well as Vyaakhhyaanams for Varaahamihiran's "Brihadyaathra" and for Mujjaalakan's "Laghu Maanava Karanam". 8. Irinjaatappilly Madhavan Namboodiri (1340 - 1425) Madhavan of Sangamagraamam, as he is known, holds a position of eminence among the astute astronomers of medieval Kerala. He hailed from Sangama Graamam, the modern Irinjalakuda, near the railway station. Madhavan was the treacher of Parameswaran, the promulgator of Drigganitha school of Astronomy, and is frequently quoted in the medieval astronomical literature of Kerala as Golavith (adept in spherics). He is the author of several important treatises on Mathematics and Astronomy. The "Venvaaroham" explaining the method for computation of the moon and the moon-sentences, "Aganitham", an extensive treatise on the computation of planets, "Golavaadam", "Sphhuta-Chandraapthi", "Madhyama Nayana Prakaaram" are some of his important works. Besides these works, a number of stray verses of Madhavan are quoted by later astronomers like Neelakandha Somayaaji, Narayanan the commentator of Leelaavathy, Sankaran the commentator of Thanthrasangraham, etc. One of his significant contributions is his enunciatiation of formulae for accurate determination of the circumference of a circle and the value of p by the method of indeterminate series, a method which was rediscovered in Europe nearly three centuries later by James Gregory (1638 - 75 AD), Gottfried Wilhelm Leibniz (1646 - 1716 AD) and Newton (1642, "Principia Mathematicia"). His five Paraspara-Nyaaya contains the enunciation for the first time in the world, of the formula for the sine of sum of two angles. sine (A + B) = sine A cos B + cos A sine B This is known as "Jeeve Paraspara Nyaaya". The ideas of Calculus and Trigonometry were developed by him in the middle of the 14th century itself, as can be verified by his extensive mathematical and astronomical treatises and quotations by later authors. Madhavan deserves, in all respects, to be called the Father of Calculus and Spherical Trigonometry. For a detailed appreciation of his contribution, refer to the excellent paper of R G Gupta,"Second Order of Interpolation of Indian Mathematics", Ind, J.of Hist. of Sc. 4 (1969) 92-94. Again Madhavan provides the power series expansions for sin x and cos x for an arc x correct to 1/3600 of a degree. 9. Vatasseri Parameswaran Namboodiri (1360 - 1455) Vatasseri was a great scientist who contributed much to Astronomy and Mathematics. He was from Vatasseri Mana on the north bank of river Nila (Bhaarathappuzha) near its mouth in a village called Aalathiyur (Aswathha Graamam). This is near the present Tirur of Malappuram district. He was a Rigvedi (Aaswalaayanan) of Bhrigu Gothram. "Drigganitham" was his greatest contribution. The seventh century "Parahitha Ganitham" for calculations and projections in Astronomy continued its popularity for a few centuries, with some later modifications made by Mujjaalakan, Sreepathy and others, for correcting the differences found with actual occurences. But it was Parameswaran who, as a result of over fifty years of systematic observations and research on movements of celestial bodies, estimated the error factor and established a new method called Drig Sidhham as explained in his popular Drigganitham (ME 606, 1430-31 AD). He suggested the use of "Parahitham" for "Paralokahitham" such as Thithhi, Nakshthram, Muhoortham, etc., and his own "Drigganitham" for "Ihalokahitham" like "Jaathakam", "Graha Moudhhyam", "Grahanam", etc. Unfortunately, Drigganitham Granthham has not been traced so far. Yet another of his contribution was a correction to the angle of precision of equinox mentioned by his disciple, Kelalloor Somayaaji (vide 15, below) in his "Jyothirmeemaamsa" (ch. 17). The 13 ½° suggested by Mujjaalakan was rectified by him to 15°. There are numerous works to his credit, apart from Drigganitham. The 3-volume, 302 verse "Gola Deepika" (1443 AD) explaining about the stars and earth in very simple terms, "Jaathaka Padhhathy" in 41 verses, "Soorya Sidhhantha Vivaranam", "Grahana Mandanam", "Grahanaashtakam", "Vyatheepaathaashtaka Vrththi" in 500 verses or Slokams. (The last three are believed by experts to be his works), "Aachaarya Samgraham", "Grahana Nyaaya Deepika", "Chandra-Chhaayaa-Ganitham", "Vaakya Karmam" and "Vaakya Deepika" are his well-known works. He has written superb commentaries such as "Sidhhantha Deepika" on Govindaswamy's Mahaa Bhaaskareeyam; "Karma Deepika" or "Bhata Deepika" on Aarya Bhateeyam; "Muhoortha Rathna Vyaakhyaa" on Govindaswamy's Muhoortha Rathnam; Leelavathee Vyaakhyaa on the famous mathematical treatise, Leelavathy of Bhaaskaraachaarya-II; "Laghu Bhaaskareeya Vyaakhyaa" on Laghu Bhaaskareeyam of Bhaaskaraachaarya-I; "Jaathaka Karma Padhhathee Vyaakhyaa" on Sreepathy's 8-chapter work on Jyothisham; the one on "Laghu Maanasam" of Mujjaalakan; "Jaathakaadesa Vyaakhyaa"; and "Prasna-Nashta Panchaasikaavrthy" also called "Paarameswari" based on the work of Prathhuyasass, son of Varaahamihiran. Undoubtedly, there had not been many scholars of his calibre in the annals of history in the realm of Astronomy. 10. Damodaran Namboodiri Damodaran Namboodiri is known for his work "Muhoorthaabharanam". It is believed that he had an ancestor by name Yajnan whose brother's son, Kesavan, was a great scholar, and that Damodaran was Kesavan's younger brother. His family is said to have belonged to a village near Thriprangod, but it is clear that it was in Taliparamba Graamam. Mazhamangalam (Mahishamangalam, vide 17, below) has recognised "Muhoorthaabharanam" as a reference work similar to "Muhoortha Rathnam" and other earlier works. 11. Narayanan Namboodiri He has authored "Muhoortha Deepikam". He could be the same Narayanan, one of Vatasseri Parameswaran Namboodiri's teachers (Guru), as mentioned by Kelallur Chomaathiri (Neelakandha Somayaaji, 15, below). "Muhoortha Deepikam" is also recognised as an authoritative work, by Mazhamangalam (17, below). 12. Puthumana Somayaaji (Chomaathiri) He belonged to Puthumana Illam (Sanskritised as Noothana Graamam) of Chovvaram (Sukapuram) Graamam. He is believed to have been a contemporary of Vatasseri Namboodiri, during the 15th century AD. His famous works are "Karana Padhhathi" which is a comprehensive treatise on Astronomy in ten chapters completed in the year ME 606 (1430-31 AD), the same year as Vatasseri Namboodiri's "Drigganitham"; "Nyaaya Rathnam", an 8-chapter Ganitha Granthham; "Jaathakaadesa Maargam"; "Smaartha-Praayaschitham"; "Venvaarohaashtakam"; "Panchabodham"; "Grahanaashtakam"; and "Grahana Ganitham". To his credit is also an important mathematical equation to calculate the tangent (tan) value of an angle, as: or in the inverse form, 13. Chennas Narayanan Namboodiripad (mid 15th century) He was considered to be an authority in the fields of Vaasthusaastram (Indian Architecture), Mathematics and Tanthram. Born in 1428, Chennas Narayanan Namboodiripad authored a book titled "Thanthra Samuchayam" which is still considered as the authentic reference manual in the field of temple architecture and rituals. In this Granthham , while elaborating on various points of Indian architectural practices, he has dealt with many mathematical principles also. The following are noteworthy. a) A method of arriving at a circle starting with a square, and successively making it a regular octagon, a regular 16-sided, a 32-sided, 64-sided polygons, etc. In this method some geometrical steps have been suggested. b) Co-ordinate system of fixing points in a plane. c) Converting a square to a regular hexagon having approximately equal area. d) Finding the width of a regular octagon, given the perimeter. 14. Ravi Namboodiri He is one of the teachers of Kelallur Chomaathiri, and was a scholar in both Astronomy and Vedaantham. His treatise "Aachaara Deepika" is on Jyothisham. 15. Kelallur Neelakandha Somayaaji (1465 - 1545) He is one of the foremost astronomers of Kerala and considered an equal to Vatasseri Parameswaran Namboodiri, and known popularly as Kelallur Chomaathiri. He was born to Jathavedan and Arya in Kelallur (or Kerala Nallur, Kerala-Sad-Graamam in Sanskrit) Mana of Thrikkandiyur (Sree Kundapuram in Sanskrit), near Tirur, and belonged to Gaargya Gothram, Aaswalaayana Soothram of Rigvedam. Kelallur Mana later became extinct and their properties merged with Edamana Mana. They were staunch devotees at Thriprangot Siva temple. He is said to be a disciple of one Ravi who taught him Vedaantham and the basics of Astronomy and of Vatasseri Damodaran Namboodiri (son of the famous Parameswaran Namboodiri) who trained him in Astronomy and Mathematics. According to Ulloor, he lived during 1465 and 1545 (roughly), though according to another version, he was born on June 17, 1444 on a Wednesday. His most important work is "Thanthra Samgraham" (a treatise on Mathematics and Astronomy) in eight chapters with 432 verses, and apparently written in an unbelievable six days from Meenam 26 of 676 ME to Metam 1 the same year! The lucid manner in which difficult concepts are presented, the wealth of quotations, and the results of his personal investigations and comparative studies make this work a real masterpiece. Two commentaries on this work, "Yukthi Bhaasha" (in Malayalam) by Paarangot Jyeshthhadevan Namboodiri (No. 16 below) and "Yukthi Deepika" by Sankara Varier, themselves indicate the importance of the original work. Another of his important works is a "Bhaashyam" (commentary) on "Aaryabhateeyam". In his book "Jyorthir Meemaamsa", he demonstrates his intellectual and scientific thinking. Some of his other works are "Chandra Chhaayaa Ganitham" (calculations relating to moon's shadow), "Sidhhantha Darpanam" (mirror on the laws of Astronomy) and its Vyaakhyaa, "Golasaaram" (quintessence of spherical Astronomy), "Grahana Nirnayam", "Grahanaashtakam", "Graha Pareekshaa Kramam", and "Sundara Raaja Prasnotharam". He postulated that the ratio of circumference to diameter of a circle could never be a rational number. His commentary on Aaryabhateeyam shows that his scholastic abilities extend beyond Jyothisham and Vedaantham, to the realms of Meemaamsa, Vyaakaranam and Nyaayam. 16. Paarangottu Jyeshthhadevan Namboodiri (1500 - 1610) He was born in Paaragottu Mana situated near Thrikkandiyur and Aalathur on the banks of river Nila. Vatasseri Damodaran Namboodiri was his teacher. He wrote a Malayalam commentary, "Yukthi Bhaasha" for "Thanthra Sangraham" of Kelallur Neelakandha Somayaaji. It forms an elaborate and systematic exposition of calculation methods in Mathematics in its first part and Astronomy in the second part. The treatment is in a rational and logical manner, and may turn out to be an asset to our scientific community, if properly translated and studied. He is also the author of "Drik Karanam", a comprehensive treatise in Malayalam on Astronomy, composed in 1603 AD. 17. Mahishamangalam Narayanan Namboodiri (1540 - 1610) He was a member of Mahishamangalam (Mazhamangalam) Mana of Peruvanam in Thrissur district. His father Sankaran Namboothiri has written several Granthhams on Astronomy in Malayalam. Renouned scholar Sankara Varier has written a commentary "Kriyaakramakari" in Malayalam for the popular Mathematical manual "Leelavathy" (of Bhaskaraachaarya) but before commencing the 200th Slokam, he expired. It was Mahishamangalam Narayanan Namboodiri who, at the age of 18, took up the challenge of completing it. He was popularly known as "Ganitha Vith" [Maths wizard]. After successfully completing "Kriyaakramakari", Narayanan Namboodiri wrote his own commentary "Karmadeepika" for "Leelavathy". "Upa Raaga Kriyaa Kramam" was his original work in the related topic. He has authored many Granthhams on subjects other than Astronomy, including Smaartha Praayaschitha Vimarsanam, Vyavahaara Mala [ethical code of conduct], Mahishamangalam Bhaanam, Uthara Raamaayana Champu, Raasa Kreedaa Kaavyam, Raaja Ratnaavaleeyam [in praise of Kerala Varma, Prince of Kochi), Daarikavadham, and Paarvatheesthuthi. 18. Mathur Nambudiripad The Granthham, "Muhoortha Padavi" (the second) is credited to Mathur Nambudiripad, whose name is not known. He has condensed the old "Muhoortha Padavi" into an amazingly short version with just 35 Slokams (verses). Since Mazhamangalam of mid-sixteenth century AD, in his "Baala Sankaram" has referred to Muhoortha Padavi, it is possible that Mathur Nambudiripad lived during the second half of the 15th century AD. Apart from Mazhamangalam's commentary on this Granthham, there are: a short one in Sanskrit, "Muhoortha Saranee Deepam" (author unknown); a detailed one in Sanskrit, "Varadeepika" by Purayannur Parameswaran Nambudiripad; and yet another one in Malayalam, "Muhoortha Bhaasha" by Aazhvaancheri Thampraakkal. 19. Narayanan Namboodiri One Narayanan has written a commentary on Bhaaskaraachaaryan's Leelaavathy, which has been variously referred to as "Karmadeepika", "Karmadeepakam" and "Kriyaakramakari". The work is well-focussed and neither too elaborate nor too short. Another of his works is " Karmasaaram" which discusses "Grahasphhutaanayanam" and other aspects of the Drik tradition. It is in four chapters and may have been written during the second half of the 16th century AD. 20. Chithrabhanu Namboodiri (16th century) Born in Chovvara (Sukapuram) Graamam, Chithrabhanu Namboodiri was a mathematician and has written a Granthham titled "Eka Vimsathi Prasnothari". It is said that Sankara Varier, another scholar (mentioned earlier) who wrote the commentary "Kriyaakramakari" was Chithrabhanu Namboodiri's disciple. Varier has, at several occasions, quoted his master. Chithrabhanu Nambudiri's "Eka Vimsathi Prasnothari" gives a method of solving the binomials (A + B), (A - B), (A² + B²), (A³ + B³), (A³ - B³), AB, etc. Given any two of these, the book gives twentyone different ways to solve for A and B. As he is believed to be the master of Sankara Varier, his period could be 16th century. The achievements of such and other Kerala mathematicians were, at first, brought to the notice of scholars, both Indian and western, by Charles M Whilsh who presented a paper on the subject before the Royal Asiatic Society of Great Britain and Ireland, 3 (1835) (509 - 523).
{"url":"http://www.namboothiri.com/articles/contributions.htm","timestamp":"2014-04-16T07:29:56Z","content_type":null,"content_length":"38810","record_id":"<urn:uuid:3042029a-7217-4065-b02a-6e80453ede4b>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Binary logistic regression &#38; empty cells I wonder whether anybody can help me. I am trying to run a small sample (n = 45) binary logistic regression, with a mixture of continuous and categorical predictors. One categorical predictor (2x2) has an empty cell, and leads to an incredibly large standard error. I would like to follow Firth (1993) and add .5 to the cells, but there seems to be no way to do it in SPSS (apart from running a multinomial regression) - any suggestions? Many thanks,
{"url":"https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014922964","timestamp":"2014-04-19T04:24:43Z","content_type":null,"content_length":"45287","record_id":"<urn:uuid:093b2a01-8203-4c0e-986a-b50a9b21cded>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
1. Write A Program That Prompts The User To Enter ... | Chegg.com 1. Write a program that prompts the user to enter three positive integers and finds their greatest common divisor. (35 pts) Solution: Suppose you enter two integers 8, 4 and 2, their greatest common divisor is 2. Suppose you enter two integers 16, 24, and 32, their greatest common divisor is 8. So, how do you find the greatest common divisor? Let the two input integers be n1, n2 and n3. You know number 1 is a common divisor, but it may not be the greatest commons divisor. So you can check whether k (for k = 2, 3, 4, and so on) is a common divisor for n1, n2 and n3, until k is greater than n1 or n2 or n3. 2. Write a program that deals with the grades of 10 courses for 15 students. (65 pts) a. Declare a two?dimensional array; b. Initialize the array using randomly generated numbers (double data type) between 0 and 100; c. Print out the original array to the console; d. Calculate the avg grade by row for each student, and find the max and min grade for this students; e. Calculate the avg grade by column for each course, and find the max and min grade for this course; f. Define a void swap() which takes two double variables and exchanges the values of those two variables in the function body; (Hint: pass?by?reference or pass?by?value?) g. In main(), invoke swap() to exchange the real values of max and min for each student; h. Open a new file named “newArray.txt”, and write the new array to the file and format Sample output for the console: The avg grade for course 1 is 62.6. The max and min is 78.1 and 56.7, respectively. The avg grade for course 2 is 75.3. The max and min is 92.1 and 63.9, respectively. The avg grade for course 10 is 80.3. The max and min is 82.1 and 62.8, respectively. The avg grade for student 1 is 95.7. The max and min is 100 and 82.6, respectively. The avg grade for student 2 is 65.3. The max and min is 82.1 and 53.9, respectively. The avg grade for student 15 is 86.8. The max and min is 93 and 66.2, respectively. Sample output for the file “newArray.txt”: 82.6 98.5 89 100 88 89 85.9 100 100 93.2 53.9 77 82.1 75.4 81.2 80 75 64.8 68 81 Computer Science
{"url":"http://www.chegg.com/homework-help/questions-and-answers/1-write-program-prompts-user-enter-three-positive-integers-finds-greatest-common-divisor-3-q3105744","timestamp":"2014-04-23T21:40:55Z","content_type":null,"content_length":"27202","record_id":"<urn:uuid:47cf0720-63b1-4e00-acc0-c8b696d190e5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00185-ip-10-147-4-33.ec2.internal.warc.gz"}