content
stringlengths
86
994k
meta
stringlengths
288
619
Maths Mate Term 3 Sheet 7 Problem 22 This week I did sheet 7 problem 22. In problem 22 I had to find out someones age, here is what I read. Jim and his younger sister Rachael were born on the same day of the year, but 5 years apart. There was a total of 25 candles on their cakes last birthday. How old is Rachael? As soon as I had read this I started working on the problem, I knew from reading this that the total of their ages was 25 and Jim is 5 years older than Rachael, I worked onwards from that. I took the 25 and seeing that Jim was 5 years older, I gave 5 years to Jim. Jim 5 Rachael 0. Then I split the 20 years that was left between them. Jim 15 Rachael 10. After this I saw that Jim was 15 and Rachael was 10 so I put the Answer 10 into the answer box. 0 Responses to “Maths Mate Term 3 Sheet 7 Problem 22” • No Comments
{"url":"http://matthew2011.global2.vic.edu.au/maths-mate-problem-solving/maths-mate-term-3-sheet-7-problem-22/","timestamp":"2014-04-16T16:00:00Z","content_type":null,"content_length":"53194","record_id":"<urn:uuid:b4f672a4-2618-4423-8454-a9bb89af1184>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Regression: in-place operations (possibly intentional) [Numpy-discussion] Regression: in-place operations (possibly intentional) Chris Barker chris.barker@noaa.... Fri Sep 21 16:04:33 CDT 2012 On Fri, Sep 21, 2012 at 10:03 AM, Nathaniel Smith <njs@pobox.com> wrote: > You're right of course. What I meant is that > a += b > should produce the same result as > a[...] = a + b > If we change the casting rule for the first one but not the second, though, > then these will produce different results if a is integer and b is float: I certainly agree that we would want that, however, numpy still needs to deal tih pyton symantics, which means that wile (at the numpy level) we can control what "a[...] =" means, and we can control what "a + b" produces, we can't change what "a + b" means depending on the context of the left hand side. that means we need to do the casting at the assignment stage, which I gues is your point -- so: a_int += a_float should do the addition with the "regular" casting rules, then cast to an int after doing that. not sure the implimentation details. Oh, and: a += b should be the same as a[..] = a + b should be the same as np.add(a, b, out=a) not sure what the story is with that at this point. Christopher Barker, Ph.D. Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-September/063971.html","timestamp":"2014-04-16T20:11:47Z","content_type":null,"content_length":"4442","record_id":"<urn:uuid:f409f528-1b69-4b76-88bb-4b9f1fdfbc51>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
A Catechism of the Steam Engine Summary given, then to find the pitch divide the strength by the breadth in inches, and extract the square root of the quotient, which is the proper pitch in inches. The length of the teeth is usually about 5/8ths of the pitch. Pinions to work satisfactorily should not have less than 30 or 40 teeth, and where the speed exceeds 220 feet in the minute, the teeth of the larger wheel should be of wood, made a little thicker, to keep the strength unimpaired. 356. Q.—­What was Mr. Watt’s rule for the pitch of wheels? A.—­Multiply five times the diameter of the larger wheel by the diameter of the smaller, and extract the fourth root of the product, which is the pitch. 357. Q.—­Cannot you give some rules of strength which will be applicable whatever pressure may be employed? A.—­In the rules already given, the effective pressure may be reckoned at from 18 to 20 lbs. upon every square inch of the piston, as is usual in land engines; and if the pressure upon every square inch of the piston be made twice greater, the dimensions must just be those proper for an engine of twice the area of piston. It will not be difficult, however, to introduce the pressure into the rules as an element of the computation, whereby the result will be applicable both to high and low pressure engines. 358. Q.—­Will you apply this mode of computation to a marine engine, and first find the diameter of the piston rod? A.—­The diameter of the piston rod may be found by multiplying the diameter of the cylinder in inches, by the square root of the pressure on the piston in lbs. per square inch, and dividing by 50, which makes the strain 1/7th of the elastic force. 359. Q.—­What will be the rule for the connecting rod, supposing it to be of malleable iron? A.—­The diameter of the connecting rod at the ends, may be found by multiplying 0.019 times the square root of the pressure on the piston in lbs. per square inch by the diameter of the cylinder in inches; and the diameter of the connecting rod in the middle may be found by the following rule:—­to 0.0035 times the length of the connecting rod in inches, add 1, and multiply the sum by 0.019 times the square root of the pressure on the piston in lbs. per square inch, multiplied by the diameter of the cylinder in inches. The strain is equal to 1/6th of the elastic force. 360. Q.—­How will you find the diameter of the cylinder side rods of a marine engine? A.—­The diameter of the cylinder side rods at the ends may be found by multiplying 0.0129 times the square root of the pressure on the piston in lbs. per square inch by the diameter of the cylinder; and the diameter of the cylinder side rods at the middle is found by the following rule:—­to 0.0035 times the length of the rod in inches, add 1, and multiply the sum by 0.0129 times the square root of the pressure on the piston in lbs. per square inch, multiplied by the diameter of the cylinder in inches; the product is the diameter of each side rod at the centre in inches. The strain upon the side rods is by these rules equal to 1/6th of the elastic force.
{"url":"http://www.bookrags.com/ebooks/10998/110.html","timestamp":"2014-04-20T06:43:58Z","content_type":null,"content_length":"34476","record_id":"<urn:uuid:be740c62-6338-4afe-82f7-cb70a7479ddc>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Artin's_conjecture_on_primitive_roots In mathematics, the Artin conjecture is a conjecture on the set of primes p modulo which a given integer a > 1 is a primitive root. The conjecture was made by Emil Artin to Helmut Hasse on September 27, 1927, according to the latter's diary. The precise statement is as follows. Let a be an integer which is not a perfect square and not -1. Denote by S(a) the set of prime numbers p such that a is a primitive root modulo p. Then 1. S(a) has a positive Schnirelmann density inside the set of primes. In particular, S(a) is infinite. 2. under the condition that a be squarefree, this density is independent of a and equals the Artin constant which can be expressed as an infinite product $C_\left\{Artin\right\}=prod_\left\{p mathrm\left\{prime\right\}, p>0\right\} left\left(1-frac\left\{1\right\}\left\{p\left(p-1\right)\right\}right\right) = 0.3739558136ldots$ Similar product formulas exist for the density when contains a square factor. For example, take a = 2. The conjecture claims that the set of primes p for which 2 is a primitive root has the above density C. The set of such primes is S(2)={3, 5, 11, 13, 19, 29, 37, 53, 59, 61, 67, 83, 101, 107, 131, 139, 149, 163, 173, 179, 181, 197, 211, 227, 269, 293, 317, 347, 349, 373, 379, 389, 419, 421, 443, 461, 467, 491, ...} It has 38 elements smaller than 500 and there are 95 primes smaller than 500. The ratio (which conjecturally tends to ) is 38/95=0.41051... To prove the conjecture, it is sufficient to do so for prime numbers a. In 1967, Hooley published a conditional proof for the conjecture, assuming certain cases of the Generalized Riemann hypothesis. In 1984, R. Gupta and M. Ram Murty showed unconditionally that Artin's conjecture is true for infinitely many a using sieve methods. Roger Heath-Brown improved on their result and showed unconditionally that there are at most two exceptional prime numbers a for which Artin's conjecture fails. This result is not constructive, as far as the exceptions go. For example, it follows from the theorem of Heath-Brown that one out of 3, 5, and 7 is a primitive root modulo p for infinitely many p. But the proof does not provide us with a way of computing which one. In fact, there is not a single value of a for which the Artin conjecture is known to hold. See also
{"url":"http://www.reference.com/browse/wiki/Artin's_conjecture_on_primitive_roots","timestamp":"2014-04-23T21:07:52Z","content_type":null,"content_length":"75686","record_id":"<urn:uuid:643ba3a7-f761-4d39-9398-d60a52d227de>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
Can p?getrf and p?getri work on submatrix? Dear all, I've tried to call p?getrf and p?getri to perform the inverse on a sub matrix of a general distributed matrix but failed. According to the documentations, my understanding is that you can do the inverse directly on the sub matrix by manipulating (m,n,... ia, ja...) easily. But when I set ia (or ja) not equal to 1, p?getrf and p?getri keeps telling me that the parameter 4 (or 5) is illegal. However, the subroutine works well at ia=ja=1, which means I can only perform the operation directly on the upper-left corner. So I looked into the source code of pdgetrf and found this, 00178 IF( INFO.EQ.0 ) THEN 00179 IROFF = MOD( IA-1, DESCA( MB_ ) ) 00180 ICOFF = MOD( JA-1, DESCA( NB_ ) ) 00181 IF( IROFF.NE.0 ) THEN 00182 INFO = -4 00183 ELSE IF( ICOFF.NE.0 ) THEN 00184 INFO = -5 00185 ELSE IF( DESCA( MB_ ).NE.DESCA( NB_ ) ) THEN 00186 INFO = -(600+NB_) 00187 END IF 00188 END IF I read the code as "you can only play with the sub matrix starting right at IA = x*blocksize + 1 and JA = y*blacksize + 1". Otherwise, iroff (or icoff) would not equal to 0 and the subroutine would exit and report its parameter 4(or 5) is illegal. Correct? I am wondering if anyone has successfully did the LU decomp or matrix inverse on the bottom-right sub matrix directly? I'd really appreciate any feedback.
{"url":"https://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=2&t=4348","timestamp":"2014-04-21T07:07:03Z","content_type":null,"content_length":"14487","record_id":"<urn:uuid:77100a64-5961-4859-bfdc-3c1622d35d89>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Axion's objetive Franklin Vera Pacheco franklin at ghost.matcom.uh.cu Tue Jan 8 16:40:07 EST 2002 It hapens that the axiomatic constructions are intended to represent all the intuitive characteristics of the theory. eg, We give a union axiom because we intuitively always can get the union of whatever two sets. It hapens too that because of this there are,in the theory, some operations that we can't do. Some times the theory acerts the existence of an element that we can't construct or very strong proposition as the induction principle. It maybe can be interesting to make an axiomatic construction that is intended to represent the human capability ( finitistic thinking,constructions, deductions) To see this in a better way: 1)there are not infinitely many elements 2)for the naturals we can get the 0,the sucesor axiom but not the induction one. 3)all the infinite sets only can be given as the rule ( finite ) to get its elements.. Does someboddy know some work near this? Franklin Vera Pacheco 45 #10029 e/100 y 104 Marianao, C Habana, e-mail:franklin at ghost.matcom.uh.cu More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2002-January/005128.html","timestamp":"2014-04-21T13:24:27Z","content_type":null,"content_length":"3270","record_id":"<urn:uuid:f8217b76-0edc-4715-843d-fcaf32c25d1e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: On the power of two, three and four probes Noga Alon Uriel Feige December 4, 2008 An adaptive (n, m, s, t)-scheme is a deterministic scheme for encoding a vector X of m bits with at most n ones by a vector Y of s bits, so that any bit of X can be determined by t adaptive probes to Y . A non-adaptive (n, m, s, t)-scheme is defined analogously. The study of such schemes arises in the investigation of the static membership problem in the bitprobe model. Answering a question of Buhrman, Miltersen, Radhakrishnan and Venkatesh [SICOMP 2002] we present adaptive (n, m, s, 2) schemes with s < m for all n satisfying + 4n < m and adaptive (n, m, s, 2) schemes with s = o(m) for all n = o(log m). We further show that there are adaptive (n, m, s, 3)-schemes with s = o(m) for all n = o(m), settling a problem of Radhakrishnan, Raman and Rao [ESA 2001], and prove that there are non-adaptive (n, m, s, 4)-schemes with s = o(m) for all n = o(m). Therefore, three adaptive probes or four non-adaptive probes already suffice to obtain a significant saving in space compared to the total length of the input vector. Lower bounds are discussed as well. Schools of Mathematics and Computer Science, Raymond and Beverly Sackler Faculty of Exact Sciences,
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/376/1705893.html","timestamp":"2014-04-20T18:45:44Z","content_type":null,"content_length":"8537","record_id":"<urn:uuid:e2deacac-a244-4846-8041-f0c538afea0f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
Michael Kleder’s “Learning the Kalman Filter” mini tutorial, along with the great feedback it has garnered (73 comments and 67 ratings, averaging 4.5 out of 5 stars), is one of the most popular downloads from Matlab Central and for good reason. In his in-file example, Michael steps through a Kalman filter example in which a voltmeter is used to measure the output of a 12-volt automobile battery. The model simulates both randomness in the output of the battery, and error in the voltmeter readings. Then, even without defining an initial state for the true battery voltage, Michael demonstrates that with only 5 lines of code, the Kalman filter can be implemented to predict the true output based on (not-necessarily-accurate) uniformly spaced, measurements: This is a simple but powerful example that shows the utility and potential of Kalman filters. It’s sure to help those who are trepid about delving into the world of Kalman filtering.
{"url":"http://jonathankinlay.com/index.php/2010/07/","timestamp":"2014-04-19T17:03:11Z","content_type":null,"content_length":"31618","record_id":"<urn:uuid:afc21fb8-452f-49be-9189-c623da7cf960>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
orthogonal group over local field MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. Would anyone be able to tell me how to prove that the orthogonal group over a local field for an anisotropic quadratic form is compact? up vote 4 down vote favorite ag.algebraic-geometry add comment Would anyone be able to tell me how to prove that the orthogonal group over a local field for an anisotropic quadratic form is compact? I am lazy, so I'll write this out as a sequence of claims without proofs. Let $K$ be the local field and let $| \ |$ denote the absolute value on $K$. Let $V$ be the vector space with anisotropic form $\langle \ , \ \rangle$. Choose an arbitrary basis $e_1$, ..., $e_n$ of $V$. Define functions $| \ |_{\infty}$ and $| \ |_2$ from $V \to \mathbb{R}$ as follows: $$\left| \sum a_i e_i \right|_{\infty} = \max(|a_i|).$$ $$| v |_2 = | \ langle v,v \rangle|^{1/2}.$$ Claim: The unit $\infty$-ball, $B_{\infty}:=\{ v \in V: |v|_{\infty} \leq 1 \}$, is compact. Claim: The unit $\infty$-sphere, $S_{\infty} := \{ v \in V: |v|_{\infty} = 1 \}$, is compact. up vote 8 down vote accepted Claim: The function $| \ |_2$ is continuous. Claim: There is a positive constant $r>0$ such that $|v|_2 \geq r$ on $S_{\infty}$. (This is the step that uses anisotropy.) Claim: Define the unit $2$-sphere by $S_2 := \{ v \in V: |v|_{2} = 1 \}$. Then $S_2 \subset (1/r) B_{\infty}$, and $S_2$ is compact. Claim: The orthogonal group embeds as a closed subspace of $S_2^n$, and is hence compact. add comment I am lazy, so I'll write this out as a sequence of claims without proofs. Let $K$ be the local field and let $| \ |$ denote the absolute value on $K$. Let $V$ be the vector space with anisotropic form $\langle \ , \ \rangle$. Choose an arbitrary basis $e_1$, ..., $e_n$ of $V$. Define functions $| \ |_{\infty}$ and $| \ |_2$ from $V \to \mathbb{R}$ as follows: $$\left| \sum a_i e_i \right|_{\infty} = \max(|a_i|).$$ $$| v |_2 = | \langle v,v \rangle|^{1/2}.$$ Claim: The unit $\infty$-ball, $B_{\infty}:=\{ v \in V: |v|_{\infty} \leq 1 \}$, is compact. Claim: The unit $\infty$-sphere, $S_{\infty} := \{ v \in V: |v|_{\infty} = 1 \}$, is compact. Claim: There is a positive constant $r>0$ such that $|v|_2 \geq r$ on $S_{\infty}$. (This is the step that uses anisotropy.) Claim: Define the unit $2$-sphere by $S_2 := \{ v \in V: |v|_{2} = 1 \}$. Then $S_2 \subset (1/r) B_{\infty}$, and $S_2$ is compact. Claim: The orthogonal group embeds as a closed subspace of $S_2^n$, and is hence compact. To address Jim Humphreys' comment concerning a unified argument covering all (non-archimedean) local fields, see the proof by Gopal Prasad (in "An elementary proof of a theorem of Bruhat-Tits-Rousseau and of a theorem of Tits", Bull. SMF 110) that a connected reductive group $G$ over a henselian non-trivially valued field $k$ has $G(k)$ bounded (equiv. compact, when $k$ is locally compact) if and only if $G$ is $k$-anisotropic. up vote Relevance: the special orthogonal group of an anisotropic non-degenerate quadratic form over a field $K$ is $K$-anisotropic as a connected semisimple algebraic group over $K$ (so the 4 down question posed falls into the context of Prasad's argument). Indeed, arguing by contradiction, suppose there is a nontrivial split $K$-torus in the special orthogonal group. This leads to a vote nontrivial $K$-rational zero of the quadratic form by considering the weight space decomposition for the action of such a split torus on the $K$-vector space in question (any nontrivial element in a single weight space is such a $K$-rational zero). add comment To address Jim Humphreys' comment concerning a unified argument covering all (non-archimedean) local fields, see the proof by Gopal Prasad (in "An elementary proof of a theorem of Bruhat-Tits-Rousseau and of a theorem of Tits", Bull. SMF 110) that a connected reductive group $G$ over a henselian non-trivially valued field $k$ has $G(k)$ bounded (equiv. compact, when $k$ is locally compact) if and only if $G$ is $k$-anisotropic. Relevance: the special orthogonal group of an anisotropic non-degenerate quadratic form over a field $K$ is $K$-anisotropic as a connected semisimple algebraic group over $K$ (so the question posed falls into the context of Prasad's argument). Indeed, arguing by contradiction, suppose there is a nontrivial split $K$-torus in the special orthogonal group. This leads to a nontrivial $K$-rational zero of the quadratic form by considering the weight space decomposition for the action of such a split torus on the $K$-vector space in question (any nontrivial element in a single weight space is such a $K$-rational zero). At least in the case of an algebraic extension $K$ of a $\mathbf Q_p$ with ring of integers $A$, you can also easily check (or read in O'Meara) that if a form $q$ on a vector space $V$ up vote 3 over $K$ is anisotropic, then the set of vectors $x$ with $q(x)\in A$ is an $A$-lattive $L$ on $V$. Thus $O(V,q)=O(L,q\vert_L)$ is compact. down vote add comment At least in the case of an algebraic extension $K$ of a $\mathbf Q_p$ with ring of integers $A$, you can also easily check (or read in O'Meara) that if a form $q$ on a vector space $V$ over $K$ is anisotropic, then the set of vectors $x$ with $q(x)\in A$ is an $A$-lattive $L$ on $V$. Thus $O(V,q)=O(L,q\vert_L)$ is compact. The proof for the archimedean case is given in our own Pete Clark's notes., see Theorem 1. up vote 2 down vote add comment The proof for the archimedean case is given in our own Pete Clark's notes., see Theorem 1.
{"url":"http://mathoverflow.net/questions/90117/orthogonal-group-over-local-field","timestamp":"2014-04-18T06:07:35Z","content_type":null,"content_length":"63583","record_id":"<urn:uuid:be4c0dbf-e249-40d9-aacc-f535a99aae7e>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry: Congruence To prove that two triangles have the same shape, certain parts of one triangle must coincide with certain parts of the other triangle. Specifically, the vertices of each triangle must have a one-to-one correspondence. This phrase means that the measure of each side and angle of each triangle corresponds to a side or angle of the other triangle. As we will see, triangles don't necessarily have to be congruent to have a one-to-one correspondence; but when they are congruent, it is necessary to know the correspondence of the triangles to know exactly which sides and which angles are As you know, when a triangle's name is derived from the letters given to either its angles or sides (ex. triangle ABC). Until now, it didn't seem to matter which letters were there--as long as all three vertices were in the name, we knew which triangle we were talking about. Now, when we want to say that a given triangle, like triangle ABC, is congruent to another triangle, like triangle DEF, the order of the vertices in the name makes a big difference. Congruent triangles ABC and DEF When two triangle are written this way, ABC and DEF, it means that vertex A corresponds with vertex D, vertex B with vertex E, and so on. This means that side CA, for example, corresponds to side FD; it also means that angle BC, that angle included in sides B and C, corresponds to angle EF. These relationships aren't especially important when triangles aren't congruent or similar. But when they are congruent, the one-to-one correspondence of triangles determines which angles and sides are congruent. When a triangle is said to be congruent to another triangle, it means that the corresponding parts of each triangle are congruent. By proving the congruence of triangles, we can show that polygons are congruent, and eventually make conclusions about the real world.
{"url":"http://www.sparknotes.com/math/geometry2/congruence/section1.rhtml","timestamp":"2014-04-18T23:29:30Z","content_type":null,"content_length":"51788","record_id":"<urn:uuid:b3120227-de20-4b9a-89c6-cee625d13a5e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
The old MAT135/136 webpages have been removed. The new MAT135H1F webpages will be posted here in September 2014. The textbook for MAT135H1 and MAT136H1 (for the summer of 2014 as well as the 2014-2015 academic year) will be: Single Variable Calculus, Early Transcendentals version - 7th edition - by James Stewart (Publishers: Brooks/Cole). You should also buy the Student Solutions Manual which accompanies the above textbook. Note that the 7th edition is not the same as the 6th edition and has a variety of new exercises, among other changes. Of course, only the new Student Solutions Manual has the solutions to the new exercises of the 7th edition.
{"url":"http://www.math.toronto.edu/lam/","timestamp":"2014-04-21T05:47:58Z","content_type":null,"content_length":"23526","record_id":"<urn:uuid:6f55876f-f573-40a0-85fd-7d296aade11d>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
One of the papers currently on my list of "Breaking Research" (see right sidebar) has the potential to be unusually explosive; perhaps world-changing. It's conclusions represent dynamite for everyone involved in the economic assessment (i.e. cost-benefit analysis) of various proposals for measures to respond to climate change or environmental degradation more generally. All this from a bit of algebra (and good thinking). Here's why. Five years ago, the British Government issued the so-called Stern Review of the economics of climate change, authored by economist Nicholas Stern. The review had strong conclusions: “If we don’t act, the overall costs and risks of climate change will be equivalent to losing at least 5% of global GDP each year, now and forever. If a wider range of risks and impacts is taken into account, the estimates of damage could rise to 20% of GDP or more.” The review recommended that governments take fast action to reduce greenhouse-gas emissions. In response, many economists -- most prominently William Nordhaus of Yale University -- have countered the Stern Review by criticizing the way it "discounted" the value of consequences in the future. They said it didn't discount the future strongly enough. In this essay in 2007, for example, Nordhaus argued that the value of future economic losses attributed to climate change (or any other concerns about the environment) should be discounted at about 7% per year, far higher than the value of 1.4% used in the Stern Review. Here is his comment on this difference, providing some context: In choosing among alternative trajectories for emissions reductions, the key economic variable is the real return on capital, r, which measures the net yield on investments in capital, education, and technology. In principle, this is observable in the marketplace. For example, the real pretax return on U.S. corporate capital over the last four decades has averaged about 0.07 per year. Estimated real returns on human capital range from 0.06 to > 0.20 per year, depending on the country and time period (7). The return on capital is the “discount rate” that enters into the determination of the efficient balance between the cost of emissions reductions today and the benefit of reduced climate damages in the future. A high return on capital tilts the balance toward emissions reductions in the future, whereas a low return tilts reductions toward the present. The Stern Review’s economic analysis recommended immediate emissions reductions because its assumptions led to very low assumed real returns on capital. Of course, one might wonder if four decades of data is enough to project this analysis safely into untold centuries in the future (think sub-prime crisis and the widespread belief that average housing prices in the US have fallen, based on a study going back 30 years or so). That to one side, however, there may be something much more fundamentally wrong with Nordhaus's critique, as well as with the method of discounting used by Stern in his review and by most economists today in almost every cost benefit analysis involving the projections into future. The standard method of economic discounting follows an exponential decay. Using the 7% figure, each movement of roughly 10 years into the future implies a decrease in current value by a factor of 2. With a discounting rate r, the discount factor applied at time T in the future is exp(-rT). Is this the correct way to do it? Economists have long argued that it is for several reasons. To be "rational", in particular, discounting should obey a condition known as "time consistency" -- essentially that subsequent periods of time should all contribute to the discounting in an equal way. This means that a discount over a time A+B should be equal to a discount over time A multiplied by a discount over time B. If this is true -- and it seems sensible that it should be -- then it's possible to show that exponential discounting is the only possibility. It's the rational way to discount. That would seem beyond dispute, although it doesn't settle the question of which discount rate to use. But not so fast. Physicist Doyne Farmer and economist John Geanakoplos have taken another look at the matter in the case in which the discount rate isn't fixed, but varies randomly through time (as indeed do interest rates in the market). This blog isn't a mathematics seminar so I won't get into details, but their analysis concludes that in such a (realistically) uncertain world, the exponential discounting function no longer satisfies the time consistency condition. Instead, a different mathematical form is the natural one for discounting. The proper or rational discounting factor D(T) has the form D(T) = 1/(1 + αT)^β, where α and β are constants (here ^ means "raised to the power of"). For long times T, this form has a power law tail proportional to T^-β, which falls off far more slowly than an exponential. Hence, the value of the future isn't discounted to anywhere near the same degree. Farmer and Geanakoplos illustrate the effect with several simple models. You might take the discount rate at any moment to be the current interest rate, for example. The standard model in finance for interest rate movements in the geometric random walk (the rate gets multiplied or divided at each moment by a number, say 1.1, to determine the next rate). With discount rates following this fluctuating random process, the average effective discount after a time T isn't at all like that based on the current rate projected into the future. Taking the interest rate as 4%, with a volatility of 15%, the following figure taken from their paper compares the resulting discount factors as time increases: For the first 100 years, the numbers aren't too different. But at 500 years the exponential is already discounting values about one million times more strongly than the random process (GRW), and it gets worse after that. This is truly a significant hole in the analyses performed to date on climate policy (or steps to counter other problems where costs come in the future). Farmer and Geanakoplos don't claim that this geometric random walk model is THE correct one, it's only illustrative (but also isn't obviously unreasonable). But the point is that everything about discounting depends very sensitively on the kinds of assumptions made, not only about the rate of discounting but the very process it follows through time. As they put it: What this analysis makes clear, however, is that the long term behavior of valuations depends extremely sensitively on the interest rate model. The fact that the present value of actions that affect the far future can shift from a few percent to infinity when we move from a constant interest rate to a geometric random walk calls seriously into question many well regarded analyses of the economic consequences of global warming. ... no fixed discount rate is really adequate – as our analysis makes abundantly clear, the proper discounting function is not an exponential. It seems to me this is a finding of potentially staggering importance. I hope it quickly gets the attention it deserves. It's incredible that what are currently considered the best analyses of some of the world's most pressing problems hinge almost entirely on quite arbitrary -- and possible quite mistaken -- techniques for discounting the future, for valuing tomorrow much less than today. But it's true. In his essay in criticizing the Stern Review, Nordhaus makes the following quite amazing statement, which is nonetheless taken by most economists, I think, as "obviously" sensible: In fact, if the Stern Review’s methodology is used, more than half of the estimated damages “now and forever” occur after 2800. Can you imagine that? Most of the damage could accrue after 2800 -- i.e., in that semi-infinite expanse of the future leading forward into eternity, rather than in the 700 years between now and then? Those using standard economics are so used to the idea that the future should receive very little consideration find this kind of idea crazy. But their logic looks to me seriously full of holes. 1 comment: 1. There's less here than meets the eye. The important general point is "the long term behavior of valuations depends extremely sensitively on the interest rate model." However, mathematical literacy is important. Stern's discount rate of 1.4% corresponds to a doubling or halving time of 50 years. Suppose that damage is constant per year, ignoring discounting. Then damage 50 years out counts for 1/2, 100 years out for 1/4, 150 years out for 1/8, etc. So the total discounted damage 50 and more years in the future equals 1, where damage now also equals 1. That means, even under Stern's methodology "more than half of the estimated damages “now and forever” occur after" the year 2060, not the year 2800! My conclusion is that arguments about more than 50 years in the future cannot be based centrally on any model of interest rates.
{"url":"http://physicsoffinance.blogspot.com/2011/05/deep-discounting-errors.html","timestamp":"2014-04-17T15:25:17Z","content_type":null,"content_length":"141712","record_id":"<urn:uuid:d04c5e9c-8f75-4f0e-9f5b-f5b0675b774b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Some questions about Ackermann set theory up vote 5 down vote favorite In a comment on this site Andreas Blass stated: "To fit this situation into my philosophical point of view, I'd say that what Ackermann's theory calls proper classes are really certain sets. That notion has some support in the Levy-Vaught interpretation of Ackermann set theory in a conservative extension of ZF, where both the sets and the classes of Ackermann are interpreted as certain sets in the sense of ZF. – Andreas Blass Feb 9 at 16:58 " Does it mean that the following statements (analogs of the axioms of pairing, union and powerset respectively) are consistent with Ackermann set theory: 1) For any two classes $X, Y$ there exists the class $Z$ which contains just $X$ and $Y$. 2) For any class $X$ there exists the class, whose members are just the members of the members of $X$; 3) For any class $X$ there exists the class whose members are just all the subclasses of $X$. set-theory lo.logic foundations add comment 1 Answer active oldest votes The answer to your question is "Yes." In trying to understand why Ackermann nonetheless wanted to distinguish proper classes such as V from sets - where V is the proper class of all sets, taken here to be a natural model of ZFC - it is necessary to take seriously Ackermann's claims about the universe V being continually "under construction," so to speak, or always in the process of being "built" (see Penelope Maddy's article "Proper Classes" (Journal of Symbolic Logic, 1983), p. 122, on this point). At some particular "time" (or "step") t in this construction process, there may exist (e.g.) classes larger than V ("superclasses") that have been obtained by iterating the power-set operation on V denumerably many times. Consider the totality T of such superclasses existing at t; i.e., T (at t) is up the limit of a denumerable sequence of iterated power-set operations on V. Since T (at t) is clearly not a natural model of ZFC, it cannot be regarded as a new "universe" that supersedes or vote 3 replaces V. Hence, those collections which (at t) are members of T but not of V are not members of a suitable universe or natural model; and so they are to be distinguished from the members of down V, a distinction expressed by calling them "proper classes" (and calling the members of V "sets"). One glaring problem with this account is that it's unclear what we're supposed to take as the time t. Why, in particular, should we suppose that t is such that the power set operation on V has been iterated denumerably many times (or even some nonzero number of times)? Without any constraints on t, there's simply no way to know what proper classes we're supposed to regard as existing; and, in fact, there are no theoretical constraints on what t's value is. Hence, there's no basis for saying anything definite about proper classes at all; and so it seems best, as Andreas Blass said, to treat Ackermann's "proper classes" simply as sets. @wmitt: Thanks for the answer. Could you provide some references to theorems supporting your answer? Does it follow from your answer that there exists a conservative extension $AZ$ of Ackermann set theory which ($AZ$) is also a conservative extension of $ZF$? – Victor Makarov Apr 20 '12 at 15:37 There's F.A. Muller's "Sets, Classes and Categories," Brit. Jrnl for the Phil. of Sci. 52 (2001), 539-73. Muller adds a Class Separation Schema ("ClsSep") to Ackermann's theory ("A"), and 1 this yields the desired analogs of Pairing, Union and Powerset (565). Contrary to Muller (564), I believe his theory is a conservative extension of A (and of ZF), since (i) ClsSep entails Ackermann's class existence schema ClsEx, and (ii) ClsEx is just the restriction of ClsSep to classes that contain only sets. Cf. A. Blass's review of Muller at MathSciNet (MR1851712), 3rd paragraph from the end. – wmitt Apr 20 '12 at 21:49 Btw, note that Muller combines A with Regularity for sets, which gives a conservative extension of A (see Levy & Vaught). Also, it's clear from Reinhardt that any conservative extension of A is, ipso facto, a conservative extension of ZF. (The references for Levy-Vaught and Reinhardt are in the comment by Blass linked to above.) Finally, Muller tries to explicate Ackermann's set/proper class distinction by formalizing the idea of unsharpness; but he only formalizes "X is unsharply distinguished from Y," and not "X is unsharp"- and it's the latter that needs formalization. – wmitt Apr 21 '12 at 15:27 add comment Not the answer you're looking for? Browse other questions tagged set-theory lo.logic foundations or ask your own question.
{"url":"http://mathoverflow.net/questions/92704/some-questions-about-ackermann-set-theory","timestamp":"2014-04-18T18:50:40Z","content_type":null,"content_length":"58061","record_id":"<urn:uuid:cc25150e-d334-4656-a5bc-a3d526b7af2e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
ISBN: 9780321132253 | 0321132254 Edition: 4th Format: Paperback Publisher: Addison Wesley Pub. Date: 1/1/2004 Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
{"url":"http://www.knetbooks.com/prealgebra-4th-bittinger-marvin-l/bk/9780321132253","timestamp":"2014-04-19T15:21:12Z","content_type":null,"content_length":"29847","record_id":"<urn:uuid:28e820fe-dc41-4380-9246-347ee0d1c9ba>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
Platonism in the Philosophy of Mathematics 1. Does platonism directly contradict physicalism? The answer will depend on how physicalism is defined. If physicalism is defined as the view that everything supervenes on the physical, and if all mathematical truths are necessary, then the two views will be formally consistent. For assuming S5, any two worlds are alike with respect to necessary truths. Thus a fortiori, any two worlds that are alike with respect to physical truths are also alike with respect to mathematical truths. But this is a standard definition of the claim that mathematical truths supervene on the physical. If on the other hand physicalism is defined as the view that all entities are composed of, or constituted by, fundamental physical entities, then the two views will contradict each other. (See the entry on 2. For instance, there is wide-spread agreement among mathematicians about the guiding problems of their field and about the kinds of methods that are permissible when attempting to solve these problems. Moreover, using these methods, mathematicians have made, and continue to make, great progress towards solving these guiding problems. 3. However, the philosophical analysis itself could be challenged. For this analysis goes beyond mathematics proper and does therefore not automatically inherit its strong scientific credentials. 4. However, it is not easy to understand what this dependence or constitution amounts to. More recent forms of intuitionism are often given an alternative development in the form of a non-classical semantics for the language of mathematics. Semantic theories of this sort seek to replace the classical notion of truth with the epistemologically more tractable notion of proof. Where classical platonism says that a mathematical sentence S is true just in case the objects that S talks about have the properties that S ascribes to them, the present form of intuitionism says that S is true (in some suitably lightweight sense) just in case S is provable. See Wright 1992 and Dummett 1991b. 5. To highlight the contrast with truth-value realism, platonism and anti-nominalism are sometimes referred to as forms of ‘object realism’. This is not a term that I will use here. 6. One example is the “modal structuralism” of Hellman 1989, where an arithmetical sentence A is analyzed as ☐∀X∀f∀x[PA^2(X/ℕ, f/s, x/0)A(X/ℕ, f/s, x/0)], where PA^2 is the conjunction of the axioms of second-order Peano Arithmetic. 7. This is the point of Kreisel's dictum, which makes many appearances in the writings of Michael Dummett, for instance: As Kreisel remarked in a review of Wittgenstein, “the problem is not the existence of mathematical objects but the objectivity of mathematical statements”. (Dummett 1978b, p. xxxviii) See also Dummett 1981, p. 508. The remark of Kreisel's to which Dummett is alluding appears to be Kreisel 1958, p. 138, fn. 1 (which, if so, is rather less memorable than Dummett's paraphrase). For another example of the view that truth-value realism is more important than platonism, see Isaacson 1994, and Gaifman 1975 for a related view. 8. See Hilbert 1996, p. 1102. Famously, one of the problems Hilbert sets is the Continuum Hypothesis. For this problem to be “solvable”, the Continuum Hypothesis must have an objective truth-value despite being independent of standard ZFC set theory. 9. Note that this step uses the parenthetical precisification in Truth. Without this precisification, it would be possible for most sentences accepted as mathematical theorems to be true and all sentences of the form mentioned in the text to be false. 10. There is a related argument which stands to object-directed intentional acts the way the Fregean argument stands to sentences or propositions. (See Gödel 1964 and Parsons 1980.) (2) People have intuitions as of mathematical objects. (3) These intuitions are veridical. These premises entail Existence as well: for an intuition can only be veridical when its intentional object exists. I will concentrate on the original Fregean argument as this seems more tractable. For it is easier to assess whether a mathematical sentence is true than whether a mathematical intuition is veridical. 11. An epistemic holist will claim that evidence for or against a linguistic analysis can in principle come from anywhere. I need not deny this claim. My point is simply that the hypothesis in question belongs to empirical linguistics and has to be assessed as such. 12. Two differences between Benacerraf's and Field's arguments deserve mention. Firstly, Field's argument is carefully formulated so as to avoid any appeal to problematic causal theories of knowledge. Secondly, unlike Field, Benacerraf does not regard his argument as an objection to mathematical platonism but rather as a dilemma. One desideratum in the philosophy of mathematics is a unified semantics for mathematical and non-mathematical language. Another desideratum is a plausible epistemology of mathematics. If we accept mathematical platonism, we satisfy the first desideratum but not the second. If on the other hand we reject mathematical platonism, we satisfy the second desideratum but not the first. 13. Even if Premise 3 turns out to be defensible, it may no longer be so when ‘anti-nominalism’ is substituted for ‘mathematical platonism’. The discussion in Section 5.2 provides some reason to doubt this modified version of Premise 3. See also Linnebo 2006, Section 5. 14. The transitive closure of a relation R is the smallest transitive relation S which contains R. The transitive closure of a relation is sometimes also known as the ancestral of the relation. 15. The full-blooded platonist recognizes a mathematical statement S as ‘objectively correct’ only if S is true in all mathematical structures answering to our ‘full conception’ of the relevant mathematical structure. See Balaguer 2001.
{"url":"http://plato.stanford.edu/entries/platonism-mathematics/notes.html","timestamp":"2014-04-21T14:59:31Z","content_type":null,"content_length":"19884","record_id":"<urn:uuid:5898eaca-0035-48b8-b258-f0abefe3a8d2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
In order to calculate the X-ray emission or absorption from a plasma, apart from the physical conditions also the ion concentrations must be known. These ion concentrations can be determined by solving the equations for ionisation balance (or in more complicated cases by solving the time-dependent equations). A basic ingredient in these equations are the ionisation and recombination rates, that we discussed in the previous section. Here we consider three of the most important cases: collisional ionisation equilibrium, non-equilibrium ionisation and photoionisation equilibrium. 4.1. Collisional Ionisation Equilibrium (CIE) The simplest case is a plasma in collisional ionisation equilibrium (CIE). In this case one assumes that the plasma is optically thin for its own radiation, and that there is no external radiation field that affects the ionisation balance. Photo-ionisation and Compton ionisation therefore can be neglected in the case of CIE. This means that in general each ionisation leads to one additional free electron, because the direct ionisation and excitation-autoionisation processes are most efficient for the outermost atomic shells. The relevant ionisation processes are collisional ionisation and excitation-autoionisation, and the recombination processes are radiative recombination and dielectronic recombination. Apart from these processes, at low temperatures also charge transfer ionisation and recombination are important. We define R[z] as the total recombination rate of an ion with charge z to charge z - 1, and I[z] as the total ionisation rate for charge z to z + 1. Ionisation equilibrium then implies that the net change of ion concentrations n[z] should be zero: and in particular for z = 0 one has (a neutral atom cannot recombine further and it cannot be created by ionisation). Next an arbitrary value for n[0] is chosen, and (23) is solved: This is substituted into (22) which now can be solved. Using induction, it follows that Finally everything is normalised by demanding that where n[element] is the total density of the element, determined by the total plasma density and the chemical abundances. Examples of plasmas in CIE are the Solar corona, coronae of stars, the hot intracluster medium, the diffuse Galactic ridge component. Fig. 7 shows the ion fractions as a function of temperature for two important elements. Figure 7. Ion concentration of oxygen ions (left panel) and iron ions (right panel) as a function of temperature in a plasma in Collisional Ionisation Equilibrium (CIE). Ions with completely filled shells are indicated with thick lines: the He-like ions O VII and Fe XXV, the Ne-like Fe XVII and the Ar-like Fe IX; note that these ions are more prominent than their neighbours. 4.2. Non-Equilibrium Ionisation (NEI) The second case that we discuss is non-equilibrium ionisation (NEI). This situation occurs when the physical conditions of the source, like the temperature, suddenly change. A shock, for example, can lead to an almost instantaneous rise in temperature. However, it takes a finite time for the plasma to respond to the temperature change, as ionisation balance must be recovered by collisions. Similar to the CIE case we assume that photoionisation can be neglected. For each element with nuclear charge Z we write: where Z + 1 that contains the ion concentrations, and which is normalised according to Eqn. 26. The transition matrix A is a (Z + 1) × (Z + 1) matrix given by We can write the equation in this form because both ionisations and recombinations are caused by collisions of electrons with ions. Therefore we have the uniform scaling with n[e]. In general, the set of equations (27) must be solved numerically. The time evolution of the plasma can be described in general well by the parameter The integral should be done over a co-moving mass element. Typically, for most ions equilibrium is reached for U ~ 10^18 m^-3s. We should mention here, however, that the final outcome also depends on the temperature history T(t) of the mass element, but in most cases the situation is simplified to T(t) = constant. 4.3. Photoionisation Equilibrium (PIE) The third case that we treat are the photoionised plasmas. Usually one assumes equilibrium (PIE), but there are of course also extensions to non-equilibrium photo-ionised situations. Apart from the same ionisation and recombination processes that play a role for plasmas in NEI and CIE, also photoionisation and Compton ionisation are relevant. Because of the multiple ionisations caused by Auger processes, the equation for the ionisation balance is not as simple as (22), because now one needs to couple more ions. Moreover, not all rates scale with the product of electron and ion density, but the balance equations also contain terms proportional to the product of ion density times photon density. In addition, apart from the equation for the ionisation balance, one needs to solve simultaneously an energy balance equation for the electrons. In this energy equation not only terms corresponding to ionisation and recombination processes play a role, but also several radiation processes (Bremsstrahlung, line radiation) or Compton scattering. The equilibrium temperature must be determined in an iterative way. A classical paper describing such photoionised plasmas is Kallman
{"url":"http://ned.ipac.caltech.edu/level5/Sept08/Kaastra/Kaastra4.html","timestamp":"2014-04-21T03:10:10Z","content_type":null,"content_length":"9419","record_id":"<urn:uuid:aa6ef50b-5965-4559-9e07-fa399b4e749d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] f.p. powers and masked arrays Michael Fitzgerald fitz at astron.berkeley.edu Wed Jun 21 23:39:51 CDT 2006 Hello all, I'm encountering some (relatively new?) behavior with masked arrays that strikes me as bizarre. Raising zero to a floating-point value is triggering a mask to be set, even though the result should be well-defined. When using fixed-point integers for powers, everything works as expected. I'm seeing this with both numarray and numpy. Take the case of 0**1, illustrated below: >>> import numarray as n1 >>> import numarray.ma as n1ma >>> n1.array(0.)**1 >>> n1.array(0.)**1. >>> n1ma.array(0.)**1 >>> n1ma.array(0.)**1. array(data = mask = fill_value=[ 1.00000002e+20]) >>> import numpy as n2 >>> import numpy.core.ma as n2ma >>> n2.array(0.)**1 >>> n2.array(0.)**1. >>> n2ma.array(0.)**1 >>> n2ma.array(0.)**1. array(data = mask = I've been using python v2.3.5 & v.2.4.3, numarray v1.5.1, and numpy v0.9.8, and tested this on an x86 Debian box and a PPC OSX box. It may be the case that this issue has manifested in the past several months, as it's causing a new problem with some of my older code. Any thoughts? Thanks in advance, More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-June/008744.html","timestamp":"2014-04-18T23:21:38Z","content_type":null,"content_length":"3921","record_id":"<urn:uuid:21cedb5f-31b4-42c6-b118-d60d98980ae7>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Conjugate prior of a Laplace distribution December 8th 2009, 09:58 PM Conjugate prior of a Laplace distribution Does anyone know what the Bayesian conjugate prior of a Laplace distribution likelihood is? Most of the exponential family of distributions have known conjugate priors but the Laplace is not in the exponential family in general. I've been struggling for a while trying to find information on its conjugate prior but I haven't found anything. I'm working on a model that would be greatly simplified if I could produce the conjugate for the Laplacian. I am thinking it might be related to the conjugate of the (single-) exponential. December 9th 2009, 05:38 AM mr fantastic Does anyone know what the Bayesian conjugate prior of a Laplace distribution likelihood is? Most of the exponential family of distributions have known conjugate priors but the Laplace is not in the exponential family in general. I've been struggling for a while trying to find information on its conjugate prior but I haven't found anything. I'm working on a model that would be greatly simplified if I could produce the conjugate for the Laplacian. I am thinking it might be related to the conjugate of the (single-) exponential. Of related interest: http://www.math.wm.edu/~leemis/2008amstat.pdf
{"url":"http://mathhelpforum.com/advanced-statistics/119474-conjugate-prior-laplace-distribution-print.html","timestamp":"2014-04-19T08:15:10Z","content_type":null,"content_length":"5575","record_id":"<urn:uuid:12fffa5f-433f-4c5f-891e-20f55f4c8364>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
20 search hits Strangeness dynamics in relativistic nucleus-nucleus collision (2003) Elena L. Bratkovskaya Marcus Bleicher Wolfgang Cassing M. van Leeuwen Manuel Reiter Sven Soff Horst Stöcker Henning Weber We investigate hadron production as well as transverse hadron spectra in nucleus-nucleus collisions from 2 A.GeV to 21.3 A.TeV within two independent transport approaches (UrQMD and HSD) that are based on quark, diquark, string and hadronic degrees of freedom. The comparison to experimental data demonstrates that both approaches agree quite well with each other and with the experimental data on hadron production. The enhancement of pion production in central Au+Au (Pb+Pb) collisions relative to scaled pp collisions (the 'kink') is well described by both approaches without involving any phase transition. However, the maximum in the K+/Pi+ ratio at 20 to 30 A.GeV (the 'horn') is missed by ~ 40%. A comparison to the transverse mass spectra from pp and C+C (or Si+Si) reactions shows the reliability of the transport models for light systems. For central Au+Au (Pb+Pb) collisions at bombarding energies above ~ 5 A.GeV, however, the measured K +/- m-theta-spectra have a larger inverse slope parameter than expected from the calculations. The approximately constant slope of K+/-spectra at SPS (the 'step') is not reproduced either. Thus the pressure generated by hadronic interactions in the transport models above ~ 5 A.GeV is lower than observed in the experimental data. This finding suggests that the additional pressure - as expected from lattice QCD calculations at finite quark chemical potential and temperature - might be generated by strong interactions in the early pre-hadronic/partonic phase of central Au+Au (Pb+Pb) Signatures in the Planck regime (2003) Sabine Hossenfelder Marcus Bleicher Stefan Hofmann Jörg Ruppert Stefan Scherer Horst Stöcker String theory suggests the existence of a minimum length scale. An exciting quantum mechanical implication of this feature is a modification of the uncertainty principle. In contrast to the conventional approach, this generalised uncertainty principle does not allow to resolve space time distances below the Planck length. In models with extra dimensions, which are also motivated by string theory, the Planck scale can be lowered to values accessible by ultra high energetic cosmic rays (UHECRs) and by future colliders, i.e. M f approximately equal to 1 TeV. It is demonstrated that in this novel scenario, short distance physics below 1/M f is completely cloaked by the uncertainty principle. Therefore, Planckian effects could be the final physics discovery at future colliders and in UHECRs. As an application, we predict the modifications to the e+ e- to f+ f- cross-sections. Re-visit the N/Z ratio of free nucleons from collisions of neutron-rich nuclei as a probe of EoS of asymmetric nuclear matter (2003) Qingfeng Li Zhuxia Li Enguang Zhao Horst Stöcker The N/Z ratio of free nucleons from collisions of neutron-rich nuclei as a function of their momentum is studied by means of Isospin dependent Quantum Molecular Dynamics. We find that this ratio is not only sensitive to the form of the density dependence of the symmetry potential energy but also its strength determined by the symmetry energy coe cient. The uncertainties about the symmetry energy coe cient influence the accuracy of probing the density dependence of the symmetry energy by means of the N/Z ratio of free nucleons of neutron-rich nuclei. Probing the minimal length scale by precision tests of the muon g-2 (2003) Ulrich Harbach Sabine Hossenfelder Marcus Bleicher Horst Stöcker Modifications of the gyromagnetic moment of electrons and muons due to a minimal length scale combined with a modified fundamental scale M_f are explored. Deviations from the theoretical Standard Model value for g-2 are derived. Constraints for the fundamental scale M_f are given. Open charm and charmonium production at RHIC (2003) Elena L. Bratkovskaya Wolfgang Cassing Horst Stöcker We calculate open charm and charmonium production in Au + Au reac- tions at ps = 200 GeV within the hadron-string dynamics (HSD) transport approach employing open charm cross sections from pN and N reactions that are fitted to results from PYTHIA and scaled in magnitude to the available experimental data. Charmonium dissociation with nucleons and formed mesons to open charm (D + ¯D pairs) is included dynamically. The comover dissociation cross sections are described by a simple phase-space model including a single free parameter, i.e. an interaction strength M2 0 , that is fitted to the J/ suppression data for Pb + Pb collisions at SPS energies. As a novel feature we implement the backward channels for char- monium reproduction by D ¯D channels employing detailed balance. From our dynamical calculations we find that the charmonium recreation is com- parable to the dissociation by comoving mesons. This leads to the final result that the total J/ suppression at ps = 200 GeV as a function of centrality is slightly less than the suppression seen at SPS energies by the NA50 Collaboration, where the comover dissociation is substantial and the backward channels play no role. Furthermore, even in case that all di- rectly produced J/ mesons dissociate immediately (or are not formed as a mesonic state), a sizeable amount of charmonia is found asymptotically due to the D + ! J/ + meson channels in central collisions of Au + Au at ps = 200 GeV which, however, is lower than the J/ yield expected from f pp collis ns. Model dependence of lateral distribution functions of high energy cosmic ray air showers (2003) Hans-Joachim Drescher Marcus Bleicher Sven Soff Horst Stöcker The influence of high and low energy hadronic models on lateral distribution functions of cosmic ray air showers for Auger energies is explored. A large variety of presently used high and low energy hadron interaction models are analysed and the resulting lateral distribution functions are compared. We show that the slope depends on both the high and low energy hadronic model used. The models are confronted with available hadron-nucleus data from accelerator experiments. Mass modification of D-meson in hot hadronic matter (2003) Amruta Mishra Elena L. Bratkovskaya Jürgen Schaffner-Bielich Stefan Schramm Horst Stöcker We evaluate the in-medium D and -meson masses in hot hadronic matter induced by interactions with the light hadron sector described in a chiral SU(3) model. The e ective Lagrangian approach is generalized to SU(4) to include charmed mesons. We find that the D-mass drops substantially at finite temperatures and densities, which open the channels of the decay of the charmonium states ( 2, c, J/ ) to D pairs in the thermal medium. The e ects of vacuum polarisations from the baryon sector on the medium modification of the D-meson mass relative to those obtained in the mean field approximation are investigated. The results of the present work are compared to calculations based on the QCD sum-rule approach, the quark-meson coupling model, chiral perturbation theory, as well as to studies of quarkonium dissociation using heavy quark potential from lattice QCD. In-medium vector meson masses in a chiral SU(3) model (2003) Detlef Zschiesche Amruta Mishra Stefan Schramm Horst Stöcker Walter Greiner A significant drop of the vector meson masses in nuclear matter is observed in a chiral SU(3) model due to the e ects of the baryon Dirac sea. This is taken into account through the summation of baryonic tadpole diagrams in the relativistic Hartree approximation. The appreciable decrease of the in-medium vector meson masses is due to the vacuum polarisation e ects from the nucleon sector and is not observed in the mean field approximation. Hydrodynamics near a chiral critical point (2003) Kerstin Paech Horst Stöcker Adrian Dumitru We introduce a model for the real-time evolution of a relativistic fluid of quarks coupled to non-equilibrium dynamics of the long wavelength (classical) modes of the chiral condensate. We solve the equations of motion numerically in 3+1 spacetime dimensions. Starting the evolution at high temperature in the symmetric phase, we study dynamical trajectories that either cross the line of first-order phase transitions or evolve through its critical endpoint. For those cases, we predict the behavior of the azimuthal momentum asymmetry for highenergy heavy-ion collisions at nonzero impact parameter. GEANT4 : a simulation toolkit (2003) S. Agostinelli Dennis Dean Dietrich Walter Greiner Kerstin Anja Paech Stefan Scherer Horst Stöcker Henning Weber Detlef Zschiesche et al. Geant4 Collaboration Abstract Geant4 is a toolkit for simulating the passage of particles through matter. It includes a complete range of functionality including tracking, geometry, physics models and hits. The physics processes offered cover a comprehensive range, including electromagnetic, hadronic and optical processes, a large set of long-lived particles, materials and elements, over a wide energy range starting, in some cases, from 250 eV and extending in others to the TeV energy range. It has been designed and constructed to expose the physics models utilised, to handle complex geometries, and to enable its easy adaptation for optimal use in different sets of applications. The toolkit is the result of a worldwide collaboration of physicists and software engineers. It has been created exploiting software engineering and object-oriented technology and implemented in the C++ programming language. It has been used in applications in particle physics, nuclear physics, accelerator design, space engineering and medical physics. PACS: 07.05.Tp; 13; 23
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Horst+St%C3%B6cker%22/start/0/rows/10/yearfq/2003/sortfield/title/sortorder/desc","timestamp":"2014-04-20T06:14:25Z","content_type":null,"content_length":"49503","record_id":"<urn:uuid:cde14b3d-d8ea-4c8a-9992-2103c1202251>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: "function" as a basic mathematical concept Stephen G Simpson simpson at math.psu.edu Wed Jan 14 15:11:21 EST 1998 Colin McLarty writes: > Applications of sheaf theory are traditional by now. They've been > around half a century, and they are used for all the things you > name--except possibly Fourier series. I don't know applications > there but I wouldn't expect to since I don't know much about > Fourier analysis. I guess my question still isn't clear. I'll try once more. I'm not at all interested in applications of topos theory or sheaf theory to prove new theorems in Fourier analysis. My question is, is it possible to develop TRADITIONAL applications of real analysis, including the TRADITIONAL rudiments of classical Fourier analysis, within elementary topos theory. > Now if you ask "How far are foundational presentations of real > analysis in a topos used to build bridges" I'd guess not very far. I'm not asking whether they ARE so used. I'm asking whether it's POSSIBLE IN PRINCIPLE to so use them. In other words, I'm asking about the project of using topos theory as an alternative foundational setup for mathematics, replacing the orthodox setup, ZFC. I'm wondering what will happen when you try to set up the rudiments of real analysis, in the elementary topos setup. Will you get enough real analysis to build bridges? Or will you get a horrible, ill-motivated mess that nobody can make sense out of? I'm not asking these questions in order to embarrass topos theory. I'm genuinely curious. > >I think you already said that this requires special assumptions on the > >topos, e.g. the existence of a natural number object. > That is no very special assumption. And ZF has to use its axiom of > infinity for the same purpose. Well, I wonder. How special is it, given the original motivation for topos, especially if we are talking about topos theory qua general theory of functions? > It is an interesting question which assumptions are needed for which > results. I'm not expert on that but there are results. OK. So this means there is a lot more work to do before you can say that there are no doubts or problems about real analysis in topos theory. Right? > >I'm well aware that you can get something like ZFC by piling more and > >more axioms onto the elementary topos axioms. But, there is a key > >question here: Are these additional axioms well-motivated in terms of > >the original motivation for the topos axioms? > Well, "something like" is a pregnant phrase and we could argue all > day what is like what. ... By "something like ZFC" I simply meant something intertranslatable with some reasonably rich fragment of ZFC, e.g. Zermelo set theory, or maybe Zermelo set theory with comprehension only for bounded formulas. But you didn't address my key question, about whether the additional axioms that are needed for real analysis are well-motivated in terms of the original motivation for the topos axioms. My point here is: How does topos theory stack up against ZFC? We know that ZFC is a well-motivated, plausible, natural set of axioms which arise from a certain picture that we have in mind, namely the cumulative hierarchy. Are the elementary topos axioms equally natural in this sense? What is the picture giving rise to them, if any? And what about the extra axioms needed for real analysis? > The parable itself only refutes one purported argument that topos > theory depends on set theory. Historically I think a lot of topos theory developed as a sort of "category theorist's reaction of set theory". One of the motivating examples was the category of sheaves of sets over a topological space. But I grant it's hard to unravel these historical motivations. My real question is about the foundational motivation, here and now. Is topos theory backed up by a motivating story as a general theory of functions which is supposed to be adequate for all of mathematics? The paradigm here is ZFC, which does have a well-known f.o.m. motivation as a general theory of sets which is adequate for all of mathematics. How does topos theory stack up to ZFC? -- Steve More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/1998-January/000758.html","timestamp":"2014-04-18T08:03:40Z","content_type":null,"content_length":"6730","record_id":"<urn:uuid:2fd9db4a-2dc1-49ef-aa61-3dd9b8d52e02>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tools Discussion: All Lesson Plans in Algebra, What is a binomial? Discussion: All Lesson Plans in Algebra Topic: What is a binomial? << see all messages in this topic < previous message | next message > Subject: RE: What is a binomial? Author: The Math Guy Date: Apr 29 2005 Eric...I like this kind of discussion - thinking about our thinking. And in my way of thinking, 2x+3x is a binomial but 5x is a monomial. The name is a statement about the state of the expression - written as 2x+3x we have the sum of two terms, a binomial but written as 5x we have a single term. You change the state and you can change what you call them. Water is a solid, a liquid, and a gas depending on its state at that moment. But it is still water. Reply to this message Quote this message when replying? yes no Post a new topic to the All Lesson Plans in Algebra discussion Visit related discussions: Lesson Plans Discussion Help
{"url":"http://mathforum.org/mathtools/discuss.html?context=cell&do=r&msg=18611","timestamp":"2014-04-16T19:35:23Z","content_type":null,"content_length":"16031","record_id":"<urn:uuid:30b62481-3c21-4148-bb4f-6e6518132ff4>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
Polynomial Selection for the Number Field Sieve Integer Factorization Algorithm Results 1 - 10 of 12 "... The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because the security of some of these cryptosystems, such as the Rivest-Shamir-Adelman (RSA) system, depends o ..." Cited by 41 (17 self) Add to MetaCart The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because the security of some of these cryptosystems, such as the Rivest-Shamir-Adelman (RSA) system, depends on the difficulty of factoring the public keys. In recent years the best known integer factorisation algorithms have improved greatly, to the point where it is now easy to factor a 60-decimal digit number, and possible to factor numbers larger than 120 decimal digits, given the availability of enough computing power. We describe several algorithms, including the elliptic curve method (ECM), and the multiple-polynomial quadratic sieve (MPQS) algorithm, and discuss their parallel implementation. It turns out that some of the algorithms are very well suited to parallel implementation. Doubling the degree of parallelism (i.e. the amount of hardware devoted to the problem) roughly increases the size of a number which can be factored in a fixed time by 3 decimal digits. Some recent computational results are mentioned – for example, the complete factorisation of the 617-decimal digit Fermat number F11 = 2211 + 1 which was accomplished using ECM. - In Proc. of COCOON 2000 , 2000 "... Abstract. The integer factorisation and discrete logarithm problems are of practical importance because of the widespread use of public key cryptosystems whose security depends on the presumed difficulty of solving these problems. This paper considers primarily the integer factorisation problem. In ..." Cited by 20 (1 self) Add to MetaCart Abstract. The integer factorisation and discrete logarithm problems are of practical importance because of the widespread use of public key cryptosystems whose security depends on the presumed difficulty of solving these problems. This paper considers primarily the integer factorisation problem. In recent years the limits of the best integer factorisation algorithms have been extended greatly, due in part to Moore’s law and in part to algorithmic improvements. It is now routine to factor 100-decimal digit numbers, and feasible to factor numbers of 155 decimal digits (512 bits). We outline several integer factorisation algorithms, consider their suitability for implementation on parallel machines, and give examples of their current capabilities. In particular, we consider the problem of parallel solution of the large, sparse linear systems which arise with the MPQS and NFS methods. 1 - IN: PROC. ASIACRYPT 2003, LNCS 2894 , 2003 "... We estimate the yield of the number field sieve factoring algorithm when applied to the 1024-bit composite integer RSA-1024 and the parameters as proposed in the draft version [17] of the TWIRL hardware factoring device [18]. We present the details behind the resulting improved parameter choices f ..." Cited by 12 (6 self) Add to MetaCart We estimate the yield of the number field sieve factoring algorithm when applied to the 1024-bit composite integer RSA-1024 and the parameters as proposed in the draft version [17] of the TWIRL hardware factoring device [18]. We present the details behind the resulting improved parameter choices from [18]. "... Abstract. The general number field sieve (GNFS) is the asymptotically fastest algorithm for factoring large integers. Its runtime depends on a good choice of a polynomial pair. In this article we present an improvement of the polynomial selection method of Montgomery and Murphy which has been used i ..." Cited by 7 (1 self) Add to MetaCart Abstract. The general number field sieve (GNFS) is the asymptotically fastest algorithm for factoring large integers. Its runtime depends on a good choice of a polynomial pair. In this article we present an improvement of the polynomial selection method of Montgomery and Murphy which has been used in recent GNFS records. 1. The polynomial selection method of Montgomery and Murphy In this section we briefly discuss the problem of polynomial selection for GNFS. We also sketch the polynomial selection method of Montgomery and Murphy. The first step in GNFS (see [3]) for factoring an integer N consists in the choice of two coprime polynomials f1 and f2 sharing a common root modulo N. If we denote the corresponding homogenized polynomials by F1, resp.F2, the next (and most time consuming) step in GNFS consists in finding many pairs (a, b) ∈ Z2 of coprime integers for which both values Fi(a, b), i =1, 2, are products of primes below some smoothness bounds Bi, i =1, 2 (we will refer to these pairs as sieve reports). This is usually done by a sieving procedure which identifies (most of) these pairs in some region A⊂Z2. In the case of line sieving A is of the form [−A, A] × [1,B] ∩ Z2 for some A and B. For lattice sieving the form of this region is more complicated, but we could use a rectangle as above as an approximation. The sieving region A and the smoothness bounds Bi, i =1, 2, are chosen such that one finds approximately π(B1)+π(B2) sieve reports (π(x) denotes the number of primes below x). The time spent for sieving mainly depends on the size of the region A, i.e., 2AB. So we are left with two problems for the polynomial selection phase: how to find such polynomial pairs and, having found more than one, how to select a polynomial pair which minimizes sieving time. Both problems are addressed in several articles ([4], [5], [6]). We give a short description of the results of these articles. Let ρ(x) be Dickman’s function which roughly is the probability that the largest prime factor of a natural number n is at most n 1 x. A first approximation for the number of sieve reports is given by 6 π 2 "... Abstract. Many index calculus algorithms generate multiplicative relations between smoothness basis elements by using a process called Sieving. This process allows to filter potential candidate relations very quickly, without spending too much time to consider bad candidates. However, from an asympt ..." Cited by 6 (3 self) Add to MetaCart Abstract. Many index calculus algorithms generate multiplicative relations between smoothness basis elements by using a process called Sieving. This process allows to filter potential candidate relations very quickly, without spending too much time to consider bad candidates. However, from an asymptotic point of view, there is not much difference between sieving and straightforward testing of candidates. The reason is that even when sieving, some small amount time is spend for each bad candidates. Thus, asymptotically, the total number of candidates contributes to the complexity. In this paper, we introduce a new technique: Pinpointing, which allows us to construct multiplicate relations much faster, thus reducing the asymptotic complexity of relations ’ construction. Unfortunately, we only know how to implement this technique for finite fields which contain a medium-sized subfield. When applicable, this method improves the asymptotic complexity of the index calculus algorithm in the cases where the sieving phase dominates. In practice, it gives a very interesting boost to the performance of state-of-the-art algorithms. We illustrate the feasability of the method with a discrete logarithm record in medium prime finite fields of sizes 1175 bits and 1425 bits. 1 "... Abstract. We present an algorithm that finds polynomials with many roots modulo many primes by rotating candidate Number Field Sieve polynomials using the Chinese Remainder Theorem. We also present an algorithm that finds a polynomial with small coefficients among all integral translations of X of a ..." Cited by 4 (0 self) Add to MetaCart Abstract. We present an algorithm that finds polynomials with many roots modulo many primes by rotating candidate Number Field Sieve polynomials using the Chinese Remainder Theorem. We also present an algorithm that finds a polynomial with small coefficients among all integral translations of X of a given polynomial in ZZ[X]. These algorithms can be used to produce promising candidate Number Field Sieve polynomials. 1 "... The general number field sieve (GNFS) is asymptotically the fastest known factoring algorithm. One of the most important steps of GNFS is to select a good polynomial pair. A standard way of polynomial selection (being used in factoring RSA challenge numbers) is to select a nonlinear polynomial for a ..." Cited by 3 (0 self) Add to MetaCart The general number field sieve (GNFS) is asymptotically the fastest known factoring algorithm. One of the most important steps of GNFS is to select a good polynomial pair. A standard way of polynomial selection (being used in factoring RSA challenge numbers) is to select a nonlinear polynomial for algebraic sieving and a linear polynomial for rational sieving. There is another method called a nonlinear method which selects two polynomials of the same degree greater than one. In this paper, we generalize Montgomery’s method [7] using small geometric progression (GP) (mod N) to construct a pair of nonlinear polynomials. We introduce GP of length d + k with 1 ≤ k ≤ d − 1 and show that we can construct polynomials of degree d having common root (mod N), where the number of such polynomials and the size of the coefficients can be precisely determined. "... The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard [20] in 1988. Since then several variants have been implemented with the objective of improving the siever which is the most time consuming part of this ..." Cited by 1 (0 self) Add to MetaCart The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard [20] in 1988. Since then several variants have been implemented with the objective of improving the siever which is the most time consuming part of this method (but fortunately, also the easiest to parallelise). Pollard's original method allowed one large prime. After that the two-large-primes variant led to substantial improvements [11]. In this paper we investigate whether the three-large-primes variant may lead to any further improvement. We present theoretical expectations and experimental results. We assume the reader to be familiar with the NFS. , 2002 "... 3 Finite Fields In computational number theory and cryptographic applications, we often have to work over finite fields. A finite field F is a finite set with operations "+ " and "\Theta " which satisfy the usual associative, commutative and distributive laws: ..." Add to MetaCart 3 Finite Fields In computational number theory and cryptographic applications, we often have to work over finite fields. A finite field F is a finite set with operations &quot;+ &quot; and &quot;\ Theta &quot; which satisfy the usual associative, commutative and distributive laws: "... Abstract. Polynomial selection is the first important step in number field sieve. A good polynomial not only can produce more relations in the sieving step, but also can reduce the matrix size. In this paper, we propose to use geometric view in the polynomial selection. In geometric view, the coeffi ..." Add to MetaCart Abstract. Polynomial selection is the first important step in number field sieve. A good polynomial not only can produce more relations in the sieving step, but also can reduce the matrix size. In this paper, we propose to use geometric view in the polynomial selection. In geometric view, the coefficients ’ interaction on size and the number of real roots are simultaneously considered in polynomial selection. We get two simple criteria. The first is that the leading coefficient should not be too large or some good polynomials will be omitted. The second is that the coefficient of degree d − 2 should be negative and it is better if the coefficients of degree d − 1 and d − 3 have opposite sign. Using these new criteria, the computation can be reduced while we can get good polynomials. Many experiments on large integers show the effectiveness of our conclusion.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1789589","timestamp":"2014-04-19T18:12:57Z","content_type":null,"content_length":"38157","record_id":"<urn:uuid:b76c07e6-d228-4e17-85ca-0e3af28e364b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Paracompact Hausdorff but not compactly generated? up vote 10 down vote favorite I'm sorry to be asking a (possibly) elementary question, but I've run into a problem in point-set topology; I've just read that there exists paracompact Hausdoff spaces which are not compactly generated. I ask the following: Question: If $X$ is paracompact Hausdorff, is its compactly generated replacement, $k\left(X\right),$ paracompact Hausdorff? Recall: The inclusion $i:CGH \to Haus$ of compactly generated Hausdorff spaces into Hausdorff spaces has a right adjoint $k,$ which replaces the topology of $X$ with the following topology: $U \subset X$ is open in $k\left(X\right)$ if and only if for all compact subsets $K \subset X,$ $U \cap K$ is open in $K$. Another way of describing this topology is that it is the final topology with respect to all maps into $X$ with compact Hausdorff domain. (For the experts, $CGH$ is the mono-coreflective Hull of the category of compact Hausdorff spaces in the category of Hausdorff spaces) point-set-topology gn.general-topology It seems that it's certainly Hausdorff, as the topology of $k(X)$ is finer (if $U$ is open in $X$ then $U\cap K$ is open in $K$ for all compacta $K$, by definition of the subspace topology.) So the two separating sets that worked for $X$ still work for $k(X)$. – wildildildlife May 18 '11 at 22:33 Yes, it is indeed Hausdorff; I know that $k$ is a functor $$k:Haus \to CGH,$$ the question is whether or not it is paracompact. – David Carchedi May 18 '11 at 23:47 Every compactly generated space is a quotient of a locally compact Hausdorff space. That may help, but not in the naive way. You definitely can't conclude $k(X)$ is paracompact just because it's a quotient of a paracompact space. – David White May 22 '11 at 19:13 Thanks, I'm aware of this result, but I'm not sure how to use it. In fact, this is and if and only if, i.e. it characterizes compactly generated spaces. Moreover, for compactly generated Hausdorff spaces, they are the obvious quotient of the disjoint union of all their compact subsets, and if $X$ is not compactly generated, this quotient is $k\left(X\right).$ This means when $X$ is paracompact Hausdorff, $k\left(X\right)$ is a quotient of a space which is is both locally compact and paracompact Hausdorff. I'm not sure where to go from here. – David Carchedi May 23 '11 at 1:31 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged point-set-topology gn.general-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/65340/paracompact-hausdorff-but-not-compactly-generated?sort=newest","timestamp":"2014-04-18T08:55:09Z","content_type":null,"content_length":"52208","record_id":"<urn:uuid:cb9a8381-09b8-4090-a33d-d8c14cf8fdeb>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Institute for Mathematics and its Applications (IMA) - Geometric Introduction to Control Theory by Dmitri Burago Dmitri Burago Consider an n-dimensional Riemannian manifold (you may think of a region in R^n) together with a collection of vector fields V[i], i=1,2, ... m, m< n. We will be concerned with the following a. Given two points in M, does there exist a (smooth) path connecting the points and such that its tangent vector at every point is a linear combination of V[i]'s? b. If such paths exist, what is the shortest one? c. What are the properties of the metric spaces whose distance function is defined as the length of shortest path tangent to the span of V[i]'s at every point? d. How can we analyze particular examples? One of the motivations for this set-up comes from applied problems with the number of controls smaller than the dimension of the configuration space of objects to control. A classical example is parallel parking: the driver has only steering wheel and acceleration pedal in his/her disposal, while the space of positions of the car is three-dimensional. It is even more striking for a truck with several trailers: the configuration space of a trailer train with k trailers is (3+k)-dimensional. We will discuss many other examples of such systems (planographers, bicycles, rolling a ball, falling cats, particles in magnetic field etc.) My main reason to choose Control Theory for my mini-course is that it belongs to the intersection of many mathematical topics, giving an excellent opportunity to give geometric introduction into these disciplines and then show how they can work if one puts them together. These mathematical topics include: geometry of Length Spaces; theory of Connections; Variational Methods; Nilpotent Groups. At the same time, there are many nice real-life examples and applications. This is an approximate plan of the course: 1. Our main model example: the three-dimensional Euclidean coordinate space viewed as the Heisinger group, and a distribution of two-planes invariant under the group action. In this example we will see how it is possible that, having only two-dimensional space of available directions at every point, one still can find a path connecting any two given points (and such that its velocity vector at every point belongs to the two-plane of our distribution at that point.) 2. Lie brackets of vector fields and integrability/nonintegrability conditions of Frobenius and Chow. 3. Important examples (and related notions and theories): connections in vector bundles (holonomies and curvature); contact structures. 4. Length spaces. Carnot-Caratheodory spaces (length spaces arising from control theory). Local structure of Carnot-Caratheodory spaces (box-ball theorem, metric tangent cones). 5. Examples. I do not want to suggest reading any books on this topic before the conference. It will be helpful, however, if students refresh their knowledge of differential equations (existence and uniqueness of solutions, smooth dependence of initial data and parameters), and multi-dimensional calculus (inverse and implicit function theorems, basics in differential forms and smooth manifolds). However, our exposition is planned to be almost self-contained, we will begin from the scratch and review almost everything that we need to use.
{"url":"http://www.ima.umn.edu/pi_programs/abstracts/burago1.html","timestamp":"2014-04-20T18:32:08Z","content_type":null,"content_length":"17778","record_id":"<urn:uuid:420e1bbd-2b51-42ae-8115-4ff42a41c30d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
Plotting data using Matplotlib: Part 2 Exclusive offer: get 50% off this eBook here Matplotlib for Python Developers — Save 50% Build remarkable publication-quality plots the easy way $26.99 $13.50 by Sandro Tosi | October 2009 | Open Source Plotting data from a CSV file A common format to export and distribute datasets is the Comma-Separated Values (CSV) format. For example, spreadsheet applications allow us to export a CSV from a working sheet, and some databases also allow for CSV data export. Additionally, it's a common format to distribute datasets on the Web. In this example, we'll be plotting the evolution of the world's population divided by continents, between 1950 and 2050 (of course they are predictions), using a new type of graph: bars stacked. Using the data available at http://www.xist.org/earth/pop_continent.aspx (that fetches data from the official UN data at http://esa.un.org/unpp/index.asp), we have prepared the following CSV file: Latin America,167307,323323,521228,588649,669533,729184 Northern America,171615,242360,318654,351659,397522,448464 In the first line, we can find the header with a description of what the data in the columns represent. The other lines contain the continent's name and its population (in thousands) for the given In the first line, we can find the header with a description of what the data in the columns represent. The other lines contain the continent's name and its population (in thousands) for the given There are several ways to parse a CSV file, for example: • NumPy's loadtxt() (what we are going to use here) • Matplotlib's mlab.csv2rec() • The csv module (in the standard library) but we decided to go with loadtxt() because it's very powerful (and it's what Matplotlib is standardizing on). Let's look at how we can plot it then: # for file opening made easier from __future__ import with_statement We need this because we will use the with statement to read the file. # numpy import numpy as np NumPy is used to load the CSV and for its useful array data type. # matplotlib plotting module import matplotlib.pyplot as plt # matplotlib colormap module import matplotlib.cm as cm # needed for formatting Y axis from matplotlib.ticker import FuncFormatter # Matplotlib font manager import matplotlib.font_manager as font_manager In addition to the classic pyplot module, we need other Matplotlib submodules: • cm (color map): Considering the way we're going to prepare the plot, we need to specify the color map of the graphical elements • FuncFormatter: We will use this to change the way the Y-axis labels are displayed • font_manager: We want to have a legend with a smaller font, and font_manager allows us to do that def billions(x, pos): """Formatter for Y axis, values are in billions""" return '%1.fbn' % (x*1e-6) This is the function that we will use to format the Y-axis labels. Our data is in thousands. Therefore, by dividing it by one million, we obtain values in the order of billions. The function is called at every label to draw, passing the label value and the position. # bar width width = .8 As said earlier, we will plot bars, and here we defi ne their width. The following is the parsing code. We know that it's a bit hard to follow (the data preparation code is usually the hardest one) but we will show how powerful it is. # open CSV file with open('population.csv') as f: The function we're going to use, NumPy loadtxt(), is able to receive either a filename or a file descriptor, as in this case. We have to open the file here because we have to strip the header line from the rest of the file and set up the data parsing structures. # read the first line, splitting the years years = map(int, f.readline().split(',')[1:]) Here we read the first line, the header, and extract the years. We do that by calling the split() function and then mapping the int() function to the resulting list, from the second element onwards (as the first one is a string). # we prepare the dtype for exacting data; it's made of: # <1 string field> <len(years) integers fields> dtype = [('continents', 'S16')] + [('', np.int32)]*len(years) NumPy is flexible enough to allow us to define new data types. Here, we are creating one ad hoc for our data lines: a string (of maximum 16 characters) and as many integers as the length of years list. Also note how the fi rst element has a name, continents, while the last integers have none: we will need this in a bit. # we load the file, setting the delimiter and the dtype above y = np.loadtxt(f, delimiter=',', dtype=dtype) With the new data type, we can actually call loadtxt(). Here is the description of the parameters: • f: This is the file descriptor. Please note that it now contains all the lines except the first one (we've read above) which contains the headers, so no data is lost. • delimiter: By default, loadtxt() expects the delimiter to be spaces, but since we are parsing a CSV file, the separator is comma. • dtype: This is the data type that is used to apply to the text we read. By default, loadtxt() tries to match against float values # "map" the resulting structure to be easily accessible: # the first column (made of string) is called 'continents' # the remaining values are added to 'data' sub-matrix # where the real data are y = y.view(np.dtype([('continents', 'S16'), ('data', np.int32, len(years))])) Here we're using a trick: we view the resulting data structure as made up of two parts, continents and data. It's similar to the dtype that we defined earlier, but with an important difference. Now, the integer's values are mapped to a field name, data. This results in the column continents with all the continents names,and the matrix data that contains the year's values for each row of the data = y['data'] continents = y['continents'] We can separate the data and the continents part into two variables for easier usage in the code. # prepare the bottom array bottom = np.zeros(len(years)) We prepare an array of zeros of the same length as years. As said earlier, we plot stacked bars, so each dataset is plot over the previous ones, thus we need to know where the bars below finish. The bottom array keeps track of this, containing the height of bars already plotted. # for each line in data for i in range(len(data)): Now that we have our information in data, we can loop over it. # create the bars for each element, on top of the previous bars bt = plt.bar(range(len(data[i])), data[i], width=width, color=cm.hsv(32*i), label=continents[i], and create the stacked bars. Some important notes: • We select the the i-th row of data, and plot a bar according to its element's size (data[i]) with the chosen width. • As the bars are generated in different loops, their colors would be all the same. To avoid this, we use a color map (in this case hsv), selecting a different color at each iteration, so the sub-bars will have different colors. • We label each bar set with the relative continent's name (useful for the legend) • As we have said, they are stacked bars. In fact, every iteration adds a piece of the global bars. To do so, we need to know where to start drawing the bar from (the lower limit) and bottom does this. It contains the value where to start drowing the current bar. # update the bottom array bottom += data[i] We update the bottom array. By adding the current data line, we know what the bottom line will be to plot the next bars on top of it. # label the X ticks with years [int(year) for year in years]) We then add the tick's labels, the years elements, right in the middle of the bar. # some information on the plot plt.ylabel('Population (in billions)') plt.title('World Population: 1950 - 2050 (predictions)') Add some information to the graph. # draw a legend, with a smaller font plt.legend(loc='upper left', We now draw a legend in the upper-left position with a small font (to better fit the empty space). # apply the custom function as Y axis formatter Finally, we change the Y-axis label formatter, to use the custom formatting function that we defined earlier. The result is the next screenshot where we can see the composition of the world population divided by continents: In the preceding screenshot, the whole bar represents the total world population, and the sections in each bar tell us about how much a continent contributes to it. Also observe how the custom color map works: from bottom to top, we have represented Africa in red, Asia in orange, Europe in light green, Latin America in green, Northern America in light blue, and Oceania in blue (barely visible as the top of the bars). Plotting extrapolated data using curve fitting While plotting the CSV values, we have seen that there were some columns representing predictions of the world population in the coming years. We'd like to show how to obtain such predictions using the mathematical process of extrapolation with the help of curve fitting. Curve fitting is the process of constructing a curve (a mathematical function) that better fits to a series of data points. This process is related to other two concepts: • interpolation: A method of constructing new data points within the range of a known set of points • extrapolation: A method of constructing new data points outside a known set of points The results of extrapolation are subject to a greater degree of uncertainty and are influenced a lot by the fitting function that is used. So it works this way: 1. First, a known set of measures is passed to the curve fitting procedure that computes a function to approximate these values 2. With this function, we can compute additional values that are not present in the original dataset Let's first approach curve fitting with a simple example: # Numpy and Matplotlib import numpy as np import matplotlib.pyplot as plt These are the classic imports. # the known points set data = [[2,2],[5,0],[9,5],[11,4],[12,7],[13,11],[17,12]] This is the data we will use for curve fitting. They are the points on a plane (so each has a X and a Y component) # we extract the X and Y components from previous points x, y = zip(*data) We aggregate the X and Y components in two distinct lists. # plot the data points with a black cross plt.plot(x, y, 'kx') Then plot the original dataset as a black cross on the Matplotlib image. # we want a bit more data and more fine grained for # the fitting functions x2 = np.arange(min(x)-1, max(x)+1, .01) We prepare a new array for the X values because we wish to have a wider set of values (one unit on the right and one on to the left of the original list) and a fine grain to plot the fitting function # lines styles for the polynomials styles = [':', '-.', '--'] To differentiate better between the polynomial lines, we now define their styles list. # getting style and count one at time for d, style in enumerate(styles): Then we loop over that list by also considering the item count. # degree of the polynomial deg = d + 1 We define the actual polynomial degree. # calculate the coefficients of the fitting polynomial c = np.polyfit(x, y, deg) Then compute the coefficients of the fitting polynomial whose general format is: c[0]*x**deg + c[1]*x**(deg – 1) + ... + c[deg] # we evaluate the fitting function against x2 y2 = np.polyval(c, x2) Here, we generate the new values by evaluating the fitting polynomial against the x2 array. # and then we plot it plt.plot(x2, y2, label="deg=%d" % deg, linestyle=style) Then we plot the resulting function, adding a label that indicates the degree of the polynomial and using a different style for each line. # show the legend plt.legend(loc='upper left') We then show the legend, and the final result is shown in the next screenshot: Here, the polynomial with degree=1 is drawn as a dotted blue line, the one with degree=2 is a dash-dot green line, and the one with degree=3 is a dashed red line. We can see that the higher the degree, the better is the fit of the function against the data. Let's now revert to our main intention, trying to provide an extrapolation for population data. First a note: we take the values for 2010 as real data and not predictions (well, we are quite near to that year) else we have very few values to create a realistic extrapolation. Let's see the code: # for file opening made easier from __future__ import with_statement # numpy import numpy as np # matplotlib plotting module import matplotlib.pyplot as plt # matplotlib colormap module import matplotlib.cm as cm # Matplotlib font manager import matplotlib.font_manager as font_manager # bar width width = .8 # open CSV file with open('population.csv') as f: # read the first line, splitting the years years = map(int, f.readline().split(',')[1:]) # we prepare the dtype for exacting data; it's made of: # <1 string field> <6 integers fields> dtype = [('continents', 'S16')] + [('', np.int32)]*len(years) # we load the file, setting the delimiter and the dtype above y = np.loadtxt(f, delimiter=',', dtype=dtype) # "map" the resulting structure to be easily accessible: # the first column (made of string) is called 'continents' # the remaining values are added to 'data' sub-matrix # where the real data are y = y.view(np.dtype([('continents', 'S16'), ('data', np.int32, len(years))])) # extract fields data = y['data'] continents = y['continents'] This is the same code that is used for the CSV example (reported here for completeness). x = years[:-2] x2 = years[-2:] We are dividing the years into two groups: before and after 2010. This translates to split the last two elements of the years list. What we are going to do here is prepare the plot in two phases: 1. First, we plot the data we consider certain values 2. After this, we plot the data from the UN predictions next to our extrapolations # prepare the bottom array b1 = np.zeros(len(years)-2) We prepare the array (made of zeros) for the bottom argument of bar(). # for each line in data for i in range(len(data)): # select all the data except the last 2 values d = data[i][:-2] For each data line, we extract the information we need, so we remove the last two values. # create bars for each element, on top of the previous bars bt = plt.bar(range(len(d)), d, width=width, color=cm.hsv(32*(i)), label=continents[i], # update the bottom array b1 += d Then we plot the bar, and update the bottom array. # prepare the bottom array b2_1, b2_2 = np.zeros(2), np.zeros(2) We need two arrays because we will display two bars for the same year—one from the CSV and the other from our fitting function. # for each line in data for i in range(len(data)): # extract the last 2 values d = data[i][-2:] Again, for each line in the data matrix, we extract the last two values that are needed to plot the bar for CSV. # select the data to compute the fitting function y = data[i][:-2] Along with the other values needed to compute the fitting polynomial. # use a polynomial of degree 3 c = np.polyfit(x, y, 3) Here, we set up a polynomial of degree 3; there is no need for higher degrees. # create a function out of those coefficients p = np.poly1d(c) This method constructs a polynomial starting from the coefficients that we pass as parameter. # compute p on x2 values (we need integers, so the map) y2 = map(int, p(x2)) We use the polynomial that was defined earlier to compute its values for x2. We also map the resulting values to integer, as the bar() function expects them for height. # create bars for each element, on top of the previous bars bt = plt.bar(len(b1)+np.arange(len(d)), d, width=width/2, color=cm.hsv(32*(i)), bottom=b2_1) We draw a bar for the data from the CSV. Note how the width is half of that of the other bars. This is because in the same width we will draw the two sets of bars for a better visual comparison. # create the bars for the extrapolated values bt = plt.bar(len(b1)+np.arange(len(d))+width/2, y2, width=width/2, color=cm.bone(32*(i+2)), Here, we plot the bars for the extrapolated values, using a dark color map so that we have an even better separation for the two datasets. # update the bottom array b2_1 += d b2_2 += y2 We update both the bottom arrays. # label the X ticks with years [int(year) for year in years]) We add the years as ticks for the X-axis. # draw a legend, with a smaller font plt.legend(loc='upper left', To avoid a very big legend, we used only the labels for the data from the CSV, skipping the interpolated values. We believe it's pretty clear what they're referring to. Here is the screenshot that is displayed on executing this example: The conclusion we can draw from this is that the United Nations uses a different function to prepare the predictions, especially because they have a continuous set of information, and they can also take into account other environmental circumstances while preparing such predictions. Tools using Matplotlib Given that it's has an easy and powerful API, Matplotlib is also used inside other programs and tools when plotting is needed. We are about to present a couple of these tools: Build remarkable publication-quality plots the easy way Published: November 2009 eBook Price: $26.99 Book Price: $44.99 See more NetworkX ( http://networkx.lanl.gov/) is a Python module that contains tools for creating and manipulating (complex) networks, also known as graphs. A graph is defined as a set of nodes and edges where each edge is associated with two nodes. NetworkX also adds the possibility to associate properties to each node and edge. NetworkX is not primarily a graph drawing package but, in collaboration with Matplotlib (and also with Graphviz), it's able to show the graph we're working on. In the example we're going to propose, we will show how to create a random graph and draw it in a circular shape. # matplotlib import matplotlib.pyplot as plt # networkx nodule import networkx as nx In addition to pyplot, we also import the networkx module. # prepare a random graph with n nodes and m edges n = 16 m = 60 G = nx.gnm_random_graph(n, m) Here, we set up a graph with 16 nodes and 60 edges, chosen randomly from all the graphs with such characteristics. The graph returned is undirected: edges just connect two nodes, without a direction information (from node A to node B or vice versa). # prepare a circular layout of nodes pos = nx.circular_layout(G) Then we are using a node positioning algorithm, particularly to prepare a circular layout for the nodes of our graphs; the returned variable pos is a 2D array of nodes' positions forming a circular # define the color to select from the color map # as n numbers evenly spaced between color map limits node_color = map(int, np.linspace(0, 255, n)) We want to give a nice coloring to our nodes, so we will use a particular color map, but before that we have to identify what colors of the color map would be assigned to each node. We do this by selecting 16 numbers evenly spaced in the 256 available colors in the color map. We now have a progression of numbers that will result in a nice fading effect in the nodes' colors. # draw the nodes, specifying the color map and the list of color nx.draw_networkx_nodes(G, pos, node_color=node_color, cmap=plt.cm.hsv) We start drawing the graph from the nodes. We pass the graph object, the position pos to draw nodes in a circular layout, the color map, and the list of colors to be assigned to the nodes. # add the labels inside the nodes nx.draw_networkx_labels(G, pos) We then request to draw the labels for the nodes. They are numbers identifying the nodes plotted inside them. # draw the edges, using alpha parameter to make them lighter nx.draw_networkx_edges(G, pos, alpha=0.4) Finally, we draw the edges between nodes. We also specify the alpha parameter so that they are a little lighter and don't just appear as a complicated web of lines. # turn off axis elements We then remove the Matplotlib axis lines and labels. The result is as shown in the next screenshot where the nodes' colors are distributed across the whole color spectrum: We advise you to look at the examples available on the NetworkX web site. If you like this kind of stuff, then you'll enjoy it for sure. mpmath (http://code.google.com/p/mpmath/) is a mathematical library, written in pure Python for multiprecision floating-point arithmetic, which means that every calculation done using mpmath can have an arbitrarily high number of precision digits. This is extremely important for fields such as numerical simulation and analysis. It also contains a high number of mathematical functions, constants, and a library of tools commonly needed in mathematical applications with an astonishing performance. In conjunction with Matplotlib, mpmath provides a convenient plotting interface to display a function graphically. It is extremely easy to plot with mpmath and Matplotlib: In [1]: import mpmath as mp In [2]: mp.plot(mp.sin, [-6, 6]) In this example, the mpmath plot() method takes the function to plot and the interval where to draw it. Running this code, the following window pops up: We can also plot multiple functions at a time and define our own functions too: In [1]: import mpmath as mp In [2]: mp.plot([mp.sqrt, lambda x: -0.1*x**3 + x-0.5], [-3, 3]) On executing the preceding code snippet, we get the following screenshot where we have plotted the square root (in blue, upper part) and the function we defined (in red, lower part)0: To plot more functions, simply provide a list of them to plot(). To define a new function, we use a lambda expression. Note how the square root plot is done in full lines for positive values of X, while it's dotted in the negative part. This is because for X negatives, the result is a complex number: mpmath represents the real part with dashes and the imaginary part with dots. In this article, we have seen several examples of real world Matplotlib usage, including: • How to plot data read from a database • How to plot data extracted from a parsed Wikipedia article • How to plot data from parsing an Apache log file • How to plot data from a CSV file • How to plot extrapolated data using a curve fitting polynomial • How to plot using third-party tools such as NetworkX and mpmath We hope these practical examples have increased your interest in exploring Matplotlib, if you haven't already explored it! [ 1 | 2 ] Build remarkable publication-quality plots the easy way Published: November 2009 eBook Price: $26.99 Book Price: $44.99 See more If you have read this article you may be interested to view : About the Author : Sandro Tosi is a Debian Developer, Open Source evangelist, and Python enthusiast. After completing a B.Sc. in Computer Science from the University of Firenze, he worked as a consultant for an energy multinational as System Analyst and EAI Architect, and now works as System Engineer for one of the biggest and most innovative Italian Internet companies. Books From Packt Moodle 1.9 for Second eZ Publish 4: Enterprise Web Ext JS 3.0 Joomla! 1.5 Apache Maven 2 Effective Joomla! 1.5 Development WordPress MU 2.8: Joomla! 1.5 Content Language Teaching Sites Step-by-Step Cookbook SEO Implementation Cookbook Beginner's Guide Administration On plotting time by Hi Sir, I am working on graphing a .csv file... however, I need help on plotting time. the first column in my .csv file is a time stamp in the format HH:MM:SS. then, I have 8 more columns containing 8 different parameter readings. I wish to graph each parameter against the time it is taken, which is indicated by the timestamp. Please help me on this Sir... Thank you! Post new comment
{"url":"http://www.packtpub.com/article/plotting-data-using-matplotlib-part2","timestamp":"2014-04-16T06:56:55Z","content_type":null,"content_length":"99618","record_id":"<urn:uuid:69b62c31-7bbb-43ee-ab1e-2a8124f2f592>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
Relative Springer isomorphisms and the conjugacy classes in Sylow p-subgroups of Chevalley groups Goodwin, Simon Mark (2005) Ph.D. thesis, University of Birmingham. Let \(G\) be a simple linear algebraic group over the algebraically closed field \(k\). Assume \(p\) = char \(k\) > 0 is good for \(G\) and that \(G\) is defined and split over the prime field \(\ char{bbold10}{0x46}_p\). For a power \(q\) of \(p\), we write \(G(q)\) for the Chevalley group consisting of the \(\char{bbold10}{0x46}_q\)-rational points of \(G\). Let \(F : G \rightarrow G\) be the standard Frobenius morphism such that \(G^F\)= \(G(q)\). Let \(B\) be an \(F\)-stable Borel subgroup of \(G\); write \(U\) for the unipotent radical of \(B\) and \(\char{eufm10}{0x75}\) for its Lie algebra. We note that \(U\) and \(\char{eufm10}{0x75}\) are \(F\)-stable and that \(U(q)\) is a Sylow \(p\)-subgroup of \(G(q)\). We study the adjoint orbits of \(U\) and show that the conjugacy classes of \(U(q)\) are in correspondence with the \(F\)-stable adjoint orbits of \(U\). This allows us to deduce results about the conjugacy classes of \(U(q)\). We are also interested in the adjoint orbits of \(B\) in \(\char{eufm10}{0x75}\) and the \(B(q)\)-conjugacy classes in \(U(q)\). In particular, we consider the question of when \(B\) acts on a \(B\)-submodule of \(\char{eufm10} {0x75}\) with a Zariski dense orbit. For our study of the adjoint orbits of \(U\) we require the existence of \(B\)-equivariant isomorphisms of varieties \(U/M \rightarrow\) \(\char{eufm10}{0x75}\)/ \(\char{eufm10}{0x6d}\), where \(M\) is a unipotent normal subgroup of \(B\) and \(\char{eufm10}{0x6d}\) = Lie\(M\). We define relative Springer isomorphisms which are certain maps of the above form and prove that they exist for all \(M\). This unpublished thesis/dissertation is copyright of the author and/or third parties. The intellectual property rights of the author or third parties in respect of this work are as defined by The Copyright Designs and Patents Act 1988 or as modified by any successor legislation. Any use made of information contained in this thesis/dissertation must be in accordance with that legislation and must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the permission of the copyright holder. Export Reference As : ASCII + BibTeX + Dublin Core + EndNote + HTML + METS + MODS + OpenURL Object + Reference Manager + Refer + RefWorks Share this item : Repository Staff Only: item control page
{"url":"http://etheses.bham.ac.uk/118/","timestamp":"2014-04-18T06:01:32Z","content_type":null,"content_length":"21592","record_id":"<urn:uuid:fe54b2dc-b6c8-4c9c-aeb1-c94f403e4c1f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
┃ Statistics of Democide ┃ ┃ ┃ ┃ Contents | Figures | Tables | Preface ┃ ┃ ┃ ┃ Chapter 1: Summary and Conclusions [Why Democide?...] ┃ ┃ Chapter 2: Pre-Twentieth Century Democide ┃ ┃ Chapter 3 Japan's Savage Military ┃ ┃ Chapter 4: The Khmer Rouge Hell State ┃ ┃ Chapter 5: Turkey's Ethnic Purges ┃ ┃ Chapter 6: The Vietnamese War State ┃ ┃ Chapter 7: Poland's Ethnic Cleansing ┃ ┃ Chapter 8: The Pakistani Cutthroat State ┃ ┃ Chapter 9: Tito's Slaughterhouse ┃ ┃ Chapter 10: Orwellian North Korea ┃ ┃ Chapter 11: Barbarous Mexico ┃ ┃ Chapter 12: Feudal Russia ┃ ┃ Chapter 13: Death American by bombing ┃ ┃ Chapter 14: The Gang of Centi-Kilo Murderers ┃ ┃ Chapter 15: The Lesser Murderers ┃ ┃ Chapter 17: Democracy, Power, and Democide ┃ ┃ Chapter 18: Social Diversity, Power, and Democide ┃ ┃ Chapter 19: Culture and Democide ┃ ┃ Chapter 20: The Context of Democide Socio-Economic and Geographic ┃ ┃ Chapter 21: War, Rebellion, and Democide ┃ ┃ Chapter 22: The Social Field and Democide ┃ ┃ Chapter 23: Democide Through the Years ┃ ┃ References ┃ ┃ ┃ ┃ Other Democide Related Documents On This Site ┃ ┃ ┃ ┃ Nontechnical: ┃ ┃ ┃ STATISTICS OF DEMOCIDE Chapter 16 The Social Field Of Democide ^* By R.J. Rummel Social reality is a dynamic complex of shapes, colors, and sounds. Through acculturalization and the trial and error tests of experience we impose order on this actuality by concepts that enable us to survive, reproduce, and achieve other goals. What latent causes and conditions underlie this complex we cannot know. But we culturally, intuitively, and rationally model this in a way that provides understanding of what interests us and presumably helps explain and predict what will occur. Thus we may try to explain a particular complex of perceptions, values, and meanings we call "crime" by an assumed underlying cause we label "poverty;" or "rebellion" by "underdevelopment;" or "war" by "a breakdown in the balance of power." And thus we have explanations of "massacres" or "genocide" in terms of "a challenge to regime power," "religious, racial, ethnic diversity," "ideology," "racism," "minority stereotyping and discrimination," and so on. Whatever explanation preferred, it is a model of a complex we only can perceive darkly. And the only way we have to check our models is by their accord with our experience, logic, and expectations. Were this all it would make our task of understanding and accounting for democide hard enough. But we are also bedeviled by the relative and interrelated nature of reality. It is not simply a matter of discriminating a concept such as "genocide," pointing to a possible underlying cause conceptualized as "racism," and doing case studies of "genocide" to determine if in fact "racism" is an explanation. Or for the more quantitative, it is not only regressing a measure of "genocide" for some sample onto a measure of "racism." Even what we take to be "genocide" or "racism" may be tightly bound up with other manifest or latent aspects of culture and society. But more important, what relationships we presumably uncover through case studies, traditional scholarship, or quantitative analysis may be in reality due to other "causes' and "conditions." For this among other reasons single correlations do not alone indicate causation (on the nature of statistical correlation, see Understanding Correlation). But more to the point, they can be misleading as to the actual relationships involved. Third, fourth, or more underlying causes and conditions may be causing the correlation. Moreover, even if a variety of variables are included to reflect such varied influences, as by trying to account for genocide by a multiple regression analysis of "genocide" on measures of "racism," "development," "minorities," "religion," and so on, the underlying interrelationships among the social phenomena each indexes may themselves be the result of other causes and conditions. All this is to say that our social reality is a social field of interrelated behavior, forces, and conditions. Then how can we determine or analyze this reality? Of course, there is no replacement for case studies, traditional scholarship and analysis, and the open confrontation of concepts, presumed facts, and ideas. Within this arena quantitative analysis can help to uncover the empirical and logical implications of our theories and ideas, systematically test our explanations, and discover manifest relationships. But because we must make sense of fundamentally uncertain perceptions and their even hazier underlying causes and conditions, even what quantitative methods to use is unclear. Consider a simple statistic that I will use to assess the relationship between types of power and democide, the correlation coefficient. There are numerous choices for which coefficient to measure the correlation between two social variables, such as the (Pearson) product moment, Spearman rho, Goodman-Kruskal gamma, or Kendall tau. Should one measure the relationship by pattern or by one that takes into account the difference in magnitudes also? Should one use linear or curvilinear measures of correlation? But more important, there is the question of whether and how the data should be transformed prior to analysis. By logarithms? Squaring? Removing outliers? Ranking? Then there is the choice of other methods of analysis. Even if one selects multiple regression, does one use ordinary regression, step-wise regression (then what is the cutoff?), interactive regression, polynomial regression, regression with interactive terms, and so on and on. This is not a matter of methodological precision, but a matter of how reality is to be modeled. It is a question of substantive theory (but in practice usually a matter of research fads). Each technique or method is a model, and which we use will determine through what window and in what direction we frame social reality. Even for the most used product moment correlation coefficient, how we first transform our data can determine whether the resulting correlation will be high or low, positive or negative. Where possible I will use throughout this and the subsequent chapters three related approaches to deal with these problems. First I will try to analyze democide and its supposed causes--fundamentally power--as within a social field. Rather than singularly conceptualize, collect data on, and test alone specific causes and conditions, I will try to delineate the major empirical patterns of variation and change across a variety of measures, determine their indicators, and estimate the lines of influence, force, and causation, among them. Second, I will use a methodology that best fits my theoretical model^1 and is particularly suited to defining the simplest and independent lines of causation in the field, even though the underlying interrelationships might be curvilinear or complex functions of multiple variables. This can be done by first calculating all the product moment correlations among the measures (on product moment correlations, see Understanding Correlation) and then reducing the resulting correlation matrix to its eigenvalues and eigenvectors. The eigenvectors appropriately scaled are then the dimensions--patterns--of this field. In this and subsequent chapters I will apply this component (factor) analysis to democide, politics, and the other aspects of the social field of democide (on this methodology, see "Understanding Factor Analysis"). To see how this method works consider L. L. Thurstone's famous box example of factor analysis.^2 Let us say that we have a sample of different boxes, some large, some small, some long, some short. Assume that these boxes comprise a spatial field for which we have a variety of manifest measurements M1, M2, M3, M4, etc. Unknown to us let these measurements really comprise functions of x, y, and z, the underlying spatial dimensions of the boxes. Let M1= xy, M2 = (x^2 + z)^1/2, M3 = z/y, M4 = y, etc. Then let us do a component or factor analysis of M1, M2, and the rest. The empirical result should be the three independent patterns (dimensions of boxes) that define all this variation in the field of boxes, that is x, y, and z. If we had included with the other measurements one of x alone, for example, then it would be wholly correlated with and thus define the x pattern. The third way I will try to meet the aforementioned problems, particularly in the seeming infinite methodological choices one has and the significantly different results one can get on the same data depending on these choices, is through theory and convergence. The social field theory and associated conflict helix I have spelled out elsewhere^3 and as appropriate I will reiterate the relevant aspects below and in subsequent chapters. It has been my guide for selecting measures, transforming them, and applying techniques of analysis. But also I will try to bracket what the data say by applying different methods and techniques where useful, especially for the crucial, theoretical inverse relationship between democide and democracy. With this background I can now turn to the actual data on democide. Trying to see this century's democide in the social field as a whole without getting distracted by one aspect or another, what does democide look like overall. How much has occurred and of what different kinds? Do these kinds of democide co-occur or are they independent? Does democide appear in distinguishable patterns along definable dimensions? Answers to these questions are critical in the search for underlying causes and conditions, and in the search for international policies to end democide. For example, if genocide (understood as the murder of people because of their social group membership) occurs independently of massacres, then each must be due to different first order causes (although at a higher order, they may share common causes and conditions).^4 Moreover, understanding the pattern of occurrence of democide enables us to look for the different causes and conditions that underlie each pattern and to identify which kind of democide may also occur, given the killing already taking place. The first question is how to classify different types (or components) of democide? Three criteria are important. One is that the types are conceptually and empirically meaningful. The second is that they can be identified among the flow of events and especially in the fog of war and violence. And the third is that there are data that can be so defined. With the diverse democide estimates given in previous chapters and reported elsewhere,^5 the types consistent with these requirements are listed in Table 16.1.^6 In appendix Table 16A.1 I present all the summary data on democide and its types for 218 state, quasi-state, and group regimes. All these summary statistics are only for those 218 regimes that have committed some sort of democide in this century and for which I could find estimates, no matter how small. At the bottom of the table is a classification of their averages and sums. Also these statistics are further subdivided for type of political system. These statistics are essential for this book and will be discussed in detail in Chapter 17 on democracy. How many regimes did not commit democide; what is the frequency of democide, taking all regimes into account? These are difficult questions, simply because it requires that all regimes existing during this century be identified. Now, as used here a regime is a government that is identified by certain political characteristics that exist for a specifiable period. These characteristics define the nature and distribution of a regime's coercive and authoritative power and the manner in which this power is exercised and power-holders changed. This goes beyond procedurally based political alternatives within some kind of regime, as for a presidential democratic system with a legislature based on a single-member district voting, or a parliamentary, proportional representation, democratic system. Those types and changes in regime of greatest interest here are such as in the change of regime from the rule of the Czar over Russia, to the Kerensky government, and then within the year to the Bolsheviks--three regimes. The change from the Kaiser monarchy to the Weimar Republic to Hitler's rule also gives us three different regimes. Some changes are not so obvious and people can differ remarkably on when a change has occurred.^7 Mainly but not completely relying on Ted Robert Gurr's (1990) political characterization of regimes (polities) from 1800 to 1986, I count 432 distinct state regimes during 1900-1987.^8 From Table 16A.1, 141 of these, or close to a third have committed some form of democide. The descriptive statistics on democide for all 432 state regimes are given in Table 16.2.^9 The distribution of total democide for different magnitudes is plotted in Figure 16.1. Note that these data here and throughout the analyses in the rest of the book are for the most probable mid-democide figures in a low to high range. The distribution of democide in Figure 16.1 appears unnatural--as few regimes (four) murdered from 1 to 999 people as murdered over 10,000,000; similarly, the thirty-four regimes killing in the thousands is near the thirty-six eliminating hundreds of thousands. How can this be? The immediate answer is that murder in the hundreds of thousands or millions is so horrendous that it can not long be hidden or overlooked. But secrecy, control over the media, or lack of international interest in a regime or its democide (especially earlier in the century), may hide the murder of a thousand or so people over a period of years. Figure 16.2 presents two hypothetical Poisson models of what the real distribution might be like, were such unknown democide revealed. For either model, democides of relatively small numbers is missed. For the first model on the left of the figure, were this the correct one, the total democide that it would add to our mid-total of near 170,000,000 killed for state regimes would be between 300,000 and 400,000 dead. The second model would add even less, between 200,000 to 300,000. However, these Poisson models assume that each regime has an equal likelihood of committing democide, an assumption in contradiction to the very hypotheses governing this data collection. By theory democratic regimes should commit the least democide by far, totalitarian regimes the most. If this is the case the true distribution of democide might not be too different from Figure 16.1, with perhaps a dozen or more cases for the two magnitudes nearest zero. Then the amount of democide missed because of secrecy or lack of interest in a regime by the media or human rights groups might be less than 100,000 dead. All this assumes, of course, that the democide mid-values of Table 16A.1 are more or less correct. We could extend the logic to the distribution of the low or high in the range of democide magnitudes (from near 72,000,000 to some 341,000,000 killed for states), but this would not change the conclusion. From my study of these data and their underlying history and assuming that the probability of a regime committing democide is strongly related to its power, I believe that Figure 16.1 is nearer the distribution of democide among states than is either model in Figure 16.2. Let us now look at how this democide is empirically patterned across regimes. By a pattern is meant the intercorrelation of certain types of democide such that when a regime has killed so many people in one kind of democide there is a high probability that it also will or will not (depending on whether the intercorrelations are positive or negative) have committed other kinds of democide. Ideally, this intercorrelation--pattern--should be largely unaffected by whatever other democide has or has not been committed. Technically, each pattern should be so defined that the influence of other patterns is partialled out. I must make clear that the various democide types are totaled over the life of a regime. For regimes surviving for only a couple of years, the different types of democide are probably simultaneous. For very long lived regimes, such as the Soviet Union or United States, different types of democide and even the different occurrences of democide for a particular type, may have been committed in years separated by decades or even half a century or more. A high correlation, then, between two democide types, such as terror and genocide, should be interpreted to mean that a regime characteristically committed both types of democide or that both are characteristic behavior of the regime, not that both types co-occurred or closely followed each other. A pattern of interrelated democide types then means that these are interrelated behavioral characteristics of regimes. To now look at these interrelations, Table 16.3 gives the product moment correlations among the fourteen types of democide defined in Table 16.1. These are for all 432 state regimes. Correlations greater than .50 are shown in brackets, which is to identify those relationships involving 25 percent or more of the variance between types. These correlations themselves do not define the patterns in characteristic democide, for any one correlation may be an effect of any combination of other types of democide. The problem is to remove any such third and fourth variable influences. Note that the correlations are generally very high, with many over .90. This is mainly due to the 214 regimes that have not committed any democide (at least any that I have estimates on). Much of this covariance among the democide types is thus due to the positive correlations among the many zeros. Besides this, many of the correlations hang on very high democide figures (that is, outliers). For example, the domestic democide for the USSR is almost 55,000,000 killed, while that for communist China is slightly over 35,000,000, and overall democide for Nazi Germany is near 21,000,000. These figures are so extreme--the USSR alone is nearly 45 standard deviations from the democide average--that basing the patterns on them would virtually make the Soviet Union, and to a lesser extent communist China and Nazi Germany, determinants of whatever patterns emerge. To avoid this, to make the patterns more general to the other demociders, all fourteen types of democide were transformed to base 10 logarithms.^10 This pulls in the extreme high cases and reduces the intercorrelations among the democide types. When a democide measure is log transformed, the name is suffixed with "L." Using component analysis I have identified the patterns of democide with and without these transformations, and I have also done so just for the 218 regimes with democide and also only for the 141 state regimes among them. In all analyses the results are largely the same as those shown in Table 16.4 for the state regimes, democide measures log transformed. Since the state regimes will be the subject of subsequent analysis and tests, I will focus on these patterns. Table 16.4 presents the statistically independent (orthogonal) democide patterns (factors, dimensions) for state regimes.^11 As shown, there are five major patterns. The substantive nature of these patterns is identified by the coefficients (loadings) in the table, which give the correlation between the democide types and the pattern. Squaring these correlations then defines the amount of variation in the democide related to the pattern. I have outlined in the table each of these correlations for which there is 25 percent or more covariation between democide type and pattern. For example, total democide (TotDemocL) has a correlation of .76 with the first pattern (Factor 1), which means that it has 58 percent of its variation across the state regimes shared with this pattern. The final estimates in the communality part of the table (last columns) give the proportion of total variation in each democide related to the five patterns. Thus, the .92 shown for TotDemocL means that 92 percent of its variation is captured by these five patterns. With this understanding of the table, then, the five patterns it identifies are labeled and their members listed in Table 16.5. I have selected one indicator for each pattern, as shown in the table, and will use these indicators as our fundamental measures of democide. They will be the basis of all subsequent analysis on democide. These patterns should be looked at as fundamental causal foci. That is, each empirical pattern reflects underlying first order causes and conditions that differ from those related to other patterns. This is not to deny that there is an overall explanation for democide in general, but that within this general explanation there are particular patterns of democide explained by more specific causes and conditions. With all this in mind, we can now test our theory that the fundamental explanation of democide is in terms of democracy versus totalitarianism, that is Power. * From the pre-publisher edited manuscript of Chapter 16 in R.J. Rummel, Statistics of Democide, 1997. For full reference to Statistics of Democide, the list of its contents, figures, and tables, and the text of its preface, click book. 1. The social field theory that is the framework for this analysis assumes a Euclidean space, linear in the mathematical functions within this space, although the terms within the functions may have non-linear relationships. The best method for delineating these functions and the dimensions of this space is component (factor) analysis. See "Understanding Factor Analysis." 2. Thurstone (1947, pp. 140-46). The methodological difference between component and factor analysis is in analyzing all the variance among a sample of variables or just the variance they presumably have in common. Factor analysts are typically concerned only with common variance, arguing that they are seeking or testing for some underlying common factors. Statistically, however, the difference is often obscured, because factor analysts will estimate the common variance by the correlation of a variable with itself (= 1.00), and thus actually be doing component analysis. For these distinctions and a full discussion of relevant aspects of component and factor analysis, see Rummel (1970). For a summary, see "Understanding Factor Analysis" 3. The most extensive development and application has been to domestic and foreign conflict. See particularly Rummel (Understanding Conflict and War, Vol. 2-4). I have used Catastrophe Theory to mathematically model the conflict helix ("A Catastrophe Theory Model Of The Conflict Helix, With Tests"), and have presented the related psychological, interpersonal, social, and international principles in a non-technical introduction (Conflict Helix: Principles and Practices....). 4. First order causes are those directly related to the effect. Second order causes are those of which the first order are the effects. For example, the first order cause of a regime's massacre of a large minority group may its rebellion, but the second order cause may by the totalization of the regime's power and the minority's challenge to it. 5. Rummel (1990, 1991, 1992). 6. I present and discuss the nature and an extended definition of democide in Rummel (1994, Chapter 2). Politicide is an important conceptual type of democide (murder by a regime for political reasons), but I could not empirically discriminate it from the types shown in the table. 7. For example, Ted Robert Gurr's (1990) Polity II data, which classifies different regimes and their political characteristics and change for the years 1800-1986, codes different "polities" (regimes, in my terms) as existing in Russia for the years -1905, 1905-17, 1917-22, 1922-53, 1953-. In total, five regimes. I classify different regimes as existing during the years -1917, 1917, 1917-, or three regimes. For the PRC, he classifies two different communist regimes as existing for the years 1949-77, 1978- , whereas I would define only one communist regime as existing over the years since 1949. Even for democracies, we differ, such as for India, where he defines one regime since 1950 and I define three, 1950-75, 1975-77, 1977-, the middle regime being the period of Prime Minister Gandhi's declaration of a national emergency and the imposition of censorship, arrest of opposition leaders, and banning of many political groups. Nonetheless, overall there is sufficient agreement with Gurr's classification that I will use his count of regimes. 8. I also consulted the lists of regimes in Calvert (1970) and Russett (1993). 9. These were calculated after adding to the 218 regimes in table 16A.1 (all of which have some kind of democide) the 214 hypothetical regimes with zeros for all democide types. The statistics are thus for all state regimes existing for any period during 1900-1987. 10. Because of zeros, 1 was added to each democide variable prior to the log transformation. This does not change the correlations, nor the subsequent multivariate results. 11. I did oblique varimax rotation, but the results were much the same. The highest correlation among the oblique factors was .49. I also separately rotated three to six factors and the five shown give the most meaningful and most parsimonious result. The standard procedures for determining the number of factors would have defined two to four factors. I decided upon five because the fifth defined the very important genocide pattern. For citations see the Statistics of Democide REFERENCES You are the Go to top of document
{"url":"http://hawaii.edu/powerkills/SOD.CHAP16.HTM","timestamp":"2014-04-20T09:55:44Z","content_type":null,"content_length":"32429","record_id":"<urn:uuid:5e61fb2d-e33d-436f-bb44-bcc910aed28e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: October 2000 [00071] [Date Index] [Thread Index] [Author Index] Re: Usage of SymbolLabel option in MultipleListPlot • To: mathgroup at smc.vnet.net • Subject: [mg25508] Re: [mg25480] Usage of SymbolLabel option in MultipleListPlot • From: Tomas Garza <tgarza01 at prodigy.net.mx> • Date: Thu, 5 Oct 2000 23:50:26 -0400 (EDT) • Sender: owner-wri-mathgroup at wolfram.com It seems to me that you should have written Table[klist[[i]],{i,1,10}] in the first line of your code. But, anyway, if it runs as it is, just {"m=0", "m=1", "m=2", "m=3", "m=4", "m=5", "m=6", "m=7", "m=8", "m=9"} in a double bracket, thus SymbolLabel ->{{"m=0", "m=1", "m=2", "m=3", "m=4", "m=5", "m=6", "m=7", "m=8", "m=9"}} This should correct the problem. I would also suggest some embellishments, like for example using Frame->{True, True, False, False} instead of just Frame->True. You might also use better fonts for your labels, etc. Tomas Garza Mexico City Pedro Serrao <pserrao at dem.ist.utl.pt> wrote: > It's my intention to plot 10 points in x-y coordinates specifying a > different label for each point. For instance Point 1 should be labeled > m=0, Point 2 should be labeled m=1 and so on. > However when I evaluate the following instruction all points are > with the same string m=0. > MultipleListPlot[Table[klist[i], {i, 10}], PlotRange -> All, > SymbolShape -> {PlotSymbol[Box], Label}, > SymbolLabel -> {"m=0", "m=1", "m=2", "m=3", "m=4", > "m=5", "m=6", "m=7", "m=8", "m=9"}, Frame -> True, FrameLabel -> > {"Re(k)", "Im(k)"}]; > What is the proper way to specify differenf labels for each point?
{"url":"http://forums.wolfram.com/mathgroup/archive/2000/Oct/msg00071.html","timestamp":"2014-04-20T11:02:28Z","content_type":null,"content_length":"35910","record_id":"<urn:uuid:74fd6658-0262-48f4-b574-2989130555a1>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
A Balanced Dime Is Tossed Three Times. The Possible ... | Chegg.com A balanced dime is tossed three times. The possible outcomes can be represented as follows. Here, for example, HHT means that the first two tosses come up heads and the third tails. Find the probability that a. exactly two of the three tosses comes up heads b. the last two tosses comes up tails c. all three tosses come up the same d. the second toss comes up heads Statistics and Probability
{"url":"http://www.chegg.com/homework-help/questions-and-answers/balanced-dime-tossed-three-times-possible-outcomes-represented-follows-hhh-hth-thh-tth-hht-q1675183","timestamp":"2014-04-16T14:41:35Z","content_type":null,"content_length":"21180","record_id":"<urn:uuid:7869e609-e752-47d3-b30a-e5fbf1258726>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Simplify the following: hi Ebenezerson I think you'll need to multiply these brackets out like this Then you can start to simplify. Can you fill in the dots ? Alternative way to do this: difference of two squares: Have you met this before ? If so, you could put p = 2m + 2k and q = m - 2k You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=277749","timestamp":"2014-04-17T00:59:49Z","content_type":null,"content_length":"41177","record_id":"<urn:uuid:89bbfbcf-2f9d-47a0-9c28-d92b4747657b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Question about the Euler tensor for timelike and spacelike metric John Baez says... If you work out a manifestly coordinate-independent formula like T^uv=(u+p)U^uU^v + g_uv p using some signature, and then you whimsically decide to change your conventions regarding the signature, the formula will still be true without any changes. Yes, but something that's interesting is that when we deal with Clifford algebras, the algebra for (+---) spacetime is not the same as the algebra for (-+++) spacetime. I don't think that that difference would allow us to say that we are *really* one signature instead of another, though. Daryl McCullough Ithaca, NY Relevant Pages • Re: Dev-C++ compiling problem ... Sure, Chuck's signature is too long, and it happens to ... netiquette conventions, even though his signature block violates ... No, Chuck, I'm not, and you ought to know me better than that. ... conventions and your own special pleading that netiquette conventions do ... • Re: Question about the Euler tensor for timelike and spacelike metric ... stevendaryl3016@xxxxxxxxx (Daryl McCullough) wrote: ... using some signature, and then you whimsically decide to change your ... conventions regarding the signature, the formula will still be true ... Clifford algebras Cland Cl, but thought of as just a choice ...
{"url":"http://sci.tech-archive.net/Archive/sci.physics.research/2006-05/msg00245.html","timestamp":"2014-04-18T00:14:21Z","content_type":null,"content_length":"9373","record_id":"<urn:uuid:e3f4e79d-201b-4b78-9b9a-d19b817c8cb2>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Plane Trig Review of Plane Trig. I will briefly review some of the relations used in plane trigonometry. If you are not familiar with these relations you should consult an introductory text on plane trigonometry. The symbol `` Consider the right triangle shown in the figure. The angle ACB is Because the triangle in the figure is a right triangle, the edges AC, BC, and AB obey Pythagoras' theorem. For any angle The inverse trigonometric functions are The notation Angles are given in either units of radians or units of degrees. In the following relation, To convert from one system of units to the other, solve the above equation for the unknown quantity. For example, if we know that Note the use of the symbol `` It can be shown that the trigonometric functions obey the following relations: Of particular interest, note that The angle The following table has been taken from Tables Of Integrals And Other Mathematical Data, 4th edition, by H. B. Dwight, Macmillan Pub. Co., 1961. Usage Note: My work is copyrighted. You may use my work but you may not include my work, or parts of it, in any for-profit project without my consent.
{"url":"http://www.rwgrayprojects.com/rbfnotes/trig/ptrig/trig.html","timestamp":"2014-04-16T04:49:02Z","content_type":null,"content_length":"6905","record_id":"<urn:uuid:98381121-8526-444f-bb7c-0f6e7e7773b2>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
CSE 634 Data Mining Concepts and Techniques Association Rule CSE 634 Data Mining Concepts and Techniques Association Rule Mining Barbara Mucha Tania Irani Irem Incekoy Mikhail Bautin Course Instructor: Prof. Anita Wasilewska State University of New York, Stony Brook Group 6 Data Mining: Concepts & Techniques by Jiawei Han and Micheline Kamber Presentation Slides of Prateek Duble Presentation Slides of the Course Book. Mining Topic-Specific Concepts and Definitions on the Web Effective Personalization Based on Association Rule Discovery from Web Usage Data Basic Concepts of Association Rule Association & Apriori Algorithm Paper: Mining Topic-Specific Concepts and Definitions on the Web Paper: Effective Personalization Based on Association Rule Discovery from Web Usage Data Barbara Mucha What is association rule mining? Methods for association rule mining Extensions of association rule Barbara Mucha What Is Association Rule Mining? Frequent patterns: patterns (set of items, sequence, etc.) that occur frequently in a Frequent pattern mining: finding regularities in What products were often purchased together? Beer and diapers?! What are the subsequent purchases after buying a Can we automatically profile customers? Barbara Mucha Basic Concepts of Association Rule Given: (1) database of transactions, (2) each transaction is a list of items (purchased by a customer in a visit) Find: all rules that correlate the presence of one set of items with that of another set of items E.g., 98% of people who purchase tires and auto accessories also get automotive services done * Maintenance Agreement (What the store should do to boost Maintenance Agreement sales) Home Electronics * (What other products should the store stocks up?) Attached mailing in direct marketing Barbara Mucha Association Rule Definitions Set of items: I={I1,I2,…,Im} Transactions: D = {t1, t2,.., tn} be a set of transactions, where a transaction,t, is a set of Itemset: {Ii1,Ii2, …, Iik} I Support of an itemset: Percentage of transactions which contain that itemset. Large (Frequent) itemset: Itemset whose number of occurrences is above a threshold. Barbara Mucha Rule Measures: Support & An association rule is of the form : X Y where X, Y are subsets of I, and X INTERSECT Y = EMPTY Each rule has two measures of value, support, and confidence. Support indicates the frequencies of the occurring patterns, and confidence denotes the strength of implication in the rule. The support of the rule X Y is support (X UNION Y) c is the CONFIDENCE of rule X Y if c% of transactions that contain X also contain Y, which can be written as the radio: support(X UNION Y)/support(X) Barbara Mucha Support & Confidence : An Let minimum support 50%, and minimum confidence 50%, then we have, A C (50%, 66.6%) C A (50%, 100%) TransactionID ItemsBought 2000 A,B,C 1000 A,C 4000 A,D 5000 B,E,F Barbara Mucha Types of Association Rule Mining Boolean vs. quantitative associations (Based on the types of values handled) buys(x, “computer”) buys(x, “financial software”) [.2%, 60%] age(x, “30..39”) ^ income(x, “42..48K”) buys(x, “PC”) [1%, 75%] Single dimension vs. multiple dimensional associations buys(x, “computer”) buys(x, “financial software”) [.2%, 60%] age(x, “30..39”) ^ income(x, “42..48K”) buys(x, “PC”) [1%, 75%] Barbara Mucha Types of Association Rule Mining Single level vs. multiple-level analysis What brands of beers are associated with what brands of diapers? Various extensions Correlation, causality analysis Association does not necessarily imply correlation or causality Constraints enforced E.g., small sales (sum < 100) trigger big buys (sum > 1,000)? Barbara Mucha Association Discovery Given a user specified minimum support (called MINSUP) and minimum confidence (called MINCONF), an important PROBLEM is to find all high confidence, large itemsets (frequent sets, sets with high support). (where support and confidence are larger than minsup and minconf). This problem can be decomposed into two subproblems: 1. Find all large itemsets: with support > minsup (frequent 2. For a large itemset, X and B X (or Y X) , find those rules, X\{B} => B ( X-Y Y) for which confidence > minconf. Barbara Mucha Itemset: a set of items E.g., acm={a, c, m} Transaction database TDB Support of itemsets TID Items bought 100 f, a, c, d, g, I, m, p Given min_sup=3, acm is a frequent pattern 200 a, b, c, f, l,m, o 300 b, f, h, j, o Frequent pattern 400 b, c, k, s, p mining: find all frequent patterns in a 500 a, f, c, e, l, p, m, n Barbara Mucha Mining Association Rules—An Transaction ID Items Bought Min. support 50% 2000 A,B,C Min. confidence 50% 1000 A,C 4000 A,D Frequent Itemset Support {A} 75% 5000 B,E,F {B} 50% {C} 50% For rule A C: {A,C} 50% support = support({A &C}) = 50% confidence = support({A &C})/support({A}) = 66.6% The Apriori principle: Any subset of a frequent itemset must be frequent Rules from frequent sets X = {mustard, sausage, beer}; frequency = Y = {mustard, sausage, beer, chips}; frequency = 0.2 If the customer buys mustard, sausage, and beer, then the probability that he/she buys chips is 0.5 Barbara Mucha Sequential patterns find inter-transaction patterns such that the presence of a set of items is followed by another item in the time-stamp ordered transaction set. Periodic patterns It can be envisioned as a tool for forecasting and prediction of the future behavior of time-series data. Structural Patterns Structural patterns describe how classes and objects can be combined to form larger structures. Barbara Mucha Application Difficulties Wal-Mart knows that customers who buy Barbie dolls have a 60% likelihood of buying one of three types of candy bars. What does Wal-Mart do with information like that? 'I don't have a clue,' says Wal-Mart's chief of merchandising, Lee Scott Diapers and beer urban legend Barbara Mucha Thank You! Barbara Mucha CSE 634 Data Mining Concepts and Techniques Association & Apriori Algorithm Tania Irani Course Instructor: Prof. Anita Wasilewska State University of New York, Stony Brook Data Mining: Concepts & Techniques by Jiawei Han and Micheline Kamber Presentation Slides of Prof. Anita Wasilewska The Apriori Algorithm (Mining single-dimensional boolean association rules) Frequent-Pattern Growth (FP-Growth) Method The Apriori Algorithm: Key Concepts K-itemsets: An itemset having k items in it. Support or Frequency: Number of transactions that contain a particular itemset. Frequent Itemsets: An itemset that satisfies minimum support. (denoted by Lk for frequent k-itemset). Apriori Property: All non-empty subsets of a frequent itemset must be frequent. Join Operation: Ck, the set of candidate k-itemsets is generated by joining Lk-1 with itself. (L1: frequent 1-itemset, Lk: frequent k-itemset) Prune Operation: Lk, the set of frequent k-itemsets is extracted from Ck by pruning it – getting rid of all the non-frequent k-itemsets in Ck Iterative level-wise approach: k-itemsets used to explore (k+1)- The Apriori Algorithm finds frequent k-itemsets. How is the Apriori Property used in the Mining single-dimensional Boolean association rules is a 2 step process: Using the Apriori Property find the frequent itemsets: Each iteration will generate Ck (candidate k-itemsets from Ck-1) and Lk (frequent k-itemsets) Use the frequent k-itemsets to generate association Finding frequent itemsets using the Apriori Algorithm: Example TID List of Items Consider a database D, consisting T100 I1, I2, I5 of 9 transactions. Each transaction is represented T100 I2, I4 by an itemset. T100 I2, I3 Suppose min. support required is 2 (2 out of 9 = 2/9 =22 % ) T100 I1, I2, I4 Say min. confidence required is T100 I1, I3 We have to first find out the T100 I2, I3 frequent itemset using Apriori T100 I1, I3 Then, Association rules will be T100 I1, I2 ,I3, I5 generated using min. support & T100 I1, I2, I3 min. confidence. Step 1: Generating candidate and frequent 1- itemsets with min. support = 2 Compare candidate Scan D for support count with count of each Itemset Sup.Count Itemset Sup.Count minimum support candidate count {I1} 6 {I1} 6 {I2} 7 {I2} 7 {I3} 6 {I3} 6 {I4} 2 {I4} 2 {I5} 2 {I5} 2 C1 L1 In the first iteration of the algorithm, each item is a member of the set of candidates Ck along with its support count. The set of frequent 1-itemsets L1, consists of the candidate 1- itemsets satisfying minimum support. Step 2: Generating candidate and frequent 2- itemsets with min. support = 2 Generate C2 Scan D for Compare Itemset Itemset Sup. Itemset Sup candidates count of candidate from L1 x L1 {I1, I2} Count support Count candidate {I1, I2} 4 count with {I1, I2} 4 {I1, I3} minimum {I1, I4} {I1, I3} 4 support {I1, I3} 4 {I1, I5} {I1, I4} 1 {I1, I5} 2 {I2, I3} {I1, I5} 2 {I2, I3} 4 {I2, I4} {I2, I4} 2 {I2, I3} 4 {I2, I5} {I2, I5} 2 {I2, I4} 2 {I3, I4} {I2, I5} 2 L2 {I3, I5} {I3, I4} 0 {I4, I5} Note: We haven’t used {I3, I5} 1 Apriori Property yet! C2 {I4, I5} 0 Step 3: Generating candidate and frequent 3- itemsets with min. support = 2 Generate Scan D for candidate C3 count of support candidates Itemset each Itemset Sup. Itemset Sup count with from L2 {I1, I2, I3} candidate Count min support Count count {I1, I2, I3} 2 {I1, I2, I5} {I1, I2, I3} 2 {I1, I3, I5} {I1, I2, I5} 2 {I1, I2, I5} 2 {I2, I3, I4} C3 L3 {I2, I3, I5} {I2, I4, I5} Contains non-frequent C3 (2-itemset) subsets The generation of the set of candidate 3-itemsets C3, involves use of the Apriori Property. When Join step is complete, the Prune step will be used to reduce the size of C3. Prune step helps to avoid heavy computation due to large Ck. Step 4: Generating frequent 4-itemset L3 Join L3 C4 = {{I1, I2, I3, I5}} This itemset is pruned since its subset {{I2, I3, I5}} is not Thus, C4 = φ, and the algorithm terminates, having found all of the frequent items. This completes our Apriori Algorithm. What’s Next ? These frequent itemsets will be used to generate strong association rules (where strong association rules satisfy both minimum support & minimum confidence). Step 5: Generating Association Rules from frequent k-itemsets For each frequent itemset l, generate all nonempty subsets of l For every nonempty subset s of l, output the rule “s (l - s)” if support_count(l) / support_count(s) ≥ min_conf where min_conf is minimum confidence threshold. 70% in our case. Back To Example: Lets take l = {I1,I2,I5} The nonempty subsets of Lets take l are {I1,I2}, {I1,I5}, {I2,I5}, {I1}, {I2}, {I5} Step 5: Generating Association Rules from frequent k-itemsets [Cont.] The resulting association rules are: R1: I1 ^ I2 I5 Confidence = sc{I1,I2,I5} / sc{I1,I2} = 2/4 = 50% R1 is Rejected. R2: I1 ^ I5 I2 Confidence = sc{I1,I2,I5} / sc{I1,I5} = 2/2 = 100% R2 is Selected. R3: I2 ^ I5 I1 Confidence = sc{I1,I2,I5} / sc{I2,I5} = 2/2 = 100% R3 is Selected. Step 5: Generating Association Rules from Frequent Itemsets [Cont.] R4: I1 I2 ^ I5 Confidence = sc{I1,I2,I5} / sc{I1} = 2/6 = 33% R4 is Rejected. R5: I2 I1 ^ I5 Confidence = sc{I1,I2,I5} / {I2} = 2/7 = 29% R5 is Rejected. R6: I5 I1 ^ I2 Confidence = sc{I1,I2,I5} / {I5} = 2/2 = 100% R6 is Selected. We have found three strong association rules. The Apriori Algorithm (Mining single dimensional boolean association rules) Frequent-Pattern Growth (FP-Growth) Method Mining Frequent Patterns Without Candidate Compress a large database into a compact, Frequent- Pattern tree (FP-tree) structure Highly condensed, but complete for frequent pattern mining Avoid costly database scans Develop an efficient, FP-tree-based frequent pattern mining method A divide-and-conquer methodology: Compress DB into FP-tree, retain itemset associations Divide the new DB into a set of conditional DBs – each associated with one frequent item Mine each such database seperately Avoid candidate generation FP-Growth Method : An Example TID List of Items Consider the previous example T100 I1, I2, I5 of a database D, consisting of 9 transactions. T100 I2, I4 Suppose min. support count T100 I2, I3 required is 2 (i.e. min_sup = 2/9 = 22 % ) T100 I1, I2, I4 The first scan of the database is same as Apriori, which T100 I1, I3 derives the set of 1-itemsets & T100 I2, I3 their support counts. The set of frequent items is T100 I1, I3 sorted in the order of T100 I1, I2 ,I3, I5 descending support count. The resulting set is denoted as T100 I1, I2, I3 L = {I2:7, I1:6, I3:6, I4:2, I5:2} FP-Growth Method: Construction of FP-Tree First, create the root of the tree, labeled with ―null‖. Scan the database D a second time (First time we scanned it to create 1-itemset and then L), this will generate the complete tree. The items in each transaction are processed in L order (i.e. sorted A branch is created for each transaction with items having their support count separated by colon. Whenever the same node is encountered in another transaction, we just increment the support count of the common node or Prefix. To facilitate tree traversal, an item header table is built so that each item points to its occurrences in the tree via a chain of node-links. Now, The problem of mining frequent patterns in database is transformed to that of mining the FP-Tree. FP-Growth Method: Construction of FP-Tree Item Sup Node- I2:7 Id Count link I1:2 I2 7 I1 6 I3:2 I4:1 I3 6 I4 2 I3:2 I5 2 I3:2 I4:1 An FP-Tree that registers compressed, frequent pattern Mining the FP-Tree by Creating Conditional (sub) pattern bases 1. Start from each frequent length-1 pattern (as an initial suffix pattern). 2. Construct its conditional pattern base which consists of the set of prefix paths in the FP-Tree co-occurring with suffix pattern. 3. Then, construct its conditional FP-Tree & perform mining on this tree. 4. The pattern growth is achieved by concatenation of the suffix pattern with the frequent patterns generated from a conditional FP-Tree. 5. The union of all frequent patterns (generated by step 4) gives the required frequent itemset. FP-Tree Example Continued Item Conditional pattern base Conditional Frequent pattern FP-Tree generated I5 {(I2 I1: 1),(I2 I1 I3: 1)} <I2:2 , I1:2> I2 I5:2, I1 I5:2, I2 I1 I5: 2 I4 {(I2 I1: 1),(I2: 1)} <I2: 2> I2 I4: 2 I3 {(I2 I1: 2),(I2: 2), (I1: 2)} <I2: 4, I1: 2>,<I1:2> I2 I3:4, I1 I3: 2 , I2 I1 I3: 2 I1 {(I2: 4)} <I2: 4> I2 I1: 4 Mining the FP-Tree by creating conditional (sub) pattern bases Now, following the above mentioned steps: Lets start from I5. I5 is involved in 2 branches namely {I2 I1 I5: 1} and {I2 I1 I3 I5: 1}. Therefore considering I5 as suffix, its 2 corresponding prefix paths would be {I2 I1: 1} and {I2 I1 I3: 1}, which forms its conditional pattern base. FP-Tree Example Continued Out of these, only I1 & I2 is selected in the conditional FP-Tree because I3 does not satisfy the minimum support count. For I1, support count in conditional pattern base = 1 + 1 = 2 For I2, support count in conditional pattern base = 1 + 1 = 2 For I3, support count in conditional pattern base = 1 Thus support count for I3 is less than required min_sup which is 2 Now, we have a conditional FP-Tree with us. All frequent pattern corresponding to suffix I5 are generated by considering all possible combinations of I5 and conditional FP-Tree. The same procedure is applied to suffixes I4, I3 and I1. Note: I2 is not taken into consideration for suffix because it doesn’t have any prefix at all. Why Frequent Pattern Growth Fast ? Performance study shows FP-growth is an order of magnitude faster than No candidate generation, no candidate test Use compact data structure Eliminate repeated database scans Basic operation is counting and FP-tree building The Apriori Algorithm (Mining single dimensional boolean association rules) Frequent-Pattern Growth (FP-Growth) Association rules are generated from frequent itemsets. Frequent itemsets are mined using Apriori algorithm or Frequent- Pattern Growth method. Apriori property states that all the subsets of frequent itemsets must also be frequent. Apriori algorithm uses frequent itemsets, join & prune methods and Apriori property to derive strong association rules. Frequent-Pattern Growth method avoids repeated database scanning of Apriori algorithm. FP-Growth method is faster than Apriori algorithm. Thank You! Mining Topic-Specific Concepts and Definitions on the Web Irem Incekoy May 2003, Proceedings of the 12th International conference on World Wide Web, ACM Press Bing Liu, University of Illinois at Chicago, 851 S. Morgan Street Chicago IL 60607-7053 Chee Wee Chin, Hwee Tou Ng, National University of Singapore 3 Science Drive 2 Singapore Agrawal, R. and Srikant, R. ―Fast Algorithm for Mining Association Rules‖, VLDB-94, 1994. Anderson, C. and Horvitz, E. ―Web Montage: A Dynamic Personalized Start Page‖, WWW-02, Brin, S. and Page, L. ―The Anatomy of a Large- Scale Hypertextual Web Search Engine‖, WWW7, 1998. When one wants to learn about a topic, one reads a book or a survey paper. One can read the research papers about the topic. None of these is very practical. Learning from web is convenient, intuitive, and diverse. Purpose of the Paper This paper’s task is ―mining topic-specific knowledge on the Web‖. The goal is to help people learn in-depth knowledge of a topic systematically on the Learning about a New Topic One needs to find definitions and descriptions of the topic. One also needs to know the sub-topics and salient concepts of the topic. Thus, one wants the knowledge as presented in a traditional book. The task of this paper can be summarized as ―compiling a book on the Web‖. Proposed Technique First, identify sub-topics or salient concepts of that specific topic. Then, find and organize the informative pages containing definitions and descriptions of the topic and sub-topics. Why are the current search tecnhiques not sufficient? For definitions and descriptions of the topic: Existing search engines rank web pages based on keyword matching and hyperlink structures. NOT very useful for measuring the informative value of the page. For sub-topics and salient concepts of the topic: A single web page is unlikely to contain information about all the key concepts or sub-topics of the topic. Thus, sub-topics need to be discovered from multiple web pages. Current search engine systems do not perform this task. Related Work Web information extraction wrappers Web query languages User preference approach Question answering in information retrieval • Question answering is a closely-related work to this paper. The objective of a question-answering system is to provide direct answers to questions submitted by the user. In this paper’s task, many of the questions are about definitions of terms. The Algorithm WebLearn (T) 1) Submit T to a search engine, which returns a set of relevant pages 2) The system mines the sub-topics or salient concepts of T using a set S of top ranking pages from the search engine 3) The system then discovers the informative pages containing definitions of the topic and sub-topics (salient concepts) from S 4) The user views the concepts and informative pages. If s/he still wants to know more about sub-topics then for each user-interested sub-topic Ti of T do WebLearn (Ti); Sub-Topic or Salient Concept Sub-topics or salient concepts of a topic are important word phrases, usually emphasized using some HTML tags (e.g., However, this is not sufficient. Data mining techniques are able to help to find the frequent occurring word phrases. Sub-Topic Discovery After obtaining a set of relevant top- ranking pages (using Google), sub-topic discovery consists of the following 5 steps. 1) Filter out the ―noisy‖ documents that rarely contain sub-topics or salient- concepts. The resulting set of documents is the source for sub-topic discovery. Sub-Topic Discovery 2) Identify important phrases in each page (discover phrases emphasized by HTML markup tags). Rules to determine if a markup tag can safely be ignored Contains a salutation title (Mr, Dr, Professor). Contains an URL or an email address. Contains terms related to a publication (conference, proceedings, journal). Contains an image between the markup tags. Too lengthy (the paper uses 15 words as the upper limit) Sub-Topic Discovery Also, in this step, some preprocessing techniques such as stopwords removal and word stemming are applied in order to extract quality text segments. Stopwords removal: Eliminating the words that occur too frequently and have little informational meaning. Word stemming: Finding the root form of a word by removing its suffix. Sub-Topic Discovery 3) Mine frequent occurring phrases: - Each piece of text extracted in step 2 is stored in a dataset called a transaction set. - Then, an association rule miner based on Apriori algorithm is executed to find those frequent itemsets. In this context, an itemset is a set of words that occur together, and an itemset is frequent if it appears in more than two documents. - We only need the first step of the Apriori algorithm and we only need to find frequent itemsets with three words or fewer (this restriction can be relaxed). Sub-Topic Discovery 4) Eliminate itemsets that are unlikely to be sub-topics, and determine the sequence of words in a sub-topic. Heuristic: If an itemset does not appear alone as an important phrase in any page, it is unlikely to be a main sub-topic and it is removed. Sub-Topic Discovery 5) Rank the remaining itemsets. The remaining itemsets are regarded as the sub-topics or salient concepts of the search topic and are ranked based on the number of pages that they occur. Definition Finding This step tries to identify those pages that include definitions of the search topic and its sub-topics discovered in the previous step. Preprocessing steps: Texts that will not be displayed by browsers (e.g., <script>...</ script >,<!—comments-->) are ignored. Word stemming is applied. Stopwords and punctuation are kept as they serve as clues to identify definitions. HTML tags within a paragraph are removed. Definition Finding After that, following patterns are applied to identify definitions: [1] Bing Liu, Chee Wee Chin, Hwee Tou Ng. Mining Topic-Specific Concepts and Definitions on the Web Definition Finding Besides using the above patterns, the paper also relies on HTML structuring and hyperlink 1) If a page contains only one header or one big emphasized text segment at the beginning in the entire document, then the document contains a definition of the concept in the header. 2) Definitions at the second level of the hyperlink structure are also discovered. All the patterns and methods described above are applied to these second level documents. Definition Finding Observation: Sometimes no informative page is found for a particular sub-topic when the pages for the main topic are very general and do not contain detailed information for sub-topics. In such cases, the sub-topic can be submitted to the search engine and sub-subtopics may be found recursively. Dealing with Ambiguity One of the difficult problems in concept mining is the ambiguity of the search terms (e.g., A search engine may not return any page in the right context in its top ranking pages. Partial solution: adding terms that can represent the context (e.g., classification data mining). Disadvantage: returned web pages focus more on the context words since they represent a larger concept. Dealing with Ambiguity To handle this problem: First reduce the ambiguity of a search topic by using context words. Then, 1) Finding salient concepts only in the segment describing the topic or sub-topic. (using HTML structuring tags as cues). 2) Identifying those pages that hierarchically organize knowledge of the parent topic. To identify such pages, we can parse the HTML nested list items (e.g., <li>) structure by building a tree. Dealing with Ambiguity • We confirm whether it is a correct page by finding if the hierarchy contains at least another sub-topic of the parent topic. An example of a well-organized topic hierarchy [1] Bing Liu, Chee Wee Chin, Hwee Tou Ng. Mining Topic-Specific Concepts and Definitions on the Web Dealing with Ambiguity Finding salient concepts enclosed within braces illustrating examples. There are many clustering approaches (e.g., hierarchical, partitioning, k-means, k-medoids), and we add that efficiency is important if the clusters contain many points. The execution of the algorithm can stop when most of the salient concepts found are parallel concepts of the search topic. Mutual Reinforcement This method applies to situations where we have already found the sub-topics of a topic, and we want to find the salient concepts of the sub-topics of the topic, to go down further. Often, when one searches for a sub-topic S1, one also finds important information about another sub-topic S2 due to the ranking algorithm used by the search engine. This method works in two steps: 1) submit each sub-topic individually to the search engine. 2) combine the top-ranking pages from each search into one set, and apply the proposed techniques to the whole set to look for all sub-topics. System Architecture The overall system is composed of five main 1) A search engine: This is a standard web search engine (Google is used in this system). 2) A crawler: It crawls the World Wide Web to download those top-ranking pages returned by the search engine. It stores the pages in ―Web Page Depository‖. 3) A salient concept miner: It uses the sub-topic discovery techniques explained before to search the pages stored in ―Web Page Depository‖, in order to identify and extract those sub-topics and salient System Architecture 4) A definition finder: It uses the technique presented in definition finding section to search through the pages stored in ―Web Page Depository‖ to find those informative pages containing definitions of the topics and the sub-topics. 5) A user interface: It enables the user to interact with the System Architecture [1] Bing Liu, Chee Wee Chin, Hwee Tou Ng. Mining Topic-Specific Concepts and Definitions on the Web Experimental Study The size of the set of documents is limited to the first hundred results returned by Google. Table 1 shows the sub-topics and salient concepts discovered for 28 search topics In each box, the first line gives the search topic. For each topic, only ten top-ranking concepts are listed. For too specific topics, only definition finding is [1] Bing Liu, Chee Wee Chin, Hwee Tou Ng. Mining Topic-Specific Concepts and Definitions on the Web Experimental Study In Table 2, the precision of the definition-finding task is compared with the Google search engine and AskJeeves, the web’s premier question-answering The first 10 pages of results are compared with the first 10 pages returned by Google and AskJeeves. To do a fair comparison, they also look for definitions in the second level of the search results returned by Google and AskJeeves. [1] Bing Liu, Chee Wee Chin, Hwee Tou Ng. Mining Topic-Specific Concepts and Definitions on the Web Table 2 Experimental Study Table 3 presents the results for ambiguity handling by applying the respective methods explained before. Column 1 lists two ambiguous topics of ―data mining‖ and ―time series‖. Column 2 lists the sub-topics identified using the original technique. Column 3 lists gives the sub-topics discovered using the respective parent-topics as context terms. Column 4 uses ambiguity handling techniques. Column 5 applies mutual reinforcement in addition to others. [1] Bing Liu, Chee Wee Chin, Hwee Tou Ng. Mining Topic-Specific Concepts and Definitions on the Web The proposed techniques aim at helping Web users to learn an unfamiliar topic in- depth and systematically. This is an efficient system to discover and organize knowledge on the web, in a way similar to a traditional book, to assist Effective Personalization Based on Association Rule Discovery from Web Usage Data Mikhail Bautin Bamshad Mobasher, Honghua Dai, Tao Luo, Miki Nakagawa DePaul University 243 S. Wabash Ave. Chicago, Illinois 60604, USA (2001) B. Mobasher, H. Dai, T. Luo and M. Nakagawa: "Effective Personalization Based on Association Rule Discovery from Web Usage Data", in Proc. the 3rd ACM Workshop on Web Information and Data Management (WIDM01) (2001). R. Agarwal, C. Aggarwal, and V. Prasad. A tree projection algorithm for generation of frequent itemsets. In Proceedings of the High Performance Data Mining Workshop, Puerto Rico, 1999. R. Agrawal and Ramakrishnan Srikant. Fast algorithms for mining association rules. In Proc. 20th Int. Conference on Very Large Data Bases, VLDB94, 1994. Personalize a web site: Predictactions of the user (pre-fetching etc.) Recommend new items to a customer based on viewed items and knowledge of what other customers are interested in: ―Customers who buy this also buy that...‖ Collaborative filtering Find top k users who have similar tastes or interests (k-nearest-neighbor) Predict actions based on what those users did Too much online computation needed Association rules Scalable: constant time query processing Better precision and coverage than CF Data Preparation Input: web server logs User identification (trivial if using cookies) Session and transaction identification Page view identification (for multi-frame sites) As a result of preparation: Records correspond to transactions Items correspond to page views Order of page views does not matter Pattern Discovery Running Apriori algorithm Records = transactions, items = page views Minimum support and confidence restriction Problem with global minimum support value: important but rare items can be discarded Solution: multiple minimum support values. For itemset {p1, ..., pn} require Recommendation Engine Fixed-size sliding window w: subset of |w| most recent page views Need to find rules with w on the left This is done with depth-first search Sort elements of w lexicographically Only need O(|w|) to find the itemset and O(# of page views) to produce Frequent Itemset Graph Figure 1 from the paper (Mobasher et al.) Active session window w = {B, E} Solid lines – ―lexicographic‖ extension Stippled lines – any extension The search leads to node BE (5) at level 3 Possible extensions: A and C Confidence calculated as For A it is 5/5 = 1, for C it is 4/5 Window size vs minsup For large window size it might be difficult to find frequent enough itemsets But larger window gives better accuracy Solution: the ―all-kth-order‖ method Startwith the largest possible window size Reduce window size until able to generate a No additional computation incurred Evaluation Methodology For each transaction t first n page views are used for generating recommendation and last |t| - n are used for testing ast – subset of first n elements of t – minimum required confidence R(ast, ) – set of recommendations evalt – the last |t| - n pageviews of t Measures of Evaluation The threshold is ranging from 0.1 to 1 Impact of Window Size Figure 2 from the paper (Mobasher et al.) Single vs Multiple Min. Support Figure 3 from the paper (Mobasher et al.) The all-kth-order Model Figure 4 from the paper (Mobasher et al.) Association Rules vs kNN Figure 5 from the paper (Mobasher et al.) Personalization based on association rules is better than k-nearest-neighbor approach: Faster – very little online computation Therefore, better scalability Better precision Better coverage Effective alternative to standard collaborative filtering mechanisms for Thank you!
{"url":"http://www.docstoc.com/docs/21351717/CSE-634-Data-Mining-Concepts-and-Techniques-Association-Rule","timestamp":"2014-04-23T11:35:29Z","content_type":null,"content_length":"97394","record_id":"<urn:uuid:e4ed63be-93e0-4025-9f71-fb450861b24a>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
Griffith, CA Precalculus Tutor Find a Griffith, CA Precalculus Tutor ...I have been tutoring for this website for almost one year and had the pleasure of meeting all types of people. I've tutored subjects as low as third grade math, and as high as trignometry. I love helping students out in math and forming a strong relationship with them to make them feel comfortable by creating a positive environment. 10 Subjects: including precalculus, calculus, geometry, algebra 1 ...My technical expertise is in the fields of optical physics, laser science and chemical physics. I can also tutor college and grad school application writing and preparation. Thanks.I have a Master of Science in Engineering degree in Electrical Engineering from the University of Michigan where I focused in electromagnetics, optics, photonics and electrical signal processing. 22 Subjects: including precalculus, chemistry, physics, calculus So about me: I graduated with honors from UCLA in 2004 with a Bachelors of Science degree in Applied Mathematics. I then went on to graduate school to receive my Master of Science in Pure Mathematics from CSUN (in 2008). I am currently (at the time I write this) a tenured professor of Mathematics... 14 Subjects: including precalculus, calculus, algebra 2, geometry ...My name is Megan, and I am a sophomore at Washington University in St. Louis. I am pursuing a pre-medicine course track, and I will most likely double major in biochemistry and classics, as well as minor in East Asian Studies. 24 Subjects: including precalculus, chemistry, English, biology I've been tutoring since 1993 and I taught high school for one year. I like to have a friendly relationship with my students so its not such a drag for them to show up to sessions and so they stay inspired to learn. I've worked with students with different academic backgrounds and learning abilities and understand the potential problems students may run into while learning new 10 Subjects: including precalculus, chemistry, algebra 2, algebra 1 Related Griffith, CA Tutors Griffith, CA Accounting Tutors Griffith, CA ACT Tutors Griffith, CA Algebra Tutors Griffith, CA Algebra 2 Tutors Griffith, CA Calculus Tutors Griffith, CA Geometry Tutors Griffith, CA Math Tutors Griffith, CA Prealgebra Tutors Griffith, CA Precalculus Tutors Griffith, CA SAT Tutors Griffith, CA SAT Math Tutors Griffith, CA Science Tutors Griffith, CA Statistics Tutors Griffith, CA Trigonometry Tutors Nearby Cities With precalculus Tutor Briggs, CA precalculus Tutors Cimarron, CA precalculus Tutors Glassell, CA precalculus Tutors Glendale Galleria, CA precalculus Tutors La Canada, CA precalculus Tutors Magnolia Park, CA precalculus Tutors Oakwood, CA precalculus Tutors Playa, CA precalculus Tutors Rancho La Tuna Canyon, CA precalculus Tutors Santa Western, CA precalculus Tutors Sherman Village, CA precalculus Tutors Starlight Hills, CA precalculus Tutors Toluca Terrace, CA precalculus Tutors Vermont, CA precalculus Tutors Westwood, LA precalculus Tutors
{"url":"http://www.purplemath.com/Griffith_CA_Precalculus_tutors.php","timestamp":"2014-04-19T09:30:26Z","content_type":null,"content_length":"24589","record_id":"<urn:uuid:4f482485-9aa6-456b-9995-8d62a202df76>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Restriction of a Set April 12th 2009, 03:44 PM #1 Junior Member Sep 2008 Restriction of a Set My professor defined the restriction of a set: R is a relation X. and Y is a subset of X. all ordered pairs (x,x') that are elements of Y X Y such that xRx'. What would be the restriction to the subset of positive real numbers for the relation xRy if x^2 + y^2 = 1? I am kind of unsure exactly what this restriction means. Last edited by Snooks02; April 12th 2009 at 05:22 PM. Restriction of a set Hello Snooks02 My professor defined the restriction of a set: R is a relation X. and Y is a subset of X. all ordered pairs (x,x') that are elements of Y X Y such that xRx'. What would be the restriction to the subset of positive real numbers for the relation xRy if x^2 + y^2 = 1? I am kind of unsure exactly what this restriction means. Suppose X is the set {1, 2, 3, 4} and R is the relation on X: xRy if and only if x > y. Then R = {(2, 1), (3, 1), (4, 1), (3, 2), (4, 2), (4, 3)} Suppose now that we define a subset Y as {1, 2, 3}. Then Y x Y = {(1, 1), (2, 1), (3, 1), (1, 2), (2, 2), (3, 2), (1, 3), (2, 3), (3, 3)} So the restriction of R to Y is those elements (ordered pairs) of R that are also elements of Y x Y; in other words {(2, 1), (3, 1), (3, 2)}. So, in the question you are given, instead of all values of x and y (positive and negative) that satisfy $x^2 + y^2 = 1$ you need only those that are positive. On a Cartesian (x-y) plane, instead of the whole circle centre O, radius 1, we just get the quarter of this circle that lies in the first quadrant. OK? April 13th 2009, 12:11 AM #2
{"url":"http://mathhelpforum.com/discrete-math/83414-restriction-set.html","timestamp":"2014-04-18T09:41:37Z","content_type":null,"content_length":"34629","record_id":"<urn:uuid:025f53c0-36e9-49a9-b090-978b4e846355>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculate velocity of movement Topic: Calculate velocity of movement (Read 4204 times) 0 Members and 1 Guest are viewing this topic. Hi, i have a question. How do i calculate the velocity of the weighted object which is attached to the conveyor belt? Given that i have a Torque versus Speed graph for the motor. Thanks in advance. a quick way is to find the rpm of the motor, then multiply that by the circumference of the drive wheel, it should tell you how fast the belt is moving and therefore the object on it. so if the motor spins at 40rpm and the drive wheel has a circumference of 12cm, then the velocity is 480cm per minute or 8cm per second. An accurate way of finding how fast the motor is turning is to use an encoder wheel on it So, the weighted object does not influence the velocity of the belt? OK then, what if i wanna know how long it takes to reach the top speed? The only time the weighted object should alter the speed of the motors is when the object is too heavy for the motor rating meaning that the motors havent been chosen properly.Calculate the torque you need to pull the object before you buy the motors. The best way to find how long it takes for the motors to accelerate to top speed is to also use encoders in the wheels. If you pre-calibrate a top speed, just then set a counter when the motor starts and wait for the encoder feedback to become the same as the pre-calibrated speed Err... i dont have any encoder. Is it possible to calculate the time anyway? • Supreme Robot • Posts: 1,478 • Helpful? 3 you can get an encoder from the scroll wheel of a mouse, or there are 2 in the ball type mice Problems making the $50 robot circuit board? click here. http://www.societyofrobots.com/robotforum/index.php?topic=3292.msg25198#msg25198 I dont think that you could effectively calculate it to be accurate. Saying that, most electric motors have very fast acceleration to a point where they are almost full off to full on instantly provided that the load they are carrying is within their torque ratio, this is why when using motors in robots, you have to manually control the acceleration to get smooth starting and stopping. • Administrator • Supreme Robot • Posts: 11,632 • Helpful? 169 The weight will affect speed during the initial start up condition until the mass accelerates to your desired maximum velocity. I derived this equation for you to calculate whatever you want for this situation: motor_torque * pulley_radius = object_mass * max_velocity / time_to_accelerate where motor_torque matches max_velocity on your motor datasheet graph I think you mean "motor_torque / pulley_radius" for the formula you given is it? • Administrator • Supreme Robot • Posts: 11,632 • Helpful? 169 I think you mean "motor_torque / pulley_radius" for the formula you given is it? ? uhhhh no . . . motor_torque / pulley_radius = force this is how I derived it: force = mass * acceleration torque = force * distance distance = pulley_radius acceleration = (max_velocity - starting_velocity)/(starting_time - finishing_time) starting_velocity = 0 starting_time = 0 combine terms . . . Hi, i have a question. How do i calculate the velocity of the weighted object which is attached to the conveyor belt? Given that i have a Torque versus Speed graph for the motor. Thanks in advance. So, the weighted object does not influence the velocity of the belt? OK then, what if i wanna know how long it takes to reach the top speed? you can get an encoder from the scroll wheel of a mouse, or there are 2 in the ball type mice I dont think that you could effectively calculate it to be accurate. Saying that, most electric motors have very fast acceleration to a point where they are almost full off to full on instantly provided that the load they are carrying is within their torque ratio, this is why when using motors in robots, you have to manually control the acceleration to get smooth starting and stopping. I think you mean "motor_torque / pulley_radius" for the formula you given is it?
{"url":"http://www.societyofrobots.com/robotforum/index.php?topic=2578.msg17570","timestamp":"2014-04-23T06:47:25Z","content_type":null,"content_length":"65493","record_id":"<urn:uuid:bee8bc37-be75-42db-b179-9323d4047382>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Calibration of spatial light modulators suffering from spatially varying phase response « journal navigation Calibration of spatial light modulators suffering from spatially varying phase response Optics Express, Vol. 21, Issue 13, pp. 16086-16103 (2013) We present a method for converting the desired phase values of a hologram to the correct pixel addressing values of a spatial light modulator (SLM), taking into account detailed spatial variations in the phase response of the SLM. In addition to thickness variations in the liquid crystal layer of the SLM, we also show that these variations in phase response can be caused by a non-uniform electric drive scheme in the SLM or by local heating caused by the incident laser beam. We demonstrate that the use of a global look-up table (LUT), even in combination with a spatially varying scale factor, generally does not yield sufficiently accurate conversion for applications requiring highly controllable output fields, such as holographic optical trapping (HOT). We therefore propose a method where the pixel addressing values are given by a three-dimensional polynomial, with two of the variables being the (x, y)-positions of the pixels, and the third their desired phase values. The coefficients of the polynomial are determined by measuring the phase response in 8×8 sub-sections of the SLM surface; the degree of the polynomial is optimized so that the polynomial expression nearly replicates the measurement in the measurement points, while still showing a good interpolation behavior in between. The polynomial evaluation increases the total computation time for hologram generation by only a few percent. Compared to conventional phase conversion methods, for an SLM with varying phase response, we found that the proposed method increases the control of the trap intensities in HOT, and efficiently prevents the appearance of strong unwanted 0th order diffraction that commonly occurs in SLM systems. © 2013 OSA 1. Introduction Since the 1980s, liquid crystal (LC) based spatial light modulators (SLMs) have been used for holographic beam steering in optical communication applications [ 1. E. Marom and N. Konforti, “Dynamic optical interconnections,” Opt. Lett. 12, 539–541 (1987) [CrossRef] [PubMed] . 3. E. Hällstig, J. Öhgren, L. Allard, L. Sjöqvist, D. Engström, S. Hård, D. Ågren, S. Junique, Q. Wang, and B. Noharet, “Retrocommunication utilizing electroabsorption modulators and non-mechanical beam steering,” Opt.Eng. 44, 045001 (2005) [CrossRef] . ]. Since the late 1990s the technique has also been used in optical trapping systems to obtain multiple, independently controllable traps. This technique is called holographic optical trapping (HOT) 4. M. Reicherter, T. Haist, E. U. Wagemann, and H. J. Tiziani, “Optical particle trapping with computer-generated holograms written on a liquid-crystal display,” Opt. Lett. 24, 608–610 (1999) [CrossRef] . 5. E. R. Dufresne, G. C. Spalding, M. T. Dearing, S. A. Sheets, and D. G. Grier, “Computer-generated holographic optical tweezer arrays,” Rev. Sci. Instrum. 72, 1810–1816 (2001) [CrossRef] . ]. Common for all these applications is the desire to efficiently distribute the incident optical power into a number of spots/traps (in this work referred to as “spots”, except in the optical trapping experiments described in Section 5.4.3). For some of of these applications, and HOT in particular, it is not sufficient just to be able to position the spots at the desired locations but it is also important to obtain aberration free spots with well specified optical power. Similar to static diffractive optical elements, the SLMs used for holographic beam steering are generally only capable of phase modulation, and holograms with good performance are thus achieved only if an iterative optimization algorithm is used. A multitude of algorithms that optimize spot-generating phase-only holograms have been developed, most of which are either direct search algorithms [ 6. M. A. Seldowitz, J. P. Allebach, and D. W. Sweeney, “Synthesis of digital holograms by direct binary search,” Appl. Opt. 26, 2788–2798 (1987) [CrossRef] [PubMed] . 8. G. Milewski, D. Engström, and J. Bengtsson, “Diffractive optical elements designed for highly precise far-field generation in the presence of artifacts typical for pixelated spatial light modulators,” Appl. Opt. 46, 95–105 (2007) [CrossRef] . ] or, more commonly, variations of the Gerchberg-Saxton algorithm [ 14. M. Persson, D. Engström, and M. Goksör, “Real-time generation of fully optimized holograms for optical trapping applications,” Proc. SPIE 8097,80971H (2011) [CrossRef] . ]. To obtain the desired relative optical power distribution among the spots, most of the above-mentioned algorithms include a weighting procedure in their iterative cycle; a few different approaches have been suggested, all of them with the same purpose, to force the power in the individual spots to approach their desired values [ 10. M. W. Farn, “New iterative algorithm for the design of phase-only gratings,” Proc. SPIE 1555, 34–42 (1991) [CrossRef] . 14. M. Persson, D. Engström, and M. Goksör, “Real-time generation of fully optimized holograms for optical trapping applications,” Proc. SPIE 8097,80971H (2011) [CrossRef] . While many such algorithms provide virtually perfect distribution of light to the desired positions, it is ultimately the hologram physically realized by the SLM that determines the performance of the system. In short, an LC-SLM is typically addressed with a two-dimensional matrix of 8-bit integers. Each matrix position represents one SLM pixel, and the integer value corresponds to the desired phase that should be realized. To assure that the realized hologram accurately resembles the desired one, it is crucial to find the correct relation between desired pixel phase and pixel value in the addressing matrix. Commonly, all pixels are assumed to behave identically and thus a global look-up-table (LUT) is used to convert the desired phase, from the hologram optimization algorithm, to the corresponding pixel value for all pixels. Although a global LUT may work adequately for some SLM types and applications, local variations in the phase response of the SLM often degrade the realized hologram. If the pixel response varies over the SLM and this is not accounted for when the SLM is addressed the optical power in the realized spots will differ from the desired values. Also, an unwanted side effect is that a relatively high optical power generally ends up on the optical axis of the system as the 0th order diffraction. As will be described in more detail in Section 3.1, earlier work has shown that it is possible to correct for a spatially varying phase response, provided the relation between phase and pixel addressing value only changes by a space-dependent constant [ 15. X. D. Xun and R. W. Cohn, “Phase calibration of spatially nonuniform spatial light modulators,” Appl. Opt. 43, 6400–6406 (2004) [CrossRef] [PubMed] . 16. J. Oton, P. Ambs, M. S. Millan, and E. Perez-Cabre, “Multipoint phase calibration for improved compensation of inherent wavefront distortion in parallel aligned liquid crystal on silicon displays,” Appl. Opt. 46, 5667–5679 (2007) [CrossRef] [PubMed] . ]. However, a more complex spatially varying phase response of the SLM is frequently occurring and must be compensated for by space dependent LUTs for optimal performance [ 17. D. Engström, M. Persson, and M. Goksör, “Spatial phase calibration used to improve holographic optical trapping,” in Biomedical Optics and 3-D Imaging, OSA Technical Digest (Optical Society of America, 2012), paper DSu2C.3 [CrossRef] . 18. G. Thalhammer, R. W. Bowman, G. D. Love, M. J. Padgett, and M. Ritsch-Marte, “Speeding up liquid crystal SLMs using overdrive with phase change reduction,” Opt. Express 21, 1779–1797 (2013) [CrossRef] [PubMed] . ]. The importance of accounting for a spatially dependent phase response of the SLM is emphasized in Ref. 19. S. Reichelt, “Spatially resolved phase-response calibration of liquid-crystal-based spatial light modulators,” Appl. Opt. 52, 2610–2618 (2013) [CrossRef] [PubMed] . with the particular application of holographic imaging, which in this context can be viewed as an SLM producing a very large number of spots where the exact intensities of the individual spots are of less importance. Compared to our work, the spatially resolved characterization of the phase response of the SLM is done differently as well as the pixel value generation, but also here an improved phase modulation accuracy over the entire SLM is achieved, which, in this case, shows as a much improved visual quality of the holographic images, although no quantitative data are given for this improvement. Further, also Ref. 20. Z. Zhang, H. Yang, B. Robertson, M. Redmond, M. Pivnenko, N. Collings, W. A. Crossland, and D. Chu, “Diffraction based phase compensation method for phase-only liquid crystal on silicon devices in operation,” Appl. Opt. 51, 3837–3846 (2012) [CrossRef] [PubMed] . deals with compensation for spatially varying response in phase-only SLMs using a space dependent LUT, resulting in improved diffraction efficiency and reduced crosstalk (in other orders than the 0th, which is not measured). However, they only demonstrate single point beam steering with their calibrated system, so it is not clear to what extent their approach would improve the intensity uniformity of multispot patterns and suppress the 0th diffraction order intensity – two factors that are crucially important in HOT applications. In this work we demonstrate the detrimental impact of a spatially varying phase response and how such response occurs in LC-based SLMs, in Sections 2 and 3, respectively. Our method, applicable for any type of phase modulating SLM, is explained in detail in Section 4 and verified by numerous experiments in Section 5. Finally, the conclusions are given in Section 6. 2. Impact of a spatially varying phase response on ideal holograms To illustrate how an ideal hologram, i.e., the spatial phase distribution from the hologram optimization, is affected by a spatially varying phase response of the SLM we have simulated the performance of two different holograms, see Fig. 1 . The ideal performance of holograms producing a circle of 14 spots and an array of 24 out of 5×5 spots is shown in Figs. 1(d) and 1(g) , respectively. Note that neither of the holograms ideally produces a spot on the optical axis. The spot patterns are almost perfect with a uniformity of the spot powers of 98.5% and 97.7% for the circle and array, respectively. Furthermore, the (undesired) power on the optical axis, the 0th order power , is very low, 0.2% and 0.1% of the total power for the circle and array, respectively. If the SLM induces a static aberration, i.e., the zero-level for the phase varies over the SLM surface, see Fig. 1(b) the spot shape is strongly affected while the uniformity and are hardly affected at all, see Figs. 1(e) and 1(h) . By compensating for such aberrations it has been shown that it is possible to restore the ideal shape of the spots [ 21. T. Cizmar, M. Mazilu, and K. Dholakia, “In situ wavefront correction and its application to micromanipulation,” Nat. Photonics 4, 388–394 (2010) [CrossRef] . 22. R. W. Bowman, A. J. Wright, and M. J. Padgett, “An SLM-based ShackHartmann wavefront sensor for aberration correction in optical tweezers,” J. Opt. 12, 124004 (2010) [CrossRef] . A second type of possible error induced by the SLM is a spatially varying phase response, see Fig. 1(c) . This type of phase error increases the power in the zeroth order quite drastically, see Figs. 1(f) and 1(i) . The zeroth order is ∼3.5 and ∼5 times stronger than the average spot in the circle and array pattern, respectively. Also, the uniformity among the 24 desired spots of the array drops to ∼72%. Thus, it is important that the phase mapping, i.e., the conversion between desired and realized phase, is accurate also locally on the SLM. 3. Spatial variations in phase response of LC SLMs In a reflective LC-based SLM, by far the most commonly used type of SLM for phase modulation, the LC layer is typically sandwiched between a reflective backplane with pixelated electrodes and a transmissive front glass with a common electrode. The modulated phase φ of the polarized light exiting such a reflective SLM (thus passing the LC layer twice) is given by Here, λ is the used wavelength, Δn = n[eff] − n[o] is the difference between the effective and the ordinary refractive index of the LC material, and d is the thickness of the LC layer. The effective refractive index, and consequently the phase, for each pixel can be varied (n[o] ≤ n[eff] ≤ n[e]; n[e] being the extraordinary refractive index) by rotating the rod-shaped LC molecules in a plane normal to the polarization of the incident beam. This is achieved by controlling the electric field over the pixel, in turn accomplished by applying a voltage over the pixel electrode and the common electrode. The SLM is addressed with an 8 or 16-bit number for each pixel, hereafter referred to as the pixel value (PV), which is converted into voltage by the SLM driving hardware. Since the pixel phase does not correspond linearly to the PV, and hence the applied voltage, a LUT is typically applied to the hologram, in some cases by the SLM driver itself. Typically a global LUT, which only takes the desired phase as input, is used, and consequently the SLM pixels are addressed according to where ) is the phase of the ideal hologram in position ( ). The LUT can either be provided by the SLM manufacturer or determined by measuring the phase response of the SLM [ 23. T. H. Barnes, K. Matsumoto, T. Eijo, K. Matsuda, and N. Ooyama, “Grating interferometer with extremely high stability, suitable for measuring small refractive index changes,” Appl. Opt. 30, 745–751 (1991) [CrossRef] [PubMed] . 26. D. Engström, G. Milewski, J. Bengtsson, and S. Galt, “Diffraction-based determination of the phase modulation for general spatial light modulators,” Appl. Opt. 45, 7195–7204 (2006) [CrossRef] [PubMed] . Applying a global LUT implies that a pixel’s position on the SLM does not affect its phase response. However, this is not always true. In the remainder of this section three experimentally verified reasons why LC-based SLMs often show spatial variations in their phase response are described. 3.1. Thickness variations in liquid crystal SLMs Due to manufacturing issues, the pixelated backplane of the SLM can become slightly nonflat [ ]. Combined with a flat front glass, a non-flat backplane yields an LC layer with varying thickness ). Such a thickness variation also affects the electric field applied across the pixels, resulting in a spatially dependent effective refractive index ). Thus, Eq. (2) becomes dependent on and therefore a non-flat backplane results in a spatially varying phase response. In previously presented work, which focused on such a non-flat backplane imperfection, it has been shown that to compensate for this effect it is sufficient to use a global LUT multiplied by a spatially varying scaling factor ), hereafter referred to as the “scaling matrix method” [ 15. X. D. Xun and R. W. Cohn, “Phase calibration of spatially nonuniform spatial light modulators,” Appl. Opt. 43, 6400–6406 (2004) [CrossRef] [PubMed] . 16. J. Oton, P. Ambs, M. S. Millan, and E. Perez-Cabre, “Multipoint phase calibration for improved compensation of inherent wavefront distortion in parallel aligned liquid crystal on silicon displays,” Appl. Opt. 46, 5667–5679 (2007) [CrossRef] [PubMed] . ]. The SLM is then addressed according to where, for a certain position ( ) is chosen such that Eq. (4) is exactly fulfilled for some fixed value of ), e.g., or 2 3.2. Non-uniform electric drive scheme For nematic LCs only the absolute value of the drive voltage matters for the induced birefringence. To avoid degradation of the LC over time (due to ion migrations) it is, however, crucial that the material is kept DC-balanced, i.e., that the time-averaged applied voltage is 0 V. As an example, in most SLMs fabricated by Boulder Nonlinear Systems this is accomplished by keeping the common electrode (positioned on the front glass of the SLM) at a voltage of 2.5 V while the pixelated backplane electrode is switched between 2.5 −U[pixel] V and 2.5 +U[pixel] V at a frequency of 1 kHz. This results in a maximum usable voltage U[pixel] of 2.5 V, for backplane electrode voltage switching between 0 and 5 V. To allow for real-time experiments, e.g., position clamps, [ 28. D. Preece, R. Bowman, A. Linnenberger, G. Gibson, S. Serati, and M. Padgett, “Increasing trap stiffness with position clamping in holographic optical tweezers,” Opt. Express 17, 22718–22725 (2009) [CrossRef] . ] a short response time of the SLM is crucial. For an LC-based device the response time is proportional to ) where is the electric field applied over the LC and is a material constant [ 29. M. Schadt and W. Helfrich, “Voltage-dependent optical activity of a twisted nematic liquid crystal,” Appl. Phys. Lett. 18, 127–128 (1971) [CrossRef] . ]. Thus, it is beneficial to increase the electric voltage applied across the SLM pixels. Recently, the company introduced a new drive scheme that allows twice as large applied electric field without increasing the backplane voltage. By time sequentially toggling the voltage on the common electrode between 0 V and 5 V and the backplane voltage between and 5 − , the full backplane voltage of 5 V can be utilized while the LC is still DC-balanced [ ]. However, since the pixel electrodes are switched sequentially (row by row, or similarly), i.e., at slightly different times relative to the switching of the common electrode, the latter approach often induces a spatially varying phase behavior. This effect is exemplified in Fig. 2 , which shows the realized phase response from 8×8 regions of an SLM. As all the curve shapes are different, no spatial scaling function can yield an accurate phase mapping. 3.3. LC heating induced by high laser power For applications requiring high optical power in the incident beam, heating of the LC material can affect the phase response. Since the intensity usually varies across the surface of the SLM, e.g. for a Gaussian beam, the change in phase response may also be spatially different. Figure 3 shows the measured local phase response in different locations of an SLM for an incident power of 50 mW and 1 W. The measurements show a rather complicated relation between realized phase and pixel value; a region can demand a relatively high pixel value to reach a phase of /2 but a relatively low value to reach 2 . It is also evident that the LUT (or a more advanced addressing scheme) must be determined using the same power of the incident beam as in subsequent experiments. 4. Method Here we first describe how the phase response of the SLM is spatially characterized and then how a hologram is converted to a PV matrix using a 3D polynomial prior to being addressed to the SLM. 4.1. Phase modulation characterization In order to characterize the SLM with a minimum of modifications of the setup, a diffraction-based method was used. This avoids the problems with an interferometric approach or mapping of the phase as a grayscale intensity [ 19. S. Reichelt, “Spatially resolved phase-response calibration of liquid-crystal-based spatial light modulators,” Appl. Opt. 52, 2610–2618 (2013) [CrossRef] [PubMed] . ], both of which techniques require additional optical components to be inserted or repositioning of the camera. In a diffraction based method, a set of simple holograms is displayed on the SLM, and the phase modulation is determined by measuring the varying intensity in the resulting diffraction spots [ 25. Z. Zhang, G. Lu, and F. T. S. Yu, “Simple method for measuring phase modulation in liquid crystal televisions,” Opt. Eng. 33, 3018–3022 (1994) [CrossRef] . 26. D. Engström, G. Milewski, J. Bengtsson, and S. Galt, “Diffraction-based determination of the phase modulation for general spatial light modulators,” Appl. Opt. 45, 7195–7204 (2006) [CrossRef] [PubMed] . To determine the localized phase response, the SLM is divided into subregions, typically 8×8, and a sequence of holograms with only two phase levels, realized as linear gratings with 50% duty cycle, is displayed on one subregion at a time, while the rest of the SLM is blank, see Fig. 4(a) . In each sequence, one phase level is kept constant and the other is stepped through the range of pixel values to be characterized. For each displayed hologram, the intensity is measured in either the +1st or −1st diffraction order, see Fig. 4(b) . The measured data from each region is then normalized such that the maxima corresponding to a phase of equal one; a typical result is shown in Fig. 4(c) . Finally, the phase is calculated according to is the region index and is the normalized power in the ±1st order for region 4.2. Fitting a 3D polynomial to the measured data The aim here is to find a 3D polynomial ) that gives the optimal PV for a desired phase value at the position on the SLM given by . To do this, the determined relations between and PV for all measured SLM regions are arranged in a linear equation system. Exemplified using a polynomial of the seventh order we end up with is the number of SLM subregions used for the fit, is the number of measured phase levels, = [ contains the coefficients that determine the polynomial, and is the number of terms in the polynomial. In the example given above, with a polynomial of the seventh order, = 120. Each row in the matrix on the left side contains the polynomial terms for a set ( ) and on the same row in the vector on the right side is the corresponding PV that yielded the phase . Here, correspond to the coordinates defining the center of SLM subregion . Finally, the coefficients are obtained by solving the, generally overdetermined, linear equation system in Eq. (6) 4.3. Phase compensation using the 3D polynomial Once the polynomial coefficients are determined, it is straightforward to convert a desired phase hologram, φ[desired](x, y), to the suitable PV matrix; for each SLM pixel the PV is given by the For test purposes the conversion was done either in Matlab or LabVIEW. The calculation time needed to convert a 512×512 element phase hologram to an equally large PV matrix was roughly 10 s. For real applications, such as the trapping experiments presented in Section 5.4.3, the method was implemented in the parallel programming language CUDA for C. In the latter case the calculation time was ∼0.13 ms and thus it is possible to generate optimized holograms and convert them using the 3D polynomial at a rate higher than 100 Hz [ 14. M. Persson, D. Engström, and M. Goksör, “Real-time generation of fully optimized holograms for optical trapping applications,” Proc. SPIE 8097,80971H (2011) [CrossRef] . ]. In most cases this means that the SLM response time, rather than the hologram and PV calculation time, is the bottleneck of the HOT setup. 5. Experiments and results 5.1. Optical setup The optical tweezers setup, illustrated in Fig. 5 , was built around a motorized inverted epifluorescence microscope (DMI6000B, Leica Microsystems). The laser beam (1070 nm, IPG Photonics) was first magnified by an afocal telescope to match the beam diameter to the full width of the SLM (HSPDM512 1064-PCIe, Boulder Nonlinear Systems). The SLM has a flat, highly reflective dielectric mirror that covers the backplane electrodes. A second afocal telescope was used to image the SLM plane onto the back focal plane of the microscope objective (100×, NA 1.3). The magnification of the second telescope was chosen such that the output beam slightly overfilled the back aperture of the microscope objective. The microscope objective then forms the spots/traps in the vicinity of its imaging/trapping plane. A camera (Photon-focus MV-D1024E-160-CL, pixel size 10.6 m) was used for capturing bright field images except for the trapping experiments described in Section 5.4.3. Following the approach described in Section 4.1, we characterized our SLM within the HOT setup. Only minor adjustments of the HOT setup were made, see Fig. 5(b) . First, the IR filter positioned in front of the bright field camera was removed in order to image the reflection of the trapping laser. Second, the 0th order spot was blocked outside the microscope. For measurements with high optical power incident on the SLM, ≥ 0.5 W, a reflective ND filter (optical density 2–3) was also placed in the beam path outside the microscope. The latter modifications were done in order to reduce the amount of light incident on the camera. Since only 1/64 of the SLM area is used to diffract light to the 1st order in each sequence, the optical power in the 0th order is very high and might even damage the camera sensor. A cover glass (or a mirror in case of a full SLM evaluation) was placed in the image plane of the microscope so that the reflection of the diffraction spots in its upper surface was focused on the camera sensor. As the corners of the SLM were blocked by the circular aperture stop of the microscope objective, only the central 52 of the 8×8 subregions were characterized and used for fitting the polynomial coefficients. 5.2. Phase characterization Characterization was done at a number of different optical powers incident on the SLM. The binary gratings used had a large period of 16 pixels to minimize pixel crosstalk [ 8. G. Milewski, D. Engström, and J. Bengtsson, “Diffractive optical elements designed for highly precise far-field generation in the presence of artifacts typical for pixelated spatial light modulators,” Appl. Opt. 46, 95–105 (2007) [CrossRef] . 32. M. Persson, D. Engström, and M. Goksör, “Reducing the effect of pixel crosstalk in phase only spatial light modulators,” Opt. Express 20, 22334–22343 (2012) [CrossRef] [PubMed] . Figure 6(a,c) shows the normalized power in the 1st diffraction order as function of PV for the different subregions of the SLM, measured at incident powers of 50 mW and 1 W. Each curve corresponds to the measurement for a certain subregion. To obtain this normalized curve, the directly obtained intensity versus PV curve was first used to find the PV values at which the different local minima and maxima occurred, corresponding to values of , 2 , etc. This raw data curve was then normalized in segments between these PV values, by subtracting a constant “dark intensity” and multiplying by an appropriate constant, such that the value is either zero or one at the beginning and end of each segment, depending on whether is an even or odd integer of in that position. The phase (PV) was then extracted from the normalized curve according to Eq. (5) and is shown in Figs. 6(b) and 6(d) for all subregions. Table 1 shows the ranges of PV that yield a phase of , 2 , and 3 somewhere on the SLM. From these measurements, it is evident that the phase response varies drastically across the surface of the SLM. In some cases, the obtained phase modulation differs by for the same PV in different positions on the SLM. As a remark, this spatial variation in the phase response is larger than for the SLM used in Ref. 19. S. Reichelt, “Spatially resolved phase-response calibration of liquid-crystal-based spatial light modulators,” Appl. Opt. 52, 2610–2618 (2013) [CrossRef] [PubMed] . , so its correction should be at least as difficult. We also note that the phase response is highly dependent on the optical power of the incident beam. This underlines our statement in Section 3.3; the SLM should be characterized using the same optical power as used in the real application. 5.3. Polynomial fitting and accuracy To determine the optimal polynomial order, the error between the measurements and the polynomial fit for orders between 3 and 9 were calculated, see Fig. 7 . First, each polynomial was used to calculate the PV matrix (512×512 elements) for 20 equidistant phase values between 0 and 2 Fig. 7(a) shows the error in PV between the measurements (averaged for each sub-region) and the polynomial of 7th order for four of these phase values. By averaging the PVs obtained from the polynomial for each subregion and phase value, the phase response could be determined from the measurements. For each subregion the mean and maximal phase error (among the 20 phase values) were determined. The largest mean and maximal phase error from all 52 subregions and 32 most central subregions are shown in Figs. 7(b) and 7(c) , respectively. As seen, for the central part of the SLM the phase error decreases with higher orders, while if all measured subregions are used the error increases for polynomials of orders 7–9. This is caused by Runge’s phenomenon [ ], i.e., the polynomial fitting starting to induce oscillations at the edges of the fitting region as the polynomial becomes higher; this is seen in the top parts of the sub figures in Fig. 7(a) . Runge’s phenomenon is one disadvantage of using a single continuous polynomial for the entire SLM, but on the other hand this makes the number of parameters in the phase-to-pixel value conversion method quite small, 120 in our case for a seventh order polynomial. Also in Ref. 19. S. Reichelt, “Spatially resolved phase-response calibration of liquid-crystal-based spatial light modulators,” Appl. Opt. 52, 2610–2618 (2013) [CrossRef] [PubMed] . global polynomials are used, but in this case each of 256 desired phase levels is associated with a globally defined polynomial which gives the required pixel value in any position of the SLM. Since a rather well-behaved SLM is used in this case, it is sufficient to use a polynomial which is the sum of only the four lowest Legendre polynomials, and thus the total number of parameters in their method is 4×256. As a contrast, in Ref. 20. Z. Zhang, H. Yang, B. Robertson, M. Redmond, M. Pivnenko, N. Collings, W. A. Crossland, and D. Chu, “Diffraction based phase compensation method for phase-only liquid crystal on silicon devices in operation,” Appl. Opt. 51, 3837–3846 (2012) [CrossRef] [PubMed] . a LUT is created for each pixel and each desired phase, yielding a much larger number of parameters. For pixels that are not located precisely at a measurement position (this SLM is characterized by measuring in 4×3 positions on the SLM surface, just as for the SLM in Ref. 19. S. Reichelt, “Spatially resolved phase-response calibration of liquid-crystal-based spatial light modulators,” Appl. Opt. 52, 2610–2618 (2013) [CrossRef] [PubMed] . ) linear interpolation between the nearest measurement positions is used, so Runge’s phenomenon will not appear. As the number of terms in the polynomial increases rapidly with the polynomial order and the decrease in phase error ( Fig. 7(c) ) is negligible for orders 7–9 we decided to use a polynomial of 7th order. We disregard the small increase in phase error for polynomial orders 5–7 seen in Fig. 7(b) since most of the laser power falls on the center of the SLM. For a polynomial of 7th order the largest maximum phase error is 0.46 (0.33) rad and the largest mean phase error is 0.33 (0.11) rad if the 52 (32) central SLM subregions are used in the analysis. 5.4. Method evaluation The performance of the 3D polynomial method was evaluated using three different methods: by applying sequences of binary gratings to subregions of the SLM, by applying full-frame holograms and comparing the desired and obtained spot intensities, and finally in optical trapping experiments, where the obtained trap stiffness for each trap was determined by Brownian motion analysis. In the binary grating and full frame hologram measurements, the method was compared to the use of a global LUT and to the use of the scaling matrix method, where ) is chosen such that Eq. (4) is fulfilled for . In the optical trapping experiment, the method was compared to the use of a global LUT. 5.4.1. Binary gratings covering a subregion of the SLM The binary grating measurements were done similarly to the presented calibration method. The grating period was still 16 pixels and a sequence of gratings was used; one of the two grating levels was changed in the sequence. However, instead of stepping the PV from 0 to 255, the desired phase was stepped from 0 to 2 . The three tested methods for converting desired phase to PV were then used to convert each grating to the corresponding PV matrix. Finally, the power in one of the 1st diffraction orders was measured and normalized to the maximum value and the realized phase was calculated using Eq. (5) Fig. 8 , the normalized power and realized phase are plotted against the desired phase for the three methods. Ideally, the normalized power, see Figs. 8(a)–8(c) , should follow a sine-squared curve with a period of 2 , and the phase response curves, see Figs. 8(d)–8(f) , should have a constant slope of 1 and no offset. While the scaling matrix method brings the phase response curves closer to the ideal line – the maximum error is reduced from 0.8 to 0.6 – it fails to compensate for their varying shapes. With the 3D polynomial method, the response curves are brought much closer to the ideal line and the maximum error is reduced to 0.3 5.4.2. Holograms covering the full SLM The PV calculation methods were then evaluated by studying the spot intensities for full frame holograms. In these measurements, the zeroth order was not blocked. Its intensity was instead measured and used to further judge the performance of the used methods. The measurements were performed using 1 W optical power incident onto the SLM. This power yields a stronger phase response variation over the SLM area than ≤0.5 W and is also a more realistic power used for trapping. Binary gratings covering the entire SLM were first used in a way similar to the previously described sub-region measurements; the desired phase in one of the grating levels was kept at zero and the other level was stepped from zero to 2 . Again, a grating period of 16 SLM pixels were used. The powers in the zeroth and the two first diffraction orders were measured using a camera. The power in the two first diffraction orders should then ideally vary as /2) and the zeroth order should vary as /2), where is the total power in the trapping plane. Thus, the zeroth order should completely vanish at and equal = 2 and the first diffraction orders should completely vanish at = 2 . In Fig. 9 , the powers for the three measured diffraction orders are plotted for each of the three methods. The difference between the three methods can be seen most clearly in the extreme values. For the zeroth order, the minima equal 0.19 , 0.26 , and 0.074 for the three methods, respectively. At = 2 the zeroth order equals 0.91 , 0.96 , and 0.97 and the highest of the two 1st order powers equals 0.16 , 0.069 , and 0.028 , respectively. The phase for which the 0th order minimum and ±1st order maxima is found equals 1.05 , 0.88 , and 0.93 radians for the three methods, respectively. Also, an analysis of the shapes of the curves shows that the global LUT and the 3D polynomial give an equally decent fit to the ideal sine-squared shapes while the scaling matrix method degrades the curve shapes. As seen, none of the methods removes the 0th order completely for a phase step of radians, nor do the first orders completely vanish when the phase step reaches 2 . The reason for this might be that the spatial phase response is still not perfectly corrected for. However, a more pronounced effect is likely that the realized phase gratings are smeared out due to pixel crosstalk resulting in non-ideal “binary” gratings [ 8. G. Milewski, D. Engström, and J. Bengtsson, “Diffractive optical elements designed for highly precise far-field generation in the presence of artifacts typical for pixelated spatial light modulators,” Appl. Opt. 46, 95–105 (2007) [CrossRef] . 32. M. Persson, D. Engström, and M. Goksör, “Reducing the effect of pixel crosstalk in phase only spatial light modulators,” Opt. Express 20, 22334–22343 (2012) [CrossRef] [PubMed] . ]. In summary, the measured results on binary gratings show that the 3D polynomial gives the best results while the scaling matrix method actually yields slightly worse results than the global LUT. Similar results were obtained for = 50 mW, 0.5 W, and 1.5 W. Two holograms with a more complicated phase modulation were also used. One producing 14 spots equidistantly distributed on a circle with radius 0.1875 and one producing 24 spots forming a regular 5×5 grid with a spacing of 0.125 ; the center position excluded. Here, = sin ) is the maximal steering angle allowed by the pixelated SLM; is the pixel pitch. Both spot arrangements were centered on the optical axis. The two holograms were optimized using a modified Gerchberg-Saxton algorithm to obtain nearly perfect theoretical uniformity [ 14. M. Persson, D. Engström, and M. Goksör, “Real-time generation of fully optimized holograms for optical trapping applications,” Proc. SPIE 8097,80971H (2011) [CrossRef] . ]. For each hologram and phase conversion method, the power in the desired spot positions and the 0th order spot were measured. The uniformity of the spot powers and the power in the 0th order spot were used to assess the performance of the three methods. Figs. 10(a)–10(c) measured results are shown for the hologram producing a circle containing 14 spots. First of all, it is clear that the unwanted optical power on the optical axis decreases as the phase-to-PV conversion method becomes more accurate. For the data shown in Figs. 10(a)–10(c) , 8.9%, 6.9%, and 2.9% of the captured power falls into the zeroth order. Furthermore, a power uniformity of the 14 spots of 85%, 84%, and 85% is obtained for the three different conversions methods, respectively. Similar results were obtained also for the hologram producing 24 traps out of a 5×5 grid, see Figs. 10(d)–10(f) . The measurements show that 7.5%, 6.7%, and 2.7% of the captured power falls into the zeroth order and the uniformity (among the 24 desired traps) was 70%, 69%, and 70% for the three methods, respectively. Here, a higher uniformity was expected for the 3D polynomial method (see simulated results in Fig. 1 ). However, the reflective measurements are very sensitive to the mirror position and orientation. Thus, with this method it is very difficult to verify any possible increase in the uniformity. 5.4.3. Holographic optical trapping Finally, the 3D polynomial method was used for trapping and its performance was compared to the use of a global LUT. Since the scaling matrix method had not shown any real improvement over the global LUT method, as described in previous subsections, it was not implemented in CUDA and thus not tested for optical trapping. Prior to the measurements, an IR-filter was mounted onto the camera to block the trapping laser and make it possible to capture clear bright field images, see Fig. 5(a) . Also, a camera allowing for a faster frame rate was used (EoSens CL MC1362, Mikrotron GmbH, pixel size 14 Five silica beads (diameter of 2.56 μm) suspended in water were trapped and positioned with a spacing of 7.6 μm along the x-axis; the central bead coinciding with the optical axis. To obtain bright field images suitable for bead position determination the traps were positioned in a plane 2.4 μm in front of the imaging/trapping plane of the microscope objective. The latter plane coincides with the imaged Fourier plane of the SLM, which is the plane where the 0th order spot is located. The 2.4 μm longitudinal displacement of the traps was accomplished by including a spherical phase curvature when optimizing the hologram. Finally, the sample was placed such that the beads were positioned 10 μm above the glass substrate. Keeping the incident laser power onto the SLM at 1 W this was repeated for two different holograms designed to yield 10% and 20% of the total power in each of the traps, respectively. To achieve 10% of the total power in each of the 5 traps, the hologram dumped half of the laser power outside the measurement window. By allowing for power to be dumped when calculating the hologram, the intensity distribution within the measurement window can be even closer to the desired one. In particular, the theoretical power in the five traps can be virtually identical, yielding a perfectly uniform trap stiffness [ 14. M. Persson, D. Engström, and M. Goksör, “Real-time generation of fully optimized holograms for optical trapping applications,” Proc. SPIE 8097,80971H (2011) [CrossRef] . ]. The second case, 5 traps each containing 20% of the total power, means that the algorithm tries to maximize the diffraction efficiency without deliberate power dumping, giving very nearly, but not quite, uniform intensity in the trap positions. The beads were monitored by capturing bright field images at a rate of 8192 frames per second for a duration of 100 s. The positions of the beads were determined in each frame using a center-of-mass calculation after subtraction of the dark signal non uniformity and a constant threshold value. Although the conditions are static the beads move due to Brownian motion. The power spectrum of the position of a bead gives information about the trap stiffness, i.e., how strongly the bead is held in the trap, which in turn is an indication of the intensity of the focused light in the trap position. To minimize the impact of drift the position data was divided into 0.25 s long segments that were used to calculate the power spectrum. In Fig. 11 the average of 100 such power spectra are shown for each of the five beads. Figs. 11(a)–11(d) it is clear that the 3D polynomial method yields more similar trapping conditions in the five traps than the global LUT method, as evidenced by the more similar power spectral density curves for the five beads. This is primarily an indication of a considerably more uniform trap stiffness. For a desired trap power of 10% of the total power the mean trap stiffness is 0.33·10 pN/nm and 0.36·10 pN/nm for the global LUT and the 3D polynomial, respectively; the trap stiffness uniformity, calculated with the maximum and minimum stiffness values similarly to Eq. (1) , is 90% and 96%, respectively. For a desired trap power of 20% of the total power the mean trap stiffness is 0.65·10 pN/nm and 0.68 pN/nm for the global LUT and the 3D polynomial, respectively. Furthermore, the trap stiffness uniformity is 81% and 94%, respectively. Thus, the 3D polynomial method increases both the trap stiffness of the traps and the uniformity thereof. As seen in Figs. 11(a)–11(d) , all five beads have roughly the same behavior. Thus, the bead in the middle trap, coinciding with the optical axis, is not strongly influenced by the 0th diffraction order spot positioned in the imaging/trapping plane 2.4 m behind it. However, when a hologram optimized to yield five traps with a desired power in each trap equal to only 2% of the total power, for the global LUT method the 0th diffraction order spot is strong enough to capture the bead from the middle trap, see Fig. 11(e) . When the 3D polynomial method was used, the 0th diffraction order was still too weak to affect the bead in the middle trap, see Fig. 11(f) . Thus, with this method very weak traps can be efficiently used also in close vicinity to the 0th diffraction order; this is important, e.g., if sensitive living cells are to be trapped. 6. Conclusions We have shown that the phase response can vary over the SLM surface and that this can depend not only on variations in the active LC layer thickness but also on variations in the incident laser power, i.e., induced heating of the LC, and a non-uniform electrical driving scheme. As a consequence, independent of which phase-to-PV conversion method is used, the data used to derive the conversion parameters should always be measured at the same power, and – if applicable – same SLM electric drive settings as used in the real application. To compensate for such spatial phase response variations, we suggest a method that converts the desired phase to pixel value using a 3D polynomial with variables being the (x, y)-coordinates and the desired phase of each pixel. Experimental evaluations of holographically generated configurations of intensity spots and optically trapped beads confirm that the SLM behaves more ideally than when previously proposed conversion methods are used. The main advantages are that the unwanted 0th diffraction order, i.e., optical power on the optical axis of the system, is strongly suppressed, and that the optical power is more accurately distributed among the desired spots/traps. In HOT, the suppression of the 0th diffraction order is a major improvement as it means that unwanted particles being drawn into the optical axis, typically in the center of the measurement region, is no longer such a severe problem. Thus, there is no need to block the 0th diffraction order outside the microscope. Instead, traps close to – or even coinciding with – the optical axis behave as any other trap. Furthermore, the 3D polynomial method has shown to increase both the trap stiffness and the trap uniformity. Since the conversion method is applied only after the desired phase pattern/hologram has been found, the optimization method used to calculate the hologram is not critical to the conversion method. Hence, the method can be used with holograms optimized with any algorithm. For instance, we are currently using the method together with an algorithm that creates holograms that minimize the pixel crosstalk effect [ 32. M. Persson, D. Engström, and M. Goksör, “Reducing the effect of pixel crosstalk in phase only spatial light modulators,” Opt. Express 20, 22334–22343 (2012) [CrossRef] [PubMed] . Even though this work has focused on HOT, the benefits of the scheme presented here can be utilized in any applications in which an SLM is used for phase modulation. As a straightforward further development of the method, to improve the behavior of the polynomial in the outer parts of the SLM, i.e., decrease the impact of Runge’s phenomenon, dummy data points may be introduced outside the measured area of the SLM. The 3D polynomial method adds calculation time to the hologram generation cycle. However, utilizing our implementation in CUDA, the conversion time for a 512×512 element hologram is merely ∼0.13 ms. This means that if holograms are created at a rate of 100 Hz, the PV conversion needs less than 2% of the time window, leaving enough time to use an accurate optimization algorithm resulting in holograms with a near-ideal performance. This work was supported by the Swedish Research Council (M.G.). D.E. acknowledges the Royal Swedish Academy of Sciences for financial support. We thank Anna Linnenberger and Teresa Ewing at Boulder Nonlinear Systems for their help linking our software to the SLM hardware and for discussions regarding BNS SLM systems. Finally, we thank Rebecca Mayer for interesting discussions. References and links 1. E. Marom and N. Konforti, “Dynamic optical interconnections,” Opt. Lett. 12, 539–541 (1987) [CrossRef] [PubMed] . 2. P. F. McManamon, T. A. Dorschner, D. L. Corkum, L. J. Friedman, D. S. Hobbs, M. Holtz, S. Liberman, H. Q. Nguyen, D. P. Resler, R. C. Sharp, and E. A. Watson, “Optical phased array technology,” Proc. SPIE 84, 268–298 (1996). 3. E. Hällstig, J. Öhgren, L. Allard, L. Sjöqvist, D. Engström, S. Hård, D. Ågren, S. Junique, Q. Wang, and B. Noharet, “Retrocommunication utilizing electroabsorption modulators and non-mechanical beam steering,” Opt.Eng. 44, 045001 (2005) [CrossRef] . 4. M. Reicherter, T. Haist, E. U. Wagemann, and H. J. Tiziani, “Optical particle trapping with computer-generated holograms written on a liquid-crystal display,” Opt. Lett. 24, 608–610 (1999) [CrossRef] . 5. E. R. Dufresne, G. C. Spalding, M. T. Dearing, S. A. Sheets, and D. G. Grier, “Computer-generated holographic optical tweezer arrays,” Rev. Sci. Instrum. 72, 1810–1816 (2001) [CrossRef] . 6. M. A. Seldowitz, J. P. Allebach, and D. W. Sweeney, “Synthesis of digital holograms by direct binary search,” Appl. Opt. 26, 2788–2798 (1987) [CrossRef] [PubMed] . 7. B. K. Jennison, J. P. Allebach, and D. W. Sweeney, “Efficient design of direct-binary-search computer-generated holograms,” J. Opt. Soc. Am. A 8, 652–660 (1991) [CrossRef] . 8. G. Milewski, D. Engström, and J. Bengtsson, “Diffractive optical elements designed for highly precise far-field generation in the presence of artifacts typical for pixelated spatial light modulators,” Appl. Opt. 46, 95–105 (2007) [CrossRef] . 9. R. W. Gerchberg and W. O. Saxton, “A Practical Algorithm for the Determination of Phase from Image and Diffraction Plane Pictures,” Optik 35, 237–246 (1972). 10. M. W. Farn, “New iterative algorithm for the design of phase-only gratings,” Proc. SPIE 1555, 34–42 (1991) [CrossRef] . 11. J. E. Curtis, B. A. Koss, and D. G. Grier, “Dynamic holographic optical tweezers,” Opt. Commun. 207, 169–175 (2002) [CrossRef] . 12. D. Engström, A. Frank, J. Backsten, M. Goksör, and Jörgen Bengtsson, “Grid-free 3D multiple spot generation with an efficient single-plane FFT-based algorithm,” Opt. Express 17, 9989–10000 (2009) [CrossRef] [PubMed] . 13. S. Bianchi and R. Di Leonardo, “Real-time optical micro-manipulation using optimized holograms generated on the GPU,” Comput. Phys. Commun. 181, 1442–1446 (2010) [CrossRef] . 14. M. Persson, D. Engström, and M. Goksör, “Real-time generation of fully optimized holograms for optical trapping applications,” Proc. SPIE 8097,80971H (2011) [CrossRef] . 15. X. D. Xun and R. W. Cohn, “Phase calibration of spatially nonuniform spatial light modulators,” Appl. Opt. 43, 6400–6406 (2004) [CrossRef] [PubMed] . 16. J. Oton, P. Ambs, M. S. Millan, and E. Perez-Cabre, “Multipoint phase calibration for improved compensation of inherent wavefront distortion in parallel aligned liquid crystal on silicon displays,” Appl. Opt. 46, 5667–5679 (2007) [CrossRef] [PubMed] . 17. D. Engström, M. Persson, and M. Goksör, “Spatial phase calibration used to improve holographic optical trapping,” in Biomedical Optics and 3-D Imaging, OSA Technical Digest (Optical Society of America, 2012), paper DSu2C.3 [CrossRef] . 18. G. Thalhammer, R. W. Bowman, G. D. Love, M. J. Padgett, and M. Ritsch-Marte, “Speeding up liquid crystal SLMs using overdrive with phase change reduction,” Opt. Express 21, 1779–1797 (2013) [CrossRef] [PubMed] . 19. S. Reichelt, “Spatially resolved phase-response calibration of liquid-crystal-based spatial light modulators,” Appl. Opt. 52, 2610–2618 (2013) [CrossRef] [PubMed] . 20. Z. Zhang, H. Yang, B. Robertson, M. Redmond, M. Pivnenko, N. Collings, W. A. Crossland, and D. Chu, “Diffraction based phase compensation method for phase-only liquid crystal on silicon devices in operation,” Appl. Opt. 51, 3837–3846 (2012) [CrossRef] [PubMed] . 21. T. Cizmar, M. Mazilu, and K. Dholakia, “In situ wavefront correction and its application to micromanipulation,” Nat. Photonics 4, 388–394 (2010) [CrossRef] . 22. R. W. Bowman, A. J. Wright, and M. J. Padgett, “An SLM-based ShackHartmann wavefront sensor for aberration correction in optical tweezers,” J. Opt. 12, 124004 (2010) [CrossRef] . 23. T. H. Barnes, K. Matsumoto, T. Eijo, K. Matsuda, and N. Ooyama, “Grating interferometer with extremely high stability, suitable for measuring small refractive index changes,” Appl. Opt. 30, 745–751 (1991) [CrossRef] [PubMed] . 24. A. Bergeron, J. Gauvin, F. Gagnon, D. Gingras, H. H. Arsenault, and M. Doucet, “Phase calibration and applications of a liquid-crystal spatial light modulator,” Appl. Opt. 34, 5133–5139 (1995) [CrossRef] [PubMed] . 25. Z. Zhang, G. Lu, and F. T. S. Yu, “Simple method for measuring phase modulation in liquid crystal televisions,” Opt. Eng. 33, 3018–3022 (1994) [CrossRef] . 26. D. Engström, G. Milewski, J. Bengtsson, and S. Galt, “Diffraction-based determination of the phase modulation for general spatial light modulators,” Appl. Opt. 45, 7195–7204 (2006) [CrossRef] [PubMed] . 27. A. Linnenberger, S. Serati, and J. Stockley, “Advances in Optical Phased Array Technology,” Proc. SPIE 6304,63040T (2006). 28. D. Preece, R. Bowman, A. Linnenberger, G. Gibson, S. Serati, and M. Padgett, “Increasing trap stiffness with position clamping in holographic optical tweezers,” Opt. Express 17, 22718–22725 (2009) [CrossRef] . 29. M. Schadt and W. Helfrich, “Voltage-dependent optical activity of a twisted nematic liquid crystal,” Appl. Phys. Lett. 18, 127–128 (1971) [CrossRef] . 30. A. Linnenberger and Teresa Ewing, Boulder Nonlinear Systems, 450 Courtney Way, #107 Lafayette, CO 80026, USA (personal communication, February 2013). 31. Software available at http://www.physics.gu.se/forskning/komplexa-system/biophotonics/download/hotlab/ 32. M. Persson, D. Engström, and M. Goksör, “Reducing the effect of pixel crosstalk in phase only spatial light modulators,” Opt. Express 20, 22334–22343 (2012) [CrossRef] [PubMed] . 33. C. Runge, “Über empirische Funktionen und die Interpolation zwischen äquidistanten Ordinaten,” inZeitschrift für Mathematik und Physik46,R. Mehmke and C. Runge, eds. (Druck und verlag von B. G. Teubner, Leipzig, 1901), 224–243. OCIS Codes (090.1760) Holography : Computer holography (090.1970) Holography : Diffractive optics (090.2890) Holography : Holographic optical elements (120.5060) Instrumentation, measurement, and metrology : Phase modulation (140.7010) Lasers and laser optics : Laser trapping (230.6120) Optical devices : Spatial light modulators (090.1995) Holography : Digital holography (350.4855) Other areas of optics : Optical tweezers or optical manipulation ToC Category: Optical Devices Original Manuscript: April 29, 2013 Revised Manuscript: June 10, 2013 Manuscript Accepted: June 13, 2013 Published: June 28, 2013 Virtual Issues Vol. 8, Iss. 8 Virtual Journal for Biomedical Optics David Engström, Martin Persson, Jörgen Bengtsson, and Mattias Goksör, "Calibration of spatial light modulators suffering from spatially varying phase response," Opt. Express 21, 16086-16103 (2013) Sort: Year | Journal | Reset 1. E. Marom and N. Konforti, “Dynamic optical interconnections,” Opt. Lett.12, 539–541 (1987). [CrossRef] [PubMed] 2. P. F. McManamon, T. A. Dorschner, D. L. Corkum, L. J. Friedman, D. S. Hobbs, M. Holtz, S. Liberman, H. Q. Nguyen, D. P. Resler, R. C. Sharp, and E. A. Watson, “Optical phased array technology,” Proc. SPIE84, 268–298 (1996). 3. E. Hällstig, J. Öhgren, L. Allard, L. Sjöqvist, D. Engström, S. Hård, D. Ågren, S. Junique, Q. Wang, and B. Noharet, “Retrocommunication utilizing electroabsorption modulators and non-mechanical beam steering,” Opt.Eng.44, 045001 (2005). [CrossRef] 4. M. Reicherter, T. Haist, E. U. Wagemann, and H. J. Tiziani, “Optical particle trapping with computer-generated holograms written on a liquid-crystal display,” Opt. Lett.24, 608–610 (1999). 5. E. R. Dufresne, G. C. Spalding, M. T. Dearing, S. A. Sheets, and D. G. Grier, “Computer-generated holographic optical tweezer arrays,” Rev. Sci. Instrum.72, 1810–1816 (2001). [CrossRef] 6. M. A. Seldowitz, J. P. Allebach, and D. W. Sweeney, “Synthesis of digital holograms by direct binary search,” Appl. Opt.26, 2788–2798 (1987). [CrossRef] [PubMed] 7. B. K. Jennison, J. P. Allebach, and D. W. Sweeney, “Efficient design of direct-binary-search computer-generated holograms,” J. Opt. Soc. Am. A8, 652–660 (1991). [CrossRef] 8. G. Milewski, D. Engström, and J. Bengtsson, “Diffractive optical elements designed for highly precise far-field generation in the presence of artifacts typical for pixelated spatial light modulators,” Appl. Opt.46, 95–105 (2007). [CrossRef] 9. R. W. Gerchberg and W. O. Saxton, “A Practical Algorithm for the Determination of Phase from Image and Diffraction Plane Pictures,” Optik35, 237–246 (1972). 10. M. W. Farn, “New iterative algorithm for the design of phase-only gratings,” Proc. SPIE1555, 34–42 (1991). [CrossRef] 11. J. E. Curtis, B. A. Koss, and D. G. Grier, “Dynamic holographic optical tweezers,” Opt. Commun.207, 169–175 (2002). [CrossRef] 12. D. Engström, A. Frank, J. Backsten, M. Goksör, and Jörgen Bengtsson, “Grid-free 3D multiple spot generation with an efficient single-plane FFT-based algorithm,” Opt. Express17, 9989–10000 (2009). [CrossRef] [PubMed] 13. S. Bianchi and R. Di Leonardo, “Real-time optical micro-manipulation using optimized holograms generated on the GPU,” Comput. Phys. Commun.181, 1442–1446 (2010). [CrossRef] 14. M. Persson, D. Engström, and M. Goksör, “Real-time generation of fully optimized holograms for optical trapping applications,” Proc. SPIE8097,80971H (2011). [CrossRef] 15. X. D. Xun and R. W. Cohn, “Phase calibration of spatially nonuniform spatial light modulators,” Appl. Opt.43, 6400–6406 (2004). [CrossRef] [PubMed] 16. J. Oton, P. Ambs, M. S. Millan, and E. Perez-Cabre, “Multipoint phase calibration for improved compensation of inherent wavefront distortion in parallel aligned liquid crystal on silicon displays,” Appl. Opt.46, 5667–5679 (2007). [CrossRef] [PubMed] 17. D. Engström, M. Persson, and M. Goksör, “Spatial phase calibration used to improve holographic optical trapping,” in Biomedical Optics and 3-D Imaging, OSA Technical Digest (Optical Society of America, 2012), paper DSu2C.3. [CrossRef] 18. G. Thalhammer, R. W. Bowman, G. D. Love, M. J. Padgett, and M. Ritsch-Marte, “Speeding up liquid crystal SLMs using overdrive with phase change reduction,” Opt. Express21, 1779–1797 (2013). [CrossRef] [PubMed] 19. S. Reichelt, “Spatially resolved phase-response calibration of liquid-crystal-based spatial light modulators,” Appl. Opt.52, 2610–2618 (2013). [CrossRef] [PubMed] 20. Z. Zhang, H. Yang, B. Robertson, M. Redmond, M. Pivnenko, N. Collings, W. A. Crossland, and D. Chu, “Diffraction based phase compensation method for phase-only liquid crystal on silicon devices in operation,” Appl. Opt.51, 3837–3846 (2012). [CrossRef] [PubMed] 21. T. Cizmar, M. Mazilu, and K. Dholakia, “In situ wavefront correction and its application to micromanipulation,” Nat. Photonics4, 388–394 (2010). [CrossRef] 22. R. W. Bowman, A. J. Wright, and M. J. Padgett, “An SLM-based ShackHartmann wavefront sensor for aberration correction in optical tweezers,” J. Opt.12, 124004 (2010). [CrossRef] 23. T. H. Barnes, K. Matsumoto, T. Eijo, K. Matsuda, and N. Ooyama, “Grating interferometer with extremely high stability, suitable for measuring small refractive index changes,” Appl. Opt.30, 745–751 (1991). [CrossRef] [PubMed] 24. A. Bergeron, J. Gauvin, F. Gagnon, D. Gingras, H. H. Arsenault, and M. Doucet, “Phase calibration and applications of a liquid-crystal spatial light modulator,” Appl. Opt.34, 5133–5139 (1995). [CrossRef] [PubMed] 25. Z. Zhang, G. Lu, and F. T. S. Yu, “Simple method for measuring phase modulation in liquid crystal televisions,” Opt. Eng.33, 3018–3022 (1994). [CrossRef] 26. D. Engström, G. Milewski, J. Bengtsson, and S. Galt, “Diffraction-based determination of the phase modulation for general spatial light modulators,” Appl. Opt.45, 7195–7204 (2006). [CrossRef] 27. A. Linnenberger, S. Serati, and J. Stockley, “Advances in Optical Phased Array Technology,” Proc. SPIE6304,63040T (2006). 28. D. Preece, R. Bowman, A. Linnenberger, G. Gibson, S. Serati, and M. Padgett, “Increasing trap stiffness with position clamping in holographic optical tweezers,” Opt. Express17, 22718–22725 (2009). [CrossRef] 29. M. Schadt and W. Helfrich, “Voltage-dependent optical activity of a twisted nematic liquid crystal,” Appl. Phys. Lett.18, 127–128 (1971). [CrossRef] 30. A. Linnenberger and Teresa Ewing, Boulder Nonlinear Systems, 450 Courtney Way, #107 Lafayette, CO 80026, USA (personal communication, February 2013). 31. Software available at http://www.physics.gu.se/forskning/komplexa-system/biophotonics/download/hotlab/ 32. M. Persson, D. Engström, and M. Goksör, “Reducing the effect of pixel crosstalk in phase only spatial light modulators,” Opt. Express20, 22334–22343 (2012). [CrossRef] [PubMed] 33. C. Runge, “Über empirische Funktionen und die Interpolation zwischen äquidistanten Ordinaten,” inZeitschrift für Mathematik und Physik46,R. Mehmke and C. Runge, eds. (Druck und verlag von B. G. Teubner, Leipzig, 1901), 224–243. OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies. In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed. « Previous Article | Next Article »
{"url":"http://www.opticsinfobase.org/vjbo/fulltext.cfm?uri=oe-21-13-16086&id=258392","timestamp":"2014-04-20T08:29:29Z","content_type":null,"content_length":"401393","record_id":"<urn:uuid:2e133a7a-8be4-4db1-8a33-5121bc3b413d>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Semi-digital lab notebooks by PhilipJ on 17 January 2009 Going on a year ago, I talked a bit about digital lab notebooks and their use as a potential replacement for the paper notebooks we all carry around. The idea of a digital notebook is very appealing: searchable, easily archived, portable (if you wanted to place it on the internet), etc. The downsides, however, are also obvious: having a computer next to you on the bench to write is not always ideal, since spilling a solvent on paper causes relatively little harm compared to frying the electronics of a laptop. It is also much slower to do a quick sketch on a computer than on a piece of When I first tried to “go digital”, I set up a wiki on my group’s server and would try to include information in it as I was going about my day. I found it, unfortunately, quite cumbersome. When the kinds of things you are doing at the optical table are alignment, position readings, power measurements, etc, I found it crucial to keep my paper notebook with me to quickly record these numbers and make sketches of the optical elements, and I was loathe to transcribe this information into the wiki later, mostly because scanning or redrawing the figures digitally takes too long. When it comes to data analysis, however, everything has to be done on a computer, as it is just not feasible to plot thousands of data points by hand in a paper notebook. My computer algebra system of choice is Mathematica, and because of Mathematica’s notebook system, it became extremely straightforward to include sufficient commentary among the analysis and calculations. The important “working” details of my day are recorded on paper that is heavy on scribbles, numbers, and comments on the minutiae of a particular instrument or measurement, followed by references to specific data files collected that day. The Mathematica notebooks where I visualize and analyze data are then filled with the relevant comments about the data collection and subsequent analysis, but not usually the random scribbles that you need to keep on paper while leading-up to and actually taking a measurement. Having everything organized by date makes it simple to correlate between paper and digital All this is to say that I’ve found a happy medium between analog and digital data retention. The paper notebooks will remain as a permanent record of the day-to-day activities in the laboratory, while digital notebooks are used to flesh out important collected data. The only downside is that Mathematica is not an open platform, but as long as there are free Notebook readers available, I’ll try not to get too worried.
{"url":"http://biocurious.com/2009/01/17/semi-digital-lab-notebooks","timestamp":"2014-04-19T06:51:57Z","content_type":null,"content_length":"11631","record_id":"<urn:uuid:cbafda84-764d-4c3b-b463-62c15a0b2b5a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
power derivative November 19th 2007, 12:54 PM #1 Jul 2007 power derivative hi, i have two questions. Hoepfully someone can help me with them. 1. Find d/dx(x^2) 2. Find the normal to the curve for y=x^3 at the point (1,1). i assume if you are asking this they want you to do it by the definition? i don't see why this is so hard for you recall, $\frac d{dx}x^n = nx^{n - 1}$ or, by the definition, $\frac d{dx}f(x) = \lim_{h \to 0} \frac {f(x + h) - f(x)}h$ 2. Find the normal to the curve for y=x^3 at the point (1,1). step 1: find the slope at x = 1 by finding the derivative of x^3 and plugging in 1 for x (use the rule i told you above). step 2: take the negative inverse of the answer. this gives you the slope for the normal line (since it is perpendicular to the tangent line, whose slope is given by the derivative). step 3: take this as the value for m, and use the point slope form $y - y_1 = m(x - x_1)$ where $m$ is the slope of the line, and $(x_1,y_1)$ is a point the line passes through. solve for y and you have your answer November 19th 2007, 01:04 PM #2
{"url":"http://mathhelpforum.com/calculus/23135-power-derivative.html","timestamp":"2014-04-16T14:37:44Z","content_type":null,"content_length":"34611","record_id":"<urn:uuid:e060c60e-86fb-4991-b4a6-77a9579c3dd3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
How to simplify after differentiation. May 14th 2009, 11:38 PM #1 Junior Member May 2009 How to simplify after differentiation. I have a problem with simplifying after I differentiate, can anyone enlighten me as what are some key ways in simplifying equations Heres my problem rearranged it. Product Rule h'[t]=2*([-1/2][1+t]^[-3/2]) + (([1+t]^[1/2])*2 now how do i go about in simplifying that? my problem is expanding the brackets with variables and powers. Show me some necessary steps needed How is this the "product rule"? (2t- 1)'= 2 and ((1+ t)^(-1/2)'= (-1/2)(1+ t)^(-3/2). The product rule gives h'= 2(1+t)^(-1/2)- (1/2)(2t-1)(1+t)^(-3/2). To simplify that, I would recomment rewriting the roots as square roots again. Of course, (1+t)^(-3/2)= (1+t)^(-1)(1+t)^(-1/2). $h'= \frac{2}{\sqrt{1+ t}}- \frac{2t-1}{2(1+t)\sqrt{1+t}}$ Now get common denominators and combine the fractions. now how do i go about in simplifying that? my problem is expanding the brackets with variables and powers. Show me some necessary steps needed[/QUOTE] I have a problem with simplifying after I differentiate, can anyone enlighten me as what are some key ways in simplifying equations Heres my problem rearranged it. Product Rule h'[t]=2*([-1/2][1+t]^[-3/2]) + (([1+t]^[1/2])*2 Mr F says: This is wrong. now how do i go about in simplifying that? my problem is expanding the brackets with variables and powers. Show me some necessary steps needed You have not applied the product rule correctly. You should do it again, putting in every step of working. Personally however, I'd use the quotient rule. As with carpentry, the job is always easier if you use the right tool. Thanks. I see where I went wrong May 15th 2009, 02:58 AM #2 MHF Contributor Apr 2005 May 15th 2009, 03:00 AM #3 May 17th 2009, 03:28 PM #4 Junior Member May 2009
{"url":"http://mathhelpforum.com/pre-calculus/89094-how-simplify-after-differentiation.html","timestamp":"2014-04-17T20:08:03Z","content_type":null,"content_length":"41086","record_id":"<urn:uuid:9eb94ef6-8036-4e9b-9f58-b3e29b15be0c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistics Textbook Reviews This page contains a partial list of recent reviews of textbooks used in two typical undergraduate courses: Introductory Statistics and Mathematical Statistics. If there are other reviews that ought to be included in our lists, please send the relevant information to Ann Cannon. Introduction to the Practice of Statistics; Moore and McCabe • Ziegel, Eric R. (1994), Review of ``Introduction to the Practice of Statistics (Second edition)'' by D. S. Moore and G. P. McCabe, Technometrics, 36, 124-125. • Huberty, Carl J. (1991), Review of ``Introduction to the Practice of Statistics'' by D. S. Moore and G. P. McCabe, Journal of Educational and Behavioral Statistics, 16, 77-81. • Arnold, Harvey J. (1990), Review of ``Introduction to the Practice of Statistics'' by D. S. Moore and G. P. McCabe, Technometrics, 32, 347-348. • Calinski, T. (1990), Review of ``Introduction to the Practice of Statistics'' by D. S. Moore and G. P. McCabe, Biometrics, 46, 884-885. • Guthrie, Donald (1990), Review of ``Introduction to the Practice of Statistics'' by D. S. Moore and G. P. McCabe, Journal of the American Statistical Association, 85, 262-263. (return to booklist The Basic Practice of Statistics; Moore • Moore, Leslie M. (1996), Review of ``The Basic Practice of Statistics'' by David S. Moore, Technometrics, 38, 404-405. • Simon, Gary (1996), Review of ``The Basic Practice of Statistics'' by David S. Moore, The American Statistician, 50, 277. (return to booklist) Statistics; Freedman, Pisani, Purves and Adhikari • Lohr, Sharon (1994), Review of ``Statistics (Second edition)'' by D. Freedman, R. Pisani, R. Purves, and A. Adhikari, Technometrics, 36, 119-120. • Rees, D. G. (1993), Review of ``Statistics (Second edition)'' by D. Freedman, R. Pisani, R. Purves, and A. Adhikari, Statistician, 42, 74-75. (return to booklist) Understandable Statistics; Brase and Brase • Low, Elizabeth (1998), Review of ``Understandable Statistics'' by Charles H. Brase and Corrinne P. Brase, American Statistician, 52,198. (return to booklist) Statistics: A First Course; Freund and Simon • Short, Thomas (1996), Review of ``Statistics: A First Course'' by John E. Freund and Gary A. Simon, The American Statistician, 50, 278. (return to booklist) Statistics: Principles and Methods; Johnson and Bhattacharyya • Malone, Linda C. (1997), Review of ``Statistics: Principles and methods (Third edition)'' by R. A. Johnson and G. K. Bhattacharyya, American Statistician, 51, 94. • Morris, Pamela (1994), Review of ``Statistics: Principles and methods (Second edition)'' by R. A. Johnson and G. K. Bhattacharyya, Statistician, 43, 214. (return to booklist) Introductory Statistics; Wonnacott and Wonnacott • Rangecroft, Margaret (1992), Review of ``Introductory statistics (Fifth edition)'' by T. H. Wonnacott and R. J. Wonnacott, Teaching Statistics, 14/3, 31-32. (return to booklist) The New Statistical Analysis of Data; Anderson • Nelson, Paul I. (1997), Review of ``The New Statistical Analysis of Data'' by T. W. Anderson, The Journal of the American Statistical Association, 92, 795-796. • Nelson, Lloyd S. (1996), Review of ``The New Statistical Analysis of Data'' by T. W. Anderson, The Journal of Quality Technology, 28, 485-487. (return to booklist) Introductory Statistics; Ross • Chou, Youn-Min (1997), Review of "Introductory Statistics" by Sheldon Ross, American Statistician, 51, 95. (return to booklist) A Data-Based Approach to Statistics; Iman • Grice, John V. (1996), Review of ``A data-based approach to statistics'' by Ronald L. Iman, Technometrics, 38, 405. (return to booklist) Statistics: Learning in the Presence of Variation; Wardrop • Brill, Bob (1996) Review of "Statistics: Learning in the Presence of Variation" by Wardrop, Technometrics, 38, 405-406. • Rossman, Allan (1995), Review of "Statistics: Learning in the Presence of Variation" by Wardrop, American Statistician, 49, 237-238. (return to booklist) Statistical Methods; Freund and Wilson • Barrett, Linda (1997), Review of ``Statistical methods (Revised Edition)'' by Rudolf J. Freund and William J. Wilson, American Statistician, 51, 296. • Kasparke, Linda (1994), Review of ``Statistical methods'' by Rudolf J. Freund and William J. Wilson, American Statistician, 48, 59. • Wisniewski, Mik (1994), Review of ``Statistical methods'' by Rudolf J. Freund and William J. Wilson, Statistician, 43, 209. • Wong, Aldous (1994), Review of ``Statistical methods'' by Rudolf J. Freund and William J. Wilson, Technometrics, 36, 222. (return to booklist) Statistics and Data Analysis: An Introduction; Siegel and Morgan • Hoeting, Jennifer (1997), Review of ``Statistics and data analysis (Second edition)'' by Andrew F. Siegel and Charles J. Morgan, American Statistician, 51, 93-94. • Melander, Todd (1996), Review of ``Statistics and data analysis (Second edition)'' by Andrew F. Siegel and Charles J. Morgan, Journal of Quality Technology, 28,374-375. (return to booklist) An Introduction to Statistical Methods and Data Analysis; Ott • Ziegel, Eric R. (1994), Review of ``An introduction to statistical methods and data analysis (Fourth edition)'' by R. Lyman Ott, Technometrics, 36, 332. (return to booklist) Statistics: The Conceptual Approach; Iversen and Gergen • Wilson, William, J. (1998) Review of "Statistics: The Conceptual Approach" by Gudmund R. Iversen and Mary Gergen, Technometrics, 40, 77. (return to booklist) Introductory Statistics; Mann • Copeland, Karen A. F. (1995), Review of ``Introductory statistics (Second edition)'' by Prem S. Mann, Journal of Quality Technology, 27, 390-391. (return to booklist) Statistics: Concepts and Controversies; Moore • Wooff, David (1993), Review of ``Statistics: Concepts and controversies (Third edition)'' by David S. Moore, Statistician, 42, 202-203. • Rouncefield, Mary (1992), Review of ``Statistics concepts and controversies (Third edition)'' by David S. Moore, Teaching Statistics, 14/1, 31. (return to booklist) Data Analysis: An Introduction; Witmer • Chen, Hanfeng (1994), Review of ``Data analysis: An introduction'' by Jeffrey A. Witmer, Technometrics, 36, 427. • Dobler, Carolyn Pillers (1993), Review of ``Data analysis: An introduction'' by Jeffrey A. Witmer, American Statistician, 47, 309-310. (return to booklist) An Introduction to Mathematical Statistics and its Application; Larsen and Marx • Kotlovker, Debbi L. (1992), Review of ``Statistics'' by Richard J. Larsen and Morris L. Marx, Technometrics, 34, 245. (return to booklist) Mathematical Statistics with Applications; Mendenhall, Wackerly and Sheaffer • Mikosch, Thomas (1993), Review of ``Mathematical statistics with applications (Fourth edition)'' by W. Mendenhall, D. D. Wackerly, and R. L. Scheaffer, Statistics, 24, 291. • Loukas, S. (1992), Review of ``Mathematical statistics with applications (Fourth edition)'' by W. Mendenhall, D. D. Wackerly, and R. L. Scheaffer, Biometrics, 48, 977. • Toutenburg, H. (1992), Review of ``Mathematical statistics with applications (Fourth edition)'' by W. Mendenhall, D. D. Wackerly, and R. L. Scheaffer, Computational Statistics and Data Analysis, 13, 109. • Young, Karen (1992), Review of ``Mathematical statistics with applications'' by W. Mendenhall, D. D. Wackerly, and R. L. Scheaffer, Applied Statistics, 41, 433. (return to booklist) Statistics: Theory and Methods; Berry and Lindgren • Koehn, Uwe (1997) Review of ``Statistics: Theory and methods (Second edition)'' by D. A. Berry and B. W. Lindgren, American Statistician, 51, 295. • Harper, William V. (1991), Review of ``Statistics: Theory and methods'' by D. A. Berry and B. W. Lindgren, Technometrics, 33, 369- 370. • Fang, J. Q. (1990), Review of ``Statistics: Theory and methods'' by D. A. Berry and B. W. Lindgren, Biometrics, 46, 1236-1237. (return to booklist) Mathematical Statistics and Data Analysis; Rice • Ziegel, Eric R. (1995), Review of ``Mathematical statistics and data analysis (Second edition)'' by John Rice, Tecnometrics, 37, 127. (return to booklist) Introduction to Probability and Statistics; Giri • Garthwaite, Paul (1994), Review of ``Introduction to probability and statistics (Second edition)'' by Narayan C. Giri, Journal of the Royal Statistical Society Series A, 157, 504. • Prvan, Tania (1994) Review of ``Introduction to probability and statistics (Second edition)'' by Narayan C. Giri, Australian Journal of Statistics, 36, 385. • Wehrly, Thomas E. (1994), Review of ``Introduction to probability and statistics (Second edition)'' by Narayan C. Giri, Journal of the American Statistical Association, 89, 1567. (return to Activity-Based Statistics; Scheaffer, Gnanadesikan, Watkins, and Witmer • Peck, Roxy L. (1997) Review of "Activity-Based Statistics" by Richard L. Scheaffer, Mrudulla Gnanadesikan, Ann Watkins, and Jeffrey A. Witmer, American Statistician, 51, 208-209. (return to Workshop Statistics: Discovery with Data; Rossman • Cryer, Jonathan D. (1997), Review of "Workshop Statistics: Discovery with Data" by Allan Rossman, American Statistician, 51, 95-96. • Mulekar, Madhuri S. (1997), Review of "Workshop Statistics: Discovery with Data" by Allan Rossman, Technometrics, 39, 235-236. • Garfield, Joan (1996), Review of "Workshop Statistics: Discovery with Data" by Allan Rossman, The Statistics Teacher Network, Spring 1996. (return to booklist) Elementary Statistics Laboratory Manual; Spurrier, Edwards, and Thombs • Magoun, A. Dale (1996) Review of "Elementary Statistics Laboratory Manual" by John D. Spurrier, Don Edwards, and Lori A. Thombs, Technometrics, 38, 190. (return to booklist) An Introduction to Regression Graphics; Cook and Weisberg • Fox, John D. (1996), Review of ``An introduction to regression graphics'' by Dennis R. Cook and Sanford Weisberg, Chance, 9/1, 53-55. • Hurley, Catherine B. (1996), Review of ``An introduction to regression graphics'' by Dennis R. Cook and Sanford Weisberg, Statistics and Computing, 6, 181-183. • Kemp, C. D. (1996), Review of ``An introduction to regression graphics'' by Dennis R. Cook and Sanford Weisberg, Biometrics, 52, 776-777. • Knapp, Frank (1996), Review of ``An introduction to regression graphics'' by Dennis R. Cook and Sanford Weisberg, Allgemeines Statistisches Archiv, 80, 262-263. (return to booklist) A Casebook for a First Course in Statistics and Data Analysis; Chatterjee, Handcock, and Simonoff • Agresti, Alan (1996), Review of "A Casebook for a First Course in Statistics and Data Analysis" by S. Chatterjee, M.S. Handcock, and J.S. Simonoff, American Statistician, 50, 95-96. • Hassler, Uwe (1996) Review of "A Casebook for a First Course in Statistics and Data Analysis" by S. Chatterjee, M.S. Handcock, and J.S. Simonoff, Computational Statistics and Data Analysis, 23, • Hayden, Robert W. (1996), Review of two collections of data for use in a first course in statistics, American Statistician, 50, 168-169. • Kahn, Michael (1996), Telegraphic Review of "A Casebook for a First Course in Statistics and Data Analysis" by Samprit Chatterjee, Mark S. Handcock, and Jeffrey S. Simonoff, IMS Bulletin, 25, • Kemp, A.W. (1996), Review of "A Casebook for a First Course in Statistics and Data Analysis" by S. Chatterjee, M.S. Handcock, and J.S. Simonoff and "Case Studies in Biometry" by N. Lange, L. Ryan, L. Billiard, D. Brillinger, L. Conquest, and J. Greenhouse (editors), Biometrics, 52, 373-376. • Moorthy, Uma (1996) Review of "A Casebook for a First Course in Statistics and Data Analysis" by S. Chatterjee, M.S. Handcock, and J.S. Simonoff, The Statistician, 45, 265. • Oldford, R. W. (1996) Review of "A Casebook for a First Course in Statistics and Data Analysis" by S. Chatterjee, M.S. Handcock, and J.S. Simonoff, Short Book Reviews, 16, 23. • Snell, J. Laurie (1996), Review of "A Casebook for a First Course in Statistics and Data Analysis" by S. Chatterjee, M.S. Handcock, and J.S. Simonoff, Chance News 5.11. • Stephenson, Paul L. (1996) Review of "A Casebook for a First Course in Statistics and Data Analysis" by S. Chatterjee, M.S. Handcock, and J.S. Simonoff, Journal of Quality Technology, 28, • Tanur, Judith M. (1996) Review of "A Casebook for a First Course in Statistics and Data Analysis" by S. Chatterjee, M.S. Handcock, and J.S. Simonoff and "Case Studies in Biometry" by N. Lange, L. Ryan, L. Billiard, D. Brillinger, L. Conquest, and J. Greenhouse (editors), Chance, 9/2, 40-42. • Wernecke, K. D. (1996) Review of "A Casebook for a First Course in Statistics and Data Analysis" by S. Chatterjee, M.S. Handcock, and J.S. Simonoff, Biometrical Journal, 38, 716. (return to Visualizing Data; Cleveland • Gentleman, Robert (1996), Review of ``Visualizing data'' by W. S. Cleveland, Statistics and Computing, 6, 387-388. • Derr, J. A. (1994), Review of ``Visualizing data'' by W. S. Cleveland, Biometrics, 50, 890. • Gunter, Bert (1994), Review of ``Visualizing data'' by W. S. Cleveland, Technometrics, 36, 314- 315. • Nelson, Lloyd S. (1994), Review of ``Visualizing data'' by W. S. Cleveland (Corr: V26 p334), Journal of Quality Technology, 26, 244- 245. • Welsh, A. H. (1994), Review of ``Visualizing data'' by W. S. Cleveland, Journal of the American Statistical Association, 89, 1136-1138. (return to booklist) A Handbook of Small Data Sets; Hand, Daly, Lunn, McConway, and Ostrowski • Hayden, Robert W. (1996), Review of two collections of data for use in a first course in statistics, American Statistician, 50, 168-169. (return to booklist) This page is being presented locally by Cornell College. [top of page] | [textbook list top of page] | [ASA Section on Statistical Education]
{"url":"http://people.cornellcollege.edu/acannon/stated/reviews.html","timestamp":"2014-04-20T23:56:40Z","content_type":null,"content_length":"18149","record_id":"<urn:uuid:287d7d69-6770-419c-9688-4e055e99451c>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Calculate Compound Interest When someone takes a loan from a bank or other lending institution, they often face difficulty in understanding the concept of calculating interest and when it comes to estimating the compound interest, the task becomes just more difficult. Interest is of two types, simple and compound. If you have availed financing from somewhere and you want to make sure whether your interest is being calculated properly or you are not overcharge, with a little help, you can easily calculate your compound interest. • 1 Understanding of terms It is of pivotal importance that an individual should have the sound knowledge of all the terms involved in lending and borrowing. For instance, principal amount is the figure which an individual actually needs to borrow from someone for a particular reason. On the other hand, interest rate is the amount that is charged by a lender to give the principal amount to the borrower. The interest rate is specified in percentages and represents the yearly rate of interest over a particular principal amount. • 2 Know the difference between simple and compound interest In order to calculate your compound interest, it is of utmost importance that you understand the difference between these two terms otherwise there is a greater chance that you will mix up both of these terms and miscalculate your interest. A simple interest is where an individual pays interest only on his principal amount while in compound interest, the person pays interest not only on his principal amount but also on the interest paid in the past. In compound interest, the lender enjoys more and more money while the borrower has to suffer the extra cost of financing. • 3 Learn the formula There is a simple formula for calculating the compound interest, you must learn it. The formula is: A = P(1 + i)^t. Here, A is the amount that you have to pay in total which comprises of the principal and the compound interest. P is the principal amount, i is the interest rate levied on the principal amount by the lender and t is the term or time for which the amount is lend to a particular borrower. • 4 Put in values Now you should put in values in the formula to calculate your compound interest. For example, if you have taken $2000 at the rate of 10 percent annually for 2 years. Then, A = $2000(1 + 0.10)^2 = $2000 * (1.10)^2 = $2000 * 1.21 = $2420
{"url":"http://www.stepbystep.com/how-to-calculate-compound-interest-53190/","timestamp":"2014-04-20T10:48:47Z","content_type":null,"content_length":"36720","record_id":"<urn:uuid:dd5c4411-579d-4736-8d0f-d85d36823436>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
East Bremerton, WA Precalculus Tutor Find an East Bremerton, WA Precalculus Tutor ...I know math can often be taught in a way that feels hopeless and overwhelming. As your tutor I'll show you that learning math is not only doable but it can be fun! Once you start understanding the 'how's and 'whys' you'll start solving harder and harder problems and your confidence will build :... 17 Subjects: including precalculus, calculus, statistics, geometry ...I am happy to help with many different math classes, from Elementary math to Calculus. I have helped my former classmates and my younger brother many times with Physics. I have been learning French for more than 6 years. 16 Subjects: including precalculus, chemistry, calculus, algebra 2 ...I have experience working with both elementary and middle school students and have tutored them in subjects as diverse as math (including geometry and algebra), English grammar and spelling, biology, and history. As a successful student myself (valedictorian in high school, triple-major in colle... 35 Subjects: including precalculus, English, reading, writing ...In my journey so far I have earned: - Associate of Arts and Sciences degree from Tacoma Community College - Bachelors of Science in Biology from Washington State University - I am very excited to say I have been accepted to medical school and will begin in July of 2014!! While tutoring childr... 25 Subjects: including precalculus, chemistry, algebra 1, physics My goal as a tutor is to see the student excel. I have over four years of experience as a tutor, working with students from the elementary through the college level. When I work with students, I am aiming for more than just good test scores - I will build confidence so that my students know that they know the material. 8 Subjects: including precalculus, calculus, geometry, algebra 1 Related East Bremerton, WA Tutors East Bremerton, WA Accounting Tutors East Bremerton, WA ACT Tutors East Bremerton, WA Algebra Tutors East Bremerton, WA Algebra 2 Tutors East Bremerton, WA Calculus Tutors East Bremerton, WA Geometry Tutors East Bremerton, WA Math Tutors East Bremerton, WA Prealgebra Tutors East Bremerton, WA Precalculus Tutors East Bremerton, WA SAT Tutors East Bremerton, WA SAT Math Tutors East Bremerton, WA Science Tutors East Bremerton, WA Statistics Tutors East Bremerton, WA Trigonometry Tutors Nearby Cities With precalculus Tutor Annapolis, WA precalculus Tutors Bremerton precalculus Tutors Colby, WA precalculus Tutors Enetai, WA precalculus Tutors Marine Drive, WA precalculus Tutors Navy Yard City, WA precalculus Tutors Parkwood, WA precalculus Tutors Rocky Point, WA precalculus Tutors Sheridan Park, WA precalculus Tutors South Park Village, WA precalculus Tutors Waterman, WA precalculus Tutors Wautauga Beach, WA precalculus Tutors West Hills, WA precalculus Tutors West Park, WA precalculus Tutors Westwood, WA precalculus Tutors
{"url":"http://www.purplemath.com/East_Bremerton_WA_precalculus_tutors.php","timestamp":"2014-04-18T00:43:27Z","content_type":null,"content_length":"24575","record_id":"<urn:uuid:cc726c04-ddb0-493d-9fe5-0d77e216cbd0>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Plus Advent Calendar Door #10: Positional genius If it wasn't for the Babylonians who lived millennia ago, paying your bill at a restaurant or checking your credit card bill would be much more painful than it already is. And so would anything else involving numbers. It was these Babylonian scholars who had the brilliant idea of making the value of a numeral dependent on its position in a string of numerals: if you see a 1 at the end of a string, you know it means 1, if it's shifted one to the left, you know it means 10, if it's shifted two to the left, it means 100 and so on. This means that using only the symbols 0 to 9 you can write down every single whole number, no matter how large, rather than having to invent a new symbol every time you go up a order of magnitude. The Babylonian number system actually had 60 as its base, rather than base ten as we do. The numerals we use today, including the use of a symbol (zero) for nothing instead of a gap, stems from Find out more on in the Fabulous positional system and out other articles about number systems.
{"url":"http://plus.maths.org/content/plus-advent-calendar-door-10-positional-genius","timestamp":"2014-04-19T04:26:06Z","content_type":null,"content_length":"23563","record_id":"<urn:uuid:30c76232-9920-461c-a7c9-ff0abc241336>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
MAT: Unreal Decimal Points Back in late September, Christopher Danielson, aka Triangleman, created the "Triangleman Decimal Institute", a six week course looking into the whys and whens of decimal instruction. In this seventh week, he's asked a single question: How can you show the world what you have learned these last several weeks? Stick around through my mini-summaries, at the bottom I talk about calculators. 1: Decimals before fractions? Often the decimal (or at least notation) is taught before fractions. Perhaps with the idea that they are more like whole numbers. I think in the end, I would agree that they are more like whole numbers, particularly in terms of place value, but are different enough to be a problem. 2: Money and decimals. Money is often used to teach decimals. Is this valid? I would say no - dollars and cents are seen by people as two separate units, not an extension down into "parts of dollars". I think something that exacerbates this is that the groupings are different... you have 25 cents, but 20 dollars (unless you're talking Euro). I wonder about the origin of the quarter. 3: Children's experiences with partitioning. Real world knowledge that children bring into classrooms - it tends to start with cutting in half, then half again. Possibly even throwing out a quarter to make things fair between three people. I wonder whether thirds are seen as inherently unfair, because they don't have a "nice" decimal representation (in money or otherwise)? Though I've no means to test this. Just found this similar 12-slicer on the internet. Don't know if that's good enough. 4: Interlude on the slicing of pizzas. Direct documentation of pizzas. I was busy that week, though I learned that I couldn't document a 12 slice pizza, because I had no technology that allowed for instantaneous photographs during my Saturday game night. Also, places in Europe don't serve pre-cut: you choose your own slicing. 5: Grouping is different from partitioning. The idea of starting with parts and combining them, versus starting with a whole and breaking it down, aka moving the decimal to the left or right, aka what makes a "1" (unit). I agree there's a difference, and I don't think decimals come naturally from partitioning. 6: Decimals and curriculum. Common Core State Standards were mentioned in the United States. Similar to Ontario, decimals are introduced in Grade 4, but there they immediately use up to two decimal places. In Ontario, we have one decimal place only, and gradually add another as they move through the subsequent few grades. I started by thinking decimals were subsets of fractions. I'd now say decimals are a SCIENTIFIC measure... which is corrupting the study of mathematics. I'm overdramatizing there, but bear with me. A few people (in Week 5 in particular) remarked that students preferred fractions over decimals. Not necessarily that they preferred fraction OPERATIONS, merely use of fractions. And I think that's what children do when they partition, they break things down in a fractional, more "fair" way. Fractions (pieces) are, in a sense, more natural than If you're starting with "1" unit, you have "one half" or "one third", which is SEEN right in the notation! (1/2) You don't have ".5". You might have "0.5", which finally mentions the whole, but that puts us in the realm of significant digits and scientific notation. Which is more like degrees of magnitude, not parts of a whole. (Magnitude being something we're not good at - consider this ViHart/ Sal Khan video.) Same problem with money, we have two places by definition - unless we're talking about gas prices, which can lead to that crazy picture (in New Jersey) offered up by David Wees. Granted, I bring some of my own bias into these thoughts. I'm reminded of a conversation I had with a student a couple weeks ago, about an exponential model with an asymptote at 21 degrees. They said that the item WOULD eventually be 21 degrees, so how could this be an asymptote? My counterpoint was that it was technically 21.0000000000001 degrees, but our tools cannot be that precise. (Nevermind that a uniform 21 degrees in a room is likely impossible.) The thing is, while fractions are more natural, decimals are our reality. Why? Calculators. More specifically, SCIENTIFIC calculators. These tools work with place value, as they were designed to do. Something that works the same either side of the decimal point, and allows for expressing an answer in scientific notation. (Aside: Which few students understand... what's with 'E-7'?.) Now, can you imagine the difficulty of programming fractions into early calculators?? The idea of incorporating fractions into a calculator came later. Most computer calculators STILL don't have a fraction key!!! My calculator's in 'P' mode. How do I fix it? But that's fine. Calculators do what they were designed to do. The trouble is, what they were designed to do was reduce everything down to decimals. In using them, decimals have become our reality. The fact that we seek out "real world" applications reinforces that. So am I saying DON'T use calculators? Or for that matter, don't use the metric system? Heck no. I'm saying be aware of the following: More numbers exist that CANNOT be expressed using decimals/fractions than numbers that do. We don't generally care about those numbers until Grade 11. By that point, decimals might be the default. So what's the notation for half of root 2? Is 0.5root(2) sufficient? Is that sort of division even "fair"? How easy is that number to estimate? Do you even care? If you don't care, I ask: Is mathematics the study of numbers, or is it the study of "real world applications"? I don't have answers to those questions. Of course, as any scientist would say, I don't think all the evidence is in yet. So that's what I've learned. 2 comments: 1. I had a camera at the last game if you wanted a photo of your pizza. :) 2. Heh. It actually didn't even occur at the time - it was a couple weeks after the 4:Interlude that we ordered that Extra Large. Realized when typing this up that I missed the chance, but then wouldn't have been able to take a photo anyway, what with my aging technology.
{"url":"http://mathiex.blogspot.ca/2013/11/mat-unreal-decimal-points.html","timestamp":"2014-04-17T10:31:05Z","content_type":null,"content_length":"86880","record_id":"<urn:uuid:9ed2d31a-99fe-4e17-a811-9264493aabfe>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2011/227 Robust parent-identifying codes and combinatorial arraysAlexander Barg and Grigory KabatianskyAbstract: An $n$-word $y$ over a finite alphabet of cardinality $q$ is called a descendant of a set of $t$ words $x^1,\dots,x^t$ if $y_i\in\{x^1_i,\dots,x^t_i\}$ for all $i=1,\dots,n.$ A code $\cC=\{x^1,\dots,x^M\}$ is said to have the $t$-IPP property if for any $n$-word $y$ that is a descendant of at most $t$ parents belonging to the code it is possible to identify at least one of them. From earlier works it is known that $t$-IPP codes of positive rate exist if and only if $t\le q-1$. We introduce a robust version of IPP codes which allows {unconditional} identification of parents even if some of the coordinates in $y$ can break away from the descent rule, i.e., can take arbitrary values from the alphabet, or become completely unreadable. We show existence of robust $t$-IPP codes for all $t\le q-1$ and some positive proportion of such coordinates. The proofs involve relations between IPP codes and combinatorial arrays with separating properties such as perfect hash functions and hash codes, partially hashing families and separating codes. For $t=2$ we find the exact proportion of mutant coordinates (for several error scenarios) that permits unconditional identification of parents. Category / Keywords: Combinatorial cryptography; fingerprinting; traitor tracingDate: received 9 May 2011Contact author: abarg at umd eduAvailable format(s): PDF | BibTeX Citation Version: 20110512:034957 (All versions of this report) Discussion forum: Show discussion | Start new discussion[ Cryptology ePrint archive ]
{"url":"http://eprint.iacr.org/2011/227/20110512:034957","timestamp":"2014-04-17T13:16:42Z","content_type":null,"content_length":"2943","record_id":"<urn:uuid:af21050b-8a19-431e-88ac-e8e92a04cd71>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Felix Klein Felix Klein, in full Christian Felix Klein (born April 25, 1849, Düsseldorf, Prussia [Germany]—died June 22, 1925, Göttingen, Germany), German mathematician whose unified view of geometry as the study of the properties of a space that are invariant under a given group of transformations, known as the Erlanger Programm, profoundly influenced mathematical developments. As a student at the University of Bonn (Ph.D., 1868), Klein worked closely with the physicist and geometer Julius Plücker (1801–68). After Plücker’s death, he worked with the geometer Alfred Clebsch (1833–72), who headed the mathematics department at the University of Göttingen. On Clebsch’s recommendation, Klein was appointed professor of mathematics at the University of Erlangen (1872–75), where he set forth the views contained in his Erlanger Programm. These ideas reflected his close collaboration with the Norwegian mathematician Sophus Lie, whom he met in Berlin in 1869. Before the outbreak of the Franco-German War in July 1870, they were together in Paris developing their early ideas on the role of transformation groups in geometry and on the theory of differential equations. Klein later taught at the Institute of Technology in Munich (1875–80) and then at the Universities of Leipzig (1880–86) and Göttingen (1886–1913). From 1874 he was the editor of Mathematische Annalen (“Annals of Mathematics”), one of the world’s leading mathematics journals, and from 1895 he supervised the great Encyklopädie der mathematischen Wissenschaften mit Einschluss iher Anwendungen (“Encyclopedia of Pure and Applied Mathematics”). His works on elementary mathematics, including Elementarmathematik vom höheren Standpunkte aus (1908; “Elementary Mathematics from an Advanced Standpoint”), reached a wide public. His technical papers were collected in Gesammelte Mathematische Abhandlungen, 3 vol., (1921–23; “Collected Mathematical Treatises”). Beyond his own work Klein made his greatest impact on mathematics as the principal architect of the modern community of mathematicians at Göttingen, which emerged as one of the world’s leading research centres under Klein and David Hilbert (1862–1943) during the period from 1900 to 1914. After Klein’s retirement Richard Courant (1888–1972) gradually assumed Klein’s role as the organizational leader of this still vibrant community.
{"url":"http://www.britannica.com/print/topic/319960","timestamp":"2014-04-20T17:02:25Z","content_type":null,"content_length":"10837","record_id":"<urn:uuid:1b5da182-4845-4e16-829e-e69ba83a23c4>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
In case you want to learn (more) Arakelov theory Recently I asked Faltings for some references for a self-study of Arakelov theory beyond Lang’s Arakelov theory. He suggested the following (all revolving around the arithmetic Riemann-Roch theorem): 1. Papers by Gillet and Soule on arithmetic intersection theory 2. Papers by Bismut, Gillet and Soule on determinant of cohomology of an arithmetic variety (towards Riemann-Roch for this determinant.) It seems that these papers are summarized in Soule’s book. Also there is a nice book by Faltings that modesty prevented him from referring it to me: Lectures on the arithmetic Riemann-Roch theorem. Speaking of Arakelov theory, two final comments: (1) The best place to get into the right frame of mind for these type of questions in an elementary setting is Neukirch’s book Algebraic number theory (2) Our fellow student here Nikolai Durov has recently reworked the foundations of the entire theory from the point of view of generalized rings (including exotic objects like the field with one element $\mathbb{F}_1$). His thesis is available here. 1 comment
{"url":"http://vivatsgasse7.wordpress.com/2007/07/10/in-case-you-want-to-learn-more-arakelov-theory/","timestamp":"2014-04-17T03:49:57Z","content_type":null,"content_length":"55059","record_id":"<urn:uuid:37a4460a-d1bd-4e8e-845d-3b69ace4d199>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
solve Uxx-3Uxt-4Utt=0 (hyperbolic) solve Uxx-3Uxt-4Utt=0 with u(x,0)=x^2 and Ut(x,0)=e^x I know that this is hyperbolic since D=(-1.5)^2+4 >0 so I have to transform the variables x and t linearly to obtain the wave equation of the form (Utt-c^2Uxx=0). The above equation is equivalent to: (d/dx - 1.5 d/dt)*(d/dx - 1.5 d/dt)u - 6.25 d^2u/dt^2 = 0 let x=b let t=-1.5b + 2.5a Ub=Ux - (1.5) Ut Ua=2.5 Ut thus Ubb-Uaa=0. This is where I am stuck.. I know the general solution is u(a,b)=f(a+b)+g(a-b) also the explicit solution is u(a,b)=(1/2)*[φ(a+b)+φ(a-b)]*(1/2c)*(integral ψ(s)ds from a-b to a+b). where u(a,0)=φ(a) and Ub(a,0)=ψ(a). The solution is (4/5)*[e^(x+t/4)-e^(x-t)]+x^2+(1/4)*t^2 but how to obtain it?
{"url":"http://www.physicsforums.com/showthread.php?t=46039","timestamp":"2014-04-18T18:25:07Z","content_type":null,"content_length":"27650","record_id":"<urn:uuid:61a858ca-2e14-4fcd-ad4d-6a850433dfda>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Teaching of Frederique Oggier @ NTU This page contains lecture notes, exercises (and solutions), slides, that I have prepared as teaching material. Algebraic Methods Below are lecture notes (including exercises and solutions) on: Algebraic Number Theory Below are lecture notes on: Groups and Symmetries Below are lecture notes (including exercises and solutions) on:
{"url":"http://www1.spms.ntu.edu.sg/~frederique/Teaching.html","timestamp":"2014-04-20T23:27:20Z","content_type":null,"content_length":"5578","record_id":"<urn:uuid:eb513b1c-d692-4f60-9b1c-c6cc7c81c227>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: acceptable to forecast change in wc / net capex by regressing (linear) historical data on revenues? thanks Best Response You've already chosen the best response. You can do a regression, but you will almost certainly have a very low correlation coefficient given the low number of data points, and the typical variation in the data from year to year. So don't put too much stock in the accuracy of the regression analysis. Also, whether you are using regression or some other subjective or objective approach to weighting or trending the historical data, I would suggest forecasting working capital line-by-line with appropriate drivers for each line item. So accounts receivable and inventory are typically driven by sales. Accrued liabilities are driven by certain expense categories. I'd also recommend calculating the typical working capital ratios for the historical period such as inventory turnover, A/R days, A/P days etc. These help make more intuitive assumptions going forward and are more easily compared with peer companies. Technically, the most proper way to calculate the historical ratios uses average balances (e.g., sales / ((beginning inventory + ending inventory))/2), but this can cause problems when forecasting (see footnote below). Capital expenditures are probably best forecast from discussions with management, or in the case of public companies, there is typically at least some disclosure of expectations about expansion through organic growth and acquisitions - whether from the company or research analysts. Footnote: Let's take inventory turnover as an example. If you calculate inventory turnover using average balances of inventory, then your forecast of inventory would presumably be driven by a ratio interpreted the same way. So you would be using the forecasted inventory turnover to calculate the average balance of inventory in a given year. You then have to back into the ending inventory. This always creates a see-saw effect when forecasting, where one year is below the average, then the next year is above the average, etc. So then you have changes in working capital that swing dramatically from year to year. To avoid this problem, I tend to use a simpler version of these ratios which is calculated on ending balances only. Best Response You've already chosen the best response. thank you Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4dcc0d9c8bbf8b0b1ed0f17a","timestamp":"2014-04-19T07:21:19Z","content_type":null,"content_length":"32035","record_id":"<urn:uuid:25d1a459-00d8-4cba-ae46-16b7e478d1ec>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
Felix Schönbrodt September 6, 2012 Filed in: R | Statistics Update Oct-23: Added a new parameter to the function. Now multiple groups can be plotted in a single plot (see example in my comment) As a follow-up on my R implementation of Solomon’s watercolor plots, I made some improvements to the function. I fine-tuned the graphical parameters (the median smoother line now diminishes faster with increasing CIs, and the shaded watercolors look more pretty). Furthermore, the function is faster and has more features: • You can define any standard regression function for the bootstrap procedure. □ vwReg(y ~ x, df, method=lm) □ vwReg(y ~ x + I(x^2), df, method=lm) • Provide parameters for the fitting function. □ You can make the smoother’s span larger. Then it takes more points into account when doing the local fitting. Per default, the smoother fits a polynomial of degree two – that means as you increase span you will approach the overall quadratic fit: vwReg(y ~ x, df, span=2) □ You can also make the smoother’s span smaller, then it takes less points for local fitting. If it is too small, it will overfit and approach each single data point. The default span (.75) seemed to be the best choice for me for a variety of data sets: vwReg(y ~ x, df, span=0.5) □ Use a robust M-estimator for the smoother; see ?loess for details: vwReg(y ~ x, df, family=”symmetric”) • Provide your own color scheme (or, for example, a black-and-white scheme). Examples see pictures below. • Quantize the color ramp, so that regions for 1, 2, and 3 SD have the same color (an idea proposed by John Mashey). At the end of this post is the source code for the R function. Some picture – please vote! Here are some variants of the watercolor plots – at the end, you can vote for your favorite (or write something into the comments). I am still fine-tuning the default parameters, and I am interested in your opinions what would be the best default. Plot 1: The current default Plot 2: Using an M-estimator for bootstrap smoothers. Usually you get wider confidence intervalls. Plot 3:Increasing the span of the smoothers Plot 4: Decreasing the span of the smoothers Plot 5: Changing the color scheme, using a predefined ColorBrewer palette. You can see all available palettes by using this command: library(RColorBrewer); display.brewer.all() Plot 6: Using a custom-made palette Plot 7: Using a custom-made palette; with the parameter bias you can shift the color ramp to the “higher” colors: Plot 8: A black and white version of the plot Plot 9: The anti-Tufte-plot: Using as much ink as possible by reversing black and white (a.k.a. “the Milky-Way Plot“) Plot 10: The Northern Light Plot/ fMRI plot. This plotting technique already has been used by a suspicious company (called IRET – never heard of that). I hurried to publish the R code under a FreeBSD license before they can patent it! Feel free to use, share, or change the code for whatever purpose you need. Isn’t that beautiful? Plot 11: The 1-2-3-SD plot. You can use your own color schemes as well, e.g.: vwReg(y~x, df, bw=TRUE, quantize=”SD”) Any comments or ideas? Or just a vote? If you produce some nice plots with your data, you can send it to me, and I will post a gallery of the most impressive “data art”! # Copyright 2012 Felix Schönbrodt # All rights reserved. # # FreeBSD License # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER `AS IS'' AND ANY # EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR # PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR # CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, # EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR # PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF # LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING # NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # The views and conclusions contained in the software and documentation # are those of the authors and should not be interpreted as representing # official policies, either expressed or implied, of the copyright # holder. # Version history: # 0.1: original code # 0.1.1: changed license to FreeBSD; re-established compability to ggplot2 (new version 0.9.2) ## Visually weighted regression / Watercolor plots ## Idea: Solomon Hsiang, with additional ideas from many blog commenters # B = number bootstrapped smoothers # shade: plot the shaded confidence region? # shade.alpha: should the CI shading fade out at the edges? (by reducing alpha; 0 = no alpha decrease, 0.1 = medium alpha decrease, 0.5 = strong alpha decrease) # spag: plot spaghetti lines? # spag.color: color of spaghetti lines # mweight: should the median smoother be visually weighted? # show.lm: should the linear regresison line be plotted? # show.CI: should the 95% CI limits be plotted? # show.median: should the median smoother be plotted? # median.col: color of the median smoother # shape: shape of points # method: the fitting function for the spaghettis; default: loess # bw = TRUE: define a default b&w-palette # slices: number of slices in x and y direction for the shaded region. Higher numbers make a smoother plot, but takes longer to draw. I wouldn'T go beyond 500 # palette: provide a custom color palette for the watercolors # ylim: restrict range of the watercoloring # quantize: either "continuous", or "SD". In the latter case, we get three color regions for 1, 2, and 3 SD (an idea of John Mashey) # add: if add == FALSE, a new ggplot is returned. If add == TRUE, only the elements are returned, which can be added to an existing ggplot (with the '+' operator) # ...: further parameters passed to the fitting function, in the case of loess, for example, "span = .9", or "family = 'symmetric'" <- function(formula , B , shade TRUE, shade. .1, spag FALSE, spag. , mweight TRUE, show. FALSE, show. median = TRUE, median. col = "white" , shape = 21 , show. FALSE, method , bw FALSE, slices , bias , ylim NULL, quantize = "continuous" , add FALSE, ... ) { <- all.vars(formula)[2] <- all.vars(formula)[1] data <- na.omit(data[order(data[ , IV IV, DV )]) if ( ) { palette <- colorRampPalette(c("#EEEEEE" , bias =2)(20) } print("Computing boostrapped smoothers ...") <- data.frame(seq(min(data[ , IV , IV )) colnames( ) <- boot <- matrix( data) for ( in 1: ) { <- data[sample(nrow(data) , IV ] if (class( )=="loess") { , data2, control = loess.control( = "i" , statistics , trace. , ... ) } else { , data2, ... ) } , i ] <- predict( m1, newdata ) } # compute median and CI limits of bootstrap library( ) library( boot <- ) quantile( x, prob .025, .5, .975, , na. -1] colnames( boot)[1:10] <- c("LL" , paste0 # scale the CI width to the range 0 to 1 and flip it (bigger numbers = narrower CI) <- ( - min( <- 1-( )) # convert bootstrapped spaghettis to long format 1] colnames( ) <- c("index" "x") library( ) library( ) # Construct ggplot # All plot elements are constructed as a list, so they can be added to an existing ggplot # if add == FALSE: provide the basic ggplot object , aes_string IV, y )) + () # initialize elements with NULL (if they are defined, they are overwritten with something meaningful) tiles <- poly <- spag <- median <- CI1 <- CI2 <- lm <- points <- title <- if ( ) { <- match.arg( "SD")) if ( == "continuous") { print("Computing density estimates for each vertical cut ...") flush.console() if (is.null( )) { <- min(min( , na. , DV , na. <- max(max( , na. , DV , na. <- c( min_value, max_value ) } # vertical cross-sectional density estimate , . function(df) { <- data.frame(density(df $value, na. TRUE, n slices, from , to "y")]) #res <- data.frame(density(df$value, na.rm=TRUE, n=slices)[c("x", "y")]) colnames( ) <- c("y" "dens") return( ) } , . <- max( <- min( scaled <- ( ## Tile approach factor <- tiles <- list( d2, aes x, y y, fill , alpha , scale_fill_gradientn , scale_alpha_continuous 1))) } if ( == "SD") { ## Polygon approach , paste0 , id. <- 0 <- data.frame() col <- c(1 1) for ( in 1:6) { , i , i <- rbind( seg1, seg2 <- col[ + 1 <- rbind( d3, seg ) } poly <- list( d3, aes x, y value, color NULL, fill , group , scale_fill_gradientn , values 1))) } } print("Build ggplot figure ...") flush.console() if ( ) { spag <- b2, aes x, y value, group , size , alpha B, color color) } if ( median == ) { if ( ) { median <- , aes x, y M, alpha , size .6, linejoin , color col) } else { median <- , aes x, y , size = 0.6 , linejoin , color col) } } # Confidence limits if ( CI == ) { CI1 <- , aes x, y , size , color CI2 <- , aes x, y , size , color ="red") } # plain linear regression line if ( ) { lm <- , color , se points <- , aes_string IV, y , size , shape shape, fill , color ="black") if (title != "") { title <- (title=title) } elements <- list( , gg. , gg. , gg. , gg. , gg. , gg. , gg. , gg. , theme position="none")) if ( ) { return( elements) } else { return( elements) } } September 5, 2012 Filed in: IRET | Psych | R | Statistics Dear valued customer, it is a well-known scientific truth that research results which are accompanied by a fancy, colorful fMRI scan, are perceived as more believable and more persuasive than simple bar graphs or text results (McCabe & Castel, 2007; Weisberg, Keil, Goodstein, Rawson, & Gray, 2008). Readers even agree more with fictitious and unsubstantiated claims, as long as you provide a colorful brain image, and it works even when the subject is a dead salmon. The power of brain images for everybody What are the consequence of these troubling findings? The answer is clear. Everybody should be equipped with these powerful tools of research communication! We at IRET made it to our mission to provide the latest, cutting-edge tools for your research analysis. In this case we adopted a new technology called “visually weighted regression” or “watercolor plots” (see here, here, or here), and simply applied a new color scheme. But now, let’s get some hands on it! The example Imagine you invested a lot of effort in collecting the data of 41 participants. Now you find following pattern in 2 of your 87 variables: You could show that plain scatterplot. But should you do it? Nay. Of course everybody would spot the outliers on the top right. But which is much more important: it is b-o-r-i-n-g! What is the alternative? Reporting the correlation as text? “We found a correlation of r = .38 (p = .014)”. Yawn. Or maybe: “We chose to use a correlation technique that is robust against outliers and violations of normality, the Spearman rank coefficient. It turned out that the correlation broke down and was not significant any more (r = .06, p = .708).”. Don’t be silly! With that style of scientific reporting, there would be nothing to write home about. But you can be sure: we have the right tools for you. Finally, the power of pictures is not limited to brain research – now you can turn any data into a magical fMRI plot like that: Isn’t that beautiful? We recommend to accompany the figure with an elaborated description: “For local fitting, we used spline smoothers from 10`000 bootstrap replications. For a robust estimation of vertical confidence densities, a re-descending M-estimator with Tukey’s biweight function was employed. As one can clearly see in the plot, there is significant confidence in the prediction of the x =0, y=0 region, as well as a minor hot spot in the x=15, y=60 region (also known as the supra-dextral data region).” Magical Data Enhancer Tool With the Magical Data Enhancer Tool (MDET) you can … • … turn boring, marginally significant, or just crappy results into a stunning research experience • … publish in scientific journal with higher impact factors • … receive the media coverage that you and your research deserve • … achieve higher acceptance rates from funding agencies • … impress young women at the bar (you wouldn’t show a plain scatterplot, dude?!) Q: But – isn’t that approach unethical? A: No, it’s not at all. In contrast, we at IRES think that it is unethical that only some researchers are allowed to exploit the cognitive biases of their readers. We design our products with a great respect for humanity and we believe that every researcher who can afford our products should have the same powerful tools at hand. Q: How much does you product cost? A: The standard version of the Magical Data Enhancer ships for 12’998 $. We are aware that this is a significant investment. But, come on: You deserve it! Furthermore, we will soon publish a free trial version, including the full R code on this blog. So stay tuned! Best regards, Lexis “Lex” Brycenet (CEO & CTO Research Communication) International Research Enhancement Technology (IRET) August 30, 2012 Filed in: R | Statistics [Update 1: Sep 5, 2012: Explore the Magical Data Enhancer by IRES, using this visualization technique] [Update 2: Sep 6, 2012: See new improved plots, and new R code! Solomon Hsiang proposed an appealing method for visually displaying the uncertainty in regressions (see his blog [1][2], and also the discussions on the Statistical Modeling, Causal Inference, and Social Science Blog [1][2]). I implemented the method in R (using ggplot2), and used an additional method of determining the shading (especially concerning Andrew Gelman’s comment that traditional statistical summaries (such as 95% intervals) give too much weight to the edges. In the following I will show how to produce plots like that: I used following procedure: 1. Compute smoothers from 1000 bootstrap samples of the original sample (this results in a spaghetti plot) 2. Calculate a density estimate for each vertical cut through the bootstrapped smoothers. The area under the density curve always is 1, so the ink is constant for each y-slice. 3. Shade the figure according to these density estimates. Now let’s construct some plots! The basic scatter plot: No we show the bootstrapped smoothers (a “spaghetti plot”). Each spaghetti has a low alpha. That means that overlapping spaghettis produce a darker color and already give weight to highly populated Here is the shading according to the smoother’s density: Now, we can overplot the median smoother estimate for each x value (the “median smoother”): Or, a visually weighted smoother: Finally, we can add the plain linear regression line (which obviously does not refelct the data points very well): At the end of this post is the function that produces all of these plots. The function returns a ggplot object, so you can modify it afterwards, e.g.: vwReg(y~x, df, shade=FALSE, spag=TRUE) + xlab("Implicit power motive") + ylab("Corrugator activity during preparation")[/cc] Here are two plots with actual data I am working on: The correlation of both variables is .22 (p = .003). A) As a heat map (note: the vertical breaks at the left and right end occur due to single data points that get either sampled or not during the bootstrap): B) As a spaghetti plot: Finally, here's the code (sometimes the code box is collapsed - click the arrow on the top right of the box to open it). Comments and additions are welcome. : I removed the code, an updated has been published title="Visually weighted/ Watercolor Plots, new variants: Please vote!" > ( see at the of the post Tags: dataviz | ggplot2 | regression | visualization Recent Comments • What are the top 100 (most downloaded) R packages in 2013? (from simple statistics) | Baker Chen on Finally! Tracking CRAN packages downloads
{"url":"http://www.nicebread.de/tag/dataviz/","timestamp":"2014-04-16T07:53:05Z","content_type":null,"content_length":"112464","record_id":"<urn:uuid:ca408ada-6ef0-4ff1-889d-64281d38f034>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
A pyramidologist's value for pi Posted by: Dave Richeson | April 27, 2011 A pyramidologist’s value for pi Recently I came across two theories about the design of Great Pyramid of Giza. • If we construct a circle with the altitude of the pyramid as its radius, then the circumference of the circle is equal to the perimeter of the base of the pyramid. Said another way, if we build a hemisphere with the same height as the pyramid, then the equator has the same length as the perimeter of the pyramid. • Each face of the pyramid has the same area as the square of the altitude of the pyramid. Apparently these are favorite mathematical facts (especially the first one) for pyramidologists who look for mathematical relations in the measurement of the pyramids that help justify their cultish belief in the mystical power of the pyramids. Of course we should separate the mathematical properties of the pyramids that may have been legitimate design decisions by the architects, from the crazy meanings that are often attached to them. I have no training in the history of Egyptian mathematics or in the history of the pyramids, so I can’t really assess their likelihood of being true (my guess: the first one is an amazing coincidence, the second is more likely to be intentional). However, one interesting fact is that if the first one was intentional, then they were using the value 3.143 for pi, which is significantly better than the value found in the Egyptian Rhind papyrus (3.16), which was written 600-800 years after the construction of the pyramids. Just for fun, here are a few mathematical exercises: 1. Check these facts using the actual measurements of the pyramid (you can take altitude to be 146.6 meters and the length of one side of the pyramid to be 230.4 meters). They are indeed remarkably 2. Assume that the first one is true. Use the measurements given in (1) to show that the architects were using the value 3.143 for pi. 3. Assume that we have a pyramid for which both of these facts are true. Show that this would imply that Has anyone seen this approximation for pi before? I didn’t find it after performing a quick search of the internet. [Update, another way of writing this approximation is $4\sqrt{1/\varphi}$, where $\ varphi$ is the golden ratio.] [The photograph of the Pyramid of Giza is from Wikipedia.] Interesting post! Also, I enjoy reading this blog a lot! About the pi approximation at the end, I did a little research on it and here is what I’ve found: The last equation can be written as pi = 4*sqrt(phi-1) , which can also be written as 4/pi = sqrt(phi) According to Wikipedia, this relation is coincidental: By: Wei Dai on April 27, 2011 at 11:41 pm • Thanks! When I saw the square root of 5 and the 2′s I wondered if there was a golden ratio connection. I’m sure the pyramidologists love that! By: Dave Richeson on April 28, 2011 at 7:53 am • However, Please refer to : Panagiotis Stefanides By: Panagiotis Stefanides on February 6, 2012 at 9:22 am show that the architects were using the value 3.143 for pi. is it more likely that they were “using the value 3.143″, or that they were using “2 wheel diameters tall, and one wheel rotation across” and the deviation is due to imprecise construction techniques? By: jeff on June 9, 2011 at 2:47 pm From my calculations I believe this is indeed the true value for pi. Panagiotis Stefanides demonstrates two examples of this this quite clearly at his website: By: arunski on October 3, 2011 at 4:32 am • Dear Arunski I thank you. Panagiotis Stefanides. By: PANAGIOTIS STEFANIDES on November 28, 2011 at 6:33 am Posted in Math, Puzzle | Tags: numerology, pi, Pyramid of Giza
{"url":"http://divisbyzero.com/2011/04/27/a-pyramidologists-value-for-pi/","timestamp":"2014-04-20T15:52:24Z","content_type":null,"content_length":"68491","record_id":"<urn:uuid:959e4aa6-2920-46a8-97eb-a904d982ea6d>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Fundamental Physics Prize Fundamental Physics Prize Foundation announces nominees for $3 million prize The winner of the 2014 $3 million Fundamental Physics Prize and six winners of the 2014 $3 million Breakthrough Prizes in Life Sciences will be announced on December 12 November 5, 2013 (New York): The Fundamental Physics Prize Foundation today announced the 2014 winners of the Physics Frontiers Prizes and New Horizons in Physics Prizes. The prizes recognize transformative achievements in the field of fundamental physics and aim to provide recipients with more freedom and opportunity to pursue future accomplishments. The laureates of the 2014 Physics Frontiers Prize are: • Joseph Polchinski, KITP/University of California, Santa Barbara, for his contributions in many areas of quantum field theory and string theory. His discovery of D-branes has given new insights into string theory and quantum gravity, with consequences including the AdS/CFT correspondence. • Michael B. Green, University of Cambridge, and John H. Schwarz, California Institute of Technology, for opening new perspectives on quantum gravity and the unification of forces. • Andrew Strominger and Cumrun Vafa, Harvard University, for numerous deep and groundbreaking contributions to quantum field theory, quantum gravity, string theory and geometry. Their joint statistical derivation of the Bekenstein-Hawking area-entropy relation unified the laws of thermodynamics with the laws of black hole dynamics and revealed the holographic nature of quantum Laureates of the 2014 Frontiers Prize now become nominees for the 2014 Fundamental Physics Prize. Those who do not win it will each receive $300,000 and will automatically be re-nominated for the next 5 years. The laureates of 2014 New Horizons in Physics Prize are: • Freddy Cachazo, Perimeter Institute, for uncovering numerous structures underlying scattering amplitudes in gauge theories and gravity. • Shiraz Naval Minwalla, Tata Institute of Fundamental Research, for his pioneering contributions to the study of string theory and quantum field theory; and in particular his work on the connection between the equations of fluid dynamics and Albert Einstein’s equations of general relativity. • Vyacheslav Rychkov, CERN/Pierre-and-Marie-Curie University/École Normale Supérieure, for developing new techniques in conformal field theory, reviving the conformal bootstrap program for constraining the spectrum of operators and the structure constants in 3D and 4D CFT’s. The New Horizons Prize is awarded to up to three promising researchers, each of whom will receive $100,000. The winner of the 2014 Fundamental Physics Prize will be announced on December 12, 2013 in San Francisco, along with the winners of the 2014 Breakthrough Prize in Life Sciences. Media Contacts About the Prizes The Fundamental Physics Prize Foundation is a not-for-profit corporation established by the Milner Foundation and dedicated to advancing our knowledge of the Universe at the deepest level by awarding annual prizes for scientific breakthroughs, as well as communicating the excitement of fundamental physics to the public. According to the Foundation’s rules, laureates of all prizes are chosen by a Selection Committee, which is comprised of prior recipients of the Fundamental Physics Prize. The Selection Committee for the 2014 prizes included: • Nima Arkani-Hamed • Lyn Evans • Fabiola Gianotti • Alan Guth • Stephen Hawking • Joseph Incandela • Alexei Kitaev • Maxim Kontsevich • Andrei Linde • Juan Maldacena • Alexander Polyakov • Nathan Seiberg • Ashoke Sen • Edward Witten Additional information on the Fundamental Physics Prize is available at: www.fundamentalphysicsprize.org.
{"url":"https://fundamentalphysicsprize.org/news7","timestamp":"2014-04-19T02:48:30Z","content_type":null,"content_length":"7336","record_id":"<urn:uuid:7efe697c-ba03-43d8-9a41-add13ef2a911>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
Cayley Contest I just looked into old Cayley Contests and found that 1997 problem #15 had all four multiple choice answers incorrect, so I sent Waterloo an email. Woops, I made a mistake, I assumed the length of the integer was all zeros except for the first digit, but actually their solution is right. Oops. Last edited by John E. Franklin (2005-12-26 12:00:54)
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=2297","timestamp":"2014-04-18T08:10:06Z","content_type":null,"content_length":"15357","record_id":"<urn:uuid:7a63a1b7-ed9c-482b-873e-9cb5830003f0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Best Response You've already chosen the best response. find the slope between (0,-4) and (2,-1) Best Response You've already chosen the best response. Best Response You've already chosen the best response. ok I got 3/2 now what Best Response You've already chosen the best response. which choice has 3/2 as the slope? Best Response You've already chosen the best response. so that's your answer :) Best Response You've already chosen the best response. can you help me on this question Write an equation in slope-intercept form of the line with the given slope and point: slope = 3 and (1, -2). A. y = 3x - 5 B. y = 3x - 2 C. y = 3x + 1 D. y = 3x + Best Response You've already chosen the best response. put the information in point-slope form, then change it into slope-intercept form Best Response You've already chosen the best response. how do i make it into point slope form Best Response You've already chosen the best response. point-slope form is y - y1 = m(x - x1) Best Response You've already chosen the best response. I know that but how to I fill that in to then make it slope intercept form Best Response You've already chosen the best response. you put the numbers in first, then you can rearrange to make it into slope-intercept Best Response You've already chosen the best response. where would I put the numbers Best Response You've already chosen the best response. they give you the point (1, -2). so -2 would be y1 and 1 would be x1. they also give you the slope, 3. so 3 would be m Best Response You've already chosen the best response. would y and x not y1 and x1 but y and x become 0 Best Response You've already chosen the best response. also would it look like this y--2=3(x-1) Best Response You've already chosen the best response. yes. y + 2 = 3(x - 1). now distribute the 3 into the parentheses Best Response You've already chosen the best response. no, to distribute, you have to multiply 3 with x, and 3 with -1 Best Response You've already chosen the best response. so y+2=3-3 Best Response You've already chosen the best response. it would be 3x, not just 3 :) y + 2 = 3x - 3. now, what should we do to get it in the form of y = mx + b? it's our last step Best Response You've already chosen the best response. subtract 2 from each side Best Response You've already chosen the best response. would that be the right step Best Response You've already chosen the best response. Best Response You've already chosen the best response. so it's either b or c Best Response You've already chosen the best response. so the answer is..? Best Response You've already chosen the best response. when i calculate it it comes up with y=3x-1 but that isn't an answer Best Response You've already chosen the best response. did i do that wrong Best Response You've already chosen the best response. what's -3 - 2? Best Response You've already chosen the best response. Best Response You've already chosen the best response. thanks if I have any other questions I will send you a message Best Response You've already chosen the best response. Best Response You've already chosen the best response. on this question I came up with c is that right Write the equation y - 2 = 4(x + 5) in standard form. A. 4x + y = 7 B. 4x - y = 3 C. 4x - y = -22 Best Response You've already chosen the best response. yes, it's c :) Best Response You've already chosen the best response. is this question answer c Write the point-slope form of the equation passing through (5, -1) with a slope of 6. A. y-1=6(x+5) B. y+5=6(x-1) C. y+1=6(x-5) D. y-5=6(x+1) Best Response You've already chosen the best response. yes, that's also c Best Response You've already chosen the best response. is this c to Find the direct variation equation which passes through (0, 0) and (4, 1). A. y = 4x B. y = -4x C. y = 1/4 x D. y = -1/4 x Best Response You've already chosen the best response. yes. it's c lol. Best Response You've already chosen the best response. is this one d Write an equation in slope intercept form of the line that passes through (4, 1) and (5, -1). A. y = -4x + 1 B. y = -1/2 x + 9 C. y = 4x + 1 D. y = -2x + 9 Best Response You've already chosen the best response. Best Response You've already chosen the best response. that's a Best Response You've already chosen the best response. find the slope between (0,0) and (3,2) Best Response You've already chosen the best response. ok found it thanks! Best Response You've already chosen the best response. is this b Find the direct variation equation of the graph through the points (0, 0) and (1, -2). Write in y=kx form. A. y = 2x B. y = -2x C. y = 1/2 x D. y = -1/2 x Best Response You've already chosen the best response. yes, b Best Response You've already chosen the best response. How would I do this step by step Write the slope-intercept equation of the line perpendicular to y = 5/2 x - 2, which passes through the point (0, 2). A. y = 5x + 2 B. y = -2/5x + 2 C. y = 2x + 5 D. y = 1/2x + 5 Best Response You've already chosen the best response. perpendicular lines have negative reciprocal slopes. so our line should have a slope of -2/5 Best Response You've already chosen the best response. now we can put that info into point-slope form Best Response You've already chosen the best response. oh lol. you didn't even need to do any math Best Response You've already chosen the best response. Best Response You've already chosen the best response. would this be c Write the slope-intercept form of the equation parallel to y = 7x + 2, which passes through the point (1, -3). A. y = 7x - 10 B. y = -1/7 x + 2 C. y = 7x - 3 D. y = -7x + 10 Best Response You've already chosen the best response. well it wouldn't be d. parallel lines share the same slope, so it's either a or c Best Response You've already chosen the best response. it's a Best Response You've already chosen the best response. ok how would i get that on my own? step by step Best Response You've already chosen the best response. put it into point-slope form, then change it to slope-intercept form Best Response You've already chosen the best response. y + 3 = 7(x - 1) y + 3 = 7x - 7 y = 7x - 10 Best Response You've already chosen the best response. ok would this be b The slope of the line perpendicular to y = 3/4 x - 7 is A. 3/4 B. -4/3 C. 4/3 D. -3/4 Best Response You've already chosen the best response. yes, that's b Best Response You've already chosen the best response. how would i figure this out Is a shape with the vertices (2, 4), (2, -2), (-1, -2), and (-2, 4) best classified as a parallelogram, rectangle, or neither? Best Response You've already chosen the best response. ugh. i hate these questions Best Response You've already chosen the best response. I almost started to cry when I saw that question Best Response You've already chosen the best response. lol. i would plot it out first before i did the distance formula Best Response You've already chosen the best response. which do you think? Best Response You've already chosen the best response. yes :) Best Response You've already chosen the best response. Which statement is best represented by http://roads.advancedacademics.com/contentserver/content/roadssection/277825/questions/5-hw2/677.gif A. One fourth times a number is at least 12. B. One fourth times a number is greater than 12. C. One fourth times a number is at most 12. D. One fourth times a number is less than 12. Best Response You've already chosen the best response. I was think b or c Best Response You've already chosen the best response. at most means less than or equal to Best Response You've already chosen the best response. so it's not c Best Response You've already chosen the best response. and it would be b if it there wasn't a line underneath the inequality Best Response You've already chosen the best response. but since there is...it's not b Best Response You've already chosen the best response. and we know it's not d... Best Response You've already chosen the best response. so it must be... Best Response You've already chosen the best response. Solve the following: b + 4 < -8. A. b < -12 B. b < -4 C. b > 12 D. b > 4 Best Response You've already chosen the best response. which do you think? Best Response You've already chosen the best response. no. what's -8 - 4? Best Response You've already chosen the best response. yes, so it's a. can you make a new question thread now? lol sorry, but when it gets too long, i start to lag Best Response You've already chosen the best response. thanks and ok Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50be608ce4b09e7e3b85c071","timestamp":"2014-04-19T17:39:20Z","content_type":null,"content_length":"250381","record_id":"<urn:uuid:58ee69b1-b53d-4cfc-896e-e0d8ecc3475c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: The Mathematical Nature of Einstein's Contributions to Twentieth Century Science Matt Insall montez at rollanet.org Fri Jan 21 03:36:04 EST 2000 In an earlier post, I suggested that I would argue that, according to one ``definition'' of Mathematics presented in this forum, Einstein's major contributions to twentieth century science are mathematical in nature. In this post, I intend to present such an argument. The definition to which I refer is the following(posted by Professor Mycielski, and referred to on 12/22/99 by Professor Sazonov): Mathematics is a kind of *formal engineering*, that is engineering of (or by means of, or in terms of) formal systems serving as "mechanical devices" accelerating and making powerful the human thought and intuition (about anything - abstract or real objects or whatever we could imagine and discuss). The theories of relativity (both special and general relativity), and of quantum mechanics (also greatly influenced by Einstein's work), are formalizable `systems serving as "mechanical devices" accelerating and making powerful the human thought and intuition (about anything - abstract or real objects or whatever we could imagine and discuss)', and, with the advent of quantum logic and with the axiomatic approaches to the mathematical apparatus used to expound these theories, are becoming more ``formalized'' all the time. The fact that Einstein's work is considered a part of ``foundational studies'', as Professor Friedman so aptly put it, does not diminish the Mathematical flavour of Einstein's work (or that of his contemporaries and his and their academic descendants). On the contrary, since a significant portion of Mathematics *is the* foundations of Physics, the very fact that the relativistic and quantum mechanics are ``foundational'' puts them in the realm of Mathematics. It seems to me that the less Mathematical parts of Einstein's work was the (certainly nontrivial) recognition that appropriate experimental results which were at that time new and disturbing to the contemporary theory of light could be taken as axioms in a new theory of light that would then explain other significant experimental results of that time. I have no doubt that when the theory of relativity has been around long enough, it will be axiomatized, as is geometry, both euclidean and non-euclidean. This will make it a ``formal system'', and therefore, a bit of Mathematics. I do not agree, though, with the contention that this classification of Einstein's work as ``Mathematical'' in any way trivializes his major accomplishments. For I do not see the need to separate Mathematics from the rest of human existence quite so dramatically: To a great extent, Mathematics is a human endeavour to better understand our universe, whether the part of our universe in question is physical or psychological, or is metaphysical or is mainly the universe of our systems of reasoning. The rejection of FOM by some ``core mathematicians'' not withstanding, FOM is, IMHO, a part of Mathematics, which may, as Professor Friedman seems to have suggested, eventually not reside in Mathematics Departments, but in departments devoted to ``foundational studies''. It is, in fact, a beautiful and significant part of Mathematics, and so, as has been pointed out in this forum, should be supported by the rest of the Mathematics community. Name: Matt Insall Position: Associate Professor of Mathematics Institution: University of Missouri - Rolla Research interest: Foundations of Mathematics More information: http://www.umr.edu/~insall More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2000-January/003613.html","timestamp":"2014-04-17T22:02:33Z","content_type":null,"content_length":"6045","record_id":"<urn:uuid:68e052a2-98b5-4e18-bfae-6132369b7663>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
Cubics, squares. June 11th 2007, 11:02 AM Going Postal Cubics, squares. The problem: The volume of a rectangular box is 192 centimeters. What is the height of the box if the area of the base is 48 square centimeters. How do I solve this? Am I supposed to turn the square centimeters into cubic CMs? If so, how? (I know to find the volume of a rectangle to go V=lw) Am I even on the right track? Basically I have no idea what in the world I'm doing. :D June 11th 2007, 11:27 AM The problem: The volume of a rectangular box is 192 centimeters. What is the height of the box if the area of the base is 48 square centimeters. How do I solve this? Am I supposed to turn the square centimeters into cubic CMs? If so, how? (I know to find the volume of a rectangle to go V=lw) Am I even on the right track?... the volume of a cuboid is calculated by: $V = \underbrace{length \cdot width}_{\text{base area}} \cdot height$ Plug in the values you already know into the above given equation: $192 cm^3 = 48 cm^2 \cdot height\ \Longrightarrow \ height = \frac{192 cm^3}{48 cm^2} = 4 cm$
{"url":"http://mathhelpforum.com/geometry/15850-cubics-squares-print.html","timestamp":"2014-04-19T00:19:22Z","content_type":null,"content_length":"5206","record_id":"<urn:uuid:dcae9a21-9392-4ca7-b020-f11933788057>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum: Teacher2Teacher - Q&A #19309 View entire discussion From: Ralph (for Teacher2Teacher Service) Date: Feb 04, 2008 at 10:15:02 Subject: Re: Interpreting remainders in division Hi Sachia, Thank you for writing to T2T. The challenge for many teachers and students is that when a division problem is put into a context, the remainder can be interpreted in many different ways (and NONE of them are the usual notation that students learn when doing division of "R___" (whatever the remainder is!) For example, if a car holds 5 passengers, and 32 passengers need transportation, how many cars will be needed? The division produces an answer of 6 cars, with 2 passengers left over, but since you wouldn't want to leave anyone behind, you'd need another car. Answer: 7 cars. Note that the remainder is "rounded up" even though it's less than 1/2. But consider this problem: You have 32 apples, and 5 apples are needed to make an apple pie. How many pies can you make? The answer would be that you could make 6 pies (and yes, there would be 2 apples left over). Change the problem one more time. You have 32 inches of ribbon, and want to make 5 awards. How long will each award ribbon be? In this example, you use all 32 inches of material, so each ribbon would be 6 2/5 inches long. In this case the "remainder" would be written as a fraction or decimal and wouldn't really be "a remainder" at all :) Sorry for such a long answer to such a short question, but the answer boils down to -- how the remainder is interpreted depends on the context of the Hope this helps, -Ralph, for the T2T service Post a public discussion message Ask Teacher2Teacher a new question
{"url":"http://mathforum.org/t2t/message.taco?thread=19309&message=2","timestamp":"2014-04-17T08:37:54Z","content_type":null,"content_length":"5767","record_id":"<urn:uuid:5928f4ce-813f-49ad-b898-ac9c7f302d57>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply This suddenly felt impossible: How do I transform these three equations... (1): xz=bc (2): cos(y) + xsin(y)=0 (3): zsin(y) + xzcos(y) - c(a+b)=0 (three unknown x,y,z with three equations, always solvable right!? But i only want to solve this for x!!!) (I've looked it through carefully, so it shouldn't be any errors in there) ...to either of these too: answer, version1: x=1/√((a/b+1)²-1) , version2: x=tan(arcsin(b/(a+b)) This is how far I've got: (1) can be rewritten to: x=bc/z , new(4) (2) -"- : x=cos(y)/sin(y) , new(5) (3) -"- : z=c(a+b)/(sin(y)+xcos(y)) ,new(6) (6) in (4) --> x=b(sin(y)+xcos(y))/(a+b) , new(7) (5) in (7) --> cos(y)/sin(y)=b(sin(y)+cos²(y)/sin(y))/(a+b) --> [multiplying both sides with sin(y)] cos(y)=b/(b+c) So, finally: I'm getting y=arccos(b/(a+b)), BUT HOW DO I PROCEED... It's a dead end. I don't know what more to do... Maybe you do?
{"url":"http://www.mathisfunforum.com/post.php?tid=1223&qid=11523","timestamp":"2014-04-17T03:55:11Z","content_type":null,"content_length":"17455","record_id":"<urn:uuid:2803ca74-36cc-49d9-836f-b4bfbf1838ca>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Great Neck Geometry Tutor Find a Great Neck Geometry Tutor ...I think I'm pretty personable and down to earth, but also knowledgeable and structured. I believe that everybody learns at his or her own pace and with his or her own style and that one's instructor needs to be cognizant of this. Although I have taken several upper level engineering courses, I ... 16 Subjects: including geometry, reading, English, calculus ...My senior year of high school I took AP Statistics and became absolutely intrigued by the idea that math could be manipulated in such strange ways. People just plug numbers in to equations and report them as if they were "statistics." The problem with this is that they lack the analysis part of ... 11 Subjects: including geometry, Spanish, algebra 2, algebra 1 ...I have two master degrees (physics and math) and have very deep understanding of physics and math concepts. I have my own way to present difficult concepts in an easiest way to make sure even low level students can understand. I am very patient and very nice to students and have very good reputation. 12 Subjects: including geometry, calculus, algebra 2, algebra 1 ...I have had great success tutoring GMAT both independently and for GMAT prep companies. I've found that for me, it takes about 6-9 weeks on average of working with a student to get to an 80-100 point improvement, and I can work with Quant, Verbal, or both. Background: I have a BS in Electrical Engineering from MIT and an MBA with Distinction from the University of Michigan. 11 Subjects: including geometry, calculus, algebra 1, algebra 2 ...As part of the master's program, I had to do many high level courses in discrete math. I have also taught discrete math at the university level. Specifically, at HofstraUniversity as an adjunct instructor of math. 11 Subjects: including geometry, calculus, statistics, algebra 2 Nearby Cities With geometry Tutor Bayside, NY geometry Tutors Great Nck Plz, NY geometry Tutors Great Neck Estates, NY geometry Tutors Great Neck Plaza, NY geometry Tutors Kensington, NY geometry Tutors Kings Point, NY geometry Tutors Lake Success, NY geometry Tutors Little Neck geometry Tutors Manhasset geometry Tutors North Hills, NY geometry Tutors Plandome, NY geometry Tutors Port Washington, NY geometry Tutors Russell Gardens, NY geometry Tutors Saddle Rock, NY geometry Tutors Thomaston, NY geometry Tutors
{"url":"http://www.purplemath.com/Great_Neck_Geometry_tutors.php","timestamp":"2014-04-16T04:16:47Z","content_type":null,"content_length":"24180","record_id":"<urn:uuid:b639c216-fa47-47a4-8839-3d6a3c85377d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
Dif EQ T/F State whether the following are true or false. Justify your choice. a.) Given that $y = e^{-t}(\cos{t} + \sin{2t})$ solves a 2nd order, linear, homogeneous ordinary Dif EQ (ODE), the ODE in equation would be of the form $ay'' + by' + cy = 0$ where $a,b,c \in \ mathbb{R}$ are constants. b.) Given that $y_1$ and $y_2$ are fundamental sol'ns to a 2nd order, linear, homogenous ordinary dif. eq (ODE) with each solution defined on an open interval $I$. On all $I$, the Wronskian $W (y_1,y_2)$ is either strictly positive or it is strictly negative.
{"url":"http://mathhelpforum.com/calculus/24006-dif-eq-t-f.html","timestamp":"2014-04-18T09:43:45Z","content_type":null,"content_length":"49481","record_id":"<urn:uuid:496c8de6-f4f0-42c1-9a26-6e6e79ce15c3>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
A theory of eveything in 5 pages of program code! 2011-Dec-08, 01:21 AM #1 Join Date Dec 2011 A theory of eveything in 5 pages of program code! This doesn’t try to mimic everyone's observations, like most universe simulations do.... this provides ROOT CAUSALITY on the most fundamental levels of arithmetic, geometry and other exchanges of numbers. These numbers represent a "common currency" of exchange between heat, radiation, motion, geometric shape, mass, gravity, and "orderliness". The systems PRE-ASSUMES that finite limitations are an implicit part of the actual implementation... that not only is the universe: 1. Composed of a finite number of particles. 2. Contained within a finite space. 3. Of a non-eternal age (at least since the last big bang), THAT IT ALSO: 4. Works in discrete space (a finite-size digitized grid of INTEGER numbers is the set of all locations). 5. Works in discrete time (in a series of sequential steps I call "ticks of time"). Note that the first 3 of these were only proven true within my own lifetime. I’m claiming that the other 2 are true as well, as is the direction the general scientific consensus is heading more and more by the day. In trying to create a virtual reality within these simple 2 additional constraints (both of which, by the way, are unavoidable in any computer simulation, actually in any REALISTIC implementation), I believe that I have made some important discoveries on how the speed of light might be implemented in our own Universe. I was able to seperate the relative value of these currencies from the resolution the system is run in (similar to the way chemists have been doing it with atomic numbers and mass). Being my first post on this forum, I will wait for a reply before I go on and on and on. I have put > 4 man-years of work into this effort, and I have a lot to show anyone interseted, all prepared for immediate review. Thanks for your time. I don't exactly agree with the premise; I don't think any of it has been "proven." They are the strongest theories at present. But please go on. . . As above, so below Hoo boy, are you holding onto your hat? But, hey WAIT, you don't agree with the premise (you belive in the space time continuum)? I can prove that right here and now in a BRAND NEW new way that just came to me yesterday. When you look at my latest paper, I give a visual example of blocks (representing either matter or a theorietic place where matter virtualy is). If matter were real it wouild take up real space. Virtual space is just a set of imaginary points with zero beef. OK? Now, try to add a layer of blocks around the outside of a "real" chunk, you wind up surrounding it like this: SHOWN IN 1 - D: 1 x 3 oxo 5 oxxxo 7 oxxxxxo Now visalize this in 3-d, and you wind up with: do those numbers look familiar?, me niether. lets try it the digital way... lets try this experiment using the virtual locations, the CORNERS of the beef, where it there..., NOW there isn't a little lump to grow around, so now we can just do it staright up from the edges and corners. SHOWN IN 1 - D: 2 oo 4 oooo 6 oooooo Now visalize this in 3-d, and you wind up with: do those nmbers look familiar? if each represented a quantum layer of "energy" as those levels change, you get Schrodinger. You get didly in the "Real" universe... the numbers don't work when any spatial matter gets in the way. hey i just made this up, first time! proof of a digital universe is what i live for!! Last edited by Marty Wollner; 2011-Dec-08 at 10:05 PM. Reason: Type-o in the second 1dimensional image Are you ready for this? This starts out as an email to a scientist from theoryofeverything.com explaining my newest ideas. His web site has a "crackpot index" that I mention in the preface. PLEASE give this some time. If you can find ANY piece of non-scientific fact anywhere in this paper, I will mail you a free golf disc. The universe will end up in the shape of a perfect cube. All matter will be queue up in the outermost layer, and all energy will have been isolated and gathered by a big black hole sitting at the center, too far away for matter to get to. When ALL particles are in position, in the next tick of time they are allowed to advance 1 grid location, which simultaneously moves every one of them back into the rollover location 0. This is the "pre-big-bang" configuration, and in the following tick, ka-boom. I have quantified the entire set of exchanges all the way, and believe everything will add up to 0 (no gain/loss in entropy) over the entire span, at the correct resolution. PLEASE let everyone know about this, i would like some feedback. thanks! The visible universe has a finite number of particles, but the entirety of the universe might or might not have an infinite number of particles. The entirety of the universe might or might not be spatially infinite. Anyway, is your idea falsifiable? Does your idea make useful predictions? "The problem with quotes on the Internet is that it is hard to verify their authenticity." — Abraham Lincoln I say there is an invisible elf in my backyard. How do you prove that I am wrong? The Leif Ericson Cruiser The systems PRE-ASSUMES that finite limitations are an implicit part of the actual implementation... that not only is the universe: 1. Composed of a finite number of particles. 2. Contained within a finite space. 3. Of a non-eternal age (at least since the last big bang), Note that the first 3 of these were only proven true within my own lifetime. No, they weren't. There is no sign that the universe is of finite size...it is either infinite or so large that the curvature across the visible portion is too small to measure with current instruments. There is no reason to think there's a finite number of particles in the universe...if it's infinite, this is most likely not true. This doesn’t try to mimic everyone's observations, like most universe simulations do.... this provides ROOT CAUSALITY on the most fundamental levels of arithmetic, geometry and other exchanges of numbers. These numbers represent a "common currency" of exchange between heat, radiation, motion, geometric shape, mass, gravity, and "orderliness". The systems PRE-ASSUMES that finite limitations are an implicit part of the actual implementation... that not only is the universe: 1. Composed of a finite number of particles. 2. Contained within a finite space. 3. Of a non-eternal age (at least since the last big bang), THAT IT ALSO: 4. Works in discrete space (a finite-size digitized grid of INTEGER numbers is the set of all locations). 5. Works in discrete time (in a series of sequential steps I call "ticks of time"). Note that the first 3 of these were only proven true within my own lifetime. I’m claiming that the other 2 are true as well, as is the direction the general scientific consensus is heading more and more by the day. In trying to create a virtual reality within these simple 2 additional constraints (both of which, by the way, are unavoidable in any computer simulation, actually in any REALISTIC implementation), I believe that I have made some important discoveries on how the speed of light might be implemented in our own Universe. I was able to seperate the relative value of these currencies from the resolution the system is run in (similar to the way chemists have been doing it with atomic numbers and mass). Being my first post on this forum, I will wait for a reply before I go on and on and on. I have put > 4 man-years of work into this effort, and I have a lot to show anyone interseted, all prepared for immediate review. Thanks for your time. So when are you going to show some meat? Everything you are saying is in sanskrit! Please note that the discussion needs to happen here. Links can be made to supporting text, but the meat needs to be here. Thank you, members of cosmoquest forum, you are a part of my life I value. Your writing style is really not going to help your case. I doubt anyone is going to read more than the first couple of sentences of that before it goes in the round filing system. The universe will end up in the shape of a perfect cube. This wouldn't be because you have chosen a cube as your simulation grid, would it? Have you included boundary conditions? What if you don't, does it still end up as a cube within the boundaries (unlikely). Are the dimensions of your cube limited by the resolution of your calculations? (I do hope you are not using floating point!) It isn't clear if you have come up with this simulation technique yourself (in which case, well done - but you might want to read up some optimization techniques and how to ensure your simulation will converge, etc). The idea that the universe is just a cellular automaton has been discussed before: http://en.wikipedia.org/wiki/Cellula...ysical_reality I don't find it particularly convincing (or useful) though. Thanks for the comments. I'm not sure if you read the link or not, but the moderator wants me to try presenting everything right here. I appologize for my writing style, the concepts I'm presenting are more logical than analytical so I feel anyone CAN understand this and so its written for the general public. THE ENTIRE POINT of the paper is to show exactly what your asking about... confining C into its own system. And the result I came up with points to a totally different approach to undestanding reality. STUFF CAN SIT STILL WHILE OTHER STUFF GETS MOVED. The overlaps in timing ARE NOT accounted for in any of the contiuum-based equations we have for physics. I can walk you through this right here on this formum without any links or "scanscrcit". I would love to do it. Say Go. Last edited by Marty Wollner; 2011-Dec-08 at 10:08 PM. Reason: correct spelling of moderator This doesn’t try to mimic everyone's observations, like most universe simulations do.... this provides ROOT CAUSALITY on the most fundamental levels of arithmetic, geometry and other exchanges of numbers. These numbers represent a "common currency" of exchange between heat, radiation, motion, geometric shape, mass, gravity, and "orderliness". The system PRE-ASSUMES that finite limitations are an implicit part of the actual implementation... that not only is the universe: 1. Composed of a finite number of particles. 2. Contained within a finite space. 3. Of a non-eternal age (at least since the last big bang), THAT IT ALSO: 4. Works in discrete space (a finite-size digitized grid of INTEGER numbers is the set of all locations). 5. Works in discrete time (in a series of sequential steps I call "ticks of time"). Note that the first 3 of these were only proven true within my own lifetime. I’m claiming that the other 2 are true as well, as is the direction the general scientific consensus is heading more and more by the day. In trying to create a virtual reality within these simple 2 additional constraints (both of which, by the way, are unavoidable in any computer simulation, actually in any REALISTIC implementation), I believe that I have made some important discoveries on how the speed of light might be implemented in our own Universe. I was able to seperate the relative value of these currencies from the resolution the system is run in (similar to the way chemists have been doing it with atomic numbers and mass). Being my first post on this forum, I will wait for a reply before I go on and on and on. I have put > 4 man-years of work into this effort, and I have a lot to show anyone interseted, all prepared for immediate review. Thanks for your time. Oh my. Four years of work? Oh my. First, see the tag line below. We make observations, then develop a theory based on those observations, then use the theory to make predictions, then evaluate the results. Then back to the start. You have deliberately ignored the observation part. You have a theory in search of supporting observations. The cart is before the horse. It will quickly run off the road. Secondly, as others have pointed out, none of your first three assumptions is true. Number four is meaningless, since points are by definition dimensionless and therefore cannot fill space. Between any two points, INTEGER or otherwise, there are an infinite number of other points. As for number five, IIRC, time appears to flow continuously even below the Planck interval. So, aside from all the underlying assumptions being incorrect, you have an interesting idea, treating the universe as a sort of 3-D version of John Conway’s game of Life. On a very small scale, ignoring random quantum events, it might work. Your thoughts on this? Regards. John M. I'm not a hardnosed mainstreamer; I just like the observations, theories, predictions, and results to match. "Mainstream isn’t a faith system. It is a verified body of work that must be taken into account if you wish to add to that body of work, or if you want to change the conclusions of that body of work." - korjik I have always liked, and will argue for the theroum; The Universe is Finite, but Unbound. That would seem to be in conflict with your conclusions.. Your points 1, 2, 3. are not accepted by me as true. Those are Not facts concluded as true. BUT... do not be discouraged from continuing to talk of your thoughts and conclusions.. Like any true advocate of science.. I will look at it. Thanks for your interest, John. I dont think I will have to prove the premise of digitized space, that will have to be assumed by my being able to digitally replicate the double slit experiment and explain quantum entanglement stright up in a digital simulation. It happens IMPLICITLY from this program's activities!!! NO KIDDING! I expect to see the doubtful tones I'm picking up here. I don't want to revert back to "sandscrit" but I'm demonstrating ROOT CAUSALITY. For example, If someone asks me how I can prove that the Universes my simple formula create demonstrate characteristics of parabolic motion, I CAN NOW DO IT. The thing is, the only way to "PROVE" it is to run the program and look at the motions of the componets. The "laws of parabolic motion" are NOT programmed in, they result from EMERGENCE from these basic causal actvites. In the time it takes you to read this, you could read about 1/10 of the TOTAL explanation of this simple theory. That's my point. Its really simple. The obsevations observed from it aren't and that's what everyone on most science forums are all hung up on. OK, so now OBVIOUSLY its up to me to show this to the world, RIGHT? Hey I just came up with this new speed of light breakthrough within the past few weeks, and my goal now is to do just that. Right now I do have a program (posted on the web, free to download ) that demostates my first few "rules of motion") I'm in the procss of making the new "complete" version as an educational tool, allowing the user to select any number of possible ways to execute the program... I will now go ahead and provide everyone with lsson number 1, the obvious mechanism of virtual reality: Moderator: I am seperating this into small topics of discussion. I appologize for this "external link", but I dont want to waste my time re-formatting these chapters into this BB text styles. If this is unacceptable, please inform me, but really, there isn't all that much more to it! thanks.! Last edited by Marty Wollner; 2011-Dec-12 at 08:17 PM. Reason: repair links No attempts to measure an absolute rest frame have succeeded, and current theory is that there isn't one, that every object in an inertial frame has an equal right to consider its frame to be "the rest frame". And what "overlaps in timing" aren't accounted for? Nothing can move anywhere between 1/2 C and C. I prove this in my paper. Its REALLY SIMPLE! That means that slower motion is accomplished by waiting a certain number of TICKS and then moving ONE GRID LOCATION. The stuff moving at C can be attenuated at a "frequency" by WAITING (freq) number # of ticks, then jumping (freq) number # grid positions in all directions, thus appearing to grow outward as a virtual cube shape each freq. ticks jumping out freq size. When the first one hits anything all of them terminate. All are appearing at the same instant. If the photon intercepts multiple substrate concurrently, it will programatically select only one to act upon (otherwise, it would be violting the laws of conservation of energy). This, however, explains why observing the experiment affects it, it's the same photon at the same instant, and the photo detection cell on one side of the room is absorbing the one they can't observe. While waiting, things overlap and this is why we THINK we need to explain our observtions using relativity. WE DONT, its simply the fact that nothing travels faster than 1 GL / TICK PERIOD.When somebody shines a light from a train traveling at a slow speed of ONE GL PER 1000 ticks, and shines a light forward, the photon will begin traversing the grid at 1 GC / tick, and thats the observed speed from anywhere in the U. without bending any time or making it any more complex than what it is.... SIMPLE!!!! Its mind-boggling. Last edited by Marty Wollner; 2011-Dec-09 at 09:25 AM. Reason: (freq) number # grid positions for radiation, that's why it jumps from spot to spot, explainig the double-slit exp. I'm either to excited not to shout or I'm nuts. If you can find any flaws in my implementation, go ahead and call me a screaming idiot. But please give it a fair shake. Thanks! In reality, it is not all that difficult to accelerate particles to velocities in this range. How do you reconcile this with your "proof"? That means that slower motion is accomplished by waiting a certain number of TICKS and then moving ONE GRID LOCATION. The stuff moving at C can be attenuated at a "frequency" by WAITING (freq) number # of ticks, then moving ONE grid position. While waiting, things overlap and this is why we THINK we need to explain our observtions using relativity. WE DONT, its simply the fact that nothing travels faster than 1 GL / TICK PERIOD. When somebody shines a light from a train traveling at a slow speed of ONE GL PER 1000 ticks, and shines a light forward, the photon will begin traversing the grid at 1 GC / tick, and thats the observed speed from anywhere in the U. without bending any time or making it any more complex than what it is.... SIMPLE!!!! Your example would seem to indicate that the train would see the light travel 999 GC ahead of it after 1000 ticks. How do you reconcile this with the result seen in real world measurements, that the train sees the light travel the same distance regardless of its motion? Also, you give the train in your example a speed in "grid positions per tick". This seems to imply that there is an absolute coordinate system and absolute time. How do you explain the differences in time experienced by clocks traveling different paths and our consistent failure to detect any sign of absolute motion? Perhaps the real world isn't as simple as you think. The 1/2 C to C thing... I can make my program do it, but it introduces the theory of relativity into the fold... actually, I think its really another way to answer the question of digital vs. analog U. Here is my explanation: The observations we are making in the close proximity of an accelerator is done within the same time frame it and everything else runs in. In order to make observations between 1/2 C and C I'm guessing there might be some overlapping of timing caused by a unique situation introduced. As a matter of fact I just wrote an small explanation as to why the speed of light "observed to travel faster than C" in my last paper, and its the same thing, really. Its small enough to include right here: Light traveling faster than C Very recent discoveries have been made indicating that the speed of light appears to have been slightly exceeded. I think I have an answer for this: The observance time frame must be synchronized with the real activities occurring. I suggest that these new observations occur in very peculiar scenarios that cause the prioritization of processing activities to occur in atypical orderings. For example, if such a situation occurs, it might cause a photon that is being created to be delivered before it would typically be delivered… the result is that the size of the wavelength itself might appear to be added onto the observed overall wave delivery. Because of my new approach to defining how light traverses the grid by pausing (frequency number of ticks) and then jumping (frequency count of grid locations), it becomes even more obvious that this indeed might be the case. 34.1.1 Proof: All they need to do is try to correlate the newly observed “faster than C” speed with a physical distance and ask, “Is this extra distance equal to the wavelength size itself”? How about it being proportional to the wavelength size? I bet it is!!! 34.1.2 Implications: And hey, you know what, perhaps that can help shed some light on figuring out our universal parameters as well. Dang, over the past 5 years I really have gained a handle on figuring this stuff 35 The New Irony: 35.1 (At least 3) parameters are replaced by 1 The irony is, I’m just discovering there are no parameters in many cases…. There is no speed of light, there is no grid size, and there is no tick duration…. It all just happens implicitly because of these very simple new ways of implementing the speed of light into the sequential processing. All 3 of those suddenly got eliminated when we made the switch to these new speed methods… ALL 3 GET REPLACED by our new single “scaling factor”: WordSize. If you look at some of the suppositions of modern theoryofeverything the concept of digitization of time and running the U. in a sequnce of steps is an accpted basis for the science. I'm not going to try to answer this question any more. Perhaps I'm on the wrong thread here. I'm only trying to provide a reason for causality!! Not only that, there are many that feel it is possible that there is anend to numbers, and I'm proposing that this happes exactly at the last prime number possible. I'm basin this on my interpretation of the theory of arithmatic which kind of allows for it in discontinous space which s what I'm taliking about here. Not only that, (hey at least I'M NOT SHOUTING) the entire program itself must be written the way it is in order to run the complete universe life cycle that i'm proposing: Orderliness – Mass and spatial displacement (one time during the big bang) Mass – Gravity Gravity – Kinetic energy Kinetic energy – collisions (pv-nrt) Colllisions – HEAT Heat - Changes in Geomerty Changes in geometry – quantum heat exchanges and radiation Radiation –> mass (one way only in the black hole) Mass – organization ... i'm claiming that I didnt write it, it just is. My statements on how this might apply to OUR universe are strictly supposition based upon my efforts to actually get the thing to run, which I actually did. The material I keep harping on has the psedo code for the entire program right there in it. thanks for your interest. Sorry, that was a strong statement, but I do try proving it. I have this explained in another post I'm waiting to appear. In the accelerators, the efffect I'm explaining in this other post gets incrementally added upon itself perhaps each loop though, appearing to stretch the observation timeframe. That's just it... the train is pretty much standing still while the light takes off at ONLY 1 GC / tick (overall, it actally jumps freq GC / freq ticks). If the train's velocity was 1 gc /1000 ticks, and we flash just as we pass the guy on the side, it doesnt "thrust this momentum onto the light" because it travels at 1 GC/tick in any case, everywhere. After 100 ticks, the light's range is 100 GC away and the train is still here. After 800 ticks, same thing. Upon the 1000th tick, the train moves ONE GC, so now its position is 1 GC closer. The guy on the side of the road, he was traveling at 0 gc per tick. After 100 ticks, he's still there, SAME AS THE TRAIN (there's yer overlap). after 1000 ticks, SAME THING, both he and the train observe it to be 1000 GC away!!!! !!! In the very next tick, the train moves 1 GC. Do I have to keep going? Do you GET IT, its not GRID LOCATIONS PER TICK, EVER. The grid itself is the speed of light, there is no definition of a speed of light in this system at all!! I made the exact same mistake. My book explains THIS EXACT THING !! That's what its about! Also, you give the train in your example a speed in "grid positions per tick". This seems to imply that there is an absolute coordinate system and absolute time. How do you explain the differences in time experienced by clocks traveling different paths and our consistent failure to detect any sign of absolute motion? Not sure about those clocks yet, but the detection of motion is ALSO explaed in this other post. No. Its a lot simpler. Its just what your seeing doesnt make sense. I guess. Thanks for you interest, CJ ! Of course they look familiar. I had no idea what you were doing, until I recognized 125 as 5 cubed, and 343 as 7 cubed. 1 is also 1 cubed, so I'm pretty sure you made a mistake there and should have 27 instead of 9. Now visalize this in 3-d, and you wind up with: do those nmbers look familiar? if each represented a quantum layer of "energy" as those levels change, you get Schrodinger. Well, you have 0 cubed, 2 cubed, 4 cubed--the next should be 6 cubed, the way you're stacking things, which is 216 not 256. You get didly in the "Real" universe... the numbers don't work when any spatial matter gets in the way. hey i just made this up, first time! You get the same relationship both ways, if you don't make mistakes. Parabolic motion won't emerge from the 5 assumptions you have provided so far. At each time step, you are updating the positions of each cell. That requires more rules specifying how one cell is affected by the contents of other cells - adjacent and perhaps further away. You have implicitly programmed the resulting motion into those rules. How about summarizing them here? All of the links in your post were borken... Sory about this fast explanation to confusion. We actually need to count the number of blocks in the outer layer of the cube being created. Plaese see my paper on this. Thanks for your reply. LOVE TO!! This is the pseudocode for the actual program, no kiddding; For each dimension d, 1 to dimension_count: For each record in VP list: Read it into X.Location(d), X.heat(d), X.mass(d) For each record other than X: Read it into Y.Location(d), Y.heat(d), Y.mass(d) Are they co-occupying? -> Nuclear Force Rule Are they close? -> Are they plasma? -> Fusion (Not plasma) -> ATOMIC force rule Otherwise, Newtonian force rule Next Y: Next X: Next d: That was the theory of everything RIGHT THERE. The subroutines are SIMPLE too, for example here is the ENTIRE function for newtonia motion: For each dimension d REM: NOT NEEDED IF CALLED FROM LOOP For each record X: Read it into X.Location(d), X.heat(d), X.mass(d) DragSum = 0 UsedCounter = 0 For each other record Y: Read it into Y.Location(d), Y.heat(d), Y.mass(d) If (abs (X.Location – Y.Location) < _ 2** WordSize)) then Rem: Count this one and accumulate drag for it UsedCounter = UsedCounter + 1 DragSum = DragSum + _ ((X.mass + Y.mass) /(X.Location – Y.Location)) Rem: It’s too far away, forget it Next Y If (UsedCounter > 0) then Rem: move the VP by gravity X.Location(d) = X.Location(d) + _ (DragSum / UsedCounter) End if Next X Next d REM: NOT NEEDED when called from process loop Of course they look familiar. I had no idea what you were doing, until I recognized 125 as 5 cubed, and 343 as 7 cubed. 1 is also 1 cubed, so I'm pretty sure you made a mistake there and should have 27 instead of 9.Well, you have 0 cubed, 2 cubed, 4 cubed--the next should be 6 cubed, the way you're stacking things, which is 216 not 256. He's probably cubing powers of 2, not multiples of 2. 0, 1, 2, 4, 8, 16...it'd fit with his computer theme. As long as you ignore non-binary digital computers, anyway. That's just it... the train is pretty much standing still while the light takes off at ONLY 1 GC / tick (overall, it actally jumps freq GC / freq ticks). If the train's velocity was 1 gc /1000 ticks, and we flash just as we pass the guy on the side, it doesnt "thrust this momentum onto the light" because it travels at 1 GC/tick in any case, everywhere. After 100 ticks, the light's range is 100 GC away and the train is still here. After 800 ticks, same thing. Upon the 1000th tick, the train moves ONE GC, so now its position is 1 GC closer. The guy on the side of the road, he was traveling at 0 gc per tick. After 100 ticks, he's still there, SAME AS THE TRAIN (there's yer overlap). after 1000 ticks, SAME THING, both he and the train observe it to be 1000 GC away!!!! !!! In the very next tick, the train moves 1 GC. Do I have to keep going? Do you GET IT, its not GRID LOCATIONS PER TICK, EVER. The grid itself is the speed of light, there is no definition of a speed of light in this system at all!! I made the exact same mistake. My book explains THIS EXACT THING !! That's what its about! I don't care about your book, you need to explain your theory here. In relativity and in reality, two clocks can go on different paths through spacetime and be brought back together after experiencing different amounts of time. There is no universal time. There is no universal rest frame and no absolute motion, an object in inertial motion will always measure the same internal physics, and can always consider itself to be at rest. You can't even define simultaneity universally...events one observer sees as taking place at the same time will happen at different times according to another observer. Time does not progress in global ticks, and distances are not measured in global intervals. To have any chance of success, your model must reproduce these effects...does it? Where? So far I haven't seen a real attempt at explaining anything. As far as I'm concerned, you have not answered any of my questions. Are you serious? When observations disagree with your theory, your response is that the observations don't make sense? You don't get to pick and choose which parts of reality you like. 2011-Dec-08, 03:48 AM #2 2011-Dec-08, 04:35 AM #3 Join Date Dec 2011 2011-Dec-08, 04:56 AM #4 Join Date Dec 2011 2011-Dec-08, 05:33 AM #5 2011-Dec-08, 05:48 AM #6 Join Date Dec 2011 2011-Dec-08, 06:06 AM #7 2011-Dec-08, 06:25 AM #8 Established Member Join Date Jan 2010 Wisconsin USA 2011-Dec-08, 07:10 AM #9 2011-Dec-08, 12:58 PM #10 2011-Dec-08, 05:03 PM #11 Join Date Dec 2011 2011-Dec-08, 07:06 PM #12 Established Member Join Date Jun 2006 2011-Dec-08, 07:38 PM #13 2011-Dec-08, 08:16 PM #14 Join Date Dec 2011 2011-Dec-08, 08:33 PM #15 2011-Dec-08, 08:58 PM #16 2011-Dec-08, 09:10 PM #17 Join Date Dec 2011 2011-Dec-08, 10:11 PM #18 Join Date Dec 2011 2011-Dec-09, 04:36 AM #19 Established Member Join Date Jan 2008 2011-Dec-09, 05:20 AM #20 2011-Dec-09, 06:21 AM #21 Order of Kilopi Join Date Nov 2002 2011-Dec-09, 08:27 AM #22 Join Date Dec 2011 2011-Dec-09, 10:09 AM #23 Join Date Dec 2011 2011-Dec-09, 01:17 PM #24 2011-Dec-09, 01:24 PM #25 2011-Dec-09, 03:13 PM #26 Join Date Dec 2011 2011-Dec-09, 03:19 PM #27 Join Date Dec 2011 2011-Dec-09, 03:41 PM #28 2011-Dec-09, 06:10 PM #29 2011-Dec-09, 06:47 PM #30
{"url":"http://cosmoquest.org/forum/showthread.php?125256-A-theory-of-eveything-in-5-pages-of-program-code!&p=1967027","timestamp":"2014-04-20T05:43:15Z","content_type":null,"content_length":"207417","record_id":"<urn:uuid:7e486ef8-a212-41a6-8ed0-37f30068f747>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra Tutor Profiles Veronica J. Certification Mathematics In-service grades 5-8 current Degree Mathematics Education Florida State University BS 1993 Teaching Style I love teaching and the reward that it brings. I am a step-by-step oriented teacher. I've had many students return saying how my style of teaching continues to help them in their current studies. In my 14 years of experience, I've learned that many students learn with many different styles. I believe that every child can learn; however, the teacher must reach them at their level. If I have a good understanding of where the student is academically, I can help them to grow academically. Experience Summary I've taught middle school for 14 years. My goal in becoming a teacher was to reach those students who had, somehow, fallen through the cracks of education. I taught at a drop-out-prevention school for 7 years. During those years, my student scores continuously rose. I taught 7th and 9th grade at this school. I transferred to a different middle school where I taught 6th & 7th grade for two years and the remaining five years, I taught Pre-Algebra and Algebra 1. I began to tutor after school during my first year to struggling students. I was also listed on the school boards list of tutors. I also worked at an after-school program. During these years, I tutored second graders and up to mathematical levels of geometry, Algebra 1 and Algebra II. Davorin S. Degree Mathematics with Concentration in Computer Science University of North Carolina at Greensboro BS 2007 Teaching Style From my tutoring experience, I have noticed that students have trouble understanding the meaning of numbers and symbols on paper simply because no one has taught them how to visualize and interpret them in a real world situation. They also don't realize that they have plethora of resources and tools available to them to help them, yet they rarely utilize them. I try to give hints and clues to my students and let them obtain the right answers on their own instead of simply solving the problem for them. I believe this gives them a much better understand of the material. Experience Summary Having recently graduated with a Bachelor's degree, I am continuing my education to obtain my Master's degree in mathematics. Number Theory will be my focus, as I intend to get involved in the encryption field. During my senior year as undergraduate, I have worked on campus as a teacher's assistant and at home as an online tutor for UNCG's iSchool program. I have privately tutored undergraduates needing help in pre-calculus and calculus. I enjoyed helping the students and showing them some of my own tricks and ways when it comes to solving the problems. Lori M. Degree Statistics University of North Florida Master's of Science (ABD) 2008 Certification Biostatistics University of West Florida Accredited 2006 Degree Mathematics University of West Florida Bachelor's of Science 2006 Teaching Style I believe that students are wary of mathematics and statistics because they appear clinic and distant, with little to do with the “real” world. To educate I try to create dynamic and above all relevant “uses” for the lessons. My experiences have exposed me to different teaching styles and class formats and allowed me to develop a teaching philosophy that encompasses the best of all these methods. My philosophy is best described with reference to six primary concepts: 1. Knowledge conveyed in a relevant context 2. Interaction with each student and the material 3. Passion for teaching and the subject 4. Adaptability 5. Creativity in teaching methods and 6. Respect between the student and the teacher All students seek knowledge; it’s the teacher’s role to facilitate learning and guide them along the path. A successful lesson is one in which the student comes out seeing the world a little differently. Experience Summary During my education and career, the teaching of others has featured prominently in my personal goals and life objectives. While attending high school, I tutored fellow students professionally in subjects ranging from basic algebra to complex calculus. My undergraduate degree was in the field of Mathematics and Statistics with an additional biological statistics certificate and at present I am completing my Masters of Science in statistics. My understandings of these fields lead to my recruit by a number of professors and researchers to provide assistance and advisement on statistical analysis of their projects. In addition to assisting my professors, I was also selected to be a Graduate Teaching Assistant, and was also selected to conduct "stand alone" courses. I have a strong passion for these subjects and I believe my years of teaching experience have given me the insight, patience and ability to convey the complex world of mathematics and statistics to my students. Ashley T. Degree Mathematics UNC Chapel Hill BA 2005 Teaching Style First and foremost, I believe it is important to establish a trusting relationship with those you teach. Throughout my experiences in the field of education, I have found the most fruitful of those relationships to be those in which I was able to work with a student or group of students regularly over an extended period of time to develop a routine and a strong relationship in which great amounts of learning and understanding could be accomplished. Although many people experience obstacles in learning Mathematics, I believe that by presenting multiple ways to approach a problem, every student can find a method that works for them. Every student should be allowed the resources and opportunity to realize that they CAN achieve their goals. I am here to help students who have had difficulty with Math in the past to succeed, and feel confident in both their abilities in Math, and in life. Experience Summary As a student, I was always committed to learning and to achieving my goals. As a teacher and tutor, I strive to help others share the same love for learning and for understanding as I do. Now, I help others set goals, and work toward achieving them. I have worked with all ages, all ability levels, and various sized groups and have enjoyed each and every experience. In the past, I have primarily tutored in the subject of Mathematics, but am also trained by the Literacy Council to tutor reading and writing, and have enjoyed volunteering with that organization as well. I believe that I can help anyone enjoy and understand math, and help them feel better about themselves for it. Robert H. Degree Electrical Engineering Marquette University MS 1971 Degree Electrical Engineering GMI (Kettereing University) BS 1971 Teaching Style I’ve always been interested in the application of math and science to the solution of real world problems. This led me to a very satisfying career in engineering. Therefore my approach to teaching is very application oriented. I like to relate the subject to problems that the students will encounter in real life situations. I've generally only worked with older students; high school or college age or older mature adults who have returned to school to get advance training or learn a new trade. Experience Summary I’ve always been interested in math and science; especially in their application to solving real world problems. This led me to a very satisfying career in engineering. I have a BS in electrical engineering from General Motors Institute (now Kettering University) and an MS in electrical engineering from Marquette University. I am a registered professional engineer in Illinois. I have over 30 years of experience in the application, development, and sales/field support of electrical/electronic controls for industrial, aerospace, and automotive applications. I’m currently doing consulting work at Hamilton-Sundstrand, Delta Power Company, and MTE Hydraulics in Rockford. I also have college teaching and industrial training experience. I have taught several courses at Rock Valley College in Electronic Technology, mathematics, and in the Continuing Education area. I’ve done industrial technical training for Sundstrand, Barber Colman, and others. I’ve also taught math courses at Rasmussen College and Ellis College (online course). I’ve also been certified as an adjunct instructor for Embry-Riddle Aeronautical University for math and physics courses. I've tutored my own sons in home study programs. I'm currently tutoring a home schooled student in math using Saxon Math. I hope to do more teaching/tutoring in the future as I transition into retirement. Gary K. Degree Mathematics Southern Illinois University MS 1999 Degree Mathematics Allegheny College BS 1996 Teaching Style My teaching style has been, for the most part, dictated by student response. I am comfortable teaching in a traditional lecture format, in a format that uses a cooperative learning approach exclusively, or in a hybrid format. The goal is for effective learning to take place, and I believe my strongest quality is to be able to adapt in such a way that best helps students reach their academic goals. Experience Summary For eight years, I taught freshman and sophomore-level mathematics courses at Arizona State University. These courses included College Algebra, Pre-Calculus, Calculus, Finite Mathematics, and Elementary Mathematics Theory. Additionally, I have tutored students in these courses both in the Mathematics Department Tutor Center, and on my own personal time. Lucille C. Degree Math University of Toronto M.Sc. 1966 Teaching Style My students know I love math. I am enthusiastic, patient, and caring. I believe everyone can learn math given the right circumstances. I take interest in my students, I email them, I encourage them to do their homework. Through this personal interest, my students work to please the teacher. I also use different teaching styles: discovery learning, Look-Do-Learn, one-to-one instruction, critical thinking. Once a student, always a student for me. I go the extra mile. Experience Summary From an early age I loved math and so when I graduated with a BA in Math and Latin, I went straight into teaching math at the high school level. During graduate school years, I taught math at the University of Toronto. While working with computer programming, I taught math in the evening division of Westbury College in Montreal. I have taught math in different countries: Jamaica, Canada, U.S. Virgin Islands, Nigeria-West African Educational Council, The Bahamas, and Florida. Dara M. Certification Physics 6 - 12 State of Florida 9 credit hours current Degree Mathematics University of Central Florida M.S. 1993 Degree Physics University of Central Florida B.S. 1990 Teaching Style I believe in my students and their abilities to learn and synthesize their experiences. My students learn their subjects because I provide a variety of techniques to command their attention. I believe teaching is not just about giving students information but about reaching students who might "get lost" in the system without a guide and friend to help them along. Experience Summary I have taught for over ten years in the public school system and learned how to "connect" with students. I have taught physics, chemistry, and mathematics at the high school level, and I have a wide range of teaching experiences in those fields. I have taught AP, honors, and standard classes. Using interesting movies followed by a lab to reinforce the concept is one of the ways I have used to reach students and make a difference in their lives. Get Started Today! If you are a Tutor who Receive Your Personalized List of Tutors is interested in joining Submit the form below or call us at: 1-800-540-9505 our team click here. Select Grade Level Select Discipline Select Subject Please provide detailed information about your student's needs. Contact Information First Name Last Name Email Phone City State Zip How did you hear about us? I am a... Algebra Tutors Understanding the concepts of algebra is a basic requirement of all students in school. Every high school standardized test, college entrance exam, and graduation requirement mandates a certain level of knowledge in mathematics to be achieved. Any upper level math course is built on the basic foundations that a student learns in their algebra classes. Any student struggling in these preliminary courses should acquire the services of a qualified tutor immediately. We have expert math tutors that can assist students in any of the following algebra courses: Pre-algebra - Pre-algebra is a common name for a course in middle school mathematics. In the United States, it is generally taught between the seventh and ninth grades, although students have taken this course as early as fifth or sixth grade. The objective of pre-algebra is to prepare the student to the study of algebra. Pre-algebra includes several broad subjects: Review of natural- and whole-number arithmetic; introduction of new types of numbers such as integers, fractions, decimals and negative numbers; Factorization of natural numbers; Properties of operations (associative, distributive and so on); Simple roots and powers; Rules of evaluation of expressions, such as operator precedence and use of parentheses; Basics of equations, including rules for invariant manipulation of equations; Variables and exponentiation. Pre-algebra often includes some basic subjects from geometry, mostly the kinds that further understanding of algebra and show how it is used, such as area, volume, and perimeter. Wikipedia Pre-algebra. Algebra I & II - Algebra is a branch of mathematics concerning the study of structure, relation, and quantity. Together with geometry, analysis, combinatory, and number theory, algebra is one of the main branches of mathematics. Elementary algebra is often part of the curriculum in secondary education and provides an introduction to the basic ideas of algebra, including effects of adding and multiplying numbers, the concept of variables, definition of polynomials, along with factorization and determining their roots. Algebra is much broader than elementary algebra and can be generalized. In addition to working directly with numbers, algebra covers working with symbols, variables, and set elements. Addition and multiplication are viewed as general operations, and their precise definitions lead to structures such as groups, rings and fields. Wikipedia Algebra Abstract Algebra - Abstract algebra is the subject area of mathematics that studies algebraic structures, such as groups, rings, fields, modules, vector spaces, and algebras. The phrase abstract algebra was coined at the turn of the 20th century to distinguish this area from what was normally referred to as algebra, the study of the rules for manipulating formulas and algebraic expressions involving unknowns and real or complex numbers, often now called elementary algebra. The distinction is rarely made in more recent writings. Contemporary mathematics and mathematical physics make intensive use of abstract algebra; for example, theoretical physics draws on Lie algebras. Subject areas such as algebraic number theory, algebraic topology, and algebraic geometry apply algebraic methods to other areas of mathematics. Representation theory, roughly speaking, takes the 'abstract' out of 'abstract algebra', studying the concrete side of a given structure; see model theory. Wikipedia Abstract Algebra For most students success in any math course comes from regular studying and practicing habits. However, Algebra class can be a foreign language for many students. Whether you are in need of a little extra help or someone who can teach the subject from scratch, hiring a professional tutor with a strong background in mathematics can make a dramatic impact on a student’s performance and outlook on all future course work. Our Tutoring Service We offer our clients only the very best selection of tutors. When you request a tutor for a certain subject, you get what you ask for. Our tutors are expertly matched to your individual needs based on the criteria you provide to us. We will provide you with the degrees, credentials, and certifications that each selected tutor holds. Equally important is the peace of mind we offer you knowing that each of our tutors has been cleared by a nation-wide criminal background check, a sexual predator check, and social security verification. We want you to have the same confidence in our tutors as we do.
{"url":"http://advancedlearners.com/algebra/tutor/profiles.aspx","timestamp":"2014-04-19T19:33:31Z","content_type":null,"content_length":"76440","record_id":"<urn:uuid:ee78eec9-2d5d-4cef-905e-ca157d310117>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
Pre-Calculus: Using Double-Angle Identities Video | MindBites Pre-Calculus: Using Double-Angle Identities About this Lesson • Type: Video Tutorial • Length: 6:46 • Media: Video/mp4 • Use: Watch Online & Download • Access Period: Unrestricted • Download: MP4 (iPod compatible) • Size: 72 MB • Posted: 01/22/2009 This lesson is part of the following series: Trigonometry: Full Course (152 lessons, $148.50) Pre-Calculus Review (31 lessons, $61.38) Trigonometry: Trigonometric Identities (23 lessons, $26.73) Trigonometry: Double-Angle Identities (3 lessons, $4.95) Double-angle identities allow you to simplify trigonometric equations with a 2 as the coefficient. (similar formulae exist for trig functions with 1/2 or 3 as the coefficient). In this lesson, Professor Burger uses the equation cos2x = sinx as an example. If this equation were simply cos x = sinx, we could divide to re-write the formula as sinx/cosx = tan x = 0, but in this case, we have a coefficient in advance of one of the arguments, which is why we need to use the double-angle formulas. After using the double-angle formulas in the provided example to simplify, you can further simplify these equations using trig identities (like the Pythagorean identity) and factoring. These tools will help you to solve many trig equations. The duble angle identities for sine, cosine, tangent and cotangent are: sin2x = 2sinxcosx, cos2x = cos^2x-sin^2x, tan 2x = 2tanx/(1-tan^2x), and cot2x = (cot^2x-1)/2cotx. Taught by Professor Edward Burger, this lesson was selected from a broader, comprehensive course, Precalculus. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/product/precalculus. The full course covers angles in degrees and radians, trigonometric functions, trigonometric expressions, trigonometric equations, vectors, complex numbers, and more. Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of America. In 2006, Reader's Digest named him in the "100 Best of America". Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas and of the textbook The Heart of Mathematics: An Invitation to Effective Thinking. He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math journals, including The Journal of Number Theory and American Mathematical Monthly. His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of numbers, and the theory of continued fractions. Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures. About this Author 2174 lessons Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/. Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through... Recent Reviews This lesson has not been reviewed. Please purchase the lesson to review. This lesson has not been reviewed. Please purchase the lesson to review. Using Double-Angle Identities All right, let’s take a look at how we could actually use this double-angle formula business to actually start solving even more exotic trig equations. I know that’s what you live for, solving these trig equations. Suppose I want to solve this one: cos2sinxx=, and I only want to find the solutions for x inside of zero to 2?. So that means I just want the angles between zero and 2? and nothing else. Well, you see, this one is a little bit annoying. Your first instinct maybe is to divide by the cosine, so I get sincos, which is tangent. But see, they’re not the same angle now. So sincos2xx is not tangent of anything. It’s just a big mess. So this is actually pretty scary. How would you handle it? Well, now we’re armed with a formula for double-angles for cosine, so I can actually take cos2x and convert it into just cosines and sines of x’s. So let’s do that. So using the double-angle formula for cosine, I see this actually equals 2cossinxx?, and now that equals, by this thing, sinx. Well, now we’re in pretty good shape because now everything is just x’s. There are no 2x’s here. I’m a little bit troubled and disturbed by the fact that this is cosine stuff and everything else seems to be sine. I would love everything to be uniform, either sine or cosine. Well, I can’t change that to cosine. That seems almost impossible. Could I change this to sine? Well, yes, because now I remember the fundamental Pythagorean identity, which is equivalent to saying that 22sincos1xx+. So what does cosine squared equal? Well, if I bring this term over to this side, I see that 2cos1sinxx=?. So in place of cosine squared here, I can replace it by . So let’s do that. If I do that, I now see, in place of this I write . And don’t forget that term, 21sin?21sin?2sinx?. And that’s still equals sinx. So notice that now everything just has x’s, no 2x’s, and everything happens to the sine. So in fact, that’s a good sign. All right. Now, you may think these cancel out but be careful, they don’t because they have a and a , and that’s . So in fact those squared terms remain. Let’s bring everything over to this side. So I’ll bring the 2sin?2sin?22sin?22sinx? to this side and it becomes 22sinx+, so I have a 22sinx+. I have that term, sinx. I’ll bring everything over to this side. I bring that over. I see a -1. And what does that equal? Well, that would equal zero. Well, now what you might want to do is just call this “something” in order not to confuse the issue because I’m going to try to factor now. Or you may just begin to start to think about the “something” at the same time as actually factoring. So I’m going to put a sinx here and a sinx here. I’ll put the 2 here. That product is 22sinx. Now I want the terms I multiply at the end to give 1, so I’ll put in a 1 and a 1. And the sines should be opposite. But how should I put them in so that when I do this inside and outside, I get a +1? Well, I think if I put a plus sign here and minus sign here, it’s going to work well. Because here I see 2sinx+, minus a sinx gives me a sinx+. So in fact this actually works out great. So I factored this in one fell swoop, and so what does that mean? Well, it means that now what I can do is set up two equations. One is that this thing equals zero, and the other is that this equals zero. So let’s just rewrite that up here. So I see (. That’s what I’m left with now. So either this equals zero—if 2sin1)(sin1)0xx?+2sin1x? equals zero, that means that sinx must equal 12, right? Because I bring this over to the other side with a +1 and divide both sides by 2, so I’d see this. Or the other possibility is that this equals zero, which means sin. 1x=? Well, now we can find the solutions. And remember, we’re only looking for solutions between zero and 2?, so you have to look only inside here. Well -1 is an easy one. There’s only one answer. It equals -1 right here at 32?, so that’s easy. Here 32x?=. Well, what about this one? This one is a teeny bit more work but not too much more work. At 12, it crosses the half here but also here, so there will be two answers to this one and they’re going to be both symmetric to this thing, so whatever I have here, I’ll take ? minus that here. And what’s the answer here? Well, the sine of what angle is 12? Well, I happen to know that’s actually going to be 30° or 6? radians. So I see that 6x?= radians. That’s this answer right here gives me 12. And how do I get the one that’s sort of symmetric on this side? Well, l take this length, which is 6?, and subtract it from ?. I move off that much this way. So I take ? and I subtract 6?, and ? is actually 66?; that’s just ?. But minus 6? gives me 56?. So the other answer is 56x?=. So this equation actually has three solutions: 6x?=, 56x?=, and 32x?=. Those are all solutions to this original question of where the x is found in this region. So notice what I did. I had this thing that had cosines and sines but, more fatal, it has a 2x, and an x. So I got rid of that by using the double-angle formula. Then I didn’t like the cosine so I got rid of that using the Pythagorean theorem formula. And then I set up this nice thing, which turned out to be just a factorable quadratic. This had two solutions; this had one solution; and they’re all the solutions in that range. Okay, great. So you can actually solve these kinds of exotic equations now using the double-angle formula. Get it Now and Start Learning Embed this video on your site Copy and paste the following snippet: Link to this page Copy and paste the following snippet:
{"url":"http://www.mindbites.com/lesson/1235-pre-calculus-using-double-angle-identities","timestamp":"2014-04-20T18:36:14Z","content_type":null,"content_length":"60304","record_id":"<urn:uuid:4b1ad5f5-f617-4665-a79b-2e183df83c7e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
HIV model January 19th 2011, 03:56 PM #1 Nov 2010 HIV model Cells that are susceptible to HIV infection are called T(target) cells. Let T(t) be the population of uninfected T-cells, T*(t) that of the infected T-cells, and V(t) the population of the HIV virus. A model for the rate of change of the infected T-cells is dT*/dt = kVT - gT*, (eq1) where g is the rate of clearance of infected cells by the body, and k is the rate constant for the infection of the T-cells by the virus. The equation for the virus is the same as dV/dt = P-cV, (eq2) but now the production of the virus can be modeled by P(t) = NgT*(t). Here N is the total number of virions produced by an infected T-cell during its lifetime. Since 1/g is the length of its lifetime, NgT*(t) is the total rate of production of V(t). At least during the initial stages of infection, T can be treated as an approximate constant. Equations (eq1) and (eq2) are the two coupled equations for the two variables T*(t) and V(t). A drug therapy using RT (reverse transcriptase) inhibitors blocks infection, leading to k~=0. Setting K = 0 in (eq 1), solve for T*(t). Substitute it into (eq 2) and solve for V(t). Show that the solution is V(t) = [V(0)/(c-g)][ce^(gt) - ge^(-ct)]. Please HELP !!! Cells that are susceptible to HIV infection are called T(target) cells. Let T(t) be the population of uninfected T-cells, T*(t) that of the infected T-cells, and V(t) the population of the HIV virus. A model for the rate of change of the infected T-cells is dT*/dt = kVT - gT*, (eq1) where g is the rate of clearance of infected cells by the body, and k is the rate constant for the infection of the T-cells by the virus. The equation for the virus is the same as dV/dt = P-cV, (eq2) but now the production of the virus can be modeled by P(t) = NgT*(t). Here N is the total number of virions produced by an infected T-cell during its lifetime. Since 1/g is the length of its lifetime, NgT*(t) is the total rate of production of V(t). At least during the initial stages of infection, T can be treated as an approximate constant. Equations (eq1) and (eq2) are the two coupled equations for the two variables T*(t) and V(t). A drug therapy using RT (reverse transcriptase) inhibitors blocks infection, leading to k~=0. Setting K = 0 in (eq 1), solve for T*(t). Substitute it into (eq 2) and solve for V(t). Show that the solution is V(t) = [V(0)/(c-g)][ce^(gt) - ge^(-ct)]. Please HELP !!! Equation 1 separation of variables. First solve that one. How do I solve that one ??? do you mean setting k = 0 and dT*/T = -gdt??? How can I solve that and substitute to the eq2??? Can anyone help me more on this problem??? hello dear i will make u the first equation u have dT*/dt = kVT - gT* when k=0 it becomes dT*/dt =0 - gT* which is: dT*/dt = - gT* now divide the whole equation by T* and multiply by dt u get: dT*/T*=- gdt now integrate both sides u get ln(T*)=-gt+c (c is a constant) best regards ok I understood the first one, and then what should I do? I thought that first I had to put T*=e^(-gt + c) in to P(t) and plug that into eq 2 but that doesn't give me a proper solution... Help !!! Umm..... or do I consider P as 0???? I have no idea how to do this please help Equation two is not composed of multiplication... Can you give me more explanation please? What do you mean by taking differential equations?? I am totally lost, I got T* by setting k = 0 from equation 1, which is Ce^gt if that is what you meant, but I have no idea where to plug that in If you ask a DE question, you should have some idea how to solve it. Your second equations can be separated. Number 2. How can I factor V out.... or just ignore P??? January 19th 2011, 04:24 PM #2 MHF Contributor Mar 2010 January 19th 2011, 08:29 PM #3 Nov 2010 January 20th 2011, 07:11 AM #4 Nov 2010 January 20th 2011, 08:24 AM #5 Junior Member Jan 2011 January 20th 2011, 10:40 AM #6 Nov 2010 January 20th 2011, 11:11 AM #7 Nov 2010 January 20th 2011, 12:41 PM #8 Nov 2010 January 20th 2011, 12:47 PM #9 MHF Contributor Mar 2010 January 20th 2011, 12:53 PM #10 Nov 2010 January 20th 2011, 12:54 PM #11 MHF Contributor Mar 2010 January 20th 2011, 12:59 PM #12 Nov 2010 January 20th 2011, 01:08 PM #13 MHF Contributor Mar 2010 January 20th 2011, 01:13 PM #14 Nov 2010 January 20th 2011, 01:15 PM #15 MHF Contributor Mar 2010
{"url":"http://mathhelpforum.com/differential-equations/168797-hiv-model.html","timestamp":"2014-04-18T23:48:44Z","content_type":null,"content_length":"70620","record_id":"<urn:uuid:d1f4f282-d62e-4c87-a0ad-43b5c2e54820>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: secant of (13pi)/2 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/508b38c6e4b0d596c460e991","timestamp":"2014-04-19T02:02:17Z","content_type":null,"content_length":"51258","record_id":"<urn:uuid:f8094c38-7ec1-4e10-aebd-c3637e3525bf>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Algorithmic Game Theory COMS 6998-3: Introduction to Algorithmic Game Theory Instructors: Sergei Vassilvitskii, Sébastien Lahaie (both at Yahoo! Research) E-mail: { sergei, lahaies } at yahoo-inc.com Teaching Assistants • Etienne Vouga (evouga at gmail). Office Hours Sunday 4-5pm, Location 122 Mudd. • Rajat Dixit (usa.rajat at gmail). Office Hours Thursday 11:45am-12:45pm. Location, 122 Mudd Time/Location: 4:10-6:00PM on Mondays in 233 Mudd. [S:253 Engineering Terrace:S] Course description: Algorithmic game theory is an emerging area at the intersection of computer science and microeconomics. Motivated by the rise of the internet and electronic commerce, computer scientists have turned to models where problem inputs are held by distributed, selfish agents (as opposed to the classical model where the inputs are chosen adversarially). This new perspective leads to a host of fascinating questions on the interplay between computation and incentives. This course provides a broad survey of topics in algorithmic game theory, such as: algorithmic mechanism design; combinatorial and competitive auctions; congestion and potential games; computation of equilibria; network games and selfish routing; and sponsored search. No prior knowledge of game theory is necessary; the most important prerequisite is mathematical maturity. Prerequisites: Algorithms ( COMS 4231 ), Discrete Math ( COMS 3203 ). Optional Text: Algorithmic Game Theory. Nisan, Roughgarden, Tardos, Vazirani. Cambridge University Press, 2007. *** The book is available online for free! Thanks to the progressive-minded publisher, you can read the entire book by clicking here (username=agt1user, password=camb2agt). Course requirements: Two problem sets (25% each); a 10-15 page report summarizing 1-3 research papers (40%); participation (10%). Pass/Fail students, select either the problem sets or the project. Late Policy: You can hand in one of the problem sets 3 days late (Thursday at 5pm). No other late homeworks will be accepted. You cannot hand in the project late. • Problem Set #1 (Out 9/29, due in class 10/13.) • Problem Set #2 (Out 10/13, due in class 10/27.) • Report preferences: due Tue 11/11 by email. • Report abstracts (1-2 pages): due Tue 11/18 by email. • Final report: due Mon 12/8 in class. Homework policy: You are encouraged to discuss the course material and the homework problems with each other in small groups (2-3 people), but you must list all discussion partners on your problem set. Discussion of homework problems may include brainstorming and verbally walking through possible solutions, but should not include one person telling the others how to solve the problem. In addition, each person must write up their solutions independently; you may not look at another student's written solutions. You may consult outside materials, but all materials used must be appropriately acknowledged, and you must always write up your solutions in your own words. Violation of this policy will result in a penalty to be assessed at the instructor's discretion. This may include receiving a zero grade for the assignment in question AND a failing grade for the whole course, even for the first infraction. Schedule and references: • Mon 9/8 (SV, SL): What is Algorithmic Game Theory? Examples: auctions, selfish routing, complexity. Lecture notes [pdf] • Mon 9/15 (SV): Selfish Routing and Price of Anarchy. Lecture notes [pdf] • Mon 9/22 (SV): Shapley Network Design Games. Lecture notes [pdf] • Mon 9/29 (SL): Sealed-Bid Combinatorial Auctions. Lecture notes [pdf] • Mon 10/6 (SL): Iterative Combinatorial Auctions. Lecture notes [pdf] □ Supplementary notes on linear programming [pdf] □ Chapter 11.7 of the textbook. □ S. de Vries, J. Schummer, R. Vohra. On Ascending Vickrey Auctions for Heterogeneous Objects. Journal of Economic Theory, 132(1):95-118, 2007. □ S. Bikhchandani, J. M. Ostroy. The Package Assignment Model. Journal of Economic Theory, 107(2):377-406, 2002. • Mon 10/13 (SL): Communication Complexity of Combinatorial Auctions. Lecture notes [pdf] • Mon 10/20 (SV): Digital Goods Auctions. • Mon 10/27 (SL): Algorithms for Market Equilibrium. • Mon 11/3: No class (Academic holiday---Elections). • Mon 11/10 (SV): TBD. • Mon 11/17 (SV) : Complexity of Computing Nash Equilibria. • Mon 11/24: No class (Thanksgiving break). • Mon 12/1 (SL): Sponsored Search I: Truthfulness. Lecture notes [pdf] • Mon 12/8 (SL): Sponsored Search II: Equilibrium Properties.
{"url":"http://www.cs.columbia.edu/coms6998-3/","timestamp":"2014-04-18T05:34:33Z","content_type":null,"content_length":"9334","record_id":"<urn:uuid:adf387b7-6652-4fc7-9663-d5902c036c20>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
Localism Act 2011 This sectionnoteType=Explanatory Notes has no associated 15After section 36 insert—E+W “36ASubstitute calculations: England (1)An authority in England which has made calculations in accordance with sections 31A, 31B and 34 to 36 above in relation to a financial year (originally or by way of substitute) may make calculations in substitution in relation to the year in accordance with those sections, ignoring section 31A(11) above for this purpose. (2)None of the substitute calculations shall have any effect if— (a)the amount calculated under section 31A(4) above, or any amount calculated under section 31B(1) or 34(2) or (3) above as the basic amount of council tax applicable to any dwelling, would exceed that so calculated in the previous calculations, or (b)the billing authority fails to comply with subsection (3) below in making the substitute calculations. (3)In making substitute calculations under section 31B(1) or 34(3) above, the billing authority must use any amount determined in the previous calculations for item T in section 31B(1) above or item TP in section 34(3) above. (4)For the purposes of subsection (2)(a) above, one negative amount is to be taken to exceed another if it is closer to nil (so that minus £1 is to be taken to exceed minus £2). (5)Subsections (2) and (3) above do not apply if the previous calculations have been quashed because of a failure to comply with sections 31A, 31B and 34 to 36 above in making the calculations.”
{"url":"http://www.legislation.gov.uk/ukpga/2011/20/schedule/7/paragraph/15/prospective?view=plain","timestamp":"2014-04-19T03:02:40Z","content_type":null,"content_length":"9005","record_id":"<urn:uuid:a2ac0ff4-646a-467a-a841-994e51a1d95d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
Why do we want to own houses? post #1 of 60 7/22/08 at 2:27pm Thread Starter I am terribly critical of housing. Where I live, run down shacks cost $300k and nice places can be rented for $750/month. In San Diego, average houses cost twice as much per month to own than rent, not counting insurance, taxes, etc. There is no rational economic argument for owning in much of the country unless you count on appreciation three or four times the rate of inflation, which has never existed except for the past few years. I have a great rental house, I am mobile, I can treat the house as my own in regards to landscaping, can paint, have no worries about the landlord, etc. Because of rental prices, I save enough each month by not owning that the money I didn't spend on a house will, by the time I retire, be enough of a nest egg that I could rent a mansion off of the interest alone. Despite all of this, I'm constantly looking at realtor.com and shopping for houses not only where I live but in towns and cities where I used to live. I can't seem to get the idea out of my head that I really need to own a house, even if that idea is totally irrational. I often wonder, what is the psychology behind this? There is no material difference between owning and renting. I suppose I'm just brainwashed. I'll buy when I have 20% down and the year is 2011, I tell myself. Then at least the market will again be rational. Buying a house makes lots of sense ... it's a place to live, a good investment. Odds are over an extended period of time, it's gonna appreciate well. Now... I don't consider a 30 year mortgage to be the same as "buying" a house. That's just renting from a bank! Even if you pay it off in 30 years, you've now paid nearly 500k for a 200k house... I would have a hard time PLANNING on THAT much appreciation just to break even! A 10 year mortgage?... That makes a lot more sense ... gaining equity quickly and not paying nearly as much in interest over the life of the loan. Less than 20% down?... then you can't afford the house. Why put yourself in the position of paying insurance premiums to insure the loan FOR the Bank! (PMI) I think the real key is: DON'T BUY A HOUSE IF YOU CAN'T AFFORD IT! Just because a bank is willing to loan you money, doesn't mean you can afford the loan. From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, "Look at that!" -... From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, "Look at that!" -... Originally Posted by KingOfSomewhereHot Buying a house makes lots of sense ... it's a place to live, a good investment. Odds are over an extended period of time, it's gonna appreciate well. Now... I don't consider a 30 year mortgage to be the same as "buying" a house. That's just renting from a bank! Even if you pay it off in 30 years, you've now paid nearly 500k for a 200k house... I would have a hard time PLANNING on THAT much appreciation just to break even! A 10 year mortgage?... That makes a lot more sense ... gaining equity quickly and not paying nearly as much in interest over the life of the loan. Less than 20% down?... then you can't afford the house. Why put yourself in the position of paying insurance premiums to insure the loan FOR the Bank! (PMI) I think the real key is: DON'T BUY A HOUSE IF YOU CAN'T AFFORD IT! Just because a bank is willing to loan you money, doesn't mean you can afford the loan. It is my thought that people want to buy houses even if it isn't a wise financial decision. Otherwise, how would a single house sell in say the Bay Area or Boston? The only way you can get math to support that decision financially is if you count on say 10 or 12% appreciation over the long haul. I miss a lot of things about Ohio, including housing prices that make sense. So anyhow, the idea I am interested in is that people in our culture have some kind of drive to own real estate that isn't based on finances. In some areas it makes more sense to rent, for sure. In these areas, there's a lot of land value speculation, which is never really a good thing. There are investments you can make that are no more risky, but they pay off many times more: stocks, currency, commodity, private equity, etc. Where I live, an earthquake could come tomorrow and geologically devalue a given plot of land to $0, yet the perceived land value is absurdly inflated. Unless I had a huge amount money and I could actually develop, I wouldn't touch real estate with a ten foot pole. It's just not a good investment, considering the alternatives. That said, I still own a condo in Florida, which I used to live in. In that area, it was and still is cheaper to buy than to rent. I agree... if it involves taking out a loan to buy said real-estate. However, if you pay cash for it (or perhaps a short 5 year loan), it can be a pretty stable investment, because your risk is much lower if you're not servicing debt. By not having a mortgage over my head, I can afford to have a rental property sit empty while I find GOOD tenants... or charge lower rent than others and still make a profit. But then a LOT of investments that look bad when debt-financed, become great investments when there is no debt involved. But, to the OP's thoughts... Banks do a very good job of MARKETING home ownership. Even to people who have no business buying a house just yet. When people hear that they should buy a house day-in and day-out from various sources, they end up thinking it's the right thing to do even if they're not yet financially ready for that. They may end up paying 30 years worth of interest that could have been spent on a better lifestyle (a lifestyle most people apparently live anyway through the use of credit... but you already know how I feel about that From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, "Look at that!" -... From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, "Look at that!" -... I currently rent a house in Boulder for $2500/month - and the mortgage would be $6000/month if I bought it (roughly $600K house). I don't think that real estate will appreciate much over the next 15 years, due to the baby boomers selling their homes - and Colorado will end up being an uninhabitable desert once global warming makes the mountain ice packs go away. Having said that, I still plan on buying - because: 1. I want a larger house - and I can't get my perfect setup without buying. 2. I want more freedom (my current landlady does not like her wood burning fireplace to be used, because she thinks it will smell up the house like a smoker does, etc). 3. A landlord can kick you out at any time, you have more security if you buy. 4. In 11 years I plan on moving to Hawaii, and I think that the ice packs will still be OK by then (crosses fingers). 45 2a3 300b 211 845 833 45 2a3 300b 211 845 833 Originally Posted by e1618978 I currently rent a house in Boulder for $2500/month - and the mortgage would be $6000/month if I bought it (roughly $600K house). I don't think that real estate will appreciate much over the next 15 years, due to the baby boomers selling their homes - and Colorado will end up being an uninhabitable desert once global warming makes the mountain ice packs go away. Having said that, I still plan on buying - because: 1. I want a larger house - and I can't get my perfect setup without buying. 2. I want more freedom (my current landlady does not like her wood burning fireplace to be used, because she thinks it will smell up the house like a smoker does, etc). 3. A landlord can kick you out at any time, you have more security if you buy. 4. In 11 years I plan on moving to Hawaii, and I think that the ice packs will still be OK by then (crosses fingers). Basically, you want a house because it would be yours and you can do with it what you want. I suppose I feel the same way. But isn't it a more true freedom when you aren't a servant to a bank? Especially when you can save several thousand dollars per month...that's enough money to seriously enrich a life. I'm not putting down the idea of home ownership, but in the case as you describe it probably isn't the wisest decision financially, but there is still a drive to own something. I guess that drive is simply not playing by someone else's rules? Originally Posted by KingOfSomewhereHot Now... I don't consider a 30 year mortgage to be the same as "buying" a house. That's just renting from a bank! Even if you pay it off in 30 years, you've now paid nearly 500k for a 200k house... I would have a hard time PLANNING on THAT much appreciation just to break even! There's more to it than just the house though. For example, my sister and her husband have a 30 year mortgage on a house. They could pay it off in 10, but they're not going to do that. Why? Because they intend to live in that house until the day they die. If they pay it off in 10 years, then it's just equity in a home they're not going to sell. Financially, it is more prudent to have the 30 year mortgage, and invest the money they would have been paying if they had a 10 year, which will more than make up for the difference in the total amount of money they would pay over a 10 year vs. a 30 year mortgage. Because when I'm 55 I'll be rent & mortgage free - at least, that's the idea Originally Posted by Flounder There's more to it than just the house though. For example, my sister and her husband have a 30 year mortgage on a house. They could pay it off in 10, but they're not going to do that. Why? Because they intend to live in that house until the day they die. If they pay it off in 10 years, then it's just equity in a home they're not going to sell. Financially, it is more prudent to have the 30 year mortgage, and invest the money they would have been paying if they had a 10 year, which will more than make up for the difference in the total amount of money they would pay over a 10 year vs. a 30 year mortgage. There's always the "peace of mind" that comes from not owing anybody anything ... I can pay my 10 year mtg in 7 years, and then invest all the money I'm no longer paying in principle and interest and still end up with a hefty little sum at the end of 30 years. And spend the last 22 of those 30 with no worries about job security, making payments, etc, etc. Yes, you can make the numbers say that a big long mortgage is the right decision ... but when that person loses their job 10 years later (for whatever reason)... It'd sure be easier to cope if there were no big debt payments hanging over his head. From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, "Look at that!" -... From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, "Look at that!" -... true. this idea makes sense in a lot of parts of the country, particularly in the midwest. however, where I live and mortgage payments are double rent, if i put aside the difference each month in an index fund that earns 9% over the next 30 years, i'd be able to pay for rent for the rest of my life just off the interest alone. then i'd die and have a nice blob of money for my children. when i lived in Cincinnati, rent prices approximately equaled mortgage payment PLUS taxes & insurance PLUS another 10-15% or even more. In a place like that, the financially prudent thing to do is buy. But in other places, this isn't so...yet we still want to own. Of course, it helps that I bought pre-2000 before prices went absolutely crazy I couldn't afford to step onto that proverbial ladder now California is a terrible place for the first time home buyer, taxes are out of control and the price of gas is like an adrenaline shot to the heart. It's becoming the Forbidden Zone in the Planet of the Apes. Well, she's a professor who should be given tenure any day now and he works at the college as well. Thus, they have excellent job security. It all depends on the individual situation. $300K would buy you a mansion here in South Carolina. A year ago I bought a new 3BR, 2BA 1500 sq. ft home for $98,000, one mile from downtown. i think it's time for me to move! No kidding. We move 2 years ago from CT to NC and were able to put 40% down on a 15 year, and live in a better house and neighborhood. Also, slightly off topic, but still relevant: Foreclosure suicide Now, correct me if I'm wrong, you can't collect on a policy if the insured committed suicide, right? Especially if they left a note. Boy if anyone knew the answer to this, we wouldn't be in the housing crisis we are in now. From a purely financial standpoint, buying in today market if mortgage payments are double rent payments, does not make a lot of sense, unless you are able to put a lot of money down so your monthly payments are low and you can leverage the equity in this home for future investments. However, when you pay rent, evey cent of that goes to someone else, whereas if you pay a mortgage, some of that goes into your pocket. Also, there has been mention of saving money to retirement. You don't save your way to prosperity. As well, there is quality of life. When I rented, I never really wanted to do anything to the place and therefore did not want to spend too much time there. When I own, I want to make it a place where I want to be. The OP alluded to this. I was lucky in that I purchased in the hot Vancouver, BC Market back in 2002, 2004 and 2005. For someone to buy a 1 bedroom 540 square feet here in downtown Vancouver today its about $380-400K So, in order to have a mortgage of under $1500/month, you would have to put down at least $180-200K and that does not include Property Purchase Tax, legal fees etc. I bought because I wanted to have a place that was mine. I always hated renting and I had a good deal when I rented. I paid between 900-980 for a ten year period in Manhattan. I knew when I purchased my house in August of 2006 the market was topping out, but I also knew that historically the neighborhood I was purchasing in was very stable and I wanted to be there for a long time. I'm also surprised that no one has mentioned the tax advantages to owning. Exercise: two equivalent townhouses in a "chi-chi" part of Silicon Valley. The ownership is five years. I assume a 5% annual real estate growth, which at this point may not be realistic. Then again, the 5% apr mortgage isn't realistic either. So this is a bull market study. #1 Rents for $3000/month #2 Sells for $1M (20/80 mortgage, 30 year, roughly 5%) #1 sunk cost over 5 years, assuming 5% annual rent hikes = $199K #2 sunk cost over 5 years is more difficult to determine. We must factor in: - mortgage interest - property tax - upkeep (easy for townhouse, add $300/month with %5 annual increase - tax writeoffs (interest paid, some property tax) - appreciation and sale price The plan here is to solve for the amount of write-off needed to make option 1 and 2 have the same sunk cost. Mortgage per month is roughly $5600 (interest per month is roughly $3200) 5 years of mortgage payments = $336000 5 years of Property tax is roughly = $55000 5 years of assoc fees are = $19900 5 years total cost = $411K money returned: appreciation = $276K after agent fees = $212K #2 total sunk cost = $199K So, in this bull market study, the tax advantage is the only thing that makes ownership more desirable than renting. However, renters have better mobility (if that's an issue) and, more importantly, the present market has higher rates and much less potential for appreciation, especially as the boomers start to pass. The other question is: what if this were five years ago (actually quite realistic) but you had rented, and instead put the $200K into google or Apple? after capital gains taxation, you would come out $1.9M better off than the buyer of the unit next door! Originally Posted by Flounder There's more to it than just the house though. For example, my sister and her husband have a 30 year mortgage on a house. They could pay it off in 10, but they're not going to do that. Why? Because they intend to live in that house until the day they die. If they pay it off in 10 years, then it's just equity in a home they're not going to sell. Financially, it is more prudent to have the 30 year mortgage, and invest the money they would have been paying if they had a 10 year, which will more than make up for the difference in the total amount of money they would pay over a 10 year vs. a 30 year mortgage. I have over $100,000 of student loans. That's kind of like a mortgage in a way, isn't it? What do you think the best repayment plan is? I was thinking of putting as much as I can into my 401k, living like a Spartan for a few years, and paying off as much as I can at first. Is there a better way? (Although I have a feeling that just paying the minimum might be difficult on any budget). Did you mean to say "you don't SPEND your way to prosperity?" Just want to double check before I write a long reply. Spline, that is the gigantic elephant in the room. As the sheer number of people decline over the coming years, the housing market will continue to implode. The idiotic "housing meltdown" we're currently experiencing will be dwarfed by what follows. The economies of the world will experience this over the next 30 to 40 years! Way back when the dinosaurs were roaming the Earth and my wife and I were married with no children, we made a decision to move away from the very expensive beach area and move inland. A lot of folks like to chide that decision (including some on here) but the reality is that unless you want to run to stand still, you need to insure housing costs are reasonable and affordable. Originally Posted by trick fall I bought because I wanted to have a place that was mine. I always hated renting and I had a good deal when I rented. I paid between 900-980 for a ten year period in Manhattan. I knew when I purchased my house in August of 2006 the market was topping out, but I also knew that historically the neighborhood I was purchasing in was very stable and I wanted to be there for a long time. I'm also surprised that no one has mentioned the tax advantages to owning. As you note, there are several tax incentives to buying. No matter what the numbers have to make sense. It won't do anyone any good to think they own a home when the numbers really don't make sense. This is a very big part of the last bubble. I made it a really clear point to people by noting that at the height or even run up of the bubble I wouldn't have purchased the very house I was residing in. I told them I would have rented because it made sense. You do have to have discipline to invest the portion of money above the rent that you would have left over by not buying. Most people are not disciplined enough in this area. Originally Posted by Splinemodel So, in this bull market study, the tax advantage is the only thing that makes ownership more desirable than renting. However, renters have better mobility (if that's an issue) and, more importantly, the present market has higher rates and much less potential for appreciation, especially as the boomers start to pass. The other points are fun but I really have to give this one a whack. There is NO, repeat NO estimate I have seen whereby just because the boomers die, the population of the United States actually declines. If anyone is screwed it is the boomers themselves since they are used to the market accommodating their every wish and that may finally no longer be true. They may want to move from a four bedroom home to a two bedroom condo and while they may not get the inflated dollars they want for their home, it also doesn't mean anyone is going to run out and build a bunch of retirement communities for them to go retire to when they can't supposedly sell those homes. The need for housing will not disappear. The population will still increase. The boomers may have trouble cashing out but that means it is a deal for everyone who needs a home and tries to cash in. Originally Posted by ShawnJ I have over $100,000 of student loans. That's kind of like a mortgage in a way, isn't it? What do you think the best repayment plan is? I was thinking of putting as much as I can into my 401k, living like a Spartan for a few years, and paying off as much as I can at first. Is there a better way? (Although I have a feeling that just paying the minimum might be difficult on any budget). This would be a mortgage for many people. I'll tell you what I would do. There is a reason people tease about trailer trash. It is because trailer parks, or living in mobiles or recreational vehicles is much cheaper than even an apartment most times. If I were you, I would get a nice older class C motorhome and live in it as cheaply as possible for a couple years. Keep your car (which is a great car) and you can even use your "home" for cheap vacations and travel. Most of the "advantages" of stick and brick homes have been offset by technology. You can get satellite television. You can use your cell phone. There is huge money in law depending upon what field you are in and where you are employed. You've never disclosed this so it could radically affect the answer for you. Family law versus underling district attorney are huge earnings differences. So if you are going the big money route, you might not need to do above (you might just need to drink a lot instead.) "During times of universal deceit, telling the truth becomes a revolutionary act." -George Orwell "During times of universal deceit, telling the truth becomes a revolutionary act." -George Orwell Originally Posted by SpamSandwich Spline, that is the gigantic elephant in the room. As the sheer number of people decline over the coming years, the housing market will continue to implode. The idiotic "housing meltdown" we're currently experiencing will be dwarfed by what follows. The economies of the world will experience this over the next 30 to 40 years! I'm going to have to radically disagree with that. Just because white, western baby boomers are declining doesn't mean anyone else or anything else is doing so. The population of the world is still slated to go up. Wealth will still go up. The reason the boomers are screwed is because they didn't save and spent themselves into a whole while promising themselves the moon and stars. They want the lawn man and the 1.6 children they had to pay for all this. The lawn man and their kids are going to tell them to screw themselves. "During times of universal deceit, telling the truth becomes a revolutionary act." -George Orwell "During times of universal deceit, telling the truth becomes a revolutionary act." -George Orwell In the spirit of this thread I was actually wondering what the best way to pay back a loan is: slower or faster. I have six figures worth of subsidized, unsubsidized, and private loans. This thread got me thinking because of Flounder's idea that investing money will more than recoup the extra payments of a 30 year mortgage. I'm a novice with this long-term financial stuff. It depends upon the term, several other assumptions and what you are most comfortable with. "During times of universal deceit, telling the truth becomes a revolutionary act." -George Orwell "During times of universal deceit, telling the truth becomes a revolutionary act." -George Orwell Yeah, it should really depends a lot on the specifics, what sort of return you think you can get on your money, and how much risk you're willing to take in relation to the return on your money that you seek For instance, if I was my sister, I'd probably pay off that house quickly. Psychologically, I don't like debt hanging around my head, no matter what the numbers say. Originally Posted by progmac I am terribly critical of housing. Where I live, run down shacks cost $300k and nice places can be rented for $750/month. In San Diego, average houses cost twice as much per month to own than rent, not counting insurance, taxes, etc. There is no rational economic argument for owning in much of the country unless you count on appreciation three or four times the rate of inflation, which has never existed except for the past few years. I have a great rental house, I am mobile, I can treat the house as my own in regards to landscaping, can paint, have no worries about the landlord, etc. Because of rental prices, I save enough each month by not owning that the money I didn't spend on a house will, by the time I retire, be enough of a nest egg that I could rent a mansion off of the interest alone. Despite all of this, I'm constantly looking at realtor.com and shopping for houses not only where I live but in towns and cities where I used to live. I can't seem to get the idea out of my head that I really need to own a house, even if that idea is totally irrational. I often wonder, what is the psychology behind this? There is no material difference between owning and renting. I suppose I'm just brainwashed. I'll buy when I have 20% down and the year is 2011, I tell myself. Then at least the market will again be rational. As others have said...it's where you live and what the market is like. I just bought a townhome in this area (new construction...closing in two weeks) for several reasons: 1. I need more space than my 1 bedroom apartment offers 2. My rent is over $900 a month currently, whereas my raw mortgage will be just over that (not counting taxes). 3. I was able to get a good price in this market (also, the market here is not tanking...it just slowed). In my case it makes perfect sense. My house will likely appreciate while I'm living in it, especially given the price point. I'm paying around $220K for a three bedroom townhouse with garage and full walk-out basement (with some nice upgrades as well, such as a deck, Corian countertops, etc.) As the home appreciates, my net worth will increase. I'll have equity if I need it. But in your situation, I agree...it's better to rent. Originally Posted by ShawnJ In the spirit of this thread I was actually wondering what the best way to pay back a loan is: slower or faster. I have six figures worth of subsidized, unsubsidized, and private loans. This thread got me thinking because of Flounder's idea that investing money will more than recoup the extra payments of a 30 year mortgage. I'm a novice with this long-term financial stuff. It's more than just the long term. You also need to look at what that loan debt is going to do to your credit. It's usually my advice that it's better to pay off revolving or unsecured debt as quickly as possible (loans other than auto or home). I think I would look at your rates and see if you can consolidate to a low rate. Then, once you have an actual job Originally Posted by trumptman I'm going to have to radically disagree with that. Just because white, western baby boomers are declining doesn't mean anyone else or anything else is doing so. The population of the world is still slated to go up. Wealth will still go up. The reason the boomers are screwed is because they didn't save and spent themselves into a whole while promising themselves the moon and stars. They want the lawn man and the 1.6 children they had to pay for all this. The lawn man and their kids are going to tell them to screw themselves. Agreed. And why is that no matter what kind of market we get into, people think it will last forever? When it's great, people live like they are on a drunken shore leave. When it's bad, they are shopping for guns to blow their brains out. Nothing is constant or forever in real estate. We're going through a pretty steep correction. In my opinion we're near the bottom (maybe another 6-12 months away). Give it 2 years, and the market will have stabilized. Give it 5 years, and it will be on its way back up, albeit at a reasonable pace. Even the hardest hit markets cannot go down I can only please one person per day. Today is not your day. Tomorrow doesn't look good either. I can only please one person per day. Today is not your day. Tomorrow doesn't look good either. And that's the thing financial planners can't put a price on ... so it's never figured into their analysis. So they can make a big mortgage sound like a "good investment strategy" on paper... never mind the stress of being in debt. From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, "Look at that!" -... From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, "Look at that!" -... Originally Posted by SDW2001 As others have said...it's where you live and what the market is like. I just bought a townhome in this area (new construction...closing in two weeks) for several reasons: 1. I need more space than my 1 bedroom apartment offers 2. My rent is over $900 a month currently, whereas my raw mortgage will be just over that (not counting taxes). 3. I was able to get a good price in this market (also, the market here is not tanking...it just slowed). In my case it makes perfect sense. My house will likely appreciate while I'm living in it, especially given the price point. I'm paying around $220K for a three bedroom townhouse with garage and full walk-out basement (with some nice upgrades as well, such as a deck, Corian countertops, etc.) As the home appreciates, my net worth will increase. I'll have equity if I need it. But in your situation, I agree...it's better to rent. If I could buy a 3 BR townhome with fancy countertops for $220k, I probably would. that said, it will probably be 5 years or more before much appreciation occurs. in my neck of the woods, the only hope for a 3BR townhome at that price is an "affordable housing" program, which includes deed restrictions relating to income, resale price, etc. Originally Posted by trumptman I'm going to have to radically disagree with that. Just because white, western baby boomers are declining doesn't mean anyone else or anything else is doing so. The population of the world is still slated to go up. Wealth will still go up. The reason the boomers are screwed is because they didn't save and spent themselves into a whole while promising themselves the moon and stars. They want the lawn man and the 1.6 children they had to pay for all this. The lawn man and their kids are going to tell them to screw themselves. Sorry, you're wrong about it being limited to white western Baby Boomers. A cursory glance at demographic data worldwide will tell you that the working populations of all nations are falling. So much so that in Japan and South Korea, they are very worried about having enough caretakers for their aging population, which has resulted in both governments pushing the development of robot assistants and workers. Originally Posted by SpamSandwich Sorry, you're wrong about it being limited to white western Baby Boomers. A cursory glance at demographic data worldwide will tell you that the working populations of all nations are falling. So much so that in Japan and South Korea, they are very worried about having enough caretakers for their aging population, which has resulted in both governments pushing the development of robot assistants and workers. Sorry if it wasn't clear enough but I consider Japan and Korea to be countries that have adopted Western style beliefs. There will still be plenty of people. There may not be plenty of Japanese and Korean people but if they want to die alone in their countries while hoards beat at the door, let them wallow in their racist ways. Demography is destiny. There isn't a law or rule that can overcome that. "During times of universal deceit, telling the truth becomes a revolutionary act." -George Orwell "During times of universal deceit, telling the truth becomes a revolutionary act." -George Orwell Originally Posted by SpamSandwich Sorry, you're wrong about it being limited to white western Baby Boomers. A cursory glance at demographic data worldwide will tell you that the working populations of all nations are falling. So much so that in Japan and South Korea, they are very worried about having enough caretakers for their aging population, which has resulted in both governments pushing the development of robot assistants and workers. The same concerns exist with China due to the one-child policy and the aging population. Originally Posted by progmac If I could buy a 3 BR townhome with fancy countertops for $220k, I probably would. that said, it will probably be 5 years or more before much appreciation occurs. in my neck of the woods, the only hope for a 3BR townhome at that price is an "affordable housing" program, which includes deed restrictions relating to income, resale price, etc. That it exactly....the area. I don't blame you for thinking like you do. It's all about what one can get for the money. Don't feel you have to buy. You're likely better off renting something given the market you've described. I can only please one person per day. Today is not your day. Tomorrow doesn't look good either. I can only please one person per day. Today is not your day. Tomorrow doesn't look good either. Originally Posted by trumptman There is NO, repeat NO estimate I have seen whereby just because the boomers die, the population of the United States actually declines. If anyone is screwed it is the boomers themselves since they are used to the market accommodating their every wish and that may finally no longer be true. They may want to move from a four bedroom home to a two bedroom condo and while they may not get the inflated dollars they want for their home, it also doesn't mean anyone is going to run out and build a bunch of retirement communities for them to go retire to when they can't supposedly sell those The need for housing will not disappear. The population will still increase. The boomers may have trouble cashing out but that means it is a deal for everyone who needs a home and tries to cash in. The issue is that property values in the middle to upper end markets, especially, will decline as the boomers move out of their houses or die-off. On the flip side, there might be a nice bonus for real estate in the sun-belt, but all indications seem to show that the market for middle to upper end single family homes in most parts of the nation are going to enter an increasingly supply-driven phase for quite a while. The population that will replace the boomers is younger and does not have the same level of wealth. There's also the potential for increased migration to cities and towns as energy prices appear to continue to rise, so the US suburban real-estate legacy may take quite a hit. So the point that "now is a bad time to buy" rings as loud ever. We are in agreement that property values will rise more slowly and even drop, so the $212K offset in my example, above, will simply not exist. Until the market reacts and the sale prices reach equilibrium (which may take years), it will continue to be a better idea to rent than buy. Originally Posted by Splinemodel The issue is that property values in the middle to upper end markets, especially, will decline as the boomers move out of their houses or die-off. On the flip side, there might be a nice bonus for real estate in the sun-belt, but all indications seem to show that the market for middle to upper end single family homes in most parts of the nation are going to enter an increasingly supply-driven phase for quite a while. The population that will replace the boomers is younger and does not have the same level of wealth. There's also the potential for increased migration to cities and towns as energy prices appear to continue to rise, so the US suburban real-estate legacy may take quite a hit. So the point that "now is a bad time to buy" rings as loud ever. We are in agreement that property values will rise more slowly and even drop, so the $212K offset in my example, above, will simply not exist. Until the market reacts and the sale prices reach equilibrium (which may take years), it will continue to be a better idea to rent than buy. I'm sorry, but I think you're really off base here. The market is not going to decline because the boomers "die off." If that was the case, the same would have happened when their parents were their age, some 20-30 years ago. But that didn't happen, because as they aged, their children aged along with them, getting better jobs and accumulating wealth. One generation becomes the next, so to speak. In addition, the assumption that younger people don't have the money is not necessarily a good one. We tend to become more affluent with each generation, despite the predictions. Generation X was supposed to be "the first generation not to have it as good as their parents." That turned out to be rubbish, as on the whole, we have far more. All that said, the market is going through a very steep correction. But like all corrections, it will not last forever. The market will recover over time. Even in hard hit areas, we're getting close to the bottom. One example is where my father lives (Naples, FL...one of the largest bubble markets in the nation). In 2000, he bought a new construction home for about $220,000. Over the next 5 years, the home appreciated to nearly $700,000. Now it's come down to perhaps $350,000 and holding. In other words it's become nearly affordable again. It will appreciate more slowly than it did, but the market is not likely to tank, either. I can only please one person per day. Today is not your day. Tomorrow doesn't look good either. I can only please one person per day. Today is not your day. Tomorrow doesn't look good either. Perhaps, but you're missing the major point here: I hypothesize that, in most areas, the opportunity cost of renting and investing outside of real estate is -- and will likely continue to be -- more financially wise than is home ownership. In support of this, they are called "the baby boom" for a reason. There are larger numbers of people in that age bracket than there are in the age bracket below, and as a result the housing vacancies will cause the market to be more supply-driven than it had been previously. There's no smoke and mirrors or doom and gloom. I offer a further hypothesis that suburban land values will be hit especially hard, due to rising energy costs and the popularity of city living among generation x and post-x professionals (feel free to check the figures), but these are not central to the argument. I invite you to take a gamble on real estate. The appreciation that makes home ownership financially viable, in general, will not occur at the kinds of rates that are required to make home ownership a more sensible investment than other alternatives, should you buy at today's prices. That is all my message has ever been. You can track-back through this thread and verify that if you choose. Originally Posted by Splinemodel Perhaps, but you're missing the major point here: I hypothesize that, in most areas, the opportunity cost of renting and investing outside of real estate is -- and will likely continue to be -- more financially wise than is home ownership. In support of this, they are called "the baby boom" for a reason. There are larger numbers of people in that age bracket than there are in the age bracket below, and as a result the housing vacancies will cause the market to be more supply-driven than it had been previously. There's no smoke and mirrors or doom and gloom. I offer a further hypothesis that suburban land values will be hit especially hard, due to rising energy costs and the popularity of city living among generation x and post-x professionals (feel free to check the figures), but these are not central to the argument. I invite you to take a gamble on real estate. The appreciation that makes home ownership financially viable, in general, will not occur at the kinds of rates that are required to make home ownership a more sensible investment than other alternatives, should you buy at today's prices. That is all my message has ever been. You can track-back through this thread and verify that if you choose. Nice to see someone has done their homework on this subject. I've bypassed home ownership for years, preferring apartments and investments. Now, though the stock market is currently in the tank, it will likely recover quicker than real estate. I agree, there will be more opportunities for new buyers in the low-end of the housing market, and I also agree that we may be looking at a decades long slump. "Generation X" and "Y" are going to have a very rough go of it as the Boomer peak lays waste to our country with massively increased taxes to cover their ill-prepared retirements and skyrocketing medical expenses. From a practical standpoint, do you want to live in a neighborhood where everyone rents or everyone owns? I'm sure there are other, more practical considerations, but this one is pretty obvious. Never had ONE lesson. Never had ONE lesson. post #2 of 60 7/22/08 at 3:00pm post #3 of 60 7/22/08 at 5:08pm Thread Starter post #4 of 60 7/22/08 at 5:15pm post #5 of 60 7/22/08 at 5:58pm post #6 of 60 7/22/08 at 7:09pm • will burn in the Fiery Pit of Hell. • Joined: Jun 2003 • Location: Colorado • Posts: 5,965 • offline post #7 of 60 7/23/08 at 7:20am Thread Starter post #8 of 60 7/23/08 at 7:43am post #9 of 60 7/23/08 at 4:23pm post #10 of 60 7/23/08 at 4:48pm post #11 of 60 7/23/08 at 4:57pm Thread Starter post #12 of 60 7/23/08 at 5:05pm post #13 of 60 7/23/08 at 5:10pm post #14 of 60 7/23/08 at 5:34pm post #15 of 60 7/23/08 at 7:08pm • Joined: Oct 2006 • Location: Greenville, SC • Posts: 971 • offline post #16 of 60 7/24/08 at 7:55am Thread Starter post #17 of 60 7/24/08 at 12:47pm • Joined: Nov 2001 • Location: North Carolina • Posts: 5,989 • offline post #18 of 60 7/24/08 at 3:35pm • Joined: Apr 2007 • Posts: 25 • offline post #19 of 60 7/24/08 at 4:26pm • Rock and Roll Accountant • Joined: Nov 2001 • Location: Long Island • Posts: 1,270 • offline post #20 of 60 7/24/08 at 6:30pm post #21 of 60 7/25/08 at 12:28pm • Doctor in Web • Joined: Nov 2002 • Location: Pennsylvania • Posts: 6,588 • offline post #22 of 60 7/25/08 at 12:35pm Thread Starter post #23 of 60 7/25/08 at 2:58pm post #24 of 60 7/25/08 at 5:23pm • My snark goes to 11. • Joined: Nov 2001 • Location: The Future • Posts: 15,876 • offline post #25 of 60 7/25/08 at 5:27pm • My snark goes to 11. • Joined: Nov 2001 • Location: The Future • Posts: 15,876 • offline post #26 of 60 7/25/08 at 5:41pm • Doctor in Web • Joined: Nov 2002 • Location: Pennsylvania • Posts: 6,588 • offline post #27 of 60 7/25/08 at 6:45pm • My snark goes to 11. • Joined: Nov 2001 • Location: The Future • Posts: 15,876 • offline post #28 of 60 7/25/08 at 7:56pm post #29 of 60 7/26/08 at 6:28am post #30 of 60 7/26/08 at 7:21am post #31 of 60 7/26/08 at 6:19pm Thread Starter post #32 of 60 7/27/08 at 6:29pm post #33 of 60 7/27/08 at 7:25pm • My snark goes to 11. • Joined: Nov 2001 • Location: The Future • Posts: 15,876 • offline post #34 of 60 7/28/08 at 7:27am • Joined: Jan 2002 • Location: up above • Posts: 6,032 • offline post #35 of 60 7/28/08 at 9:17am post #36 of 60 7/28/08 at 4:07pm post #37 of 60 7/28/08 at 6:31pm post #38 of 60 7/28/08 at 7:53pm post #39 of 60 7/28/08 at 8:19pm post #40 of 60 8/14/08 at 3:18pm • Joined: Nov 2001 • Location: East of Eden • Posts: 383 • offline
{"url":"http://forums.appleinsider.com/t/89302/why-do-we-want-to-own-houses","timestamp":"2014-04-18T11:32:59Z","content_type":null,"content_length":"211401","record_id":"<urn:uuid:bafd7527-cf50-4f33-9c06-b21d6d5e0544>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
Real Numbers - Algebra II Honors - Algebra II - 2011-08-22 Properties of Real Numbers: Objectives • to graph and order real numbers • to identify and use properties of real numbers • to evaluate algebraic equations • to simplify algebraic expressions Subsets of Real Numbers: natural numbers - numbers used for counting; starts with number 1 and goes up by 1's. 1,2,3,4.... whole numbers - natural numbers + zero. 0,1,2,3,4,... integers - natural numbers (positive integers), zero, and the negative integers ...-4,-3,-2,-1,0,1,2,3,4... - Each negative integer is the opposite, or additive inverse, of a positive integer Z is the standard symbol for the set of integers. rational numbers - all the numbers that can be written as quotients of integers. Examples: 7/5, -3/2, -4/5, 0, 0.3, -1.2, 9 • Each quotient must have a nonzero denominator. • Some rational numbers can be written as terminating decimals. Ex. 1/8=0.125 • All other rational numbers can be written as repeating decimals. Ex. 1/3 or 0.33. Use Q for the quotient. irrational numbers - all the numbers that cannot be written as quotients of integers. • their decimal representations neither terminate nor repeat. • if a positive rational number is not a perfect square such as 25 or 4/9, then its square root is irrational. Properties of Real Numbers Let a, b, and c represent real numbers. Property Addition Multiplication Closure a + b is a real number. ab is a real number Commutativ a + b = b + a ab=ba Associative (a + b) + c = a + (b+ c) (ab)c = a(bc) Identity a + 0= a, 0 + a = a a · 1 = a, 1 · a = a Inverse a + (-a) = 0 a · 1/a = 1 Distributive a (b + c) = ab + bc absolute value - the difference a number is from zero. coefficient - (for now) a number multiplied against something, typically a power of a variable. For ex.: 3x (the coefficient is 3). equation - two expressions connected by equality (=). For example: x^2 + 2x - 1 = 5x - 2 term -product of a coefficient and one or more variables to various powers. For example, 7x^2, 3y, 2x^3y. A plain number can also be a term. - like terms - terms that are identical with respect to variables (differ only in coefficients). For ex.: 3x and 4x are like, 3y and 2x are not. variable - a symbol, usually a lowercase letter, that represents one or more numbers. Ex. x, y, t. Properties for Simplifying Algebraic Expressions Let a, b, and c represent real numbers. Definition of Subtraction a - b = a + ( - b ) Definition of Division a ÷ b = a · 1/b, b ± 0 Distributive Property for Subtraction a (b - c) = ab - ac Multiplication by 0 0 · a = 0 Multiplication by - 1 - 1 · a = - a Opposite of a Sum - (a + b) = - a + (- b) Opposite of a Difference - ( a - b) = b - a Opposite of a Product - (ab) = - a · b = a · (- b ) Opposite of an Opposite - (- a) = a Solving Equalities and Inequalities. Solving Absolute Value Equations and Inequations • to solve equations • to solve and graph inequalities • to solve absolute value equations • to solve absolute value inequalities Properties of Equality Reflexive a=a Symmetric if a = b, then b = Transitive if a = b and b = c, then a = c Addition if a = b, then a + c = b + c subtraction if a = b, then a - c = b - c multiplication if a = b, then ac = bc division if a = b and c≠0, then a/c = b/c substitution if a = b, then b may be substituted for a in any expression to obtain an equivalent expression. Properties of Inequalities let a, b, and c represent all real numbers Transitive if a≤b and b≤c, then a≤c addition if a≤b then a+c≤b+c subtraction if a≤b then a-c≤b-c if a≤b and c>0, then ac≤bc if a≤b and c<0 then ac≥bc if a≤b and c>o then a/c≤b/c if a≤b and c<0 then a/c≥b/c • Compound inequality is a pair of inequalities joined by and or or • To solve a compound inequality joined by and, find all values of the variable that make both inequalities true • to solve a compound inequality joined by or, find all values of the variable that makes at least one of the inequalities true. • Algebraic definition of Absolute Value • if x>0, then lxl=x • if x<0 then lxl=-x • when solving an absolute value equation, remember that there are two solutions, a positive and a negative. • extraneous solution is a solution of an equation derived from an original equation that is not a solution of the original equation.
{"url":"http://www.studyup.com/notes/view/7335","timestamp":"2014-04-20T10:50:04Z","content_type":null,"content_length":"30719","record_id":"<urn:uuid:96903997-4dac-46b2-9b6a-88d1f591e9dc>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
First-order logic From Wikipedia, the free encyclopedia First-order logic is a formal system used in mathematics, philosophy, linguistics, and computer science. It is also known as first-order predicate calculus, the lower predicate calculus, quantification theory, and predicate logic. First-order logic is distinguished from propositional logic by its use of quantified variables. A theory about some topic is usually first-order logic together with a specified domain of discourse over which the quantified variables range, finitely many functions which map from that domain into it, finitely many predicates defined on that domain, and a recursive set of axioms which are believed to hold for those things. Sometimes "theory" is understood in a more formal sense, which is just a set of sentences in first-order logic. The adjective "first-order" distinguishes first-order logic from higher-order logic in which there are predicates having predicates or functions as arguments, or in which one or both of predicate quantifiers or function quantifiers are permitted.^1 In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of There are many deductive systems for first-order logic that are sound (all provable statements are true in all models) and complete (all statements which are true in all models are provable). Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem and the compactness theorem. First-order logic is the standard for the formalization of mathematics into axioms and is studied in the foundations of mathematics. Mathematical theories, such as number theory and set theory, have been formalized into first-order axiom schemata such as Peano arithmetic and Zermelo–Fraenkel set theory (ZF) respectively. No first-order theory, however, has the strength to describe fully and categorically structures with an infinite domain, such as the natural numbers or the real line. Categorical axiom systems for these structures can be obtained in stronger logics such as second-order logic. For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001). While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification. A predicate takes an entity or entities in the domain of discourse as input and outputs either True or False. Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences are viewed as being unrelated and are denoted, for example, by p and q. However, the predicate "is a philosopher" occurs in both sentences which have a common structure of "a is a philosopher". The variable a is instantiated as "Socrates" in the first sentence and is instantiated as "Plato" in the second sentence. The use of predicates, such as "is a philosopher" in this example, distinguishes first-order logic from propositional logic. Predicates can be compared. Consider, for example, the first-order formula "if a is a philosopher, then a is a scholar". This formula is a conditional statement with "a is a philosopher" as hypothesis and "a is a scholar" as conclusion. The truth of this formula depends on which object is denoted by a, and on the interpretations of the predicates "is a philosopher" and "is a scholar". Variables can be quantified over. The variable a in the previous formula can be quantified over, for instance, in the first-order sentence "For every a, if a is a philosopher, then a is a scholar". The universal quantifier "for every" in this sentence expresses the idea that the claim "if a is a philosopher, then a is a scholar" holds for all choices of a. The negation of the sentence "For every a, if a is a philosopher, then a is a scholar" is logically equivalent to the sentence "There exists a such that a is a philosopher and a is not a scholar". The existential quantifier "there exists" expresses the idea that the claim "a is a philosopher and a is not a scholar" holds for some choice of a. The predicates "is a philosopher" and "is a scholar" each take a single variable. Predicates can take several variables. In the first-order sentence "Socrates is the teacher of Plato", the predicate "is the teacher of" takes two variables. To interpret a first-order formula, one specifies what each predicate means and the entities that can instantiate the predicated variables. These entities form the domain of discourse or universe, which is usually required to be a nonempty set. Given that the interpretation with the domain of discourse as consisting of all human beings and the predicate "is a philosopher" understood as "have written the Republic", the sentence "There exist a such that a is a philosopher" is seen as being true, as witnessed by Plato. There are two key parts of first order logic. The syntax determines which collections of symbols are legal expressions in first-order logic, while the semantics determine the meanings behind these Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression is legal. There are two key types of legal expressions: terms, which intuitively represent objects, and formulas, which intuitively express predicates that can be true or false. The terms and formulas of first-order logic are strings of symbols which together form the alphabet of the language. As with all formal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols. It is common to divide the symbols of the alphabet into logical symbols, which always have the same meaning, and non-logical symbols, whose meaning varies by interpretation. For example, the logical symbol $\land$ always represents "and"; it is never interpreted as "or". On the other hand, a non-logical predicate symbol such as Phil(x) could be interpreted to mean "x is a philosopher", "x is a man named Philip", or any other unary predicate, depending on the interpretation at hand. Logical symbols There are several logical symbols in the alphabet, which vary by author but usually include: • The quantifier symbols ∀ and ∃ • The logical connectives: ∧ for conjunction, ∨ for disjunction, → for implication, ↔ for biconditional, ¬ for negation. Occasionally other logical connective symbols are included. Some authors use →, or Cpq, instead of →, and ↔, or Epq, instead of ↔, especially in contexts where $\to$ is used for other purposes. Moreover, the horseshoe ⊃ may replace →; the triple-bar ≡ may replace ↔, and a tilde (~), Np, or Fpq, may replace ¬; ||, or Apq may replace ∨; and &, Kpq, or the middle dot, ⋅, may replace ∧, especially if these symbols are not available for technical reasons. (Note: the aforementioned symbols Cpq, Epq, Np, Apq, and Kpq are used in Polish notation.) • Parentheses, brackets, and other punctuation symbols. The choice of such symbols varies depending on context. • An infinite set of variables, often denoted by lowercase letters at the end of the alphabet x, y, z, … . Subscripts are often used to distinguish variables: x[0], x[1], x[2], … . • An equality symbol (sometimes, identity symbol) =; see the section on equality below. It should be noted that not all of these symbols are required – only one of the quantifiers, negation and conjunction, variables, brackets and equality suffice. There are numerous minor variations that may define additional logical symbols: • Sometimes the truth constants T, Vpq, or ⊤, for "true" and F, Opq, or ⊥, for "false" are included. Without any such logical operators of valence 0, these two constants can only be expressed using • Sometimes additional logical connectives are included, such as the Sheffer stroke, Dpq (NAND), and exclusive or, Jpq. Non-logical symbols The non-logical symbols represent predicates (relations), functions and constants on the domain of discourse. It used to be standard practice to use a fixed, infinite set of non-logical symbols for all purposes. A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore it has become necessary to name the set of all non-logical symbols used in a particular application. This choice is made via a signature.^2 The traditional approach is to have only one, infinite, set of non-logical symbols (one signature) for all applications. Consequently, under the traditional approach there is only one language of first-order logic.^3 This approach is still common, especially in philosophically oriented books. 1. For every integer n ≥ 0 there is a collection of n-ary, or n-place, predicate symbols. Because they represent relations between n elements, they are also called relation symbols. For each arity n we have an infinite supply of them: P^n[0], P^n[1], P^n[2], P^n[3], … 2. For every integer n ≥ 0 there are infinitely many n-ary function symbols: f ^n[0], f ^n[1], f ^n[2], f ^n[3], … In contemporary mathematical logic, the signature varies by application. Typical signatures in mathematics are {1, ×} or just {×} for groups, or {0, 1, +, ×, <} for ordered fields. There are no restrictions on the number of non-logical symbols. The signature can be empty, finite, or infinite, even uncountable. Uncountable signatures occur for example in modern proofs of the Löwenheim-Skolem In this approach, every non-logical symbol is of one of the following types. 1. A predicate symbol (or relation symbol) with some valence (or arity, number of arguments) greater than or equal to 0. These are often denoted by uppercase letters P, Q, R,... . □ Relations of valence 0 can be identified with propositional variables. For example, P, which can stand for any statement. □ For example, P(x) is a predicate variable of valence 1. One possible interpretation is "x is a man". □ Q(x,y) is a predicate variable of valence 2. Possible interpretations include "x is greater than y" and "x is the father of y". 2. A function symbol, with some valence greater than or equal to 0. These are often denoted by lowercase letters f, g, h,... . □ Examples: f(x) may be interpreted as for "the father of x". In arithmetic, it may stand for "-x". In set theory, it may stand for "the power set of x". In arithmetic, g(x,y) may stand for "x+ y". In set theory, it may stand for "the union of x and y". □ Function symbols of valence 0 are called constant symbols, and are often denoted by lowercase letters at the beginning of the alphabet a, b, c,... . The symbol a may stand for Socrates. In arithmetic, it may stand for 0. In set theory, such a constant may stand for the empty set. The traditional approach can be recovered in the modern approach by simply specifying the "custom" signature to consist of the traditional sequences of non-logical symbols. Formation rules The formation rules define the terms and formulas of first order logic. When terms and formulas are represented as strings of symbols, these rules can be used to write a formal grammar for terms and formulas. These rules are generally context-free (each production has a single symbol on the left side), except that the set of symbols may be allowed to be infinite and there may be many start symbols, for example the variables in the case of terms. The set of terms is inductively defined by the following rules: 1. Variables. Any variable is a term. 2. Functions. Any expression f(t[1],...,t[n]) of n arguments (where each argument t[i] is a term and f is a function symbol of valence n) is a term. In particular, symbols denoting individual constants are 0-ary function symbols, and are thus terms. Only expressions which can be obtained by finitely many applications of rules 1 and 2 are terms. For example, no expression involving a predicate symbol is a term. The set of formulas (also called well-formed formulas^4 or wffs) is inductively defined by the following rules: 1. Predicate symbols. If P is an n-ary predicate symbol and t[1], ..., t[n] are terms then P(t[1],...,t[n]) is a formula. 2. Equality. If the equality symbol is considered part of logic, and t[1] and t[2] are terms, then t[1] = t[2] is a formula. 3. Negation. If φ is a formula, then $\lnot$φ is a formula. 4. Binary connectives. If φ and ψ are formulas, then (φ $\rightarrow$ ψ) is a formula. Similar rules apply to other binary logical connectives. 5. Quantifiers. If φ is a formula and x is a variable, then $\forall x \varphi$ (for all x, $\varphi$ holds) and $\exists x \varphi$ (there exists x such that $\varphi$) are formulas. Only expressions which can be obtained by finitely many applications of rules 1–5 are formulas. The formulas obtained from the first two rules are said to be atomic formulas. For example, $\forall x \forall y (P(f(x)) \rightarroweg (P(x) \rightarrow Q(f(y),x,z)))$ is a formula, if f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol. On the other hand, $\forall x\, x \rightarrow$ is not a formula, although it is a string of symbols from the alphabet. The role of the parentheses in the definition is to ensure that any formula can only be obtained in one way by following the inductive definition (in other words, there is a unique parse tree for each formula). This property is known as unique readability of formulas. There are many conventions for where parentheses are used in formulas. For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability. This definition of a formula does not support defining an if-then-else function ite(c, a, b), where "c" is a condition expressed as a formula, that would return "a" if c is true, and "b" if it is false. This is because both predicates and functions can only accept terms as parameters, but the first parameter is a formula. Some languages built on first-order logic, such as SMT-LIB 2.0, add Notational conventions For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to the order of operations in arithmetic. A common convention is: • $\lnot$ is evaluated first • $\land$ and $\lor$ are evaluated next • Quantifiers are evaluated next • $\to$ is evaluated last. Moreover, extra punctuation not required by the definition may be inserted to make formulas easier to read. Thus the formula $(\lnot \forall x P(x) \to \exists x \lnot P(x))$ might be written as $(\lnot [\forall x P(x)]) \to \exists x [\lnot P(x)].$ In some fields, it is common to use infix notation for binary relations and functions, instead of the prefix notation defined above. For example, in arithmetic, one typically writes "2 + 2 = 4" instead of "=(+(2,2),4)". It is common to regard formulas in infix notation as abbreviations for the corresponding formulas in prefix notation. The definitions above use infix notation for binary connectives such as $\to$. A less common convention is Polish notation, in which one writes $\rightarrow$, $\wedge$, and so on in front of their arguments rather than between them. This convention allows all punctuation symbols to be discarded. Polish notation is compact and elegant, but rarely used in practice because it is hard for humans to read it. In Polish notation, the formula $\forall x \forall y (P(f(x)) \rightarroweg (P(x) \rightarrow Q(f(y),x,z)))$ becomes "∀x∀y→Pfx¬→ PxQfyxz". Free and bound variables In a formula, a variable may occur free or bound. Intuitively, a variable is free in a formula if it is not quantified: in $\forall y\, P(x,y)$, variable x is free while y is bound. The free and bound variables of a formula are defined inductively as follows. 1. Atomic formulas. If φ is an atomic formula then x is free in φ if and only if x occurs in φ. Moreover, there are no bound variables in any atomic formula. 2. Negation. x is free in $eg$φ if and only if x is free in φ. x is bound in $eg$φ if and only if x is bound in φ. 3. Binary connectives. x is free in (φ $\rightarrow$ ψ) if and only if x is free in either φ or ψ. x is bound in (φ $\rightarrow$ ψ) if and only if x is bound in either φ or ψ. The same rule applies to any other binary connective in place of $\rightarrow$. 4. Quantifiers. x is free in $\forall$y φ if and only if x is free in φ and x is a different symbol from y. Also, x is bound in $\forall$y φ if and only if x is y or x is bound in φ. The same rule holds with $\exists$ in place of $\forall$. For example, in $\forall$x $\forall$y (P(x)$\rightarrow$Q(x,f(x),z)), x and y are bound variables, z is a free variable, and w is neither because it does not occur in the formula. Freeness and boundness can be also specialized to specific occurrences of variables in a formula. For example, in $P(x) \rightarrow \forall x\, Q(x)$, the first occurrence of x is free while the second is bound. In other words, the x in $P(x)$ is free while the $x$ in $\forall x\, Q(x)$ is bound. A formula in first-order logic with no free variables is called a first-order sentence. These are the formulas that will have well-defined truth values under an interpretation. For example, whether a formula such as Phil(x) is true must depend on what x represents. But the sentence $\exists x\, \text{Phil}(x)$ will be either true or false in a given interpretation. Abelian groups In mathematics the language of ordered abelian groups has one constant symbol 0, one unary function symbol −, one binary function symbol +, and one binary relation symbol ≤. Then: • The expressions +(x, y) and +(x, +(y, −(z))) are terms. These are usually written as x + y and x + y − z. • The expressions +(x, y) = 0 and ≤(+(x, +(y, −(z))), +(x, y)) are atomic formulas. These are usually written as x + y = 0 and x + y − z ≤ x + y. • The expression $(\forall x \forall y \, \mathop{\leq}(\mathop{+}(x, y), z) \to \forall x\, \forall y\, \mathop{+}(x, y) = 0)$ is a formula, which is usually written as $\forall x \forall y ( x + y \leq z) \to \forall x \forall y (x+y = 0).$ Loving relation There are 10 different formulas with 8 different meanings, that use the loving relation Lxy ("x loves y.") and the quantifiers ∀ and ∃: The diagonal is The matrix is nonempty/full: nonempty/full: The logical matrices represent the formulas for the case that there are five individuals that can love (vertical axis) and be loved (horizontal axis). Except for the sentences 9 and 10, they are examples. E.g. the matrix representing sentence 5 stands for "b loves himself."; the matrix representing sentences 7 and 8 stands for "c loves b." It's important and instructive to distinguish sentence 1, $\forall x \exist y Lyx$, and 3, $\exist x \forall y Lxy$: In both cases everyone is loved; but in the first case everyone is loved by someone, in the second case everyone is loved by the same person. Some sentences imply each other — e.g. if 3 is true also 1 is true, but not vice versa. (See Hasse diagram) An interpretation of a first-order language assigns a denotation to all non-logical constants in that language. It also determines a domain of discourse that specifies the range of the quantifiers. The result is that each term is assigned an object that it represents, and each sentence is assigned a truth value. In this way, an interpretation provides semantic meaning to the terms and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic. (It is also possible to define game semantics for first-order logic, but aside from requiring the axiom of choice, game semantics agree with Tarskian semantics for first-order logic, so game semantics will not be elaborated herein.) The domain of discourse D is a nonempty set of "objects" of some kind. Intuitively, a first-order formula is a statement about these objects; for example, $\exists x P(x)$ states the existence of an object x such that the predicate P is true where referred to it. The domain of discourse is the set of considered objects. For example, one can take $D$ to be the set of integer numbers. The interpretation of a function symbol is a function. For example, if the domain of discourse consists of integers, a function symbol f of arity 2 can be interpreted as the function that gives the sum of its arguments. In other words, the symbol f is associated with the function I(f) which, in this interpretation, is addition. The interpretation of a constant symbol is a function from the one-element set D^0 to D, which can be simply identified with an object in D. For example, an interpretation may assign the value $I(c)= 10$ to the constant symbol $c$. The interpretation of an n-ary predicate symbol is a set of n-tuples of elements of the domain of discourse. This means that, given an interpretation, a predicate symbol, and n elements of the domain of discourse, one can tell whether the predicate is true of those elements according to the given interpretation. For example, an interpretation I(P) of a binary predicate symbol P may be the set of pairs of integers such that the first one is less than the second. According to this interpretation, the predicate P would be true if its first argument is less than the second. First-order structures The most common way of specifying an interpretation (especially in mathematics) is to specify a structure (also called a model; see below). The structure consists of a nonempty set D that forms the domain of discourse and an interpretation I of the non-logical terms of the signature. This interpretation is itself a function: • Each function symbol f of arity n is assigned a function I(f) from $D^n$ to $D$. In particular, each constant symbol of the signature is assigned an individual in the domain of discourse. • Each predicate symbol P of arity n is assigned a relation I(P) over $D^n$ or, equivalently, a function from $D^n$ to $\{true, false\}$. Thus each predicate symbol is interpreted by a Boolean-valued function on D. Evaluation of truth values A formula evaluates to true or false given an interpretation, and a variable assignment μ that associates an element of the domain of discourse with each variable. The reason that a variable assignment is required is to give meanings to formulas with free variables, such as $y = x$. The truth value of this formula changes depending on whether x and y denote the same individual. First, the variable assignment μ can be extended to all terms of the language, with the result that each term maps to a single element of the domain of discourse. The following rules are used to make this assignment: 1. Variables. Each variable x evaluates to μ(x) 2. Functions. Given terms $t_1, \ldots, t_n$ that have been evaluated to elements $d_1, \ldots, d_n$ of the domain of discourse, and a n-ary function symbol f, the term $f(t_1, \ldots, t_n)$ evaluates to $(I(f))(d_1,\ldots,d_n)$. Next, each formula is assigned a truth value. The inductive definition used to make this assignment is called the T-schema. 1. Atomic formulas (1). A formula $P(t_1,\ldots,t_n)$ is associated the value true or false depending on whether $\langle v_1,\ldots,v_n \rangle \in I(P)$, where $v_1,\ldots,v_n$ are the evaluation of the terms $t_1,\ldots,t_n$ and $I(P)$ is the interpretation of $P$, which by assumption is a subset of $D^n$. 2. Atomic formulas (2). A formula $t_1 = t_2$ is assigned true if $t_1$ and $t_2$ evaluate to the same object of the domain of discourse (see the section on equality below). 3. Logical connectives. A formula in the form $eg \phi$, $\phi \rightarrow \psi$, etc. is evaluated according to the truth table for the connective in question, as in propositional logic. 4. Existential quantifiers. A formula $\exists x \phi(x)$ is true according to M and $\mu$ if there exists an evaluation $\mu'$ of the variables that only differs from $\mu$ regarding the evaluation of x and such that φ is true according to the interpretation M and the variable assignment $\mu'$. This formal definition captures the idea that $\exists x \phi(x)$ is true if and only if there is a way to choose a value for x such that φ(x) is satisfied. 5. Universal quantifiers. A formula $\forall x \phi(x)$ is true according to M and $\mu$ if φ(x) is true for every pair composed by the interpretation M and some variable assignment $\mu'$ that differs from $\mu$ only on the value of x. This captures the idea that $\forall x \phi(x)$ is true if every possible choice of a value for x causes φ(x) to be true. If a formula does not contain free variables, and so is a sentence, then the initial variable assignment does not affect its truth value. In other words, a sentence is true according to M and $\mu$ if and only if it is true according to M and every other variable assignment $\mu'$. There is a second common approach to defining truth values that does not rely on variable assignment functions. Instead, given an interpretation M, one first adds to the signature a collection of constant symbols, one for each element of the domain of discourse in M; say that for each d in the domain the constant symbol c[d] is fixed. The interpretation is extended so that each new constant symbol is assigned to its corresponding element of the domain. One now defines truth for quantified formulas syntactically, as follows: 1. Existential quantifiers (alternate). A formula $\exists x \phi(x)$ is true according to M if there is some d in the domain of discourse such that $\phi(c_d)$ holds. Here $\phi(c_d)$ is the result of substituting c[d] for every free occurrence of x in φ. 2. Universal quantifiers (alternate). A formula $\forall x \phi(x)$ is true according to M if, for every d in the domain of discourse, $\phi(c_d)$ is true according to M. This alternate approach gives exactly the same truth values to all sentences as the approach via variable assignments. Validity, satisfiability, and logical consequence If a sentence φ evaluates to True under a given interpretation M, one says that M satisfies φ; this is denoted $M \vDash \phi$. A sentence is satisfiable if there is some interpretation under which it is true. Satisfiability of formulas with free variables is more complicated, because an interpretation on its own does not determine the truth value of such a formula. The most common convention is that a formula with free variables is said to be satisfied by an interpretation if the formula remains true regardless which individuals from the domain of discourse are assigned to its free variables. This has the same effect as saying that a formula is satisfied if and only if its universal closure is satisfied. A formula is logically valid (or simply valid) if it is true in every interpretation. These formulas play a role similar to tautologies in propositional logic. A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ. An alternate approach to the semantics of first-order logic proceeds via abstract algebra. This approach generalizes the Lindenbaum–Tarski algebras of propositional logic. There are three ways of eliminating quantified variables from first-order logic that do not involve replacing quantifiers with other variable binding term operators: These algebras are all lattices that properly extend the two-element Boolean algebra. Tarski and Givant (1987) showed that the fragment of first-order logic that has no atomic sentence lying in the scope of more than three quantifiers has the same expressive power as relation algebra. This fragment is of great interest because it suffices for Peano arithmetic and most axiomatic set theory, including the canonical ZFC. They also prove that first-order logic with a primitive ordered pair is equivalent to a relation algebra with two ordered pair projection functions. First-order theories, models, and elementary classes A first-order theory of a particular signature is a set of axioms, which are sentences consisting of symbols from that signature. The set of axioms is often finite or recursively enumerable, in which case the theory is called effective. Some authors require theories to also include all logical consequences of the axioms. The axioms are considered to hold within the theory and from them other sentences that hold within the theory can be derived. A first-order structure that satisfies all sentences in a given theory is said to be a model of the theory. An elementary class is the set of all structures satisfying a particular theory. These classes are a main subject of study in model theory. Many theories have an intended interpretation, a certain model that is kept in mind when studying the theory. For example, the intended interpretation of Peano arithmetic consists of the usual natural numbers with their usual operations. However, the Löwenheim–Skolem theorem shows that most first-order theories will also have other, nonstandard models. A theory is consistent if it is not possible to prove a contradiction from the axioms of the theory. A theory is complete if, for every formula in its signature, either that formula or its negation is a logical consequence of the axioms of the theory. Gödel's incompleteness theorem shows that effective first-order theories that include a sufficient portion of the theory of the natural numbers can never be both consistent and complete. For more information on this subject see List of first-order theories and Theory (mathematical logic) Empty domains Main article: Empty domain The definition above requires that the domain of discourse of any interpretation must be a nonempty set. There are settings, such as inclusive logic, where empty domains are permitted. Moreover, if a class of algebraic structures includes an empty structure (for example, there is an empty poset), that class can only be an elementary class in first-order logic if empty domains are permitted or the empty structure is removed from the class. There are several difficulties with empty domains, however: • Many common rules of inference are only valid when the domain of discourse is required to be nonempty. One example is the rule stating that $\phi \lor \exists x \psi$ implies $\exists x (\phi \ lor \psi)$ when x is not a free variable in φ. This rule, which is used to put formulas into prenex normal form, is sound in nonempty domains, but unsound if the empty domain is permitted. • The definition of truth in an interpretation that uses a variable assignment function cannot work with empty domains, because there are no variable assignment functions whose range is empty. (Similarly, one cannot assign interpretations to constant symbols.) This truth definition requires that one must select a variable assignment function (μ above) before truth values for even atomic formulas can be defined. Then the truth value of a sentence is defined to be its truth value under any variable assignment, and it is proved that this truth value does not depend on which assignment is chosen. This technique does not work if there are no assignment functions at all; it must be changed to accommodate empty domains. Thus, when the empty domain is permitted, it must often be treated as a special case. Most authors, however, simply exclude the empty domain by definition. Deductive systems A deductive system is used to demonstrate, on a purely syntactic basis, that one formula is a logical consequence of another formula. There are many such systems for first-order logic, including Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method, and resolution. These share the common property that a deduction is a finite syntactic object; the format of this object, and the way it is constructed, vary widely. These finite deductions themselves are often called derivations in proof theory. They are also often called proofs, but are completely formalized unlike natural-language mathematical proofs. A deductive system is sound if any formula that can be derived in the system is logically valid. Conversely, a deductive system is complete if every logically valid formula is derivable. All of the systems discussed in this article are both sound and complete. They also share the property that it is possible to effectively verify that a purportedly valid deduction is actually a deduction; such deduction systems are called effective. A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation. Thus a sound argument is correct in every possible interpretation of the language, regardless whether that interpretation is about mathematics, economics, or some other area. In general, logical consequence in first-order logic is only semidecidable: if a sentence A logically implies a sentence B then this can be discovered (for example, by searching for a proof until one is found, using some effective, sound, complete proof system). However, if A does not logically imply B, this does not mean that A logically implies the negation of B. There is no effective procedure that, given formulas A and B, always correctly decides whether A logically implies B. Rules of inference A rule of inference states that, given a particular formula (or set of formulas) with a certain property as a hypothesis, another specific formula (or set of formulas) can be derived as a conclusion. The rule is sound (or truth-preserving) if it preserves validity in the sense that whenever any interpretation satisfies the hypothesis, that interpretation also satisfies the conclusion. For example, one common rule of inference is the rule of substitution. If t is a term and φ is a formula possibly containing the variable x, then φt/x (often denoted φx/t) is the result of replacing all free instances of x by t in φ. The substitution rule states that for any φ and any term t, one can conclude φt/x from φ provided that no free variable of t becomes bound during the substitution process. (If some free variable of t becomes bound, then to substitute t for x it is first necessary to change the bound variables of φ to differ from the free variables of t.) To see why the restriction on bound variables is necessary, consider the logically valid formula φ given by $\exists x (x = y)$, in the signature of (0,1,+,×,=) of arithmetic. If t is the term "x + 1", the formula φt/y is $\exists x ( x = x+1)$, which will be false in many interpretations. The problem is that the free variable x of t became bound during the substitution. The intended replacement can be obtained by renaming the bound variable x of φ to something else, say z, so that the formula after substitution is $\exists z ( z = x+1)$, which is again logically valid. The substitution rule demonstrates several common aspects of rules of inference. It is entirely syntactical; one can tell whether it was correctly applied without appeal to any interpretation. It has (syntactically defined) limitations on when it can be applied, which must be respected to preserve the correctness of derivations. Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule. Hilbert-style systems and natural deduction A deduction in a Hilbert-style deductive system is a list of formulas, each of which is a logical axiom, a hypothesis that has been assumed for the derivation at hand, or follows from previous formulas via a rule of inference. The logical axioms consist of several axiom schemes of logically valid formulas; these encompass a significant amount of propositional logic. The rules of inference enable the manipulation of quantifiers. Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemes of logical axioms. It is common to have only modus ponens and universal generalization as rules of inference. Natural deduction systems resemble Hilbert-style systems in that a deduction is a finite list of formulas. However, natural deduction systems have no logical axioms; they compensate by adding additional rules of inference that can be used to manipulate the logical connectives in formulas in the proof. Sequent calculus The sequent calculus was developed to study the properties of natural deduction systems. Instead of working with one formula at a time, it uses sequents, which are expressions of the form $A_1, \ldots, A_n \vdash B_1, \ldots, B_k,$ where A[1], ..., A[n], B[1], ..., B[k] are formulas and the turnstile symbol $\vdash$ is used as punctuation to separate the two halves. Intuitively, a sequent expresses the idea that $(A_1 \land \ cdots\land A_n)$ implies $(B_1\lor\cdots\lor B_k)$. Tableaux method Unlike the methods just described, the derivations in the tableaux method are not lists of formulas. Instead, a derivation is a tree of formulas. To show that a formula A is provable, the tableaux method attempts to demonstrate that the negation of A is unsatisfiable. The tree of the derivation has $\lnot A$ at its root; the tree branches in a way that reflects the structure of the formula. For example, to show that $C \lor D$ is unsatisfiable requires showing that C and D are each unsatisfiable; this corresponds to a branching point in the tree with parent $C \lor D$ and children C and The resolution rule is a single rule of inference that, together with unification, is sound and complete for first-order logic. As with the tableaux method, a formula is proved by showing that the negation of the formula is unsatisfiable. Resolution is commonly used in automated theorem proving. The resolution method works only with formulas that are disjunctions of atomic formulas; arbitrary formulas must first be converted to this form through Skolemization. The resolution rule states that from the hypotheses $A_1 \lor\cdots\lor A_k \lor C$ and $B_1\lor\cdots\lor B_l\lor\lnot C$, the conclusion $A_1\lor\cdots\lor A_k\lor B_1\lor\cdots\lor B_l$ can be obtained. Provable identities The following sentences can be called "identities" because the main connective in each is the biconditional. $\lnot \forall x \, P(x) \Leftrightarrow \exists x \, \lnot P(x)$ $\lnot \exists x \, P(x) \Leftrightarrow \forall x \, \lnot P(x)$ $\forall x \, \forall y \, P(x,y) \Leftrightarrow \forall y \, \forall x \, P(x,y)$ $\exists x \, \exists y \, P(x,y) \Leftrightarrow \exists y \, \exists x \, P(x,y)$ $\forall x \, P(x) \land \forall x \, Q(x) \Leftrightarrow \forall x \, (P(x) \land Q(x))$ $\exists x \, P(x) \lor \exists x \, Q(x) \Leftrightarrow \exists x \, (P(x) \lor Q(x))$ $P \land \exists x \, Q(x) \Leftrightarrow \exists x \, (P \land Q(x))$ (where $x$ must not occur free in $P$) $P \lor \forall x \, Q(x) \Leftrightarrow \forall x \, (P \lor Q(x))$ (where $x$ must not occur free in $P$) Equality and its axioms There are several different conventions for using equality (or identity) in first-order logic. The most common convention, known as first-order logic with equality, includes the equality symbol as a primitive logical symbol which is always interpreted as the real equality relation between members of the domain of discourse, such that the "two" given members are the same member. This approach also adds certain axioms about equality to the deductive system employed. These equality axioms are: 1. Reflexivity. For each variable x, x = x. 2. Substitution for functions. For all variables x and y, and any function symbol f, x = y → f(...,x,...) = f(...,y,...). 3. Substitution for formulas. For any variables x and y and any formula φ(x), if φ' is obtained by replacing any number of free occurrences of x in φ with y, such that these remain free occurrences of y, then x = y → (φ → φ'). These are axiom schemes, each of which specifies an infinite set of axioms. The third scheme is known as Leibniz's law, "the principle of substitutivity", "the indiscernibility of identicals", or "the replacement property". The second scheme, involving the function symbol f, is (equivalent to) a special case of the third scheme, using the formula x = y → (f(...,x,...) = z → f(...,y,...) = z). Many other properties of equality are consequences of the axioms above, for example: 1. Symmetry. If x = y then y = x. 2. Transitivity. If x = y and y = z then x = z. First-order logic without equality An alternate approach considers the equality relation to be a non-logical symbol. This convention is known as first-order logic without equality. If an equality relation is included in the signature, the axioms of equality must now be added to the theories under consideration, if desired, instead of being considered rules of logic. The main difference between this method and first-order logic with equality is that an interpretation may now interpret two distinct individuals as "equal" (although, by Leibniz's law, these will satisfy exactly the same formulas under any interpretation). That is, the equality relation may now be interpreted by an arbitrary equivalence relation on the domain of discourse that is congruent with respect to the functions and relations of the interpretation. When this second convention is followed, the term normal model is used to refer to an interpretation where no distinct individuals a and b satisfy a = b. In first-order logic with equality, only normal models are considered, and so there is no term for a model other than a normal model. When first-order logic without equality is studied, it is necessary to amend the statements of results such as the Löwenheim–Skolem theorem so that only normal models are considered. First-order logic without equality is often employed in the context of second-order arithmetic and other higher-order theories of arithmetic, where the equality relation between sets of natural numbers is usually omitted. Defining equality within a theory If a theory has a binary formula A(x,y) which satisfies reflexivity and Leibniz's law, the theory is said to have equality, or to be a theory with equality. The theory may not have all instances of the above schemes as axioms, but rather as derivable theorems. For example, in theories with no function symbols and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if any relation is unchanged by changing s to t in any argument. Some theories allow other ad hoc definitions of equality: • In the theory of partial orders with one relation symbol ≤, one could define s = t to be an abbreviation for s ≤ t $\wedge$t ≤ s. • In set theory with one relation $\in$, one may define s = t to be an abbreviation for $\forall$x (s $\in$x $\leftrightarrow$t $\in$x) $\wedge$$\forall$x (x $\in$s $\leftrightarrow$x $\in$t). This definition of equality then automatically satisfies the axioms for equality. In this case, one should replace the usual axiom of extensionality, $\forall x \forall y [ \forall z (z \in x \ Leftrightarrow z \in y) \Rightarrow x = y]$, by $\forall x \forall y [ \forall z (z \in x \Leftrightarrow z \in y) \Rightarrow \forall z (x \in z \Leftrightarrow y \in z) ]$, i.e. if x and y have the same elements, then they belong to the same sets. Metalogical properties One motivation for the use of first-order logic, rather than higher-order logic, is that first-order logic has many metalogical properties that stronger logics do not have. These results concern general properties of first-order logic itself, rather than properties of individual theories. They provide fundamental tools for the construction of models of first-order theories. Completeness and undecidability Gödel's completeness theorem, proved by Kurt Gödel in 1929, establishes that there are sound, complete, effective deductive systems for first-order logic, and thus the first-order logical consequence relation is captured by finite provability. Naively, the statement that a formula φ logically implies a formula ψ depends on every model of φ; these models will in general be of arbitrarily large cardinality, and so logical consequence cannot be effectively verified by checking every model. However, it is possible to enumerate all finite derivations and search for a derivation of ψ from φ. If ψ is logically implied by φ, such a derivation will eventually be found. Thus first-order logical consequence is semidecidable: it is possible to make an effective enumeration of all pairs of sentences (φ,ψ) such that ψ is a logical consequence of φ. Unlike propositional logic, first-order logic is undecidable (although semidecidable), provided that the language has at least one predicate of arity at least 2 (other than equality). This means that there is no decision procedure that determines whether arbitrary formulas are logically valid. This result was established independently by Alonzo Church and Alan Turing in 1936 and 1937, respectively, giving a negative answer to the Entscheidungsproblem posed by David Hilbert in 1928. Their proofs demonstrate a connection between the unsolvability of the decision problem for first-order logic and the unsolvability of the halting problem. There are systems weaker than full first-order logic for which the logical consequence relation is decidable. These include propositional logic and monadic predicate logic, which is first-order logic restricted to unary predicate symbols and no function symbols. The Bernays–Schönfinkel class of first-order formulas is also decidable. Decidable subsets of first-order logic are also studied in the framework of description logics. The Löwenheim–Skolem theorem The Löwenheim–Skolem theorem shows that if a first-order theory of cardinality λ has any infinite model then it has models of every infinite cardinality greater than or equal to λ. One of the earliest results in model theory, it implies that it is not possible to characterize countability or uncountability in a first-order language. That is, there is no first-order formula φ(x) such that an arbitrary structure M satisfies φ if and only if the domain of discourse of M is countable (or, in the second case, uncountable). The Löwenheim–Skolem theorem implies that infinite structures cannot be categorically axiomatized in first-order logic. For example, there is no first-order theory whose only model is the real line: any first-order theory with an infinite model also has a model of cardinality larger than the continuum. Since the real line is infinite, any theory satisfied by the real line is also satisfied by some nonstandard models. When the Löwenheim–Skolem theorem is applied to first-order set theories, the nonintuitive consequences are known as Skolem's paradox. The compactness theorem The compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. This implies that if a formula is a logical consequence of an infinite set of first-order axioms, then it is a logical consequence of some finite number of those axioms. This theorem was proved first by Kurt Gödel as a consequence of the completeness theorem, but many additional proofs have been obtained over time. It is a central tool in model theory, providing a fundamental method for constructing models. The compactness theorem has a limiting effect on which collections of first-order structures are elementary classes. For example, the compactness theorem implies that any theory that has arbitrarily large finite models has an infinite model. Thus the class of all finite graphs is not an elementary class (the same holds for many other algebraic structures). There are also more subtle limitations of first-order logic that are implied by the compactness theorem. For example, in computer science, many situations can be modeled as a directed graph of states (nodes) and connections (directed edges). Validating such a system may require showing that no "bad" state can be reached from any "good" state. Thus one seeks to determine if the good and bad states are in different connected components of the graph. However, the compactness theorem can be used to show that connected graphs are not an elementary class in first-order logic, and there is no formula φ(x,y) of first-order logic, in the signature of graphs, that expresses the idea that there is a path from x to y. Connectedness can be expressed in second-order logic, however, but not with only existential set quantifiers, as $\Sigma_1^1$ also enjoys compactness. Lindström's theorem Per Lindström showed that the metalogical properties just discussed actually characterize first-order logic in the sense that no stronger logic can also have those properties (Ebbinghaus and Flum 1994, Chapter XIII). Lindström defined a class of abstract logical systems, and a rigorous definition of the relative strength of a member of this class. He established two theorems for systems of this type: • A logical system satisfying Lindström's definition that contains first-order logic and satisfies both the Löwenheim–Skolem theorem and the compactness theorem must be equivalent to first-order • A logical system satisfying Lindström's definition that has a semidecidable logical consequence relation and satisfies the Löwenheim–Skolem theorem must be equivalent to first-order logic. Although first-order logic is sufficient for formalizing much of mathematics, and is commonly used in computer science and other fields, it has certain limitations. These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe. For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm is impossible. This has led to the study of interesting decidable fragments such as C[2], first-order logic with two variables and the counting quantifiers $\exist^{\ge n}$ and $\exist^{\le n}$ (these quantifiers are, respectively, "there exists at least n" and "there exists at most n") (Horrocks 2010). The Löwenheim–Skolem theorem shows that if a first-order theory has any infinite model, then it has infinite models of every cardinality. In particular, no first-order theory with an infinite model can be categorical. Thus there is no first-order theory whose only model has the set of natural numbers as its domain, or whose only model has the set of real numbers as its domain. Many extensions of first-order logic, including infinitary logics and higher-order logics, are more expressive in the sense that they do permit categorical axiomatizations of the natural numbers or real numbers. This expressiveness comes at a metalogical cost, however: by Lindström's theorem, the compactness theorem and the downward Löwenheim–Skolem theorem cannot hold in any logic stronger than first-order. Formalizing natural languages First-order logic is able to formalize many simple quantifier constructions in natural language, such as "every person who lives in Perth lives in Australia". But there are many more complicated features of natural language that cannot be expressed in (single-sorted) first-order logic. "Any logical system which is appropriate as an instrument for the analysis of natural language needs a much richer structure than first-order predicate logic" (Gamut 1991, p. 75). Type Example Comment Quantification over If John is self-satisfied, then there is at least one thing he Requires a quantifier over predicates, which cannot be implemented in single-sorted first-order logic: Zj→ ∃X properties has in common with Peter (Xj∧Xp) Quantification over Santa Claus has all the attributes of a sadist Requires quantifiers over predicates, which cannot be implemented in single-sorted first-order logic: ∀X(∀x(Sx properties → Xx)→Xs) Predicate adverbial John is walking quickly Cannot be analysed as Wj ∧ Qj; predicate adverbials are not the same kind of thing as second-order predicates such as colour Relative adjective Jumbo is a small elephant Cannot be analysed as Sj ∧ Ej; predicate adjectives are not the same kind of thing as second-order predicates such as colour Predicate adverbial John is walking very quickly - Relative adjective Jumbo is terribly small An expression such as "terribly", when applied to a relative adjective such as "small", results in a new modifier composite relative adjective "terribly small" Prepositions Mary is sitting next to John The preposition "next to" when applied to "John" results in the predicate adverbial "next to John" Restrictions, extensions, and variations There are many variations of first-order logic. Some of these are inessential in the sense that they merely change notation without affecting the semantics. Others change the expressive power more significantly, by extending the semantics through additional quantifiers or other new logical symbols. For example, infinitary logics permit formulas of infinite size, and modal logics add symbols for possibility and necessity. Restricted languages First-order logic can be studied in languages with fewer logical symbols than were described above. • Because $\exists x \phi(x)$ can be expressed as $eg \forall x eg \phi(x)$, and $\forall x \phi(x)$ can be expressed as $eg \exists x eg \phi(x)$, either of the two quantifiers $\exists$ and $\ forall$ can be dropped. • Since $\phi \lor \psi$ can be expressed as $\lnot (\lnot \phi \land \lnot \psi)$ and $\phi \land \psi$ can be expressed as $\lnot(\lnot \phi \lor \lnot \psi)$, either $\vee$ or $\wedge$ can be dropped. In other words, it is sufficient to have $eg$ and $\vee$, or $eg$ and $\wedge$, as the only logical connectives. • Similarly, it is sufficient to have only $eg$ and $\rightarrow$ as logical connectives, or to have only the Sheffer stroke (NAND) or the Peirce arrow (NOR) operator. • It is possible to entirely avoid function symbols and constant symbols, rewriting them via predicate symbols in an appropriate way. For example, instead of using a constant symbol $\; 0$ one may use a predicate $\; 0(x)$ (interpreted as $\; x=0$ ), and replace every predicate such as $\; P(0,y)$ with $\forall x \;(0(x) \rightarrow P(x,y))$. A function such as $f(x_1,x_2,...,x_n)$ will similarly be replaced by a predicate $F(x_1,x_2,...,x_n,y)$ interpreted as $y = f(x_1,x_2,...,x_n)$. This change requires adding additional axioms to the theory at hand, so that interpretations of the predicate symbols used have the correct semantics. Restrictions such as these are useful as a technique to reduce the number of inference rules or axiom schemes in deductive systems, which leads to shorter proofs of metalogical results. The cost of the restrictions is that it becomes more difficult to express natural-language statements in the formal system at hand, because the logical connectives used in the natural language statements must be replaced by their (longer) definitions in terms of the restricted collection of logical connectives. Similarly, derivations in the limited systems may be longer than derivations in systems that include additional connectives. There is thus a trade-off between the ease of working within the formal system and the ease of proving results about the formal system. It is also possible to restrict the arities of function symbols and predicate symbols, in sufficiently expressive theories. One can in principle dispense entirely with functions of arity greater than 2 and predicates of arity greater than 1 in theories that include a pairing function. This is a function of arity 2 that takes pairs of elements of the domain and returns an ordered pair containing them. It is also sufficient to have two predicate symbols of arity 2 that define projection functions from an ordered pair to its components. In either case it is necessary that the natural axioms for a pairing function and its projections are satisfied. Many-sorted logic This section does not cite any references or sources. (June 2013) Ordinary first-order interpretations have a single domain of discourse over which all quantifiers range. Many-sorted first-order logic allows variables to have different sorts, which have different domains. This is also called typed first-order logic, and the sorts called types (as in data type), but it is not the same as first-order type theory. Many-sorted first-order logic is often used in the study of second-order arithmetic. When there are only finitely many sorts in a theory, many-sorted first-order logic can be reduced to single-sorted first-order logic. One introduces into the single-sorted theory a unary predicate symbol for each sort in the many-sorted theory, and adds an axiom saying that these unary predicates partition the domain of discourse. For example, if there are two sorts, one adds predicate symbols $P_1(x)$ and $P_2(x)$ and the axiom $\forall x ( P_1(x) \lor P_2(x)) \land \lnot \exists x (P_1(x) \land P_2(x))$. Then the elements satisfying $P_1$ are thought of as elements of the first sort, and elements satisfying $P_2$ as elements of the second sort. One can quantify over each sort by using the corresponding predicate symbol to limit the range of quantification. For example, to say there is an element of the first sort satisfying formula φ(x), one writes $\exists x (P_1(x) \land \phi(x))$. Additional quantifiers Additional quantifiers can be added to first-order logic. • Sometimes it is useful to say that "P(x) holds for exactly one x", which can be expressed as $\exists!$x P(x). This notation, called uniqueness quantification, may be taken to abbreviate a formula such as $\exists$x (P(x) $\wedge\forall$y (P(y) $\rightarrow$ (x = y))). • First-order logic with extra quantifiers has new quantifiers Qx,..., with meanings such as "there are many x such that ...". Also see branching quantifiers and the plural quantifiers of George Boolos and others. • Bounded quantifiers are often used in the study of set theory or arithmetic. Infinitary logics Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory. Infinitary logic generalizes first-order logic to allow formulas of infinite length. The most common way in which formulas can become infinite is through infinite conjunctions and disjunctions. However, it is also possible to admit generalized signatures in which function and relation symbols are allowed to have infinite arities, or in which quantifiers can bind infinitely many variables. Because an infinite formula cannot be represented by a finite string, it is necessary to choose some other representation of formulas; the usual representation in this context is a tree. Thus formulas are, essentially, identified with their parse trees, rather than with the strings being parsed. The most commonly studied infinitary logics are denoted L[αβ], where α and β are each either cardinal numbers or the symbol ∞. In this notation, ordinary first-order logic is L[ωω]. In the logic L [∞ω], arbitrary conjunctions or disjunctions are allowed when building formulas, and there is an unlimited supply of variables. More generally, the logic that permits conjunctions or disjunctions with less than κ constituents is known as L[κω]. For example, L[ω[1]ω] permits countable conjunctions and disjunctions. The set of free variables in a formula of L[κω] can have any cardinality strictly less than κ, yet only finitely many of them can be in the scope of any quantifier when a formula appears as a subformula of another.^6 In other infinitary logics, a subformula may be in the scope of infinitely many quantifiers. For example, in L[κ∞], a single universal or existential quantifier may bind arbitrarily many variables simultaneously. Similarly, the logic L[κλ] permits simultaneous quantification over fewer than λ variables, as well as conjunctions and disjunctions of size less than κ. Non-classical and modal logics • First-order modal logic allows one to describe other possible worlds as well as this contingently true world which we inhabit. In some versions, the set of possible worlds varies depending on which possible world one inhabits. Modal logic has extra modal operators with meanings which can be characterized informally as, for example "it is necessary that φ" (true in all possible worlds) and "it is possible that φ" (true in some possible world). With standard first-order logic we have a single domain and each predicate is assigned one extension. With first-order modal logic we have a domain function that assigns each possible world its own domain, so that each predicate gets an extension only relative to these possible worlds. This allows us to model cases where, for example, Alex is a Philosopher, but might have been a Mathematician, and might not have existed at all. In the first possible world P(a) is true, in the second P(a) is false, and in the third possible world there is no a in the domain at all. Higher-order logics The characteristic feature of first-order logic is that individuals can be quantified, but not predicates. Thus $\exists a ( \text{Phil}(a))$ is a legal first-order formula, but $\exists \text{Phil} ( \text{Phil}(a))$ is not, in most formalizations of first-order logic. Second-order logic extends first-order logic by adding the latter type of quantification. Other higher-order logics allow quantification over even higher types than second-order logic permits. These higher types include relations between relations, functions from relations to relations between relations, and other higher-type objects. Thus the "first" in first-order logic describes the type of objects that can be quantified. Unlike first-order logic, for which only one semantics is studied, there are several possible semantics for second-order logic. The most commonly employed semantics for second-order and higher-order logic is known as full semantics. The combination of additional quantifiers and the full semantics for these quantifiers makes higher-order logic stronger than first-order logic. In particular, the (semantic) logical consequence relation for second-order and higher-order logic is not semidecidable; there is no effective deduction system for second-order logic that is sound and complete under full semantics. Second-order logic with full semantics is more expressive than first-order logic. For example, it is possible to create axiom systems in second-order logic that uniquely characterize the natural numbers and the real line. The cost of this expressiveness is that second-order and higher-order logics have fewer attractive metalogical properties than first-order logic. For example, the Löwenheim–Skolem theorem and compactness theorem of first-order logic become false when generalized to higher-order logics with full semantics. Automated theorem proving and formal methods Automated theorem proving refers to the development of computer programs that search and find derivations (formal proofs) of mathematical theorems. Finding derivations is a difficult task because the search space can be very large; an exhaustive search of every possible derivation is theoretically possible but computationally infeasible for many systems of interest in mathematics. Thus complicated heuristic functions are developed to attempt to find a derivation in less time than a blind search. The related area of automated proof verification uses computer programs to check that human-created proofs are correct. Unlike complicated automated theorem provers, verification systems may be small enough that their correctness can be checked both by hand and through automated software verification. This validation of the proof verifier is needed to give confidence that any derivation labeled as "correct" is actually correct. Some proof verifiers, such as Metamath, insist on having a complete derivation as input. Others, such as Mizar and Isabelle, take a well-formatted proof sketch (which may still be very long and detailed) and fill in the missing pieces by doing simple proof searches or applying known decision procedures: the resulting derivation is then verified by a small, core "kernel". Many such systems are primarily intended for interactive use by human mathematicians: these are known as proof assistants. They may also use formal logics that are stronger than first-order logic, such as type theory. Because a full derivation of any nontrivial result in a first-order deductive system will be extremely long for a human to write,^7 results are often formalized as a series of lemmas, for which derivations can be constructed separately. Automated theorem provers are also used to implement formal verification in computer science. In this setting, theorem provers are used to verify the correctness of programs and of hardware such as processors with respect to a formal specification. Because such analysis is time-consuming and thus expensive, it is usually reserved for projects in which a malfunction would have grave human or financial consequences. See also 1. ^ Mendelson, Elliott (1964). Introduction to Mathematical Logic. Van Nostrand Reinhold. p. 56. 2. ^ The word language is sometimes used as a synonym for signature, but this can be confusing because "language" can also refer to the set of formulas. 3. ^ More precisely, there is only one language of each variant of one-sorted first-order logic: with or without equality, with or without functions, with or without propositional variables, …. 4. ^ Some authors who use the term "well-formed formula" use "formula" to mean any string of symbols from the alphabet. However, most authors in mathematical logic use "formula" to mean "well-formed formula" and have no term for non-well-formed formulas. In every context, it is only the well-formed formulas that are of interest. 5. ^ The SMT-LIB Standard: Version 2.0, by Clark Barrett, Aaron Stump, and Cesare Tinelli. http://goedel.cs.uiowa.edu/smtlib/ 6. ^ Some authors only admit formulas with finitely many free variables in L[κω], and more generally only formulas with < λ free variables in L[κλ]. 7. ^ Avigad et al. (2007) discuss the process of formally verifying a proof of the prime number theorem. The formalized proof required approximately 30,000 lines of input to the Isabelle proof • Andrews, Peter B. (2002); An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof, 2nd ed., Berlin: Kluwer Academic Publishers. Available from Springer. • Avigad, Jeremy; Donnelly, Kevin; Gray, David; and Raff, Paul (2007); "A formally verified proof of the prime number theorem", ACM Transactions on Computational Logic, vol. 9 no. 1 doi:10.1145/ • Barwise, Jon (1977); "An Introduction to First-Order Logic", in Barwise, Jon, ed. (1982). Handbook of Mathematical Logic. Studies in Logic and the Foundations of Mathematics. Amsterdam, NL: North-Holland. ISBN 978-0-444-86388-1. • Barwise, Jon; and Etchemendy, John (2000); Language Proof and Logic, Stanford, CA: CSLI Publications (Distributed by the University of Chicago Press) • Bocheński, Józef Maria (2007); A Précis of Mathematical Logic, Dordrecht, NL: D. Reidel, translated from the French and German editions by Otto Bird • Ferreirós, José (2001); The Road to Modern Logic — An Interpretation, Bulletin of Symbolic Logic, Volume 7, Issue 4, 2001, pp. 441–484, DOI 10.2307/2687794, JStor • Gamut, L. T. F. (1991); Logic, Language, and Meaning, Volume 2: Intensional Logic and Logical Grammar, Chicago, IL: University of Chicago Press, ISBN 0-226-28088-8 • Hilbert, David; and Ackermann, Wilhelm (1950); Principles of Mathematical Logic, Chelsea (English translation of Grundzüge der theoretischen Logik, 1928 German first edition) • Hodges, Wilfrid (2001); "Classical Logic I: First Order Logic", in Goble, Lou (ed.); The Blackwell Guide to Philosophical Logic, Blackwell • Ebbinghaus, Heinz-Dieter; Flum, Jörg; and Thomas, Wolfgang (1994); Mathematical Logic, Undergraduate Texts in Mathematics, Berlin, DE/New York, NY: Springer-Verlag, Second Edition, ISBN • Rautenberg, Wolfgang (2010), A Concise Introduction to Mathematical Logic (3rd ed.), New York, NY: Springer Science+Business Media, doi:10.1007/978-1-4419-1221-3, ISBN 978-1-4419-1220-6 • Tarski, Alfred and Givant, Steven (1987); A Formalization of Set Theory without Variables. Providence RI: American Mathematical Society. External links • Hazewinkel, Michiel, ed. (2001), "Predicate calculus", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 • Stanford Encyclopedia of Philosophy: Shapiro, Stewart; "Classical Logic". Covers syntax, model theory, and metatheory for first-order logic in the natural deduction style. • Magnus, P. D.; forall x: an introduction to formal logic. Covers formal semantics and proof theory for first-order logic. • Metamath: an ongoing online project to reconstruct mathematics as a huge first-order theory, using first-order logic and the axiomatic set theory ZFC. Principia Mathematica modernized. • Podnieks, Karl; Introduction to mathematical logic • Cambridge Mathematics Tripos Notes (typeset by John Fremlin). These notes cover part of a past Cambridge Mathematics Tripos course taught to undergraduates students (usually) within their third year. The course is entitled "Logic, Computation and Set Theory" and covers Ordinals and cardinals, Posets and Zorn's Lemma, Propositional logic, Predicate logic, Set theory and Consistency issues related to ZFC and other set theories. Academic areas
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=First-order_logic","timestamp":"2014-04-20T21:49:55Z","content_type":null,"content_length":"267139","record_id":"<urn:uuid:bc86e486-0020-4765-b0c4-08a4cd8c6def>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
Tangent line to a given curve. September 2nd 2009, 06:40 AM #1 Junior Member May 2008 Tangent line to a given curve. How would I find the solution to the tangent line to the curve y = 1/(x +4) that passes through the origin? Differentiate to find the gradient of the tangent . Then , we know it passes through (0,0) so we can find the equation of the tangent using y-y1=m(x-x1) ie y-0=(dy/dx)(x-0) September 2nd 2009, 06:46 AM #2 MHF Contributor Sep 2008 West Malaysia September 2nd 2009, 07:38 AM #3 Junior Member May 2008
{"url":"http://mathhelpforum.com/calculus/100243-tangent-line-given-curve.html","timestamp":"2014-04-24T00:11:46Z","content_type":null,"content_length":"34001","record_id":"<urn:uuid:6a1c2801-62b0-4d7a-92af-5d3e0fa99567>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
splitting one limit into two? up vote 0 down vote favorite Suppose I have the limit $\lim_{m\rightarrow \infty}\frac{\sum_{k=0}^ma_{k,m}}{\sum_{k=0}^mb_{k,m}}$. When can I write this as $\lim_{n\rightarrow \infty}\lim_{m\rightarrow \infty} \frac{\sum_{k=0}^ma_{k,n}}{\sum_{k=0}^mb_{k,n}}$? To be specific, both sums converge to exponentials, which tend to zero as $n\rightarrow$. I'd like to take their ratio before letting $n$ tend to infinity. As stated, I think I can cook up counterexamples, so it might be helpful to be even more specific about what you actually want to prove. – Yemon Choi Apr 8 '13 at 4:12 Actually, I now realize that the latter expression is also an ok starting point, which is what I needed. Thanks for the help. – user32851 Apr 8 '13 at 16:22 add comment 1 Answer active oldest votes The first (single) limit is totally blind to terms $a_{k\ n}\ \ b_{k\ n}$ for all $k > n$, while the second (double) limit depends on them. Thus the two limits are hardly related at all. In other words, the single limit considers finite segments, and the double limit the infinite segments. To compare these two there should be perhaps a relation given between the finite and infinite sums. up vote 0 down vote accepted Also, something should be said about denominators staying reasonably away from $0$; well--something :-) (I am surprised that the m-th sums in the single limit case have exactly m terms--I'd expect a more flexible situation). add comment Not the answer you're looking for? Browse other questions tagged convergence or ask your own question.
{"url":"https://mathoverflow.net/questions/126818/splitting-one-limit-into-two/126827","timestamp":"2014-04-18T11:14:49Z","content_type":null,"content_length":"51117","record_id":"<urn:uuid:cf98d80e-b276-4bf2-8764-ca56c3fc95ef>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
A Political Redsitricting Tool for the Rest of Us - Sweepline Redistricting Sweepline Redistricting The redistricting schemes we consider throughout this article are primarily concerned with equal populations and compact districts. For definiteness, consider the state of Washington with 9 legislative districts–the number of districts it had just prior to the 2010 census. Washington currently has 10 congressional districts, but 9 factors nicely and hence provides a better illustration of the concept. It is clear that there are a number of different ways to partition the state into regions with equal population. This paper will discuss two methods each of which can be adapted to provide a multitude of equal partitions. In this section, we borrow a term from the mathematical literature of Voronoi tessellations [Okabe]. Some methods of generating Voronoi tessellations rely on an imaginary line that sweeps from left to right across a plane. When the line meets certain conditions, geometric events are generated that ultimately lead to a Voronoi diagram [Fortune]. Our first equal partitioning scheme borrows from this idea. We start with an imaginary vertical sweepline at the left most edge of the state of Washington. We sweep from left to right across the state. When the line has swept out a certain portion of the population, it leaves a copy of itself there marking the boundary of a region and continues on to sweep out the next region. In an effort to draw 9 equal population districts as in this figure, a vertical sweepline first divides the state into three equal population regions. Subsequently, in each of the three regions, a horizontal sweepline divides the region into three equal population subregions. The result is 9 rectangular regions of equal population. A sweepline redistricting of Washington state. Each of the 9 regions has equal population. Notice in the figure that the second vertical partition is quite narrow. This results from the fact that it contains the densely populated Seattle-Tacoma area. While having equal population, these districts are not compact. They tend to be long and skinny instead of stout and blocky. To emphasize the non-uniqueness of such partitions, imagine starting first with a horizontal sweepline and then generating subregions using vertical sweeplines. There would be 9 equal population districts, but they would be different from those pictured here. How do we generate districts of equal population that are also compact? We move to another idea from the literature of Voronoi tessellations--that of sweepcircles.
{"url":"http://www.maa.org/publications/periodicals/loci/a-political-redsitricting-tool-for-the-rest-of-us-sweepline-redistricting","timestamp":"2014-04-20T15:12:08Z","content_type":null,"content_length":"99272","record_id":"<urn:uuid:e639185b-3204-4a72-b3c1-2c4456313791>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
PS Basics: Reliability Considerations in Power Supplies | EE Times PS Basics: Reliability Considerations in Power Supplies The importance of reliability can best be demonstrated using an anecdote I was told by a friend back in 2008. When working for a major IC firm from San Francisco, he had received a shipment of new and somewhat problematic desktop PCs. Within months these PCs had started to crash. The IT department was rolled in to fix the assumed operating system gremlins and/or viruses that were affecting these new computers -- to no effect. After much investigation, and with many a stripped-down PC, it was eventually revealed that the problem was caused by substandard bulk capacitors in the AC/DC power supply. These had deteriorated in use, and were causing the supply rails to be out of regulation, producing the random crashes. The episode highlights that, while power supplies may not have the glamour, nor get the attention that processors and displays receive, they are just as vital to system operation. Here we look at reliability in power supplies, how it's measured, and how it can be improved. Predicting the power supply's expected life First, a few definitions: Reliability, R[(t)]. The probability that a power supply will still be operational after a given time<./p> Failure rate, λ. The proportion of units that fail in a given time. Note, there is a high failure rate in the burn-in and wear-out phases of the cycle -- see figure 1. MTTF, 1/λ. The mean time to failure. MTBF (mean time between failures) is also commonly used in place of MTTF and is useful for equipment that will be repaired and then returned to service. MTTF is technically more correct mathematically, but the two terms are (except for a few situations) equivalent and MTBF is the more commonly used in the power industry. A supply's reliability is a function of multiple factors: a solid, conservative design with adequate margins, quality components with suitable ratings, thermal considerations with necessary derating, and a consistent manufacturing process. To calculate reliability -- the probability of a component not failing after a given time -- the following formula is used: R[(t)] = e^-λt For example, the probability that a component with an intrinsic failure rate of 10^-6 failures per hour wouldn't fail after 100,000 hours is 90.5%. After 500,000 hours this decreases to 60.6%. After 1 million hours of use this decreases to 36.7%. Going through the mathematics can reveal interesting realities. First, the failures for a constant failure rate are characterized by an exponential factor, so only 37% of the units in a large group will last as long as the MTBF number. Second, for a single supply, the probability that it will last as long as its MTBF rating is only 37%. Third, there is a 37% confidence level likelihood that it will last as long as its MTBF rating. Additionally, half the components in a group will have failed after just 0.69 of the MTBF. It should also be noted that this formula and curve can be adapted to calculate the reliability of a system: R[(t)] = e^-λ^A^t Where λ[A] is the sum total of all components failure rates (λ[A] = λ[1]n[1 +] λ[2]n[2 + … + ]λ[i]n[i])
{"url":"http://www.eetimes.com/author.asp?section_id=36&doc_id=1320968","timestamp":"2014-04-21T15:13:42Z","content_type":null,"content_length":"131298","record_id":"<urn:uuid:36a6867f-a796-47d4-af0e-f7f0dd68a333>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
Gillsville Math Tutor Find a Gillsville Math Tutor ...One is in Special Education, the other in Community Counseling. I am currently working on my Education Specialist degree in Curriculum and Instruction. I have applied to the doctorate program in Curriculum and Instruction. 34 Subjects: including SAT math, English, writing, algebra 1 ...Thank you for choosing me to provide your Tutoring Services. I look forward to hearing from you and assisting you in achieving an exciting and successful journey to success in your learning experience. Robert S., P.E. 7 Subjects: including algebra 1, algebra 2, calculus, geometry ...I believe in rigorous instruction and study habits. I have had many students, as well as parents, comment on the fact that I make learning enjoyable and inspiring.I have taught 3rd - 6th grade social studies, math, science, reading, gifted education and language arts in the state of Georgia under the new GPS guidelines. I have taught gifted education and study skills for 16 years. 30 Subjects: including calculus, precalculus, Bible studies, differential equations ...This class did an excellent job of applying the material to real research scenarios which I really enjoyed. As a graduate student I took more advanced courses like applied statistics, regression methods, analysis of variance (ANOVA) and design of experiments, data analysis methodology, and appli... 51 Subjects: including ACT Math, discrete math, differential equations, English Hi, my name is Christi and I am a 21 year old undergraduate student, double majoring in English Literature and Cellular Biology. I have been a tutor from high school through college in various areas, not limited to my major subjects but also different math courses and social sciences such as psycho... 14 Subjects: including algebra 1, algebra 2, trigonometry, reading
{"url":"http://www.purplemath.com/gillsville_math_tutors.php","timestamp":"2014-04-20T23:32:13Z","content_type":null,"content_length":"23724","record_id":"<urn:uuid:7d6d5295-9663-44af-b494-7364767870c6>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Relativity: The Special and General Theory/Appendix From Wikisource ←Part III Relativity: The Special and General Theory by , translated by Robert William Lawson Appendix II→ Appendix I - Simple Derivation of the Lorentz Transformation Appendix I - Simple Derivation of the Lorentz Transformation[edit] (SUPPLEMENTARY TO SECTION 11) For the relative orientation of the co-ordinate systems indicated in Fig. 2, the $x$-axes of both systems permanently coincide. In the present case we can divide the problem into parts by considering first only events which are localised on the $x$-axis. Any such event is represented with respect to the co-ordinate system $K$ by the abscissa $x$ and the time $t$, and with respect to the system $K'$ by the abscissa $x'$ and the time $t'$. We require to find $x'$ and $t'$ when $x$ and $t$ are given. A light-signal, which is proceeding along the positive axis of $x$, is transmitted according to the equation $x = ct\,$ Since the same light-signal has to be transmitted relative to $K'$ with the velocity c, the propagation relative to the system $K'$ will be represented by the analogous formula Those space-time points (events) which satisfy (1) must also satisfy (2). Obviously this will be the case when the relation $(x' - ct') = \lambda (x - ct)\,$ (3) is fulfilled in general, where $\lambda$ indicates a constant; for, according to (3), the disappearance of $(x - ct)$ involves the disappearance of $(x' - ct')$. If we apply quite similar considerations to light rays which are being transmitted along the negative $x$-axis, we obtain the condition $(x' + ct') = \mu(x + ct)\,$ (4) By adding (or subtracting) equations (3) and (4), and introducing for convenience the constants $a$ and $b$ in place of the constants $\lambda$ and $\mu$, where we obtain the equations $\left.\begin{array}{rl} x'= & ax-bct\\ ct'= & act-bx\end{array}\right\}$ (5) We should thus have the solution of our problem, if the constants $a$ and $b$ were known. These result from the following discussion. For the origin of $K'$ we have permanently $x' = 0$, and hence according to the first of the equations (5) If we call $v$ the velocity with which the origin of $K'$ is moving relative to $K$, we then have The same value $v$ can be obtained from equations (5), if we calculate the velocity of another point of $K'$ relative to $K$, or the velocity (directed towards the negative $x$-axis) of a point of $K$ with respect to $K'$. In short, we can designate $v$ as the relative velocity of the two systems. Furthermore, the principle of relativity teaches us that, as judged from $K$, the length of a unit measuring-rod which is at rest with reference to $K'$ must be exactly the same as the length, as judged from $K'$, of a unit measuring-rod which is at rest relative to $K$. In order to see how the points of the $x$-axis appear as viewed from $K$, we only require to take a "snapshot" of $K'$ from $K$; this means that we have to insert a particular value of $t$ (time of $K$), e.g. $t = 0$. For this value of $t$ we then obtain from the first of the equations (5) $x' = ax\,$ Two points of the $x'$-axis which are separated by the distance $\Delta x'=1$ when measured in the $K'$ system are thus separated in our instantaneous photograph by the distance $\Delta x=\frac{1}{a}$ (7) But if the snapshot be taken from $K'(t' = 0)$, and if we eliminate $t$ from the equations (5), taking into account the expression (6), we obtain From this we conclude that two points on the $x$-axis separated by the distance 1 (relative to $K$) will be represented on our snapshot by the distance $\Delta x'=a\left(1-\frac{v^{2}}{c^{2}}\right)$ (7a) But from what has been said, the two snapshots must be identical; hence Dx in (7) must be equal to $\Delta x'$ in (7a), so that we obtain $a^{2}=\frac{1}{1-\frac{v^{2}}{c^{2}}}$ (7b) The equations (6) and (7b) determine the constants $a$ and $b$. By inserting the values of these constants in (5), we obtain the first and the fourth of the equations given in Section 11. $\left.\begin{array}{c} x'=\frac{x-vt}{\sqrt{1-\frac{v^{2}}{c^{2}}}}\\ \\t'=\frac{t-\frac{v}{c^{2}}x}{\sqrt{1-\frac{v^{2}}{c^{2}}}}\end{array}\right\}$ (8) Thus we have obtained the Lorentz transformation for events on the $x$-axis. It satisfies the condition $x'^{2}-c^{2}t'^{2}=x^{2}-c^{2}t^{2}$ (8a) The extension of this result, to include events which take place outside the $x$-axis, is obtained by retaining equations (8) and supplementing them by the relations $\left.\begin{array}{c} y'=y\\ z'=z\end{array}\right\}$ (9) In this way we satisfy the postulate of the constancy of the velocity of light in vacuo for rays of light of arbitrary direction, both for the system $K$ and for the system $K'$. This may be shown in the following manner. We suppose a light-signal sent out from the origin of $K$ at the time $t$ = 0. It will be propagated according to the equation or, if we square this equation, according to the equation $x^{2}+y^{2}+z^{2}-c^{2}t^{2}=0$ (10) It is required by the law of propagation of light, in conjunction with the postulate of relativity, that the transmission of the signal in question should take place — as judged from $K'$ — in accordance with the corresponding formula $r' = ct'\,$ $x'^{2}+y'^{2}+z'^{2}-c^{2}t'^{2}=0$ (10a) In order that equation (10a) may be a consequence of equation (10), we must have $x'^{2}+y'^{2}+z'^{2}-c^{2}t'^{2}=\sigma\left(x^{2}+y^{2}+z^{2}-c^{2}t^{2}\right)$ (11) Since equation (8a) must hold for points on the $x$-axis, we thus have $\sigma=1$. It is easily seen that the Lorentz transformation really satisfies equation (11) for $\sigma=1$; for (11) is a consequence of (8a) and (9), and hence also of (8) and (9). We have thus derived the Lorentz transformation. The Lorentz transformation represented by (8) and (9) still requires to be generalised. Obviously it is immaterial whether the axes of $K'$ be chosen so that they are spatially parallel to those of $K$. It is also not essential that the velocity of translation of $K'$ with respect to $K$ should be in the direction of the $x$-axis. A simple consideration shows that we are able to construct the Lorentz transformation in this general sense from two kinds of transformations, viz. from Lorentz transformations in the special sense and from purely spatial transformations. which corresponds to the replacement of the rectangular co-ordinate system by a new system with its axes pointing in other directions. Mathematically, we can characterise the generalised Lorentz transformation thus: It expresses $x'$, $y'$, $z'$, $t'$, in terms of linear homogeneous functions of $x$, $y$, $z$, $t$, of such a kind that the relation $x'^{2}+y'^{2}+z'^{2}-c^{2}t'^{2}=x^{2}+y^{2}+z^{2}-c^{2}t^{2}$ (11a) is satisfied identically. That is to say: If we substitute their expressions in $x$, $y$, $z$, $t$, in place of $x'$, $y'$, $z'$, $t'$, on the left-hand side, then the left-hand side of (11a) agrees with the right-hand side. Appendix II - Minkowski's Four-Dimensional Space ("World")[edit] (SUPPLEMENTARY TO SECTION 17) We can characterise the Lorentz transformation still more simply if we introduce the imaginary $\sqrt{-1}ct$ in place of $t$, as time-variable. If, in accordance with this, we insert \begin{align} x_1 & = x \\ x_2 & = y \\ x_3 & = z \\ x_4 & = \sqrt{-1}ct \end{align} and similarly for the accented system $K'$, then the condition which is identically satisfied by the transformation can be expressed thus : $x_1'^2 + x_2'^2 + x_3'^2 + x_4'^2 = x_1^{2} + x_2^{2} + x_3^{2} + x_4^{2}$ (12) That is, by the afore-mentioned choice of " coordinates," (11a) [see the end of Appendix I] is transformed into this equation. We see from (12) that the imaginary time co-ordinate $x_4$, enters into the condition of transformation in exactly the same way as the space co-ordinates $x_1$, $x_2$, $x_3.$ It is due to this fact that, according to the theory of relativity, the "time" $x_4$, enters into natural laws in the same form as the space co ordinates $x_1$, $x_2$, $x_3$. A four-dimensional continuum described by the "co-ordinates" $x_1$, $x_2$, $x_3$, $x_4$, was called "world" by Minkowski, who also termed a point-event a "world-point." From a "happening" in three-dimensional space, physics becomes, as it were, an "existence" in the four-dimensional "world." This four-dimensional "world" bears a close similarity to the three-dimensional "space" of (Euclidean) analytical geometry. If we introduce into the latter a new Cartesian co-ordinate system ($x_1'$, $x_2'$, $x_3'$) with the same origin, then $x_1'$, $x_2'$, $x_3'$, are linear homogeneous functions of $x_1$, $x_2$, $x_3$ which identically satisfy the equation $x_1'^2 + x_2'^2 + x_3'^2 = x_1^{2} + x_2^{2} + x_3^{2}$ The analogy with (12) is a complete one. We can regard Minkowski's "world" in a formal manner as a four-dimensional Euclidean space (with an imaginary time coordinate); the Lorentz transformation corresponds to a "rotation" of the co-ordinate system in the four-dimensional "world." Appendix III - The Experimental Confirmation of the General Theory of Relativity[edit] From a systematic theoretical point of view, we may imagine the process of evolution of an empirical science to be a continuous process of induction. Theories are evolved and are expressed in short compass as statements of a large number of individual observations in the form of empirical laws, from which the general laws can be ascertained by comparison. Regarded in this way, the development of a science bears some resemblance to the compilation of a classified catalogue. It is, as it were, a purely empirical enterprise. But this point of view by no means embraces the whole of the actual process; for it slurs over the important part played by intuition and deductive thought in the development of an exact science. As soon as a science has emerged from its initial stages, theoretical advances are no longer achieved merely by a process of arrangement. Guided by empirical data, the investigator rather develops a system of thought which, in general, is built up logically from a small number of fundamental assumptions, the so-called axioms. We call such a system of thought a theory. The theory finds the justification for its existence in the fact that it correlates a large number of single observations, and it is just here that the "truth" of the theory lies. Corresponding to the same complex of empirical data, there may be several theories, which differ from one another to a considerable extent. But as regards the deductions from the theories which are capable of being tested, the agreement between the theories may be so complete that it becomes difficult to find any deductions in which the two theories differ from each other. As an example, a case of general interest is available in the province of biology, in the Darwinian theory of the development of species by selection in the struggle for existence, and in the theory of development which is based on the hypothesis of the hereditary transmission of acquired characters. We have another instance of far-reaching agreement between the deductions from two theories in Newtonian mechanics on the one hand, and the general theory of relativity on the other. This agreement goes so far, that up to the present we have been able to find only a few deductions from the general theory of relativity which are capable of investigation, and to which the physics of pre-relativity days does not also lead, and this despite the profound difference in the fundamental assumptions of the two theories. In what follows, we shall again consider these important deductions, and we shall also discuss the empirical evidence appertaining to them which has hitherto been obtained. (a) Motion of the Perihelion of Mercury[edit] According to Newtonian mechanics and Newton's law of gravitation, a planet which is revolving round the sun would describe an ellipse round the latter, or, more correctly, round the common centre of gravity of the sun and the planet. In such a system, the sun, or the common centre of gravity, lies in one of the foci of the orbital ellipse in such a manner that, in the course of a planet-year, the distance sun-planet grows from a minimum to a maximum, and then decreases again to a minimum. If instead of Newton's law we insert a somewhat different law of attraction into the calculation, we find that, according to this new law, the motion would still take place in such a manner that the distance sun-planet exhibits periodic variations; but in this case the angle described by the line joining sun and planet during such a period (from perihelion — closest proximity to the sun — to perihelion) would differ from 360°. The line of the orbit would not then be a closed one but in the course of time it would fill up an annular part of the orbital plane, viz. between the circle of least and the circle of greatest distance of the planet from the sun. According also to the general theory of relativity, which differs of course from the theory of Newton, a small variation from the Newton-Kepler motion of a planet in its orbit should take place, and in such a way, that the angle described by the radius sun-planet between one perihelion and the next should exceed that corresponding to one complete revolution by an amount given by (N.B. — One complete revolution corresponds to the angle 2π in the absolute angular measure customary in physics, and the above expression gives the amount by which the radius sun-planet exceeds this angle during the interval between one perihelion and the next.) In this expression $a$ represents the major semi-axis of the ellipse, $e$ its eccentricity, $c$ the velocity of light, and $T$ the period of revolution of the planet. Our result may also be stated as follows: According to the general theory of relativity, the major axis of the ellipse rotates round the sun in the same sense as the orbital motion of the planet. Theory requires that this rotation should amount to 43 seconds of arc per century for the planet Mercury, but for the other Planets of our solar system its magnitude should be so small that it would necessarily escape detection.^[1] In point of fact, astronomers have found that the theory of Newton does not suffice to calculate the observed motion of Mercury with an exactness corresponding to that of the delicacy of observation attainable at the present time. After taking account of all the disturbing influences exerted on Mercury by the remaining planets, it was found (Leverrier — 1859 — and Newcomb — 1895) that an unexplained perihelial movement of the orbit of Mercury remained over, the amount of which does not differ sensibly from the above mentioned +43 seconds of arc per century. The uncertainty of the empirical result amounts to a few seconds only. (b) Deflection of Light by a Gravitational Field[edit] In Section 22 it has been already mentioned that according to the general theory of relativity, a ray of light will experience a curvature of its path when passing through a gravitational field, this curvature being similar to that experienced by the path of a body which is projected through a gravitational field. As a result of this theory, we should expect that a ray of light which is passing close to a heavenly body would be deviated towards the latter. For a ray of light which passes the sun at a distance of Δ sun-radii from its centre, the angle of deflection (α) should amount to $\alpha=\frac{1.7\ \mathrm{seconds\ of\ arc}}{\Delta}$ It may be added that, according to the theory, half of this deflection is produced by the Newtonian field of attraction of the sun, and the other half by the geometrical modification (" curvature ") of space caused by the sun. This result admits of an experimental test by means of the photographic registration of stars during a total eclipse of the sun. The only reason why we must wait for a total eclipse is because at every other time the atmosphere is so strongly illuminated by the light from the sun that the stars situated near the sun's disc are invisible. The predicted effect can be seen clearly from the accompanying diagram. If the sun (S) were not present, a star which is practically infinitely distant would be seen in the direction $D_1$, as observed front the earth. But as a consequence of the deflection of light from the star by the sun, the star will be seen in the direction $D_2$, i.e. at a somewhat greater distance from the centre of the sun than corresponds to its real position. In practice, the question is tested in the following way. The stars in the neighbourhood of the sun are photographed during a solar eclipse. In addition, a second photograph of the same stars is taken when the sun is situated at another position in the sky, i.e. a few months earlier or later. As compared with the standard photograph, the positions of the stars on the eclipse-photograph ought to appear displaced radially outwards (away from the centre of the sun) by an amount corresponding to the angle α. We are indebted to the [British] Royal Society and to the Royal Astronomical Society for the investigation of this important deduction. Undaunted by the [first world] war and by difficulties of both a material and a psychological nature aroused by the war, these societies equipped two expeditions — to Sobral (Brazil), and to the island of Principe (West Africa) — and sent several of Britain's most celebrated astronomers (Eddington, Cottingham, Crommelin, Davidson), in order to obtain photographs of the solar eclipse of 29th May, 1919. The relative discrepancies to be expected between the stellar photographs obtained during the eclipse and the comparison photographs amounted to a few hundredths of a millimetre only. Thus great accuracy was necessary in making the adjustments required for the taking of the photographs, and in their subsequent measurement. The results of the measurements confirmed the theory in a thoroughly satisfactory manner. The rectangular components of the observed and of the calculated deviations of the stars (in seconds of arc) are set forth in the following table of results: (c) Displacement of Spectral Lines Towards the Red[edit] In Section 23 it has been shown that in a system $K'$ which is in rotation with regard to a Galileian system $K$, clocks of identical construction, and which are considered at rest with respect to the rotating reference-body, go at rates which are dependent on the positions of the clocks. We shall now examine this dependence quantitatively. A clock, which is situated at a distance $r$ from the centre of the disc, has a velocity relative to $K$ which is given by $v = \omega r\,$ where ω represents the angular velocity of rotation of the disc $K'$ with respect to $K$. If $v_0$, represents the number of ticks of the clock per unit time ("rate" of the clock) relative to $K$ when the clock is at rest, then the "rate" of the clock ($u$) when it is moving relative to $K$ with a velocity $v$, but at rest with respect to the disc, will, in accordance with Section 12, be given by or with sufficient accuracy by This expression may also be stated in the following form: If we represent the difference of potential of the centrifugal force between the position of the clock and the centre of the disc by $\phi$, i.e. the work, considered negatively, which must be performed on the unit of mass against the centrifugal force in order to transport it from the position of the clock on the rotating disc to the centre of the disc, then we have From this it follows that In the first place, we see from this expression that two clocks of identical construction will go at different rates when situated at different distances from the centre of the disc. This result is also valid from the standpoint of an observer who is rotating with the disc. Now, as judged from the disc, the latter is in a gravitational field of potential $\phi$, hence the result we have obtained will hold quite generally for gravitational fields. Furthermore, we can regard an atom which is emitting spectral lines as a clock, so that the following statement will hold: An atom absorbs or emits light of a frequency which is dependent on the potential of the gravitational field in which it is situated. The frequency of an atom situated on the surface of a heavenly body will be somewhat less than the frequency of an atom of the same element which is situated in free space (or on the surface of a smaller celestial body). Now $\phi=-K\frac{M}{r}$, where $K$ is Newton's constant of gravitation, and $M$ is the mass of the heavenly body. Thus a displacement towards the red ought to take place for spectral lines produced at the surface of stars as compared with the spectral lines of the same element produced at the surface of the earth, the amount of this displacement being For the sun, the displacement towards the red predicted by theory amounts to about two millionths of the wave-length. A trustworthy calculation is not possible in the case of the stars, because in general neither the mass $M$ nor the radius $r$ are known. It is an open question whether or not this effect exists, and at the present time astronomers are working with great zeal towards the solution. Owing to the smallness of the effect in the case of the sun, it is difficult to form an opinion as to its existence. Whereas Grebe and Bachem (Bonn), as a result of their own measurements and those of Evershed and Schwarzschild on the cyanogen bands, have placed the existence of the effect almost beyond doubt, while other investigators, particularly St. John, have been led to the opposite opinion in consequence of their measurements. Mean displacements of lines towards the less refrangible end of the spectrum are certainly revealed by statistical investigations of the fixed stars; but up to the present the examination of the available data does not allow of any definite decision being arrived at, as to whether or not these displacements are to be referred in reality to the effect of gravitation. The results of observation have been collected together, and discussed in detail from the standpoint of the question which has been engaging our attention here, in a paper by E. Freundlich entitled "Zur Prüfung der allgemeinen Relativitäts-Theorie" (Die Naturwissenschaften, 1919, No. 35, p. 520: Julius Springer, Berlin). At all events, a definite decision will be reached during the next few years. If the displacement of spectral lines towards the red by the gravitational potential does not exist, then the general theory of relativity will be untenable. On the other hand, if the cause of the displacement of spectral lines be definitely traced to the gravitational potential, then the study of this displacement will furnish us with important information as to the mass of the heavenly bodies. Appendix IV - The Structure of Space According to the General Theory of Relativity[edit] (SUPPLEMENTARY TO SECTION 32) Since the publication of the first edition of this little book, our knowledge about the structure of space in the large (" cosmological problem ") has had an important development, which ought to be mentioned even in a popular presentation of the subject. My original considerations on the subject were based on two hypotheses: (1) There exists an average density of matter in the whole of space which is everywhere the same and different from zero. (2) The magnitude ("radius") of space is independent of time. Both these hypotheses proved to be consistent, according to the general theory of relativity, but only after a hypothetical term was added to the field equations, a term which was not required by the theory as such nor did it seem natural from a theoretical point of view ("cosmological term of the field equations"). Hypothesis (2) appeared unavoidable to me at the time, since I thought that one would get into bottomless speculations if one departed from it. However, already in the twenties, the Russian mathematician Friedman showed that a different hypothesis was natural from a purely theoretical point of view. He realized that it was possible to preserve hypothesis (1) without introducing the less natural cosmological term into the field equations of gravitation, if one was ready to drop hypothesis (2). Namely, the original field equations admit a solution in which the "world radius" depends on time (expanding space). In that sense one can say, according to Friedman, that the theory demands an expansion of space. A few years later Hubble showed, by a special investigation of the extra-galactic nebulae (" milky ways "), that the spectral lines emitted showed a red shift which increased regularly with the distance of the nebulae. This can be interpreted in regard to our present knowledge only in the sense of Doppler's principle, as an expansive motion of the system of stars in the large -- as required, according to Friedman, by the field equations of gravitation. Hubble's discovery can, therefore, be considered to some extent as a confirmation of the theory. There does arise, however, a strange difficulty. The interpretation of the galactic line-shift discovered by Hubble as an expansion (which can hardly be doubted from a theoretical point of view), leads to an origin of this expansion which lies "only" about $10^9$ years ago, while physical astronomy makes it appear likely that the development of individual stars and systems of stars takes considerably longer. It is in no way known how this incongruity is to be overcome. I further want to remark that the theory of expanding space, together with the empirical data of astronomy, permit no decision to be reached about the finite or infinite character of (three-dimensional) space, while the original "static" hypothesis of space yielded the closure (finiteness) of space. 1. ↑ Especially since the next planet Venus has an orbit that is almost an exact circle, which makes it more difficult to locate the perihelion with precision.
{"url":"https://en.wikisource.org/wiki/Relativity:_The_Special_and_General_Theory/Appendix","timestamp":"2014-04-16T22:17:05Z","content_type":null,"content_length":"79594","record_id":"<urn:uuid:8bd937ed-694d-4b2d-ae62-d3614ca1845f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
Application of Fuzzy Composition Relation For DNA Sequence Classification (IJCSIS) International Journal of Computer Science and Information Security,Vol. 8,, No. 6, 2010 A DNA sequence is essentially represented as a string of fourcharacters A, C, T, G and looks something likeACCTGACCTTACG. These strings can also be represented interms of some probability measures and using these measuresit can be depicted graphically as well. This graphicalrepresentation matches the Markov Hidden Model. A physicalor mathematical model of a system produces a sequence of symbols according to a certain probability associated withthem. This is known as a stochastic process [2]. There aredifferent ways to use probabilities for depicting the DNAsequences. The diagrammatical representation can be shownas follows:FIG 1: [The states of A, C, G and T.]For example, the transition probability from state G to state Tis 0.08, i,e, G xT xP In a given sequence of length , x , …… x , represent thenucleotides. The sequence starts at the first state , and makessuccessive transitions to , x and so on, till . Using Markovproperty [6], the probability of , depends on the value of only the previous state, , not on the entire previoussequence. This characteristic is known as Markov property [5]and can be written as: xP x xP x xP x xP xP L L L L ii Li x xP xP (1)In Equation (1) we need to specify the probability of thestarting state. For simplicity, we would like to model this as atransition too. This can be done by adding a begin state,denoted by , so that the starting state becomes Now considering , the transition probability we canrewrite (1) as i x x Li x x a xP (2)If there are classes, then we calculate the probability of asequence being in all the classes. To overcome thisdrawback we use Fuzzy composition relation. That is, wedivide the classes into different groups based on theirsimilarities. So, if out of are similar then they aretreated as one group and their individual transition probabilitytables are merged using the fuzzy composition relation. Theremaining ( n – m) classes are similarly grouped. Lets say, if there are two classes , the Fuzzy compositionrelation between [6][7] can be written as follows: y R x R Min Max R R Different class representation Grouping of similar classes Fig 2: Grouping of similar classesA table is then constructed representing the entire ( n – m) similar classes. From this table we compute the probabilitythat a sequence belongs to a given group using the followingequation: xa xa xP xP (4)Here “+” represents transition probability of the sequencebelonging to one of the classes using fuzzy compositionrelation and “-“ represents the transition probability of thesame for another class [1].If this ratio is greater than zero then we can say that thesequence is from the first class else from the other one.An Example:Let us consider an example for applying this classificationmethod. We have taken into consideration the Swine fludata.[11] The different categories of the Swine flu data areshown as shows the Transition Probability of Type 1, Type2 and Type 3 varieties of Avian Flu. 146http://sites.google.com/site/ijcsis/ISSN 1947-5500
{"url":"http://www.scribd.com/doc/39040894/Application-of-Fuzzy-Composition-Relation-For-DNA-Sequence-Classification","timestamp":"2014-04-18T08:44:18Z","content_type":null,"content_length":"235785","record_id":"<urn:uuid:00ed48d5-37ff-455b-a6dd-9ca6d1c7efc5>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Explain why an angle that is supplementary to an acute angle must be an obtuse angle. Best Response You've already chosen the best response. supplementary angles add up to 180 degrees. if one is less than 90, the other angle must be greater than 90 in order to add up to 180 degrees Best Response You've already chosen the best response. because they add up to 180 degrees so if one is acute the other must be > 90 degrees ie obtuse Best Response You've already chosen the best response. So supplementary means that angles A + B = 180 So say that angle a is <90 degrees (acute) Its supplement must therefore fill the rest of the 180, meaning it would have to be >90 Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4e6e93ee0b8beaebb295f8a7","timestamp":"2014-04-20T23:49:32Z","content_type":null,"content_length":"32583","record_id":"<urn:uuid:08043f05-b404-491c-b40d-6a7da0fa1d7c>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Munster Calculus Tutor ...I was a teaching assistant for both undergraduate and graduate students for a variety of Biology classes. I am fluent in a range of Science and History disciplines. As an Ivy League graduate, I learned from professors at the very top of their fields. 41 Subjects: including calculus, chemistry, physics, English ...I believe that everyone can be a success and that the only time you fail is when you quit. I enjoy opening students' minds to the idea that math is cool and conquerable!Bachelors in Mathematics (one class away). Over 11 years tutoring students in Algebra and Geometry, mostly with learning disab... 24 Subjects: including calculus, chemistry, special needs, study skills ...These guides have been used to improve scores all over the midwest. I've been tutoring test prep for 15 years, and I have a lot of experience helping students get the score they need on the GMAT. I've helped students push past the 700 mark, or just bring up one part of their score to push up their overall score. 24 Subjects: including calculus, physics, geometry, GRE ...I have completed undergraduate coursework in the following math subjects - differential and integral calculus, advanced calculus, linear algebra, differential equations, advanced differential equations with applications, and complex analysis. I have a PhD. in experimental nuclear physics. I hav... 10 Subjects: including calculus, physics, geometry, algebra 1 ...I've used Word for over 20 years. I have over 15 year programming experience with Java. I added this subject because I took the ACT in high school and very familiar with Math test. 16 Subjects: including calculus, piano, algebra 1, algebra 2
{"url":"http://www.purplemath.com/munster_in_calculus_tutors.php","timestamp":"2014-04-17T07:26:53Z","content_type":null,"content_length":"23721","record_id":"<urn:uuid:acfe5c04-b313-4f49-9049-ff9664be9373>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Regexps from DFA clark@quarry.zk3.dec.com (Chris Clark USG) 7 Feb 1997 23:35:06 -0500 From comp.compilers | List of all articles for this month | From: clark@quarry.zk3.dec.com (Chris Clark USG) Newsgroups: comp.compilers Date: 7 Feb 1997 23:35:06 -0500 Organization: Digital Equipment Corporation - Marlboro, MA References: 97-02-030 Keywords: lex, DFA > It might be expressed as a ``minimal regexpr for a given set of strings''. > I didn't clarify the notion of generality vs. simplicity, as '.*' can match > anything and has a minimal number of states as a DFA, while the exact match > for all strings is minimal in term of size of the generated language. > The interesting expr/automatas are somewhere in between. Expressed that way, there is at least one "natural" solution to your problem. This will construct a set of NFA's which describe the languages starting with your exact set of strings and work its way to "sigma*". Each machine it adds to its set in the construction process accepts a little larger language (in that it accepts more strings) than the element of the set it was derived from. That seems to match your generality concept. (BTW, If you want DFA's, you simply need to run a conversion algorithm. Allyn Dimock supplied several 1) Create the following NFA as the initial element of your set. Create a unique start state. For each string in your set For each character in the string Add a unique state, if the character is the last character of the string, make the state an accepting state Add a transition (for the character) from the state describing the previous character (the start state if this is the first character of the string) to the state for this character. The NFA for the strings "car" and "cat" would look like the one below. -[0] (start state) | "c" "a" "r" +------>[1]----->[2]----->[3]+ (accepting state) | "c" "a" "t" |------>[4]----->[5]----->[6]+ (accepting state) This machine accepts the minimal set of strings. It is obviously not minimal in terms of states nor transitions except under some restrictive rules. However, it has some nice properities for the set it constructs in the rest of the algorithm. Any other NFA which described the set of strings could also be used as the initial element and some would yield slightly different sets at the end of this algorithm. 2) For each NFA currently in your set (the input NFA), create a set of new NFA's by the following method. Take each pair of states in the input NFA and merge them, each pair-wise merger of state yields a potential new NFA for your set, if the resulting NFA is not in your set add it. Here are two of the new machines which would be added given original machine. Onw is the merger of states 3 and 5 from the above machine. The resulting machine would look like: -[0] (start state) | "c" "a" "r" | | | "c" "a" | + "t" |----->[4]-------------+--->[5]---------->[6]+ (accepting state) (accepting state) That machine would, of course, accept "car", "cart", "ca", and "cat". If the two states to be merged caused a cycle to be formed, the resulting machine would except regular expressions which repetitions, as in the next machine, where states 4 and 6 from the original machine were merged. -[0] (start state) | "c" "a" "r" +----->[1]----->[2]----->[3]+ (accepting state) | "c" + "a" "t" ^ (accepting state) | The resulting machine would accept "car" | "c"("at")*. 3) Repeat step 2 until there are no more machines which can be added. One element of you machine set, should be a machine with one state, which is both the start and an accepting state and has a transition to itself on each character in any string in the problem (i.e. sigma*, unless sigma includes some characters not in any of the strings). Note, that the resulting set of machines is finite (and strictly limited by a function of the number of initial characters within the initial strings). Other extensions to the set which produce finite supersets are also possible. For example, machines could be added to the set where non-accepting state were changed to accepting states. Or, additional transitions could be added between states. Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
{"url":"http://compilers.iecc.com/comparch/article/97-02-040","timestamp":"2014-04-16T07:15:46Z","content_type":null,"content_length":"11104","record_id":"<urn:uuid:27ba80c2-d98f-4994-9229-0aaaf1f009b9>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics: The Relativity of Simultaneity Video | MindBites Physics: The Relativity of Simultaneity About this Lesson • Type: Video Tutorial • Length: 11:13 • Media: Video/mp4 • Use: Watch Online & Download • Access Period: Unrestricted • Download: MP4 (iPod compatible) • Size: 120 MB • Posted: 07/01/2009 This lesson is part of the following series: Physics (147 lessons, $198.00) Physics: Relativity (9 lessons, $18.81) Physics: Einstein's Special Theory of Relativity (4 lessons, $7.92) This lesson was selected from a broader, comprehensive course, Physics I. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/ product/physics. The full course covers kinematics, dynamics, energy, momentum, the physics of extended objects, gravity, fluids, relativity, oscillatory motion, waves, and more. The course features two renowned professors: Steven Pollock, an associate professor of Physics at he University of Colorado at Boulder and Ephraim Fischbach, a professor of physics at Purdue University. Steven Pollock earned a Bachelor of Science in physics from the Massachusetts Institute of Technology and a Ph.D. from Stanford University. Prof. Pollock wears two research hats: he studies theoretical nuclear physics, and does physics education research. Currently, his research activities focus on questions of replication and sustainability of reformed teaching techniques in (very) large introductory courses. He received an Alfred P. Sloan Research Fellowship in 1994 and a Boulder Faculty Assembly (CU campus-wide) Teaching Excellence Award in 1998. He is the author of two Teaching Company video courses: “Particle Physics for Non-Physicists: a Tour of the Microcosmos” and “The Great Ideas of Classical Physics”. Prof. Pollock regularly gives public presentations in which he brings physics alive at conferences, seminars, colloquia, and for community audiences. Ephraim Fischbach earned a B.A. in physics from Columbia University and a Ph.D. from the University of Pennsylvania. In Thinkwell Physics I, he delivers the "Physics in Action" video lectures and demonstrates numerous laboratory techniques and real-world applications. As part of his mission to encourage an interest in physics wherever he goes, Prof. Fischbach coordinates Physics on the Road, an Outreach/Funfest program. He is the author or coauthor of more than 180 publications including a recent book, “The Search for Non-Newtonian Gravity”, and was made a Fellow of the American Physical Society in 2001. He also serves as a referee for a number of journals including “Physical Review” and “Physical Review Letters”. About this Author 2174 lessons Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/. Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through... Recent Reviews This lesson has not been reviewed. Please purchase the lesson to review. This lesson has not been reviewed. Please purchase the lesson to review. Albert Einstein's special theory of relativity is based on two premises. Premise number one: the laws of physics are the same in any inertial reference frame. It's a perfectly reasonable idea that says that you can't tell whether your reference frame is at rest in some absolute sense. You can talk about relative velocities between reference frames, but nobody's reference frame is special. Postulate number 2 is the weird one. It says the speed of light is a law of physics - the speed of light in vacuums (3 x 10^8 m/sec) is the same in all reference frames, no matter whether the source is moving or you're moving. If you measure the speed of light, you always get the same number. Very counter-intuitive - it disagrees with our sense that if something's running towards me and I'm running towards it, I should measure something going faster. That's what I think should happen. It's not what happens with light or with anything when the speeds involved are close to the speed of This fact has many consequences. And one of the consequences I want to talk about right now is that it really calls into question your deep-seated ideas about what time is and how time works. Let's think about events. Events are really the way physicists describe what's going on in the world. You talk about an event, which means you've got a position x, y and z that describe where the event occurred and then you have to tell when the event occurred. And this describes something that happened, like the snapping of a finger happened at a place and a time. If I'm in a different reference frame, I'm going to have different coordinates - x', y', z', t' - different numbers to describe the same event. Now let's think about two events which occur at different places. So I'm going to snap my fingers in two different places, but at the same time, these two events are simultaneous. That seems meaningful and I can get good at it and really make them very accurately simultaneous. And how do I know they were simultaneous? You should think about how you measure where an event takes place and when. For me, it's easy. I'm in the middle of my two hands. As I snap my fingers, a sound wave and a light wave of the event, telling me about it, travels towards me. Those two things happen at different speeds. The sound gets to me after the light. But I'm equidistant and the sound and light arrive at my ears or my brain. The two sound waves arrive simultaneously later. So I conclude that the two events were simultaneous. It all seems perfectly intuitive. But now, let me ask you, what would it look like if you watched those same two events from a train that was moving by in this direction at some high speed. You have to think about it very carefully, because your immediate gut reaction is, those two events, if they were simultaneous for me, they'll be simultaneous for the person who is moving by. And let me convince you that that's not Let me, first of all, think of a different pair of events and here's the way I'm going to set up the events. Here's a train car, and I'm going to live in the frame of the train. So I'm sitting inside this train car. I'm at rest. I see the world going by me at speed z, but never mind. I don't care. I'm in a perfectly valid inertial reference frame. And I flash a light bulb. So the light flashes. Light waves start propagating out towards the outside of the train. I'm going to use light because light is traveling at a very high speed, and so relativity - Einstein's special relativity - effects will become important. As this light travels outward, if the flash was right at the center of the train, then, remember Einstein says the speed of light is the same in all directions for everybody. So it's going to be traveling away from me, symmetrically - same speeds to the right and to the left. So if I flashed in the middle, the light will strike the two edges of the train simultaneously in my reference frame. This is an event. The light strikes the edge of the train. Perhaps there's a detector there that goes, "ping!" And the two pings are simultaneous. These are space-time events. You have to describe them by x, y, z and t. So I have just set up a scenario where I've got two events and they are absolutely, without question, simultaneous. Now, let me watch this exact same procedure, but from the ground. So let me flip to the new reference frame. Here's the Earth's frame. And I see I will have a flash of light - it's the same event - and the light begins to travel outwards. But remember two things. The train is now moving, with velocity v as far as I'm concerned. So the back of the train is moving to the right and the front of the train is moving to the right. And here's the weird thing. In any reference frame, the speed of light is always the same. So this beam that's heading to the right and this beam, which is heading to the left, as far as I'm concerned, they're both traveling at the speed of light (c), one to the left and one to the right. So what happens as time goes by? The train catches up to this beam, and the train is running away from this beam, so at a later time, I see this scenario. The train has caught up to this beam, on the backside, where the front side is still trying to run away. It's closer, but it hasn't yet made it. So the event of light striking the back of the train happens first. And later on, the next event will happen. So these two events - light striking the back of the train, light striking the front of the train - one person thinks they're simultaneous events. Another perfectly valid physics observer says, "No, this event happened and the other event happened later." This is really weird. What I'm telling you is it's meaningless to ask whether those two clicks were simultaneous. It's meaningful to say were they simultaneous in my reference frame. But it doesn't make any sense to ask in some absolute way, did they happen at the same time? This really throws into question our intuition about what time means. Galileo would have said, "Of course, time is absolute. All observers agree that time is passing, like some cosmic clock, ticking away. And if I think they were simultaneous, everybody should think they're simultaneous." Einstein says, "No, not if the speed of light is the same for all observers." Let me show you another similar story just to try to get this idea about how events can be simultaneous in one frame and not in another. I've got two train cars, 1 and 2. Train number 1 is moving with velocity v with respect to the tracks. Train number 2 is sitting at rest with respect to the tracks. And all of my pictures - I have to show you a picture from some reference frame. My reference frame is the frame of the tracks. Supposing that there are two lightning strikes - one at the two ends. So instead of having an event that started in the middle and working its way out, now I'm going to have two events that started at the outside and work their way towards the middle. These are two events. They have a place and a time, and the question is, were they simultaneous? Let me first ask that question in the reference frame of number 2. So let me focus my attention on observer number 2. First of all, at this moment in time, for observer number 2, observer number 2 doesn't know anything's happened yet. It takes a finite amount of time for light to travel. It's fast, but it's a finite amount of time. So 2 is just sitting there, oblivious of the lightning strike. A brief instant later, however, the light will have traveled and 2 is sitting in the middle of the train. The speed of light is the same in all directions. So these two waves are traveling towards observer number 2 at the same speed, c. And so observer number two will see the two waves striking together and concludes that the two original strikes must have been simultaneous at some earlier time. When this happens, that's when 2 knows that lightning had struck, and by working backwards - calculating backwards - the distances were the same. The speed was the same. Velocity x Time = Distance, so the times must been the same. So these two lightning strikes are simultaneous as far as observer number 2 is Now let's just watch what happens to observer number 1. Let me add an intermediate step. So remember, observer number 1 is traveling to the right. So a little brief moment later, the waves haven't quite reached number 2 yet, but number 1 has been shifting over. And so the wave from the right has reached observer number 1. So observer number 1 is sitting there, thinks that she's sitting still - right, everybody thinks they're sitting still - and she sees a light wave coming from the right and nothing else. So she concludes that there was a lightning strike on the front of the train. And then what happens? She's still moving to the right. This wave is catching up. And at some later time, the wave has finally caught up. So what did she observe? She's sitting there. A wave comes from the right. And then, she waits a while, and then a wave comes from the left. So, she scratches her head and walks around. She thinks she's at rest. And she sees a lightning scar over on this side and a lightning scar over on that side. And she says, "I was in the middle the whole time. I was symmetrically in the middle and the pulse from one side came first, so it must have occurred earlier." Observer number correctly concludes that the lightning strikes were not simultaneous. The lightning strike at the front of the train happened at an earlier time. It's weird. And you ask yourself, "Well, who's right? Did the lightning strike occur simultaneously or was one earlier than the other?" And that's as much of a nonsense question as it is to ask, "Who was really at rest?" It depends on the observer and all observers are equally valid. One observer says two events are simultaneous, another says they're not. They're both correct in their reference So this is this crazy result of Albert Einstein's special theory of relativity. We abandoned our deep-seated intuition that it makes sense to talks about simultaneity and some absolute time that's ticking away independent of us. Time is relative. Time depends on the observer. Understanding Einstein's Special Theory of Relativity The Relativity of Simultaneity Page [2 of 2] Get it Now and Start Learning Embed this video on your site Copy and paste the following snippet: Link to this page Copy and paste the following snippet:
{"url":"http://www.mindbites.com/lesson/4581-physics-the-relativity-of-simultaneity","timestamp":"2014-04-21T09:46:37Z","content_type":null,"content_length":"62786","record_id":"<urn:uuid:80f6e492-2b2d-4d44-be41-8dc64fc32c2d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
Some numbers get no respect. Take 0, for example, who usually gets the short end of the stick. In fact, people treat him like he’s not even there; he's a complete nobody, a nothing. But this isn’t quite what inequalities are about. Besides, 0 is actually much larger than a lot of other numbers. He just doesn’t throw it back in their faces. He’s bigger than that. Inequalities are mathematical statements that compare quantities. If a problem asks you to solve for the inequality in an equation such as this one: -7 __ 3 ... you would insert a "<" (less than) symbol in the blank to indicate that the 3 is the larger of the two numbers. You can remember this in one of two ways: think of the symbol as the mouth of a hungry alligator that would prefer to go after the bigger meal, or an arrow that is pointing and laughing at the smaller number. Either way, the numbers won’t be getting out of this situation entirely unscathed. You’ll also encounter inequality problems that involve variables, as well as ones that feature "greater than or equal to" and "less than or equal to" symbols. These are just like the hungry alligator /pointing finger (...put together, that could end badly), except they like hanging out at the bottom of an equals sign. 4x – 5 ≥ 10 To solve a problem like as this one, add 5 to both sides and then divide both by 4 in order to isolate the variable. Because if anyone’s going to get eaten, it might as well be the lone letter. It’s every number for herself around here.
{"url":"http://www.shmoop.com/what-is-algebra/inequalities-skills.html","timestamp":"2014-04-17T21:55:25Z","content_type":null,"content_length":"35284","record_id":"<urn:uuid:3f300a42-3dde-45ac-83f9-b1a100c0e2fa>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
Karnaugh map Karnaugh map Definition: A method for minimizing a boolean expression, usually aided by a rectangular map of the value of the expression for all possible input values. Input values are arranged in a Gray code. Maximal rectangular groups that cover the inputs where the expression is true give a minimum implementation. Also known as Veitch diagram, KV diagram. Aggregate child (... is a part of or used in me.) Gray code. See also Venn diagram. Note: "Karnaugh" is pronounced "car-no". In the example, "*" means "don't care", that is, it doesn't matter what the function value is for those inputs. This expression may be realized as AB' + AD + BC'D + B'CD'. Some expressions may be implemented more compactly by grouping the zeros, possibly including "don't care" cells, and negating the final output. The positive implementation is smaller for this expression. Author: SKS (Java) applet demonstrating minimization. A primer on Karnaugh maps motivated by minimizing logic. An interactive quiz. Maurice Karnaugh, The Map Method for Synthesis of Combinational Logic Circuits, Trans. AIEE. pt I, 72(9):593-599, November 1953. Go to the Dictionary of Algorithms and Data Structures home page. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 24 September 2012. HTML page formatted Mon Nov 18 10:44:09 2013. Cite this as: Sandeep Kumar Shukla, "Karnaugh map", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 24 September 2012. (accessed TODAY) Available from: http://
{"url":"http://xlinux.nist.gov/dads/HTML/karnaughmap.html","timestamp":"2014-04-16T10:38:08Z","content_type":null,"content_length":"4042","record_id":"<urn:uuid:710fadf5-1e11-4d3e-be0f-051f8ae10a0d>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Plantation, FL Algebra 2 Tutor Find a Plantation, FL Algebra 2 Tutor ...I enjoy studying Theology and Biblical knowledge, computer science, mathematics, the introductory sciences, reading, writing, conversational Spanish. I have found public speaking, individual and group presentations (often with Microsoft PowerPoint), and independent study of various academic and ... 36 Subjects: including algebra 2, Spanish, reading, chemistry ...This includes a variety of teaching strategies for all sorts of students. I believe that with enough time and the right kind of instruction, all students can learn. In large classes, teachers are not always able to tailor their instruction to the individual needs of every student. 8 Subjects: including algebra 2, geometry, algebra 1, precalculus ...You’re never too old or too young to laugh while learning math! Calculus I is primarily concerned with understanding the idea of a derivative, techniques of derivation and applications of derivatives. Calculus 2 covers integration in the same manner. 7 Subjects: including algebra 2, calculus, geometry, algebra 1 ...Coming to the United States, I could not believe the "damage" that natural speakers of the languages could do to it, which led me to contend that a person who has actually learned English in the best academic environment - which is my case as I attended one of the very best schools in my country ... 20 Subjects: including algebra 2, English, reading, ESL/ESOL ...I do my best to not only accommodate the way they learn but help enhance their learning abilities through different approaches of teaching. I have a love of teaching and learning. Graduating with a bachelor in biology with a chemistry and government minor, followed by a masters in medical scien... 14 Subjects: including algebra 2, biology, algebra 1, trigonometry Related Plantation, FL Tutors Plantation, FL Accounting Tutors Plantation, FL ACT Tutors Plantation, FL Algebra Tutors Plantation, FL Algebra 2 Tutors Plantation, FL Calculus Tutors Plantation, FL Geometry Tutors Plantation, FL Math Tutors Plantation, FL Prealgebra Tutors Plantation, FL Precalculus Tutors Plantation, FL SAT Tutors Plantation, FL SAT Math Tutors Plantation, FL Science Tutors Plantation, FL Statistics Tutors Plantation, FL Trigonometry Tutors Nearby Cities With algebra 2 Tutor Cooper City, FL algebra 2 Tutors Dania algebra 2 Tutors Dania Beach, FL algebra 2 Tutors Davie, FL algebra 2 Tutors Fort Lauderdale algebra 2 Tutors Hollywood, FL algebra 2 Tutors Lauderdale Lakes, FL algebra 2 Tutors Lauderhill, FL algebra 2 Tutors Margate, FL algebra 2 Tutors North Lauderdale, FL algebra 2 Tutors Oakland Park, FL algebra 2 Tutors Pembroke Pines algebra 2 Tutors Pompano Beach algebra 2 Tutors Sunrise, FL algebra 2 Tutors Tamarac, FL algebra 2 Tutors
{"url":"http://www.purplemath.com/Plantation_FL_Algebra_2_tutors.php","timestamp":"2014-04-18T05:45:34Z","content_type":null,"content_length":"24342","record_id":"<urn:uuid:9e2c5cf7-accd-4b94-a682-e291d446962d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
christmas equation worksheet Samedi 12 mars 2011 à 5h35 . Keys are also provided. . . Generate hundreds of printable quadratic equation worksheets . multiply and divide fractions ; cube roots+ti89; ks3; online 3rd degree solve; SOLVING SLOPES AND INTERCEPTSfree 8th grade math problems ; linear in 2 variable project; rational expression calculator; rationalizing the denominator in a math Maker. . . Kids are asked to look at the pictures and write the addition by counting the pictures. All rights reserved. Fun Activities to do with your family! Sneak in some learning fun with . . ©2011 About. org gives both interesting and useful material on free maths . . Basic Fractions . com. . . . . . students to use â First â Thenâ logic to solve the . Math stumpers. Mad Libs. . . . Free to graph the . Choose your options below and we will make a printable custom . . . . Make the larger fractions first in the . . Printable Word Puzzles. . . A part of The New York Times . Sketch the graphs of the . Exponents add and subtract , where can i find a factoring calculator, algebra stories, printable math christmas equations , not just sums . . Print them straight from your browser. . . . . . . Base Ten Blocks | â | % Decimals and . Be sure to check our other math . These are approximately 3rd grade level math. This Fraction Maker will generate a series of . . . . . Social Studies; Printable Board Games; Coloring Pages; St. swap the numerator and denominator. . Solving Linear Free Algebra with one step addition (ex: x+2 =4), one step Subtraction (ex: x-2 =4), Addition and Subtraction (ex: x-2 =4),Two . . graphing equations worksheets : how to solve systems of linear on ti-83 . More Free Christmas Worksheets . Balancing Act - Balancing provided) Playing with Polymers . . This is appropriate for third and fourth graders. . . Generate your own custom Printable Basic Fraction Equations Worksheets . Printable MazesCase #1225: Cookie Mystery ( Chromatography - Deck the . . This page includes simple algebra suitable for . . . Patrick's Day; All Practicing Balancing Chemical #2; Balancing Chemical - Answers #2; Practicing Balancing Chemical #3Printable addition for kids with a camper van craigslist fetal pig dissection game perro sexo con mujeres youpron word list for bible pictionary mexicana caderonas how to tile around door entry
{"url":"http://vqwfif.rain-blog.com/index.htm","timestamp":"2014-04-21T02:12:16Z","content_type":null,"content_length":"14768","record_id":"<urn:uuid:d1c91312-9194-4222-a639-1c8ee5738090>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Standard error of the estimate for svy: reg [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: Standard error of the estimate for svy: reg From Michael Hanson <mshanson@mac.com> To statalist@hsphsun2.harvard.edu Subject Re: st: Standard error of the estimate for svy: reg Date Wed, 22 Aug 2007 16:31:51 -0400 On Aug 22, 2007, at 3:23 PM, Steven Samuels wrote: I'll leave this topic with the following reference: Since you've left this topic, you've foreclosed any opportunity to clarify the relevance of that citation. In my very quick read of it, nothing in the second- and third-to-last paragraphs seems to be inconsistent with the use of the terminology "standard error of the estimate" (or "standard error of the regression") that we have previously established is not uncommon in certain social sciences but apparently unknown to at least some people in other (biomedical?) fields; the remaining paragraphs appear to discuss other topics. I personally don't see a problem with different fields having different nomenclatures, and the point of my previous message was simply to indicate that the questioned term is not uncommon in certain (broad) fields of applied statistics. With the discussion abruptly ended I gather that the implication was meant to be that certain fields (viz. the social sciences) were misusing a term. I would be interested in learning the substance of that argument, if in fact that was the case. Otherwise I am still puzzling over the contribution of the citation to the prior exchange. -- Mike P.S.: Stas's argument against the use of the RMSE/SEE in the OP's question sounds valid to me, but I do not claim any familiarity with estimation using survey data. * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2007-08/msg00827.html","timestamp":"2014-04-16T22:11:53Z","content_type":null,"content_length":"8320","record_id":"<urn:uuid:6fb3df54-9457-47fd-b2fc-ba3e3317164d>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
3D Interactive Cityplots We are experimenting with the use of VRML (Virtual Reality Modeling Language) to enable 3D interactive exploration of sparse matrices. The following are VRML models of the cityplots of several matrices in the Matrix Market. VRML allows one to manipulate these visualizations as three-dimensional objects. To view the demos you will need a VRML browser or plugin. Clicking on the matrix in the VRML browser will link to the Web page about the Matrix. fidapm05 rw136 VRML Version 2, gzipped, 66 Kb VRML Version 2, gzipped, 63 Kb • "Version 2" refers to the VRML Specification level which must be supported by your browser in order to view the model. • These data files are compressed. Help is available if you are having trouble downloading them. We have generated VRML cityplots for 232 matrices. For matrices with more than 4,000 entries only the front face of each 3D bar (representing a single matrix entry) is drawn in order to reduce the size of the VRML files. VRML cityplots for matrices with more than 24,000 have not been generated due to their large sizes. Of those included, 29 of the files are greater than 500Kb with the largest being 800Kb. We would appreciate feedback on these visualizations. If they prove useful we will make more available. The Matrix Market is a service of the Mathematical and Computational Sciences Division / Information Technology Laboratory / National Institute of Standards and Technology [ Home ] [ Search ] [ Browse ] [ Resources ] Last change in this page : 3 July 2000. [ ].
{"url":"http://math.nist.gov/MatrixMarket/vrmlcityplots.html","timestamp":"2014-04-20T19:23:29Z","content_type":null,"content_length":"4392","record_id":"<urn:uuid:df01926a-5245-4470-ae3e-07682346afef>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
uestion of the Extra Class question of the day: Smith Chart September 26, 2012 by Dan KB6NU Leave a Comment NOTE: This is the last installment of the Extra Class question of the day. I’m going to be compiling all of these into the No-Nonsense Extra Class Study Guide. Watch for it real soon now. A Smith chart is shown in Figure E9-3 above. (E9G05) It is a chart designed to solve transmission line problems graphically. While a complete discussion of the theory behind the Smith Chart is outside the scope of this study guide, a good discussion of the Smith Chart can be found on the ARRL website. The coordinate system is used in a Smith chart is comprised of resistance circles and reactance arcs. (E9G02) Resistance and reactance are the two families of circles and arcs that make up a Smith chart. (E9G04) The resistance axis is the only straight line shown on the Smith chart shown in Figure E9-3. (E9G07) Points on this axis are pure resistances. In practice, you want to position the chart so that 0 ohms is at the far left, while infinity is at the far right. The arcs on a Smith chart represent points with constant reactance. (E9G10) On the Smith chart, shown in Figure E9-3, the name for the large outer circle on which the reactance arcs terminate is the reactance axis. (E9G06) Points on the reactance axis have a resistance of 0 ohms. When oriented so that the resistance axis is horizontal, positive reactances are plotted above the resistance axis and negative reactances below. The process of normalization with regard to a Smith chart refers to reassigning impedance values with regard to the prime center. (E9G08) The prime center is the point marked 1.0 on the resistance axis. If you’re working with a 50 ohm transmission line, you’d normally divide the impedances by 50, meaning that a 50 ohm resistance would then be plotted on the resistance axis at the point marked 1.0. A reactance of 50 + j100 would be plotted on the resistance circle going through the prime center where it intersects the reactance arc marked 2.0. Impedance along transmission lines can be calculated using a Smith chart. (E9G01) Impedance and SWR values in transmission lines are often determined using a Smith chart. (E9G03) Standing-wave ratio circles are often added to a Smith chart during the process of solving problems. (E9G09) The wavelength scales on a Smith chart calibrated in fractions of transmission line electrical wavelength. (E9G11) These are useful when trying to determine how long transmission lines must be when used to match a load to a transmitter. Filed Under: antennas, Electronics Theory, Extra Class Question of the Day Extra Class question of the day: Amplifiers September 24, 2012 by Dan KB6NU Leave a Comment There are several classifications of amplifiers, based on their mode of operation. In a class A amplifier is always conducting current. That means that the bias of a Class A common emitter amplifier would normally be set approximately half-way between saturation and cutoff on the load line. (E7B04) In a class B amplifer, there are normally two transistors operating in a “push-pull” configuration. One transistor turns on during the positive half of a cycle, while the other turns on during the negative half. Push-pull amplifiers reduce or eliminate even-order harmonics. (E7B06) A Class AB amplifier operates over more than 180 degrees but less than 360 degrees of a signal cycle. (E7B01) Class B and Class AB amplifiers are more efficient than Class A amplifiers. A Class D amplifier is a type of amplifier that uses switching technology to achieve high efficiency. (E7B02) The output of a class D amplifier circuit includes a low-pass filter to remove switching signal components. (E7B03) Amplifiers are used in many different applications, but one application that is especially important, at least as far as signal quality goes, is RF power amplification. RF power amplifiers may emit harmonics or spurious signals, that may cause harmful interference. One thing that can be done to prevent unwanted oscillations in an RF power amplifier is to install parasitic suppressors and/or neutralize the stage. (E7B05) An RF power amplifier be neutralized by feeding a 180-degree out-of-phase portion of the output back to the input. (E7B08) Another thing one can do to reduce unwanted emissions is to use a push-pull amplifier. Signal distortion and excessive bandwidth is a likely result when a Class C amplifier is used to amplify a single-sideband phone signal. (E7B07) While most modern transceivers use transistors in their final amplifiers, and the output impedance is 50 ohms over a wide frequency range. A field effect transistor is generally best suited for UHF or microwave power amplifier applications. (E7B21) Many high-power amplifiers, however, still use vacuum tubes. These amplifiers require that the operator tune the output circuit. The tuning capacitor is adjusted for minimum plate current, while the loading capacitor is adjusted for maximum permissible plate current is how the loading and tuning capacitors are to be adjusted when tuning a vacuum tube RF power amplifier that employs a pi-network output circuit. (E7B09) The type of circuit shown in Figure E7-1 is a common emitter amplifier. (E7B12) In Figure E7-1, the purpose of R1 and R2 is to provide fixed bias. (E7B10) In Figure E7-1, what is the purpose of R3 is to provide self bias. (E7B11) In Figure E7-2, the purpose of R is to provide emitter load. (E7B13) In Figure E7-2, the purpose of C2 is to provide output coupling. (E7B14) Thermal runaway is one problem that can occur if a transistor amplifier is not designed correctly. What happens is that when the ambient temperature increases, the leakage current of the transistor increases, causing an increase in the collector-to-emitter current. This increases the power dissipation, further increasing the junction temperature, which increases yet again the leakage current. One way to prevent thermal runaway in a bipolar transistor amplifier is to use a resistor in series with the emitter. (E7B15) RF power amplifers often generate unwanted signals via a process called intermodulation. Strong signals external to the transmitter combine with the signal being generated, causing sometimes unexpected and unwanted emissions. The effect of intermodulation products in a linear power amplifier is the transmission of spurious signals. E7B16() Third-order intermodulation distortion products are of particular concern in linear power amplifiers because they are relatively close in frequency to the desired signal. (E7B17) Finally, there are several questions on special-application amplifiers. A klystron is a VHF, UHF, or microwave vacuum tube that uses velocity modulation. (E7B19) A parametric amplifier is a low-noise VHF or UHF amplifier relying on varying reactance for amplification. (E7B20) Filed Under: Circuit Design, Extra Class Question of the Day Extra Class question of the day: Direction finding September 22, 2012 by Dan KB6NU Leave a Comment Direction finding is an activity that’s both fun and useful. One of the ways that it’s useful is to hunt down noise sources. It can also be used to hunt down stations causing harmful interference. A variety of directional antennas are used in direction finding, including the shielded loop antenna. A receiving loop antenna consists of one or more turns of wire wound in the shape of a large open coil. (E9H09) The output voltage of a multi-turn receiving loop antenna be increased by increasing either the number of wire turns in the loop or the area of the loop structure or both. (E9H10) An advantage of using a shielded loop antenna for direction finding is that it is electro-statically balanced against ground, giving better nulls. (E9H12) The main drawback of a wire-loop antenna for direction finding is that it has a bidirectional pattern. (E9H05) Sometimes a sense antenna is used with a direction finding antenna. The function of a sense antenna is that it modifies the pattern of a DF antenna array to provide a null in one direction. (E9H08) Another way to obtain a null in only one direction is to build an antenna array with a cardioid pattern. One way to do this is to build an array with two dipoles fed in quadrature. A very sharp single null is a characteristic of a cardioid-pattern antenna is useful for direction finding. (E9H11) Another accessory that is often used in direction finding is an attenuator. It is advisable to use an RF attenuator on a receiver being used for direction finding because it prevents receiver overload which could make it difficult to determine peaks or nulls. (E9H07) If more than one operator can be mobilized for a direction-finding operation, they could use the triangulation method for finding a noise source or the source of a radio signal. When using the triangulation method of direction finding, antenna headings from several different receiving locations are used to locate the signal source. (E9H06) Filed Under: antennas, Direction finding, Extra Class Question of the Day Extra Class question of the day: Effective radiated power September 21, 2012 by Dan KB6NU Leave a Comment Effective radiated power is a widely misunderstood concept. Effective radiated power is the term that describes station output, including the transmitter, antenna and everything in between, when considering transmitter power and system gains and losses. (E9H04) The effective radiated power, or ERP, is always given with respect to a certain direction. Let’s think about this for a second. If your transmitter has an output of 100 W, the maximum power that the antenna can radiate is also 100 W. Transmitting antennas are, after all, passive devices. You can’t get more power out of them that you put into them. In reality, the total power output will be even less than 100 W because you will have losses in the feedline. An antenna can, however, concentrate the power in a certain direction. The power being radiated in that direction will be more than the power radiated in that direction by a reference antenna, usually a dipole or an isotropic antenna, which is an antenna that radiates equally in all directions. When an antenna concentrates power in a certain direction, we say that it has gain in that direction, and we specify the amount of gain in dB. If the reference antenna is an isotropic antenna, then the unit of gain is dBi. If the reference antenna is a dipole, then the unit of gain is dBd. With that in mind, let’s take a look at an example. In this example, a repeater station has 150 watts transmitter power output, there is a 2-dB feed line loss, 2.2-dB duplexer loss, and the antenna has 7-dBd gain. To calculate the system gain (or loss), you add the gains and losses, so Gain = 7 dBd – 2 dB – 2.2 dB = + 2.8 dB dB Ratio 1 1.26:1 2 1.585:1 3 2:1 Now, if you recall, 3 dB is close to a gain of 2, as shown in the table at right, so in this example, to calculate the effective radiated power, you multiply the transmitter’s output power by a factor slightly less than two. This makes the effective radiated power slightly less than 15o W x 2, or 300 W. The closest answer to 300 W is 286 W. (E9H01) Let’s look at another example. The effective radiated power relative to a dipole of a repeater station with 200 watts transmitter power output, 4-dB feed line loss, 3.2-dB duplexer loss, 0.8-dB circulator loss and 10-dBd antenna gain is 317 watts. (E9H02) In this example, the gain is equal to 10 dB – 8 dB in lossses or a net gain of 2 dB. That’s equivalent to a ratio of 1.585:1. The ERP is then 200 W x 1.585 = 317 W. Now, lets look at an example using an isotropic antenna as the reference antenna. The effective isotropic radiated power of a repeater station with 200 watts transmitter power output, 2-dB feed line loss, 2.8-dB duplexer loss, 1.2-dB circulator loss and 7-dBi antenna gain is 252 watts. (E9H03) In this example, the gain is equal to 7 dB – 2 dB – 2.8 dB – 1.2 dB = 1 dB. That’s equivalent to a ratio of 1.26:1, so the ERP is 200 W x 1.26 = 252 W. Filed Under: antennas, Extra Class Question of the Day Extra Class question of the day: Frequency counters and markers September 20, 2012 by Dan KB6NU 2 Comments To measure the frequency of a signal, you use an instrument called a frequency counter. The purpose of a frequency counter is to provide a digital representation of the frequency of a signal.(E7F09) A frequency counter counts the number of input pulses occurring within a specific period of time. (E7F08) To accurately measure high-frequency signals digitally, you need a highly stable and accurate frequency source, called the time base. The time base provides an accurate and repeatable time period, over which you count the number of pulses of the test signal. The accuracy of the time base determines the accuracy of a frequency counter. (E7F07) An alternate method of determining frequency, other than by directly counting input pulses, that is used by some counters is period measurement plus mathematical computation. (E7F10) An advantage of a period-measuring frequency counter over a direct-count type is that it provides improved resolution of low-frequency signals within a comparable time period. (E7F11) You also need an accurate and stable time base to generate and receive microwave signals. All of these choices are correct when talking about techniques for providing high stability oscillators needed for microwave transmission and reception: (E7F05) • Use a GPS signal reference • Use a rubidium stabilized reference oscillator • Use a temperature-controlled high Q dielectric resonator If you want to measure a signal whose frequency is higher than the maximum frequency of your counter, you might use a prescaler. The purpose of a prescaler circuit is to divide a higher frequency signal so a low-frequency counter can display the input frequency. (E7F01) A prescaler would, for example, be used to reduce a signal’s frequency by a factor of ten. (E7F02) You might use a decade counter digital IC in a prescaler circuit. The function of a decade counter digital IC is to produce one output pulse for every ten input pulses. (E7F03) In some cases, you might use a flip-flop. Two flip-flops must be added to a 100-kHz crystal-controlled marker generator so as to provide markers at 50 and 25 kHz. (E7F04) The purpose of a marker generator is to provide a means of calibrating a receiver’s frequency settings. (E7F06) You mostly find marker generators in older, analog receivers. Filed Under: Digital Logic, Extra Class Question of the Day, Test Equipment Extra Class question of the day: Wire and phased vertical antennas September 19, 2012 by Dan KB6NU Leave a Comment There are many ways to put up antennas that are directional. Yagis are directional antennas, but they require a structure, such as a tower, to get them high in the air. One way to get directionality without a tower is to use phased vertical arrays. In general, the phased vertical array consists of two or more quarter-wave vertical antennas. The radiation pattern that the array will have depends on how you feed the vertical antennas. So, for example, the radiation pattern of two 1/4-wavelength vertical antennas spaced 1/2-wavelength apart and fed 180 degrees out of phase is a figure-8 oriented along the axis of the array. (E9C01) The radiation pattern of two 1/4-wavelength vertical antennas spaced 1/4-wavelength apart and fed 90 degrees out of phase is a cardioid. (E9C02) The radiation pattern of two 1/4-wavelength vertical antennas spaced 1/2-wavelength apart and fed in phase is a Figure-8 broadside to the axis of the array. (E9C03) A rhombic antenna is often used for receiving on the HF bands. A basic unterminated rhombic antenna is described as bidirectional; four-sides, each side one or more wavelengths long; open at the end opposite the transmission line connection. (E9C04) The disadvantages of a terminated rhombic antenna for the HF bands is that the antenna requires a large physical area and 4 separate supports. (E9C05) Putting a terminating resistor on a rhombic antenna changes the radiation pattern from bidirectional to unidirectional. (E9C06) The type of antenna pattern over real ground that is shown in Figure E9-2 is an elevation pattern. (E9C07) The elevation angle of peak response in the antenna radiation pattern shown in Figure E9-2 is 7.5 degrees. (E9C08) The front-to-back ratio of the radiation pattern shown in Figure E9-2 is 28 dB. (E9C09) 4 elevation lobes appear in the forward direction of the antenna radiation pattern shown in Figure E9-2. (E9C10) How and where you install an antenna affects its radiation pattern. For example, the far-field elevation pattern of a vertically polarized antenna is affected when it is mounted over seawater versus rocky ground. What happens is that the low-angle radiation increases. (E9C11) The main effect of placing a vertical antenna over an imperfect ground is that it reduces low-angle radiation. (E9C13) When constructing a Beverage antenna, remember that it should be one or more wavelengths long to achieve good performance at the desired frequency. (E9C12) Filed Under: antennas, Extra Class Question of the Day Extra Class question of the day: Piezoelectric crystals and MMICs September 18, 2012 by Dan KB6NU 1 Comment Piezoelectric crystals are used in several amateur radio applications. They are called piezoelectric crystals because they use the piezoelectric effect, which is the physical deformation of a crystal by the application of a voltage. (E6E03) The equivalent circuit of a quartz crystal consist of motional capacitance, motional inductance and loss resistance in series, with a shunt capacitance representing electrode and stray capacitance. (E6E10) Perhaps the most common use for a piezoelectric crystal is as the frequency-controlling component in an oscillator circuit. To ensure that a crystal oscillator provides the frequency specified by the crystal manufacturer, you must provide the crystal with a specified parallel capacitance. (E6E09) Piezoelectric crystals are also used in crystal filters. A crystal lattice filter is a filter with narrow bandwidth and steep skirts made using quartz crystals. (E6E01) The relative frequencies of the individual crystals is the factor that has the greatest effect in helping determine the bandwidth and response shape of a crystal ladder filter. (E6E02) A “Jones filter” is a variable bandwidth crystal lattice filter used as part of a HF receiver IF stage. (E6E12) Monolithic microwave integrated circuits, or MMICs, are ICs that are made to perform various functions at high frequencies. Gallium nitride is the material that is likely to provide the highest frequency of operation when used in MMICs. (E6E11) The characteristics of the MMIC that make it a popular choice for VHF through microwave circuits are controlled gain, low noise figure, and constant input and output impedance over the specified frequency range. (E6E06) For example, a low-noise UHF preamplifier might have a typical noise figure value of 2 dB. (E6E05) 50 ohms is the most common input and output impedance of circuits that use MMICs. (E6E04) To achieve these specifications, great care is taken in building and using an MMIC. For example, microstrip construction is typically used to construct a MMIC-based microwave amplifier. (E6E07) The power-supply voltage is normally furnished to the most common type of monolithic microwave integrated circuit (MMIC) through a resistor and/or RF choke connected to the amplifier output lead. (E6E08) Filed Under: Electronic Components, Extra Class Question of the Day Extra Class question of the day: Operational amplifiers September 17, 2012 by Dan KB6NU Leave a Comment An integrated circuit operational amplifier is a high-gain, direct-coupled differential amplifier with very high input and very low output impedance. (E7G12) They are very versatile components. They can be used used to build amplifiers, filter circuits, and many other types of circuits that do analog signal processing. Because they are active components–that is to say that they amplify–filters made with op amps are called active filters. The most appropriate use of an op-amp active filter is as an audio filter in a receiver. (E7G06). An advantage of using an op-amp instead of LC elements in an audio filter is that op-amps exhibit gain rather than insertion loss. (E7G03) The values of capacitors and resistors external to the op-amp primarily determine the gain and frequency characteristics of an op-amp RC active filter. (E7G01) The type of capacitor best suited for use in high-stability op-amp RC active filter circuits is polystyrene. (E7G04) Polystyrene capacitors are used in applications where very low distortion is required. Ringing in a filter may cause undesired oscillations to be added to the desired signal. (E7G02) One way to prevent unwanted ringing and audio instability in a multi-section op-amp RC audio filter circuit is to restrict both gain and Q. (E7G05) Calculating the gain of an op amp circuit is relatively straightforward. The gain is simply R[F]/R[in]. In figure E7-4 below, R[in = ]R[1]. Therefore, the magnitude of voltage gain that can be expected from the circuit in Figure E7-4 when R1 is 10 ohms and RF is 470 ohms is 470/10, or 47. (E7G07) The absolute voltage gain that can be expected from the circuit in Figure E7-4 when R1 is 1800 ohms and RF is 68 kilohms is 68,000/1,800, or 38. (E7G10) The absolute voltage gain that can be expected from the circuit in Figure E7-4 when R1 is 3300 ohms and RF is 47 kilohms is 47,000/3,300, or 14. (E7G11) -2.3 volts will be the output voltage of the circuit shown in Figure E7-4 if R1 is 1000 ohms, RF is 10,000 ohms, and 0.23 volts dc is applied to the input. (E7G09) The gain of the circuit will be 10,000/1,000 or 10, and the output voltage will be equal to the input voltage times the gain. 0.23 V x 10 = 2.3 V, but since the input voltage is being applied to the negative input, the output voltage will be negative. Two characteristics that make op amps desirable components is their input impedance and output impedance. The typical input impedance of an integrated circuit op-amp is very high. (E7G14) This feature makes them useful in measurement applications. The typical output impedance of an integrated circuit op-amp is very low. (E7G15) The gain of an ideal operational amplifier does not vary with frequency. (E7G08) Most op amps aren’t ideal, though. While some modern op amps can be used at high frequencies, many of the older on the older ones can’t be used at frequencies above a couple of MHz. Ideally, with no input signal, there should be no voltage difference between the two input terminals. Since no electronic component is ideal, there will be a voltage between these two terminals. We call this the input offset voltage. Put another way, the op-amp input-offset voltage is the differential input voltage needed to bring the open-loop output voltage to zero. (E7G13) Filed Under: Electronic Components, Extra Class Question of the Day Extra Class question of the day: Miscellaneous rules September 16, 2012 by Dan KB6NU Leave a Comment As the name of this section implies, it contains a hodgepodge of questions covering sometimes obscure rules. About the only way to get these right is to memorize the answers. The use of spread-spectrum techniques is a topic that comes up from time to time. Many amateurs feel that the rules are too restrictive. For example, 10 W is the maximum transmitter power for an amateur station transmitting spread spectrum communications. (E1F10) Only on amateur frequencies above 222 MHz are spread spectrum transmissions permitted. (E1F01) All of these choices are correct when talking about the conditions that apply when transmitting spread spectrum emission: (E1F09) • A station transmitting SS emission must not cause harmful interference to other stations employing other authorized emissions. • The transmitting station must be in an area regulated by the FCC or in a country that permits SS emissions. • The transmission must not be used to obscure the meaning of any communication. The rules governing the use of external amplifiers is also somewhat controversial. A dealer may sell an external RF power amplifier capable of operation below 144 MHz if it has not been granted FCC certification if it was purchased in used condition from an amateur operator and is sold to another amateur operator for use at that operator’s station. (E1F03) One of the standards that must be met by an external RF power amplifier if it is to qualify for a grant of FCC certification is that it must satisfy the FCC’s spurious emission standards when operated at the lesser of 1500 watts, or its full output power. (E1F11) There are some rules that spell out restrictions based on where a station is located. For example, amateur radio stations may not operate in the National Radio Quiet Zone. The National Radio Quiet Zone is an area surrounding the National Radio Astronomy Observatory. (E1F06) The NRAO is located in Green Bank, West Virginia. There is also a regulation that protects Canadian Land/Mobile operations near the US/Canadian border from interference. Amateur stations may not transmit in the 420 – 430 MHz frequency segment if they are located in the contiguous 48 states and north of Line A. (E1F05) A line roughly parallel to and south of the US-Canadian border describes “Line A.” (E1F04) There is a corresponding “Line B” parallel to and north of the U.S./Canadian border. As you might expect, there are some questions about not making any money from operating an amateur radio station. Communications transmitted for hire or material compensation, except as otherwise provided in the rules are prohibited. (E1F08) An amateur station may send a message to a business only when neither the amateur nor his or her employer has a pecuniary interest in the communications. This next question is a bit of a trick question. 97.201 states that only Technician, General, Advanced or Amateur Extra Class operators may be the control operator of an auxiliary station. (E1F12) It’s a trick question because there are also holders of Novice Class licenses even though no new Novice licenses have been issued for many years, and the number of Novice Class licensees dwindles every year. Communications incidental to the purpose of the amateur service and remarks of a personal nature are the types of communications may be transmitted to amateur stations in foreign countries. (E1F13) The FCC might issue a “Special Temporary Authority” (STA) to an amateur station to provide for experimental amateur communications. (E1F14) The CEPT agreement allows an FCC-licensed US citizen to operate in many European countries, and alien amateurs from many European countries to operate in the US. (E1F02) Extra Class question of the day: Toroids September 14, 2012 by Dan KB6NU Leave a Comment Toroidal inductors are very popular these days. A primary advantage of using a toroidal core instead of a solenoidal core in an inductor is that toroidal cores confine most of the magnetic field within the core material. (E6D10) Another reason for their popularity is the frequency range over which you can use them. The usable frequency range of inductors that use toroidal cores, assuming a correct selection of core material for the frequency being used is from less than 20 Hz to approximately 300 MHz. (E6D07) Ferrite beads are commonly used as VHF and UHF parasitic suppressors at the input and output terminals of transistorized HF amplifiers. (E6D09) An important characteristic of a toroid core is its permeability. Permeability is the core material property that determines the inductance of a toroidal inductor. (E6D06) One important reason for using powdered-iron toroids rather than ferrite toroids in an inductor is that powdered-iron toroids generally maintain their characteristics at higher currents. (E6D08) One reason for using ferrite toroids rather than powdered-iron toroids in an inductor is that ferrite toroids generally require fewer turns to produce a given inductance value. (E6D16) To calculate the inductance of a ferrite-core toroid, we need the inductance index of the core material. The formula that we use to calculate the inductance of a ferrite-core toroid inductor is: L = A[L]×N^2/1,000,000 where L = inductance in microhenries, A[L] = inductance index in µH per 1000 turns, and N = number of turns We can solve for N to get the following formula: N = 1000 x sqrt (L/A[L]) Using that equation, we see that 43 turns will be required to produce a 1-mH inductor using a ferrite toroidal core that has an inductance index (A L) value of 523 millihenrys/1000 turns. (E6D11) N = 1000 x sqrt (1/523) = 1000 x .0437 = 43.7 turns The formula for calculating the inductance of a powdered-iron core toroid inductor is: L = A[L]×N^2/10,000 where L = inductance in microhenries, A[L] = inductance index in µH per 1000 turns, and N = number of turns We can solve for N to get the following formula: N = 100 x sqrt (L/A[L]) Using that equation, we calculate that 35 turns turns will be required to produce a 5-microhenry inductor using a powdered-iron toroidal core that has an inductance index (A L) value of 40 microhenrys/100 turns. (E6D12) N = 1000 x sqrt (5/40) = 100 x .353 = 35.3 turns Filed Under: Electronic Components, Extra Class Question of the Day Tagged With: toroids Recent Comments • Henry N6HCM on Who else is on Twitter? • Dan KB6NU on Wanna review my new study guide? • Fred Bruey on Wanna review my new study guide? • Jerry Brown on Wanna review my new study guide?
{"url":"http://www.kb6nu.com/category/classestesting/extra-class-question-of-the-day/","timestamp":"2014-04-20T18:25:22Z","content_type":null,"content_length":"74705","record_id":"<urn:uuid:e7352293-e4e6-4128-b450-4ba9fb221b25>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
Vauxhall Trigonometry Tutor Find a Vauxhall Trigonometry Tutor ...I will find presentations that unlock the mystery and fun of mathematics for you! I will come to your home or meet you at a mutually convenient location (such as the library). I am happy to work with individuals or groups. Group rates can be negotiated. 10 Subjects: including trigonometry, calculus, statistics, geometry Hey there! I am a pre-med student in my second year of college and I have 3 years of tutoring experience. I currently work at the Math Center in South Orange, NJ. 29 Subjects: including trigonometry, English, reading, chemistry ...I scored in the 99th percentile on the GRE in Quantitative Reasoning (perfect 170,) and the 96th percentile in Verbal (166). I am a successful tutor because I have a strong proficiency in the subject material I teach and a patient and creative approach that makes any subject simple to understand... 21 Subjects: including trigonometry, calculus, statistics, geometry Education: PhD in Applied Mathematics, MS in Math Education, BS in Math Education. Tutoring Subjects: All levels of Mathematics including, but not limited to, Trigonometry, Algebra, AP Calculus AB and BC, Calculus Honors, Pre-calculus Honors, and Advanced Mathematics. For the last 10 years I ha... 8 Subjects: including trigonometry, calculus, geometry, algebra 1 ...I tailor my teaching strategy to the individual: first identifying whether the pupil is an auditory, visual, or kinesthetic learner; then applying wisdom from my 9+ years of experience in coaching precalculus. This method of education consistently yields positive learning outcomes. I am a mechanical engineering major who graduated from Columbia University. 32 Subjects: including trigonometry, reading, calculus, physics
{"url":"http://www.purplemath.com/vauxhall_nj_trigonometry_tutors.php","timestamp":"2014-04-20T16:24:51Z","content_type":null,"content_length":"24071","record_id":"<urn:uuid:3119b6de-a87d-48aa-806e-669a851d0ef0>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
positive hermitian elements in $M_n(\mathbb{C})$ up vote 5 down vote favorite Elements of the set $P$ of positive hermitian $n×n$ matrices over complex numbers have some special properties: (i) they are closed under sum, (ii) they are closed under multiplication by positive scalars, (iii) spectrum of every matrix is positive, (all eigenvalues are nonnegative, and not all are equal to 0), (iv) $P+-P+iP+-iP=M_n(\mathbb{C})$. Does any other subset of matrix algebra $M_n(\mathbb{C})$ satisfy these properties except for $tPt^{-1}$, where $t$ is an invertible element in $M_n(\mathbb{C})$? fa.functional-analysis linear-algebra matrices $X^*AX \ge 0$ for all $X$ if $A \ge 0$. – Suvrit Oct 29 '11 at 13:07 But $x^∗ax$ is also hermitian matrix if $a$ is. So $x^∗Px⊂P$, and $x^∗M_n(\mathbb{C})x=M_n(\mathbb{C})$ iff $x$ is invertible. So $x^∗Px$ either does not satisfy (iv) or equals $P$. – spelas Oct 29 '11 at 13:54 ah, ok. i did not read (iv) at all :-) – Suvrit Oct 29 '11 at 14:21 The set of upper (lower) triangular matrices with non-negative diagonals satisfies (i), (ii), and (iii) trivially since the eigenvalues lie on the diagonal. If we call the set of such upper triangular matrices $\mathcal U$, and the set of such lower triangular matrices $\mathcal L$, then we have a variant of (iv) which is $\mathcal U + -\mathcal U + i \mathcal U + -i \mathcal U + \ mathcal L + -\mathcal L + i \mathcal L + -i \mathcal L = M_n(\mathbb C)$. – Jack Poulson Oct 29 '11 at 19:48 add comment 1 Answer active oldest votes I think I recall seeing this question in a Halmos book on linear algebra, either "Finite Dimensional Vector Spaces" or the "Linear Algebra Problem Book", but I don't remember which, and I don't have them on hand. Here are some subsets which satisfy 3 out of 4 conditions: Jack Poulson already mentioned upper triangular matrices, which only violate (iv). The set of all Hermitian matrices only violates (iii). up vote 1 down The set of Hermitian matrices $P_r$, where all eigenvalues are greater than some positive real $r$ is closed under addition — but not positive scaling — and every matrix can be written as an vote element of $P_r+(−P_r)+iP_r + (−iP_r)$. This set is a strict subset of $P$, and any element of $P \setminus P_r$ is not contained in $tP_rt^{-1}$ for any invertible $t \in M_n(\mathbb{C})$ (consider diagonalization). The set of non-diagonalizable matrices with real, non-negative eigenvalues satisfies everything but (i). For $M_2$ explictly, consider matrices of the form $$ A = \left[ {\begin{array}{cc} r_1 & z \\\ c\bar{z} & r_2 \\\ \end{array} } \right] $$ where $r_1$, $r_2$ are real, $r_1 + r_2 > 0$, $z \neq 0$, and $c = -\left(\frac{r_1 - r_2}{2|z|}\right)^2$. Then $A$ has one repeated eigenvalue, $\frac{r_1 + r_2}{2}$, and one linearly independent eigenvector $(z, \frac{r_2-r_1}{2})$. The set of all such matrices satisfies (ii), (iii), (iv), and is not conjugate to $P$ — since everything in $P$ is diagonalizable — but is not closed under addition. I cannot find the question in Halmos books. There were some other useful information. Thank you. For case $n=2$, might Mathematica be able to compute this? – spelas Oct 30 '11 at 16:02 add comment Not the answer you're looking for? Browse other questions tagged fa.functional-analysis linear-algebra matrices or ask your own question.
{"url":"https://mathoverflow.net/questions/79461/positive-hermitian-elements-in-m-n-mathbbc","timestamp":"2014-04-18T03:08:00Z","content_type":null,"content_length":"57548","record_id":"<urn:uuid:1e5d81dd-6379-4b4a-8dee-5fccb58fdaad>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Nutley Prealgebra Tutor Find a Nutley Prealgebra Tutor ...I then find it useful to ask questions to test your understanding, or perhaps get you to explain the concept to someone else. I also teach solid, repeatable methods for solving problems in physics, which sorts out the first type of issue I described above. This can take a bit of getting used to... 8 Subjects: including prealgebra, physics, geometry, algebra 1 ...I am an award-winning writer, and am happy to guide you through your personal statements for college, graduate school, and beyond. I will edit and develop essays and English/history assignments. I am happy scheduling in-person sessions in either NY or DC. 44 Subjects: including prealgebra, English, reading, writing ...I am also skilled in many other subjects, including my native language, Spanish. Over the past year, I have worked with students in various areas, ranging from algebra, calculus, chemistry, and physics to English and US History. I also specialize in standardized test prep (SAT/ACT/ISEE/GRE). ... 38 Subjects: including prealgebra, Spanish, chemistry, calculus ...Finally, I can assist you with public speaking for lectures, presentations, peer reviews and oral dissertation defenses, as well as accent modification for clearer speech in person and on the phone. My methods as a speech coach are based on principles of phonetics and phonology in which I identi... 39 Subjects: including prealgebra, Spanish, English, reading Hello! My name is Lawrence and I would like to teach you math! Since 2004, I have been tutoring students in mathematics one-on-one. 9 Subjects: including prealgebra, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/Nutley_prealgebra_tutors.php","timestamp":"2014-04-18T23:28:01Z","content_type":null,"content_length":"23745","record_id":"<urn:uuid:fa3ab89a-87c0-4fd6-b30e-3ce5af7f4db4>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Rectifying texture from image up vote 4 down vote favorite I have a camera matrix $P$ which defines a projective transformation $\mathbb{P}^3 \rightarrow \mathbb{P}^2$. In the former space there is a plane $[ x|\pi^Tx=0 ]$. The image of the plane under $P$ does not preserve angles. How can I find a transformation $H : \mathbb{P}^2 \rightarrow \mathbb{P}^2$ such that a right angle in the plane remains a right angle after applying $H \circ P$? The application for this problem is extracting texture from a photo of a planar surface where the surface and camera locations are known. projective-geometry geometry applications add comment 1 Answer active oldest votes By picking orthogonal coordinates in the given plane you can make an angle preserving projective map $\mathbb{P}^2\to\mathbb{P}^3$ whose image is the given plane. Composing with your camera mapping, you now have a mapping $G\colon\mathbb{P}^2\to\mathbb{P}^2$ that does not preserve angles. Let $H=G^{-1}$. The composition $HG=I$ clearly preserves angles; hence so does $HP$ when restricted to the given plane. up vote 2 down vote accepted (Reverse the order of composition if you follow the usual computer graphics convention of letting matrices act on the right.) Thank you. I got something working yesterday, but I now see how to simplify it. – Ben Feb 23 '10 at 8:18 add comment Not the answer you're looking for? Browse other questions tagged projective-geometry geometry applications or ask your own question.
{"url":"http://mathoverflow.net/questions/15967/rectifying-texture-from-image/16006","timestamp":"2014-04-17T07:12:15Z","content_type":null,"content_length":"50527","record_id":"<urn:uuid:24714654-56f2-409f-96f7-e4fba872cf04>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
A remark on pseudo-exponentiation There are a number of interesting open problems about definability in the field of complex numbers with exponentiation. Zilber has proposed a novel approach. He constructed a nonelementary class of exponential algebraically closed fields and showed that in this class definable subsets of the field are countable or co-countable. He also showed the class is categorical in all uncountable cardinalities. The natural question is whether the complex numbers are the unique model in this class of size continuum. In this talk I will show that, assuming Schanuel's Conjecture, the simplest case of Zilber's strong exponential closure axiom is true in the complex numbers.
{"url":"http://www.newton.ac.uk/programmes/MAA/marker.html","timestamp":"2014-04-16T07:33:37Z","content_type":null,"content_length":"2576","record_id":"<urn:uuid:a170fef4-67c3-4c84-9cbf-a6d1a88fdd88>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
Can I become a nurse if I'm not great at math? 1. 3 Feb 1, '13 by In the future, I hope to be a nurse but right now I'm finishing a nine month medical assistant program. The math is fairly easy because there's no algebra. One of my instructors is a doctor and he said I better become great at math otherwise there's no chance of me ever becoming a nurse. I'm great at anatomy/physiology, science, and anything not involving math. Whenver I'm being taught math, I just can't understand it no matter how many times people teach me. This makes me feel very sad because all my life, all I've ever wanted was to become a nurse but the reality is that if I don't become proficient then it will never happen. 2. 2 Yes! Start studying now! Learn the formulas or dimensional analysis. Once you learn one of those routes med math becomes easy. You will look forward to the math questions because they will be guaranteed points. I used to be horrible a math. 3. 4 Feb 1, '13 by TheCommuter Asst. Admin I'm still not good at math, and I'm currently working as a nurse. You'll need to learn dosage calculations and 'med math' for nursing school, which requires proficiency in 6th to 8th grade level math, although I'm cognizant that many adults in America struggle with the most basic mathematics that should have been mastered during the elementary school years. If you feel that your math skills are rusty, I suggest enrolling in a remedial or developmental math course at your local community college to become better at this subject. Good luck to you. 4. 2 Take a math placement test at your college. They will tell you what level that you need to remediate at and what is required for nursing school. I only needed one college level course for nursing school, but I took the two lower level courses I lacked performance in to make sure I was proficient enough to succeed. It was the best decision that I ever made. I made it through college Algebra, Chemistry and Med-dose with A's. 5. 0 Feb 1, '13 by from dolphincatch In the future, I hope to be a nurse but right now I'm finishing a nine month medical assistant program. The math is fairly easy because there's no algebra. One of my instructors is a doctor and he said I better become great at math otherwise there's no chance of me ever becoming a nurse. I'm great at anatomy/physiology, science, and anything not involving math. Whenver I'm being taught math, I just can't understand it no matter how many times people teach me. This makes me feel very sad because all my life, all I've ever wanted was to become a nurse but the reality is that if I don't become proficient then it will never happen. I dropped college algebra off and on for 7 years and final passed it when I began an RN-BSN program.......that was 2 years ago...been nursing 27 years this year. I think your instructor was just really trying to motivate you by hitting you where it hurts. Hey, good lesson: don't let anyone know your weaknesses. They will surely try to uses them against you. But there will be some math in pharmacology. I had to retake a test for work just yesterday because I score 75 and they wanted 80. Got 95 on the retake...of course, the retake was an easier version but was still moderately difficult.Don't let him shake you. 6. 0 I used to be horrible at math. Still a competent nurse!! My issue was to tackle how to do a problem, and the end result is to make sure when you solve for x, that it makes sense. I found that when I did equations in Chemistry, that my math improved, because the way they do equations are based on ratio and proportion and dimensional analysis. You can always get a nursing math book and improve on conversion factors, as well as learn how to do nursing math. My school used dimensional analysis, and I only had to take a nursing math test ONCE, and that was because I got anxious on a reconstituted med problem. When I got the second exam, and I got TWO, I got BOTH of them right!!! ALL things are possible...even tackling math. 7. 0 Practice, practice, practice! I always thought that I was not good in math, just took the minimum I needed in high school. But, now I am applying for an ABSN program, and I am in college algebra this semester and I am actually enjoying it! Maybe I think differently now that I am older, but there is a sense of satisfaction when I solve the problem and it is correct! I spend a lot of time on my algebra class. I take notes of equations and spend time completing problems until I understand exactly how the problem is solved - and then I do it again with different numbers until I get it right! I agree to start with the lower level math courses and get tutors if you need them. You just have to change how you think about math! 8. 0 Feb 1, '13 by Practice Practice Practice....thats what worked for me. As I typed this I just glanced to the previous poster! LOL! We have the same comment about practicing. It really does work! 9. 0 Feb 1, '13 by It has been my experience that it's really more about science than math, but completion of Algebra I is a requirement for entrance to nursing school. If you don't feel confident in math, you should enroll in math classes at a community college, you will most likely have to take a placement test to see what your skill level is. Once you are taking a class, practice, practice, practice! Math is a skill and the only way to improve your math skills is to do as much drill work as you can. I was never good at math until I took a basic math class at my college; when I registered for the class I told myself that I was not going to rush through it and I was going to take the time to actually learn and retain things. I did that and got an A and I have much more confidence than ever before. Now I am taking Algebra and so far, so good. Best of luck to you- getting into nursing school is tough but if you're determined and focused you should do fine! from dolphincatch In the future, I hope to be a nurse but right now I'm finishing a nine month medical assistant program. The math is fairly easy because there's no algebra. One of my instructors is a doctor and he said I better become great at math otherwise there's no chance of me ever becoming a nurse. I'm great at anatomy/physiology, science, and anything not involving math. Whenver I'm being taught math, I just can't understand it no matter how many times people teach me. This makes me feel very sad because all my life, all I've ever wanted was to become a nurse but the reality is that if I don't become proficient then it will never happen. 10. 0 Feb 1, '13 by TheCommuter Asst. Admin from mandilee428 It has been my experience that it's really more about science than math, but completion of Algebra I is a requirement for entrance to nursing school. I live in one of the largest metropolitan areas in the U.S. and at least half of the nursing programs around here do not require any math classes as a prerequisite or corequisite course. Of course, these are associate degree programs. My nursing program did not require the completion of Algebra I (or any math course for that matter).
{"url":"http://allnurses.com/pre-nursing-student/can-i-become-812635.html","timestamp":"2014-04-16T16:12:55Z","content_type":null,"content_length":"49551","record_id":"<urn:uuid:bc7080a2-4a56-40a5-b544-a9f9f7ca18f2>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: November 2008 [00474] [Date Index] [Thread Index] [Author Index] Re: Linear algebra with generic dimensions • To: mathgroup at smc.vnet.net • Subject: [mg93691] Re: Linear algebra with generic dimensions • From: David Bailey <dave at removedbailey.co.uk> • Date: Sat, 22 Nov 2008 06:11:00 -0500 (EST) • References: <gfubsq$eei$1@smc.vnet.net> dudesinmexico wrote: > I am looking for a way to do linear algebra computations where the > dimensions of matrices and vectors are > symbolic. Let me give an example to make this more clear. Say that you > have a matrix whose > generic element is defined as T_ij=rho^(j-i). If I want the square of > the Frobenius norm of T, I can write > Sum[Rho^(2 (j - i)), {i, 0, N - 1}, {j, 0, N - 1}], Element[{i, j}, > Integers] > and Mathematica gives as an answer a function of N and Rho: > (Rho^(2 - 2 N) (-1 + Rho^(2 N))^2)/(-1 + Rho^2)^2 > and this is what I want, a function of matrix size N and, in this > case, a matrix parameter. > However, If I use the built-in Norm[,"Frobenius"], I cannot specify an > array with a generic dimension, > and this is true of all the linear algebra functions. I think that > what I need is a new "matrix type" holding the > expression for a generic matrix element as a function of its indices > and the names of the variables > holding the dimensions. Then I could overload built-in functions like > Dot[], Norm[], Tranpose[], etc.. > with new functions. > Has this ever been done before? Is there any package or example > showing how do implement these ideas? > Thanks > -Arrigo Nobody else has responded to this with something more specific, so I would like to comment that once you move away from explicit matrices and vectors to more general symbolic representations, there are just so many special cases. It is probably unreasonable to expect Mathematica to supply this functionality directly. You have raised the case where the matrix is of unknown size N, but where a single formula can be used to represent all the elements. Obviously there are many other special cases - such as the one where the matrices are purely symbolic. I think you have almost answered your own question. You need to create objects such as and provide functions to operate on them: myTranspose[myMatrix[fa_,nA_]]:= myMatrix[fa[#2,#1]&,nA] In general, I would not overload the existing functions, but write your own with a related name. Overloading can look elegant, but in complicated situations it is easy to make a mistake which results in the non-overloaded function being called inadvertently. The resulting error messages can be challenging :) David Bailey
{"url":"http://forums.wolfram.com/mathgroup/archive/2008/Nov/msg00474.html","timestamp":"2014-04-21T15:14:48Z","content_type":null,"content_length":"27610","record_id":"<urn:uuid:6068e8bf-3b6d-4440-9acb-e9263d707bec>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Using synthetic division, what is the quotient of (2x^3 - 3x - 10) / (x - 2) ? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/51225bd4e4b06821731d547a","timestamp":"2014-04-20T16:06:11Z","content_type":null,"content_length":"104331","record_id":"<urn:uuid:051ebccf-0365-4bcd-ae87-207d8c1444b2>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
Business / Math & Scientific Tools Scientific Advantage Calculator 2.0 Scientific Advantageâ„¢ is a Unit Awareâ„¢ calculator that lets you work with feet-inch-fraction dimensional values and includes advanced math functions, automatic solvers, and much more. Unit Awareâ„¢ means that you can effortlessly compute with units (not just convert). It also has great options like algebraic and RPN entry, customizable buttons for unit entry, and display precision options. Available for both Windows and Windows Mobileâ„¢ Pocket PC. Here are some of the features that make Scientific Advantageâ„¢ a powerful tool. - Works with and displays Feet-Inches-Fractions (e.g., 24' 7-3/16") - User-selectable inch fraction denominator (2, 4, 8, 16, 32, 64, 128, or 1000) - Converts among and computes with over 200 different units (ft, m, lb, yd3, etc.), including both English/Imperial and SI/Metric units - Computes directly with dimensional values (e.g., 70 lb/ft3 x 5 yd3 = 9450 lb) - Algebraic and RPN entry modes (user selectable) - Allows entry of feet-inches-fractions using the decimal key. For example, to enter 4' 3-21/32", you would press [4] [.] [3] [.] [2] [1] - Allows entry of any denominator (after entering numerator, just press [.] to enter a new denominator) - Dedicated buttons for picking the units that you use most often (you can assign the units that you want to these buttons) - Mini-keypad for easy entry when working with Solvers - Triginometric - sin, cos, tan, asin, acos, atan - Logarithmic - log, ln, 10x, ex - Power - x2, square root, yx - Hyperbolic - sinh, cosh, tanh, asinh, acosh, atanh - Bessel functions - J0, J1, Y0, Y1, I0, I1, K0, K1 - Gamma function - Error function - Factorial - Random number - Circle - Cone - Cube - Cylinder - Rectangular Box - Sphere - Stairs - Triangle - Oblique - Triangle - Right - Choice of display precision, including fixed decimal places or Scientific Advantage Calculator related software Title / Version / Description Size License Price tApCalc Scientific tape calculator Pocket PC 1.41 602.0 KB Demo $9.95 tApCalc SciFi is a handy Scientific calculator for Pocket PC. It provides a simulated paper tape that allows users to record calculations and save them for future reference. Paper tape simulation has many advantages. You can start a calculation and continue adding new calculations till you have entered all data, you can edit wrong data in the calculation without re-entering all data again, you can recalculate the calculations recorded and... Orneta Calculator for Smartphone 2002 1.0.2 50.5 KB Freeware Orneta Calculator is a simple and easy to use application that acts like a standard scientific handheld calculator for Windows Mobile based Smartphone's. Just use your Smartphone when a separate handheld calculator is too much. Enjoy the simple access for quick calculation of difficult problems. Solve your math and science problems at work, school, the lab, or even on the road. You can perform any of the standard operations for which you would... Construction Advantage Calculator for Pocket PC 2.0 1.3 MB Shareware $69.95 Here are just a few of the features that make Construction Advantage the best. Keypads and Entry Logic Patented 0-15 keyboard exclusive only on the Jobber User selectable choice of Algebraic or RPN entry modes User selectable scientific or standard keypad Mini-keypad for easy entry when working with Solvers Units and Conversions Works with and displays Feet-Inches-Sixteenths (e.g., 24 7-3 /16") Converts among over 70 different... Calculator Mobile for Smartphone 2.1.0 237.0 KB Demo $14.99 Orneta Calculator is a simple and easy to use application that acts like a standard scientific handheld calculator for Windows Mobile based Smartphone s. Just use your Smartphone when a separate handheld calculator is too much. Enjoy the simple access for quick calculation of difficult problems. Solve your math and science problems at work, school, the lab, or even on the road. You can perform any of the standard operations for which... Calculator Mobile for Pocket PC 2.1.0 237.0 KB Demo $14.99 Orneta Calculator is a simple and easy to use application that acts like a standard scientific handheld calculator for Windows Mobile based Pocket PC s. Just use your Pocket PC when a separate handheld calculator is too much. Enjoy the simple access for quick calculation of difficult problems. Solve your math and science problems at work, school, the lab, or even on the road. You can perform any of the standard operations for which you... MagicPlot Calculator 1.0 72.0 KB Freeware MagicPlot Calculator is a free scientific formula calculator from MagicPlot graphing application. A fast and usable calculator to compute complex formulas! FEATURES: A· Syntax hightlighting in formula and result A· Parentheses matching A· Calculations history A· Built-in functions (sin, cos, atan2, ...) A· Defining user variables A· Advanced errors highlighting A· Previously entered expressions recall A· Portable (runs without... Basic Advantage Calculator 2.0 907.0 KB Demo $19.95 Advantage is Unit Aware and lets you work with feet-inch-fraction dimensional values. Its our simplest model but still loaded with powerful features and options. Advantage Calculators. Microsoft Visual Studio 7.0 Eval 1.0 137.0 KB Freeware Here is a scientific, programmable calculator that allows you to perform most calculations, and in decimal, hexadecimal, octal and binary. It is very well done, but its a shame there isnt as well as the scientific mode, another classic mode for everyday use with bigger buttons and a nicer display than the standard one.. Pocket PC Freeware by Jonathan Sachs. TotalCalc for Pocket PC 1.0 100.0 KB Trial $9.99 Easy to use full scientific expression calculator. Fully functional, modern colorful interface, with easy input. Scientific Calculator for the pocket PC ( PPC ) - TotalCalc by PocketGizmo.... Full scientific calculator for the Pocket PC with full expression evaluation, Free trial available works with windows Mobile 5.0(WM5) pocket pc scientific calculator totalcalc Portable Kalkules 1.8.0.15 1.3 MB Freeware Kalkules is an universal scientific freeware calculator with an amount of untraditional functions, which can be used particularly by high school or university students. It also offers a wide range of tools, which make your calculations easier and faster. FEATURES: A· evaluating whole expressions ( ex: 5 + 10 / 3 ) A· drawing function graphs A· calculating with real complex or modulo numbers A· calculating in four number systems:... New downloads of Business, Math & Scientific Tools Title / Version / Description Size License Price Trades Math Calculator 2.0.1 5.5 MB Shareware $14.99 Solve common machine shop and other trades trigonometry and math problems at a price every trades person can afford! As a machinist or CNC programmer, you often have to use trigonometry to calculate hole positions, chamfers, sine bar stacks, dovetail measurements, bolt circles, etc. You often have to leaf through reference books, drill charts, speed and feed tables, thread wire charts and so on to find the information you need. On the other... Training Manager Enterprise Edition 1.0.1198 7.1 MB Shareware $995 Track your training records, requirements and compliance with Training Manager 2014. Print personnel transcripts, certificates and status reports. Assign training to an individual, group, or job role. Require retraining based on time, version, or one time only. Schedule and manage class sessions, attendance, cancellations, and no-shows. Training Manager 2014 is easy to use and you can get started quickly. Training Manager Features: -... Point Forecaster 2.0 4.7 MB Freeware Comprehensive Hourly Forecasting For Any Location In The United States. All Data Is Taken From The National Weather Service. Features Include Weather Conditions, Precipitations, Temperatures, Dew Points, Wind Speeds, Wind Directions, Cloud Covers And Humidities. Points On A Canvas 1.0 2.0 MB Shareware $19.95 Points on a Canvas lets you measure the distance between any two clicked points on your screen. Sounds simple enough, right? Yes, but wait until you start diving deeper into all of the glorious features that await you in Points on a Canvas. It's simple and powerful all at the same time. Points on a Canvas is designed to accommodate the needs of users who need the ability to take multiple measurements - as in, lots and lots. Sure, at its... Data Curve Fit Creator Add-in 2.5 5.2 MB Shareware $49 The easiest way to do curve fitting, forecasting, and data smoothing in Microsoft Excel... Data Curve Fit Creator Add-in is an easy-to-use data analysis add-in for Microsoft Excel. It adds curve fitting, interpolation, and data smoothing functions to Excel. These functions work just like standard Excel functions, so they are simple to use. Curve fitting functions include polynomial fits and a versatile local regression (loess) function.... Latest Reviews UEditor WYSIWYG HTML Editor (Robert) - Apr 17, 2014 Efficient html editor. Nice and easy to use interface. WinX HD Video Converter Ultra (Tommy) - Apr 14, 2014 User-friendly interface design, very fast. PCB Creator (Steve) - Apr 13, 2014 Very helpful application. Must have software for designing circuits. Free Download Manager (Booker) - Apr 12, 2014 Powerful download manager. It's really good. PLABEL WIN LITE (Ron) - Apr 9, 2014 Great software. This label editor has a easy to use interface. Interactive Calendar (Max) - Apr 8, 2014 It's very efficient using this calender program. Free Mail Commander (Shuko) - Apr 6, 2014 Nice email client indeed. PC HealthBoost (Rachel) - Apr 5, 2014 This product is amazing.My computer was having issues with the registry. After installing, scanning, and fixing the errors my computer now runs like it is brand new.I would recommend this to LightBox Video Web Gallery Creator (Mike) - Apr 3, 2014 I love it. Does exactly what it should do. Very easy to use. Dans Web Album (Nancy) - Apr 2, 2014 Pretty good tool. It's not very complicated at all. All software information on this site, is solely based on what our users submit. Download32.com disclaims that any right and responsibility for the information go to the user who submit the software, games, drivers. Some software may not have details explanation or their price, program version updated. You should contact the provider/actual author of the software for any questions. There are also user reviews/comments posted about various software downloads, please contact us if you believe someone has posted copyrighted information contained on this web site. Copyright © 1996-2013 Download
{"url":"http://www.download32.com/scientific-advantage-calculator-s33897.html","timestamp":"2014-04-19T14:51:22Z","content_type":null,"content_length":"43993","record_id":"<urn:uuid:68666ed9-d70f-429a-af30-34a1ab7dc19a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Interpreting Data - Relationship Between Weight and Height 5809 words (16.6 double-spaced pages) Red (FREE) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Interpreting Data - Relationship Between Weight and Height MayfieldSchoolis a secondary school of 1183 pupils aged 11-16 years of age. For my data handling coursework I have got to investigate a line of enquiry from the pupils' data. Some of the options include; relationship between IQ and Key Stage 3 results, comparing hair colour and eye colour, but I have chosen to investigate the . One of the main reasons being that this line of enquiry means that my data will be numerical, allowing me to produce a more detailed analysis rather than eye or hair colour where I would be quite limited as to what I can do. If I were to make an original prediction of my results, my hypothesis would be; "The taller the pupil, the heavier they will weigh." In this project I will consider the link between height and weight and will eventually be able to state whether my original hypothesis is in fact correct. Other factors I am going to consider when performing this investigation, is the effect of age and gender in my results and I will make further hypothesize when I reach that stage in my project. Collecting Data I have originally decided to take a random sample of 30 girls and 30 boys; this will leave me with a total of 60 pupils. I have chosen to use this amount as I feel this will be an adequate amount to retrieve results and conclusions from, although on the other hand it is not too many which would make my graph work far more difficult and in some cases harder to work with. To retrieve my data I am going to firstly use a random sample as this means that my data is not biased in any way, and all of the pupils will vary in height, weight and age - although I will have an equal gender ratio. To obtain this sample, I could have written the numbers of all 580 girls in one hat, and 603 boys in the other, then selected 30 bits of paper from either hat and look up their details from the number they are in the register. Although I though an easier way of performing this task is by using the 'Rand' button on my calculator. To retrieve 30 random numbers I would have to input; Int, Rand, 1(580,30) for the girls and change the 580 to 603 for the boys. This then means that the calculator will give me 30 whole numbers within the range of 1-580 or 1-603. This is the random sample that I obtained; Girls Boys Height (m) Weight (kg) Height (m) Weight (kg) I need a more useful representive of the data shown above, so I have decided to sort my data out and put it into height and weight frequency tables. As I will be able to see the data far more clearly and it will allow me to plot graphs from the data with less Weight Frequency Tables Girls Boys Weight, w (kg) Weight, w (kg) 20 ≤ w <30 20 ≤ w <30 30 ≤ w <40 30 ≤ w <40 40 ≤ w <50 40 ≤ w <50 50 ≤ w <60 50 ≤ w <60 60 ≤ w <70 60 ≤ w <70 70 ≤ w <80 70 ≤ w <80 80 ≤ w <90 80 ≤ w <90 Height Frequency Tables Girls Boys Height, h(cm) Height, h(cm) 120 ≤ h <130 120 ≤ h <130 130 ≤ h <140 130 ≤ h <140 140 ≤ h <150 140 ≤ h <150 150 ≤ h <160 150 ≤ h <160 160 ≤ h <170 160 ≤ h <170 170 ≤ h <180 170 ≤ h <180 180 ≤ h <190 180 ≤ h <190 190 ≤ h <200 190 ≤ h <200 Because both height and weight are continuous data, I have chosen to group the data in class intervals of tens as this allows me to handle large sets of data more easily and will be easier to use when plotting graphs. In both the height and weight column, '120 ≤ h <130', this means '120 up to but not including 130', any value greater than or equal to 120 but less than 130 would go in this interval. I feel I am now at the stage where I can go on to record my results in graph form. This will then allow me to analyse my data and compare the results for the differing genders, which I am unable to do with the tables above. As I mentioned earlier both height and weight are continuous data so I cannot use bar graphs to represent it, instead I will have to use histograms as this is a suitable form of graph to record grouped continuous data. Before I produce the graph I am going to make another hypothesize that; "Boys will generally weigh more than girls." Histogram of boys' weights Histogram of girls' weights Obviously by looking at the two graphs I can tell there is a contrast between the girls' and boys' weights, but to make a proper comparison I will need to plot both sets of data on the same graph. Plotting two histograms on the same page would not give a very clear graph, which is why I feel by using a frequency polygon it will make the comparison a lot clearer. Frequency polygons for boys' and girls' weights This graph does support my hypothesis, as it shows there were boys that weighed between 80kg and 90 kg, where as there were no girls that weighed past the 60kg-70kg group. Similarly there were girls that weighed between 20kg and 30kg were as the boys weights started in the 30kg-40kg interval. Although by looking at my graph I am able to work out the modal group, but it is not as easy to work out the mean, range and median also. To do this I have decided to produce some stem and leaf diagrams as this will make it very clear what each aspect is, for the main reason I will be able to read each individual weight - rather than look at grouped weights. Stem and leaf diagrams show a very clear way of the individual weights of the pupils rather than just a frequency for the group-which can be quite inaccurate. Girls Boys 20 kg 20 kg 30 kg 30 kg 40 kg 40 kg 50 kg 50 kg 60 kg 60 kg 70 kg 70 kg 80 kg 80 kg From this table I am now able to work out the mean, median, modal group (rather than mode because I have grouped data) and range of results. This is a table showing the results for boys and girls; Weights (kg) Modal Class 50 kg 40-50 kg 50 kg 50 kg 46 kg 40-50 kg 47 kg 31 kg (NB. The values for the mean and median have been rounded to the nearest whole number.) Despite both boys and girls having the majority of their weights in the 40-50kg interval, 13 out of 30 girls (43%) fitted into this category where as only 11 out of 30 (37%) boys did which is easily seen upon my frequency polygon. I could not really include that in supporting my hypothesis as the other aspects do. My evidence shows that the average boy is 4kg heavier than that of the average girl, and also that the median weight for the boys are 3kg above the girls. Another factor my sample would suggest is that the boys' weights were more spread out with a range of 50kg rather than 31kg as the girls results showed. The difference in range is also shown on my frequency polygon where the girls weights are present in 5 class intervals, where as the boys' weights occurred in 6 of them. I am now going to use the height frequency tables to produce similar graphs and tables as I have done with the weight. Obviously as height is continuous data, as mentioned already, I am going to use histograms to show both boys and girls weights. I am also going to make another hypothesis that; "In general the boys will be of a greater height than the girls." Histogram of boys' heights Histogram of girls' heights Similarly as with the weight, I can see the obvious contrasts between the boys' and girls' heights, but the data is not presented in a practical way to perform a comparison, that is why I am going to put the two data sets on a frequency polygon. [IMAGE]Frequency Polygon of Boys' and Girls' Heights This graph does support my hypothesis as the boys' heights reach up to the 190-200cm interval, where as the girls' heights only have data up to the 170-180 cm group. Similarly there were girls that fitted into the 120-130cm category where as the boys' heights started at 130-140cm. As this data is presented in Girls Boys 120 cm 120 cm 130 cm 130 cm 140 cm 140 cm 150 cm 150 cm 160 cm 160 cm 170 cm 170 cm 180 cm 180 cm 190 cm 190 cm With these more detailed results, I can now see the exact frequency of each group and what exact heights fitted into each groups, as you cannot tell where the heights stand with the grouped graphs. For all I know all of the points in the group 140 ≤ h <150 could be at 140cm, which is why I feel it is a sensible idea to see exactly what data points you are dealing with. I can also now work out the mean, median and range or the data, these are the results I worked out; Heights (cm) Modal Class 164 cm 150-160 cm 162 cm 59 cm 158 cm 160-170 cm 161 cm 53 cm Differing from the results from my weight evidence, the heights' modal classes for boys and girls differ, and much to my surprise the girls' modal class is in fact one group higher than the boys. This is very visible on my frequency polygon as the girls data line reaches higher than that of the boys. This doesn't exactly undermine my hypothesis however as the modal class only means the group in which had the highest frequency, not which group has a greater height. On the other hand the average height supports my prediction as the boys average height is 6 cm above the girls. The median height had slightly less of a difference than the weight as there was only one centimetre between the two, although again it was the boys' median that was higher. When it comes to the range of results, similarly to the weight the boys range was vaster than the girls, although there was no where near as greater contrast in the two with a difference of only 6 cm between the two. With all of the work I have done so far, my conclusions are only based on a random sample of 30 boys and girls so they are not necessarily 100% accurate, and therefore I will extend my sample later on in the project. Before I go on to further my investigation, I feel that it is necessary for me to work out the quartiles and medians of both data sets, as this allows me to work with grouped data rather than individual points as in my stem and leaf diagrams. To do this I am going to produce cumulative frequency graphs as this is a very powerful tool when comparing grouped continuous data sets and will allow me to produce a further conclusion when comparing height and weight separately. I am also going to produce box and whisker diagrams for each data set on the same axis as the curves for this allows me to find the median and lower, upper and interquartile ranges very simply (I have attached a small sheet explaining how I can find these results from the graphs I am going to produce). I am firstly going to look at weight, and to produce the best comparison possible I am going to plot boys, girls and mixed population on one graph. Cumulative frequency curves for weight All three of my curves clearly show the trend towards greater weights amongst boys and girls. From looking at my box and whisker diagrams I have obtained the following evidence: Weight (kg) Lower Quartile Upper Quartile These results continue to agree with my prediction made earlier that the boys will be of a heavier weight than the girls. I can see this as the lower quartile, upper quartile and mean are all of lower values than the boys, but also the boys' range of weights is shown to be greater from these results as their interquartile range is two kg higher than the girls. Cumulative frequency for heights These results also show the trend towards a greater height amongst the boys and girls. Similarly as done with my weight diagram, I have obtained the following evidence; Height (cm) Lower Quartile Upper Quartile Similarly as with the weight results, these results continue to further my prediction that the boys would be of a greater height than the girls. As with the weight results this can be seen from the lower quartile, upper quartile and mean points which in the girls' case are all of a value smaller than the boys. From all of the graphs and tables I have produced so far, I can fairly confidently say that the boys weights' and heights' are higher than the girls but none of my evidence collected so far helps me conclude my original hypothesis made; "The taller the pupil, the heavier they will weigh." Although when looking at my cumulative frequency graphs of height and weight, I could make the statement that both diagrams appear to be very similar from appearance although I cannot make any form of relationship between the height and weight. I am now going to extend my investigation and see how height and weight can be related, and to do this the most effective way is by producing scatter diagrams. I will plot boys and girls on separate graphs as I feel the results will produce a stronger correlation when done this way and also to continue with the style I have begun with. Using scatter diagrams allows me to compare the correlations of the two graphs, and the equations of the lines of best fit (best estimation of relationship between height and weight) of each gender. Boys' Scatter diagram of height and weight This graph shows a positive correlation between height and weight, and all of the datum points seem to fit reasonably close to the line of best fit. There are a few points that I have circled which do not really fit in with the line of best fit - these are called anomalous points, it means that they do not fit in with the trend of the Girls' Scatter diagram of height and weight This graph similarly shows a positive correlation, although the correlation is stronger than the boys as the spread is greater on the boys graph than on the girls. The datum points on this graph are quite closely bunched together in the middle where as on the boys graph there is a wider spread of results - which would agree with the conclusion made earlier that the boys' heights and weights are of a larger range than the girls. I have again circled the anomalous points on this graph to show which data did not fit in with the trend of results. As both of my lines of best fit are completely straight, I would assume that the equation of the line would be in the form of; y = mx + c.Wheny represents height in cm, and x represents weight in kg, the equations of the lines of best fit for my data set are (I obtained these equations from my graphs in autograph as an exact result was available, however if I were to find the results myself I would do so by finding the gradients and looking at the point where they intercept the y axis, NB. attached is a small diagram of how I would do so): Boys: y = 0.8004x + 121.6 Girls: y = 0.7539x + 123.6 These equations can be used to make prediction of either weight when you know the height or vice versa. For example, if I were to predict the weight of a girl who is 165 cm tall this is what I'd do: [IMAGE]y = 0.7539x + 123.6 so, x = y - 123.6 [IMAGE]If y = 165 cm then x = 165 - 123.6 = 55.91 Therefore I would predict a girl of 165 cm would weight 56 kg (rounding up to a whole number as used on my graphs and data tables) when using the equations from my lines of best fit. I have checked this, by lightly drawing a pencil line on my graph across from 165 cm up to where it meets on the line of best fit and then dragging it down to the x axis, and after doing so the line met the x axis at around 56 I have now reached a point in my investigation where my random sample of 30 boys and girls is not necessary anymore. There have definitely been some clear conclusions made from my graphs and tables already, which have all in fact fitted in with my predictions made. However my predictions are only based on general trends observed in my data, and in both the girls and boys samples there were individuals whose results did not fit in with the general trend. I cannot have complete confidence in my results so far due to the fact this is only a random sample of 30 girls and boys and age has not been considered which I now feel is a necessary factor. I have spent a good amount of time considering different genders but now I am going to look at age differences. It is only common sense that age is going to affect your height and weight, for you would think a year 7 pupil would be smaller and lighter than a pupil in year 11. As Mayfield is a growing school there would be more pupils in year 7 than in year 11, therefore my random sample was likely to contain more year 7 pupils than year 11 - this is biased and unfair. To ensure that I obtain a data set with an accurate representation of the whole school, I am going to have to take a stratified sample. Stratified sample means that you sample a certain amount from a particular group to proportion that group's size within the whole population, i.e. pupils within year 8, within the whole school. This is a table showing the number of girls and boys in each year at % of WholeSchool Year 7 Year 8 Year 9 Year 10 Year 11 I have decided to continue with a sample of 60 pupils, 30 girls and 30 boys, as I feel from my random sample this amount of data was easy to work with and produced some sufficient results. I have now got to work out how many girls and boys I will need from each year to make sure that my sample is a good representation of the whole school. To do this, I must consider the boys and girls separately as there are 580 girls in the school and 603 boys. When working out the year 7 sample this is what I'd do; Take the total number of year 7 girls-131, and divide that by the total number of girls in the school, 580 … 131/580 = 0.22586207…I then have to multiply that number by 30 as that is the total number of girls data I wish to obtain … 0.22586207 X 30 = 6.7758621 … if I then round that number up to one whole number it means that I need 7 girls from year 7 in my stratified sample. This is the calculations performed to retrieve my stratified sample Year 7 - Girls - 131/580 = 0.22586207 X 30 = 6.7758621 = 7 Year 7 - Boys - 151/603 = 0.25041459 X 30 = 7.5124377 = 8 Year 8 - Girls - 125/580 = 0.21551724 X 30 = 6.4655172 =6 Year 8 - Boys - 145/603 = 0.24046434 X 30 = 7.2139302 = 7 Year 9 - Girls - 143/580 = 0.24655172 X 30 = 7.3965516 = 7 Year 9 - Boys - 118/603 = 0.19568823 X 30 = 5.8706469 = 6 Year 10 - Girls - 94/580 = 0.16206897 X 30 = 4.8620691 = 5 Year 10 - Boys - 106/603 = 0.17578773 X 30 = 5.2736319 = 5 Year 11 - Girls - 86/580 = 0.14827586 X 30 = 4.4482758 = 5 Year 11 - Boys - 84/603 = 0.13930348 X 30 = 4.1791044 =4 Despite my new sample of 60 being stratified, to obtain the particular number of girls and boys from each year, I am going to select them randomly so again no biased is shown. I selected my random pupils using my calculator by performing; (year 7 girls) SHIFT RAN# X 131, I'd repeat this 7 times until I had 7 sets of data. This was obviously repeated for all years but changing the number it was multiplied by depending on how many pupils there were in each group. Using my new stratified sample, I produced a scatter graph for each age and alternate gender, i.e. a boys and girls scatter graph for year 7,8,9,10,11. I am going to maintain the same hypothesis of "the greater the height, the greater the weight, but I can also comment on the older the pupil the greater the height or weight." Year 7 For the year 7 graphs, the lines of best fit appear to be at a similar slope to one another although the boys begin at a higher point on the y axis than the girls - which would determine that the boys were taller. The boy's points appear more spread out but closer to the line of best fit, where as the girls are more sparsely distributed but are situated quite closely together on the line area. Both lines have a positive correlation which would agree with the taller the person the heavier they weigh. Year 8 Differing from the year 8 graphs the lines of best fit are at quite different gradients. These graphs show that the boys in year 8 follow a strong pattern, of the taller you are the heavier you weigh - shown by the positive correlation of the line. However the girls graph differs and has a very slight correlation which could be for many reasons - one being that girls watch their weight slightly more. The points on both of these graph are more sparsely distributed around the lines of best fit, where as the year 7 points were more closely grouped together. This could be for the reason that your body starts to change in many different ways as you grow older. Year 9 The Year 9 graphs show greater contrast again, although of a similar pattern to the year 8 ones. The boys shows an even steeper positive correlation showing the heavier you weigh the taller you are, and similarly the girls show the line of best fit almost positioned horizontally across the page. Both of these graphs have points positioned very closely to the line of best fit, although that could just be coincidence. Year 10 The year 10 graphs show a complete change with both of the graphs consisting of a practically horizontal line of best fit, the girls could be explained due to this gender caring about their appearance more, but the boys change I cannot explain. This could just be a fluke, as there are only 5 points on the graph anyway - which is a small percentage of all the year 10 boys. Year 11 The small amount of data points on these graphs is barely enough for me to make a conclusion, however the boys graphs shows again the positive correlation as before. But the girls' graphs differ again and now create a negative correlation which would predict that the taller you are the less you weigh. Although these graphs have given me some points to consider, one being why the girls graphs tend not to consist of "the taller you are the heavier you weigh" as the age increases. I have come to the conclusion that because as girls reach puberty and start developing they become more aware of their appearance and therefore try to watch their weight a bit more. Although, I only had a small stratified sample to represent the whole school, so it would not be an accurate source of information to draw an efficient conclusion from. However I did produce this table from all of my data points to see whether a further pattern occurred: Year 7 Year 8 Year 9 Year 10 Year 11 Median height (cm) Mean height (cm) Range of heights (cm) Median weight (kg) Mean weight (kg) Range of weights (kg) The only part of the table that I can assume a conclusion from is the mean as, when the age increases the weight and height does so to. Apart from a couple of irregular points further up the school there is a slight trend in the average heights and weights. Seeing as I didn't have a big enough sample to make any meaningful statements within the data, I have decided to further my investigation again and to look in more detail at just one year group to see whether I can draw a better conclusion from these results. I have decided to look at each year group in more detail, however I am only going to write up an example using year 9 girls as I do not feel it is necessary for me to show each one in full. I am going to extract a random sample which will add up to 10% of the total girls in year 9. As there are 143 girls in year 9 at MayfieldHigh School, I will need a total of 14 pupils; this is the random sample I extracted; Height (cm) Weight (kg) I can create a brief summary of the heights and weights in a table, as I have done with the majority of my other samples although I also used graphs with these, these is my summary; Median Weight (kg) Mean Weight (kg) Rangeof Weights(kg) Median Height (kg) Mean Height (kg) Rangeof Heights(kg) From this data, although I have considered the range of results there is another measure of spread which I have not yet considered in my project is standard deviation. Thisis the measure of the scatter of the values about the mean value also thought of as a measure of consistency. Standard deviation uses the square of the deviation from the mean, therefore the bigger the standard deviation the more spread out the data is. I am firstly going to work out the standard deviation of the year 9 girl's heights' where 'x'represents the heights. To find the standard deviation using the equation; we need to first work out the mean value, then square each value and these squares. I have put my values in a table as it is easier to keep of them then. x (cm) x² (cm²) I am also going to work out the standard deviation of the girl's weights, I am going to use the same method, and the only difference being that 'x' now represents weight rather than height. x (kg) x² (kg²) These are the results I obtained for the standard deviation for each year group - boys and girls separately; Mean height (cm) S.D for height (cm) Mean weight (kg) S.D for weight (kg) From looking at the mean averages for each separate gender set, the boys' height and weight increase as the age increases. The biggest increase for the boys' height was from year 7-8 where the average height increased 9cm, from 154cm up to 163cm. Although when looking at the weight increases, it appears there is a slightly more even increase however the biggest jump is 7kg from year 9-10. The girls results' generally appears to increase as their age increases, although there is one fault in the weight section. The girls' heights increase most rapidly from years 7-9 where the height increases 15cm on average (147cm-162cm); although from years 9-11 there is only a minute increase of 2 cm, a centimetre for each year. This could be because girls develop earlier than the boys and therefore grow faster when they are younger, and slow down when they become older. However, when looking at the weight there is one decrease in the average weight as the age increases, from year 9-10 there is a deduction of 3kg, from 54kg-51kg. This could be because this is the prime age when girls start to become far more concerned about their appearance and therefore watch their weight. Despite this one fault, these results would agree with my hypothesis made earlier that the older you are the heavier/taller you will be. When looking at the boys and girls results' together, in each case apart from one, the boys' average height/weight is higher than that of the girls. There is only one point that undermines this pattern, and that is for the weight of the year 9 pupils, where the girls average weight is 2 kg more than the When looking at the standard deviation it shows that the year 11 pupils' heights on whole has the highest level of consistency, with an equal 12 cm deviation for both sexes. Although when looking at the weight the boys in year 11 maintained a deviation of 12 kg for their weight, however the girls' weights proved to be more consistent with a deviation of 9 kg. In general with the weight, the boys' standard deviation is higher than the girls with an average of 3.5 kg difference above them. The only year which differs from this is again the year 9 group - where the girls standard deviation is 4 kg above the boys, this could be related to the girls average weight being heavier than the boys in this section also. I could now say that the girls' weights' are in general more consistent and therefore the data points have a smaller measure of spread. The heights standard deviation does not show much of a pattern, however in years 8,9,10 the standard deviation is higher for the boys than it is for the girls with an average increase of 10 cm difference above the girls. This great difference could be because of the irregular high value of 31cm standard deviation for the year 10 boys, where as the girls only had 11cm. This high value means that the heights for the boys in this year group are quite irregular, and there is a vast measure of spread - I can not see a reason for this however I have to keep in mind this is only a 10% sample of the whole year group therefore it could be that the values selected were just coincidentally a large range of heights. When looking to see if there was any pattern in the standard deviations as the age differs, the girls proved to consist of a slight pattern. From year 7-11 the standard deviation values consisted of; 15cm, 13cm, 13cm, 11cm, 12cm - which shows a general decrease as the pupils grow older, despite the one centimetre increase from year 10-11. All of these values are all very close to each other (within 4 cm of one another), where as the boys values differ slightly more with 12cm, 19cm, 17cm, 31cm, and 12cm (year 7-11). The only conclusion I can draw from this is that the girls heights are overall far more consistent than the boys, and it could be that as the girls increase in age the standard deviation becomes less (more consistent), and the spread of the data points become closer. Before making a final summary of my findings throughout this investigation, I am going to briefly look at one more factor to compare height and weight to, and that is the 'Body Mass Index'. A body mass index defines whether you are underweight, healthy, overweight or obese by calculating; kg/m² = BMI. You can tell whether you are underweight, normal, overweight or obese from the number these are the categories ; Under 17 = underweight 17-25 = normal (between 17 and 22 you are expected to live a longer 25-29.9 = overweight Over 30 = obese Using a new random sample of 60 pupils girls and boys, I have worked out the BMI for each of the pupils and produced a graph comparing the BMI and weight, and the BMI and height. One prediction I would make is "The heavier the person, the higher the BMI." Boys Weight compared to BMI Boys height compared to BMI Girls weight compared to BMI Girls height compared to BMI From looking at the graphs, it proves to be that weight is the greater factor when considering the BMI. I know this as both of the weight graphs for each sex, show that the data points create a positive correlation which would suggest that the heavier the pupil the higher their body mass index - supporting my prediction made. Despite the differing genders, the slope of the line of best fit appears to be very similar although there are far more anomalous points upon the boys graph rather than the girls. When considering height, there appears to be no relationship between the two factors as the data points are scattered everywhere upon the page. However similar to the weight, the girls data points appear to be more sparsely populated around the line of best fit than the boys. From these graphs you could also say that the girls' heights and weights are more consistent than the boys. Additionally I'm going to obtain 10 pupils heights and weight from each year - 5 boys and 5 girls, then I will work out each of their BMI and come up with an average BMI for each separate sex in each year group. I am going to work out one of the pupils just to explain how you work it out. Take for example a boy from year 7, he weighs 47 kg and is 149 cm tall, therefore the calculation for his BMI would be; 47 ÷ 1.49² = 21.2 … therefore this boy is in the normal range This is a table showing the average BMI for each year group (boys and Boys average BMI Girls average BMI Year 7 Year 8 Year 9 Year 10 Year 11 As you can the average BMI for each gender and age group is in the normal/healthy range. The BMI doesn't in fact say, the heavier you are the more your BMI will be, all it states is when you compare your height and weight whether you are normal, underweight, overweight or obese. However there is a pattern occurring within these results, that being that all of the boys BMI's are higher than the girls and also the older that both sexes get, the higher the BMI increases. This would not necessarily happen in all cases, as you could have 5 obese year sevens' in one group and 5 underweight pupils in another group, but coincidentally it has proved to be as your age increase the BMI does also. This could be because you do tend to gain weight far easier as you get older, also because you are growing until around approximately 16-18 years. Knowing that all of these average body mass index results are in the healthy range it would suggest that MayfieldHigh Schoolis in a good area and the children that attend the school live in reasonable conditions. However if all of the results were either underweight or obese, I could suggest that the school may be situated in a deprived area - and children are either not fed properly or over eat from depression or boredom. This is only a very rough suggestion but it could be a possible outcome. Throughout this project I have made many hypothesise including; 1) The heavier the pupil, the taller they will be 2) In general boys will weigh more than girls 3) In general boys will be of a greater height than girls 4) The older the pupil the greater the height/weight 5) The heavier the pupil the higher the BMI will be I have answered all of these predictions throughout the project with either graphs or text, and it is proved that all of my hypothesise made have been in general correct. There have been some slight points which undermine the predictions, but all over they have been successful. My original task was to compare height and weight, although I have not only considered height and weight but including biased factors such as gender and age. Additionally to this, I have also introduced another factor - being the body mass index to see whether height and weight have any relationship to the BMI values of students. As mentioned above, my graphs show that weight does have a relationship with the BMI, where as height does not appear to. When considering age as a biased factor, I produced a stratified sample trying to create a suitable representation of the school on a smaller scale. Using the data for this stratified sample my results proved that in general the older you are the heavier/taller you are, however there was a group of pupils in year 9 which undermined this prediction. These results are however not 100% effective due to there only being a very minimal amount of data for each year group and Despite considering the age factor, I also spent a great deal of time looking at the differing genders to see whether that affected the height and weight of pupils at all. When looking at this I produced histograms, frequency polygons, cumulative frequency graphs and box & whisker diagrams, stem & leaf diagrams and scatter diagrams. The overall conclusion was that boys in general are of greater height and weight - mainly defined by the mean values which were higher than that of the girls. However, all of these hypothesise were all as a part of my main prediction; "The taller the pupil the heavier they will weigh", and from answering all of these other predictions I can confidently say that it is true. I have come to this conclusion based on all of the graphs, diagrams, tables and statements made. On the other hand there were cases where certain data undermined this prediction but that could have been because of the small samples I had allocated myself to obtain. When producing the random sample of 60, I felt that was a satisfactory amount to work with as picking up an analysis and producing graphs from this data was simple and done efficiently. Although when it came to the stratified sample, and I was looking at the different age groups using again a sample of 60 trying to represent the school on a smaller scale - I do not feel it was as successful. If I were to repeat or further this investigation - I would definitely use a larger number of pupils for the stratified sample as when the numbers of the school pupils were put on a smaller scale, I only ended up in some cases with a scatter graph with only 4 datum points upon for the year 11 students. To retrieve accurate results from this method of sampling, I feel it is necessary to use a sample of at least 100. Additionally to the stratified work, if I had a larger sample - I would also produce additional graphs, i.e. cumulative frequency/ box and whisker, as I feel that I could draw a better result from these as I felt the scatter diagrams I produced were rather pointless. I feel my overall strategy for handling the investigation was satisfactory, if I had given myself more time to plan what I was going to do I think I would have come up with a better method and possibly more successful project. One of the positive points about my strategy is that because I used a range of samples it meant that I was not using the same students' data throughout - I instead used a range of data therefore maintaining a better representative of Mayfied school on a whole. There is definitely room for improvements for my investigation - if I were to do it again I would spend a lot more time planning what I was going to do instead of starting the investigation in a hurry. Despite that I feel my investigation was successful as it did allow me to pull out conclusions and summaries from the data used. How to Cite this Page MLA Citation: "Interpreting Data - Relationship Between Weight and Height." 123HelpMe.com. 16 Apr 2014
{"url":"http://www.123helpme.com/view.asp?id=120990","timestamp":"2014-04-16T13:33:53Z","content_type":null,"content_length":"70461","record_id":"<urn:uuid:30dcbce2-fa9a-42e0-b013-b07283c4de8b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00370-ip-10-147-4-33.ec2.internal.warc.gz"}